[qubes-users] Data recovery -- thin provisioned LVM metadata (?) problem

Dear all,

I had a fatal hardware failure on my computer. Since I had to buy a
new machine this took a while; now I am trying to save some data: the
old SSD is attached to a brand-new Qubes machine via a USB adapter. I
started

    sudo pvscan
    sudo vgscan --mknodes
    sudo vgchange -ay

as the manual says.

Unexpected output:

    PV /dev/mapper/OLDSSD   VG qubes_dom0   lvm2 [238.27 GiB / <15.79 GiB free]
    Total: 1 [238.27 GiB] / in use: 1 [238.27 GiB] / in no VG: 0 [0 ]
    Found volume group "qubes_dom0" using metadata type lvm2
    Check of pool qubes_dom0/pool00 failed (status:1). Manual repair required!

    1 logical volume(s) in volume group "qubes_dom0" now active

Then I consulted Dr. Google and found little help. This page

https://mellowhost.com/billing/index.php?rp=/knowledgebase/65/How-to-Repair-a-lvm-thin-pool.html

suggested deactivating the volumes so that a repair can work. Only swap
was active, and I deactivated it. But the repair still does not work:

    lvconvert --repair qubes_dom0/pool00
    terminate called after throwing an instance of 'std::runtime_error'
    what(): transaction_manager::new_block() couldn't allocate new block
    Child 21255 exited abnormally
    Repair of thin metadata volume of thin pool qubes_dom0/pool00 failed (status:-1). Manual repair required!
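The "couldn't allocate new block" error typically means the pool's
metadata area itself has no free space left, so the in-place repair
cannot write. A hedged sketch of the manual swap-and-repair route
described in lvmthin(7) follows; the scratch-LV names and the 4G sizes
are assumptions, and the ~15 GiB of free space pvscan reported should
be enough to hold them. This is untested against this particular pool,
so image the SSD before trying it:

```shell
# Sketch only, under the assumptions above -- not a verified recipe.

# 1. Make sure the pool and all thin volumes are inactive.
sudo vgchange -an qubes_dom0

# 2. Create an inactive scratch LV to receive the broken metadata.
sudo lvcreate -an -L 4G -n meta_scratch qubes_dom0

# 3. Swap the broken tmeta out of the pool and into meta_scratch.
sudo lvconvert --swapmetadata --poolmetadata qubes_dom0/meta_scratch qubes_dom0/pool00

# 4. Activate the broken metadata and repair it into a second scratch LV.
sudo lvchange -ay qubes_dom0/meta_scratch
sudo lvcreate -L 4G -n meta_repaired qubes_dom0
sudo thin_repair -i /dev/qubes_dom0/meta_scratch -o /dev/qubes_dom0/meta_repaired

# 5. Swap the repaired metadata back into the pool and retry activation.
sudo lvchange -an qubes_dom0/meta_repaired
sudo lvconvert --swapmetadata --poolmetadata qubes_dom0/meta_repaired qubes_dom0/pool00
sudo vgchange -ay qubes_dom0
```

Keep meta_scratch (it still holds the original broken metadata) until
the recovered data has been verified.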

So now I am stuck and asking for help! This is not purely
Qubes-related, I know, but I hope to find competent help within the
community.

cheers, Bernhard

I suggest asking on the LVM mailing list.
--
Sincerely,
Demi Marie Obenour (she/her/hers)
Invisible Things Lab

Hi Bernhard
Haven't seen you for a while. Nice to see you back.
I've been able to fix this in the past by deleting snapshots -- I'm not
sure if this will work for you.
If I were you I would take a copy of the SSD with dd, and then start to
prune: what happens if you use lvremove against some of the larger
snapshots?
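A rough sketch of those two steps, assuming the old SSD appears as
/dev/sdb and that /mnt/backup has room for a full image (both are
assumptions to adjust; the snapshot name in the last step is purely
hypothetical):

```shell
# 1. Take a raw image of the whole SSD first, so every later step is
#    reversible.
sudo dd if=/dev/sdb of=/mnt/backup/oldssd.img bs=4M status=progress conv=noerror

# 2. List all LVs with sizes and origins; Qubes revision snapshots end
#    in "-back", so the large candidates are easy to spot.
sudo lvs -a --units g -o lv_name,lv_size,pool_lv,origin qubes_dom0

# 3. Remove one large snapshot (hypothetical name!) and retry the repair.
sudo lvremove qubes_dom0/vm-work-private-1601234567-back
sudo lvconvert --repair qubes_dom0/pool00
```

Removing a snapshot frees metadata as well as data blocks, which is
why the repair may succeed afterwards.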
unman