I don’t know if that’s rounded though. Use
ls -l to get the exact size in bytes.
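For example (a throwaway file standing in for one of the chunks; the name is made up):

```shell
# Create a file of a known size, then read the exact byte count two ways.
head -c 12345 /dev/zero > chunk.bin
ls -l chunk.bin        # the size column is exact bytes, not rounded
stat -c %s chunk.bin   # prints just the byte count: 12345
```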
Is this last, shorter one nevertheless the expected size, i.e. is the last encrypted chunk 78088616 + 128 bytes?
In a case like yours with contiguous data that works:
cat vmX/private.img.??? > vmX/private.img.partial.gz
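A round-trip sketch of why the glob works here (numeric suffixes sort lexically, so ??? reassembles the chunks in order; the sizes and names are made up):

```shell
# Stand-in for the original image.
head -c 1000000 /dev/urandom > private.img
# Split into numbered chunks: private.img.000 ... private.img.003
split -b 300000 -d -a 3 private.img private.img.
# Reassemble and verify the result is byte-identical.
cat private.img.??? > private.img.partial
cmp private.img private.img.partial && echo "byte-identical"
```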
No, the next step would be to find a tool that can uncompress
private.img.partial.gz, which is a gzip/DEFLATE stream with the beginning missing. gzrecover looks promising. Or this StackOverflow answer has some code.
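To see why an ordinary zcat can’t handle a stream with the header cut off, a self-contained sketch (the gzrecover invocation in the comment is from memory and may need checking against its man page):

```shell
# Make a gzip file, then drop its first 100 bytes, header included.
seq 100000 | gzip > whole.gz
tail -c +101 whole.gz > headless.gz
# Plain zcat refuses: it needs the gzip header that is gone.
zcat headless.gz > /dev/null 2>&1 || echo "zcat gives up without the header"
# gzrecover (from the gzrt package) instead scans forward for the next
# decodable DEFLATE block, roughly: gzrecover -o recovered headless.gz
```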
Sort of. Mounting is just not going to be part of the recovery:
private.img.partial.recovered (btw what byte size is it?) contains a partial tar stream of a sparsely archived ext4 filesystem image. That filesystem is never going to be in a mountable state (because a big chunk is missing at the beginning) and in its current form it’s even less mountable (because the information needed to unsparsify it was lost with the beginning of the tar stream).
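The sparse part can be demonstrated in miniature (a sketch; GNU tar’s -S stores the hole map at the start of each member’s entry, which is exactly the kind of thing that’s gone here):

```shell
# A 10 MiB apparent-size file that is all hole, no data blocks.
truncate -s 10M sparse.img
# Archive it sparsely: tar records where the holes are instead of the zeros.
tar -Scf sparse.tar sparse.img
# The archive is only a few KiB; without its hole map, the 10 MiB
# layout of the member could not be reconstructed.
ls -l sparse.img sparse.tar
```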
But you may be able to extract the KeePass database from the file directly:
- Narrow down exactly which kind of KeePass database format you were saving to. I think there are several variations.
- For every format, check its specification to see how the header looks, i.e. what the first few bytes are. For KDBX 3.1 and 4 it’s 03d9a29a67fb4bb5.
- Search private.img.partial.recovered for these byte sequences, and wherever you find a sequence, extract it plus the next… 1 MB? Just more than enough to cover your database file, which will hopefully have been allocated in a single extent.
- Try to open each of those little extracted blobs with KeePass.
I’m sure there must be some data recovery tools to streamline these tasks, but I’m not familiar with the space at all.
img=private.img.partial.recovered
bgrep 03d9a29a67fb4bb5 "$img" | while read foo pos; do
tail -c +$(( 0x$pos + 1 )) "$img" | head -c 1M > $pos.kdbx
done
That gives me the “>” again. I did set +H and it’s the same. I tried adding semicolons where possible but it still wouldn’t run.
The file is 751350026 bytes with an “ls -l”.
Edit: I also think it was KeePass2, or something without Mono. I installed it years ago though and didn’t pay it much mind.
Edit 2: gist.github.com/HarmJ0y/116fa1b559372804877e604d7d367bbc came up with 03d9a29a67fb4bb5 for what I think I used, so that should work.
Yes, there’s a multi-line command again: the first line bgrep…, then a continuation prompt for the second line tail…, then another continuation prompt for the third line done, and then it runs for a few seconds, silently creating *.kdbx files.
No need for set +H here (because there are no literal exclamation points), and no need for more semicolons.
You can also do the bgrep scan on its own first, to check if it even finds anything:
bgrep 03d9a29a67fb4bb5 private.img.partial.recovered
Nice. If necessary, retry with just the first four bytes 03d9a29a to widen the search.
@diqsvwae per request I have increased your trust level to 1 so you can continue with the help without being so limited by the “new user” status. I hope this helps and that I am not too late to help.
My mistake, I meant that I typed all 3 lines, and I still get the >. Was I mistaken in adding the 's after “do” and “kbdx”? When I go back in my bash_history it appears to be one line either way. I also ran img=private.img.partial.recovered on its own, not sure if that’s the issue.
DON’T ADD ANYTHING
My apologies, it looked like the commands in the emergency backup recovery link with multiple lines like that one. When I got the first > I thought it was the same as before when I didn’t include the 's after each line. I get > when I do “bgrep 03d9a29a67fb4bb5 “$img” | while read foo pos; do tail -c +$(( 0x$pos + 1 )) “$img” | head -c 1M > $pos.kbdx done” exactly copy/pasted from my terminal. I also verified that typing $img in the terminal itself gives me the filename.
Well, if you’re typing the multi-line bgrep…done command as a single line (???) then you have to add a ; before the done. This incorrect form (single line without the ; before done) is also what you would see in the bash history after typing it in multiple lines but incorrectly adding a \ after the first two lines.
My mistake again, sorry, I didn’t realize I could just press Enter without a \ or something; I thought it worked the same as it would without one in a .sh file or something. I think I see the issue though: I don’t have bgrep installed, and it’s not in my trusted repos. I’m going to work on getting it installed.
Make sure it’s the one I linked, because there are a couple of very different tools all named bgrep.
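If compiling bgrep turns into a hassle, GNU grep can approximate the scan. This is only a sketch, not the command from above: -P, -b, and -a are GNU extensions, and note that grep prints decimal offsets, whereas the extraction loop above expects bgrep’s hex ones (the 0x$pos prefix would have to go).

```shell
# Made-up stand-in file with the KDBX magic placed at byte offset 3.
img=test.img
printf '\0\0\0\x03\xd9\xa2\x9a\x67\xfb\x4b\xb5rest-of-db' > "$img"
# -o print matches, -b prefix each with its byte offset,
# -a treat binary as text, -P enable \x escapes (GNU grep only).
grep -obaP '\x03\xd9\xa2\x9a\x67\xfb\x4b\xb5' "$img" | cut -d: -f1
```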
Haven’t compiled in a while, but I know piping curl to bash is bad. Does that apply to gcc too? I don’t mean to sound like I’m second-guessing all the commands I run; I just curl-ed it to a file that I skimmed, and I’m trying to get that compiled. I’ll let you know.
Yep, that’s a sensible approach. I do the same.
gcc -O2 -o bgrep bgrep.c
You don’t have to install the resulting
bgrep binary if you copy it into the directory where
private.img.partial.recovered is and change bgrep to
./bgrep in the recovery command.
I got a few files; still sorting out how to get my machine to read them. Would I be able to adjust the 1M if a file is somehow larger than 1 megabyte, assuming that’s what the 1M is for?
Thanks A MILLION! I got my BTC, and 3/4 LUKS keys. I don’t need the keys, but I’ll keep digging and see if I can merge the KDBX files, as it’s possible the 4th LUKS key is in them. I’ve switched to 20M and I’m still unable to open some of them. It’s weird since I only had one KeePass DB, but that’s more than I thought I’d get back. I don’t know if I can tip, but if it doesn’t break rules, do you have an XMR address I can send to as a thank-you while keeping anonymity?
Edit: That may take a while given I have to resync the xmr chain, but I’ll do it when I have it synced.
Congrats! Pretty cool to see that actually work.
Probably old copies of the same database that were hanging around in the VM filesystem’s free space, some of them still complete and some partially discarded/overwritten. (Or maybe complete but fragmented - which would make those a lot harder to recover.)
You don’t have to at all, but if you’re still in a spendy mood by the time it’s synced: