Recovering an AppVM after the LVM thin pool reached 100%

I was building the Arch template. The LVM pool was already around 85% full, but I was keeping a close eye on it.

I received warnings in the UI that it had reached 95%. At that point I deleted a StandaloneVM I hadn't used in a while. The build of the Arch template completed successfully, and I thought everything was fine since the pool was back to 89%.
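
For what it's worth, I've been checking the pool usage from dom0 with roughly the following (the VG/pool names match the default Qubes install, as also seen in the logs below):

sudo lvs -o lv_name,data_percent,metadata_percent qubes_dom0/pool00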

However, I can’t get into one of my qubes that was running while the disk space was fluctuating. Looking at the journal logs in dom0, it turns out the LVM pool did reach 100%; it may simply have happened too quickly for me to notice:

Sep 27 19:04:24 dom0 lvm[876]: Monitoring thin pool qubes_dom0-pool00-tpool.
Sep 27 19:04:34 dom0 lvm[876]: WARNING: Thin pool qubes_dom0-pool00-tpool data is now 98.07% full.
Sep 27 19:05:44 dom0 lvm[876]: WARNING: Thin pool qubes_dom0-pool00-tpool data is now 100.00% full.
Sep 27 19:06:55 dom0 kernel: device-mapper: thin: 253:3: switching pool to out-of-data-space (error IO) mode
Sep 27 19:07:07 dom0 kernel: EXT4-fs warning (device dm-4): ext4_end_bio:351: I/O error 3 writing to inode 2359948 starting block 19583872)
Sep 27 19:07:07 dom0 kernel: Buffer I/O error on device dm-4, logical block 19583872
Sep 27 19:07:23 dom0 kernel: EXT4-fs warning (device dm-4): ext4_end_bio:351: I/O error 3 writing to inode 2360213 starting block 19583873)
Sep 27 19:07:23 dom0 kernel: Buffer I/O error on device dm-4, logical block 19583873
Sep 27 19:26:44 dom0 dmeventd[876]: No longer monitoring thin pool qubes_dom0-pool00-tpool.
Sep 27 19:26:44 dom0 lvm[876]: Monitoring thin pool qubes_dom0-pool00-tpool.
Sep 27 19:26:45 dom0 dmeventd[876]: No longer monitoring thin pool qubes_dom0-pool00-tpool.
Sep 27 19:26:45 dom0 lvm[876]: Monitoring thin pool qubes_dom0-pool00-tpool.
Sep 27 19:26:55 dom0 lvm[876]: WARNING: Thin pool qubes_dom0-pool00-tpool data is now 89.49% full.
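
(For completeness, I pulled those messages out of the dom0 journal with something along these lines; the exact filter is just what I happened to use:)

sudo journalctl -b | grep -iE 'thin pool|device-mapper|EXT4-fs'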

Even after freeing up space, I can’t get into the qube. All qvm-run commands fail with the following (red text, meaning it came from inside the client qube):

mkfifo: cannot create fifo '/tmp/qrexec-rpc-stderr.16122': Read-only file system
mkfifo: cannot create fifo '/tmp/qrexec-rpc-stderr-return.16122': Read-only file system
/usr/lib/qubes/qubes-rpc-multiplexer: 12: /usr/lib/qubes/qubes-rpc-multiplexer: cannot open /tmp/qrexec-rpc-stderr-return.16122: No such file
/usr/lib/qubes/qubes-rpc-multiplexer: 15: /usr/lib/qubes/qubes-rpc-multiplexer: cannot create /tmp/qrexec-rpc-stderr.16122: Read-only file system
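
For example, even something as trivial as the following fails in the same way (the qube name here is just a placeholder):

qvm-run --pass-io myqube 'echo test'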

Looking at the console log of the qube, it apparently remounted its filesystems read-only:

[ 6322.866694] EXT4-fs error (device xvda3): ext4_journal_check_start:84: Detected aborted journal
[ 6322.866724] EXT4-fs (xvda3): Remounting filesystem read-only
[ 6322.867781] EXT4-fs (xvdb): Delayed block allocation failed for inode 2771 at logical offset 2048 with max blocks 2048 with error 30
[ 6322.867818] EXT4-fs (xvdb): This should not happen!! Data will be lost
[ 6322.867818]
[ 6322.867844] EXT4-fs error (device xvdb) in ext4_writepages:2795: Journal has aborted
[ 6322.867911] EXT4-fs error (device xvdb) in ext4_reserve_inode_write:5618: Journal has aborted
[ 6322.876345] EXT4-fs error (device xvdb): ext4_journal_check_start:84: Detected aborted journal
[ 6322.876378] EXT4-fs (xvdb): Remounting filesystem read-only
[ 6322.879156] EXT4-fs (xvdb): failed to convert unwritten extents to written extents -- potential data loss!  (inode 2771, error -30)
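
(That console output is read from dom0. Assuming console logging is enabled, the guest console log can be read directly, or the live console attached, roughly like this; again, the qube name is a placeholder:)

sudo less /var/log/xen/console/guest-myqube.log
sudo xl console myqube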

Now that I've fixed the disk space issue, I want to fix the qube, except I can’t log in via xl console: the root user gets "Login incorrect", and the regular user gets "user is locked". Unfortunately, I did not have any windows open in the qube, such as a terminal, from which I could have tried simply remounting read-write.
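
For reference, if I could get any shell inside the qube, what I would try is roughly the following (untested, and given the aborted journals the kernel may well refuse the remount and insist on an fsck first):

# root filesystem (xvda3)
mount -o remount,rw /
# private volume (xvdb), normally mounted at /rw in an AppVM
mount -o remount,rw /rw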

How can I get back in without shutting down the qube (and potentially losing unsaved data)?