Can using Qubes OS wear down my HDD/SSD?

Unless I’ve gotten it wrong, whenever I start an appVM, Qubes creates a temporary copy of that appVM’s templateVM’s root image, which is discarded when I shut the appVM down. This is why my free space decreases whenever I start an appVM.

So, if a templateVM’s root image’s size is, say, 8 GB, then am I writing 8 GB to my disk every time I boot up an appVM?

Also, semi-related: when installing Qubes, I didn’t use LVM (because I wasn’t familiar with it). What am I missing out on?

What are you using for your filesystem? Typically, Qubes uses copy-on-write, meaning the data isn’t actually duplicated unless it’s modified; at least, that’s how it works on LVM and ZFS. Personally, I use ZFS.

I’m using ext4. Did I shoot myself in the foot?

If it is a plain ext4 partition, the answer is yes. Supported filesystems are thin-provisioned LVM (the default), ZFS, or filesystems with reflink support (e.g. Btrfs).
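As a quick self-check, `cp --reflink` reveals whether a filesystem supports copy-on-write cloning. This is just a sketch, assuming GNU coreutils; the probe file paths under /tmp are arbitrary:

```shell
# Rough check for copy-on-write (reflink) support. On Btrfs or XFS
# the clone succeeds and shares blocks with the original; on plain
# ext4 it fails, meaning every copy of a template image is a full
# physical write.
truncate -s 100M /tmp/reflink-probe          # sparse test file
if cp --reflink=always /tmp/reflink-probe /tmp/reflink-probe.clone 2>/dev/null; then
    echo "reflink supported: clones share blocks until modified"
else
    echo "no reflink support: copies are full physical duplicates"
fi
rm -f /tmp/reflink-probe /tmp/reflink-probe.clone
```

Note this probe only applies to the plain-filesystem setup discussed here; on a default install the volume images live on an LVM thin pool, which does its copy-on-write below the filesystem layer.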

You are wearing out your SSD. It is advised to back up your important data and reinstall.

Further reading: Copy-on-write - Wikipedia

Ah, OK. I installed Qubes some time ago, so my memory is a bit fuzzy.

/dev/mapper/luks-8a1df5cd-9366-45b0-a869-48e68877dc87 /                       ext4    defaults,x-systemd.device-timeout=0,discard 1 1
UUID=08fd4613-d69d-459f-8a7e-990f32f989b0 /boot                   ext4    defaults,discard 1 2
UUID=B3B0-5F58          /boot/efi               vfat    umask=0077,shortname=winnt,discard 0 2
/dev/mapper/luks-77f96f0a-fd59-40cf-acaf-95ae81f974ed /var/lib/qubes          ext4    defaults,x-systemd.device-timeout=0,discard 1 2

If that is my fstab, then I’m definitely not using fstab, right?

I believe you meant that you are not using LVM. And the answer is yes: your root and /var/lib/qubes are directly on LUKS volumes.

In my /etc/fstab from dom0, I see that /dev/mapper/qubes_dom0-swap none swap defaults,x-systemd.device-timeout=0 0 0 is written on its last line.

Also, on its first line, I see /dev/mapper/qubes_dom0-root / ext4 defaults,x-systemd.device-timeout=0,discard 1 1

Compared to the /etc/fstab of @Hellothere, does my output show that I’m using ext4 on thin-provisioned LVM (and am not wearing down my SSD)? I used the default options when I installed Qubes OS a year ago.

You can use sudo pvdisplay to be sure.

And you have an LVM mapper (qubes_dom0), so you are most probably using LVM.
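A rough first check can even be done from the fstab alone: a default install names its logical volumes with the qubes_dom0 volume-group prefix. This is only a sketch using the fstab lines quoted in this thread as sample input (it assumes the default volume-group name wasn’t changed):

```shell
# Sketch: look for the default Qubes volume-group prefix. Sample
# input mirrors the dom0 fstab lines quoted above.
fstab='/dev/mapper/qubes_dom0-root / ext4 defaults,x-systemd.device-timeout=0,discard 1 1
/dev/mapper/qubes_dom0-swap none swap defaults,x-systemd.device-timeout=0 0 0'
if echo "$fstab" | grep -q '/dev/mapper/qubes_dom0-'; then
    echo "qubes_dom0 mapper entries found: most likely LVM"
else
    echo "no qubes_dom0 entries: root may sit directly on LUKS"
fi
```

In practice you would run grep '/dev/mapper/qubes_dom0-' /etc/fstab in dom0; pvdisplay remains the authoritative answer, since a mapper name alone doesn’t prove thin provisioning.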

What should I see in the output of the sudo pvdisplay command to confirm that I am indeed using ext4 on thin-provisioned LVM?

Yes, most probably, as I didn’t change the defaults during my Qubes OS install a year ago. Still, it’s good to see it verified. Thanks.

Also, use vgdisplay and lvdisplay to see more info.

In the output of the sudo vgdisplay command, I see a line that reads: Format lvm2.

There is also the lvs command. This guide shows where to look.
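For reference, here is a hedged sketch of what thin provisioning looks like in lvs output. The volume and pool names below are illustrative (they assume Qubes defaults), and the columns follow the default lvs layout (LV, VG, Attr, LSize, Pool):

```shell
# Illustrative lvs-style output: a thin pool's Attr string starts
# with "t" (e.g. twi-...), while thin volumes start with "V" and
# name their pool in the Pool column. Names here are assumptions.
sample='pool00 qubes_dom0 twi-aotz-- 400.00g
root   qubes_dom0 Vwi-aotz-- 400.00g pool00'
echo "$sample" | awk '$3 ~ /^t/ {print $1, "is a thin pool"}
                      $3 ~ /^V/ {print $1, "is a thin volume in pool", $5}'
```

So if sudo lvs in dom0 shows volumes with twi/Vwi attribute strings and a Pool entry, you are on thin-provisioned LVM.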

Can someone suggest exactly what a newcomer should choose when doing their first Qubes install, in, I guess, the best manner?

Are there any security considerations for the different methods?
Which disk-format option is most secure against the most difficult threats?

I don’t think there are any security considerations in choosing a filesystem, especially for Qubes. I didn’t choose LVM because I’d never used it before and didn’t understand it, and I didn’t use Btrfs because, ironically, I thought it would cause unnecessary writes to the disk (which Btrfs used to do, by the way).

@catacombs for a newcomer – just use the defaults.
