Cloned (dd) Qubes OS system to bigger SSD - how to reclaim all available disk space?

Hi, I have replaced my old 500 GB SSD with a new 2 TB one due to an upcoming disk failure. For convenience I dumped all content from the old to the new SSD via dd.
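For reference, the clone was a plain full-disk copy, roughly along these lines (the device names below are just placeholders, not the ones actually used):

# full-disk clone from the old (smaller) SSD to the new (larger) one
# /dev/sdX = old disk, /dev/sdY = new disk -- placeholders, adjust to your setup
sudo dd if=/dev/sdX of=/dev/sdY bs=4M status=progress conv=fsync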

So far so good; the disadvantage is that there is still only 500 GB of disk space available instead of 2 TB. I’d like to fix this now by extending the LUKS and LVM partitions of my running Qubes OS 4.2 system (command output simplified):

pvs

[user@dom0 ~]$ sudo pvs
  PV                                                    VG         Fmt  Attr PSize   PFree
  /dev/mapper/luks-1234                                 qubes_dom0 lvm2 a--  481.3g  

vgs

[user@dom0 ~]$ sudo vgs
  VG         #PV #LV #SN Attr   VSize   VFree
  qubes_dom0   1 320   0 wz--n- 481.3g  

lsblk

NAME                                                                                    MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINTS
sda                                                                                       8:0    0  2000G  0 disk  
├─sda1                                                                                    8:1    0   600M  0 part  /boot/efi
├─sda2                                                                                    8:2    0     1G  0 part  /boot
└─sda3                                                                                    8:3    0 481.3G  0 part
  └─luks-1234                                                                           253:0    0 481.3G  0 crypt
    ├─qubes_dom0-root--pool_tmeta                                                       253:1    0    40M  0 lvm   
    │ └─qubes_dom0-root--pool-tpool                                                     253:3    0    20G  0 lvm   
    │   ├─qubes_dom0-root                                                               253:4    0    20G  0 lvm   /
    │   └─qubes_dom0-root--pool                                                         253:6    0    20G  1 lvm   
    ├─qubes_dom0-root--pool_tdata                                                       253:2    0    20G  0 lvm   
    │ └─qubes_dom0-root--pool-tpool                                                     253:3    0    20G  0 lvm   
    │   ├─qubes_dom0-root                                                               253:4    0    20G  0 lvm   /
    │   └─qubes_dom0-root--pool                                                         253:6    0    20G  1 lvm   
    ├─qubes_dom0-swap                                                                   253:5    0   3.9G  0 lvm   [SWAP]
    ├─qubes_dom0-vm--pool_tmeta                                                         253:7    0   112M  0 lvm   
    │ └─qubes_dom0-vm--pool-tpool                                                       253:9    0 405.3G  0 lvm
    │   ├─qubes_dom0-vm--pool                                                           253:10   0 405.3G  1 lvm
        ...
    └─qubes_dom0-vm--pool_tdata                                                         253:8    0 405.3G  0 lvm
      └─qubes_dom0-vm--pool-tpool                                                       253:9    0 405.3G  0 lvm
        ├─qubes_dom0-vm--pool                                                           253:10   0 405.3G  1 lvm
        ...

After some research, I would now execute:

# resize LUKS encrypted volume online
sudo cryptsetup resize /dev/mapper/luks-1234                                
# resize LVM physical volume inside LUKS encrypted volume
sudo pvresize /dev/mapper/luks-1234                                
# Not sure about this one - we probably need to extend 
# volume group "qubes_dom0" and its volumes somehow as well.
sudo lvextend -l +100%FREE /dev/qubes_dom0

1. Is the above the correct way to extend the available disk space in Qubes? I’m not sure about the last command regarding the LVM volume group and volumes; I also read this:

sudo lvextend -l +100%FREE /dev/mapper/qubes_dom0-vm--pool -r

(reference; changed to +100%FREE)

2. There is also cfdisk available in dom0, which seems to provide a convenient TUI and a “Resize” option. Any experiences?

Thank you very much for any help.

Use +90%FREE instead, as that’s how it’s configured by default. The additional free space is desirable so you’ll be able to recover from out-of-free-space issues with the filesystem.
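To see what that leaves over, you can check the volume group and thin pools before and after extending, for example:

# free extents in the volume group (VFree); after the lvextend, roughly 10% of the newly added space should remain free
sudo vgs qubes_dom0
# fill level of the thin pools (Data% / Meta% columns)
sudo lvs qubes_dom0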

I have done this successfully, and your commands sound about right (it’s been months). However, I don’t see you mention resizing the thin pool. You need to do that as well.

EDIT: never mind, I see you have that command.

I appreciate your hint, @apparatus! So I would fire up the command sequence below in the running dom0:

# resize LUKS encrypted volume online
sudo cryptsetup resize /dev/mapper/luks-1234                                
# resize LVM physical volume inside LUKS encrypted volume
sudo pvresize /dev/mapper/luks-1234
# extend LVM volume pool for VMs
sudo lvextend -l +90%FREE /dev/mapper/qubes_dom0-vm--pool -r

For clarification of the last command:
To be honest, the LVM volume structure looks a bit daunting at first sight. I guess the root volumes and swap volume should not be extended, hence I omit

  • qubes_dom0-root--pool_tmeta
  • qubes_dom0-root--pool_tdata
  • qubes_dom0-swap

, so qubes_dom0-vm--pool_tmeta and qubes_dom0-vm--pool_tdata are left, which seem to be reserved for all VMs. As I understand it, sudo lvextend -l +90%FREE /dev/mapper/qubes_dom0-vm--pool -r will then extend each qubes_dom0-vm--pool entry, including the filesystem (-r), under

  • qubes_dom0-vm--pool_tdata
  • qubes_dom0-vm--pool_tmeta

Are there any resources where I can read more about the distinction between _tdata and _tmeta, as well as about how Qubes OS structures its LVM volumes?

Update: The _tdata and _tmeta entries seem to be common to LVM thin pools and not Qubes-specific (I’m no LVM expert):

The thin pool itself consists of two “parts” – data and metadata internal logical volumes, these are the _tmeta (thin pool metadata) and _tdata (thin pool data)
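A quick way to see those internal volumes in my own setup should be lvs with the -a flag, which also lists the hidden components:

# list all logical volumes of the VG, including the hidden internal _tdata/_tmeta volumes
sudo lvs -a qubes_dom0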

If the above is not completely wrong, I should be ready to get my hands dirty. Thanks again!

Resizing the partition (sda3 in this case) is indeed necessary and has to be done as the first step, otherwise the next steps (cryptsetup resize, …) will have no effect.

Oops, good catch! I misinterpreted cryptsetup resize as doing that. But seeing that lsblk shows the LUKS volume one layer below the /dev/sda3 partition, this makes sense.

Extending the /dev/sda3 partition into the available free space should probably rather be done from a live boot environment using gparted, gdisk, fdisk, etc., right?

It should work even on a running system. You may have to manually run sudo partprobe /dev/sda afterwards to pick up the new partition size, or reboot.
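If you want to be sure the kernel actually picked up the new size before running cryptsetup resize, a read-only check like this should do:

# partition size in bytes as the kernel currently sees it
sudo blockdev --getsize64 /dev/sda3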

Cool. I just tried out some of the disk partitioning tools that exist in dom0, before probably claiming the free space with cfdisk. With parted I get the following error for print:

[user@dom0 ~]$ sudo parted
GNU Parted 3.5
Using /dev/sda
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) print                                                            
Error: The backup GPT table is corrupt, but the primary appears OK, so that will be used.
OK/Cancel? 

This is surprising, since it is a quite new and fresh Qubes OS 4.2 installation. Booting works without issues. Should that bother me?

[user@dom0 ~]$ sudo fdisk /dev/sda

Welcome to fdisk (util-linux 2.38.1).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

GPT PMBR size mismatch (488397167 != 1953525167) will be corrected by write.
The backup GPT table is corrupt, but the primary appears OK, so that will be used.
The backup GPT table is not on the end of the device. This problem will be corrected by write.
This disk is currently in use - repartitioning is probably a bad idea.
It's recommended to umount all file systems, and swapoff all swap
partitions on this disk.


Command (m for help): 

This warning is expected after copying from a smaller to a larger disk:

GPT PMBR size mismatch (488397167 != 1953525167) will be corrected by write.

But I don’t know why you also get this one:

The backup GPT table is corrupt, but the primary appears OK, so that will be used.

Changing the partition size should automatically recreate a valid backup GPT table, though.
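To verify afterwards, listing the table with gdisk should show whether both GPT headers are OK again, e.g.:

# print the partition table and report any remaining GPT problems, without writing anything
sudo gdisk -l /dev/sda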

BTW if you haven’t backed up all your data, now would be a good time :wink:

Thanks. I didn’t even know there is a secondary GPT header (at the end)! Apparently there is, as the image in GUID Partition Table - Wikipedia shows.

Hm, it might be that “corrupt” in this context means non-existent.

Yeah, at least that’s also what I read. It says this error is automatically fixed by parted.

Haha, I expected this sentence sooner or later :wink:. Already done, but always good advice!

The solution worked flawlessly! For completeness, here are all the steps (executed in dom0):

For the backup GPT table corruption error, I just had to rewrite the secondary header once:

sudo gdisk /dev/sda
# optionally verify before
-> v
# autofix by writing partition table again
-> w

These commands will reclaim the available free space:

# resize /dev/sda3 partition to use all free space
sudo parted /dev/sda
(parted) resizepart 3 100%
(parted) quit
# resize LUKS encrypted volume online
sudo cryptsetup resize /dev/mapper/luks-1234
# resize LVM physical volume inside LUKS encrypted volume
sudo pvresize /dev/mapper/luks-1234
# extend LVM volume pool for VMs
sudo lvextend -l +90%FREE /dev/mapper/qubes_dom0-vm--pool -r
# pick up the new partition size without a reboot
sudo partprobe /dev/sda

Oh sorry, I should have mentioned that if this is required (i.e., if the partitioning tool says it didn’t automatically refresh the partition size because the partition is currently in use), this command would have to be run before cryptsetup resize. But apparently parted did it all automatically anyway, phew.
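So, just for clarity, had it been necessary, the order would have been roughly:

# only needed if the partitioning tool could not refresh the in-use partition by itself
sudo partprobe /dev/sda
# and only then grow the LUKS volume into the enlarged partition
sudo cryptsetup resize /dev/mapper/luks-1234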

Oh well, I second the “phew” :sweat_smile:. And yes, parted didn’t complain.

Having said that, it was very helpful to check the partition sizes with lsblk after each command in the sequence. It reported the changes directly, so any misbehaving partitioning tool would probably stand out.
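For reference, the check between the steps was simply something like:

# sizes of the disk, partitions and LVM volumes as currently seen by the kernel
sudo lsblk -o NAME,SIZE,TYPE /dev/sda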
