Storage configurations that perform best under common Qubes workflows

If you are willing to sacrifice the x16 PCIe slot, you can do x4/x4/x4/x4 bifurcation and run four PCIe 5.0 NVMe drives in RAID 0; in theory that gives you four times the sequential throughput of a single drive. I personally think it would be a waste of money, and you would not be able to use a GPU as a display device.
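For a sense of what that looks like in practice, here is a minimal sketch of assembling such an array with mdadm; the /dev/nvme* device names and the mount point are assumptions and will depend on how your board enumerates the drives:

```bash
# Stripe four PCIe 5.0 NVMe drives into a single RAID 0 array.
# Device names are examples; check yours with `lsblk`.
sudo mdadm --create /dev/md0 \
    --level=0 \
    --raid-devices=4 \
    /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1

# Format and mount the array like any other block device.
sudo mkfs.ext4 /dev/md0
sudo mount /dev/md0 /mnt/fastpool
```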

With AMD mainboards, you have 24 CPU-connected PCIe lanes: 16 are used for the GPU and 8 are general purpose. The remaining 8 lanes are used either for two PCIe 5.0 NVMe drives, or for one NVMe drive plus one extra x4 PCIe slot.

If you want to use two GPUs, you can only have a single 5.0 drive, which limits the total amount of disk space available at 5.0 speed.
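If you want to confirm what a given slot actually negotiated, lspci can show the live link speed and width. The PCI address below is just an example; find yours with `lspci | grep -i nvme`:

```bash
# Show the maximum (LnkCap) and negotiated (LnkSta) link for one device.
sudo lspci -vv -s 01:00.0 | grep -E 'LnkCap|LnkSta'
# LnkSta reporting "Speed 32GT/s, Width x4" means the drive is
# actually running at PCIe 5.0 x4 rather than falling back.
```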


One of the tools I have not had time to test (total Qubes noob - playing with it on my laptop) is mergerfs.

On my desktop I use mergerfs. I noticed that 80% of my stuff was on one disk (I have 4 SSDs); now I don't have to care where it goes.
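For anyone curious, pooling like that can be done with a single mount command. The branch paths and mount point below are examples; `category.create=mfs` is the mergerfs policy that sends new files to the branch with the most free space, which is what stops everything piling up on one disk:

```bash
# Pool four SSDs into one unified view at /mnt/pool.
# Branch paths and the mount point are placeholders.
sudo mergerfs -o allow_other,category.create=mfs \
    /mnt/ssd1:/mnt/ssd2:/mnt/ssd3:/mnt/ssd4 /mnt/pool
```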

On my backups (a USB DAS box) I use a combo of mergerfs and SnapRAID.
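A sketch of what that combo can look like, assuming one parity disk protecting two data disks; every path and disk name here is made up, so adapt to your own layout:

```bash
# Example snapraid.conf: parity lives on its own disk,
# content files are kept on the data disks.
sudo tee /etc/snapraid.conf >/dev/null <<'EOF'
parity /mnt/parity1/snapraid.parity
content /mnt/data1/snapraid.content
content /mnt/data2/snapraid.content
data d1 /mnt/data1
data d2 /mnt/data2
EOF

# Compute parity after new files land, then periodically verify.
sudo snapraid sync
sudo snapraid scrub
```

SnapRAID pairs well with mergerfs for backups because parity is computed on demand rather than on every write, which suits mostly-static data.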

So there’s another topology 🙂


That’s interesting. I observed that qvm-clone duration is roughly proportional to the size of the VM or template being cloned and assumed on-disk size was the key cause of the wait, but it sounds like some or much of that extra time is instead because larger VMs/templates tend to have more exposed apps, and therefore more qvm-appmenu activity.
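One way to check that hypothesis would be to time the clone and the appmenu step separately. qvm-clone and qvm-appmenus are standard dom0 tools, but the template and clone names below are placeholders:

```bash
# Time the whole clone operation (run in dom0).
time qvm-clone fedora-41 fedora-41-test

# Regenerate the clone's app menus on their own to see
# how much of the total that step accounts for.
time qvm-appmenus --update fedora-41-test
```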

A waste because, in the general case, storage I/O tends not to be a substantive performance bottleneck?

1 Like

Yes, the performance benefit would not be worth the 4x price compared to using a single 5.0 drive.

It would make a lot more sense if you were running a TB-sized database, but that’s not really your typical desktop application.
