This is a well-known pain point in the Qubes community. Your experience is completely normal — Qubes is extraordinarily I/O-intensive compared to conventional operating systems, because it runs multiple VMs simultaneously, each with its own virtual disk image. An HDD simply gets hammered by all that concurrent random I/O, which is why even a fast CPU can’t save you. The GPU is essentially irrelevant to the bottleneck.
Why Qubes on HDD is so painful
Qubes runs dom0 plus several AppVMs and a sys-net, sys-firewall, and often sys-usb simultaneously. Each VM has its own private virtual disk (LVM thin volumes by default in current Qubes releases, file-backed images on some configurations), and they all compete for the same spinning-disk heads at the same time. HDDs handle sequential reads well but are catastrophically slow at concurrent random I/O: 5–15 ms per seek versus roughly 0.05 ms on an NVMe SSD. The result is exactly what you're seeing: freezes, sluggishness, and near-unusable performance regardless of how fast your CPU is.
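The contention above can be put in rough numbers. A back-of-envelope sketch (the latency figures are illustrative round numbers, not benchmarks of any particular drive):

```python
# Back-of-envelope comparison of random-I/O capacity.
# Latency figures are illustrative round numbers, not measurements.

HDD_SEEK_MS = 10.0      # typical 7200 RPM seek + rotational latency
NVME_LATENCY_MS = 0.05  # typical NVMe 4K random-read latency

def iops(latency_ms: float) -> float:
    """With one outstanding request, how many random ops fit in a second."""
    return 1000.0 / latency_ms

hdd_iops = iops(HDD_SEEK_MS)       # ~100 IOPS
nvme_iops = iops(NVME_LATENCY_MS)  # ~20,000 IOPS

# Split the HDD's budget across, say, 6 concurrently running VMs
# (dom0 + sys-net + sys-firewall + sys-usb + two AppVMs):
vms = 6
per_vm = hdd_iops / vms

print(f"HDD:  {hdd_iops:.0f} IOPS total, ~{per_vm:.0f} per VM across {vms} VMs")
print(f"NVMe: {nvme_iops:.0f} IOPS at the same queue depth")
```

Around 16 random operations per second per VM is why everything stalls at once: a single application launch can issue thousands of random reads.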
HDD selection tips if you’re committed to it
If you must use an HDD, the factors that matter most are:
- Spindle speed: A 7200 RPM drive is the minimum — 5400 RPM will be borderline unusable. Enterprise-class 10,000 RPM drives (like the old Western Digital Velociraptor line) are significantly better, though they’re now hard to find new.
- Cache size: Larger onboard cache (64MB or 128MB) helps with burst workloads.
- Brand/reliability: Enterprise-oriented lines like Seagate IronWolf Pro or WD Gold/WD Red Pro have better sustained throughput and are built for heavier workloads, though the fundamental seek-time problem remains.
- Dedicated swap: Put swap on a separate physical drive if possible, so VM memory pressure doesn’t compete directly with VM disk I/O on the same heads.
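For the spindle-speed bullet specifically, the average rotational latency follows directly from RPM — on average the head waits half a revolution for the target sector — which is why 5400 vs 7200 vs 10,000 RPM matters:

```python
# Average rotational latency: half a revolution, on average.
# latency_ms = 0.5 * (60,000 ms per minute) / rpm
# This is only one component of total access time; head seek time
# (several more milliseconds) comes on top of it.

def avg_rotational_latency_ms(rpm: int) -> float:
    return 0.5 * 60_000 / rpm

for rpm in (5400, 7200, 10_000):
    print(f"{rpm:>6} RPM: {avg_rotational_latency_ms(rpm):.2f} ms avg rotational latency")
```

So a 10,000 RPM drive shaves only a couple of milliseconds per access compared to 7200 RPM — helpful, but nowhere near closing the gap to an SSD.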
CPU and RAM are actually the more important levers
Since the HDD will be the bottleneck, you want to minimize disk activity by keeping more in RAM. Qubes' own recommendations call for 16 GB as a practical minimum, but 32 GB makes a substantial difference because VMs can stay fully in memory rather than swapping. More RAM won't fix disk I/O, but it dramatically reduces how often it has to happen. CPU matters less, but hardware virtualization with an IOMMU is required — Intel VT-x plus VT-d, or AMD-V plus AMD-Vi — and more cores help you run more concurrent VMs without contention.
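To see why 32 GB helps so much, here is a rough memory budget for a modest session. All VM names and sizes below are hypothetical placeholders; real allocations depend on your templates, your `qvm-prefs` settings, and Qubes' memory balancer:

```python
# Rough RAM budget for a modest Qubes session.
# All figures are hypothetical placeholders, not measured values.

vm_ram_gb = {
    "dom0": 4.0,
    "sys-net": 0.5,
    "sys-firewall": 0.5,
    "sys-usb": 0.5,
    "work": 4.0,
    "personal": 4.0,
    "vault": 1.0,
}

total = sum(vm_ram_gb.values())
print(f"Budgeted: {total:.1f} GB")

for installed in (16, 32):
    headroom = installed - total
    verdict = "swap likely under load" if headroom < 2 else "VMs stay in RAM"
    print(f"{installed} GB installed -> {headroom:.1f} GB headroom ({verdict})")
```

With 16 GB, even this modest setup leaves almost no headroom, so memory pressure spills onto the already-saturated HDD as swap; with 32 GB the entire working set stays resident.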
On your secure-erasure concern
Your concern is well-founded. The difficulty of verifying SSD secure erasure is a real issue, particularly with drives that do internal wear-leveling and remapping: data written to one logical block may physically reside in several locations, and TRIM-based erasure doesn't guarantee overwriting every cell that ever held your data. Physical destruction (shredding, degaussing, or disintegration) is indeed far more reliable for HDDs than for SSDs, where shredding can still leave individual flash chips recoverable in a sophisticated lab setting.
A practical middle ground some security researchers use is full-disk encryption (which Qubes applies by default via LUKS) combined with destroying only the LUKS header: this cryptographically destroys access to all data on the drive without overwriting every sector. Note that any header backups (e.g. made with `cryptsetup luksHeaderBackup`) must be destroyed as well. This approach works equally well on SSDs and HDDs and is considered cryptographically sound, though it means trusting the encryption implementation rather than physical destruction. For the highest assurance, your instinct toward HDDs plus physical destruction remains the more conservative and defensible choice.
Realistic expectations
Even with a fast 7200 RPM enterprise HDD and 32 GB of RAM, Qubes will still feel noticeably slower than on an SSD — boot times will be long, VM startup will take many seconds, and heavy multitasking will still cause occasional pauses. It’s usable for security-critical work where you’re patient and disciplined about not running too many VMs simultaneously, but it won’t feel like a normal desktop OS.