I would like to access encrypted data from a remote NAS.
The decrypted data should only be visible to a data viewer VM disconnected from any net VM.
The remote NAS and the internet should not see the unencrypted data.
Writing to the unencrypted device on the data viewer VM should write back to the remote NAS in an encrypted fashion.
Here is a hypothetical setup for that purpose below:
Yes, this is a fairly standard setup, although I don’t see the purpose of the NAS. You could directly use sshfs.
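For example, something along these lines in a networked storage qube (host name and paths are made up):

```
# Mount the remote share directly over SSH; no NFS/SMB service needed on the NAS.
# "nas.example" and both paths are placeholders for your own setup.
sshfs user@nas.example:/srv/encrypted /mnt/remote \
    -o reconnect,ServerAliveInterval=15,ServerAliveCountMax=3
```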
Btw qcrypt does pretty much exactly that (disclaimer: I’m the author).
Actually, if you fear that your viewer VM may become compromised, you might consider a second encryption layer: the viewer VM could otherwise simply decide not to do its encryption job and write plaintext data to the block device, effectively leaking your data to the cloud.
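One way to get that second layer, sketched under the assumption that a separate proxy/storage qube holds the outer key (all device, file, and VM names here are hypothetical):

```
# In the proxy qube: wrap the NAS-backed image in an outer LUKS layer.
# The outer key never enters the viewer VM, so even plaintext writes from
# a compromised viewer leave this qube encrypted.
sudo losetup /dev/loop0 /mnt/remote/container.img   # file on the NAS mount
sudo cryptsetup open /dev/loop0 outer               # assumes it was luksFormat-ed once

# In dom0: hand only the decrypted outer device to the offline viewer VM
# (use the device name as listed by `qvm-block`).
qvm-block attach viewer-vm proxy-vm:dm-0
```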
At present, I am not sure whether my use case will run into cryfs stability concerns. For example, I don’t know whether multiple processes on the same system writing to the same filesystem will cause corruption. I might hit these issues by accident in the future, corrupting possibly critical data.
My Qubes computer has limited space. If a VM requires more space, I would like to attach a remote archive space to it. Whatever contains the VMs’ archive spaces should be incrementally backed up.
Here is a hypothetical setup for that purpose below:
It might be possible to replace the main QCow2 image file + snapshots with TrueNAS snapshots. I will have to familiarize myself with TrueNAS first to see if it is a viable option.
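If it turns out to be viable, the underlying ZFS side would look roughly like this (dataset and host names are invented; TrueNAS exposes the same operations through its UI and periodic snapshot tasks):

```
# Point-in-time snapshot of the dataset backing the archive space
zfs snapshot tank/vms/archive@2024-06-01

# Roll back to it, or send only the delta since the previous snapshot
# as the incremental backup
zfs rollback tank/vms/archive@2024-06-01
zfs send -i tank/vms/archive@2024-05-01 tank/vms/archive@2024-06-01 \
    | ssh backup-host zfs receive backup/vms/archive
```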
You would get much better performance if you used an iSCSI volume on the NAS. I’m not sure how to plug that into Qubes OS, but iSCSI is meant to serve block devices over the network. You could format the volume with LUKS and make it appear as a new drive for your pool.
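On the Linux side that could look roughly like this (the target IQN and portal address are made up; the Qubes attachment step is left out):

```
# Discover and log in to the NAS target (open-iscsi)
sudo iscsiadm -m discovery -t sendtargets -p nas.example
sudo iscsiadm -m node -T iqn.2005-10.org.freenas.ctl:qubes -p nas.example --login

# The LUN shows up as a local disk, e.g. /dev/sdb; encrypt it client-side
# so the NAS only ever sees ciphertext
sudo cryptsetup luksFormat /dev/sdb
sudo cryptsetup open /dev/sdb nas-drive
sudo mkfs.ext4 /dev/mapper/nas-drive
```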
Though I’m not working with VMs/disk images, some of this applies to my setup(s). My remote servers are connected over the public internet.
I’m using NFS over VPN (WireGuard). The NFS service runs on an encrypted ZFS filer. The NFS datasets are exported exclusively to that VPN address space, and the shares are mapped to certain fixed IPs that are bound to authenticated WireGuard clients. Files are stored and transmitted encrypted at the application level as well.
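Concretely, the pinning looks something like this (addresses and dataset paths are examples):

```
# /etc/exports on the filer: export only into the VPN address space
/tank/shares/alice  10.10.0.2(rw,sync,no_subtree_check)

# /etc/wireguard/wg0.conf on the filer: each authenticated client is
# pinned to one fixed address, so each export maps 1:1 to a key
[Peer]
PublicKey = <client-public-key>
AllowedIPs = 10.10.0.2/32
```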
Performance-wise, iSCSI is faster … in theory; but since I’m using sync writes anyway (and one should, when dealing with VMs), NFS is as “fast” as iSCSI, even when accessed locally. At least on my hardware. The same applies to encrypted SMB/CIFS. (Don’t forget to use a decent Optane SLOG, not just for speed but for transaction safety, too!)
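For reference, the relevant knobs on the ZFS side (pool, dataset, and device names are placeholders):

```
# Force synchronous semantics for the datasets holding VM data
zfs set sync=always tank/vms

# Dedicated low-latency SLOG (e.g. an Optane device) to absorb the
# sync-write penalty
zpool add tank log /dev/nvme0n1
```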
iSCSI should provide better performance, as it can cache much more aggressively because it has exclusive access to the storage. However, I found NFS 4.1 to achieve very good performance compared to its previous version, especially with a local cache area (a few GB can greatly speed up what you access most).
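That local cache is FS-Cache on Linux; a minimal sketch of enabling it (server name and export path invented):

```
# cachefilesd provides the on-disk backing store for FS-Cache
sudo systemctl enable --now cachefilesd

# "fsc" opts the mount into the local cache
sudo mount -t nfs4 -o vers=4.1,fsc,noatime nas.example:/export/data /mnt/data
```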
Sure, it does. But not on ZFS with sync writes: any advantage iSCSI could have is neutered by the induced I/O latency of sync writes on ZFS. I didn’t write this as a general “judgement”, just as a hint, because TrueNAS (as a ZFS appliance) was mentioned above.