Yes… that’s the ideal.
You probably didn’t understand me. Why can’t you do everything related to folder sharing in the same qube?
One man’s friction is another man’s traction. Can you think of examples of security that are frictionless compared to no security? It seems that friction is part of it. Not excessive friction… but some?
There was discussion about using rsync to handle the connectivity, which was received more favorably by the lead developer. Perhaps there is a way to achieve similar functionality without actually “file sharing”. Apparently file sharing has a much bigger impact on security than other forms of inter-VM transfer.
I was thinking it might be useful to have a secondary form of GUI for qvm-copy-to-vm. Instead of right click → copy to vm, maybe have a folder that allows properties to be set that “link” it to another VM. Whenever a file is dragged to it, the file is stored in the folder locally and copied to a given VM via the secure copy method. That would mimic file sharing without actually file sharing.
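A minimal sketch of such a “linked” drop folder, assuming inotify-tools is installed in the qube. The folder name, target VM name, and the RUN_WATCHER guard are all invented for illustration:

```shell
#!/bin/bash
# Hypothetical sketch: a local "drop" folder linked to one target VM.
# Anything written into it is kept locally and also pushed to that VM
# via qvm-copy-to-vm. Names are examples; requires inotify-tools.
DROP_DIR="$HOME/drop-to-work"
TARGET_VM="work"

# Keep the file locally and copy it to the linked VM.
forward_file() {
    qvm-copy-to-vm "$TARGET_VM" "$1"
}

# Guarded so sourcing this file only defines the function; set
# RUN_WATCHER=1 to actually start watching the folder.
if [ "${RUN_WATCHER:-0}" = 1 ]; then
    mkdir -p "$DROP_DIR"
    # inotifywait emits one line per completed write into the folder
    inotifywait -m -e close_write --format '%w%f' "$DROP_DIR" |
    while read -r newfile; do
        forward_file "$newfile"
    done
fi
```

A file manager could be pointed at $DROP_DIR so that drag-and-drop onto it triggers the copy.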
How about this: given that it’s likely to shoot me in the foot, does anyone have a clear-cut method to enact it? I absolve them of responsibility in advance.
If any of us knew more about the reason for this circus, perhaps a path could be advised that doesn’t lead through this objectionable area. Are you, for instance, a developer with specific requirements about compartmentalization and control over a qube’s network access? If so, wouldn’t another VM solution like KVM or VirtualBox be able to serve instead?
Are you looking to create a “one VM can read/write, all the rest can only read” type of share, or do you want all participating VMs to have read/write capability?
If the former, and assuming an LVM thin pool, I can think of a scripting approach that’ll work.
You create, then attach, a long-lived read/write LV to a “publisher” VM (you script the attachment after each VM startup). That publisher VM aggregates all the data to be shared.
Your script would also create a read-only snapshot of the read/write LV after shutting down the publisher VM each time. That snapshot will have an unchanging name or a discoverable pattern.
Through additional scripting, each VM that needs the data would get its own locally-named snapshot of that publisher’s read-only snapshot, creating/destroying it on each startup/shutdown.
Using a snapshot of a snapshot decouples the startup/shutdown of the publisher from that of the recipients, as the middle snapshot can be removed without requiring that the recipient VMs lose their copy. I think the snapshot-of-snapshot can also be set to be automatically discarded when no longer in use.
It does mean, similar to VM template volumes, that there’s a delay in delivering changes: you only get the changes made by the publisher after a) the publisher shuts down (synchronizing the read/write volume into the read-only snapshot) and b) the recipient VMs are started up (or restarted if they were already running).
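The scheme could be scripted in dom0 roughly as below. The volume group and pool names (qubes_dom0, vm-pool), the VM names (publisher, work), and the dm-* device names are all assumptions; check your own system, and note that `-K` is needed to activate thin snapshots since they carry the skip-activation flag.

```shell
# Hypothetical dom0 sketch of the publisher / snapshot-of-snapshot scheme.
VG=qubes_dom0
POOL=vm-pool

# One-time: the long-lived read/write volume the publisher writes into.
create_shared_volume() {
    lvcreate -V 20G -T "$VG/$POOL" -n shared-rw
}

# Run after each publisher startup: hand it the RW volume.
on_publisher_start() {
    qvm-block attach publisher dom0:dm-14   # device name is illustrative
}

# Run after each publisher shutdown: refresh the fixed-name RO snapshot.
on_publisher_stop() {
    lvremove -f "$VG/shared-ro" 2>/dev/null || true
    lvcreate -s -p r -n shared-ro "$VG/shared-rw"
    lvchange -a y -K "$VG/shared-ro"        # thin snapshots need -K
}

# Run on each recipient startup: private snapshot-of-snapshot, attached RO.
on_recipient_start() {
    local vm="$1"
    lvcreate -s -p r -n "shared-for-$vm" "$VG/shared-ro"
    lvchange -a y -K "$VG/shared-for-$vm"
    qvm-block attach --ro "$vm" dom0:dm-15  # device name is illustrative
}

# Run on each recipient shutdown: drop its private snapshot.
on_recipient_stop() {
    lvremove -f "$VG/shared-for-$1"
}
```

The functions would be wired into whatever hooks you use around VM startup/shutdown.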
While all other replies in this thread are right to question whether this really is necessary: if you feel confident that you know what you are doing, you should have a look at GitHub - unman/qubes-sync: Simple syncing between qubes over qrexec.
unman is also on this forum if you should need any “first-hand” help with this.
Again, keep in mind that nobody in here is just against your approach for the sake of being right. Instead, we tend to question every usage scenario in order to make it more secure. If you feel comfortable with sharing how you use Qubes and what exactly you need this functionality for it could probably help in order to understand whether file sharing is really the best option for you.
THANKS! That’s what I was looking for.
It’s pretty straightforward but if you need help, post here.
I think what would be extremely useful would be a “mount folder from qube B into qube A” feature, complete with qrexec permission dialog. I know I would use it all the time. This would probably come in two parts, plus an optional third:
1. A qrexec service (accepting the source folder as an argument) to be deployed in qube B, which starts an RPC daemon that serves the folder via 9p.
2. qube A mounts the file system using the built-in 9p kernel module, connecting to the qrexec service from (1) and, once the connection is established, mounting the file system served by qube B over qrexec.
3. (optional) Perhaps a simple GUI program to establish the mount from qube A, which also displays prior mounts, and maybe attempts to save these mounts so they are re-established on boot.
Note that such an implementation would then be compatible with GVFS and KIO, so applications would transparently access folders and files from other qubes that have already been authorized by the qrexec subsystem.
FUSE-SSHFS is, sadly, not it, because it requires SSH keys, an SSH daemon, and it’s just crap to set up. I have yet to see a FUSE file system that is adequate to retrofit for the purpose, and (having been a contributor to zfs-fuse myself in my younger years) I do dread writing the FUSE code from scratch, in particular to get the code correct so as to avoid data corruption. I think FUSE is a non-starter here.
I would in fact be willing to fund the development of this component, and also work together with the person leading this project.
And, yes, I know it’s likely that there are security risks associated with this client/server model between qubes, and that we should just have files for purpose X be used only on qube X, and files for purpose Y be used only on qube Y… but I think (a) many of us just have a file server, likely mounted in one of our qubes, and we’d like to give other qubes access to a subset of it, and (b) the design of this suggested project mitigates security risks substantially, reducing the compromise risk to two factors only:
- Exploitable bugs in the implementation of the “mount share” qrexec service.
- Data leaks arising from bad user diligence in allowing or denying access to a specific server folder to be mounted in the client qube.
EDIT: Eureka, I think? Instead of FUSE, perhaps 9p is the correct transport: v9fs: Plan 9 Resource Sharing for Linux — The Linux Kernel documentation — it’s widely supported and robust because it’s heavily used to do folder sharing in the context of virtualization, and it’s a dead-simple protocol that can go over any stateful connection transport, without the bullshit authentication and encryption overhead of something like SSHFS.
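One rough way to wire this up, assuming a 9p server that speaks on stdin/stdout (u9fs is inetd-style and does exactly that) and the kernel’s fd transport on the client. The service name (my.Share9P), u9fs flags, and paths below are assumptions, not a tested implementation:

```shell
# Server side (qube B): a qrexec service that speaks 9P on stdin/stdout.
install_9p_service() {
    cat > /etc/qubes-rpc/my.Share9P <<'EOF'
#!/bin/sh
# flags are illustrative; consult the u9fs documentation
exec u9fs -a none -u user /home/user/shared
EOF
    chmod +x /etc/qubes-rpc/my.Share9P
}

# Client side (qube A): with fds 3 (read) and 4 (write) already wired to
# the service's stdout/stdin, mount via the kernel's 9p fd transport.
mount_9p_share() {
    local server_qube="$1" mountpoint="$2"
    mkdir -p "$mountpoint"
    mount -t 9p -o trans=fd,rfdno=3,wfdno=4,version=9p2000.L \
        "$server_qube" "$mountpoint"
}
```

The fd plumbing between qrexec-client-vm and the mount call is the fiddly part this sketch glosses over.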
If you don’t need multiple AppVMs to access it simultaneously, you could set up an AppVM like sys-usb, create an image file (*.img), format it, copy your files into that file system, and then use qvm-block to attach that image file as a volume to any other AppVM in read-only mode.
With it read-only, you might be able to reference it from multiple places, but I have no experience doing that. Attaching a r/w volume in multiple places is simply asking for corruption.
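The steps might look roughly like this; the VM names (sys-usb, work), paths, size, and loop device name are examples only:

```shell
# Inside the holder AppVM: build, populate, and expose the image.
prepare_image() {
    local img="$1" size="$2"
    truncate -s "$size" "$img"     # create a sparse image file
    mkfs.ext4 -q "$img"
    sudo mount -o loop "$img" /mnt
    # ... copy the files to share into /mnt here ...
    sudo umount /mnt
    sudo losetup -f "$img"         # shows up as e.g. loop0
}

# In dom0: attach the loop device read-only to a consumer AppVM.
attach_image_ro() {
    local holder_vm="$1" loopdev="$2" consumer_vm="$3"
    qvm-block attach --ro "$consumer_vm" "$holder_vm:$loopdev"
}
```

For example, prepare_image /home/user/share.img 5G in the holder VM, then attach_image_ro sys-usb loop0 work in dom0.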
More info here:
LOL I was bored so I did a thing:
That’s right, shared folders for Qubes VMs.
The documentation (README.md) is improving. Please be patient.
Wow, that’s like wearing a mask and then making a big fat hole where the nose is to make breathing more convenient.
- There is currently no way to control which folders of the server qube can be requested by client qubes. In principle this should be doable because diod can export only a subtree of any file system hierarchy, but the next point needs to be addressed first.
- The connection remains open after unmounting. This means that the client VM can in principle continue to access resources from the file system exported by diod before the unmount happened.
I get (maybe) qubes-sync. But this? Why not put everything in one qube and call it a day? Or just stop using Qubes OS / compartmentalization?
If qube A is compromised, the attacker will have access to EVERYTHING in qube B !?!?!
I get this is a first version and you want to make it that only dedicated folders are accessible. But I would NEVER consider using anything like this without it being audited to death and accepted by the core team.
Yes, this is a first version. Here is a to-do list for which I would gladly take contributions:
- Permission system to allow certain folders to certain VMs (the argument in qrexec is sanitized, rendering it useless for that)
- Persistent notification on tray that indicates a specific folder is exported to a certain VM
- Performance improvements (large mounts from network shares can be very slow)
- Security hardening (make check was disabled in the specfile because there are some issues I don’t know how to fix, sadly)
- UI for mount clients to configure certain mounts to be mounted upon start / list existing configured mountpoints and statuses and open file managers to these mountpoints
Note that Plan 9 (the implementation choice) file system clients and servers are widely used in virtualization and are considered relatively safe, although I’m not sure whether that applies to the Qubes OS use case. This would be no different from allowing SFTP access between VMs, except there is no encryption overhead (which needn’t be there).
I have always used good old NFS for this purpose. It was designed to share a filesystem between separately administered machines on a network, which is what Qubes looks like. It has all the controls required and is well documented. I use a small external (to Qubes) partition, a loopback device within Qubes works as well. A directory can be used with “subtree-check” in the export. Best to use a contained partition to minimise chance of leakage. Quite easy to set up qrexec on R4.0 (not sure about 4.1). The server Qube probably should not do anything else and have no network as all its TCP ports will be accessible to each client.
S = name of server Qube. C = name of client Qube(s).
in dom0: add line(s) to /etc/qubes-rpc/policy/qubes.ConnectTCP:
C @default allow,target=S
in S’s template ensure rpcbind and nfs-kernel-server (Debian) packages are installed [rpcbind and nfs-utils on Fedora]
On Debian-11 you need to unmask rpcbind:
systemctl unmask rpcbind
in S rc.local: create /etc/exports here (or in the template above), assuming your shared device is mounted in /mnt:
echo "/mnt 127.0.0.1(rw,insecure)" >/etc/exports # “insecure” needed as sport > 1024
systemctl start rpcbind nfs-server
in each C rc.local:
qvm-connect-tcp ::111
qvm-connect-tcp ::2049
mount localhost:/mnt /rw/share # or wherever the shared folder should appear
OK, shoot me down!
[ed] Note dom0 will attempt to start the server automatically when the first client is started during bootup - if there is a problem with the server, you are in for a surprise! The screen fills with Qubes status alert boxes as dom0 goes into an infinite loop trying to start the server… I did post this on GitHub; I expect it could be 5 years till it bubbles to the surface.
Please educate me:
What use cases cannot be reasonably accommodated using the built-in file copy/move and copy & paste functionality?
And if such a case exists, why isn’t unidirectional sync enough?
What workflow would be spread over multiple qubes, yet at the same time require read/write access to the same files?
I didn’t push the OP to explain their use case because they might not be comfortable doing so. However, I now see multiple long-term users piling on, and no one has spent any energy on explaining the use case.
I’m not looking for a “fight”, I want to understand.
I have a use case that may not be the same as OP’s.
I have a file server, attached to a specific VM, where I archive stuff that comes from the Web. For obvious reasons, since the file server has more than just that class of file, I don’t want to mount the file server on the Web browsing VM (otherwise total compromise of the Web browsing VM means full access to the file server’s whole contents). Thus the file server is attached to that specific VM in question, which I’ll call the media server VM.
So what I’m doing now is mounting on my browsing VM a subfolder of the media server VM. That way I can directly save archival material into that subfolder.
Prior to this setup, I was suffering through saving locally on the VM, then having to have these long-ass sessions where I organize the files by (1) qvm-copying them from Web VM to media server VM (2) drag-and-drop copying them from media server VM to file server. This was exhausting and completely discouraged actually organizing the files.
Now I can just use the file chooser on my Web browser to select the exact folder I want to drop my archival download in. All archival materials end up in their final destination — their corresponding folders on my NAS machine.
HUGE difference in terms of usability.
Is this riskier than copying files manually?
Yes, I’d argue it is — right now there is no policy mechanism to prevent the Web VM from mounting “the wrong folder”, although in the future there will be. Furthermore, since we’re talking about complex protocols between VMs, there is always a possibility of a bug being exploited. The same is true for SAMBA and NFS setups that involve qubes.ConnectTCP, by the way, in the exact same way. My solution is just far less complex, in terms of both code and configuration, than something like SAMBA or NFS.
Is the current tradeoff worth it?
For me, it is. I can do something I could not do before, much more quickly, and with adequate risk tolerance (which will get better once the shared folders system asks for authorization to export a specific folder).
This code needs testers and reviewers!
Well, technically, if they don’t pull out the pipe from their mouth, and then harden the other end with a mask, they got it!