Best Way To Set Up A Global Share Folder?

Wow, that’s like wearing a mask and then making a big fat hole where the nose is to make breathing more convenient.

  • There is currently no way to control which folders of the server qube can be requested by client qubes. In principle this should be doable because diod can export only a subtree of any file system hierarchy, but the next point needs to be addressed first.
  • The connection remains open after unmounting. This means that the client VM can in principle continue to access resources from the file system exported by diod before the unmount happened.

lol indeed

I get (maybe) qubes-sync. But this? Why not put everything in one qube and call it a day? Or just stop using Qubes OS / compartmentalization?

If qube A is compromised, the attacker will have access to EVERYTHING in qube B!?!?

I get that this is a first version and that you want to make it so only dedicated folders are accessible. But I would NEVER consider using anything like this without it being audited to death and accepted by the core team.

1 Like

Yes, this is a first version. Here is a to-do list for which I would gladly take contributions:

  • Permission system to allow certain folders to certain VMs (the argument in qrexec is sanitized, rendering it useless for that)
  • Persistent notification on tray that indicates a specific folder is exported to a certain VM
  • Performance improvements (large mounts from network shares can be very slow)
  • Security hardening (make check was disabled in the specfile because there are some issues I don’t know how to fix, sadly)
  • UI for mount clients to configure certain mounts to be mounted at startup, list existing configured mountpoints and their statuses, and open file managers at these mountpoints

Note that Plan 9 (the implementation choice) file system clients and servers are widely used in virtualization and are considered relatively safe, although I am not sure whether that applies to the Qubes OS use case. This would be no different from allowing SFTP access between VMs, except that there is no overhead from encryption (which isn’t needed between local VMs anyway).
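
For reference, a generic Plan 9 mount on Linux with the in-kernel 9p client looks roughly like this (a sketch of the general mechanism, not necessarily the exact invocation the Qubes tool uses; server name and export path are placeholders):

        # mount a diod export over TCP with the kernel 9p client
        # (564 is the conventional 9p port; aname selects the exported subtree)
        mount -t 9p -o trans=tcp,port=564,version=9p2000.L,aname=/exported/dir server.example.com /mnt/ninep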

I have always used good old NFS for this purpose. It was designed to share a filesystem between separately administered machines on a network, which is essentially what Qubes looks like. It has all the controls required and is well documented. I use a small partition external to Qubes; a loopback device within Qubes works as well. A single directory can be exported using subtree_check in the export. It is best to use a self-contained partition to minimise the chance of leakage. Quite easy to set up via qrexec on R4.0 (not sure about 4.1). The server qube probably should not do anything else and should have no network access, as all of its TCP ports will be accessible to every client.
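
For the loopback option mentioned above, a minimal sketch (image path and size are arbitrary choices):

        # one-time: create and format a self-contained backing image in the server qube
        truncate -s 10G /rw/share.img
        mkfs.ext4 /rw/share.img
        # on each boot: mount it where the export below expects it
        mount -o loop /rw/share.img /mnt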

S = name of server Qube. C = name of client Qube(s).

in dom0, add line(s) to /etc/qubes-rpc/policy/qubes.ConnectTCP:

        C @default allow,target=S
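
On R4.1 the equivalent rule would go in a file under /etc/qubes/policy.d/ using the new policy format; an untested sketch, since the steps here were only verified on R4.0:

        # e.g. /etc/qubes/policy.d/30-user.policy (R4.1 syntax)
        qubes.ConnectTCP * C @default allow target=S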

in S’s template ensure rpcbind and nfs-kernel-server (Debian) packages are installed [rpcbind and nfs-utils on Fedora]
On Debian-11 you need to unmask rpcbind: systemctl unmask rpcbind

in S rc.local: create /etc/exports here (or in the template step above), assuming your shared device is mounted at /mnt:

        echo "/mnt 127.0.0.1(rw,insecure)" >/etc/exports  # “insecure” needed as the source port is > 1024
        systemctl start rpcbind nfs-server
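
To export only a subdirectory rather than the whole mount, something like this should work (a sketch; note the option is spelled subtree_check in exports(5)):

        echo "/mnt/share 127.0.0.1(rw,insecure,subtree_check)" >/etc/exports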

in each C rc.local:

        qvm-connect-tcp ::111
        qvm-connect-tcp ::2049
        mount localhost:/mnt /rw/share # or wherever shared folder should appear
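
To sanity-check the tunnel before mounting, the client can query the forwarded rpcbind; rpcinfo ships with the rpcbind package (an assumption worth verifying for your template):

        # should list the RPC services registered on S, via the forwarded port 111
        rpcinfo -p localhost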

OK, shoot me down!

[ed] Note that dom0 will attempt to start the server automatically when the first client is started during bootup; if there is a problem with the server, you are in for a surprise! The screen fills with Qubes status alert boxes as dom0 goes into an infinite loop trying to start the server… I did post this on GitHub; I expect it could be 5 years till it bubbles to the surface.

Please educate me:

What use cases cannot be reasonably accommodated using the built-in file copy/move and copy & paste functionality?

And if such a case exists, why isn’t unidirectional sync enough?

What workflow would be spread over multiple qubes, yet at the same time require read/write access to the same files?

I didn’t push the OP to explain their use case because they might not be comfortable doing so. However, I now see multiple long-term users piling on, and no one has spent any energy on explaining the use case.

I’m not looking for a “fight”, I want to understand.

3 Likes

I have a use case that may not be the same as OP’s.

I have a file server, attached to a specific VM, where I archive stuff that comes from the Web. For obvious reasons, since the file server has more than just that class of file, I don’t want to mount the file server on the Web browsing VM (otherwise total compromise of the Web browsing VM means full access to the file server’s whole contents). Thus the file server is attached to that specific VM in question, which I’ll call the media server VM.

So what I’m doing now is mounting on my browsing VM a subfolder of the media server VM. That way I can directly save archival material into that subfolder.

Prior to this setup, I was suffering through saving locally on the VM, then having to have these long-ass sessions where I organize the files by (1) qvm-copying them from Web VM to media server VM (2) drag-and-drop copying them from media server VM to file server. This was exhausting and completely discouraged actually organizing the files.

Now I can just use the file chooser on my Web browser to select the exact folder I want to drop my archival download in. All archival materials end up in their final destination — their corresponding folders on my NAS machine.

HUGE difference in terms of usability.

Is this riskier than copying files manually?

Yes, I’d argue it is. Right now there is no policy mechanism to prevent the Web VM from mounting “the wrong folder”, although in the future there will be. Furthermore, since we’re talking about complex protocols between VMs, there is always a possibility of a bug being exploited. The same is true, by the way, and in exactly the same way, for SAMBA and NFS setups that involve qubes.ConnectTCP; my solution is just far less complex, in terms of both code and configuration, than something like SAMBA or NFS.

Is the current tradeoff worth it?

For me, it is. I can do something I could not do before, much more quickly, and with adequate risk tolerance (which will get better once the shared folders system asks for authorization to export a specific folder).

2 Likes

Soon:

[image]

See thread Design questions for the next steps of the Qubes shared folders service - #2 by Rudd-O

This code needs testers and reviewers!

Wow!

2 Likes

Well, technically, if they don’t pull the pipe out of their mouth, and then harden the other end with a mask, they’ve got it!

Yeah… your notes mention:

Enable the ssh-setup service in the client. (This doesn’t exist as yet, but should check that ssh is installed and forwarder is enabled).
        systemctl enable qubes-ssh-forwarder.socket
        systemctl start qubes-ssh-forwarder.socket

File qubes-ssh-forwarder.socket has in it:

        ConditionPathExists=/var/run/qubes-service/ssh-setup

What exactly do I write in the file /var/run/qubes-service/ssh-setup?
And how (syntax?)

I’m guessing I have to set it root ownership like this file?

-rw-r--r-- 1 root root 0 date time qubes-update-check

I’m using R4.1 with debian-11-minimal

Thanks!

        sudo touch /var/run/qubes-service/ssh-setup

And how (syntax?)

I’m guessing I have to set it root ownership like this file?

-rw-r--r-- 1 root root 0 date time qubes-update-check

Not tested with that config, but worth testing.
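
One caveat: /var/run is a tmpfs, so a file touched there disappears when the qube restarts. The persistent way is to enable the service from dom0, which recreates the flag file on every boot (a sketch; <client-qube> is a placeholder, and it’s worth checking qvm-service --help on your release for the exact flags):

        # in dom0; the flag under /var/run/qubes-service/ is then created at each boot
        qvm-service --enable <client-qube> ssh-setup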

Even though it can be done, should it be done? Has an extent-of-condition review been done to assure it can be used in an enterprise environment?
It would probably be a good exercise to determine, as part of the answer, the potential threat surface this can create.
I’ll err with the group on the conservative side.
If this thread is for a single-user system, then great. But why go through the effort of setting up a compartmentalized system when it seems all these questions are an attempt to downgrade security to make a M$ box with VMware? Why use Qubes? Just get a M$ box and have at it.
With just the history of attacks on NFS, Samba, etc., and uncontrolled access by a variety of users in an enterprise setup, this should really be a red flag.

“It won’t happen to me”
“I know what I’m doing”
or… to create a honeypot.

For the sake of new Qubes users, this and similar topics should, as a must, be flagged with:
[WARNING: Against Qubes OS Basic Principles]

The way I see it, no one is born conservative. For me, it’s a sign that one has become responsible.

There is such great trust in this system.
How many people can tell when they’ve been hacked?
NFS is not encrypted.

Just some 1st order considerations that a new user may not consider:

https://www.netspi.com/blog/technical/network-penetration-testing/linux-hacking-case-studies-part-2-nfs/

another use case:
After downloading 1-2 TB with my torrent qube, I move the payload to storage. Yet 200-250 GB still remains in .cache, unneeded (especially in the backups).
I could delete the contents of .cache, or I could split the torrent qube into a torrent-disposable that stores the payload in a torrent-payload-qube.
Sure, if torrent-disposable gets compromised, the attacker may also have access to torrent-payload-qube…, but the same would apply to the original case where it’s all in one qube.
[edit] Or better: let torrent-disposable store the unfinished payload on a temporary-storage-qube on a faster SSD, and have it moved to a download-finished-storage-qube on slower but bigger secondary storage.
(or does this method somehow expose all other qubes?)

another use case:
Instead of having one email-qube, split it up into 3 as explained by @unman:

  • disposable mail fetcher
  • qube where mails are stored/processed without network
  • disposable mail sender

An attacker would have to compromise either the disposable fetcher or the sender to gain access to the emails… but they would have to do all that in the short time the disposables are running.

This is under the assumption that sshfs does not expose all qubes/dom0 (which I dunno).
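
Alternatively, without any sharing at all, the disposable fetcher could hand mail off with qvm-move; a minimal sketch (qube name and maildir path are just examples):

        # inside the disposable fetcher, after the mail fetch completes
        qvm-move-to-vm mail-storage-qube ~/Maildir/new/*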

1. “When it fails…”
2. “… the effectiveness of the risk treatment plan is now regarded as being more important than the controls.”
Sounds familiar?

It’s not if, but when. And IT security is just a tiny fragment of a huge puzzle.

So, as I see it, the way you write about it, your first “usecase” belongs to “I know what I’m doing” (“I could…, or I could…”), and the other one to “It won’t happen to me” ("… but he has to do that all in the short time…").

P.S. For those too lazy to read: unman never wrote about sharing in the linked note, but about qvm-move/copy and qubes-sync.

not sure that I know what I’m doing tbh :-s

Would following unman’s guide for qubes-sync make our system vulnerable?

oh my, just an empty file lol

Alright! it works! thanks :smiley:

although, reading @enmus … is it safe to do so?

@apoawaisoChae, following any guide without understanding what you are doing will potentially make your system more vulnerable. This is entirely independent of the guide or its author.

In general setting up a global share folder is a horrible idea for security.

If someone knows what they are doing, what the real world impact of their actions are, and they accept the resulting risk as appropriate for a specific circumstance, then they might go that route for the sake of convenience.

3 Likes

unman
You must consider the security implications in sharing data between qubes.

Yeah about that… when editing /etc/qubes/policy.d/30-user.policy

after doing qvm-tags client add sshfstest in dom0, this works:

        qubes.ssh * @tag:sshfstest @anyvm allow target=server

but this does not:

        qubes.ssh * client @anyvm ask target=server

What is the syntax to give only one specific qube access? Or is giving it a unique tag the only way?
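
For what it’s worth, an untested variant that isolates the difference (keeping allow but using the explicit qube name) would be:

        qubes.ssh * client @anyvm allow target=server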

@enmus, is this the right approach? Giving one specific qube access to one specific other qube? Or will this introduce vulnerabilities?

[edit] to clarify: I don’t mind that when an attacker compromises client, he also has access to the data on server but I do not want that same attacker to have access to @anyVM’s data. Does the policy above prevent that?