This is my setup for Mullvad VPN on Qubes 4.2. It differs from solene’s guide in that it uses a systemd path unit to trigger /usr/lib/qubes/qubes-setup-dnat-to-ns, so no rc.local (which is legacy stuff and shouldn’t be used anymore) is required.
You can also create multiple Mullvad ProxyVMs from a single template instead of needing to use standalone VMs.
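A minimal sketch of what the path-unit approach could look like. The unit names here are my own illustration; only the watched file and the script path come from the post:

```ini
# dnat-to-ns.path -- hypothetical unit name; fires when resolv.conf changes
[Unit]
Description=Watch /etc/resolv.conf for DNS changes

[Path]
PathChanged=/etc/resolv.conf

[Install]
WantedBy=multi-user.target
```

```ini
# dnat-to-ns.service -- triggered by the path unit above (same stem name)
[Unit]
Description=Re-run the Qubes DNAT setup script

[Service]
Type=oneshot
ExecStart=/usr/lib/qubes/qubes-setup-dnat-to-ns
```

With both units installed and the path unit enabled, the service runs each time the VPN app rewrites resolv.conf, replacing the rc.local approach.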
Changing DNS multiple times in the application while connected, such as enabling the first 3 DNS filters, will cause the systemd service to crash and stop updating the value pulled from /etc/resolv.conf.
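For context, the DNAT script picks the address to use from the nameserver entries in /etc/resolv.conf. A rough sketch of that lookup (the temp file and the awk one-liner are illustrative, not the script’s actual code):

```shell
# Simulate a resolv.conf as the VPN app might write it (temp file stands in
# for /etc/resolv.conf so this is safe to run anywhere)
resolv=$(mktemp)
printf 'nameserver 10.64.0.1\nnameserver 10.64.0.2\n' > "$resolv"

# Pull the first nameserver IP, the value a dnat-dns rule would need
dns=$(awk '/^nameserver/ { print $2; exit }' "$resolv")
echo "$dns"   # → 10.64.0.1
```

When the service crashes, this refresh never happens, so the dnat-dns chain keeps pointing at the stale IP.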
Even if it did work properly, it’s not reliable on Fedora. Running the qubes-setup-dnat-to-ns script manually will not even update the dnat-dns chain with the correct DNS IP after a few uses.
To fix this, you need to restart systemd-resolved first and then run qubes-setup-dnat-to-ns. This needs to be done every time the value changes.
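The manual fix described above boils down to two commands, run as root inside the ProxyVM (not runnable outside a Qubes VM, shown only as a sketch):

```shell
# Restart the stub resolver first so /etc/resolv.conf reflects the new DNS
systemctl restart systemd-resolved
# Then rebuild the dnat-dns chain from the refreshed value
/usr/lib/qubes/qubes-setup-dnat-to-ns
```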
Also, rc.local is not considered legacy on Qubes. It’s used by Qubes (qubes-misc-post.service) to run things early in the boot process. It can also be done with systemd, but it’s more work than throwing a few command lines in that file.
Okay, I realized that this is an issue now. The rate limit is 5 changes per 10 seconds, so the user has to be toggling them fairly quickly to cause the issue. Let’s see what can be done about that.
Even if it did work properly, it’s not reliable on Fedora. Running the qubes-setup-dnat-to-ns script manually will not even update the dnat-dns chain with the correct DNS IP after a few uses.
To fix this, you need to restart systemd-resolved first and then run qubes-setup-dnat-to-ns. This needs to be done every time the value changes.
I’ll double-check. I did notice the systemd-resolved restart is needed with IVPN, but I didn’t have any issue with Mullvad. If it is an actual issue, the workaround will be simple though… just add ExecStart=/usr/bin/systemctl restart systemd-resolved to the systemd service and that should be it.
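That workaround would look roughly like this in the triggered service (a Type=oneshot unit may have multiple ExecStart= lines, executed in order; the unit name is illustrative):

```ini
[Unit]
Description=Re-run the Qubes DNAT setup script

[Service]
Type=oneshot
# Restart the resolver first so the script sees the fresh DNS value
ExecStart=/usr/bin/systemctl restart systemd-resolved
ExecStart=/usr/lib/qubes/qubes-setup-dnat-to-ns
```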
Also, rc.local is not considered legacy on Qubes. It’s used by Qubes (qubes-misc-post.service) to run things early in the boot process. It can also be done with systemd, but it’s more work than throwing a few command lines in that file.
I do think it is really bad practice to keep using it though. Other distros have already deprecated it. Just because Qubes has legacy cruft doesn’t mean we should be deploying new systems with it.
Slowly toggling the first 3 DNS filters still crashes the service. I have tried playing a little bit with the rate limits of both the service and the path units, but to no avail. It crashes or skips changes at some point, with or without the systemd-resolved service restart.
It’s just an easy way to do things early in the boot sequence, and to do different things in different qubes. I agree that it could be done with systemd, but I don’t think there’s any harm in keeping it this way, or calling it legacy. It’s just an easy alternative.
Changing DNS multiple times in the application while connected, such as enabling the first 3 DNS filters, will cause the systemd service to crash and not change the value pulled from /etc/resolv.conf.
Okay, this has been fixed in the guide by adding StartLimitIntervalSec=0.
The service seems to stay up with this, but since systemd-resolved has its own rate limits, it can still crash and not update the current DNS IP.
It takes quite a few changes to crash, so it’s probably not something that would happen often, but it’s still a possibility.
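The fix mentioned above is a single directive in the [Unit] section of the triggered service (sketch only; the unit name is illustrative):

```ini
[Unit]
Description=Re-run the Qubes DNAT setup script
# Disable the start rate limiter so rapid DNS toggles cannot put
# the unit into the failed state (default is 5 starts per 10 s)
StartLimitIntervalSec=0

[Service]
Type=oneshot
ExecStart=/usr/lib/qubes/qubes-setup-dnat-to-ns
```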
It seems to me there is some confusion here and the “in general” looks like the key.
While the rc.local (in the context of traditional init in *nix) is legacy, there is no document which says that /rw/config/rc.local is deprecated, legacy or discouraged to use in Qubes OS. On the contrary, it is an essential mechanism for customizing an AppVM without touching the template. In a previous discussion, it has been clarified that it is not possible to create and enable an AppVM-specific systemd unit without modifying the template too.
So, perhaps the confusion comes from the identical filenames.
In a previous discussion, it has been clarified that it is not possible to create and enable an AppVM-specific systemd unit without modifying the template too.
Well we are creating Templates for the ProxyVMs, so it is still more proper to use systemd services instead of manually adding to /rw/config/rc.local.
not possible to create and enable an AppVM-specific systemd unit
Somehow I doubt that. systemd can use non standard locations for units, and bind-dirs exist.
there is no document which says that /rw/config/rc.local is deprecated, legacy or discouraged to use in Qubes OS.
That may be the case, but it is still very bad practice. It works in the same manner as the old rc.local and comes with all of its drawbacks. There is a reason why traditional distros deprecated this stuff, so I don’t think it should be encouraged at all.
Note to moderators: This is going off-topic, so please kindly split as appropriate.
That may be the case, but it is still very bad practice. It works in the same manner as the old rc.local and comes with all of its drawbacks. There is a reason why traditional distros deprecated this stuff, so I don’t think it should be encouraged at all.
“I don’t think it should be encouraged” != “It has been deprecated” [in Qubes OS]. Hence my remark. I allowed myself to comment on this because you are writing an educational material and teaching non-facts is bad.
That said, if you have arguments against it, have you reported it to GitHub? Or, considering the link I provided, how would you approach an AppVM-specific boot? I hope you are not suggesting to create a separate template each time such customization is required.
The problem is that you need to reload the systemd configuration during system startup and then continue starting services, including these newly loaded systemd services.
E.g. you add a new systemd service and enable it using bind-dirs in an app qube.
You can also add a systemd service in the template that runs systemctl daemon-reload after the bind-dirs service.
But systemd won’t take these new services into account during system startup, and the enabled services won’t start.
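For reference, persisting a unit file in an app qube via bind-dirs looks something like this (the service name here is an example; the drop-in path is the standard Qubes bind-dirs location):

```shell
# /rw/config/qubes-bind-dirs.d/50_user.conf
# Persist this path across AppVM restarts (Qubes bind-dirs mechanism)
binds+=( '/etc/systemd/system/myservice.service' )
```

The file survives reboots, but as noted above, systemd has already read its configuration by the time bind-dirs restores it, hence the daemon-reload problem.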
That said, if you have arguments against it, have you reported it to GitHub?
It makes a mess out of managing services. How are you gonna run 2 concurrent services? How are you gonna manage logging for them? How are you going to check their status? What about timers? Path units? Runtime hardening?
I mean the same logic applies to rc.local on normal Linux systems. It’s just a mess.
Or, considering the link I provided, how would you approach an AppVM-specific boot?
Something off the top of my head right now - need to verify it with an actual implementation later, of course:
In the Template VM:
Make custom.target
Make custom.target want graphical.target
Change default target to be custom.target
Bind /etc/systemd/custom and /etc/systemd/system/custom.target.wants
In the app VMs:
Just put whatever you want in /etc/systemd/custom, make it wanted by custom.target, and enable the systemd services in there.
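A rough sketch of the template-side steps above, under the same caveat as the proposal itself (unverified; custom.target, the directory names, and the bind-dirs drop-in name are my own):

```shell
# In the TemplateVM:
mkdir -p /etc/systemd/custom /etc/systemd/system/custom.target.wants

# Steps 1-3: create custom.target, make it want graphical.target,
# and set it as the default target
cat > /etc/systemd/system/custom.target <<'EOF'
[Unit]
Description=Per-AppVM custom target
Wants=graphical.target
EOF
systemctl set-default custom.target

# Step 4: persist both directories in app qubes via bind-dirs
# (template-wide drop-in location; name is illustrative)
cat > /usr/lib/qubes-bind-dirs.d/40_custom.conf <<'EOF'
binds+=( '/etc/systemd/custom' )
binds+=( '/etc/systemd/system/custom.target.wants' )
EOF
```

Whether systemd will actually load units out of /etc/systemd/custom (e.g. via symlinks in custom.target.wants) is exactly the part that needs verifying with a real implementation.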
No. The task is not to touch the template, as per the OP of the linked thread.
No, the task doesn’t make sense. Templates are meant to be touched so you can do security hardening and what not.
What does make sense is what you said right above:
I hope you are not suggesting to create a separate template each time such customization is required.
The setup I proposed (if it actually works, that is) does not require creating a separate template each time customization is needed. You just create that custom.target and the bind-dirs entries once in the shared template. Then, in individual AppVMs, you can just throw your per-AppVM systemd service in /etc/systemd/custom and enable it.