Tool: Simple Set-up of New Qubes and Software

It was taking a while to do NOTHING.

Yes Steve, you have been a guinea pig.
Thanks for that.

The difficulty is that we want to keep targets set to all templates, and
there isn’t an easy way to NOT target Windows templates. (There isn’t for
example a guaranteed name.)
We can't use grains, because most users won't have their Windows templates
under salt control at all.
I’ve added a warning about this in the description.

For the moment I’ve just excluded whonix templates completely.
I’m assuming that all Whonix templates have nodename set to “host”,
which is what has been reported to me.
If anyone has something different, please report it.

cacher1 is completely redundant.
All traffic leaving sys-whonix should be encrypted to the first Tor hop.
There’s nothing for cacher1 to see, or do.

Setting the netvm of cacher2 to a Tor proxy should enable all update traffic
to run through Tor. (I've made this clear in the description.)
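As a sketch of that netvm setting (assuming the qubes are named cacher and sys-whonix; substitute your own names), from dom0:

```shell
# Route all of cacher's network traffic -- and hence all update
# downloads it proxies -- through the Tor proxy qube.
# (Assumption: the qubes are named "cacher" and "sys-whonix".)
qvm-prefs cacher netvm sys-whonix

# Confirm the setting took effect:
qvm-prefs cacher netvm
```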

I’d value input on package handling:

Currently, in most cases, these packages don't do anything when
installing an upgrade other than update the salt files. They don't (e.g.)
rerun the initial setup or configuration.
Is this desirable? expected?

In most cases, when the package is removed, any templates or qubes that
were created are not removed. I used to do this but had complaints
from users.
Now (generally) the policy files are reverted but the qubes/templates
remain.
Is this desirable? expected?

This might be a way to distinguish windows templates. I don’t know if Salt can “see” that.


Yes, I know it's redundant. I just tried to reply to @Insurgo's question about whether it's possible for Whonix to use the same packages as Debian. So my idea was the opposite: Whonix downloads them and stores them in cacher1, and then Debian reuses them from cacher1, storing them into cacher2. No double downloading, but two cachers. Again, if I were sure, I wouldn't ask.

Nothing salt-related, but from a dom0 perspective, Windows templates need to have some qvm-features properties set: qvm-create-windows-qube/qvm-create-windows-qube at master · ElliotKillick/qvm-create-windows-qube · GitHub

So could qvm-features be used to remove Windows templates from the list of templates to be cached?
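If I understand the convention correctly, the feature could be inspected from dom0 along these lines (a sketch; I'm assuming the feature name is "os" and that qvm-create-windows-qube sets it to "Windows" — check your own setup):

```shell
# Print the "os" feature of every template. Windows templates created
# with qvm-create-windows-qube reportedly have it set to "Windows";
# templates without the feature print an empty value.
# (Assumption: the feature key is "os".)
for tmpl in $(qvm-ls --raw-list --class TemplateVM); do
    echo "$tmpl: $(qvm-features "$tmpl" os 2>/dev/null)"
done
```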

Exactly, see above.

I think removal should revert the "function", that is, the policies applied, in most cases. I do not think it makes sense to delete the specialized template and service VM, if any.

That seems correct.

As answered by @unman, cacher1 is redundant with cacher2, since the problem here lies in the Whonix templates themselves (which are doing the right thing by verifying first that Tor is enforced). That check is implemented in Whonix's update proxy check, which makes sure that the update proxy it will use supports Tor, prior to using it, based on what the webserver exposes to the template when it attempts to use the update proxy. As stated more concisely under Make whonix templates happy to use cacher · Issue #12 · unman/shaker · GitHub, we seem to have only two options:

  • Modify cacher to “lie” in its apt-cacher-ng served webpage, claiming that it enforces Tor (that is not its role, but that of the netvm it depends on: could be sys-whonix or not, so not a “proper” solution).
  • Have the cacher salt script modify Whonix templates to deactivate their Tor-enabled proxy check. Not so proper either.

@enmus: per the current implementation of Whonix templates and Qubes salt scripts at install time, if sys-whonix is required to be used as the update proxy, Whonix templates enforce tor+https for their repositories, and will fail to download updates if they use cacher as it stands today, without additional modification of cacher itself, or without having cacher disable the Tor proxy check inside the Whonix templates.

For my part, I implemented the first solution, since I know the cacher service VM will always have sys-whonix as its netvm, so I do not mind having the webpage cacher serves for the Whonix templates' check lie about Tor. But a better solution would be welcome.

With either of the two solutions above, if Whonix templates were deployed with the salt recipe, they will enforce downloading updates through Tor, and those update attempts will fail if the Whonix templates cannot access tor+https URLs.

@unman: I have not found a “dynamic” check, other than checking deployed policies prior to cacher salt deployment. If Qubes is installed to enforce updates through Tor, then a policy is deployed to enforce sys-whonix as the update proxy for all templates. I consider that if that policy is enforced, then

  • sys-whonix should be the netvm of cacher
  • userinfo.html, the page served by apt-cacher-ng, could be replaced with one stating that Tor is enforced
  • the dom0 policy stating that all templates should be updated through cacher will work.
    Until, that is, the user decides to point cacher at a netvm other than sys-whonix, at which point Whonix templates will fail to update through Tor.
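The static check described above could be sketched as follows (an assumption-laden example: I'm assuming a 4.1-style policy layout under /etc/qubes/policy.d and a policy line naming sys-whonix as the UpdatesProxy target; file names and the exact line differ between Qubes versions):

```shell
# Check whether a deployed policy routes template updates through
# sys-whonix, i.e. whether "updates over Tor" was chosen at install.
# (Assumption: 4.1-style policies in /etc/qubes/policy.d; on 4.0 the
# equivalent file is /etc/qubes-rpc/policy/qubes.UpdatesProxy.)
if grep -rqs 'qubes.UpdatesProxy.*target=sys-whonix' /etc/qubes/policy.d/; then
    echo "Template updates are enforced through sys-whonix"
else
    echo "No sys-whonix UpdatesProxy policy found"
fi
```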

That is why I'm asking for feedback under Make whonix templates happy to use cacher · Issue #12 · unman/shaker · GitHub, in case anyone can think of a better check, a better dynamic implementation that cacher could perform before applying changes.

From my point of view, someone deploying cacher is interested in caching the update packages and package lists for all templates, not just some. I do not see a case where users would want to revert that. And if they do, removing cacher.rpm should revert the changes it applied.

I don’t think so.
We target all templates with --templates, and that includes Windows
templates. We need this because we want to hit all templates regardless
of name.
This is separate from any qvm-features.

We could use a target that included all templates except those that
are Windows templates, or *BSD templates, or anything else that isn't
controlled by Salt, but that's not possible right now. (I suspect it
won't be possible at all because, again, that would rely on
users (not) taking some action, rather than on some intrinsic property of
the qubes.)
I’ll think about this.
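The targeting in question looks something like this (a sketch; SOME_STATE is a placeholder state name, not one of the package's actual states):

```shell
# --templates targets every TemplateVM by class, regardless of name,
# so Windows and *BSD templates are hit too -- there is currently no
# built-in way to exclude them from this target.
# (SOME_STATE is a placeholder.)
qubesctl --skip-dom0 --templates state.apply SOME_STATE
```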

Thanks. I totally overlooked this notorious fact…

As I understand it, windows templates “automatically” get that feature value set. (I don’t know the mechanism; I just know I didn’t have to explicitly do anything.) So it’s not dependent on a user action, though it IS dependent on someone else in Qubes Development Land not changing the convention sometime down the road.

A dialog box at install to choose templates read by the “simple set”? With a button to allow changing it later?

Could I install this in a template (fedora, for example)?

It’s an axiom of rpm packaging that they should not be interactive.
I do provide packages with a separate “Set up” menu item which
allows users to select which templates/qubes should have the package
applied.
I don’t want to do that here for two reasons:

  1. These are supposed to be simple set-up
  2. It points the user in the wrong place - at the implementation rather
    than the use. I don’t for the most part want naive users to start
    thinking about templates and qubes.

We're at cross purposes.
My point is that the qvm-features “os” value doesn't enter into the salt
targeting. We could change that, but it won't be for an unofficial
package like this.
The significant thing is that most people won't configure their
Windows/*BSD/other qubes to work with salt. That is what is dependent on
user action.
You’ll see this if you select a Windows qube in the Qubes-Update tool,
no?

I never presume to speak for the Qubes team.
When I comment in the Forum or in the mailing lists I speak for myself.

Install what?
The qubes-task package is intended to be installed in dom0, as are the
“simple” packages.

But you can apply any of the salt states to any template.

If you want to change any of the features of the packages, then you can
do this by downloading the package instead of installing it (use
--downloadonly), and then extracting the salt states.
Alternatively all of the states themselves are at
GitHub - unman/shaker.
You can download any of them to dom0, and then apply them using
qubesctl state.apply
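That download-and-extract approach might look like this (a sketch; PACKAGE_NAME is a placeholder, and the directory the rpm lands in varies with your setup, so locate it first):

```shell
# Download the rpm to dom0 without installing it:
sudo qubes-dom0-update --downloadonly PACKAGE_NAME

# Find the downloaded rpm, then unpack its salt states into the
# current directory (they live under ./srv/salt/ inside the package):
rpm2cpio PACKAGE_NAME.rpm | cpio -idmv './srv/salt/*'
```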

For example, let’s say you wanted to add Mullvad VPN to an existing
template, say debian-11.
Download the salt states from https://github.com/unman/shaker/mullvad,
copy the files in to dom0, using the usual
qvm-run -p QUBE_WHERE_DOWNLOADED 'cat PATH_TO_FILE' > FILENAME
(You don’t need the clone and create files at all)

Then create your VPN qube, create a mullvad directory under /srv/salt in dom0,
move the files there.
Then run:

qubesctl --skip-dom0 --targets=debian-11 state.apply mullvad.install
qubesctl --skip-dom0 --targets=NAME_OF_VPN_QUBE state.apply mullvad.configure

The first line installs the necessary packages into the template, and
the second configures your new VPN qube.

Some states will need more editing before you can apply them. You may
find that the top file specifies a target, or that the package names are
Debian-specific and need to be changed for Fedora or Arch. But the
changes you need to make should be obvious.


Hello, I tried to install the software but I am having some trouble.
First I opened a dom0 terminal and went to the directory
/etc/yum.repos.d/
Then I created a file there with nano, pasting in the code from
https://qubes.3isec.org/tasks.html
and named it
3isec-dom0.repo
After this I ran
sudo qubes-dom0-update
But after this nothing happened, so what do I have to do now to start the software?

Did you ever get as far as executing

sudo rpm -i 3isec-qubes-task-manager-0.1.1.x86_64.rpm

There’s a dom0 update that has refused to install every time I’ve tried it, and the only thing I can think of that might have something to do with that is installing the cacher.

Is there a quick way to check this?

Did you change the UpdateVM to be cacher?
Check “Dom0 update qube” in Global Settings.
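The same setting can be checked and changed from the dom0 command line (a sketch; sys-firewall is the usual stock default, but substitute whatever yours was):

```shell
# Show which qube dom0 currently uses to download its updates:
qubes-prefs updatevm

# Restore it if it was changed (sys-firewall is only the common
# default -- use your original value):
qubes-prefs updatevm sys-firewall
```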

What is the update, and what error message do you see if you run
sudo qubes-dom0-update at the command line in dom0?

It wasn’t set to cacher. I’ve now set it, and instead of taking half an hour to fail with error code 1, it now fails immediately with error code 127.

    stderr:
        Running scope as unit: run-r8d2bc1e5dbcf4615a03a796ca4bf9c23.scope
        Using cacher as UpdateVM to download updates for Dom0; this may take some time...
        Running '/usr/lib/qubes/qubes-download-dom0-updates.sh --doit --nogui '--exclude=qubes-template-*' '--quiet' '-y' '--clean' '--action=upgrade'' on cacher
        cacher: command failed with code: 127
    stdout:

I wasn't suggesting you set it to cacher - I was asking if you had.
Set it back to what it was.
Run the update and then report back.