Salting your Qubes

Hi,

I'm in the process of uploading various salt configurations, which you
may find interesting.
The files are on GitHub, at unman/shaker.

There's configuration for salting various useful stuff:
A caching proxy,
A version of split-ssh,
Multiple sys-usb using various controllers (specific to x230 model),
Kali template,
A qube for building Qubes,
A qube for building/flashing coreboot,
A multimedia qube,
A file sharing qube,
Adding support for Office or Windows filesystems to templates,

with more to come.

This isn't sophisticated salt - most of these are examples from training,
and are deliberately simple. Some are old, and may need tweaking.
They are also almost all based on a debian-10-minimal template
(naturally).
Even if you have no experience with salt you should be able to read the
files and understand what they are going to do.
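
To give a flavour, a state needs little more than this (an illustration
only, not one of the shaker formulas):

# /srv/salt/cacher-install.sls - install apt-cacher-ng in the target template
install-apt-cacher-ng:
  pkg.installed:
    - name: apt-cacher-ng

applied from dom0 with something like:
sudo qubesctl --skip-dom0 --targets=debian-10 state.apply cacher-install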

I hope you use them to get started with using salt in Qubes.

I'm in the process of packaging these, and will host them at
https://qubes.3isec.org
Although it's simple to produce packages that will actually implement
the states (as happens in the initial Qubes setup), I prefer not to do
this - mainly because I think it better for users to see what's been
installed and what actions will be taken. So the packages will provide
the salt formula, for you to review, change and implement as you will.
(Change my mind if you like.)

Happy to take suggestions for other configurations, or features.

unman


Hi unman.

I am learning to "salt my Qubes"; what convinced me was your recipe for
apt-cacher-ng (sometimes my bandwidth really matters).

So far I have got some things working, such as an automated
template/templateDispVM/staticDispVM for Firefox and its configuration
with some extensions.

So, with a little more confidence, I am now installing your apt-cacher-ng
recipe.

I read from the docs of apt-cacher-ng:
"6.3 Fedora Core
Attempts to add apt-cacher-ng support ended up in pain and the author lost any
motivation in further research on this subject. "

Did you get it working?

In any case, can you explain how you use apt-cacher-ng only for Debian
derivatives? The RPC policy you provided will apply to all templates, and
calls to https from Fedora will be denied, preventing them from updating;
or am I missing something?

> Happy to take suggestions for other configurations, or features.

Some time ago you mentioned on the mailing list using snort and tripwire.
I don't know whether they are trivial to deploy, but that is surely one of
the next things I will look at.


I have been fooled by the following - actually my apt-cacher-ng is also
denying Qubes updates:

Hit:1 http://HTTPS///deb.debian.org/debian buster InRelease
Hit:2 http://HTTPS///deb.debian.org/debian-security buster/updates InRelease
Err:3 http://HTTPS///deb.qubes-os.org/r4.0/vm buster InRelease
  Connection failed [IP: 127.0.0.1 8082]
Err:4 http://HTTPS///deb.qubes-os.org/r4.0/vm buster-testing InRelease
  Connection failed [IP: 127.0.0.1 8082]
Reading package lists...
Building dependency tree...
Reading state information...
All packages are up to date.
W: Failed to fetch http://HTTPS///deb.qubes-os.org/r4.0/vm/dists/buster/InRelease Connection failed [IP: 127.0.0.1 8082]
W: Failed to fetch http://HTTPS///deb.qubes-os.org/r4.0/vm/dists/buster-testing/InRelease Connection failed [IP: 127.0.0.1 8082]
W: Some index files failed to download. They have been ignored, or old ones used instead.

Yes, apt-cacher-ng works for Fedora updates.

You have to make some changes -
First, on the client side, comment out "metalink" lines, and uncomment
"baseurl" lines. This is because the metalink will keep loading new
https:// repositories, and apt-cacher-ng can't cache those requests,
as you know.
Second, watch the caches in /var/cache/apt-cacher-ng, and add any new
ones to the fedora_mirrors file - this is because that file doesn't
contain all Fedora repositories.
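
For illustration only - a stanza in the template's /etc/yum.repos.d/fedora.repo
might end up looking something like this after the change (exact contents
vary by Fedora release and by how you reach the cacher):

[fedora]
name=Fedora $releasever - $basearch
# metalink commented out, so dnf stops chasing fresh https:// mirrors
#metalink=https://mirrors.fedoraproject.org/metalink?repo=fedora-$releasever&arch=$basearch
# baseurl uncommented (pointing at a host listed in fedora_mirrors),
# so every request hits the same cacheable URL
baseurl=http://download.fedoraproject.org/pub/fedora/linux/releases/$releasever/Everything/$basearch/os/
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-fedora-$releasever-$basearch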

After a while you will have almost all your Fedora updates cached, and
will see the speed increase.

The repository was unavailable for a while. Was that the issue?

unman <unman@thirdeyesecurity.org> writes:

> The repository was unavailable for a while. Was that the issue?

Yes. I panicked.

> Yes, apt-cacher-ng works for Fedora updates.

Thanks for the details. I finally took the time to look at it.

> You have to make some changes -
> First, on the client side, comment out "metalink" lines, and uncomment
> "baseurl" lines.

The Cisco repository for the openh264 codec does not have a baseurl; I
found that I could use
http://HTTPS///codecs.fedoraproject.org/openh264/$releasever/$basearch
instead. I assume this can safely be added to
/etc/apt-cacher-ng/fedora_mirrors

Also, Fedora ships with
#baseurl=https://download.example/[...]
in the /etc/yum.repos.d conf files; I assume I had to replace those with
baseurl=http://HTTPS///downloads.fedoraproject.org/[...]

Then don't forget to
$ dnf clean all
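
For what it's worth, the adjusted stanza in
/etc/yum.repos.d/fedora-cisco-openh264.repo ends up roughly like this
(reconstructed as a sketch from the URLs above, not copied from the file):

[fedora-cisco-openh264]
name=Fedora $releasever openh264 (From Cisco) - $basearch
# shipped metalink, commented out
#metalink=https://mirrors.fedoraproject.org/metalink?repo=fedora-cisco-openh264-$releasever&arch=$basearch
# added by hand, since no baseurl is shipped
baseurl=http://HTTPS///codecs.fedoraproject.org/openh264/$releasever/$basearch
gpgcheck=1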

> This is because the metalink will keep loading new https://
> repositories, and apt-cacher-ng can't cache those requests, as you
> know.

I think we could also specify &protocol=http on metalinks, as explained in
"fedora - How to create an on-demand RPM mirror" on Unix & Linux Stack
Exchange. I have not tested it though.
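
Untested, but presumably that just means appending the parameter to the
existing metalink line, something like:

metalink=https://mirrors.fedoraproject.org/metalink?repo=fedora-$releasever&arch=$basearch&protocol=http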

> Second, watch the caches in /var/cache/apt-cacher-ng, and add
> any new ones to the fedora_mirrors file - this is because that file
> doesn't contain all Fedora repositories.

It is maybe too soon to tell; I don't know yet whether having changed the
URLs to use downloads.fedoraproject.org will actually leave me with mirrors
to manage. What I do know is that it was creating a directory named
  downloads.fedoraproject.org
before I added
  https://downloads.fedoraproject.org/pub/fedora/linux/
to
  /etc/apt-cacher-ng/fedora_mirrors

And downloads.fedoraproject.org is supposed to redirect to mirrors...

Just in case, I ran a script to duplicate every http URL in fedora_mirrors
as https.
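
Nothing clever - roughly this, assuming the stock one-URL-per-line layout
of the file:

# append an https:// twin of every http:// mirror line (run in the cacher qube)
sed -n 's|^http://|https://|p' /etc/apt-cacher-ng/fedora_mirrors > /tmp/https_mirrors
cat /tmp/https_mirrors | sudo tee -a /etc/apt-cacher-ng/fedora_mirrors > /dev/null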

I set up a systemd timer to watch for new directories in /var/cache/apt-cacher-ng/.

I also set up a timer to run /etc/cron.daily/apt-cacher-ng, which manages
expired files and generates the HTML report.
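
For the daily job, a minimal service/timer pair like this seems enough
(unit names are mine):

# /etc/systemd/system/acng-daily.service
[Unit]
Description=apt-cacher-ng daily maintenance (expiry and HTML report)

[Service]
Type=oneshot
ExecStart=/etc/cron.daily/apt-cacher-ng

# /etc/systemd/system/acng-daily.timer
[Unit]
Description=Run apt-cacher-ng daily maintenance

[Timer]
OnCalendar=daily
Persistent=true

[Install]
WantedBy=timers.target

enabled with: sudo systemctl enable --now acng-daily.timer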

Interestingly enough, Debian ships scripts in
/usr/share/doc/apt-cacher-ng/examples/dbgenerators.gz
that may take care of updating the mirror list files, at the cost of a
lengthy cycle of queries ... That could be triggered weekly.

Do you know about it?

Your instructions didn't say anything about AppVMs, so I figured out
that I could put a command in /rw/config/rc.local to switch the
repository files back to their initial values, so I can still test
packages there before really installing them in a template.
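
In a Debian-based AppVM that can be as little as rewriting sources.list at
boot (a sketch - the lines should match whatever the template originally
shipped):

# in /rw/config/rc.local - restore direct repository definitions in this
# AppVM, so packages can be test-installed here without going through the cacher
cat > /etc/apt/sources.list <<EOF
deb https://deb.debian.org/debian buster main
deb https://deb.debian.org/debian-security buster/updates main
EOF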

Lastly, whonix-* templates will fail to update with this in
dom0:/etc/qubes-rpc/policy/qubes.UpdatesProxy:

$type:TemplateVM $default allow,target=cacher

because Whonix ensures updates come over the Tor network. I haven't
figured out yet whether it is desirable to do something here.
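
(If it does turn out to be desirable, I suppose a more specific line for
the whonix-updatevm tag could simply sit above the catch-all, since the
first matching rule wins - something like:

$tag:whonix-updatevm $default allow,target=sys-whonix
$type:TemplateVM $default allow,target=cacher

but I have not tried it.)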

pillule <pillule@riseup.net> writes:

> I think we could also specify &protocol=http on metalinks, as explained in
> "fedora - How to create an on-demand RPM mirror" on Unix & Linux Stack
> Exchange. I have not tested it though.

I have seen that, but generally I don't want clear traffic, so it's not a
good option for me.

> Interestingly enough, Debian ships scripts in
> /usr/share/doc/apt-cacher-ng/examples/dbgenerators.gz
> that may take care of updating the mirror list files, at the cost of a
> lengthy cycle of queries ... That could be triggered weekly.
>
> Do you know about it?

Yes, but I've never(?) used it - the default lists are pretty good, and
it takes nothing to check if there are any rogue mirrors being fetched.

> Lastly, whonix-* templates will fail to update with this in
> dom0:/etc/qubes-rpc/policy/qubes.UpdatesProxy:
>
> $type:TemplateVM $default allow,target=cacher
>
> because Whonix ensures updates come over the Tor network.

I don't use Whonix.
Since you can configure cacher to fetch across the Tor network, this
looks brain dead to me. I think you must mean that Whonix ensures that
updates run through Whonix.

unman <unman@thirdeyesecurity.org> writes:

>> Because Whonix ensures updates come over the Tor network. I haven't
>> figured out yet whether it is desirable to do something here.
>
> I don't use Whonix.
> Since you can configure cacher to fetch across the Tor network, this
> looks brain dead to me. I think you must mean that Whonix ensures that
> updates run through Whonix.

Yes. That's it.

In another thread you spoke about not fetching the indexes for each
template (eventually reducing our fingerprint by reducing the requests we
make, right?), and about potential drawbacks. Do you mind sharing what you
found about that? I know there is this checkbox in acng-report.html, but I
don't know which option in acng.conf it corresponds to, nor the drawbacks
and possible mitigations.

The checkbox there is only used in admin operations.

You could look at FreshIndexMaxAge - this is used to "freeze" the index
files if clients are updating at nearly the same time.
In Qubes, this happens a lot.
Set that to a large value, and you can restrict the repeated calls to
fetch the indexes.
This is good - it means that (e.g.) there would be only 1 call to fetch
the Debian indexes while updating 15 templates.
This may be bad - if new packages are released during the "freeze", the
clients will only have the old versions in index and cache. They could
miss crucial security updates.
As always, it's a trade off.
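
For example, something like this in the cacher's /etc/apt-cacher-ng/acng.conf
would hold the indexes for an hour (an illustrative value - check the
comments in the stock acng.conf for the exact semantics):

# reuse one fetch of the Debian/Fedora indexes while templates update back-to-back
FreshIndexMaxAge: 3600

followed by a restart of the apt-cacher-ng service.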