How to disable an installed qubes-core-agent-networking?

Yeah, it sounds a bit strange: I want a way to quickly disable qubes-core-agent-networking in an AppVM when it happens to be installed in the template. (I realize that, whatever the method is, it may need to be repeated every time the AppVM starts up.)

I know one can simply not connect sys-firewall (or sys-net) to that VM and it’s isolated, but I want to be sure that nothing can happen even if I accidentally do connect one.

My motivation is that I’m finding I have LOTS of templates, many nearly identical (the same application) and differing only in which block devices they mount and/or which network they’re on. I realized that in the first case, the mounting is all managed by scripts I wrote. I can therefore create a template with only the app on it, then create multiple AppVMs based on that template and install my scripts not in the template but in each AppVM under /usr/local/bin, which persists in an AppVM. That way I can reduce the number of templates I have to update.

The one case where I can’t do this is for AppVMs that must access the network, since qubes-core-agent-networking is something I have no control over (and it looks like it starts daemons and so forth). So I have to install it in the template, which means it will be available to all AppVMs based on that template, even the ones I want to keep network-isolated. Or I could install the app on one template, then clone it and install qubes-core-agent-networking on the clone, but now I have two templates where I’d rather have one. If I can set up a way for an AppVM to “permanently” and “non-recoverably” disable the functionality on startup, that would be adequate (even if a bit of a kludge). [I put those words in quotes because of course the agent comes back when you restart the AppVM, since changes made to /usr/bin, etc., won’t persist; but they hold for one single run of the AppVM.]

I am imagining some sort of script, run on startup, that stops the services and then deletes some file those services need so they can’t be restarted. But it could also just be a matter of making some pref or feature change on the AppVM that would cause a net VM to refuse to connect to it even if the agent is installed and running.

Why not set the “Net Qube” parameter to “none” in the settings of those AppVMs that should not have network access?
Then that AppVM has no networking, no matter how many daemons run in it.
By the way, the NetVM never connects to an AppVM on its own; it only serves the AppVMs that have it set as their “Net Qube” in their own settings, and the connection is always initiated from the AppVM side.
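From dom0 you can also do this on the command line (the qube name here is just an example):

qvm-prefs my-qube netvm ''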

That’s something I’m already doing, but I’m trying to make it more foolproof.

The point of this is to ensure that even if I accidentally attach a netvm to the qube, it won’t matter.

Hmm, understood.
How about creating a “null net VM” and automatically connecting all your AppVMs to that fake net VM, with only the ones that need network being set to the real, working net VM?
Then the default for every newly created qube is no network, no matter whether qubes-core-agent-networking is installed in it. Only the qubes that you point at the real net VM after creation get connectivity.
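Roughly, from dom0, it could look like this (the names are only examples and this is just a sketch):

qvm-create --class AppVM --label gray null-net
# a qube can only be selected as a net qube if it provides_network
qvm-prefs null-net provides_network True
qvm-prefs null-net netvm ''
qubes-prefs default_netvm null-net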

That STILL wouldn’t prevent me from inadvertently changing the netVM to sys-firewall and having the VM connected to the network.

I want to take an AppVM whose template has qubes-core-agent-networking installed, and “break” that install so that even if I connect sys-net to the qube, it can’t communicate with the network. That way I can use the same template for network-isolated and network-connected VMs, knowing the isolated qube CANNOT connect to the internet even if the netvm somehow gets set.

Hmm, it looks like you are trying to protect yourself from yourself.
Then maybe the best way would be to have two templates, one with networking and one without.
But that is not 100% bulletproof either: if you later switch the AppVM’s “template” parameter in the settings from the one without qubes-core-agent-networking to the one with it, then that AppVM has network again.
So the only way to reach your goal is to run a script in the AppVM on every start that breaks the network connection.

Exactly.

I may have an answer. In the AppVM, execute systemctl stop network-pre.target.

Once this is done, pinging sites by name results in “Temporary failure in name resolution” and pinging them by address (e.g., 142.250.69.238 is google.com) results in “Network is unreachable”.

Trying to restart network-pre.target results in an error message: it can only be started via a dependency and refuses to start from the command line. Unless there’s some command that will restart it, it’s not just disabled, it’s “broken.”

(Edit to add: apparently deleting the file /usr/lib/systemd/system/network-pre.target will prevent a restart even if I ever figure out how to trigger something that would start it again.)
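In script form, my kludge would look something like this in /rw/config/rc.local (the unit file path is what I found on my template; it may differ on other distros):

#!/bin/sh
# Stop the target whose absence I found kills networking, for this run of the AppVM
systemctl stop network-pre.target
# Delete its unit file so nothing can pull it back in until the next restart
rm -f /usr/lib/systemd/system/network-pre.target
systemctl daemon-reload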

Mask the qubes-network-uplink and qubes-network-uplink@ services in the template:

systemctl mask qubes-network-uplink qubes-network-uplink@

Unmask and start the services in the qubes that do need network by placing this in /rw/config/rc.local:

systemctl unmask qubes-network-uplink qubes-network-uplink@
systemctl start qubes-network-uplink
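For completeness, the whole /rw/config/rc.local would then look something like this (it runs as root at qube startup and must be executable):

#!/bin/sh
# Re-enable networking in this qube only; the template keeps the units masked
systemctl unmask qubes-network-uplink qubes-network-uplink@
systemctl start qubes-network-uplink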

Your solution has the advantage of being a little less kludgy than what I did. (This comes of you understanding systemd better than I do; I was literally just stopping services until I found one that would make ping fail, and I knew nothing about masking!) Instead of doing something to kill the network where it isn’t wanted, I would be doing something to start it where it IS wanted. That seems cleaner and less clunky. Plus, done your way, the isolated qube couldn’t be networked even for a fraction of a second.

It would still be possible for an attacker to turn networking back on even in my isolated qubes by unmasking and starting the services, so I COULD delete the service files in the isolated qubes for added safety.
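For example, the isolated qube’s rc.local could remove the unit files outright; the paths below are my guess for a Fedora-style template (systemctl cat qubes-network-uplink.service will show the real ones):

rm -f /usr/lib/systemd/system/qubes-network-uplink.service
rm -f /usr/lib/systemd/system/qubes-network-uplink@.service
systemctl daemon-reload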

(Edit for anyone else reading this: there should be an “eth0” after the @ sign in the commands MellowPoison provided.)

No, you don’t need to add “eth0” for masking/unmasking. If you want to start the qubes-network-uplink@ service, then you need to specify the interface, but you don’t need to start it manually.

Ah, I stand corrected then.

All I did to test this was stop those two services in a different VM to ensure their absence is sufficient to kill networking.

In order to fully test this setup I’m going to have to do a lot of prep work and it’s getting late here so I’ll tackle it tomorrow (actually later today). (GMT-6 a/k/a US Mountain Daylight Time here.)

Why don’t you add firewall rules to the qubes, so that even if you attach a netvm the traffic would be blocked anyway?
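For example, from dom0 (the qube name is illustrative; the default ruleset is a single accept rule, which gets replaced with drop):

qvm-firewall my-offline-qube del --rule-no 0
qvm-firewall my-offline-qube add drop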

If you want to prevent the qube from being able to do anything at all to enable the network, then you can override the Xen libvirt config and remove the network interface for qubes that have a specific feature set:
Create the /etc/qubes/templates/libvirt/xen-user.xml file in dom0 with this content:

{% extends 'libvirt/xen.xml' %}
{% block devices %}
    {% if vm.features.get('disable_network', '0') != '1' %}
        {{ super() }}
    {% else -%}
        {#
            HACK: The letter counter is implemented in this way because
            Jinja does not allow you to increment variables in a loop
            anymore. As of Jinja 2.10, we will be able to replace this
            with:
            {% set counter = namespace(i=0) %}
            {% set counter.i = counter.i + 1 %}
        #}
        {% set counter = {'i': 0} %}
        {# TODO Allow more volumes out of the box #}
        {% set dd = ['e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', 'm', 'n', 'o', 'p',
            'q', 'r', 's', 't', 'u', 'v', 'w', 'x', 'y']
        %}
        {% for device in vm.block_devices %}
            <disk type="block" device="{{ device.devtype }}">
                <driver name="phy" />
                <source dev="{{ device.path }}" />
                {% if device.name == 'root' %}
                    <target dev="xvda" />
                {% elif device.name == 'private' %}
                    <target dev="xvdb" />
                {% elif device.name == 'volatile' %}
                    <target dev="xvdc" />
                {% elif device.name == 'kernel' %}
                    <target dev="xvdd" />
                {% else %}
                    <target dev="xvd{{dd[counter.i]}}" />
                    {% if counter.update({'i': counter.i + 1}) %}{% endif %}
                {% endif %}

                {% if not device.rw %}
                    <readonly />
                {% endif %}

                {% if device.domain %}
                    <backenddomain name="{{ device.domain }}" />
                {% endif %}
                <script path="/etc/xen/scripts/qubes-block" />
            </disk>
        {% endfor %}

        {# start external devices from xvdi #}
        {% set counter = {'i': 4} %}
        {% for assignment in vm.devices.block.assignments(True) %}
            {% set device = assignment.device %}
            {% set options = assignment.options %}
            {% include 'libvirt/devices/block.xml' %}
        {% endfor %}

        {% for assignment in vm.devices.pci.assignments(True) %}
            {% set device = assignment.device %}
            {% set options = assignment.options %}
            {% set power_mgmt =
                vm.app.domains[0].features.get('suspend-s0ix', False) %}
            {% include 'libvirt/devices/pci.xml' %}
        {% endfor %}

        {% if vm.virt_mode == 'hvm' %}
            <!-- server_ip is the address of stubdomain. It hosts its own DNS server. -->
            <emulator
                {% if vm.features.check_with_template('linux-stubdom', True) %}
                    type="stubdom-linux"
                {% else %}
                    type="stubdom"
                {% endif %}
                {% if vm.netvm %}
                  {% if vm.features.check_with_template('linux-stubdom', True) %}
                    cmdline="-qubes-audio:audiovm_xid={{ audiovm_xid }} -qubes-net:client_ip={{ vm.ip -}}
                        ,dns_0={{ vm.dns[0] -}}
                        ,dns_1={{ vm.dns[1] -}}
                        ,gw={{ vm.netvm.gateway -}}
                        ,netmask={{ vm.netmask }}"
                  {% else %}
                    cmdline="-qubes-audio:audiovm_xid={{ audiovm_xid }} -net lwip,client_ip={{ vm.ip -}}
                        ,server_ip={{ vm.dns[1] -}}
                        ,dns={{ vm.dns[0] -}}
                        ,gw={{ vm.netvm.gateway -}}
                        ,netmask={{ vm.netmask }}"
                  {% endif %}
                {% else %}
                  cmdline="-qubes-audio:audiovm_xid={{ audiovm_xid }}"
                {% endif %}
                {% if vm.stubdom_mem %}
                    memory="{{ vm.stubdom_mem * 1024 -}}"
                {% endif %}
                {% if vm.features.check_with_template('audio-model', False)
                or vm.features.check_with_template('stubdom-qrexec', False) %}
                    kernel="/usr/libexec/xen/boot/qemu-stubdom-linux-full-kernel"
                    ramdisk="/usr/libexec/xen/boot/qemu-stubdom-linux-full-rootfs"
                {% endif %}
                />
            <input type="tablet" bus="usb"/>
            {% if vm.features.check_with_template('audio-model', False) %}
                <sound model="{{ vm.features.check_with_template('audio-model', False) }}"/>
            {% endif %}
            {% if vm.features.check_with_template('video-model', 'vga') != 'none' %}
                <video>
                    <model type="{{ vm.features.check_with_template('video-model', 'vga') }}"/>
                </video>
                {% if vm.features.check_with_template('linux-stubdom', True) %}
                    {# TODO only add qubes gui if gui-agent is not installed in HVM #}
                    <graphics type="qubes"/>
                {% endif %}
            {% endif %}
        {% endif %}
            <console type="pty">
                <target type="xen" port="0"/>
            </console>
    {% endif %}
{% endblock %}

Then enable the disable_network feature for your offline qube:

qvm-features my-offline-qube disable_network 1
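You can read the feature back or remove it with the same tool, and the qube has to be restarted for the new libvirt config to take effect:

qvm-features my-offline-qube disable_network
qvm-features --unset my-offline-qube disable_network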

The downside is that you’ll need to update the /etc/qubes/templates/libvirt/xen-user.xml file whenever /etc/qubes/templates/libvirt/xen.xml changes in a future update: you’ll need to replace the {% block devices %} ... {% endblock %} content with the new version and again remove these lines from it:

            {% if vm.netvm %}
                {% include 'libvirt/devices/net.xml' with context %}
            {% endif %}
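A quick sanity check that the override took effect, assuming the qube is named my-offline-qube as above: restart it and confirm that no eth0 interface shows up inside it:

qvm-shutdown --wait my-offline-qube
qvm-start my-offline-qube
qvm-run -p my-offline-qube 'ip link'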

It won’t block traffic if the net qube doesn’t support the Qubes OS firewall; sys-whonix, for example, doesn’t.

Yes indeed, but that would still be an effective and reliable extra layer of defence when the netvm is supported.