Having read the Suricata documentation (particularly section 15 on setting up IPS/inline for Linux), I noted the importance of distinguishing the kind of traffic that passes through Suricata (i.e. “host” or “gateway”), as each of the three available configuration methods suits only one type of traffic.
While each app qube on Qubes behaves like an independent virtual machine, my current settings suggest app qubes can be “chained” to a certain extent. More specifically, app qube A (e.g. with Suricata installed) can be used to provide networking for another qube B, which suggests that all traffic to/from qube B will pass through (and can be inspected by) qube A.
In that case, does my Suricata run in “host” or “gateway” mode? If “gateway” mode, how can I find out the names of the two interfaces facing Suricata?
Do you have any more specific guidance on how Suricata should be set up on Qubes?
It’s in host mode for traffic generated by the sys-suricata qube itself (e.g. if you ping from this qube’s terminal) and in gateway mode for traffic generated by the qubes connected to the sys-suricata qube.
If you want to associate the interface/IP in the sys-suricata qube with the name of the qube that is connected to it and uses that interface, then you can try this:
As I am new to Linux and have little programming or syntax knowledge, would you mind sharing more on:
(1) which way (creating a script in dom0 or adding a custom Qubes RPC service) would be easier to implement?
(2) the exact content of the script itself for either way (I know it’s a lot to ask, but please pardon my ignorance)?
The dom0 script would be easier to implement, but you’ll need to run it manually to update the list of qubes connected to the sys-suricata qube each time you connect a new qube to it.
I didn’t try it so I can’t provide you with a working script.
But you need to use the qvm-ls command to list the qubes that use the sys-suricata qube as their net qube and print their IP addresses:
qvm-ls --raw-data --fields NAME,IP,NETVM | grep '|sys-suricata$' | sed 's/|sys-suricata$//g'
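As a minimal, untested sketch of how this could be wrapped into a dom0 script (the destination file path inside sys-suricata and the qvm-run transfer are assumptions):

#!/bin/bash
# dom0: list "name|IP" for every qube whose net qube is sys-suricata
# and push the result into sys-suricata (destination path is just an example)
qvm-ls --raw-data --fields NAME,IP,NETVM \
  | grep '|sys-suricata$' \
  | sed 's/|sys-suricata$//g' \
  | qvm-run --pass-io --user root sys-suricata \
      'cat > /rw/config/connected-qubes.list'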
@apparatus I’ve got an update without using either of the ways you suggested (as they are too complicated for me).
I seem to have managed to set up the gateway mode by manually finding out the NIC names of the two interfaces connected to sys-suricata (persistence across reboots yet to be tested, though). While a test with a specific drop rule did block the connection, there was no entry at all in Suricata’s fast.log. Would you know how to fix this? Thanks.
Further update: My manual way above turned out to be neither successful nor persistent, so it’s back to square one (or two, with thanks to @apparatus 's advice). Can anybody help with the scripting, please?
You can create a script in dom0 that will create the associated list “IP address - qube name” and pass it to your custom firewall VM.
You can also add a custom Qubes RPC service that could be called from a script in /rw/config/network-hooks.d in the sys-custom-fw qube on new qube connect/disconnect events, to request that dom0 update the list of connected qubes:
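As a rough, untested sketch (the hook file name and the RPC service name custom.UpdateQubeList are made up), the network hook in the firewall qube could simply ask dom0 to refresh the list whenever an interface appears or disappears:

#!/bin/bash
# /rw/config/network-hooks.d/90-update-qube-list (inside the firewall qube, executable)
# Called by Qubes on interface changes; the arguments are ignored here,
# we just ask dom0 to regenerate the connected-qubes list.
qrexec-client-vm dom0 custom.UpdateQubeList

On the dom0 side, /etc/qubes-rpc/custom.UpdateQubeList could contain the qvm-ls pipeline shown above, plus a policy entry in /etc/qubes/policy.d/ allowing the firewall qube to call it.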
I would like to add that the Netfilter method from the Suricata documentation works as host, but not as gateway for traffic that originates from another qube and passes through sys-suricata. I suspect it has something to do with the design of Qubes OS, which imposes certain restrictions on inter-qube connections to enhance security.
If the Netfilter method can’t work due to the Qubes OS design, the two remaining methods have to use two interfaces. Based on my understanding of @apparatus’s advice, the configuration may be broken down into the following steps:
Create a script in dom0 / Create custom Qubes RPC service
Dom0 feeds the two interface names into sys-suricata (automatically or upon being called) / Sys-suricata calls network hooks upon new qube connection/disconnection
Input the two interface names (as variables) obtained from step 2 into suricata.yaml, so that the interface names are updated automatically and in a timely manner.
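For example, per the Suricata documentation’s AF_PACKET IPS (copy-mode) setup, the af-packet section of suricata.yaml for two interfaces could look roughly like this (the vif interface name is only an example, and it changes as qubes connect/disconnect, which is exactly why step 2 would be needed):

af-packet:
  - interface: eth0
    copy-mode: ips
    copy-iface: vif2.0
    use-mmap: yes
  - interface: vif2.0
    copy-mode: ips
    copy-iface: eth0
    use-mmap: yes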
I figure there may not be many users in the community who use Suricata on Qubes OS, but your advice on any part of the full process is still appreciated. Thanks.
If you examine the nftables used by Qubes you will see that there are
two relevant tables - qubes and qubes-firewall. Both have chains
that relate to forwarding traffic: nft list table qubes shows a chain
forward of type filter with priority filter, and nft list table
qubes-firewall shows a chain forward of type filter with priority
filter. The latter chain jumps to the chains that contain the firewall
entries for each connected qube.
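For reference, these are the inspection commands being described, run as root inside the firewall qube (the explicit ip family is an assumption; it is also the default):

nft list table ip qubes
nft list table ip qubes-firewall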
The priority value is used to set the order in which chains are evaluated.
There are keywords with set values - priority filter has value 0.
This means that you can create a new chain with priority 10 and it will be
evaluated after the standard Qubes forwarding filters (and so after the Qubes firewall).
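As a minimal sketch of the two commands described below (the chain name ips and the queue number 0 are assumptions):

nft add chain ip qubes ips '{ type filter hook forward priority 10 ; policy accept ; }'
nft add rule ip qubes ips counter queue num 0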
The first creates an IPS chain in the qubes table, which operates on
forwarded traffic and has priority 10.
The second creates a rule in the new IPS chain which queues traffic to
user space and stops evaluating rules. This traffic will pass to
suricata.
This is the simplest case. You will want to modify the rule set if you
only want to run suricata against certain qubes. Note also that a qube
may be infected and spewing traffic which could be dropped by the normal
qubes firewall and this traffic will not be seen by suricata. You
could work around this.
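As a hedged sketch of both points (the qube IP address and the priority value are examples only):

# inspect only a specific qube: match its source address instead of queueing everything
nft add rule ip qubes ips ip saddr 10.137.0.25 counter queue num 0
# workaround for the second point: hook before the Qubes forward filters
# (negative priority) so suricata also sees traffic the qubes firewall would drop
nft add chain ip qubes ips-pre '{ type filter hook forward priority -10 ; policy accept ; }'
nft add rule ip qubes ips-pre counter queue num 0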
I never presume to speak for the Qubes team.
When I comment in the Forum I speak for myself.
Hi, as you have read my old guide… keep in mind that time has passed; both Qubes and Suricata might have changed how they work. So my old blog is just a reference point now, and if you want to implement the same thing, you might need to adapt it to the current situation…
The main change is that Qubes no longer uses iptables, but nftables instead. So you need to use that part from the Suricata guide.
This part is crucial, as it routes the traffic to Suricata for inspection. Without it, Suricata will be ‘blind’.
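For completeness, when traffic is queued to user space this way, Suricata is started in NFQUEUE mode with a queue number matching the packet-filter rule, e.g.:

suricata -c /etc/suricata/suricata.yaml -q 0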
To make your packet-filter customisation permanent, read the relevant Qubes guide (hint: qubes-firewall-user-script), as your Suricata VM also acts as a firewall VM in Qubes terminology.
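A minimal sketch of such a qubes-firewall-user-script, re-creating the custom chain and queue rule from the earlier example (chain name and queue number are assumptions; remember to make the file executable):

#!/bin/bash
# /rw/config/qubes-firewall-user-script in the Suricata qube
# re-applied by the qubes-firewall service, so the rules survive reboots
nft add chain ip qubes ips '{ type filter hook forward priority 10 ; policy accept ; }'
nft flush chain ip qubes ips
nft add rule ip qubes ips counter queue num 0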
For the interface names:
You do not need to fetch those from dom0. Qubes uses a naming scheme where the uplink interface is always eth* and the downstream interfaces are vif*. Use these as wildcards in your packet filter.
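For example, a queue rule restricted to traffic entering from any connected qube and leaving via the uplink could use those wildcards (chain name and queue number as in the sketches above):

nft add rule ip qubes ips iifname "vif*" oifname "eth*" counter queue num 0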
For the architecture:
You need the gateway scenario, and your VM should be a ProxyVM (one that provides network), like this:
sys-net - sys-ips - AppVM
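For illustration, the dom0 commands to wire up such a chain might look like this (the qube name sys-ips, the template, the label, and the example app qube are all placeholders):

qvm-create --class AppVM --template debian-12-minimal --label orange sys-ips
qvm-prefs sys-ips provides_network True
qvm-prefs sys-ips netvm sys-net
qvm-prefs <your-appvm> netvm sys-ips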
Chaining multiple firewall/proxy VMs is possible, but pointless if you ask me, as ‘sys-ips’ in this example acts as a firewall too.
I would need to redo my PoC to be more specific, but I don’t have time for that now… sorry.