Reconceptualizing Old KVM System

Having a little trouble reimagining my old system into an effective Qubes system.

In my old system I had 16 VMs for different functions, including a ‘router’ VM. This had three NICs: one to The Internets, one to the LAN, and one to the DMZ. This router strictly funneled traffic only to the places it was allowed to go, and opened only the necessary ports, using nftables.
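
To give a sense of what that ruleset looked like, it boils down to rules of this shape (an illustration only, not my actual config; interface names are placeholders):

  # default-deny forwarding, then open only what is needed
  nft add table inet router
  nft add chain inet router forward '{ type filter hook forward priority 0 ; policy drop ; }'
  # return traffic for established connections
  nft add rule inet router forward ct state established,related accept
  # LAN may reach the internet
  nft add rule inet router forward iifname "lan0" oifname "wan0" accept
  # the internet may reach only the DMZ web server, and only on 443
  nft add rule inet router forward iifname "wan0" oifname "dmz0" tcp dport 443 ct state new accept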

  1. I note that Qubes has a network instance which holds all the interfaces. This seems to defeat the isolation of the LAN and DMZ from The Internets. Can I / should I make three network instances?

  2. Another VM was for WireGuard and DNS service to the LAN. This had an outWG interface to ProtonVPN, and an inWG interface so remote devices could seamlessly and securely connect to my LAN and the VPN. I guess I’d make a VPN server instance, but this creates a WG interface. Would this be a network instance? How would I install unbound into this? Unbound pretty much has to be in the same instance as WireGuard.

  3. Now, so that daemons do not disappear with each reboot I must install the daemons in the source template. This could mean that all the daemons in my 16 old VMs would be installed into one source template! All instances using that template could run any of the daemons, which doesn’t make sense. I realize that the template would be read-only, but this is not partitioning. What’s up with that? How do I handle it?

  4. Outside of my old herd of KVM VMs in the server, there are separate LAN machines (a laptop, a backups server, a cameras server, etc.), each of which accessed services provided by server VMs through dedicated LAN IPs/NICs. I can see how Xen instances can communicate through the hypervisor, but how do LAN machines access daemon instances in the Xen server?

First of all, to address the overarching concern I see, you can (pretty much) copy/paste your old KVM instances using StandaloneVMs. These are Qubes VMs that are basically just regular Linux VMs with Qubes isolation and tools. That would allow you to have a setup very similar to your previous one.
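
For example, a StandaloneVM can be created from dom0 roughly like this (a sketch; the template name, VM name, size and ISO path are all just examples):

  # clone an existing template into a fully independent VM (its whole root filesystem persists)
  qvm-create --class StandaloneVM --template debian-12-xfce --label red my-router

  # or build an empty HVM and install from your own ISO
  qvm-create --class StandaloneVM --property virt_mode=hvm --property kernel='' --label red my-router
  qvm-volume extend my-router:root 20G
  qvm-start my-router --cdrom=some-qube:/home/user/installer.iso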

To address your other concerns:

On the interfaces, you can give any qube any PCI device (which NICs are), as long as the qube is in HVM mode and no two running qubes share the same PCI device. For instance, I have a Wi-Fi card for accessing the internet, and an ethernet port for accessing secure networks directly or via a dedicated networking device. If you don’t mind the look, you could tape a small managed router to your laptop (or set it by your desktop) for network segregation.
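
The assignment itself is done from dom0, something along these lines (the device address here is an example; check the list for yours):

  # list PCI devices, then attach one NIC to a qube persistently
  qvm-pci list
  qvm-pci attach --persistent sys-net-lan dom0:03_00.0
  # some NICs refuse to attach unless reset handling is relaxed
  qvm-pci attach --persistent -o no-strict-reset=True sys-net-lan dom0:03_00.0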

I am not a network engineer so I can’t answer your questions with technical expertise, but Qubes VMs can run pretty much anything in my experience, as long as it runs on the hardware and can be virtualized (Qubes itself, for example, does not like being virtualized). Once you get familiar with Qubes, it should become evident how to translate what you already have.

The third problem can be solved with StandaloneVMs, but if you want the template/AppVM infrastructure and your daemons aren’t located in /home/ or /rw/, then yes, you’ll have to install your daemons in the same template or use different templates. Please note both that unexecuted code represents a minimal security risk, and that you can always solve segregation with separate templates or with containers (Docker, for instance, although I am not personally endorsing it).
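
One pattern that helps here, if I have the template model right: install the package in the template but leave the service disabled, then start it only in the qubes that should run it via /rw/config/rc.local, which lives in the qube’s private storage and survives reboots. Roughly (using unbound as the example):

  # in the template: install but do not enable
  sudo apt install unbound
  sudo systemctl disable unbound

  # in the one app qube that should actually serve DNS
  sudo sh -c 'echo "systemctl start unbound" >> /rw/config/rc.local'
  sudo chmod +x /rw/config/rc.local    # executed at every boot of this qube only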

I cannot help you with the fourth, as I don’t understand what you’re asking. If it’s just plain networking, you can network VMs in Qubes like anything else, although it may require advanced configuration depending on what you are trying to do.

Ok so I can graft my old VMs into standalone qubes, although I don’t have image backups of all of them; I generally back up their contents.

On giving qubes PCI devices: actually I am using SR-IOV, which is a method of dividing one NIC into 8-16 ‘virtual interfaces’ (virtual functions). These virtual interfaces are available on the host for allocation to VMs as if they were physical interfaces. No idea how/where this would be done in Qubes.
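
For context, on the KVM host this is just the standard sysfs knob, something like (interface name and VF count are examples):

  # carve 8 virtual functions out of the physical NIC; each VF then shows
  # up as its own PCI device that can be handed to a VM
  echo 8 | sudo tee /sys/class/net/enp3s0f0/device/sriov_numvfs
  lspci | grep -i 'virtual function'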

I’ve read every usage doc and am in the process of watching a spewtube series, but my questions remain.

Installing multiple daemons into one template means that although they may not be in a qube’s menu, they are still available for execution. If a threat actor gets into a qube she may be able to execute a vulnerable app and pivot to higher privileges.

Ok, one can network any qube, but I don’t yet see how it’s done. I need qubes to have stable IPs on the LAN, as they run daemons which must have a fixed, known address.

Another issue I haven’t mentioned: this is a ‘lights-out’ server, meaning I absolutely need remote access to everything. It would seem that this means attaching dom0 to the LAN and installing X2Go on it, however inadvisable that may be. Some kind of out-of-band IPMI function would be ideal for this.

Regrettably it is starting to look like Qubes is not geared for enterprise-style systems.

Qubes doesn’t package domUs (any non-dom0 qube) with security packages, other than the Whonix workstations. You are responsible for OS- and app-level hardening in these qubes. I would highly recommend not relying solely on Qubes’ isolation, and instead implementing defense in depth. I personally see this as a huge shortcoming, but I can also understand this isn’t exactly Qubes’ focus.

Just so I’m clear, this isn’t literal copy/paste. You may need to make some changes to get it to work in Qubes. What I was trying to say is that it isn’t a Mars/Venus thing—more like US/Canada. I’d have to know more to be of better help, and from the sound of it I’m not the best person to help with this specifically. (Networking isn’t my area of expertise.)

Please do not do this. Qubes is designed with the understanding that the risk to dom0 is minimized by not running applications or networking in it. dom0 is extremely vulnerable if these guidelines are not followed, and dom0 has unfettered, uncontrolled access to everything. This would be like leaving the door to the Pentagon unlocked and the computers open. A much better alternative, if this is necessary, is sys-gui-vnc. If you intend to expose dom0 to networking, Qubes is likely (although not necessarily) not for you.

Sadly, this is true in my experience. Qubes is a great project, but the management capabilities aren’t here yet. This is certainly not to say that Qubes can’t be used for enterprise—but you will have to reimagine how you manage it, and do it a Qubes way.

I’m not sure how to help on this one, but it doesn’t seem out of reach for Qubes. You may just have to talk to someone who is more familiar with it.

None of this is to say Qubes isn’t for you, but it seems that the way you are currently doing things runs contrary to what the project does best. This is only my opinion, and you should probably get a second.

Alright, I just can’t make it with Qubes. There is shockingly little technical knowledge in the forums and here on IRC. The docs are geared for individual users, not larger-scale users. The lack of usable info is just too great for my use-case.

I’ll create one sys-net per NIC, each with a dedicated sys-firewall, and connect the AppVMs to the chosen network:

  • internet : sys-net ← sys-firewall ← my-web-browsing-app-vm
  • LAN : sys-net-lan ← sys-firewall-lan ← my-internal-app-vm
  • DMZ : sys-net-dmz ← sys-firewall-dmz ← my-dmz-admin-app-vm

How to make a new sys-net? Search this forum, but a basic clone of sys-net should work.
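
Roughly, in dom0, the LAN chain from the list above would be something like this (the DMZ chain is the same pattern; the PCI address is an example):

  # a new NetVM for the LAN NIC
  qvm-clone sys-net sys-net-lan
  # if the clone inherited sys-net's own NIC assignment, qvm-pci detach it first
  qvm-pci attach --persistent sys-net-lan dom0:04_00.0

  # its own firewall qube, chained behind it
  qvm-clone sys-firewall sys-firewall-lan
  qvm-prefs sys-firewall-lan netvm sys-net-lan

  # point the app qube at that chain
  qvm-prefs my-internal-app-vm netvm sys-firewall-lan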

Hm, interesting.

See, I use SR-IOV to break out one NIC into 16 virtual NICs, and allocate one to each VM. Would SR-IOV be done in dom0? Or in a qube?

Is there a way that I can allocate separate virtual NICs to different functions, like LAN and DMZ?

Alternatively, I have a PCI device with four physical NICs, each of which I break out into virtual NICs, so can I allocate one of these physical NICs to the LAN and another to the DMZ, etc.? Obviously I can’t dedicate the whole PCI device to one qube.

What NIC modes are available? I’m used to using macvtap in KVM to allocate specific virtual NICs to VMs. (using virt-manager)

Also, I am used to running a zero-trust network, IOW each VM has its own custom built-in firewall and its own SSH server (using certs), all behind a router VM with an outside firewall of its own, forwarding traffic by NAT. The router VM mediates between the LAN and DMZ, providing access to the outside for each. Anything wrong with this in Qubes?

Most of my VMs are servers, so they must have a fixed, known IP. Is this possible with Qubes?

How about managing qubes graphically? Does virt-manager work with Qubes? Or what?

And what does this mean / how is this possible? “Recent evolutions of Xen allow running the hypervisor without a dom0; the feature is called “hyperlaunch” (previously ‘dom0-less’).”

Does Qubes require a given OS for dom0? For example, can I run dom0 on Alpine and domUs on Debian? If on Alpine, would I lose functionality compared with using Debian as dom0?

Being an infosec type I just love the idea of a micro-kernel, and Xen is the closest we can get to this with a Linux ecosystem.

Hi,
a lot of your questions are answered in the official docs, please read them first:

Actually I’ve read them, and the docs haven’t answered any of my questions.

Additionally, my Qubes server is headless, so I must have remote access to dom0 from at least one local machine, using either x11vnc or x2goserver. Is there a Best Practice for this? How would I enable a network link from dom0 to a given machine?

Ty.

Rather than VNC I’d prefer x2go (which operates through SSH), but searching on G**gle for ‘formula “sys-gui-x2goserver”’ just gives that fishing gnome that you want to punch in the face…

I guess I have to learn everything about Salt YAML formulae.
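
From the GUI domain doc, though, the canned sys-gui-vnc formula appears to boil down to a few dom0 commands:

  # enable the shipped Salt formula for a VNC-based GUI domain, then apply it
  sudo qubesctl top.enable qvm.sys-gui-vnc
  sudo qubesctl top.enable qvm.sys-gui-vnc pillar=True
  sudo qubesctl --all state.highstate
  # then start the new GUI domain
  qvm-start sys-gui-vnc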

Edit: Oh dear, it seems I do not have time to get entangled with that. Better that I install OpenSSH and x2goserver into dom0, and (somehow) get a network connection from dom0 to a local workstation.

Edit2: It seems that the GUI has been decoupled from dom0. And maybe sshd has been masked. It’s quite a faff to physically get to where the Qubes system sits, so this will be a hassle to learn.

A VNC server session is running on localhost:5900 in sys-gui-vnc.

This is clear enough, although I have to take its word for it since a terminal in sys-gui-vnc will not accept my username for unknown reasons.

I really want to set its port to 5904 in this instance though, and I presume this would be done in the template, although that would mean it’s set that way globally which is undesirable.

In order to reach the VNC server, we encourage you not to connect sys-gui-vnc to a NetVM, but rather to use another qube for remote access, say sys-remote. First, you need to bind port 5900 of sys-gui-vnc to a local port in sys-remote (you may want to use a port other than 5900 to reach sys-remote from the outside). For that, use the qubes.ConnectTCP RPC service (see the Firewall documentation). Then, you can use any VNC client to connect to sys-remote on the chosen local port (5900 if you kept the default). For the first connection, you will reach lightdm, where you can log in as user, where user refers to the first dom0 user in the qubes group, with the corresponding dom0 password.

This is indecipherable.
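
As best I can decode it, the moving parts would be something like this (sys-remote is just the name used in the doc; 5904 is the local port I’d rather expose; the policy path is the 4.1+/4.2 one):

  # dom0: allow sys-remote to open a TCP connection to port 5900 inside sys-gui-vnc
  echo 'qubes.ConnectTCP +5900 sys-remote sys-gui-vnc allow' | sudo tee -a /etc/qubes/policy.d/30-user.policy

  # sys-remote: bind its local port 5904 to port 5900 of sys-gui-vnc
  qvm-connect-tcp 5904:sys-gui-vnc:5900

  # then point a LAN-side VNC client at sys-remote on port 5904 and log in via lightdm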

Running sudo qubesctl --all state.highstate took a long time, until the first stage timed out, unable to reach the network. No wonder: /etc/resolv.conf symlinks to a non-existent file under /run. I have no idea why.

The remaining stages completed, though, and for some reason it chose the Fedora 40 template even though I’ve set Debian as the system default.

No idea what to do now.