How to make a new VM sys.***?

Hi everybody.

Since yesterday I have been dedicated to making a new VM called “sys.nvidia”.
I have reinstalled Qubes so many times in the last 6 months that it is driving me crazy.

Now I want to approach from another side.

To make it the way I want, I first have to understand just one thing:

I want to know which program executes which config when I call:
sudo qubesctl top.enable qvm.sys-net

Why? If I understand how Qubes configures sys-net and how it passes a PCI device through to a VM, I will be able to “clone” that config and try to change the PCI numbers to the ones I want.

So far I have managed to pass the GPU through to my VM.
The VM does not crash, nor does it show any kind of error.
Actually, it works perfectly.

The only thing missing is not seeing a black screen :smiley:

My guess is that the VM does not show its video output to dom0 because of some kind of security measure.
So which one is it?

I will try to debug an already existing VM that works, and try to apply the same logic to my new VM.
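Concretely, this is roughly how I intend to inspect the working sys-net from dom0, just to see which settings and PCI devices it has (read-only commands, nothing gets changed):

```
# List all PCI devices known to dom0, with their backend:BDF identifiers
qvm-pci

# Show all preferences of the working sys-net (virt_mode, kernel, autostart, ...)
qvm-prefs sys-net
```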

Is there any developer here who can answer this question?

Who programmed qubesctl?
And how?

I need documentation or the location of these config files (all of them).
Thank you very much.

p.s. I really want to help :smiley:
tnx

This is SaltStack:

It basically creates qubes based on a written specification (for sys-net, it is written in dom0’s /srv/formulas/base/virtual-machines-formula/qvm/sys-net.sls).
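If you just want to read that specification and see what applying it does, something like this in dom0 should be enough (the path is the one above and may vary slightly between releases):

```
# Read the Salt state that describes sys-net
sudo cat /srv/formulas/base/virtual-machines-formula/qvm/sys-net.sls

# Enable the state and apply it -- this is what actually creates/updates sys-net
sudo qubesctl top.enable qvm.sys-net
sudo qubesctl state.highstate
```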

But SaltStack does nothing you couldn’t do manually. So my suggestion for you would be to create that qube manually. Someone else will have to give you pointers about GPU passthrough, though, as I don’t know much about it.

I should also note that there is nothing special about VMs named sys-*; it’s just a name. The only thing that will likely matter here is that you’ll probably need to create it with the HVM virtualization mode, as opposed to the default, to interact with PCI devices.
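As a starting point, a manual version could look roughly like this in dom0; the qube name, template, and PCI address are placeholders, and the GPU-specific details (like whether no-strict-reset is needed) are exactly the part I can’t vouch for:

```
# Create a standalone HVM qube (name, template and label are just examples)
qvm-create --class StandaloneVM --template fedora-36 --label red sys-nvidia
qvm-prefs sys-nvidia virt_mode hvm

# Attach a PCI device by the backend:BDF address that `qvm-pci` prints
qvm-pci attach --persistent sys-nvidia dom0:01_00.0 -o no-strict-reset=True
```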


tnx for explaining.

Now I am here: libvirt: Host device management
virsh
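
For anyone following along, these are the libvirt-level commands I am looking at in dom0 (inspection only; Qubes normally handles PCI assignment itself through qvm-pci, so I am only using these to read the device descriptions):

```
# List PCI devices as libvirt node devices
virsh nodedev-list --cap pci

# Dump the XML description of one of them (the address is just an example)
virsh nodedev-dumpxml pci_0000_01_00_0
```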