Internal Network Addressing

I’m curious why the decision was made to use “real” IPv4 addressing, and why only /16-sized (class B) slices at that. Wouldn’t it be more secure, and cleaner, to distinguish what is local to the node using the address space IPv4 already reserves for node-internal use? I’m talking about the 127.x.x.x range… Why not use that, so everyone knows that routing any traffic in and out of the node is something to configure in sys-net and/or sys-firewall? The reason I ask is that a default installation of Qubes (4.1.2 is what I’m working with), using the 10.137.x.x and 10.138.x.x ranges, runs the risk of being connected to a network (especially at an enterprise company) that already uses those addresses.
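The collision risk described above is easy to demonstrate with Python’s standard `ipaddress` module. This is only a sketch: the corporate subnet here is hypothetical, and the two /16 ranges are the ones the post names as Qubes defaults.

```python
import ipaddress

# Default ranges named in the post (10.137.x.x for qubes,
# 10.138.x.x for disposables).
qubes_ranges = [
    ipaddress.ip_network("10.137.0.0/16"),
    ipaddress.ip_network("10.138.0.0/16"),
]

# Hypothetical corporate LAN that happens to sit in the same space.
corporate_lan = ipaddress.ip_network("10.137.4.0/24")

# overlaps() flags exactly the kind of conflict described above.
conflicts = [net for net in qubes_ranges if net.overlaps(corporate_lan)]
print(conflicts)  # [IPv4Network('10.137.0.0/16')]
```

Running a check like this against a site’s real allocation plan before deploying Qubes machines there would reveal the clash up front.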

Also, you could put an IPv6-to-IPv4 conversion qube in somewhere and eliminate the headaches and limitations of IPv4 altogether. Perhaps that would make your wonderful development a lot easier moving forward. The node-internal IPv6 analogue of 127.x.x.x is the loopback address ::1, with fe80::/10 serving as the link-local range. You could probably take a look at the pfSense project, and maybe they would let you borrow their work on the IPv6-to-IPv4 mechanics.
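The IPv6 address categories mentioned here are worth keeping straight, since they get conflated easily. A small sketch with the standard `ipaddress` module (the specific addresses are just illustrative):

```python
import ipaddress

# IPv6's true analogue of 127.0.0.1 is the single loopback address ::1.
assert ipaddress.ip_address("::1").is_loopback

# fe80::/10 is the *link-local* range - valid only on one link, closer
# in spirit to IPv4's 169.254.0.0/16 than to the 127.x.x.x block.
assert ipaddress.ip_address("fe80::1").is_link_local

# Unique local addresses (fc00::/7) are the private-LAN analogue
# of 10.x.x.x, which is what internal qube addressing would map to.
print(ipaddress.ip_address("fd00::1").is_private)  # True
```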

All outgoing/incoming internet connections from any qube go out through the single IP that sys-net has on the home/business network.
Addresses assigned to qubes are only used locally and never leave sys-net.
By default, qubes cannot communicate with other qubes, or even know about them - well, unless you allow traffic between qubes with iptables/nftables rules in the firewall qube.
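For concreteness, allowing traffic between two qubes might look something like the sketch below. Treat all names here as assumptions: the addresses are placeholders for the real qube IPs (`qvm-ls -n` in dom0 shows them), and the `custom-forward` chain in the `ip qubes` nftables table is what recent Qubes releases provide for this purpose - the exact mechanism differs between releases (older ones used iptables), so check the firewall documentation for your version.

```shell
# Run inside the firewall qube (e.g. sys-firewall), not inside the
# communicating qubes themselves.
SRC=10.137.0.10   # hypothetical "client" qube's IP
DST=10.137.0.20   # hypothetical "server" qube's IP

# Permit new TCP connections from SRC to DST on port 8080.
nft add rule ip qubes custom-forward \
    ip saddr "$SRC" ip daddr "$DST" tcp dport 8080 accept
```

Such rules are not persistent by default; they would need to be reapplied via the firewall qube’s startup scripts to survive a restart.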

According to wikipedia (and many others :innocent: ), “IPv4 network standards reserve the entire 127.0.0.0/8 address block (more than 16 million addresses) for loopback purposes. That means any packet sent to any of those addresses is looped back.”

So I’m not sure how a single QubesOS could share that between AppVMs… :thinking:
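The scale of that reservation is easy to verify with the standard `ipaddress` module - every address in the /8, not just 127.0.0.1, is defined as loopback:

```python
import ipaddress

lo = ipaddress.ip_network("127.0.0.0/8")

# The "more than 16 million addresses" in the quote is 2**24.
print(lo.num_addresses)  # 16777216

# An arbitrary address deep inside the block is still loopback.
assert ipaddress.ip_address("127.200.13.5").is_loopback
```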

You can’t use 127.x.x.x for anything except loopback addresses, so this is a non-starter.
As Qubes uses a standard eth0 device per qube, allocating “real” IP addresses makes good sense.
In the years I’ve been working with, and supporting, Qubes, I’ve rarely hit the problem of addressing conflicts. Usually it’s simply dealt with by a combination of firewalling and routing.
I used to rewrite the site-prefix network allocation in /usr/lib/python3.8/site-packages/qubes/vm/mix/, but I haven’t done that for some time.

I never presume to speak for the Qubes team. When I comment in the Forum or in the mailing lists I speak for myself.

Thank you for your response…

I have actually used the 10.122.x.x and 10.241.x.x ranges for certain internal business networks, so in my experience it is quite likely that the 10.137.x.x and 10.138.x.x addresses would already exist on a network to which I might connect my Qubes node. Granted, it would cause a problem within the Qubes system itself, not necessarily on the business network.

The original reason the 127.x.x.x “loopback” block was set aside that way (as explained by wikipedia) is that the first implementations of IPv4 did not use a netmask when addressing nodes on the internet/network… So I believe the loopback reservation was made to hold back a full class A block for possible future needs… And I believe we are now at the point of needing those addresses, considering the public IPv4 space is tapped out.

I have seen instances where MicroSoSoft’s HypeV and other software solutions (especially their WindersRDP on Winders10Pro) allow more than just the default 127.0.0.1 (I have myself used other loopback addresses to allow multiple sessions on a single Winders10Pro setup as a WindersRDP terminal server)… When I modified the registry on the Winders10Pro to allow multiple WindersRDP connections, I was able to log in to each session as a different user by using those loopback addresses, without the previously connected sessions getting disconnected. Without using the loopback addresses, the separately connected sessions were not able to stay connected all at the same time.
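The mechanism behind that anecdote can be shown in a few lines: on Linux, the loopback interface answers for all of 127.0.0.0/8, so two services can hold the same port number on distinct loopback addresses. A sketch (Linux-specific - other systems may only configure 127.0.0.1 on the loopback interface by default):

```python
import socket

# Two TCP listeners sharing one port number on different loopback IPs.
a = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
b = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
a.bind(("127.0.0.1", 0))     # kernel picks a free port
port = a.getsockname()[1]
b.bind(("127.0.0.2", port))  # same port, different loopback address
a.listen(1)
b.listen(1)
print("both listening on port", port)
a.close()
b.close()
```

This is the same trick the RDP setup above relies on: each session targets a different 127.x.x.x address, so the connections never collide.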

I am sure that if it can be done with WindersRDP, it could most definitely be done in Qubes…