Bash: -x: command not found
Edit: I executed the command correctly now. Nothing shows on the terminal with this.
@Bob3 I don’t know what to say except that this is not normal and I cannot reproduce it.
Thanks. I will set ‘rw’ to False.
You mean the pool’s ephemeral_volatile property? That’s just a default for the ephemeral property of ‘volatile’ volumes in the pool, nothing more.
Then why is it recommended to “ephemerize” the pool instead of the volatile volume only? IIUC from your explanation, the result is the same, no?
When I set the AppVM’s root volume’s rw property to False, I get:
user@dom0:~ > qvm-volume info BBB:root | grep rw
rw False
user@dom0:~ > qvm-volume info whonix-ws-16:root | grep rw
rw True
BBB is a test qube which is practically a whonix-ws-16-dvm cloned into the pool and run from there. IOW, it is expected that the original and the clone have the same root volume, i.e. whonix-ws-16:root. In that case, how come one has rw=False and the other rw=True?
I am trying to figure out whether I need to restore rw=True in cleanup before removing the RAM based qube.
It’s neither recommended nor recommended against, Marek just replied “you may be interested in” it. But it does fit your use case, since there’s no need for non-ephemeral ‘volatile’ volumes in the pool.
Also, you had trouble setting the property on the individual volume. (If you’re interested in debugging that, maybe invoke the script with bash -x to see if it’s really calling the qvm- tools correctly?)
No
The ‘root’ volume of an AppVM is not the same thing as the ‘root’ volume of its TemplateVM. (Even though the associated block storage data of the former is derived from the latter.) The rw property of the ‘root’ volume of a TemplateVM tells you whether it will be writable when you start that TemplateVM.
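One way to see that they are distinct volume objects, using the qube names from the posts above (the exact fields printed by qvm-volume info can vary between releases):

```bash
# The AppVM's 'root' volume is its own object; only its block data is derived from the template
qvm-volume info BBB:root
# The TemplateVM's 'root' volume is a separate object with its own rw setting
qvm-volume info whonix-ws-16:root
```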
Thanks for clarifying.
I have updated the script.
So, IIUC, in the case of a single volatile volume residing in a pool, there is no actual difference between configuring the volatile volume itself to be ephemeral and setting the pool’s ephemeral_volatile=True. The result will be the same.
And if the pool stores more than one volatile volume (the general case), then setting ephemeral_volatile=True on the pool will make all of these volatile volumes ephemeral.
Is the above correct?
Another thing:
When the tmpfs pool gets filled up, that takes memory from dom0. Considering the sudo swapoff --all line and the 4 GiB of total RAM for dom0, what is expected to happen if one creates and attempts to fill a tmpfs pool >= 4 GiB?
I also have a generic monitor in my XFCE panel, running the command xl info free_memory every 5 seconds. I notice that running a RAM based disposable created by the script always subtracts from that number exactly the value of the qube’s maxmem property. That seems logical. The question is: what will happen if maxmem is bigger than Xen’s free memory? Will Xen break something or swap? Where will it swap? Is that configurable, and how?
I am trying to understand the consequences of running too big and/or too many RAM based qubes and possibly improve the script, so that it prevents undesired memory situations (making proper verifications before attempting to allocate resources).
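For illustration, the kind of verification meant here might look something like this. This is only a sketch with hypothetical names and default sizes in MiB; the thresholds are illustrative, and xl typically needs root in dom0:

```bash
#!/bin/bash
# Hypothetical pre-flight check before allocating resources for a new RAM based qube.
# POOL_MIB would back the tmpfs pool (dom0 memory); MAXMEM_MIB would back the qube (Xen memory).
POOL_MIB=${1:-1024}
MAXMEM_MIB=${2:-4000}

# Memory dom0 itself still has available, in MiB
dom0_avail=$(awk '/MemAvailable/ {print int($2/1024)}' /proc/meminfo)

# Memory Xen still has free for VMs, in MiB (may require root/sudo in dom0)
xen_free=$(xl info free_memory)

if (( POOL_MIB >= dom0_avail )); then
    echo "Refusing: tmpfs pool of ${POOL_MIB} MiB would exhaust dom0 memory (${dom0_avail} MiB available)" >&2
    exit 1
fi

if (( MAXMEM_MIB >= xen_free )); then
    echo "Refusing: maxmem of ${MAXMEM_MIB} MiB exceeds Xen free memory (${xen_free} MiB)" >&2
    exit 1
fi
```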
That’s right.
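For concreteness, a minimal sketch of the two equivalent setups. The pool and qube names here are hypothetical, and the exact option syntax should be checked against qvm-pool(1) and qvm-volume(1) on your release:

```bash
# Per-pool default: every 'volatile' volume in this pool becomes ephemeral
qvm-pool set -o ephemeral_volatile=True ram_pool

# Per-volume: only this qube's 'volatile' volume becomes ephemeral
qvm-volume config myqube:volatile ephemeral True
```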
Hmm I’m guessing you’d make acquaintance with the dom0 Linux kernel’s OOM killer long before you reach 4 GiB of tmpfs usage.
Where can I read about it?
How about the Xen questions? Is there any info about that too?
BTW, when I run a system update and 3-4 disp-mgmt-* VMs start, xl info free_memory goes all the way down to about 300 MiB (from 64 GiB of total physical RAM). That is, just before running the update there is about 20 GiB of free memory.
To look at dom0 memory, run free. I think it’s expected for most of the system-wide memory to be shown as used in some sense by xl info.
There’s no Xen-specific swap, only normal Linux swap in dom0 and normal Linux swap in the VMs (which ends up in their ‘volatile’ volume).
As for the OOM killer, sorry I don’t have any links in particular. But this too is a normal Linux mechanism, not specific to Qubes OS.
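Roughly, the two views side by side (a sketch; xl may need root in dom0):

```bash
# dom0 view:
free -m                 # dom0's own memory, which backs tmpfs pools and dom0 processes
swapon --show           # dom0's normal Linux swap, if any

# hypervisor-wide view:
xl info free_memory     # memory Xen still has available to assign to VMs (MiB)

# inside a VM, its normal Linux swap lives on the 'volatile' volume (xvdc):
swapon --show
```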
I think it’s expected for most of the system-wide memory to be shown as used in some sense by xl info.
Most of the time (with 10 VMs running), there is about 25-30 GiB of free memory, so it is roughly 50/50 free/used. The sudden reduction to 300 MiB happens when these disp-mgmt- qubes start, which is still strange, because their maxmem value is 4 GiB, and when 4 of them are running that is 16 GiB (not 25 or 30). That’s why I wondered why all free memory suddenly seems to disappear during updates.
There’s no Xen-specific swap, only normal Linux swap in dom0 and normal Linux swap in the VMs (which ends up in their ‘volatile’ volume).
What happens if Xen’s free memory hits 0?
Updated.
Off-topic: I really like your avatar, very nicely suited to the username! lol
Well done on this too!
Drawn in a RAM based qube.
Some thoughts about memory usage:
As dom0’s memory is more limited, it seems more efficient to use less of it and more of Xen’s, i.e. less storage (-s) and more memory (-p maxmem).
In case I am right, a little trick which may be useful:
In the DVM’s /rw/config/rc.local set:
mount -o remount,size=4G /tmp # The size is up to you, you can thin provision
mkdir -p /tmp/download
chown -R user:user /tmp/download
ln -sfT /tmp/download /home/user/QubesIncoming
swapoff --all
And run:
ram-qube -p maxmem=4000 -p template=DVM ...
With that, files copied from other VMs to the RAM qube will go to the qube’s RAM (provided by Xen) instead of to its pool (provided by dom0).
If /tmp is too big (e.g. > the qube’s total RAM), filling it up will crash the qube. So, thin provisioning (if necessary) must be done with caution.
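If you prefer not to hard-code 4G, one hedged alternative is to size /tmp relative to the qube’s RAM in rc.local; tmpfs accepts percentage sizes, and the 50% here is just an example (it also tracks only the RAM assigned at mount time, not maxmem):

```bash
# Remount /tmp at a fraction of the qube's RAM instead of a fixed size
mount -o remount,size=50% /tmp
```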
This can be used for non-RAM based qubes too, if one prefers to store incoming files on RAM drives.
Ever since the last update to the script, it has stopped working correctly for me. I enable NoScript to show on my toolbar and then go to a site that has something I need to mark as trusted. When I click on the NoScript icon, it just displays an empty white box. The pool-usage script shows it at 99.95% of 1.0 GB. It often crashes as well. Is there anything I can do to get it to function the way it did previously?
The pool-usage script shows it at 99.95% of 1.0 GB. It often crashes as well. […] Is there anything I can do to get it to function the way it did previously?
It seems like you are running out of memory inside the pool, so try increasing the pool size (e.g. -s 1500M or -s 2G).
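For example (flag names as used earlier in this thread; the exact invocation depends on your version of the script):

```bash
ram-qube -s 2G -p maxmem=4000 -p template=DVM ...
```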
@qubist
As a newcomer to this topic, where can I find the current script and the future updated versions?
Thank you
@Theseus in the OP.