I’m not sure I’m willing to follow you down this rabbit hole. What matters, for the default user, is what is deployed on a default installation: two thin LVM pools, with Qubes 4.1 separating the dom0 pool (20GB) from the vm-pool. Otherwise, you can dig into Help: Custom Installation (LVM Layout & Config) on 4.1
There are general articles, already linked elsewhere in the forum, that explain things better than I could even try, for example this article from Fedora explaining why they are moving away from thin LVM pools in favor of Btrfs: Choose between Btrfs and LVM-ext4 - Fedora Magazine
But for the scope of your question:
- snapshot: the -back volumes created by Qubes, which can be used through a qvm-volume revert qube:volume call to revert to a previous point in time. Those snapshots are rotated by Qubes on shutdown and were explained elsewhere: Coping with OS-level snapshot rotation · Issue #88 · tasket/wyng-backup · GitHub
- backup: Qubes will back up the root and private volumes directly if those qubes are not running; otherwise it takes the last revertible snapshot as part of the backup and warns the user in that case.
- pools: file-based and volume-based, basically. This can be dug into elsewhere and is not pertinent here.
- lvm image: hmm? An “LVM file” doesn’t make sense to me.
- device, block device, vm device: Not sure how far you want to follow the rabbit hole here… Qubes dom0 directly manages most non-passthrough PCI controller devices (non sys-gui, non sys-usb, non sys-net).
Here, for the relevance of this thread, that would be your hard drive device(s), like so many other devices.
dom0 deals with unlocking the encrypted container, then with the volume group (VG, which may or may not span multiple hard drives), and then passes logical volumes (LVs, thin-provisioned by default) to qubes through the Qubes pool manager.
- logical volumes: in a default Qubes installation these are thin-provisioned LVs. A thin volume, even if provisioned at 10GB, only consumes the space actually used. This can cause pool issues through over-provisioning if all qubes consume what they think is available, which is why Qubes warns the user when the pool is reaching its limits, so unused files inside the qubes can be removed.
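To make the snapshot and thin-pool points above concrete, here is a rough sketch of the dom0 commands involved. The qube name `work` and the VG name `qubes_dom0` are just example assumptions matching a default install; adjust to your layout.

```shell
# List a qube's volumes and their saved revisions (the -back snapshots):
qvm-volume info work:private

# Revert the private volume to its latest saved revision
# (the qube must be shut down first):
qvm-volume revert work:private

# Check thin-pool usage in dom0 to see how close over-provisioning
# is getting to the pool's real limits:
sudo lvs -o lv_name,data_percent,metadata_percent qubes_dom0
```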
This is related to creating dom0 root snapshots, which are currently neither covered nor possible through
qvm-volume revert dom0:root. dom0 is considered special for a reason: one cannot revert that volume without losing configuration changes from the last session (that is, Qubes Manager entries, the LVM volumes backing the private and root volumes of newly created qubes, logs and other state), which is why dom0 is not currently revertible. Bluntly reverting to a prior state would make Qubes Manager lose its entries for newly created qubes, even though their disks (LVs) would still exist in the pool.
That post offers users the possibility of creating dom0 snapshots manually, which can be mounted for forensic/auditability purposes by anyone interested. As said in that thread, users could create a snapshot of dom0 right after the initial installation of Qubes OS (root-at-install as a dom0 root snapshot example here), and compare the latest snapshot against that root-at-install snapshot to see what was modified on the filesystem (at the block level, on dom0’s disk).
One cannot, today, revert a dom0 snapshot without risk, since dom0 contains a lot of other state that would most probably break your current setup. That is, restoring the root-at-install snapshot would effectively put dom0 back in its install state, but your /boot would contain newer boot entries, and qubes created since that clean install would not be presented to you: even though the LVs of those qubes would still be present, they would not be mapped through Qubes Manager, so those qubes would be unusable.
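As a hedged sketch of what manually snapshotting dom0’s root could look like (assuming the default VG/LV names `qubes_dom0`/`root` and the snapshot name `root-at-install`; none of these are mandated, adjust to your setup):

```shell
# Create a snapshot of dom0's root right after install
# (a thin snapshot of the root thin volume):
sudo lvcreate -s -n root-at-install qubes_dom0/root

# Later, activate it (thin snapshots are skipped at activation
# by default, hence -K) and mount it read-only to compare it
# against the current root for forensic/audit purposes:
sudo lvchange -ay -K qubes_dom0/root-at-install
sudo mount -o ro /dev/qubes_dom0/root-at-install /mnt
```

Reverting such a snapshot is exactly the risky part discussed above; mounting it read-only for comparison is the safe use case.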
So this is linked to your backup question here.
When Qubes does a backup, it wraps all those pieces together (the relevant parts of qubes.xml) so that a qube/template that was backed up also restores correctly, including configuration details. That is, it restores the qubes.xml content so that Qubes Manager is able to launch and map qubes/templates/netvm dependencies correctly, including PCI devices, memory assignment, enabled services and preferences: everything linked to a configured qube. The dom0 snapshot, just like any LV or block device, is simply the “disk” passed to a qube.
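For the backup side, the standard tools already wrap that qubes.xml metadata for you. A minimal sketch, where the qube name `work`, the destination path and the backup filename are illustrative assumptions:

```shell
# Back up a qube; its configuration from qubes.xml is included
# so the restore can recreate the qube's settings:
qvm-backup --compress /var/backups work

# Restore it later; Qubes Manager entries, netvm dependencies,
# PCI assignments, etc. are recreated from the backup metadata:
qvm-backup-restore /var/backups/<backup-file>
```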