Ooh, I am interested in reading more about this. I will take a look at the forum search.
Since I use both Qubes and a number of other *Nix systems (mostly OpenBSD), I have a "dotfiles" git repo someplace secure enough, where I can access it from all machines. The internet is fast enough these days that I can do a "git pull" in my login scripts so I don't have to manually update, yet it's always up to date. There's a Makefile to symlink the files into place when I clone it onto a new machine. For stuff that shouldn't be on all the machines, this doesn't apply, but you could extend it to have a few repos or just handle those few files manually.
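For anyone who wants to try this, the scheme (pull on login plus a symlinking Makefile) can be rehearsed in a throwaway directory. Everything below is a sketch: the demo path and file names are invented for illustration.

```shell
#!/bin/sh
set -e
# Self-contained rehearsal using a throwaway directory in place of a real
# $HOME, so it runs anywhere. File names are illustrative.
home="${TMPDIR:-/tmp}/dotfiles-login-demo"; rm -rf "$home"; mkdir -p "$home"
dotfiles="$home/.dotfiles"

# Stand-in for "git clone <your-dotfiles-repo> ~/.dotfiles" on a new machine.
git init -q "$dotfiles"
printf 'export EDITOR=vi\n' > "$dotfiles/profile"
git -C "$dotfiles" add profile
git -C "$dotfiles" -c user.name=me -c user.email=me@localhost \
    commit -qm 'initial dotfiles'

# What the Makefile does: symlink each tracked file into place.
ln -sf "$dotfiles/profile" "$home/.profile"

# What a login script does on every login: quietly fast-forward to the
# latest version (a no-op here, since this demo repo has no remote).
git -C "$dotfiles" pull --ff-only -q 2>/dev/null || true
```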
I stumbled upon another way to manage the dotfiles across qubes: using GNU stow and git:
Moved to User Support
Can you explain whether using rsync to move a git repo locally from qube1 to qube2 would overwrite qube2's local git repo version? I mean, let's say I make a change to my dotfiles git repo on qube1. Then I use rsync to sync that change to qube2. What happens? Is it going to be overwritten?
I am trying to see if it is possible to keep track of a git repo for my dotfiles, with the "remote" server being one of my local Qubes OS qubes. Can I do git push/pull/fetch/merge operations that way?
Or, let's say I simply qvm-copy the ~/.dotfiles folder (or the ~/.dotfiles/.git folder?) to other qubes that have the same ~/.dotfiles folder but an outdated git repo version. What do I do with the ~/.dotfiles (or ~/.dotfiles/.git) folder that is now in the ~/QubesIncoming directory? Do I move it to ~/.dotfiles? If I do that, is it going to automatically merge the commits or something?
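For what it's worth, git can fetch straight from a plain directory path, so the copy sitting in ~/QubesIncoming can be treated as a local remote and merged, rather than moved over the working tree. A sketch under that assumption, with throwaway directories standing in for the two qubes so it runs outside Qubes:

```shell
#!/bin/sh
set -e
# Throwaway directories stand in for qube1 and qube2.
work="${TMPDIR:-/tmp}/qvm-copy-demo"; rm -rf "$work"

# qube1: dotfiles repo with an initial commit...
git init -q -b master "$work/qube1/.dotfiles"
cd "$work/qube1/.dotfiles"
git config user.name qube1 && git config user.email qube1@localhost
echo 'set -o vi' > .zshrc
git add .zshrc && git commit -qm 'initial'

# ...which qube2 already has a copy of from an earlier sync.
mkdir -p "$work/qube2"
cp -r "$work/qube1/.dotfiles" "$work/qube2/.dotfiles"

# qube1 then gains one more commit.
echo 'alias ll="ls -l"' >> .zshrc
git add .zshrc && git commit -qm 'change made on qube1'

# On qube2: the updated repo has just "arrived" via qvm-copy. Instead of
# overwriting ~/.dotfiles with it, fetch from it and merge.
mkdir -p "$work/qube2/QubesIncoming/qube1"
cp -r "$work/qube1/.dotfiles" "$work/qube2/QubesIncoming/qube1/.dotfiles"
cd "$work/qube2/.dotfiles"
git fetch -q "$work/qube2/QubesIncoming/qube1/.dotfiles" master
git merge -q FETCH_HEAD  # fast-forward here; a real merge if both sides changed
```

Nothing is blindly overwritten this way: local commits on qube2 would be merged with the incoming ones like any other git merge.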
I might be in the process of devising a solution for my use case. I am making use of the git bundle command. See here for an introduction: Git - Bundling
As I keep testing this workflow, I will iron out the kinks, and if I am satisfied with it, I will probably write a full guide. But for now, here are some quick notes on it:
Syncing dotfiles across qubes without a remote server on the internet:
on the dotfiles qube:
cd into ~/.dotfiles dir.
You already have a git repo in there, with your dotfiles organized the way the stow program expects. For example:
~/.dotfiles/zsh/.config/zsh/ – this dir contains your zsh dotfiles.
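To illustrate that layout: run from ~/.dotfiles, `stow zsh` links the package into the parent directory ($HOME). The sketch below builds the layout in a throwaway directory and creates the equivalent symlink by hand, so it runs even without stow installed (paths and file contents are invented):

```shell
#!/bin/sh
set -e
# Throwaway directory standing in for $HOME.
home="${TMPDIR:-/tmp}/stow-demo"; rm -rf "$home"; mkdir -p "$home"

# Package layout: ~/.dotfiles/zsh/.config/zsh/.zshrc
mkdir -p "$home/.dotfiles/zsh/.config/zsh"
echo 'PROMPT="%~ %% "' > "$home/.dotfiles/zsh/.config/zsh/.zshrc"

# From ~/.dotfiles, `stow zsh` would link this package into $HOME.
# Done by hand, the result is equivalent to:
mkdir -p "$home/.config"
ln -s "$home/.dotfiles/zsh/.config/zsh" "$home/.config/zsh"

readlink "$home/.config/zsh"
```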
So, on ~/.dotfiles dir:
$ git bundle create dotfiles.bundle HEAD master
This will create a dotfiles.bundle file containing the version of the ~/.dotfiles directory as it is on the dotfiles qube.
You then qvm-move that dotfiles.bundle into another qube that you want to propagate your dotfiles to.
Let's say you qvm-move'd that file to the "mydebian" qube. On the mydebian qube, create the ~/.dotfiles dir. Then, move the dotfiles.bundle file there:
$ mv ~/QubesIncoming/dotfiles/dotfiles.bundle ~/.dotfiles
Then, cd into the ~/.dotfiles dir and do
$ git pull dotfiles.bundle
(If ~/.dotfiles is not a git repo yet, run git init in it first so there is something to pull into.)
You will have the same dir with the same git history and files as in dotfiles qube now in your mydebian qube.
Let's say you change some files in ~/.dotfiles in this mydebian qube, and you want to propagate those changes back to the dotfiles qube. You commit your changes and create a git bundle in mydebian:
$ git bundle create dotfiles.bundle HEAD master
Then qvm-move that dotfiles.bundle file back to the dotfiles qube, move the bundle file into the ~/.dotfiles dir again, and git pull from it there.
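The whole roundtrip can be rehearsed outside Qubes. In the sketch below, two throwaway directories stand in for the dotfiles and mydebian qubes, cp stands in for qvm-move, and `git bundle verify` is thrown in as a sanity check before each pull; qube and file names are illustrative.

```shell
#!/bin/sh
set -e
root="${TMPDIR:-/tmp}/bundle-demo"; rm -rf "$root"

# "dotfiles" qube: a repo with one commit, bundled up.
git init -q -b master "$root/dotfiles/.dotfiles"
cd "$root/dotfiles/.dotfiles"
git config user.name dotfiles && git config user.email dotfiles@localhost
echo 'set -o vi' > .zshrc
git add .zshrc && git commit -qm 'add zshrc'
git bundle create dotfiles.bundle HEAD master

# "mydebian" qube: receive the bundle (qvm-move in real life), then pull.
mkdir -p "$root/mydebian/.dotfiles"
cp dotfiles.bundle "$root/mydebian/.dotfiles/"
cd "$root/mydebian/.dotfiles"
git init -q -b master .              # needed only the first time
git config user.name mydebian && git config user.email mydebian@localhost
git bundle verify dotfiles.bundle    # sanity-check the bundle before pulling
git pull -q dotfiles.bundle master

# Change something on mydebian, bundle it, and send it back the same way.
echo 'alias ll="ls -l"' >> .zshrc
git add .zshrc && git commit -qm 'change from mydebian'
git bundle create back.bundle HEAD master
cp back.bundle "$root/dotfiles/.dotfiles/"
git -C "$root/dotfiles/.dotfiles" pull -q back.bundle master
```

After the last pull, both "qubes" have identical history, including the commit made on mydebian.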
You can further assign the following git usernames to keep track of which commits came from which qube:
(on dotfiles) $ git config user.name "dotfiles"
(on dotfiles) $ git config user.email "dotfiles@localhost"
(on mydebian) $ git config user.name "mydebian"
(on mydebian) $ git config user.email "mydebian@localhost"
That way, your commit history will show the qubes' names as the commit authors.
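A quick check of that trick, in a throwaway repo: with per-repo user.name set, the author column of git log is the qube name.

```shell
#!/bin/sh
set -e
# Throwaway repo; the name "mydebian" mirrors the qube name above.
repo="${TMPDIR:-/tmp}/author-demo"; rm -rf "$repo"
git init -q "$repo"; cd "$repo"
git config user.name mydebian
git config user.email mydebian@localhost
echo x > f && git add f && git commit -qm 'change from mydebian'
git log --format='%an  %s'   # the author column is the qube name
```

`git shortlog -sn` is also handy here: it summarizes how many commits came from each qube.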
Yes - rsync will copy over the destination. It's designed to put the source and destination in sync. (There are configuration options to deal with various use cases.) It's efficient and fast.
For git I would recommend git over qrexec.
New idea that I might implement. Currently I send a script to each new VM and run it, and it configures all my settings by running commands and editing files. This makes updating an issue, as someone pointed out, but I rarely update my dotfiles after the initial setup, so it hasn't bothered me yet. However, you could centralize it by placing the scripts in dom0, then running some sort of for loop over each VM you want to update, where it sends the new centralized file from dom0 and then executes it in each VM. That way you only have to update one file and then run the update loop.
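That dom0 loop might look like the sketch below. The qube names and script name are made up, and each command is prefixed with echo, making this a dry run; drop the echo to actually copy and execute the script from dom0 (files sent from dom0 land in ~/QubesIncoming/dom0/ in the target qube).

```shell
#!/bin/sh
# Dry-run sketch of a centralized dotfiles update loop, run from dom0.
SCRIPT=configure-dotfiles.sh            # the one centralized script in dom0
VMS="personal work untrusted"           # qubes to update (names are made up)
for vm in $VMS; do
    echo qvm-copy-to-vm "$vm" "$SCRIPT"
    echo qvm-run --pass-io "$vm" "bash ~/QubesIncoming/dom0/$SCRIPT"
done
```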
This is where Salt comes into its own. You can place the configuration files in dom0 and then target delivery, writing configuration as needed and changing it in accord with the target system, either by name or by other features. You can read about Salt here - I have a basic tutorial here covering many uses and examples.
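As a sketch of what that can look like (the state name, file paths, and target qube are all invented; adapt them to your setup), a minimal dotfiles state kept in dom0 might be:

```yaml
# /srv/salt/dotfiles.sls in dom0 -- a minimal sketch; names are illustrative
deploy-zshrc:
  file.managed:
    - name: /home/user/.zshrc
    - source: salt://dotfiles/zshrc
    - user: user
    - group: user
    - mode: 644
```

With the companion file placed at /srv/salt/dotfiles/zshrc and the state enabled for the target, something like `sudo qubesctl --skip-dom0 --targets=mydebian state.apply dotfiles` would then push it into the named qube.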