Ooh, I am interested in reading more about this. I will take a look at the forum search.
Since I use both Qubes and a number of other *Nix systems (mostly OpenBSD), I have a "dotfiles" git repo someplace secure enough, where I can access it from all machines. The internet is fast enough these days that I can do a "git pull" in my login scripts so I don't have to manually update, yet it's always up to date. There's a Makefile to symlink the files into place when I clone it onto a new machine. For stuff that shouldn't be on all the machines, this doesn't apply, but you could extend it to have a few repos or just handle those few files manually.
I stumbled upon another way to manage the dotfiles across qubes: using GNU stow and git:
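For illustration, a minimal sketch of that layout (the repo path and package name are just examples):

$ cd ~/.dotfiles                      # an existing git repo
$ mkdir -p zsh/.config/zsh
$ mv ~/.config/zsh/.zshrc zsh/.config/zsh/
$ git add zsh && git commit -m "add zsh package"
$ stow -t ~ zsh                       # symlinks ~/.config/zsh/.zshrc back into place

Each top-level directory is a stow "package", so you can stow only the packages a given qube actually needs.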
Can you explain if using rsync to move a git repo locally from qube1 to qube2 would overwrite qube2's local git repo version? I mean, let's say I make a change to my dotfiles git repo on qube1. Then I use rsync to sync that change to qube2. What happens? Is it going to be overwritten?
I am trying to see if it is possible to keep track of a git repo for my dotfiles, with a "remote" server being one of my local Qubes OS qubes. Can I do git push/pull/fetch/merge operations that way?
Or, let's say I simply qvm-copy the ~/.dotfiles folder (or the ~/.dotfiles/.git folder?) to other qubes that have the same ~/.dotfiles folder but an outdated git repo version. What do I do with the ~/.dotfiles (or ~/.dotfiles/.git) folder sitting in the ~/QubesIncoming directory now? Do I move it into ~/.dotfiles? If I do that, is it going to automatically merge the commits or something?
I might be in the process of devising a solution for my use case. I am making use of the git bundle command. See here for an introduction: Git - Bundling
As I keep testing this workflow, I will straighten out the kinks, and if I am satisfied with it, I will probably write a full guide. But for now, here are some quick notes on it:
Syncing dotfiles across qubes without a remote server on the internet:
on the dotfiles qube:
cd into the ~/.dotfiles dir.
You already have a git repo in there, with your dotfiles organized according to the stow program, e.g.:
~/.dotfiles/zsh/.config/zsh/ - this dir contains your zsh dotfiles.
So, in the ~/.dotfiles dir:
$ git bundle create dotfiles.bundle HEAD master
This will create a dotfiles.bundle file containing the state of the ~/.dotfiles repo as it is on the dotfiles qube.
You then qvm-move that dotfiles.bundle into another qube that you want to propagate your dotfiles to.
Let's say you qvm-move'd that file to the "mydebian" qube. On the mydebian qube, create the ~/.dotfiles dir, initialize an empty repo in it, and move the dotfiles.bundle file there:
$ mkdir -p ~/.dotfiles && cd ~/.dotfiles && git init
$ mv ~/QubesIncoming/dotfiles/dotfiles.bundle ~/.dotfiles
Then, still in the ~/.dotfiles dir, do
$ git pull dotfiles.bundle master
(The git init is needed because git pull only works inside an existing repo; alternatively, git clone the bundle directly.) You will now have the same dir, with the same git history and files as in the dotfiles qube, in your mydebian qube.
Let's say you change some files in ~/.dotfiles in this mydebian qube, and you want to propagate those changes back to the dotfiles qube. You commit your changes, and create a git bundle in mydebian:
$ git bundle create dotfiles.bundle HEAD master
Then qvm-move that dotfiles.bundle file back to the dotfiles qube, move the bundle again into the ~/.dotfiles dir, and git pull from it there, just like before.
You can further assign the following git usernames to keep track of which commits came from which qube:
(on dotfiles) $ git config user.name "dotfiles"
(on dotfiles) $ git config user.email "dotfiles@localhost"
(on mydebian) $ git config user.name "mydebian"
(on mydebian) $ git config user.email "mydebian@localhost"
That way, your commit history will show the qubes' names as the commit authors.
Yes - rsync will copy over the destination. It's designed to put the source and destination in sync. (There are configuration options to deal with various use cases.)
It's efficient and fast.
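For instance, assuming the destination repo is reachable at some path (however you choose to expose it between qubes), the flag choice decides what "overwrite" means:

$ rsync -a --delete ~/.dotfiles/ /path/to/qube2/dotfiles/   # exact mirror: destination-only changes are lost
$ rsync -a --update ~/.dotfiles/ /path/to/qube2/dotfiles/   # skip files that are newer on the destination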
For git I would recommend git over qrexec.
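There is no stock qubes.Git service, so the names below are hypothetical, but a sketch of the shape: a qrexec service in the serving qube wraps git's pack protocols, and clients reach it through git's ext:: remote helper.

In the dotfiles qube, /etc/qubes-rpc/qubes.Git:

#!/bin/sh
# $1 is the qrexec service argument: which git pack service to run
case "$1" in
    git-upload-pack)  exec git upload-pack /home/user/.dotfiles ;;
    git-receive-pack) exec git receive-pack /home/user/.dotfiles ;;
esac

In dom0, a policy line such as:

qubes.Git * @anyvm dotfiles allow

On the client qube:

$ git remote add dotfiles "ext::qrexec-client-vm dotfiles qubes.Git+%S"
$ git fetch dotfiles

The ext:: helper expands %S to the requested service name (git-upload-pack on fetch, git-receive-pack on push), which qrexec passes to the service as its argument.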
New idea that I might implement. Currently I send a script to each new VM and run it, and it'll configure all my settings by running commands and editing files. This makes updating an issue, as someone pointed out, but I rarely update my dotfiles after being set up, so it hasn't bothered me yet. However, you could centralize it by placing the scripts in dom0 and then running some sort of for loop over each VM you want to update, where it sends the new centralized file from dom0 and then executes it in each VM. That way you only have to update one file and then run the update loop.
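A minimal sketch of that loop in dom0 (qube names and script path are assumptions):

# dom0: push the centralized script to each qube, then run it there
for vm in personal work mydebian; do
    qvm-copy-to-vm "$vm" /home/user/setup.sh        # arrives in ~/QubesIncoming/dom0/
    qvm-run --pass-io "$vm" "sh ~/QubesIncoming/dom0/setup.sh"
done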
This is where Salt comes into its own.
You can place the configuration files in dom0, and then target delivery, writing configuration as needed, and changing it in accord with the target system, either by name or by other features.
You can read about Salt here - I have a basic tutorial here covering many uses and examples.
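As a minimal sketch (state name and file paths are just examples), a dom0 state that installs a single dotfile, applied per qube with qubesctl:

# dom0: a tiny Salt state managing one dotfile
# (expects the source file at /srv/salt/dotfiles/zshrc)
sudo tee /srv/salt/dotfiles.sls <<'EOF'
/home/user/.config/zsh/.zshrc:
  file.managed:
    - source: salt://dotfiles/zshrc
    - makedirs: True
EOF
sudo qubesctl --skip-dom0 --targets=mydebian state.apply dotfiles

Because states are declarative, re-running the same command on every qube is idempotent - only out-of-date files get rewritten.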
I'm currently using a solution very similar to the one described here, but it's a bit cumbersome to keep all of my VMs up-to-date with the main config.
Does your current workflow include a convenient way to automate all of this too?
Here's the approach I'm trying to build/configure:
Specifically this:
Ideally, I'd have a disposable coding environment that attaches project data as a block device (like from a project-data VM) while always pulling fresh dotfiles from a known template source.
But I'm still tinkering around to understand how to do this and whether it's possible at all.
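For the block-device part, one possible shape with hypothetical names (an image file in a project-data qube exported as a loop device, then attached to a disposable):

# inside project-data: back the project image with a loop device,
# which makes it visible to the Qubes block-device framework
sudo losetup -f --show /home/user/projectA.img    # prints e.g. /dev/loop0

# dom0: attach that device to the running disposable (name assumed)
qvm-block attach disp1234 project-data:loop0

This is only the attach half; the fresh-dotfiles half would come from whichever sync method above you settle on.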
Are you still using the same method?
It's been a while since I posted that. Nowadays, I don't use git bundles for dotfiles replication. Instead, I have a dotfiles qube which is allowed to send files to any other qube (except dom0, ofc). I have a simple script in dom0 which remotely commands my dotfiles qube to send the selected dotfiles to selected qubes, which then automatically stow them into the ~/.config/ directory.
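A rough sketch of what that dom0 script looks like (qube names, paths, and the stow package are illustrative, not my exact setup):

# dom0: have the dotfiles qube push a stow package, then stow it in place
for vm in personal work mydebian; do
    qvm-run --pass-io dotfiles "qvm-copy-to-vm $vm /home/user/.dotfiles/zsh"
    qvm-run --pass-io "$vm" "mkdir -p ~/.dotfiles && cp -r ~/QubesIncoming/dotfiles/zsh ~/.dotfiles/ && stow -d ~/.dotfiles -t ~ zsh"
done

The qvm-copy-to-vm call from inside the dotfiles qube needs a qubes.Filecopy policy that allows it to send to those qubes.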