Tool: Simple Set-up of New Qubes and Software

Thanks - it looks as if it is the Pihole and Share packages that are
affected - this is wrong.
I’ll fix it.



so happy you did this


Prepare an r4.2 repo? So we don't need to change releasever.


Not sure if it’s just me, but I’m getting 404 when trying to download the RPM from https://qubes.3isec.org/3isec-qubes-task-manager-0.1-1.x86_64.rpm. Is that still the way to install the tool? (I’ve already installed the keys) Or can it just be installed via dnf in dom0 now? (shiver up my spine) :slight_smile:

Thanks!


If you have added the unman repo in dom0 and saved the key under /etc/pki…

you can just run `sudo qubes-dom0-update 3isec-qubes-task-manager`
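For reference, "adding the unman repo in dom0" means creating a repo definition that qubes-dom0-update can use - something along these lines (the file name, baseurl, and key path below are illustrative assumptions, not the package's documented values):

```ini
# /etc/yum.repos.d/3isec-dom0.repo  (hypothetical example)
[3isec-dom0]
name=3isec Qubes dom0 packages
baseurl=https://qubes.3isec.org/rpm
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-unman
```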

Including qubes-app-split-browser is a great idea!

I would really like it if we had this for Firefox, with arkenfox integrated into the qube and a cron job/script to update arkenfox daily or at each boot.
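For the daily-update part, a plain cron entry would be enough; a minimal sketch, assuming arkenfox's updater script has been checked out to the path shown (the path is an assumption):

```
# crontab entry (illustrative): refresh arkenfox's user.js every day at 06:00
0 6 * * * /home/user/arkenfox/updater.sh
```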

That worked.

Thanks!

How do I install this via salt?!

Partway there:

(It doesn’t update arkenfox daily, but it does combine it with the split browser.)

There isn’t a salt package to set this up currently.
It’s on my list.

I installed this task manager yesterday and have two questions:
I installed pi-hole, and it seems to have set everything up - it switched all my qubes to pi-hole.
I still have internet, so everything looks fine, but is there a way to reach the pi-hole web UI, to check that pi-hole is really working?
I wanted to run netstat in the pi-hole qube to check which port it is listening on, but it seems network tools aren't installed there.

Second thing: I also installed split-ssh, but I can't see a qube for it.
If I try to install it again, it says it's already installed.

It's available at the standard address - http://127.0.0.1 - or via pihole -a

If you ran qubes-task or qubes-task-gui from a terminal you would have
seen any feedback there. Was the installation successful?

Try removing with sudo dnf remove 3isec-qubes-sys-ssh-agent and
reinstalling


Oh yeah - inside the pihole qube it works if I enter ip/admin.

No, sadly no success feedback, because my laptop froze while installing (probably because I had too many qubes and windows running, and I didn't realize it would also configure everything for me - I thought I'd have to do the networking setup myself afterwards).
But I'll try your command! Thanks! <3 :slight_smile:

Yep, it's still happening, and I'm now pretty sure it wasn't pi-hole that froze my qubes.
I tried to remove cacher, since I'm unsure whether this is related to my Parrot template (maybe it didn't install well).
So I wanted to remove cacher and reinstall it.

Qubes is still freezing while uninstalling it. I had left top open so I could see whether the CPU was at 100% usage, because the fans were spinning at 100%,

but nothing notable is happening there.

I get the lighttpd placeholder page when I try to access the pihole WebUI. CLI tools work, and filtering does too, just not the WebUI.
Not sure if it’s a problem with the installation process or the current version of pihole, or something else entirely.

@dipole you have to add /admin to your IP address to reach the pi-hole dashboard; at first I also thought something was wrong.

Anyhow, I took some time today to check out unman's templates.
Setting up split-gpg was easy as … and it's working.
Now I'm messing around with split-ssh.
First, I didn't notice there's a GitHub page for the README; I'll link it for everyone else here:

This helped me a lot to get further.
Some things were unclear while setting it up, like where the SshAgent policy is supposed to come from.
I just took it from the repo linked in unman's repo, but I'm not sure whether we still need it.
Then "edit the contents of client" also confused me, until I noticed I could find the client file above. Anyhow, everything is working.
I'm just hitting one issue, and if someone could help me out I'd love them, because I've been messing around with it for ~4 hours now.
In the split-ssh appvm there is a work-agent.sh whose content looks like this:

#!/usr/bin/bash
# point ssh clients at the "work" agent's socket
export SSH_AUTH_SOCK=/run/user/1000/ssh-work.socket
# load the work key into that agent
ssh-add keys/work/id_rsa

Everything is pretty easy to understand, I guess: when this script is called it sets the SSH_AUTH_SOCK variable to the socket path,
and then it just adds the RSA key.
If I run this script, the key becomes available in my work appvm.
But how can I run this script at startup, and do I even want to? What if I run multiple ssh agents? How do I decide which agent gets which key?
I guess for that I just have to copy the work service, edit the work socket to my liking, and then edit the 30-user policy in dom0?

Anyhow, I'll figure this out.
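On the "multiple agents" question: agents are independent processes, each bound to its own socket, and a client picks one purely via SSH_AUTH_SOCK. A runnable sketch (the socket paths here are examples, not the package's defaults):

```shell
#!/bin/sh
# Start two independent agents, each on its own socket:
ssh-agent -a /tmp/demo-work.sock >/dev/null
ssh-agent -a /tmp/demo-proj.sock >/dev/null

# Each agent keeps its own key list; SSH_AUTH_SOCK selects which agent
# a command talks to:
SSH_AUTH_SOCK=/tmp/demo-work.sock ssh-add -l || true
SSH_AUTH_SOCK=/tmp/demo-proj.sock ssh-add -l || true
```

So running work-agent.sh only affects the agent behind ssh-work.socket; any other agents are untouched.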

My main problem is: how do I get this script to run automatically? I already tried adding a second ExecStart to the work service, but since it is Type=forking I can't add a second ExecStart - and I figure if unman wanted us to add a second ExecStart he wouldn't have chosen forking.
Then I tried putting something like this in my /rw/config/rc.local:

/bin/bash /home/user/work-agent.sh

but that isn't working for me either.
I also tried linking work-agent.sh into /etc/init.d/, but that didn't help either.

I guess once I've got this working I'll do a small write-up on how to use split-ssh - this could make my life so much easier :smiley:
Big thanks to everyone so far, and especially to unman for these user-friendly templates/appvms. /non-ironic

EDIT:
Yep, it's just a matter of copying the old service as a new one and enabling it with systemctl --user enable.
And if you don't want every qube to be able to reach the ssh keys, this helped me a lot:

I'd still be glad if someone could tell me how to start the *-agent scripts that ship by default.

EDIT the 2nd:
I can see some problems coming with this.
If I run test-agent, which handles one ssh key, and then coding-agent, the keys get combined - so if I run ssh-add -L I see both keys. I don't know if this is the intended behavior?
I don't know how the gpg script works, but I'd wish for something similar for ssh-agent - e.g. pass the keys through for 5 minutes and then empty the agent again.
The next thing I'd wish for: when I run ssh to my server, it should check which appvm the request comes from and run the correct script - i.e. if an appvm uses git or ssh it should add both keys, and after ~5 minutes they should be wiped again.
If I then run ssh in a different qube: same behavior, just with different keys.
I guess it's meant to run like this anyway, but maybe there's a bug somewhere?
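For what it's worth, the "expire after 5 minutes" part doesn't need anything split-ssh-specific: ssh-add itself supports per-key lifetimes. A runnable sketch with a throwaway key (the key path is an example):

```shell
#!/bin/sh
# Start a demo agent and generate a throwaway key:
eval "$(ssh-agent -s)" >/dev/null
ssh-keygen -q -t ed25519 -N '' -f /tmp/demo_key

# -t SECONDS: the agent forgets the key automatically after the lifetime:
ssh-add -t 300 /tmp/demo_key

# List loaded keys, then wipe them all immediately:
ssh-add -l
ssh-add -D
```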

3rd EDIT, and I'm sorry about that, but I've been trying to get things working up to now…:
I'm just not able to use sys-gpg.
See below:

┌─[user@coding] - [~/gpg-test] - [2023-03-16 04:47:28]
└─[1] <> ls
gpg-test.txt
┌─[user@coding] - [~/gpg-test] - [2023-03-16 04:47:51]
└─[0] <> qubes-gpg-client --sign gpg-test.txt                          
gpg: signing failed: Inappropriate ioctl for device
[binary signature data written to the terminal]
gpg: signing failed: Inappropriate ioctl for device
┌─[user@coding] - [~/gpg-test] - [2023-03-16 04:48:13]
└─[2] <> 

Anyone got ideas why I'm getting these errors?
These commands work in sys-gpg, so it must be a problem between sys-gpg and my appvms (I also tried different appvms).
I also tried the first Google result, exporting GPG_TTY.

If you go to http://localhost you will see a handy link to the admin
panel

I don't understand what this means.
If you install the split-ssh package then a starter policy is created
for you in /etc/qubes/policy.d/30-user.policy

This will depend on whether you password protect your ssh keys, and
whether you want to set validity periods per key.

If you do want to start a service automatically and add a key, you can use systemd -
it’s already been covered in this thread.

[Unit]
Description=SSH agent

[Service]
# keep the agent in the foreground so systemd tracks it directly
Type=simple
Environment=SSH_AUTH_SOCK=%t/ssh-agent.socket
ExecStart=/usr/bin/ssh-agent -D -a $SSH_AUTH_SOCK
# load the key once the agent is up
ExecStartPost=/usr/bin/ssh-add [key]

[Install]
WantedBy=default.target

The point in running multiple agents is that you can allocate keys as
you will -
work: keyA,keyB,keyC
projectA: keyA
SG: keyB,keyC

There’s a simple utility to set up new agents -
/home/user/Configure-new-ssh-agent.sh
You edit the 30-user policy so that each agent use is controlled,
adapting this:

qubes.SshAgent +AGENT @anyvm @anyvm ask default_target=sys-ssh-agent

where AGENT is the name of the ssh-agent you want to use (this is passed
through as a parameter) - this gives you fine-grained access control from
qubes to agents, and therefore from qubes to ssh keys.
You can also clone sys-ssh-agent if you will, storing different keys on
different backends, rather than having them all on one qube.
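Concretely, a locked-down 30-user.policy along those lines could look like this (the qube and agent names are made up for illustration):

```
qubes.SshAgent  +work      dev-work  @default  allow target=sys-ssh-agent
qubes.SshAgent  +projectA  dev-work  @default  ask default_target=sys-ssh-agent
qubes.SshAgent  *          @anyvm    @anyvm    deny
```

With this, only dev-work can reach the work agent (and must ask for projectA), and every other qube-to-agent combination is denied.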

I never presume to speak for the Qubes team. When I comment in the Forum or in the mailing lists I speak for myself.