Sys-blocky automated installation (5 minutes and you're done!)

Blocky vs Pi-hole: Key Advantages

  • Lightweight - Single Go binary (vs Pi-hole’s PHP/SQLite/dnsmasq stack)
  • Qubes-optimized - Native NFTables support & vif* interface handling
  • No web UI - Reduced attack surface (Pi-hole’s admin portal is a risk)
  • Simpler maintenance - Config = one YAML file (vs Pi-hole’s multiple configs/SQL DB)
  • Built for containers - Statically compiled Go binary works better in Qubes VMs
  • Native Prometheus - Metrics without add-ons (Pi-hole needs exporters)

Ideal for Qubes because:

  • Minimal template bloat
  • Secure by design (no unnecessary services)
  • Easier to firewall
  • Clean integration with Qubes networking

Pi-hole drawbacks in Qubes

  • Heavy dependencies (200MB+ footprint)
  • Web UI requires opening ports
  • dnsmasq often conflicts with Qubes networking
  • Complex backup/restore

Blocky delivers equivalent ad-blocking with Qubes-friendly architecture.

:zap: Quick Start
> Copy the script below and run it from dom0, then set other VMs to use sys-blocky as their NetVM.

Done! Ads and trackers are blocked, with negligible performance overhead.
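Pointing qubes at the new DNS VM from dom0 looks like this (a minimal sketch; the qube names `work` and `personal` are placeholders for your own AppVMs):

```shell
#!/bin/bash
# dom0 sketch: route existing AppVMs through sys-blocky.
DNS_VM="sys-blocky"
for vm in work personal; do
    # Guard so this is a no-op outside dom0.
    if command -v qvm-prefs >/dev/null 2>&1; then
        qvm-prefs "$vm" netvm "$DNS_VM"
    fi
done
```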

What does the script do?

  • Creates a dedicated Qubes OS VM (sys-blocky) for DNS blocking.
  • Sets up a minimal Debian template with Go and core dependencies.
  • Compiles and installs Blocky (optimized Go binary) as a systemd service.
  • Configures NFTables firewall rules to redirect VM DNS traffic to Blocky.
  • Blocks ads/trackers using pre-configured denylists (StevenBlack/hosts etc.).
  • Ensures reboot persistence via rc.local.d.
  • Provides native Prometheus metrics for monitoring.
  • Replaces Pi-hole with lower overhead and tighter Qubes integration.

Result: A lightweight, secure, self-contained DNS server for all Qubes VMs.
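A quick sanity check from a client AppVM: with Blocky's default zeroIP-style blocking, a denylisted domain resolves to the null address. A minimal sketch (the `dig` line is illustrative and needs network access):

```shell
#!/bin/bash
# Returns success when a DNS answer is the null address, i.e. blocked.
is_blocked() {
    [ "$1" = "0.0.0.0" ]
}
# Real usage from an AppVM behind sys-blocky, e.g.:
#   answer=$(dig +short doubleclick.net)
#   is_blocked "$answer" && echo "blocking works"
is_blocked "0.0.0.0" && echo "blocking works"
```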

“This script is outdated; a newer version is available in the post further down.”

#!/bin/bash
set -euo pipefail

TEMPLATE_NAME="debian-12-minimal"
CLONED_TEMPLATE="d12m-blk-template"
VM_NAME="sys-blocky"
NETVM="sys-net"
MEMORY=1000
MAXMEM=2000
VCPUS=2
ADDITIONAL_SIZE="15G"
BLOCKY_REPO="https://github.com/0xERR0R/blocky.git"
BLOCKY_DEST="/opt/blocky"
BLOCKY_BIN="/usr/local/bin/blocky"
GO_VERSION="1.24.2"
LOG_FILE="/var/log/blocky_setup.log"

RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[0;33m'
BLUE='\033[0;34m'
CYAN='\033[0;36m'
NC='\033[0m'

log() {
    local level="$1"
    local message="$2"
    local color
    
    case "$level" in
        "INFO") color="${BLUE}[*]${NC}" ;;
        "SUCCESS") color="${GREEN}[✓]${NC}" ;;
        "WARNING") color="${YELLOW}[!]${NC}" ;;
        "ERROR") color="${RED}[✗]${NC}" ;;
        *) color="${BLUE}[*]${NC}" ;;
    esac
    
    echo -e "${color} ${message}${NC}" | tee -a "$LOG_FILE"
}

check_dependencies() {
    log "INFO" "Checking dependencies..."
    command -v qvm-clone >/dev/null 2>&1 || {
        log "ERROR" "Qubes OS tools not found!"
        exit 1
    }
    log "SUCCESS" "Dependencies verified"
}

prepare_template() {
    if ! qvm-ls --raw-list | grep -qx "$TEMPLATE_NAME"; then
        log "INFO" "Installing base template $TEMPLATE_NAME..."
        qvm-template install "$TEMPLATE_NAME" || {
            log "ERROR" "Failed to install base template"
            exit 1
        }
    else
        log "INFO" "Base template $TEMPLATE_NAME exists. Updating..."
        qvm-run -p -u root "$TEMPLATE_NAME" "apt update && apt upgrade -y" || {
            log "WARNING" "Base template update failed (continuing)"
        }
    fi

    if ! qvm-ls --raw-list | grep -qx "$CLONED_TEMPLATE"; then
        log "INFO" "Cloning $TEMPLATE_NAME to $CLONED_TEMPLATE..."
        qvm-clone "$TEMPLATE_NAME" "$CLONED_TEMPLATE" || {
            log "ERROR" "Template clone failed"
            exit 1
        }
        
        log "INFO" "Installing essential packages..."
        qvm-run -u root "$CLONED_TEMPLATE" "apt update && apt install -y git wget" || {
            log "ERROR" "Package installation failed"
            exit 1
        }
        qvm-shutdown --wait "$CLONED_TEMPLATE"
        log "SUCCESS" "Template cloned and configured"
    else
        log "WARNING" "Template $CLONED_TEMPLATE already exists. Checking state..."
        
        if ! qvm-run -p "$CLONED_TEMPLATE" "dpkg -l git wget" >/dev/null 2>&1; then
            log "INFO" "Installing packages in existing template..."
            qvm-run -u root "$CLONED_TEMPLATE" "apt update && apt install -y git wget" && \
            qvm-shutdown --wait "$CLONED_TEMPLATE" || {
                log "ERROR" "Failed to update existing template"
                exit 1
            }
        fi
        log "SUCCESS" "Using existing cloned template"
    fi
}

create_blocky_vm() {
    if qvm-ls --raw-list | grep -qx "$VM_NAME"; then
        log "WARNING" "VM $VM_NAME exists. Removing previous version..."
        qvm-remove --force "$VM_NAME" || {
            log "ERROR" "Failed to remove existing VM"
            exit 1
        }
        log "SUCCESS" "Old VM removed"
    fi

    log "INFO" "Creating new VM $VM_NAME..."
    qvm-create --standalone -t "$CLONED_TEMPLATE" -l red "$VM_NAME" || {
        log "ERROR" "VM creation failed"
        exit 1
    }
    
    qvm-prefs "$VM_NAME" memory "$MEMORY"
    qvm-prefs "$VM_NAME" maxmem "$MAXMEM"
    qvm-prefs "$VM_NAME" vcpus "$VCPUS"
    qvm-prefs "$VM_NAME" netvm "$NETVM"
    qvm-prefs "$VM_NAME" provides_network true
    
    log "SUCCESS" "VM $VM_NAME created and configured"
}

install_components() {
    log "INFO" "Installing Go $GO_VERSION..."
    qvm-run -p -u root "$VM_NAME" "bash -c '
        wget -q https://go.dev/dl/go${GO_VERSION}.linux-amd64.tar.gz -O /tmp/go.tar.gz &&
        tar -C /usr/local -xzf /tmp/go.tar.gz &&
        echo \"export PATH=\\\$PATH:/usr/local/go/bin\" >> /etc/profile &&
        echo \"export PATH=\\\$PATH:/usr/local/go/bin\" >> /home/user/.bashrc &&
        rm -f /tmp/go.tar.gz
    '" || {
        log "ERROR" "Go installation failed"
        exit 1
    }
    
    log "INFO" "Installing Blocky..."
    ARCH=$(qvm-run -p "$VM_NAME" "uname -m | sed 's/x86_64/amd64/;s/aarch64/arm64/;s/armv7l/armv7/'") || {
        log "ERROR" "Failed to detect architecture"
        exit 1
    }
    
    qvm-run -p -u root "$VM_NAME" "bash -c '
        set -e
        rm -rf \"$BLOCKY_DEST\" && git clone --depth 1 \"$BLOCKY_REPO\" \"$BLOCKY_DEST\"
        cd \"$BLOCKY_DEST\"
        /usr/local/go/bin/go build \
            -ldflags=\"-X '\''github.com/0xERR0R/blocky/util.Version=$GO_VERSION'\'' \
                      -X '\''github.com/0xERR0R/blocky/util.BuildTime=\$(date +%Y-%m-%dT%H:%M:%SZ)'\'' \
                      -X '\''github.com/0xERR0R/blocky/util.Architecture=$ARCH'\''\" \
            -o \"$BLOCKY_BIN\"
    '" || {
        log "ERROR" "Blocky installation failed"
        exit 1
    }
    
    log "SUCCESS" "Components installed"
}

configure_services() {
    log "INFO" "Creating Blocky config..."
    qvm-run -p -u root "$VM_NAME" "bash -c '
        mkdir -p /etc/blocky &&
        cat > /etc/blocky/config.yml <<\"EOF\"
upstreams:
  groups:
    default: 
      - 46.227.67.134 #OVPN
      - 192.165.9.158 #OVPN
#       - 1.1.1.1
#       - 8.8.8.8
ports:
  dns: 53

blocking:
  denylists:
    ads:
      - \"https://raw.githubusercontent.com/StevenBlack/hosts/master/hosts\"
      - \"https://s3.amazonaws.com/lists.disconnect.me/simple_ad.txt\"
      - \"http://sysctl.org/camaleon/hosts\"
      - \"https://s3.amazonaws.com/lists.disconnect.me/simple_tracking.txt\"
 #   custom:
 #    - file://etc/blocky/local-blacklist.txt
  clientGroupsBlock:
    default:
      - \"ads\"
 #    - \"custom\"
#  blockType: ZeroIp

#prometheus:
#  enable: true
#port: 53
#httpPort: 4000
EOF
    '" || {
        log "ERROR" "Blocky configuration failed"
        exit 1
    }
    
    log "INFO" "Configuring systemd service..."
    qvm-run -p -u root "$VM_NAME" "bash -c '
        cat > /etc/systemd/system/blocky.service <<\"EOF\"
[Unit]
Description=Blocky DNS
After=network.target

[Service]
ExecStart=/usr/local/bin/blocky --config /etc/blocky/config.yml
Restart=always
User=root

[Install]
WantedBy=multi-user.target
EOF
        systemctl daemon-reload &&
        systemctl enable --now blocky
    '" || {
        log "ERROR" "Service configuration failed"
        exit 1
    }
    
    log "SUCCESS" "Services configured"
}

setup_persistence() {
    log "INFO" "Configuring rc.local.d..."
    qvm-run -p -u root "$VM_NAME" "bash -c '
        mkdir -p /rw/config/rc.local.d &&
        cat > /rw/config/rc.local.d/blocky-start.sh <<\"EOF\"
#!/bin/bash
sudo systemctl unmask blocky.service
sudo systemctl daemon-reload
sudo systemctl enable --now blocky.service
EOF
        chmod +x /rw/config/rc.local.d/blocky-start.sh
    '" || {
        log "ERROR" "rc.local.d setup failed"
        exit 1
    }
    
    log "INFO" "Configuring rc.local..."
    qvm-run -p -u root "$VM_NAME" "bash -c '
        echo -e \"#!/bin/bash\\nexec /rw/config/rc.local.d/blocky-start.sh\" > /rw/config/rc.local &&
        chmod +x /rw/config/rc.local
    '" || {
        log "ERROR" "rc.local setup failed"
        exit 1
    }
    
    log "SUCCESS" "Persistence configured"
}

configure_firewall() {
    log "INFO" "Configuring firewall rules..."
    qvm-run -p -u root "$VM_NAME" "bash -c '
        mkdir -p /rw/config/{network-hooks.d,qubes-firewall.d} &&
        
        cat > /rw/config/network-hooks.d/internalise.sh <<\"EOF\"
#!/bin/sh
find /proc/sys/net/ipv4/conf -name \"vif*\" -exec bash -c \"echo 1 | tee {}/route_localnet\" \;
EOF
        
        cat > /rw/config/network-hooks.d/update_nft.sh <<\"EOF\"
#!/bin/sh
nft -f /rw/config/qubes-firewall.d/update_nft.nft
EOF
        
        cat > /rw/config/qubes-firewall.d/internalise.sh <<\"EOF\"
#!/bin/sh
find /proc/sys/net/ipv4/conf -name \"vif*\" -exec bash -c \"echo 1 | tee {}/route_localnet\" \;
EOF
        
        cat > /rw/config/qubes-firewall.d/update_nft.sh <<\"EOF\"
#!/bin/sh
nft -f /rw/config/qubes-firewall.d/update_nft.nft
EOF
        
        cat > /rw/config/qubes-firewall.d/update_nft.nft <<\"EOF\"
#!/usr/sbin/nft -f
flush chain qubes dnat-dns

flush chain qubes custom-forward
insert rule qubes custom-forward tcp dport 53 drop
insert rule qubes custom-forward udp dport 53 drop

flush chain qubes custom-input
insert rule qubes custom-input tcp dport 53 accept
insert rule qubes custom-input udp dport 53 accept

flush chain qubes dnat-dns
insert rule qubes dnat-dns iifname \"vif*\" tcp dport 53 dnat to 127.0.0.1
insert rule qubes dnat-dns iifname \"vif*\" udp dport 53 dnat to 127.0.0.1
EOF
        
        chmod +x /rw/config/rc.local \
                /rw/config/qubes-firewall.d/* \
                /rw/config/network-hooks.d/*
    '" || {
        log "ERROR" "Firewall configuration failed"
        exit 1
    }
    
    log "SUCCESS" "Firewall configured"
}

finalize_setup() {
    log "INFO" "Disabling unnecessary services..."
    qvm-features "$VM_NAME" service.cups 0
    qvm-features "$VM_NAME" service.cups-browsed 0
    
    log "INFO" "Expanding storage..."
    qvm-shutdown --wait "$VM_NAME"
    qvm-volume resize "$VM_NAME:root" "$ADDITIONAL_SIZE"
    qvm-start "$VM_NAME"
    
    log "INFO" "Verifying installation..."
    qvm-run -p -u root "$VM_NAME" "systemctl status blocky"
    qvm-run -p -u root "$VM_NAME" "blocky version"

    log "SUCCESS" "Setup complete!"
    echo -e "${CYAN}╔═════════════════════════════════════╗"
    echo -e "║    BLOCKY INSTALLATION COMPLETE!    ║"
    echo -e "╠═════════════════════════════════════╣"
    echo -e "║ • Use sys-blocky as NetVM           ║"
    echo -e "║ • Live journal started              ║"
    echo -e "╚═════════════════════════════════════╝${NC}"
    log "INFO" "Starting live journal..."
    qvm-run -p -u root "$VM_NAME" "xterm -T 'BLOCKY LIVE JOURNAL' -e 'journalctl -u blocky -f'"   
}

main() {
    check_dependencies
    prepare_template
    create_blocky_vm
    install_components
    configure_services
    setup_persistence
    configure_firewall
    finalize_setup
}

main "$@"

8 Likes

Thanks for the guide. I’ve never heard of Blocky before.

Is it this project? GitHub - 0xERR0R/blocky: Fast and lightweight DNS proxy as ad-blocker for local network with many features

1 Like

Yes, it’s an amazing project! You can use it with Prometheus to generate dashboards in Grafana and build a full monitoring stack.

1 Like

Tested the script in a relatively clean install of R4.2.4: it failed with «:x: Go installation failed».
Looking at the running sys-blocky left by the failed install: it has no IP address, so the wget attempt to download the Go tarball obviously fails…

1 Like

I wonder why the script is trying to download Go although it’s packaged in all templates supported by Qubes OS :thinking:

3 Likes

Great question! I assumed that a certain Go version is required, but “assuming” is bad in general. :smile:

2 Likes
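For reference, the packaged-toolchain route mentioned above would look roughly like this (a sketch; `golang` is Debian's meta-package, and the packaged version may lag behind upstream):

```shell
#!/bin/bash
# Sketch: use the Go toolchain packaged in the Debian template instead
# of downloading a tarball from go.dev.
install_go_from_repo() {
    if command -v apt-get >/dev/null 2>&1; then
        sudo apt-get update && sudo apt-get install -y golang
    else
        echo "not a Debian-based system" >&2
        return 1
    fi
}
# From dom0 this would run inside the template, e.g.:
# qvm-run -u root "$CLONED_TEMPLATE" "apt update && apt install -y golang"
```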

Thanks for the guide! This looks promising. Since the script assigns NETVM="sys-net", I assume the setup will be appVM -> sys-firewall -> sys-blocky -> sys-net. I’d also be interested in filtering DNS traffic that is tunneled through each of my sys-vpn qubes, so presumably one would need an additional upstream sys-blocky VM for each sys-vpn.

I wonder if it would be possible to create an off-chain sys-blocky DNS qube (similar to having pi-hole on a separate raspberry pi device) to filter traffic using qvm-connect-tcp? I’m still baffled by DNS in Qubes, so forgive me if this question is impossibly naive… I was able to prototype something along these lines with opensnitch nodes, but opensnitch implements a client-server model to enable this possibility.

2 Likes

The script does not work: the cloned template lacks the networking needed to install Go. To fix that, correct the following lines:
Line 72:
qvm-run -u root "$CLONED_TEMPLATE" "apt update && apt install -y qubes-core-agent-networking git wget" || {

Line 81:
if ! qvm-run -p "$CLONED_TEMPLATE" "dpkg -l qubes-core-agent-networking git wget" >/dev/null 2>&1; then

Line 83:
qvm-run -u root "$CLONED_TEMPLATE" "apt update && apt install -y qubes-core-agent-networking git wget" && \

2 Likes

Would it make more sense to fetch the pre-compiled binary from the releases page and provide an additional script to update it as needed?

1 Like

It has a different threat model, but I don’t think it’s worse than installing Go compiler and compiling the binary.

1 Like
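A sketch of that release-download approach (the pinned version and the asset naming below are assumptions; check the project's releases page for the real file names before relying on this):

```shell
#!/bin/bash
# Sketch: download a pre-built Blocky release instead of compiling.
blocky_arch() {
    # Map uname -m to the assumed release asset suffix.
    case "$(uname -m)" in
        x86_64)  echo x86_64 ;;
        aarch64) echo arm64 ;;
        *)       return 1 ;;
    esac
}
VERSION="v0.24"   # hypothetical pinned release
ARCH=$(blocky_arch) || { echo "unsupported CPU" >&2; exit 1; }
URL="https://github.com/0xERR0R/blocky/releases/download/${VERSION}/blocky_${VERSION}_Linux_${ARCH}.tar.gz"
echo "$URL"
# wget -qO /tmp/blocky.tar.gz "$URL"
# tar -xzf /tmp/blocky.tar.gz -C /usr/local/bin blocky
```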

Hello everybody,

Sorry for not answering earlier. I made another script and tested it on a fresh installation of Qubes 4.2. The script creates two VMs: one for data and one for DNS. You just need to run it, and the script will ask you to name the VMs. Then it will do all the work.

#!/bin/bash

# Configuration
BASE_CLONED_TEMPLATE="d12m-datagate"	
read -rp "Name for the data VM (MariaDB/Grafana/Prometheus): " DATAKEEPER
read -rp "Name for the DNS VM (Blocky): " GATEKEEPER
MINIMAL_TEMPLATE="debian-12-minimal"
LOG_FILE="/var/log/qubes-blocky-install-beta-script$(date +%Y%m%d).log"

separador() {
    echo "----------------------------------------"
}

# ESSENTIAL FUNCTIONS

verify_qubes_tools() {
    for cmd in qvm-ls qvm-run qvm-template qvm-clone qvm-create qvm-prefs qvm-shutdown qvm-kill; do
        if ! command -v "$cmd" >/dev/null; then
            echo "ERROR: Qubes command not found: $cmd"
            exit 1
        fi
    done
}

template_exists() {
    qvm-ls --raw-list | grep -qx "$1"
}

vm_exists() {
    qvm-ls --raw-list | grep -qx "$1"
}

run_in_vm() {
    local vm=$1 cmd=$2
    echo "[$vm] Executing: $cmd"
    if ! qvm-run -u root --pass-io "$vm" "$cmd"; then
        echo "ERROR: Command failed in $vm: $cmd"
        return 1
    fi
    return 0
}

shutdown_vm() {
    local vm=$1
    if qvm-ls | grep -wq "$vm" && qvm-check --running "$vm"; then
        echo "Shutting down $vm..."
        qvm-shutdown "$vm" --wait || {
            echo "WARNING: Could not shutdown $vm normally, forcing..."
            qvm-kill "$vm"
        }
    fi
}

shutdown_template() {
    local template=$1
    if qvm-ls | grep -wq "$template" && qvm-check --running "$template"; then
        echo "Shutting down $template..."
        qvm-shutdown "$template" --wait || {
            echo "WARNING: Could not shutdown $template normally, forcing..."
            qvm-kill "$template"
        }
    fi
}

function inject_extrepo {
	local target=$1

run_in_vm "$target" "bash -c '
    set -e
    rm -rf /etc/apt/sources.list.d/extrepo_librewolf.sources
    touch /etc/apt/sources.list.d/extrepo_librewolf.sources
    cat > /etc/apt/sources.list.d/extrepo_librewolf.sources <<\"EOF\"
Types:  deb
Uris: https://repo.librewolf.net
Suites: librewolf
Architectures: amd64 arm64
Components: main
Signed-By: /var/lib/extrepo/keys/librewolf.asc
EOF
'"
run_in_vm "$target" "bash -c '
    set -e
    sudo mkdir -p /var/lib/extrepo/keys
		rm -rf /var/lib/extrepo/keys/librewolf.asc
		touch /var/lib/extrepo/keys/librewolf.asc
		cat > /var/lib/extrepo/keys/librewolf.asc <<\"EOF\"
-----BEGIN PGP PUBLIC KEY BLOCK-----

mQINBGJ/cYcBEADGzCTFlHVTGQ43a/7d0gsAzXbBhS+7kIexmS3vY19YSiGTKtBf
LYmM3JN1Rc1aF1FUD48omDXYVLhFveMh42B2Pf8kcZ8dHD+42Dwx/LKlKy0qw2yR
ftzmZNkwUoFhg/X+WEAnHKeOI11c9Cdc6sDwIC9aJ4o3VWkRdoEpG60zjCvhmEn4
1/YvaM3p4OfFk2zWrs9msGnW+ZFpSfnpFDH/zCrZcdPNP80Is0LEKfrW87klKZTY
JoWRsHJHn01U4RcjWQtooN7Sr0ku3kXkp3Yj2e739Kt1kVikV9l56OocSFbQRdLZ
UdAYeOengtHnTnKBuJTPm21FCJyQHai3TrCu2Lr/Wbi23HTHpRcvikjv+eiKiZSq
J7lr1Sc2s5wH/4RBUYSfxTwYPImAWPZotRGqboX+ZQVk/LknQ+dM8NdpZX0IFXW5
FzejS46HaYQCJhpwSyzREuu/5wm75AZUyDcP9hNck3BYULqXQsd5qls4bnPZcENu
ED6HQ/Y7f6PNWxBzIr3eRM5qq8MCe0ycs+Yr5eaIJePlEd1nn2+1L3L4i57Q3TVe
aL4CZdn/w13geV7Hq8spkCvVFouSzu5zS9n7tbOx+Ca9acOvp1Nw/T6OG1NfhnY1
pn4sng11xGnimvDzobJ+FbCmIQj4CF7A0IXccBOxRP3z/gDIDXgtnTgYOwARAQAB
tClMaWJyZVdvbGYgTWFpbnRhaW5lcnMgPGdwZ0BsaWJyZXdvbGYubmV0PokCTgQT
AQgAOAULCQgHAgYVCgkICwIEFgIDAQIeAQIXgBYhBGYuPN1v4ykALQylu0Azndgr
Eu8WBQJif4AMAhsDAAoJEEAzndgrEu8WEekP/AiFxNDA+y61tnj67n59Si11jZwc
cVl+J7h97ZcjKQ0fx7bYeFtIWxj36ZqbRRH0T4piu0QMCX3L5a1ztuSJFIl6s47P
ex+2fkqmd8F+wiXu0IX+8lF31l5SUtITZvYPqn+14jwOHStQ1Z75ihbgwGQGXOn4
2zCYk84KWuJen0DM8wDzL8BsXdQU53HuPwaXRzpaaG7paQlYLZpQ5ekCeG7A+Nng
ZyrNWhHeZErYCjSGe6xWyYPIeLO3gio1KgnpKkXQ5w4mZ3ecplqceewyE6HW+EdV
gjwAchsU6de3l4gSz1ELcyDwoG7ph1Ttu6TLz3rnkDPvaxF6rKWCMHK8j/oiYF2Q
H32h62nAGRw67sucPZi+HrBC69ogI1uY6YFehrFrxlt9IEv1ocS0w7NXm1LJ0UwI
DabDDDYZ+cUU9yHGX54X3YcdeSrmJ3Y4DsM9ggzFYhMZ32pZGnUIvG4R5hdqlSc2
QS5p/Lt7l7X7Np6M0HfSnovE0ROELiF6MBsg6ewsAWqlQsbWp0RiL0NVJqfYTZzN
9EzkSt/4VBVUOVn2A9V5dSxTkd3E36ba2xRmWDecGOhtRDHahGlCKZlGx6WIN5Ou
G5awI557EYAetOyOvb+E7/huKoEYwcRhw6cBoiVYAengPQOhApTJ9J5dn/VIiXsv
uNAyDCDt7u5Qtmp+iQIzBBABCAAdFiEEA093du9eDGE9L3k00p+9X5PAz8MFAmJ/
gGcACgkQ0p+9X5PAz8MkkA//d77nD4vza7JxC1jkEYCFD3nUEufdPmvcyzIOCFaz
nLqCrP/ve1NORqtYnCM9+jtUFqKHSIbcucpChBhMGhyLlcz/geBuJBfIAaow945k
7Cig317Gk7KBf63CwnDeufiIElmOXgxMmVhoZLneVDBIXgnX8BVR9pOyRagPbEbw
EK5sXJ443o9oEoJwqFVb9jWE3MmSYBascyyUuFnxOe7A5U/iJ1UJCO+chbSiyU1h
nn5F1wTSxQ443V+cYVu7VlsLrdCbhi8XBZuPWlEYTzy1ncxEUYHE2RWS4xwcU0Zu
u67aDvCIh82XfTO5vRFDfyPTrzVpZo3PWRkVNCFpmiZjnrswy6KUYbzzsQihbLkx
2UCpNNv1sKqsmYoqMOQ/vSg6k2BK/F/lZ+4gmmp1OPviemUsekYxikvd9Kd6qSRO
xRb8d9YGyE2glabdOgyBd1F8h+g/iCpYLKp+xBG5PI1x9EHCkI6uO0mHm9PryW0o
U/dnRP9vgEwHxLlK8TAGZnDSQfOQEgQt75IY8Ttuodfe5kNRUfiz3UfUr9T3URzy
XScm+pTMUdKzh9C5bBut/IBXGm4AKeaTSGATzVb1Txw/jSoNTqH9D1ohCL0YYzf2
J77skzCJ2XkZYoccF25wYinQ4fXIobxSQV9lMlvVfC+OfNyrfsAQgOUPv+HaQImG
ZO25Ag0EYn+AOwEQAKsUiDowDDXFi1oGWOvNASPc6asNGxE8LcfEYJ7CvYBR9tbx
TTPvQr06ZD28kg3fXLopPMObalPhXBrI0T+DiBBUJJAUDnMbhPDMvD3QaKLaRv7V
23ZKP8snU47WU4HNTIFfc4F4jyvHKhwoEkUIVT0mHrxzXjSBS0MFP9TUt64BV66Q
x4T2jFMb3WjYIVqm1EpbaxwrSQGqamcL+QfH1PSKGOlucyT3Z6GOct29Y4z3Rnc6
oUGbDs6X9HZP9aHEXNESHSkjjh8Q6zPOtu5vNlgfd2CVnPnDs2qwfTS2rPKYQhb8
Pc9z23RqFSAE+quoKKJ26otYTBych+sa2STeihhG13pcBWgk/PadoZ0fWoAeqi0R
urklpCQ+qIM1vz5ECe/SFZIodYLTJ/F9KLowTVOHqBjjaSIdletxEJznHFzjeQD/
NDO5P0dZCsYQBM3iD0AipePkaCsdQW7A4oIkVFhyBKNHVwwu4QG1dLc0tRCFRyDQ
8HVg6bctwrpNj/mhQPCdX7EbpIrZcTlqowtMz2cHcMhaYq49QFxjRmvv/d3Vt9q4
yRDT3WxlRymvPS+bxSkiLQqRBrAiuZ7A8LCFZhBPII60q9ADXv+Ujl29T3MRaSNK
INIdWGYYDI3GnbIY83J0PyBDJoV2RjClKUXrfSoVBiWtB3YPOoOSnSbuRQ2NABEB
AAGJBGwEGAEIACAWIQRmLjzdb+MpAC0MpbtAM53YKxLvFgUCYn+AOwIbAgJACRBA
M53YKxLvFsF0IAQZAQgAHRYhBE3Uu+yr0y55jMd6QEeYHqc9DhxjBQJif4A7AAoJ
EEeYHqc9DhxjcSEP/j5cxtquY2lv7jbi9HbowfHLndhtxS7gNmfrOWemp1d65hwl
FRtBdScc7XXQpE9xdpkY1tm2rCjLaf1EPQiSCI4m12J8KauvbJdi4fd7iMWCh//3
lxmgWKbrqBGvdd4bWz7Uf7iZc0aLkZJVq02eTeFt2eK5r0ABCyvZuU8gi7vFE85Y
BbzR36CkDbMn+CVtYQg+PQhr9lubey700qwgYowXRpZb3mDz7/4YEXe+Ul09LVJ0
dF74NuAOjqE5YUMHW+TwP+B9WVZlghUqT4JENGcbI1H8cbFquowOaniL99o3c0Q1
AI+z79hnbK6NstRcgNw9N/4IOnWpoG1wVQJGTGCy6ZTF9fFjBcCjTK8F0XEB6Muu
sdd7VeoUJdiknFarD6w0Ut5UASHDYvZgL5NadKqVgPU1DtaQQ3F0rGsgZxSSZbbt
kSHRquEDe2pLa/+ZfZFcSrjM6Z+GOS86HJXYikoiq9IEcC0Jj2CAjESvWYURkTuy
GiYY5WOzmxzgU9CFqm4PYuJjQfDlFfnjblyZ4qDHRrTcDyS1jmg1YKWiH3gvZ9JT
B98QpmCzNddJlnQggLUT6pvS0qVreV01S6FdU3Y5+Ii57088wb/cScXmf08B/DPT
VpOO260BzANEackFpN0gzOltDKY0rgqjBFpir6ztVZIUCXe5/CWYaiOYO0HUHgIQ
AJ9l0ritjKOm4XvCUZ11bVzHJLD3OJmlbUcY8HKdow/eEzdhufzl1+l299tHhMkJ
ginxBVZjjrhNuCqKgMpo7kzg9H4f8I+b+hbm2gTfOo3qSQ3z5pUAff7EMAYW3Gpp
BGQRgsJHxmydtb7sFgtzU6Xx16fGBDSlW7wnVGLrVsrpPylDGJF32No66y9jbLWC
sNnNc3iRyRpuBiU3Hfir/6oFwgLMywhdUqKQoCyJnWoSSX60Q0BZs+T9DRfw+Yfy
qRg1CxgPDcQgH/dRiwuLNVNqZe2jedStvJqlJUGf9XiBJxu8i0pZCn3B7aHMqXL0
UqKzUXKGtQtJZhYHqXSGr+wZ4wpHcBhBsgXFA9A8qS08EiLIVoY7OEDkrCbf83yX
6gMpqdOqiTuV1pioc+DQCeE1LVFYEF4FDO53NgWI+dDUDbk5GdLprBkdEpuPD1MU
c2PmtSXC3awO51UoU5BO8jTnNok5mR0WNKlfutpsdkPEBHclleWjf8EsOqkS0wF3
LrpVAnCyTHg98HvnCYTIB8CgTRDUemsOkF7thA2OpDJ/aMU5weqq+N5UfXLH5M8j
2BDoSwkcrRmAp+MatYMy+FYU9GWTZ8KLfx9tHX0REgRmzlJgBRepx7ozqbZ5LFMY
YBHEf+31S/A8FvJ1ZCAurl/Go4IBFUgoJejAk98IBIgj
=tZgh
-----END PGP PUBLIC KEY BLOCK-----

EOF
'"
}
function fix_locale() {
	local target=$1
	run_in_vm "$target" "printf 'LANG=en_US.UTF-8\nLANGUAGE=en_US:en\nLC_ALL=en_US.UTF-8\n' | sudo tee /etc/default/locale > /dev/null"
	run_in_vm "$target" "sudo sed -i 's/^# en_US.UTF-8 UTF-8/en_US.UTF-8 UTF-8/' /etc/locale.gen"
	run_in_vm "$target" "sudo locale-gen"
	echo "Locale configuration complete."
}
function check_vm {
	local target="$1"
	if qvm-ls | grep -wq "$target" && qvm-check --running "$target"; then
        echo "Shutting down $target..."
        qvm-shutdown "$target" --wait || {
            echo "WARNING: Could not shutdown $target normally, forcing..."
            qvm-kill "$target"
            }
	fi
}
function resize {
	local target=$1
	local volume_size=$2
	if qvm-ls --raw-list | grep -wq "$target"; then
		echo "Booting $target.."
		qvm-start "$target"
			while ! qvm-check --running "$target" 2>/dev/null; do
				echo "Waiting $target finish boot..."
				sleep 1
			done
		echo "Resizing $target private volume to $volume_size..."
		qvm-volume resize "$target:private" "$volume_size"
	else
		echo "$target does not exist. Aborting!"
	fi
	shutdown_vm "$target"
}
function fnl {
	    local vmdata="$DATAKEEPER"
	    local vmgate="$GATEKEEPER"
    if qvm-ls | grep -wq "$vmdata" && qvm-check --running "$vmdata"; then
        echo "Shutting down $vmdata..."
        qvm-shutdown "$vmdata" --wait || {
            echo "WARNING: Could not shutdown $vmdata normally, forcing..."
            qvm-kill "$vmdata"
        }
    fi	
    if qvm-ls | grep -wq "$vmgate" && qvm-check --running "$vmgate"; then
        echo "Shutting down $vmgate..."
        qvm-shutdown "$vmgate" --wait || {
            echo "WARNING: Could not shutdown $vmgate normally, forcing..."
            qvm-kill "$vmgate"
        }
    fi	
    qvm-prefs "$vmdata" netvm "$vmgate"
    echo ""
    echo "$GATEKEEPER [ok]"
    echo "$DATAKEEPER [ok]"
    echo ""
    separador
    echo "Installation completed!"
	separador
	echo ""
	echo ""
	echo ">>>>>>> IMPORTANT <<<<<<<"
	echo ""
	separador
	echo "INSIDE GRAFANA PANEL"
	echo "go to: Connections -> Data sources"
	echo "Add data source -> MySQL"
	separador
	echo "Host URL: localhost:3306"
	echo "Database: blocky"
	echo "Username: mysql-user"
	echo "Password: mysql-pass (you can change it later)"
	echo "Click on: Save & test"
	separador
	echo "Inside grafana panel after adding your database:"
	echo "Click [+] icon on top right of grafana panel"
	echo "Then select: Import dashboard"
	echo "Type the ID: 14980"
	echo "click on LOAD button"
	echo "Select your mysql data source and click import!"
	separador
	echo ""
	read -p ">> Press enter << to continue and follow the steps above!"
	echo ""
	echo "Opening grafana panel at localhost:3000, use admin:admin to login!"
	qvm-run -q -a "$DATAKEEPER" "librewolf localhost:3000"
}
function getip {
    local target=$1
    IP=$(qvm-run -u root -p "$target" "qubesdb-read /qubes-ip 2>/dev/null")
    if [ -z "$IP" ]; then
        IP=$(qvm-run -p "$target" "ip -4 addr show eth0 | grep -oP '(?<=inet\s)\d+\.\d+\.\d+\.\d+' | head -n 1")
    fi
}
function blocky_stage_01 {
    local vm="$GATEKEEPER"
    local go_version="1.24.2"
    qvm-run -u root "$vm" "bash -c '
        wget -q https://go.dev/dl/go${go_version}.linux-amd64.tar.gz -O /tmp/go.tar.gz &&
        tar -C /usr/local -xzf /tmp/go.tar.gz &&
        echo \"export PATH=\\\$PATH:/usr/local/go/bin\" >> /etc/profile &&
        echo \"export PATH=\\\$PATH:/usr/local/go/bin\" >> /home/user/.bashrc &&
        rm -f /tmp/go.tar.gz
    '" || {
        echo "[$vm] Failed to install Go."
        return 1
    }
}
function blocky_stage_02 {
	local vm="$GATEKEEPER"
    local go_version="1.24.2"
    local blocky_repo="https://github.com/0xERR0R/blocky.git"
    local blocky_dest="/opt/blocky"
    local blocky_bin="/usr/local/bin/blocky"
	echo "[$vm] Detecting architecture.."
    local arch
    arch=$(qvm-run -p "$vm" "uname -m | sed 's/x86_64/amd64/;s/aarch64/arm64/;s/armv7l/armv7/'") || {
        echo "Failed to detect architecture"
        return 1
    }
    echo "[$vm] Compiling Blocky..."
    run_in_vm "$vm" "bash -c '
        set -e
        sudo rm -rf \"$blocky_dest\" 
        sudo git clone --depth 1 \"$blocky_repo\" \"$blocky_dest\"
        cd \"$blocky_dest\"
        sudo /usr/local/go/bin/go build \
            -ldflags=\"-X '\''github.com/0xERR0R/blocky/util.Version=$go_version'\'' \
                      -X '\''github.com/0xERR0R/blocky/util.BuildTime=\$(date +%Y-%m-%dT%H:%M:%SZ)'\'' \
                      -X '\''github.com/0xERR0R/blocky/util.Architecture=$arch'\''\" \
            -o \"$blocky_bin\"
        sudo chmod +x \"$blocky_bin\"
        '"
    return 0
	}
function blocky_stage_03 {
	local target="$GATEKEEPER"
	local DATA_IP=$(qvm-run -u root -p "$DATAKEEPER" "qubesdb-read /qubes-ip 2>/dev/null")
	local directory="/etc/blocky"
	local configfile="/etc/blocky/config.yml"
	run_in_vm $target "mkdir -p /etc/blocky"
	run_in_vm $target "touch /etc/blocky/local-blacklist.txt"
	run_in_vm $target "touch /etc/blocky/config.yml"
	run_in_vm $target "bash -c '
		cat > /etc/blocky/config.yml <<\"EOF\"
upstreams:
  groups:
    default:
      - 46.227.67.134  # OVPN upstream
      - 192.165.9.158  # OVPN upstream

blocking:
  denylists:
    ads:
      - https://raw.githubusercontent.com/StevenBlack/hosts/master/hosts
      - https://raw.githubusercontent.com/hagezi/dns-blocklists/main/wildcard/tif.medium.txt
      - https://s3.amazonaws.com/lists.disconnect.me/simple_ad.txt
      - https://s3.amazonaws.com/lists.disconnect.me/simple_tracking.txt
      - https://raw.githubusercontent.com/hagezi/dns-blocklists/main/wildcard/nsfw.txt
      - https://raw.githubusercontent.com/hagezi/dns-blocklists/main/wildcard/popupads.txt
    custom:
      - file:///etc/blocky/local-blacklist.txt
  clientGroupsBlock:
    default:
      - ads
      - custom
  blockType: zeroIP
  blockTTL: 6h

ports:
  dns: 53
  http: 4000

prometheus:
  enable: true
  path: /metrics

queryLog:
  type: mysql
  target: mysql-user:mysql-pass@tcp(${DATA_IP}:3306)/blocky?charset=utf8mb4&parseTime=True&loc=Local
  logRetentionDays: 30
  flushInterval: 60s

log:
  level: info
  format: text
  timestamp: true

caching:
  minTime: 5m
  maxTime: 30m
EOF
'"
}
function blocky_stage_04 {
	local vm="$GATEKEEPER"
qvm-run -u root -p "$vm" "bash -c '
    set -e
    rm -rf /etc/systemd/system/blocky.service
    touch /etc/systemd/system/blocky.service
    cat > /etc/systemd/system/blocky.service <<\"EOF\"
[Unit]
Description=Blocky DNS
After=network.target

[Service]
ExecStart=/usr/local/bin/blocky --config /etc/blocky/config.yml
Restart=always
User=root

[Install]
WantedBy=multi-user.target
EOF
'"
}
function prometheus_stage_01 {
    local target="$DATAKEEPER"
    echo "Installing prometheus on $DATAKEEPER"
    URL="https://github.com/prometheus/prometheus/releases/download/v3.3.0-rc.1/prometheus-3.3.0-rc.1.linux-amd64.tar.gz"
    TEMP_DIR="/tmp"
    DEST_DIR="/etc/prometheus"
    FILE_NAME="prometheus-3.3.0-rc.1.linux-amd64.tar.gz"
    EXTRACTED_DIR="prometheus-3.3.0-rc.1.linux-amd64"
    run_in_vm $target "wget -q -O $TEMP_DIR/$FILE_NAME $URL" || {
        echo "ERROR: [$target] Failed to download prometheus"
        exit 1
    }
    
    run_in_vm $target "sudo mkdir -p $DEST_DIR"
    run_in_vm $target "sudo tar -xzf $TEMP_DIR/$FILE_NAME -C $TEMP_DIR" || {
        echo "ERROR: [$target] Failed to extract the archive."
        exit 1
    }
    run_in_vm $target "sudo mv $TEMP_DIR/$EXTRACTED_DIR/* $DEST_DIR/"
    run_in_vm $target "sudo rm -rf $TEMP_DIR/$EXTRACTED_DIR"
    run_in_vm $target "rm $TEMP_DIR/$FILE_NAME"
	}
function prometheus_stage_02 {
	local target="$DATAKEEPER"
run_in_vm $target "rm -rf /etc/systemd/system/prometheus.service"
run_in_vm $target "touch /etc/systemd/system/prometheus.service"
run_in_vm $target "bash -c '
cat > /etc/systemd/system/prometheus.service <<\"EOF\"
[Unit]
Description=Prometheus
After=network.target

[Service]
ExecStart=/etc/prometheus/prometheus --config.file=/etc/prometheus/prometheus.yml
Restart=always
User=root

[Install]
WantedBy=multi-user.target
EOF
'" || {
	echo "ERROR: [$target] Failed to create prometheus service."
	exit 1
	}
}
function prometheus_stage_03 {
	local target="$DATAKEEPER"
	local IP=$(qvm-run -u root -p "$GATEKEEPER" "qubesdb-read /qubes-ip 2>/dev/null")
run_in_vm $target "bash -c '
    rm -rf /etc/prometheus/prometheus.yml
    touch /etc/prometheus/prometheus.yml
    cat > /etc/prometheus/prometheus.yml <<\"EOF\"
scrape_configs:
  - job_name: 'blocky'
    scrape_interval: 90s
    metrics_path: '/metrics'
    static_configs:
    - targets: [${IP}:4000]

    relabel_configs:
    - source_labels: [__address__]
      target_label: instance
      replacement: 'METRICS'
EOF
'"
}
function mdb01 {
    local target="$DATAKEEPER"
    local FILE_PATH="/etc/mysql/mariadb.conf.d/50-server.cnf"
    local DESTINATION_DIR="/home/user/"
    local TIMESTAMP=$(date +"%Y%m%d_%H%M%S")  # timestamp for the backup filename

    # Run the commands inside the VM via qvm-run
    run_in_vm "$target" "bash -c '
        set -e
        if [ -f \"$FILE_PATH\" ]; then
            mv \"$FILE_PATH\" \"${DESTINATION_DIR}50-server.cnf.old.$TIMESTAMP\"
            echo \"INFO: Moved $FILE_PATH to ${DESTINATION_DIR}50-server.cnf.old.$TIMESTAMP\"
        else
			echo \"INFO: No file found\"
        fi
           touch \"$FILE_PATH\"
           cat > \"$FILE_PATH\" << \"EOF\"
[server]
[mysqld]
max_allowed_packet = 64M
wait_timeout = 28800
interactive_timeout = 28800
#default-time-zone = America/Fortaleza
pid-file = /run/mysqld/mysqld.pid
basedir = /usr
skip-name-resolve
bind-address = 0.0.0.0
max_connections = 100
innodb_buffer_pool_size = 256M  # adjust to available memory
innodb_log_file_size = 64M
innodb_flush_log_at_trx_commit = 2  # better performance with minimal risk
[embedded]
[mariadb]
[mariadb-10.11]
EOF'"
   run_in_vm $target "sudo systemctl restart mariadb"
}
function mdb02 {
	
	local GATE_IP=$(qvm-run -u root -p "$GATEKEEPER" "qubesdb-read /qubes-ip 2>/dev/null")
    local target="$DATAKEEPER"
    local user="mysql-user"
    local passwd="mysql-pass"
    local host="127.0.0.1"

    run_in_vm "$target" "sudo mysql -e 'SHOW DATABASES;'"
    run_in_vm "$target" "sudo mysql -e 'CREATE DATABASE blocky CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;'"
    run_in_vm "$target" "sudo mysql -e \"CREATE USER '${user}'@'${GATE_IP}' IDENTIFIED BY '${passwd}';\""
    run_in_vm "$target" "sudo mysql -e \"GRANT ALL PRIVILEGES ON *.* TO '${user}'@'${GATE_IP}';\""
    run_in_vm "$target" "sudo mysql -e \"FLUSH PRIVILEGES;\""
    
    run_in_vm "$target" "sudo mysql -e \"CREATE USER '${user}'@'${host}' IDENTIFIED BY '${passwd}';\""
    run_in_vm "$target" "sudo mysql -e \"GRANT ALL PRIVILEGES ON *.* TO '${user}'@'${host}';\""
    run_in_vm "$target" "sudo mysql -e \"FLUSH PRIVILEGES;\""
}

make_templates() {
    echo ""
    separador
    echo "INSTALL v0.2b"
    echo "MINIMAL TEMPLATE: $MINIMAL_TEMPLATE"
    echo "CLONED TEMPLATE: $BASE_CLONED_TEMPLATE"
    separador
    echo ""
    read -p "Press ENTER to continue..."
    _make_base_template
    _config_base_template
    echo "$BASE_CLONED_TEMPLATE [OK]"
    separador
    echo ""
}
_make_base_template() {
    separador
    echo "Starting template installation procedures."
    separador
    _verify_minimal_template_and_update
    _verify_template_to_clone
}
_config_base_template() {
    echo "Installing packages[qubes, essentials and themes]"
    install_packages_full
}
_verify_template_to_clone() {
    local VM_TEMPLATE="$BASE_CLONED_TEMPLATE"
    echo "Verifying templates before cloning..."
    if qvm-ls | grep -wq "$VM_TEMPLATE"; then
        echo "Template $VM_TEMPLATE already exists. Skipping clone."
    else
        echo "Cloning $MINIMAL_TEMPLATE to $VM_TEMPLATE"
        qvm-clone "$MINIMAL_TEMPLATE" "$VM_TEMPLATE" || {
            echo "ERROR: Failed to clone $VM_TEMPLATE"
            exit 1
        }
    fi
}
_verify_minimal_template_and_update() {
    local target="$MINIMAL_TEMPLATE"
    echo "Verifying if $target exists.."
    if template_exists "$target"; then
        echo "$target already exists. Skipping installation procedure."
    else
        echo "Installing $target"
        qvm-template install "$target" || {
            echo "ERROR: Failed to install $target"
            exit 1
        }
    fi
    echo "Updating $target"
    _full_update "$target"
}
_full_update() {
    local target="${1:-$MINIMAL_TEMPLATE}"
    run_in_vm "$target" "apt update -y" || {
        echo "ERROR: Failed to update $target"
        exit 1
    }
    run_in_vm "$target" "apt upgrade -y" || {
        echo "ERROR: Failed to upgrade $target"
        exit 1
    }
    run_in_vm "$target" "apt autoremove -y" || {
        echo "ERROR: Failed to autoremove on $target"
        exit 1
    }
    shutdown_vm "$target"
}
insertdata() {
    triplex_tables
}

install_packages_full() {
    fix_locale "$BASE_CLONED_TEMPLATE"
    inject_extrepo "$BASE_CLONED_TEMPLATE"
    _install_qubes_packages
    _install_extra_packages
    _install_themes_packages
    shutdown_vm "$BASE_CLONED_TEMPLATE"
}
_install_qubes_packages() {
    local target="$BASE_CLONED_TEMPLATE"
    echo "[$target] Installing qubes packages.."
    run_in_vm "$target" "sudo apt install -y qubes-core-agent-passwordless-root qubes-usb-proxy qubes-input-proxy-sender qubes-core-agent-networking qubes-core-agent-dom0-updates qubes-core-agent-network-manager qubes-core-agent-thunar" || {
        echo "ERROR: [$target] Failed to install QUBES packages."
        exit 1
    }
}
_install_extra_packages() {
    local target="$BASE_CLONED_TEMPLATE"
    echo "[$target] Installing extra packages.."
    run_in_vm "$target" "sudo apt install tcpdump telnet iftop nmap dnsutils ncat netcat-openbsd git wget xfce4-terminal geany* -y" || {
        echo "ERROR: [$target] Failed to install EXTRA TOOLS packages."
        exit 1
    }
	run_in_vm "$target" "sudo apt install librewolf  -y" || {
        echo "ERROR: [$target] Failed to install librewolf..."
        exit 1
    }
}
_install_themes_packages() {
    local target="$BASE_CLONED_TEMPLATE"
    echo "[$target] Installing themes packages.."
    run_in_vm "$target" "apt install -y gnome-themes-extra lxappearance yaru-theme-gtk" || {
        echo "ERROR: [$target] Failed to install themes packages."
        exit 1
    }
}



make_keepers() {
    _create_gatekeeper_vm
    _create_datakeeper_vm
}

_create_gatekeeper_vm() {
    local vm_gatekeeper
    local vm_template="$BASE_CLONED_TEMPLATE"
    local vm_color
    local vm_netvm
    local vm_mem=1000
    local vm_maxmem=2000
    local vm_vcpus=2
    local vm_size="20GB"

    read -p "What should the gatekeeper VM be called? " vm_gatekeeper

    while true; do
        echo -n "Please provide the netvm for [$vm_gatekeeper]. " 
        read -p "NetVM: " vm_netvm
        TEMPNET="$vm_netvm"
        if qvm-ls | grep -wq "$vm_netvm"; then
            echo "$vm_gatekeeper netvm: $vm_netvm"
            break
        else
            clear
            echo "[$vm_netvm] not found! Please enter a valid NetVM."
        fi
    done

    # Validate the netvm before proceeding
    if ! qvm-ls --raw-list | grep -qw "$vm_netvm"; then
        echo "ERROR: The specified NetVM [$vm_netvm] does not exist. Exiting."
        exit 1
    fi

    echo "Choose a color for [$vm_gatekeeper]:"
    echo "1) Blue"
    echo "2) Orange"
    echo "3) Red"
    echo "4) Green"
    echo "5) Yellow"
    read -p "Enter the number of the desired color: " color_choice

    case $color_choice in
        1) vm_color="blue" ;;
        2) vm_color="orange" ;;
        3) vm_color="red" ;;
        4) vm_color="green" ;;
        5) vm_color="yellow" ;;
        *) echo "Invalid choice! Using default color: blue." 
           vm_color="blue" ;;
    esac

    check_vm "$vm_template"
    check_vm "$vm_gatekeeper"
    
    if qvm-ls | grep -wq "$vm_gatekeeper"; then
        echo "VM $vm_gatekeeper already exists."
    else 
        echo "Creating $vm_gatekeeper..."
        qvm-create --standalone --label "$vm_color" \
        --template "$vm_template" \
        --property memory="$vm_mem" \
        --property maxmem="$vm_maxmem" \
        --property vcpus="$vm_vcpus" \
        --property provides_network=true \
        --property netvm="$vm_netvm" "$vm_gatekeeper" || {
        echo "ERROR: [$vm_gatekeeper] Failed to create VM."
        exit 1
        }
        resize "$vm_gatekeeper" "$vm_size"
        getip "$vm_gatekeeper"
        echo "[$vm_gatekeeper] Created successfully!"
    fi

    echo ""
    separador
    GATEKEEPER="$vm_gatekeeper"
    GATEKEEPER_IP=$(qvm-run -u root -p "$GATEKEEPER" "qubesdb-read /qubes-ip 2>/dev/null")
    echo "GATEKEEPER: $vm_gatekeeper"
    echo "IP: $GATEKEEPER_IP"
    echo "NETVM: $vm_netvm"
    echo "Storage: $(qvm-volume info $vm_gatekeeper:private | grep -i "size" | awk '{print $2}')"
    separador
}
_config_gatekeeper_vm() {
    qvm-start "$GATEKEEPER"
    blocky_stages
    blocky_tables
}
blocky_stages() {
    blocky_stage_01
    blocky_stage_02
    blocky_stage_03
    blocky_stage_04
}
blocky_tables() {
	local target="$GATEKEEPER"
	local firewalld="/rw/config/qubes-firewall.d"
	local networkhooksd="/rw/config/network-hooks.d"
	local rclocald="/rw/config/rc.local.d"
	local file01="/rw/config/qubes-firewall.d/update_nft.nft"		#ok
	local file02="/rw/config/qubes-firewall.d/update_nft.sh"		#ok
	local file03="/rw/config/qubes-firewall.d/internalise.sh"		#ok
	local file04="/rw/config/network-hooks.d/internalise.sh"		#ok
	local file05="/rw/config/network-hooks.d/update_nft.sh"		#ok
	local file06="/rw/config/rc.local.d/blocky.rc"


	echo "creating tables on $target"
	run_in_vm $target "bash -c '
mkdir -p \"$firewalld\"
touch \"$file01\" \"$file02\" \"$file03\"
mkdir -p \"$networkhooksd\"
touch \"$file04\" \"$file05\"
mkdir -p \"$rclocald\"
touch \"$file06\"
'"

	echo "update_nft.nft"
	run_in_vm $target "bash -c '
cat > \"$file01\" <<\"EOF\"
#!/usr/sbin/nft -f

flush chain qubes dnat-dns

flush chain qubes custom-forward
insert rule qubes custom-forward tcp dport 53 drop
insert rule qubes custom-forward udp dport 53 drop

flush chain qubes custom-input

insert rule qubes custom-input tcp dport 53 accept
insert rule qubes custom-input udp dport 53 accept
insert rule qubes custom-input tcp dport 4000 accept

flush chain qubes dnat-dns

insert rule qubes dnat-dns iifname \"vif*\" tcp dport 53 dnat to 127.0.0.1
insert rule qubes dnat-dns iifname \"vif*\" udp dport 53 dnat to 127.0.0.1
EOF
chmod +x \"$file01\"
'"

	echo "update_nft.sh"
	run_in_vm $target "bash -c '
cat > \"$file02\" <<\"EOF\"
#!/bin/sh
sudo nft -f /rw/config/qubes-firewall.d/update_nft.nft
EOF
chmod +x \"$file02\"
'"

	echo "internalise.sh"
	run_in_vm $target "bash -c '
cat > \"$file03\" <<\"EOF\"
#!/bin/sh
find /proc/sys/net/ipv4/conf -name \"vif*\" -exec bash -c \"echo 1 | tee {}/route_localnet\" \\;
EOF
chmod +x \"$file03\"
'"

	echo "internalise.sh"
	run_in_vm $target "bash -c '
cat > \"$file04\" <<\"EOF\"
#!/bin/sh
find /proc/sys/net/ipv4/conf -name \"vif*\" -exec bash -c \"echo 1 | tee {}/route_localnet\" \\;
EOF
chmod +x \"$file04\"
'"

	echo "update_nft.sh"
	run_in_vm $target "bash -c '
cat > \"$file05\" <<\"EOF\"
#!/bin/sh
sudo nft -f /rw/config/qubes-firewall.d/update_nft.nft
EOF
chmod +x \"$file05\"
'"

	echo "blocky.rc"
	run_in_vm $target "bash -c '
cat > \"$file06\" <<\"EOF\"
#!/bin/sh
systemctl unmask blocky.service
systemctl daemon-reload
systemctl enable --now blocky.service
exec /rw/config/qubes-firewall.d/update_nft.sh
EOF
chmod +x \"$file06\"
'"
	run_in_vm $target "echo 'exec /rw/config/rc.local.d/blocky.rc' >> /rw/config/rc.local"
}

_create_datakeeper_vm() {
    local vm_datakeeper
    local vm_netvm="$TEMPNET"
    local vm_template="$BASE_CLONED_TEMPLATE"
    local vm_color
	echo ""
    read -p "What should the datakeeper VM be called? " vm_datakeeper

    echo "Choose a color for [$vm_datakeeper]:"
    echo "1) Blue"
    echo "2) Orange"
    echo "3) Red"
    echo "4) Green"
    echo "5) Black"
    read -p "Enter the number of the desired color: " color_choice

    case $color_choice in
        1) vm_color="blue" ;;
        2) vm_color="orange" ;;
        3) vm_color="red" ;;
        4) vm_color="green" ;;
        5) vm_color="black" ;;
        *) echo "Invalid choice! Using default color: blue." 
           vm_color="blue" ;;
    esac
    
    check_vm "$vm_template"
    check_vm "$vm_datakeeper"

    if qvm-ls | grep -wq "$vm_datakeeper"; then
        echo "VM $vm_datakeeper already exists. Skipping creation!"
    else
        echo "Creating $vm_datakeeper..."
        qvm-create --standalone --label "$vm_color" \
        --template "$vm_template" \
        --property netvm="$vm_netvm" "$vm_datakeeper" || {
            echo "ERROR: [$vm_datakeeper] Failed to create VM."
            exit 1
        }
    fi

    echo ""
    separador
    DATAKEEPER="$vm_datakeeper"
    DATAKEEPER_IP=$(qvm-run -u root -p "$DATAKEEPER" "qubesdb-read /qubes-ip 2>/dev/null")
    echo "DATAKEEPER: $DATAKEEPER"
    echo "IP: $DATAKEEPER_IP"
    echo "NETVM: $vm_netvm"
    echo "Storage: $(qvm-volume info $DATAKEEPER:private | grep -i "size" | awk '{print $2}')"
    separador

    echo "[$vm_datakeeper] Successfully created!"
}
_config_datakeeper_vm() {
    qvm-start "$DATAKEEPER"
    grafana_prometheus_maria
    triplex_tables
}
grafana_prometheus_maria() {
    install_grafana
    install_prometheus_stages
    install_mariadb
    # qvm-prefs "$DATAKEEPER" netvm "$GATEKEEPER"
}
triplex_tables() {
	local target="$DATAKEEPER"
	local firewalld="/rw/config/qubes-firewall.d"
	local rclocald="/rw/config/rc.local.d"
	local file01="/rw/config/qubes-firewall.d/update_nft.nft"		#ok
	local file02="/rw/config/qubes-firewall.d/update_nft.sh"		#ok
	local file03="/rw/config/rc.local.d/triplex.rc"
	local IP_X=$(qvm-run -u root -p "$GATEKEEPER" "qubesdb-read /qubes-ip 2>/dev/null")

	echo "creating tables on $target"
	run_in_vm $target "mkdir -p $firewalld"
	run_in_vm $target "touch $file01 $file02"
	run_in_vm $target "mkdir -p $rclocald"

	echo "update_nft.nft"
	run_in_vm $target "bash -c '
cat > \"$file01\" <<\"EOF\"
#!/usr/sbin/nft -f

add rule qubes custom-input ip saddr $IP_X accept
EOF
chmod +x \"$file01\"
'"

	echo "update_nft.sh"
	run_in_vm $target "bash -c '
cat > \"$file02\" <<\"EOF\"
#!/bin/sh
sudo nft -f /rw/config/qubes-firewall.d/update_nft.nft
EOF
chmod +x \"$file02\"
'"

	echo "triplex.rc"
	run_in_vm $target "bash -c '
cat > \"$file03\" <<\"EOF\"
#!/bin/bash
sudo systemctl daemon-reload
sudo systemctl enable --now prometheus.service
sudo systemctl enable --now mariadb
sudo systemctl enable --now grafana.service
exec /rw/config/qubes-firewall.d/update_nft.sh
EOF
chmod +x \"$file03\"
'"
	run_in_vm $target "echo 'exec /rw/config/rc.local.d/triplex.rc' >> /rw/config/rc.local"
#    getip "$GATEKEEPER"
#    run_in_vm "$DATAKEEPER" "sed -i 's/TARGETACCESS2/$GATEKEEPER_IP/g' /rw/config/qubes-firewall.d/update_nft.nft"
}

install_grafana() {
    local localvm="$DATAKEEPER"
    local NETVM=$(qvm-prefs "$GATEKEEPER" netvm)
    local current_netvm=$(qvm-prefs "$localvm" netvm)

    if [ "$current_netvm" != "$NETVM" ]; then
        echo "Changing $localvm netvm to $NETVM."
        qvm-prefs "$localvm" netvm "$NETVM"
    else
        echo "netvm is already set to $NETVM."
    fi
    echo "INFO: Starting Grafana installation..."
    run_in_vm "$localvm" "sudo apt-get update -y && sudo apt-get install -y apt-transport-https software-properties-common"
    run_in_vm "$localvm" "sudo mkdir -p /etc/apt/keyrings/ && sudo wget -q -O - https://apt.grafana.com/gpg.key | gpg --dearmor | sudo tee /etc/apt/keyrings/grafana.gpg > /dev/null && echo 'deb [signed-by=/etc/apt/keyrings/grafana.gpg] https://apt.grafana.com stable main' | sudo tee /etc/apt/sources.list.d/grafana.list"
    run_in_vm "$localvm" "sudo apt-get update -y && sudo apt-get install grafana -y"
    run_in_vm "$localvm" "sudo systemctl start grafana-server && sudo systemctl enable grafana-server && sudo grafana-cli plugins install grafana-piechart-panel"
    echo "Grafana successfully installed!"
    echo "grafana-piechart-panel done!"
}
install_prometheus_stages() {
    prometheus_stage_01
    prometheus_stage_02
    prometheus_stage_03
}
install_mariadb() {
    local target="$DATAKEEPER"
    run_in_vm $target "sudo apt install mariadb-server -y"
    mdb01
    mdb02
}

main_execute() {
    make_templates
    make_keepers
    _config_gatekeeper_vm 
    _config_datakeeper_vm
    fnl
}

main_execute
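As a quick sanity check after `main_execute` completes, you can query a known-bad and a known-good domain from an AppVM that already uses the gatekeeper as its NetVM. This is a hedged sketch, not part of the script: the AppVM name `work` is an example, and the helper only prints the dom0 commands so you can copy them.

```shell
# Print the dom0 commands for a DNS filtering sanity check; "work" is an
# example AppVM name, assumed to already be behind the gatekeeper.
check_dns() {
    local appvm="$1" domain="$2"
    echo "qvm-run -p $appvm 'dig +short $domain'"
}
check_dns work doubleclick.net   # likely on the StevenBlack list: expect 0.0.0.0
check_dns work qubes-os.org      # not blocked: expect a real A record
```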


I hope you can successfully run it now, @murdock.

@ephile, try this now and see if it works.

@solene, it’s necessary to compile Blocky with the correct version and architecture, and all packages are installed on the template and VMs.

@barto, I fixed it. This new script asks you for a NetVM; now you can choose it.


Sorry if this is a silly question.

  • Can the sys-blocky qube be made immutable with a command added to the script?

  • What can I do if the qube blocks a connection I need for some reason? Is there a way to pause filtering?
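For instance, I wondered whether simply switching a qube's NetVM back would effectively pause filtering for that qube. This is only a guess, not something from the script; the names below are examples, and the helpers just print the dom0 commands:

```shell
# Guesswork, not from the script: bypass filtering per-qube by switching its
# NetVM, instead of stopping Blocky inside the gatekeeper (which would leave
# the DNS redirect pointing at a dead resolver). All names are examples.
bypass_filtering()  { echo "qvm-prefs $1 netvm sys-firewall"; }
restore_filtering() { echo "qvm-prefs $1 netvm sys-blocky"; }
bypass_filtering work
restore_filtering work
```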

Thanks for sharing!

I’m still searching for a Qubes OS Pi‑hole solution or alternative. Your introduction looks very promising, but my initial enthusiasm faded once I saw the complex setup and configuration. Don’t get me wrong, I appreciate the effort you put into documenting everything for easy reuse, but I would have to go through nearly 1,000 lines of code first.

A few suggestions:

  1. Update only the first script in your original post (use the edit function).
    Add a brief note with the date of the change so readers can track revisions easily.
  2. Replace the remaining Portuguese messages in the script with English.

  3. It might make sense to add a few words on Grafana, Prometheus and MariaDB (MySQL) for Blocky’s stats and query monitoring. Unfortunately, this adds complexity, but if you want to monitor Blocky you need visualization and a record of the traffic.
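On the monitoring point: even without the full Grafana/Prometheus stack, Blocky's native metrics endpoint (port 4000 in the script's config) can be inspected directly inside the gatekeeper. A minimal sketch; the `blocky_` metric prefix is an assumption about Blocky's exporter, and the helper only prints the command to run:

```shell
# Print the in-VM command that dumps Blocky's Prometheus metrics (port 4000
# per the script's config) filtered to Blocky's own counters.
metrics_cmd() { echo "curl -s http://127.0.0.1:${1:-4000}/metrics | grep blocky_"; }
metrics_cmd
```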

If you need something simpler, there’s Safing Portmaster, which works on a per-appVM basis.
Unfortunately you can’t import filter lists; you can only use the ones provided.

For example, in the appVM I use for this forum I block everything on the filter list except Microsoft, which is needed for GitHub (Azure?), and Google (I don’t remember exactly, but something used its servers too).
The experimental lists include Scott Helme’s and Daniel Cuthbert’s, if that means anything.

You can use this URL filtering HTTPS proxy, but it requires more work than using Safing :slight_smile:

Dear @whoami , here I come once again with MirageOS stuff :slight_smile:
Not so long ago, I wrote a unikernel that does that (/cc @xyhhx). Unfortunately, it’s not any easier to install than qubes-mirage-firewall or qubes-miragevpn.
As I don’t use it on a regular basis, it’s not as well-tested as qubes-mirage-firewall, but I’ve left it in a usable state: GitHub - palainp/qubes-mirage-dnshole
If you want to try it out, I can produce a docker/podman build script that checks the resulting hash sum, like qmf.
The blocking list needs to be downloaded or crafted manually and copied onto the unikernel VM’s root disk. That disk is read-only and the unikernel is basically stateless, so you’ll need to update the list manually when needed.

EDIT: I’ve started to use it as a netvm for some AppVM qubes, so I’ll be able to check it as a daily sys-vm :slight_smile:

Hi @den1ed (and everyone else here)

So of course when I saw (5mins and you are done) I knew I would need several hours (3:24 to be exact) to download the template, copy the script over to dom0, and then launch and configure it.

Very impressed with the outcome so far!

However:

Probably a dumb question…

I set the gatekeeper to sys-net. Was that foolish? Does it actually need to be behind sys-firewall instead? Or something else?

Thanks in advance!

Well - for now I changed it to run behind the sys-firewall.

I now assume that to make this fully functional I need to route the desired qubes through the gatekeeper.
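I assume that would be something like the following from dom0. The qube names and the gatekeeper name are placeholders for whatever was chosen during setup, and the helper just prints the commands:

```shell
# Print dom0 commands pointing example qubes at the gatekeeper; the names
# "personal", "untrusted" and "sys-blocky-gw" are placeholders.
route_via_gatekeeper() {
    local gw="$1"; shift
    local q
    for q in "$@"; do
        echo "qvm-prefs $q netvm $gw"
    done
}
route_via_gatekeeper sys-blocky-gw personal untrusted
```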

Now for anonymity, does every qube need its own separate instance of the gatekeeper? What are the repercussions of using this single instance globally for all qubes?

Also, is it wise to include sys-whonix (or to run another instance separately)?

At this moment I also use sys-whonix for all updates (as opposed to configuring every qube to update through Tor), and this was applied using Qubes Global Config (dumb idea?)

So currently my thinking is:

Qube -then- gatekeeper -then- sys-whonix

Or maybe the question should be:

What is the best way to implement this code globally for all qubes using tor?

Thanks once more!


Hi, I actually liked using Portmaster more because its installation and configuration are so accessible that even an average user can manage them. In fact, I looked for a guide to installing it on Qubes and couldn’t find one, so I was planning to publish an updated guide based on my implementation, but I’m postponing it until I’ve looked into some steps that are different from, and safer than, the method I implemented.
