Configurations, scripts and stuff for Proxmox VE server and supporting client VMs
Setup entrypoints now live at the project root:

- `./setup-template.sh` writes `template/setup.conf`
- `./setup-systems.sh` writes `systems/setup.conf`

Compatibility wrappers still exist at `template/setup.sh` and `systems/setup.sh`.
VM clone entrypoints now live in `systems/`:

- `./systems/create-vm-from-template.sh`
- `./systems/*-server.sh` (per-distro wrappers)

Compatibility wrappers remain in `template/` for older command paths.
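The per-distro wrappers ultimately clone a template VM. As a rough sketch of the kind of command they presumably assemble (the template VMID, new VMID, and name below are hypothetical, and the real scripts may take different arguments or options), the command is built as a string here so it can be inspected without a Proxmox host:

```shell
# Hypothetical values; the real wrappers would read theirs from
# systems/setup.conf or command-line arguments.
template_id=9000
new_id=120
name="debian-server"

# qm clone is the Proxmox CLI for cloning a VM; --full makes a full
# (not linked) clone. Echoed rather than executed for inspection.
cmd="qm clone $template_id $new_id --name $name --full"
echo "$cmd"
```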
Use the root-level sync helper to push this project folder to your Proxmox server without setting up Git on the server:
```
./sync-to-proxmox.sh
```

Defaults:

- Host: `192.168.50.60`
- SSH user: `root`
- Remote path: `~`
With this default, top-level scripts land directly in your home directory on Proxmox (for example ~/combine-keys.sh).
Override host (first argument):

```
./sync-to-proxmox.sh 192.168.50.70
```

Override host and remote path:

```
./sync-to-proxmox.sh root@192.168.50.70 proxmox-scripts
./sync-to-proxmox.sh root@192.168.50.70 /root/proxmox-scripts
```

Remote path behavior:
- Absolute path (`/root/proxmox-scripts`) syncs to that exact location.
- Home-relative path (`proxmox-scripts` or `~/proxmox-scripts`) syncs under the remote user's home.
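The remote-path rule above can be sketched in pure shell (the helper name is hypothetical, and the real script's logic may differ):

```shell
# Decide how a remote path argument is treated, per the behavior described above.
resolve_remote_path() {
  case "$1" in
    /*)    printf '%s\n' "$1" ;;       # absolute: sync to that exact location
    "~/"*) printf '%s\n' "${1#\~/}" ;; # ~/foo: strip the prefix; rsync resolves
                                       # the bare name under the remote $HOME
    *)     printf '%s\n' "$1" ;;       # bare name: already home-relative
  esac
}

resolve_remote_path /root/proxmox-scripts
resolve_remote_path "~/proxmox-scripts"
resolve_remote_path proxmox-scripts
```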
Optional flags:

- `--dry-run` shows what would change
- `--delete` removes remote files that no longer exist locally
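The `tar`-over-`ssh` fallback mentioned in the Windows Git Bash note below boils down to piping a tar stream between two processes. A local demonstration of the pattern (throwaway temp directories stand in for the local tree and the remote host; the real script's invocation may differ):

```shell
# Throwaway source and destination directories for the demo.
src=$(mktemp -d)
dst=$(mktemp -d)
touch "$src/combine-keys.sh"

# The core pattern: tar up the source and unpack it on the other side
# of a pipe. Over the network, the right-hand side would run behind
# something like: ssh root@HOST 'tar -xf - -C REMOTE_PATH'
( cd "$src" && tar -cf - . ) | ( cd "$dst" && tar -xf - )

ls "$dst"
```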
Windows Git Bash note:

- If `rsync` is unavailable in Git Bash, the script automatically falls back to `tar` over `ssh`.
- In fallback mode, `--delete` is not supported.
- To install `rsync` on Windows, install MSYS2 and run in an MSYS2 shell:
```
pacman -Syu
pacman -S rsync
```

Generate `configs/common/network-data.yaml` for the cloud-init profile workflow:
```
./create-network-snippet.sh
```

Defaults:

- Interface pattern: `ens18`
- DHCPv4: `true`
- DHCPv6: `true`
- Output file: `./configs/common/network-data.yaml`
If `template/setup.conf` exists, `NAME_SERVERS` and `SEARCH_DOMAIN` are used as prompt defaults.
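For illustration, a hypothetical excerpt of `template/setup.conf` containing those two keys (values borrowed from the non-interactive example below; the real file may hold additional settings):

```shell
# Hypothetical template/setup.conf excerpt; only these two keys are
# used as prompt defaults by create-network-snippet.sh.
NAME_SERVERS="192.168.50.10 1.1.1.1"
SEARCH_DOMAIN="homelab.local"
```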
Non-interactive example:

```
./create-network-snippet.sh --nameservers "192.168.50.10 1.1.1.1" --search-domains "homelab.local" --non-interactive --force
```

Two helper scripts can recreate the stage and prod clusters by removing matching
VMs (identified by the `stage` or `prod` tag) and then invoking the per-distro
wrappers to recreate the VMs:
- `./stage-k3s.sh`
- `./prod-k3s.sh`
They support `--dry-run` / `-n` (list VMIDs only) and `--confirm` / `-y`
(non-interactive) flags. Before running them, ensure the utilities under
`systems/utils/` are present on the Proxmox host (or in the working tree):
- `systems/utils/find-vmids-by-tag.sh`
- `systems/utils/shutdown-vms.sh`
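As a rough sketch of the tag lookup that `systems/utils/find-vmids-by-tag.sh` presumably performs (Proxmox stores VM tags as a semicolon-separated string; the inventory lines and VMIDs below are made up, and the real script likely queries `qm list` or the API instead):

```shell
tag="stage"

# Fake "vmid tags" lines standing in for real Proxmox inventory data.
matches=$(printf '%s\n' \
    "101 stage;k3s" \
    "102 prod;k3s" \
    "103 misc" |
  while read -r vmid tags; do
    # Surround with ';' so only whole tags match (e.g. "stage" never
    # matches a VM tagged "staged").
    case ";$tags;" in
      *";$tag;"*) echo "$vmid" ;;
    esac
  done)

echo "$matches"
```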
Typical usage:

```
# verify which VMs would be affected
./prod-k3s.sh --dry-run

# when ready: recreate non-interactively
sudo ./prod-k3s.sh --confirm
```

These scripts are destructive: use `--dry-run` first and ensure you have
backups or snapshots of any data you need to keep.