FRR and OpenWRT on PVE 8.3 for Virtualized Networks
TL;DR: A poor man’s private cloud network without proper IPAM or DNS integration, and with the cluster firewall severed on Proxmox VE. As simple as it gets. I had a dream of trying out redundant storage on a private cloud (as in VPS hosting). It took a year and a half to comprehend the vocabulary, another year and a half to try out KVM on a Fedora workstation, and yet another year and a half of laying out the cluster, but at last I’ve got to a PoC. ...
FOG Server Bare-metal Backups
That gent who is’t doest not checketh on his coff’rs shalt loseth his apples! TL;DR: I want (open-source) bare-metal backups of virtualization cluster nodes. There is Clonezilla, and I had written about it before. There is also a server edition, which makes deploying a single image to multiple computers faster. But it was hard for me to install and, more importantly, not cut out for my needs. FOG Project, on the other hand, has: ...
Unattended Clonezilla Disk Backup
TL;DR: Scripted away Clonezilla interactions to blindly back up and then restore a Proxmox VE disk image, as I wanted to move my Ryzen 5850X PVE node onto a faster NVMe SSD in an automated way. I can finally get to compiling that KVM-enabled LineageOS kernel for the coming post. There will be plenty of screenshots, but they are stashed at the end. Acquiring ocs-sr commands Clonezilla has the ocs-sr programme to do its bidding. By default, users interact with the Clonezilla TUI to provide their desired operation. ...
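As a taste of what the scripted interaction boils down to, here is a hedged sketch of non-interactive ocs-sr calls; the image name and disk devices are placeholders, and the exact flag set used in the post may differ:

```sh
# Sketch: save /dev/sda into an image named "pve-node", then restore it onto
# the new NVMe drive. Names and devices are placeholders; the flags follow
# the command line Clonezilla itself prints after a TUI run.
/usr/sbin/ocs-sr -q2 -j2 -z1p -i 4096 -p true savedisk pve-node sda
/usr/sbin/ocs-sr -g auto -e1 auto -e2 -r -j2 -p true restoredisk pve-node nvme0n1
```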
WoL-enabled Pseudo-headless Proxmox Nodes with Custom NIC Names
I had some spare parts and wanted to set up a Proxmox Backup Server for my 3-node Ceph-enabled cluster. It didn’t go strictly as planned, but I found a way to ditch GPUs on my nodes, which then turned out to be too good to be true. Now, here I am with extra NICs and fancy names returned by ip addr. Lemons -> Lemonade WoL, How? As of now, my cluster is tethered only to a power outlet, with a wireless network uplink. The cluster’s router is a Teltonika RUT240, which supports ZeroTier through a plug-in. Being able to turn machines on over an overlay network is a plus for me. I don’t use it across the globe, but this keeps things simple and neatly isolated from the smart home appliances connected to the main router. ...
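As background, waking a node boils down to enabling WoL in the firmware and making the NIC honor magic packets; a hedged sketch, where the interface name and MAC address are placeholders:

```sh
# Verify the NIC supports magic-packet wake (look for "g" in the flags),
# then enable it. The setting is not persistent by itself; persist it via
# udev or a systemd unit.
ethtool enp6s0 | grep Wake-on
ethtool -s enp6s0 wol g

# From another host on the (overlay) network, send the magic packet:
wakeonlan AA:BB:CC:DD:EE:FF
```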
Mermaid Diagram
I am late to the party! But hey, to each their own… My archaic plan of getting fluent in PlantUML has failed; mermaid.js.org should do. To be honest, if I had known early on that diagrams.net had PlantUML insertion capability, then I might have stuck to it. I also came across D2, but I do not need server-side image generation just yet. Until then, mermaid’s ubiquity wins. So an example is placed below to ensure mermaid integration does not break my blog’s GitLab Pages pipeline. Henceforth, my wacky ASCII diagrams are no more [*]. ...
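For a flavor of the syntax, an illustrative mermaid snippet (not the exact example from the post) looks like this:

```mermaid
graph LR
  pve1 --- pve2
  pve2 --- pve3
  pve3 --- pve1
```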
On Proxmox 8.1 Cloudinit ARM64 VM Creation
TL;DR: This is a guide on running 64-bit ARM operating systems on Proxmox (amd64) via emulation. Although cloudinit is the charm here, an ordinary ISO mount for manually installing an OS works as well. This guide assumes previous experience in setting up a VM on the Proxmox VE Web UI and acquaintance with the CLI. The steps taken follow as an ordered list. Terminal use is minimal; however, check out Techno Tim’s Notes and the Proxmox VE documentation for creating and modifying a VM from the terminal. If you have not set up cloudinit before, watching Techno Tim’s video is a good start. ...
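For orientation, the terminal equivalent of such a setup is sketched below; the VM ID, storage name, disk numbering, and cloud image are placeholders, and the same can be done through the Web UI:

```sh
# Sketch: emulated arm64 VM on an amd64 PVE host (placeholder ID/storage/image).
qm create 9000 --name arm64-cloudinit --arch aarch64 --machine virt --bios ovmf \
  --cores 2 --memory 2048 --net0 virtio,bridge=vmbr0 --scsihw virtio-scsi-pci
qm set 9000 --efidisk0 local-lvm:0                 # aarch64 guests boot via UEFI
qm importdisk 9000 noble-server-cloudimg-arm64.img local-lvm
qm set 9000 --scsi0 local-lvm:vm-9000-disk-1
qm set 9000 --ide2 local-lvm:cloudinit --boot order=scsi0 --serial0 socket
```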
Proxmox 8.1 Ceph with Routed IPv6 Physically Isolated Network
Another short one. After sourcing some almost-dead second-hand enterprise SSDs and crimping some short ethernet cables, I can migrate VMs and CTs at will or with an HA policy. Homelab 2.0 is here. ASCII diagram: The diagram below does not show the corona/internet NIC; it shows only the Ceph-related, physically closed-loop network.

```
    +------+       +------+       +------+
    |      |       |      |       |      |
###[0]###  |   ###[0]###  |   ###[0]###  |
#       #  |   #       #  |   #       #  |
#  pve  #  |   #  pve  #  |   #  pve  #  |
#   1   #  |   #   2   #  |   #   3   #  |
#       #  |   #       #  |   #       #  |
###[1]###  |   ###[1]###  |   ###[1]###  |
    T      |       T      |       T      |
    |      +-------+      +-------+      |
    +------------------------------------+
```

End result: ...
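To make “routed” concrete: with three nodes and two NICs each, every node is directly attached to the other two, so host routes over the point-to-point links suffice. A hedged ifupdown sketch for one node follows, with hypothetical ULA addresses and NIC names; the post’s actual addressing may differ:

```
# /etc/network/interfaces fragment on pve1 (hypothetical plan:
# pve1=fc00::1, pve2=fc00::2, pve3=fc00::3; NIC names en1/en2 are assumptions).
auto en1
iface en1 inet6 static
    address fc00::1/128
    post-up ip -6 route add fc00::2/128 dev en1   # direct link to pve2

auto en2
iface en2 inet6 static
    address fc00::1/128
    post-up ip -6 route add fc00::3/128 dev en2   # direct link to pve3
```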
How I made my own R8 5850X (10c/20t)
TL;DR: The adventure of reviving a CPU as a hypervisor. A few months ago, I found a bargain on a seemingly faulty Ryzen R9 5900X unit. Strangely enough, it was said to drop internet connectivity while gaming. Long story short, there are 2 defective physical cores out of the 12 present. I ended up isolating them with the isolcpus= kernel parameter within the boot menu. Methodology: Video recording of the process. Includes a temporary solution for GRUB2 (see /etc/default/grub, or /etc/kernel/cmdline for Proxmox systemd-boot, for persistence), an example of OS installation failure without the modification, and a test example. ...
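For persistence, the parameter lands in the bootloader configuration; a sketch with hypothetical logical CPU numbers (two bad cores plus their SMT siblings on a 12c/24t part, where siblings are N and N+12):

```sh
# GRUB (/etc/default/grub), hypothetical CPU list; regenerate config and reboot:
GRUB_CMDLINE_LINUX_DEFAULT="quiet isolcpus=2,8,14,20"
update-grub

# Proxmox with systemd-boot: append the same isolcpus=... to /etc/kernel/cmdline,
# then apply with:
proxmox-boot-tool refresh
```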
Terraform Provider Libvirt
dmacvicar’s libvirt provider is already in the official registry. Yet, I intend to contribute functionality which I would like to use in my homelab. This post is a running summary of the process. Development setup Set up the environment:

```
mkdir -p ~/GitRepos
mkdir -p ~/.terraform.d/plugins/local-registry/cbugk/libvirt/0.7.0/linux_amd64
```

For installing terraform and for the initial provider test, Fabian Lee’s introduction was followed. His main.tf file:

```hcl
terraform {
  required_version = ">= 1.0.1"
  required_providers {
    libvirt = {
      source  = "dmacvicar/libvirt"
      version = "0.6.10"
    }
  }
}

provider "libvirt" {
  uri = "qemu:///system"
}

resource "libvirt_domain" "terraform_test" {
  name = "terraform_test"
}
```

Compilation from source can be done by simply running make; the terraform-provider-libvirt binary will be output in the repository’s root. For more info, check out the provider’s GitHub repository; version 0.7.0 was used.

```
# Clone repository
$ mkdir -p ~/GitRepos
$ git clone https://github.com/dmacvicar/terraform-provider-libvirt.git
$ cd terraform-provider-libvirt
# Compile
$ make
# Move provider binary to registry
$ mv terraform-provider-libvirt ~/.terraform.d/plugins/local-registry/cbugk/libvirt/0.7.0/linux_amd64/
```

For using a local copy of the provider, the filesystem_mirror property was set under ~/.terraformrc (the file was not present). Sources: Sam Debruyn’s blog post, tnom’s SO answer, terraform docs.

```
$ ls -la ~/.terraform.d/plugins/local-registry/cbugk/libvirt/0.7.0/linux_amd64/terraform-provider-libvirt
-rwxr-xr-x. 1 cbugk cbugk 24139069 Nov 13 09:41 /home/cbugk/.terraform.d/plugins/local-registry/cbugk/libvirt/0.7.0/linux_amd64/terraform-provider-libvirt
```

Note that since ~/.terraform.d/plugins is a default implicit override directory, creating the rc file is not required. However, here is the respective configuration. ~/.terraformrc modified: ...
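For illustration, a filesystem_mirror stanza matching the directory layout above might look like the following minimal sketch; it uses the standard Terraform CLI configuration syntax, and the include/exclude patterns are assumptions matching the local-registry/cbugk/libvirt source used here, not necessarily the exact file from the post:

```hcl
# ~/.terraformrc — minimal sketch (assumed, not the post's verbatim file).
provider_installation {
  # Serve this one provider from the local directory...
  filesystem_mirror {
    path    = "/home/cbugk/.terraform.d/plugins"
    include = ["local-registry/cbugk/libvirt"]
  }
  # ...and everything else from the default registries.
  direct {
    exclude = ["local-registry/cbugk/libvirt"]
  }
}
```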
Windows Virtual Machine on Bare-metal
Why exactly? I wanted to have a duo (dual-boot) of Windows and Fedora on my “server”. However, I don’t want to reboot every five minutes, unlike the olden days. I also wanted to game natively on this system. One could run a VM under KVM and set up PCIe passthrough for peripherals; I intend to try that and game modest titles once my new GPU arrives. But there is another, orthogonal way of solving this: booting the same disk both as bare metal and as a VM. ...
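To illustrate the disk-sharing idea (not the exact commands from the post), a raw physical disk can be handed to QEMU directly, so the guest boots the very same installation; the disk node and OVMF path below are assumptions:

```sh
# Boot an existing physical disk as a KVM guest (hypothetical /dev/sdb).
# UEFI firmware is used so the on-disk bootloader is discovered the same
# way as on bare metal; adjust the OVMF path for your distribution.
sudo qemu-system-x86_64 \
  -enable-kvm \
  -cpu host \
  -smp 4 \
  -m 8G \
  -bios /usr/share/OVMF/OVMF.fd \
  -drive file=/dev/sdb,format=raw
```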