Another short one.

After sourcing some almost-dead second-hand enterprise SSDs and crimping some short ethernet cables, I can migrate VMs and CTs at will or with HA policy.
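
For example, from the shell (the VMIDs here are placeholders, not my actual guests), a migration boils down to:

root@pve1:~# qm migrate 100 pve2 --online    # live-migrate VM 100 to pve2
root@pve1:~# pct migrate 200 pve3 --restart  # restart-migrate CT 200 to pve3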

Homelab 2.0 is here.

Homelab 2.0


ASCII diagram:

The diagram below does not show the corona/internet NIC; it shows only the CEPH-related, physically closed-loop network.

      +------+      +------+      +------+
      |      |      |      |      |      |
  ###[0]###  |  ###[0]###  |  ###[0]###  |
  #       #  |  #       #  |  #       #  |
  #  pve  #  |  #  pve  #  |  #  pve  #  |
  #   1   #  |  #   2   #  |  #   3   #  |
  #       #  |  #       #  |  #       #  |
  ###[1]###  |  ###[1]###  |  ###[1]###  |
      T      |      T      |      T      |
      |      +------+      +------+      |
      +----------------------------------+

End result:

Dual iperf3 test

Proxmox has a Full Mesh - Routed (Simple) example for IPv4; here is my IPv6 version.

The important part is the interfaces enp4s0f0 and enp4s0f1 from a dual-port X540-T2 10G NIC. Taken from Michael Hampton’s answer; also check out A.B’s answer.

root@pve1:~# cat /etc/network/interfaces:

# network interface settings; autogenerated
# Please do NOT modify this file directly, unless you know what
# you're doing.
#
# If you want to manage parts of the network configuration manually,
# please utilize the 'source' or 'source-directory' directives to do
# so.
# PVE will preserve these directives, but will NOT read its network
# configuration from sourced files, so do not attempt to move any of
# the PVE managed interfaces into external files!

auto lo
iface lo inet loopback

iface enp6s0 inet manual

auto enp4s0f0
iface enp4s0f0 inet6 static
        address fd0c:b219:7c00:2781::1
        netmask 125
        mtu 9000
post-up /sbin/ip -f inet6 route add fd0c:b219:7c00:2781::2 dev enp4s0f0
post-down /sbin/ip -f inet6 route del fd0c:b219:7c00:2781::2 dev enp4s0f0

auto enp4s0f1
iface enp4s0f1 inet6 static
        address fd0c:b219:7c00:2781::1
        netmask 125
        mtu 9000
post-up /sbin/ip -f inet6 route add fd0c:b219:7c00:2781::3 dev enp4s0f1
post-down /sbin/ip -f inet6 route del fd0c:b219:7c00:2781::3 dev enp4s0f1

auto vmbr0
iface vmbr0 inet static
        address 192.168.240.11/24
        gateway 192.168.240.1
        bridge-ports enp6s0
        bridge-stp off
        bridge-fd 0

source /etc/network/interfaces.d/*

This manual change needs to be performed on all nodes and adapted to each node’s interfaces. My setup is homogeneous, so cycling the numbers was enough in my case.
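
As a sketch of what that cycling means in config form (assuming the same cabling pattern on every node; double-check which port faces which peer), pve2’s mesh stanzas would be roughly:

auto enp4s0f0
iface enp4s0f0 inet6 static
        address fd0c:b219:7c00:2781::2
        netmask 125
        mtu 9000
post-up /sbin/ip -f inet6 route add fd0c:b219:7c00:2781::3 dev enp4s0f0
post-down /sbin/ip -f inet6 route del fd0c:b219:7c00:2781::3 dev enp4s0f0

auto enp4s0f1
iface enp4s0f1 inet6 static
        address fd0c:b219:7c00:2781::2
        netmask 125
        mtu 9000
post-up /sbin/ip -f inet6 route add fd0c:b219:7c00:2781::1 dev enp4s0f1
post-down /sbin/ip -f inet6 route del fd0c:b219:7c00:2781::1 dev enp4s0f1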

After the change, apply the configuration with:

systemctl restart networking
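
Before the throughput test, a jumbo-sized ping is a quick way to confirm both reachability and that the 9000 MTU actually holds (8952 bytes of payload + 40-byte IPv6 header + 8-byte ICMPv6 header = 9000), for example from pve1:

root@pve1:~# ping -c 3 fd0c:b219:7c00:2781::2
root@pve1:~# ping -c 3 -M do -s 8952 fd0c:b219:7c00:2781::3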

Tested by:

  • root@pve3:~# iperf3 -s --port 2345 # for pve1
  • root@pve3:~# iperf3 -s --port 6789 # for pve2
  • root@pve1:~# iperf3 -c fd0c:b219:7c00:2781::3 --port 2345 -P 4
  • root@pve2:~# iperf3 -c fd0c:b219:7c00:2781::3 --port 6789 -P 4

Previously tried:

I first tried to make do with only Proxmox VE’s Web GUI.

  • Only a Linux Bridge or an OVS Bridge worked. Although that should suffice for a 3-node cluster, I wanted an approach that could scale to 5 or 7 nodes.
  • Any type of bonding, other than broadcast that is, did not work. I would have loved it if XOR Linux bonding had worked out of the box, but I could only ping between nodes 1 and 3. The XOR hashing decision for the interfaces was not cyclic, it seems, and I had no intention of chasing a solution, if one even exists. The broadcast variant that did work is sketched below.
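
For completeness, the broadcast option is roughly the Broadcast setup from the same Proxmox full-mesh wiki page; the GUI produces something along these lines (interface names match mine, the address is illustrative rather than my exact config):

auto bond0
iface bond0 inet6 static
        address fd0c:b219:7c00:2781::1/125
        bond-slaves enp4s0f0 enp4s0f1
        bond-mode broadcast
        bond-miimon 100
        mtu 9000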

U-L-What?

During this adventure I came across ULAs in a blog post. So while at it, I chose myself a subnet: fd0c:b219:7c00::/48 [fdXc:bug:kXX::/48 if one squints hard enough]. Here is the declaration: I ask the globe to honor my rightful claim to this subnet. Anyone who opposes should apply for a duel of rock-paper-scissors and bring their own hand.
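
Strictly speaking, RFC 4193 wants the 40-bit global ID to be generated randomly rather than hand-picked; for those who would rather roll dice than squint, something like this one-liner spits out a compliant /48:

root@pve1:~# openssl rand -hex 5 | sed 's/^\(..\)\(....\)\(....\)$/fd\1:\2:\3::\/48/'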

Now what?

At this point, creating a CEPH pool for VMs/CTs to use is trivial. Then setting an HA policy, tweaking retention time, etc. allows for resiliency through redundancy. Here is an LXC container automatically migrated from pve2 to pve1 after a shutdown; the rough CLI counterparts are sketched below.

Container HA Migration
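
For reference, the CLI counterparts of those GUI clicks look roughly like this (pool name and VMID are placeholders):

root@pve1:~# pveceph pool create vmdata --add_storages   # create the RBD pool plus a matching PVE storage entry
root@pve1:~# ha-manager add ct:200 --state started       # put CT 200 under HA management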

Bibliography