επιμελητής • (epimelitís) m (plural επιμελητές, feminine επιμελήτρια)
one who takes care of a thing, in an official capacity; a curator, an editor, (law) a caretaker or guardian
- We run Home Assistant
- ... but in a Container
- ... that runs on Kubernetes
- ... inside a Kernel-based Virtual Machine
- ... that runs on Alpine Linux
- ... on a Raspberry Pi
For this experiment I'll be using a Raspberry Pi 5 with a 256 GB class A2 microSD card (it was cheap).
Okay, hear me out:
1. Setting up Alpine Linux as the Hypervisor OS
2. Provisioning Talos Linux
3. The part where we use Kubernetes
Get Alpine Linux from the Downloads page and select the Raspberry Pi variant.
For example, Alpine Linux 3.19.1 for Raspberry Pi can be downloaded from:
curl -LO https://dl-cdn.alpinelinux.org/alpine/v3.19/releases/aarch64/alpine-rpi-3.19.1-aarch64.img.gz
Flash the image using [Raspberry Pi Imager] and boot your Pi from the SD card.
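If you'd rather skip the Imager GUI, writing the image from a terminal works too. This is only a sketch: `/dev/sdX` is a placeholder for your SD card's device node, so check it with lsblk before letting dd loose on it.

# Identify the SD card first; dd overwrites whatever device you point it at.
lsblk
gunzip -c alpine-rpi-3.19.1-aarch64.img.gz | sudo dd of=/dev/sdX bs=4M status=progress conv=fsync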
I had to plug in my keyboard to the first USB 3 port after the Pi was booted. Other ports or timings didn't work for me.
Following the setup-alpine instructions,

- log in as `root` with no password
- run `setup-alpine` and answer truthfully
- disable remote login for root
- create a new user for yourself
- enable LAN and Wi-Fi
- enable the SSH server
- create a `sys` partition
- reboot
- log in and run `ip a` to get the Pi's IP address
From your regular machine, run `ssh-copy-id <USER>@<IP>`.
If this works you can unplug the display and keyboard.
See also Granting Your User Administrative Access for `doas` (`sudo` in Ubuntu lingo). As root, run
apk add doas
echo 'permit :wheel' > /etc/doas.d/doas.conf
addgroup <USER> wheel
After this, `doas <command>` does the trick.
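A quick sanity check: run the following as your new user. It should prompt for your password and print root, confirming the wheel-group rule is in effect.

doas whoami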
Edit the `/etc/apk/repositories` file:
doas apk add vim
doas vim /etc/apk/repositories
Enable the http://alpine.sakamoto.pl/alpine/v3.19/community repository.
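If you'd rather not edit the file by hand, appending the line and refreshing the index looks roughly like this (the sakamoto.pl mirror is just the one used above; any Alpine mirror will do):

echo 'http://alpine.sakamoto.pl/alpine/v3.19/community' | doas tee -a /etc/apk/repositories
doas apk update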
apk add \
libvirt-daemon libvirt-client \
qemu-img qemu-system-arm qemu-system-aarch64 qemu-modules \
openrc
rc-update add libvirtd
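rc-update only enables the service for future boots; to bring it up immediately and check that it is running:

rc-service libvirtd start
rc-service libvirtd status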
Add your user to the `libvirt` group:
addgroup <USER> libvirt
By default, libvirt uses NAT for VM connectivity. If you want to use the default configuration, you need to load the `tun` module.
modprobe tun
echo "tun" >> /etc/modules-load.d/tun.conf
cat /etc/modules | grep tun || echo tun >> /etc/modules
If you prefer bridging a guest over your Ethernet interface, you need to make a bridge.
Add the scripts that will create bridges off `/etc/network/interfaces`:
apk add bridge
Add the network bridge:
brctl addbr brlan
brctl addif brlan eth0
Change your `/etc/network/interfaces` to

- disable `dhcp` on your `eth0`
- add `iface brlan inet dhcp`
- set `bridge-ports eth0` to bridge it with `eth0`
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet manual

auto brlan
iface brlan inet dhcp
    bridge-ports eth0
    bridge-stp 0
    post-up ip -6 a flush dev brlan; sysctl -w net.ipv6.conf.brlan.disable_ipv6=1

auto wlan0
iface wlan0 inet dhcp
For more information, see Bridge and Bridging for Qemu (this one is important).
To restart the networking stack, run
service networking restart
If it fails, reconnect your keyboard ...
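Assuming it worked, the DHCP lease should now be on the bridge rather than on eth0; something like this confirms it:

ip addr show brlan
ip route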
In order to use libvirtd to remotely control KVM over SSH, PolicyKit needs a `.pkla` file informing it that this is allowed. Write the following file to `/etc/polkit-1/localauthority/50-local.d/50-libvirt-ssh-remote-access-policy.pkla`:
apk add dbus polkit
rc-update add dbus
We do that:
mkdir -p /etc/polkit-1/localauthority/50-local.d/
cat <<EOF > /etc/polkit-1/localauthority/50-local.d/50-libvirt-ssh-remote-access-policy.pkla
[Remote libvirt SSH access]
Identity=unix-group:libvirt
Action=org.libvirt.unix.manage
ResultAny=yes
ResultInactive=yes
ResultActive=yes
EOF
For the Terraform `libvirt` provider to work, we also need to enable TCP forwarding for the SSH server.
sed -i '/^AllowTcpForwarding no$/s/no/yes/' /etc/ssh/sshd_config
service sshd restart
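At this point remote management should work end to end. From your regular machine (assuming the libvirt client tools are installed there), a quick test over SSH would be:

virsh -c qemu+ssh://<USER>@<IP>/system list --all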
The `libvirt-guests` service (available from Alpine 3.13.5) allows running guests to be automatically suspended or shut down when the host is shut down or rebooted. The service is configured in `/etc/conf.d/libvirt-guests`. Enable the service with:
rc-update add libvirt-guests
curl -LO https://github.com/siderolabs/talos/releases/download/v1.6.4/metal-arm64.iso
qemu-img convert -O qcow2 metal-arm64.iso metal-arm64.qcow2
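To double-check the conversion produced a usable qcow2 image before handing it to libvirt:

qemu-img info metal-arm64.qcow2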
Change into the `infra/` directory and run:
tofu apply
# or terraform apply
Then attach to the VM's console:

virsh console talos
In the UEFI shell, type `exit`. You will be brought to the UEFI menu, where you select the Boot Manager. Pick the third disk from the list and boot from there - you'll see Talos Linux's boot menu now:
GNU GRUB version 2.06
/------------------------------------------------\
|*Talos ISO |
| Reset Talos installation |
| |
\------------------------------------------------/
Boot Talos. Say hi. It'll greet you with
[ 9.851478] [talos] entering maintenance service {"component": "controller-runtime", "controller": "config.AcquireController"}
[ 9.854129] [talos] this machine is reachable at: {"component": "controller-runtime", "controller": "runtime.MaintenanceServiceController"}
[ 9.855517] [talos] 10.22.27.56 {"component": "controller-runtime", "controller": "runtime.MaintenanceServiceController"}
[ 9.856546] [talos] 2001:9e8:17ba:2200:5054:ff:feba:99ef {"component": "controller-runtime", "controller": "runtime.MaintenanceServiceController"}
[ 9.858176] [talos] server certificate issued {"component": "controller-runtime", "controller": "runtime.MaintenanceServiceController", "fingerprint": "rMWhs9V9Y30sbs9W5KNCgVRReKGrfvV0FwMtqEX4OW8="}
[ 9.860209] [talos] upload configuration using talosctl: {"component": "controller-runtime", "controller": "runtime.MaintenanceServiceController"}
[ 9.862119] [talos] talosctl apply-config --insecure --nodes 10.22.27.56 --file <config.yaml> {"component": "controller-runtime", "controller": "runtime.MaintenanceServiceController"}
[ 9.863452] [talos] or apply configuration using talosctl interactive installer: {"component": "controller-runtime", "controller": "runtime.MaintenanceServiceController"}
[ 9.864784] [talos] talosctl apply-config --insecure --nodes 10.22.27.56 --mode=interactive {"component": "controller-runtime", "controller": "runtime.MaintenanceServiceController"}
[ 9.866219] [talos] optionally with node fingerprint check: {"component": "controller-runtime", "controller": "runtime.MaintenanceServiceController"}
[ 9.867265] [talos] talosctl apply-config --insecure --nodes 10.22.27.56 --cert-fingerprint 'rMWhs9V9Y30sbs9W5KNCgVRReKGrfvV0FwMtqEX4OW8=' --file <config.yaml> {"component": "controller-runtime", "controller": "runtime.MaintenanceServiceController"}
Take note of the IP, in this case `10.22.27.56`.
You will probably want to tell your router to always assign this IP address to the new device on your network.
First, get `talosctl`:
curl -sL https://talos.dev/install | sh
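To confirm the install worked before talking to the node, you can print just the client version:

talosctl version --client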
Given the above IP we can now identify the available disks:
talosctl disks --insecure --nodes 10.22.27.56
The disk of 124 MB is the one we just booted from, the 54 GB one is my system disk, and the 107 GB one is for state.
DEV MODEL SERIAL TYPE UUID WWID MODALIAS NAME SIZE BUS_PATH SUBSYSTEM READ_ONLY SYSTEM_DISK
/dev/vda - - HDD - - virtio:d00000002v00001AF4 - 54 GB /pci0000:00/0000:00:01.2/0000:03:00.0/virtio2/ /sys/class/block
/dev/vdb - - HDD - - virtio:d00000002v00001AF4 - 107 GB /pci0000:00/0000:00:01.3/0000:04:00.0/virtio3/ /sys/class/block
/dev/vdc - - HDD - - virtio:d00000002v00001AF4 - 124 MB /pci0000:00/0000:00:01.4/0000:05:00.0/virtio4/ /sys/class/block
Do have a look at the Local Path Provisioner configuration mentioned in the Talos Linux documentation if you want to extend your storage.
Change into the `talos/` directory and run the `talosctl gen config` command. Make sure to use the proper disk, here `/dev/vda`:
talosctl gen config "talos-epimelitis" https://talos-epimelitis.fritz.box:6443 \
--additional-sans=talos-epimelitis.fritz.box \
--additional-sans=talos-epimelitis \
--additional-sans=10.22.27.71 \
--install-disk=/dev/vda \
--output-dir=.talosconfig \
--output-types=controlplane,talosconfig \
--config-patch=@cp-patch.yaml
Edit the created `.talosconfig/controlplane.yaml` in an editor and add the storage volume (as shown above):
machine:
  # ...
  kubelet:
    extraMounts:
      - destination: /var/mnt/storage
        type: bind
        source: /var/mnt/storage
        options:
          - bind
          - rshared
          - rw
  # ...
  disks:
    - device: /dev/vdb
      partitions:
        - mountpoint: /var/mnt/storage
On subsequent boots after the first one, talosctl dmesg will print something like
talos-epimelitis.fritz.box: user: warning: [2024-02-19T20:45:36.936258652Z]: [talos] phase udevSetup (12/17): done, 1.516129ms
talos-epimelitis.fritz.box: user: warning: [2024-02-19T20:45:36.936771652Z]: [talos] phase userDisks (13/17): 1 tasks(s)
talos-epimelitis.fritz.box: user: warning: [2024-02-19T20:45:36.937193652Z]: [talos] task mountUserDisks (1/1): starting
talos-epimelitis.fritz.box: user: warning: [2024-02-19T20:45:36.938507652Z]: [talos] task mountUserDisks (1/1): skipping setup of "/dev/vdb", found existing partitions
talos-epimelitis.fritz.box: kern: notice: [2024-02-19T20:45:36.940458652Z]: XFS (vdb1): Mounting V5 Filesystem
talos-epimelitis.fritz.box: kern: info: [2024-02-19T20:45:36.958119652Z]: XFS (vdb1): Ending clean mount
talos-epimelitis.fritz.box: user: warning: [2024-02-19T20:45:36.959145652Z]: [talos] task mountUserDisks (1/1): done, 21.875926ms
... indicating the disk has been correctly set up.
Update your talosconfig with the correct endpoint:
talosctl config endpoint --talosconfig .talosconfig/talosconfig talos-epimelitis.fritz.box
Now apply the configuration to the node:
talosctl apply-config --insecure --nodes talos-epimelitis.fritz.box --file .talosconfig/controlplane.yaml
After a while (observe the virsh console talos
output), bootstrap etcd:
talosctl bootstrap --talosconfig .talosconfig/talosconfig --nodes talos-epimelitis.fritz.box
Lastly, configure your kubeconfig file:
talosctl kubeconfig --talosconfig .talosconfig/talosconfig --nodes talos-epimelitis.fritz.box
Switch to the new context:
kubectl config use-context admin@talos-epimelitis
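If everything went well, the node should show up and, after a minute or two, report Ready:

kubectl get nodes -o wide
kubectl get pods -A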
Then apply a patch for the metrics server to avoid timeout-related issues:
kubectl apply -k metric-server
Change into the `cluster/` directory and apply the kustomizations, or:
kubectl kustomize --enable-helm cluster | kubectl apply -f -
Note that you'll need `kubectl kustomize` due to Helm. As of writing this, `kubectl apply -k` will not do.
The system foundation consists of these parts:

- Local Path Provisioner serves our `PersistentVolumeClaims` from the SD card
- MetalLB assigns IP addresses to Load Balancers, both in our home network and in a dedicated one
- Nginx Ingress Controller handles the `Ingress` resources
With this setup we could deploy our workloads, have them persisted and make them externally available (give or take a static route setup on your home network's Router). When requests come in over the network bridge, MetalLB takes care of routing them to the correct workload.
Note that by virtue of our Talos setup we already have
- [Kubelet Serving Cert Approver] for accepting certificate signing requests for system components, and
- Kubernetes Metrics Server for node and pod metrics.
Due to some timeout-related issues, the already installed Metrics Server will be patched with a slightly more relaxed configuration.
We can do one better:
- Pi-Hole is used as a DNS Server (and it also filters Ads). With the power of MetalLB we give it an address in our home network.
- ExternalDNS announces Ingress host names to an external DNS - in this case, Pi-Hole. It'll then be reachable as http://pi-hole.home.
With this, all we need to do is tell our Router to use our Pi-Hole as a DNS server. Whatever Ingress we deploy now, ExternalDNS will announce it to Pi-Hole and thus every device on our home network can access the Kubernetes-run workload by name.
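A simple way to check the whole chain is to ask the Pi-Hole for one of the announced names directly; `<PI_HOLE_IP>` below stands for whatever address MetalLB assigned to it:

kubectl get ingress -A
nslookup home-assistant.home <PI_HOLE_IP>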
Therefore and lastly,
- Mosquitto is our MQTT Broker for Smart Home appliances. I re-soldered and flashed some Sonoff Slampher E27 sockets with Tasmota, and they'll talk to Mosquitto.
- Home Assistant itself. It'll be reachable as http://home-assistant.home.
Home Assistant is deployed using a `LoadBalancer` on the home network in order for it to pick up UDP packets in that subnet.
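To see which addresses MetalLB actually handed out to the Pi-Hole and Home Assistant services, a quick grep over all services does the job:

kubectl get svc -A | grep LoadBalancer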