Comparison of Kubernetes Distributions for a 3-Node Homelab

Choosing the best Kubernetes flavour for our homelab


I’m comparing self-hosted Kubernetes variants that suit an Ubuntu-based homelab with 3 nodes (16 GB RAM, 4 cores each), focusing on ease of setup and maintenance, and on support for Persistent Volumes and LoadBalancer services.

Scenario: We have three Ubuntu nodes (16 GB RAM, 4 CPU cores each) in a homelab. High availability (HA) and ARM support are not priorities. We want an easy-to-install, low-maintenance Kubernetes cluster (single-node or 3-node) with support for Persistent Volumes (PV) and LoadBalancer (LB) services, so that cloud-native apps requiring storage or external IPs work smoothly.


Below we compare popular lightweight [Kubernetes](https://www.glukhov.org/post/2024/10/kubernetes-cheatsheet/ "list and description of the most frequent and useful k8s commands - k8s cheatsheet") distributions – K3s, MicroK8s, Minikube, and kubeadm (vanilla Kubernetes) – and how they stack up on the key criteria (features, installation/maintenance, PV/LB support, resource requirements, cluster setup, and tooling). We also give recommendations on which to choose for this homelab scenario.

K3s (Rancher’s Lightweight Kubernetes)

Key Features: K3s is a CNCF-certified Kubernetes distribution designed for minimal resource usage (it can run on as little as 512 MB of RAM). It packages the entire Kubernetes control plane into a single binary and process, using lightweight components (e.g. SQLite datastore by default instead of etcd, flannel for networking). It includes sensible defaults like a built-in ingress controller (Traefik) and a simple service load balancer. K3s strips out legacy/alpha features to reduce bloat.

  • Ease of Installation & Maintenance: Extremely simple – we can install it with a one-line script (curl ... | sh) or via tools like k3sup. The server boots up with default components out-of-the-box. Adding nodes is straightforward (run the K3s agent with a token from the server); see the install sketch after this list. Upgrading is easy (replace the binary or re-run the install script for the new version). No separate etcd setup is needed (unless we choose a multi-master HA setup). K3s is designed to require minimal fiddling once installed, making it popular for IoT and homelabs.

  • Persistent Volume Support: Built-in. By default, K3s ships with Rancher’s local-path StorageClass, which dynamically provisions PVs on the host filesystem for each PVC. This means any PVC will get fulfilled by creating a hostPath volume on the node automatically. It’s a single-node storage solution (each volume is on one node’s disk), but works out-of-the-box for stateful apps. For more advanced storage, you can add something like Rancher’s Longhorn (distributed block storage) which K3s supports, but for a homelab the default local-path provisioner usually suffices.

  • LoadBalancer Support: Built-in. K3s includes a lightweight controller called ServiceLB (formerly “Klipper LB”) that allows Services of type LoadBalancer to get an IP/port on the host without any external cloud provider. When we create a LoadBalancer service, K3s deploys a DaemonSet of tiny LB pods (svc-...) on each node, which use host ports to forward traffic to the service. Essentially, K3s reuses the node’s IP (internal or external) to serve as the LB IP, and uses iptables routing in the LB pods to send traffic to the service’s ClusterIP. This works with zero configuration – services won’t stay “pending” (unlike vanilla K8s on bare metal). The trade-off is that the LB external IP will be one of our node’s IPs (or all nodes) and we must ensure the port is free. For most homelab uses (exposing a few services on HTTP/HTTPS ports), this is perfectly fine. If needed, we can disable the built-in LB and install MetalLB manually, but most users stick with the convenient default.

  • Resource Requirements: Very low. K3s can run even on low-end hardware. Officially, 512 MB RAM and a few hundred MB of disk are sufficient for a server node. In practice, a small cluster might use a few hundred MB of memory for the control plane. Its binary is <100 MB. CPU overhead is minimal (K3s uses slightly more CPU when idle compared to MicroK8s, but not by a big margin). Our nodes (16 GB each) are more than enough to run K3s comfortably.

  • Single-Node vs Multi-Node: Suited for both. K3s can run single-node (all control-plane and workloads on one machine) or multi-node. For a 3-node setup, we might run 1 server (master) and 2 agents, or even make all 3 servers for HA (not needed here since HA isn’t a goal). Joining agents is trivial with the token. K3s’ default SQLite datastore works for single-server; for multi-server HA we’d switch to embedded etcd (K3s can do this automatically when starting multiple server nodes).

  • CLI & UI Tools: We interact with K3s just like any Kubernetes – via kubectl. K3s includes its own kubectl build (we can run k3s kubectl ... or just use a standard kubectl by pointing it to K3s’s kubeconfig). There isn’t a special K3s-specific CLI beyond installation commands; it’s intentionally minimalist. No built-in web UI is included (the upstream Dashboard is not installed by default). However, we can manually deploy the Kubernetes Dashboard or other tools on K3s like any standard cluster. Rancher (the GUI management tool by the same company) can also be installed on top of K3s if a full UI is desired, but it’s not part of K3s itself. Essentially, K3s provides the core k8s APIs and we add extras as we need (ingress, storage, etc. – some of which are already bundled as mentioned).
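
To make the steps above concrete, here is a minimal sketch of bootstrapping a 3-node K3s cluster. It uses the documented install script; the server IP 192.168.1.10 and <TOKEN> are placeholders for our first node and the token it generates.

```bash
# On the first node: install the K3s server (bundles containerd, flannel,
# CoreDNS, Traefik and the local-path StorageClass by default).
curl -sfL https://get.k3s.io | sh -

# Grab the join token the server generated.
sudo cat /var/lib/rancher/k3s/server/node-token

# On each of the other two nodes: join as an agent.
# Replace 192.168.1.10 and <TOKEN> with the server's IP and the token above.
curl -sfL https://get.k3s.io | K3S_URL=https://192.168.1.10:6443 K3S_TOKEN=<TOKEN> sh -

# Back on the server: verify all three nodes registered.
sudo k3s kubectl get nodes
```

If we would rather run MetalLB or a different ingress controller, the server can instead be installed with the --disable servicelb and/or --disable traefik flags and the replacements deployed manually.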

MicroK8s (Canonical’s Low-Ops Kubernetes)

Key Features: MicroK8s is Canonical’s lightweight Kubernetes distribution, delivered as a snap package. It installs a fully conformant Kubernetes cluster (upstream binaries) with a single command and is designed for ease-of-use (“low ops”) on a single machine or small cluster. It emphasizes a “batteries-included” approach – we get a lot of optional features that can be enabled with simple commands. MicroK8s defaults to a vanilla Kubernetes experience (all standard APIs) but with convenient add-ons for common needs. It supports both single-node and multi-node deployments (we can form a cluster by “joining” nodes), and even has an HA mode for the control plane (using Dqlite – a distributed SQLite – when we have 3 masters).

  • Ease of Installation & Maintenance: Extremely easy as well – just snap install microk8s --classic on an Ubuntu machine will set up a Kubernetes node. It’s one package that includes all components. MicroK8s is maintained via Ubuntu’s snap system, which means updates are atomic and can be automatic (we can track a channel for Kubernetes versions). This automatic update capability is unique among these options - we can opt-in to get security patches and minor upgrades via snap refreshes. Managing MicroK8s is done via the microk8s command, which has subcommands to enable/disable features. Overall, it’s very low-friction: no containers or VMs to manage (runs natively on the host), and no external etcd to configure (uses an internal datastore). Maintenance is mostly just updating the snap when needed and using microk8s.status to monitor.

  • Persistent Volume Support: Available via add-on. By default, MicroK8s does not automatically provision PVs until we enable the “hostpath-storage” add-on. Enabling this (with microk8s enable hostpath-storage) creates a default StorageClass that allocates volumes from a directory on the host. This is essentially a dynamic hostPath provisioner, similar in concept to K3s’s local-path. Once enabled, any PVC will bind to a hostpath PV on the MicroK8s node. (On a multi-node MicroK8s cluster, the hostpath PV will reside on one node – suitable for testing or homelab use but not distributed storage). MicroK8s also offers Mayastor and other storage add-ons if we want more advanced storage across nodes, but for simplicity the hostpath storage works well. Note: The add-on is not enabled by default (to keep things lean), so we’ll want to enable it to get PVCs working. Canonical notes this is not for production HA use (since it’s not replicated), but for a homelab it’s perfect.

  • LoadBalancer Support: Available via add-on. MicroK8s does not bundle a default load balancer, but it provides MetalLB as an easy add-on. Running microk8s enable metallb:<start-IP>-<end-IP> deploys MetalLB in Layer 2 mode and lets us specify a pool of IPs from our LAN to use for LoadBalancer Services (a worked example follows this list). Once enabled, any Service of type LoadBalancer will automatically get an IP from that range (and MicroK8s will advertise it via ARP on the LAN). This gives a cloud-like experience on bare metal. (If we don’t enable MetalLB, a LoadBalancer Service will remain pending, since MicroK8s by itself doesn’t implement one – just like upstream Kubernetes.) In a homelab, MetalLB is straightforward and lets us access services on our local network. We’ll need to choose a free subnet or IP range in our network for it. MicroK8s’s enable command makes the setup painless, but it’s still a separate step. (In contrast, K3s works out-of-the-box for LB but with the limitation of using node IPs/ports.) Many MicroK8s users simply enable MetalLB as part of their initial setup for a functional LB.

  • Resource Requirements: MicroK8s is fairly lightweight, though slightly above K3s in footprint. It runs all core services (API server, controller, scheduler, kubelet, etcd or Dqlite) natively. Idle usage is typically ~500–600 MB RAM for the control plane, depending on what add-ons are enabled. Canonical cites ~540 MB baseline memory. CPU usage is low when idle, and our 16 GB nodes have plenty of headroom. Disk space requirement is small (snap ~ 200 MB plus etcd data). In sum, MicroK8s can run on modest hardware (it’s even offered for Raspberry Pis), and in our scenario the machines easily accommodate it. If multiple add-ons are enabled (dashboard, monitoring, etc.), resource use will increase accordingly.

  • Single-Node vs Multi-Node: MicroK8s works well for both. We can use it for a single-node cluster (e.g. on a dev machine or one server) or create a multi-node cluster by “joining” nodes. The MicroK8s documentation provides a command to add a node (we fetch a join token from the first node and use it on the others). In a multi-node setup without HA, one node will be the primary control plane. With 3 nodes, we also have the option to enable ha-cluster mode (making the control plane HA by running Dqlite on 3 nodes), though in our case HA isn’t needed. Whether single or three nodes, the experience is the same Kubernetes API. Multi-node MicroK8s is a bit newer than K3s’s approach but is quite stable now. It’s a good choice if we want a “micro cloud” of a few Ubuntu boxes running k8s with minimal effort.

  • CLI & UI Tools: MicroK8s comes with the microk8s command-line tool, which is very handy. For example, microk8s enable <addon> toggles various services (DNS, ingress, storage, metallb, dashboard, etc.), and microk8s status shows what’s running. It also includes an embedded kubectl: we can use microk8s kubectl ... (or alias it) which talks to the cluster – this is nice because we don’t need to configure a kubeconfig. For UI, MicroK8s provides the Kubernetes Dashboard as an add-on (microk8s enable dashboard), which will deploy the standard web UI. Once enabled, we can access it (it runs on port 10443 or via proxy). MicroK8s doesn’t have a custom GUI of its own – it relies on Kubernetes Dashboard or other tools – but the ease of enabling it is a plus. In summary, MicroK8s emphasizes a one-command experience for common tasks and a “it just works” philosophy, abstracting away a lot of complexity (at the cost of hiding some internals). This makes it very homelab-friendly for those who want a Kubernetes cluster without manual setup of each component.
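
Pulling the points above together, a MicroK8s setup on our three Ubuntu nodes might look like the sketch below. The MetalLB range 192.168.1.240-192.168.1.250 is only an example; pick a free range on our own LAN.

```bash
# On every node: install MicroK8s from the snap and wait for it to come up.
sudo snap install microk8s --classic
microk8s status --wait-ready

# On the first node: enable the add-ons discussed above.
microk8s enable dns
microk8s enable hostpath-storage                        # default StorageClass backed by hostPath
microk8s enable metallb:192.168.1.240-192.168.1.250     # LoadBalancer IP pool (example range)
microk8s enable dashboard                               # optional web UI

# Still on the first node: print a join command for another node.
microk8s add-node
# ...then run the printed "microk8s join <ip>:25000/<token>" on each other node.

# Verify the cluster.
microk8s kubectl get nodes
```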

Contrast K3s vs MicroK8s: Both are quite similar in goals and capabilities – lightweight, easy, multi-node capable. K3s tends to be a bit more “DIY” when adding extras (relying on Helm charts or manual manifests for things not included) whereas MicroK8s offers built-in add-ons via its CLI. MicroK8s is also a more “vanilla” Kubernetes under the hood (upstream components, just packaged as snap), whereas K3s uses some custom binaries and a single service for everything. Both are good choices for a homelab; the decision often comes down to preference: if Ubuntu/snaps and an integrated feel are preferable, MicroK8s is great, whereas if we prefer a minimalistic approach with perhaps a bit more manual control, K3s is excellent. (We’ll provide recommendations at the end.)

Minikube (Single-Node Local K8s)

Key Features: Minikube is primarily a tool for running Kubernetes locally (often on a developer’s PC or laptop) rather than a traditional cluster across multiple machines. It creates a single-node Kubernetes cluster in a VM or container on our machine. Minikube is known for its wide support of environments – it can run on Linux, macOS, Windows, and supports various hypervisors (VirtualBox, KVM, Hyper-V, Docker driver, etc.). It’s maintained by the Kubernetes SIGs and is often used for testing and learning. While Minikube can be coaxed into a multi-node configuration, it’s essentially meant for one-node clusters (the “multi-node” mode of Minikube actually just starts multiple nodes on the same host using container runtimes – useful for simulating a cluster, but not for running on separate physical machines).

  • Ease of Installation & Usage: Minikube is very easy to get started with on a single machine. We install the minikube binary, ensure we have a VM driver (or Docker), and run minikube start. This will set up a local VM/container and launch Kubernetes inside it. It’s arguably the simplest way to get a Kubernetes cluster running for the first time. The CLI (minikube) provides commands to interact with the VM (start, stop, delete) and to enable add-ons. Because Minikube is designed for local use, it’s not a “long-running” daemon – we typically start it when we need it and stop it when we’re done, though we can keep it running continuously on a homelab server if desired. Maintenance/upgrades are easy: just update the minikube version and restart the cluster (Minikube can also update the Kubernetes version it runs with a flag). However, one limitation is that Minikube is single-node by design (adding more nodes means starting additional VM instances on the same host, not joining real separate hosts). So, using Minikube on three separate physical nodes would essentially mean running three independent single-node clusters, which is likely not what we want. Minikube shines for development and testing on one machine, but it’s not intended for managing a cluster spread across multiple physical servers.

  • Persistent Volume Support: Built-in (hostPath). Minikube clusters automatically include a default StorageClass that uses a hostPath provisioner. In fact, Minikube runs a small controller that dynamically creates hostPath PVs inside the VM when PVCs are requested. This means we can create PersistentVolumeClaims and they’ll be satisfied using storage on the Minikube VM’s filesystem (usually under /tmp/hostpath-provisioner or similar). This works out-of-the-box – no configuration needed for basic PV usage. For example, the default “standard” StorageClass in Minikube maps to hostPath storage on the node. One caveat: if we restart or delete the Minikube VM, those hostPath volumes could be lost (Minikube does try to persist certain directories across restarts – e.g., /data, /var/lib/minikube, /tmp/hostpath* are preserved). But in general, for a non-production environment this is fine. If we want to simulate more, Minikube also has a CSI hostpath driver addon that supports snapshotting and can work in multi-node mode, but that’s more for experimentation. Bottom line: Minikube supports PVs by default via hostPath, which is sufficient for a homelab test of stateful apps.

  • LoadBalancer Support: Supported via tunneling (or MetalLB addon). In a cloud, a LoadBalancer service gets a cloud LB with an external IP. In Minikube, there’s obviously no cloud provider, so Minikube provides two mechanisms:

    • Minikube Tunnel: We can run minikube tunnel in a separate process, which will create a network route on our host and assign an IP to our LoadBalancer service. Essentially, it uses our host to act as the “external LB”. When we create a LoadBalancer Service, Minikube will show an “external IP” (usually from the cluster’s subnet) and the minikube tunnel process will route that to the service. This requires the tunnel process to keep running (and typically root permission to create routes). It’s a bit of a manual step, but Minikube prints a reminder if we have a LoadBalancer service without a tunnel running (“External-IP pending” until we start the tunnel).
    • MetalLB Addon: Newer versions of Minikube also include a MetalLB addon. We can do minikube addons enable metallb and configure an IP range (Minikube will prompt us). This effectively deploys MetalLB inside the Minikube cluster, which then handles LoadBalancer services just like it would on any bare-metal Kubernetes. This is a more “in-cluster” solution and doesn’t require a separate tunnel process after initial setup.

    Both options work, and which to use is up to us (see the sketch after this list). To quickly expose a single service, minikube tunnel is simple and ephemeral. For a more permanent setup, the MetalLB addon can be enabled so that LBs get real IPs (for example, we might give it a range like 10.0.0.240-250 on a Minikube with bridged networking). Remember that Minikube is typically single-node, so in effect the “load balancer” is not balancing across multiple nodes – it’s just providing external access to the single node’s service. This is fine for development. In a homelab, if we only use one of our nodes and run Minikube on it, we could use these mechanisms to access our apps. But if we want to leverage all 3 nodes, Minikube’s approach to LB isn’t meant for that scenario.

  • Resource Requirements: Minikube itself is lightweight, but since it usually runs a full single-node Kubernetes (in a VM), the resource usage is similar to a normal small cluster. The minimum recommended is 2 GB RAM and 2 CPUs for the VM. By default, Minikube often allocates 2 GB RAM for its VM. In our case, with 16 GB per machine, that’s trivial. Idle Minikube (Kubernetes) might use ~600 MB memory. One thing to consider is that Minikube will be running on top of our OS – either as a VM via a hypervisor or a Docker container – which adds some overhead. For a homelab server, we could run Minikube with the “none” driver (which installs Kubernetes components directly on the host). However, the “none” (a.k.a. bare-metal) driver is essentially similar to just using kubeadm on that node, and requires manual cleanup, so it’s not as popular. Most use a VM or Docker driver. In summary, any of our homelab nodes can run Minikube easily. The only constraint is that Minikube will not use the resources of all 3 nodes – it’ll be confined to the single host it runs on (unless using the experimental multi-node mode within one host).

  • Single-Node vs Multi-Node: Primarily single-node. Minikube is ideal for a single-node cluster on one machine. It does have a feature to simulate multiple nodes (e.g. minikube start --nodes 3 with the Docker driver will start 3 Kubernetes nodes as containers on one host), but this is for testing only. We cannot use Minikube to create a real multi-node cluster across 3 separate physical servers. If we have 3 machines, we’d have to run 3 independent Minikube instances (not in one cluster). That’s not a real cluster experience (they won’t know about each other). Therefore, Minikube is not recommended if our goal is to utilize all three nodes in one Kubernetes cluster – it’s better for scenarios where we only have one machine (our PC or one server) and want to run K8s on it. It’s also great for learning Kubernetes basics and doing local development.

  • CLI & UI Tools: The minikube CLI is the main interface. We use it to start/stop the cluster, and it has nice shortcuts: e.g., minikube dashboard will enable the Kubernetes Dashboard and open it in our browser for us. minikube addons enable <addon> can enable a variety of optional components (ingress, metallb, metrics-server, etc.). Minikube sets up kubectl access automatically (it configures a context “minikube” in our kubeconfig). For UI, as mentioned, the Kubernetes Dashboard is easily accessible via the dashboard command. Minikube doesn’t have its own unique web UI; it relies on standard tools. Debugging Minikube is also easy since we can minikube ssh to get into the node VM if needed. Overall, the tooling is very user-friendly for a single-node scenario.
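
For completeness, here is a sketch of a single-node Minikube workflow covering the points above. The Docker driver and the memory/CPU sizes are illustrative assumptions; any supported driver works.

```bash
# Single-node Minikube on one of the homelab machines.
minikube start --driver=docker --memory=4g --cpus=2

# Option 1: expose LoadBalancer services via a host route.
# Keep this running in a separate terminal (it usually asks for sudo).
minikube tunnel

# Option 2: use the MetalLB addon instead of the tunnel.
minikube addons enable metallb
minikube addons configure metallb   # prompts for the start/end of the IP range

# Kubernetes Dashboard in the browser.
minikube dashboard

# The kubectl context "minikube" is configured automatically.
kubectl get nodes
```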

Kubeadm (Vanilla Kubernetes on Our Own Nodes)

Key Features: Kubeadm is not a “distribution” but rather the official toolkit for creating a Kubernetes cluster from scratch. Using kubeadm means we are deploying standard upstream Kubernetes components on our machines. It’s essentially how we “roll our own” cluster, following best practices but without the extras that distributions include. Kubeadm will set up a control plane node (running etcd, API server, controller manager, scheduler) and join worker nodes to it. The result is a fully standard Kubernetes cluster identical to what we’d get on cloud VMs. This approach gives us full control and flexibility – we choose the networking plugin, storage provisioner, etc. – but also requires the most initial manual work and know-how. It’s often used in production or learning environments to understand Kubernetes internals.

  • Ease of Installation & Maintenance: Compared to the others, kubeadm is the most involved. We have to manually install dependencies (a container runtime like containerd, plus kubeadm, kubelet, kubectl) on each node. Then run kubeadm init on the master, and kubeadm join on workers with the token it gives. We also need to set up a CNI (network plugin) after initialization (Calico, Flannel, etc.) since kubeadm doesn’t install one by default. There are well-documented steps for all of this (the process isn’t hard, but it’s many steps; a bootstrap sketch follows this list) – essentially, kubeadm gets us a starting cluster but expects us to handle the add-ons. Maintenance-wise, we are responsible for upgrades (kubeadm does have an upgrade command, but we must drain nodes, update binaries, etc. manually), as well as monitoring etcd health, certificate renewals, etc. In short, kubeadm is powerful but manual. It is often called “the hard way” (though not as hard as building from source). For a homelab hobbyist who enjoys learning, kubeadm is a great way to understand Kubernetes deeply. But if our priority is “easy and low maintenance,” kubeadm will be more work than K3s/MicroK8s. Each upgrade or change will require hands-on effort. That said, once set up, it’s the real deal: a standard Kubernetes cluster with no hidden abstractions.

  • Persistent Volume Support: None by default (must add manually). A kubeadm-installed cluster is essentially a blank Kubernetes – it does not include a default StorageClass or dynamic provisioner out-of-the-box. In cloud environments, the cloud provider would normally supply a default StorageClass (e.g., AWS EBS, etc.), but in a homelab bare-metal environment, we’ll have to install our own solution. Common approaches:

    • HostPath Provisioner: We can replicate what K3s/Minikube do by installing something like the Rancher Local Path Provisioner (which is a small controller + StorageClass that creates hostPath PVs). This is a simple add-on (just a YAML manifest) and gives us local dynamic storage.
    • NFS or NAS: Some homelab users set up an NFS server (or use a NAS) and then use static PVs or a NFS CSI driver for provisioning.
    • OpenEBS/Longhorn/Ceph: There are more complex options like deploying Longhorn or Ceph RBD via Rook if we want distributed storage across nodes. These require more resources and complexity.

    The key point is that kubeadm doesn’t “solve” storage for us – we must decide and configure it. If ease is the priority, the simplest path is to deploy a hostPath provisioner or use NFS. For example, Rancher’s local-path provisioner can be installed in a couple of kubectl apply commands and will mimic K3s’s behavior (dynamically creating volumes in a local directory on whichever node the pod lands). But this is a manual step. Until we do that, any PVC we create will sit in Pending state because Kubernetes has no default provisioning in place. So, while kubeadm certainly supports Persistent Volumes (it’s full Kubernetes), the support for dynamic PVs is as good as the effort we put into setting it up.

  • LoadBalancer Support: None by default (must add manually). Similar story here: in a traditional on-prem cluster, we don’t have a built-in LoadBalancer implementation. If we create a Service of type LoadBalancer on a plain kubeadm cluster, it will stay in pending state forever until we deploy a controller to handle it. The common solution is to install MetalLB ourselves. MetalLB can be installed via manifest or Helm chart – we would configure an IP range and it will then allocate those IPs for LoadBalancer services (just like in MicroK8s). Many guides exist for installing MetalLB on kubeadm clusters. Another alternative some use is kube-vip for control-plane VIP and service LBs, but MetalLB is simpler for services. Essentially, with kubeadm we have the freedom (or burden) to set up whatever load-balancing mechanism fits our needs. If we don’t set up any, we are limited to NodePort services for external access. For a homelab, installing MetalLB is highly recommended – it’s straightforward and gives us that cloud-like service IP functionality. But again, that’s an extra step we must perform (unlike K3s which works out-of-box or MicroK8s with a simple enable).

  • Resource Requirements: Standard Kubernetes is a bit heavier than the trimmed down versions. Each node will run a kubelet and kube-proxy, and the master will run etcd and control-plane components. A single control-plane node can still run in 2 GB RAM, but typically we might want a bit more for comfort if running pods on it. The padok guide suggests 2 GB for master, 2 GB for worker minimum. In our scenario (16 GB per node), that’s fine. Idle etcd and API server might use a few hundred MB memory each. There isn’t a big difference at runtime between a kubeadm cluster and MicroK8s – after all, MicroK8s is those same components. The difference is just what’s running by default (MicroK8s might enable DNS and storage by default, whereas on kubeadm we’d install those). So resource-wise, kubeadm can be as lean or heavy as we configure it. With nothing extra installed, it could be fairly lean. With a typical setup (say we add CoreDNS, Dashboard, etc.), expect ~1 GB or so used for system overheads. We have plenty of RAM, so resources are not a concern. It’s more about the human time/resources required to manage it rather than CPU/RAM.

  • Single-Node vs Multi-Node: Kubeadm can do both, with full flexibility. We can initialize a single-node cluster (and even tell kubeadm to let the single node run workloads by untainting the master). Or we can have one master and join two workers (a common 3-node setup). We could even set up 3 masters and 3 workers, etc. – any topology. In our case, a likely kubeadm setup would be 1 control-plane node and 2 workers (since HA isn’t needed, we don’t need multiple masters). That gives us a functional 3-node cluster. The process for multi-node is well-documented: essentially, get Kubernetes installed on all, init one, join the others. The result is identical to what a managed cluster or other distro would give: our 3 nodes show up in kubectl get nodes, etc. So kubeadm definitely meets the “can use all 3 nodes” requirement.

  • CLI & UI Tools: With kubeadm, the only special CLI is kubeadm itself, used for the setup and (later) upgrade steps. Day-to-day, we use kubectl to manage the cluster. There is no integrated management CLI beyond what Kubernetes provides. For UI, nothing is included by default – we can manually deploy the Kubernetes Dashboard or any other tool (just like any cluster). Essentially, kubeadm gives us a blank Kubernetes canvas. It’s up to us to paint on it – which includes installing conveniences like the dashboard, ingress controllers, storage classes, etc. Many third-party dashboards (Lens, Octant, etc.) can also connect to a kubeadm cluster if we want a GUI management experience, but those are external tools. In summary, with kubeadm we’re getting the pure Kubernetes environment – maximum flexibility, but also the need to set up everything as if this were a production cluster.

  • Kubespray: See also how to install this flavour of Kubernetes with Kubespray.
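
The bootstrap sketch referenced above strings the kubeadm steps together. It assumes containerd and the kubeadm/kubelet/kubectl packages are already installed on all three nodes; the manifest URLs, versions, and IP range are illustrative, so check each project’s docs for current values.

```bash
# On the control-plane node: initialise the cluster.
# The pod CIDR below matches Flannel's default; adjust it for another CNI.
sudo kubeadm init --pod-network-cidr=10.244.0.0/16

# Set up kubectl for the regular user (kubeadm init prints these steps too).
mkdir -p $HOME/.kube
sudo cp /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# Install a CNI plugin (Flannel here; Calico works as well).
kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml

# On each worker node: run the join command that kubeadm init printed, e.g.
# sudo kubeadm join 192.168.1.10:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>

# Add dynamic hostPath storage (Rancher local-path provisioner) and make it the default.
kubectl apply -f https://raw.githubusercontent.com/rancher/local-path-provisioner/master/deploy/local-path-storage.yaml
kubectl patch storageclass local-path \
  -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'

# Add MetalLB for LoadBalancer services, then define an address pool on our LAN.
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.13.12/config/manifests/metallb-native.yaml
kubectl apply -f - <<'EOF'
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: homelab-pool
  namespace: metallb-system
spec:
  addresses:
  - 192.168.1.240-192.168.1.250
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: homelab-l2
  namespace: metallb-system
spec:
  ipAddressPools:
  - homelab-pool
EOF
```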

Side-by-Side Comparison Table

Below is a summary comparing the four options on key points:

| Aspect | K3s (Lightweight Rancher K8s) | MicroK8s (Canonical “Low-Ops” K8s) | Minikube (Single-Node Dev K8s) | Kubeadm (Vanilla Kubernetes) |
|---|---|---|---|---|
| Installation | One-line install script (single binary). Runs as a single system service. Very quick setup. | One-command install via snap on Ubuntu. All components included. Easy clustering with microk8s join. | Install the minikube CLI, then minikube start to launch a local VM/container. Cross-platform and newbie-friendly. | Manual install of kubeadm, kubelet, etc. on each node. Run kubeadm init + kubeadm join with prerequisites. Involves multiple steps (runtime, networking plugin, etc.). |
| Maintenance & Upgrades | Manual upgrades (replace the binary or re-run the install script for a new version). Simple, since it’s a single binary; little to manage. No auto-update. | Snap refresh for updates (can be automatic). Add-ons and cluster services managed via the microk8s CLI. Generally low-ops; auto-upgrades available. | Easy to delete/recreate the cluster for dev. Upgrades by updating the minikube version and restarting the cluster. Meant for ephemeral use (less focus on in-place upgrade longevity). | User responsible for all upgrades. Use kubeadm upgrade but must drain nodes, handle etcd backup, etc. Full control, but we do the work (no automatic updates). |
| K8s Version | Follows upstream fairly closely (often used in edge releases). CNCF conformant. Some features disabled by default (alpha/legacy). | Follows upstream releases (snap channels for 1.27, 1.28, etc.). CNCF conformant. Essentially vanilla K8s binaries. | We can choose the Kubernetes version at start (e.g. minikube start --kubernetes-version=v1.27). Defaults to latest stable. | Any version we want (install specific kubeadm/kubelet versions). Full upstream Kubernetes – we decide the version and when to upgrade. |
| Default Features | Bundled defaults: Flannel CNI, CoreDNS, Traefik ingress, service LB, local storage class, metrics-server, etc. (all can be disabled if not needed). Minimal config needed to be functional. | Minimal default: DNS is usually on, others optional. Easy one-command add-ons for ingress (NGINX), MetalLB, hostpath storage, dashboard, etc. Can enable HA mode on 3+ nodes. | Bundled in the VM: typically includes a Docker/containerd runtime and Kubernetes with default addons like the storage provisioner and DNS. Optional addons (ingress, dashboard, etc.) toggle via the CLI. No multi-node by default. | Barebones: nothing beyond core Kubernetes. No ingress, no default storage or LB, no dashboard until we install them. We choose the CNI plugin (must install one for networking). Essentially a DIY cluster. |
| Persistent Volume Support | Yes – out-of-box. Rancher’s local-path dynamic provisioner is included, creating hostPath PVs on the node for any PVC. The default StorageClass “local-path” uses local disk automatically. | Yes – easy add-on. Enable hostpath-storage to get a default StorageClass for dynamic PVs using hostPath. Until enabled, no default PV provisioner. The add-on is not for multi-node production, but fine for a homelab. | Yes – out-of-box. The default StorageClass uses a hostPath provisioner inside the minikube VM. PVCs are fulfilled by a simple controller that creates a hostPath PV on the node’s filesystem. Data persists across restarts in certain dirs. | No (manual). No default provisioner – the cluster has no StorageClass initially. The user must install a storage solution (e.g. local-path provisioner, NFS, Ceph, etc.). Basic approach: apply a hostPath provisioner YAML to mimic what K3s/Minikube do. Until then, PVCs remain pending. |
| LoadBalancer Support | Yes – built-in ServiceLB. K3s’s servicelb controller (Klipper) watches LoadBalancer services and deploys pods on nodes to expose them. Uses the node’s IP and host ports to forward traffic. Works out-of-the-box without config. Suitable for small clusters; uses the node’s internal/external IP for the service. | Yes – via the MetalLB add-on. Enable metallb with an IP range to allocate. Provides a true Layer-2 load balancer on bare metal. Not enabled by default. Once enabled, each LoadBalancer service gets a unique IP from our pool. Requires a little initial config (IP range). | Yes – via tunnel or MetalLB. No cloud LB, but we can run minikube tunnel to assign an external IP to LoadBalancer services (creates a route on the host). Alternatively, enable the MetalLB addon in minikube for automatic LB IPs. By default, LB services show “pending” until we use one of these methods. | No (manual). No built-in LB. Typically install MetalLB manually for bare-metal LB functionality. Once MetalLB (or another LB controller) is set up, LoadBalancer services work. Without it, LB services stay pending indefinitely. |
| Networking (CNI) | Default = Flannel (overlay networking). K3s also supports replacing the CNI if needed. Comes with CoreDNS deployed for cluster DNS. Traefik ingress included by default (can disable). | Default = Calico (for recent versions in HA mode) or an uncomplicated default. (MicroK8s historically used Flannel; now tends to use Calico for strict confinement.) CoreDNS enabled by default. NGINX ingress available via addon (microk8s enable ingress). | Default = kubenet/bridge (depends on the driver; often a simple NAT network). CoreDNS runs by default. We can enable an NGINX ingress add-on if needed. Networking is confined to the single VM; NodePort is accessible via minikube ip. | Choice of CNI. Kubeadm doesn’t install any CNI plugin – we must deploy one (Calico, Flannel, Weave, etc.). We have full control. Most guides have us apply the Calico YAML after kubeadm init. CoreDNS is installed by kubeadm by default as cluster DNS. Ingress controller – choose and install ourselves (e.g., NGINX or Traefik via manifests/Helm). |
| Multi-Node Clustering | Yes. Designed for multi-node. Easy join with a token. Can use an external DB or embedded etcd for multi-master. Great for 2–3+ node homelabs. No extra dependencies needed – K3s has its own clustering built in. | Yes. Supports clustering multiple MicroK8s nodes (with microk8s join). Can enable HA with 3+ masters (Dqlite). Very simple to form a cluster, especially if all nodes run Ubuntu + snap. | No (physical). Single-node by design. Can simulate multi-node on one machine (multiple nodes in Docker containers), but cannot span multiple physical hosts in one cluster. Use separate Minikube instances for separate clusters. | Yes. Fully supports multi-node (that’s the point). We can have 1 master + many workers, or even multiple masters (though a kubeadm HA setup is more complex). No built-in limitation on cluster size. |
| Resource Overhead | Very low. Control plane ~<0.5 GB memory idle. A single process for the control plane yields a small footprint. Efficient on CPU (though it can use slightly more CPU at idle than MicroK8s per some reports). Ideal for low-power hardware or lots of spare capacity. | Low. ~0.5–0.6 GB memory for the control plane idle. Slightly higher base memory than K3s, but stays stable. Uses Ubuntu snap (might add some overhead). Still lightweight relative to full Kubernetes. | Moderate. The VM defaults to a 2 GB allocation (usage ~0.6 GB idle). Runs a full single-node Kubernetes, plus the VM layer. Not an issue on 16 GB systems, but essentially consumes the resources of a small cluster on one machine. | Moderate. A single master with one worker might use ~1 GB idle after adding typical addons. Each additional node adds minimal overhead (just kubelet, proxy). Similar to running Kubernetes in cloud VMs of comparable size. On 3 nodes with 16 GB each, overhead is negligible in context. |
| CLI Tools | Use k3s for installation and as a wrapper for kubectl (or use standard kubectl). No separate management CLI (K3s is mostly “set and forget”). Some helper scripts (e.g., k3s-killall.sh). Rancher’s GUI can be added on top if desired. | Rich microk8s CLI: e.g., microk8s enable/disable <addon>, microk8s status. Also includes microk8s kubectl. Designed to simplify common tasks (no direct editing of system manifests needed for basics). | minikube CLI to start/stop the cluster, manage config and addons, and get information (IP, service URL, logs). Also provides convenience commands like minikube dashboard. Interact with the cluster via kubectl (config auto-set for the minikube context). | Only kubeadm for initial setup and upgrades. Day-to-day operations via standard kubectl and other Kubernetes tools. No distro-specific CLI beyond bootstrap. We’ll be working with raw Kubernetes commands and perhaps OS tools for maintenance. |
| UI / Dashboard | Not included by default. Can manually install the Kubernetes Dashboard or use external tools (nothing Rancher-specific unless we add Rancher separately). K3s focuses on headless operation. | Not included by default, but available via one command: microk8s enable dashboard deploys the standard Dashboard UI. Easy cluster access via microk8s dashboard-proxy. Also works well with the Lens GUI if desired (Lens can directly access MicroK8s). | Not enabled by default, but the minikube dashboard command will deploy and open the Dashboard web UI for us. This is meant for convenience in local dev – one command and we have a GUI to see workloads. | Not included. We may install the Dashboard (apply the YAML) if we want. Otherwise, use the CLI or third-party dashboard apps. In a homelab, one might install the Dashboard and create a NodePort or use kubectl proxy to view it. Kubeadm doesn’t concern itself with UIs. |

Sources: The data above is synthesized from official docs and user guides: for instance, memory footprints from MicroK8s’ own comparison, default storage in K3s docs, K3s service LB behavior from K3s documentation, MicroK8s add-on details from Canonical docs, Minikube PV and tunnel from the Kubernetes docs, and general experience reports. (See References for full citations.)

Recommendations

Given the priorities (ease of setup/maintenance, and built-in support for storage and load balancers) and the scenario (3 Ubuntu nodes, not concerned with HA):

  • Top Choices: K3s or MicroK8s are the most suitable for a 3-node homelab:

    • Both are extremely easy to install (a single command on each node) and require minimal ongoing maintenance. They abstract away most of the complexity of running a cluster.
    • Both support multi-node clustering out-of-the-box (we can join our 3 nodes and see one unified cluster).
    • They each provide a solution for Persistent Volumes and LoadBalancers without much effort: K3s includes them by default (Local Path storage, Klipper LB) and MicroK8s makes them available via simple enable commands. This means we can deploy typical applications (databases with PVCs, services with type=LoadBalancer) with minimal manual setup (an example manifest follows these recommendations).
    • K3s might appeal if we want the absolutely smallest footprint and don’t mind using its built-in defaults (Traefik ingress, etc.). It’s a “set up and it just works” approach with opinionated defaults. It’s also very popular in the homelab community for its simplicity. We’ll use standard kubectl mostly, and can tweak or disable the packaged components if needed. K3s might be preferable if we’re not on Ubuntu or if we like Rancher’s ecosystem (or plan to use Rancher’s management UI later).
    • MicroK8s might appeal if we prefer an Ubuntu-supported solution and like the idea of one-command enabling of features. It’s essentially vanilla Kubernetes under the hood, which some find easier to extend. The add-ons (like microk8s enable ingress dns storage metallb) can get us a fully functional “micro cloud” in minutes. MicroK8s also handles updates gracefully via snaps, which can be nice to keep our cluster up-to-date without manual intervention (we can turn this off or control the channel to avoid surprises). If we’re already running Ubuntu on all nodes (which we are) and don’t mind using snaps, MicroK8s is an excellent choice for a low-maintenance cluster.

    In short: Can’t go wrong with either K3s or MicroK8s for this scenario. Both will give us an easy, homelab-friendly Kubernetes with the features we need. Many users report positive experiences with both in 2–3 node home setups. MicroK8s might have a slight edge in ease-of-use (because of the add-ons and integration), while K3s might have a slight edge in running lean and being straightforward under the hood.

  • When to choose Minikube: If we were just running on a single machine or wanted a quick throwaway dev cluster, Minikube is fantastic for that. It’s the easiest way to spin up Kubernetes on a laptop or one node for testing. However, for a permanent 3-node cluster, Minikube is not the right tool – it won’t merge those 3 nodes into one cluster. We’d end up under-utilizing our hardware or managing 3 separate clusters, which is not desired. So, in this homelab with multiple nodes, Minikube is not recommended as the main solution. We might still use Minikube on our personal computer for trying things before deploying to the homelab cluster, but for the cluster itself, use something like K3s/MicroK8s.

  • When to choose Kubeadm: If our goal was to learn Kubernetes internals or to have full control and the “production-like” setup, kubeadm is a good exercise. It will force us to understand how to install CNI, storage, etc., and we’ll basically build the cluster piece by piece. In terms of ease-of-use, though, kubeadm is the most hands-on. Every feature we need (like storage or LB) we have to configure. For a learning-focused homelab, this could be a pro (educational); for a just-get-it-working homelab, this is a con. Also, maintenance will be more involved (especially during upgrades). Unless we specifically want the vanilla Kubernetes experience for learning or specific custom needs, using K3s or MicroK8s will save us a lot of time and headaches in a homelab environment. That said, some experienced users prefer kubeadm even at home to avoid any vendor-specific quirks and have everything under their control. It’s really up to how much effort we want to spend. For most, kubeadm is overkill for a small cluster where high availability isn’t a concern.

  • Other Options: There are a few other lightweight Kubernetes flavors, like k0s (by Mirantis) and tools like kind (Kubernetes in Docker). For completeness:

    • k0s is another single-binary Kubernetes distro, with a similar aim to K3s/MicroK8s, that focuses on flexibility and a minimal footprint. It’s relatively new but has fans in the homelab community. It can also run on our 3 nodes easily. It doesn’t (currently) have the same large user base as K3s/MicroK8s, but it’s an option to watch (especially if we like the idea of an open-source, configurable, minimal Kubernetes – some reports even show k0s using slightly less idle resources than K3s/MicroK8s in similar setups).
    • kind is mainly for testing Kubernetes clusters in Docker containers (often used for CI pipelines). It’s not something we’d run as our always-on homelab cluster – it’s more for quick ephemeral clusters on one machine (similar to Minikube’s purpose).
    • Rancher Kubernetes Engine (RKE) or K3d or others are also out there, but those are either geared toward containerized clusters (k3d runs a K3s cluster in Docker) or more complex deployment scenarios. In a homelab, K3s and MicroK8s have kind of become the de facto easy solutions.
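
As a quick sanity check once the chosen cluster is up (referenced in the recommendations above), a manifest like the sketch below exercises both dynamic PV provisioning and LoadBalancer services. It should work unchanged on K3s, on MicroK8s with the hostpath-storage and metallb add-ons enabled, or on a kubeadm cluster with a default StorageClass and MetalLB; the nginx image, names, and sizes are placeholders.

```bash
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-data
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi          # fulfilled by the default StorageClass (local-path / hostpath)
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-web
spec:
  replicas: 1
  selector:
    matchLabels: { app: demo-web }
  template:
    metadata:
      labels: { app: demo-web }
    spec:
      containers:
      - name: web
        image: nginx:alpine
        ports:
        - containerPort: 80
        volumeMounts:
        - name: data
          mountPath: /usr/share/nginx/html
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: demo-data
---
apiVersion: v1
kind: Service
metadata:
  name: demo-web
spec:
  type: LoadBalancer        # ServiceLB (K3s) or MetalLB assigns the external IP
  selector:
    app: demo-web
  ports:
  - port: 80
    targetPort: 80
EOF

# The PVC should bind and the service should get an EXTERNAL-IP:
kubectl get pvc demo-data
kubectl get svc demo-web
```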

Conclusion: For a homelab with 3 decent nodes, MicroK8s or K3s are the recommended choices to get a functional Kubernetes cluster with minimal hassle. They will let us leverage all our nodes in one cluster, and provide built-in support for persistent volumes and LoadBalancer services, which is exactly what we asked for. If we prefer a more plug-and-play, Ubuntu-integrated solution, go with MicroK8s. If we prefer a super lightweight, proven solution with Rancher’s backing, go with K3s. We’ll have a working cluster in minutes either way. Once up, we can deploy the Kubernetes Dashboard or other tools to manage it, and start hosting our applications with persistent storage and easy service exposure. Enjoy the homelab Kubernetes journey!

Kubernetes Distributions Homepages