Kubernetes Setup on Proxmox with K3s, Terraform and Ansible

Jul 17, 2024

Kubernetes (aka K8s) vs K3s

K3s is a lightweight K8s distribution built for edge computing and other resource-constrained environments where a small footprint matters. Paired with Proxmox, a powerful open-source virtualization platform, it allows for efficient and flexible cluster management. Using Terraform for infrastructure provisioning and Ansible for configuration management streamlines the deployment process, giving us a consistent and repeatable setup.

Prerequisites

Assuming we have Proxmox VE running, with Terraform and Ansible installed on our workstation, we need to ensure the Proxmox server has a properly configured network bridge to allow communication between VMs and the external network.
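If the bridge isn't configured yet, it lives in /etc/network/interfaces on the Proxmox host. Below is a minimal sketch; the physical NIC name (eno1) and the host address are assumptions for the 192.168.100.0/24 network used throughout this post:

# /etc/network/interfaces (on the Proxmox host)
# Minimal vmbr0 bridge sketch; adjust eno1 and the addresses to your network
auto vmbr0
iface vmbr0 inet static
    address 192.168.100.2/24
    gateway 192.168.100.1
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0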

Infrastructure Setup

First things first, we need a couple of VMs to use as K8s nodes. Create the virtual machines (VMs) that will form the K3s cluster, each with adequate CPU, memory, and storage allocated. Configure the network interfaces so they are connected to the appropriate Proxmox bridge, allowing network communication between the nodes.

In case you haven’t already, please refer to the previous post on setting up Terraform with Proxmox: https://hasithsen.pages.dev/tech/posts/setting-up-nextcloud-terraform-ansible/. In this post, let’s not repeat ourselves and jump straight into creating the K8s cluster.

Create the below file in your Terraform root directory from the above post.

# srv-k8s-cluster.tf

resource "proxmox_vm_qemu" "srv-k8s-masters" {
  count       = var.ci_k8s_master_count
  name        = "k8s-master-${count.index + 1}"
  desc        = "Kubernetes Master Node ${count.index + 1}"
  vmid        = var.ci_start_vmid + count.index
  target_node = "mate"

  clone = "ubuntu-cloud-init"

  agent   = 1
  cores   = 2
  sockets = 1
  cpu     = "host"
  memory  = 4096

  bootdisk                = "scsi0"
  scsihw                  = "virtio-scsi-pci"
  cloudinit_cdrom_storage = "local-lvm"
  onboot                  = true

  os_type      = "cloud-init"
  ipconfig0    = "ip=${var.ci_k8s_base_master_ip}${count.index}/${var.ci_network_cidr},gw=${var.ci_ip_gateway}"
  nameserver   = "8.8.8.8 8.8.4.4 192.168.100.1"
  searchdomain = "sen.local"
  ciuser       = var.ci_user
  cipassword   = var.ci_password
  sshkeys      = <<-EOF
  ${file(var.ci_ssh_public_key)}
  EOF

  qemu_os = "other"

  network {
    bridge = "vmbr0"
    model  = "virtio"
  }

  disks {
    scsi {
      scsi0 {
        disk {
          size    = 20
          storage = "local-lvm"
        }
      }
    }
  }

  lifecycle {
    ignore_changes = [
      network
    ]
  }
}

resource "proxmox_vm_qemu" "srv-k8s-nodes" {
  count       = var.ci_k8s_node_count
  name        = "k8s-node-${count.index + 1}"
  desc        = "Kubernetes Node ${count.index + 1}"
  vmid        = var.ci_start_vmid + (count.index + var.ci_k8s_master_count)
  target_node = "mate"

  clone = "ubuntu-cloud-init"

  agent   = 1
  cores   = 2
  sockets = 1
  cpu     = "host"
  memory  = 4096

  bootdisk                = "scsi0"
  scsihw                  = "virtio-scsi-pci"
  cloudinit_cdrom_storage = "local-lvm"
  onboot                  = true

  os_type      = "cloud-init"
  ipconfig0    = "ip=${var.ci_k8s_base_node_ip}${count.index}/${var.ci_network_cidr},gw=${var.ci_ip_gateway}"
  nameserver   = "8.8.8.8 8.8.4.4 192.168.100.1"
  searchdomain = "sen.local"
  ciuser       = var.ci_user
  cipassword   = var.ci_password
  sshkeys      = <<-EOF
  ${file(var.ci_ssh_public_key)}
  EOF

  qemu_os = "other"

  network {
    bridge = "vmbr0"
    model  = "virtio"
  }

  disks {
    scsi {
      scsi0 {
        disk {
          size    = 20
          storage = "local-lvm"
        }
      }
    }
  }

  lifecycle {
    ignore_changes = [
      network
    ]
  }
}

Also, add the below into your credentials.auto.tfvars file.

# credentials.auto.tfvars

# Cloud init configuration
ci_k8s_master_count   = 1
ci_k8s_node_count     = 1
ci_k8s_base_master_ip = "192.168.100.1" # Will generate 192.168.100.1X
ci_k8s_base_node_ip   = "192.168.100.2" # Will generate 192.168.100.2X
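If you're extending the previous post's configuration, these four variables still need declarations; a minimal sketch for variables.tf (the descriptions are mine, and the remaining ci_* variables come from that post):

# variables.tf (additions)

variable "ci_k8s_master_count" {
  type        = number
  description = "Number of K3s server (master) VMs"
}

variable "ci_k8s_node_count" {
  type        = number
  description = "Number of K3s agent (worker) VMs"
}

variable "ci_k8s_base_master_ip" {
  type        = string
  description = "Master IP prefix; count.index is appended, e.g. 192.168.100.1 becomes 192.168.100.10"
}

variable "ci_k8s_base_node_ip" {
  type        = string
  description = "Worker IP prefix; count.index is appended, e.g. 192.168.100.2 becomes 192.168.100.20"
}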

Then fire up a terminal and run the commands below to “materialize” the QEMU VMs.

# I've aliased "t" to terraform in ~/.bashrc
alias t=terraform
# Format all Terraform files in the current directory and all subdirectories;
# keeps the code consistent and easier to read, maintain, and collaborate on.
t fmt -recursive
t plan -out tf.plan # use -target=resource_type.resource_name for resource targeting
t apply tf.plan

We should have two VMs up and running with Ubuntu by now.
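A quick way to confirm, assuming the IPs from the tfvars above and the user/key from the cloud-init config (qm is Proxmox's CLI for managing QEMU VMs):

# On the Proxmox host: both VMs should be listed as running
qm list
# From the workstation: cloud-init should have applied the IPs and SSH key
ssh your-ci-user@192.168.100.10 hostname   # replace your-ci-user with the ci_user value
ssh your-ci-user@192.168.100.20 hostname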

Kubernetes Setup with K3s and Ansible

Here comes the part where we kick off a playbook run and wait for things to just appear (as if we did any manual work while Terraform created the VMs for us… hahaha).

Add the below into your requirements.yaml inside the Ansible root directory (yes, again from “that” post) if you don’t have it already (we should by now!).

collections:
  - name: community.general
  - name: ansible.posix
  - name: community.crypto
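Then install the collections with ansible-galaxy:

ansible-galaxy collection install -r requirements.yaml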

Then download that specific revision from the GitHub repo at https://github.com/k3s-io/k3s-ansible/tree/e53d895428d38b4757a9d00f417b484828de8b23 and copy its roles and playbooks directories into the Ansible root directory, for example as shown below. If you want a “better” method, please check https://github.com/k3s-io/k3s-ansible/issues/316#issuecomment-2206164934.
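One way to grab that pinned revision, assuming git is available and the repo's directories are named roles and playbooks as described above (the /tmp path is arbitrary):

# Fetch the pinned revision of k3s-ansible
git clone https://github.com/k3s-io/k3s-ansible.git /tmp/k3s-ansible
git -C /tmp/k3s-ansible checkout e53d895428d38b4757a9d00f417b484828de8b23
# Copy into the layout shown further below (run from the Ansible root directory)
mkdir -p roles/k3s-ansible playbooks/k3s-ansible
cp -r /tmp/k3s-ansible/roles/. roles/k3s-ansible/
cp -r /tmp/k3s-ansible/playbooks/. playbooks/k3s-ansible/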

Now, add the below into your inventory.yaml.

# inventory.yaml

k3s_cluster:
  children:
    server:
      hosts:
        192.168.100.10:
    agent:
      hosts:
        192.168.100.20:

  # Required Vars
  vars:
    ansible_port: 22
    ansible_user: preferred_username_from_tf_config # should match ci_user from the Terraform config
    k3s_version: v1.26.9+k3s1
    # The token should be a random string of reasonable length. You can generate
    # one with the following commands:
    # - openssl rand -base64 64
    # - pwgen -s 64 1
    # You can use ansible-vault to encrypt this value / keep it secret.
    token: "token-you-create-should-go-here"
    api_endpoint: "{{ hostvars[groups['server'][0]]['ansible_host'] | default(groups['server'][0]) }}"
    extra_server_args: ""
    extra_agent_args: ""

    server_config_yaml: |
      disable:
        - traefik
        - servicelb
        - local-storage

While there are other configuration options, the above is all we need for the steps in this post.
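For example, to generate a token and (optionally) keep it out of plaintext with ansible-vault; the encrypt_string output block goes in place of the plain token value:

# Generate a random token
openssl rand -base64 64
# Optionally encrypt it (you'll be prompted for a vault password)
ansible-vault encrypt_string 'paste-generated-token-here' --name 'token'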

# Ansible root directory structure

( pwd )
|
|- inventory.yaml
|
|- ansible.cfg
|
|- playbooks/
|    |
|    - k3s-ansible/
|        |
|        - reboot.yml
|        - reset.yml
|        - site.yml
|        - upgrade.yml
|
|- roles/
|    |
|    - k3s-ansible/
|        |
|        - ( k3s roles installed here )
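For the playbooks to find the roles in that nested directory, ansible.cfg needs a roles_path pointing at it; a minimal sketch (the inventory line is an assumption on my part):

# ansible.cfg
[defaults]
inventory  = inventory.yaml
roles_path = roles/k3s-ansible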

In my case, I avoided calling the airgap and raspberrypi roles from site.yml since neither was necessary for my setup; YMMV depending on yours.

Next, we can invoke the playbook with the command below.

ansible-playbook -i inventory.yaml playbooks/k3s-ansible/site.yml
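Once the playbook finishes, a quick sanity check from the workstation; this is a sketch assuming the inventory above, plus the fact that K3s writes its kubeconfig to /etc/rancher/k3s/k3s.yaml on the server node (and that the cloud-init user has passwordless sudo, as is typical for Ubuntu cloud images):

# Pull the kubeconfig from the server node and point it at the node's IP
ssh preferred_username_from_tf_config@192.168.100.10 sudo cat /etc/rancher/k3s/k3s.yaml > ~/.kube/config
sed -i 's/127.0.0.1/192.168.100.10/' ~/.kube/config
# Both nodes should report Ready
kubectl get nodes -o wide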

Now we should have a shiny Kubernetes cluster (running K3s, obviously). Time to deploy something…!
