What is it?
Nextcloud is an open-source solution that provides functionality similar to Dropbox. With my new homelab waiting for its first hosted service, I chose to deploy Nextcloud on a VM.
Provisioning the Compute resources
We can create VMs in Proxmox using QEMU (short for Quick Emulator), an open-source hypervisor that emulates a physical computer, with KVM providing hardware acceleration. From the perspective of the host system it runs on, QEMU is a user program with access to a number of local resources like partitions, files, and network cards, which are passed to the emulated computer, where they appear as real devices.
I wanted to automate the infrastructure provisioning, so I opted to use Terraform/OpenTofu.
But first, we need an OS template to clone from (cloning being the method I chose for this setup). For that, execute the commands below on the Proxmox host.
# Downloads the Ubuntu 22.04 (Jammy) cloud image
wget https://cloud-images.ubuntu.com/jammy/current/jammy-server-cloudimg-amd64.img
# libguestfs-tools provides virt-customize, used below
apt install libguestfs-tools
# Customizes the image: installs qemu-guest-agent and net-tools, and clears the machine-id so clones get unique IDs
virt-customize -a jammy-server-cloudimg-amd64.img --install qemu-guest-agent,net-tools --truncate /etc/machine-id
# Creates a new VM
qm create 8000 --name ubuntu-cloud-init --cores 2 --memory 2048 --net0 virtio,bridge=vmbr0
# Imports a disk image into the VM
qm disk import 8000 jammy-server-cloudimg-amd64.img local-lvm
# Sets the SCSI controller type to virtio-scsi-pci, which is a high-performance paravirtualized SCSI controller and attaches the imported disk image to the VM as the first SCSI disk (scsi0).
qm set 8000 --scsihw virtio-scsi-pci --scsi0 local-lvm:vm-8000-disk-0
# Sets the boot order to boot from the first disk. In this context, c refers to the first disk and specifies that the VM should boot from the SCSI disk (scsi0), which was set in the previous command.
qm set 8000 --boot c --bootdisk scsi0
# Enables the QEMU Guest Agent for the VM with ID 8000. The QEMU Guest Agent is a service that runs inside the VM and allows the host to perform certain operations within the guest, such as shutdown, file system freeze, and thaw, fetching IP addresses, etc.
qm set 8000 --agent 1
# Sets up the first serial device (serial0) to use a Unix socket. This can be useful for connecting to the VM's console or for other serial communication purposes.
qm set 8000 --serial0 socket
# Configures the VM to use the serial console as its VGA device. This means that the display output of the VM will be directed to the serial console, which can be useful for VMs that don't require a graphical interface and are managed via the command line.
qm set 8000 --vga serial0
# Enables hotplugging for network interfaces, USB devices, and disks. Hotplugging allows these devices to be added or removed while the VM is running without needing to shut it down.
qm set 8000 --hotplug network,usb,disk
# Converts cloud init VM into template
qm template 8000
Once the above is done, we can proceed with the rest.
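If you want to sanity-check the template before moving on, the following should list it and dump its configuration (still on the Proxmox host):
# Confirm the template exists and inspect its configuration
qm list | grep 8000
qm config 8000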
For Terraform to authenticate with the Proxmox API, we'll use an API token (the other option being username/password authentication).
- Go to Datacenter in the left pane.
- To create a user for the Proxmox Terraform configuration, navigate to Permissions > Users, and click Add to create the terraform user.
- Under API Tokens, create a new token associated with the user. Save this key securely since it won't be displayed again.
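If you prefer the shell over the web UI, roughly the following pveum commands achieve the same. Treat this as a sketch: the role granted (PVEVMAdmin on /) and the token name "provider" are my assumptions, and exact flags can differ slightly between Proxmox versions.
# Create the terraform user in the PVE realm
pveum user add terraform@pve
# Grant it a role with enough privileges to manage VMs (assumption: PVEVMAdmin on /)
pveum acl modify / --users terraform@pve --roles PVEVMAdmin
# Create an API token; --privsep 0 lets the token inherit the user's permissions
pveum user token add terraform@pve provider --privsep 0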
Looking at the HCL we have, the following is the provider configuration.
# provider.tf
terraform {
  required_version = ">= 1.9.0"

  required_providers {
    proxmox = {
      source  = "telmate/proxmox"
      version = "3.0.1-rc1"
    }
  }
}

provider "proxmox" {
  pm_api_url          = var.proxmox_api_url
  pm_api_token_id     = var.proxmox_api_token_id
  pm_api_token_secret = var.proxmox_api_token_secret
  pm_tls_insecure     = true
  pm_log_enable       = true
  pm_log_file         = "terraform-plugin-proxmox.log"
  pm_debug            = true
  pm_log_levels = {
    _default    = "debug"
    _capturelog = ""
  }
}
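Since pm_debug and pm_log_file are enabled above, you can tail the log in a second terminal while Terraform runs to watch the raw Proxmox API calls:
# Watch the provider's API activity during plan/apply
tail -f terraform-plugin-proxmox.log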
Then we have the necessary variables defined.
# credentials.auto.tfvars
# Proxmox API
proxmox_api_url = "https://192.168.100.1:8006/api2/json"
proxmox_api_token_id = "your-api-token-id-here"
proxmox_api_token_secret = "your-api-token-secret-here"
# Cloud init configuration
ci_ssh_public_key = "~/.ssh/id_ed25519.pub"
ci_ssh_private_key = "~/.ssh/id_ed25519"
ci_user = "preferred_username"
ci_password = "strongpass"
ci_ip_gateway = "192.168.100.1"
ci_network_cidr = 24
ci_start_vmid = 100
ci_vm_node_count = 1
ci_vm_base_node_ip = "192.168.100.3" # Will generate 192.168.100.3X
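If you'd rather not keep the token secret in a tfvars file on disk, Terraform also reads variables from TF_VAR_-prefixed environment variables, so something like this works as an alternative (values are placeholders):
# Alternative to credentials.auto.tfvars for the sensitive bits
export TF_VAR_proxmox_api_token_id="your-api-token-id-here"
export TF_VAR_proxmox_api_token_secret="your-api-token-secret-here"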
And the variable declarations are as below.
# variables.tf
variable "proxmox_api_url" {
type = string
}
variable "proxmox_api_token_id" {
type = string
sensitive = true
}
variable "proxmox_api_token_secret" {
type = string
sensitive = true
}
variable "ci_ssh_public_key" {
type = string
sensitive = true
}
variable "ci_ssh_private_key" {
type = string
sensitive = true
}
variable "ci_user" {
type = string
sensitive = true
}
variable "ci_password" {
type = string
sensitive = true
}
variable "ci_ip_gateway" {
type = string
}
variable "ci_network_cidr" {
type = number
}
variable "ci_start_vmid" {
type = number
}
variable "ci_vm_node_count" {
type = number
}
variable "ci_vm_base_node_ip" {
type = string
}
And finally, the resource definitions.
# srv-vm-nodes.tf
resource "proxmox_vm_qemu" "srv-vm-nodes" {
count = var.ci_vm_node_count
name = "vm-node-${count.index + 1}"
desc = "VM Node ${count.index + 1}"
vmid = var.ci_start_vmid + count.index
target_node = "proxmox-node-name-here"
clone = "ubuntu-cloud-init"
agent = 1
cores = 2
sockets = 1
cpu = "host"
memory = 2048
bootdisk = "scsi0"
scsihw = "virtio-scsi-pci"
cloudinit_cdrom_storage = "local-lvm"
onboot = true
os_type = "cloud-init"
ipconfig0 = "ip=${var.ci_vm_base_node_ip}${count.index}/${var.ci_network_cidr},gw=${var.ci_ip_gateway}"
nameserver = "8.8.8.8 8.8.4.4 192.168.100.1"
searchdomain = "yourdomain.local"
ciuser = var.ci_user
cipassword = var.ci_password
sshkeys = <<EOF
${file(var.ci_ssh_public_key)}
EOF
qemu_os = "other"
network {
bridge = "vmbr0"
model = "virtio"
}
disks {
scsi {
scsi0 {
disk {
size = 20
storage = "local-lvm"
}
}
}
}
lifecycle {
ignore_changes = [
network
]
}
}
You'll notice that, as per the IP scheme above (192.168.100.3X), we'll have at most 10 VMs before we run out of addresses. That's fine in my case, since I've only got 20G on the Proxmox host and don't plan on exceeding that ceiling. Also, we can increase or decrease the VM count just by changing the ci_vm_node_count variable, which was something I needed to have. You are welcome to customize the configuration as you see fit, YMMV :)
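For example, scaling out to three VMs is just a matter of overriding that variable for a single run instead of editing the tfvars file:
# Override the node count at plan time
terraform plan -var="ci_vm_node_count=3" -out tf.plan
terraform apply tf.plan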
Next up, we can run the same old drill.
Since I’ve installed Terraform and Ansible on the Proxmox host, I’ll be running the commands from there. If your controller is different, make sure that the Proxmox API is accessible and you can ssh to the VM from the controller.
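A quick reachability check against the API URL from earlier doesn't hurt; even an HTTP 401 response is fine here, since it means the endpoint answered and merely wants authentication (-k matches the pm_tls_insecure setting):
# Confirm the Proxmox API answers from the controller
curl -k https://192.168.100.1:8006/api2/json/version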
# I've "t" aliased to terraform in ~/.bashrc
alias t=terraform
t init
t plan -out tf.plan # use --target=resource_type.resource_name for resource targeting
t apply tf.plan
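Once the apply finishes, a couple of quick checks confirm the clone came up and cloud-init did its thing; the IP, key, and username below come from the tfvars above:
# List what Terraform now manages
t state list
# On the Proxmox host: the new VM should appear alongside the template
qm list
# SSH into the first node with the cloud-init user and key
ssh -i ~/.ssh/id_ed25519 preferred_username@192.168.100.30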
Now that we have a VM up and running, it's time to install Nextcloud!
“A” is for Ansible!
What good is SSH'ing into the instance and doing configuration by hand that we can't replicate? Let's use Ansible roles to get this done.
I've used some awesome Ansible roles for the task, including, but not limited to, https://github.com/robertdebock/ansible-role-nextcloud.
As a rule of thumb, you are advised to understand code you find on the open Internet before executing it, or at least skim through it.
# ansible.cfg
[defaults]
inventory = ./inventory.yaml
roles_path = ./roles
# ;)
host_key_checking = False
nocows = True
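To confirm Ansible actually picks up this ansible.cfg (it needs to sit in the directory you run the commands from, per the structure shown later), dump the settings that differ from the defaults:
# Shows the config file in use and any non-default settings
ansible-config dump --only-changed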
# requirements.yaml
---
collections:
  - name: community.general
  - name: ansible.posix
  - name: community.crypto
  - name: community.mysql

roles:
  - name: robertdebock.nextcloud
    version: 2.2.5
  - name: robertdebock.bootstrap
    version: 7.0.2
  - name: robertdebock.buildtools
    version: 3.1.21
  - name: robertdebock.core_dependencies
    version: 2.2.11
  - name: robertdebock.cron
    version: 2.2.5
  - name: robertdebock.epel
    version: 4.1.4
  - name: robertdebock.httpd
    version: 7.4.2
  - name: robertdebock.mysql
    version: 4.3.0
  - name: robertdebock.openssl
    version: 2.2.11
  - name: robertdebock.php
    version: 4.1.8
  - name: robertdebock.php_fpm
    version: 3.2.3
  - name: robertdebock.python_pip
    version: 4.4.2
  - name: robertdebock.redis
    version: 3.2.5
  - name: robertdebock.remi
    version: 3.1.6
  - name: robertdebock.selinux
    version: 3.1.11
We have a YAML inventory here; an INI one should do just fine as well.
# inventory.yaml
---
all:
  hosts:
    nextcloud:
      ansible_host: 192.168.100.30
  vars:
    ansible_port: 22
    ansible_user: preferred_username_from_tf_config
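Before running the full playbook, a quick ping confirms that the inventory, SSH user, and key all line up:
# Verify Ansible can reach the VM from the inventory
ansible -i inventory.yaml nextcloud -m ansible.builtin.ping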
And we have the Ansible playbook with the necessary roles included.
# nextcloud.yaml
---
- name: Nextcloud
  hosts: nextcloud
  become: true
  gather_facts: false
  roles:
    - role: robertdebock.bootstrap
    - role: robertdebock.core_dependencies
    - role: robertdebock.cron
    - role: robertdebock.buildtools
    - role: robertdebock.epel
    - role: robertdebock.python_pip
    - role: robertdebock.openssl
      openssl_items:
        - name: apache-httpd
          common_name: "{{ ansible_fqdn }}"
    - role: robertdebock.selinux
    - role: robertdebock.httpd
    - role: robertdebock.redis

- name: Continue prepare with facts
  hosts: nextcloud
  become: true
  gather_facts: false
  pre_tasks:
    - name: Include remi
      ansible.builtin.include_role:
        name: robertdebock.remi
      when:
        - ansible_distribution != "Fedora"
      vars:
        remi_enabled_repositories:
          - php74
  roles:
    - role: robertdebock.php
      php_memory_limit: 512M
      php_upload_max_filesize: 8G
      php_post_max_size: 8G
    - role: robertdebock.php_fpm
    - role: robertdebock.mysql
      mysql_databases:
        - name: nextcloud
          encoding: utf8
          collation: utf8_bin
      mysql_users:
        - name: nextcloud
          password: secure_password_here
          priv: "nextcloud.*:ALL"

- name: Converge
  hosts: nextcloud
  become: true
  gather_facts: true
  roles:
    - role: robertdebock.nextcloud
      nextcloud_apps:
        - name: richdocumentscode
      nextcloud_settings:
        - name: max_chunk_size
          section: files
          value: 0

- name: Post local
  hosts: localhost
  become: true
  gather_facts: true
  tasks:
    - name: Export nfs path for nextcloud host
      ansible.builtin.lineinfile:
        path: /etc/exports
        # Quotes around 'nextcloud' are needed; hostvars[nextcloud] would look up an undefined variable
        line: "/srv/shared {{ hostvars['nextcloud']['ansible_default_ipv4']['address'] }}(rw,sync,no_subtree_check)"
        create: yes

- name: Post Nextcloud
  hosts: nextcloud
  become: true
  gather_facts: true
  tasks:
    - name: Add nfs mount in nextcloud host
      ansible.builtin.lineinfile:
        path: /etc/fstab
        line: "192.168.100.1:/srv/shared /mnt/shared nfs defaults 0 0"
        create: yes
Note the "Post local" and "Post Nextcloud" plays above, where I export an NFS share from the host and mount it on the VM. This serves as shared storage; if you take the same path and plan to use shared storage, you might also need to update the Nextcloud config to point at it.
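For reference, once the playbook has written /etc/exports and /etc/fstab, something along these lines activates the share on both ends; this assumes the NFS server package is already present on the Proxmox host and an NFS client (e.g. nfs-common) on the VM:
# On the Proxmox host: re-read /etc/exports
exportfs -ra
# On the VM: create the mountpoint and mount everything listed in /etc/fstab
mkdir -p /mnt/shared
mount -a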
# directory structure
(pwd)
|- inventory.yaml
|- ansible.cfg
|- requirements.yaml
|- playbooks
|  |- nextcloud.yaml
|- roles
   |- (roles installed here)
With the below, we can install the dependencies and execute the playbook, assuming your directory structure matches the above.
# install roles and collections
ansible-galaxy role install -r requirements.yaml
ansible-galaxy collection install -r requirements.yaml
# execute plays
ansible-playbook -i inventory.yaml playbooks/nextcloud.yaml
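If you want to be a bit more careful, a syntax check and a dry run before the real thing don't hurt; note that check mode can still report failures for tasks that depend on changes made by earlier ones:
# Optional sanity checks before the real run
ansible-playbook -i inventory.yaml playbooks/nextcloud.yaml --syntax-check
ansible-playbook -i inventory.yaml playbooks/nextcloud.yaml --check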
Once the above is done, we are good to access the Nextcloud web UI. Need a reverse proxy? In another post, maybe :)
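A quick header check tells you whether the web server on the VM is answering before you open a browser; the exact URL path for Nextcloud depends on the role's defaults, so adjust as needed:
# Confirm the web server responds on the VM's IP
curl -I http://192.168.100.30/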
Note that while our Terraform config is declarative, the Ansible tasks are imperative, a distinction some interviewers might want you to point out explicitly. IMHO, that should go without saying if you both know your stuff.
Addendum: also, don't forget to run terraform fmt --recursive from the root folder of our Terraform code :) Thanks Chamara aiye!
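For completeness, that formatting pass, plus an optional validation, looks like this (terraform validate is a separate built-in subcommand worth running alongside it):
# Format all .tf files recursively and validate the configuration
terraform fmt --recursive
terraform validate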