Arclight Automata

Step-by-Step Guide: Host Your Own Apps on OVH Bare Metal with Proxmox, Ansible, & Kubernetes!


Watch on YouTube: "Deploying Code to a Dedicated Server"

This is a written summary of the above YouTube video I made, along with code snippets and configuration files.

The Problem with Platforms-as-a-Service (PaaS) / Cloud

If you're like me, you've got a ton of programming side project ideas that you want to build. And in the age of AI-assisted programming, it's easier than ever to actually get them all built. But when it comes time to deploy, the options are still not great.

Platforms as a service like Render, Vercel, Heroku, Railway, and fly.io can make deployment simple, but for me, the price and performance are not quite there.

Let me show you what I mean. Looking at Render's Compute pricing, which I find representative of what similar platforms charge:

For web server and background worker processes:

  • Free tier: 0.1 virtual CPUs (vCPU), 512MB RAM (development only)
  • Starter: $7/month, 0.5 vCPU, 512MB RAM (still not production-ready)
  • Basic: $25/month, 1 vCPU, 2GB RAM (minimum viable for production)

For managed PostgreSQL:

  • Free tier: 30-day limit, 0.1 vCPU, 256MB RAM (testing only)
  • Starter: $19/month, 0.5 vCPU, 1GB RAM (still not production-ready)
  • Basic: $55/month, 1 vCPU, 4GB RAM (minimum viable)

So you're looking at $80/month minimum for a basic web service with database. And that's just for one project - if you're building multiple side projects, that's another $25+ per project (potentially $80+ per project if you need separate databases).

Not only is this expensive, but my experience is that these vCPUs are often sluggish and unpredictable. You might also deal with cold starts where your app gets frozen if it hasn't been used. And this isn't counting storage, network egress, or other variable costs.

The Dedicated Server Alternative

Instead, for the types of apps I run, I find it a lot more cost-effective to run a dedicated server. This is an actual physical server (also called "bare metal") in a datacenter: the provider manages the hardware, and I'm fully in control of the software. I get my own CPU, RAM, NVMe drives, and symmetric network transit - no noisy neighbors or unpredictable performance.

What We'll Build

In this guide, I'll walk through the entire process of:

  1. Getting a dedicated server from OVH
  2. Setting up Proxmox (hypervisor for managing VMs)
  3. Using Ansible to provision Proxmox settings, K3s, PostgreSQL and Redis VMs
  4. Setting up a Kubernetes cluster using K3s
  5. Deploying demo applications
  6. Using Tailscale for secure VM access
  7. Exposing services publicly with Cloudflare Tunnels

Trade-offs to Consider

Benefits of Dedicated Servers

Cost Predictability: You get exactly the hardware you pay for with a guaranteed internet connection (typically 1Gbps). No surprise bills from bandwidth or storage spikes.

Performance: Full resource utilization of all hardware. No worries about noisy neighbors. Your web server and database run on the same machine for ultra-low latency.

Freedom: Run custom workloads like PostGIS extensions, Vespa search, ClickHouse analytics, or any software you need. No pre-approved whitelists.

Drawbacks of Dedicated Servers

Fixed Cost: You pay a flat monthly fee regardless of utilization. If you're not using the resources, you're wasting money.

Scaling Ceiling: You only get as many resources as you have. Scaling requires buying additional servers or migrating to larger hardware.

Responsibility: You handle your own monitoring, alerting, backups, and fixes. No managed service support.

Availability: Hardware failures mean downtime while the data center fixes it. You need proper backup strategies.

Setting Up the OVH Dedicated Server

Choosing Your Server

OVH has several tiers of dedicated servers to choose from. I narrowed in on the "Rise" tier for its balance of performance and price. Once you select a server, you can customize its RAM, storage, and networking. Pay attention to availability as you make your selections: some servers are available instantly in your chosen datacenter, some take extra time to provision, and some aren't currently available for purchase at all.

I went with the OVH Rise 3 server for its higher core count and more storage. Here's what I configured:

  • CPU: 12 cores/24 threads
  • RAM: Upgraded to 128GB
  • Storage: Upgraded to 2x ~2TB NVMe drives
  • Cost: ~$180/month (with annual commitment)

OVH Dashboard

After OVH emails you that your server is ready, you can log in to the OVH dashboard and view your server details. Take note of your IPv4 address as we'll need it later.

We'll want to disable interventions for now, since we don't want OVH getting paged when our server stops responding to pings while we're installing a new operating system.

Installing Proxmox

Since I wanted to set up my two NVMe disks as a ZFS mirror, I couldn't use OVH's Proxmox template. Instead, I had to install manually:

  1. Download Proxmox VE ISO from the official site
  2. Access IPMI/KVM through OVH dashboard
  3. Mount the ISO via Java applet (requires OpenWebStart)
  4. Boot from virtual CD-ROM and install

Installation settings:

  • File system: ZFS
  • RAID level: Mirror (RAID 1 equivalent)
  • Hostname: proxmox-[random-hex].arclight.run

The installation took about an hour due to the virtual mount over residential cable internet.
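
Once Proxmox boots, you can confirm the ZFS mirror from a shell on the node. A quick sanity check (Proxmox names the root pool rpool by default):

# Both NVMe drives should show up as a healthy mirror
zpool status rpool

# Overall pool size, free space, and health
zpool list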

Note: I use a couple of scripts in the video for generating random alphanumeric and random hexadecimal strings. Here's the source of each:

#!/bin/zsh

# Print a random alphanumeric string (default length: 32 characters).
randalpha() {
    local len=${1:-32}
    # Base64-encode random bytes, strip everything but A-Z/a-z/0-9, and trim to the requested length.
    openssl rand -base64 "$len" | tr -dc 'A-Za-z0-9' | head -c "$len"
    echo
}

randalpha "$@"

and

#!/bin/zsh

# Print a random hexadecimal string (default length: 8 characters).
randhex() {
    local char_len=${1:-8}
    local byte_len=$(( (char_len + 1) / 2 ))  # ceiling of char_len / 2
    # openssl prints 2 hex characters per byte, so trim to the exact requested length.
    openssl rand -hex "$byte_len" | head -c "$char_len"
    echo
}

randhex "$@"
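
For example, generating a 10-character hex suffix for the Proxmox hostname:

randhex 10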

Initial Proxmox Configuration

After installation:

  1. Set up DNS record(s) for your Proxmox hostname (A and optionally AAAA records pointing at your server's dedicated IP addresses; see the quick check below)
  2. Configure Let's Encrypt certificate for HTTPS
  3. Run community post-install script to disable nag screens and update repositories (download here)
  4. Download Ubuntu cloud image for VM creation
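
Before requesting the Let's Encrypt certificate in step 2, it's worth confirming that the DNS record from step 1 has propagated. A quick check from your workstation (assuming dig is installed):

# Should print your server's dedicated IPv4 address from the OVH dashboard
dig +short proxmox-[random-hex].arclight.run

# If you added an AAAA record, check it too
dig +short proxmox-[random-hex].arclight.run AAAA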

Step 4's image download needs to be executed in a shell on the Proxmox node (either a web shell in the web GUI or via SSH):

# Download Ubuntu Noble cloud image
cd /var/lib/vz/images
wget https://cloud-images.ubuntu.com/noble/current/noble-server-cloudimg-amd64.img
# Verify checksum
curl -s https://cloud-images.ubuntu.com/noble/current/MD5SUMS | grep "noble-server-cloudimg-amd64.img" | md5sum --check

Ansible Automation

I created an Ansible cookbook to automate the entire setup process. Clone the repository from GitHub here.

The playbook handles:

  • NAT Bridge Setup: Creates vmbr1 bridge on Proxmox node with NAT and DHCP
  • VM Creation: Spins up PostgreSQL, Redis, and K3s nodes
  • Service Configuration: Installs and configures all services
  • Tailscale Integration: Connects all VMs to private network
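
Once the provisioning playbook has run (covered below), you can sanity-check the NAT bridge from a shell on the Proxmox node. A minimal check, assuming the playbook names the bridge vmbr1 as above and sets up NAT with iptables:

# The bridge should exist with a gateway address on the VM subnet (10.0.0.0/24 per group_vars below)
ip addr show vmbr1

# There should be a MASQUERADE rule NATing the VM subnet out the public interface
iptables -t nat -L POSTROUTING -n -v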

Key Configuration Files

Inventory (inventory/hosts.yml):

all:
  children:
    proxmox:
      hosts:
        proxmox-01:
          ansible_host: proxmox-72ab50d7fe.arclight.run
    postgres:
      hosts:
        postgres-01:
          ansible_host: postgres-01.your-tailnet.ts.net
    redis:
      hosts:
        redis-01:
          ansible_host: redis-01.your-tailnet.ts.net
    k3s:
      hosts:
        k3s-master-01:
          ansible_host: k3s-master-01.your-tailnet.ts.net
        k3s-worker-01:
          ansible_host: k3s-worker-01.your-tailnet.ts.net
        k3s-worker-02:
          ansible_host: k3s-worker-02.your-tailnet.ts.net

VM Specifications (group_vars/all.yml):

vms:
  - name: postgres-01
    id: 100
    cores: 6
    memory: 49152  # 48GB
    disk_size: 512G
    ip: 10.0.0.100

  - name: redis-01
    id: 101
    cores: 2
    memory: 8192   # 8GB
    disk_size: 64G
    ip: 10.0.0.110

  - name: k3s-master-01
    id: 102
    cores: 4
    memory: 16384  # 16GB
    disk_size: 128G
    ip: 10.0.0.120

Ansible Setup

Follow these steps to initialize Ansible and encrypt the vault secrets:

# Install dependencies
./initialize.sh

# Activate Python virtual environment
source .venv/bin/activate

# Create and encrypt vault file
randalpha 32 > ~/.ansible-vault-pass
chmod 600 ~/.ansible-vault-pass
ansible-vault encrypt group_vars/all/vault.yml --vault-password-file ~/.ansible-vault-pass --encrypt-vault-id default

# Edit encrypted secrets with:
ansible-vault edit group_vars/all/vault.yml --vault-password-file ~/.ansible-vault-pass
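
To confirm the vault is wired up correctly, you can view the decrypted contents (this opens read-only in your pager):

ansible-vault view group_vars/all/vault.yml --vault-password-file ~/.ansible-vault-pass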

Tailscale Network Setup

Tailscale creates a secure mesh network between all VMs and your development machine. This eliminates the need to expose SSH ports publicly.

Setup Steps:

  1. Create Tailscale account at tailscale.com
  2. Generate auth key (make sure to enable "reusable")
  3. Add key to vault.yml
  4. Install Tailscale on the development machine running Ansible (one-line install shown below)
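
On a Linux development machine, step 4 can be done with Tailscale's official install script (other platforms have installers on tailscale.com):

# Install Tailscale and log this machine into your tailnet
curl -fsSL https://tailscale.com/install.sh | sh
sudo tailscale up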

After the VMs are created (in the next section), you can SSH to them using their Tailscale MagicDNS hostnames.
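
For example (the ubuntu login user is an assumption based on the Ubuntu cloud-image default; your playbook may configure a different one):

# Hostnames come from MagicDNS and match the inventory above
ssh ubuntu@postgres-01.your-tailnet.ts.net
ssh ubuntu@k3s-master-01.your-tailnet.ts.net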

Running Ansible Playbooks

Before running the playbooks, be sure that all required variables (including the vault secrets) are filled in.

# Run the main provisioning playbook
ansible-playbook provision.yml

# After provisioning, this playbook creates demo application databases:
ansible-playbook application.yml
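
After the provisioning playbook finishes, two quick checks confirm everything came up (VM IDs and names come from group_vars/all.yml):

# On the Proxmox node: the VMs defined in group_vars/all.yml should be listed and running
qm list

# On your development machine: every VM should have joined the tailnet
tailscale status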

Kubernetes with K3s

K3s is a lightweight Kubernetes distribution perfect for this setup. The Ansible playbook automatically:

  • Installs K3s on master and worker nodes
  • Configures cluster with proper networking
  • Generates kubeconfig for kubectl access

Testing the Cluster

# Set kubeconfig
export KUBECONFIG=$(pwd)/files/k3s.yaml

# Check cluster status
kubectl cluster-info
kubectl get nodes

# Deploy test application
kubectl apply -f examples/nginx-hello-world.yml

kubectl get services
# Note port number of the hello-world service and replace below:
curl k3s-master-01.your-tailnet.ts.net:<port-number>
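
If you'd rather not read the port off the services table by hand, a jsonpath query pulls the NodePort out directly (this assumes the service created by the manifest is named hello-world):

# Print just the NodePort assigned to the hello-world service
kubectl get service hello-world -o jsonpath='{.spec.ports[0].nodePort}'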

Database Setup

The application playbook creates production-ready databases:

PostgreSQL Configuration

  • Database: myprojectdb
  • User: app_user with appropriate permissions
  • Extensions: PostGIS, pgcrypto, etc.
  • Backup: pgBackRest to S3
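
You can also check the database by hand from any machine on the tailnet. A sketch, assuming PostgreSQL is reachable over the Tailscale address and using the database and user above (the password comes from your vault):

# Should print the PostgreSQL version if credentials and network access are correct
psql -h postgres-01.your-tailnet.ts.net -U app_user -d myprojectdb -c 'SELECT version();'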

Redis Configuration

  • ACL: Configured with secure passwords
  • Persistence: RDB and AOF enabled
  • Memory: Optimized for available RAM
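
Similarly for Redis, a quick PING from a machine on the tailnet confirms the ACL credentials work (the user and password are whatever you set in the vault; shown here as placeholders):

# Expect "PONG" back if the ACL user and password are accepted
redis-cli -h redis-01.your-tailnet.ts.net --user <acl-user> --pass '<acl-password>' ping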

Testing Database Connectivity

# Deploy database test app
kubectl apply -f examples/db-test-app.yml

# Check if services can connect
kubectl get services
# Note port number of the db-test-app service and replace below:
curl k3s-master-01.your-tailnet.ts.net:<port-number>

Public Exposure with Cloudflare Tunnels

To expose services publicly without opening firewall ports:

# Install cloudflared CLI
# Create tunnel
./scripts/setup-cloudflare-tunnel db-test-app

# When prompted, enter your chosen domain, e.g. dbtestapp.arclight.run

# Test public access
curl -s https://dbtestapp.arclight.run

The tunnel runs as a Kubernetes deployment and automatically routes traffic from Cloudflare to your internal services.
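
Two quick ways to confirm the tunnel is healthy (the pod name depends on what the setup script deploys; it should contain "cloudflared"):

# The tunnel should be running as a pod in the cluster
kubectl get pods -A | grep -i cloudflared

# The public hostname should resolve through Cloudflare rather than directly to your server
dig +short dbtestapp.arclight.run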

Conclusion

This setup provides a cost-effective, high-performance hosting solution for multiple applications. While it requires more initial setup and ongoing maintenance than PaaS solutions, the benefits include:

  • Significant cost savings compared to PaaS
  • Predictable performance with dedicated resources
  • Complete control over the software stack
  • Scalability within your resource limits
  • Learning opportunity with modern DevOps practices

The combination of Proxmox, Ansible, K3s, and Tailscale creates a robust, maintainable infrastructure that can grow with your projects.

Let me know what you think, or what content you'd like to see in the future related to this!


Thanks for reading! If you enjoyed this content, you can subscribe to my YouTube channel (@ArclightAutomata) and like the video this article was adapted from.