
Hardening a Self-Hosted Developer Server: What a Security Audit Actually Finds

Nur Ikhwan Idris · 7 min read

I run an Ubuntu server that hosts over a dozen personal and side-project apps. It has UFW enabled with default-deny, all Docker containers bound to 127.0.0.1, SSH access restricted to my home LAN and Tailscale, and a Cloudflare tunnel as the only public entry point. By most developer home-lab standards, that's a reasonably solid setup.

So I did a proper security audit. Sat down, ran through every service, checked every port, read every config file I'd written or inherited from a cloud-init template. I found five issues: one critical, three moderate, one low. None of them required extraordinary effort to exploit — they just required someone to look.

This is what I found and how I fixed each one.


The Setup

Before diving into findings, here's the baseline architecture that matters:

  • OS: Ubuntu 22.04 LTS
  • Firewall: UFW, default deny inbound, allow outbound
  • Public access: Cloudflare Tunnel only — no ports open to the internet directly
  • SSH: Allowed from local LAN subnet + Tailscale VPN (100.x.x.x range)
  • Docker: Most containers use explicit port bindings to 127.0.0.1
  • Monitoring stack: node_exporter, GPU exporter, Prometheus, Grafana
  • DNS / ad-blocking: AdGuard Home
  • Reverse proxy: nginx

The assumption I was working with: UFW + Cloudflare tunnel + SSH restrictions = good posture. The audit confirmed the assumption was mostly correct — but "mostly" is the word that keeps security people awake at night.


Finding 1: SSH Password Authentication Was On (Critical)

What I found

SSH key authentication was configured and working. But password authentication was also still enabled, set by a cloud-init-generated drop-in that overrides the main config:

# /etc/ssh/sshd_config.d/50-cloud-init.conf
PasswordAuthentication yes   # ← cloud-init set this and I never changed it

The main /etc/ssh/sshd_config had PasswordAuthentication no — but on Ubuntu, sshd_config begins with an Include /etc/ssh/sshd_config.d/*.conf directive, and sshd uses the first value it encounters for each keyword. The drop-in wins.

Why it matters

SSH is only exposed to LAN and Tailscale, so the exposure window is narrow. But if an attacker is on the same local network (or compromises a Tailscale-connected device), they can brute-force password login. Key-only SSH removes that vector entirely.

The fix

# Edit the cloud-init override file
sudo nano /etc/ssh/sshd_config.d/50-cloud-init.conf

# Change:
PasswordAuthentication yes
# To:
PasswordAuthentication no

# Restart sshd
sudo systemctl restart ssh

# Verify — should show no password auth
ssh -o PasswordAuthentication=no user@server  # should succeed with key
ssh -o PubkeyAuthentication=no user@server    # should be refused

Lesson: cloud-init writes to /etc/ssh/sshd_config.d/, not the main config file. Ubuntu's sshd_config includes that directory before its own directives, and sshd keeps the first value it sees for each keyword, so the drop-ins win. If you set PasswordAuthentication no in sshd_config and never checked the drop-in directory, your key auth is working but passwords still work too.
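On a stock Ubuntu install you can see the mechanism and hunt for overrides directly (the paths below are the Ubuntu defaults; adjust if yours differ):

```shell
# The Include at the top of the main config is what pulls drop-ins in first
head -n 5 /etc/ssh/sshd_config

# Find every file that sets PasswordAuthentication anywhere in the merge chain
grep -Rn "PasswordAuthentication" /etc/ssh/sshd_config /etc/ssh/sshd_config.d/
```

Any `yes` hit in the drop-in directory is a candidate override, regardless of what the main file says.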

Finding 2: No Brute-Force Protection (Moderate)

What I found

Nothing was watching the SSH logs for repeated failures — no fail2ban or equivalent. With password auth now disabled, brute-force matters less, but failed attempts still consume resources and create noise in logs. More importantly, it's a trivial fix.

The fix

sudo apt install fail2ban

# Create a local override (never edit /etc/fail2ban/jail.conf directly)
sudo nano /etc/fail2ban/jail.local

# /etc/fail2ban/jail.local
[DEFAULT]
bantime  = 1h
findtime = 10m
maxretry = 5

[sshd]
enabled = true
port    = ssh
logpath = %(sshd_log)s
backend = %(sshd_backend)s

# Enable and verify
sudo systemctl enable --now fail2ban
sudo fail2ban-client status sshd   # verify it's watching

With five failed attempts in 10 minutes triggering a 1-hour ban, automated scanners won't get anywhere useful.


Finding 3: Monitoring Exporters Binding to All Interfaces (Moderate)

What I found

The exporters in my monitoring stack — node_exporter (port 9100) and a GPU metrics exporter (port 9445) — were running with --network=host in Docker. That bypasses Docker's network isolation entirely: each process binds directly on the host network stack, which by default means all interfaces.

# ss -tlnp output for the two exporters
0.0.0.0:9100    ← bound to every interface
0.0.0.0:9445    ← same

UFW was blocking these ports via default-deny, so they weren't actually reachable from outside. But UFW is one misconfiguration away from exposing them. Defence-in-depth says services that don't need to be internet-accessible shouldn't bind to internet-facing interfaces at all.

Why --network=host exists here

node_exporter needs to read host-level metrics (CPU, disk, network). It uses --network=host and volume mounts into /proc and /sys for this reason. The side effect is that it binds to all interfaces by default.

The fix

node_exporter supports a --web.listen-address flag. Set it to 127.0.0.1:9100:

# docker-compose.yml for monitoring stack
services:
  node_exporter:
    image: prom/node-exporter:latest
    network_mode: host
    command:
      - '--path.procfs=/host/proc'
      - '--path.sysfs=/host/sys'
      - '--web.listen-address=127.0.0.1:9100'   # ← add this
    volumes:
      - /proc:/host/proc:ro
      - /sys:/host/sys:ro

Same approach for any exporter that uses --network=host and exposes a metrics endpoint. Lock the listen address to localhost. Prometheus can still scrape it from localhost:9100 since both are on the host network.
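To confirm the change took effect (9100 matches my node_exporter; substitute your own port):

```shell
# Should now show 127.0.0.1:9100, not 0.0.0.0:9100
ss -tln | grep ':9100'

# Prometheus on the same host network can still scrape over loopback
curl -s http://127.0.0.1:9100/metrics | head -n 3
```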


Finding 4: AdGuard Admin UI on All Interfaces (Moderate)

What I found

AdGuard Home was also running with --network=host for DNS reasons (it needs to bind to port 53). Its admin web UI was listening on 0.0.0.0:3000 — every interface.

The fix

AdGuard Home's admin listen address is configurable in AdGuardHome.yaml:

# AdGuardHome.yaml
http:
  address: 127.0.0.1:3000   # was 0.0.0.0:3000

After restarting AdGuard, the admin UI is only reachable from localhost (or via SSH tunnel). DNS resolution on port 53 continues to work normally — that's a separate binding.

The general principle: separate the service port (what other things connect to) from the admin UI port (what you manage it through). The service port may need broad binding; the admin UI almost never does.


Finding 5: nginx Leaking Version String (Low)

What I found

Every HTTP response from my nginx reverse proxy included a Server header revealing the exact version:

HTTP/1.1 200 OK
Server: nginx/1.24.0 (Ubuntu)   # exact version disclosed

This is low severity because an attacker needs to find a vulnerability in that specific nginx version first. But version disclosure is easy to prevent and gives attackers less information to work with.

What I found in the config

# /etc/nginx/nginx.conf
http {
    # server_tokens off;   ← commented out, so defaults to "on"
}

Someone had set it — then commented it out, or it was always a comment. Easy miss.

The fix

# /etc/nginx/nginx.conf
http {
    server_tokens off;   # ← uncomment and enable
    # ...
}

sudo nginx -t && sudo systemctl reload nginx

After this, the Server header returns just nginx with no version. Combined with Cloudflare in front of everything (which strips and replaces the Server header anyway on proxied requests), this is belt-and-suspenders.


The Audit Process

Here's the rough checklist I ran through. Nothing exotic — just systematic.

1. Map what's actually listening

# What's listening, and on which addresses?
ss -tlnp

# Cross-check: what does Docker think is exposed?
docker ps --format "table {{.Names}}\t{{.Ports}}"

Compare this against what you intended to expose. Anything on 0.0.0.0 that you didn't consciously decide to put there is worth investigating.
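To speed up that cross-check, you can filter the ss output down to all-interface listeners only (a small sketch; the field positions assume ss -tlnp's default columns):

```shell
# Print only sockets bound to every interface — each line here should be
# something you consciously chose to expose
ss -tlnp | awk '$4 ~ /^(0\.0\.0\.0|\*|\[::\]):/ {print $4, $NF}'
```

Some ss versions print wildcard binds as `*:port` rather than `0.0.0.0:port`, which is why the pattern covers both.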

2. Check UFW rules

sudo ufw status verbose

Verify default policies are deny-in / allow-out. Verify each allowed port is intentional. Note: UFW rules don't apply to Docker's direct iptables manipulation — Docker bypasses UFW for container port mappings. This is why binding containers to 127.0.0.1 matters independently of UFW.
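For containers that aren't on host networking, the loopback binding lives in the port mapping itself. Here's the shape, using Grafana as an example (the 3001 host port is illustrative):

```yaml
# docker-compose.yml — publish on loopback only, so Docker's iptables
# rules never open the port to other interfaces, whatever UFW says
services:
  grafana:
    image: grafana/grafana:latest
    ports:
      - "127.0.0.1:3001:3000"
```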

3. Verify SSH config

# Check effective SSH config including all drop-ins (needs root)
sudo sshd -T | grep -E 'passwordauthentication|pubkeyauthentication|permitemptypasswords|allowusers|allowgroups'

sshd -T prints the effective configuration after all files are merged, including drop-ins. If it shows passwordauthentication yes despite your main config saying no, you have a drop-in override.

4. Check HTTP response headers

curl -sI https://yourdomain.com | grep -i server

5. Review Docker network modes

# Find containers using host networking
docker ps -q | xargs docker inspect --format '{{.Name}} {{.HostConfig.NetworkMode}}' | grep host

For each --network=host container: check what ports it binds to and whether those ports need to be on all interfaces or just localhost.


Summary: What Was Found and Fixed

Here's the full picture in one place:

  • SSH PasswordAuthentication on (cloud-init override) · Critical · Fix: edit 50-cloud-init.conf, set it to no, restart sshd
  • No fail2ban · Moderate · Fix: install fail2ban with an sshd jail
  • node_exporter & GPU exporter on 0.0.0.0 · Moderate · Fix: add --web.listen-address=127.0.0.1:port
  • AdGuard admin UI on 0.0.0.0:3000 · Moderate · Fix: set http.address: 127.0.0.1:3000 in AdGuardHome.yaml
  • nginx server_tokens leaking version · Low · Fix: uncomment server_tokens off in nginx.conf

What Good Posture Actually Looks Like

None of these findings required exotic techniques to discover. They all came from reading config files carefully and cross-referencing what was intended against what was actually running. The gap between "I set this up securely" and "this is actually secure" is often just a few untouched defaults and an inherited config nobody reviewed.

My server was in decent shape. The Cloudflare tunnel + UFW + container isolation combination means the actual attack surface exposed to the internet was minimal. But decent shape isn't the same as intentionally secure. The cloud-init SSH config was the most important find: a single compromised Tailscale device or LAN machine, and password brute-forcing would have been a real risk.

The audit took about two hours. I'd recommend running through the checklist above every few months, especially after adding new services or provisioning from a cloud image. Cloud-init in particular is a reliable source of lingering defaults.


Questions or corrections? Reach out via the contact section of my portfolio.