Building kreativarc-core for Kubernetes Observability on a Modest VPS


When I started designing the Kubernetes infrastructure behind kreativarc.com, I had an ambitious vision: secure-by-default routing, observable everything, modular app stacks, all declaratively provisioned via Pulumi. The whole thing was supposed to run on a pair of Hetzner VPSes.

It did. For about 15 minutes — until the memory spike hit.

This post documents how I refactored that over-engineered dream into something more sustainable: a minimal, modular, and Pulumi-powered system called kreativarc-core.


What is kreativarc-core?

It’s the glue between my base infrastructure (kreativarc-infra) and the actual app deployments. Think of it as the core system services needed to run, observe, and route workloads in my Kubernetes cluster — but stripped of anything non-essential.

The key components currently include:

  • Cloudflare Tunnel for ingress (no open ports, thank you)

  • WireGuard for internal cluster access

  • Prometheus stack, deployed with Helm

  • Traefik reverse proxy (with only the minimal config enabled — more on that later)

  • Pulumi modules for all of the above
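Wired together in Pulumi, that component list reads roughly like this (a sketch with hypothetical module names and options — not the repo's actual layout):

```
// index.ts (sketch) — each component lives in its own module.
// Module names and option shapes here are illustrative only.
import { CloudflareTunnel } from "./components/cloudflareTunnel";
import { WireguardAccess } from "./components/wireguard";
import { PrometheusStack } from "./components/prometheus";
import { TraefikProxy } from "./components/traefik";

const tunnel = new CloudflareTunnel("ingress", { hostname: "kreativarc.com" }); // no open ports
const vpn = new WireguardAccess("cluster-vpn", {});                             // internal access
const metrics = new PrometheusStack("monitoring", {
  grafana: false,       // deferred until a hardware upgrade
  alertmanager: false,  // deferred
});
const proxy = new TraefikProxy("edge", { dashboard: false });                   // minimal config only
```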


Architecture in One Picture

Here’s the current routing stack — developer access, browser access, and cluster internals:

  • CLI/kubectl access: the exported kubeconfig, reached over WireGuard.

  • Browser access: Cloudflare Tunnel into Traefik.

  • Prometheus is the only admin tool left running — everything else is headless.


The Original Stack Was Too Hungry

Initially, I planned to run the full Kubernetes observability suite: Grafana, Prometheus, Alertmanager, and even Traefik’s admin dashboard behind Cloudflare Access.

It looked good on paper. In practice, my two Hetzner CX22 VPSes (4GB RAM each) spent half their memory just keeping these "admin" tools alive.

So I trimmed it.

Only Prometheus and the metrics exporters remained; everything else got axed or deferred until I upgrade the hardware. The Traefik dashboard is disabled, and Alertmanager and Grafana are excluded.

The good news? I set things up so that reintroducing them later is just a helm upgrade away.
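That toggle could look something like this as a Pulumi Helm release (a sketch assuming the kube-prometheus-stack chart; the values shown are illustrative, not my exact config):

```
import * as k8s from "@pulumi/kubernetes";

// Trimmed kube-prometheus-stack release (sketch). Flipping `enabled`
// back to true and running `pulumi up` performs the helm upgrade
// that brings Grafana and Alertmanager back.
const monitoring = new k8s.helm.v3.Release("kube-prometheus-stack", {
  chart: "kube-prometheus-stack",
  repositoryOpts: {
    repo: "https://prometheus-community.github.io/helm-charts",
  },
  namespace: "monitoring",
  values: {
    grafana: { enabled: false },      // deferred until hardware upgrade
    alertmanager: { enabled: false }, // deferred
  },
});
```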


Desktop Monitoring, Not Online Dashboards

Since exposing dashboards publicly was off the table, I looked into desktop tools to monitor the cluster locally over WireGuard.

  • Headlamp turned out to be a great fit — it can read metrics from Prometheus directly and works without exposing anything online.

  • Lens was also considered (and has the better UX overall), but its free version no longer supports Prometheus integration, which makes it less useful for metrics dashboards.

So for now, Headlamp is the default. WireGuard gets me into the cluster, kubeconfig connects me to kubectl and Headlamp, and everything runs locked down by default.


Pulumi Setup

The kreativarc-core repo uses TypeScript-based Pulumi modules with the following layout:

index.ts ties everything together and provisions resources like the Cloudflare Tunnel, the Prometheus stack, and the Traefik reverse proxy.
Environment variables (see .env.sample) keep secrets and configuration cleanly separated from code.
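Reading that configuration can be as simple as a helper that fails fast when a required variable is missing, so a Pulumi run never starts half-configured (a sketch; the variable names below are hypothetical, standing in for whatever .env.sample lists):

```typescript
// Fail fast if a required environment variable is missing,
// instead of provisioning with undefined secrets.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing required env var: ${name} (see .env.sample)`);
  }
  return value;
}

// Called from index.ts before provisioning anything.
// Variable names are illustrative only.
function loadConfig() {
  return {
    cloudflareTunnelToken: requireEnv("CLOUDFLARE_TUNNEL_TOKEN"),
    wireguardPrivateKey: requireEnv("WIREGUARD_PRIVATE_KEY"),
  };
}
```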

Smoke tests are included using Jest — they verify pod readiness and ensure logs are free of crash loops and noisy errors.
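In the same spirit as those smoke tests, the checks can be pure functions that Jest asserts against after fetching pod state and logs (a sketch; the function names and pattern list are my own, not the repo's):

```typescript
// Log lines that indicate trouble even when a pod reports Running.
// Illustrative patterns, not an exhaustive list.
const NOISY_PATTERNS: RegExp[] = [
  /CrashLoopBackOff/,
  /Back-off restarting failed container/,
  /panic:/,
  /level=error/,
];

// True when no log line matches a known failure pattern.
function logsLookHealthy(lines: string[]): boolean {
  return !lines.some((line) => NOISY_PATTERNS.some((re) => re.test(line)));
}

// True when there is at least one pod and every pod phase is "Running".
function allPodsRunning(phases: string[]): boolean {
  return phases.length > 0 && phases.every((phase) => phase === "Running");
}
```

In a Jest spec these end up as `expect(logsLookHealthy(lines)).toBe(true)`-style assertions, with the pod phases and logs pulled via kubectl or a Kubernetes client beforehand.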


Final Thoughts

This setup isn’t flashy — but it works. It keeps the system lean, secure, and observably alive. When the time comes to scale or enable more services, I’ve got the structure in place to do it cleanly.

Until then, it’s back to building actual applications — not just dashboards to admire them.
