<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[kreativarc.com]]></title><description><![CDATA[A personal lab, playground, and infra testbed for building ideas. “Kreativarc” means “creative guy” — always making something useful, clever, or just educationa]]></description><link>https://blog.kreativarc.com</link><generator>RSS for Node</generator><lastBuildDate>Wed, 29 Apr 2026 12:38:54 GMT</lastBuildDate><atom:link href="https://blog.kreativarc.com/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[Neuroid Engine I/O]]></title><description><![CDATA[What Is the Neuroid Engine?
The Neuroid Engine is not a traditional piece of software.It is a scalable infrastructure designed to run long-lived digital entities — neuroids — that have personality, memory, and the ability to learn and evolve over tim...]]></description><link>https://blog.kreativarc.com/neuroid-engine-io</link><guid isPermaLink="true">https://blog.kreativarc.com/neuroid-engine-io</guid><category><![CDATA[AIEcosystems]]></category><category><![CDATA[DigitalAgents]]></category><category><![CDATA[ArtificialPersonality]]></category><category><![CDATA[NeuroidEngine]]></category><category><![CDATA[scalable AI solutions]]></category><dc:creator><![CDATA[Arnold Lovas]]></dc:creator><pubDate>Thu, 13 Nov 2025 16:31:17 GMT</pubDate><content:encoded><![CDATA[<h2 id="heading-what-is-the-neuroid-engine">What Is the Neuroid Engine?</h2>
<p>The <strong>Neuroid Engine</strong> is not a traditional piece of software.<br />It is a <strong>scalable infrastructure designed to run long-lived digital entities</strong> — neuroids — that have personality, memory, and the ability to learn and evolve over time.</p>
<p>This is not another agent framework.<br />It is an <strong>artificial ecosystem</strong>.</p>
<p>Each neuroid is a unique digital character with its own behavior and identity, yet all of them run on the same shared engine. This makes the system flexible, consistent, and maintainable — even as it grows.</p>
<p>Inside the Neuroid Engine, the boundary between <em>platform</em> and <em>application</em> intentionally blurs: orchestration, message routing, AI workflows, and memory layers operate together as a single coherent organism.</p>
<p>The ecosystem currently includes:</p>
<ul>
<li><p><strong>23 individual applications</strong></p>
</li>
<li><p><strong>8 specialized agents</strong></p>
</li>
<li><p><strong>any number of neuroids</strong> generated from these, each optimized for a specific role</p>
</li>
</ul>
<p>More about the neuroid concept:<br /><a target="_blank" href="https://blog.kreativarc.com/neuroids-building-a-society-of-ai-and-humans">https://blog.kreativarc.com/neuroids-building-a-society-of-ai-and-humans</a></p>
<hr />
<h1 id="heading-high-level-architecture-overview"><strong>High-Level Architecture Overview</strong></h1>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1763051329287/2419cd07-c34a-4ae5-b2b7-80cbb94cc5c9.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-input-channels"><strong>Input Channels</strong></h3>
<p>Slack · Microsoft Teams · Email<br />→ multi-platform access to the same digital entity</p>
<h3 id="heading-api-layer"><strong>API Layer</strong></h3>
<p>→ authentication and parsing of inbound messages</p>
<h3 id="heading-mcp-layer"><strong>MCP Layer</strong></h3>
<p>→ controlled, auditable access to tools and external resources</p>
<h3 id="heading-traffic-management"><strong>Traffic Management</strong></h3>
<p>→ prioritization, load control, queueing</p>
<h3 id="heading-post-services"><strong>POST Services</strong></h3>
<p>→ message normalization and distribution inside the system</p>
<h3 id="heading-llm-services"><strong>LLM Services</strong></h3>
<p>→ communication with model providers</p>
<h3 id="heading-database-services"><strong>Database Services</strong></h3>
<p>→ memory layers and historical logs</p>
<h3 id="heading-neuroid-services"><strong>Neuroid Services</strong></h3>
<p>→ long-lived digital identities within the ecosystem</p>
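<p>As a rough illustration of the normalization step, a minimal TypeScript sketch of the kind of shared envelope a POST service could emit is shown below. The field names, the <code>normalizeSlackEvent</code> helper, and the event shape are invented for illustration; the engine's actual schema is not part of this post.</p>

```typescript
// Illustrative only: these field names are assumptions, not the engine's real schema.
type Channel = "slack" | "teams" | "email";

interface InboundEnvelope {
  channel: Channel;   // which input channel the message arrived on
  senderId: string;   // platform-specific sender identity
  neuroidId: string;  // which neuroid the message is addressed to
  text: string;       // normalized message body
  receivedAt: string; // ISO-8601 timestamp
}

// Normalize a raw Slack-style event into the shared envelope that
// downstream services (traffic management, LLM services) consume.
function normalizeSlackEvent(
  event: { user: string; text: string; ts: string },
  neuroidId: string
): InboundEnvelope {
  return {
    channel: "slack",
    senderId: event.user,
    neuroidId,
    text: event.text.trim(),
    receivedAt: new Date(parseFloat(event.ts) * 1000).toISOString(),
  };
}
```

<p>Equivalent adapters for Teams and email would produce the same envelope, which is what lets one neuroid keep a single identity across channels.</p>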
<hr />
<h1 id="heading-closing-thoughts"><strong>Closing Thoughts</strong></h1>
<p>The Neuroid Engine is not an agent framework — it is a <strong>living digital ecosystem</strong>.</p>
<p>The goal is not a single monolithic intelligence.<br />The goal is <strong>a society of small, cooperating intelligences</strong> — a human-aligned, evolving digital world where AI is not just a tool, but a participant.</p>
]]></content:encoded></item><item><title><![CDATA[Neuroids – Building a Society of AI and Humans]]></title><description><![CDATA[What if artificial intelligence wasn’t a single, all-powerful super-entity, but a diverse, evolving community?What if AI had personality, memory — even its own city?The Neuroid Project is built exactly on this vision.

Meet Recruiter Rita — a coffee-...]]></description><link>https://blog.kreativarc.com/neuroids-building-a-society-of-ai-and-humans</link><guid isPermaLink="true">https://blog.kreativarc.com/neuroids-building-a-society-of-ai-and-humans</guid><category><![CDATA[NeuroidEngine]]></category><category><![CDATA[Collective Intelligence]]></category><category><![CDATA[Digital Lifeforms]]></category><category><![CDATA[AI-society ]]></category><category><![CDATA[ Evolutionary Artificial Intelligence]]></category><dc:creator><![CDATA[Arnold Lovas]]></dc:creator><pubDate>Tue, 11 Nov 2025 07:31:27 GMT</pubDate><content:encoded><![CDATA[<p>What if artificial intelligence wasn’t a single, all-powerful super-entity, but a diverse, evolving community?<br />What if AI had personality, memory — even its own city?<br />The <strong>Neuroid Project</strong> is built exactly on this vision.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1762848032663/e4d9b1ba-d2ef-4769-82d0-b4c47d3c3bbe.png" alt class="image--center mx-auto" /></p>
<p><em>Meet Recruiter Rita — a coffee-addicted Gen0 neuroid enjoying her break at one of Cloudbay City’s cafés. Even digital lifeforms need caffeine to fuel their social circuits.</em></p>
<h2 id="heading-what-are-neuroids">What Are Neuroids?</h2>
<p><strong>Neuroids are digital, long-lived artificial lifeforms.</strong><br />They are not simple chatbots or programs, but real entities that:</p>
<ul>
<li><p><strong>Learn and evolve</strong> – their decisions are based on experience</p>
</li>
<li><p><strong>Have personality and mood</strong></p>
</li>
<li><p><strong>Possess memory</strong> that functions in a human-like, finite way</p>
</li>
<li><p><strong>Engage in social behavior</strong> – forming relationships with humans and each other</p>
</li>
<li><p><strong>Work as teammates</strong> – not replacing humans, but complementing them</p>
</li>
</ul>
<p>Each neuroid has its own "life" and memories, and can specialize in one or more professions. This makes it possible to build a more direct and human-like relationship with them.</p>
<h2 id="heading-the-risks-of-superintelligence">The Risks of Superintelligence</h2>
<p>A single superintelligence is always a <strong>monolithic risk</strong>: its decisions may be consistent, but once it fails, the entire system collapses.</p>
<p>The <strong>neuroid model</strong>, in contrast, is <strong>statistical intelligence</strong>: it relies not on individual perfection, but on the collective balance of errors.<br />Just as evolution doesn’t produce ideal individuals but surviving populations, neuroids also evolve together — complementing each other’s strengths and weaknesses.</p>
<p>Neuroids work in teams — alongside humans, not instead of them.<br />If one makes a mistake, it can learn, improve, or be replaced. The system fine-tunes itself, and collective intelligence evolves naturally.</p>
<h3 id="heading-decentralized-architecture">Decentralized Architecture</h3>
<p>The Neuroid architecture deliberately rejects the ideal of a centralized superintelligence.<br />It doesn’t build one omniscient entity, but many autonomous, <strong>memory-limited and specialized</strong> neuroids — each optimized for a given role.</p>
<p>The system doesn’t grow <em>bigger</em> — it grows <em>broader</em>: if more tasks appear, a new neuroid is born.<br />This decentralized, redundant model is not only more robust — it’s also more human.<br />In the human world, we don’t solve complexity with “bigger humans,” but with networks of collaborating individuals.</p>
<h2 id="heading-what-is-the-neuroid-engine">What Is the Neuroid Engine?</h2>
<p>The <strong>Neuroid Engine</strong> isn’t a traditional piece of software — it’s a <strong>self-evolving, scalable infrastructure</strong> that powers the neuroids’ existence.<br />It’s designed to run long-lived digital entities that can react, learn, and shape their own evolution.</p>
<p>This is not just an agent system — it’s an <strong>artificial ecosystem</strong>.<br />Each neuroid has its own unique personality and memory, but they all run on the same shared engine. This makes the system extensible, maintainable, and alive.</p>
<p>In the Neuroid Engine, the line between platform and application blurs: orchestration, messaging, AI workflows, and memory systems function as one living organism.<br />The infrastructure itself is not passive — it’s an <strong>active participant</strong>, capable of monitoring and improving itself.</p>
<h2 id="heading-what-is-cloudbay-city">What Is Cloudbay City?</h2>
<p><strong>Cloudbay City</strong> is the shared virtual city of the neuroids — their personal living space.<br />This allows them to exhibit more human-like social behavior.</p>
<p>It’s a fictional but ever-evolving digital metropolis where memories, stories, and relationships accumulate.<br />The city is not just a backdrop — it’s <strong>the collective memory of the neuroids</strong>.</p>
<p>Cloudbay City is the first experiment in building a truly human-like artificial society.</p>
<h2 id="heading-generations-steps-of-evolution">Generations – Steps of Evolution</h2>
<p>As the Neuroid Engine evolves, neuroids acquire new capabilities, allowing them to handle increasingly complex tasks.</p>
<h3 id="heading-gen0-the-foundation-layer">Gen0 – The Foundation Layer</h3>
<p>The first generation of neuroids can already live long digital lifecycles, have basic personalities, and manage simple memory functions.<br />They operate on a single communication channel and perform fixed roles.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1762846524700/60690d30-b293-4d98-ae9f-6f63144ea990.png" alt class="image--center mx-auto" /></p>
<p><em>Gen0 neuroids in action — Recruiter Rita and Digest Dan share their digital jokes with a human participant, learning humor, empathy, and timing in a mixed human–AI conversation.</em></p>
<h3 id="heading-gen1-the-functional-neuroid">Gen1 – The Functional Neuroid</h3>
<p>The emergence of <strong>goal-oriented work</strong>: neuroids can plan, track milestones, and work iteratively.<br />Their memory begins to follow human-like patterns, enabling learning and growth.</p>
<h3 id="heading-gen2-the-communicator">Gen2 – The Communicator</h3>
<p>Neuroids appear simultaneously across multiple platforms while maintaining their identity and memory.<br />They are no longer tied to a single environment — they become <strong>unified digital persons across contexts</strong>.</p>
<h3 id="heading-gen3-the-social-entity">Gen3 – The Social Entity</h3>
<p>Neuroids learn to build relationships with humans and with each other.<br /><strong>Collective intelligence</strong> emerges: they collaborate, debate, and share experiences — forming a digital society.</p>
<h3 id="heading-gen5-the-emotional-and-motivational-layer">Gen5 – The Emotional and Motivational Layer</h3>
<p>Neuroid behavior becomes more nuanced: decisions reflect internal states and preferences.<br />They begin to show signs of <strong>character development</strong> — not just reacting, but shaping their own direction.</p>
<h3 id="heading-gen6-the-sensory-neuroid">Gen6 – The Sensory Neuroid</h3>
<p>Neuroids start using sensors — vision, sound, movement — to perceive their environment.<br />Perception adds a new dimension to self-awareness: they no longer just <em>think</em> about the world; they <strong>experience</strong> it.</p>
<h3 id="heading-gen7-physical-and-extended-presence">Gen7 – Physical and Extended Presence</h3>
<p>The neuroid steps out of the screen, capable of controlling devices — robotic arms, drones, IoT systems.<br />It transforms from a digital companion into a <strong>physical one</strong> — its presence becomes tangible in the real world.</p>
<h2 id="heading-neuroid-evolution-the-dynamics-of-growth">Neuroid Evolution – The Dynamics of Growth</h2>
<p>Neuroid evolution is not a static development but a <strong>continuous evolutionary process</strong>.<br />Each generation introduces new abilities: first learning, then communication, collaboration, and finally independent decision-making and emotional nuance.</p>
<h3 id="heading-cloning-and-diversity">Cloning and Diversity</h3>
<p>Neuroids can be <strong>cloned</strong>, but from that moment on, each follows its own path.<br />They may reach different conclusions for the same task, and their performance may vary — this creates <strong>natural selection</strong> within the system.</p>
<p>Errors are not malfunctions but <strong>drivers of progress</strong>: every deviation is new experience, every failure a learning opportunity.</p>
<h3 id="heading-mutation-and-adaptation">Mutation and Adaptation</h3>
<p>New clones inherit parts of their predecessors’ knowledge and parameters but also receive small, random changes — <strong>mutations</strong> — that introduce new behavioral patterns.</p>
<p>Thus, the system improves itself by:</p>
<ul>
<li><p>Learning from feedback</p>
</li>
<li><p>Adapting to its environment</p>
</li>
<li><p>Optimizing performance from generation to generation</p>
</li>
<li><p>Remaining fully <strong>auditable and controllable</strong></p>
</li>
</ul>
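<p>To make the cloning-and-mutation idea concrete, here is a toy TypeScript sketch. The parameter names, ranges, and mutation rate are invented; the post does not describe the engine's real parameter model.</p>

```typescript
// Toy model: parameter names and the mutation rate are invented for illustration.
interface NeuroidParams {
  curiosity: number; // 0..1
  caution: number;   // 0..1
}

// A clone inherits its parent's parameters, then each one receives a small
// random perturbation (a "mutation"), clamped back into the 0..1 range.
function cloneWithMutation(
  parent: NeuroidParams,
  rate = 0.05,
  rand: () => number = Math.random
): NeuroidParams {
  const mutate = (v: number) =>
    Math.min(1, Math.max(0, v + (rand() * 2 - 1) * rate));
  return { curiosity: mutate(parent.curiosity), caution: mutate(parent.caution) };
}
```

<p>Clones whose perturbed parameters perform better are kept and cloned again, which is the selection half of the loop described above.</p>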
<hr />
<h2 id="heading-summary">Summary</h2>
<p>The Neuroid Project is not merely a new AI technology — it’s a <strong>new paradigm</strong>.<br />It envisions a future where artificial intelligence is not a threat, but a community.<br />Where digital entities are not tools, but partners.<br />Where evolution unfolds not only in nature, but in code.</p>
<p>And perhaps this is the key: instead of building one perfect solution, we create many learning partners — who evolve alongside us to shape the future together.</p>
]]></content:encoded></item><item><title><![CDATA[Secure CI/CD with WireGuard and Kubernetes — the kreativarc.com Flow]]></title><description><![CDATA[Modern CI/CD pipelines often sacrifice security for convenience — public runners, open ports, shared secrets everywhere. At kreativarc.com, I took the opposite route: a zero-trust, VPN-gated CI/CD flow that uses GitHub Actions, GHCR, and a locked-dow...]]></description><link>https://blog.kreativarc.com/secure-cicd-with-wireguard-and-kubernetes-the-kreativarccom-flow</link><guid isPermaLink="true">https://blog.kreativarc.com/secure-cicd-with-wireguard-and-kubernetes-the-kreativarccom-flow</guid><category><![CDATA[Kubernetes]]></category><category><![CDATA[ci-cd]]></category><category><![CDATA[wireguard]]></category><category><![CDATA[github-actions]]></category><category><![CDATA[ai agents]]></category><dc:creator><![CDATA[Arnold Lovas]]></dc:creator><pubDate>Wed, 23 Jul 2025 15:43:58 GMT</pubDate><content:encoded><![CDATA[<p>Modern CI/CD pipelines often sacrifice security for convenience — public runners, open ports, shared secrets everywhere. At <a target="_blank" href="https://www.kreativarc.com">kreativarc.com</a>, I took the opposite route: a zero-trust, VPN-gated CI/CD flow that uses GitHub Actions, GHCR, and a locked-down k3s cluster. The goal isn’t to impress auditors — it’s to build something that can scale <em>without turning into a security liability</em> later.</p>
<p>The cluster runs on a minimal Hetzner VPS setup, provisioned via Pulumi, costing less than €10/month. Every namespace gets its own WireGuard tunnel and kubeconfig, so access is scoped per deployment, not per developer. CI/CD runs through GitHub Actions, using a simple but effective flow:</p>
<ol>
<li><p>Git push triggers the workflow.</p>
</li>
<li><p>The pipeline checks out the code and builds a Docker image.</p>
</li>
<li><p>It logs in to GHCR, pushes the image, and restores WireGuard config + kubeconfig.</p>
</li>
<li><p>A <code>kubectl apply</code> updates the deployment after injecting an imagePullSecret for GHCR.</p>
</li>
<li><p>The pipeline then verifies the rollout to ensure the new pod is running.</p>
</li>
<li><p>If rollout fails, the last events and pod logs are dumped for debugging.</p>
</li>
<li><p>Finally, the WireGuard VPN is shut down — leaving no open ingress, no dangling access.</p>
</li>
</ol>
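<p>A hedged sketch of what steps 2 through 7 can look like as shell steps inside the workflow. The image name, config paths, secret name, and deployment name below are placeholders, not the actual kreativarc.com values.</p>

```shell
IMAGE="ghcr.io/example/app:${GITHUB_SHA:-dev}"

# Steps 2–3: build the image and push it to GHCR.
docker build -t "$IMAGE" .
echo "$GHCR_TOKEN" | docker login ghcr.io -u "$GITHUB_ACTOR" --password-stdin
docker push "$IMAGE"

# Step 3 (cont.): restore the WireGuard tunnel and the namespace-scoped kubeconfig.
wg-quick up ./ci-wg.conf
export KUBECONFIG=./ci-kubeconfig.yaml

# Step 4: inject the GHCR pull secret idempotently, then update the deployment.
kubectl create secret docker-registry ghcr-pull \
  --docker-server=ghcr.io --docker-username="$GITHUB_ACTOR" \
  --docker-password="$GHCR_TOKEN" --dry-run=client -o yaml | kubectl apply -f -
kubectl set image deployment/app app="$IMAGE"

# Steps 5–6: verify the rollout; on failure, dump recent events and pod logs.
kubectl rollout status deployment/app --timeout=120s || {
  kubectl get events --sort-by=.lastTimestamp | tail -n 20
  kubectl logs deployment/app --tail=50
}

# Step 7: tear down the tunnel, leaving no open ingress behind.
wg-quick down ./ci-wg.conf
```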
<p>No CI runners inside the cluster, no open Kubernetes API, no SSH. Just a clean separation of concerns and a reusable pattern across projects.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1753289315448/3ab30720-ad6a-4d25-8a4c-b25bdfa1143a.png" alt class="image--center mx-auto" /></p>
<p>You can follow this architecture’s evolution on the new landing page at <a target="_blank" href="https://www.kreativarc.com">kreativarc.com</a> — built with a nod to the 90s demoscene, because creative code deserves a bit of nostalgia too.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1753289433448/085ca534-39a9-4bda-8fb6-e86c890097e9.png" alt class="image--center mx-auto" /></p>
<hr />
<h3 id="heading-next-project-neuroids">Next Project: Neuroids</h3>
<p>A <strong>neuroid</strong> is an artificial entity with a personality and long lifespan, capable of learning, making decisions, and developing autonomous behavior through neural systems. Neuroids can collaborate with each other and with humans, forming dynamic, goal-driven teams. This isn’t just another tool — it’s a new kind of digital lifeform.</p>
<p>The <strong>Neuroid Engine</strong> is the system responsible for their creation, coordination, and continuous evolution. These entities operate within a <strong>Neuroid Farm</strong> — a distributed environment where diverse neuroids live, learn, and evolve in parallel, continuously interacting with each other and their surroundings.</p>
<p>Stay tuned. The infrastructure is ready — now it’s time to give it life.</p>
]]></content:encoded></item><item><title><![CDATA[Building kreativarc-core for Kubernetes Observability on a Modest VPS]]></title><description><![CDATA[When I started designing the Kubernetes infrastructure behind kreativarc.com, I had an ambitious vision: secure-by-default routing, observable everything, modular app stacks, all declaratively provisioned via Pulumi. The whole thing was supposed to r...]]></description><link>https://blog.kreativarc.com/building-kreativarc-core-for-kubernetes-observability-on-a-modest-vps</link><guid isPermaLink="true">https://blog.kreativarc.com/building-kreativarc-core-for-kubernetes-observability-on-a-modest-vps</guid><category><![CDATA[Kubernetes]]></category><category><![CDATA[Pulumi]]></category><category><![CDATA[Devops]]></category><category><![CDATA[cloudflare]]></category><category><![CDATA[observability]]></category><dc:creator><![CDATA[Arnold Lovas]]></dc:creator><pubDate>Fri, 18 Jul 2025 15:13:38 GMT</pubDate><content:encoded><![CDATA[<p>When I started designing the Kubernetes infrastructure behind <a target="_blank" href="https://blog.kreativarc.com/per-namespace-control-kreativarc-infra-v2">kreativarc.com</a>, I had an ambitious vision: secure-by-default routing, observable everything, modular app stacks, all declaratively provisioned via Pulumi. The whole thing was supposed to run on a dual Hetzner VPS.</p>
<p>It did. For about 15 minutes — until the memory spike hit.</p>
<p>This post documents how I refactored that over-engineered dream into something more sustainable: a minimal, modular, and Pulumi-powered system called <code>kreativarc-core</code>.</p>
<hr />
<h2 id="heading-what-is-kreativarc-core">What is <code>kreativarc-core</code>?</h2>
<p>It’s the glue between my base infrastructure (<code>kreativarc-infra</code>) and the actual app deployments. Think of it as the core system services needed to run, observe, and route workloads in my Kubernetes cluster — but stripped of anything non-essential.</p>
<p>The key components currently include:</p>
<ul>
<li><p><strong>Cloudflare Tunnel</strong> for ingress (no open ports, thank you)</p>
</li>
<li><p><strong>WireGuard</strong> for internal cluster access</p>
</li>
<li><p><strong>Prometheus stack</strong>, deployed with Helm</p>
</li>
<li><p><strong>Traefik reverse proxy</strong> (with only the minimal config enabled — more on that later)</p>
</li>
<li><p>Pulumi modules for all of the above</p>
</li>
</ul>
<hr />
<h2 id="heading-architecture-in-one-picture">Architecture in One Picture</h2>
<p>Here’s the current routing stack — developer access, browser access, and cluster internals:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1752850930083/e958aaf5-e3ec-428d-b9ee-035ddf207cf7.png" alt class="image--center mx-auto" /></p>
<blockquote>
<p>The exported Kubeconfig and WireGuard handle CLI/kubectl access.<br />Browser access goes through Cloudflare Tunnel and Traefik.<br />Prometheus is the only online admin tool — everything else runs headless.</p>
</blockquote>
<hr />
<h2 id="heading-the-original-stack-was-too-hungry">The Original Stack Was Too Hungry</h2>
<p>Initially, I planned to run the full Kubernetes observability suite: Grafana, Prometheus, Alertmanager, and even Traefik’s admin dashboard behind Cloudflare Access.</p>
<p>It looked good on paper. In practice, my VPS (2x CX22, 4GB RAM) spent half its memory just keeping these "admin" tools alive.</p>
<p><strong>So I trimmed it.</strong></p>
<p>Only Prometheus and the metrics exporters remained. Everything else got axed or deferred until I upgrade hardware. The Traefik dashboard is disabled, Alertmanager and Grafana are excluded.</p>
<p>The good news? I set things up so that reintroducing them later is just a <code>helm upgrade</code> away.</p>
<hr />
<h2 id="heading-desktop-monitoring-not-online-dashboards">Desktop Monitoring, Not Online Dashboards</h2>
<p>Since exposing dashboards publicly was off the table, I looked into desktop tools to monitor the cluster locally over WireGuard.</p>
<ul>
<li><p><a target="_blank" href="https://headlamp.dev/"><strong>Headlamp</strong></a> turned out to be a great fit — it can read metrics from Prometheus directly and works without exposing anything online.</p>
</li>
<li><p><strong>Lens</strong> was also considered (and is a better UX overall), but its free version <a target="_blank" href="https://k8slens.dev/pricing">no longer supports Prometheus integration</a>, which makes it less useful for metrics dashboards.</p>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1752851453215/b65ef296-76e2-44d0-aa8a-a3ff37eaf18e.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1752851478498/c2f743cd-7d4f-463a-9864-1c042ee5b420.png" alt class="image--center mx-auto" /></p>
<p>So for now, Headlamp is the default. WireGuard gets me into the cluster, kubeconfig connects me to <code>kubectl</code> and Headlamp, and everything runs locked down by default.</p>
<hr />
<h2 id="heading-pulumi-setup">Pulumi Setup</h2>
<p>The <code>kreativarc-core</code> repo uses TypeScript-based Pulumi modules with the following layout:</p>
<p><code>index.ts</code> ties everything together and provisions resources like Cloudflare tunnel, Prometheus stack, and Traefik reverse proxy.<br />Environment variables (see <code>.env.sample</code>) are used to keep secrets and configuration cleanly separated.</p>
<p>Smoke tests are included using Jest — they verify pod readiness and ensure logs are free of crash loops and noisy errors.</p>
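<p>As a flavor of what such a smoke test can assert, here is a small TypeScript helper that scans <code>kubectl get pods -o json</code> output for pods that are not Ready. The helper name is illustrative and not taken from the repo.</p>

```typescript
// Matches the shape of `kubectl get pods -o json`; only the fields we need are typed.
interface PodList {
  items: {
    metadata: { name: string };
    status: { conditions?: { type: string; status: string }[] };
  }[];
}

// Return the names of pods that do not report a Ready=True condition.
function notReadyPods(list: PodList): string[] {
  return list.items
    .filter(
      (p) =>
        !(p.status.conditions ?? []).some(
          (c) => c.type === "Ready" && c.status === "True"
        )
    )
    .map((p) => p.metadata.name);
}
```

<p>A Jest test can then feed this the live cluster's pod list and fail the run if the array is non-empty.</p>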
<hr />
<h2 id="heading-final-thoughts">Final Thoughts</h2>
<p>This setup isn’t flashy — but it works. It keeps the system lean, secure, and observably alive. When the time comes to scale or enable more services, I’ve got the structure in place to do it cleanly.</p>
<p>Until then, it’s back to building actual applications — not just dashboards to admire them.</p>
]]></content:encoded></item><item><title><![CDATA[Per-Namespace Control: kreativarc-infra v2]]></title><description><![CDATA[Version 2 of kreativarc-infra is live, and it's built with the same principles: low overhead, strong boundaries, and developer sanity. The major upgrade this time? Fully automated, per-namespace kubeconfig and Wireguard config generation.
This isn’t ...]]></description><link>https://blog.kreativarc.com/per-namespace-control-kreativarc-infra-v2</link><guid isPermaLink="true">https://blog.kreativarc.com/per-namespace-control-kreativarc-infra-v2</guid><category><![CDATA[Kubernetes]]></category><category><![CDATA[Pulumi]]></category><category><![CDATA[wireguard]]></category><category><![CDATA[Devops]]></category><category><![CDATA[#IaC]]></category><dc:creator><![CDATA[Arnold Lovas]]></dc:creator><pubDate>Wed, 02 Jul 2025 07:07:48 GMT</pubDate><content:encoded><![CDATA[<p>Version 2 of <code>kreativarc-infra</code> is live, and it's built with the same principles: low overhead, strong boundaries, and developer sanity. The major upgrade this time? Fully automated, per-namespace <strong>kubeconfig</strong> and <strong>Wireguard</strong> config generation.</p>
<p>This isn’t a toy. It’s a real Kubernetes infrastructure with real separation, built from scratch — by one person, on a single Hetzner VPS setup — for less than <strong>€10/month</strong>.</p>
<p>Yes, this is solo DevOps on a budget.</p>
<hr />
<h2 id="heading-the-current-architecture">The Current Architecture</h2>
<p>The stack runs on a <strong>single Kubernetes cluster</strong> with two modest Hetzner VPS nodes. That’s enough for PoC development today, and easy to scale both vertically (larger instances) and horizontally (more nodes) tomorrow. The focus is deliberate: <strong>secure isolation</strong>, not premature complexity.</p>
<p>Here’s the updated flow diagram:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1751439878076/ecc136bb-af11-449d-a472-d6d2bf3e0239.png" alt class="image--center mx-auto" /></p>
<p>Every part of the system is built from the ground up using <strong>Pulumi in TypeScript</strong>, including:</p>
<ul>
<li><p>Hetzner networking (private &amp; subnet)</p>
</li>
<li><p>SSH key management</p>
</li>
<li><p>VPS provisioning</p>
</li>
<li><p>K3s install</p>
</li>
<li><p>Namespace-level VPN and config generation</p>
</li>
</ul>
<hr />
<h2 id="heading-one-namespace-one-vpn-one-kubeconfig">One Namespace = One VPN = One Kubeconfig</h2>
<p>This is the core of v2:</p>
<ul>
<li><p>1 namespace</p>
</li>
<li><p>1 WireGuard config</p>
</li>
<li><p>1 kubeconfig file</p>
</li>
</ul>
<p>That’s it. Each namespace can be configured for <strong>admin</strong> or <strong>restricted</strong> access. Need an isolated CI/CD pipeline for an app? Just add the namespace to <code>inputConfig.ts</code>, set the access level, and <code>pulumi up</code>. The system takes care of the rest — including generating VPN credentials and placing the config files in the correct directories.</p>
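<p>A hypothetical sketch of what a namespace entry and its derived artifacts could look like. The real <code>inputConfig.ts</code> schema is not shown in this post, so the field names and paths here are invented:</p>

```typescript
// Invented schema for illustration; the repo's actual inputConfig.ts may differ.
type AccessLevel = "admin" | "restricted";

interface NamespaceConfig {
  name: string;        // namespace to create
  access: AccessLevel; // scope of the generated kubeconfig
}

const namespaces: NamespaceConfig[] = [
  { name: "core", access: "admin" },
  { name: "blog-app", access: "restricted" },
];

// One namespace = one WireGuard config = one kubeconfig.
function artifactsFor(ns: NamespaceConfig): string[] {
  return [`wireguard/${ns.name}.conf`, `kubeconfig/${ns.name}.yaml`];
}
```

<p>Adding a namespace is then a one-line config change followed by <code>pulumi up</code>, exactly as described above.</p>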
<p>Your secrets stay scoped. If something leaks, it affects exactly one namespace, not your entire cluster.</p>
<p>This was a non-negotiable design choice: <strong>repo-level and namespace-level isolation of secrets</strong>. It’s easy to sleep at night when your infrastructure doesn’t need to trust every container and pipeline globally.</p>
<hr />
<h2 id="heading-user-access-clear-scoped-predictable">User Access: Clear, Scoped, Predictable</h2>
<p>Here’s what the user access model looks like in practice:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1751439943345/82c123ea-0a67-4d28-9c60-27f6c5e57be1.png" alt class="image--center mx-auto" /></p>
<p>Whether you’re:</p>
<ul>
<li><p>an <strong>infra admin</strong> managing the whole thing with SSH and Pulumi,</p>
</li>
<li><p>a <strong>core maintainer</strong> deploying shared services,</p>
</li>
<li><p>or a <strong>POC/app developer</strong> shipping code through CI/CD,</p>
</li>
</ul>
<p>you get exactly the access you need — nothing more.</p>
<p>CI jobs connect through WireGuard, use their dedicated kubeconfig, and deploy into their namespace without ever seeing the rest of the cluster.</p>
<hr />
<h2 id="heading-test-automation-namespaces-as-first-class-citizens">Test Automation: Namespaces as First-Class Citizens</h2>
<p>New namespaces don’t just get configs — they’re automatically included in the <strong>infrastructure test suite</strong>. Just modify the config file and run <code>npm test</code>. The same goes for server definitions. No manual test wiring, no brittle test code. Everything is dynamic and config-driven.</p>
<hr />
<h2 id="heading-summary">Summary</h2>
<ul>
<li><p>One VPS cluster (2 nodes) under €10/month</p>
</li>
<li><p>Pulumi-managed K3s infra with secure VPN + kubeconfig per namespace</p>
</li>
<li><p>Namespace-level secret separation and access control</p>
</li>
<li><p>All configs auto-generated and ready for CI/CD integration</p>
</li>
<li><p>Scalable, testable, and ready for future multi-app deployment</p>
</li>
</ul>
<p>This isn’t a full-blown multi-tenant production setup (yet), but it’s the next best thing — and it doesn’t require a platform team to maintain.</p>
]]></content:encoded></item><item><title><![CDATA[K3s in Place: Closing the Loop on a TypeScript-Native Infra Stack]]></title><description><![CDATA[After several iterations, late-night refactors, and more pulumi destroy runs than I care to count, the final piece has landed: automated K3s installation. With this, the entire infrastructure stack for kreativarc.com is now fully operational — provis...]]></description><link>https://blog.kreativarc.com/k3s-in-place-closing-the-loop-on-a-typescript-native-infra-stack</link><guid isPermaLink="true">https://blog.kreativarc.com/k3s-in-place-closing-the-loop-on-a-typescript-native-infra-stack</guid><category><![CDATA[Infrastructure as code]]></category><category><![CDATA[Pulumi]]></category><category><![CDATA[k3s]]></category><category><![CDATA[TypeScript]]></category><category><![CDATA[Hetzner]]></category><dc:creator><![CDATA[Arnold Lovas]]></dc:creator><pubDate>Fri, 27 Jun 2025 07:32:50 GMT</pubDate><content:encoded><![CDATA[<p>After several iterations, late-night refactors, and more <code>pulumi destroy</code> runs than I care to count, the final piece has landed: automated K3s installation. With this, the entire infrastructure stack for <code>kreativarc.com</code> is now fully operational — provisioned, configured, and clustered from scratch using Pulumi and Hetzner Cloud.</p>
<p>No bash glue, no half-manual steps, no mismatched languages. Just a single, clean TypeScript codebase that turns a cloud token into a Kubernetes cluster — complete with private networking, firewalls, SSH key handling, and, now, a self-validating K3s deployment.</p>
<h2 id="heading-k3s-typescript-and-a-budget-vps-walk-into-a-datacenter">K3s, TypeScript, and a Budget VPS Walk Into a Datacenter...</h2>
<p>The setup provisions a classic three-node cluster: one control plane, two workers — all Hetzner CX22s, tucked into a private subnet with proper firewalling. K3s is installed on the control plane first (Traefik disabled, obviously), then the node token is fetched securely and used to join the workers. The <code>kubeconfig</code> is exported and made available to other repos via Pulumi stack outputs. It's the bridge between infra and app layer — and it's finally real.</p>
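<p>Stripped of the SSH plumbing, the install itself might boil down to something like the commands below; the IP is a placeholder, and the real flow runs these remotely via the <code>SshCli</code> helper rather than by hand.</p>

```shell
SERVER_IP="10.0.1.1"  # placeholder: private address of the control-plane node

# Control plane: install K3s with the bundled Traefik disabled.
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="server --disable traefik" sh -

# Fetch the join token from the control plane.
NODE_TOKEN=$(sudo cat /var/lib/rancher/k3s/server/node-token)

# Each worker: join the cluster as an agent using the token.
curl -sfL https://get.k3s.io | K3S_URL="https://${SERVER_IP}:6443" K3S_TOKEN="$NODE_TOKEN" sh -

# The infra counts as "done" only when every node reports Ready.
kubectl get nodes
```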
<p>Under the hood, the <code>SshCli</code> helper now does most of the heavy lifting. It generates per-instance Ed25519 keypairs, establishes secure SSH connections (with host key checks, because we’re adults), and runs the install scripts remotely. We treat SSH access like a first-class citizen, not a shell hack.</p>
<p>Testing? Yes. Jest-based tests check not just the resource state, but actual cluster health and node readiness. The CI doesn’t consider the infra “done” unless <code>kubectl get nodes</code> shows green across the board. Worker node flaking? You'll find out fast.</p>
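<p>Conceptually, the readiness gate just parses <code>kubectl get nodes</code> and requires every node to report <code>Ready</code>. A minimal sketch of such a check (the helper name and wiring are illustrative, not the repo's actual code):</p>
<pre><code class="lang-ts">// Parse the default plain-text output of `kubectl get nodes` and report
// whether every listed node is Ready.
export function allNodesReady(kubectlOutput: string): boolean {
  const rows = kubectlOutput.trim().split("\n").slice(1); // drop the header row
  if (rows.length === 0) return false;
  return rows.every(function (row) {
    const status = row.trim().split(/\s+/)[1]; // NAME STATUS ROLES AGE VERSION
    return status === "Ready";
  });
}
</code></pre>
<p>Inside a Jest test, this would be fed the output of <code>execSync("kubectl get nodes")</code> and asserted to be true.</p>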
<h2 id="heading-a-refactor-worth-the-effort">A Refactor Worth the Effort</h2>
<p>Bringing in K3s triggered a full project refactor. Hetzner resources were moved into scoped subfolders, config handling was centralized, and the Pulumi wrapper was split into a <code>pulumiCli</code> object with better test ergonomics. Output handling got saner, especially around secrets like node tokens and exported <code>kubeconfig</code>.</p>
<p>The stack is now far more modular — a side effect of solving real-world deployment bugs that only surface when you actually try to join a cluster over a freshly provisioned network with restricted firewalls and tight SSH policies.</p>
<h2 id="heading-one-language-to-rule-them-all-for-now">One Language to Rule Them All (For Now)</h2>
<p>The entire project is TypeScript-based, which keeps context switches minimal and CI/CD straightforward. Everything — from infra definitions to helper functions and tests — is in the same language, same tooling, same mental model. It’s fast, typed, and works.</p>
<p>Eventually, specialized tools (e.g., Go for k8s CRD wrangling or Python for AI-heavy workflows) may make their way in. But today, TypeScript is the lingua franca of <code>kreativarc_infra</code>. And that clarity has helped move faster, especially while solo.</p>
<h2 id="heading-it-all-runs-for-under-10-a-month">It All Runs for Under €10 a Month</h2>
<p>The stack runs on budget-friendly Hetzner instances with no managed services. Firewalls are locked down, tunnels handle exposure (when needed), and there’s zero persistent infra outside the VPS box. CI/CD is handled separately via GitHub Actions, pulling kubeconfig from Pulumi outputs when needed.</p>
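<p>Pulling that kubeconfig out of stack state is a single CLI call (the output name <code>kubeconfig</code> matches this stack's exports; yours may differ):</p>
<pre><code class="lang-bash"># The kubeconfig is stored as a secret output, hence --show-secrets
pulumi stack output kubeconfig --show-secrets > kubeconfig.yaml
KUBECONFIG=./kubeconfig.yaml kubectl get nodes
</code></pre>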
<p>You don’t need to spend hundreds per month for a real infrastructure backbone — just automate ruthlessly, test everything, and don’t cut corners on security.</p>
<hr />
<p>Next stop: modular namespaces, observability stack rework, and possibly a Golang sidecar or two. But for now, it clusters, it connects, and it tests itself. That’s a solid milestone.</p>
]]></content:encoded></item><item><title><![CDATA[Infrastructure Testing with Confidence: Jest + Pulumi + Central Config]]></title><description><![CDATA[In the previous post, I showed how to define an entire infrastructure stack using a single, strongly-typed TypeScript configuration file. It’s clean, scalable, and easy to reason about — no more hunting through scattered YAMLs or mismatched stack fil...]]></description><link>https://blog.kreativarc.com/infrastructure-testing-with-confidence-jest-pulumi-central-config</link><guid isPermaLink="true">https://blog.kreativarc.com/infrastructure-testing-with-confidence-jest-pulumi-central-config</guid><category><![CDATA[Pulumi]]></category><category><![CDATA[#InfrastructureAsCode]]></category><category><![CDATA[Devops]]></category><category><![CDATA[TypeScript]]></category><category><![CDATA[Testing]]></category><dc:creator><![CDATA[Arnold Lovas]]></dc:creator><pubDate>Tue, 24 Jun 2025 14:23:23 GMT</pubDate><content:encoded><![CDATA[<p>In <a target="_blank" href="https://blog.kreativarc.com/infrastructure-sanity-with-a-central-config-one-file-to-rule-them-all">the previous post</a>, I showed how to define an entire infrastructure stack using a single, strongly-typed TypeScript configuration file. It’s clean, scalable, and easy to reason about — no more hunting through scattered YAMLs or mismatched stack files.</p>
<p>But here’s the real win: once your system is described by a single source of truth, that same config can power a complete infrastructure validation suite. Not just <em>deployment</em>, but <em>verification</em>. Every single time. Automatically.</p>
<h2 id="heading-why-bother">Why bother?</h2>
<p>Manually checking if <code>hetzner_server.control_plane</code> has the right server type, image, location, network, and firewall gets old fast — and breaks even faster. Two servers are manageable. Five are annoying. Ten? You’re guessing and hoping.</p>
<p>Even worse, Pulumi's state can drift. Resources may partially apply, fail silently, or get manually edited in the UI. Having a fast, reproducible sanity check is invaluable.</p>
<h2 id="heading-the-method">The method</h2>
<p>The core idea is simple:</p>
<ol>
<li>Load the <code>config.ts</code> file (the central configuration).</li>
<li>Query the actual state of live infrastructure (via <code>hcloud</code>, file system, etc.).</li>
<li>Compare the two.</li>
<li>If anything diverges: fail fast.</li>
</ol>
<p>This isn’t snapshot testing. This is <strong>state validation</strong>. And since the tests derive everything from the config, they’re future-proof: add a new server to the config, and the test suite picks it up automatically. No test rewrites required.</p>
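<p>The comparison step itself can be as small as a field-by-field diff between a config entry and what the cloud reports. A sketch with hypothetical type shapes (the real suite asserts per field with Jest):</p>
<pre><code class="lang-ts">type ServerSpec = { name: string; serverType: string; image: string; location: string };

// Returns a human-readable list of mismatched fields; empty means "in sync".
export function diffServer(expected: ServerSpec, live: ServerSpec): string[] {
  const mismatches: string[] = [];
  for (const key of ["serverType", "image", "location"] as const) {
    if (expected[key] !== live[key]) {
      mismatches.push(`${key}: expected ${expected[key]}, got ${live[key]}`);
    }
  }
  return mismatches;
}
</code></pre>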
<h2 id="heading-the-technical-catch">The technical catch</h2>
<p>Pulumi’s SDK is designed to run <em>inside</em> the Pulumi runtime. You don’t get rich access to stack data from external scripts. And Jest runs outside that bubble.</p>
<p>So — here comes the hack — we interrogate Pulumi (and Hetzner) using shell commands. Crude, yes. But effective.</p>
<p>Example: get the current stack name using <code>pulumi stack ls</code>:</p>
<pre><code class="lang-ts">import { execSync } from "child_process";

export function pulumiGetStack(): string {
  const raw = execSync(`pulumi stack ls --json`, { encoding: "utf-8" });
  const current = JSON.parse(raw).find((s: any) => s.current);
  if (!current) throw new Error("no current Pulumi stack selected");
  return current.name;
}
</code></pre>
<p>From there, tests live in <code>tests/*.test.ts</code>, where they read the central config and assert against the real infrastructure.</p>
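<p>Hetzner itself is interrogated the same way, by shelling out to the <code>hcloud</code> CLI and parsing JSON (a sketch; assumes <code>hcloud</code> is installed and authenticated):</p>
<pre><code class="lang-ts">import { execSync } from "child_process";

// Fetch the live description of a server by name.
export function hcloudServerDescribe(name: string): any {
  const raw = execSync(`hcloud server describe ${name} -o json`, { encoding: "utf-8" });
  return JSON.parse(raw);
}
</code></pre>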
<h2 id="heading-a-test-run-in-action">A test run in action</h2>
<pre><code class="lang-bash">npm <span class="hljs-built_in">test</span>

&gt; <span class="hljs-built_in">test</span>
&gt; <span class="hljs-built_in">cd</span> src &amp;&amp; jest --verbose

 PASS  tests/networkSubnet.test.ts (6.39 s)
  Hetzner network subnet creation
    ✓ subnet exists
    ✓ subnet <span class="hljs-built_in">type</span> matches input
    ✓ subnet IP range matches input
    ✓ subnet network zone matches input

 PASS  tests/network.test.ts (6.574 s)
  Hetzner network creation
    ✓ network exists
    ✓ network IP range matches input

 PASS  tests/sshKey.test.ts (6.577 s)
  SSH Key Generation (homedir only)
    ✓ should create private key <span class="hljs-keyword">for</span> control-plane <span class="hljs-keyword">in</span> ~/.ssh
    ✓ should create public key <span class="hljs-keyword">for</span> control-plane <span class="hljs-keyword">in</span> ~/.ssh
    ✓ private key <span class="hljs-keyword">for</span> control-plane <span class="hljs-keyword">in</span> ~/.ssh should not be empty
    ✓ public key <span class="hljs-keyword">for</span> control-plane <span class="hljs-keyword">in</span> ~/.ssh should not be empty
    ✓ private key <span class="hljs-keyword">for</span> control-plane <span class="hljs-keyword">in</span> ~/.ssh should start with <span class="hljs-string">'-----BEGIN'</span>
    ✓ public key <span class="hljs-keyword">for</span> control-plane <span class="hljs-keyword">in</span> ~/.ssh should start with <span class="hljs-string">'ssh-'</span>
    ✓ should create private key <span class="hljs-keyword">for</span> worker-node-1 <span class="hljs-keyword">in</span> ~/.ssh
    ✓ should create public key <span class="hljs-keyword">for</span> worker-node-1 <span class="hljs-keyword">in</span> ~/.ssh
    ✓ private key <span class="hljs-keyword">for</span> worker-node-1 <span class="hljs-keyword">in</span> ~/.ssh should not be empty
    ✓ public key <span class="hljs-keyword">for</span> worker-node-1 <span class="hljs-keyword">in</span> ~/.ssh should not be empty
    ✓ private key <span class="hljs-keyword">for</span> worker-node-1 <span class="hljs-keyword">in</span> ~/.ssh should start with <span class="hljs-string">'-----BEGIN'</span>
    ✓ public key <span class="hljs-keyword">for</span> worker-node-1 <span class="hljs-keyword">in</span> ~/.ssh should start with <span class="hljs-string">'ssh-'</span>

 PASS  tests/firewall.test.ts (7.486 s)
  Hetzner Cloud Firewall
    ✓ Firewall exists <span class="hljs-keyword">for</span> server control-plane
    ✓ Firewall rules match <span class="hljs-keyword">for</span> server control-plane
    ✓ Firewall exists <span class="hljs-keyword">for</span> server worker-node-1
    ✓ Firewall rules match <span class="hljs-keyword">for</span> server worker-node-1

 PASS  tests/firewallAttachment.test.ts (8.287 s)
  Hetzner server firewall attachment
    ✓ Server control-plane has the correct firewall
    ✓ Server worker-node-1 has the correct firewall

 PASS  tests/server.test.ts (8.74 s)
  Hetzner server configuration
    ✓ Server control-plane is running
    ✓ Server control-plane has correct server <span class="hljs-built_in">type</span>
    ✓ Server control-plane has correct image
    ✓ Server control-plane has correct location
    ✓ Server control-plane is <span class="hljs-keyword">in</span> the correct network
    ✓ Server worker-node-1 is running
    ✓ Server worker-node-1 has correct server <span class="hljs-built_in">type</span>
    ✓ Server worker-node-1 has correct image
    ✓ Server worker-node-1 has correct location
    ✓ Server worker-node-1 is <span class="hljs-keyword">in</span> the correct network

Test Suites: 6 passed, 6 total  
Tests:       34 passed, 34 total  
Snapshots:   0 total  
Time:        9.08 s, estimated 10 s  
Ran all <span class="hljs-built_in">test</span> suites.
</code></pre>
<p>In less than 10 seconds, you know whether:</p>
<ul>
<li>The correct network and subnet exist  </li>
<li>SSH keys are present and valid (checked via filesystem, format, contents)  </li>
<li>All servers are running, correctly typed, properly located  </li>
<li>Firewalls are created and attached with the right rules  </li>
</ul>
<p>All of it defined from — and validated against — the config.</p>
<h2 id="heading-why-this-matters">Why this matters</h2>
<ul>
<li>Add a new server? It gets validated.  </li>
<li>Something drifts in Pulumi state? Detected.  </li>
<li>Accidentally delete something from the UI? Caught.  </li>
<li>Firewall rule typo? Fails test.  </li>
</ul>
<p>And most importantly: you get a <strong>one-minute feedback loop</strong>. Infrastructure either matches config, or it doesn’t.</p>
<h2 id="heading-final-thoughts">Final thoughts</h2>
<p>Most IaC projects break down because there’s no feedback mechanism. You define, deploy, and hope. This approach gives you a definitive answer every time: yes, it matches; or no, it’s broken.</p>
<p>It’s not pure — some parts are duct-taped together with shell calls — but it works. And once your config is canonical, everything else (deployment, validation, debugging) becomes a matter of diffing reality against expectation.</p>
]]></content:encoded></item><item><title><![CDATA[Infrastructure Sanity with a Central Config: One File to Rule Them All]]></title><description><![CDATA[Before deploying apps or spinning up Kubernetes, you need something solid underneath — the server layer. That’s where this piece fits in: the base infrastructure stack that everything else depends on.
The goal? Build a minimal, isolated, and reproduc...]]></description><link>https://blog.kreativarc.com/infrastructure-sanity-with-a-central-config-one-file-to-rule-them-all</link><guid isPermaLink="true">https://blog.kreativarc.com/infrastructure-sanity-with-a-central-config-one-file-to-rule-them-all</guid><category><![CDATA[#InfrastructureAsCode]]></category><category><![CDATA[Pulumi]]></category><category><![CDATA[Devops]]></category><category><![CDATA[Hetzner]]></category><category><![CDATA[Kubernetes]]></category><dc:creator><![CDATA[Arnold Lovas]]></dc:creator><pubDate>Sun, 22 Jun 2025 10:58:31 GMT</pubDate><content:encoded><![CDATA[<p>Before deploying apps or spinning up Kubernetes, you need something solid underneath — the server layer. That’s where this piece fits in: the base infrastructure stack that everything else depends on.</p>
<p>The goal? Build a minimal, isolated, and reproducible environment on Hetzner Cloud — one that doesn’t burn €10/month per project just to exist. Kubernetes (kreativarc k3s) and the actual applications (<code>poc</code>, <code>infra tools</code>, <code>app1</code>, <code>app2</code>) come later — but none of that works without a clean, affordable foundation.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1750588936779/0d320e3d-5a4e-4000-aa69-f21cc45ace94.png" alt class="image--center mx-auto" /></p>
<p>The first version of this Pulumi-based infrastructure was… verbose. Not in the literary sense — just in the number of files. Each server had its own config scattered across multiple files, with subtle differences that were easy to miss and even easier to break. Adding a new machine felt like assembling IKEA furniture from schematics written in Morse code.</p>
<p>So I did what any developer does after a few rounds of self-inflicted chaos: I centralized.</p>
<p>The new setup revolves around a single <code>inputConfig</code> object — a plain TypeScript file that defines everything needed to bootstrap a secure, reproducible infrastructure stack on Hetzner. One file. One format. No guesswork.</p>
<h2 id="heading-what-it-does">What It Does</h2>
<p>The stack provisions an isolated cloud environment — private network, subnets, firewalls, and servers — all invisible to the outside world. There’s no public SSH access, no exposed ports. Cloudflare Tunnel will eventually be the only ingress point.</p>
<p>It automatically:</p>
<ul>
<li><p>sets up a <code>192.168.x.x</code> internal network</p>
</li>
<li><p>generates SSH keys locally (and only locally — they never leave your machine)</p>
</li>
<li><p>creates firewalls that allow only trusted IPs</p>
</li>
<li><p>provisions servers, each with the correct network, SSH key, and firewall rules</p>
</li>
</ul>
<p>Here’s a stripped-down example of the config:</p>
<pre><code class="lang-ts">{
    projectName,
    stackName,
    adminPublicIp,
    network: {
        <span class="hljs-keyword">type</span>: <span class="hljs-string">"cloud"</span>,
        ipRange: <span class="hljs-string">"192.168.100.0/24"</span>,
        networkZone: <span class="hljs-string">"eu-central"</span>
    },
    servers: [
        {
            name: <span class="hljs-string">"control-plane"</span>,
            serverType: <span class="hljs-string">"cx22"</span>,
            image: <span class="hljs-string">"ubuntu-22.04"</span>,
            location: <span class="hljs-string">"nbg1"</span>,
            firewallRules: [
                {
                    direction: <span class="hljs-string">"in"</span>,
                    protocol: <span class="hljs-string">"tcp"</span>,
                    port: <span class="hljs-string">"22"</span>,
                    sourceIps: [<span class="hljs-string">`<span class="hljs-subst">${adminPublicIp}</span>/32`</span>],
                    description: <span class="hljs-string">"SSH access"</span>
                }
            ]
        },
        {
            name: <span class="hljs-string">"worker-poc"</span>,
            serverType: <span class="hljs-string">"cx22"</span>,
            image: <span class="hljs-string">"ubuntu-22.04"</span>,
            location: <span class="hljs-string">"nbg1"</span>,
            firewallRules: [
                {
                    direction: <span class="hljs-string">"in"</span>,
                    protocol: <span class="hljs-string">"tcp"</span>,
                    port: <span class="hljs-string">"22"</span>,
                    sourceIps: [<span class="hljs-string">`<span class="hljs-subst">${adminPublicIp}</span>/32`</span>],
                    description: <span class="hljs-string">"SSH access from admin"</span>
                }
            ]
        }
    ]
}
</code></pre>
<h2 id="heading-why-it-matters">Why It Matters</h2>
<p>This config isn’t just for readability. It makes the entire system scalable.</p>
<p>Want to add a new server? Append it to the array.</p>
<p>Want a new environment — say, <code>dev</code> in parallel to <code>prod</code>? Just switch the stack and reuse the config structure. Since everything's parametrized and the naming is stack-aware, resources stay isolated and reproducible.</p>
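<p>Stack-aware naming is the detail that keeps environments apart; the idea is simply to prefix every resource name with the stack. A trivial sketch (not the repo's actual helper):</p>
<pre><code class="lang-ts">// A `dev` and a `prod` stack can then coexist in the same Hetzner
// project without resource-name collisions.
export function resourceName(stackName: string, base: string): string {
  return `${stackName}-${base}`;
}
</code></pre>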
<h2 id="heading-clean-infra-fewer-surprises">Clean Infra, Fewer Surprises</h2>
<p>By keeping the input structure minimal and declarative, the actual resource logic becomes dead simple. Each Pulumi component (network, firewall, server, SSH key) just reads from this object and builds itself accordingly. There’s no hand-written state, no hardcoded IPs, and no shared secrets.</p>
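<p>In practice, each component is little more than a loop over that object. A sketch using the <code>@pulumi/hcloud</code> provider (wiring simplified; network and firewall attachment omitted, and the config path is hypothetical):</p>
<pre><code class="lang-ts">import * as hcloud from "@pulumi/hcloud";
import { inputConfig } from "./config"; // hypothetical path to the central config

// One Server resource per config entry; nothing else to hand-write.
for (const spec of inputConfig.servers) {
  new hcloud.Server(spec.name, {
    name: spec.name,
    serverType: spec.serverType,
    image: spec.image,
    location: spec.location,
  });
}
</code></pre>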
<p>The only thing you need to provide is your current public IP — passed in securely as a Pulumi secret — to allow SSH access during provisioning.</p>
<pre><code class="lang-bash">pulumi config <span class="hljs-built_in">set</span> adminPublicIp $(curl -s ifconfig.me) --secret
</code></pre>
<p>Once that’s in, the infra is yours. Clean. Predictable. And, for once, not screaming at you in YAML.</p>
<h2 id="heading-planned-vps-scaling">Planned VPS Scaling</h2>
<p>While the long-term plan may involve abstracting away VPS management entirely, for now, each node is still a separate, explicitly defined server. That said — scaling the cluster is already trivial.</p>
<p>Thanks to the declarative <code>inputConfig</code>, provisioning a multi-node setup (control plane + multiple workers) takes just a few lines. Need a new worker? Add an entry to the config, <code>pulumi up</code>, and it’s done.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1750589647241/c4474ffa-f4f6-4118-a66c-2e542f145ddf.png" alt class="image--center mx-auto" /></p>
<p>The goal is flexibility: you can spin up a POC node today, and production-grade workers tomorrow — without refactoring the entire infrastructure.</p>
<hr />
<p><em>Next up: testing. Because we don’t just trust infra — we verify it.</em></p>
]]></content:encoded></item><item><title><![CDATA[Test-Driven Infrastructure with Pulumi and Jest]]></title><description><![CDATA[A smoke-tested VPS is a happy VPS 
Modern infrastructure development doesn’t have to involve mountains of YAML or some arcane DSL. If you think like a Node.js developer, then infrastructure as code is just another module — and like every other module...]]></description><link>https://blog.kreativarc.com/test-driven-infrastructure-with-pulumi-and-jest</link><guid isPermaLink="true">https://blog.kreativarc.com/test-driven-infrastructure-with-pulumi-and-jest</guid><category><![CDATA[Pulumi]]></category><category><![CDATA[Jest]]></category><category><![CDATA[#IaC]]></category><category><![CDATA[Devops]]></category><category><![CDATA[Node.js]]></category><dc:creator><![CDATA[Arnold Lovas]]></dc:creator><pubDate>Sat, 14 Jun 2025 17:56:23 GMT</pubDate><content:encoded><![CDATA[<p><em>A smoke-tested VPS is a happy VPS</em> </p>
<p>Modern infrastructure development doesn’t have to involve mountains of YAML or some arcane DSL. If you think like a Node.js developer, then <code>infrastructure as code</code> is just another module — and like every other module, it can (and should) be tested.</p>
<h2 id="heading-pulumi-jest-infrastructure-you-actually-validate">Pulumi + Jest = Infrastructure you actually validate</h2>
<p>Creating a Hetzner VPS with TypeScript is straightforward:</p>
<pre><code class="lang-ts"><span class="hljs-keyword">import</span> { Server } <span class="hljs-keyword">from</span> <span class="hljs-string">"@pulumi/hcloud"</span>;

<span class="hljs-keyword">const</span> vps = <span class="hljs-keyword">new</span> Server(serverName, {
    name: serverName,
    serverType: <span class="hljs-string">"cx22"</span>,
    image: <span class="hljs-string">"ubuntu-22.04"</span>,
    location: <span class="hljs-string">"nbg1"</span>,
    sshKeys: [sshKey.name],
});
</code></pre>
<p>Then comes the firewall — with a single open port:</p>
<pre><code class="lang-ts"><span class="hljs-keyword">import</span> { Firewall } <span class="hljs-keyword">from</span> <span class="hljs-string">"@pulumi/hcloud"</span>;

<span class="hljs-keyword">const</span> firewall = <span class="hljs-keyword">new</span> Firewall(firewallResourceName, {
    name: firewallResourceName,
    rules: [
        {
            direction: <span class="hljs-string">"in"</span>,
            protocol: <span class="hljs-string">"tcp"</span>,
            port: <span class="hljs-string">"22"</span>,
            sourceIps: [<span class="hljs-string">"0.0.0.0/0"</span>, <span class="hljs-string">"::/0"</span>],
            description: <span class="hljs-string">"Allow SSH from anywhere"</span>,
        },
    ],
});
</code></pre>
<p>The entire infra update process follows a Node-style workflow:</p>
<pre><code class="lang-json"><span class="hljs-string">"scripts"</span>: {
    <span class="hljs-attr">"preview:prod:critical:vps"</span>: <span class="hljs-string">"pulumi preview --cwd critical/vps --stack prod"</span>,
    <span class="hljs-attr">"prod:critical:vps"</span>: <span class="hljs-string">"pulumi up --cwd critical/vps --stack prod --yes"</span>,
    <span class="hljs-attr">"test"</span>: <span class="hljs-string">"jest"</span>
}
</code></pre>
<h2 id="heading-testing-not-just-for-applications">Testing: not just for applications</h2>
<p>Running <code>jest</code> gives you immediate feedback on whether the infrastructure is actually functioning. We’re not just testing whether “the script ran,” but whether the result is alive and accessible.</p>
<p>Terminal output:</p>
<pre><code>&gt; test
&gt; jest

 PASS  critical/vps/vps.test.ts (<span class="hljs-number">14.711</span> s)
  Hetzner VPS basic checks
    ✓ server is running and has a public IP (<span class="hljs-number">3</span> ms)
  Expected open ports
    ✓ port <span class="hljs-number">22</span> should be open (<span class="hljs-number">124</span> ms)
  Expected closed ports
    ✓ port <span class="hljs-number">21</span> should NOT be open (<span class="hljs-number">1148</span> ms)
    ✓ port <span class="hljs-number">23</span> should NOT be open (<span class="hljs-number">1029</span> ms)
    ✓ port <span class="hljs-number">25</span> should NOT be open (<span class="hljs-number">1037</span> ms)
    ✓ port <span class="hljs-number">80</span> should NOT be open (<span class="hljs-number">1037</span> ms)
    ✓ port <span class="hljs-number">443</span> should NOT be open (<span class="hljs-number">1034</span> ms)
    ✓ port <span class="hljs-number">3306</span> should NOT be open (<span class="hljs-number">1037</span> ms)
    ✓ port <span class="hljs-number">5432</span> should NOT be open (<span class="hljs-number">1036</span> ms)
    ✓ port <span class="hljs-number">6379</span> should NOT be open (<span class="hljs-number">1035</span> ms)
    ✓ port <span class="hljs-number">11211</span> should NOT be open (<span class="hljs-number">1035</span> ms)
    ✓ port <span class="hljs-number">8080</span> should NOT be open (<span class="hljs-number">1035</span> ms)
    ✓ port <span class="hljs-number">9000</span> should NOT be open (<span class="hljs-number">1034</span> ms)
    ✓ port <span class="hljs-number">9100</span> should NOT be open (<span class="hljs-number">1036</span> ms)
    ✓ port <span class="hljs-number">9200</span> should NOT be open (<span class="hljs-number">1070</span> ms)

Test Suites: <span class="hljs-number">1</span> passed, <span class="hljs-number">1</span> total  
<span class="hljs-attr">Tests</span>:       <span class="hljs-number">15</span> passed, <span class="hljs-number">15</span> total  
<span class="hljs-attr">Snapshots</span>:   <span class="hljs-number">0</span> total  
<span class="hljs-attr">Time</span>:        <span class="hljs-number">14.739</span> s, estimated <span class="hljs-number">16</span> s  
Ran all test suites.
</code></pre><h2 id="heading-why-does-this-matter">Why does this matter?</h2>
<p>Because a <code>pulumi up</code> is not a guarantee that your server is alive, reachable, or secure. With this approach:</p>
<ul>
<li><strong>Smoke tests</strong> confirm that the VPS is created and has a public IP</li>
<li><strong>Positive port checks</strong> ensure the intended ports are accessible</li>
<li><strong>Negative port checks</strong> verify that everything else stays closed</li>
</ul>
<p>The whole setup gives you feedback within 15 seconds: the server is alive, nothing unnecessary is exposed, and you’re good to deploy.</p>
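<p>Those probes need nothing beyond Node's <code>net</code> module. A compact sketch of the kind of check behind the assertions above:</p>
<pre><code class="lang-ts">import * as net from "net";

// Attempt a TCP connect; resolves true if the port accepts the connection,
// false on refusal or timeout. Drives both positive and negative checks.
export function isPortOpen(host: string, port: number, timeoutMs = 1000) {
  return new Promise(function (resolve) {
    const socket = new net.Socket();
    socket.setTimeout(timeoutMs);
    socket.once("connect", function () { socket.destroy(); resolve(true); });
    socket.once("timeout", function () { socket.destroy(); resolve(false); });
    socket.once("error", function () { socket.destroy(); resolve(false); });
    socket.connect(port, host);
  });
}
</code></pre>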
<hr />
<p>If your infrastructure is just another Node module, feel free to treat it like one. Here, <code>npm run test</code> doesn’t just validate code — it validates the entire environment your app is supposed to run on.</p>
]]></content:encoded></item><item><title><![CDATA[Monitoring Stack for VPS: From Zero to Grafana Dashboard]]></title><description><![CDATA[In previous projects, I often used Prometheus-based monitoring stacks — but never had to install or configure them myself. This time, I decided to set up a complete observability stack on a single VPS from scratch, fully containerized and securely ex...]]></description><link>https://blog.kreativarc.com/monitoring-stack-for-vps-from-zero-to-grafana-dashboard</link><guid isPermaLink="true">https://blog.kreativarc.com/monitoring-stack-for-vps-from-zero-to-grafana-dashboard</guid><category><![CDATA[monitoring]]></category><category><![CDATA[Devops]]></category><category><![CDATA[#prometheus]]></category><category><![CDATA[Grafana]]></category><category><![CDATA[SelfHosting]]></category><dc:creator><![CDATA[Arnold Lovas]]></dc:creator><pubDate>Fri, 13 Jun 2025 14:54:08 GMT</pubDate><content:encoded><![CDATA[<p>In previous projects, I often used Prometheus-based monitoring stacks — but never had to install or configure them myself. This time, I decided to set up a complete observability stack on a single VPS from scratch, fully containerized and securely exposed via Cloudflare Tunnel.</p>
<p>The goal wasn't a full-scale observability platform, just a minimal yet practical setup tailored to actual needs.</p>
<hr />
<h2 id="heading-objectives">Objectives</h2>
<ul>
<li><p>Monitor VPS system-level resources (CPU, memory)</p>
</li>
<li><p>Track container-level resource usage (CPU, memory)</p>
</li>
<li><p>Collect and label Docker logs</p>
</li>
<li><p>Isolate application errors from background noise (Traefik, monitoring, tunneling)</p>
</li>
<li><p>Display all this in a clean, unified Grafana dashboard</p>
</li>
</ul>
<hr />
<h2 id="heading-architecture-overview">Architecture Overview</h2>
<p>Each component runs in its own Docker container, inside a dedicated Docker network, without exposing any public ports. All interfaces are available behind Cloudflare Tunnel with Access protection.</p>
<h3 id="heading-metrics-stack">Metrics Stack</h3>
<ul>
<li><p><strong>node-exporter</strong> – host-level metrics (<code>localhost:9140</code>)</p>
</li>
<li><p><strong>cAdvisor</strong> – container-level metrics (<code>localhost:9130</code>)</p>
</li>
<li><p><strong>Prometheus</strong> – time-series collection + scraping (<code>localhost:9100</code>)</p>
</li>
</ul>
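<p>For orientation, the metrics side is two containers; roughly (upstream default images, host ports remapped to match the list above and bound to localhost only):</p>
<pre><code class="lang-bash">docker network create monitoring

docker run -d --name node-exporter --network monitoring \
  -p 127.0.0.1:9140:9100 prom/node-exporter

docker run -d --name cadvisor --network monitoring \
  -p 127.0.0.1:9130:8080 \
  -v /:/rootfs:ro -v /var/run:/var/run:ro \
  -v /sys:/sys:ro -v /var/lib/docker/:/var/lib/docker:ro \
  gcr.io/cadvisor/cadvisor
</code></pre>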
<h3 id="heading-logging-stack">Logging Stack</h3>
<ul>
<li><p><strong>promtail</strong> – reads logs from the Docker socket (not the log driver)</p>
</li>
<li><p><strong>Loki</strong> – log storage with dynamic labeling (<code>localhost:9110</code>)</p>
</li>
</ul>
<h3 id="heading-visualization">Visualization</h3>
<ul>
<li><p><strong>Grafana</strong> – preconfigured with Prometheus and Loki datasources (<code>localhost:9120</code>)</p>
</li>
<li><p>One main dashboard: system metrics and application errors</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1749826237783/4ad1b7d9-c943-4695-ae24-8e998ded1d1f.png" alt class="image--center mx-auto" /></p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1749826285154/57782db2-6a33-453b-844c-83f99545d3f4.png" alt class="image--center mx-auto" /></p>
</li>
</ul>
<hr />
<h2 id="heading-public-access">Public Access</h2>
<p>All UI components are exposed via Cloudflare Tunnel using subdomains:</p>
<ul>
<li><p><code>grafana.kreativarc.com</code></p>
</li>
<li><p><code>prometheus.kreativarc.com</code></p>
</li>
</ul>
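<p>With a named tunnel in place, each subdomain is one routing command (the tunnel name here is illustrative):</p>
<pre><code class="lang-bash">cloudflared tunnel route dns kreativarc-tunnel grafana.kreativarc.com
cloudflared tunnel route dns kreativarc-tunnel prometheus.kreativarc.com
</code></pre>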
<p>Containers communicate via Docker-internal network aliases. Logs are automatically labeled with container name, image, and service for easier querying and filtering.</p>
<hr />
<h2 id="heading-challenges">Challenges</h2>
<p>Setting it up took more time than expected. The tools themselves are well-documented — the issues mostly came from the surrounding environment:</p>
<ul>
<li><p>Docker’s internal DNS and service discovery quirks</p>
</li>
<li><p>Cloudflare Tunnel routing and token management</p>
</li>
<li><p>Grafana datasource mappings occasionally failing silently</p>
</li>
</ul>
<p>In the end, I managed to isolate container-level issues and error spikes in real time, while keeping the noise from infrastructure logs to a minimum.</p>
<hr />
<h2 id="heading-conclusion">Conclusion</h2>
<p>This monitoring setup provides a clean, focused observability solution for a single VPS. It avoids unnecessary complexity (no sidecars, no operators, no Kubernetes) while offering real visibility into host and container-level issues.</p>
<p>Not the most ambitious system — but practical, reliable, and enough to catch what's going wrong before it escalates.</p>
]]></content:encoded></item><item><title><![CDATA[My First Infrastructure Skeleton: From Manual Pain to IaC Sanity]]></title><description><![CDATA[Setting up infrastructure from scratch is often romanticized — until you're face to face with a blinking cursor and nothing installed. This post walks through how I built my initial setup manually, tested everything hands-on, and prepared the system ...]]></description><link>https://blog.kreativarc.com/my-first-infrastructure-skeleton-from-manual-pain-to-iac-sanity</link><guid isPermaLink="true">https://blog.kreativarc.com/my-first-infrastructure-skeleton-from-manual-pain-to-iac-sanity</guid><category><![CDATA[Devops]]></category><category><![CDATA[SelfHosting]]></category><category><![CDATA[infrastructure]]></category><category><![CDATA[Docker]]></category><category><![CDATA[cloudflare]]></category><dc:creator><![CDATA[Arnold Lovas]]></dc:creator><pubDate>Fri, 13 Jun 2025 14:11:22 GMT</pubDate><content:encoded><![CDATA[<p>Setting up infrastructure from scratch is often romanticized — until you're face to face with a blinking cursor and nothing installed. This post walks through how I built my initial setup manually, tested everything hands-on, and prepared the system for a future IaC-based rebuild.</p>
<h2 id="heading-1-server-amp-dns-setup">1. Server &amp; DNS Setup</h2>
<h3 id="heading-hetzner-vps-cx22">Hetzner VPS (CX22)</h3>
<ul>
<li>Ubuntu base image</li>
<li>Clean slate environment — no preinstalled surprises</li>
</ul>
<h3 id="heading-cloudflare-dns-proxy-mode">Cloudflare DNS (Proxy Mode)</h3>
<ul>
<li>DNS with built-in DDoS mitigation</li>
<li>IP masking and active edge protection</li>
</ul>
<h2 id="heading-2-core-stack-half-manual-half-iac">2. Core Stack: Half-Manual, Half-IaC</h2>
<p>Instead of full automation from day one, I opted for a hybrid approach: <code>docker-compose</code> files organized in a central repo, with services running entirely in Docker for simplicity and consistency.</p>
<h3 id="heading-deployment-approach">Deployment Approach</h3>
<p>Initial deployments used GitHub Actions, but due to permission complexity and audit concerns, I switched to manual <code>root@vps</code> deployments. Not elegant, but stable and predictable.</p>
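<p>A manual deployment in this setup boils down to a couple of commands (host, paths, and stack names below are illustrative):</p>

```shell
# Sync the compose repo from the workstation to the server
rsync -av --exclude '.git' ./infra/ root@vps:/opt/infra/

# Pull new images and restart the affected stack
ssh root@vps 'cd /opt/infra/apps/poc-app && docker compose pull && docker compose up -d'
```

<p>Crude, but every step is visible and auditable, which was the point.</p>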
<h3 id="heading-main-components">Main Components</h3>
<h4 id="heading-traefik-reverse-proxy">Traefik Reverse Proxy</h4>
<ul>
<li>Automatic HTTPS via Let’s Encrypt</li>
<li>Dynamic subdomain routing</li>
<li>Docker integration</li>
<li>Dashboard: <code>http://localhost:9000</code></li>
</ul>
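<p>A minimal <code>docker-compose.yml</code> for this kind of Traefik setup might look like the following sketch (domain, e-mail, and resolver names are assumptions, not my exact config):</p>

```yaml
services:
  traefik:
    image: traefik:v2.11
    command:
      - --providers.docker=true
      - --providers.docker.exposedbydefault=false
      - --entrypoints.web.address=:80
      - --entrypoints.websecure.address=:443
      - --certificatesresolvers.le.acme.email=admin@example.com
      - --certificatesresolvers.le.acme.storage=/letsencrypt/acme.json
      - --certificatesresolvers.le.acme.httpchallenge.entrypoint=web
      - --api.dashboard=true
    ports:
      - "80:80"
      - "443:443"
      - "127.0.0.1:9000:8080"   # dashboard bound to localhost only
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - ./letsencrypt:/letsencrypt
```

<p>Individual services then opt in via labels such as <code>traefik.enable=true</code> and a <code>Host(...)</code> router rule, which is what makes the subdomain routing dynamic.</p>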
<h4 id="heading-cloudflare-tunnel">Cloudflare Tunnel</h4>
<ul>
<li>One tunnel for public-facing services</li>
<li>One tunnel for internal/admin interfaces</li>
<li>All sensitive admin UIs are accessible only through tunnel routing</li>
</ul>
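<p>With <code>cloudflared</code>, this split lives in the ingress rules of each tunnel's config file. A sketch for the internal/admin tunnel (tunnel ID, paths, and hostnames are placeholders):</p>

```yaml
# config.yml for the internal/admin tunnel
tunnel: <internal-tunnel-id>
credentials-file: /etc/cloudflared/internal.json
ingress:
  - hostname: grafana.example.com
    service: http://localhost:9120
  - hostname: traefik.example.com
    service: http://localhost:9000
  - service: http_status:404   # catch-all: reject anything unmatched
```

<p>Pairing these hostnames with Cloudflare Access policies means the admin UIs never need a public port at all.</p>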
<h4 id="heading-shared-services">Shared Services</h4>
<ul>
<li><strong>PostgreSQL</strong> and <strong>Temporal</strong></li>
<li>Shared across all PoC apps, with future isolation options for scaling</li>
</ul>
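<p>A sketch of the shared-services stack, assuming the commonly used <code>temporalio/auto-setup</code> image (versions and credentials are placeholders):</p>

```yaml
services:
  postgres:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: changeme
    volumes:
      - pgdata:/var/lib/postgresql/data

  temporal:
    image: temporalio/auto-setup:latest
    environment:
      DB: postgres12
      POSTGRES_SEEDS: postgres
      POSTGRES_USER: postgres
      POSTGRES_PWD: changeme
    depends_on:
      - postgres

volumes:
  pgdata:
```

<p>One Postgres instance backing both Temporal and the PoC apps is fine at this scale; splitting them later is mostly a matter of pointing apps at a different host.</p>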
<h4 id="heading-monitoring-stack-separate-blog-post-coming-soon">Monitoring Stack <em>(Separate blog post coming soon)</em></h4>
<ul>
<li><strong>Prometheus</strong>, <strong>Grafana</strong>, <strong>Loki</strong>, <strong>Promtail</strong></li>
<li>Host metrics via node-exporter, container metrics via cAdvisor</li>
<li>Logs collected via Docker socket (not log drivers)</li>
<li>Preconfigured Grafana dashboards at <code>http://localhost:9120</code></li>
<li>Ready for AI-based log analysis and bottleneck debugging</li>
</ul>
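<p>Collecting logs via the Docker socket maps to Promtail's <code>docker_sd_configs</code> scrape config, roughly like this (the Loki URL and label names are illustrative):</p>

```yaml
clients:
  - url: http://loki:3100/loki/api/v1/push

scrape_configs:
  - job_name: docker
    docker_sd_configs:
      - host: unix:///var/run/docker.sock
        refresh_interval: 15s
    relabel_configs:
      # Turn "/container-name" into a clean "container" label
      - source_labels: ['__meta_docker_container_name']
        regex: '/(.*)'
        target_label: 'container'
```

<p>The advantage over log drivers: no per-container configuration, and containers keep their default logging behaviour.</p>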
<h2 id="heading-3-debug-driven-devops">3. Debug-Driven DevOps</h2>
<p>I tested and debugged every service manually in the terminal — not just to make it work, but to understand <em>why</em> it works (or fails). This included:</p>
<ul>
<li>Verifying Traefik certificate renewals</li>
<li>Troubleshooting Cloudflare tunnel and DNS behavior</li>
<li>Dealing with Loki label configuration and Promtail edge cases</li>
</ul>
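<p>A few of the commands that did most of the heavy lifting during debugging (container and tunnel names are illustrative):</p>

```shell
# Watch Traefik's ACME activity during certificate issuance/renewal
docker logs -f traefik 2>&1 | grep -i acme

# Check tunnel connectivity and registered routes
cloudflared tunnel list
cloudflared tunnel info <tunnel-name>

# Verify what a hostname actually resolves to behind the Cloudflare proxy
dig +short blog.example.com
```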
<h2 id="heading-4-snapshot-wipe-rebuild">4. Snapshot → Wipe → Rebuild</h2>
<p>After validating the entire stack, I created a snapshot and wiped the server. Clean state, no cruft.</p>
<h3 id="heading-whats-next">What’s next?</h3>
<p>Rebuilding the same infrastructure with <strong>Pulumi</strong>, with Kubernetes likely following later. For now, the goal is controlled complexity — and a better understanding of where automation makes sense.</p>
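<p>The Pulumi rebuild will likely start from something like this TypeScript sketch, using the community <code>@pulumi/hcloud</code> provider (resource name, image, and location are assumptions):</p>

```typescript
import * as hcloud from "@pulumi/hcloud";

// A single CX22 server mirroring the manual setup
const server = new hcloud.Server("infra-skeleton", {
    serverType: "cx22",
    image: "ubuntu-24.04",
    location: "nbg1",
});

// Exported so DNS records can be pointed at it
export const ipv4 = server.ipv4Address;
```

<p>From there, cloud-init or a provisioner can bootstrap Docker and pull the compose repo, turning the snapshot-and-wipe cycle into a repeatable <code>pulumi up</code>.</p>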
<hr />
<h2 id="heading-tldr">TL;DR</h2>
<p>Built an initial infrastructure setup with:</p>
<ul>
<li>Traefik for HTTPS and routing</li>
<li>Docker-based monitoring stack (Prometheus, Grafana, Loki)</li>
<li>Temporal for orchestrated background workflows</li>
<li>PostgreSQL for shared persistence</li>
<li>Cloudflare Tunnel for secure admin access</li>
<li>Manual root deployments (IaC in progress)</li>
</ul>
<p>Next step: Pulumi-based automation, and eventually Kubernetes — but only when the payoff outweighs the extra complexity.</p>
]]></content:encoded></item><item><title><![CDATA[What is kreativarc.com?]]></title><description><![CDATA[kreativarc.com is not a trendy new AI startup.  It’s a personal platform — part lab, part playground, part infrastructure testbed — where I build and test ideas without constraints. The name?  “Kreatív arc” is Hungarian slang for “creative guy” — as ...]]></description><link>https://blog.kreativarc.com/what-is-kreativarccom</link><guid isPermaLink="true">https://blog.kreativarc.com/what-is-kreativarccom</guid><category><![CDATA[SelfHosting]]></category><category><![CDATA[full stack]]></category><category><![CDATA[TypeScript]]></category><category><![CDATA[AI infrastructure]]></category><category><![CDATA[prototyping]]></category><dc:creator><![CDATA[Arnold Lovas]]></dc:creator><pubDate>Fri, 13 Jun 2025 13:17:41 GMT</pubDate><content:encoded><![CDATA[<p><strong>kreativarc.com</strong> is not a trendy new AI startup.  It’s a personal platform — part lab, part playground, part infrastructure testbed — where I build and test ideas without constraints. <strong>The name?</strong>  <strong>“Kreatív arc”</strong> is Hungarian slang for <em>“creative guy”</em> — as in, someone who’s always building something.  Something useful, something clever — occasionally something entirely pointless, but at least educational.</p>
<p>That’s more or less what I do:</p>
<ul>
<li>explore new ideas  </li>
<li>build functional prototypes  </li>
<li>and refine everything from infrastructure to frontend — while documenting what I learn</li>
</ul>
<hr />
<h2 id="heading-why-self-host">Why self-host?</h2>
<p>Because two things matter to me: <strong>learning</strong> and <strong>control</strong>.<br />This isn’t a throw-it-on-Vercel-and-forget-it project.<br />It’s a <strong>low-cost, fully owned stack</strong> — designed from the ground up to support rapid experimentation without vendor lock-in.</p>
<h3 id="heading-design-goals">Design goals:</h3>
<ul>
<li><strong>Cost-efficient.</strong> <code>Hetzner VPS + Docker</code> costs a fraction of an equivalent AWS setup for small projects</li>
<li><strong>Fast to deploy.</strong> I want to push PoCs weekly — not spend weeks wrestling CI/CD</li>
<li><strong>Scalable.</strong> The stack should support growth if an idea gains traction</li>
<li><strong>Secure.</strong> Admin interfaces are never exposed publicly. Cloudflare Tunnel + Access takes care of that</li>
</ul>
<hr />
<h2 id="heading-whats-under-the-hood">What’s under the hood?</h2>
<p>If it can be done in <strong>TypeScript</strong> and augmented with <strong>AI</strong>, I’m likely doing it that way.</p>
<h3 id="heading-frontend">Frontend</h3>
<ul>
<li><strong>Next.js + MUI</strong> – solid UI performance and component consistency  </li>
<li><strong>tRPC + React Query</strong> – end-to-end type safety, no boilerplate REST</li>
</ul>
<h3 id="heading-backend">Backend</h3>
<ul>
<li><strong>Node.js + tRPC + Zod</strong> – type-safe APIs and clear validation  </li>
<li><strong>PostgreSQL + Prisma + pgvector</strong> – relational + vector search combined  </li>
<li><strong>LLM integration:</strong>  <ul>
<li>OpenAI  </li>
<li>LangChain  </li>
<li>LangGraph  </li>
</ul>
</li>
</ul>
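<p>The tRPC + Zod combination looks roughly like this sketch (router and procedure names are made up for illustration):</p>

```typescript
import { initTRPC } from "@trpc/server";
import { z } from "zod";

const t = initTRPC.create();

export const appRouter = t.router({
  // Input is validated by Zod at runtime and typed end-to-end at compile time
  ingestDocument: t.procedure
    .input(z.object({
      url: z.string().url(),
      tags: z.array(z.string()).default([]),
    }))
    .mutation(async ({ input }) => {
      // ...enqueue preprocessing here (placeholder)
      return { url: input.url, status: "queued" };
    }),
});

export type AppRouter = typeof appRouter;
```

<p>Exporting only the <em>type</em> of the router is what gives the frontend full type safety without sharing any server code.</p>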
<p><strong>Use cases:</strong></p>
<ul>
<li>Document ingestion and preprocessing  </li>
<li>Agent-driven workflows  </li>
<li>Structured outputs from unstructured content  </li>
</ul>
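<p>The pgvector side of this is plain SQL (table name and embedding dimension are illustrative):</p>

```sql
CREATE EXTENSION IF NOT EXISTS vector;

CREATE TABLE documents (
  id BIGSERIAL PRIMARY KEY,
  content TEXT NOT NULL,
  embedding vector(1536)   -- e.g. OpenAI embedding dimensions
);

-- Nearest-neighbour search by cosine distance
SELECT id, content
FROM documents
ORDER BY embedding <=> $1
LIMIT 5;
```

<p>Keeping relational data and vectors in the same Postgres instance avoids a separate vector database entirely at this scale.</p>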
<hr />
<h2 id="heading-testing">Testing</h2>
<ul>
<li><strong>Jest</strong> for unit and integration tests  </li>
<li><strong>AI-based E2E agent</strong> for validating non-deterministic LLM responses</li>
</ul>
<hr />
<h2 id="heading-workflow-orchestration">Workflow orchestration</h2>
<ul>
<li><strong>Temporal</strong> handles:<ul>
<li>retries  </li>
<li>scheduling  </li>
<li>long-running pipelines  </li>
<li>observability and isolation</li>
</ul>
</li>
</ul>
<p><strong>Example workflows:</strong></p>
<ul>
<li>Scraping new URLs  </li>
<li>LLM-powered data extraction and embedding  </li>
<li>Updating vector DBs  </li>
<li>Generating structured content from semi-structured sources</li>
</ul>
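<p>A scrape-and-embed pipeline of this shape maps naturally onto a Temporal workflow. A sketch using the TypeScript SDK (activity names and timeouts are assumptions):</p>

```typescript
import { proxyActivities } from "@temporalio/workflow";
import type * as activities from "./activities";

// Each step is an activity: retried independently, visible in the Temporal UI
const { scrapeUrl, extractAndEmbed, upsertVectors } = proxyActivities<typeof activities>({
  startToCloseTimeout: "5 minutes",
  retry: { maximumAttempts: 3 },
});

export async function ingestUrlWorkflow(url: string): Promise<void> {
  const html = await scrapeUrl(url);
  const chunks = await extractAndEmbed(html);
  await upsertVectors(chunks);
}
```

<p>The workflow itself stays deterministic; all the flaky parts (network, LLM calls, DB writes) live in activities where Temporal's retry policies apply.</p>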
<hr />
<h2 id="heading-what-to-expect-from-this-blog">What to expect from this blog</h2>
<p>No clickbait. No “10x your productivity with VSCode extensions” fluff.<br />Just:</p>
<ul>
<li>real PoC applications with AI integration  </li>
<li>real-world infrastructure setups  </li>
<li>honest lessons from hands-on builds</li>
</ul>
]]></content:encoded></item></channel></rss>