600 MB of control plane to run three microservices. etcd consensus on a five-node cluster. A YAML file longer than the service it deployed. We tore it out and replaced it with 5 MB of C23. Here is exactly what we did and what we measured.
I want to be precise about something before we start. This post is not a Kubernetes hate piece. Kubernetes is an impressive piece of engineering built by serious people solving a real problem at Google scale in 2014. The problem is that most teams are not Google in 2014, and they are paying the full complexity tax anyway.
We were running three services: a Node.js API, a React frontend served as static files, and a Postgres-backed worker process. Five nodes in a bare-metal cluster at a datacenter. Here is what Kubernetes cost us to run those three services.
Before we touched anything, we ran ps aux on the control plane node and listed everything that existed purely to serve Kubernetes itself, not our application:
| PROCESS | RESIDENT MEMORY | PURPOSE |
|---|---|---|
| etcd | ~180 MB | Distributed consensus store. Stores pod specs, ConfigMaps, Secrets. |
| kube-apiserver | ~200 MB | REST frontend to etcd. Every cluster operation goes through this. |
| kube-scheduler | ~50 MB | Watches apiserver for unscheduled pods. Assigns them to nodes. |
| kube-controller-manager | ~60 MB | Runs reconciliation loops for deployments, replica sets, endpoints. |
| kubelet (×5 nodes) | ~40 MB each | Node agent. Talks to apiserver, manages container runtime. |
| kube-proxy (×5 nodes) | ~20 MB each | iptables rules for service routing. Reprograms netfilter on every change. |
| containerd (×5 nodes) | ~30 MB each | Container runtime daemon. Pulls images, manages overlay filesystems. |
| CoreDNS | ~30 MB | In-cluster DNS. Required for service name resolution. |
| nginx-ingress-controller | ~90 MB | Routes external HTTP to services. Watches apiserver for Ingress objects. |
| Total overhead | ~870 MB | None of this runs our application. |
870 megabytes of resident memory to run a 12 MB Node.js API, a 3 MB static site, and an 8 MB worker. The control plane outweighs the application by 37:1.
To deploy that Node.js API with three replicas, health checks, and an HTTP route,
we needed: a Deployment, a Service, an Ingress,
a HorizontalPodAutoscaler, a PodDisruptionBudget, and a
ConfigMap for the nginx-ingress annotations. Six resource types. 214 lines
of YAML. Here is a sample of what the ingress alone looked like:
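The original snippet is not reproduced in this excerpt, but a representative Ingress of that shape looks roughly like the following. Hostname, port, and annotation values here are illustrative, not our actual config:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api
  annotations:
    nginx.ingress.kubernetes.io/proxy-body-size: "8m"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "30"
spec:
  ingressClassName: nginx
  rules:
    - host: api.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: api
                port:
                  number: 3000
```

And this is only one of the six resource types; the Deployment and HorizontalPodAutoscaler are each longer.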
The equivalent in Skr8tr — start the ingress binary with a flag:
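The exact flag is not shown in this excerpt, so treat this as a sketch of the shape rather than the real CLI; the flag name and route grammar are assumptions:

```shell
skr8tr_ingress --route "/=api:3000"
```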
Kubernetes authentication is credential-file based. Your kubeconfig
contains a token field that is base64-encoded. Base64 is not encryption.
It is not hashing. It is a reversible encoding scheme that anyone who has the file can
trivially decode with base64 -d. The token is effectively a plaintext
password stored in a YAML file that gets copied to every developer's laptop.
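To make the point concrete (the token value here is obviously made up):

```shell
# A "protected" kubeconfig token is one pipe away from plaintext.
echo "c2VjcmV0LXRva2Vu" | base64 -d
# secret-token
```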
There is more. ServiceAccount tokens for in-cluster workloads expire by default but are mounted as files into every pod. Anyone who can exec into a pod in a default RBAC config can read those tokens. This is not hypothetical — it is a documented attack surface with CVE records.
Skr8tr's auth model is different in kind, not degree. Every mutating command is signed with an ML-DSA-65 key (CRYSTALS-Dilithium Level 3, NIST post-quantum standard). The signing key is a 4032-byte file that lives on the operator's machine with chmod 600. It never goes to the server. The server only sees the public key (1952 bytes). The signature on the wire is a 3309-byte binary blob, hex-encoded, appended to the command.
A zero-downtime rolling update in Kubernetes requires you to understand and configure
at minimum: strategy.rollingUpdate.maxSurge,
strategy.rollingUpdate.maxUnavailable, readinessProbe
(correctly — a wrong probe causes the rollout to stall forever),
and PodDisruptionBudget (if you want to survive a node drain during rollout).
Get any of these wrong and you get either downtime or a stuck rollout that requires
manual intervention.
In Skr8tr:
The rollout thread in the Conductor launches a new-generation replica, waits 8 seconds for it to settle, then sends SIGTERM to the old-generation replica followed by SIGKILL after a 2-second grace window. One at a time. No probe YAML. No PodDisruptionBudget. At any point during the rollout, N−1 replicas are live.
Skr8tr is three C23 daemons and a Rust CLI. Here is the full component inventory:
| BINARY | SIZE | PURPOSE |
|---|---|---|
| skr8tr_reg | ~40 KB | Service registry. UDP. Register, lookup, round-robin across replicas. |
| skr8tr_sched | ~80 KB | Conductor. Schedules workloads, tracks placements, handles auth, rolling updates. |
| skr8tr_node | ~60 KB | Fleet node. Runs workloads via fork+exec. Health checks. Log ring buffer. |
| skr8tr_ingress | ~45 KB | HTTP reverse proxy. Longest-prefix routing. Dynamic backend via Tower. |
| skr8tr (CLI) | ~3 MB | Operator interface. Rust. PQC signing built in. |
| Total | ~3.3 MB | Everything. Including auth. Including ingress. |
We did not want YAML. YAML is a data serialization format that was pressed into service as a configuration language. It has significant whitespace, implicit type coercion (in YAML 1.1, no parses as boolean false — the "Norway problem," where the country code NO silently becomes a boolean), and no native schema. We built our own format.
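Our actual manifest is not reproduced in this excerpt; a sketch of what an 18-line service definition in a format like this might look like follows, and every field name below is an illustrative assumption:

```
service api
  binary   /srv/api/server
  replicas 3
  port     3000

  health
    path     /healthz
    interval 5s
    timeout  2s

  scale
    min 3
    max 8
    cpu 70

  route
    host api.example.com
    path /
```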
Our complete deployment manifest for the API server, with health checks and auto-scaling, is 18 lines. No anchors. No indentation ambiguity. No implicit type coercion. The parser is 200 lines of C23.
We ran both stacks side by side on identical hardware for two weeks. Here is what we measured:
| METRIC | KUBERNETES | SKR8TR |
|---|---|---|
| Control plane resident memory | ~870 MB | ~12 MB |
| Time from git push to new replica serving traffic | ~45s (image pull + pod scheduling + readiness) | ~1.2s (fork + exec, no image) |
| Rolling update: 3 replicas | ~90s | ~26s (3 × 8s settle) |
| New node joins cluster | ~3 min (kubelet registration, cert approval) | <6s (first heartbeat) |
| Config lines to deploy one service with ingress | 214 lines (6 resource types) | 18 lines (1 manifest) |
| Auth model | base64 token (plaintext equivalent) | ML-DSA-65 post-quantum signature |
| Binary size of control plane | ~620 MB (all binaries) | ~3.3 MB |
Deploying a new version is fork() and execve() with the binary path from the manifest. The binary was already on disk. That is the entire deployment step.
Honest accounting. These are genuine gaps relative to a mature k8s installation:

- No image registry or image distribution: binaries reach the nodes through an rsync step in CI. Not elegant, but it is explicit and fast.

If you need multi-tenant container isolation, network policies, or a distributed block storage system, Kubernetes is a reasonable answer. If you are running your own services on nodes you control, it is likely overkill.
Skr8tr is Apache 2.0. The full source is on
GitHub. The control plane is ~2000
lines of C23 across four files. The CLI is ~500 lines of Rust. The parser for
.skr8tr manifests is 200 lines. It is small enough to read in an afternoon.
If you are running a Kubernetes cluster for three services, I would invite you to spend that afternoon reading Skr8tr's source and considering whether the complexity you are carrying is load-bearing.
Questions, corrections, or war stories from your own k8s migration: open an issue or email directly.