
9 Microservices. 3 Nodes. Zero Kubernetes.

We built SovereignMarket — a full enterprise Angular marketplace backed by 9 Rust microservices — and deployed it across 3 QEMU nodes using skr8tr. No Docker. No Helm. No YAML. No etcd. Here is exactly what we built, how we wired it together, and what the numbers looked like.

Scott Baker
Arch Linux · 20-core workstation · RTX 3060 · skr8tr author

The premise

Every Kubernetes tutorial uses the same toy example: a single "hello world" service with a LoadBalancer. That tells you nothing about how k8s behaves at real application complexity — multiple services, inter-service routing, stateful sessions, health checks, rolling updates.

So we built something real. SovereignMarket is an enterprise-grade product marketplace with a full Angular frontend, 500 mock products across 7 categories, shopping cart with sessions, order placement, user accounts, reviews, and ML-style recommendations. Nine services total. The kind of thing you'd actually run in production.

Then we orchestrated it with skr8tr instead of Kubernetes. Here is what happened.

9 Rust microservices · 500 mock products · 3 QEMU nodes · 5 MB control plane · 1.2s deploy time · 0 YAML files

The service map

Each service is a standalone Rust binary using axum for HTTP. No shared database. No shared memory. Purely message-passing over HTTP. The Angular frontend hits each service directly through the skr8tr_ingress reverse proxy.

product-svc          :8001   500 products, categories, pagination, filters, featured list
inventory-svc        :8002   Stock levels, warehouse routing, bulk queries
search-svc           :8003   Full-text search across name, category, brand
cart-svc             :8004   Session-based cart, add/remove, subtotals, line items
order-svc            :8005   Order placement, shipping calc, tax, user history
user-svc             :8006   Register, login, profile; UUID-based identity
review-svc           :8007   Ratings, review text, verified badge, distribution
recommendation-svc   :8008   Related products via affinity map, trending list
frontend-svc         :4200   Serves Angular static build via tower-http ServeDir
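Every service exposes the same `GET /health` endpoint that skr8tr's health checks hit. The real services use axum; the dependency-free sketch below (std only, illustrative port and response body) just shows the contract each binary in the mesh honors:

```rust
use std::io::{Read, Write};
use std::net::TcpListener;
use std::thread;

/// Serve a minimal `GET /health` endpoint on the given port.
/// The production services use axum; this sketch shows only the
/// contract: any request gets a 200 with a small JSON body.
fn serve_health(port: u16) {
    let listener = TcpListener::bind(("127.0.0.1", port)).expect("bind failed");
    thread::spawn(move || {
        for stream in listener.incoming() {
            let mut stream = match stream { Ok(s) => s, Err(_) => continue };
            let mut buf = [0u8; 512];
            let _ = stream.read(&mut buf); // request line read, contents ignored here
            let body = r#"{"status":"ok"}"#;
            let resp = format!(
                "HTTP/1.1 200 OK\r\nContent-Type: application/json\r\nContent-Length: {}\r\n\r\n{}",
                body.len(),
                body
            );
            let _ = stream.write_all(resp.as_bytes());
            // stream drops here, closing the connection
        }
    });
}
```

In the actual services this is one axum route alongside the business endpoints; the point is that the health contract is plain HTTP, so anything from `curl` to skr8tr's checker can probe it.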

The node topology

Four QEMU VMs on a skr8tr-br0 bridge at 10.10.0.0/24. KVM acceleration on the host (20-core, 62GB RAM). One conductor node, three worker nodes.

Host — Arch Linux (20-core, 62GB, RTX 3060)
│
├── skr8tr-br0   10.10.0.1/24    (QEMU bridge, NATed via host NIC)
│
├── conductor    10.10.0.10      2 vCPU, 1GB
│     skr8tr_sched   (UDP :7771) — Conductor, PQC auth gate
│     skr8tr_reg     (UDP :7772) — Tower, service registry
│     skr8tr_ingress (TCP :80)   — HTTP reverse proxy
│
├── node-1       10.10.0.11      4 vCPU, 2GB
│     skr8tr_node — Fleet node
│     product-svc   (:8001) ×2
│     inventory-svc (:8002) ×2
│     search-svc    (:8003) ×2
│
├── node-2       10.10.0.12      4 vCPU, 2GB
│     skr8tr_node — Fleet node
│     cart-svc  (:8004) ×2
│     order-svc (:8005) ×2
│     user-svc  (:8006) ×2
│
└── node-3       10.10.0.13      4 vCPU, 2GB
      skr8tr_node — Fleet node
      review-svc         (:8007) ×2
      recommendation-svc (:8008) ×2
      frontend-svc       (:4200) ×2

Each service runs with replicas 2: skr8tr launches two instances on its node and round-robins requests between them via the Tower registry. If one dies, the other keeps serving while skr8tr relaunches the failed replica. No readiness probes. No liveness probes. No PodDisruptionBudgets.
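The round-robin step is simple enough to sketch. A minimal version of what a registry does per service, assuming it holds a list of live replica addresses (`Replicas` and `pick` are illustrative names, not skr8tr types):

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

/// Round-robin selector over a service's replica addresses.
/// Illustrative sketch of per-service routing state in a registry.
struct Replicas {
    addrs: Vec<String>,
    next: AtomicUsize,
}

impl Replicas {
    fn new(addrs: Vec<String>) -> Self {
        Self { addrs, next: AtomicUsize::new(0) }
    }

    /// Return the next address, cycling through the list.
    /// A dead replica would simply be removed from `addrs`.
    fn pick(&self) -> &str {
        let i = self.next.fetch_add(1, Ordering::Relaxed) % self.addrs.len();
        &self.addrs[i]
    }
}
```

The atomic counter makes `pick` safe to call from concurrent request handlers without a lock.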

The manifests

This is the entire deployment definition for the product catalog service. Not a Helm chart. Not a values.yaml. Not a ConfigMap plus a Deployment plus a Service plus an HPA plus an Ingress. Just this:

app product-svc
  exec     /opt/sovereign-market/bin/product-svc
  port     8001
  replicas 2

  env {
    PORT 8001
    RUST_LOG info
  }

  health {
    check    GET /health 200
    interval 10s
    timeout  3s
    retries  3
  }

  scale {
    min       1
    max       4
    cpu-above 70
    cpu-below 20
  }

Nine of these files. One per service. That's the entire infrastructure definition for a production-grade marketplace application. Compare that to the Kubernetes equivalent: 9 Deployments, 9 Services, 9 HPAs, an Ingress, a ConfigMap or two, probably a ServiceAccount, and a Helm chart wrapping all of it.
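The scale block's semantics can be read straight off the manifest: add a replica above cpu-above, drop one below cpu-below, clamp to [min, max]. A sketch of that decision function, assuming those semantics (the actual skr8tr scaler logic is not shown in this post):

```rust
/// Desired replica count from a manifest `scale` block:
/// scale up past `cpu_above`, down below `cpu_below`,
/// always clamped to the [min, max] range.
fn desired_replicas(
    current: u32,
    cpu_pct: f64,
    min: u32,
    max: u32,
    cpu_above: f64,
    cpu_below: f64,
) -> u32 {
    let target = if cpu_pct > cpu_above {
        current + 1 // hot: add one replica
    } else if cpu_pct < cpu_below {
        current.saturating_sub(1) // idle: remove one replica
    } else {
        current // in band: hold steady
    };
    target.clamp(min, max)
}
```

With product-svc's values (min 1, max 4, cpu-above 70, cpu-below 20), a replica pair at 85% CPU grows to 3; a pair at 10% shrinks to 1; at 50% nothing moves.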

Deploying the whole stack

After provisioning the QEMU nodes (one-time: copy binaries via scp), the entire stack deploys with a loop:

# Start Tower + Conductor on the conductor node
nohup skr8tr_reg   > /tmp/tower.log 2>&1 &
nohup skr8tr_sched --pubkey skrtrview.pub > /tmp/sched.log 2>&1 &

# Start fleet nodes (run on each worker)
nohup skr8tr_node > /tmp/node.log 2>&1 &

# Deploy all 9 services from the operator machine
for manifest in manifests/*.skr8tr; do
  skr8tr --key ~/.skr8tr/signing.sec up "$manifest"
done

Every deploy command is ML-DSA-65 signed. The Conductor verifies the post-quantum signature before accepting any SUBMIT, EVICT, or ROLLOUT command. If the key doesn't match, the command is rejected with ERR|UNAUTHORIZED. No passwords. No bearer tokens. No mTLS certificates to rotate.
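The gate itself is a verify-before-dispatch check. A sketch of the shape, with `Verifier` standing in for the ML-DSA-65 implementation (the trait, function names, and reply strings beyond ERR|UNAUTHORIZED are illustrative, not skr8tr's actual code):

```rust
/// Stand-in for the ML-DSA-65 signature check; the real
/// implementation verifies against the operator's public key.
trait Verifier {
    fn verify(&self, msg: &[u8], sig: &[u8]) -> bool;
}

/// Conductor-side auth gate: nothing is dispatched until the
/// command's detached signature verifies.
fn handle_command<V: Verifier>(v: &V, cmd: &str, sig: &[u8]) -> Result<String, String> {
    if !v.verify(cmd.as_bytes(), sig) {
        // Mirrors the wire-level rejection described above.
        return Err("ERR|UNAUTHORIZED".to_string());
    }
    // SUBMIT / EVICT / ROLLOUT dispatch would happen here.
    Ok(format!("OK|{}", cmd))
}
```

The important property is that the check happens before any command parsing or side effects: an unsigned or mis-signed datagram never reaches the scheduler.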

The Angular frontend

The frontend is a proper Angular 19 SPA with lazy-loaded routes, Angular signals for reactive state, and standalone components throughout, spread across seven pages.

Production build: 408KB total (all JS + CSS). The Angular chunk for the entire catalog page — filters, pagination, product grid — is 9.7KB.

The numbers

Control plane memory
  Kubernetes: ~870 MB (etcd + apiserver + scheduler + controller-manager + kubelet)
  skr8tr:     ~4.2 MB (skr8tr_sched + skr8tr_reg)

Time from `up` to serving traffic
  Kubernetes: ~35–60s (image pull + pod scheduling + readiness)
  skr8tr:     ~1.2s (fork + exec + first heartbeat)

Rolling update
  Kubernetes: ReadinessProbe + PDB + strategy.rollingUpdate YAML
  skr8tr:     skr8tr rollout product-svc.skr8tr

New node joins cluster
  Kubernetes: ~3 min (kubelet cert approval + taint removal)
  skr8tr:     <6s (first UDP heartbeat received by Tower)

Infrastructure definition
  Kubernetes: 9 Deployments + 9 Services + 9 HPAs + Ingress + ConfigMaps + Helm chart
  skr8tr:     9 × .skr8tr manifests, avg 18 lines each

Auth model
  Kubernetes: base64 bearer tokens (plaintext equivalent) or mTLS
  skr8tr:     ML-DSA-65 post-quantum signatures (NIST FIPS 204)

Total binaries to run the cluster
  Kubernetes: 7+ (etcd, kube-apiserver, kube-scheduler, kube-controller-manager, kubelet, kube-proxy, CoreDNS)
  skr8tr:     3 (skr8tr_sched, skr8tr_reg, skr8tr_node)
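The node-join number deserves a note: joining is literally one UDP datagram arriving at the Tower. A sketch of that path, with an illustrative wire format (`HEARTBEAT|<node-id>` is an assumption, not skr8tr's actual protocol):

```rust
use std::net::UdpSocket;

/// A fleet node announces itself to the Tower registry with a
/// single UDP heartbeat. Wire format here is illustrative only.
fn send_heartbeat(tower_addr: &str, node_id: &str) -> std::io::Result<usize> {
    let sock = UdpSocket::bind("127.0.0.1:0")?; // ephemeral source port
    let msg = format!("HEARTBEAT|{}", node_id);
    sock.send_to(msg.as_bytes(), tower_addr)
}
```

There is no certificate signing request, no approval step, no taint to remove: the first heartbeat the Tower receives is the join.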

These numbers are from running on an Arch Linux workstation (20-core, 62GB RAM) with QEMU/KVM VMs on a local bridge. Single-machine cluster — not a multi-datacenter deployment. The k8s numbers are from running minikube / k3s on the same hardware.

What honest limitations look like

skr8tr is not Kubernetes. There are things it does not do yet.

For SovereignMarket — a stateless REST API + static Angular frontend — none of these are blockers. All 9 services are stateless (in-memory data, which would be a DB in a real deployment). This is exactly the workload skr8tr is built for.

Build the Rust services yourself

All 9 services are in the public repo under demos/sovereign-market/services/. The entire workspace builds in under 10 seconds on modern hardware:

git clone https://github.com/NixOSDude/skr8tr
cd skr8tr/demos/sovereign-market/services
cargo build --release

# All 9 binaries land in target/release/
# product-svc inventory-svc search-svc cart-svc
# order-svc user-svc review-svc recommendation-svc frontend-svc

Then run them locally — no cluster needed:

# Start all 9 services
PORT=8001 ./target/release/product-svc &
PORT=8002 ./target/release/inventory-svc &
PORT=8003 ./target/release/search-svc &
PORT=8004 ./target/release/cart-svc &
PORT=8005 ./target/release/order-svc &
PORT=8006 ./target/release/user-svc &
PORT=8007 ./target/release/review-svc &
PORT=8008 ./target/release/recommendation-svc &

# Start Angular frontend
cd ../frontend/sovereign-market
ng serve --port 4200

Open http://localhost:4200. Browse 500 products. Add to cart. Place an order. All wired to real Rust HTTP backends. No mocks. No JSON fixtures.

What comes next

The QEMU node cluster is next. We'll document the full Alpine Linux provisioning, skr8tr_ingress routing configuration, and a live rolling update of product-svc with zero downtime — measuring exactly how long the old replica keeps serving during the 8-second settle window.

If you want RBAC, audit log export, SSO, or multi-tenant conductor for your team's deployment — that's the enterprise tier. Get in touch.