9 Microservices. 3 Nodes. Zero Kubernetes.
We built SovereignMarket — a full enterprise Angular marketplace backed by 9 Rust microservices — and deployed it across 3 QEMU nodes using skr8tr. No Docker. No Helm. No YAML. No etcd. Here is exactly what we built, how we wired it together, and what the numbers looked like.
The premise
Every Kubernetes tutorial uses the same toy example: a single "hello world" service with a LoadBalancer. That tells you nothing about how k8s behaves at real application complexity — multiple services, inter-service routing, stateful sessions, health checks, rolling updates.
So we built something real. SovereignMarket is an enterprise-grade product marketplace with a full Angular frontend, 500 mock products across 7 categories, shopping cart with sessions, order placement, user accounts, reviews, and ML-style recommendations. Nine services total. The kind of thing you'd actually run in production.
Then we orchestrated it with skr8tr instead of Kubernetes. Here is what happened.
The service map
Each service is a standalone Rust binary using axum for HTTP. No shared database.
No shared memory. Purely message-passing over HTTP. The Angular frontend hits each service
directly through the skr8tr_ingress reverse proxy.
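To make the shape concrete, here is a minimal sketch of the kind of query a catalog service answers: category filter, price cap, pagination. This is illustrative plain Rust with assumed field names, not the actual demos/sovereign-market code (which serves this over axum):

```rust
// Illustrative sketch only -- field names and the query shape are
// assumptions, not the repo's actual product-svc code.
#[derive(Clone, Debug)]
struct Product {
    id: u32,
    name: String,
    category: String,
    price_cents: u32,
}

/// Filter by category and maximum price, then paginate.
fn query(
    products: &[Product],
    category: Option<&str>,
    max_price_cents: Option<u32>,
    page: usize,
    per_page: usize,
) -> Vec<Product> {
    products
        .iter()
        .filter(|p| category.map_or(true, |c| p.category == c))
        .filter(|p| max_price_cents.map_or(true, |max| p.price_cents <= max))
        .skip(page * per_page)
        .take(per_page)
        .cloned()
        .collect()
}
```

Everything else in the service is plumbing: deserialize the query string, run something like this over the in-memory catalog, serialize the page back out.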
The node topology
Four QEMU VMs on a skr8tr-br0 bridge at 10.10.0.0/24, with KVM acceleration on the host (20-core, 62GB RAM): one conductor node plus three worker nodes.
Each service runs with replicas 2 — skr8tr launches two instances per node,
round-robining between them via the Tower registry. If one dies, the other keeps serving
while skr8tr relaunches the failed replica. No readiness probes. No liveness probes.
No PodDisruptionBudgets.
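The round-robin-with-failover behavior described above is easy to picture in code. A minimal sketch, assuming a simple healthy flag per replica (this is not the actual Tower registry implementation):

```rust
// Illustration of round-robin selection that skips unhealthy
// replicas -- not skr8tr's Tower registry code.
struct Replica {
    addr: String,
    healthy: bool,
}

struct RoundRobin {
    replicas: Vec<Replica>,
    next: usize, // cursor into `replicas`
}

impl RoundRobin {
    /// Return the next healthy replica's address, advancing the
    /// cursor. Returns None only if every replica is down.
    fn pick(&mut self) -> Option<String> {
        let n = self.replicas.len();
        for _ in 0..n {
            let i = self.next;
            self.next = (self.next + 1) % n;
            if self.replicas[i].healthy {
                return Some(self.replicas[i].addr.clone());
            }
        }
        None
    }
}
```

When one replica dies, `pick` simply skips it on the next pass, which is why the surviving replica keeps serving while the failed one is relaunched.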
The manifests
This is the entire deployment definition for the product catalog service. Not a Helm chart. Not a values.yaml. Not a ConfigMap plus a Deployment plus a Service plus an HPA plus an Ingress. Just this:
```
app product-svc
  exec /opt/sovereign-market/bin/product-svc
  port 8001
  replicas 2
  env {
    PORT 8001
    RUST_LOG info
  }
  health {
    check GET /health 200
    interval 10s
    timeout 3s
    retries 3
  }
  scale {
    min 1
    max 4
    cpu-above 70
    cpu-below 20
  }
```
Nine of these files. One per service. That's the entire infrastructure definition for a production-grade marketplace application. Compare that to the Kubernetes equivalent: 9 Deployments, 9 Services, 9 HPAs, an Ingress, a ConfigMap or two, probably a ServiceAccount, and a Helm chart wrapping all of it.
Deploying the whole stack
After provisioning the QEMU nodes (one-time: copy binaries via scp),
the entire stack deploys with a loop:
```sh
# Start Tower + Conductor on the conductor node
nohup skr8tr_reg > /tmp/tower.log 2>&1 &
nohup skr8tr_sched --pubkey skrtrview.pub > /tmp/sched.log 2>&1 &

# Start fleet nodes (run on each worker)
nohup skr8tr_node > /tmp/node.log 2>&1 &

# Deploy all 9 services from the operator machine
for manifest in manifests/*.skr8tr; do
  skr8tr --key ~/.skr8tr/signing.sec up "$manifest"
done
```
Every command is signed with the operator's ML-DSA key. A command that arrives without a valid signature is rejected with ERR|UNAUTHORIZED. No passwords. No bearer tokens. No mTLS certificates to rotate.
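The gate itself is simple to sketch. Below, `verify_sig` is a stub standing in for real ML-DSA-65 (FIPS 204) verification, which requires a proper crypto library; the point is the short-circuit to ERR|UNAUTHORIZED when a signature does not check out:

```rust
/// Stub standing in for ML-DSA-65 (FIPS 204) signature verification.
/// A real verifier would run the actual lattice-based check here;
/// this placeholder exists only to make the gate logic testable.
fn verify_sig(_pubkey: &[u8], _msg: &[u8], sig: &[u8]) -> bool {
    sig == b"valid-for-demo" // hypothetical stand-in, NOT real crypto
}

/// Reject any command whose signature does not verify -- the same
/// ERR|UNAUTHORIZED path an unsigned request hits.
fn handle_command(
    pubkey: &[u8],
    cmd: &[u8],
    sig: &[u8],
) -> Result<String, &'static str> {
    if !verify_sig(pubkey, cmd, sig) {
        return Err("ERR|UNAUTHORIZED");
    }
    Ok(format!("OK|{}", String::from_utf8_lossy(cmd)))
}
```

Because the check runs before the command is even parsed, a stolen kubeconfig-style credential has no analogue here: there is nothing to steal except the signing key itself.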
The Angular frontend
The frontend is a proper Angular 19 SPA with lazy-loaded routes, Angular signals for reactive state, and standalone components throughout. Seven pages:
- Home — hero banner, featured products from product-svc, trending from recommendation-svc, category grid
- Catalog — 500 products with sidebar filters (category, price range, sort), pagination
- Product detail — full product info, stock badge from inventory-svc, star ratings from review-svc, related products from recommendation-svc
- Cart — session-based cart backed by cart-svc, add/remove, live subtotals
- Checkout — shipping form, mock payment, posts to order-svc
- Orders — order history from order-svc
- Search — live results from search-svc
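At its core, a session-based cart like cart-svc's is a map from session ID to line items. Here is a minimal sketch with assumed names, not the repo's actual code; in the real deployment the price lookup would be a call to product-svc rather than a closure:

```rust
use std::collections::HashMap;

// Illustrative session-cart store; names are assumptions,
// not the actual cart-svc code.
#[derive(Default)]
struct CartStore {
    // session ID -> (product ID -> quantity)
    carts: HashMap<String, HashMap<u32, u32>>,
}

impl CartStore {
    fn add(&mut self, session: &str, product_id: u32, qty: u32) {
        *self
            .carts
            .entry(session.to_string())
            .or_default()
            .entry(product_id)
            .or_insert(0) += qty;
    }

    fn remove(&mut self, session: &str, product_id: u32) {
        if let Some(cart) = self.carts.get_mut(session) {
            cart.remove(&product_id);
        }
    }

    /// Subtotal in cents, given a price lookup (a product-svc call
    /// in the real deployment).
    fn subtotal(&self, session: &str, price_cents: impl Fn(u32) -> u32) -> u32 {
        self.carts
            .get(session)
            .map(|c| c.iter().map(|(&id, &qty)| price_cents(id) * qty).sum::<u32>())
            .unwrap_or(0)
    }
}
```

The "live subtotals" on the cart page fall out of this directly: every add/remove mutates the map, and the frontend re-fetches the subtotal.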
Production build: 408KB total (all JS + CSS). The Angular chunk for the entire catalog page — filters, pagination, product grid — is 9.7KB.
The numbers
| Metric | Kubernetes | Skr8tr |
|---|---|---|
| Control plane memory | ~870 MB (etcd + apiserver + scheduler + CM + kubelet) | ~4.2 MB (skr8tr_sched + skr8tr_reg) |
| Time from up to serving traffic | ~35–60s (image pull + pod scheduling + readiness) | ~1.2s (fork + exec + first heartbeat) |
| Rolling update | ReadinessProbe + PDB + strategy.rollingUpdate YAML | skr8tr rollout product-svc.skr8tr |
| New node joins cluster | ~3 min (kubelet cert approval + taint removal) | <6s (first UDP heartbeat received by Tower) |
| Infrastructure definition | 9 Deployments + 9 Services + 9 HPAs + Ingress + ConfigMaps + Helm chart | 9 × .skr8tr manifests, avg 18 lines each |
| Auth model | base64 bearer tokens (plaintext equivalent) or mTLS | ML-DSA-65 post-quantum signatures (NIST FIPS 204) |
| Total binaries to run the cluster | 7+ (etcd, kube-apiserver, kube-scheduler, kube-controller-manager, kubelet, kube-proxy, CoreDNS) | 3 (skr8tr_sched, skr8tr_reg, skr8tr_node) |
These numbers are from running on an Arch Linux workstation (20-core, 62GB RAM) with QEMU/KVM VMs on a local bridge. Single-machine cluster — not a multi-datacenter deployment. The k8s numbers are from running minikube / k3s on the same hardware.
What honest limitations look like
skr8tr is not Kubernetes. There are things it does not do yet:
- No persistent volume claims. Services that need durable storage need to manage it themselves (NFS mount, NVMe path, etc).
- No namespace isolation. Workloads on the same node run side by side with no isolation boundary between them. RBAC namespaces are in the enterprise tier.
- No pod networking abstraction. Services talk to each other over host IPs resolved via the Tower registry. You bind to 0.0.0.0 and call other services by querying Tower.
- HTTP/1.1 ingress only. No HTTP/2, no gRPC proxying in the current ingress. Planned.
- The settle window is hardcoded. Rolling updates wait 8 seconds before killing the old replica. No HTTP readiness check during that window yet.
For SovereignMarket — a stateless REST API + static Angular frontend — none of these are blockers. All 9 services are stateless (in-memory data, which would be a DB in a real deployment). This is exactly the workload skr8tr is built for.
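The fixed settle window from the limitations list reduces to a simple sequence: start the new replica, let both versions serve for the window, then kill the old one. A sketch with the 8-second window made a parameter (this is not skr8tr's code):

```rust
use std::thread;
use std::time::Duration;

// Sketch of a fixed settle-window rollout, not skr8tr's
// implementation. Events are recorded so the ordering is visible;
// skr8tr hardcodes `settle` to 8 seconds.
fn rolling_update(settle: Duration) -> Vec<&'static str> {
    let mut events = Vec::new();
    events.push("start new replica");    // new version begins serving
    thread::sleep(settle);               // old replica keeps serving here
    events.push("settle window elapsed");
    events.push("kill old replica");     // only now is the old one stopped
    events
}
```

The trade-off is visible in the ordering: zero downtime comes from the overlap, but without an HTTP readiness check during the window, a new replica that starts broken still gets traffic for those 8 seconds.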
Build the Rust services yourself
All 9 services are in the public repo under demos/sovereign-market/services/.
The entire workspace builds in under 10 seconds on modern hardware:
```sh
git clone https://github.com/NixOSDude/skr8tr
cd skr8tr/demos/sovereign-market/services
cargo build --release

# All 9 binaries land in target/release/
#   product-svc inventory-svc search-svc cart-svc
#   order-svc user-svc review-svc recommendation-svc frontend-svc
```
Then run them locally — no cluster needed:
```sh
# Start all 9 services
PORT=8001 ./target/release/product-svc &
PORT=8002 ./target/release/inventory-svc &
PORT=8003 ./target/release/search-svc &
PORT=8004 ./target/release/cart-svc &
PORT=8005 ./target/release/order-svc &
PORT=8006 ./target/release/user-svc &
PORT=8007 ./target/release/review-svc &
PORT=8008 ./target/release/recommendation-svc &

# Start Angular frontend
cd ../frontend/sovereign-market
ng serve --port 4200
```
Open http://localhost:4200. Browse 500 products. Add to cart. Place an order.
All wired to real Rust HTTP backends. No mocks. No JSON fixtures.
What comes next
The QEMU node cluster is next. We'll document the full Alpine Linux provisioning, skr8tr_ingress routing configuration, and a live rolling update of product-svc with zero downtime — measuring exactly how long the old replica keeps serving during the 8-second settle window.
If you want RBAC, audit log export, SSO, or multi-tenant conductor for your team's deployment — that's the enterprise tier. Get in touch.