# litestream-k8s

Litestream on Kubernetes with S3/R2
Run SQLite as your primary database in Kubernetes with continuous replication to S3-compatible object storage via Litestream. The database restores automatically on pod startup — no PersistentVolumeClaim needed.
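The restore-on-startup flow is typically wired up with a small container entrypoint script. A minimal sketch, assuming the database lives at `/data/app.db` and the app binary is `/app/server` (both placeholders):

```sh
#!/bin/sh
set -e

# Restore the database from object storage if no local copy exists.
# -if-replica-exists makes this a no-op when the bucket is still empty
# (e.g. the very first deploy).
litestream restore -if-db-not-exists -if-replica-exists /data/app.db

# Run the app under Litestream so WAL changes stream continuously.
# `replicate -exec` supervises the child process and exits when it exits.
exec litestream replicate -exec "/app/server"
```

Because the pod's volume can be a plain `emptyDir`, a rescheduled pod starts from a clean filesystem and the `restore` step pulls the latest snapshot plus WAL segments back down.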
## Why this pattern
SQLite is fast, simple, and zero-dependency. The problem on Kubernetes is that pods are ephemeral — when a pod dies, the DB is gone. Litestream solves this by continuously streaming WAL changes to object storage (Cloudflare R2, AWS S3, etc.) and restoring on startup. This gives you:
- Single-binary app with no external database dependency
- Durability via object storage (cheaper and more resilient than a PVC on a single node)
- Point-in-time recovery for free — every WAL segment is preserved
- Works with any S3-compatible backend
Trade-off: single-writer only (one replica). If you need horizontal scaling or concurrent writes, use Postgres.
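A minimal `litestream.yml` for this setup might look like the following sketch. The bucket name, replica path, and endpoint are placeholders; the R2 endpoint is account-specific, and for AWS S3 you would omit `endpoint` and set `region` instead:

```yaml
# litestream.yml — example config; bucket/endpoint values are placeholders
dbs:
  - path: /data/app.db
    replicas:
      - type: s3
        bucket: my-bucket
        path: app.db
        endpoint: https://<account-id>.r2.cloudflarestorage.com
        # Credentials are read from the environment:
        # LITESTREAM_ACCESS_KEY_ID / LITESTREAM_SECRET_ACCESS_KEY
        # (in Kubernetes, inject these from a Secret)
```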
## File layout

```
.
```