# Analytics Pipeline
High-performance analytics with Redis counters and periodic database flush.
## When to Use This Skill
- Need high-throughput event tracking (thousands/second)
- Want real-time counters without database bottlenecks
- Building dashboards with time-series data
- Tracking user activity, feature usage, or page views
## Core Concepts
Write to Redis for speed; flush to PostgreSQL for persistence. Redis absorbs the high write throughput, while periodic workers batch-flush the accumulated counts to the database.
```
Events → Redis Counters → Periodic Flush Worker → PostgreSQL → Dashboard Queries
```
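The write/flush split above can be sketched as follows. This is a minimal, hedged illustration: an in-memory dict stands in for Redis time-bucketed hash counters (in production the increments would be `HINCRBY` calls via redis-py, and `flush()` would `executemany()` an `INSERT ... ON CONFLICT` upsert into PostgreSQL). The key scheme `analytics:<bucket>` and the 60-second bucket size are assumptions for the example, not part of the skill's spec.

```python
import time
from collections import defaultdict

# Stand-in for Redis hash counters, one hash per time bucket.
# Production equivalent: HINCRBY analytics:<bucket> <event> 1
counters: dict[str, dict[str, int]] = defaultdict(lambda: defaultdict(int))

def track(event: str, bucket_seconds: int = 60) -> None:
    """Hot path: record one event in a time-bucketed counter. O(1), no DB hit."""
    bucket = int(time.time()) // bucket_seconds * bucket_seconds
    key = f"analytics:{bucket}"       # hypothetical key scheme
    counters[key][event] += 1         # Redis: HINCRBY key event 1

def flush() -> list[tuple[int, str, int]]:
    """Flush worker: drain all buckets into rows for one batched DB write."""
    rows: list[tuple[int, str, int]] = []
    for key in list(counters):
        bucket = int(key.rsplit(":", 1)[1])
        for event, count in counters[key].items():
            rows.append((bucket, event, count))
        del counters[key]             # Redis: DEL key after reading
    # Production: cursor.executemany(
    #   "INSERT INTO event_counts (bucket, event, count) VALUES (%s, %s, %s)"
    #   " ON CONFLICT (bucket, event) DO UPDATE"
    #   " SET count = event_counts.count + EXCLUDED.count", rows)
    return rows
```

Usage: call `track()` from request handlers at any rate, and run `flush()` on a timer (e.g. every 10 seconds). Because counts are pre-aggregated per bucket, a burst of thousands of events collapses into a handful of upserted rows.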