kafka-streams-programming
Kafka Streams — Architect, Build, Debug
JVM-embedded stream processing library with no separate cluster.
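For orientation, a minimal sketch of what "embedded" means: the topology and its processing threads run inside your own JVM process. The class name, topic names, and broker address below are placeholders, not part of this skill.

```java
import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;

public class MinimalStreamsApp {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "minimal-streams-app"); // also used as the consumer group id
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");   // assumed local broker
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();
        KStream<String, String> source = builder.stream("input-topic"); // placeholder topic
        source.mapValues(v -> v.toUpperCase()).to("output-topic");      // placeholder topic

        // Processing runs inside this JVM; there is no separate cluster to deploy.
        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
    }
}
```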
⚠️ IMPORTANT: Lazy-Load References Only
Do NOT read all reference files upfront. Read ONLY what you need, when you need it.
- User asks "how do I join two topics?" → Read
references/topology-patterns.md§ Joins Decision Tree only - User asks "build me a Kafka Streams app" → Read
references/build-templates.mdwhen writing build files, not before - User asks "my app is crashing" → Read the specific section in
references/debugging.mdfor that symptom - Most questions need 0-2 reference files total, not all 10
Never read multiple files preemptively "just in case".
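As a point of reference for the join question above, here is a minimal sketch of a windowed KStream-KStream inner join; the decision tree in references/topology-patterns.md covers the full set of join options. The topic names, String serdes, and 5-minute window are illustrative assumptions.

```java
import java.time.Duration;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.JoinWindows;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.StreamJoined;

public class JoinSketch {
    public static void build(StreamsBuilder builder) {
        // Hypothetical topics; both sides must be co-partitioned (same key type, same partition count).
        KStream<String, String> orders = builder.stream("orders");
        KStream<String, String> payments = builder.stream("payments");

        // Inner join: emit when an order and a payment with the same key arrive within 5 minutes of each other.
        KStream<String, String> joined = orders.join(
                payments,
                (order, payment) -> order + "|" + payment,                      // ValueJoiner
                JoinWindows.ofTimeDifferenceWithNoGrace(Duration.ofMinutes(5)),
                StreamJoined.with(Serdes.String(), Serdes.String(), Serdes.String()));

        joined.to("orders-with-payments"); // hypothetical output topic
    }
}
```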
Always Confirm Target Environment First
Before answering in any mode (Architect, Build, Debug), confirm the target environment if the user hasn't stated it: Apache Kafka | Confluent Platform | Confluent Cloud. Versions/auth shape every recommendation — KIP-1071 support, SASL config, ACL model, transactional-id expiry, CLI tool names all branch on this. Skip the question only if the user already named the environment.
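As one concrete illustration of why this matters, only the bootstrap/security block below differs between a local Apache Kafka broker and Confluent Cloud. This is a sketch assuming SASL/PLAIN API-key auth for Confluent Cloud; the endpoint and credentials are placeholders.

```java
import java.util.Properties;
import org.apache.kafka.clients.CommonClientConfigs;
import org.apache.kafka.common.config.SaslConfigs;
import org.apache.kafka.streams.StreamsConfig;

public class EnvConfig {
    // Local Apache Kafka: plaintext, no auth.
    static Properties local() {
        Properties p = new Properties();
        p.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-app");
        p.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        return p;
    }

    // Confluent Cloud: TLS + SASL/PLAIN with an API key/secret (placeholders).
    static Properties confluentCloud() {
        Properties p = new Properties();
        p.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-app");
        p.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "<bootstrap-endpoint>:9092");
        p.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SASL_SSL");
        p.put(SaslConfigs.SASL_MECHANISM, "PLAIN");
        p.put(SaslConfigs.SASL_JAAS_CONFIG,
                "org.apache.kafka.common.security.plain.PlainLoginModule required "
                + "username=\"<API_KEY>\" password=\"<API_SECRET>\";");
        return p;
    }
}
```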
Mode Detection