# Confluent Cloud CDC to Tableflow Pipeline
Build production-ready Change Data Capture (CDC) pipelines that stream database changes through Confluent Cloud into Apache Iceberg or Delta Lake tables using Debezium, Flink, and Tableflow.
## Overview
This skill automates the setup of a complete CDC pipeline:
Database → Debezium CDC Connector → Kafka + Schema Registry → Flink (decode & transform) → Tableflow → Iceberg/Delta Tables
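At the "decode & transform" stage, each Kafka record is a Debezium change-event envelope: an `op` code (`c` = create, `u` = update, `d` = delete, `r` = snapshot read) plus `before` and `after` row images and `source` metadata. As a minimal sketch of what a downstream consumer or Flink UDF does with that envelope (the `decode_change_event` helper and the sample payload are illustrative, not part of any connector API):

```python
import json

# Standard Debezium op codes; anything else is passed through unchanged.
OP_NAMES = {"c": "create", "u": "update", "d": "delete", "r": "snapshot read"}

def decode_change_event(raw: str) -> dict:
    """Parse a Debezium-style JSON change event and summarize it."""
    event = json.loads(raw)
    # Depending on converter settings, the envelope may sit under "payload".
    payload = event.get("payload", event)
    op = payload["op"]
    return {
        "operation": OP_NAMES.get(op, op),
        "before": payload.get("before"),
        "after": payload.get("after"),
        "source_table": payload.get("source", {}).get("table"),
    }

# Hypothetical update event for an "orders" table.
sample = json.dumps({
    "payload": {
        "op": "u",
        "before": {"id": 1, "status": "pending"},
        "after": {"id": 1, "status": "shipped"},
        "source": {"table": "orders"},
    }
})
print(decode_change_event(sample))
```

In the actual pipeline this decoding is typically expressed in Flink SQL against Schema Registry-backed Avro rather than hand-parsed JSON, but the envelope shape is the same.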
## Supported Databases (Fully-Managed Debezium Connectors Only)
- Microsoft SQL Server CDC Source V2
- MySQL CDC Source V2
- PostgreSQL CDC Source V2
- Oracle XStream CDC Source
- DynamoDB CDC Source