As organizations scale their data operations, data engineers face growing challenges in managing large, diverse datasets across multiple platforms. Data lakes are often used to store vast amounts of raw data, while Apache Kafka™ enables real-time data streaming. However, integrating this data into an open lakehouse table format like Apache Iceberg™, which is designed for large-scale analytics, remains a complex task.
In this webinar, we will demonstrate how to efficiently ingest data from both data lakes and Kafka sources into Iceberg. We’ll show how Snowflake’s native integrations make it easy to load data from both streaming and batch sources into Iceberg, enabling scalability, performance, and interoperability in a data lakehouse architecture.
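As a rough preview of the batch path (the live demo goes much deeper), the sketch below uses the snowflake-connector-python client to create a Snowflake-managed Iceberg table on an external volume and batch-load Parquet files from a stage with COPY INTO. Every object name here (iceberg_vol, raw_events_stage, events_iceberg) and every connection value is a hypothetical placeholder, not content from the webinar.

```python
# Minimal sketch: batch ingestion from a data lake into a Snowflake-managed
# Iceberg table. All names and connection values are hypothetical placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="<account_identifier>",
    user="<user>",
    password="<password>",
    warehouse="<warehouse>",
    database="<database>",
    schema="<schema>",
)
cur = conn.cursor()

# Create an Iceberg table managed by Snowflake's catalog. Data and metadata
# are written to a pre-configured external volume (cloud object storage).
cur.execute("""
    CREATE ICEBERG TABLE IF NOT EXISTS events_iceberg (
        event_id STRING,
        event_ts TIMESTAMP_NTZ,
        payload  STRING
    )
    CATALOG = 'SNOWFLAKE'
    EXTERNAL_VOLUME = 'iceberg_vol'
    BASE_LOCATION = 'events/'
""")

# Batch-load Parquet files already sitting in the data lake (exposed here as
# an external stage) into the Iceberg table.
cur.execute("""
    COPY INTO events_iceberg
    FROM @raw_events_stage
    FILE_FORMAT = (TYPE = PARQUET USE_VECTORIZED_SCANNER = TRUE)
    MATCH_BY_COLUMN_NAME = CASE_INSENSITIVE
""")

cur.close()
conn.close()
```

To make a load like this continuous rather than one-shot, it could be wrapped in a scheduled Snowflake task or replaced with Snowpipe for auto-ingestion as new files land.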
If you are a data engineer or architect building pipelines for an Iceberg lakehouse, join this webinar to:
- Get an overview of Apache Iceberg and its advantages over traditional data lakes
- Learn practical methods and considerations for ingesting data from data lakes and Kafka into Iceberg
- See a demo of ingesting data from Kafka into Iceberg with Snowflake (a connector configuration sketch follows this list)
- See a demo of continuously integrating an existing data lake into Iceberg with Snowflake
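For the streaming path, here is a hedged sketch of registering the Snowflake Kafka sink connector through the Kafka Connect REST API, using Snowpipe Streaming as the ingestion method. The connector class and most properties come from the connector's standard configuration, but the snowflake.streaming.iceberg.enabled flag is an assumption about how Iceberg targeting is enabled and should be checked against your connector version's documentation. All credentials, topic names, and table names are placeholders.

```python
# Sketch: register the Snowflake Kafka sink connector (Snowpipe Streaming)
# against a Kafka Connect REST endpoint. All values are placeholders.
import json
import urllib.request

connector = {
    "name": "events-to-iceberg",  # hypothetical connector name
    "config": {
        "connector.class": "com.snowflake.kafka.connector.SnowflakeSinkConnector",
        "topics": "events",
        "snowflake.url.name": "<account>.snowflakecomputing.com:443",
        "snowflake.user.name": "<user>",
        "snowflake.private.key": "<private_key>",
        "snowflake.database.name": "<database>",
        "snowflake.schema.name": "<schema>",
        "snowflake.role.name": "<role>",
        # Stream rows with Snowpipe Streaming rather than file-based Snowpipe.
        "snowflake.ingestion.method": "SNOWPIPE_STREAMING",
        # Assumed property name for writing to an Iceberg table; confirm
        # against your connector version's documentation.
        "snowflake.streaming.iceberg.enabled": "true",
        # Route the "events" topic to a hypothetical Iceberg table.
        "snowflake.topic2table.map": "events:events_iceberg",
    },
}

req = urllib.request.Request(
    "http://localhost:8083/connectors",  # default Kafka Connect REST port
    data=json.dumps(connector).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(resp.status, resp.read().decode())
```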
Speakers
- Ankit Gupta, Senior Product Manager, Snowflake
- Xin Huang, Senior Product Manager, Snowflake
- Scott Teal, Product Marketing Lead, Snowflake