Confluent, Inc., the pioneer in data streaming, has announced the General Availability (GA) of its Delta Lake and Databricks Unity Catalog integrations within Confluent Tableflow, along with Early Access (EA) availability on Microsoft OneLake. With these new developments, Tableflow has evolved into a fully managed, end-to-end solution that bridges operational, analytical, and AI systems across hybrid and multicloud environments. The update enables organizations to move Apache Kafka topics directly into Delta Lake or Apache Iceberg tables, supported by automated data quality checks, synchronized catalogs, and enterprise-grade security.
Since its launch, Tableflow has reshaped how enterprises make streaming data analytics-ready. It removes the need for fragile ETL pipelines and manual lakehouse integrations that often delay insights. With the GA release of the Delta Lake and Unity Catalog integrations and the addition of OneLake support, Confluent has expanded its multicloud capabilities. These enhancements give businesses a unified way to connect real-time and analytical data under a governed, secure framework, enabling faster innovation in AI and analytics.
“Customers want to do more with their real-time data, but the friction between streaming and analytics has always slowed them down,” said Shaun Clowes, Chief Product Officer at Confluent. “With Tableflow, we’re closing that gap and making it easy to connect Kafka directly to governed lakehouses. That means high-quality data ready for analytics and AI the moment it’s created.”
Enterprise-Ready Enhancements
The GA release delivers several new enterprise-grade features that strengthen Tableflow’s reliability, security, and usability. Organizations can now:
- Simplify analytics by automatically converting Kafka topics into Delta Lake tables stored in Amazon S3 or Azure Data Lake Storage, with simultaneous support for Delta Lake and Iceberg formats.
- Unify governance through automatic synchronization of metadata, schemas, and access policies between Tableflow and Databricks Unity Catalog.
- Improve reliability using a Dead Letter Queue to capture malformed records and maintain uninterrupted data flow.
- Save time with automatic upsert functionality that keeps tables consistent and deduplicated.
- Strengthen security via the Bring Your Own Key (BYOK) model, enabling customers to manage their encryption keys for compliance in regulated sectors like finance and healthcare.
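The Dead Letter Queue and upsert behaviors above can be pictured with a small sketch. This is an illustrative approximation of the semantics, not Confluent's implementation; the record shape and the `"id"` primary key are assumptions for the example:

```python
import json

def process_stream(raw_records, table, dlq):
    """Route malformed records to a dead-letter queue; upsert the rest by key.

    Illustrative sketch of DLQ + upsert semantics, not the Tableflow internals.
    """
    for raw in raw_records:
        try:
            record = json.loads(raw)
            key = record["id"]          # primary key assumed to be "id"
        except (ValueError, KeyError):
            dlq.append(raw)             # malformed: capture it, keep the flow moving
            continue
        table[key] = record             # upsert: last write per key wins

# Simulated Kafka topic: one duplicate key, one malformed payload.
events = [
    '{"id": 1, "status": "created"}',
    'not-json',                          # lands in the DLQ
    '{"id": 1, "status": "shipped"}',    # upsert overwrites the earlier row
    '{"id": 2, "status": "created"}',
]
table, dlq = {}, []
process_stream(events, table, dlq)
print(len(table), len(dlq), table[1]["status"])  # 2 1 shipped
```

The malformed payload is quarantined rather than halting ingestion, and the duplicate key collapses to its latest value, which is the consistency-and-deduplication guarantee the upsert feature describes.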
Additionally, Tableflow now supports schema evolution, compaction, and automated table maintenance, and integrates with Apache Iceberg, AWS Glue, and Snowflake Open Catalog, offering a resilient, analytics-ready foundation.
“At Attune, delivering real-time insights from smart building Internet of Things (IoT) data is central to our mission,” said David Kinney, Principal Solutions Architect at Attune. “With just a few clicks, Confluent Tableflow lets us materialize key Kafka topics into trusted, analytics-ready tables, giving us accurate visibility into customer engagement and device behavior. These high-quality datasets now power analytics, machine learning (ML) models, and generative AI applications, all built on a reliable data foundation. Tableflow has simplified our data architecture while opening new possibilities for how we leverage data.”
Expanding to Microsoft OneLake
Confluent has also introduced Tableflow in Early Access on Azure, integrated with Microsoft OneLake. This expansion strengthens its multicloud presence and gives customers greater flexibility in deploying analytics across platforms. Organizations using Azure Databricks and Microsoft Fabric can now take advantage of Delta Lake and Unity Catalog integrations for a seamless, governed analytics experience from real-time data streams to cloud-based lakehouses.
This new integration allows users to:
- Accelerate insights by instantly materializing Kafka topics as open tables in OneLake and querying them directly from Microsoft Fabric or third-party tools.
- Eliminate complexity by automating schema mapping, type conversion, and maintenance for streaming data in Azure-native workflows.
- Enable AI and analytics services by integrating with Azure’s native AI tools via OneLake Table APIs, all manageable through Confluent Cloud’s UI, CLI, or Terraform.
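The schema mapping and type conversion automated above can be sketched in a few lines. The schema, field names, and epoch-millisecond timestamp are hypothetical choices for illustration; in practice Tableflow derives the schema from the topic, not from hand-written converters:

```python
from datetime import datetime, timezone

# Hypothetical schema: JSON field name -> converter producing a typed column value.
SCHEMA = {
    "device_id": str,
    "temperature": float,
    "ts": lambda v: datetime.fromtimestamp(v / 1000, tz=timezone.utc),  # epoch millis
}

def to_typed_row(event: dict) -> dict:
    """Map a loosely typed streaming event onto typed table columns."""
    return {col: convert(event[col]) for col, convert in SCHEMA.items()}

row = to_typed_row({"device_id": "sensor-7", "temperature": "21.5", "ts": 1700000000000})
print(row["temperature"], row["ts"].year)  # 21.5 2023
```

Each incoming event is coerced column by column into the table's declared types, which is the step the integration performs automatically so Azure-native workflows receive query-ready rows rather than raw JSON.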
“Access to real-time data is critical for customers to make fast and accurate decisions,” said Dipti Borkar, Vice President and GM of Microsoft OneLake and ISV Ecosystem. “With Confluent’s Tableflow now available on Microsoft Azure, customers can stream Kafka events to OneLake as Apache Iceberg or Delta Lake tables and query them instantly via Microsoft Fabric and popular 3rd party engines using OneLake Table APIs, cutting complexity and speeding up decisions.”
Through these new integrations, Confluent reinforces its position as a leader in real-time data streaming, offering enterprises an easier, faster, and more secure path to actionable analytics and AI innovation.
To share your insights, please write to us at info@intentamplify.com