Join Red Hat at Current
Red Hat OpenShift Streams for Apache Kafka
OpenShift Streams for Apache Kafka is a cloud service for streaming data that reduces the operational cost and complexity of delivering real-time applications across hybrid-cloud environments.
Speaking Sessions
Tuesday, October 4

8:00 a.m. CDT

Towards Client-Side Field-Level Cryptography for Streaming Data Pipelines
Apache Kafka offers several security features, ranging from authentication and authorization mechanisms to over-the-wire encryption. Even so, end-to-end encryption between Kafka-based client applications, which fully protects payloads from fraudulent access on the broker side, can still be considered a blind spot. After highlighting the main benefits of explicit data-at-rest protection, this session discusses in depth how to selectively encrypt and decrypt sensitive payload fields in the context of streaming data pipelines built upon Apache Kafka Connect and ksqlDB apps. In particular, it introduces Kryptonite for Kafka, an ecosystem community project written and open-sourced by the speaker.
Hans-Peter Grahsl, Developer Advocate, Red Hat

Many Sources, Many Sinks, One Stream: Joining Domains in Your Data Mesh With Canonical Topics
The concept of the Data Mesh is making headway in enterprise data design, fueled by core principles of contextual data domains, local governance, and decentralized integration. Kafka makes the data mesh scalable and resilient with event sourcing and replication. But how do you join multiple data domains on a single node in your mesh, where they all need to stay consistent on the same data changes, without going back to the central data store? In this session we'll introduce the concept of the Canonical Stream: an ordered, declarative event stream of information about a thing that exists in the real world, with its own context and governance. The Canon is technology-agnostic and data-context-agnostic: events on the Canon provide updates about the thing itself, and must be consumed and interpreted differently for each data domain.
Joel Eaton, Data Engineering Manager, Red Hat
11:15 a.m. CDT

Kafka Client-Broker Interactions - What You Don't See
Did you ever wonder why you need to configure advertised listeners for your Kafka brokers? How clients can be forward and backward compatible with brokers? Or how the consumer distributes load amongst worker nodes in a cluster? Then this talk is for you! Assuming no prior knowledge, we'll look at how the Apache Kafka producer and consumer clients work from a protocol point of view. We'll cover how clients bootstrap their connection to Kafka, as well as the basic protocols that support records being produced to, and consumed from, Kafka. We will then go on to look at more advanced interactions like consumer group coordination and transactions. Following this talk, you'll know how the Kafka client protocols work in detail and be able to tell your leaders from your coordinators! The next time you have a problem, you will not only be able to debug it more easily but also understand how to best utilize the Kafka protocol for your applications.
Tom Bentley, Senior Principal Software Engineer, Red Hat
5:15 p.m. CDT

Keep Your Cache Always Fresh with Debezium!
The saying goes that there are only two hard things in Computer Science: cache invalidation, and naming things. Well, it turns out the first one is actually solved. Join us for this session to learn how to keep read views of your data in distributed caches close to your users, always kept in sync with your primary data stores using change data capture.
Gunnar Morling, Senior Principal Software Engineer, Red Hat
Wednesday, October 5

8:00 a.m. CDT

GitOps for Event-Driven Architecture - Kube-style!
Event streaming application architectures are inherently distributed and comprise many moving parts, from Kafka clusters to connectors, streaming applications, and monitoring systems. What if we could use a declarative Kubernetes API and GitOps practices to deploy and manage these complex event-driven architectures across the hybrid cloud? Introducing KCP, an open-source prototype of a multi-tenant control plane for workloads across (Kubernetes) clusters and clouds. In this session, we will show how KCP can be used to transform the way you deploy, manage, and maintain your event streaming application architecture, topology, and deployments.
Duncan Doyle, Product Manager, Red Hat
9:00 a.m. CDT

Kafka at the Edge: an IoT scenario with OpenShift Streams for Apache Kafka
We'll be looking at integrating Kafka with an IoT edge scenario in this demonstration. Using OpenShift Streams for Apache Kafka, we'll investigate how to use the Apache Kafka event streaming platform to power real-time digital content processing. In order to process the IoT data in the cloud, we will first send it to a small local cluster for collection before mirroring it to OpenShift Streams. Additionally, you'll learn how to quickly enrich content and process data using the Apache Kafka Streams API and the Quarkus extensions.
Hugo Guerrero, Senior Principal Product Marketing Manager, Red Hat
11:00 a.m. CDT

Debezium - Meet and Greet
Come talk about anything Debezium with us at this informal gathering. First, we'll briefly go over some of the exciting new features added to Debezium throughout the last few releases, like incremental snapshotting and parallel tasks for the Debezium connector for SQL Server, as well as what to anticipate from the upcoming Debezium 2.0 release. Then we'll turn the tables and have a Q&A session. Is there something you've always wanted to know? How can Debezium be used to accomplish a particular objective? Do you have any feature requests you'd like to see implemented? Bring any questions you have about Debezium and change data capture, and we'll address them.
Gunnar Morling, Senior Principal Software Engineer, Red Hat
Hugo Guerrero, Senior Principal Product Marketing Manager, Red Hat
Hans-Peter Grahsl, Developer Advocate, Red Hat
Red Hat AMQ
Extend integration to the outer edges of your enterprise
Red Hat® AMQ—based on open source communities like Apache ActiveMQ and Apache Kafka—is a flexible messaging platform that delivers information reliably, enabling real-time integration. The AMQ streams component makes Apache Kafka "OpenShift native" through the use of powerful operators that simplify the deployment, configuration, management, and use of Apache Kafka on OpenShift.
Open Source
OpenShift Streams for Apache Kafka is a part of the Red Hat OpenShift ecosystem and provides a streamlined experience for sharing streaming data between instances no matter where they run in hybrid cloud environments.
Apache Kafka is an open-source distributed event streaming platform used by thousands of companies for high-performance data pipelines, streaming analytics, data integration, and mission-critical applications. The project aims to provide a unified, high-throughput, low-latency platform for handling real-time data feeds. Apache Kafka is a great option when using asynchronous, event-driven integration and is foundational to Red Hat's approach to agile integration.
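One building block behind those high-performance pipelines is key-based partitioning: records with the same key always land on the same partition, which is what gives Kafka per-key ordering. A minimal sketch of the idea in Python (Kafka's real default partitioner uses a murmur2 hash; the MD5-based hash below is only a stable stand-in for illustration):

```python
import hashlib

def partition_for(key: bytes, num_partitions: int) -> int:
    # Hash the record key and map it onto one of the topic's partitions.
    # Kafka's client does this with murmur2; md5 here is an illustrative stand-in.
    digest = hashlib.md5(key).digest()
    return int.from_bytes(digest[:4], "big") % num_partitions

# Records sharing a key are always routed to the same partition,
# so a consumer of that partition sees them in order.
p1 = partition_for(b"order-123", 6)
p2 = partition_for(b"order-123", 6)
assert p1 == p2 and 0 <= p1 < 6
```

Because partition assignment depends only on the key and the partition count, adding partitions to an existing topic changes where new keyed records land, which is why partition counts are usually chosen up front.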
Strimzi is an open source project that provides container images and operators for running Apache Kafka on Kubernetes and Red Hat OpenShift, in various deployment configurations. For development, it's easy to set up an instance in Minikube in a few minutes. For production, you can tailor the instance to your needs, using features such as rack awareness to spread brokers across availability zones, and Kubernetes taints and tolerations to run Kafka on dedicated nodes.
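As a sketch of what the declarative approach looks like, a minimal Kafka custom resource for the Strimzi operator might read as follows (field names follow Strimzi's `kafka.strimzi.io/v1beta2` API; the cluster name, replica counts, and ephemeral storage are illustrative choices for a development setup):

```yaml
# Minimal sketch of a Strimzi Kafka custom resource for a dev cluster.
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    replicas: 3
    listeners:
      - name: plain
        port: 9092
        type: internal
        tls: false
    storage:
      type: ephemeral    # dev only; use persistent-claim storage in production
  zookeeper:
    replicas: 3
    storage:
      type: ephemeral
```

Applying a resource like this hands the deployment, configuration, and rolling updates of the cluster over to the operator, rather than scripting them by hand.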
Debezium is an open source distributed platform for change data capture. Point it at databases, and applications can start responding to all of the inserts, updates, and deletes that other applications commit. Debezium is durable and fast, so your applications can respond quickly and never miss an event, even when things go wrong. Debezium connectors are based on the popular Apache Kafka Connect API and can be deployed alongside Red Hat AMQ Streams Kafka instances.
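The shape of a Debezium change event, an envelope carrying an operation code plus `before` and `after` row images, makes it straightforward to keep a derived view such as a cache in sync with the source database. A minimal Python sketch (the envelope fields `op`, `before`, and `after` follow Debezium's event structure; the in-memory dict standing in for a cache is hypothetical):

```python
# Apply Debezium-style change events to an in-memory cache keyed by primary key.
# Debezium op codes: "c" = create, "u" = update, "d" = delete, "r" = snapshot read.
cache = {}

def apply_change_event(event: dict) -> None:
    op = event["op"]
    if op in ("c", "u", "r"):
        # Creates, updates, and snapshot reads all carry the new row in "after".
        row = event["after"]
        cache[row["id"]] = row
    elif op == "d":
        # Deletes carry the old row in "before"; drop the stale cache entry.
        cache.pop(event["before"]["id"], None)

apply_change_event({"op": "c", "before": None, "after": {"id": 1, "name": "a"}})
apply_change_event({"op": "u", "before": {"id": 1, "name": "a"},
                    "after": {"id": 1, "name": "b"}})
apply_change_event({"op": "d", "before": {"id": 1, "name": "b"}, "after": None})
assert 1 not in cache
```

In a real deployment the events would arrive from Kafka topics populated by a Debezium connector, and the consumer would write to a distributed cache rather than a local dict, but the per-event logic stays this simple.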
Apache Camel is an open source integration framework that empowers you to quickly and easily integrate various systems consuming or producing data. It is a rule-based routing and mediation engine that provides a Java object-based implementation of the Enterprise Integration Patterns using an application programming interface to configure routing and mediation rules. Apache Camel and Red Hat Fuse enable developers to create complex integrations in a simple and maintainable format.
Apicurio is an API and schema registry for microservices. You can use the Apicurio Registry to store and retrieve service artifacts such as OpenAPI specifications and AsyncAPI definitions, as well as schemas such as Apache Avro, JSON, and Google Protocol Buffers. The Red Hat Integration Service Registry is based on the open source Apicurio Registry.