Red Hat OpenShift Streams for Apache Kafka
A fully hosted and managed Apache Kafka service.
OpenShift Streams for Apache Kafka simplifies the delivery of stream-based applications.
OpenShift Streams for Apache Kafka is a managed cloud service for streaming data that reduces the operational cost and complexity of delivering real-time applications across hybrid-cloud environments.
Streamlined developer experience
Red Hat OpenShift Streams for Apache Kafka provides a developer-first, consistent experience that shields the user from administrative tasks, supports a consistent experience across clouds, and easily connects to other OpenShift workloads. Developers can self-serve, get productive rapidly with quick starts, and use the Command Line Interface (CLI) and Application Programming Interface (API) access to integrate into existing workflows.
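As an illustration of the self-service workflow, a session with the service's `rhoas` CLI might look like the following. This is a sketch only: the instance and topic names are hypothetical, and exact flags may differ between CLI versions.

```shell
# Authenticate against the managed service
rhoas login

# Provision a managed Kafka instance (name is hypothetical)
rhoas kafka create --name my-kafka-instance

# Create a topic on the new instance
rhoas kafka topic create --name orders
```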
Real-time streaming data broker
This service can be accessed by workloads running in any cloud to support large data transfer volumes between distributed microservices for enterprise-scale applications. Kafka is a real-time and durable data broker that enables applications to process, persist, and re-process streamed data. Kafka's routing model is straightforward: producers publish records to named topics, and a record's key determines which partition of the topic the record lands on, preserving ordering for records that share a key.
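The key-to-partition routing described above can be sketched in a few lines. This is a hypothetical, stdlib-only illustration: the real Kafka client uses a murmur2 hash in its default partitioner, and `crc32` stands in for it here.

```python
import zlib

NUM_PARTITIONS = 3  # a topic is split into a fixed number of partitions

def partition_for(key: str, num_partitions: int = NUM_PARTITIONS) -> int:
    """Map a record key to a partition, as a Kafka producer's default
    partitioner would (crc32 stands in for Kafka's murmur2 hash)."""
    return zlib.crc32(key.encode("utf-8")) % num_partitions

# Records sharing a key always land on the same partition,
# which is what preserves per-key ordering.
assert partition_for("order-42") == partition_for("order-42")
```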
Kafka brokers securely connect to distributed services, making it easy to consume and share streaming data between applications and enterprise systems, cloud provider services, and SaaS applications. Kafka connectors are ready-to-use components that help developers import data from external systems into Kafka topics, and export data from Kafka topics into external systems. COMING SOON
Red Hat OpenShift Service Registry enables development teams to publish, discover, and communicate using well-defined data schemas with Apache Kafka. Service Registry supports both event-driven and API-driven applications, with support for a wide variety of message schema types along with OpenAPI v2 and v3. Developers can leverage the client SerDes in Kafka producer and consumer applications to eliminate boilerplate serialization logic and ensure consistency in data handling across applications. COMING SOON.
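The SerDes idea can be sketched with the standard library: producer and consumer share a registered schema instead of each hand-rolling serialization and validation. The registry dictionary, subject name, and envelope format below are all hypothetical stand-ins for a real schema registry and Avro/JSON SerDes.

```python
import json

# Stand-in for a schema registry: schemas looked up by (subject, version).
REGISTRY = {("orders-value", 1): {"required": ["id", "amount"]}}

def serialize(subject: str, version: int, record: dict) -> bytes:
    """Validate a record against its registered schema, then encode it."""
    schema = REGISTRY[(subject, version)]
    missing = [f for f in schema["required"] if f not in record]
    if missing:
        raise ValueError(f"record missing fields: {missing}")
    # Embed the schema version so consumers know how to decode.
    return json.dumps({"v": version, "payload": record}).encode("utf-8")

def deserialize(data: bytes) -> dict:
    """Decode a record produced by serialize()."""
    return json.loads(data)["payload"]

msg = serialize("orders-value", 1, {"id": 7, "amount": 9.5})
assert deserialize(msg) == {"id": 7, "amount": 9.5}
```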
Delivered as a service
Red Hat's 24/7 global Site Reliability Engineering team fully manages the multi-Availability Zone (AZ) Kafka infrastructure and daily operations, including monitoring, logging, upgrades, and patching, to proactively address issues and quickly solve problems. Site reliability engineering (SRE) applies software engineering practices to operations, managing large fleets of systems through code rather than manual administration, which makes running thousands of machines scalable and sustainable.
Change Data Capture (CDC)
Teams can use Debezium to easily incorporate existing databases into a data stream and an event-driven architecture. Change data capture is a pattern that enables database changes to be monitored and propagated to downstream systems. It is an effective way of enabling reliable microservices integration and solving typical challenges, such as gradually extracting microservices from existing monoliths. COMING SOON.
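The CDC pattern can be sketched as replaying a stream of database change events into a downstream copy. The event envelopes below are loosely shaped like Debezium's create/update/delete events with `before`/`after` state; the records themselves are hypothetical.

```python
# A stream of captured database changes, loosely shaped like
# Debezium events: op "c" = create, "u" = update, "d" = delete.
events = [
    {"op": "c", "after": {"id": 1, "status": "new"}},
    {"op": "u", "after": {"id": 1, "status": "shipped"}},
    {"op": "c", "after": {"id": 2, "status": "new"}},
    {"op": "d", "before": {"id": 1}},
]

# Replay the change stream into a downstream replica.
replica = {}
for ev in events:
    if ev["op"] in ("c", "u"):
        replica[ev["after"]["id"]] = ev["after"]
    elif ev["op"] == "d":
        replica.pop(ev["before"]["id"], None)

# After replay, the replica reflects the source database's final state.
assert replica == {2: {"id": 2, "status": "new"}}
```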
Build streaming applications using Apache Kafka
A deep-dive into managed Kafka service
Check out this DevNation Tech Talk with Edson Yanaga and Evan Shortiss.
Learn about OpenShift Streams for Apache Kafka, a service that provides fully hosted and managed Kafka instances. This service enables you to focus on building your real-time, data streaming applications while Red Hat takes care of your infrastructure.
After this session, you'll be familiar with the features of OpenShift Streams for Apache Kafka, you'll understand the related command-line interface (CLI) tooling, and you'll learn how it can be integrated with applications running on Red Hat OpenShift (or elsewhere!).
OpenShift Streams for Apache Kafka is open source
OpenShift Streams for Apache Kafka is a part of the Red Hat OpenShift ecosystem and provides a streamlined experience for sharing streaming data between clusters no matter where they run in hybrid cloud environments.
Apache Kafka is an open-source distributed event streaming platform used by thousands of companies for high-performance data pipelines, streaming analytics, data integration, and mission-critical applications. The project aims to provide a unified, high-throughput, low-latency platform for handling real-time data feeds. Apache Kafka is a great option when using asynchronous, event-driven integration and is foundational to Red Hat's approach to agile integration. Learn more.
Strimzi is an open source project that provides container images and operators for running Apache Kafka on Kubernetes and Red Hat OpenShift. It supports various deployment configurations: for development, it's easy to set up a cluster in Minikube in a few minutes; for production, you can tailor the cluster to your needs, using features such as rack awareness to spread brokers across availability zones, and Kubernetes taints and tolerations to run Kafka on dedicated nodes. Learn more.
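With Strimzi, a Kafka cluster is declared as a Kubernetes custom resource that the operator reconciles. The minimal sketch below follows the `kafka.strimzi.io/v1beta2` resource schema; the cluster name and sizing are hypothetical, and ephemeral storage is for development only.

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster          # hypothetical cluster name
spec:
  kafka:
    replicas: 3             # three brokers
    listeners:
      - name: plain
        port: 9092
        type: internal
        tls: false
    storage:
      type: ephemeral       # development only; use persistent storage in production
  zookeeper:
    replicas: 3
    storage:
      type: ephemeral
  entityOperator:
    topicOperator: {}       # manage topics via KafkaTopic resources
    userOperator: {}        # manage users via KafkaUser resources
```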
Debezium is an open source distributed platform for change data capture. Point it at databases, and applications can start responding to all of the inserts, updates, and deletes that other applications commit. Debezium is durable and fast, so your applications can respond quickly and never miss an event, even when things go wrong. Debezium connectors are based on the popular Apache Kafka Connect API and are suitable to be deployed alongside Red Hat AMQ Streams Kafka clusters. Learn more.
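Because Debezium connectors run on Kafka Connect, capturing a database is largely a matter of registering a connector configuration. The sketch below is indicative of a Debezium MySQL connector config; the hostnames, credentials, and table names are hypothetical, and property names can vary between Debezium versions.

```json
{
  "name": "inventory-connector",
  "config": {
    "connector.class": "io.debezium.connector.mysql.MySqlConnector",
    "database.hostname": "mysql",
    "database.port": "3306",
    "database.user": "debezium",
    "database.password": "dbz",
    "database.server.id": "184054",
    "topic.prefix": "inventory",
    "table.include.list": "inventory.orders"
  }
}
```

Once registered, each change to the `inventory.orders` table is emitted as an event on a Kafka topic that downstream consumers can subscribe to.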
Apache Camel is an open source integration framework that empowers you to quickly and easily integrate various systems consuming or producing data. It is a rule-based routing and mediation engine that provides a Java object-based implementation of the Enterprise Integration Patterns using an application programming interface to configure routing and mediation rules. Apache Camel and Red Hat Fuse enable developers to create complex integrations in a simple and maintainable format. Learn more.
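As a sketch of the routing-rule style Camel uses, the route below (written in Camel's YAML DSL; the topic and endpoint names are hypothetical, and exact DSL syntax varies by Camel version) consumes from a Kafka topic, applies a content-based filter, and forwards matching records:

```yaml
- route:
    id: orders-routing        # hypothetical route id
    from:
      uri: "kafka:orders"     # consume from a Kafka topic
      steps:
        - filter:             # Enterprise Integration Pattern: message filter
            simple: "${body} contains 'priority'"
            steps:
              - to: "kafka:priority-orders"
        - to: "log:audit"     # log every record that passes through
```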
Apicurio is an API and schema registry for microservices. You can use the Apicurio Registry to store and retrieve service artifacts such as OpenAPI specifications and AsyncAPI definitions, as well as schemas such as Apache Avro, JSON, and Google Protocol Buffers. The Red Hat Integration Service Registry is based on the open source Apicurio Registry. Learn more.
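A typical artifact stored in the registry is an Apache Avro schema describing a topic's record format. The example below is a valid Avro record schema; the record and field names are hypothetical.

```json
{
  "type": "record",
  "name": "Order",
  "namespace": "com.example.events",
  "fields": [
    {"name": "id", "type": "long"},
    {"name": "amount", "type": "double"},
    {"name": "status", "type": {
      "type": "enum", "name": "Status",
      "symbols": ["NEW", "SHIPPED", "CANCELLED"]
    }}
  ]
}
```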