Connect everywhere with Serverless Integration

Optimized scaling and a better developer experience: it connects everywhere.

What is serverless connectivity?

Since the surge of Software-as-a-Service (SaaS) offerings and the rise of microservices, developers find themselves doing more connecting and orchestrating work than ever before, yet managing and optimizing the deployment of this integration code takes up too much time and effort. Serverless is a cloud computing model that makes the provisioning and scaling of servers transparent to developers. The event-driven and stateless nature of integration applications makes them a perfect fit for the serverless model, and serverless lets developers focus on application development with flexible deployment and optimal resource usage.


Developer focus

Abstraction of infrastructure for developers with the help of Enterprise Integration Patterns (EIPs) and 300+ built-in components.



Hybrid deployment

Cloud-native, but can also integrate with classic (legacy) applications. Provides a consistent experience across on-premises and hybrid deployments.


Lower operational cost

Reduces horizontal scaling costs with automatic scaling, so developers no longer need to make arbitrary capacity predictions.


Reduced packaging and deployment complexity

Operator patterns minimize effort in many phases of the application lifecycle, including building, deploying, and configuring.


Flexible Scalability

Its event-driven architecture (EDA) handles fluctuating flows of events and data well and provides failure isolation, so the application can be implemented in a more modular way.


Faster time to market

Cuts time to market significantly: no more complicated build and deploy processes to roll out new features, giving developers a quicker turnaround.

Serverless integration in OpenShift


Camel K provides a best-practice platform and framework for building serverless integrations. With the help of operator patterns and the underlying serverless technologies, developers can focus on writing their code. Camel K runs on top of Quarkus, gaining the power of its supersonic, lightweight runtime.
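As a sketch of what this looks like in practice, a minimal Camel K integration can be written in Camel's YAML DSL and deployed with a single `kamel run` command; the file name and log category below are illustrative:

```yaml
# hello.yaml - a minimal Camel K integration (illustrative)
# Deploy with: kamel run hello.yaml
- from:
    uri: "timer:tick?period=5000"      # fire an event every 5 seconds
    steps:
      - setBody:
          constant: "Hello from Camel K"
      - to: "log:info"                 # write the message body to the pod log
```

The operator detects the dependencies (timer and log components), builds the container image, and deploys it; no Dockerfile or deployment manifest is written by hand.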

Red Hat OpenShift Serverless with AMQ Streams provides a high-throughput yet reliable foundation for the event mesh. The events are used to trigger serverless applications.
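For example, events on an AMQ Streams (Kafka) topic can be wired to a serverless application with a Knative `KafkaSource`. All names below (topic, service, bootstrap address) are placeholders:

```yaml
apiVersion: sources.knative.dev/v1beta1
kind: KafkaSource
metadata:
  name: orders-source                # placeholder name
spec:
  bootstrapServers:
    - my-cluster-kafka-bootstrap.kafka.svc:9092   # AMQ Streams bootstrap service
  topics:
    - orders                         # events on this topic trigger the app
  sink:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: order-processor          # the serverless application to invoke
```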

Built on the OpenShift platform, which simplifies container management and provides the base infrastructure, OpenShift Serverless provides on-demand scaling and hides the lower-level complexity from developers.
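The on-demand scaling behavior is controlled declaratively. A sketch of a Knative service that scales to zero when idle and caps horizontal scaling, with placeholder name and image:

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: greeter                      # placeholder name
spec:
  template:
    metadata:
      annotations:
        autoscaling.knative.dev/minScale: "0"    # scale to zero when idle
        autoscaling.knative.dev/maxScale: "10"   # cap horizontal scaling
    spec:
      containers:
        - image: quay.io/example/greeter:latest  # placeholder image
```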


[Diagram: serverless integration]

Can my integration application go serverless?

If you're already structuring your distributed applications around their fundamentally event-driven nature, you're halfway there, with a slight twist: serverless applications are the smaller code snippets that connect and transform services.




Event-driven

Serverless integration is always event-driven: the application is either a consumer or a publisher of events. The initial events predominantly come from sources that trigger changes, and the resulting states are broadcast across systems. The integration orchestrates events with external sources when needed, or retrieves events from the event mesh to transform or process the payload.
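As a sketch of this consume-transform-publish pattern, a Camel route in the YAML DSL that retrieves events from the mesh, processes the payload, and publishes the result (broker addresses and topic names are illustrative):

```yaml
# illustrative: consume an event, transform the payload, publish the result
- from:
    uri: "kafka:orders?brokers=my-cluster-kafka-bootstrap:9092"
    steps:
      - transform:
          simple: "processed: ${body}"   # process/transform the payload
      - to: "kafka:orders-processed?brokers=my-cluster-kafka-bootstrap:9092"
```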



Stateless

The majority of integration applications are stateless, and you'll want to keep them that way. A highly distributed, loosely coupled environment demands free scaling and the ability to run multiple instances in parallel. The best practice is to keep the application reactive to state, not to store it; doing so prevents future problems. State should be managed at a higher business level.
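In practice, this means everything an integration needs to process a message travels with the event itself, so any replica can handle any message. A sketch under that assumption, with illustrative URIs and field names:

```yaml
# stateless: routing data is derived from the event, never from local state
- from:
    uri: "kafka:payments?brokers=my-cluster-kafka-bootstrap:9092"
    steps:
      - unmarshal:
          json: {}                      # parse the incoming payload
      - setHeader:
          name: "region"
          simple: "${body[region]}"     # read routing data from the event
      - marshal:
          json: {}
      - toD: "kafka:payments-${header.region}?brokers=my-cluster-kafka-bootstrap:9092"
```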


Smaller than micro

Breaking the integration into small modules will help to optimize resource usage, and keeping the integration nice and small will allow faster boot time at its initiation. Plus, the integration will be easier to maintain in the future.

Integration components



Collectors (Sources)

Typically, collectors move data from a source to a sink, converting non-cloud events into cloud events and passing them on to the system. They don't do a lot of processing; keeping them simple is key.
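A minimal collector might look like the route below: it reads from a plain, non-cloud source and forwards each item to the event mesh without further processing. The directory, topic, and broker address are placeholders:

```yaml
# illustrative collector: pick up files and forward each one as an event
- from:
    uri: "file:/data/incoming?noop=true"     # a non-cloud source
    steps:
      - to: "kafka:raw-events?brokers=my-cluster-kafka-bootstrap:9092"   # the sink / event mesh
```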


Connectors (Triggers)

Like any event-driven architecture, not all consumers are created equal. There's always a need to orchestrate events and do content routing. With connectors, developers can filter unwanted parts in the payload for more efficient processing. Aggregating or splitting content helps manage these events and send them to the right consumer in the right format.
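The content routing described above can be sketched with Camel's content-based router (choice) EIP; the event field, topics, and broker address below are assumptions for illustration:

```yaml
# illustrative connector: route events by content to the right consumer
- from:
    uri: "kafka:orders?brokers=my-cluster-kafka-bootstrap:9092"
    steps:
      - unmarshal:
          json: {}                     # parse the payload for inspection
      - choice:
          when:
            - simple: "${body[type]} == 'priority'"
              steps:
                - marshal:
                    json: {}
                - to: "kafka:orders-priority?brokers=my-cluster-kafka-bootstrap:9092"
          otherwise:
            steps:
              - marshal:
                  json: {}
              - to: "kafka:orders-standard?brokers=my-cluster-kafka-bootstrap:9092"
```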



Applications

Typical business process tasks that are more CPU-bound, or web/API front ends handling user requests. These change quite frequently, so managing revisions is key.
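Because these applications change frequently, Knative's revision model helps roll out changes safely: traffic can be split between a stable revision and the latest one. A sketch with placeholder names and image:

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: order-api                  # placeholder name
spec:
  template:
    spec:
      containers:
        - image: quay.io/example/order-api:v2   # placeholder image
  traffic:
    - revisionName: order-api-v1
      percent: 90                  # keep most traffic on the stable revision
    - latestRevision: true
      percent: 10                  # canary the new revision
```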

Cloud-native development that connects systems

Orchestrate real-time cloud events and enhance developer productivity with automatic dependency detection and lifecycle management.

Learn serverless integration in your browser

Interactive Tutorial

Learn the basics of Camel K

Understand how to use this lightweight framework for writing integrations.

Interactive Tutorial

Exposing Apache Kafka through the HTTP Bridge

Communicate between applications and clusters with the Red Hat AMQ Streams...

Interactive Tutorial

Change data capture with Debezium

Monitor your change data capture (CDC) events with Debezium, a set of...

Interactive Tutorial

Send events to Apache Kafka with Reactive Messaging

Create a Quarkus application that uses the MicroProfile Reactive Messaging...