Apache Camel and Quarkus on Red Hat OpenShift

The previous installments in this series explored the motivations behind different community-supported runtimes for Apache Camel and enumerated in detail the benefits of each runtime. This final installment gives you a simplified and opinionated decision flow containing basic questions to help you identify which Camel runtime will work best for you.

An obvious first question is what happens if your organization mandates every aspect of the development process, including the technologies and runtimes to use. If that's the case, the guidance in this series of articles might seem to be of little use. You can, however, suggest what in your view would be a positive change to the current choices.

Choose the right Camel runtime for your environment

The decision tree in Figure 1 tries to recommend the best Camel runtime for you, depending on your target environment.

The major division in Figure 1 is between containerized and non-containerized environments. The first part of this series described the evolution of integration applications and how the emergence of containers accelerated the shift to microservice architectures. This transition does not necessarily mean that all integration projects must become microservices running in containers. The trend just added new miles to our road—that is, new places to survey during your decision-making.

It's up to you whether or not to embrace Kubernetes, depending on your requirements and budget. If you don't, it's still wise to keep Kubernetes in mind as a possible target environment, in case plans come up in your organization to transition to containers and microservices.

Figure 1: Containers, complexity, and developer control are the main factors determining the best runtime for Apache Camel.

Camel for standalone applications

For many developers, running integrations in a Kubernetes environment is not an option or is simply overkill. In that case, choose a standalone Camel runtime.

If you'll be running standalone and you're not planning to implement many services, consider adopting Camel Quarkus for its simplicity, performance, very low memory footprint, and compatibility with Camel 3.

Another good reason to prefer Camel Quarkus over other standalone runtimes is that it represents the future of Camel, where the community invests most of its effort. Also consider that while you may be running standalone today, Camel Quarkus leaves the door open to a comfortable transition to container environments when the opportunity arises.
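To give a sense of that simplicity, here is a minimal sketch of what a standalone Camel Quarkus service might look like. The route and endpoint names are hypothetical, and the example assumes the camel-quarkus-timer and camel-quarkus-log extensions are among the project's dependencies:

    import org.apache.camel.builder.RouteBuilder;

    // A minimal standalone Camel Quarkus service: one RouteBuilder class.
    // Camel Quarkus discovers route builders automatically, so there is no
    // additional bootstrap code to write.
    public class HeartbeatRoute extends RouteBuilder {

        @Override
        public void configure() {
            from("timer:heartbeat?period=5000")          // fire every five seconds
                .setBody(constant("Service is alive"))   // build a simple message body
                .to("log:heartbeat");                    // write it to the application log
        }
    }

The same class runs unchanged on the JVM or, compiled to a native executable, inside a container, which is what keeps that door to container environments open.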

However, if you foresee the need to implement additional services, consider bundling your application in a fully featured OSGi container such as Apache Karaf. Try to keep your Camel projects small and independent, avoiding monolithic structures that are challenging to evolve; OSGi's modular architecture should help you with that.

Camel for Kubernetes

Let's look at your options if your target environment is Kubernetes. Because containers favor distributed architectures and encourage splitting big application servers into independent microservices, deploying OSGi engines in containers doesn't make much sense. Running one service per container is the best practice.

To fit the slim image sizes desired for containers and Kubernetes, many developers feel tempted to switch from Java to other languages that provide a lighter and smaller runtime. The downside is that most of these languages lack the maturity of Java and, more importantly, the rich connectivity and functionality that Camel brings, designed specifically to solve enterprise integration patterns.

Until recently, the most popular runtime for deploying Camel on Kubernetes was Spring Boot, which seemed a lightweight alternative to existing Java frameworks when it was introduced but now appears relatively heavyweight. Now Quarkus is on the scene to provide supersonic, subatomic Java. Quarkus promises a long life for Camel in the container space, to the delight of the many developers already using Java and Camel.

Camel Quarkus provides very economical memory usage and top performance, which means it meets the requirements of serverless platforms, where applications scale to zero when idle and react to incoming traffic by waking up and responding in just a few milliseconds. This behavior, of course, has tremendous benefits for platform resource usage.

For the reasons cited here, the diagram includes only Camel Quarkus and Camel K (which is Quarkus based). Other non-Quarkus Java runtimes were discarded because they lack those characteristics. The decision, then, comes down to choosing between the two.

Trading off simplicity and control

Suppose your service requires considerable data transformations and nontrivial processing logic, such as interacting with multiple endpoints. In that case, you should retain maximum control over Camel's framework. That's what Camel Quarkus provides, and what the traditional Camel developer is used to, except with the much-improved performance that Quarkus achieves. As an analogy, imagine you have complete design control over the electronics that power a multicolor LED bulb (Figure 2).

Figure 2: Quarkus can be imagined as giving you full control over the electronics of a multicolor LED bulb.
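To make the idea of full control more concrete, here is a rough sketch of a Camel Quarkus route that transforms a payload and interacts with several endpoints. The endpoint URIs are placeholders, and the example assumes the platform-http, Jackson, and Kafka extensions are available:

    import java.util.Map;

    import org.apache.camel.builder.RouteBuilder;

    // A hypothetical Camel Quarkus route with nontrivial logic: it accepts HTTP
    // requests, applies custom transformation code, and fans out to two endpoints.
    public class OrderRoute extends RouteBuilder {

        @Override
        public void configure() {
            from("platform-http:/orders")                // expose an HTTP endpoint
                .unmarshal().json(Map.class)             // parse the JSON payload into a Map
                .process(exchange -> {                   // arbitrary Java transformation logic
                    Map<String, Object> order = exchange.getMessage().getBody(Map.class);
                    order.put("status", "RECEIVED");
                })
                .marshal().json()                        // serialize the enriched payload
                .multicast()                             // send the same message to both targets
                    .to("kafka:orders")                  // a placeholder Kafka topic
                    .to("log:orders");                   // and an audit entry in the log
        }
    }

Everything Camel offers, from error handling to custom processors and the full component catalog, is available here exactly as a traditional Camel developer expects; the trade-off is that you own the application's build and deployment.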

However, with today's explosion of data flows, use cases are very often relatively simple: developers just need to fetch data from sources or feed data to target platforms, with little or no data handling in between.

That's Camel K's sweet spot. It delivers great connectivity through Camel's rich palette of connectors and can still include a reasonable degree of data manipulation if desired; as a basic principle, though, you should keep what each service does to a minimum. Back to our bulb example, imagine you can turn the light on and off with a knob that also controls its intensity (Figure 3).

Figure 3: Camel K is comparable to a dimmer switch that controls light intensity.

Of course, the animation in Figure 3 just illustrates the concept. Still, it might help you visualize how Camel K hides much of the complexity from you, in contrast with the fully designed electronics of the multicolor bulb, where attention to every detail is necessary.

Camel K's Domain Specific Language (DSL) is an excellent choice for simple use cases. The DSL offers some degree of control (the knob controlling light intensity), simplifying the developer's work and letting the Operator manage the service's lifecycle.
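As a rough illustration, a Camel K integration written in the Java DSL can be as small as the following file. The endpoints are placeholders, and the example assumes nothing beyond what the Operator provisions by default:

    import org.apache.camel.builder.RouteBuilder;

    // Data.java -- a hypothetical Camel K integration in the Java DSL, matching
    // the "fetch and forward" use cases described above. This single file is the
    // whole deliverable; the Operator takes care of everything else.
    public class Data extends RouteBuilder {

        @Override
        public void configure() {
            from("timer:poll?period=60000")              // poll once a minute
                .to("https://example.com/api/data")      // fetch from a placeholder REST endpoint
                .to("log:data");                         // forward the payload with no further handling
        }
    }

A command such as kamel run Data.java hands the file to the Operator, which builds the image, deploys the integration, and manages its lifecycle from then on.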

To better grasp what a simple use case could look like and the power that Camel K can deliver, check this two-part series detailing a complete API integration. Following best practices, the example in that series exposes an OpenAPI service, applies data transformation, connects to an endpoint, and processes its response.

Kamelet bindings

The last decision in our diagram separates Camel K's traditional developer style from the "no-code" style of Kamelet bindings, introduced in Part 2 of this series. The Kamelet binding construct is meant for an even higher level of abstraction, allowing for configuration-only use cases. It uses pre-built, out-of-the-box Kamelets: think of these as processing flows you enable and disable, but over which you don't exercise much further control. Again, back to our bulb example, imagine turning the light on and off with a simple switch (Figure 4).

Figure 4: Kamelets are comparable to a switch that turns the light on and off.

When you push a Kamelet binding to the cluster, it's as if you flip the switch to turn the light on: the Operator reads its definition and deploys an integration process that executes it. If you later delete the Kamelet binding resource, the Operator stops the integration and tears it down, as if switching the light off.

For more formal examples, see this article, which explains the use of both the DSL and Kamelet bindings. Steps 1 and 3 of that article, in particular, show how the configuration-only blocks enable integration stages of the end-to-end data flow.

Although Kamelet bindings can seem very restrictive, if you'd like a bit more control, you can define new Kamelets in your catalog to satisfy your requirements. In our metaphor, you can unscrew the light switch and tune it to your liking.

Final words

Apache Camel is known as the Swiss Army knife of integration, and you've probably grasped by now that Camel goes even further than that: it offers a wide variety of runtimes that carry its solid integration foundation into very different environments.

It's helpful to understand Camel's timeline and how it has evolved, continuously looking toward new horizons. You can then work out which Camel runtime fits best in your world and take advantage of it.

You can undoubtedly integrate systems with arbitrary languages and frameworks, but that's like firing shots at random. Apache Camel is more like harnessing ordinary light into a sharp, focused laser beam.

Learn more about Camel Quarkus and Camel K

See the following resources to learn more about Camel Quarkus and Camel K:

Last updated: September 20, 2023