
Improve cross-team collaboration with Camel K

September 14, 2021
Bruno Meseguer
Related topics: Event-Driven, Kafka, Kubernetes, Microservices
Related products: Red Hat OpenShift


    No matter how much you know about Apache Camel, Camel K is designed to simplify how you connect systems in Kubernetes and Red Hat OpenShift, and it comes with cloud-native Kafka integration. This article helps you discover the power and simplicity of Camel K, along with opportunities for using it in ways you might not have previously considered.

    About Camel K: Camel K is a subproject of Apache Camel, one of the most popular open source community projects aimed at solving all things integration. Camel K simplifies working with Kubernetes environments so you can get your integrations up and running in a container quickly. See the end of this article for a list of resources for learning more about Camel K and the latest general availability (GA) release.

    Challenges in cross-team collaboration

    Teams in organizations frequently connect online to discuss their shared objectives and tasks. They also sometimes need to connect with other teams within the organization to address questions and concerns and determine the right strategy going forward. Figure 1 shows an overview of this cross-functional collaboration between teams.

    Figure 1: An example of cross-functional team collaboration.

    As a collaboration tool, teams frequently use Google spreadsheets to document open questions, as shown in Figure 2. Getting these questions answered requires finding the right individuals outside the team who can help complete the document. Contacting and communicating across teams is time-consuming and not always easy.

    Figure 2: Open questions in a Google spreadsheet.

    Automating cross-team interactions

    For our scenario, consider an organization that would like to automate the process of sending out questions for cross-team collaboration. The system would distribute the questions to the relevant teams and collect their responses back, as shown in Figure 3.

    Figure 3: The system collects the answers automatically.

    We can use Camel K and Kafka, running on a Kubernetes platform, to solve this scenario. Camel K provides great agility, rich connectivity, and mature building blocks to address common integration patterns. Kafka brings an event-based backbone and keeps a record of all the cross-team interactions.

    Let's walk through the stages of the integration.

    Stage 1: Data as a stream

    Camel K introduces the concept of KameletBindings. In short, these are no-code configurable definitions that typically bind two Camel K connectors. We use them to open an integration path so that information flows from a source to a destination (the sink).

    In our simulated scenario, any regular Kubernetes user can define a KameletBinding that includes two connectors and enables the data flow shown in Figure 4.

    Figure 4: The KameletBinding fetches the Google Sheets data and streams it over Kafka.

    In this data flow, the KameletBinding captures and forwards the information in a Google Sheets document to Kafka. This binding only requires configuring a source and a sink in a YAML definition, as follows:

    apiVersion: camel.apache.org/v1alpha1
    kind: KameletBinding
    metadata:
      name: stage-1-sheets2kafka
      namespace: demo-camelk
    spec:

      # Source: polls the Google Sheets document for rows in the given range
      source:
        ref:
          kind: Kamelet
          apiVersion: camel.apache.org/v1alpha1
          name: google-sheets-source
        properties:
          accessToken: "the-token"
          applicationName: "the-app-name"
          clientId: "the-client-id"
          clientSecret: "the-client-secret"
          index: "the-index"
          refreshToken: "the-refresh-token"
          spreadsheetId: "the-spreadsheet-id"
          range: "the-range"
          delay: 500000   # poll interval, in milliseconds

      # Sink: the Kafka topic (managed by Strimzi) that receives one event per question
      sink:
        ref:
          apiVersion: kafka.strimzi.io/v1beta1
          kind: KafkaTopic
          name: questions

    The following command shows an example of how to create this Camel K integration using the OpenShift (oc) client. The same command works with the Kubernetes client (kubectl):

    oc apply -f kameletbindings/stage-1-sheets2kafka.yaml

    When the KameletBinding is created via the web console or command-line interface (CLI), Camel K automatically triggers a build and deploys a container that immediately starts the data transfer from Google Sheets to the Kafka platform.
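    If you want to watch the build and rollout, the following commands are one way to check it. This is a sketch, assuming the demo-camelk namespace from the YAML above; Camel K creates an Integration resource behind the scenes, named after the binding:

    oc get integrations -n demo-camelk

    kamel log stage-1-sheets2kafka -n demo-camelk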

    Stage 2: Data distribution

    The questions in the spreadsheet have now been ingested into Kafka: one event per question. Our next mission is to deliver them to the designated teams via email, as illustrated in Figure 5.

    Figure 5: Messages ingested into Kafka are delivered to the teams by Camel K via email.

    Each Kafka message includes three pieces of information, as the following sample from the spreadsheet shows:

    ID: 1
    Question: After our recent company's acquisition, how will we integrate their systems with ours?
    Team: Architecture

    We apply the message router enterprise integration pattern (EIP). Note that we're using the third column in the spreadsheet ("Architecture") as the routing key to deliver each message to its designated team.
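    Because the stage-2 route deserializes each Kafka event with camel-jackson, the message body arrives as a list. A hypothetical event for the sample row above would look something like this (illustrative only, not taken from the article's source):

    [1, "After our recent company's acquisition, how will we integrate their systems with ours?", "Architecture"]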

    As this integration requires routing logic, instead of defining a KameletBinding, we prefer using Camel's DSL (domain-specific language) for the implementation. Only one Camel K source file is required. Its core logic would look as follows:

    Note: The team is extracted from the array’s position 2 using the syntax ${body[2]}.

    // Route each message to its designated team's mailbox
    .choice()
        .when(simple("${body[2]} == 'Development'"))
            .setProperty("mail-to", constant("development@demo.camelk"))
            .to("direct:send-mail")
    
        .when(simple("${body[2]} == 'Architecture'"))
            .setProperty("mail-to", constant("architecture@demo.camelk"))
            .to("direct:send-mail")
    
        .when(simple("${body[2]} == 'Operations'"))
            .setProperty("mail-to", constant("operations@demo.camelk"))
            .to("direct:send-mail")
    
        .otherwise()
            .log("Message discarded: team is unknown.");

    The Camel K code consumes events using the Kafka connector, applies the message routing logic shown, and then pushes the data via the mail component connected to the mail server.
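    For completeness, the direct:send-mail route referenced above could be sketched as follows. This is an illustration only: the SMTP endpoint, port, and header names are assumptions rather than the article's actual source code (which lives in the GitHub repository):

    // Hypothetical sketch of the shared send-mail route. The Camel mail
    // component picks up the "To" and "Subject" headers when sending.
    from("direct:send-mail")
        .setHeader("To", exchangeProperty("mail-to"))
        .setHeader("Subject", simple("Question ${body[0]}"))
        .setBody(simple("${body[1]}"))
        .to("smtp://standalone.demo-mail.svc:3025?username=strategy&password=demo");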

    Each team receives an email in their inbox containing the question directed to them. Any member of that team can pick up the message and reply with an answer, as shown in Figure 6.

    Figure 6: Any team member can reply to a message via email.


    Running this Camel K component takes one command in the CLI. Here is an example of how to run it on Kubernetes:

    kamel run stage-2-kafka2mail -d camel-jackson

    Stage 3: Data collection

    So far, we have successfully distributed the questions on behalf of the Strategy team. Teams answering the questions will direct their replies to this team's inbox. The next integration step in the chain (Figure 7) channels the email replies back to Kafka, so that Kafka keeps a history of the cross-team interactions.

    Figure 7: Camel K channels the email responses back to Kafka.

    Again, this piece does not require any coding or experience with Camel. To establish this data flow, we can define a KameletBinding as follows:

    apiVersion: camel.apache.org/v1alpha1
    kind: KameletBinding
    metadata:
      name: stage-3-mail2kafka
      namespace: demo-camelk
    spec:

      # Source: polls the Strategy team's inbox over IMAP (insecure variant, for the demo)
      source:
        ref:
          kind: Kamelet
          apiVersion: camel.apache.org/v1alpha1
          name: mail-imap-insecure-source
        properties:
          host: "standalone.demo-mail.svc"
          port: "3143"
          username: "strategy"
          password: "demo"

      # Intermediate step: converts each email into a JSON representation
      steps:
      - ref:
          kind: Kamelet
          apiVersion: camel.apache.org/v1alpha1
          name: mail-to-json-action

      # Sink: the Kafka topic that records the answers
      sink:
        ref:
          apiVersion: kafka.strimzi.io/v1beta1
          kind: KafkaTopic
          name: answers

    The following command shows an example of how to create this Camel K integration using oc. The same command would work with kubectl:

    oc apply -f kameletbindings/stage-3-mail2kafka.yaml

    When the KameletBinding is created (via the web console or CLI), Camel K automatically triggers a build and deploys a container. The container immediately starts collecting email responses and pushing them to the Kafka platform.

    Stage 4: Push to cloud service

    The piece that closes the circle is setting up the data flow from Kafka to the Google Sheets document. It requires the necessary intelligence to correctly place each response into the corresponding cell in the spreadsheet, as shown in Figure 8.

    Figure 8: Camel K gets the responses from Kafka and updates the spreadsheet.

    A Camel DSL implementation fits the purpose here. It extracts from the Kafka event the correlation information that indicates where to place the answer in the spreadsheet, as shown in Figure 9.

    Figure 9: Camel K extracts the correlation information (the question ID in the email subject) from the Kafka event.

    The Kafka event represents (in JSON format) the email reply. One of its fields, the email "subject," contains the key value that identifies the question in the Google Sheets document. Camel K extracts the ID and uses it when invoking the Google API.
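    The core of that logic might look like the following sketch. The header name and the final endpoint are placeholders (assumptions for illustration); the actual implementation is in the GitHub repository:

    // Hypothetical sketch of the stage-4 correlation logic. The event body is
    // the email reply in JSON; its "subject" field carries the question ID.
    from("kafka:answers")
        .unmarshal().json()
        .setHeader("questionId", simple("${body[subject]}"))
        // placeholder: the real route updates the matching spreadsheet cell
        // through the Camel google-sheets component and the Google API
        .to("google-sheets://...");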

    After processing all the responses, the Strategy team can advance to the next stage and make their next strategic decision (Figure 10).

    Figure 10: The spreadsheet populated with answers.

    Running this Camel K component takes one command in the CLI. Here's an example of how to run it on Kubernetes:

    kamel run stage-4-kafka2sheets -d camel-jackson

    Stage 5: Stream post-processing

    The Kafka backbone keeps a historical record of all the events flowing in both directions: questions and answers. It enables systems to subscribe to event streams and, for example, replay and process them. Teams can use this capability to generate valuable information for other departments in the organization.

    In our example, we use a Camel K unit to consume from both Kafka topics: questions and answers. We then render a report in PDF format that will be made available to anyone in the organization who is interested in cross-functional interactions related to strategy. Figure 11 illustrates the integration.

    Figure 11: An overview of the Camel K flow to generate and upload reports.

    This Camel K implementation uses the aggregator EIP, which identifies all the related Kafka events and merges them to produce a PDF report, which is then uploaded to Google Drive.

    The process involves two aggregators. The first one, the Q/A correlator, pairs each question with its corresponding answer into a unit. The second aggregator, the Document correlator, combines all of the questions and answers that belong to the same spreadsheet. The Camel PDF component renders the resulting document. Figure 12 illustrates the process flow.

    Figure 12: The two aggregators run in sequence: the first pairs questions with answers, and the second groups all the events belonging to the same document.

    This processing flow might be less trivial for the ordinary Kubernetes user, but it's familiar territory for an experienced integrator making good use of Camel's functionality. The following code snippet highlights the main Camel K functionality:

    from("kafka:questions")
       [data extraction here]
       .to("direct:process-stream-events");
    
    from("kafka:answers")
       [data extraction here]
       .to("direct:process-stream-events");
    
    from("direct:process-stream-events")
       .aggregate(header("correlation"), new QAStrategy())
          .completionSize(2).
          .to("direct:aggregate-document-qas");
    
    from("direct:aggregate-qas")
       .aggregate(header("correlation"), new DocStrategy())
          .completionSize(3).
          .to("direct:process-document");
    
    from("direct:process-document")
       .to("pdf:create")
       .to("google-drive://drive-files/insert");

    Note: This code snippet only shows the most relevant parts of the integration. You can find the complete integration details in the GitHub repository for this article.
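    As an illustration of what such an aggregation strategy involves, a minimal class along the lines of the QAStrategy referenced above might look like this (a hypothetical sketch; the real classes are in the repository):

    import org.apache.camel.AggregationStrategy;
    import org.apache.camel.Exchange;

    // Hypothetical sketch: append each incoming answer to its question as
    // the two correlated events arrive (completionSize(2) closes the pair).
    public class QAStrategy implements AggregationStrategy {
        @Override
        public Exchange aggregate(Exchange oldExchange, Exchange newExchange) {
            if (oldExchange == null) {
                return newExchange; // first event of the pair (the question)
            }
            String merged = oldExchange.getIn().getBody(String.class)
                    + "\n" + newExchange.getIn().getBody(String.class);
            oldExchange.getIn().setBody(merged); // question + answer pair
            return oldExchange;
        }
    }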

    In total, the execution required five Camel routes:

    • Two Kafka consumers (questions and answers).
    • Two aggregators.
    • One processor to create the PDF and upload it to Google Drive.

    Running this Camel K component takes one command in the CLI. Here is an example of how to run it on Kubernetes:

    kamel run stage-5-pdf2drive -d camel-jackson -d camel-pdf

    Summary of the integration

    The overall automation contains a total of five Camel K pieces. Four of them are mainly dedicated to streaming data in and out of the Kafka platform: first distributing the questions from the Strategy team to the other departments, and then returning all answers to the originating source. A fifth Camel K process replays both Kafka streams to produce a public report of the team interactions.

    Figure 13: Overview of the full solution, with question distribution, answer updates, and report upload.


    When this integration is implemented, the Strategy team can see all the answers popping up on their screens in no time, as shown in Figure 14. (This assumes a perfect world where teams answer quickly.)

    Figure 14: The integration enables faster responses and decisions across teams.

    Anyone with sufficient access privileges to Google Drive, where Camel K uploads the report, can open the PDF document and inspect its contents, as shown in Figure 15.

    Figure 15: Questions and responses are also available in a PDF.

    Watch a video demonstration

    Watch the following video to see an execution of the use case described in this article:

    Conclusion

    You've seen in this article that Camel K makes it easy to move data between cloud services and corporate systems via stream-based messaging.

    Whatever you want to call them (cloud-native microservices, mediations, integrations, automations, enablers, and so on), Camel K has taken a giant leap forward in running them effortlessly. Its DNA empowers developers to implement event-driven architectures and traditional synchronous flows, with or without serverless capabilities. Camel K offers connectivity to Kubernetes-native platforms (like Knative and Strimzi), along with over 200 connectors and EIP building blocks.

    Crucially, Camel K has widened its audience by introducing no-code building blocks. Anyone can use these to rapidly deploy data flows that connect virtually any data source to any data target. For example, users on Kafka platforms can now enjoy supercharged connectivity with simple configure-and-run KameletBindings.
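    As a taste of how lightweight this can be, a binding similar to stage 1 could also be created directly from the CLI with kamel bind. This is a hypothetical one-liner: the source properties are abbreviated, and the real google-sheets-source also needs the OAuth credentials shown earlier:

    kamel bind google-sheets-source kafka.strimzi.io/v1beta1:KafkaTopic:questions \
      -p source.spreadsheetId=the-spreadsheet-id -p source.range=the-range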

    This article showed you how Camel K resolves integrations with great elegance and simplicity. While this example was relatively simple, Camel K can also solve integrations of the highest complexity, where the rich, out-of-the-box functionality it inherits from Apache Camel excels.

    Next steps with Camel K

    See the following resources to learn more about Camel K and the current GA release:

    • A good place to start learning about Camel K is the Camel K landing page on Red Hat Developer.
    • See Six reasons to love Camel K for an overview of the highlights of using Camel K.
    • Get a hands-on introduction to Camel K with our collection of interactive tutorials.
    • Learn more about Camel K in Apache Camel.
    • Be sure to visit the GitHub repository for the cross-team collaboration demo featured in this article.
    • Review what you learned by watching the video of the cross-team scenario implementation.
    Last updated: October 8, 2024
