Scalable Microservices through Messaging

May 26, 2016
Bilgin Ibryam
Related topics:
Java
Related products:
Streams for Apache Kafka, Red Hat Fuse


    Microservices are everywhere nowadays, and so is the idea of using service choreography (instead of service orchestration) for microservices interactions. In this article I describe how to set up service choreography using ActiveMQ virtual topics, which also enables scalable, event-based service interactions.

    Service Interaction Styles

    There are two main types of service interaction: synchronous and asynchronous.

    With synchronous interactions, the service consumer makes a request and blocks until the operation completes and a response is received. The HTTP protocol is a great example of a synchronous interaction, and this type of interaction is usually associated with the request/response style. (Of course, it is also possible to do request/response with asynchronous requests or event messaging, by registering a callback for the result, but that is a less common use case.)

    With an asynchronous interaction style, the service consumer makes a request but doesn't wait for the operation to complete. As soon as the request is acknowledged as received, the service consumer moves on. This type of interaction allows a publish/subscribe style of service communication --- e.g., instead of a service consumer invoking an operation on another service, a producer raises an event and expects interested consumers to react.
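    To make the contrast concrete, here is a minimal Camel sketch (the endpoint and queue names are illustrative, not from a real system): the first route blocks on an HTTP call until a response comes back, while the second hands the message over to a broker queue and moves on.

    import org.apache.camel.ExchangePattern;
    import org.apache.camel.builder.RouteBuilder;

    public class InteractionStylesRoute extends RouteBuilder {
        @Override
        public void configure() {
            // Synchronous request/response: the caller blocks until the HTTP reply arrives.
            from("direct:placeOrderSync")
                .to("http://orders.example.com/api/orders");   // hypothetical HTTP endpoint

            // Asynchronous fire-and-forget: the message is handed to the broker
            // and the caller moves on without waiting for the outcome.
            from("direct:placeOrderAsync")
                .setExchangePattern(ExchangePattern.InOnly)
                .to("activemq:queue:OrderQueue");              // hypothetical queue
        }
    }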

    Apart from these technical considerations, there is also another aspect to consider with service interactions: coupling and responsibility.

    If service A has to interact with service B, is it the responsibility of service A to invoke service B (orchestration) or is it the responsibility of service B to subscribe for the right events (choreography)?

    With service orchestration, there is a central entity (service A itself in our case) that knows which other services have to be called. With the choreography approach, this responsibility is delegated to the individual services; each one is responsible for subscribing to the events it is interested in.

    To read more about this topic, check out Chapter 4 of the excellent Building Microservices book. For the rest of this article, we will focus on doing service choreography using messaging.

    Service Orchestration Through Messaging

    Service orchestration through messaging is achieved with queues. A queue implements load-balancer semantics using the competing consumers pattern and makes sure that each message is processed by only one consumer.

    Let's say there is a "Customer Service" that has to interact with "Email Service".

    The easiest way to implement this is to use a queue and let "Customer Service" send a message to "Email Queue". If "Customer Service" also has to interact with "Loyalty Point Service", it has to send another message --- this time to "Loyalty Point Queue".

    With this approach, it is the responsibility of "Customer Service" to know about "Loyalty Point Service" and "Email Service", and subsequently send the right messages to the corresponding queues. In short, the whole interaction is orchestrated by "Customer Service".

    One advantage of using queues is that they make it easy to scale the consumers. We could start multiple instances of "Loyalty Point Service" and "Email Service", and the queues would load balance the messages among the consumers.
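    As an illustration, a minimal Camel sketch of the orchestration approach could look like the following (queue, endpoint, and bean names are illustrative): the "Customer Service" route knows about both downstream queues, and the concurrentConsumers option shows how competing consumers let a queue consumer scale.

    import org.apache.camel.builder.RouteBuilder;

    public class CustomerServiceOrchestrationRoute extends RouteBuilder {
        @Override
        public void configure() {
            // The "Customer Service" itself knows about both downstream services
            // and sends a message to each of their queues (orchestration).
            from("direct:customerCreated")
                .to("activemq:queue:EmailQueue")
                .to("activemq:queue:LoyaltyPointQueue");

            // Competing consumers: several concurrent consumers (or several service
            // instances) share the work on a queue, so the consumer side scales easily.
            from("activemq:queue:EmailQueue?concurrentConsumers=5")
                .to("bean:emailService");   // hypothetical bean implementing the Email Service
        }
    }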


    Service Choreography Through Messaging

    With the service choreography approach, "Customer Service" doesn't have any knowledge of "Loyalty Point Service" or "Email Service". "Customer Service" simply emits an event to "Customer Topic", and it is the responsibility of "Loyalty Point Service" and "Email Service" to know about the Customer event contract and subscribe to the right topic --- the publish/subscribe semantics of the topic ensure that every event is distributed to both subscribers.
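    A minimal Camel sketch of the choreography approach might look like this (topic and bean names are illustrative; in practice each subscriber route would live in its own service): the producer only publishes the event, and each interested service subscribes on its own.

    import org.apache.camel.builder.RouteBuilder;

    public class CustomerEventChoreographyRoutes extends RouteBuilder {
        @Override
        public void configure() {
            // "Customer Service": raise the event without knowing who listens.
            from("direct:customerCreated")
                .to("activemq:topic:CustomerTopic");

            // "Email Service" (normally a separate application): subscribes to the event.
            from("activemq:topic:CustomerTopic")
                .to("bean:emailService");

            // "Loyalty Point Service" (also a separate application): subscribes independently.
            from("activemq:topic:CustomerTopic")
                .to("bean:loyaltyPointService");
        }
    }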


    Scaling Service Choreography

    Since topics implement publish/subscribe semantics rather than competing consumers, scaling the consumers becomes harder. If we scale "Loyalty Point Service" horizontally and start two instances, both instances will receive the same event and there won't be any benefit from scaling (unless the services are idempotent).

    ActiveMQ Virtual Topics to the Rescue

    So what we need is some kind of mixture between topic and queue semantics. We want the "Customer Service" to publish an event using publish/subscribe semantics so that all services receive the event, but we also want competing consumers, so that individual service instances can load balance events and scale.

    There are a number of ways we could achieve that with Camel and ActiveMQ:

    • The most obvious one that comes to (my) mind is to have a simple Camel route that consumes events from "Customer Topic" and sends them to both "Loyalty Point Queue" and "Email Queue" (see the sketch after this list). This is easy to implement, but every time a new service becomes interested in the "Customer Service" events, we have to update the Camel route. Also, if the Camel route runs in a separate process from the broker, there is unnecessary networking overhead only to move messages from a topic to a set of queues in the same message broker.
    • An improvement to the above approach would be to run the Camel routes in the ActiveMQ broker process using the ActiveMQ Camel plugin. In that case, you still have to update the Camel route every time the subscribers change, but the routing happens in the broker process itself, so there is no networking overhead.
    • An even better solution is to have the queues subscribed to the topic without any coding, using a declarative approach based on ActiveMQ virtual topics (hence the whole reason for writing this article).
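    Here is a minimal sketch of the first option from the list above (names are illustrative): a Camel route that fans every event out from the topic to one queue per interested service. The drawback is visible right away --- every new subscriber means editing this route.

    import org.apache.camel.builder.RouteBuilder;

    public class TopicToQueuesBridgeRoute extends RouteBuilder {
        @Override
        public void configure() {
            // Fan every event from the topic out to one queue per interested service.
            // Adding a new subscriber means editing this route.
            from("activemq:topic:CustomerTopic")
                .to("activemq:queue:EmailQueue",
                    "activemq:queue:LoyaltyPointQueue");
        }
    }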

    Virtual topics are a declarative way of subscribing queues to a topic, simply by following a naming convention --- all you have to do is define (or use the default) naming convention for your topics and queues.

    For example, if we create a topic with a name matching the VirtualTopic.> expression, such as VirtualTopic.CustomerTopic, and then have the "Loyalty Point Service" consume from the Consumer.LoyaltyPoint.VirtualTopic.CustomerTopic queue, the message broker will forward every event from the VirtualTopic.CustomerTopic topic to the Consumer.LoyaltyPoint.VirtualTopic.CustomerTopic queue.

    Then we could scale Loyalty Point Service by starting multiple service instances, all of which consume from Consumer.LoyaltyPoint.VirtualTopic.CustomerTopic queue.

    Similarly, we can later create a queue for the Email Service by following the same naming convention: Consumer.Email.VirtualTopic.CustomerTopic. This feature allows us to simply name our topics and queues in a specific way and have them subscribed without any coding.
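    Putting it all together, the routes below are a minimal sketch of the virtual topic convention (the topic and queue names follow ActiveMQ's default VirtualTopic.> / Consumer.*.VirtualTopic.> pattern; the bean endpoints are illustrative).

    import org.apache.camel.builder.RouteBuilder;

    public class VirtualTopicRoutes extends RouteBuilder {
        @Override
        public void configure() {
            // "Customer Service" publishes to the virtual topic.
            from("direct:customerCreated")
                .to("activemq:topic:VirtualTopic.CustomerTopic");

            // "Loyalty Point Service" consumes from its own queue; multiple instances
            // of this route compete for messages, so the service scales horizontally.
            from("activemq:queue:Consumer.LoyaltyPoint.VirtualTopic.CustomerTopic")
                .to("bean:loyaltyPointService");

            // "Email Service" receives its own copy of every event via its own queue.
            from("activemq:queue:Consumer.Email.VirtualTopic.CustomerTopic")
                .to("bean:emailService");
        }
    }

    Because each service consumes from its own queue, starting several instances of, say, "Loyalty Point Service" gives us competing consumers on that queue, while every service still receives its own copy of each event.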

    Final thoughts

    This is only one of the many patterns I have described in my recently published Camel Design Patterns book. Camel is quite often used with ActiveMQ, so you can also find some ActiveMQ patterns in the book.

    Another way to scale microservices using choreography can be achieved through event sourcing. You can find a nice blog post describing it here.


    Editor's note: Apache Camel is part of Red Hat JBoss Fuse, and is available for download as part of your no-cost Red Hat Developers subscription.

    Last updated: January 19, 2023
