Integrating Spring Boot with Red Hat Integration Service Registry

February 15, 2021
Roman Martin Gil
Related topics:
Event-Driven, Java, Kubernetes, Microservices, Spring Boot
Related products:
Red Hat support for Spring Boot

    Most new cloud-native application and microservices designs are based on event-driven architecture (EDA), responding to real-time information by sending and receiving information about individual events. This kind of architecture relies on asynchronous, non-blocking communication between event producers and consumers through an event streaming backbone such as Red Hat AMQ Streams running on top of Red Hat OpenShift. In scenarios where many different events are being managed, it is critical to define a governance model in which each event is defined as an API; that way, producers and consumers can produce and consume checked and validated events. A service registry serves as the datastore for those events defined as APIs.

    From my field experience working with many clients, I've found the most typical architecture consists of the following components:

    • AMQ Streams to deploy Apache Kafka clusters as the streaming backbone.
    • Apicurio Registry for storing events as APIs.
    • Red Hat OpenShift Container Platform to deploy and run the different components.
    • Spring Boot as the framework for developing Java client applications.
    • Avro as the data serialization system to declare schemas as event APIs.

    In this article, you will learn how to easily integrate your Spring Boot applications with Red Hat Integration Service Registry, which is based on the open source Apicurio Registry.

    Red Hat Integration Service Registry

    Service Registry is a datastore for sharing standard event schemas and API designs across APIs and event-driven architectures. Service Registry decouples the structure of your data from your client applications so that you can share and manage your data types and API descriptions at runtime. It also reduces costs by decreasing the overall message size, and it creates efficiencies by increasing the consistent reuse of schemas and API designs across your organization.

    Some of the most common use cases for Service Registry are:

    • Client applications that dynamically push or pull the latest schema updates to or from the service registry at runtime without redeploying.
    • Developer teams that query the registry for existing schemas required for services already deployed in production.
    • Developer teams that register new schemas required for new services in development or rolling to production.
    • Stored schemas used to serialize and deserialize messages. Client applications can reference the stored schemas to ensure the messages they send and receive are compatible with the schemas.

    Service Registry provides the following main features:

    • Support for multiple payload formats for standard event schemas and API specifications.
    • Pluggable storage options, including AMQ Streams, an embedded Infinispan in-memory data grid, and a PostgreSQL database.
    • Registry content management using a web console, REST API commands, Maven plug-ins, or a Java client.
    • Rules for content validation and version compatibility to govern how registry content evolves.
    • Support for the Apache Kafka schema registry, including Kafka Connect integration for external systems.
    • A client serializer/deserializer (SerDes) to validate Kafka and other message types at runtime.
    • A cloud-native Quarkus Java runtime for low memory footprint and fast deployment times.
    • Compatibility with existing Confluent schema registry client applications.
    • Operator-based installation on OpenShift.

    Introducing Service Registry for client applications

    The typical workflow for introducing a new service registry to our architecture is to:

    1. Declare the event schema using common data formats like Apache Avro, a JSON schema, Google Protocol Buffers, OpenAPI, AsyncAPI, GraphQL, Kafka Connect schemas, WSDL, or XML schemas (XSD).
    2. Register the schema as an artifact in the service registry using the Service Registry UI, REST API, Maven plug-in, or a Java client. Client applications can then use the schema to validate that messages conform to the correct data structure at runtime.
    3. Use Kafka producer applications and serializers to encode messages that conform to a specific event schema.
    4. Use Kafka consumer applications and deserializers to validate that messages have been serialized using the correct schema based on a specific schema ID.

    This workflow ensures consistent schema use and helps to prevent data errors at runtime.

    The next sections discuss this workflow at a high level using a Spring Boot application. See the article's GitHub repository for the complete sample application source code.

    Register an Avro schema in Service Registry

    Avro provides a JSON schema specification to declare a variety of data structures. This simple example defines a message event:

    {
      "name": "Message",
      "namespace": "com.rmarting.kafka.schema.avro",
      "type": "record",
      "doc": "Schema for a Message.",
      "fields": [
        {
          "name": "timestamp",
          "type": "long",
          "doc": "Message timestamp."
        },
        {
          "name": "content",
          "type": "string",
          "doc": "Message content."
        }
      ]
    }

    Avro also provides a Maven plug-in to autogenerate Java classes based on the provided schema definitions (.avsc files).
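
    For reference, here's a minimal sketch of that code-generation plug-in configuration, assuming the .avsc files live under src/main/resources/schemas (the same location used in the registration example below) and an ${avro.version} property defined in the project; adjust the paths and version to match your setup:

    <plugin>
       <groupId>org.apache.avro</groupId>
       <artifactId>avro-maven-plugin</artifactId>
       <!-- ${avro.version} is an assumed property; pin it to your Avro release -->
       <version>${avro.version}</version>
       <executions>
          <execution>
             <phase>generate-sources</phase>
             <goals>
                <goal>schema</goal>
             </goals>
             <configuration>
                <!-- Directory containing the .avsc schema definitions (assumed layout) -->
                <sourceDirectory>${project.basedir}/src/main/resources/schemas</sourceDirectory>
                <!-- Where the generated Java classes are written -->
                <outputDirectory>${project.build.directory}/generated-sources/avro</outputDirectory>
             </configuration>
          </execution>
       </executions>
    </plugin>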

    Once we have a schema, we can publish it in Service Registry, which makes it ready for client applications to use at runtime. The Apicurio Registry Maven plug-in makes publishing easy; we add a simple definition to our pom.xml file:

    <plugin>
       <groupId>io.apicurio</groupId>
       <artifactId>apicurio-registry-maven-plugin</artifactId>
       <version>${apicurio.version}</version>
       <executions>
          <execution>
             <phase>generate-sources</phase>
             <goals>
                <goal>register</goal>
             </goals>
             <configuration>
                <registryUrl>${apicurio.registry.url}</registryUrl>
                <artifactType>AVRO</artifactType>
                <artifacts>
                   <!-- Schema definition for TopicIdStrategy strategy -->
                   <messages-value>${project.basedir}/src/main/resources/schemas/message.avsc</messages-value>
                </artifacts>
             </configuration>
          </execution>
       </executions>
    </plugin>

    Note: With the Apicurio Registry Maven plug-in, we could use the Maven build lifecycle to define or extend our application lifecycle management and CI/CD pipelines. For example, we could extend the lifecycle to publish or update schemas whenever a new version is released. Explaining how to do that is beyond the scope of this article, but it's something you could explore.

    As soon as we publish our schema to Service Registry, we can manage it from the Service Registry UI shown in Figure 1.

    Figure 1: Manage your new schema in the Service Registry UI.

    Integrate Spring Boot, Apache Kafka, and AMQ Streams

    Spring Boot provides the Spring Kafka project to produce and consume messages to and from Apache Kafka. Using it is straightforward once we add the following dependency in our pom.xml file:

    <dependency>
       <groupId>org.springframework.kafka</groupId>
       <artifactId>spring-kafka</artifactId>
    </dependency>

    Adding the following property to your application.properties file connects your application with the AMQ Streams cluster:

    spring.kafka.bootstrap-servers = ${kafka.bootstrap-servers}

    Service Registry provides Kafka client serializers/deserializers for Kafka producer and consumer applications. Include the following dependency to add them to your application:

    <dependency>
       <groupId>io.apicurio</groupId>
       <artifactId>apicurio-registry-utils-serde</artifactId>
       <version>${apicurio.version}</version>
    </dependency>

    Produce messages from Spring Boot

    Spring Kafka provides a set of properties and beans to declare Kafka producers to send messages (Avro schema instances, in this case) to Apache Kafka. The two most important properties are spring.kafka.producer.key-serializer, which identifies the serializer class to serialize the Kafka record's key, and spring.kafka.producer.value-serializer, which identifies the serializer class to serialize the Kafka record's value.

    We have to set specific values for these properties so that the serialization process uses the Avro schemas registered in Service Registry:

    • The serializer class for Avro schemas, provided by the Apicurio SerDes library: io.apicurio.registry.utils.serde.AvroKafkaSerializer.
    • The Apicurio service registry endpoint used to validate schemas: apicurio.registry.url.
    • The Apicurio strategy used to look up the schema definition: apicurio.registry.artifact-id.

    Here's a sample configuration for a producer template:

    # Spring Kafka Producer
    spring.kafka.producer.key-serializer=org.apache.kafka.common.serialization.StringSerializer
    spring.kafka.producer.value-serializer=io.apicurio.registry.utils.serde.AvroKafkaSerializer
    spring.kafka.producer.properties.apicurio.registry.url = ${apicurio.registry.url}
    spring.kafka.producer.properties.apicurio.registry.artifact-id = io.apicurio.registry.utils.serde.strategy.TopicIdStrategy

    Using those properties, we can declare a KafkaTemplate bean to send messages (based on our Message schema):

    // Producer factory configured from the Spring Kafka properties (serializers, registry URL, and so on)
    @Bean
    public ProducerFactory<String, Message> producerFactory(KafkaProperties kafkaProperties) {
        Map<String, Object> configProps = kafkaProperties.buildProducerProperties();
    
        return new DefaultKafkaProducerFactory<>(configProps);
    }
    
    // KafkaTemplate used by the application to send Message records
    @Bean
    public KafkaTemplate<String, Message> kafkaTemplate(KafkaProperties kafkaProperties) {
        return new KafkaTemplate<>(producerFactory(kafkaProperties));
    }

    Finally, we can send messages, serialized with the schema registered in Service Registry, to Apache Kafka:

    @Autowired
    private KafkaTemplate<String, Message> kafkaTemplate;
    
    // Send the message and wait synchronously for the broker's acknowledgment
    SendResult<String, Message> record = kafkaTemplate.send(topicName, message).get();

    The message will be serialized with the global ID of the schema used for this record embedded in it. Kafka consumer applications will use that global ID later to retrieve the schema and validate the messages they consume.
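
    To round out the producer side, here's a minimal sketch of building a Message instance with the builder generated from the Avro schema and sending it through the template. The MessageProducerService class, the messages topic name, and the synchronous error handling are assumptions made for this example:

    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;
    import org.springframework.beans.factory.annotation.Autowired;
    import org.springframework.kafka.core.KafkaTemplate;
    import org.springframework.kafka.support.SendResult;
    import org.springframework.stereotype.Service;
    
    import com.rmarting.kafka.schema.avro.Message;
    
    // Hypothetical service class wrapping the KafkaTemplate declared above
    @Service
    public class MessageProducerService {
    
        private static final Logger LOGGER = LoggerFactory.getLogger(MessageProducerService.class);
    
        private static final String TOPIC_NAME = "messages"; // assumed topic name
    
        @Autowired
        private KafkaTemplate<String, Message> kafkaTemplate;
    
        public void sendSampleMessage(String content) throws Exception {
            // Build an instance of the Avro-generated Message class
            Message message = Message.newBuilder()
                    .setTimestamp(System.currentTimeMillis())
                    .setContent(content)
                    .build();
    
            // Send the record and block until the broker acknowledges it;
            // the Apicurio serializer embeds the schema's global ID in the record
            SendResult<String, Message> result = kafkaTemplate.send(TOPIC_NAME, message).get();
            LOGGER.info("Message sent to partition {} at offset {}",
                    result.getRecordMetadata().partition(), result.getRecordMetadata().offset());
        }
    }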

    Consume messages from Spring Boot

    Spring Kafka also provides properties and beans to declare Kafka consumers to consume messages (Avro schema instances) from the Apache Kafka cluster. The most important properties are spring.kafka.consumer.key-deserializer, which identifies the deserializer class to deserialize the Kafka record's key, and spring.kafka.consumer.value-deserializer, which identifies the deserializer class to deserialize the Kafka record's value.

    Once again, we have to set specific values, in this case to allow the deserialization process to use the Avro schemas registered in Service Registry:

    • The deserializer class for Avro schemas, provided by the Apicurio SerDes library: io.apicurio.registry.utils.serde.AvroKafkaDeserializer.
    • The Apicurio service registry endpoint used to retrieve valid schemas: apicurio.registry.url.

    Here's a sample configuration for a consumer:

    # Spring Kafka Consumer
    spring.kafka.listener.ack-mode = manual
    spring.kafka.consumer.group-id = spring-kafka-clients-sb-sample-group
    spring.kafka.consumer.auto-offset-reset = earliest
    spring.kafka.consumer.enable-auto-commit=false
    spring.kafka.consumer.key-deserializer = org.apache.kafka.common.serialization.StringDeserializer
    spring.kafka.consumer.value-deserializer = io.apicurio.registry.utils.serde.AvroKafkaDeserializer
    spring.kafka.consumer.properties.apicurio.registry.url = ${apicurio.registry.url}
    # Use Specific Avro classes instead of the GenericRecord class definition
    spring.kafka.consumer.properties.apicurio.registry.use-specific-avro-reader = true

    We can declare a KafkaListener to consume messages (based on our Message schema) as follows:

    @KafkaListener(topics = {"messages"})
    public void handleMessages(@Payload Message message,
                               @Headers Map<String, Object> headers,
                               Acknowledgment acknowledgment) {
        LOGGER.info("Received record from Topic-Partition '{}-{}' with Offset '{}' -> Key: '{}' - Value '{}'",
                headers.get(KafkaHeaders.RECEIVED_TOPIC), headers.get(KafkaHeaders.RECEIVED_PARTITION_ID),
                headers.get(KafkaHeaders.OFFSET), headers.get(KafkaHeaders.MESSAGE_KEY),
                message.get("content"));
    
        // Commit message
        acknowledgment.acknowledge();
    }

    The schema is retrieved by the deserializer, which uses the global ID written into the message being consumed. With that, we're done!

    Summary

    In this article, you've seen how to integrate Spring Boot applications with Red Hat Integration Service Registry and AMQ Streams to build your event-driven architecture. Using these components together gives you the following benefits:

    • Consistent schema use between client applications.
    • Help with preventing data errors at runtime.
    • A defined governance model in your data schemas (such as for versions, rules, and validations).
    • Easy integration with Java client applications and components.

    See the Red Hat Integration homepage and Service Registry documentation for more about these components. Also, see First look at the new Apicurio Registry UI and Operator for more about using Service Registry in any Kubernetes or OpenShift cluster.

    Last updated: September 27, 2024
