
How to manage and preserve Kafka Connect offsets smoothly

June 13, 2024
Abdellatif Bouchama
Related topics: Containers, Data integration, Databases, Integration, Kafka, Stream processing
Related products: Streams for Apache Kafka, Red Hat OpenShift

    Have you ever faced the challenge of needing to recreate a Kafka Connector while preserving its offset position? If so, we have fantastic news for you!

    Picture this: your Kafka Connect cluster is perfectly set up, your connectors are finely tuned, and everything is running smoothly. Suddenly, you need to recreate one of your connectors. The thought of losing all the offset information is daunting—nobody wants to risk data duplication or loss.

    But worry no more, because Red Hat AMQ Streams 2.6 on Red Hat OpenShift (based on Apache Kafka 3.6.0 and Strimzi 0.38.x) has introduced a revolutionary feature: first-class offsets support with KIP-875. This new feature includes REST API endpoints specifically designed for managing connector offsets, making it seamless to preserve your offset positions when recreating connectors.

    The following table summarizes the key REST API actions you can use to manage connector offsets effectively:

    Action                                   Description
    GET /connectors/{connector}/offsets      Retrieve the offsets for a connector; the connector must exist
    PATCH /connectors/{connector}/offsets    Alter the offsets for a connector; the connector must exist and must be in the STOPPED state
    DELETE /connectors/{connector}/offsets   Reset the offsets for a connector; the connector must exist and must be in the STOPPED state

    To illustrate, imagine you need to create a new Kafka Connector with a different name but want it to start from the same offset position as the old one. With the new feature in Kafka 3.6, you can effortlessly transfer offsets from the old connector to the new one, ensuring continuity and data integrity (Figure 1).

    Figure 1: Transferring Kafka Connector offsets between clusters: stop the original connector, retrieve its offsets, delete it, recreate it in the new cluster in the stopped state, set the offsets, and restart it.

    So, how does it work?

    Let's break it down:

    1. Stop the connector: First, stop the connector in your original cluster using the /connectors/{name}/stop endpoint.

      a. Endpoint: PUT /connectors/{name}/stop

      b. Action: Stop the connector in the original Kafka Connect cluster. In AMQ Streams 2.6, you can simply add spec.state: stopped to the KafkaConnector CR.
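
      For instance, reusing the pod and connector names from the examples below, the REST call would look something like this:

      # stop the running connector through the Connect REST API
      oc rsh debezium-connect-connect-0 curl -s -X PUT localhost:8083/connectors/mysql-connector/stop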

    2. Retrieve offsets: Next, fetch the offsets for the connector using the /connectors/{name}/offsets endpoint.

      a. Endpoint: GET /connectors/{name}/offsets

      b. Action: Retrieve offsets for the connector in the original cluster.

      For instance:

      oc rsh debezium-connect-connect-0 curl localhost:8083/connectors/mysql-connector/offsets
      {"offsets":[{"partition":{"server":"mysql"},"offset":{"ts_sec":1716802895,"file":"binlog.000002","pos":1424}}]}
    3. Delete the connector: Once you have the offsets, delete the connector using the /connectors/{name} endpoint.

      a. Endpoint: DELETE /connectors/{name}

      b. Action: Delete the connector from the original Kafka Connect cluster. In AMQ Streams, you can just delete the KafkaConnector CR.

      For instance:

      oc delete kafkaconnector mysql-connector
      kafkaconnector.kafka.strimzi.io "mysql-connector" deleted
    4. Recreate the connector: Set up your new cluster and recreate the connector.
    5. Stop the new connector: Stop the newly created connector using the /connectors/{name}/stop endpoint.

      a. Endpoint: PUT /connectors/{name}/stop

      b. Action: Create and stop the connector in the new Kafka Connect cluster. Both steps 4 and 5 can be performed at once by applying the KafkaConnector CR with spec.state: stopped, as in the sketch below.
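
      For instance, a minimal KafkaConnector manifest for this might look like the following sketch. The connector class and config are placeholders based on the Debezium MySQL example above, and the strimzi.io/cluster label is assumed to match the new Connect cluster's name:

      oc apply -f - <<'EOF'
      apiVersion: kafka.strimzi.io/v1beta2
      kind: KafkaConnector
      metadata:
        name: mysql-connector-new
        labels:
          strimzi.io/cluster: debezium-connect-new   # assumed from the pod name used below
      spec:
        class: io.debezium.connector.mysql.MySqlConnector   # placeholder connector class
        tasksMax: 1
        state: stopped   # covers steps 4 and 5 in one apply: created, but not started
        config: {}       # your actual connector configuration goes here
      EOF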

      Check the status:

      oc rsh debezium-connect-new-connect-0 curl localhost:8083/connectors/mysql-connector-new/status
      {"name":"mysql-connector-new","connector":{"state":"STOPPED","worker_id":"debezium-connect-new-connect-0.debezium-connect-new-connect.dbz-mysql.svc:8083"},"tasks":[],"type":"source"}
    6. Set offsets: Now comes the magic. Use the /connectors/{name}/offsets endpoint to set the offsets in the new cluster, using the information you retrieved earlier.

      a. Endpoint: PATCH /connectors/{name}/offsets

      b. Action: Set the connector's offset in the new Kafka Connect cluster, reusing the output obtained from the original cluster.

      For instance:

      oc exec -i debezium-connect-new-connect-0 -- curl -X PATCH \
        -H "Accept:application/json" \
        -H "Content-Type:application/json" \
        http://localhost:8083/connectors/mysql-connector-new/offsets -d @- <<'EOF'
      {
        "offsets": [
          {
            "partition": {
              "server": "mysql"
            },
            "offset": {
              "ts_sec": 1716802895,
              "file": "binlog.000002",
              "pos": 1424
            }
          }
        ]
      }
      EOF

      Output:

      {"message":"The Connect framework-managed offsets for this connector have been altered successfully. However, if this connector manages offsets externally, they will need to be manually altered in the system that the connector uses."}%
    7. Restart the connector: Finally, restart the connector using the /connectors/{name}/resume endpoint, and voilà! Your connector is up and running in its new home, with all the offset goodness preserved.

      a. Endpoint: PUT /connectors/{name}/resume

      b. Action: Restart the connector in the new Kafka Connect cluster. In AMQ Streams, you just have to change the state in the KafkaConnector CR to state: running, as shown below.
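
      For instance, either of the following might do it; the oc patch variant flips the CR state declaratively, while the curl variant calls the REST endpoint directly:

      # declarative route: set the KafkaConnector CR state back to running
      oc patch kafkaconnector mysql-connector-new --type merge -p '{"spec":{"state":"running"}}'

      # REST route: resume the connector on the new cluster's worker
      oc rsh debezium-connect-new-connect-0 curl -s -X PUT localhost:8083/connectors/mysql-connector-new/resume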

    If you're eager to try this out yourself, we've prepared an end-to-end tutorial, complete with instructions and examples, so you can have fun exploring this new feature: check it out here.

    Conclusion

    This new feature opens up a world of possibilities for Kafka Connect users, from easier development workflows to improved fault tolerance and more reliable data processing. So go ahead, give it a spin, and let us know what you think.

    Note

    Managing the offsets is only possible via the REST API; there's currently no support in the KafkaConnect/KafkaConnector CRs. There's a Strimzi proposal to add support in the CRs: Strimzi Proposal. For more details about each endpoint, see the Kafka Connect REST API documentation.

    For additional context and details on the 3.5.0 release, you can refer to the original post on the Apache Kafka blog here.

    Enjoy!
