Why Red Hat Service Interconnect version 2?

February 3, 2025
Justin Ross
Related topics:
DevSecOps, Edge computing, GitOps, Hybrid cloud, Integration, Kubernetes, Application modernization
Related products:
Red Hat Service Interconnect

    Red Hat Service Interconnect v2 is a substantial redesign driven by four years of user feedback and experience.  This post outlines the core motivations behind these changes.

Service Interconnect is based on the open source project Skupper.  It is a multi-site, multi-platform application connectivity tool used for on-prem resource access, application modernization, and edge connectivity.  Customers such as Australia and New Zealand Banking Group Limited have used Service Interconnect to reduce risk and accelerate deployment in hybrid cloud environments.

    Addressing pain points

    Over time, we’ve received a lot of valuable feedback about what parts of Service Interconnect make our users’ lives difficult.  We’ve worked to fix them in v2.

For example, a major problem in v1 was the need to recreate sites in order to change site configuration.  Recreating a site further meant recreating its site-to-site links.  V2 has a new controller implementation designed to handle dynamic configuration changes.  Most settings that required recreating a site in v1 can now be changed on the fly in v2.

Another problem in v1 was the use of labels and annotations on standard Kubernetes resources to carry configuration.  This sometimes led to ownership conflicts, where the Service Interconnect controller and other Kubernetes components tried to update the same resources concurrently.  V2 uses dedicated custom resources for all configuration, so there is no conflict of ownership: the custom resources containing Service Interconnect configuration are managed exclusively by the Service Interconnect controller.
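As an illustration, site configuration in v2 lives in a dedicated custom resource rather than in labels or annotations on other objects.  A minimal sketch, assuming the `skupper.io/v2alpha1` API group used by the Skupper v2 resources (exact field names may differ in your installed version):

```yaml
# A site defined as a dedicated custom resource, owned by the
# Service Interconnect controller and safe to edit in place.
apiVersion: skupper.io/v2alpha1
kind: Site
metadata:
  name: west
  namespace: west
spec:
  # Allow other sites to link to this one (field name assumed).
  linkAccess: default
```

Because the resource has a single owner, changing a field like `linkAccess` is an ordinary `kubectl apply`, not a delete-and-recreate.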

    A fully declarative approach

    Service Interconnect in v1 was initially focused on its command-line interface (CLI).  As a result, much of the control logic was embedded in the CLI implementation.

    Many of our early users were excited about Service Interconnect’s capabilities and ease of use, but as they turned their attention to how they would deploy and manage Service Interconnect at scale, they found they needed a declarative interface, something easier to manage with GitOps and easier to integrate with other tools.  In v1, we incrementally added declarative capabilities, but the result was fragmented and incomplete.

    V2 is declarative from the start.  Everything in v2 is founded on a uniform, declarative API.  All of the control logic resides in the controller.  The API and the underlying model are consistently applied across all Service Interconnect interfaces and platforms.  This shift has significant advantages for users embracing GitOps to manage their Service Interconnect networks and provides a solid foundation for a wide range of future integrations.
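For example, site-to-site linking, which in v1 was typically driven by imperative CLI token commands, can be expressed entirely as resources in v2 and kept in a Git repository.  A sketch based on the Skupper v2 AccessGrant and AccessToken resources (field names assumed; the placeholder values are copied from the grant's status once the controller populates it):

```yaml
# In the site accepting links: grant permission for one other site to link in.
apiVersion: skupper.io/v2alpha1
kind: AccessGrant
metadata:
  name: west-grant
spec:
  redemptionsAllowed: 1
  expirationWindow: 1h
---
# In the linking site: redeem the grant to establish the link.
apiVersion: skupper.io/v2alpha1
kind: AccessToken
metadata:
  name: link-to-west
spec:
  code: <from-grant-status>
  url: <from-grant-status>
  ca: <from-grant-status>
```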

    An explicit model for service exposure

In v1, service exposure was partially automatic and implicit.  Exposing a service in one site would automatically propagate it to all other sites on the network via a mechanism called service sync.  While convenient, this model had its problems.  Users often did not want or need every service exposed to every site.  The implicit propagation sometimes led to conflicts between user-created and Service Interconnect-created Service resources.  And the service sync mechanism itself was a source of complexity and bugs.

    For v2, we wanted something simpler, with better separation of concerns.  We now have a fully explicit model for service exposure based on Listener and Connector resources.  These give users granular control over which services are exposed and where they are exposed.  The new model is more flexible, and most importantly, it is now easier to understand and more reliable.
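A minimal sketch of the explicit model, assuming the Skupper v2 resource shapes: a Connector in the site that runs the workload, and a Listener in each site that should consume it, matched by a shared routing key:

```yaml
# In the site running the workload: expose pods matching the selector.
apiVersion: skupper.io/v2alpha1
kind: Connector
metadata:
  name: backend
spec:
  routingKey: backend
  selector: app=backend
  port: 8080
---
# In a consuming site: create a local service that routes to the connector.
apiVersion: skupper.io/v2alpha1
kind: Listener
metadata:
  name: backend
spec:
  routingKey: backend
  host: backend
  port: 8080
```

Nothing is exposed unless both halves exist, so the scope of each exposed service is explicit in the resources themselves.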

    A basis for better documentation

    Feedback on Service Interconnect v1's documentation highlighted a disparity: while the CLI documentation was well received, the declarative configuration documentation was fragmented and difficult to use. This stemmed from the way v1's declarative interfaces evolved organically, leading to inconsistencies and a lack of common structure.

    Service Interconnect v2 is a clean break from this. Its design is centered around an orderly, consistent API.  This API, through which all configurations now flow, is the key to producing comprehensive reference documentation.

    Further, v2 introduces a more clearly defined and coherent model for application networking. This allows us to document the underlying concepts in a detailed and accessible way. Because this model is consistently applied across the resources and CLI, we can create documentation that connects the concepts, configuration, and interfaces.

    Designed for multiple platforms

    In v1, the initial focus was primarily on Kubernetes. While support for other platforms was added later, this was done in a way that introduced platform-specific differences and inconsistencies into the user interfaces. This made managing multi-platform deployments more complex than necessary.

    V2 supports Kubernetes, Docker, Podman, and Linux.  These platforms all share a common underlying model and a unified interface.  There are still important platform differences, but we have consciously avoided any unnecessary divergence.  This commonality across platforms simplifies integrations, reduces the learning curve, and makes Service Interconnect easier to operate in hybrid environments.

    Helping platform and application teams work together

    Service Interconnect’s evolution is driven by a deeper understanding of the needs of both platform teams and app teams, and the crucial synergy between them.

    Platform teams are increasingly focused on building better platforms that enable their app teams to deliver software faster.  This requires a balance between providing autonomy and maintaining control.  While v1 focused on app team needs, v2 is all about enabling the platform and app teams to work together effectively.

    Central to this collaborative approach is the concept of "guardrails".  Platform teams need the ability to establish boundaries and guidelines within which app teams can operate independently.  These guardrails are essential to making self-service application networking safe and reliable.  By giving app teams the ability to manage their own networking within defined limits, platform teams can foster faster development cycles without sacrificing stability or security.
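One way to implement such guardrails is with standard Kubernetes RBAC: the platform team keeps ownership of site and link resources while granting app teams control over service exposure in their own namespaces.  A hypothetical sketch (role name and namespace invented; resource names assume the Skupper v2 CRDs):

```yaml
# A namespace-scoped Role letting an app team manage service exposure,
# while Site and Link resources remain under platform-team control.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: app-team-networking
  namespace: team-a
rules:
  - apiGroups: ["skupper.io"]
    resources: ["listeners", "connectors"]
    verbs: ["get", "list", "watch", "create", "update", "delete"]
```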

    Service Interconnect v2: a foundation for the future of application networking

    Service Interconnect v2 is a big step forward, directly addressing the limitations of v1 and incorporating valuable feedback from our users. V2 is easier to operate, maintain, and extend.

    The strengthened foundation of v2 puts us in position to pursue a broader range of integrations with other essential tools and platforms, such as Ansible, cert-manager, and Developer Hub, adding to Service Interconnect's versatility and value.  Focusing on the needs of platform teams and app teams working in tandem will, we believe, drive broader adoption of Service Interconnect.

    We encourage you to explore v2 and consider migrating to experience these improvements firsthand.

    • Skupper v2 overview and resources
    • Service Interconnect on developers.redhat.com
    • Service Interconnect on redhat.com
    • Try Red Hat Service Interconnect
    Disclaimer: Please note the content in this blog post has not been thoroughly reviewed by the Red Hat Developer editorial team. Any opinions expressed in this post are the author's own and do not necessarily reflect the policies or positions of Red Hat.
