Multi-primary multi-cluster setup in OpenShift Service Mesh

Now in developer preview in OpenShift Service Mesh 2.6

July 2, 2024
Eoin Fennessy, Jacek Ewertowski
Related topics:
Kubernetes, Microservices, Service Mesh
Related products:
Red Hat OpenShift Service Mesh

Istio’s multi-cluster features will be generally available in Red Hat OpenShift Service Mesh 3. To help users prepare for these new features and assess potential use cases, the multi-primary deployment model has been made available for developer preview in OpenShift Service Mesh (OSSM) 2.6.

In addition to detailing how to enable multi-primary in OSSM 2.6, this article will provide an overview of multi-cluster topologies, touch on some security considerations, compare multi-cluster and mesh federation, and outline some key features and potential use cases of multi-cluster with OSSM.

Multi-cluster topologies and multi-primary

Istio’s multi-cluster service mesh topologies can be deployed using the multi-primary model, wherein multiple clusters contain a deployment of the Istiod control plane component, or the primary-remote model, wherein remote clusters do not contain a mesh control plane and instead connect to the Istiod instance running on the primary cluster.

If clusters reside on the same network, cross-cluster connections between service workloads can be established directly; otherwise, dedicated east-west gateways are exposed on each cluster to handle cross-cluster traffic, as shown in Figure 1.

Figure 1: Multi-primary on different networks.

To facilitate endpoint discovery on a remote cluster, the kube-apiserver for the remote cluster must be accessible to the control plane on the primary cluster. This is achieved by generating a "remote secret" on the primary cluster containing the necessary credentials to access the remote cluster’s kube-apiserver.

In multi-primary topologies, trust needs to be established between each primary cluster in the mesh. This can be achieved by, for example, configuring each cluster’s Istiod instance to use intermediate certificate authorities generated using a common root CA.

Security considerations

It is a requirement for Istio’s multi-cluster topologies that the kube-apiserver for each remote cluster can be accessed by primary clusters’ control planes. For multi-cluster deployments on different networks, the kube-apiserver must be exposed to the internet.

While istiod authenticates using a ServiceAccount configured with limited permissions, and the connection to the remote cluster’s kube-apiserver is encrypted, additional security measures are worth considering when exposing a cluster’s kube-apiserver to the internet, such as network firewall rules that limit ingress to the API server to specific IP addresses or ranges. More general information about securely configuring the kube-apiserver can be found in this article.

Federation and Istio’s multi-cluster topologies

OpenShift Service Mesh’s federation deployment model offers features comparable to Istio’s multi-cluster topologies, such as allowing workloads to be shared across multiple clusters. However, while multi-cluster topologies enable primary clusters to discover all services on remote clusters, federation lets users limit which services are exported to each remote cluster. This creates a stronger separation between administrative domains in which minimal trust is assumed between federated meshes, making federation ideal for situations where each mesh is managed by a different team; Istio’s multi-cluster topologies will more often be managed by a common team.

With federation, trust bundles are shared between each federated mesh and traffic between meshes is always authenticated and encrypted using mTLS. However, clusters’ trust domains and SPIFFE information are not shared with requests, making it impossible to configure authorization for remote service accounts.

Unlike Istio’s multi-cluster topologies, exposing the kube-apiserver to the internet is not required when federating meshes. This is because discovery of services on remote clusters is enabled by configuring dedicated ingress and egress gateways.

Both federation and multi-primary have scalability implications that may need to be considered. With federation, each cluster requires dedicated ingress and egress gateways for each ServiceMeshPeer in the federation. Assuming each peer must connect to all other peers, n * (n - 1) ingress and egress gateways must be configured, where n is the number of peers in the federation. For large federations, this introduces two scalability considerations:

  1. Operational complexity due to the large quantity of gateway configurations that must be managed.
  2. Increased running costs due to the requirement for each ingress gateway to be configured with a network load balancer.

With Istio’s multi-cluster topologies, meshes that span many clusters, each potentially containing a large number of services, may experience performance issues due to the volume of services that must be processed and propagated to each proxy.

Note that as mentioned in the blog post introducing a new operator for Istio, the migration of federation features to the upstream Istio community or a separate project is being planned, and thus the feature will evolve in OpenShift Service Mesh 3 and beyond. 

Features and use cases

Traffic management rules can be configured for multi-cluster meshes based on cluster topology. By default, cross-cluster traffic for a service deployed on multiple clusters will be load balanced equally across all clusters. Traffic can be kept in-cluster by configuring MeshConfig.serviceSettings.settings.clusterLocal: true for specified hosts. Additionally, the topology.istio.io/cluster label can be referenced in DestinationRule subsets and VirtualService matchLabels to define traffic management rules based on cluster topology.
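As an illustrative sketch of these settings (the reviews host, resource names, and namespaces are hypothetical), keeping a service cluster-local via the control plane's meshConfig and defining per-cluster DestinationRule subsets might look like:

```yaml
# Control plane meshConfig fragment: keep traffic for a specific host
# on the local cluster instead of load balancing it across clusters.
spec:
  meshConfig:
    serviceSettings:
    - settings:
        clusterLocal: true
      hosts:
      - "reviews.bookinfo.svc.cluster.local"
---
# DestinationRule defining per-cluster subsets via the
# topology.istio.io/cluster label, usable in VirtualService routes.
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: reviews-per-cluster
  namespace: bookinfo
spec:
  host: reviews.bookinfo.svc.cluster.local
  subsets:
  - name: west
    labels:
      topology.istio.io/cluster: west-cluster
  - name: east
    labels:
      topology.istio.io/cluster: east-cluster
```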

Locality load balancing rules such as locality-based failover and locality-weighted distribution can be configured for multi-cluster topologies. Locality failover allows traffic destined for specified hosts to fail over from one locality to another based on outlier detection rules. Locality-weighted distribution allows traffic originating in one locality to be weighted across multiple localities.

Kubernetes’ well-known labels topology.kubernetes.io/region and topology.kubernetes.io/zone can be referenced in a DestinationRule’s spec.trafficPolicy.loadBalancer.localityLbSetting to configure locality load balancing rules. Additionally, the label topology.istio.io/subzone can be added to nodes and referenced in DestinationRules to provide more granular control over locality load balancing rules.
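As a sketch (the host and region names are hypothetical), a DestinationRule enabling locality failover could look like the following; note that an outlierDetection section is required for failover to take effect:

```yaml
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: productpage-locality
  namespace: bookinfo
spec:
  host: productpage.bookinfo.svc.cluster.local
  trafficPolicy:
    loadBalancer:
      localityLbSetting:
        enabled: true
        failover:                 # traffic fails over between regions
        - from: us-east-1
          to: us-west-1
    outlierDetection:             # required for failover to trigger
      consecutive5xxErrors: 1
      interval: 10s
      baseEjectionTime: 1m
```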

Kiali currently offers experimental multi-cluster features that can be used to offer observability into Istio’s multi-cluster deployment models, as shown in Figure 2. These features include unified graph visualizations of multi-cluster meshes, aggregated list views for applications, workloads, services and Istio configurations, and detailed views for applications on each mesh that include logs, metrics, traces, and Envoy config. See Kiali’s documentation for more information on multi-cluster features that will be coming to OSSM.

Figure 2: Kiali's graph view showing a multi-cluster Istio deployment.

For authenticating cross-cluster traffic, each cluster’s mTLS policies will apply. These can be configured using PeerAuthentication resources. Istio’s AuthorizationPolicy resources can also be configured to control cross-cluster access from peer identities on remote clusters.
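For example (the policy names are illustrative, and the principal assumes the sample sleep workload's default service account), strict mTLS and a cross-cluster authorization rule might be configured as:

```yaml
# Enforce strict mTLS mesh-wide on this cluster.
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system
spec:
  mtls:
    mode: STRICT
---
# Allow only the sleep service account (which may be running on a
# remote cluster) to call productpage. With intermediate CAs signed
# by a common root, both clusters share the cluster.local trust domain.
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: productpage-allow-sleep
  namespace: bookinfo
spec:
  selector:
    matchLabels:
      app: productpage
  action: ALLOW
  rules:
  - from:
    - source:
        principals: ["cluster.local/ns/sleep/sa/sleep"]
```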

How to enable multi-primary in OpenShift Service Mesh 2.6

 

Note

This guide is for developer preview purposes only. Multi-primary is not supported in OpenShift Service Mesh 2.6 and should not be used in production environments.

The commands in this guide have been verified using the Z shell (zsh) and a Fedora 39 system. While this guide should be compatible with other Unix-like systems such as macOS and shells such as Bash, this has not been verified.

Prerequisites

  • Access to two OpenShift Container Platform (OCP) clusters, both with the OpenShift Service Mesh operator installed.
  • The oc command-line interface installed.

Installation steps

  1. Create a working directory for storing certificates and configuration files:

    mkdir multi-primary-ossm
    cd multi-primary-ossm
  2. Export locations of kubeconfig files:

    export KUBECONFIG_WEST=<path_to_kubeconfig_file>
    export KUBECONFIG_EAST=<path_to_kubeconfig_file>
  3. Create aliases for oc commands:

    alias oc-west="KUBECONFIG=$KUBECONFIG_WEST oc"
    alias oc-east="KUBECONFIG=$KUBECONFIG_EAST oc"
  4. Create root CA certificates:

    root_ca_dir=cacerts/root
    mkdir -p $root_ca_dir
    
    openssl genrsa -out ${root_ca_dir}/root-key.pem 4096
    cat <<EOF > ${root_ca_dir}/root-ca.conf
    [ req ]
    encrypt_key = no
    prompt = no
    utf8 = yes
    default_md = sha256
    default_bits = 4096
    req_extensions = req_ext
    x509_extensions = req_ext
    distinguished_name = req_dn
    [ req_ext ]
    subjectKeyIdentifier = hash
    basicConstraints = critical, CA:true
    keyUsage = critical, digitalSignature, nonRepudiation, keyEncipherment, keyCertSign
    [ req_dn ]
    O = Istio
    CN = Root CA
    EOF
    
    openssl req -sha256 -new -key ${root_ca_dir}/root-key.pem \
      -config ${root_ca_dir}/root-ca.conf \
      -out ${root_ca_dir}/root-cert.csr
    
    openssl x509 -req -sha256 -days 3650 \
      -signkey ${root_ca_dir}/root-key.pem \
      -extensions req_ext -extfile ${root_ca_dir}/root-ca.conf \
      -in ${root_ca_dir}/root-cert.csr \
      -out ${root_ca_dir}/root-cert.pem
  5. Create intermediate CA certificates:

    for cluster in west east; do
      int_ca_dir=cacerts/${cluster}
      mkdir $int_ca_dir
    
      openssl genrsa -out ${int_ca_dir}/ca-key.pem 4096
      cat <<EOF > ${int_ca_dir}/intermediate.conf
    [ req ]
    encrypt_key = no
    prompt = no
    utf8 = yes
    default_md = sha256
    default_bits = 4096
    req_extensions = req_ext
    x509_extensions = req_ext
    distinguished_name = req_dn
    [ req_ext ]
    subjectKeyIdentifier = hash
    basicConstraints = critical, CA:true, pathlen:0
    keyUsage = critical, digitalSignature, nonRepudiation, keyEncipherment, keyCertSign
    subjectAltName=@san
    [ san ]
    DNS.1 = istiod.istio-system.svc
    [ req_dn ]
    O = Istio
    CN = Intermediate CA
    L = $cluster
    EOF
    
      openssl req -new -config ${int_ca_dir}/intermediate.conf \
        -key ${int_ca_dir}/ca-key.pem \
        -out ${int_ca_dir}/cluster-ca.csr
    
      openssl x509 -req -sha256 -days 3650 \
        -CA ${root_ca_dir}/root-cert.pem \
        -CAkey ${root_ca_dir}/root-key.pem -CAcreateserial \
        -extensions req_ext -extfile ${int_ca_dir}/intermediate.conf \
        -in ${int_ca_dir}/cluster-ca.csr \
        -out ${int_ca_dir}/ca-cert.pem
    
      cat ${int_ca_dir}/ca-cert.pem ${root_ca_dir}/root-cert.pem \
        > ${int_ca_dir}/cert-chain.pem
      cp ${root_ca_dir}/root-cert.pem ${int_ca_dir}
    done
  6. Create cacerts secrets to be used by the Istio control plane:

    oc-west create namespace istio-system
    oc-west label namespace istio-system topology.istio.io/network=west-network
    oc-west create secret generic cacerts -n istio-system \
      --from-file=cacerts/west/ca-cert.pem \
      --from-file=cacerts/west/ca-key.pem \
      --from-file=cacerts/west/root-cert.pem \
      --from-file=cacerts/west/cert-chain.pem
    
    oc-east create namespace istio-system
    oc-east label namespace istio-system topology.istio.io/network=east-network
    oc-east create secret generic cacerts -n istio-system \
      --from-file=cacerts/east/ca-cert.pem \
      --from-file=cacerts/east/ca-key.pem \
      --from-file=cacerts/east/root-cert.pem \
      --from-file=cacerts/east/cert-chain.pem
  7. Deploy service mesh control planes:

    for cluster in west east; do
      mkdir ${cluster}-config
      cat <<EOF > ${cluster}-config/smcp.yaml
    apiVersion: maistra.io/v2
    kind: ServiceMeshControlPlane
    metadata:
      name: basic
    spec:
      addons:
        kiali:
          enabled: false
      cluster:
        name: ${cluster}-cluster
        network: ${cluster}-network
        multiCluster:
          enabled: true
          meshNetworks:
            remote-network:
              endpoints:
              - fromRegistry: remote-cluster
              gateways:
              - address: remote-gateway
                port: 15443
      general:
        logging:
          componentLevels:
            default: info
      mode: ClusterWide
      meshConfig:
        discoverySelectors:
        - matchLabels:
            istio-injection: enabled
      proxy:
        accessLogging:
          file:
            name: /dev/stdout
      security:
        dataPlane:
          mtls: true
        identity:
          type: ThirdParty
        manageNetworkPolicy: false
      techPreview:
        global:
          meshID: bookinfo-mesh
      tracing:
        type: None
    EOF
    done
    
    oc-west apply -n istio-system -f west-config/smcp.yaml
    oc-east apply -n istio-system -f east-config/smcp.yaml
  8. Deploy east-west gateways with TLS mode AUTO_PASSTHROUGH to enable cross-cluster communication:

    for cluster in west east; do
      cat <<EOF > ${cluster}-config/eastwest-gateway.yaml
    apiVersion: v1
    kind: Service
    metadata:
      name: istio-eastwestgateway
      annotations:
        service.beta.kubernetes.io/aws-load-balancer-type: nlb
      labels:
        istio: eastwestgateway
        app: istio-eastwestgateway
        topology.istio.io/network: ${cluster}-network
    spec:
      type: LoadBalancer
      selector:
        istio: eastwestgateway
      ports:
      - name: status-port
        port: 15021
        targetPort: 15021
      - name: tls
        port: 15443
        targetPort: 15443
      - name: tls-istiod
        port: 15012
        targetPort: 15012
      - name: tls-webhook
        port: 15017
        targetPort: 15017
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: istio-eastwestgateway
      labels:
        istio: eastwestgateway
        app: istio-eastwestgateway
    spec:
      selector:
        matchLabels:
          istio: eastwestgateway
      template:
        metadata:
          annotations:
            inject.istio.io/templates: gateway
          labels:
            istio: eastwestgateway
            sidecar.istio.io/inject: "true"
        spec:
          containers:
          - name: istio-proxy
            image: auto
            env:
            - name: ISTIO_META_REQUESTED_NETWORK_VIEW
              value: ${cluster}-network
    ---
    apiVersion: networking.istio.io/v1beta1
    kind: Gateway
    metadata:
      name: istio-eastwestgateway
    spec:
      selector:
        istio: eastwestgateway
      servers:
      - port:
          number: 15443
          name: tls
          protocol: TLS
        hosts:
        - "*.local"
        tls:
          mode: AUTO_PASSTHROUGH
    EOF
    done
    
    oc-west apply -n istio-system -f west-config/eastwest-gateway.yaml
    oc-east apply -n istio-system -f east-config/eastwest-gateway.yaml
  9. Generate kubeconfig files for remote clusters:

    mkdir kubeconfigs
    
    server=$(grep "server:" $KUBECONFIG_WEST | awk 'NR==1 { print $2 }')
    ca=$(grep "certificate-authority-data:" $KUBECONFIG_WEST | awk 'NR==1 { print $2 }')
    token=$(oc-west -n istio-system create token istio-reader-service-account)
    
    cat <<EOF > kubeconfigs/istio-reader-service-account-west-cluster.kubeconfig
    apiVersion: v1
    kind: Config
    clusters:
    - name: west-cluster
      cluster:
        certificate-authority-data: ${ca}
        server: ${server}
    contexts:
    - name: istio-reader-service-account@west-cluster
      context:
        cluster: west-cluster
        namespace: istio-system
        user: istio-reader-service-account
    users:
    - name: istio-reader-service-account
      user:
        token: ${token}
    current-context: istio-reader-service-account@west-cluster
    EOF
    
    server=$(grep "server:" $KUBECONFIG_EAST | awk 'NR==1 { print $2 }')
    ca=$(grep "certificate-authority-data:" $KUBECONFIG_EAST | awk 'NR==1 { print $2 }')
    token=$(oc-east -n istio-system create token istio-reader-service-account)
    
    cat <<EOF > kubeconfigs/istio-reader-service-account-east-cluster.kubeconfig
    apiVersion: v1
    kind: Config
    clusters:
    - name: east-cluster
      cluster:
        certificate-authority-data: ${ca}
        server: ${server}
    contexts:
    - name: istio-reader-service-account@east-cluster
      context:
        cluster: east-cluster
        namespace: istio-system
        user: istio-reader-service-account
    users:
    - name: istio-reader-service-account
      user:
        token: ${token}
    current-context: istio-reader-service-account@east-cluster
    EOF
  10. Create remote secrets from generated kubeconfigs.

    Remote secrets are created so that each cluster can authenticate to the kube-apiserver of the other cluster.

    oc-west create secret generic istio-remote-secret-east-cluster \
      -n istio-system \
      --from-file=east-cluster=kubeconfigs/istio-reader-service-account-east-cluster.kubeconfig \
      --type=string
    oc-west annotate secret istio-remote-secret-east-cluster -n istio-system \
      networking.istio.io/cluster='east-cluster'
    oc-west label secret istio-remote-secret-east-cluster -n istio-system \
      istio/multiCluster='true'
    
    oc-east create secret generic istio-remote-secret-west-cluster \
      -n istio-system \
      --from-file=west-cluster=kubeconfigs/istio-reader-service-account-west-cluster.kubeconfig \
      --type=string
    oc-east annotate secret istio-remote-secret-west-cluster -n istio-system \
      networking.istio.io/cluster='west-cluster'
    oc-east label secret istio-remote-secret-west-cluster -n istio-system \
      istio/multiCluster='true'
  11. Deploy bookinfo on the west cluster and sleep on the east cluster:

    oc-west new-project bookinfo
    oc-west label namespace bookinfo istio-injection=enabled
    oc-west apply -f https://raw.githubusercontent.com/maistra/istio/maistra-2.6/samples/bookinfo/platform/kube/bookinfo.yaml -n bookinfo
    
    oc-east new-project sleep
    oc-east label namespace sleep istio-injection=enabled
    oc-east apply -f https://raw.githubusercontent.com/maistra/istio/maistra-2.6/samples/sleep/sleep.yaml -n sleep

Bookinfo is a simple example application and sleep is a curl client that will send traffic to bookinfo.

  12. Update mesh networks in the east cluster’s control plane.

    The mesh networks in the east cluster’s control plane need to be updated to include the load balancer endpoint of the east-west gateway for the west cluster.

    # On AWS:
    west_hostname=$(oc-west get services istio-eastwestgateway \
      -n istio-system -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
    # On GCP or Azure:
    west_hostname=$(oc-west get services istio-eastwestgateway \
      -n istio-system -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
    
    sed -i.tmp -e "s/remote-network/west-network/" \
      -e "s/remote-cluster/west-cluster/" \
      -e "s/remote-gateway/${west_hostname}/" east-config/smcp.yaml &&
      rm east-config/smcp.yaml.tmp
    oc-east apply -n istio-system -f east-config/smcp.yaml
  13. Verify connectivity between east and west clusters:

    oc-east exec $(oc-east get pods -l app=sleep -n sleep \
      -o jsonpath='{.items[].metadata.name}') \
      -n sleep -c sleep -- curl -v "productpage.bookinfo:9080/productpage"
  14. Update mesh networks in the west cluster’s control plane.

    The mesh networks in the west cluster’s control plane need to be updated to include the load balancer endpoint of the east-west gateway for the east cluster.

    # On AWS:
    east_hostname=$(oc-east get services istio-eastwestgateway \
      -n istio-system -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
    # On GCP or Azure:
    east_hostname=$(oc-east get services istio-eastwestgateway \
      -n istio-system -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
    
    sed -i.tmp -e "s/remote-network/east-network/" \
      -e "s/remote-cluster/east-cluster/" \
      -e "s/remote-gateway/${east_hostname}/" west-config/smcp.yaml &&
      rm west-config/smcp.yaml.tmp
    oc-west apply -n istio-system -f west-config/smcp.yaml
  15. Deploy bookinfo on the east cluster:

    oc-east new-project bookinfo
    oc-east label namespace bookinfo istio-injection=enabled
    oc-east apply -f https://raw.githubusercontent.com/maistra/istio/maistra-2.6/samples/bookinfo/platform/kube/bookinfo.yaml -n bookinfo
  16. Send a number of requests from sleep to the bookinfo service.

    oc-east exec $(oc-east get pods -l app=sleep -n sleep \
      -o jsonpath='{.items[].metadata.name}') \
      -n sleep -c sleep -- /bin/sh -c \
      'for i in `seq 1 10`; do curl -I "productpage.bookinfo:9080/productpage"; done'
  17. Verify that requests are load balanced between workloads on both clusters.

    oc-east logs -n bookinfo $(oc-east get pod -n bookinfo \
      -l app=productpage -o jsonpath='{.items[].metadata.name}')
      
    oc-west logs -n bookinfo $(oc-west get pod -n bookinfo \
      -l app=productpage -o jsonpath='{.items[].metadata.name}')
  18. Delete the productpage service on the east cluster and the details service on the west cluster.

    oc-east delete service -n bookinfo productpage
    oc-east delete deployment -n bookinfo productpage-v1
    
    oc-west delete service -n bookinfo details
    oc-west delete deployment -n bookinfo details-v1
  19. Verify that productpage and details remain reachable.

    Requests from sleep on the east cluster will now all reach productpage on the west cluster, while traffic from productpage on the west cluster to the details service will now go to the east cluster.

    oc-east exec $(oc-east get pods -l app=sleep -n sleep \
      -o jsonpath='{.items[].metadata.name}') \
      -n sleep -c sleep -- curl -v "productpage.bookinfo:9080/productpage"

Next steps

Upstream Istio’s website provides more information on multi-cluster tasks such as multi-cluster traffic management and locality load balancing. Additional tasks that could be carried out include configuring PeerAuthentication resources to enforce strict mTLS for cross-cluster traffic, or creating AuthorizationPolicy resources to define access policies for peer identities on remote clusters.
