How to deploy OpenShift AI & Service Mesh 3 on one cluster

July 16, 2025
Leon Levy
Related topics: Application modernization, Artificial intelligence, Containers, Data Science, Developer Productivity, DevOps, Microservices, Observability, Operators, Service Mesh, Serverless, System Design
Related products: Red Hat AI, Red Hat OpenShift AI, Red Hat OpenShift, Red Hat OpenShift Container Platform, Red Hat OpenShift Serverless, Red Hat OpenShift Service Mesh

    This article provides a step-by-step guide to deploying and running Red Hat OpenShift AI and Red Hat OpenShift Service Mesh 3 on the same OpenShift cluster. We will outline key deployment practices, such as operator installation, namespace segmentation, and proper labeling. We'll also show how to test the setup with Kiali and a sample application to ensure seamless, non-conflicting operation.

    This coexistence on a single OpenShift cluster enables organizations to harness OpenShift Service Mesh 3's traffic management and observability features alongside OpenShift AI's AI/ML capabilities.

    OpenShift AI overview

    Artificial intelligence has advanced rapidly, becoming a transformative force across industries. As it moves into the mainstream, organizations are actively exploring its potential to drive innovation and efficiency.

    To help organizations on their AI journey, Red Hat provides the Red Hat AI portfolio of products, which currently consists of Red Hat Enterprise Linux AI and OpenShift AI. These products serve two types of AI users: those who are exploring how generative AI can be used, and organizations that are adopting MLOps, developing models and running them at scale. This applies to both generative and predictive AI.

    Red Hat OpenShift AI is a comprehensive AI/ML platform that supports the full lifecycle of AI models. It offers AI engineers, data scientists, and developers a powerful environment for training and serving models in both development and production. Built on Red Hat OpenShift, it extends enterprise-grade DevOps capabilities to the AI lifecycle, ensuring consistency, scalability, and operational efficiency.

    OpenShift Service Mesh overview

    As your applications evolve into distributed microservices, managing, monitoring, and securing the network communications between those services becomes increasingly complex.

    OpenShift Service Mesh manages and secures communication between microservices by providing traffic management, advanced routing, and load balancing. OpenShift Service Mesh also enhances security through features like mutual TLS (mTLS), and offers observability with metrics, logging, and tracing to monitor and troubleshoot applications. For more information, refer to the OpenShift Service Mesh documentation.
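    To make the security features above concrete, the following is a minimal sketch of an Istio PeerAuthentication resource that enforces strict mTLS for every workload in one namespace. The namespace name my-app is an assumption chosen for illustration, not something from this article's setup.

    ```yaml
    # Hypothetical sketch: enforce strict mutual TLS for all workloads
    # in the "my-app" namespace (namespace name chosen for illustration).
    apiVersion: security.istio.io/v1
    kind: PeerAuthentication
    metadata:
      name: default
      namespace: my-app
    spec:
      mtls:
        mode: STRICT
    ```

    With this policy in place, sidecar-injected workloads in the namespace reject plain-text traffic and accept only mTLS connections from other mesh members.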

    As of this writing, the latest release, OpenShift Service Mesh 3.0, is built on Istio 1.24 and Kiali 2.4. This version marks a significant shift from OpenShift Service Mesh 2, introducing a new operator based on the community Sail operator to manage Istio. It also transitions from the Maistra.io distribution to a Red Hat-maintained build of the upstream Istio.io project. This change enables Red Hat to deliver the latest stable Istio features more quickly while continuing to provide enterprise-grade support.

    The previous major release, OpenShift Service Mesh 2.x, remains supported, with version 2.6.7 being the latest as of this article’s publication. While OSSM 2.x offers similar functionality to version 3.0, it features a different architecture and relies on distinct APIs. Some existing applications may still depend on these APIs, making continued support for OSSM 2.x important for backward compatibility.

    OpenShift AI incompatibility with OpenShift Service Mesh 3

    OpenShift AI currently depends on OpenShift Service Mesh 2.x and is not yet compatible with OpenShift Service Mesh 3. However, many organizations may still wish to adopt OpenShift Service Mesh 3 for their general application workloads while running OpenShift AI in the same cluster. The good news is that this coexistence is entirely achievable with the proper planning and configuration. Explore the Red Hat OpenShift AI supported configurations.

    Now that we’ve set the stage, we will show you how to successfully deploy OpenShift AI and OpenShift Service Mesh 3 (OSSM 3) on the same OpenShift cluster.

    OpenShift AI requirements

    To provision a data science cluster using the DataScienceCluster custom resource, the primary entry point for deploying OpenShift AI, install these three operators:

    1. Red Hat OpenShift AI
    2. Red Hat OpenShift Serverless
    3. Red Hat OpenShift Service Mesh 2

    These operators work together to create and manage the core components that OpenShift AI requires. Once you have installed all three operators, you can create a DataScienceCluster resource, which uses them to automatically provision the OpenShift Service Mesh 2 components and namespaces OpenShift AI needs. This includes a ServiceMeshControlPlane named data-science-smcp, deployed in the istio-system namespace that the DataScienceCluster creates.

    OpenShift AI also configures OpenShift Service Mesh 2 by creating a ServiceMeshMemberRoll for the control plane, which instructs the mesh to manage the following namespaces created by OpenShift AI:

    • knative-serving
    • rhoai-model-registries
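    Based on the namespaces listed above, the member roll OpenShift AI manages looks roughly like the following sketch. The resource name default is the conventional ServiceMeshMemberRoll name; treat this as an approximation of the generated resource, not its exact contents.

    ```yaml
    # Sketch of the ServiceMeshMemberRoll that OpenShift AI creates for
    # its Service Mesh 2 control plane. Field values are inferred from
    # the namespaces this article lists, not copied from a live cluster.
    apiVersion: maistra.io/v1
    kind: ServiceMeshMemberRoll
    metadata:
      name: default
      namespace: istio-system
    spec:
      members:
        - knative-serving
        - rhoai-model-registries
    ```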

    OpenShift Service Mesh 2 creates and relies on the following Custom Resource Definitions (CRDs):

    • ServiceMeshControlPlane
    • ServiceMeshMemberRoll
    • ServiceMeshMember

    These CRDs are specific to OpenShift Service Mesh 2 and are not available in OpenShift Service Mesh 3. Additionally, version 2 automatically provisions ingress and egress gateways using the OpenShift Route resource, which differs from the OpenShift Service Mesh 3 gateway configuration model.

    A similar but different approach

    OpenShift Service Mesh 3 introduces a different approach to control plane management.

    In OpenShift Service Mesh 3, the control plane is defined by the Istio custom resource. By default, it manages all namespaces across the cluster. However, you can restrict this scope by using the discoverySelectors field, which allows you to limit mesh management to specific namespaces based on label selectors.

    The following is an example using discoverySelectors in OpenShift Service Mesh 3. In this example, the Istio control plane only manages namespaces that include the istio-discovery=enabled label.

    apiVersion: sailoperator.io/v1
    kind: Istio
    metadata:
      name: default
    spec:
      namespace: istio-system-3
      values:
        meshConfig:
          ...
          discoverySelectors:
            - matchLabels:
                istio-discovery: enabled

    The following is a sample namespace with matching labels:

    apiVersion: v1
    kind: Namespace
    metadata:
      labels:
        istio-discovery: enabled
        istio-injection: enabled
      name: my-namespace

    Note: The istio-injection=enabled label instructs OpenShift Service Mesh 3 to automatically inject the Envoy sidecar proxy into pods deployed within the namespace.

    OpenShift Service Mesh 3 also takes a different approach to configuring ingress and egress gateways compared to OpenShift Service Mesh 2, but those details are beyond the scope of this article.

    What’s important to note is that, with thoughtful planning, both versions of OpenShift Service Mesh can coexist within the same cluster, each serving distinct use cases without conflict.

    How to plan for coexistence

    OpenShift AI automatically creates the istio-system namespace, which cannot be changed. This is where it deploys the ServiceMeshControlPlane and other core OpenShift Service Mesh 2 resources.

    For OpenShift Service Mesh 3, you must specify a separate namespace for its control plane, such as istio-system-3, rather than the istio-system namespace created by OpenShift AI.

    When deploying the OpenShift Service Mesh 3 control plane using the Istio CRD, use the discoverySelectors field to restrict which namespaces it manages. This ensures OpenShift Service Mesh 3 only operates in explicitly labeled namespaces, avoiding overlap with OpenShift Service Mesh 2.

    The discoverySelectors field supports Kubernetes label selectors using either matchLabels (for direct matches) or matchExpressions (for more advanced logic).

    For example, you can restrict OpenShift Service Mesh 3 to namespaces labeled istio-discovery=enabled.
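    When a single label match is not enough, matchExpressions can express richer logic. The following fragment is a sketch: it selects namespaces whose istio-discovery label is enabled while excluding any namespace already claimed by a Service Mesh 2 control plane. The maistra.io/member-of label key is the one Service Mesh 2 applies to member namespaces, but confirm it against your cluster before relying on it.

    ```yaml
    # Sketch: restrict the OSSM 3 control plane with matchExpressions.
    # A namespace qualifies when istio-discovery is "enabled" AND it is
    # not a member of a Service Mesh 2 control plane (label key assumed).
    discoverySelectors:
      - matchExpressions:
          - key: istio-discovery
            operator: In
            values: ["enabled"]
          - key: maistra.io/member-of
            operator: DoesNotExist
    ```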

    Testing and validation

    If you have access to an OpenShift cluster where you can test OpenShift AI and OpenShift Service Mesh 3 running side-by-side, follow these steps to validate the setup:

    1. Install OpenShift AI via OperatorHub in the OpenShift Web Console

    Install the following operators:

    • Red Hat OpenShift Serverless
    • Red Hat OpenShift Service Mesh 2
    • Red Hat OpenShift AI

    Once you have successfully installed the OpenShift AI operator, the console will prompt you to create a DataScienceCluster. As Figure 1 shows, click the Create DataScienceCluster button and proceed with the default configuration to instantiate the cluster and its supporting resources.

    Figure 1: The Create DataScienceCluster page and the installed Red Hat OpenShift AI Operator.

    2. Validate OpenShift AI

    • Log into the OpenShift AI dashboard.
    • Create a test data science project.
    • Add a Workbench and, optionally, other components like a model server or data connection.
    • Launch the Workbench and verify that:
      • The environment starts without errors.
      • You can run a sample notebook or script inside the Workbench.
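    As a quick sanity check inside the Workbench, you could run a short snippet like the following in a notebook cell. It is a generic sketch, not tied to any RHOAI-specific API: it simply confirms the kernel responds, meets a minimum Python version, and evaluates code correctly.

    ```python
    import sys
    import platform

    def environment_ok(min_version=(3, 8)):
        """Smoke test for a fresh Workbench kernel: confirm the Python
        interpreter meets a minimum version and basic evaluation works."""
        version_ok = sys.version_info[:2] >= min_version
        math_ok = sum(range(10)) == 45
        return version_ok and math_ok

    print("Python version:", platform.python_version())
    print("environment OK:", environment_ok())
    ```

    If the cell prints environment OK: True, the notebook kernel is healthy and you can move on to heavier workloads such as model training.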

    3. Install OpenShift Service Mesh 3

    OpenShift Service Mesh 3 requires a combination of OperatorHub installations and CLI-based provisioning.

    Using the OpenShift Web Console, install the following operators:

    • Red Hat OpenShift Service Mesh 3
    • Kiali Operator
    • Red Hat build of OpenTelemetry
    • Tempo Operator

    Once you install these operators, you’ll complete the provisioning steps using the OpenShift CLI.

    Clone the OSSM 3 quickstart repository.

    A quickstart GitHub repository is available to help automate OSSM 3 setup with sample applications. Make sure to use the rhoai-cluster branch.

    git clone https://github.com/bugbiteme/ossm-3-demo.git
    cd ossm-3-demo
    git checkout rhoai-cluster

    Example output:

    Cloning into 'ossm-3-demo'...
    remote: Enumerating objects: 117, done.
    ...
    Switched to a new branch 'rhoai-cluster'

    Log in to the OpenShift cluster with your token and API endpoint:

    oc login --token=sha256xxxx --server=https://api.xxx.com:6443
    ...
    Logged into "https://api.xxx:6443" as "admin" using the token provided.
    You have access to 86 projects...

    Install the Kubernetes Gateway API (optional for OSSM 3 Ingress).

    If not already installed, apply the Gateway API CRDs:

    oc get crd gateways.gateway.networking.k8s.io &> /dev/null ||  { oc kustomize "github.com/kubernetes-sigs/gateway-api/config/crd?ref=v1.0.0" | oc apply -f -; }

    Run the installation script to deploy OpenShift Service Mesh 3.

    Provision the OSSM 3 Service Mesh and sample applications by executing:

    sh install_ossm3_demo.sh

    Sample output highlights:

    Installing Minio for Tempo...
    Installing TempoCR...
    Installing OpenTelemetryCollector...
    Installing OSSM3...
    Installing IstioCR...
    Installing Kiali...
    Installing Bookinfo...
    ...
    Ingress route for Bookinfo: http://istio-ingressgateway-istio-ingress.apps.xxxx.com/productpage  
    Kiali route: https://kiali-istio-system-3.apps.xxxx.com

    Tip: After installation, visit the Kiali dashboard (via the Kiali route) to validate the deployment and monitor service mesh traffic. 

    Testing with a sample application

    Once the OSSM 3 installation script completes, you can validate the setup using the deployed bookinfo sample application.

    You can follow the output URL from the script to access the Kiali dashboard, where you can explore traffic flows, service health, and telemetry data for the Bookinfo app (Figure 2).

    Figure 2: The Kiali dashboard shows the traffic graph.

    Note: Kiali provides cluster-wide observability by default, but can be configured to scope visibility to specific namespaces using the discovery_selectors field in its configuration.

    This is a great way to confirm that OSSM 3 is functioning correctly, side-by-side with OpenShift AI and Service Mesh 2, without interfering with RHOAI-managed namespaces. Learn more about how to configure discovery_selectors with Kiali.
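    A sketch of what that scoped Kiali configuration might look like follows. The field path under spec.deployment is an assumption based on Kiali 2.x conventions; confirm it against the Kiali documentation for your installed version before applying it.

    ```yaml
    # Hypothetical sketch: scope Kiali to OSSM 3 namespaces only, so the
    # dashboard ignores the Service Mesh 2 namespaces that RHOAI manages.
    apiVersion: kiali.io/v1alpha1
    kind: Kiali
    metadata:
      name: kiali
      namespace: istio-system-3
    spec:
      deployment:
        discovery_selectors:
          default:
            - matchLabels:
                istio-discovery: enabled
    ```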

    Summary

    Running Red Hat OpenShift AI and OpenShift Service Mesh 3 on the same OpenShift cluster empowers organizations to take advantage of OpenShift Service Mesh 3's advanced traffic management, security, and observability features for general application workloads while also leveraging OpenShift AI's robust AI/ML platform for data science and model lifecycle management.

    This coexistence is achievable with thoughtful planning. By isolating the OpenShift Service Mesh 3 control plane in a separate namespace and using discoverySelectors to limit its scope to specific application namespaces, you can ensure that it operates independently of the OpenShift Service Mesh 2 resources required by RHOAI.

    Successful deployment hinges on a few key practices:

    • Installing the necessary operators for OpenShift AI and OpenShift Service Mesh 3.
    • Defining clear namespace boundaries.
    • Applying correct labels to control service mesh membership and sidecar injection.

    The testing and validation steps, including Kiali and the bookinfo sample app deployment, offer a hands-on way to confirm that both service meshes are functioning as expected without interfering with one another.

    For deeper insights and step-by-step instructions, consult the official Red Hat documentation for OpenShift AI and OpenShift Service Mesh. These resources provide comprehensive guidance to help you confidently implement this dual-service mesh architecture.
