
How Developer Hub and OpenShift AI work with OpenShift

April 14, 2025
Valentina Rodriguez Sosa
Related topics: Artificial intelligence, Data Science, GitOps, Helm, Python
Related products: Red Hat AI, Red Hat OpenShift AI, Red Hat Developer Hub, Red Hat OpenShift GitOps, Red Hat OpenShift Container Platform

    In my previous article, How building workbenches accelerates AI/ML development, I discussed the user's experience building workbenches to accelerate AI/ML development. I also demonstrated how to select a software template and provide additional inputs, such as namespace and cluster information, to create the component. Now let's explore how Red Hat Developer Hub and Red Hat OpenShift AI work together on top of Red Hat OpenShift.

    A GitOps approach to OpenShift AI

    The software templates are built with Helm charts, which read the user inputs and populate a new source code repository with this information. Thanks to Red Hat OpenShift GitOps, this source code is applied to OpenShift, building new components and ensuring that the desired state is always current. Red Hat OpenShift AI brings AI capabilities on top of OpenShift, allowing these configurations to create the components required to build AI applications: model serving, model training, and developing and deploying inference applications at scale in an enterprise environment (Figure 1).

    Figure 1: The GitOps workflow, showing the desired state versus the current state in the cluster.
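
    To make this concrete, here is a minimal sketch of the kind of Argo CD Application resource OpenShift GitOps uses to track the generated repository and keep the cluster synchronized with it. The repository URL, names, and target namespace below are hypothetical; the actual resource is rendered by the software template.

    apiVersion: argoproj.io/v1alpha1
    kind: Application
    metadata:
      name: ai-workbench            # hypothetical application name
      namespace: openshift-gitops
    spec:
      project: default
      source:
        # Hypothetical repository created by the software template
        repoURL: https://github.com/example/ai-workbench-gitops
        targetRevision: main
        path: helm                  # path to the Helm chart inside the repository
      destination:
        server: https://kubernetes.default.svc
        namespace: my-ai-project    # hypothetical target namespace
      syncPolicy:
        automated:
          prune: true               # remove resources deleted from Git
          selfHeal: true            # revert manual drift so the desired state stays current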

    Prerequisites: 

    • Red Hat OpenShift Service Mesh
    • Red Hat OpenShift Serverless

    Creating a workbench 

    A workbench gives you an environment to build, deploy, and test AI/ML models. A workbench provides a notebook based on a specific image, optimized with the tools and libraries needed for developing and testing models (Figure 2). Learn more about creating a workbench and a notebook, and explore notebooks and GitOps.

    Figure 2: The YAML definition that builds the workbench.
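
    Behind the dashboard, a workbench is a Notebook custom resource (from the Kubeflow notebook controller that OpenShift AI builds on). The real definition is rendered by the Helm chart; the following is only a minimal sketch with a hypothetical name, image, and sizing.

    apiVersion: kubeflow.org/v1
    kind: Notebook
    metadata:
      name: my-workbench                  # hypothetical workbench name
      namespace: my-ai-project
      annotations:
        openshift.io/display-name: My Workbench
    spec:
      template:
        spec:
          containers:
            - name: my-workbench
              # Hypothetical notebook image; OpenShift AI ships several pre-built ones
              image: quay.io/example/s2i-minimal-notebook:latest
              resources:
                requests:
                  cpu: "1"
                  memory: 2Gi
                limits:
                  cpu: "2"
                  memory: 4Gi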

    A data science pipeline allows you to automate training, testing, and running your model, removing the burden of manual work and ensuring consistency.

    Enable the pipeline server

    To start working with data science pipelines, you need to enable the pipeline server. The pipeline server is defined with the required storage information and a scheduler (Figure 3). Learn more about managing data science pipelines, and explore data science pipelines and GitOps.

    Figure 3: The YAML definition that builds the pipeline server.
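
    Enabling the pipeline server creates a DataSciencePipelinesApplication resource. As a rough sketch, assuming an external S3-compatible bucket and a hypothetical credentials secret, the storage and scheduler pieces look something like this:

    apiVersion: datasciencepipelinesapplications.opendatahub.io/v1alpha1
    kind: DataSciencePipelinesApplication
    metadata:
      name: dspa                                   # hypothetical name
      namespace: my-ai-project
    spec:
      dspVersion: v2
      objectStorage:
        externalStorage:
          host: minio.example.com                  # hypothetical S3-compatible endpoint
          scheme: https
          bucket: pipeline-artifacts               # hypothetical bucket
          s3CredentialsSecret:
            secretName: aws-connection-pipelines   # hypothetical credentials secret
            accessKey: AWS_ACCESS_KEY_ID           # key inside the secret holding the access key
            secretKey: AWS_SECRET_ACCESS_KEY
      scheduledWorkflow:
        cronScheduleTimezone: UTC                  # the scheduler component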

    Next, you can start adding a data science pipeline to your project. Learn how to start building a pipeline or import a pipeline.

    Model serving

    Once your model is tested, you want to make it available for other applications to consume. Serving or deploying the model makes it available as a service, or model runtime server, that you can access through an API. You can then access the inference endpoints for the deployed model from the dashboard and get predictions based on the data inputs you provide through API calls. Querying the model through the API is also called model inferencing.

    • Single-model serving platform: Used for large models such as LLMs; each model gets its own runtime server and a serverless deployment.
    • Multi-model serving platform: Used for small and medium-sized models that can share the same runtime server. Integrated with Istio.
    • NVIDIA NIM model serving platform: Used for NVIDIA Inference Microservices (NIM).

    For this example, we are using the multi-model serving platform with OpenVINO as the model server runtime. For more information, consult the product documentation.

    For the model to be served, you need to specify a serving runtime. OpenShift AI supports several runtimes, and you can also build your own custom runtime. The serving runtime provides the tools needed to serve the model in OpenShift (Figure 4).

    Figure 4: The YAML definition that builds the serving runtime.
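
    For reference, here is a trimmed sketch of an OpenVINO Model Server ServingRuntime for the multi-model platform. The image reference and arguments vary by OpenShift AI release, so treat the values below as placeholders.

    apiVersion: serving.kserve.io/v1alpha1
    kind: ServingRuntime
    metadata:
      name: ovms                            # hypothetical runtime name
      namespace: my-ai-project
      labels:
        opendatahub.io/dashboard: "true"    # surface the runtime in the dashboard
    spec:
      multiModel: true                      # multi-model serving platform (ModelMesh)
      grpcDataEndpoint: port:8001
      supportedModelFormats:
        - name: openvino_ir
          version: opset1
          autoSelect: true
        - name: onnx
          version: "1"
      containers:
        - name: ovms
          # Placeholder image; use the one shipped with your OpenShift AI release
          image: quay.io/example/openvino_model_server:latest
          args:
            - --port=8001
            - --config_path=/models/model_config_list.json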

    Explore more about model serving and GitOps.

    Inference service

    Once the model is deployed, you can access it using an inference endpoint. The InferenceService contains the model format, storage location, and deployment mode (Figure 5).

    Figure 5: The YAML definition that builds the inference service.
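
    A minimal sketch of such an InferenceService, assuming the ModelMesh deployment mode, the ovms runtime above, and a hypothetical ONNX model stored behind a data connection:

    apiVersion: serving.kserve.io/v1beta1
    kind: InferenceService
    metadata:
      name: my-model                        # hypothetical model name
      namespace: my-ai-project
      annotations:
        serving.kserve.io/deploymentMode: ModelMesh   # multi-model serving platform
    spec:
      predictor:
        model:
          modelFormat:
            name: onnx                      # the model format
          runtime: ovms                     # the ServingRuntime defined above
          storage:
            key: aws-connection-models      # hypothetical data connection secret
            path: models/my-model.onnx      # path to the model inside the bucket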

    Explore inference service and GitOps further.

    Additional configurations

    Ensure your namespace is set up with the correct labels. The following is a sample namespace configuration.

    apiVersion: v1
    kind: Namespace
    metadata:
      labels:
        kubernetes.io/metadata.name: {{ .Values.app.namespace }}
        # Opts this namespace into the multi-model serving platform (ModelMesh)
        modelmesh-enabled: "true"
        # Pins the OpenShift Pipelines namespace reconciliation version
        openshift-pipelines.tekton.dev/namespace-reconcile-version: 1.16.1
      name: {{ .Values.app.namespace }}

    Also ensure that you have access to S3-compatible object storage and that you have created the required storage configurations.
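
    In OpenShift AI, that storage configuration is typically a data connection, which is an Opaque Secret annotated for the dashboard. A minimal sketch, with a hypothetical name, endpoint, bucket, and placeholder credentials:

    apiVersion: v1
    kind: Secret
    metadata:
      name: aws-connection-models                  # hypothetical data connection name
      namespace: my-ai-project
      labels:
        opendatahub.io/dashboard: "true"
      annotations:
        opendatahub.io/connection-type: s3
        openshift.io/display-name: models
    type: Opaque
    stringData:
      AWS_ACCESS_KEY_ID: changeme                  # placeholder credentials
      AWS_SECRET_ACCESS_KEY: changeme
      AWS_S3_ENDPOINT: https://minio.example.com   # hypothetical S3-compatible endpoint
      AWS_S3_BUCKET: models
      AWS_DEFAULT_REGION: us-east-1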

    Next steps

    In this article, you learned what happens behind the scenes with OpenShift GitOps, OpenShift AI, and OpenShift. I discussed the significance of workbenches, model serving, and namespace configurations, and provided many resources for further learning. If you haven't already, be sure to read part 1, How building workbenches accelerates AI/ML development.


    Related Posts

    • How building workbenches accelerates AI/ML development

    • The road to AI: A guide to understanding AI/ML models

    • AI/ML pipelines using Open Data Hub and Kubeflow on Red Hat OpenShift

    • InstructLab: Advancing generative AI through open source

