
Fine-tune AI pipelines in Red Hat OpenShift AI 3.3

February 26, 2026
Ana Biazetti, Brian Gallagher, Michal Stokluska
Related topics: Artificial intelligence, Automation and management, Data science
Related products: Red Hat AI, Red Hat OpenShift AI

    Enterprises are moving from experimentation toward customized, production-grade models. Every organization has unique data requirements, compliance standards, and performance goals. The path from a base model to a fine-tuned asset should be a flexible, repeatable process rather than a rigid "black box."

    To support this need for flexibility, we introduced new example AI pipelines for fine-tuning in Red Hat OpenShift AI 3.3. These AI pipelines offer a modular, automated framework for model fine-tuning. They use Kubeflow Trainer to distribute workloads and support both supervised fine-tuning (SFT) and orthogonal subspace fine-tuning (OSFT) techniques via Training Hub.

    The challenge: Moving beyond manual one-offs

    Direct model training or fine-tuning, often called the fast path, is excellent for quick iterations. However, it often lacks the governance and reproducibility required for enterprise production. Conversely, fixed, manual end-to-end scripts can quickly become stale and are difficult to extend as project needs change.

    Enterprises need a way to:

    • Maintain governance: Create reproducible workflows where every fine-tuned model can be traced back to its specific run, code, and dataset.
    • Achieve precision: Go beyond general knowledge to achieve high-performance, custom behavior tailored to specific business data.
    • Avoid "shadow AI": Provide AI engineers with a centralized, easy-to-use platform that mitigates the risk of teams adopting non-compliant external solutions.

    Composable AI pipelines and reusable components

    AI pipelines allow users to build portable, scalable workflows using a Python-centric SDK. Complementing this, our new reusable components repository acts as a centralized hub for workflow building blocks. This repository helps you connect different OpenShift AI capabilities.
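
    The real pipelines are built with the Kubeflow Pipelines SDK (`@dsl.component` and `@dsl.pipeline`); the following dependency-free sketch only illustrates the composition idea behind the reusable components repository. Every name in it (the registry, the step functions, the model and bucket names) is hypothetical.

```python
# Illustrative stand-in for SDK-based component composition. In the real
# pipelines, @dsl.component-decorated functions run as containerized steps;
# here a plain registry and runner mimic that wiring.

COMPONENT_REGISTRY = {}  # stands in for the shared components repository


def component(func):
    """Register a function as a reusable pipeline step."""
    COMPONENT_REGISTRY[func.__name__] = func
    return func


@component
def dataset_download(ctx, source):
    """Fetch and validate a dataset (stubbed)."""
    return {"dataset": f"validated:{source}"}


@component
def train_model(ctx, base_model):
    """Fine-tune the base model on the dataset produced upstream (stubbed)."""
    return {"model": f"{base_model}-finetuned", "trained_on": ctx["dataset"]}


def run_pipeline(steps):
    """Execute registered components in order, threading outputs forward."""
    ctx = {}
    for name, kwargs in steps:
        ctx.update(COMPONENT_REGISTRY[name](ctx, **kwargs))
    return ctx


result = run_pipeline([
    ("dataset_download", {"source": "s3://bucket/train.jsonl"}),
    ("train_model", {"base_model": "my-base-model"}),
])
```

    Because every step lives in a shared registry rather than inside one monolithic script, any pipeline can look a component up and reuse it, which is the property the components repository provides at scale.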

    Fine-tuning pipeline: Data preparation, fine-tuning, evaluation, and model registration

    The sample fine-tuning pipelines and associated reusable components provide a baseline workflow for model customization that includes the following steps, as illustrated in Figure 1.

    Figure 1: Overview of the fine-tuning pipeline.
    1. Download and validate the dataset: Download a dataset from Amazon S3, Hugging Face, or an HTTP URL. The component validates the dataset to ensure it’s in a suitable format for the fine-tuning algorithm. You can optionally split the dataset for training and evaluation. The component passes the dataset to the model training stage and optionally the evaluation stage.
    2. Train the model with Kubeflow Trainer: This component simplifies SFT and OSFT techniques. It downloads a base model and fine-tunes it based on your dataset and algorithm. You can download models from Amazon S3, Hugging Face, or an OCI registry, including the internal OpenShift AI model catalog. The step outputs a fine-tuned model, a training loss graph (see Figure 2), and training metrics.

      Figure 2: Training loss chart generated by the training step, showing loss over 15 steps starting at 0.3259, dipping to 0.1806, and finishing at 0.4405.
    3. Evaluate the model: The LM-eval component runs benchmark tasks and ensures the new model meets performance requirements (Figure 3). You can select from a list of benchmark tasks and use the split dataset from step one for deeper evaluation.

      Figure 3: A subset of the scalar metrics, including accuracy and error values, generated by the evaluation step.
    4. Register the model: The fine-tuned model and its metrics are registered in the OpenShift AI model registry (Figure 4). This centralizes model lineage and artifacts. From here, you can serve the model for inference.

      Figure 4: The OpenShift AI model registry showing version details and evaluation metrics for the registered model osft-model.

    Together, these four steps create a complete, auditable pipeline from raw data to a production-ready model.
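
    The flow of artifacts through these four steps can be sketched in plain Python. The real steps run as distributed Kubeflow Trainer and LM-Eval components; every function, field, and metric value below is an illustrative stub on a toy prompt/completion dataset.

```python
# Toy end-to-end walk through the four stages: validate/split the dataset,
# fine-tune, evaluate on the held-out split, and register the result.
import random


def download_and_validate(records, eval_fraction=0.2, seed=0):
    """Step 1: validate record format, then split into train/eval sets."""
    assert all("prompt" in r and "completion" in r for r in records), "bad format"
    shuffled = records[:]
    random.Random(seed).shuffle(shuffled)
    cut = int(len(shuffled) * (1 - eval_fraction))
    return shuffled[:cut], shuffled[cut:]


def train(base_model, train_set):
    """Step 2: fine-tune and record a (fake) training loss curve."""
    losses = [0.33 / (step + 1) for step in range(3)]
    return {"name": base_model + "-sft", "loss_curve": losses}


def evaluate(model, eval_set):
    """Step 3: score the tuned model on held-out data (fake accuracy)."""
    return {"accuracy": 0.9, "samples": len(eval_set)}


def register(model, metrics, registry):
    """Step 4: record lineage: the model together with its metrics."""
    registry[model["name"]] = {"metrics": metrics, "loss_curve": model["loss_curve"]}
    return registry


records = [{"prompt": f"q{i}", "completion": f"a{i}"} for i in range(10)]
train_set, eval_set = download_and_validate(records)
model = train("base-model", train_set)
metrics = evaluate(model, eval_set)
registry = register(model, metrics, {})
```

    The important property is the hand-off: each stage consumes only the artifacts of the previous one, which is what makes every registered model traceable back to its run, code, and dataset.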

    Choosing the right path: Our new pipeline options

    To support both rapid experimentation and deep customization, we are providing four distinct pipeline versions. These allow you to choose between a "ready-to-go" experience and a fully configurable environment.

    • The minimal pipelines (sft_minimal_pipeline and osft_minimal_pipeline): Designed for initial trials, these versions include multiple defaults and only expose the most critical top-level input parameters. They are perfect for users who want to see results quickly without navigating complex configurations.
    • The full pipelines (sft_pipeline and osft_pipeline): These provide granular control over every aspect of the fine-tuning process, from specific hardware resource mapping to detailed algorithm presets.
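
    The relationship between the two variants can be shown with a small sketch: the full pipeline accepts every knob, while the minimal one pins defaults and surfaces only the critical inputs. All parameter names and default values here are illustrative, not the actual pipeline signatures.

```python
# Minimal-vs-full sketch: one fully parameterized entry point, plus a thin
# wrapper that exposes only the top-level inputs (hypothetical parameters).

FULL_DEFAULTS = {
    "learning_rate": 1e-5,
    "num_epochs": 3,
    "per_device_batch_size": 8,
    "gpu_per_worker": 1,
    "num_workers": 2,
}


def full_pipeline(base_model, dataset_uri, **overrides):
    """Granular control: every parameter can be overridden."""
    unknown = set(overrides) - set(FULL_DEFAULTS)
    if unknown:
        raise ValueError(f"unknown parameters: {unknown}")
    return {"base_model": base_model, "dataset": dataset_uri,
            **FULL_DEFAULTS, **overrides}


def minimal_pipeline(base_model, dataset_uri):
    """Ready-to-go: only the critical inputs, defaults for everything else."""
    return full_pipeline(base_model, dataset_uri)
```

    Building the minimal entry point as a wrapper over the full one keeps the two in sync: tuning a default in one place changes both experiences.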

    Customize the pipelines for your environment

    The new sample fine-tuning AI pipelines serve as a validated pattern focused on the core model customization, fine-tuning, and governance steps. Designed for flexibility, they can be cloned and modified to suit your specific architectural needs.

    To start using the fine-tuning AI pipelines, visit the guided example. It walks you through OSFT-based fine-tuning using the reusable pipelines.

    The guided example also provides information on:

    • Reusing components in your own pipelines.
    • Customizing the existing pipelines.
    • Adding and removing parameters.
    • Running the pipeline on the OpenShift AI platform.

    To build your own pipeline from any combination of the dataset, fine-tune, eval, and register components, plus any custom components of your own:

    1. Clone the pipelines-components repository.
    2. Import the components you need.
    3. Compose them into your custom pipeline.
    4. Add any additional custom steps (e.g., data preprocessing, model conversion, deployment).
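
    At its simplest, composing a custom pipeline means splicing your own stage into the baseline sequence. The step names below are illustrative stand-ins for the components in the pipelines-components repository.

```python
# Splice a custom stage into the baseline step sequence without
# modifying the original list (step names are hypothetical).

BASELINE = ["dataset-download", "train-model", "evaluate-model", "register-model"]


def with_custom_step(steps, new_step, before):
    """Return a new pipeline with new_step inserted before an existing step."""
    if before not in steps:
        raise ValueError(f"{before} is not a step in this pipeline")
    i = steps.index(before)
    return steps[:i] + [new_step] + steps[i:]


custom = with_custom_step(BASELINE, "preprocess-data", before="train-model")
```

    Returning a new list rather than mutating the baseline mirrors how cloned pipelines work in practice: the validated pattern stays intact while each team maintains its own variant.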

    Why this approach matters: Your pipeline, your way

    Shifting from fixed solutions to modular building blocks in Red Hat OpenShift AI 3.3 helps you customize workflows at scale. Because the components are modular, you can share and adapt them across different teams and use cases. The example fine-tuning pipelines provide a baseline for automating model customization in your AI scenarios.

    As you explore these new capabilities in OpenShift AI 3.3, remember that these are your building blocks, designed to be adapted into the specific workflows that drive your business forward.

