
How platform engineering accelerates enterprise AI adoption

September 4, 2025
Maarten Vandeperre, Camille Nigon
Related topics:
Artificial intelligence, Developer Productivity, DevOps, GitOps, Platform engineering
Related products:
Red Hat Advanced Developer Suite, Red Hat AI, Red Hat Developer Hub, Red Hat OpenShift, Streams for Apache Kafka

    In previous posts, we looked at how supporting technologies like Kafka and service mesh strengthen enterprise AI deployments. Kafka helps AI applications process real-time data flows and can act as an event orchestrator within agentic architectures, while service mesh ensures secure and observable communication between AI-powered services.

    We also explored the foundations of platform engineering for developers, which has become a central strategy for enabling teams' self-service access to reliable, governed infrastructure. Platform engineering can also bring standardization and governance through software templates.

    Now, these threads come together. As organizations move beyond isolated AI experiments, the question becomes: How do you make AI development and deployment repeatable, governed, and accessible across the enterprise?

    This is where platform engineering plays a crucial role. By applying the same principles that transformed modern application delivery (such as self-service, automation, software templates, and trusted supply chains), we can create an AI-ready foundation that allows developers and data scientists to innovate with confidence while reducing their cognitive load.

    In this post, we'll explore how Red Hat OpenShift, Red Hat OpenShift AI, and developer portals like Red Hat Developer Hub (based on Backstage) make AI a seamless part of enterprise platforms (and the other way around).

    The AI challenge

    Artificial intelligence has enormous potential, but running AI in production is not as simple as training a model in a notebook. Enterprises face challenges around reproducibility, scalability, governance, and integration with existing business systems.

    Training and serving models involves data pipelines, containerized environments, GPUs, and frequent updates. On top of this, regulations such as the EU AI Act and Digital Operational Resilience Act (DORA) demand explainability, audit trails, and compliance. Without the right foundation, many organizations struggle. Data scientists spin up cloud notebooks in isolation, developers manually deploy models (or just connect to online AI APIs), and security checks happen too late. This leads to shadow IT, compliance risks, and slow innovation.

    For example, a retail company deploying a recommendation model locally may find it impossible to reproduce the same code and dataset six months later for audit purposes. Without a governed platform, this becomes a barrier to compliance and customer trust.

    Next, in the era of vibe coding (that is, AI-generated code), automated security scanning should become part of your platform; for example, add it to the CI/CD pipeline. When I ask at developer conferences who is working with AI-generated code, almost every hand goes up. When I then ask who is validating the introduced libraries, biases, and risks, almost every hand goes down again. This highlights the importance of a mature developer platform.
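    As a minimal sketch of such a CI/CD gate, the check below validates the dependencies an AI assistant introduced against a platform-curated allowlist before the pipeline proceeds. The allowlist contents and the requirements-file format are illustrative assumptions, not a specific Red Hat tool.

```python
# Hypothetical CI gate: flag dependencies that AI-generated code pulled in
# but that the platform team has not vetted. The allowlist is an assumption.

ALLOWED_PACKAGES = {"requests", "numpy", "pandas"}  # curated by the platform team

def parse_requirements(text: str) -> list[str]:
    """Extract bare package names from a requirements.txt-style string."""
    names = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        # drop version specifiers such as ==, >=, <
        for sep in ("==", ">=", "<=", "~=", ">", "<"):
            line = line.split(sep)[0]
        names.append(line.strip())
    return names

def gate(requirements: str) -> list[str]:
    """Return packages NOT on the allowlist; a non-empty list fails the build."""
    return [p for p in parse_requirements(requirements) if p not in ALLOWED_PACKAGES]

violations = gate("requests==2.32.0\nnumpy\nleftpad-ai==0.1\n")
print(violations)  # -> ['leftpad-ai']
```

    In a real pipeline this step would run alongside vulnerability and license scanners, so unvetted libraries are caught before deployment rather than in production.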

    Where platform engineering fits

    Platform engineering solves these challenges by embedding AI workloads into the same enterprise-grade environments that already run critical applications. By combining OpenShift, OpenShift AI, and developer portals such as Red Hat Developer Hub, organizations can provide a standardized, self-service platform for both developers and data scientists (as well as customers, infra, ops, SREs…).

    OpenShift AI provides the infrastructure for reproducible training pipelines, GPU-powered workloads, and enterprise-grade scaling. AI artifacts, such as code, models, and datasets, flow through the trusted software supply chain, ensuring that everything deployed is verified, signed, and compliant. With GitOps, the lifecycle of AI workloads is fully automated: versioning, rollbacks, and approvals are handled through familiar Git workflows.

    The nice thing about GitOps is that you get audit trails "for free," because every commit counts as a trace. You only need to ensure that the person attributed to a commit is the one who actually made it; hence, signed commits are part of the trusted software supply chain. You also need to ensure that the deployed artifact is the artifact that was built (not, for example, a tampered copy from a malicious artifact mirror); hence, signed artifacts are part of the trusted software supply chain as well.
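    To make the "audit trails for free" idea concrete, here is a small sketch that renders a deployment history from commit records and flags anything without a verified signature. The commit records are assumptions for illustration; in practice they would come from `git log` on the GitOps repository.

```python
# Sketch: derive an audit trail from GitOps commit history and flag
# unsigned commits. Commit data here is illustrative, not from a real repo.

from dataclasses import dataclass

@dataclass
class Commit:
    sha: str
    author: str
    signed: bool
    message: str

def audit_trail(commits: list[Commit]) -> list[str]:
    """One audit line per commit; unsigned commits are marked for review."""
    lines = []
    for c in commits:
        status = "OK" if c.signed else "UNSIGNED"
        lines.append(f"{c.sha[:7]} {status:8} {c.author}: {c.message}")
    return lines

history = [
    Commit("a1b2c3d4", "alice", True, "Bump fraud-model to v1.3"),
    Commit("e5f6a7b8", "mallory", False, "Point image at external mirror"),
]
for line in audit_trail(history):
    print(line)
```

    The second entry is exactly the case signed commits exist to catch: a change whose claimed author cannot be verified.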

    The developer portal complements all of this by making AI capabilities discoverable and accessible. Instead of filing tickets, developers request GPU environments, training pipelines, or model endpoints through self-service templates. Teams can also reuse existing services or depend on standardized blueprints (that is, software templates). For example, a bank that has already built a fraud detection model can publish it in the portal. Other teams can consume it directly, avoiding duplication and ensuring consistent, compliant AI across the organization.

    Data platforms as part of AI-ready platforms

    AI without data is like a car without fuel. A critical part of an AI-ready platform is the ability to deliver the right data in the right shape for the right audience. A one-size-fits-all data layer rarely works in practice, because developers, data scientists, and compliance teams all have different needs.

    A mature platform therefore supports multiple types of data delivery:

    • Data scientists might want simple outputs such as CSV files or Parquet snapshots that can be quickly ingested into notebooks.
    • Application developers need resilient APIs or event-driven streams they can integrate directly into services.
    • AI development teams often rely on anonymized or synthetic datasets to train and test models safely, without exposing sensitive information. This can then be explored in, for example, cloud environments, while sensitive data remains on-premises, highlighting the need for a hybrid or multicloud strategy in your platform architecture.
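    The three delivery shapes above can be sketched from a single dataset: a CSV snapshot for notebooks, a JSON payload for application APIs, and an anonymized copy for AI teams. Field names and the hashing scheme are illustrative assumptions.

```python
# One dataset, three audience-specific deliveries. The transaction fields
# and the truncated-hash anonymization are assumptions for illustration.

import csv
import hashlib
import io
import json

transactions = [
    {"account": "NL91ABNA0417164300", "amount": 120.50, "merchant": "grocer"},
    {"account": "NL02RABO0123456789", "amount": 15.00, "merchant": "cafe"},
]

def to_csv(rows):
    """Snapshot for data scientists: plain CSV, ready for a notebook."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=rows[0].keys())
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()

def to_api(rows):
    """Payload for application developers: JSON the API layer would serve."""
    return json.dumps(rows)

def anonymize(rows):
    """Copy for AI teams: account numbers replaced by stable hashes."""
    return [
        {**r, "account": hashlib.sha256(r["account"].encode()).hexdigest()[:12]}
        for r in rows
    ]

print(to_csv(transactions).splitlines()[0])   # CSV header row
print(anonymize(transactions)[0]["account"])  # no raw account number leaves the platform
```

    Because the hash is stable, the anonymized dataset still supports joins and per-account features during training, without exposing the underlying identifiers.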

    Solving the frustration gap

    Application developers often work against resilient APIs with strict SLAs and KPIs, where every change must go through design reviews, testing, and quality control. For production systems, this is essential. But for data scientists experimenting with models, waiting weeks for a schema update or a new endpoint is unacceptable. This difference in requirements, and in how each team views them, often creates frustration between these teams (and others besides).

    A data platform based on platform engineering principles allows organizations to handle this tension. You can keep your mission-critical APIs stable and resilient, while at the same time offering "quick fixes" in parallel for exploratory use cases. For example, when a new field is needed for an AI experiment, instead of waiting for the API team's full release cycle, the data platform can deliver a simple CSV or Parquet export in a matter of hours.

    This dual approach means:

    • Application developers keep consuming robust APIs with high availability guarantees.
    • Data scientists get the agility they need to experiment quickly, without being blocked by production release cycles.
    • Platform teams stay in control, because both flows are managed, governed, and monitored through the same (CDC-based) pipelines.

    The role of CDC patterns

    Patterns like change data capture (CDC) make this possible. Think about Kafka + Debezium and optionally Camel. Rather than building brittle point-to-point integrations or duplicating databases, CDC streams changes once from the source and allows multiple consumers to access the data in different forms. A CDC stream can power a high-availability API for customer-facing applications, while at the same time feeding an anonymized dataset into a data lake for AI training, and exporting a quick CSV for a data scientist's experiment.
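    The fan-out described above can be sketched in a few lines: one change event, streamed once, shaped differently by each registered consumer. In production, Debezium would emit these events to Kafka topics; here plain callbacks stand in for the consumers, and the event fields follow Debezium's change-event shape (`op`, `source`, `after`).

```python
# Sketch of CDC fan-out: stream a change once, let each consumer shape it
# for its own audience. Callbacks stand in for Kafka consumers here.

change_event = {
    "op": "u",  # Debezium-style operation: c(reate)/u(pdate)/d(elete)
    "source": {"table": "payments"},
    "after": {"id": 42, "amount": 99.0, "customer": "c-123"},
}

consumers = []

def consumer(fn):
    """Register a function as a subscriber to the change stream."""
    consumers.append(fn)
    return fn

@consumer
def update_api_cache(event):
    # customer-facing API: keep the validated record hot
    return ("api-cache", event["after"]["id"])

@consumer
def feed_data_lake(event):
    # mask the customer identifier before it reaches the lake
    masked = {**event["after"], "customer": "***"}
    return ("data-lake", masked)

@consumer
def csv_export(event):
    # quick extract for a data scientist's experiment
    row = event["after"]
    return ("csv", ",".join(str(row[k]) for k in ("id", "amount")))

def publish(event):
    """Stream the change once; every consumer receives the same event."""
    return [fn(event) for fn in consumers]

for sink, payload in publish(change_event):
    print(sink, payload)
```

    The point of the pattern is that adding a fourth consumer (say, a fraud-scoring stream) touches nothing upstream: the source database and existing consumers are unaffected.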

    Example: Accelerating fraud detection in banking

    Consider a large bank developing fraud detection models. The core payment APIs must remain highly resilient, with strict SLAs and regulatory oversight. Adding new fields to these APIs requires design reviews, impact assessments, and weeks of testing before release.

    Meanwhile, the bank's AI team wants to experiment with new fraud signals based on transaction metadata. Instead of waiting weeks for the production API update, the platform team uses CDC pipelines to stream the same data to different destinations:

    • Resilient APIs continue to serve validated data for customer-facing applications.
    • Anonymized CSV extracts are generated within hours for data scientists, giving them the agility to experiment quickly and to validate the current data structures (e.g., are data fields missing? are data relationships missing? …), which can result in new requirements for the API layer.
    • Masked data lakes are populated for model training and long-term analytics.
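    That "validate the current data structures" step can be made mechanical: check an extract against the fields the fraud signals need, and let the gaps become new requirements for the API team. The field names below are illustrative assumptions.

```python
# Sketch: compare a quick data extract against the fields a fraud model
# needs; missing fields become requirements for the API layer.
# Field names are hypothetical.

REQUIRED_FIELDS = {"amount", "merchant_category", "device_id", "geo"}

extract = [
    {"amount": 120.5, "merchant_category": "5411", "geo": "NL"},
    {"amount": 15.0, "merchant_category": "5812", "geo": "BE"},
]

def missing_fields(rows, required):
    """Return the required fields absent from every row of the extract."""
    present = set()
    for row in rows:
        present |= row.keys()
    return sorted(required - present)

gaps = missing_fields(extract, REQUIRED_FIELDS)
print(gaps)  # -> ['device_id']
```

    Instead of discovering the gap weeks into a production integration, the AI team can file the missing `device_id` requirement on day one.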

    The result: the AI team can validate hypotheses and train fraud models in days instead of months, while the core systems remain compliant and stable. Both innovation speed and enterprise governance are preserved.

    By treating data delivery as part of the platform, enterprises can reduce friction, accelerate AI development, and still maintain governance and trust across all data products. In this regard, I often advocate for treating your platform as your most important core product, with a dedicated product owner.

    Enterprise benefits

    The combination of OpenShift, OpenShift AI, GitOps, Developer Hub, and a mature data platform creates an AI-ready foundation. This delivers several key benefits to enterprises:

    • Reproducibility: AI pipelines are defined as code and run in containerized environments, ensuring that models can be retrained and audited at any time.
    • Scalability: AI services can handle thousands of concurrent users or workloads, scaling dynamically with Kubernetes-native capabilities.
    • Compliance and trust: The trusted software supply chain and GitOps provide an auditable history of how models were trained, deployed, and updated, supporting EU AI Act and DORA requirements.
    • Self-service innovation: Developers and data scientists get frictionless access to AI resources and datasets, while platform teams maintain governance and cost control.

    For example, an e-commerce company using GitOps for AI can quickly roll back a biased recommendation model to a previous version, while training a corrected model. At the same time, its developers consume CDC-driven APIs, while its AI team trains on anonymized datasets: all powered by the same hybrid- and/or multi-cloud platform. This not only limits business risk but also ensures regulators can trace every change.

    Conclusion

    AI in the enterprise is not just about building smarter models; it is about creating smarter platforms. Platform engineering principles (such as standardization, self-service, GitOps-driven automation, trusted supply chains, and flexible data platforms) are what transform isolated AI experiments into reliable, production-ready systems.

    With OpenShift, OpenShift AI, and Developer Hub, enterprises can deliver AI that is not only innovative but also trustworthy, reproducible, and aligned with regulatory requirements.

    The future of enterprise AI will be powered by platforms that make AI a seamless, governed part of both the software and data lifecycle.
