Building trustworthy AI: A developer's guide to production-ready systems

August 20, 2025
Maarten Vandeperre and Camille Nigon
Related topics: Artificial intelligence, Developer Productivity, DevOps, DevSecOps, GitOps, Platform engineering
Related products: Red Hat AI, Red Hat Enterprise Linux AI, Red Hat OpenShift AI, Red Hat Trusted Application Pipeline, Red Hat Trusted Artifact Signer, Red Hat Trusted Profile Analyzer, Red Hat Trusted Software Supply Chain

    As developers, our primary concerns with AI have often focused on performance, latency, and model accuracy. However, the landscape is maturing. Building successful AI applications now requires a deeper focus on trust, safety, and transparency. This isn't about box-ticking; it's about building better products that users can rely on and that businesses can scale with confidence.

    If you're an AI engineer, back-end developer, platform engineer, or DevOps lead, these principles will fundamentally shape how you develop, document, deploy, and maintain AI systems.

    A risk-based approach to AI development

    A practical way to frame the challenge is to think about AI systems in terms of their potential impact. The more significant the impact an AI system has on a person's life or opportunities, the more robust its design and operational guardrails should be.

    Consider these practical tiers of impact:

    • High impact (sensitive applications): Systems that directly influence critical outcomes. Examples include AI used in hiring, educational assessments, healthcare diagnostics, or credit scoring. These applications demand the highest level of rigor.
    • Moderate impact: Systems where undesirable behavior could cause user frustration or misinformation. Examples include public-facing chatbots, recommendation engines, and virtual assistants.
    • Minimal impact: Systems with low stakes, where errors are inconvenient but not harmful. Examples include spam filters, AI in video games, or internal tools for summarizing non-critical documents.
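
    These tiers are a policy decision, but you can make them executable. Here is a minimal Python sketch of a deployment gate that maps each tier to the guardrails a release must have in place; the tier names and control flags are illustrative assumptions, not a formal standard.

        # Hypothetical risk-tier gate: tier names and required controls
        # are illustrative, not a standard.
        REQUIRED_CONTROLS = {
            "high": {"bias_audit", "explainability", "human_oversight", "audit_logging"},
            "moderate": {"ai_disclosure", "audit_logging", "human_escalation"},
            "minimal": {"audit_logging"},
        }

        def deployment_allowed(tier, implemented):
            """Allow a release only if every control required for its tier is present."""
            missing = REQUIRED_CONTROLS[tier] - implemented
            if missing:
                print(f"Blocked: missing controls for '{tier}' tier: {sorted(missing)}")
            return not missing

        # A CV-scanning tool is high impact, so it needs the full set of guardrails.
        deployment_allowed("high", {"bias_audit", "audit_logging"})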

    Why this matters in your daily workflow

    As a developer (or even a platform engineer, DevOps engineer, and so on), you are at the forefront of implementing these systems. Your design choices have a direct effect on the application's trustworthiness.

    Scenario 1: You build a CV-scanning tool with AI

    Imagine you fine-tune a large language model (LLM) or use embeddings to classify résumés for a job opening. The goal is simple: save time for the HR department.

    This is a high-impact application because it directly influences someone's employment prospects. A poorly designed tool could introduce demographic bias, unfairly filtering out qualified candidates and exposing the business to reputational damage. To build this responsibly, you need to:

    • Document and version your training data: Use tools like DVC or LakeFS (or even just GitOps) to help you trace which dataset was used for which model version.
    • Proactively test for bias: Implement checks to confirm that your model performs equitably across different demographic groups (a minimal check is sketched after this list).
    • Prioritize explainability: Design the system to explain why a particular résumé was flagged as relevant or not. This is no longer a "nice-to-have."
    • Implement human oversight: Build a user interface that allows an HR professional to easily review and override the AI's suggestions.
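
    For the bias checks mentioned above, even a simple selection-rate comparison across groups catches gross problems before a full fairness toolkit enters the picture. Here is a minimal sketch; the threshold and the data shape are illustrative assumptions, not a complete fairness methodology.

        from collections import defaultdict

        # Compare the rate at which candidates from each demographic group
        # are flagged as relevant (a rough demographic-parity check).
        def selection_rates(predictions, groups):
            totals, selected = defaultdict(int), defaultdict(int)
            for pred, group in zip(predictions, groups):
                totals[group] += 1
                selected[group] += pred  # pred is 1 if the resume was flagged relevant
            return {g: selected[g] / totals[g] for g in totals}

        rates = selection_rates([1, 0, 1, 1, 0, 0], ["a", "a", "a", "b", "b", "b"])
        gap = max(rates.values()) - min(rates.values())
        assert gap <= 0.4, f"Selection rates diverge too much across groups: {rates}"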

    In a team setting, these practices become part of your development lifecycle. You might even have a model-governance.yaml file that travels alongside your container file(s), defining data sources, bias test results, and model card information.
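
    There is no standard schema for such a governance file, but a CI step can still refuse to ship a model whose metadata is incomplete. The sketch below assumes the PyYAML package and a hypothetical set of field names.

        import yaml  # PyYAML: pip install pyyaml

        # The required fields are one possible convention for a
        # model-governance.yaml, not a standard format.
        REQUIRED_FIELDS = {"model_version", "dataset_uri", "bias_test_results", "model_card"}

        def validate_governance(path="model-governance.yaml"):
            with open(path) as f:
                doc = yaml.safe_load(f)
            missing = REQUIRED_FIELDS - doc.keys()
            if missing:
                raise ValueError(f"{path} is missing fields: {sorted(missing)}")
            return doc

        # Run this as a pipeline step so an undocumented model never reaches production.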

    Scenario 2: You add a chatbot using a foundation model

    You integrate a chatbot powered by an open source model into your company's support website. Without clear context, users might assume they are chatting with a person.

    This is a moderate-impact application. To maintain user trust, you should:

    • Clearly disclose the AI's role: A simple banner like "You are speaking with an AI assistant" sets the right expectation.
    • Implement robust logging: Use structured logging to record interactions, which is invaluable for debugging unexpected behavior and improving the model (see the sketch after this list).
    • Provide an escape hatch: Always include a clear and easy way for users to escalate the conversation to a human agent.
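
    For the structured logging item above, the standard library is enough to get one JSON object per chatbot turn. A minimal sketch follows; the field names are illustrative, and you should redact or hash user content according to your privacy requirements.

        import json, logging, time

        logger = logging.getLogger("chatbot")
        logging.basicConfig(level=logging.INFO, format="%(message)s")

        # One JSON object per turn makes unexpected behavior easy to query later.
        def log_turn(session_id, user_msg, bot_msg, escalated):
            logger.info(json.dumps({
                "ts": time.time(),
                "session": session_id,
                "user": user_msg,
                "assistant": bot_msg,
                "escalated_to_human": escalated,  # the "escape hatch" in action
            }))

        log_turn("abc123", "Where is my order?", "You are speaking with an AI assistant. Let me check.", False)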

    Scenario 3: You distribute a powerful open source model

    Even if you're fine-tuning and sharing a general-purpose AI (GPAI) model like LLaMA, Granite, or Mistral, you have a role to play in the ecosystem. As others build upon your work, providing clear documentation is critical for safe adoption.

    Best practices include:

    • Publishing detailed model cards: Document the model's intended uses, limitations, training data overview, and known biases.
    • Verifying model integrity: Implement cybersecurity measures to help protect your model weights from tampering before distribution (for example, check out Red Hat Trusted Software Supply Chain; a checksum sketch follows this list).
    • Tracking modifications: If you fine-tune a base model, document what you changed and why. A README.md file in your GitHub repo detailing the fine-tuning process and data can prevent misuse downstream.
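
    Full signing workflows (Sigstore, Red Hat Trusted Artifact Signer) go much further, but the core integrity idea is small enough to sketch: publish a digest of the weights in your model card and verify it before loading. This is a minimal illustration, not a substitute for signed artifacts.

        import hashlib

        def sha256_of(path, chunk_size=1 << 20):
            digest = hashlib.sha256()
            with open(path, "rb") as f:
                while chunk := f.read(chunk_size):
                    digest.update(chunk)
            return digest.hexdigest()

        def verify_weights(path, published_digest):
            # Fail closed: refuse to load weights whose digest doesn't match.
            if sha256_of(path) != published_digest:
                raise RuntimeError(f"Model weights at {path} do not match the published digest")

        # verify_weights("model.safetensors", "<digest from the model card>")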

    It's not just what you build, it's how you build it

    This shift toward trustworthy AI elevates the importance of process, traceability, and accountability. (Doesn't this remind you of GitOps?) We must treat our AI systems like any other piece of mission-critical infrastructure. This means embedding governance directly into your development workflow, an evolution of DevSecOps for the AI era. (Doesn't this remind you of platform engineering and developer portals?)

    Key practices include:

    • Versioning datasets as rigorously as you version code (a minimal lineage record is sketched after this list).
    • Logging model inputs and decisions, especially for high-impact systems.
    • Automating documentation for your inference pipelines.
    • Building UIs that feature explainability widgets, showing users the "why" behind a decision.
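
    The first two practices can start as something as small as a lineage log that ties a dataset digest and a Git commit to each model version. A minimal sketch, with illustrative file and field names:

        import hashlib, json, subprocess, time

        def record_lineage(dataset_path, model_version):
            with open(dataset_path, "rb") as f:
                dataset_digest = hashlib.sha256(f.read()).hexdigest()
            commit = subprocess.check_output(["git", "rev-parse", "HEAD"], text=True).strip()
            entry = {
                "ts": time.time(),
                "model_version": model_version,
                "dataset_sha256": dataset_digest,
                "git_commit": commit,
            }
            # Append-only JSONL: one auditable record per training run.
            with open("lineage.jsonl", "a") as f:
                f.write(json.dumps(entry) + "\n")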

    Is this a blocker for innovation?

    On the contrary, these practices can foster a more mature and sustainable AI ecosystem. Just as security best practices helped make software more resilient, a focus on responsible AI offers a clear path to more robust and widely adopted applications.

    We can expect to see the rise of:

    • New tooling for tracking model lineage and data provenance.
    • Frameworks that automatically generate model cards and transparency reports.
    • CI/CD pipelines that incorporate automated checks for bias, fairness, and explainability (again, check out Red Hat Trusted Software Supply Chain).

    How Red Hat prepares you for this new reality (in my opinion)

    Building trustworthy AI requires deep control over your entire stack. Here's how I believe that Red Hat's portfolio can give you the control needed to innovate responsibly.

    Full control over your AI runtime and tooling stack

    With Red Hat, you control the entire environment. You decide the versions of your container platform, AI frameworks, libraries, and models, and you decide when to upgrade. This is critical because even a minor version bump in an inference server can alter model behavior, potentially breaking downstream systems or eroding user trust. In contrast, many AI-as-a-Service offerings push updates without notice. By applying platform engineering concepts with OpenShift, you can provide stable, abstracted services to developers while a central platform team maintains governance and consistency.

    Full control over your AI environment

    Red Hat OpenShift allows you to run AI workloads anywhere: on-premises, in a private cloud, or in air-gapped environments. This is essential when data sovereignty is critical, especially in sensitive industries like finance or healthcare. This control prevents dependency on a single public cloud provider and enables your data to stay where you need it.

    Transparent, open (source) models

    Navigating the distinctions between terms like "open," "open source," and "transparent" for AI models can be complex and is often a sensitive topic. These classifications can also change as new model versions are released.

    To assess a model's transparency and avoid the risks of a "black box" system where the training data and inner workings are hidden, consider the following:

    • Model versus source: What is the specific difference between an "open model" and an "open source model"?
    • Traceability: Is the composition of the model, including its training data and hyperparameters, fully documented?
    • Auditability: Is there clear documentation and audit information available regarding the model's training process and behavioral assessments?

    Understanding these aspects is crucial for mitigating the risks associated with closed-source models, especially in applications where explainability is a requirement.

    Built-in auditing and lifecycle management

    Using GitOps and Red Hat tools like OpenShift Pipelines, model serving technologies, and the Red Hat OpenShift AI workbench, you can:

    • Track every model version, dataset, and deployment in a clear audit trail.
    • Log approvals and changes with full traceability.
    • Monitor model performance in production and roll back instantly if issues arise.

    Security from day 1

    Red Hat embeds security throughout the AI lifecycle:

    • Trusted Software Supply Chain (and this is the last time I will mention it): Sign Git commits, container images, and model artifacts to trace every change. Scan dependencies and container images for vulnerabilities early in the CI/CD pipeline.
    • Robust access control: Use RBAC and network policies to control who can access models and data.
    • Secure endpoints: Leverage service mesh and API gateways to secure inference endpoints.

    An example developer workflow for building trustworthy AI with Red Hat

    1. Fine-tune an AI model (open source or otherwise) using OpenShift AI pipelines.
    2. Track all artifacts in Git, including datasets, model configurations, and output metrics.
    3. Deploy multiple model versions with OpenShift AI for A/B testing and safe rollbacks.
    4. Store evidence for audits (for example, data lineage, training logs, and test results) using integrated tools like TrustyAI; a generic evidence bundle is sketched after this list.
    5. Secure and monitor inference endpoints using Red Hat OpenShift Service Mesh, Red Hat OpenShift API Management, and observability tools.
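
    Step 4 deserves a concrete shape. The layout below is a generic, illustrative convention for an audit-evidence bundle, not a TrustyAI or OpenShift AI format.

        import json, pathlib, time

        def write_evidence(run_id, lineage, test_results):
            # One directory per training/deployment run, with a single manifest.
            bundle = pathlib.Path("evidence") / run_id
            bundle.mkdir(parents=True, exist_ok=True)
            (bundle / "manifest.json").write_text(json.dumps({
                "run_id": run_id,
                "created": time.time(),
                "lineage": lineage,            # e.g., dataset hashes, git commit
                "test_results": test_results,  # e.g., bias and accuracy metrics
            }, indent=2))
            return bundle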

    Everything remains under your control. No data leakage, no unexpected model behavior, no third-party magic.

    Final thoughts

    If you're serious about building AI systems that are safe, explainable, and trustworthy, you need the freedom to innovate without giving up control. Adopting responsible AI practices isn't a barrier; it's an opportunity to build better, more reliable products. With the right tools, you can lead the way in creating the future of responsible AI.
