
A guide to AI code assistants with Red Hat OpenShift Dev Spaces

January 28, 2026
Mokhtar Alarhabi
Related topics: Artificial intelligence, Developer productivity, Developer tools
Related products: Red Hat AI, Red Hat OpenShift AI

    AI code assistants have emerged as powerful tools that are changing how developers write, edit, and debug code. An AI code assistant is powered by a large language model (LLM) trained on, among other things, billions of lines of public code, making it a capable pair programming partner. An AI code assistant can autocomplete partially written code, find bugs, explain and summarize a codebase, generate documentation, convert code between languages, and even scaffold an entire project. A recent McKinsey study reported double-digit productivity growth resulting from the efficiency gains AI code assistants provide.

    Red Hat OpenShift Dev Spaces and AI-assisted development

    Before integrating an assistant, you need a development environment. Red Hat OpenShift Dev Spaces is a cloud development environment (CDE), included in your Red Hat OpenShift subscription, that lets developers remotely code, run, and test any application on OpenShift with VS Code or JetBrains IDEs. Key benefits include:

    • Dev environment as code: Dev Spaces uses YAML (in the file devfile.yaml) to define development environments as code (languages, runtimes, tools, and so on), so every developer works in the same reproducible environment; a minimal sketch follows this list. This nearly eradicates the "works on my machine" problem while also reducing operational strain.
    • AI guardrails for dev environments: AI agents can generate code that breaks dependencies or corrupts configurations. With Dev Spaces, these errors are trivial to recover from: you can instantly revert to a clean slate by redeploying from your devfile. Compare this to a local workstation, where a single bad output could leave you debugging your operating system for hours.
    • Development in air-gapped environments: For organizations with strict security requirements, Dev Spaces can be deployed in a restricted, air-gapped environment while pointing to internal image and extension registries. This restricts developers (and AI agents) to pre-approved runtimes and plug-ins, which can prevent common vulnerabilities in agentic assistants such as prompt injection.
    • Quick onboarding and switching between projects: Because all configuration and dependencies are automatically pre-installed from the devfile when a workspace boots up, onboarding a developer with the correct environment setup is practically instant. This is especially useful for contractors, offshore developers, and other teams with high turnover.
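    To make this concrete, here is a minimal, illustrative devfile.yaml. The workspace name, container image, and command are placeholder assumptions for a generic project, not values this article prescribes:

    ```yaml
    # A minimal devfile sketch; every name, image, and command here is a
    # placeholder for whatever your project actually needs.
    schemaVersion: 2.2.0
    metadata:
      name: my-ai-ready-workspace
    components:
      - name: tools
        container:
          image: quay.io/devfile/universal-developer-image:ubi8-latest  # dev tooling image
          memoryLimit: 4Gi
    commands:
      - id: run-tests
        exec:
          component: tools              # runs inside the container defined above
          commandLine: make test
          workingDir: ${PROJECT_SOURCE}
    ```

    Because the entire environment is captured in this one file, reverting to a clean slate after a bad AI edit is just a matter of restarting the workspace from it.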

    In short, OpenShift Dev Spaces provides a stable, reproducible development environment by moving development to the cloud, eliminating configuration overhead, and allowing the use of code assistants without risking local system integrity.

    With the developer environment established, it's time to integrate an AI assistant.

    Cloud and on-prem models: Balancing security with convenience

    First, you must choose between a cloud-hosted model and a local model as the underlying LLM for your AI code assistant. There are advantages and disadvantages to both.

    Cloud-hosted models

    A cloud model is hosted and managed by a third-party vendor (such as Google Vertex AI, AWS Bedrock, or OpenAI). Your code assistant sends API requests to the provider's servers, which return AI-generated suggestions.

    Advantages:

    • Low barrier to entry: The provider handles all the infrastructure, hardware, and model maintenance. This keeps upfront costs low and makes it easy to scale up or down.
    • Access to specific models: Some models on the market today are not open source. If you require access to those models specifically, you have to go through their provider.

    Disadvantages:

    • Data privacy and security risks: Your source code is sent to a third-party server, creating potential security risks. This is a deal-breaker for any organization with sensitive intellectual property or in regulated industries.
    • Model changes: A closed-source model can be deprecated or removed at any time without notification or recourse, so you could be forced onto a new model that doesn't behave as expected with your specific configurations.

    On-prem models

    An on-premises model runs entirely within your own infrastructure, for example on Red Hat OpenShift with Red Hat OpenShift AI. The model and the data it processes never leave your network.

    Advantages:

    • Maximum privacy and customized security: All code and data remain within your private network, eliminating third-party data exposure. You have full control over your data and how it is accessed.
    • Air-gapped operation: Because these models can be air-gapped when used with a disconnected Dev Spaces instance, you can use a code assistant while mitigating common vulnerabilities such as prompt injection from the internet.
    • Customization and control: You aren't locked into a vendor or a specific model configuration. Instead, you can pick and choose the model, infrastructure, and setup that works for you.

    Disadvantages:

    • High initial investment: Deploying an on-prem LLM can require a large upfront investment in powerful server hardware, especially GPUs.
    • Model availability: Some models are not available for self-hosting, so you can't use them in an on-prem instance.

    Choosing a code assistant

    Finding a code assistant that works for you is important, because some assistants limit which models you can run. There are plenty of choices out there. The following are some common ones validated with Dev Spaces, though this is by no means a complete list:

    | Code assistant | On-prem models via RHOAI | Pay structure | Supported IDEs in Dev Spaces | Notes |
    | --- | --- | --- | --- | --- |
    | Kilo/Cline/Roo | Yes | Bring your own API key | VS Code and JetBrains (Kilo and Cline) | Fully client-side and open source |
    | GitHub Copilot | No | Subscription | Remote SSH | Local VS Code connected to Dev Spaces |
    | GitHub Copilot CLI, Claude Code, Gemini CLI | No | Subscription or BYOK | VS Code and JetBrains IDEs | Interact through the terminal |
    | Cursor IDE | No | Subscription | Remote SSH | Local Cursor connected to Dev Spaces |
    | Amazon Kiro IDE | No | Subscription | Remote SSH | Local Kiro connected to Dev Spaces |
    | Continue.dev | Yes | Bring your own API key | VS Code | Fully client-side and open source |

    Nearly all choices, even ones not in this list, work with Dev Spaces, either through the standard Open VSX extension registry (in the Web IDE) or over SSH for agentic IDEs (Desktop IDE). Figure 1 illustrates how models are configured for either the native Web IDE or the Desktop IDE using remote SSH:

    Figure 1: When using the Web IDE, your client runs within your OpenShift cluster, and requests are sent to a local model; for a hosted model, API calls are sent directly to the model provider. When using the Desktop IDE, your client connects to your OpenShift cluster over SSH. If you're using an external model, API calls are sent to a third-party server, which forwards requests to the model provider.

    Configuring a code assistant with an on-prem model such as Red Hat OpenShift AI

    Assuming you've chosen a local model running on premises with Red Hat OpenShift AI, you can connect your code assistant using an OpenAI-compatible endpoint.

    Serve a model on OpenShift AI

    As of Red Hat OpenShift AI 3.0, you can choose an LLM from the catalog and deploy it. Alternatively, you can deploy your LLM as explained in this guide: https://github.com/IsaiahStapleton/rhoai-model-deployment-guide
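    Under the hood, OpenShift AI serves models through KServe, so a deployment ultimately materializes as an InferenceService resource. The following is a rough sketch only; the model name, runtime, and storage URI are hypothetical placeholders, not values from this article:

    ```yaml
    # Illustrative KServe InferenceService for an LLM served with a vLLM runtime.
    # The name, runtime, and storageUri below are assumed placeholders.
    apiVersion: serving.kserve.io/v1beta1
    kind: InferenceService
    metadata:
      name: granite-8b-code-instruct   # becomes the model ID you reference later
    spec:
      predictor:
        model:
          modelFormat:
            name: vLLM                 # serve with a vLLM-based runtime
          runtime: vllm-serving-runtime
          storageUri: oci://registry.example.com/models/granite-8b-code-instruct
          resources:
            limits:
              nvidia.com/gpu: "1"      # LLM inference typically needs GPU capacity
    ```

    Whichever path you take, note the deployment name and the inference endpoint URL; you'll need both in the next step.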

    Connect your code assistant to your model

    With your LLM deployed, you must connect it to your code assistant:

    • Provider: In your assistant settings, select OpenAI Compatible.
    • Base URL: Input your Red Hat OpenShift AI inference endpoint and append /v1 to the end (for example, https://api.example.com/openai/v1).
    • API key: Input your token secret.
    • Model ID: Input the exact name you gave your model deployment.
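    Before wiring up the IDE, you can sanity-check the endpoint with a direct request. This is a minimal sketch; the URL, token variable, and model name are placeholders for the values described above:

    ```bash
    # Hypothetical endpoint and model name; substitute your own deployment values.
    curl -s https://api.example.com/openai/v1/chat/completions \
      -H "Authorization: Bearer $RHOAI_TOKEN" \
      -H "Content-Type: application/json" \
      -d '{
            "model": "granite-8b-code-instruct",
            "messages": [{"role": "user", "content": "Write a hello world in Python."}]
          }'
    ```

    If this returns a completion, the same base URL, API key, and model ID will work in your assistant's settings.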

    Configuring a code assistant with a cloud model

    If you've chosen a cloud model provider, here's how to set up a code assistant for it on Dev Spaces. If you're using a subscription-based code assistant, you are typically limited to its cloud models; these steps are for bring-your-own-key (BYOK) assistants.

    Input your API key and other access information

    When you sign up with a model provider, you're issued an API key and other access information. Enter this information into your code assistant's provider settings. Figure 2 shows an example configuration for AWS Bedrock.

    Figure 2: Kilo Code with AWS Bedrock configuration.
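    The exact field names vary by assistant, but for AWS Bedrock they generally map to standard AWS credentials plus a region (and the ID of the Bedrock model you want to use). A hedged sketch of the values involved, shown here as environment variables purely for illustration:

    ```bash
    # Illustrative placeholders only; enter the equivalent values in your
    # assistant's provider settings and keep them out of source control.
    export AWS_ACCESS_KEY_ID="AKIA..."     # IAM access key with Bedrock permissions
    export AWS_SECRET_ACCESS_KEY="..."     # the matching secret key
    export AWS_REGION="us-east-1"          # a region where Bedrock is enabled
    ```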

    Try it out today

    Integrating AI code assistants is no longer just about writing code faster; it's about doing so securely and consistently across your entire team. OpenShift Dev Spaces provides the native guardrails that make AI-assisted development safe and scalable.

    Ready to get started? You can experiment with these workflows risk-free:

    • OperatorHub: If you have an OpenShift subscription, install Dev Spaces from OperatorHub today to build your own AI-ready development platform.
    • Developer Sandbox: Test these assistants with Dev Spaces in the Red Hat Developer Sandbox.
