How to connect OpenShift Lightspeed MCP to your IDE

February 4, 2026
Diego Alvarez Ponce
Related topics: Artificial intelligence, Developer productivity, GitOps, IDEs
Related products: Red Hat OpenShift GitOps, Red Hat OpenShift, Red Hat OpenShift Container Platform, Red Hat OpenShift Lightspeed

    Modern integrated development environments (IDEs) have evolved beyond simple code editors into intelligent platforms powered by AI assistants. These assistants can answer questions, generate code, and help debug issues. But they often lack deep expertise in specialized platforms like Red Hat OpenShift. This is where Model Context Protocol (MCP) comes into play. MCP is an open standard that allows AI assistants to connect with external tools, extending their capabilities far beyond their base knowledge and providing them access to specialized and real-time information.

    In this article, we'll walk through a proof of concept that connects an MCP server for Red Hat OpenShift Lightspeed to your preferred IDE. OpenShift Lightspeed, Red Hat's AI assistant, is purpose-built for OpenShift, grounded in official documentation, and capable of interacting with your cluster. By exposing OpenShift Lightspeed through an MCP server, we bring that expertise directly into your development workflow, allowing you to ask OpenShift questions, generate configurations, and even query cluster resources without leaving your editor.

    Prepare your environment

    To follow along with this article, you'll need the following prerequisites:

    • A code editor with an AI assistant that supports MCP, such as Claude Code, Cursor, or VS Code with GitHub Copilot.
    • Access to a running OpenShift cluster.
    • The OpenShift Lightspeed operator installed and connected to a supported LLM provider.
    • (Optional) To query live cluster resources, enable the Cluster Interaction feature in the OLSConfig custom resource; we'll return to this setting later in the article.

    For reference, my environment uses Cursor as the IDE and an OpenShift 4.20 cluster with OpenShift Lightspeed 1.0.8, configured to communicate with a GPT-4 model hosted on Azure OpenAI.

    Additionally, you'll need to clone the MCP server repository to your local machine. This repository contains all the logic and dependencies required to run the OpenShift Lightspeed MCP server:

    git clone https://github.com/thoraxe/ols-mcp.git

    Make note of the full path to this directory, because you'll need it later to configure your IDE.
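
    The MCP server is launched with uv, the Python package and project manager you'll reference in the IDE configuration later. If you don't already have it, one common way to install it is the upstream installer (check the uv documentation for alternatives such as pip or your system package manager):

    curl -LsSf https://astral.sh/uv/install.sh | sh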

    Expose the OpenShift Lightspeed service

    We need to expose the OpenShift Lightspeed service so it can be reached from outside the cluster. Apply the following YAML to create a secure route pointing to the lightspeed-app-server service. We use reencrypt TLS termination so that traffic remains encrypted both outside and inside the cluster.

    apiVersion: route.openshift.io/v1
    kind: Route
    metadata:
      name: ols-route
      namespace: openshift-lightspeed
    spec:
      to:
        name: lightspeed-app-server
        weight: 100
        kind: Service
      port:
        targetPort: https
      tls:
        termination: reencrypt
      wildcardPolicy: None
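
    Save the manifest to a file (ols-route.yaml is just an illustrative name) and apply it, then read back the hostname OpenShift generated for the route:

    # Create the route in the openshift-lightspeed namespace
    oc apply -f ols-route.yaml
    # Print the generated hostname
    oc get route ols-route -n openshift-lightspeed -o jsonpath='{.spec.host}'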

    Once applied, you'll get a URL that exposes the OpenShift Lightspeed API, such as https://ols-route-openshift-lightspeed.apps.mycluster.example.com. Keep this handy for the MCP configuration.

    You'll also need an authentication token. Retrieve it by running the following command:

    oc whoami -t

    The output will look something like sha256~NW5MJOynIEABEj_7pSna....
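
    Before touching the IDE, you can optionally sanity-check the route and token from your workstation. The sketch below assumes the OpenShift Lightspeed API exposes its standard /v1/query endpoint; adjust the URL and payload if your deployment differs:

    # Send a test question directly to the exposed OpenShift Lightspeed API
    curl -X POST \
      -H "Authorization: Bearer $(oc whoami -t)" \
      -H "Content-Type: application/json" \
      -d '{"query": "What is a route in OpenShift?"}' \
      https://ols-route-openshift-lightspeed.apps.mycluster.example.com/v1/query

    A JSON answer from OpenShift Lightspeed confirms the route and token work end to end.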

    With both the route URL and the authentication token ready, OpenShift Lightspeed is now accessible from outside the cluster. Let's move on to configuring the IDE.

    Configure your IDE to use the MCP server

    While the process for configuring MCP servers varies slightly between IDEs, the general approach is similar. I'll walk through the steps in Cursor as an example.

    Open Cursor and select any project you want to use. You'll see the AI assistant interface on the right side of your screen.

    To enable the MCP server, go to Cursor Settings > Tools & MCP > New MCP Server. This will take you to the mcp.json file used in Cursor to configure the MCP servers available to the IDE. If you haven't configured any before, you'll see an empty server list.

    Add the following configuration, adjusting the parameters to match your environment:

    {
      "mcpServers": {
        "openshift-lightspeed": {
          "command": "uv",
          "args": ["--directory", "/path/to/your/folder/ols-mcp", "run", "python", "-m", "ols_mcp.server"],
          "env": {
            "OLS_API_URL": "https://ols-route-openshift-lightspeed.apps.mycluster.example.com",
            "OLS_API_TOKEN": "sha256~NW5MJOynIEABEj_7pSna...",
            "OLS_TIMEOUT": "30.0",
            "OLS_VERIFY_SSL": "true"
          }
        }
      }
    }

    Let's break down the key parameters:

    • args: Passes the absolute path to the cloned ols-mcp repository via the --directory flag. This tells uv where to find the project and how to launch the MCP server.
    • OLS_API_URL: The route URL we created earlier to expose OpenShift Lightspeed. Replace this with your own route.
    • OLS_API_TOKEN: The bearer token obtained from the cluster. This authenticates requests to the OpenShift Lightspeed API.
    • OLS_TIMEOUT: How long (in seconds) to wait for a response before timing out. Increase this if you're on a slow network or expect complex queries.
    • OLS_VERIFY_SSL: Set to true to verify SSL certificates.
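
    You can also launch the server manually from a terminal to confirm it starts cleanly outside the IDE. The values below are the same placeholders used in the mcp.json example, so substitute your own:

    # Export the same environment variables the mcp.json entry sets
    export OLS_API_URL="https://ols-route-openshift-lightspeed.apps.mycluster.example.com"
    export OLS_API_TOKEN="$(oc whoami -t)"
    # Run the MCP server exactly as the IDE would
    uv --directory /path/to/your/folder/ols-mcp run python -m ols_mcp.server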

    Save the file and verify the MCP server is running correctly. In Cursor, open the terminal panel at the bottom of the window. Switch to the Output tab and select MCP: user-openshift-lightspeed from the dropdown menu. Figure 1 displays the server's initialization logs.

    Figure 1: The MCP logs displayed in the Cursor console.

    If the connection is successful, you should see a message indicating that one tool has been discovered: openshift-lightspeed. This confirms the MCP server is running and ready to receive queries from the AI assistant.

    Interact with OpenShift Lightspeed via MCP

    Now we can start using the AI chat interface to interact with our OpenShift Lightspeed instance. Be sure to enable Agent mode so the model is allowed to call tools. The default model in Cursor is Claude.

    Let's start with a simple OpenShift question:

    How do I create a route for the lightspeed-app-server service?

    The AI assistant will likely ask for permission to use the openshift-lightspeed tool. Once you approve it, you will see a response. Here's what OpenShift Lightspeed returned (Figure 2):

    Figure 2: The response to the question: How do I create a route for the lightspeed-app-server service?

    Meanwhile, the following has happened behind the scenes. It's worth noting that this workflow involves two AI models working in sequence:

    1. The IDE's model (Claude, in Cursor's case) receives your question and decides whether to use the openshift-lightspeed tool. It may even rephrase or refine your question before forwarding it to the MCP server.
    2. Your question is sent to the OpenShift Lightspeed service via the MCP server.
    3. OpenShift Lightspeed retrieves relevant sections from the official OpenShift documentation using Retrieval-Augmented Generation (RAG).
    4. The retrieved documentation, along with your question, is sent to the LLM configured in OpenShift Lightspeed (in our case, GPT-4).
    5. The model generates a response grounded in official OpenShift documentation.
    6. The response travels back through the MCP server to your IDE's AI assistant, where Claude may reformat or summarize it before presenting it to you.

    This two-model architecture combines the strengths of both: your IDE's assistant handles the conversation flow and tool orchestration, while OpenShift Lightspeed provides OpenShift-specific expertise backed by official documentation, reducing hallucinations and providing accurate, up-to-date guidance.

    Now let's take it a step further and query live cluster resources. This requires the Cluster Interaction feature to be enabled in your OpenShift Lightspeed instance; a sketch of the relevant setting follows.
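
    For reference, here is a minimal sketch of that toggle in the OLSConfig custom resource (edited, for example, via oc edit olsconfig cluster). The introspectionEnabled field reflects the cluster interaction setting at the time of writing; the exact field name may vary by version, so check the OpenShift Lightspeed documentation for your release:

    apiVersion: ols.openshift.io/v1alpha1
    kind: OLSConfig
    metadata:
      name: cluster
    spec:
      ols:
        # Allows OpenShift Lightspeed to query live cluster resources
        introspectionEnabled: true
        # ...the rest of your existing OLSConfig (LLM provider, etc.) stays as-is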

    Let's ask about the pods running in the cluster:

    Show me the pods running in the openshift-lightspeed namespace

    Figure 3 displays the response:

    Figure 3: The response to the query: Show me the pods running in the openshift-lightspeed namespace.

    As before, the IDE's model decides to call the openshift-lightspeed tool. But this time, something extra happens inside OpenShift Lightspeed. When Cluster Interaction is enabled, OpenShift Lightspeed has access to a Kubernetes MCP server running within the cluster. The model configured in our OpenShift Lightspeed instance calls the necessary tools from that MCP server to retrieve information about the requested cluster resources. The collected data is then sent to GPT-4 to compose a response, which travels back to our IDE.

    Combine MCP servers

    One of the most powerful aspects of MCP is the ability to combine multiple servers in a single workflow. For example, you could add the GitHub MCP server alongside OpenShift Lightspeed to enable a complete GitOps pipeline directly from your IDE.

    Imagine this scenario: As an application developer, you ask the AI assistant to generate a deployment manifest for your application. OpenShift Lightspeed provides the YAML based on best practices and official documentation. Then, in the same conversation, you ask the assistant to push that manifest to your GitOps repository. The GitHub MCP server handles the commit and push. Finally, if Red Hat OpenShift GitOps is configured in the cluster, it automatically detects and synchronizes the change, deploying the application without ever leaving your IDE.
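
    As an illustration, a combined mcp.json might look like the following. The github entry assumes the reference server published as @modelcontextprotocol/server-github and a GitHub personal access token; adapt it to whichever GitHub MCP distribution you use:

    {
      "mcpServers": {
        "openshift-lightspeed": {
          "command": "uv",
          "args": ["--directory", "/path/to/your/folder/ols-mcp", "run", "python", "-m", "ols_mcp.server"],
          "env": {
            "OLS_API_URL": "https://ols-route-openshift-lightspeed.apps.mycluster.example.com",
            "OLS_API_TOKEN": "sha256~NW5MJOynIEABEj_7pSna...",
            "OLS_TIMEOUT": "30.0",
            "OLS_VERIFY_SSL": "true"
          }
        },
        "github": {
          "command": "npx",
          "args": ["-y", "@modelcontextprotocol/server-github"],
          "env": {
            "GITHUB_PERSONAL_ACCESS_TOKEN": "<your-github-token>"
          }
        }
      }
    }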

    That was just one example, though. The MCP ecosystem is growing rapidly, with new servers appearing every day. By combining them with OpenShift Lightspeed, you can build powerful workflows and find new ways to optimize your OpenShift operations.

    Final thoughts

    This article explored a proof of concept. Please be aware that the described feature and configuration are not currently supported. The OpenShift Lightspeed team is actively seeking your feedback. Is this something you would like to see officially supported? Contact us at openshift-lightspeed-contact-requests@redhat.com. We’d love to hear your thoughts.

    Last updated: April 29, 2026
