Manage AI context with the Lola package manager

From prompt engineering to context engineering

April 8, 2026
Daniele Martinoli, Igor Brandao, Katie Mulliken
Related topics:
Artificial intelligence, Developer productivity, IDEs, Open source
Related products:
Red Hat AI

    As developers build agents that do more than just talk, they increasingly rely on tools like the Model Context Protocol (MCP) and agent skills—portable packages of markdown and scripts that provide LLMs with on-demand procedural knowledge.

    While MCP and agent skills provide a standardized framework for these agents, developers still lack a unified way to distribute and version them.

    Lola closes this gap by acting as a universal package manager for AI context, letting you treat that context as versioned, auditable code.

    Lola includes two major components: modules and marketplaces.

    Lola modules

    A Lola module (or lola for short) is a portable package of AI context that you can distribute and install across different AI assistants. Instead of managing disparate files, a module bundles skills, command files, agent instructions (such as AGENTS.md), and MCP servers into a single cohesive unit. This lets you package everything a specialized AI agent needs in one place.
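    To make this concrete, a module's contents might look something like the sketch below. The layout is purely illustrative (the file names and structure here are assumptions, not Lola's actual on-disk format); see the Lola repository for the authoritative module structure. The skill and command names reused here appear later in this demo.

    my-module/
      AGENTS.md                  # agent instructions shared across assistants
      skills/
        create-learning-path/    # one directory per skill
          SKILL.md
      commands/
        revise-claude-md.md      # a slash-command definition
      mcp.json                   # MCP server configuration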

    Lola marketplaces

    A Lola marketplace (or Lola market) is a curated catalog of modules. It allows you to search for and install AI context modules without hunting down individual repositories. For enterprises, these marketplaces act as a centralized registry to share and distribute agent instructions at scale while maintaining control over the content.
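    As an illustration, a marketplace manifest such as the demo.yaml used later in this guide might map module names to their source repositories. The schema below is an assumption for illustration only: the repository URLs, versions, and tags come from the demo output shown in the following sections, but the actual manifest format is defined by the Lola project.

    name: demo
    description: Demo marketplace with curated community plugins
    version: 0.1.0
    modules:
      - name: claude-md-management
        version: 1.0.0
        repository: https://github.com/anthropics/claude-plugins-official.git
        tags: [claude-md, project-memory, maintenance, documentation]
      - name: teaching
        version: 1.0.0
        repository: https://github.com/cursor/plugins.git
        tags: [teaching, learning, education, cursor]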

    Demo workflow

    This guide follows a complete end-to-end workflow using Lola. We will:

    • Install a custom marketplace containing plugins and skills sourced from the official Claude Code and Cursor marketplaces.
    • Register the marketplace with Lola and inspect the available modules.
    • Install the modules into target AI assistants.
    • Verify that the new skills and context are ready for use in the assistant's environment.

    Install Lola

    Run the following command to install Lola using uv, the recommended installation method:

    $ uv tool install git+https://github.com/RedHatProductSecurity/lola

    Once installed, verify the version to ensure the lola CLI is ready:

    $ lola --version
    lola 0.4.2

    Add the Lola marketplace

    For this demo, we have prepared a marketplace containing existing plugins from Claude and Cursor. First, add the demo marketplace using the following command:

    $ lola market add demo https://raw.githubusercontent.com/dmartinol/lola-demo/main/demo.yaml

    After adding the marketplace, you can view the available modules:

    $ lola market ls demo
    demo (enabled)
      Demo marketplace with curated community plugins
      Version 0.1.0
      claude-md-management  v1.0.0
        Tools to maintain and improve CLAUDE.md files - audit quality,
        capture session learnings, and keep project memory current.
        Tags: claude-md, project-memory, maintenance, documentation
      teaching  v1.0.0
        Teaching workflows: skill mapping, practice plans, and feedback
        loops with personalized learning roadmaps and retrospectives.
        Tags: teaching, learning, education, cursor

    Install the modules

    Use the lola install command to add selected modules to your target assistants:

    $ lola install claude-md-management
    Found 'claude-md-management' in 'demo'
    Repository: https://github.com/anthropics/claude-plugins-official.git
    Added claude-md-management
    ? Select assistants to install to (Space to toggle, Enter to confirm): ['claude-code', 'cursor']
    Installing claude-md-management -> /private/tmp/lola-demo
      claude-code (1 skill, 1 command)
      cursor (1 skill, 1 command)
    Installed to 2 assistants
    $ lola install teaching
    Found 'teaching' in 'demo'
    Repository: https://github.com/cursor/plugins.git
    Added teaching
    ? Select assistants to install to (Space to toggle, Enter to confirm): ['claude-code', 'cursor']
    Installing teaching -> /private/tmp/lola-demo
      claude-code (2 skills)
      cursor (2 skills)
    Installed to 2 assistants

    The lola list command shows the modules installed for each target assistant:

    $ lola list
    Installed (2 modules)
    claude-plugins-official
    - scope: project
      path: "/private/tmp/lola-demo"
      assistants: [claude-code, cursor]
    plugins
    - scope: project
      path: "/private/tmp/lola-demo"
      assistants: [claude-code, cursor]

    Validate the installation

    After launching Claude Code from your project folder, verify that your project skills are correctly installed and functional by running the /skills command, as shown in Figure 1.

    Figure 1: The Claude Code command line interface displaying a list of installed project skills.

    Then, trigger the create-learning-path skill with a prompt matching its metadata, as shown in Figure 2.

    Figure 2: The terminal displays an interactive menu to select a topic or skill for the new learning path.

    After launching Cursor from the same folder, you can run the equivalent validations. First, verify that the revise-claude-md command is available, as shown in Figure 3.

    Figure 3: The Cursor IDE executing the /revise-claude-md command to analyze the current workspace.

    Then, trigger the learning skill again with a matching prompt, shown in Figure 4.

    Figure 4: The Cursor IDE prompts the user to select a learning topic to initialize the skill.

    Uninstall the modules

    As a package manager, Lola handles the full lifecycle of your skills and context modules. When you finish your work, you can remove a module with the uninstall command:

    $ lola uninstall claude-md-management

    We will cover more options, such as updates and custom skill bootstraps, in a future post. For now, we have introduced the primary commands for managing AI packages with Lola.

    Get involved

    Lola provides a way to bring auditable, versioned AI skills to your workflow without vendor lock-in. The project is open source under the GPL-2.0-or-later license. You can explore the code and contribute your own modules at the Red Hat Product Security Lola GitHub repository.

    Next steps for your versioned AI context

    We've shown how Lola separates concerns by using a custom marketplace and modules. As an author, you can curate and version AI context in a central repository. As a developer, you can use those capabilities through a standardized interface—regardless of the assistant you use.

    Now that you have organized your AI context locally with Lola, you can deploy your agents to Red Hat OpenShift to reliably execute these versioned skills in a production environment. Learn more about hosting and scaling your AI models:

    • Red Hat OpenShift AI documentation: Infrastructure for hosting and serving agents
    • Integrate Claude Code with Red Hat AI inference server on OpenShift: Connect your local AI assistants to production-ready, locally hosted models
    • Deploy an LLM inference service on OpenShift AI: Learn how to scale the models that power your agents

