
Supercharging AI isolation: microVMs with RamaLama & libkrun

July 2, 2025
Eric Curtin, Daniel Walsh, Sergio Lopez Pascual, Jake Correnti
Related topics:
Artificial intelligence, Linux, Security
Related products:
Podman Desktop, Red Hat AI

    In our previous post, we explored how RamaLama revolutionizes AI model management by containerizing models with Podman, providing a robust default security posture. We highlighted the critical need for isolation in the age of widespread AI model deployment, especially because the question of how far to trust a model is rarely a simple yes or no. While containers offer a significant leap in isolating AI models from the host system, the journey towards ultimate security and resource efficiency doesn't end there.

    Today, we're diving deeper into the isolation capabilities of RamaLama by introducing the power of microVMs, leveraged through libkrun and Podman. This approach takes AI model isolation to the next level, merging the best of container agility with the strong security boundaries of traditional virtual machines.

    The microVM advantage for AI

    Traditional virtual machines (VMs) provide strong isolation by running a complete guest operating system, including its own kernel, separate from the host. This offers a high degree of security, but comes with the overhead of increased resource consumption and slower startup times. Containers, on the other hand, share the host kernel, making them lightweight and fast, but with a slightly less stringent isolation boundary.

    microVMs are a game-changer because they strike a balance. They provide the hardware-level isolation of a VM by running a minimal, highly optimized kernel and virtualized hardware, but with boot times and resource footprints comparable to containers. This makes them ideal for workloads that demand both high security and efficient resource utilization, such as AI model inferencing.

    For AI models, microVMs offer several compelling benefits:

    • Enhanced security: Each AI model runs within its own dedicated microVM, providing a strong hardware-isolated boundary. This significantly reduces the attack surface compared to containers that share the host kernel. Even if a vulnerability were exploited within the AI model's environment, the blast radius would be contained within that specific microVM, preventing lateral movement to the host or other models.
    • True multi-tenancy: In scenarios where multiple AI models from different sources or users are running on the same hardware, microVMs ensure complete isolation between them. This is crucial for maintaining data privacy and preventing one model from impacting the performance or security of another.
    • Reduced overhead: Despite offering VM-level isolation, microVMs are designed to be incredibly lightweight with minimal memory overhead and sub-second boot times. This means you can run a higher density of isolated AI models on a single machine without significant performance penalties.

    RamaLama and libkrun: A powerful combination

    RamaLama is now capable of harnessing the power of microVMs through libkrun, a dynamic library that allows programs to easily run processes in a partially isolated environment using KVM virtualization on Linux. Integration with Podman is seamless: you simply specify krun as your OCI runtime.

    This means running your AI models with enhanced microVM isolation is as simple as:

    ramalama serve --oci-runtime krun smollm:135m

    By adding --oci-runtime krun to your ramalama serve command, you're instructing Podman to launch the smollm:135m AI model not just in a container, but within its own lightweight microVM, leveraging the isolation capabilities of libkrun. This provides an additional layer of security beyond traditional containerization, making your AI deployments even more robust.
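
    You can see this extra boundary for yourself. Because krun boots a minimal guest kernel for each workload, the kernel reported inside a krun-backed container differs from the one on the host. The following is a quick sketch, assuming Podman and the krun runtime (crun built with libkrun support) are installed on a Linux host with KVM; the fedora image is used here only as a convenient throwaway for the comparison:

    # With the default runtime (crun), the container shares and reports the host kernel
    podman run --rm fedora uname -r

    # With the krun runtime, the workload runs inside a microVM and reports
    # libkrun's minimal guest kernel instead of the host kernel
    podman run --rm --runtime krun fedora uname -r

    If the two commands print different kernel versions, your krun-backed workloads really are running behind their own kernel rather than sharing the host's.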

    Current limitations and future directions: GPU enablement

    At present, libkrun with Podman primarily supports CPU inferencing and is limited to Linux hosts with KVM available.
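
    Because libkrun drives KVM directly, your host needs a working /dev/kvm and the krun runtime binary installed before RamaLama can use this mode. Here is a quick sanity check; the commands themselves are standard, but how the krun runtime is packaged varies by distribution:

    # libkrun needs access to the KVM device to create microVMs
    ls -l /dev/kvm

    # Check that the krun OCI runtime binary is installed and resolvable
    command -v krun

    # Show which OCI runtime Podman uses by default (krun is selected per
    # invocation via --oci-runtime in RamaLama or --runtime in Podman)
    podman info --format '{{.Host.OCIRuntime.Name}}'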

    However, we are actively working on GPU enablement for libkrun and RamaLama. Our goal is to extend the benefits of microVM isolation to GPU-accelerated AI workloads, allowing you to run even the most demanding models with the highest level of security and performance. This involves complex engineering to efficiently pass through and virtualize GPU resources to the microVMs, and we are committed to bringing this capability to RamaLama users in the near future.

    Conclusion

    RamaLama's commitment to secure AI model management continues to evolve. By integrating microVMs via libkrun and Podman's krun OCI runtime, we're providing an even stronger foundation for running untrusted or sensitive AI models. While CPU-only inferencing is the current scope, our ongoing work on GPU enablement promises a future where robust, isolated, and GPU-accelerated AI model deployments are the norm.

    Stay tuned for more updates as we continue to push the boundaries of secure and efficient AI model deployment with RamaLama!

