How RamaLama runs AI models in isolation by default

February 20, 2025
Daniel Walsh
Related topics: Artificial intelligence, Containers
Related products: Developer Toolset

    Over the last few weeks, we have seen a spike in both users and GitHub stars for RamaLama, an open source project that simplifies AI model management by leveraging OCI containers. (Read How RamaLama makes working with AI models boring for an overview of the project.)

    Coincidentally, this happened around the same time that the DeepSeek AI model was released. We realized that a large number of people were downloading the model and running it with RamaLama because the servers sharing the model became overloaded and started returning 503 errors, which triggered a bug in RamaLama. We quickly fixed the issue and pushed a new release.

    The challenge of AI model security

    There is some controversy about running DeepSeek models from a security point of view, but this is indicative of a larger problem with AI model proliferation. Entities globally, including the U.S. government, are considering how to monitor and potentially restrict the use of DeepSeek applications and models within their territories. The question at the core of this problem is this: Can we trust this AI model or the application that the model runs in?

    This reveals a significant issue with AI models and the applications that run them. With thousands of people experimenting with AI models locally on their laptops, does this present a security issue? Can a given model, DeepSeek or otherwise, be trusted? Could a model cause the software it is running on to start stealing information off your laptop and sending it out to the internet?

    Compounding this are the applications and websites that host many of these models. Consider that a large number of individuals accessed the DeepSeek model through the DeepSeek website and mobile app. In January 2025, the DeepSeek app rocketed to #1 in the iTunes store. This means that individual users are sharing their credentials, their smartphone details, and a myriad of additional information with an untrusted entity as they type into the prompt to test an untrusted model. Many enterprise users and teams have security concerns, whether for geopolitical reasons or for IT security in general.

    RamaLama to the rescue

    RamaLama, however, offers a better way.


    RamaLama defaults to running AI models inside rootless containers using Podman or Docker. These containers isolate the AI models from information on the underlying host. With RamaLama containers, the AI model is mounted into the container as a read-only volume, so the process running the model, llama.cpp or vLLM, is isolated from the host.

    In addition, because ramalama run uses the --network=none option, the container cannot reach the network and leak any information out of the system. Finally, containers are run with the --rm option, which means that any content written while the container is running is wiped out when the application exits.
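
    To make this more concrete, here is a rough sketch of the kind of Podman invocation this corresponds to. It is illustrative only: the container image, host model path, mount point, and runtime command below are assumptions for the example, not the exact values RamaLama generates.

        # Illustrative sketch: approximates the isolation RamaLama requests from
        # the container engine. The image name, paths, and runtime command are
        # placeholders for the example, not RamaLama's exact output.
        podman run --rm --network=none \
            -v ~/models/model.gguf:/mnt/models/model.gguf:ro \
            quay.io/ramalama/ramalama llama-run /mnt/models/model.gguf

    The important parts are the :ro suffix on the volume mount plus the --network=none and --rm flags: the model file cannot be modified from inside the container, nothing can be sent out over the network, and anything written inside the container is discarded on exit.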

    Conclusion

    Here’s how RamaLama delivers a robust security footprint:

    • Container isolation: AI models run within isolated containers, preventing direct access to the host system.

    • Read-only volume mounts: The AI model is mounted in read-only mode, meaning that processes inside the container cannot modify host files.

    • No network access: ramalama run is executed with --network=none, so the model has no outbound connectivity through which information can be leaked.

    • Auto-cleanup: RamaLama runs containers with --rm, wiping out any temporary data once the session ends.

    • No Linux capabilities: RamaLama drops all Linux capabilities, so they cannot be used to attack the underlying host.

    • No new privileges: The Linux kernel's no-new-privileges feature prevents container processes from gaining additional privileges.

    Given these capabilities, RamaLama containerization addresses many of the common risks of testing models.
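
    If you want to verify these settings yourself, one quick check (a sketch, assuming Podman is the engine and a model is currently running in another terminal) is to inspect the container's host configuration:

        # Find the RamaLama container, then inspect its isolation settings.
        # The container ID will differ on your machine, and the exact fields
        # reported can vary between Podman versions.
        podman ps
        podman inspect <container-id> \
            --format '{{.HostConfig.NetworkMode}} {{.HostConfig.CapDrop}} {{.HostConfig.SecurityOpt}}'

    With the defaults described above, you should see a network mode of none, all capabilities dropped, and no-new-privileges among the security options.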

    How to try RamaLama

    Try out RamaLama on your machines by following these installation instructions.
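
    As a quick example of what getting started can look like (the pip package name, the ollama:// transport, and the model name follow the upstream RamaLama documentation, but check the installation instructions for the options that fit your platform):

        # One installation route; assumes Python 3 and pip are available.
        pip install ramalama

        # Pull a small example model and chat with it. By default it runs in a
        # rootless container with a read-only model mount and no network access.
        ramalama run ollama://tinyllama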

    Last updated: March 20, 2025
