How RamaLama runs AI models in isolation by default

February 20, 2025
Daniel Walsh
Related topics:
Artificial intelligenceContainers
Related products:
Developer Tools


    Over the last few weeks, we have seen a spike in both users and GitHub stars for RamaLama, an open source project that simplifies AI model management by leveraging OCI containers. (Read How RamaLama makes working with AI models boring for an overview of the project.)

    Coincidentally, this happened around the same time that the DeepSeek AI model was released. We realized that a large number of people were downloading the model and running it with RamaLama because the servers sharing the model became overloaded and started returning 503 errors, which triggered a bug in RamaLama. We quickly fixed the issue and pushed a new release.

    The challenge of AI model security

    There is some controversy about running DeepSeek models from a security point of view, but this is indicative of a larger problem with AI model proliferation. Entities globally, including the U.S. government, are considering how to monitor and potentially restrict the use of DeepSeek applications and models within their territories. The question at the core of this problem is this: Can we trust this AI model or the application that the model runs in?

    This reveals a significant issue with AI models and the applications that run them. With thousands of people experimenting with AI models locally on their laptops, does this present a security issue? Can a given model, DeepSeek or otherwise, be trusted? Could a model cause the software it runs on to steal information from your laptop and send it out to the internet?

    Compounding this are the applications and websites that host many of these models. Consider that a large number of individuals accessed the DeepSeek model through the DeepSeek website and mobile app. In January 2025, the DeepSeek app rocketed to #1 in Apple’s App Store. This means individual users were sharing their credentials, their smartphone details, and whatever else they typed into the prompt with an untrusted entity, all to test an untrusted model. Many enterprise users and teams have security concerns as well, whether for geopolitical reasons or IT security in general.

    RamaLama to the rescue

    RamaLama, however, offers a better way.

    (Image: The RamaLama llama standing behind a container at the beach, giving a thumbs up.)

    RamaLama defaults to running AI models inside rootless containers using Podman or Docker. These containers isolate the AI models from information on the underlying host. With RamaLama containers, the AI model is mounted into the container as a read-only volume. As a result, the process running the model, llama.cpp or vLLM, is isolated from the host.

    In addition, because ramalama run uses the --network=none option, the container cannot reach the network, so it cannot leak information out of the system. Finally, containers are run with the --rm option, which means that any content written while the container is running is wiped out when the application exits.
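
    Put together, the engine invocation looks roughly like the sketch below. This is an illustration of the flags just described, not RamaLama's exact command line; the image name, model path, and server arguments are placeholder assumptions.

    ```shell
    # Illustrative sketch of the kind of podman command RamaLama constructs.
    # The image name and model path are placeholders; the flags are the
    # isolation options described in the article.
    podman run \
      --rm \
      --network=none \
      --cap-drop=all \
      --security-opt=no-new-privileges \
      -v "$HOME/models/model.gguf:/models/model.gguf:ro" \
      quay.io/ramalama/ramalama:latest \
      llama-server --model /models/model.gguf
    ```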

    Conclusion

    Here’s how RamaLama delivers a robust security footprint:

    • Container isolation: AI models run within isolated containers, preventing direct access to the host system.

    • Read-only volume mounts: The AI model is mounted in read-only mode, meaning that processes inside the container cannot modify host files.

    • No network access: ramalama run is executed with --network=none, so the model has no outbound connectivity through which information can be leaked.

    • Auto-cleanup: RamaLama runs containers with --rm, wiping out any temporary data once the session ends.

    • No access to Linux capabilities: RamaLama drops all Linux capabilities, leaving no capability-based route to attack the underlying host.

    • No new privileges: A Linux kernel feature prevents container processes from gaining additional privileges.

    Given these capabilities, RamaLama containerization addresses many of the common risks of testing models.
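
    These properties are easy to spot-check with podman directly. The following uses a throwaway alpine container (a stand-in image, not a RamaLama one) with the same flags to confirm that outbound networking really is blocked.

    ```shell
    # Reproduce the isolation flags with a throwaway container and verify
    # that outbound networking fails (alpine is a stand-in image).
    podman run --rm --network=none --cap-drop=all \
      --security-opt=no-new-privileges \
      docker.io/library/alpine:latest \
      wget -T 2 -q -O /dev/null https://example.com \
      || echo "network blocked, as expected"
    ```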

    How to try RamaLama

    Try out RamaLama on your machines by following these installation instructions.
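
    As one quick-start path, RamaLama is distributed through PyPI among other channels (an assumption worth verifying against the installation instructions for your platform); the model name below is just an example.

    ```shell
    # Install RamaLama (PyPI is one distribution channel; see the
    # project's install docs for platform-specific options).
    pip install ramalama

    # Pull a model and chat with it; by default it runs inside a rootless,
    # network-less container as described above. The model name is an example.
    ramalama run granite
    ```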

    Last updated: March 20, 2025
