Deploy a lightweight AI model with AI Inference Server containerization

September 12, 2025
Christina Zhang
Related topics: Artificial intelligence, Containers
Related products:
Red Hat AI

    Note

    This tutorial provides a way to quickly try Red Hat AI Inference Server and learn the deployment workflow. This is not a production-grade deployment; it is intended for quick exploration on a personal machine with a local GPU.

    With the growing demand for running lightweight language models on personal GPUs, developers often struggle to test Red Hat AI Inference Server in a quick and simple way. This tutorial demonstrates how to containerize and run a small LLM using Red Hat AI Inference Server with minimal setup—ideal for developers looking to validate models locally before scaling to OpenShift.

    This post demonstrates how to deploy the lightweight AI model Llama-3.2-1B using Red Hat AI Inference Server containerization. The deployment workflow is shown in Figure 1.

    Figure 1: Red Hat AI Inference Server deployment workflow.

    Prerequisites

    Account requirements:

    • Red Hat account: You will need a valid subscription or a free Developer account. This account lets you access the Red Hat AI Inference Server container images.

    • Hugging Face account (optional): This account lets you obtain an access token if you need to download private models. If you do not have one, register for an account. You can find all Red Hat-verified large language models (LLMs) on the Red Hat AI Hugging Face page.

    Hardware requirements:

    • A computer with a GPU. This tutorial uses the NVIDIA CUDA image, so an NVIDIA GPU is assumed; you can verify yours with the command below. For more details, see the Red Hat AI Inference Server documentation.
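
    A quick check that the driver and GPU are visible on the host (assuming the NVIDIA drivers are already installed):

      nvidia-smi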

    Tutorial

    1. Log in to the Red Hat container registry. You must authenticate to access the Red Hat AI Inference Server container images:

      podman login registry.redhat.io
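
      To confirm the login succeeded, you can ask Podman to print the authenticated user:

      podman login --get-login registry.redhat.io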
    2. Pull the latest image. You can find the latest Red Hat AI Inference Server image in the Red Hat Ecosystem Catalog; search for rhaiis. 

      podman pull registry.redhat.io/rhaiis/vllm-cuda-rhel9:3

      The size of this image is around 15 GB.
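
      Once the pull completes, you can verify that the image is present locally:

      podman images registry.redhat.io/rhaiis/vllm-cuda-rhel9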

    3. Set your Hugging Face (HF) token if the model you are downloading is private. For testing, it is best to choose a small model.

      # Set the Hugging Face token (if needed)
      export HF_TOKEN=your_HuggingFace_Token
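
      If you set a token, a quick way to check that it is valid is to query the Hugging Face Hub whoami endpoint (this assumes outbound internet access):

      curl -s -H "Authorization: Bearer $HF_TOKEN" https://huggingface.co/api/whoami-v2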
    4. Create a local directory for model caching (this matches the ~/rhaiis-cache path mounted in step 6):

      mkdir -p ~/rhaiis-cache
      chmod g+rwX ~/rhaiis-cache
    5. Generate a Container Device Interface (CDI) configuration file for NVIDIA GPUs:

      sudo nvidia-ctk cdi generate --output=/etc/cdi/nvidia.yaml
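
      You can confirm the generated specification by listing the CDI device names it exposes; nvidia.com/gpu=all should appear in the output:

      nvidia-ctk cdi list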
    6. Now, let's test Llama 3.2 1B. The following command starts a GPU-enabled container, loads the Llama 3.2 model, and launches the OpenAI-compatible API on port 8000. Start the container:

      podman run -d --device nvidia.com/gpu=all -p 8000:8000 -v ~/rhaiis-cache:/opt/app-root/src/.cache:Z --shm-size=4g --name rhaiis-llama --restart unless-stopped -e HUGGING_FACE_HUB_TOKEN=$HF_TOKEN -e CUDA_VISIBLE_DEVICES=0 -e NVIDIA_VISIBLE_DEVICES=all -e NVIDIA_DRIVER_CAPABILITIES=compute,utility -e HF_HUB_OFFLINE=0 registry.redhat.io/rhaiis/vllm-cuda-rhel9:3 --model RedHatAI/Llama-3.2-1B-Instruct-quantized.w8a8 --host 0.0.0.0 --port 8000 --max-model-len 1024 --max-num-seqs 8 --tensor-parallel-size 1 --enforce-eager --disable-custom-all-reduce

      To run a different LLM, change the value passed to --model.

      While this tutorial uses Podman, Docker works as a drop-in replacement if your system does not support Podman. Here are explanations for each parameter:

      podman run -d \                                   # Run container in detached (background) mode
      --device nvidia.com/gpu=all \                     # Attach all available NVIDIA GPUs via CDI
      -p 8000:8000 \                                    # Map host port 8000 to container port 8000 for API access
      -v ~/rhaiis-cache:/opt/app-root/src/.cache:Z \    # Mount cache directory to store/download models & weights
      --shm-size=4g \                                   # Allocate 4GB shared memory (needed for large model inference)
      --name rhaiis-llama \                             # Assign container name "rhaiis-llama"
      --restart unless-stopped \                        # Auto-restart unless container is manually stopped
      -e HUGGING_FACE_HUB_TOKEN=$HF_TOKEN \             # Provide Hugging Face token for private model downloads
      -e CUDA_VISIBLE_DEVICES=0 \                       # Restrict container to use only GPU 0
      -e NVIDIA_VISIBLE_DEVICES=all \                   # Make all GPUs visible to container (overridden by CUDA_VISIBLE_DEVICES)
      -e NVIDIA_DRIVER_CAPABILITIES=compute,utility \   # Enable compute (CUDA) and utility (nvidia-smi) driver functions
      -e HF_HUB_OFFLINE=0 \                             # Enable Hugging Face online mode (allow auto model download)
      registry.redhat.io/rhaiis/vllm-cuda-rhel9:3 \    # Container image (Red Hat AI Inference Server with vLLM)
      --model RedHatAI/Llama-3.2-1B-Instruct-quantized.w8a8 \  # Model to load (quantized Llama 3.2 1B Instruct)
      --host 0.0.0.0 \                                  # Bind API server to all interfaces (not just localhost)
      --port 8000 \                                     # Serve API on port 8000
      --max-model-len 1024 \                            # Maximum token length for a single request
      --max-num-seqs 8 \                                # Maximum concurrent requests (batch size)
      --tensor-parallel-size 1 \                        # Tensor parallelism factor (1 = no parallel split)
      --enforce-eager \                                 # Force eager execution (disable graph optimizations for stability)
      --disable-custom-all-reduce                       # Disable custom all-reduce ops (avoid distributed sync issues)
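
      The first start can take several minutes while the model weights download into the cache. You can follow the server logs and, once the API is up, confirm the model is loaded through the OpenAI-compatible /v1/models endpoint:

      # Follow startup progress (Ctrl+C to stop following)
      podman logs -f rhaiis-llama

      # List the models the server is currently serving
      curl -s http://localhost:8000/v1/models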
    7. Test the LLM conversation:

      curl -X POST http://localhost:8000/v1/chat/completions \
        -H "Content-Type: application/json" \
        -d '{
          "model": "RedHatAI/Llama-3.2-1B-Instruct-quantized.w8a8",
          "messages": [{"role": "user", "content": "Hello, how are you?"}],
          "max_tokens": 100
        }'

    If you see output similar to the following, your LLM has been deployed successfully:

    {"id":"chatcmpl-32b8543d39824253bd8b07e1a10dc4d3","object":"chat.completion","created":1757411067,"model":"RedHatAI/Llama-3.2-1B-Instruct-quantized.w8a8","choices":[{"index":0,"message":{"role":"assistant","content":"I'm doing well, thank you for asking. I'm a large language model, so I don't have feelings or emotions like humans do, but I'm here to help you with any questions or topics you'd like to discuss. How about you? How's your day going so far?","refusal":null,"annotations":null,"audio":null,"function_call":null,"tool_calls":[],"reasoning_content":null},"logprobs":null,"finish_reason":"stop","stop_reason":null}],"service_tier":null,"system_fingerprint":null,"usage":{"prompt_tokens":41,"total_tokens":101,"completion_tokens":60,"prompt_tokens_details":null},"prompt_logprobs":null,"kv_transfer_params":null}

    Conclusion

    In this tutorial, you learned how to deploy a small language model using Red Hat AI Inference Server in a containerized environment with minimal hardware requirements. This quick-start approach helps you validate local setups, explore quantized models like the INT8 (w8a8) Llama used here, and test vLLM-based APIs on your own machine. To take the next step, consider scaling the same workflow to a production-grade deployment on OpenShift.
