Deploy a lightweight AI model with AI Inference Server containerization

September 12, 2025
Christina Zhang
Related topics:
Artificial intelligence, Containers
Related products:
Red Hat AI

    Note

    This tutorial provides a way to quickly try Red Hat AI Inference Server and learn the deployment workflow. This is not a production-grade deployment; it is intended for quick exploration on a personal machine with a local GPU.

    With the growing demand for running lightweight language models on personal GPUs, developers often struggle to test Red Hat AI Inference Server in a quick and simple way. This tutorial demonstrates how to containerize and run a small LLM using Red Hat AI Inference Server with minimal setup—ideal for developers looking to validate models locally before scaling to OpenShift.

    This post demonstrates how to deploy the lightweight AI model Llama-3.2-1B using Red Hat AI Inference Server containerization. The deployment workflow is shown in Figure 1.

    Figure 1: Red Hat AI Inference Server deployment workflow.

    Prerequisites

    Account requirements:

    • Red Hat account: You will need a valid subscription or a free Developer account. This account lets you access the Red Hat AI Inference Server container images.

    • Hugging Face account (optional): This account lets you obtain an access token if you need to download private models. Register for an account. You can find all Red Hat-verified large language models (LLMs) on the Red Hat AI Hugging Face page. 

    Hardware requirements:

    • A computer with a GPU. For more details, see the Red Hat AI Inference Server documentation. 

    Tutorial

    1. Log in to the Red Hat container registry. You must authenticate to access the Red Hat AI Inference Server container images:

      podman login registry.redhat.io
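
      To confirm the login succeeded, you can ask Podman which account it has stored for the registry:

      podman login --get-login registry.redhat.io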
    2. Pull the latest image. You can find the latest Red Hat AI Inference Server image in the Red Hat Ecosystem Catalog; search for rhaiis. 

      podman pull registry.redhat.io/rhaiis/vllm-cuda-rhel9:3

      The size of this image is around 15 GB.
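
      To confirm the image is now available locally, list it with Podman:

      podman images registry.redhat.io/rhaiis/vllm-cuda-rhel9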

    3. Set your Hugging Face token if the model you want to download is gated or private. For a first test, choose a small model.

      # Set the Hugging Face token (if needed)
      export HF_TOKEN=your_HuggingFace_Token
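
      If you want to sanity-check the token, Hugging Face's public API exposes a whoami endpoint (shown here as an optional check):

      curl -s -H "Authorization: Bearer $HF_TOKEN" https://huggingface.co/api/whoami-v2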
    4. Create a local directory for model caching. The run command in step 6 mounts this directory from your home directory:

      mkdir -p ~/rhaiis-cache
      chmod g+rwX ~/rhaiis-cache
    5. Generate a Container Device Interface (CDI) configuration file for NVIDIA GPUs:

      sudo nvidia-ctk cdi generate --output=/etc/cdi/nvidia.yaml
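
      To verify the CDI specification was generated, you can list the device names it exposes (nvidia-ctk is part of the NVIDIA Container Toolkit):

      nvidia-ctk cdi list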
    6. Now, let's test Llama 3.2 1B. This command starts a GPU-enabled container, loads the Llama 3.2 model, and launches the OpenAI-compatible API on port 8000. Start the container:

      podman run -d --device nvidia.com/gpu=all -p 8000:8000 -v ~/rhaiis-cache:/opt/app-root/src/.cache:Z --shm-size=4g --name rhaiis-llama --restart unless-stopped -e HUGGING_FACE_HUB_TOKEN=$HF_TOKEN -e CUDA_VISIBLE_DEVICES=0 -e NVIDIA_VISIBLE_DEVICES=all -e NVIDIA_DRIVER_CAPABILITIES=compute,utility -e HF_HUB_OFFLINE=0 registry.redhat.io/rhaiis/vllm-cuda-rhel9:3 --model RedHatAI/Llama-3.2-1B-Instruct-quantized.w8a8 --host 0.0.0.0 --port 8000 --max-model-len 1024 --max-num-seqs 8 --tensor-parallel-size 1 --enforce-eager --disable-custom-all-reduce

      To use a different LLM, change the value passed to --model.

      This tutorial uses Podman throughout. If your system runs Docker instead, docker run accepts largely the same flags as a drop-in replacement, though GPU access is configured differently (Docker typically uses --gpus all rather than a CDI device reference). Here are explanations for each parameter:

      podman run -d \                                   # Run container in detached (background) mode
      --device nvidia.com/gpu=all \                     # Attach all available NVIDIA GPUs via CDI
      -p 8000:8000 \                                    # Map host port 8000 to container port 8000 for API access
      -v ~/rhaiis-cache:/opt/app-root/src/.cache:Z \    # Mount cache directory to store/download models & weights
      --shm-size=4g \                                   # Allocate 4GB shared memory (needed for large model inference)
      --name rhaiis-llama \                             # Assign container name "rhaiis-llama"
      --restart unless-stopped \                        # Auto-restart unless container is manually stopped
      -e HUGGING_FACE_HUB_TOKEN=$HF_TOKEN \             # Provide Hugging Face token for private model downloads
      -e CUDA_VISIBLE_DEVICES=0 \                       # Restrict container to use only GPU 0
      -e NVIDIA_VISIBLE_DEVICES=all \                   # Make all GPUs visible to container (overridden by CUDA_VISIBLE_DEVICES)
      -e NVIDIA_DRIVER_CAPABILITIES=compute,utility \   # Enable compute (CUDA) and utility (nvidia-smi) driver functions
      -e HF_HUB_OFFLINE=0 \                             # Enable Hugging Face online mode (allow auto model download)
      registry.redhat.io/rhaiis/vllm-cuda-rhel9:3 \     # Container image (Red Hat AI Inference Server with vLLM)
      --model RedHatAI/Llama-3.2-1B-Instruct-quantized.w8a8 \  # Model to load (quantized Llama 3.2 1B Instruct)
      --host 0.0.0.0 \                                  # Bind API server to all interfaces (not just localhost)
      --port 8000 \                                     # Serve API on port 8000
      --max-model-len 1024 \                            # Maximum token length for a single request
      --max-num-seqs 8 \                                # Maximum concurrent requests (batch size)
      --tensor-parallel-size 1 \                        # Tensor parallelism factor (1 = no parallel split)
      --enforce-eager \                                 # Force eager execution (disable graph optimizations for stability)
      --disable-custom-all-reduce                       # Disable custom all-reduce ops (avoid distributed sync issues)
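
      The first start can take several minutes while the model weights download. You can follow the logs and confirm the API is serving before you test it (the /v1/models endpoint is part of the OpenAI-compatible API):

      # Watch startup progress until the API server reports it is running
      podman logs -f rhaiis-llama

      # List the models the server has loaded
      curl -s http://localhost:8000/v1/models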
    7. Test the LLM conversation:

      curl -X POST http://localhost:8000/v1/chat/completions \
        -H "Content-Type: application/json" \
        -d '{
          "model": "RedHatAI/Llama-3.2-1B-Instruct-quantized.w8a8",
          "messages": [{"role": "user", "content": "Hello, how are you?"}],
          "max_tokens": 100
        }'

    If you see the following output, your LLM has been deployed successfully:

    {"id":"chatcmpl-32b8543d39824253bd8b07e1a10dc4d3","object":"chat.completion","created":1757411067,"model":"RedHatAI/Llama-3.2-1B-Instruct-quantized.w8a8","choices":[{"index":0,"message":{"role":"assistant","content":"I'm doing well, thank you for asking. I'm a large language model, so I don't have feelings or emotions like humans do, but I'm here to help you with any questions or topics you'd like to discuss. How about you? How's your day going so far?","refusal":null,"annotations":null,"audio":null,"function_call":null,"tool_calls":[],"reasoning_content":null},"logprobs":null,"finish_reason":"stop","stop_reason":null}],"service_tier":null,"system_fingerprint":null,"usage":{"prompt_tokens":41,"total_tokens":101,"completion_tokens":60,"prompt_tokens_details":null},"prompt_logprobs":null,"kv_transfer_params":null}

    Conclusion

    In this tutorial, you learned how to deploy a small language model using Red Hat AI Inference Server in a containerized environment with minimal hardware requirements. This quick-start approach helps you validate a local setup, explore quantized models, and test vLLM-based APIs on your own machine. To take the next step, consider scaling the deployment to OpenShift.
