Run Qwen3-Next on vLLM with Red Hat AI: A step-by-step guide

September 12, 2025
    Key takeaways

    • Qwen3-Next introduces a new hybrid attention + sparse MoE architecture, aiming for greater training efficiency, faster inference, and support for longer contexts.
    • vLLM and the open source community delivered Day 0 support, making Qwen3-Next immediately deployable, in part thanks to prior work on hybrid attention models (notably with contributions from IBM Research).
    • Red Hat AI provides enterprise-ready deployment, enabling organizations to run Qwen3-Next securely, efficiently, and at scale across the hybrid cloud. This blog includes a step-by-step guide for doing so today using either Red Hat AI Inference Server or Red Hat OpenShift AI.

    The open source AI ecosystem moves fast. Every few weeks, we see new foundation models released with groundbreaking architectures, capabilities, and performance benchmarks. "How quickly can I deploy it?" is an immediate question from the community and enterprises alike.

    That's where vLLM and Red Hat AI come in. Together, they ensure that when the latest models are released, you can start experimenting with and deploying them right away.

    The latest example: Qwen3-Next.

    What's new in Qwen3-Next

    Qwen3-Next introduces a new model architecture designed to improve both training and inference efficiency under long-context and large-parameter settings. Key architectural changes include:

    • Hybrid attention mechanism: Combining standard attention with Gated DeltaNet layers to achieve higher performance at very long context while retaining in-context learning abilities.
    • Highly sparse Mixture of Experts (MoE): Compared to denser MoE in Qwen3, Qwen3-Next uses a more selective activation pattern, reducing active parameters during inference.
    • Training-stability-friendly optimizations: Techniques aimed at stabilizing training despite the complexity of hybrid attention + sparse MoE.
    • Multi-token prediction mechanism: Allows the model to predict multiple tokens per step, which improves inference speeds.

    Based on this design, they trained the Qwen3-Next-80B-A3B-Base model, an 80 billion parameter system that activates only about 3 billion parameters during inference. The Qwen team reports that this base model slightly outperforms the dense Qwen3-32B, while requiring less than 10% of the training cost. For inference with long context (above 32k tokens), it achieves 10x higher throughput compared to previous models.

    Additionally, the Qwen team released two post-trained versions derived from the base model:

    • Qwen3-Next-80B-A3B-Instruct: Tuned for general-purpose instruction following.
    • Qwen3-Next-80B-A3B-Thinking: Specialized for reasoning-oriented tasks.

    They emphasize that architectural and training improvements allowed them to overcome long-standing stability and efficiency challenges in reinforcement learning (RL) training for hybrid attention and high-sparsity MoE models, leading to faster RL training and better final performance.

    For more details on the release, check out the official Qwen3-Next launch blog.

    The power of open: Immediate support in vLLM

    One of the most powerful aspects of the open AI ecosystem is its speed of adoption. As soon as Qwen3-Next was released, it became available for inference through vLLM, the de facto open source inference engine trusted by thousands of organizations. Here's vLLM's blog that outlines how to run Qwen3-Next in vLLM starting on Day 0: vLLM Now Supports Qwen3-Next: Hybrid Architecture with Extreme Efficiency (spoiler alert: it's very simple).
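
    If you want to try the model with upstream vLLM directly, the vLLM blog above shows that serving it comes down to a single command. A minimal sketch, assuming a vLLM build that already includes Qwen3-Next support and four GPUs for tensor parallelism (adjust to your environment):

    pip install -U vllm
    vllm serve Qwen/Qwen3-Next-80B-A3B-Instruct --tensor-parallel-size 4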

    This isn't unique to Qwen3-Next. Over the past year, the vLLM community has delivered Day 0 support for many other major model releases, like Llama 4 Herd. This pattern highlights the strength of open AI collaboration: when new architectures appear, the community quickly integrates and optimizes them, making them available for everyone. 

    This is also great news for Red Hat AI customers. As the leading commercial contributor to the vLLM project, Red Hat ensures our customers can experiment with new models as soon as they are released, using the Red Hat AI platform.

    As mentioned earlier, Qwen3-Next is a hybrid linear attention model, a class of architectures that improve scaling efficiency for longer sequences. Thanks to significant work from IBM Research and the vLLM community, support for hybrid attention is already optimized within vLLM. This means customers get a proven path to deploying cutting-edge architectures like Qwen3-Next today, not someday.

    Note

    Together with IBM Research, we'll be hosting vLLM office hours on hybrid models in vLLM on September 25, 2025. Register here, and see all vLLM office hours recordings here.

    Deploy with Red Hat AI today

    Red Hat's AI Inference Server, built on vLLM, enables customers to run open source AI models securely and efficiently in production, on-premises, or in the cloud, without waiting for weeks or months of vendor integration cycles. 

    If you want to use Red Hat OpenShift AI, you can simply import the image registry.redhat.io/rhaiis-preview/vllm-cuda-rhel9:qwen3-preview as a custom runtime and use it to serve the model in the standard way, optionally adding the vLLM parameters described in the following procedure to enable certain features (speculative decoding, function calling…).

    Serve and run inference on a large language model with Podman and Red Hat AI Inference Server (CUDA)

    This guide explains how to serve and run inference on a large language model using Podman and Red Hat AI Inference Server, leveraging NVIDIA CUDA AI accelerators.

    Prerequisites

    Make sure you meet the following requirements before proceeding:

    System requirements: 

    • A Linux server with data center-grade NVIDIA AI accelerators installed.

    Software requirements:

    • You have installed either Podman or Docker.
    • You have access to Red Hat container images and are logged in to registry.redhat.io.

    Technology Preview notice

    The Red Hat AI Inference Server images used in this guide are in Technology Preview and not yet fully supported. They are for evaluation only, and production workloads should wait for the upcoming official GA release from the Red Hat container registries.

    Procedure: Serve and run inference on a model using Red Hat AI Inference Server (CUDA)

    This section walks you through the steps to run a large language model with Podman and Red Hat AI Inference Server using NVIDIA CUDA AI accelerators. For deployments on OpenShift AI, import the preview image as a custom runtime as described above.

    1. Log in to the Red Hat registry

    Open a terminal on your server and log in to registry.redhat.io:

    podman login registry.redhat.io

    2. Pull the Red Hat AI Inference Server image (CUDA version)

    podman pull registry.redhat.io/rhaiis-preview/vllm-cuda-rhel9:qwen3-preview

    3. Configure SELinux (if enabled)

    If SELinux is enabled on your system, allow container access to devices:

    sudo setsebool -P container_use_devices 1
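
    Optionally, confirm that the boolean is now enabled:

    getsebool container_use_devices

    Example output:

    container_use_devices --> on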

    4. Create a volume directory for model caching

    Create and set proper permissions for the cache directory:

    mkdir -p rhaiis-cache
    chmod g+rwX rhaiis-cache

    5. Add your Hugging Face token

    Create or append your Hugging Face token to a local private.env file and source it:

    echo "export HF_TOKEN=<your_HF_token>" > private.env
    source private.env

    6. Start the AI Inference Server container

    If your system includes multiple NVIDIA GPUs connected via NVSwitch, perform steps 1 and 2 below; otherwise, skip ahead to step 3:

    1. Check for NVSwitch support by listing the NVSwitch devices:

      ls /proc/driver/nvidia-nvswitch/devices/

      Example output:

      0000:0c:09.0  0000:0c:0a.0  0000:0c:0b.0  0000:0c:0c.0  0000:0c:0d.0  0000:0c:0e.0
    2. Start NVIDIA Fabric Manager (root required):

      sudo systemctl start nvidia-fabricmanager

      Important: NVIDIA Fabric Manager is only required for systems with multiple GPUs using NVSwitch.

    3. Verify GPU visibility from inside a container by running:

      podman run --rm -it \
      --security-opt=label=disable \
      --device nvidia.com/gpu=all \
      nvcr.io/nvidia/cuda:12.4.1-base-ubi9 \
      nvidia-smi
    4. Start the Red Hat AI Inference Server container with Qwen3-Next models (a sample request for verifying the running server follows this list):

      podman run --rm -it \
      --device nvidia.com/gpu=all \
      --security-opt=label=disable \
      --shm-size=4g \
      -p 8000:8000 \
      --userns=keep-id:uid=1001 \
      --env "HUGGING_FACE_HUB_TOKEN=$HF_TOKEN" \
      --env "HF_HUB_OFFLINE=0" \
      --env "VLLM_ALLOW_LONG_MAX_MODEL_LEN=1" \
      -v ./rhaiis-cache:/opt/app-root/src/.cache:Z \
      registry.redhat.io/rhaiis-preview/vllm-cuda-rhel9:qwen3-preview \
      --model Qwen/Qwen3-Next-80B-A3B-Instruct \
      --tensor-parallel-size 4 \
      --max-model-len 256K \
      --uvicorn-log-level debug
    5. Start the Red Hat AI Inference Server container with Qwen3-Next models for multi-token prediction (MTP) speculative decoding.

      Qwen3-Next also supports MTP. You can launch the inference server with the following arguments to enable MTP:

      podman run --rm -it \
      --device nvidia.com/gpu=all \
      --security-opt=label=disable \
      --shm-size=4g \
      -p 8000:8000 \
      --userns=keep-id:uid=1001 \
      --env "HUGGING_FACE_HUB_TOKEN=$HF_TOKEN" \
      --env "HF_HUB_OFFLINE=0" \
      --env "VLLM_ALLOW_LONG_MAX_MODEL_LEN=1" \
      -v ./rhaiis-cache:/opt/app-root/src/.cache:Z \
      registry.redhat.io/rhaiis-preview/vllm-cuda-rhel9:qwen3-preview \
      --model Qwen/Qwen3-Next-80B-A3B-Instruct \
      --tensor-parallel-size 4 \
      --max-model-len 256K \
      --no-enable-chunked-prefill \
      --tokenizer-mode auto \
      --uvicorn-log-level debug \
      --speculative-config '{"method": "qwen3_next_mtp", "num_speculative_tokens": 2}'
    6. Start the Red Hat AI Inference Server container with Qwen3-Next models for function calling:

      podman run --rm -it \
      --device nvidia.com/gpu=all \
      --security-opt=label=disable \
      --shm-size=4g \
      -p 8000:8000 \
      --userns=keep-id:uid=1001 \
      --env "HUGGING_FACE_HUB_TOKEN=$HF_TOKEN" \
      --env "HF_HUB_OFFLINE=0" \
      --env "VLLM_ALLOW_LONG_MAX_MODEL_LEN=1" \
      -v ./rhaiis-cache:/opt/app-root/src/.cache:Z \
      registry.redhat.io/rhaiis-preview/vllm-cuda-rhel9:qwen3-preview \
      --model Qwen/Qwen3-Next-80B-A3B-Instruct \
      --tensor-parallel-size 4 \
      --max-model-len 256K \
      --enable-auto-tool-choice \
      --tool-call-parser hermes \
      --uvicorn-log-level debug
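
    Whichever variant you started, you can verify the server by sending a request to the OpenAI-compatible API it exposes on port 8000. A minimal sketch; the prompt and token limit are purely illustrative:

    curl -s http://localhost:8000/v1/chat/completions \
      -H "Content-Type: application/json" \
      -d '{
            "model": "Qwen/Qwen3-Next-80B-A3B-Instruct",
            "messages": [{"role": "user", "content": "Summarize what Qwen3-Next is in one sentence."}],
            "max_tokens": 128
          }'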

    Conclusion

    The release of Qwen3-Next is another reminder that the open AI ecosystem is moving faster than ever. With vLLM as the de facto inference engine, the community ensures that new models can be tested and integrated immediately.

    But for organizations, it's not just about trying the latest model. It's about deploying it with confidence. That's where Red Hat AI comes in: giving you the scalability, efficiency, and enterprise-grade reliability needed to move from experimentation to production without delay.

    The future of AI isn't locked behind proprietary stacks. It's open, collaborative, and production-ready today. And with Red Hat and vLLM, you can start deploying Qwen3-Next now.

    Last updated: September 18, 2025
