Key takeaways
- Qwen3-Next introduces a new hybrid attention + sparse MoE architecture, aiming for greater training efficiency, faster inference, and support for longer contexts.
- vLLM and the open source community delivered Day 0 support, making Qwen3-Next immediately deployable, in part thanks to prior work on hybrid attention models (notably with contributions from IBM Research).
- Red Hat AI provides enterprise-ready deployment, enabling organizations to run Qwen3-Next securely, efficiently, and at scale across the hybrid cloud. This post includes a step-by-step guide for doing so today using either Red Hat AI Inference Server or Red Hat OpenShift AI.
The open source AI ecosystem moves fast. Every few weeks, we see new foundation models released with groundbreaking architectures, capabilities, and performance benchmarks. "How quickly can I deploy it?" is an immediate question from the community and enterprises alike.
That's where vLLM and Red Hat AI come in. Together, they ensure that when the latest models are released, you can start experimenting with and deploying them right away.
The latest example: Qwen3-Next.
What's new in Qwen3-Next
Qwen3-Next introduces a new model architecture designed to improve both training and inference efficiency under long-context and large-parameter settings. Key architectural changes include:
- Hybrid attention mechanism: Combining standard attention with Gated DeltaNet layers to achieve higher performance at very long context while retaining in-context learning abilities.
- Highly sparse Mixture of Experts (MoE): Compared to the denser MoE in Qwen3, Qwen3-Next uses a more selective activation pattern, reducing the number of active parameters during inference.
- Training-stability-friendly optimizations: Techniques aimed at stabilizing training despite the complexity of hybrid attention + sparse MoE.
- Multi-token prediction mechanism: Allows the model to predict multiple tokens per step, which improves inference speeds.
Based on this design, the Qwen team trained the Qwen3-Next-80B-A3B-Base model, an 80-billion-parameter model that activates only about 3 billion parameters during inference. The team reports that this base model slightly outperforms the dense Qwen3-32B while requiring less than 10% of its training cost, and that for long-context inference (above 32K tokens) it achieves 10x higher throughput compared to previous models.
Additionally, the Qwen team released two post-trained versions derived from the base model:
- Qwen3-Next-80B-A3B-Instruct: Tuned for general-purpose instruction following.
- Qwen3-Next-80B-A3B-Thinking: Specialized for reasoning-oriented tasks.
They emphasize that architectural and training improvements allowed them to overcome long-standing stability and efficiency challenges in reinforcement learning (RL) training for hybrid attention and high-sparsity MoE models, leading to faster RL training and better final performance.
For more details on the release, check out the official Qwen3-Next launch blog.
The power of open: Immediate support in vLLM
One of the most powerful aspects of the open AI ecosystem is its speed of adoption. As soon as Qwen3-Next was released, it became available for inference through vLLM, the de facto open source inference engine trusted by thousands of organizations. Here's vLLM's blog that outlines how to run Qwen3-Next in vLLM starting on Day 0: vLLM Now Supports Qwen3-Next: Hybrid Architecture with Extreme Efficiency (spoiler alert: it's very simple).
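To give a flavor of how simple, a minimal upstream vLLM invocation looks roughly like the following. This is a sketch only: it assumes a vLLM build recent enough to include Qwen3-Next support and a multi-GPU host, and you would adjust the tensor parallel size to your hardware.

# Sketch only: requires a vLLM version with Qwen3-Next support and sufficient GPUs.
pip install -U vllm
vllm serve Qwen/Qwen3-Next-80B-A3B-Instruct --tensor-parallel-size 4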
This isn't unique to Qwen3-Next. Over the past year, the vLLM community has delivered Day 0 support for many other major model releases, like Llama 4 Herd. This pattern highlights the strength of open AI collaboration: when new architectures appear, the community quickly integrates and optimizes them, making them available for everyone.
This is also great news for Red Hat AI customers. As the leading commercial contributor to the vLLM project, Red Hat ensures our customers can experiment with new models as soon as they are released, using the Red Hat AI platform.
As mentioned earlier, Qwen3-Next is a hybrid linear attention model, a class of architectures that improve scaling efficiency for longer sequences. Thanks to significant work from IBM Research and the vLLM community, support for hybrid attention is already optimized within vLLM. This means customers get a proven path to deploying cutting-edge architectures like Qwen3-Next today, not someday.
Deploy with Red Hat AI today
Red Hat's AI Inference Server, built on vLLM, enables customers to run open source AI models securely and efficiently in production, whether on-premises or in the cloud, without waiting weeks or months for vendor integration cycles.
If you want to use Red Hat OpenShift AI, you can simply import the image registry.redhat.io/rhaiis-preview/vllm-cuda-rhel9:qwen3-preview as a custom runtime and use it to serve the model in the standard way, optionally adding the vLLM parameters described in the following procedure to enable certain features (such as speculative decoding and function calling).
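For the OpenShift AI path, the runtime can be added through the OpenShift AI dashboard or declaratively as a KServe ServingRuntime resource. The snippet below is a minimal, illustrative sketch only: the resource name, optional argument list, and port are assumptions you would adapt to your cluster and serving setup.

# Illustrative sketch only: resource name, args, and port are placeholders.
cat <<'EOF' | oc apply -f -
apiVersion: serving.kserve.io/v1alpha1
kind: ServingRuntime
metadata:
  name: vllm-cuda-qwen3-preview        # hypothetical runtime name
spec:
  supportedModelFormats:
    - name: vLLM
      autoSelect: true
  containers:
    - name: kserve-container
      image: registry.redhat.io/rhaiis-preview/vllm-cuda-rhel9:qwen3-preview
      args:                            # optional vLLM flags from the procedure below
        - --tensor-parallel-size=4
        - --max-model-len=256K
      ports:
        - containerPort: 8000
          protocol: TCP
EOF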
Serve and run inference on a large language model with Podman and Red Hat AI Inference Server (CUDA)
This guide explains how to serve and run inference on a large language model using Podman and Red Hat AI Inference Server, leveraging NVIDIA CUDA AI accelerators.
Prerequisites
Make sure you meet the following requirements before proceeding:
System requirements:
- A Linux server with data center-grade NVIDIA AI accelerators installed.
Software requirements:
- You have installed either Podman or Docker.
- You have access to Red Hat container images and are logged in to registry.redhat.io.
Technology Preview notice
The Red Hat AI Inference Server images used in this guide are in Technology Preview and not yet fully supported. They are for evaluation only, and production workloads should wait for the upcoming official GA release from the Red Hat container registries.
Procedure: Serve and run inference on a model using Red Hat AI Inference Server (CUDA)
Follow the steps below to run a large language model with Podman and Red Hat AI Inference Server using NVIDIA CUDA AI accelerators. For deployments on OpenShift AI, see the note in the previous section.
1. Log in to the Red Hat registry
Open a terminal on your server and log in to registry.redhat.io:
podman login registry.redhat.io
2. Pull the Red Hat AI Inference Server image (CUDA version)
podman pull registry.redhat.io/rhaiis-preview/vllm-cuda-rhel9:qwen3-preview
3. Configure SELinux (if enabled)
If SELinux is enabled on your system, allow container access to devices:
sudo setsebool -P container_use_devices 1
4. Create a volume directory for model caching
Create and set proper permissions for the cache directory:
mkdir -p rhaiis-cache
chmod g+rwX rhaiis-cache
5. Add your Hugging Face token
Create a local private.env file containing your Hugging Face token and source it:
echo "export HF_TOKEN=<your_HF_token>" > private.env
source private.env
6. Start the AI Inference Server container
If your system includes multiple NVIDIA GPUs connected via NVSwitch, perform the following steps:
Check for NVSwitch. To detect NVSwitch support, list the NVSwitch devices exposed by the NVIDIA driver:
ls /proc/driver/nvidia-nvswitch/devices/
Example output:
0000:0c:09.0 0000:0c:0a.0 0000:0c:0b.0 0000:0c:0c.0 0000:0c:0d.0 0000:0c:0e.0
Start NVIDIA Fabric Manager (root required):
sudo systemctl start nvidia-fabricmanager
Important: NVIDIA Fabric Manager is only required for systems with multiple GPUs using NVSwitch.
Verify GPU visibility from the container. Run the following command to verify GPU access inside a container:
podman run --rm -it \
  --security-opt=label=disable \
  --device nvidia.com/gpu=all \
  nvcr.io/nvidia/cuda:12.4.1-base-ubi9 \
  nvidia-smi
Start the Red Hat AI Inference Server Container with Qwen3-Next models:
podman run --rm -it \
  --device nvidia.com/gpu=all \
  --security-opt=label=disable \
  --shm-size=4g \
  -p 8000:8000 \
  --userns=keep-id:uid=1001 \
  --env "HUGGING_FACE_HUB_TOKEN=$HF_TOKEN" \
  --env "HF_HUB_OFFLINE=0" \
  --env "VLLM_ALLOW_LONG_MAX_MODEL_LEN=1" \
  -v ./rhaiis-cache:/opt/app-root/src/.cache:Z \
  registry.redhat.io/rhaiis-preview/vllm-cuda-rhel9:qwen3-preview \
  --model Qwen/Qwen3-Next-80B-A3B-Instruct \
  --tensor-parallel-size 4 \
  --max-model-len 256K \
  --uvicorn-log-level debug
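Once the server reports that it is ready, you can send a quick request to vLLM's OpenAI-compatible API to confirm the model responds. The prompt and token limit below are arbitrary example values:

# Example request against the OpenAI-compatible chat completions endpoint.
curl -s http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "Qwen/Qwen3-Next-80B-A3B-Instruct",
        "messages": [{"role": "user", "content": "Explain hybrid attention in one sentence."}],
        "max_tokens": 128
      }'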
Start the Red Hat AI Inference Server container with Qwen3-Next models for multi-token prediction (MTP) speculative decoding.
Qwen3-Next also supports MTP. You can launch the inference server with the following arguments to enable MTP:
podman run --rm -it \
  --device nvidia.com/gpu=all \
  --security-opt=label=disable \
  --shm-size=4g \
  -p 8000:8000 \
  --userns=keep-id:uid=1001 \
  --env "HUGGING_FACE_HUB_TOKEN=$HF_TOKEN" \
  --env "HF_HUB_OFFLINE=0" \
  --env "VLLM_ALLOW_LONG_MAX_MODEL_LEN=1" \
  -v ./rhaiis-cache:/opt/app-root/src/.cache:Z \
  registry.redhat.io/rhaiis-preview/vllm-cuda-rhel9:qwen3-preview \
  --model Qwen/Qwen3-Next-80B-A3B-Instruct \
  --tensor-parallel-size 4 \
  --max-model-len 256K \
  --no-enable-chunked-prefill \
  --tokenizer-mode auto \
  --uvicorn-log-level debug \
  --speculative-config '{"method": "qwen3_next_mtp", "num_speculative_tokens": 2}'
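Requests are sent exactly as in the previous step; enabling MTP changes how tokens are generated, not the API. If you want to confirm that speculative decoding is active, one option is to inspect the Prometheus metrics endpoint exposed by vLLM and look for speculative decoding counters (exact metric names vary by vLLM version):

# Look for speculative decoding metrics; names depend on the vLLM version in the image.
curl -s http://localhost:8000/metrics | grep -i spec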
Start the Red Hat AI Inference Server container with Qwen3-Next models for function calling:
podman run --rm -it \
  --device nvidia.com/gpu=all \
  --security-opt=label=disable \
  --shm-size=4g \
  -p 8000:8000 \
  --userns=keep-id:uid=1001 \
  --env "HUGGING_FACE_HUB_TOKEN=$HF_TOKEN" \
  --env "HF_HUB_OFFLINE=0" \
  --env "VLLM_ALLOW_LONG_MAX_MODEL_LEN=1" \
  -v ./rhaiis-cache:/opt/app-root/src/.cache:Z \
  registry.redhat.io/rhaiis-preview/vllm-cuda-rhel9:qwen3-preview \
  --model Qwen/Qwen3-Next-80B-A3B-Instruct \
  --tensor-parallel-size 4 \
  --max-model-len 256K \
  --enable-auto-tool-choice \
  --tool-call-parser hermes \
  --uvicorn-log-level debug
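With automatic tool choice enabled, function calling works through the same OpenAI-compatible endpoint. The weather tool below is a made-up example that only illustrates the request shape; when the model decides to use it, the response contains a tool_calls entry instead of plain text:

# Example function-calling request; the get_current_weather tool is hypothetical.
curl -s http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "Qwen/Qwen3-Next-80B-A3B-Instruct",
        "messages": [{"role": "user", "content": "What is the weather in Boston right now?"}],
        "tools": [{
          "type": "function",
          "function": {
            "name": "get_current_weather",
            "description": "Get the current weather for a city",
            "parameters": {
              "type": "object",
              "properties": {"city": {"type": "string"}},
              "required": ["city"]
            }
          }
        }],
        "tool_choice": "auto"
      }'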
Conclusion
The release of Qwen3-Next is another reminder that the open AI ecosystem is moving faster than ever. With vLLM as the de facto inference engine, the community ensures that new models can be tested and integrated immediately.
But for organizations, it's not just about trying the latest model. It's about deploying it with confidence. That's where Red Hat AI comes in: giving you the scalability, efficiency, and enterprise-grade reliability needed to move from experimentation to production without delay.
The future of AI isn't locked behind proprietary stacks. It's open, collaborative, and production-ready today. And with Red Hat and vLLM, you can start deploying Qwen3-Next now.
Last updated: September 18, 2025