Key takeaways
- DeepSeek-V3.2-Exp introduces Sparse Attention, a two-stage process ("lightning indexer" + "fine-grained token selection") that enables efficient long-context inference. Early reports show up to 50% lower cost for long-context API calls.
- vLLM delivered Day 0 support, making DeepSeek-V3.2-Exp immediately runnable on NVIDIA Hopper architectures (NVIDIA H100/H200/H20) and NVIDIA Blackwell architectures (NVIDIA B200/GB200). Optimizations are just beginning, and work is underway to extend support across more hardware platforms.
- Red Hat AI makes enterprise deployment straightforward, with supported experimentation ready on Red Hat AI Inference Server and scalable rollout on Red Hat OpenShift AI.
- llm-d is the path to scale, bringing Kubernetes-native distributed inference with prefill/decode (PD) disaggregation and efficient routing across data-parallel ranks. It is the preferred way to serve DeepSeek-V3.2-Exp efficiently across clusters.
DeepSeek has released DeepSeek-V3.2-Exp, an open-weight model featuring a novel Sparse Attention mechanism designed for long-context tasks. From Day 0, vLLM supports the model on NVIDIA H100/H200/H20 and NVIDIA B200/GB200, giving developers immediate access to leading inference capabilities.
For Red Hat AI users, the model can be deployed today in both Red Hat AI Inference Server and Red Hat OpenShift AI, providing a more consistent path from experimentation to enterprise-grade production. Scaling to clusters is around the corner with llm-d, our recommended approach for disaggregated and efficient serving.
What is DeepSeek Sparse Attention?
Traditional attention scales quadratically with context length, creating large compute and memory costs for long-sequence workloads. DeepSeek introduces a two-stage Sparse Attention pipeline (paper on GitHub):
- Lightning indexer: Quickly identifies relevant excerpts from the entire context window.
- Fine-grained token selection: Narrows down to the most critical tokens within those excerpts to pass into the limited attention window.
This hierarchical selection drastically reduces the tokens each layer processes, maintaining quality while cutting compute costs. Early testing suggests up to 50% lower costs for long-context API calls (source: TechCrunch). Because the model is open-sourced, including weights, techniques, and kernels, researchers and enterprises can directly validate and optimize these claims.
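As a rough, illustrative sketch only (the real indexer, selection thresholds, and kernels are defined in DeepSeek's paper and open-sourced code), the hierarchical selection can be thought of as a cheap scoring pass followed by top-k attention. The scoring function, shapes, and k value below are placeholders, not the model's actual parameters:

import numpy as np

def sparse_attention_sketch(q, keys, values, k_top=2048):
    # Stage 1: "lightning indexer" -- a cheap relevance score for every context token.
    # (Placeholder scoring; the real indexer is a small learned module.)
    index_scores = keys @ q

    # Stage 2: fine-grained token selection -- keep only the top-k scoring tokens.
    k_top = min(k_top, len(index_scores))
    selected = np.argpartition(index_scores, -k_top)[-k_top:]

    # Standard scaled dot-product attention over the reduced token set.
    logits = keys[selected] @ q / np.sqrt(q.shape[-1])
    weights = np.exp(logits - logits.max())
    weights /= weights.sum()
    return weights @ values[selected]

# Toy usage: a 32k-token context reduced to 2k attended tokens.
rng = np.random.default_rng(0)
d, seq_len = 64, 32_768
out = sparse_attention_sketch(rng.normal(size=d),
                              rng.normal(size=(seq_len, d)),
                              rng.normal(size=(seq_len, d)))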
The power of open: Day 0 in vLLM
vLLM delivered Day 0 support for DeepSeek-V3.2-Exp across state-of-the-art hardware:
- H100/H200/H20: supported out-of-the-box with tensor parallelism.
- B200/GB200: supported from the start. Blackwell is now treated as a first-class citizen in vLLM releases.
This means developers can immediately launch and experiment with DeepSeek-V3.2-Exp on the latest NVIDIA Blackwell platforms, as well as deploy on the prior NVIDIA Hopper generation.
See the vLLM DeepSeek-V3.2-Exp usage guide for launch recipes and configuration details.
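For quick local experimentation with upstream vLLM, offline inference follows the usual vLLM Python pattern. This is a minimal sketch assuming an 8-GPU node; refer to the usage guide above for the recommended flags and parallelism settings:

from vllm import LLM, SamplingParams

# Minimal sketch -- see the vLLM usage guide for recommended launch flags.
llm = LLM(model="deepseek-ai/DeepSeek-V3.2-Exp", tensor_parallel_size=8)
sampling = SamplingParams(temperature=0.6, max_tokens=256)
outputs = llm.generate(["Explain sparse attention in two sentences."], sampling)
print(outputs[0].outputs[0].text)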
Deploy with Red Hat AI today
Please note that DeepSeek-V3.2-Exp is a large model; review the memory guidance in the note below before choosing a deployment option.
The same flow we demonstrated in our recent Qwen3-Next guide applies here. Swap in the DeepSeek-V3.2-Exp model and adjust tensor-parallel settings for your hardware:
- Option A: Red Hat AI Inference Server. Run the model locally with Podman using Red Hat's vLLM-based inference images. This is the fastest way to prototype workloads with the model on Red Hat platforms.
- Option B: Red Hat OpenShift AI. For production-grade orchestration, import the same vLLM runtime as a custom serving runtime in OpenShift AI. Then configure a Model Serving instance to expose the model over an OpenAI-compatible API endpoint.
Both paths give you a more consistent way to evaluate, test, and eventually integrate DeepSeek-V3.2-Exp into enterprise workloads. Read on for a step-by-step guide for each of the options.
Note: DeepSeek-V3.2-Exp has a large memory footprint, especially for long-context workloads. For single-node pilots, use NVIDIA H200 or NVIDIA B200/GB200 GPUs so the model fits comfortably in memory. On NVIDIA H100 GPUs, you will need multi-GPU and usually multi-node to run at intended context lengths. In that case, skip the single-node Red Hat AI Inference Server quick start and go directly to llm-d on Kubernetes with tensor parallelism and PD disaggregation. See the llm-d blog for updates.
Option A: Red Hat AI Inference Server
Technology Preview notice
The Red Hat AI Inference Server images used in this guide are a technology preview and not yet fully supported. They are for evaluation only, and production workloads should wait for the upcoming official GA release from the Red Hat container registries.
Prerequisites
Make sure you meet the following requirements before proceeding.
System requirements:
- A Linux server with data center-grade NVIDIA AI accelerators installed.
Software requirements:
- You have installed either Podman or Docker.
- You have access to Red Hat container images and are logged into registry.redhat.io.
Serve and run inference on a model using Red Hat AI Inference Server
This section walks you through the steps to run a large language model with Podman and Red Hat AI Inference Server, using NVIDIA CUDA AI accelerators.
1. Log in to the Red Hat Registry
Open a terminal on your server and log in to registry.redhat.io:
podman login registry.redhat.io
2. Pull the Red Hat AI Inference Server image (CUDA version)
podman pull registry.redhat.io/rhaiis-preview/vllm-cuda-rhel9:DeepSeek-v3.2-exp
3. Configure SELinux (if enabled)
If SELinux is enabled on your system, allow container access to devices:
sudo setsebool -P container_use_devices 1
4. Create a volume directory for model caching
Create and set proper permissions for the cache directory:
mkdir -p rhaiis-cache
chmod g+rwX rhaiis-cache
5. Add your Hugging Face token
Write your Hugging Face token to a local private.env file and source it:
echo "export HF_TOKEN=<your_HF_token>" > private.env
source private.env
6. Start the AI Inference Server container
If your system includes multiple NVIDIA GPUs connected via NVSwitch, perform the following steps:
Check for NVSwitch. To detect NVSwitch support, check for the presence of devices:
ls /proc/driver/nvidia-nvswitch/devices/
Example output:
0000:0c:09.0 0000:0c:0a.0 0000:0c:0b.0 0000:0c:0c.0 0000:0c:0d.0 0000:0c:0e.0
Start NVIDIA Fabric Manager (root required):
sudo systemctl start nvidia-fabricmanager
Important: NVIDIA Fabric Manager is only required for systems with multiple GPUs using NVSwitch.
Verify GPU visibility from container. Run the following command to verify GPU access inside a container:
podman run --rm -it \
  --security-opt=label=disable \
  --device nvidia.com/gpu=all \
  nvcr.io/nvidia/cuda:12.4.1-base-ubi9 \
  nvidia-smi
Start the Red Hat AI Inference Server container with the DeepSeek-V3.2-Exp model:
podman run --rm -it \
  --device nvidia.com/gpu=all \
  --security-opt=label=disable \
  --shm-size=4g \
  -p 8000:8000 \
  --userns=keep-id:uid=1001 \
  --env "HUGGING_FACE_HUB_TOKEN=$HF_TOKEN" \
  --env "HF_HUB_OFFLINE=0" \
  -v ./rhaiis-cache:/opt/app-root/src/.cache:Z \
  registry.redhat.io/rhaiis-preview/vllm-cuda-rhel9:DeepSeek-v3.2-exp \
  --model deepseek-ai/DeepSeek-V3.2-Exp \
  --data-parallel-size 8 \
  --enable-expert-parallel \
  --uvicorn-log-level debug
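Once the container is up, you can sanity-check the endpoint with any OpenAI-compatible client. Below is a minimal example using the openai Python package; the localhost address, port, and placeholder API key reflect the command above and may need adjusting for your environment:

from openai import OpenAI

# The container above publishes port 8000; the API key is a placeholder unless you configure one.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="deepseek-ai/DeepSeek-V3.2-Exp",
    messages=[{"role": "user", "content": "Summarize DeepSeek Sparse Attention in one sentence."}],
    max_tokens=128,
)
print(response.choices[0].message.content)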
Start the Red Hat AI Inference Server container with the DeepSeek-V3.2-Exp model for Multi-Token Prediction (MTP) speculative decoding:
podman run --rm -it \
  --device nvidia.com/gpu=all \
  --security-opt=label=disable \
  --shm-size=4g \
  -p 8000:8000 \
  --userns=keep-id:uid=1001 \
  --env "HUGGING_FACE_HUB_TOKEN=$HF_TOKEN" \
  --env "HF_HUB_OFFLINE=0" \
  -v ./rhaiis-cache:/opt/app-root/src/.cache:Z \
  registry.redhat.io/rhaiis-preview/vllm-cuda-rhel9:DeepSeek-v3.2-exp \
  --model deepseek-ai/DeepSeek-V3.2-Exp \
  --tensor-parallel-size 8 \
  --tokenizer-mode auto \
  --uvicorn-log-level debug \
  --speculative-config '{"method": "deepseek_mtp", "num_speculative_tokens": 2}'
Start the Red Hat AI Inference Server Container with the DeepSeek-V3.2-Exp model for function calling:
podman run --rm -it \
  --device nvidia.com/gpu=all \
  --security-opt=label=disable \
  --shm-size=4g \
  -p 8000:8000 \
  --userns=keep-id:uid=1001 \
  --env "HUGGING_FACE_HUB_TOKEN=$HF_TOKEN" \
  --env "HF_HUB_OFFLINE=0" \
  --env "VLLM_ALLOW_LONG_MAX_MODEL_LEN=1" \
  -v ./rhaiis-cache:/opt/app-root/src/.cache:Z \
  registry.redhat.io/rhaiis-preview/vllm-cuda-rhel9:DeepSeek-v3.2-exp \
  --model deepseek-ai/DeepSeek-V3.2-Exp \
  --data-parallel-size 8 \
  --enable-expert-parallel \
  --enable-auto-tool-choice \
  --tool-call-parser deepseek_v31 \
  --chat-template /opt/app-root/template/tool_chat_template_deepseekv31.jinja \
  --uvicorn-log-level debug
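With auto tool choice and the deepseek_v31 parser enabled, the server accepts OpenAI-style tool definitions in chat requests. The sketch below uses a hypothetical get_weather tool purely for illustration; substitute your own function schemas:

from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

# Hypothetical tool schema, for illustration only.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = client.chat.completions.create(
    model="deepseek-ai/DeepSeek-V3.2-Exp",
    messages=[{"role": "user", "content": "What's the weather in Boston right now?"}],
    tools=tools,
    tool_choice="auto",
)
print(response.choices[0].message.tool_calls)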
Option B: Red Hat OpenShift AI
For deployments in OpenShift AI, import the image registry.redhat.io/rhaiis-preview/vllm-cuda-rhel9:DeepSeek-v3.2-exp as a custom serving runtime and use it to serve the model in the standard way. Optionally, you can add the vLLM parameters shown in the Option A procedure above to enable specific features (speculative decoding, function calling, and others).
Scaling with Kubernetes and llm-d (coming soon)
For cluster-scale deployments, llm-d is our preferred path. It's Kubernetes-native and built around vLLM, providing a well-lit route for distributed inference.
The upcoming pattern for DeepSeek-V3.2-Exp will:
- Launch vLLM with prefill/decode (PD) disaggregation using NIXL.
- Route requests efficiently across data-parallel ranks.
- Handle long-context workloads while keeping hardware usage efficient.
Documentation and Helm-based deployment flows are coming soon. For anyone planning to serve large fleets of DeepSeek models, llm-d is the way forward.
State of optimizations and community plan
Support for DeepSeek-V3.2-Exp in vLLM is functional today, but optimizations are still in early stages. Nobody has fully unlocked the performance potential of Sparse Attention yet. Here's our roadmap:
- Architectures: Extend beyond NVIDIA Hopper and NVIDIA Blackwell architectures to other accelerators.
- Hardware diversity: Expand support to AMD GPUs and TPUs. Experimental support for DeepSeek-V3.2 is already underway in community variants such as vllm-ascend and vllm-mlu, though optimizations and validation are still in progress.
- Scaling: Continue testing wide expert parallelism and disaggregated serving.
- RL loops: Enable end-to-end reinforcement learning workflows with this model.
- Short-sequence prefilling: Explore DeepSeek's "masked MHA" mode for prefill efficiency.
- Simplifications: We've already removed Hadamard transforms (no measurable accuracy benefit).
This is just the beginning. Contributions from across the community will help unlock Sparse Attention's true efficiency.
Closing thoughts
DeepSeek-V3.2-Exp is a major step forward in long-context efficiency, and with vLLM you can deploy it on Day 0 across the latest NVIDIA accelerated compute hardware and Red Hat AI platforms.
Scaling with llm-d and deeper kernel/system optimizations are next on the horizon. Meanwhile, researchers and enterprises can already begin experimenting with this open-weight release today.
A huge thanks to the DeepSeek team for open-sourcing the model, its techniques, and kernels. And for trusting vLLM as a Day 0 deployment partner! Read more in the vLLM community blog and come to the vLLM Developer Slack channel to ask questions and engage with the community.