
From 200 lines to 15: How Helion is rewriting the rules of GPU programming

Helion: Simplifying GPU programming with PyTorch-like syntax

April 24, 2026
Sumantro Mukherjee, Parshant Sharma
Related topics:
Artificial intelligence, Compilers
Related products:
Developer Tools, Red Hat Enterprise Linux

    The evolution of programming efficient GPU kernels has led to a continuous push toward higher levels of abstraction, moving developer focus from hardware management to computational logic. CUDA provides maximum control, with developers manually managing every detail: thread blocks, memory access, synchronization, and index calculations. It's powerful, but also complex. Triton emerged as a new GPU language that simplifies the task by introducing block-based programming, letting developers manage teams of threads rather than individual threads. However, Triton still demands manual effort, such as defining block sizes and calculating program IDs. The latest step is Helion, a Python-embedded domain-specific language that abstracts away low-level parallelism details so developers can write GPU operations in simple, intuitive PyTorch syntax.

    What if writing a GPU kernel felt like writing PyTorch?

    Helion automates almost every part of GPU kernel development. Instead of forcing you to manage low-level details of GPU execution, Helion lets you write code that describes the computation you want. A matrix multiplication (matmul) kernel might take over 200 lines in CUDA or around 80 lines in Triton due to manual indexing, masking, and stride handling. That's reduced to about 15 lines of PyTorch-like code in Helion.

    You write a simple loop like for tile_m, tile_n in hl.tile([m, n]): and use operations like torch.addmm(), while Helion handles indexing, tiling, masks, grid sizing, memory layouts, and all the hardware-level configuration. Helion searches through hundreds or even thousands of possible implementations to select the fastest one for the specific hardware and problem size, giving developers performance without complexity.

    import torch, helion, helion.language as hl

    @helion.kernel()
    def matmul(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
        m, k = x.size()
        k2, n = y.size()
        out = torch.empty([m, n], dtype=x.dtype, device=x.device)
        for tile_m, tile_n in hl.tile([m, n]):
            acc = hl.zeros([tile_m, tile_n], dtype=torch.float32)
            for tile_k in hl.tile(k):
                acc = torch.addmm(acc, x[tile_m, tile_k], y[tile_k, tile_n])
            out[tile_m, tile_n] = acc
        return out

    One kernel, 1000 variants, zero manual tuning

    Helion's real advantage comes from its autotuning system. Instead of writing and tweaking GPU kernels manually, you create a single Helion kernel and the compiler automatically generates hundreds or even thousands of Triton variants, each with different choices, including block sizes, loop orders, indexing methods, program ID mappings, warp counts, pipeline depths, unrolling strategies, and cache optimizations. It uses an LFBO-based pattern search for autotuning, while also supporting evolutionary algorithms for completeness. Typical kernels tune within minutes, while more complex kernels may take longer.
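As a toy illustration of the kind of search described above, the sketch below enumerates a small config space and picks the fastest candidate. The names, the config space, and the scoring function are invented for demonstration; they are not Helion's API or its actual search algorithm.

```python
import itertools

# Toy model of an autotuner: enumerate a small config space and score
# each candidate with a mocked "benchmark". Helion's real search
# (pattern search / evolutionary) is far smarter, but the shape of the
# problem is the same: pick the fastest of many generated variants.
def candidate_configs():
    for block_m, block_n in itertools.product([32, 64, 128], repeat=2):
        for num_warps in (4, 8):
            yield {"block_m": block_m, "block_n": block_n, "num_warps": num_warps}

def mock_benchmark(cfg):
    # Stand-in for real kernel timing; pretends 64x64 with 8 warps is fastest.
    return abs(cfg["block_m"] - 64) + abs(cfg["block_n"] - 64) + (0 if cfg["num_warps"] == 8 else 10)

def autotune():
    # Exhaustive search over the toy space; return the lowest-cost config.
    return min(candidate_configs(), key=mock_benchmark)

best = autotune()
print(best)  # {'block_m': 64, 'block_n': 64, 'num_warps': 8}
```

In the real system the benchmark is an actual timed run of each compiled Triton variant, which is why tuning takes minutes rather than microseconds.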

    After the optimal configuration is found, you can lock it in production so that there's no tuning cost at runtime. The result is performance portability. The same kernel adapts automatically to different GPU generations (Ampere, Hopper, Blackwell) without manual changes. This process is illustrated in Figure 1.

    Figure 1: An overview of how Helion processes your code, optimizes for target architecture, and provides a config.

    How Helion works

    When the Helion kernel is called for the first time, it parses your Python function into an abstract syntax tree (AST) and runs type propagation to determine tensor shapes, data types, and how different values depend on each other. It then separates what should run on the host (tensor allocations, shape calculations) and what should run on the GPU, which is identified through hl.tile loops. The GPU portion is captured through PyTorch's FX system and lowered through TorchInductor, which translates operators such as torch.addmm, torch.sum, torch.exp into Triton form. The many steps, most of which you don't manually perform, are shown in Figure 2.
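To make the first step concrete, here is a rough sketch using only Python's standard-library ast module of how a compiler can walk a kernel's syntax tree and flag hl.tile loops as the device portion. This is illustrative only, not Helion's actual implementation.

```python
import ast
import textwrap

# A miniature kernel as source text. Lines outside the hl.tile loop are
# host code (allocation, shape math); the loop body is device code.
src = textwrap.dedent("""
    def matmul(x, y):
        m, k = x.size()          # host: shape calculation
        out = empty([m, k])      # host: allocation
        for tile in hl.tile(m):  # device: tiled GPU loop
            out[tile] = x[tile]
""")

tree = ast.parse(src)

# Find every for-loop whose iterator is a call to something named `.tile`,
# the pattern a Helion-style compiler would treat as the GPU portion.
device_loops = [
    node for node in ast.walk(tree)
    if isinstance(node, ast.For)
    and isinstance(node.iter, ast.Call)
    and isinstance(node.iter.func, ast.Attribute)
    and node.iter.func.attr == "tile"
]
print(len(device_loops))  # 1
```

Helion's real pipeline goes much further (type propagation, FX capture, TorchInductor lowering), but the host/device split starts from exactly this kind of syntactic analysis.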

    The compiler builds the full configuration space, and for each option converts the internal representation into Triton code with correct indexing, masking, and memory access logic. Triton compiles into GPU machine code, which is cached so that repeated calls with the same tensor signature run instantly.

    Figure 2: Steps toward Triton codegen and an optimized config; the config is applied only in the final step of the process.

    Real-world impact: Less code, faster kernels

    Helion provides a boost to both performance and productivity for machine learning (ML) engineers who need custom GPU kernels. The examples in the Helion Git repository show how flexible it is. Simple functions take only 5 to 10 lines, while fused kernels like GEGLU are implemented in 30 lines instead of hundreds. Even complex components like attention mechanisms and layer norms remain concise and easy to maintain.

    Debugging is also straightforward. You can print the generated Triton code with HELION_PRINT_OUTPUT_CODE=1, run kernels in an eager, Python-like mode with HELION_INTERPRET=1, or generate full repro scripts when filing bug reports. Although autotuning takes 10 to 15 minutes per kernel, per shape, the savings in coding time are huge. The code is easier to understand and maintain, and the resulting performance often matches or exceeds hand-written, hand-optimized kernels while automatically adapting across GPU generations.

    Future of GPU programming

    Helion is changing the way we think about writing GPU kernels. Just as high-level languages freed programmers from writing assembly, and frameworks like PyTorch removed the burden of hand-written back-propagation, Helion removes the need to manually manage low-level GPU details while still delivering top performance.

    The evolution from CUDA (hundreds of lines and fully manual tuning) to Triton (dozens of lines with block level abstractions) to Helion (10 to 30 lines of PyTorch-like code with hundreds of automatically tuned variants) shows that the direction of GPU programming is moving towards high-level tools that make expert-level results broadly accessible. And because Helion can explore optimization spaces far beyond what a human can test, developers can spend more time innovating and less time with thread layouts and memory management. Here are common workflows for a Helion developer, from idea to production.

    Phase 1: Write

    The code you write is often no more than 15 lines.

    1. Define kernel functions: @helion.kernel() decorator
    2. Write host code (CPU): Allocate tensors, compute shapes
    3. Write device code (GPU): hl.tile loops and PyTorch ops
    4. Debug in eager Python mode: Test with HELION_INTERPRET=1

    Phase 2: Tune

    This usually takes 10 to 15 minutes per kernel, per shape.

    1. First call triggers autotune: Automatic, no code changes
    2. LFBO explores configs: 1000+ Triton variants tested
    3. Best config printed: Copy into @helion.kernel(configs…)
    4. Inspect with PRINT_OUTPUT: See generated Triton code

    Phase 3: Deploy

    Your project is deployed with no runtime tuning overhead.

    1. Lock config in decorator: Zero tuning cost at runtime
    2. Deterministic compilation: Single optimized Triton kernel
    3. Binary cached: Instant on repeat calls
    4. Re-tune for new hardware: The same code is re-tuned to run on a different GPU

    See it in action: 3 kernels in 30 lines of code

    Here's a vector addition function in 7 lines of code:

    @helion.kernel()
    def add_kernel(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
        size = x.size(0)
        out = torch.empty_like(x)
        for tile in hl.tile(size):
            out[tile] = x[tile] + y[tile]
        return out
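Conceptually, each hl.tile iteration processes one block of the data, with masking at the ragged edge. The sketch below simulates that serially in plain Python on lists; the names are illustrative only, and the real kernel runs the tiles in parallel on the GPU.

```python
# Plain-Python simulation of tiled vector addition: one "tile" of
# tile_size elements per outer iteration, with the final tile clipped
# to the array length (the role masking plays on the GPU).
def add_eager(x, y, tile_size=4):
    out = [0] * len(x)
    for start in range(0, len(x), tile_size):   # one tile per iteration
        end = min(start + tile_size, len(x))    # mask at the ragged edge
        for i in range(start, end):
            out[i] = x[i] + y[i]
    return out

print(add_eager([1, 2, 3, 4, 5], [10, 20, 30, 40, 50]))  # [11, 22, 33, 44, 55]
```

With a length of 5 and a tile size of 4, the second tile covers only one element, which is exactly the boundary case Helion's generated masking handles for you.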

    Softmax in 10 lines:

    @helion.kernel()
    def softmax_kernel(x: torch.Tensor) -> torch.Tensor:
        n, _m = x.size()
        out = torch.empty_like(x)
        for tile_n in hl.tile(n):
            values = x[tile_n, :]
            amax = torch.amax(values, dim=1, keepdim=True)
            exp = torch.exp(values - amax)
            sum_exp = torch.sum(exp, dim=1, keepdim=True)
            out[tile_n, :] = exp / sum_exp
        return out
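The max subtraction in that kernel is the standard numerical-stability trick: subtracting the row maximum before exponentiating avoids overflow without changing the result. Here it is in plain Python for a single row.

```python
import math

def softmax_row(row):
    # Subtract the row max first: exp() then sees only values <= 0,
    # so it can never overflow, and the final ratio is unchanged.
    amax = max(row)
    exp = [math.exp(v - amax) for v in row]
    total = sum(exp)
    return [e / total for e in exp]

# Without the subtraction, math.exp(1000.0) would raise OverflowError.
print(softmax_row([1000.0, 1000.0]))  # [0.5, 0.5]
```

The kernel applies the same idea per row via torch.amax with keepdim=True so the subtraction broadcasts across each row's columns.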

    Debug like it's Python

    You can use the same techniques you use in Python to debug your code. To see the generated Triton code:

    HELION_PRINT_OUTPUT_CODE=1 python my_kernel.py

    To debug without GPU compilation:

    HELION_INTERPRET=1 python my_kernel.py

    Locking the config for production use

    Once autotuning completes, you can lock the optimal config for zero-overhead production use. For example:

    @helion.kernel(config=helion.Config(
        block_sizes=[64, 64, 64],
        loop_orders=[[0, 1]],
        num_warps=8,
        num_stages=6,
        indexing='block_ptr',
        pid_type='flat'
    ))
    def matmul(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
        ...

    Getting started

    With Helion, you write minimal code, and you get automated block sizes, program IDs, grid dims, tensor indexing, masking, strides, and autotuning config lists. It's open source and ready to use.

    To start using Helion for GPU kernels, the setup is just four commands:

    $ python3.12 -m venv helion_env && source helion_env/bin/activate
    $ pip install "torch>=2.9" --index-url https://download.pytorch.org/whl/cu128
    $ pip install helion packaging
    $ python -c "import helion; import torch; print('CUDA:', torch.cuda.is_available())"

    Our example code is in a Git repository, so feel free to clone and iterate!
