Unsloth and Training Hub: Lightning-fast LoRA and QLoRA fine-tuning

April 1, 2026
Aditi Saluja, Mustafa Eyceoz, Oleg Silkin
Related topics:
Artificial intelligence
Related products:
Red Hat AI

    Fine-tuning large language models in enterprise environments requires both algorithmic innovation and production-grade execution. Even the most advanced post-training methods fall short if teams struggle with distributed systems, infrastructure complexity, or framework glue code. Training Hub bridges that gap by turning cutting-edge training techniques into scalable, enterprise-ready workflows.

    What is Training Hub?

    Training Hub is an open source, algorithm-centered Python library for LLM post-training, maintained by Red Hat's AI Innovation team. Rather than forcing teams to discover, learn, and wire together a growing collection of independent training libraries, each with its own API, setup, and inevitable quirks, Training Hub provides a single interface in which each post-training algorithm is exposed as a simple Python function. Each algorithm maps to a production-tested backend implementation, and the interface is consistent across all of them: the same common arguments for model path, data, learning rate, and distributed setup. Switching algorithms never means rewriting your pipeline. You focus on which training algorithm you want, not how it runs.

    Training Hub is available as a stand-alone open source project on PyPI and GitHub. It is also included as part of Red Hat AI, where enterprise customers can access a fully supported build through the Red Hat Python Index.

    What's new: LoRA and QLoRA using Unsloth

    An LLM is a complex model often made up of billions of parameters. Unlike full supervised fine-tuning (SFT), which updates every parameter in a model, low-rank adaptation (LoRA) works by training a small set of lightweight "adapter" weights, leaving the base model untouched. Quantized low-rank adaptation (QLoRA) takes this further by quantizing the base model to 4-bit precision during training, slashing memory requirements even more. The result is fast, cost-effective model adaptation that runs on hardware most teams already have.
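    The adapter idea can be sketched in a few lines of NumPy. Instead of updating a full weight matrix W, LoRA freezes W and trains two small matrices A and B whose product forms a low-rank update, scaled by alpha / r. All names and dimensions below are illustrative for the sketch, not Training Hub API:

```python
import numpy as np

# Illustrative dimensions: a 1024x1024 frozen weight with rank-16 adapters
d_in, d_out, r, alpha = 1024, 1024, 16, 32
rng = np.random.default_rng(0)

# Frozen base weight: never updated during LoRA training
W = rng.standard_normal((d_in, d_out)).astype(np.float32)

# Trainable adapters: A is d_in x r, B is r x d_out.
# B starts at zero, so the adapter contributes nothing before training.
A = rng.standard_normal((d_in, r)).astype(np.float32) * 0.01
B = np.zeros((r, d_out), dtype=np.float32)

def lora_forward(x):
    # Base projection plus the scaled low-rank update
    return x @ W + (alpha / r) * (x @ A @ B)

# The adapters are a tiny fraction of the base matrix's parameters
full_params = W.size
lora_params = A.size + B.size
print(f"trainable fraction: {lora_params / full_params:.2%}")
```

    At 7B scale the fraction is smaller still, which is why adapters can be trained, stored, and swapped per task so cheaply.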

    As of v0.4.0, Training Hub supports both LoRA and QLoRA fine-tuning, powered by the Unsloth backend. This is a significant addition for teams that need to adapt models quickly and cheaply without the compute requirements of full fine-tuning.

    With Unsloth as the backend, fine-tuning uses roughly 70% less VRAM than full fine-tuning and runs approximately twice as fast as standard LoRA pipelines. You can fine-tune 7B models on a single 24GB GPU using 4-bit NF4 quantization, or scale up with multi-GPU support for both data and model parallelism. All of it is production-tested, with full support available through Red Hat AI.
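    The memory claims follow from simple arithmetic. This back-of-the-envelope sketch estimates only the frozen base weights (real usage adds activations, adapter weights, and optimizer state, which is why QLoRA's extra headroom matters):

```python
def base_weights_gb(n_params: float, bits_per_param: float) -> float:
    """Rough size of a model's frozen weights at a given precision."""
    return n_params * bits_per_param / 8 / 1e9

n_params = 7e9  # a 7B-parameter model

fp16_gb = base_weights_gb(n_params, 16)  # standard half precision
nf4_gb = base_weights_gb(n_params, 4)    # 4-bit NF4 quantization (QLoRA)

print(f"fp16 weights: {fp16_gb:.1f} GB")  # 14.0 GB: tight on a 24GB card
print(f"nf4 weights:  {nf4_gb:.1f} GB")   # 3.5 GB: ample headroom left
```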

    What this looks like in practice

    Unsloth provides kernel-level CUDA optimizations, memory-efficient attention, gradient checkpointing, and mixed-precision training. In Training Hub, all of that collapses into:

    
    from training_hub import lora_sft
    
    result = lora_sft(
        # Model, data, and checkpoint locations
        model_path="Qwen/Qwen2.5-7B-Instruct",
        data_path="./training_data.jsonl",
        ckpt_output_dir="./outputs",
    
        # LoRA adapter configuration; load_in_4bit=True enables QLoRA
        lora_r=16,
        lora_alpha=32,
        load_in_4bit=True,
    
        # Standard training hyperparameters
        num_epochs=3,
        learning_rate=2e-4,
        micro_batch_size=8,
        gradient_accumulation_steps=4
    )

    Training Hub removes the burden of manually navigating the intricate wiring of upstream backends. Instead of making you manage low-level details such as quantization boilerplate, it provides a unified interface that mirrors the rest of the Training Hub ecosystem, letting you trigger sophisticated workflows with a single function call and keeping your implementation clean, consistent, and free from the usual infrastructure overhead.

    When to use LoRA or QLoRA vs. OSFT and SFT

    LoRA is the right choice when GPU memory is constrained, when you need fast experimentation cycles, or when you want to maintain multiple task-specific adapters without the cost of full training runs. It gets you most of the way at a fraction of the compute.

    When LoRA isn't enough, Training Hub gives you a path forward without switching tools. Full SFT delivers maximum task performance when you have the compute to support it. And OSFT, an orthogonal subspace fine-tuning method developed by Red Hat's AI Innovation team and now part of Hugging Face PEFT, is designed for continual learning: teaching a model new capabilities while strictly preserving its existing performance. Each method maps to its own production-tested backend, and the interface remains consistent across all of them.
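    One way to read that guidance is as a simple decision rule. The helper below is purely illustrative; the function name, arguments, and thresholds are assumptions for the sketch, not part of Training Hub:

```python
def pick_algorithm(gpu_memory_gb: int,
                   preserve_existing_skills: bool,
                   need_max_task_performance: bool) -> str:
    """Illustrative decision rule for choosing a post-training method."""
    if preserve_existing_skills:
        return "osft"   # continual learning without degrading prior capabilities
    if need_max_task_performance and gpu_memory_gb >= 80:
        return "sft"    # full fine-tuning when compute allows
    if gpu_memory_gb < 24:
        return "qlora"  # 4-bit base model to fit smaller GPUs
    return "lora"       # fast, cheap adaptation on mid-range hardware
```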

    Backend support without lock-in

    Training Hub is not a new training framework; it's a routing layer to the right backend for the job. Today it supports:

    • Unsloth for LoRA / QLoRA
    • InstructLab-Training for SFT
    • RHAI Mini-Trainer for OSFT / continual learning
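
    That routing can be pictured as a plain dispatch table. The sketch below is a simplified illustration of the idea, not the actual Training Hub implementation; the backend stubs merely stand in for the real libraries:

```python
from typing import Any, Callable, Dict

# Stand-in entry points; in reality these would call into Unsloth,
# InstructLab-Training, and the RHAI Mini-Trainer respectively.
def _unsloth_backend(**kwargs: Any) -> str:
    return "unsloth: LoRA/QLoRA run"

def _instructlab_backend(**kwargs: Any) -> str:
    return "instructlab-training: SFT run"

def _mini_trainer_backend(**kwargs: Any) -> str:
    return "mini-trainer: OSFT run"

# The routing table maps each algorithm name to its backend.
_BACKENDS: Dict[str, Callable[..., str]] = {
    "lora_sft": _unsloth_backend,
    "sft": _instructlab_backend,
    "osft": _mini_trainer_backend,
}

def train(algorithm: str, **common_args: Any) -> str:
    """One interface, many backends: same arguments, different engine."""
    try:
        backend = _BACKENDS[algorithm]
    except KeyError:
        raise ValueError(f"unknown algorithm: {algorithm!r}") from None
    return backend(**common_args)
```

    Because every backend sits behind the same call signature, swapping one algorithm for another is a one-word change in the caller, which is the point of a routing-layer design.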

    All backends are production-tested. For Red Hat AI customers, they are fully supported and integrated with OpenShift AI.

    Why this matters for data scientists

    Training Hub is built around a simple idea: LLM training should be algorithm-driven rather than framework-driven. Select the right training algorithm — LoRA, SFT, or OSFT — based on your adaptation goal, not your infrastructure constraints. By utilizing production-tested backends without coupling workflows to a single framework, teams can compose multi-stage training pipelines as their models and data evolve. Your training logic remains stable even as the underlying infrastructure, scale, and backends continue to change.

    Get started

    Training Hub remains your algorithm-focused entry point for LLM fine-tuning on Red Hat AI, now with production-grade LoRA/QLoRA support powered by Unsloth. To learn more, check out these resources:

    • Documentation
    • Source code and examples
    • Runnable Unsloth LoRA/QLoRA example in the repo
