
Announcing the General Availability of Red Hat Enterprise Linux AI (RHEL AI) Version 1.2!

October 15, 2024
Yashwanth Maheshwaram
Related topics:
Artificial intelligence
Related products:
Red Hat Enterprise Linux AI

    We are excited to announce the general availability of RHEL AI 1.2, the foundation model platform designed to help organizations develop, fine-tune, deploy, and run open source Granite large language models (LLMs) to power enterprise applications. With RHEL AI, developers can seamlessly fine-tune the Granite foundation models produced by Red Hat and IBM to meet their specific needs. RHEL AI 1.2 builds on previously available features, such as the indemnified Granite LLMs and the supported InstructLab workflow for model alignment, and enhances the developer experience with support for a wider range of hardware and public clouds. The images for RHEL AI 1.2 are available here.

    What’s new in RHEL AI 1.2?

    1. Expanded Hardware Support:
      1. RHEL AI 1.2 now supports AMD Instinct accelerators as a technology preview. Visit Hardware Support for further information.
      2. The full end-to-end InstructLab model alignment workflow and model inferencing are now available as a technology preview on Google Cloud Platform (GCP) and Azure with 8xA100 and 8xH100 accelerators.
      3. NVIDIA accelerated computing continues to be supported on bare metal, AWS, and IBM Cloud.
      4. Lenovo ThinkSystem SR675 V3 servers are now supported, including a factory preload option.
    2. Enhanced Cloud Availability: With BYOS (bring your own subscriptions), in addition to existing support for AWS and IBM Cloud, you can now install RHEL AI 1.2 on GCP and Azure as a technology preview. This makes RHEL AI even more accessible for a wider range of cloud infrastructures.
    3. InstructLab Tools and Workflow Enhancements:
      1. RHEL AI’s InstructLab workflow—used for fine-tuning models by adding new knowledge—is generally available on bare metal, AWS, and IBM Cloud, and is now also available on Google Cloud and Azure as a technology preview.
      2. Hardware auto-detection now simplifies the setup process, and the new --training-journal flag enables users to resume previously failed training runs, making model fine-tuning more efficient and resilient.
      3. Training checkpoint and resume: Long training runs during model fine tuning can now be saved at regular intervals, thanks to periodic checkpointing. This feature allows InstructLab users to resume training from the last saved checkpoint instead of starting over, saving valuable time and computational resources.
      4. Enhanced training with PyTorch FSDP (technology preview): For multi-phase training of models with synthetic data, ilab train now uses PyTorch Fully Sharded Data Parallel (FSDP). This dramatically reduces training times by sharding a model’s parameters, gradients and optimizer states across data parallel workers (e.g., GPUs). Users can pick FSDP for their distributed training by using ilab config edit.
    4. Synthetic Data Generation (SDG): The enhanced LAB synthetic data generation (SDG) allows you to create large artificial datasets with advanced multi-phase training strategies, enabling more accurate and efficient model training.
    5. Model Inferencing Support: Model inferencing with vLLM, a memory-efficient inference engine, is now available as a technology preview on GCP, alongside its existing general availability on bare metal, AWS, and IBM Cloud.
    6. Supported LLMs: granite-7b-starter, granite-7b-redhat-lab, mixtral-8x7B-instruct-v0-1, and prometheus-8x7b-v2.0 models continue to be generally available (GA). The granite-8b-code-instruct and granite-8b-code-base models, designed for code generation, remain in technology preview.
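To illustrate how the fine-tuning features above fit together, the session below sketches a typical InstructLab loop on RHEL AI 1.2. Command names follow the upstream `ilab` CLI; only the `--training-journal` flag is taken directly from this release, the journal path is hypothetical, and exact subcommands and flags may differ by version, so verify against `ilab --help` on your installation.

```shell
# Sketch of an InstructLab fine-tuning loop on RHEL AI 1.2 (verify
# subcommands and flags against your installed ilab version).

# 1. Generate a synthetic dataset from your taxonomy (LAB SDG).
ilab data generate

# 2. Train the model; long runs are checkpointed at regular intervals.
#    FSDP for distributed training can be selected via `ilab config edit`.
ilab model train

# 3. If a run fails, resume it from the last saved checkpoint using its
#    training journal (new in RHEL AI 1.2; path shown is hypothetical).
ilab model train --training-journal ~/.local/share/instructlab/journal.yaml

# 4. Serve the fine-tuned model for inference (vLLM-backed).
ilab model serve
```

Because checkpoints are written periodically, step 3 restarts from the most recent saved state rather than from scratch, which is what saves time and compute on interrupted runs.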

    RHEL AI 1.2 represents a significant step forward in fine-tuning enterprise-grade LLMs and broadens deployment options across leading public cloud providers, with infrastructure support for both NVIDIA and AMD accelerators. The expanded hardware support, robust synthetic data generation, and improved tooling help your organization tailor AI models to specific enterprise needs efficiently.

    For more details, check out the full release notes and explore all the new features and enhancements! For further information and a detailed understanding of what you can do with RHEL AI, visit RHEL AI Overview. 

    Important notice:

    With the introduction of RHEL AI 1.2, support for RHEL AI 1.1 will be deprecated in 30 days. Please ensure your systems are upgraded to RHEL AI 1.2 to continue receiving support.

    Last updated: October 17, 2024
    Disclaimer: Please note the content in this blog post has not been thoroughly reviewed by the Red Hat Developer editorial team. Any opinions expressed in this post are the author's own and do not necessarily reflect the policies or positions of Red Hat.
