InstructLab

InstructLab is a model-agnostic AI tool that makes it possible to train large language models (LLMs). A supported version of InstructLab is available on RHEL AI, while the community version is freely available and can run on a laptop.

Try InstructLab

What's included with InstructLab on RHEL AI?

InstructLab comes with a powerful set of tools that make it easy for anyone to work with a foundation model and to run and train LLMs.

Indemnified LLMs

The supported Granite model helps developers build on and extend models with confidence.

Learn More 

CLI

ilab is a command-line interface (CLI) tool that allows you to download, chat with, and train LLMs.

Learn More 
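As a sketch of how the CLI is used (subcommand names can differ between ilab versions, so treat these as illustrative rather than definitive):

```shell
# Illustrative ilab workflow; exact subcommands and flags vary by version.
ilab config init       # first-time setup: configuration and taxonomy checkout
ilab model download    # fetch a quantized foundation model to run locally
ilab model chat        # open an interactive chat session with the model
```

Each step builds on the previous one: you initialize once, download a model, then interact with it locally.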

Taxonomy

The taxonomy is a tree of knowledge and skills that allows developers to create models tuned with additional data.

Learn More 

Synthetic Data Generation

Leverage synthetic data generation to produce a larger dataset for your model to train on, generated from the data you provide to InstructLab.

Learn More

The community version of InstructLab does not include indemnification from Red Hat. You can view the community versions of the models here.

Model Training workflow

Install

RHEL AI comes with InstructLab already set up, but if you are on another supported environment, you can use the steps here to install InstructLab.

Learn More  
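On other supported environments, installation is typically done with pip inside a Python virtual environment. The package name and setup command below are a sketch of the common community install path, not a substitute for the official steps:

```shell
# Illustrative community install; follow the linked steps for your environment.
python3 -m venv --upgrade-deps venv   # create an isolated Python environment
source venv/bin/activate
pip install instructlab               # community package on PyPI
ilab config init                      # interactive first-time configuration
```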

Creating new knowledge & skills

The taxonomy tree allows you to tune a foundation model with your own data. Learn more about using the taxonomy to prepare your data here.

Learn More  
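A contribution to the taxonomy tree is typically a qna.yaml file of seed question-and-answer examples placed at the appropriate node. The exact schema depends on the taxonomy version, so the field names below are an illustrative sketch:

```yaml
# Illustrative qna.yaml for a skill node; field names may vary by schema version.
version: 2
created_by: your-github-username
task_description: Answer questions about a company's return policy.
seed_examples:
  - question: How long do customers have to return an item?
    answer: Customers can return items within 30 days of purchase.
  - question: Do customers need a receipt to return an item?
    answer: Yes, a receipt or proof of purchase is required.
```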

Generate Synthetic Data

Synthetic data generation creates more training examples so that the model can be trained more effectively. Learn more about synthetic data generation and how it is used here.

Learn More 
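The generation step reads your taxonomy contributions and uses a teacher model to expand them into a much larger training set. A minimal sketch, assuming a current ilab CLI:

```shell
# Illustrative synthetic data generation; flags vary by ilab version.
ilab data generate    # reads new taxonomy entries and writes training samples
```

The generated samples are what the training step consumes, so more and better seed examples generally yield a richer synthetic dataset.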

Train the model

This is the most crucial step: your knowledge and skills are trained into the model to produce one that meets your use cases. Learn more about training the model here.

Learn More 
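Training consumes the synthetic dataset produced in the previous step. A minimal sketch, assuming a current ilab CLI (hardware-specific options are omitted):

```shell
# Illustrative training run; available options depend on your hardware.
ilab model train    # fine-tunes the base model with the generated data
```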

Inference

Inferencing adds intelligence to your applications based on the data you have added to the model. Learn more about how developers can serve models locally with Podman and integrate AI inferencing into their applications.

Learn More
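A common local pattern is to serve the trained model and query it over an OpenAI-compatible API. The port, endpoint path, and model name below are assumptions for illustration:

```shell
# Illustrative local serving and query; port and model name are assumptions.
ilab model serve &    # starts a local OpenAI-compatible inference server
curl -s http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "granite", "messages": [{"role": "user", "content": "Hello"}]}'
```

The same endpoint shape is what an application container managed by Podman would call to integrate inferencing.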