OCI images are now available on the Docker Hub and Quay.io registries, making it even easier to use the Granite family of large language models (LLMs) and InstructLab, the open source project designed to enhance LLM capabilities for generative artificial intelligence (AI) applications.
About Granite LLMs
Granite models are pre-trained LLMs from IBM, released under the Apache 2.0 license for community and commercial use. Granite was pre-trained from scratch on IBM-curated data as an open reference implementation of Meta's Llama-2-7B. For more information about IBM Granite, visit IBM's website.
About InstructLab
InstructLab is an open source project co-created by IBM and Red Hat with the goal of improving the alignment and performance of LLMs in generative AI workflows. InstructLab focuses on creating a cost-effective and accessible environment, allowing contributors (even those with minimal machine learning experience) to get involved in advancing the capabilities of LLMs.
With InstructLab, you can more easily fine-tune models, perform inference, or work with model-serving solutions, enabling a wide range of applications from intelligent automation to personalized content generation. For more information about InstructLab, visit their website.
New Docker Hub and Quay.io OCI images available
To make getting started easier than ever, Red Hat is publishing OCI images that simplify serving the granite-7b-lab model (additional Granite models are planned over the coming months) and working with the InstructLab tools. These images cover use cases from desktop inference to full fine-tuning workflows on powerful GPU-equipped servers. Red Hat has published two OCI images to cover these use cases and hardware configurations (a pull-and-run sketch follows the list below):
- InstructLab: Ideal for desktop or Mac users looking to explore InstructLab, this image provides a simple introduction to the platform without requiring specialized hardware. It's perfect for prototyping and testing before scaling up.
- granite-7b-lab: This image is optimized for model serving and inference on desktop or Mac environments using the granite-7b-lab model. It allows for efficient inference without needing a GPU, making it well suited for smaller-scale deployments or local testing.
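To make the split between the two images concrete, here is a minimal Python sketch that pulls both images and starts them with Podman through the standard library's subprocess module. The image references, the served port, and the run options are assumptions for illustration only; use the exact paths and instructions from the project's GitHub README (Docker users can substitute docker for podman).

```python
"""Minimal sketch: pull the two OCI images and start them locally with Podman.

The image references, port, and run options below are assumptions for
illustration only; use the exact paths and options from the project's
GitHub README. Docker users can substitute "docker" for "podman".
"""
import subprocess

# Hypothetical image references -- replace with the paths from the README.
INSTRUCTLAB_IMAGE = "docker.io/redhat/instructlab:latest"
GRANITE_IMAGE = "docker.io/redhat/granite-7b-lab-gguf:latest"


def run(cmd: list[str]) -> None:
    """Echo and execute a container CLI command, failing loudly on errors."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)


if __name__ == "__main__":
    # Fetch both images from the registry.
    run(["podman", "pull", INSTRUCTLAB_IMAGE])
    run(["podman", "pull", GRANITE_IMAGE])

    # Serve the granite-7b-lab model in the background; port 8000 is an
    # assumed default for the model-serving endpoint.
    run(["podman", "run", "-d", "--name", "granite", "-p", "8000:8000", GRANITE_IMAGE])

    # Open an interactive InstructLab session for experimenting with the ilab CLI.
    run(["podman", "run", "-it", "--rm", INSTRUCTLAB_IMAGE])
```

The intent is that the granite container keeps serving in the background while the InstructLab container gives you an interactive session for the ilab tooling; adjust the commands to match the entrypoints documented in the README.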
Why use these images?
By making InstructLab and granite-7b-lab available on Quay.io and Docker Hub, Red Hat simplifies the process of working with and fine-tuning large language models in a range of environments. Whether you're experimenting on a Mac or deploying on a GPU-accelerated server, these OCI images eliminate much of the setup and dependency management, allowing you to focus on building and fine-tuning your AI models.
Here are a few of the key benefits:
- Inference on desktop/Mac: Run efficient, local AI inference tasks without the need for specialized hardware (see the sketch after this list).
- Consistency across environments: OCI images ensure that your AI development environment is consistent, from local testing to cloud-scale deployments.
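As a concrete example of local inference, the sketch below sends a single chat request to a locally served model. It assumes the granite-7b-lab container (or InstructLab's model-serving command) exposes an OpenAI-compatible chat completions endpoint on localhost:8000; the endpoint URL and the model identifier are assumptions, so confirm both against the project's README before running it.

```python
"""Minimal local-inference sketch against an OpenAI-compatible endpoint.

Assumes the granite-7b-lab container (or InstructLab's model serving) exposes
a chat completions API on localhost:8000; the URL and model name below are
assumptions -- confirm them in the project's README.
"""
import json
import urllib.request

ENDPOINT = "http://localhost:8000/v1/chat/completions"  # assumed default

payload = {
    "model": "granite-7b-lab",  # hypothetical model identifier
    "messages": [
        {"role": "user", "content": "Summarize what InstructLab does in one sentence."}
    ],
    "temperature": 0.2,
}

request = urllib.request.Request(
    ENDPOINT,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)

with urllib.request.urlopen(request) as response:
    body = json.load(response)

# Print the first generated message from the completion.
print(body["choices"][0]["message"]["content"])
```

Because the request uses the OpenAI wire format, the same client code can be pointed at other OpenAI-compatible model servers, which is part of what keeps local testing and larger deployments consistent.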
For detailed usage instructions and to get started with these images, visit the project’s GitHub README. You can also check out the blog post published by Docker for additional information about how to pull the images on Docker Hub.
Next steps
With the OCI images for granite-7b-lab and InstructLab now available, it's easy to begin experimenting with generative AI workflows in a variety of environments. Once you’ve had a chance to explore these tools, consider expanding your company's capabilities by leveraging other AI-related products from Red Hat.
Red Hat OpenShift AI is a flexible, scalable artificial intelligence and machine learning platform that enables enterprises to create and deliver AI-enabled applications at scale across hybrid cloud environments. OpenShift AI provides trusted, operationally consistent capabilities for teams to experiment, serve models, and deliver innovative apps.
Red Hat Enterprise Linux AI is a foundation model platform to seamlessly develop, test, and run Granite family large language models for enterprise applications.
Red Hat Enterprise Linux AI brings together:
- The Granite family of open source-licensed LLMs, distributed under the Apache 2.0 license with complete transparency on training datasets.
- InstructLab model alignment tools, which open the world of community-developed LLMs to a wide range of users.
- A bootable image of Red Hat Enterprise Linux, including popular AI libraries such as PyTorch and hardware-optimized inference for NVIDIA, Intel, and AMD.
- Enterprise-grade technical support and Open Source Assurance legal protections.
By exploring these products, you'll not only simplify model deployment and orchestration but also create a scalable infrastructure for AI innovation across your organization.
Last updated: October 31, 2024