A quick look at large language models with Node.js, Podman Desktop, and the Granite model
Explore large language models (LLMs) by trying out the Granite model on Podman AI Lab.
This article demonstrates how to register the SKlearn runtime as a Custom ServingRuntime, deploy the iris model on KServe with OpenDataHub, and apply authentication using Authorino to protect the model endpoints.
Seamlessly develop, deploy, and run open source Granite generative AI models.
Explore how to use OpenVINO Model Server (OVMS) built on Intel's OpenVINO toolkit to streamline the deployment and management of deep learning models.
Event-driven sentiment analysis using Kafka, Knative, and AI/ML.
End-to-end AI-enabled applications and data pipelines across the hybrid cloud
Learn a simplified method for installing KServe, a highly scalable, standards-based model inference platform on Kubernetes.
Learn how to generate complete Ansible Playbooks using natural language prompts and boost automation productivity with Red Hat's new Ansible VS Code extension.
Podman AI Lab provides a containerized environment for exploring, testing, and integrating open source AI models locally using Podman Desktop.
A practical example to deploy a machine learning model using data science...
Learn how to fine-tune large language models with specific skills and knowledge
This learning exercise delves into the end-to-end process of building and...
Are you curious about the power of artificial intelligence (AI) but not sure...
This blog post explores the integration of Large Language Models (LLMs) with...
The Edge to Core Pipeline Pattern automates a continuous cycle for releasing and deploying new AI/ML models using Red Hat build of Apache Camel and more.
Explore fundamental concepts of artificial intelligence (AI), including machine learning and deep learning, and learn how to integrate AI into your platforms and applications.
Introducing InstructLab, an open source project for enhancing large language models (LLMs) used in generative AI applications through a community approach.
Learn about Konveyor AI, an open source tool that uses generative AI to reduce the time and cost of application modernization at scale.
The AI Lab Recipes repository offers recipes for building and running containerized AI and LLM applications to help developers move quickly from prototype to production.
Explore the advantages of Podman AI Lab, which lets developers easily bring AI into their applications without depending on infrastructure beyond a laptop.
Learn how to build a containerized bootable operating system to run AI models using image mode for Red Hat Enterprise Linux, then deploy a custom image.
In this learning exercise, we'll explore how to set up a robust system for...
Learn how to deploy a trained AI model onto MicroShift, Red Hat’s lightweight Kubernetes distribution optimized for edge computing.
Explore the complete MLOps pipeline utilizing OpenShift AI. The MLOps pipeline...