The benefits of dynamic GPU slicing in OpenShift
Learn how the dynamic accelerator slicer operator improves GPU resource management in OpenShift by dynamically adjusting allocation based on workload needs.
Learn about the Red Hat OpenShift AI model fine-tuning stack and how to run performance and scale validation.
Learn how NVIDIA GPUDirect RDMA over Ethernet enhances distributed model training performance and reduces communication bottlenecks in Red Hat OpenShift AI.
Discover how to fine-tune large language models (LLMs) with Kubeflow Training, PyTorch FSDP, and Hugging Face SFTTrainer in OpenShift AI.
Explore how Red Hat Developer Hub and OpenShift AI work together with OpenShift to build workbenches and accelerate AI/ML development.
This article demystifies AI/ML models by explaining how they transform raw data into actionable business insights.
Learn how to build AI applications with OpenShift AI by integrating workbenches in Red Hat Developer Hub for training models (part 1 of 2).
Learning the naming conventions of large language models (LLMs) helps users select the right model for their needs.
This article demonstrates how to fine-tune LLMs in a distributed environment with open source tools and the Kubeflow Training Operator on Red Hat OpenShift AI.
Learn how to integrate NVIDIA NIM with OpenShift AI to build, deploy, and monitor AI-enabled applications efficiently within a unified, scalable platform.
Podman AI Lab, which integrates with Podman Desktop, provides everything you need to start developing Node.js applications that leverage large language models.
Learn how to securely integrate Microsoft Azure OpenAI Service with Red Hat OpenShift Lightspeed using temporary child credentials.
Discover how NVIDIA MIG technology on Red Hat OpenShift AI enhances GPU resource utilization.
Learn how to run distributed AI training on Red Hat OpenShift using RoCE.
Learn how to build a ModelCar container image and deploy it with OpenShift AI.
Let's take a look at how you can get started with generative AI in your application development process using open source tools: Podman AI Lab (https://podman-desktop.io/extensions/...) to help build and serve applications with LLMs, InstructLab (https://instructlab.ai) to fine-tune models locally on your machine, and OpenShift AI (https://developers.redhat.com/product...) to operationalize building and serving AI on an OpenShift cluster.
Learn how to integrate Model Context Protocol (MCP) with LLMs using Node.js.
Explore the benefits of open source AI models and tools and learn how Red Hat OpenShift AI helps you build innovative AI-based applications in this e-book.
This year's top articles on AI include an introduction to GPU programming, a guide to integrating AI code assistants, and the KServe open source project.
Find Kubernetes and OpenShift articles on performance and scale testing, single-node OpenShift, OpenShift Virtualization for VMware vSphere admins, and more.
Join us as we get ready for the holidays with a few AI holiday treats! We will demo AI from laptop to production using Quarkus and LangChain4j with ChatGPT, DALL-E, and Podman Desktop AI. Along the way, we'll show how to get started with Quarkus and LangChain4j, use memory, agents, and tools, play with some RAG features, and test out some images for our holiday party.
Learn how a developer can use retrieval-augmented generation (RAG) with an LLM to chat with and query their own data.
Download this 15-page e-book to explore 5 key ways OpenShift benefits developers, including integrated tools and workflows and simplified AI app development.
Explore the evolution and future of Quarkus, Red Hat’s next-generation Java framework designed to optimize applications for cloud-native environments.
A practical example of deploying a machine learning model using data science...