Artificial intelligence

Video

AI Quarkus and LangChain4j Christmas

Red Hat Developers

Join us as we get ready for the holidays with a few AI holiday treats! We will demo AI from laptop to production using Quarkus and LangChain4j with ChatGPT, DALL-E, and Podman Desktop AI. Along the way, we'll discover how to get started with Quarkus+LangChain4j; use memory, agents, and tools; play with some RAG features; and test out some images for our holiday party.

Video

End-of-the-year tech talk round-up

Red Hat Developers

Come and join us for our year-end review as we enjoy the company of a few guests, discuss things that happened in 2024, and talk about what we think 2025 will bring. Feel free to bring your topics to the discussion and we’ll make sure to ask the guests what their thoughts are on your favorite topics.

Video

Red Hat and Intel Hackathon 2024 highlights

Red Hat Developers

Meet some of the winners from the 2024 Red Hat and Intel AI Hackathon. They will review their generative AI and retrieval-augmented generation (RAG) applications, built on Red Hat OpenShift AI on AWS with the AMX features of Intel Xeon processors. Learn more about their process and how they did it!

E-book

5 ways developers benefit from Red Hat OpenShift

Valentina Rodriguez Sosa

Download this 15-page e-book to explore 5 key ways OpenShift benefits developers, including integrated tools and workflows and simplified AI app development.

Article

Testing Farm as GitHub Action: User stories

Petr Hracek +1

Learn how to configure Testing Farm as a GitHub Action and avoid the work of setting up a testing infrastructure, writing workflows, and handling PR statuses.
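As a rough sketch, a workflow along these lines delegates PR testing to Testing Farm (the action reference and input names below are assumptions for illustration; check them against the project's README before use):

```yaml
name: Run tests on Testing Farm
on: pull_request

jobs:
  tests:
    runs-on: ubuntu-latest
    steps:
      # Schedule a Testing Farm request for this PR; the action
      # handles polling for results and updating the PR status.
      - name: Schedule tests on Testing Farm
        uses: sclorg/testing-farm-as-github-action@v3
        with:
          # API key issued by Testing Farm, stored as a repository secret
          api_key: ${{ secrets.TESTING_FARM_API_KEY }}
          # tmt plan(s) to run against the PR's sources
          tmt_plan_regex: smoke
```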

Video

InstructLab: Democratizing generative AI through open source collaboration

Cedric Clyburn +2

The rapid advancement of generative artificial intelligence (gen AI) has unlocked incredible opportunities. However, customizing and iterating on large language models (LLMs) remains a complex and resource-intensive process. Training and enhancing models often involves creating multiple forks, which can lead to fragmentation and hinder collaboration.