Red Hat at DeveloperWeek 2025

February 11-13 | Santa Clara, California | Santa Clara Convention Center

Booth #401

Visit the Red Hat Developer booth to speak with our expert technologists. Building and delivering modern, innovative apps and services is more complex and fast-moving than ever, and Red Hat Developer has the answers and expertise to help you succeed.

Stop by and ask our Red Hat technology experts what’s on your mind.

Register

Agenda

Developer Experience, Platform Engineering, and AI-Powered Apps

Wednesday, Feb 12 | 10:30 a.m. PST
DeveloperWeek Main Stage

Ignacio Riesgo, Senior Director, Developer Marketing and Strategy, Red Hat
Cedric Clyburn, Developer Advocate, Red Hat

An open source environment enables microservices, containers, Kubernetes, and AI to be deployed with agility and speed at scale. The challenge is staying ahead of the technology curve and positioning your enterprise for continued success. At Red Hat, hybrid cloud is the foundation of everything we do: we leverage open source in a multi-cloud environment to kickstart innovation. Generative AI has acted as an accelerant to business transformation, increasing efficiency and productivity. Join the Developer Advocate team for a series of live demos, available today with OpenShift AI, OpenShift, and Developer Hub, that accelerate development cycles and optimize release performance.


Building AI Applications from your desktop with Podman AI Lab

Wednesday, Feb 12 | 12:00 p.m. PST
Dev Innovation World Stage

Cedric Clyburn, Developer Advocate, Red Hat

Generative AI is revolutionizing how we build modern applications, but it can be daunting for developers, particularly when it comes to evaluating models, building with generative AI, and charting a path to production. It doesn’t have to be! Join this session to get ahead of the curve in AI-enabled cloud-native application development.

Using container technology and open source models from Hugging Face, we’ll show how to practically integrate generative AI into an existing application from your local development environment and ultimately deploy it onto Kubernetes. Why work with local, open source models? From reducing cloud computing costs to keeping control of your sensitive data and avoiding vendor lock-in, they are an increasingly popular way for developers to prototype AI applications quickly. We’ll walk through the whole AI journey: assessing models, building applications with LLMs, and deploying and serving them.
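For attendees who want a head start before the session, the local-model workflow it describes typically involves sending OpenAI-style chat requests to an endpoint served from your own machine. A minimal sketch of building such a request, assuming a hypothetical local service URL and placeholder model name (substitute whatever your local model service actually reports):

```python
import json

# Hypothetical local endpoint and model name -- replace with the
# values your local model service (e.g. Podman AI Lab) exposes.
SERVICE_URL = "http://localhost:10434/v1/chat/completions"
MODEL_NAME = "local-model"

def build_chat_request(prompt: str, temperature: float = 0.2) -> dict:
    """Build an OpenAI-compatible chat-completion payload for a local model."""
    return {
        "model": MODEL_NAME,
        "temperature": temperature,
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": prompt},
        ],
    }

payload = build_chat_request("Summarize what Podman AI Lab does.")
print(json.dumps(payload, indent=2))
# To actually call the service, POST this JSON to SERVICE_URL,
# for example with urllib.request or an OpenAI-compatible client
# pointed at the local base URL.
```

Because the request shape is the standard chat-completions format, the same code can later target a model served on Kubernetes by changing only the URL.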