
Developer Advocate
Cedric Clyburn
Cedric Clyburn (@cedricclyburn), Senior Developer Advocate at Red Hat, is an enthusiastic software technologist with a background in Kubernetes, DevOps, and container tools. He has experience speaking at and organizing conferences including DevNexus, WeAreDevelopers, The Linux Foundation, KCD NYC, and more. Cedric loves all things open source and works to make developers' lives easier! He is based in New York.
Cedric Clyburn's contributions
Article
How to run OpenAI's gpt-oss models locally with RamaLama
Cedric Clyburn
Learn to run and serve OpenAI's gpt-oss models locally with RamaLama, a CLI tool that automates secure, containerized deployment and GPU optimization.
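As a rough sketch of the workflow this article covers: once a model is being served locally (RamaLama's serve command exposes an OpenAI-compatible REST API), a few lines of Python can query it. The model reference, port, and endpoint path below are illustrative assumptions, not details taken from the article.

# Minimal sketch: query a model served locally by RamaLama.
# Assumes you have already started a server, e.g. with:
#   ramalama serve gpt-oss:20b
# and that it exposes an OpenAI-compatible API on port 8080
# (model name, port, and endpoint path are assumptions).
import json
import urllib.request

payload = {
    "model": "gpt-oss:20b",  # assumed model reference
    "messages": [{"role": "user", "content": "Say hello in one sentence."}],
}

req = urllib.request.Request(
    "http://localhost:8080/v1/chat/completions",  # assumed local endpoint
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(req) as resp:
    body = json.load(resp)

# OpenAI-compatible servers return a list of choices, each with a message.
print(body["choices"][0]["message"]["content"])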
Article
Getting started with llm-d for distributed AI inference
Cedric Clyburn
llm-d optimizes LLM inference at scale with disaggregated prefill/decode, smart caching, and Kubernetes-native architecture for production environments.
Article
Get started with bootable containers and image mode for RHEL
Cedric Clyburn
Learn how to use and build bootable containers for disk image operating system deployment with Podman Desktop.
Article
Enhance LLMs and streamline MLOps using InstructLab and KitOps
Cedric Clyburn
Add knowledge to large language models with InstructLab and streamline MLOps using KitOps for efficient model improvement and deployment.
Article
Introducing GPU support for Podman AI Lab
Evan Shortiss
With GPU acceleration for Podman AI Lab, developers can run model inference faster and build AI-enabled applications with quicker response times.
Article
How InstructLab enables accessible model fine-tuning for gen AI
Cedric Clyburn
Discover how InstructLab simplifies LLM tuning for users.
Blog
What you should know about DevConf.US 2024
Cedric Clyburn
DevConf.US 2024: A free, community-driven open source conference featuring 60+
Article
Open source AI coding assistance with the Granite models
Cedric Clyburn
Boost your coding productivity with private and free AI code assistance using Ollama or InstructLab to run large language models locally.
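To give a feel for what local AI code assistance looks like in practice, here is a minimal sketch that sends a coding prompt to a locally running Ollama instance over its REST API. The Granite model tag and the prompt are illustrative assumptions; the article itself covers setup in detail.

# Minimal sketch: ask a locally running Granite model for a code suggestion
# via Ollama's REST API. Assumes Ollama is running on its default port
# (11434) and that a Granite model has been pulled first, e.g.:
#   ollama pull granite-code   (the model tag is an assumption)
import json
import urllib.request

payload = {
    "model": "granite-code",  # assumed model tag
    "prompt": "Write a Python function that reverses a string.",
    "stream": False,          # return a single JSON object instead of a stream
}

req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(req) as resp:
    print(json.load(resp)["response"])  # the model's generated text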
