Article
Article

Accelerated expert-parallel distributed tuning in Red Hat OpenShift AI

Karel Suta et al.

Discover how to optimize training of MoE models with fms-hf-tuning, an open source tuning library built on PyTorch FSDP and Hugging Face libraries. Learn about data preprocessing, throughput and memory efficiency features, distributed training, and expert parallelism. Improve your AI and agentic applications on domain-specific enterprise tasks.

Article

Automate AI agents with the Responses API in Llama Stack

Michael Dawson

Learn how the Responses API in Llama Stack automates complex tool calling while maintaining granular control over conversation flow for AI agents. Discover the benefits and implementation details.

Article

Estimate GPU memory for LLM fine-tuning with Red Hat AI

Mohib Azam

Learn how to estimate memory requirements for your LLM fine-tuning experiments using Red Hat Training Hub's memory_estimator.py API. This guide covers the memory components, adjusting training setups for specific GPU specifications, and using the memory estimator in your code. Streamline your model fine-tuning process with runtime estimates and automated hyperparameter suggestions.

Article

Optimize PyTorch training with the autograd engine

Vishal Goyal

Understand the PyTorch autograd engine internals to debug gradient flows. Learn about computational graphs, saved tensors, and performance optimization techniques.

Article

How to reduce false positives in security scans

Miro Hrončok

Learn how Fedora Rawhide is testing a solution that embeds SBOM metadata directly into Python wheels, allowing scanners to recognize backported security fixes.