Debug Ansible errors faster with an AI monitoring agent
Automate Ansible error resolution with AI. Learn how to ingest logs, group errors into templates, and generate step-by-step solutions using RAG and agentic workflows.
Build a multichannel IT self-service AI agent that maintains session context across Slack, email, and ServiceNow using CloudEvents and Knative for cross-channel automation.
Learn how to deploy Voxtral Mini 4B Realtime, a streaming automatic speech recognition model for low-latency voice workloads, using Red Hat AI Inference Server.
Headed to DevNexus? Visit the Red Hat Developer booth on-site to speak to our expert technologists.
See how to use Apache Camel to turn LLMs into reliable text-processing engines for generative parsing, semantic routing, and "air-gapped" database querying.
Learn about NVFP4, a 4-bit floating-point format for high-performance inference on modern GPUs that can deliver near-baseline accuracy at large scale.
Explore how Red Hat OpenShift AI uses LLM-generated summaries to distill product reviews into a form users can quickly process.
Deploy an enterprise-ready RAG chatbot using OpenShift AI. This quickstart automates provisioning of components like vector databases and ingestion pipelines.
Explore the pros and cons of on-premises and cloud-based large language models (LLMs) for code assistance. Learn about specific models available with Red Hat OpenShift AI, supported IDEs, and more.
Explore the architecture and training behind the two-tower model of a product recommender built using Red Hat OpenShift AI.
Discover the self-service agent AI quickstart for automating IT processes on Red Hat OpenShift AI. Deploy, integrate with Slack and ServiceNow, and more.
Learn how to build AI-enabled applications for product recommendations, semantic product search, and automated product review summarization with OpenShift AI.
Deploy an Oracle SQLcl MCP server on an OpenShift cluster and use it with the OpenShift AI platform in this AI quickstart.
Discover the AI Observability Metric Summarizer, an intelligent, conversational tool built for Red Hat OpenShift AI environments.
Explore the latest release of LLM Compressor, featuring attention quantization, MXFP4 support, AutoRound quantization modifier, and more.
This article compares the performance of llm-d, Red Hat's distributed LLM inference solution, with a traditional deployment of vLLM using naive load balancing.
Discover the advantages of using Java for AI development in regulated industries. Learn about architectural stability, performance, runtime guarantees, and more.
Whether you're just getting started with artificial intelligence or looking to deepen your knowledge, our hands-on tutorials will help you unlock the potential of AI while leveraging Red Hat's enterprise-grade solutions.
Learn how Model Context Protocol (MCP) enhances agentic AI in OpenShift AI, enabling models to call tools, services, and more from an AI application.
Take a look back at Red Hat Developer's most popular articles of 2025, covering AI coding practices, agentic systems, advanced Linux networking, and more.
Discover 2025's leading open models, including Kimi K2 and DeepSeek. Learn how these models are transforming AI applications and how you can start using them.
Learn how to deploy and test the inference capabilities of vLLM on OpenShift using GuideLLM, a specialized performance benchmarking tool.
Learn how to fine-tune a RAG model using Feast and Kubeflow Trainer. This guide covers preprocessing and scaling training on Red Hat OpenShift AI.
Learn how to implement retrieval-augmented generation (RAG) with Feast on Red Hat OpenShift AI to create highly efficient and intelligent retrieval systems.