
Camel integration quarterly digest: Q1 2025
Dive into the Q1’25 edition of the Camel integration quarterly digest, covering the latest Camel and integration news, releases, and articles from Red Hat.
The technology preview of incident detection is now available in the Red Hat OpenShift web console monitoring UI plug-in.
Explore how Red Hat Developer Hub and OpenShift AI work together with OpenShift to build workbenches and accelerate AI/ML development.
This article demystifies AI/ML models by explaining how they transform raw data into actionable business insights.
Learn how to build AI applications with OpenShift AI by integrating workbenches in Red Hat Developer Hub for training models (part 1 of 2).
A listing of essential Node.js observability posts from Red Hat Developer and the wider Node.js community.
Develop AI-integrated Java applications more efficiently using Quarkus. This article covers implementing chatbots, real-time interaction, and RAG functionality.
Discover the new Llama 4 Scout and Llama 4 Maverick models from Meta, with mixture of experts architecture, early fusion multimodality, and Day 0 model support.
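For readers new to mixture of experts, here is a minimal, self-contained PyTorch sketch of top-k expert routing, the general idea behind MoE layers. It is an illustration only, not Llama 4's actual architecture, and all dimensions are made up.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKMoE(nn.Module):
    """Toy mixture-of-experts layer: a learned router sends each token to k experts."""

    def __init__(self, dim: int, num_experts: int = 8, k: int = 2):
        super().__init__()
        self.router = nn.Linear(dim, num_experts)
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
            for _ in range(num_experts)
        ])
        self.k = k

    def forward(self, x: torch.Tensor) -> torch.Tensor:      # x: (tokens, dim)
        gate_logits = self.router(x)                          # (tokens, num_experts)
        topk_logits, topk_idx = gate_logits.topk(self.k, dim=-1)
        topk_weights = F.softmax(topk_logits, dim=-1)         # mixing weights for the chosen experts
        out = torch.zeros_like(x)
        for slot in range(self.k):
            for e, expert in enumerate(self.experts):
                mask = topk_idx[:, slot] == e                 # tokens routed to expert e in this slot
                if mask.any():
                    out[mask] += topk_weights[mask, slot].unsqueeze(-1) * expert(x[mask])
        return out

tokens = torch.randn(10, 64)
print(TopKMoE(dim=64).forward(tokens).shape)   # torch.Size([10, 64])
```

Only k of the experts run per token, which is how MoE models keep inference cost well below the total parameter count.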
Discover Async-GRPO, a new library for reinforcement learning tasks that efficiently handles large models, eliminates bottlenecks, and accelerates experiments.
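Async-GRPO's own API is not shown here, but the group-relative scoring at the heart of GRPO is easy to sketch: sample several completions per prompt, then normalize each completion's reward against its group. The NumPy function below is a conceptual illustration, not the library's interface.

```python
import numpy as np

def group_relative_advantages(rewards: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    """GRPO-style advantages: score each sampled completion relative to its group.

    rewards: shape (num_prompts, group_size), one reward per sampled completion.
    """
    mean = rewards.mean(axis=1, keepdims=True)
    std = rewards.std(axis=1, keepdims=True)
    return (rewards - mean) / (std + eps)

# Two prompts, four sampled completions each.
rewards = np.array([[0.1, 0.9, 0.4, 0.4],
                    [1.0, 1.0, 0.0, 0.5]])
print(group_relative_advantages(rewards))
```

Because the baseline comes from the group itself, no separate value model is needed; Async-GRPO's contribution is running the generation and scoring asynchronously at scale.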
Discover how the adaptive SVD approach enables LLMs to continually learn and adapt without forgetting previously acquired knowledge.
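As a rough intuition for SVD-based continual learning, the sketch below decomposes a weight matrix, treats its top singular directions as the structure carrying prior knowledge, and projects a candidate update away from them. This is an illustrative simplification, not the article's adaptive SVD method.

```python
import torch

def project_update(W: torch.Tensor, dW: torch.Tensor, k: int) -> torch.Tensor:
    """Project a candidate update away from W's top-k singular directions,
    leaving the directions that encode prior knowledge (mostly) untouched."""
    U, S, Vh = torch.linalg.svd(W, full_matrices=False)
    U_k = U[:, :k]          # top-k left singular vectors (treated as "important")
    V_k = Vh[:k, :].T       # top-k right singular vectors
    dW = dW - U_k @ (U_k.T @ dW)     # drop the component aligned with the top left singular vectors
    dW = dW - (dW @ V_k) @ V_k.T     # drop the component aligned with the top right singular vectors
    return dW

W = torch.randn(256, 128)                    # stand-in for a pretrained weight matrix
dW = 0.01 * torch.randn(256, 128)            # stand-in for a new-task update
W_new = W + project_update(W, dW, k=16)      # adapt only in the low-importance subspace
```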
Explore how RamaLama makes it easier to share data with AI models using retrieval-augmented generation (RAG), a technique for enhancing large language models.
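The RAG idea itself is simple to picture: retrieve the documents most similar to the question, then prepend them to the prompt. The toy sketch below uses a bag-of-words "embedding" purely for illustration; RamaLama handles real embedding, indexing, and serving for you.

```python
import math
from collections import Counter

DOCS = [
    "RamaLama runs AI models inside OCI containers.",
    "Retrieval-augmented generation injects relevant documents into the prompt.",
    "Podman can run containers rootless on RHEL.",
]

def embed(text: str) -> Counter:
    return Counter(text.lower().split())        # toy bag-of-words "embedding"

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(question: str, k: int = 1) -> list[str]:
    q = embed(question)
    return sorted(DOCS, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

question = "How does retrieval-augmented generation help a model?"
context = "\n".join(retrieve(question))
prompt = f"Answer using this context:\n{context}\n\nQuestion: {question}"
print(prompt)   # this augmented prompt is what gets sent to the LLM
```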
Learning the naming conventions of large language models (LLMs) helps users select the right model for their needs.
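As a toy illustration of those conventions, the snippet below splits names such as Llama-3.1-8B-Instruct into family, version, parameter count, and variant. The pattern is illustrative only; publishers do deviate from it.

```python
import re

# Rough pattern for names like "Llama-3.1-8B-Instruct":
# family, version, parameter count, then optional variant tags (instruct, base, ...).
PATTERN = re.compile(
    r"(?P<family>[A-Za-z]+)-(?P<version>[\d.]+)-(?P<size>\d+[bBmM])(?:-(?P<variant>.+))?"
)

for name in ["Llama-3.1-8B-Instruct", "granite-3.1-2b-base", "Mistral-7B-Instruct-v0.3"]:
    m = PATTERN.match(name)
    # The last name shows that conventions vary: the version comes after the variant.
    print(name, "->", m.groupdict() if m else "no match")
```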
Explore how to run tools with Node.js using Llama Stack's completions API, agent API, and support for in-line tools, local MCP tools, and remote MCP tools.
Learn how quantized vision-language models enable faster inference, lower costs, and scalable AI deployment without compromising capability.
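Server-side quantization is transparent to clients: a quantized model is called exactly like an unquantized one. Below is a minimal sketch assuming an OpenAI-compatible vLLM endpoint; the URL, API key, and model ID are placeholders for your own deployment.

```python
from openai import OpenAI

# Point the standard OpenAI client at an OpenAI-compatible vLLM endpoint
# (base_url, api_key, and model name are placeholders for your deployment).
client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

response = client.chat.completions.create(
    model="example-org/quantized-vision-language-model",   # placeholder quantized VLM ID
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "What is shown in this image?"},
            {"type": "image_url", "image_url": {"url": "https://example.com/cat.png"}},
        ],
    }],
)
print(response.choices[0].message.content)
```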
This article demonstrates how to run the Microsoft TRELLIS AI workload using Podman on RHEL to generate 3D assets.
This article demonstrates how to fine-tune LLMs in a distributed environment with open source tools and the Kubeflow Training Operator on Red Hat OpenShift AI.
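A minimal sketch of submitting such a job with the kubeflow-training Python SDK is shown below. The function body is a placeholder, and the parameter names, image, namespace, and resource values should be checked against the SDK version running on your OpenShift AI cluster.

```python
from kubeflow.training import TrainingClient

def fine_tune():
    # Placeholder for the per-worker training code (for example, a Hugging Face
    # Trainer/SFT loop); the operator injects the distributed-training env vars.
    ...

# Assumes the kubeflow-training SDK's create_job helper; verify parameters
# against your installed SDK version and cluster configuration.
TrainingClient().create_job(
    name="llm-fine-tune",
    train_func=fine_tune,
    num_workers=2,                        # two PyTorchJob worker pods
    resources_per_worker={"gpu": "1"},    # one GPU per worker
)
```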
Learn how to integrate NVIDIA NIM with OpenShift AI to build, deploy, and monitor AI-enabled applications efficiently within a unified, scalable platform.
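Because NIM exposes an OpenAI-compatible API, application code stays simple. In the hedged sketch below, the route and model ID are placeholders for whatever your OpenShift AI deployment actually serves.

```python
from openai import OpenAI

# The route and model ID below are placeholders for your NIM deployment.
client = OpenAI(base_url="https://nim-llm.apps.cluster.example.com/v1",
                api_key="not-needed")

print([m.id for m in client.models.list()])   # discover the served model ID

resp = client.chat.completions.create(
    model="meta/llama-3.1-8b-instruct",       # example NIM model ID
    messages=[{"role": "user", "content": "Summarize what a NIM microservice is."}],
)
print(resp.choices[0].message.content)
```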
Enable hardware-accelerated networking for containerized workloads using Red Hat OpenShift, NVIDIA BlueField DPUs, and the NVIDIA DOCA Platform Framework.
Explore inference performance improvements that help vLLM serve DeepSeek AI models more efficiently in this technical deep dive.
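For context, here is a minimal offline vLLM example using a distilled DeepSeek model as a stand-in; the full DeepSeek-R1/V3 models require a multi-GPU deployment and the serving-side optimizations discussed in the article.

```python
from vllm import LLM, SamplingParams

# Offline inference sketch; the distilled model ID is an example and the
# full-size DeepSeek models need far more GPU memory than a single card.
llm = LLM(model="deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B")
params = SamplingParams(temperature=0.6, max_tokens=256)

outputs = llm.generate(["Explain what multi-head latent attention (MLA) is."], params)
print(outputs[0].outputs[0].text)
```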
Podman AI Lab, which integrates with Podman Desktop, provides everything you need to start developing Node.js applications that leverage large language models.