Build an AI agent to automate TechDocs in Red Hat Developer Hub
Automate technical documentation with an AI agent that scans repositories, generates full TechDocs, and seamlessly integrates with Red Hat Developer Hub.
Explore how to utilize guardrails for safety mechanisms in large language models (LLMs) with Node.js and Llama Stack, focusing on LlamaGuard and PromptGuard.
Learn how to optimize GPU resource use with NVIDIA Multi-Instance GPU (MIG) and discover how MIG-Adapter enhances GPU resource utilization in Kubernetes.
Members from the Red Hat Node.js team were recently at PowerUp 2025, which was held in Anaheim, California.
Discover how IBM used OpenShift AI to maximize GPU efficiency on its internal AI supercomputer, using open source tools like Kueue for efficient AI workloads.
Learn how integration powers AI with Apache Camel at Devoxx UK 2025.
Gain detailed insights into vLLM deployments on OpenShift AI. Learn to build dashboards with Dynatrace and OpenTelemetry to enable reliable LLM performance.
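For context on where such dashboards get their data: vLLM's OpenAI-compatible server exposes Prometheus-style metrics on its /metrics endpoint, which Dynatrace or an OpenTelemetry collector can scrape. Below is a minimal sketch, assuming a vLLM server reachable at http://localhost:8000 (an illustrative address, not one from the article), that lists the metric families the server reports.

```python
import requests
from prometheus_client.parser import text_string_to_metric_families

# Assumed local vLLM server; in an OpenShift AI deployment this would be
# the route or service address in front of the vLLM pod.
VLLM_METRICS_URL = "http://localhost:8000/metrics"

raw = requests.get(VLLM_METRICS_URL, timeout=10).text

# Parse the Prometheus exposition format and list each metric family
# (request counts, latency histograms, and so on) that a dashboard could chart.
for family in text_string_to_metric_families(raw):
    print(f"{family.name} ({family.type}): {len(family.samples)} samples")
```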
Learn how to use Red Hat OpenShift AI to quickly develop, train, and deploy machine learning models.
Explore the complete machine learning operations (MLOps) pipeline utilizing Red Hat OpenShift AI.
llm-d delivers Kubernetes-native distributed inference with advanced optimizations, reducing latency and maximizing throughput.
LLM Semantic Router uses semantic understanding and caching to boost performance, cut costs, and enable efficient inference with llm-d.
Optimize model inference and reduce costs with compression techniques like quantization and pruning, using LLM Compressor on Red Hat OpenShift AI.
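As a rough illustration of that workflow, here is a minimal quantization sketch with the open source llm-compressor library. The model name, scheme, and calibration settings are placeholders, and exact import paths differ between llm-compressor releases, so treat this as a sketch rather than the article's exact recipe.

```python
from llmcompressor.modifiers.quantization import GPTQModifier
from llmcompressor.transformers import oneshot  # newer releases also expose oneshot at the package top level

# Illustrative recipe: 4-bit weight quantization of all Linear layers,
# leaving the output head in full precision.
recipe = GPTQModifier(targets="Linear", scheme="W4A16", ignore=["lm_head"])

oneshot(
    model="TinyLlama/TinyLlama-1.1B-Chat-v1.0",  # placeholder model
    dataset="open_platypus",                      # calibration data
    recipe=recipe,
    output_dir="TinyLlama-1.1B-Chat-W4A16",
    max_seq_length=2048,
    num_calibration_samples=512,
)
```

The compressed checkpoint can then be served with vLLM on Red Hat OpenShift AI, which is where the latency and cost savings show up.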
Learn how to use synthetic data generation (SDG) and fine-tuning in Red Hat AI to customize reasoning models for your enterprise workflows.
Learn how to deploy a trained model with Red Hat OpenShift AI.
Explore how to use large language models (LLMs) with Node.js by observing Ollama
Discover how you can use the Podman AI Lab extension for Podman Desktop to work with large language models locally.
More Essential AI tutorials for Node.js Developers
Learn how to run a fraud detection AI model using confidential virtual machines on RHEL running in the Microsoft Azure public cloud.
Configure your Red Hat Enterprise Linux AI machine, then download, serve, and interact with large language models.
vLLM empowers macOS and iOS developers to build powerful AI-driven applications by providing a robust and optimized engine for running large language models.
PowerUP 2025 is the week of May 19th. It's held in Anaheim, California this year.
Learn how to use pipelines in OpenShift AI to automate the full AI/ML lifecycle on a single-node OpenShift instance.
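Data science pipelines in OpenShift AI are built on Kubeflow Pipelines, so a pipeline definition is ordinary Python compiled to YAML. A minimal sketch with the kfp SDK follows; the component logic and names are placeholders, not the tutorial's actual pipeline.

```python
from kfp import compiler, dsl

@dsl.component(base_image="python:3.11")
def train_model(epochs: int) -> str:
    # Placeholder for a real training step.
    return f"trained for {epochs} epochs"

@dsl.component(base_image="python:3.11")
def evaluate_model(model_info: str):
    # Placeholder for a real evaluation step.
    print(f"evaluating: {model_info}")

@dsl.pipeline(name="demo-training-pipeline")
def demo_pipeline(epochs: int = 5):
    train_task = train_model(epochs=epochs)
    evaluate_model(model_info=train_task.output)

if __name__ == "__main__":
    # The compiled YAML can be imported into the OpenShift AI dashboard
    # as a data science pipeline.
    compiler.Compiler().compile(demo_pipeline, "demo_pipeline.yaml")
```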
Jupyter Notebook works with OpenShift AI to interactively classify images.
LLM Compressor bridges the gap between model training and efficient deployment via quantization and sparsity, enabling cost-effective, low-latency inference.
Learn how to set up NVIDIA NIM on Red Hat OpenShift AI and how this benefits AI and data science workloads.
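NIM microservices expose an OpenAI-compatible HTTP API, so once a NIM is running on OpenShift AI you can query it with the standard openai client. The route URL, API key, and model name below are placeholders for whatever your own deployment exposes.

```python
from openai import OpenAI

# Placeholder endpoint: the OpenShift route (or in-cluster service) fronting the NIM pod.
client = OpenAI(
    base_url="https://nim-demo.apps.example.com/v1",
    api_key="changeme",  # many in-cluster deployments ignore this value
)

response = client.chat.completions.create(
    model="meta/llama-3.1-8b-instruct",  # placeholder NIM model identifier
    messages=[{"role": "user", "content": "Give me one sentence on what NVIDIA NIM is."}],
    max_tokens=64,
)
print(response.choices[0].message.content)
```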