
Build and evaluate a fraud detection model with TensorFlow and ONNX
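The fraud detection tutorial trains with TensorFlow and exports to ONNX; as a taste of the evaluation side, here is a minimal pure-Python sketch of the metrics that matter for fraud models. The labels and predictions are invented for illustration; in the tutorial they would come from the trained model.

```python
# Hypothetical sketch: scoring a fraud classifier's predictions.
# Fraud datasets are heavily imbalanced, so accuracy alone misleads;
# precision, recall, and F1 on the fraud class are more informative.

def evaluate(y_true, y_pred):
    """Return precision, recall, and F1 for the positive (fraud) class."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Toy data: 20% fraud (real data is far more skewed).
y_true = [0, 0, 0, 0, 0, 0, 0, 0, 1, 1]
y_pred = [0, 0, 0, 0, 0, 1, 0, 0, 1, 0]   # one hit, one miss, one false alarm
precision, recall, f1 = evaluate(y_true, y_pred)
print(f"precision={precision:.2f} recall={recall:.2f} f1={f1:.2f}")
# prints: precision=0.50 recall=0.50 f1=0.50
```

Note that a model predicting "not fraud" for every transaction would score 80% accuracy on this toy data while catching zero fraud, which is why these metrics are reported instead.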
Learn how to deploy a trained model with Red Hat OpenShift AI and use its
Explore how to use large language models (LLMs) with Node.js by observing Ollama
Discover how you can use the Podman AI Lab extension for Podman Desktop to work
More essential AI tutorials for Node.js developers
Learn how to run a fraud detection AI model using confidential virtual machines on RHEL running in the Microsoft Azure public cloud.
Configure your Red Hat Enterprise Linux AI machine, download, serve, and
vLLM empowers macOS and iOS developers to build powerful AI-driven applications by providing a robust and optimized engine for running large language models.
PowerUP 2025 is the week of May 19th. It's held in Anaheim, California this year
Learn how to use pipelines in OpenShift AI to automate the full AI/ML lifecycle on a single-node OpenShift instance.
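OpenShift AI pipelines define each lifecycle stage as a container component (via the Kubeflow Pipelines tooling); as a conceptual stand-in, the stage ordering can be sketched in plain Python. The step names and data below are hypothetical, not the tutorial's actual pipeline.

```python
# Plain-Python sketch of the stages an AI/ML pipeline typically automates:
# ingest -> preprocess -> train -> evaluate, each step consuming the
# previous step's output. Real pipelines run each stage as a container.

def ingest():
    # In practice: pull raw data from S3-compatible storage.
    return [1.0, 2.0, 3.0, 4.0]

def preprocess(raw):
    # In practice: cleaning, feature engineering, train/test split.
    mean = sum(raw) / len(raw)
    return [x - mean for x in raw]

def train(features):
    # Stand-in for model training: "learn" the spread of the data.
    return {"scale": max(features) - min(features)}

def evaluate(model):
    # Stand-in for evaluation: gate promotion on a metric threshold.
    return model["scale"] > 0

def run_pipeline():
    """Execute the stages in order, wiring each output to the next step."""
    model = train(preprocess(ingest()))
    return "promote" if evaluate(model) else "reject"

print(run_pipeline())  # prints: promote
```

The value of a real pipeline engine over this sketch is that each stage runs reproducibly in its own container, with retries, caching, and scheduling handled by the platform.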
Jupyter Notebook works with OpenShift AI to interactively classify images.
LLM Compressor bridges the gap between model training and efficient deployment via quantization and sparsity, enabling cost-effective, low-latency inference.
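The core idea behind weight quantization, one of the techniques LLM Compressor applies, can be shown with a toy symmetric INT8 scheme in pure Python. This is a conceptual illustration only, not LLM Compressor's API: floats are mapped to 8-bit integers plus one scale factor, trading a bounded amount of precision for a roughly 4x smaller footprint.

```python
# Toy symmetric per-tensor quantization to int8, for intuition only.
# Each weight is stored as an integer in [-127, 127] plus one shared
# float scale; dequantizing recovers the weight to within half a step.

def quantize_int8(weights):
    """Map floats to int8 codes and a shared scale factor."""
    scale = max(abs(w) for w in weights) / 127 or 1.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from int8 codes."""
    return [x * scale for x in q]

weights = [0.12, -0.49, 0.33, 1.0, -0.98]   # made-up example weights
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
max_err = max(abs(w - r) for w, r in zip(weights, restored))
print(f"max reconstruction error: {max_err:.5f} (step size {scale:.5f})")
```

Production schemes add per-channel or per-group scales, calibration data, and sparsity on top of this idea, which is where the engineering in tools like LLM Compressor lives.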
Learn how to set up NVIDIA NIM on Red Hat OpenShift AI and how this benefits AI and data science workloads.
Learn how the dynamic accelerator slicer operator improves GPU resource management in OpenShift by dynamically adjusting allocation based on workload needs.
Get an introduction to AI function calling using Node.js and the LangGraph.js framework, now available in the Podman AI Lab extension.
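The article builds function calling with Node.js and LangGraph.js; the underlying dispatch pattern is language-agnostic and can be sketched in a few lines of Python. The tool, its name, and the canned model reply below are all hypothetical, standing in for what a real LLM would emit.

```python
# Minimal sketch of the tool-dispatch loop behind AI function calling:
# the model emits a structured tool call, the application runs the named
# function with the given arguments, and the result is fed back.
import json

# Hypothetical tool registry; a real app would expose many such functions.
TOOLS = {
    "get_weather": lambda city: f"Sunny in {city}",
}

def handle_model_output(message):
    """If the model requested a tool, run it; otherwise return the text."""
    try:
        call = json.loads(message)
    except json.JSONDecodeError:
        return message  # plain-text answer, no tool requested
    func = TOOLS[call["tool"]]
    return func(**call["arguments"])

# A real LLM would produce JSON like this when it decides to call a tool:
fake_model_reply = '{"tool": "get_weather", "arguments": {"city": "Anaheim"}}'
print(handle_model_output(fake_model_reply))  # prints: Sunny in Anaheim
```

Frameworks like LangGraph.js wrap this loop in a graph so that tool results flow back into the model for a final answer, with retries and state handled for you.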
This tutorial shows you how to use the Llama Stack API to implement retrieval-augmented generation for an AI application built with Node.js.
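The tutorial implements retrieval-augmented generation with the Llama Stack API and Node.js; as a toy illustration of the retrieval step alone, here is a Python sketch where a bag-of-words cosine score stands in for a real vector database. The documents and query are invented for the example.

```python
# Toy sketch of the retrieval step in retrieval-augmented generation (RAG):
# score each document against the query, pick the best match, and prepend
# it to the prompt. Real systems use embeddings and a vector store.
from collections import Counter
import math

def score(query, doc):
    """Cosine similarity between bag-of-words term counts."""
    q, d = Counter(query.lower().split()), Counter(doc.lower().split())
    dot = sum(q[t] * d[t] for t in q)
    norm = (math.sqrt(sum(v * v for v in q.values()))
            * math.sqrt(sum(v * v for v in d.values())))
    return dot / norm if norm else 0.0

def retrieve(query, docs, k=1):
    """Return the k documents most similar to the query."""
    return sorted(docs, key=lambda d: score(query, d), reverse=True)[:k]

docs = [
    "OpenShift AI serves models for inference",
    "Node.js is a JavaScript runtime",
    "Llama Stack exposes a unified API for agents and RAG",
]
query = "which API does Llama Stack expose"
context = retrieve(query, docs)[0]
# The retrieved passage is prepended to the prompt sent to the LLM:
prompt = f"Context: {context}\n\nQuestion: {query}"
```

Swapping the toy scorer for embedding similarity over a vector database is exactly the part that Llama Stack's RAG APIs package up for the application.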
Learn about the Red Hat OpenShift AI model fine-tuning stack and how to run performance and scale validation.
Learn how NVIDIA GPUDirect RDMA over Ethernet enhances distributed model training performance and reduces communication bottlenecks in Red Hat OpenShift AI.
Learn how the DeepSeek training process used reinforcement learning algorithms to generate human-like text and improve overall performance.
Explore performance and usability improvements in vLLM 0.8.1 on OpenShift, including crucial architectural overhauls and multimodal inference optimizations.
This Red Hat solution pattern implements key aspects of a modern IoT/edge architecture as a worked example. It uses Red Hat OpenShift Container Platform and various middleware components optimized for cloud-native use. The architecture can serve as a foundation for an IoT/edge hybrid cloud environment supporting use cases such as over-the-air (OTA) deployments, driver monitoring, and AI/ML. Bobbycar showcases an end-to-end workflow: connecting in-vehicle components to a cloud back end, processing telemetry data in batch or as a stream, training AI/ML models, and deploying containers to the edge through a DevSecOps pipeline with GitOps.
Explore Knative Serving, Eventing, and Functions through an example use case. You'll see how to collect telemetry data from simulated vehicles, process the data with OpenShift Serverless, and use the data to train a machine learning model with Red Hat OpenShift AI, Red Hat's MLOps platform. The model is then deployed as a Knative Service, providing the inference endpoint for the business application.
A comprehensive offering for developers that includes a range of tools to
Discover a new combinatorial approach to decoding AI’s hidden logic, exploring how neural networks truly compute and reason.
Discover how to fine-tune large language models (LLMs) with Kubeflow Training, PyTorch FSDP, and Hugging Face SFTTrainer in OpenShift AI.