
Multilingual semantic-similarity search with Elasticsearch
Discover how to use machine learning techniques to analyze context, semantics, and relationships between words and phrases indexed in Elasticsearch.
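As a rough illustration of the idea (not the article's exact code), a multilingual sentence-embedding model can turn text into vectors that Elasticsearch compares with a kNN query; the index name, field names, model, and cluster URL below are assumptions:

```python
# Hedged sketch: embed a query with a multilingual sentence-transformer and run a
# kNN search against a dense_vector field. All names here are illustrative assumptions.
from elasticsearch import Elasticsearch
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")
es = Elasticsearch("http://localhost:9200")          # assumption: local cluster

# Query in Spanish; the embedding space is shared across languages.
query_vector = model.encode("¿Cómo configuro la búsqueda semántica?").tolist()

response = es.search(
    index="articles",                                 # assumption: index name
    knn={
        "field": "embedding",                         # assumption: dense_vector field
        "query_vector": query_vector,
        "k": 5,
        "num_candidates": 50,
    },
)
for hit in response["hits"]["hits"]:
    print(hit["_score"], hit["_source"].get("title"))
```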
Explore features that enhance automation productivity for developers in Ansible Lightspeed with IBM watsonx Code Assistant, now generally available.
Learn how to communicate with OpenAI ChatGPT from a Quarkus application using the ChatGPT API in this demo.
Sparse fine-tuning in combination with sparsity-aware inference software, like DeepSparse, unlocks ubiquitous CPU hardware as a deployment target for LLM inference.
GPT4All is an open source tool that lets you deploy large language models locally without a GPU. Learn how to integrate GPT4All into a Quarkus application.
This article explores using ChatGPT to generate Apache Camel routes.
Get up and running with Ansible Lightspeed, a new generative AI service for Ansible automation, and the Ansible VS Code extension by Red Hat.
Discover the power of AI/ML in software testing with Bunsen, a Python-based toolkit that lets you analyze and report test-suite logs using an SQLite database.
Walk through the basics of fine-tuning a large language model using Red Hat OpenShift Data Science and HuggingFace Transformers.
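For a sense of what such a walkthrough involves, here is a minimal, hedged sketch of fine-tuning a small causal language model with the Transformers Trainer API; the model name, local dataset file, and hyperparameters are illustrative assumptions rather than values from the learning path:

```python
# Hedged sketch: fine-tune a small causal LM on a local text file with Hugging Face
# Transformers. Model, data file, and hyperparameters are placeholder assumptions.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

model_name = "distilgpt2"                              # assumption: any small causal LM
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

dataset = load_dataset("text", data_files={"train": "train.txt"})  # assumption: local file

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="finetuned-model",
                           num_train_epochs=1,
                           per_device_train_batch_size=4),
    train_dataset=tokenized,
    data_collator=collator,
)
trainer.train()
```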
Learn how to use the Red Hat OpenShift Data Science platform and Starburst to develop a fraud detection workflow with an AI/ML use case.
Compress large language models (LLMs) with SparseGPT to make your machine learning inference fast and efficient. Prune in one-shot with minimal accuracy loss.
Learn why graphics processing units (GPUs) have become the foundation of artificial intelligence and how they are being used.
In this article, you will learn how to perform inference on JPEG images using the gRPC API in OpenVINO Model Server on OpenShift. Model servers play an important role in smoothly bringing models from development to production. Models are served via network endpoints that expose APIs to run predictions.
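As a hedged sketch of that flow (not the article's exact client), OpenVINO Model Server exposes a TensorFlow Serving-compatible Predict gRPC endpoint; the host, model name, tensor names, and input shape below are placeholder assumptions:

```python
# Hedged sketch: call OpenVINO Model Server's TensorFlow Serving-compatible Predict
# gRPC API. Endpoint, model name, tensor names, and shape are placeholder assumptions.
import grpc
import numpy as np
from tensorflow import make_tensor_proto, make_ndarray
from tensorflow_serving.apis import predict_pb2, prediction_service_pb2_grpc

channel = grpc.insecure_channel("ovms-service:9000")    # assumption: service host/port
stub = prediction_service_pb2_grpc.PredictionServiceStub(channel)

# Assume the JPEG has already been decoded and resized to the model's input shape.
image = np.zeros((1, 3, 224, 224), dtype=np.float32)    # placeholder input tensor

request = predict_pb2.PredictRequest()
request.model_spec.name = "resnet"                      # assumption: served model name
request.inputs["input"].CopyFrom(                       # assumption: input tensor name
    make_tensor_proto(image, shape=image.shape))

response = stub.Predict(request, timeout=10.0)
predictions = make_ndarray(response.outputs["output"])  # assumption: output tensor name
print(predictions.argmax())
```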
Intel AI tools save cloud costs, data scientists' time, and model development time. Learn how the AI Kit can help you.
OpenVINO helps you tackle speech-to-text conversion, a common AI use case. Learn more.
You can perform edge detection on images using this Jupyter notebook on any Kubernetes cluster. Learn how.
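A minimal, hedged sketch of the kind of edge detection such a notebook might run, using OpenCV's Canny detector; the image path and thresholds are placeholders, not values from the article:

```python
# Hedged sketch: Canny edge detection with OpenCV. File names and thresholds are
# illustrative placeholders.
import cv2

image = cv2.imread("sample.jpg", cv2.IMREAD_GRAYSCALE)     # assumption: local test image
edges = cv2.Canny(image, threshold1=100, threshold2=200)   # detect edges
cv2.imwrite("edges.png", edges)                            # save result for inspection
```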
Create an open source machine learning environment quickly with Pachyderm, JupyterHub, and Ceph Nano on Open Data Hub.
CodeReady Containers lets you easily deploy a virtual cluster environment on your local system, with open source AI tools from Open Data Hub.
Once you have data, how do you start building a PyTorch model? This learning path shows you how to create a PyTorch model with OpenShift Data Science.
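Independent of the OpenShift Data Science tooling, the core modeling step looks roughly like the following hedged sketch of defining and training a small PyTorch model; the layer sizes and synthetic data are purely illustrative:

```python
# Hedged sketch: define and train a tiny PyTorch classifier. Architecture and data
# are illustrative stand-ins, not the learning path's model.
import torch
from torch import nn

class SimpleClassifier(nn.Module):
    def __init__(self, in_features=20, hidden=64, classes=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_features, hidden),
            nn.ReLU(),
            nn.Linear(hidden, classes),
        )

    def forward(self, x):
        return self.net(x)

model = SimpleClassifier()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Synthetic batch standing in for real training data.
x = torch.randn(32, 20)
y = torch.randint(0, 2, (32,))

for _ in range(10):                      # a few training steps
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
```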
OpenShift AI gives data scientists and developers a powerful AI/ML platform for building AI-enabled applications. They can collaborate to move quickly from experiment to production in a consistent environment.
Streaming data is key to many modern applications. This tutorial walks you through setting up a data stream using Amazon Kinesis and Node.js.
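The tutorial itself uses Node.js; purely for consistency with the other Python sketches on this page, here is the equivalent "put a record on the stream" step using boto3, where the region and stream name are placeholder assumptions:

```python
# Hedged sketch (Python/boto3 rather than the tutorial's Node.js): write one record
# to a Kinesis data stream. Region and stream name are placeholder assumptions.
import json
import boto3

kinesis = boto3.client("kinesis", region_name="us-east-1")    # assumption: region
kinesis.put_record(
    StreamName="demo-stream",                                  # assumption: stream name
    Data=json.dumps({"event": "page_view", "user": "42"}).encode("utf-8"),
    PartitionKey="user-42",                                    # routes the record to a shard
)
```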
Don't miss a thing! Here's a roundup of new articles, tutorials, and more published this month on Red Hat Developer.
Learn how to set up a Pulp Python repository and publish and consume Python packages using Pulp on the Red Hat Developer Operate First cloud.