AI & Node.js
Create intelligent, efficient, and user-friendly experiences by integrating AI into JavaScript applications
AI and Large Language Models (LLMs) are becoming increasingly important tools for web applications. As a JavaScript/Node.js developer, it's crucial to understand how to integrate them into your projects. While Python is often considered the go-to language for AI and model development, this doesn't mean that all application development will shift to Python. Instead, the tools and languages best suited for each part of an application will continue to be chosen, and these components will be integrated to create the overall solution.
For JavaScript/Node.js developers, this means understanding how to make requests to a running model. Until recently, this typically involved making HTTPS calls to a bespoke service, depending on the AI service or product your company used. However, as the field matures, libraries are emerging that abstract the model endpoints and simplify the implementation of common flows. Notable libraries with JavaScript/TypeScript support include LangChain.js and LlamaIndexTS, and the list is growing rapidly.
One of the key advantages of using these libraries is their ability to easily switch between accessing a model locally, using a cloud service like OpenAI, or using a model hosted by your organization running in OpenShift AI. This flexibility is important because it allows you to change how your application accesses models without being locked into a specific vendor. It also facilitates experimentation while ensuring that deployments use models in a way that protects proprietary information.
Red Hat® OpenShift® AI is a flexible, scalable MLOps platform with tools to build, deploy, and manage AI-enabled applications. Built using open-source technologies, it provides trusted, operationally consistent capabilities for teams to experiment, serve models, and deliver innovative apps.
Red Hat’s integrated hybrid cloud AI/ML platform provides a consistent way to support the end-to-end lifecycle of both ML Models and cloud-native applications in terms of how they are developed, packaged, deployed, and managed.
40 minutes: Intermediate
Take an initial journey using Node.js and LangChain.js to run queries against a model with retrieval-augmented generation (RAG), whether the model is running locally, with OpenAI, or in OpenShift AI.
40 minutes: Intermediate
In this learning path, we dig deeper into using large language models (LLMs) with Node.js by looking at Ollama, LlamaIndex, function calling, agents, and observability with OpenTelemetry.
Explore large language models (LLMs) by trying out the Granite model on Podman AI Lab.
1 hour: Intermediate
In this learning exercise, we will deploy an existing Node.js application based on LangChain.js to OpenShift and demonstrate how easy it is to switch between local, cloud, and OpenShift AI-based model serving.
20 minutes: Intermediate
In this learning exercise, we will use retrieval-augmented generation (RAG) with a Node.js application to optimize an AI application.
40 minutes: Intermediate
Ollama recently announced tool support, and like many popular libraries for using AI and large language models (LLMs), Ollama provides a JavaScript API along with its Python API.