Deploying your LangChain.js Node.js applications to OpenShift AI

Overview: Deploying your LangChain.js Node.js applications to OpenShift AI

This learning exercise deploys an existing Node.js application based on LangChain.js to OpenShift and demonstrates how easy it is to switch between serving models locally, in the cloud, and on OpenShift AI. While we may use local models for experimentation and an inner development loop, production deployments will most often use either a cloud-based service or a model hosted by your enterprise in an environment like OpenShift AI. The good news is that with LangChain.js, only minor tweaks are needed to move your application between any of these approaches.
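To give a feel for how small those tweaks are, here is a minimal sketch of selecting a model based on configuration. It assumes the split @langchain/openai and @langchain/community packages (older LangChain.js releases exposed the same classes under langchain/chat_models/...), and the environment variable names and model names are placeholders, not part of the application we deploy later.

```js
import { ChatOpenAI } from "@langchain/openai";
import { ChatOllama } from "@langchain/community/chat_models/ollama";

// Pick a chat model based on configuration so the rest of the application
// does not need to know where the model is served from.
export function getModel() {
  if (process.env.MODEL_PROVIDER === "ollama") {
    // Local model served by Ollama, handy for the inner development loop.
    return new ChatOllama({
      baseUrl: process.env.OLLAMA_URL ?? "http://localhost:11434",
      model: process.env.MODEL_NAME ?? "mistral",
    });
  }
  // Cloud-hosted OpenAI (or an enterprise-hosted service, as we'll see later).
  return new ChatOpenAI({
    openAIApiKey: process.env.OPENAI_API_KEY,
    modelName: process.env.MODEL_NAME ?? "gpt-3.5-turbo",
  });
}
```

The application code that builds chains and prompts stays the same; only the model construction changes between environments.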

LangChain.js supports a number of backends, and we can find the current list in the LangChain.js GitHub repository (in langchainjs/langchain/src/llms). We are not fully up to speed on what they all are, but there was quite a list when we looked:

[Figure: the langchainjs/langchain/src/llms directory on GitHub, listing more than 20 supported LLMs, including watsonx_ai, ollama, openai, and llama_cpp]

Adding to the options these integrations provide, some LLM runtimes, such as vLLM, aim to expose an OpenAI-compatible endpoint, so we can use the OpenAI API support to connect to models running in those environments as well.
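As a sketch of what that looks like, the snippet below points the LangChain.js OpenAI integration at an OpenAI-compatible server simply by overriding the client's base URL. The endpoint and model name are placeholders for whatever your server exposes.

```js
import { ChatOpenAI } from "@langchain/openai";

// Connect to an OpenAI-compatible endpoint (for example, a vLLM server).
const model = new ChatOpenAI({
  // Many OpenAI-compatible servers ignore the key, but the client requires one.
  openAIApiKey: process.env.OPENAI_API_KEY ?? "EMPTY",
  modelName: "mistralai/Mistral-7B-Instruct-v0.2",
  configuration: {
    baseURL: "http://vllm.example.com:8000/v1",
  },
});
```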

In this learning exercise we will walk through connecting to a model in both OpenAI and OpenShift AI. In both cases we'll use the OpenAI implementation of the model API to create the model.
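As a preview of where we are headed, here is a hedged sketch of the two configurations side by side. The exact endpoint, credentials, and model name for the OpenShift AI case will depend on how the model is served in your cluster; the values below are illustrative only.

```js
import { ChatOpenAI } from "@langchain/openai";

// Hosted OpenAI service: an API key and a model name are all that is needed.
const openAIModel = new ChatOpenAI({
  openAIApiKey: process.env.OPENAI_API_KEY,
  modelName: "gpt-3.5-turbo",
});

// Model served from OpenShift AI: the same class, pointed at the inference
// endpoint exposed by the model server (placeholder URL and model name).
const openShiftAIModel = new ChatOpenAI({
  openAIApiKey: process.env.OPENSHIFT_AI_TOKEN ?? "EMPTY",
  modelName: "mistral-7b-instruct",
  configuration: {
    baseURL: "https://my-model.apps.mycluster.example.com/v1",
  },
});

// From here on, either model can be used in exactly the same way.
console.log((await openAIModel.invoke("Say hello from Node.js")).content);
```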