How to get started with large language models and Node.js

Learn how to access a large language model using Node.js and LangChain.js. You’ll also explore LangChain.js APIs that simplify common requirements like retrieval-augmented generation (RAG).

Overview: How to get started with large language models and Node.js

Artificial intelligence (AI) and large language models (LLMs) are becoming increasingly powerful tools for building web applications. As JavaScript and Node.js developers, it's important to understand how you fit into this growing space. While Python is often thought of as the language for AI and model development, this does not mean that all application development will shift to Python. Instead, the tools and languages that are best suited for each part of an application will continue to win out, and those components will be integrated to form the overall solution.

For JavaScript/Node.js developers, that means you need to understand how to make requests to a running model. Until recently, that would likely have meant making HTTPS calls to a bespoke service, depending on which AI service or product your company was using. As the space matures, the JavaScript community is starting to develop libraries that abstract the model endpoints and add layers that enable common flows to be implemented more easily. One of these libraries is LangChain.

LangChain supports both Python and JavaScript/TypeScript, which is great news for JavaScript developers. In this learning path, we’ll take you through the steps of accessing a model using LangChain.js and JavaScript using Node.js. We will also highlight some of the APIs LangChain.js provides to simplify common requirements like retrieval-augmented generation (RAG).

Prerequisites:

  • A GitHub account.
  • A Git client.
  • Node.js 18.x or later.
  • Optionally, an NVIDIA GPU, the NVIDIA SDK, and C++ compiler for your platform.
  • Optionally, an OpenAI account.

In this learning path, you will:

  • Learn about LangChain.js.
  • Run a simple Node.js application that interacts with an LLM running locally.
  • Explore and run a simple Node.js application that implements retrieval-augmented generation using the content from the Node.js Reference Architecture.
  • Learn how easy it is with LangChain.js to switch between deployed models, whether running locally, in a cloud-based service like OpenAI, or on a Red Hat OpenShift AI instance managed by your organization.

How long will this learning path take?

  • About 60-75 minutes

Info alert: While this learning path is written so that you can follow along and run the examples, doing so is not required. The explanation and walkthrough for each lesson include the output of running the examples, so you can follow along without installing or configuring anything. This might not be quite as fun as running the examples yourself, but you will still learn the core information covered in the learning path.