Dive deeper into large language models and Node.js

Explore how to use large language models (LLMs) with Node.js by looking at Ollama, LlamaIndex, function calling, and agents.

With Ollama and Node.js installed, let’s now bring in LlamaIndex.ts to run a simple program.

Prerequisites:

  • An environment where you can install and run Node.js.
  • An environment where you can install and run Ollama.
  • Git client

In this lesson, you will:

  • Review a simple application built with LlamaIndex.ts.
  • Run the simple application.
  • Compare the output to the earlier application using Langchain.js.

Set up the environment

Change into the lesson-6 directory with: 

cd ../lesson-6

Then, install the dependencies for the example with:

npm install

This will install LlamaIndex.ts and related dependencies.

Run the basic LlamaIndex.ts example

We’ll start by looking at a basic example that uses LlamaIndex to ask an LLM a question, which is in llamaindex-ollama.mjs.

As with the previous example, you will need to change the address of the Ollama endpoint. If you are running Ollama locally, set it to 127.0.0.1; otherwise, provide the IP address of the machine where Ollama is running. The following is the example code:

import {
    Ollama,
    SimpleChatEngine,
} from "llamaindex"

////////////////////////////////
// GET THE MODEL
const llm = new Ollama({
    config: { host: "http://127.0.0.1:11434" },
    model: "mistral", // Default value
});

////////////////////////////////
// CREATE THE ENGINE
const chatEngine = new SimpleChatEngine({ llm });

////////////////////////////////
// ASK QUESTION
const input = 'should I use npm to start a Node.js application';
console.log(new Date());
const response = await chatEngine.chat({
  message: `Answer the following question if you don't know the answer say so: Question: ${input}`
});
console.log(response.response);
console.log(new Date());

In this case, we’ve kept it simple and created the LLM directly instead of using a getModel() function. You can see that the code to get the LLM is similar to the Langchain.js version, and the same parameters (although with different names) are passed:

////////////////////////////////
// GET THE MODEL
const llm = new Ollama({
    config: { host: "http://127.0.0.1:11434" },
    model: "mistral", // Default value
});
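
For comparison, creating the same model with Langchain.js looks roughly like the following. This is a sketch rather than the exact code from the earlier lesson, and the import path depends on which Langchain.js packages you have installed:

import { ChatOllama } from "@langchain/community/chat_models/ollama"

////////////////////////////////
// GET THE MODEL (Langchain.js naming: baseUrl instead of config.host)
const model = new ChatOllama({
    baseUrl: "http://127.0.0.1:11434",
    model: "mistral",
});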

We’re using the simplest LlamaIndex class available, SimpleChatEngine, to ask questions. So we need to substitute the question into the prompt ourselves as follows:

const response = await chatEngine.chat({
  message: `Answer the following question if you don't know the answer say so: Question: ${input}`
});
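
For comparison, Langchain.js can handle this kind of substitution with a prompt template. A minimal sketch, using @langchain/core, which is not part of the lesson-6 dependencies:

import { PromptTemplate } from "@langchain/core/prompts";

// The template declares a {question} placeholder and format() fills it in.
const prompt = PromptTemplate.fromTemplate(
    "Answer the following question if you don't know the answer say so: Question: {question}",
);
const response = await chatEngine.chat({
    message: await prompt.format({ question: input }),
});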

Now run the program with the following command:

node llamaindex-ollama.mjs

You should see an output like this:

2024-06-10T21:08:46.869Z
 Yes, it is common to use npm (Node Package Manager) when starting a Node.js application. npm provides access to a vast number of libraries and tools that can help streamline development, testing, and deployment processes for Node.js projects. You can initialize your project with `npm init`, install dependencies with `npm install`, and start the application using the appropriate command (e.g., `node app.js` or another command based on your project's package.json file).
There are alternatives to npm, such as Yarn and pnpm, but they are also compatible with Node.js projects and offer similar functionalities. Ultimately, the choice between these tools depends on your specific needs and preferences.
2024-06-10T21:08:50.289Z

Answers from the LLM vary even when you ask the exact same question, but running the program a number of times will show that the answers from the LlamaIndex version look similar to those from the Langchain.js version. This is expected, since both are fronting the same LLM running under Ollama.
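
If you want more repeatable answers while you compare, one option is to lower the model temperature. A minimal sketch, assuming the installed llamaindex version passes an options object through to Ollama's model parameters (the exact shape can vary between versions, so check the documentation for the version you have):

const llm = new Ollama({
    config: { host: "http://127.0.0.1:11434" },
    model: "mistral",
    // Assumed pass-through to Ollama's model parameters; a lower temperature
    // makes the output less random.
    options: { temperature: 0 },
});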

As with Langchain.js, LlamaIndex supports a number of options for how the LLM is hosted, so we should be able to switch between them relatively easily.
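
For example, switching from the local Ollama instance to a hosted model should only require changing how the llm object is created, with the rest of the program left as-is. A minimal sketch, assuming the llamaindex package exports an OpenAI class (recent versions do) and that the OPENAI_API_KEY environment variable is set:

import { OpenAI, SimpleChatEngine } from "llamaindex"

////////////////////////////////
// GET THE MODEL (hosted instead of local)
const llm = new OpenAI({ model: "gpt-3.5-turbo" });

////////////////////////////////
// CREATE THE ENGINE (unchanged)
const chatEngine = new SimpleChatEngine({ llm });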

Also similar to Langchain.js, LlamaIndex provides APIs that make it easier to do retrieval-augmented generation (RAG) and incorporate your data into queries. There is a short example of this in the main README.md. Since this is not the focus of this learning path, we’ll leave it to you to explore further if you are interested.
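
To give you a feel for it, that README example is roughly along the following lines: it builds an in-memory vector index over a local text file and queries it. Treat this as a sketch; ./data.txt is a placeholder, and the default embedding model is OpenAI's, so you would either need an API key or to configure LlamaIndex.ts to use a local embedding model.

import fs from "node:fs/promises";
import { Document, VectorStoreIndex } from "llamaindex";

////////////////////////////////
// LOAD THE DATA (./data.txt is a placeholder file)
const text = await fs.readFile("./data.txt", "utf-8");
const document = new Document({ text });

////////////////////////////////
// INDEX AND QUERY
const index = await VectorStoreIndex.fromDocuments([document]);
const queryEngine = index.asQueryEngine();
const response = await queryEngine.query({
    query: "What does the document say about Node.js?",
});
console.log(response.toString());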

Now that we’ve run a simple program built with LlamaIndex.ts and Node.js, next we will dive into a more complex application that provides functions for the LLM to call when needed to better answer questions.
