(This tutorial is intentionally oversimplified and overly instructive with the objective of helping make AI more approachable. Examples are from the LangChain library with more "hand holding" for those who need help getting started)
Pre-requisites
Node.js
This tutorial requires that you have Node.js installed. Node.js is the JavaScript runtime used to run JavaScript applications. If you do not have Node.js installed, install Node.js from the Node.js installation site here
Install the NPM package and project manager
If you do not have NPM already installed, you can install it from here. (Recent Node.js installers bundle NPM, so you may already have it.)
Setup
Create the tutorial project folder
$ mkdir langchainjs-tutorial-1
$ cd langchainjs-tutorial-1
Initialize your project as an npm project and create the package.json project configuration file
(The es6 flag tells npm to create a package.json with the "type": "module" setting. The -y flag tells npm to accept the defaults)
$ npm init es6 -y
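For reference, the generated package.json should look roughly like the sketch below (the exact "name" and "version" values depend on your folder and npm defaults; the part that matters for this tutorial is "type": "module")

{
  "name": "langchainjs-tutorial-1",
  "version": "1.0.0",
  "type": "module"
}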
Install the TypeScript package, which provides the compiler used to turn TypeScript code into JavaScript that Node can run
$ npm install typescript
Install ts-node as a development dependency so that TypeScript code can be run directly with Node commands
$ npm install -D ts-node
Add the type definitions for the Node APIs so TypeScript can type-check Node code
$ npm install @types/node
Create the tsconfig.json that contains the information required to compile TypeScript to JavaScript
$ npx tsc --init --rootDir src --outDir ./dist --esModuleInterop --lib ES2020 --target ES2020 --module nodenext --noImplicitAny true
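For reference, the relevant compilerOptions in the generated tsconfig.json should look roughly like the sketch below (tsc --init also emits many commented-out defaults, which are omitted here)

{
  "compilerOptions": {
    "rootDir": "src",
    "outDir": "./dist",
    "esModuleInterop": true,
    "lib": ["ES2020"],
    "target": "ES2020",
    "module": "nodenext",
    "noImplicitAny": true
  }
}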
Add build and run scripts to package.json. Replace the "scripts" entry with the JSON below. If you do not have a "scripts" entry in your package.json, place the JSON below above "dependencies" (or "devDependencies")
"scripts": { "build": "tsc", "start": "node ./dist/app.js", "dev": "ts-node --esm ./src/app.ts" },
Create an app.ts source file in the src folder. Add code to it to display a test string.
(if the "echo" command does not work for you, simply create app.ts in the src folder and cut and paste the "console.log..." statement)
$ mkdir src
$ echo "console.log('Welcome to the LangChain.js tutorial by LangChainers.')" > src/app.ts
Perform a test to verify that the tutorial project can build and run using npm
$ npm run build
$ npm run start
You can also use
$ npm run dev
for a single command that compiles and runs ./src/app.ts in one step (ts-node runs the TypeScript directly, without writing to dist). If everything works, you are ready for some LangChain.js. LET'S GO!!
Installation
Install the LangChain.js package
$ npm install langchain
Install the OpenAI package
$ npm install openai
Install the dotenv package for managing environment variables including your OpenAI API key
$ npm install dotenv
Create a file named .env in the project directory and add your OpenAI API Key to the file as shown below
(if you do not have an OpenAI API Key, go to your OpenAI API Key page, then cut and paste the OpenAI API Key below)
OPENAI_API_KEY=<remove the < and > and put your OpenAI API key here>
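(Optional) Before wiring up LangChain, you can verify that the key loads with a quick sketch like the one below. Temporarily replace the contents of src/app.ts and run npm run dev. Never log the key itself

//Load environment variables (populate process.env from .env file)
import * as dotenv from "dotenv";
dotenv.config();
//Prints true once OPENAI_API_KEY is set in .env
console.log(Boolean(process.env.OPENAI_API_KEY));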
Now let's walk through some LangChain.js modules
LangChain Modules
LLMs
LLMs (Large Language Models) are the core of LangChain functionality. The LLM module is simply a wrapper around different LLMs. This wrapper makes it simple for LangChain.js developers to communicate with different LLMs using a single interface, without having to worry about the differences between the individual models.
Copy the following code into src/app.ts
//Import the OpenAI Large Language Model (you can import other models here eg. Cohere)
import { OpenAI } from "langchain/llms";
//Load environment variables (populate process.env from .env file)
import * as dotenv from "dotenv";
dotenv.config();

export const run = async () => {
  //Instantiate the OpenAI model
  //Pass the "temperature" parameter, which controls the RANDOMNESS of the model's output. A lower temperature will result in more predictable output, while a higher temperature will result in more random output. The temperature parameter is set between 0 and 1, with 0 being the most predictable and 1 being the most random
  const model = new OpenAI({ temperature: 0.9 });
  //Call out to the model's (OpenAI's) endpoint, passing the prompt. This call returns a string
  const res = await model.call(
    "What would be a good company name for a company that makes colorful socks?"
  );
  console.log({ res });
};

run();
Execute the code with the following command
$ npm run dev
Result of the execution
(NOTE: The result you will get may differ from what you see below. It is, after all, AI)
{ res: '\n\nSocktastic!' }
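Because the LLM module exposes a single interface, swapping providers is mostly a one-line change. The sketch below is a hypothetical variation, not part of this tutorial's setup; it assumes you have installed Cohere's client package and added a COHERE_API_KEY to your .env file

//Hypothetical swap: import the Cohere wrapper instead of OpenAI
import { Cohere } from "langchain/llms";
//The rest of the code stays the same; only the model class changes
const model = new Cohere({ temperature: 0.9 });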
We will cover LLMs in more detail in future tutorials. You can find API documentation for LLMs here
Prompt Template
A prompt template is an object that is responsible for constructing the final prompt to pass to an LLM. The same object can be reused with different data: it combines a fixed template string (the input to the language model) with user-supplied input values.
Copy the following code into src/app.ts
//Import the PromptTemplate module
import { PromptTemplate } from "langchain/prompts";

export const run = async () => {
  //Create the template. The template is actually a "parameterized prompt". A "parameterized prompt" is a prompt in which the input parameter names are used and the parameter values are supplied from external input
  const template = "What is a good name for a company that makes {product}?";
  //Instantiate "PromptTemplate" passing the prompt template string initialized above and a list of variable names the final prompt template will expect
  const prompt = new PromptTemplate({ template, inputVariables: ["product"] });
  //Create a new prompt from the combination of the template and input variables. Pass a value for the variable name that was sent in the "inputVariables" list passed to the "PromptTemplate" initialization call
  const res = await prompt.format({ product: "colorful socks" });
  console.log({ res });
};

run();
Execute the code with the following command
$ npm run dev
Result of the execution
The result shows that {product} in the "template" string has been replaced by the value "colorful socks" that was passed on the "prompt.format(..)" call.
{ res: 'What is a good name for a company that makes colorful socks?' }
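To see the reuse described above, you can format the same PromptTemplate object with different values (a small sketch continuing the code above; "artisanal coffee" is just an illustrative input)

//The same template object, reused with different input data
const res2 = await prompt.format({ product: "artisanal coffee" });
console.log({ res2 });
//{ res2: 'What is a good name for a company that makes artisanal coffee?' }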
We will cover Prompts and Prompt Templates in more detail in future tutorials. You can find API documentation for Prompt Templates here
Chains
Chains enable LangChain.js developers to combine multiple components together to create a functional application.
The following code shows an example of a chain that takes user input, formats it using a PromptTemplate, and then passes the formatted prompt to an LLM.
Copy the following code into src/app.ts
//Import the OpenAI Large Language Model (you can import other models here eg. Cohere)
import { OpenAI } from "langchain/llms";
//Import the PromptTemplate module
import { PromptTemplate } from "langchain/prompts";
//Import the Chains module
import { LLMChain } from "langchain/chains";
//Load environment variables (populate process.env from .env file)
import * as dotenv from "dotenv";
dotenv.config();

export const run = async () => {
  //Instantiate the OpenAI model
  //Pass the "temperature" parameter, which controls the RANDOMNESS of the model's output. A lower temperature will result in more predictable output, while a higher temperature will result in more random output. The temperature parameter is set between 0 and 1, with 0 being the most predictable and 1 being the most random
  const model = new OpenAI({ temperature: 0.9 });
  //Create the template. The template is actually a "parameterized prompt". A "parameterized prompt" is a prompt in which the input parameter names are used and the parameter values are supplied from external input
  const template = "What is a good name for a company that makes {product}?";
  //Instantiate "PromptTemplate" passing the prompt template string initialized above and a list of variable names the final prompt template will expect
  const prompt = new PromptTemplate({ template, inputVariables: ["product"] });
  //Instantiate LLMChain, which consists of a PromptTemplate and an LLM. Pass the PromptTemplate and the OpenAI LLM model
  const chain = new LLMChain({ llm: model, prompt });
  //Run the chain. Pass a value for the variable name that was sent in the "inputVariables" list passed to the "PromptTemplate" initialization call
  const res = await chain.call({ product: "colorful socks" });
  console.log({ res });
};

run();
Execute the code with the following command
$ npm run dev
Result of the execution
{ res: { text: '\n\nSplash-a-Sox!' } }
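Because the chain bundles the template and the model, it can be called repeatedly with different inputs (a small sketch continuing the code above; the model's answer will vary)

//The same chain, reused with different input data
const res2 = await chain.call({ product: "artisanal coffee" });
console.log({ res2 });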
We will cover Chains in more detail in future tutorials. You can find API documentation for Chains here
Agents
"Some applications will require not just a predetermined chain of calls to LLMs/other tools, but potentially an unknown chain that depends on the user input. In these types of chains, there is an “agent” which has access to a suite of tools. Depending on the user input, the agent can then decide which, if any, of these tools to call." - LangChain
Unlike "Chains" that are executed in a predetermined order, "Agents" use an LLM to determine the actions to take and the order in which the actions are executed.
In the following example, we will create an agent that uses a search tool and a calculator tool.
Install Search Tool
Install the SerpApi package. SerpApi is a real-time API to access Google search results
$ npm install serpapi
Add your SerpApi API Key to the .env file as shown below
(if you do not have a SerpApi API Key, go to your SerpApi API Key page, then cut and paste the SerpApi API Key below)
SERPAPI_API_KEY=<remove the < and > and put your SerpApi API key here>
Copy the following code into src/app.ts
//Import the OpenAI Large Language Model (you can import other models here eg. Cohere)
import { OpenAI } from "langchain/llms";
//Import the agent executor module
import { initializeAgentExecutor } from "langchain/agents";
//Import the SerpAPI and Calculator tools
import { SerpAPI, Calculator } from "langchain/tools";
//Load environment variables (populate process.env from .env file)
import * as dotenv from "dotenv";
dotenv.config();

export const run = async () => {
  //Instantiate the OpenAI model
  //Pass the "temperature" parameter, which controls the RANDOMNESS of the model's output. A lower temperature will result in more predictable output, while a higher temperature will result in more random output. The temperature parameter is set between 0 and 1, with 0 being the most predictable and 1 being the most random
  const model = new OpenAI({ temperature: 0 });
  //Create a list of the instantiated tools
  const tools = [new SerpAPI(), new Calculator()];
  //Construct an agent from an LLM and a list of tools
  //"zero-shot-react-description" tells the agent to use the ReAct framework to determine which tool to use. The ReAct framework determines which tool to use based solely on the tool's description. Any number of tools can be provided. This agent requires that a description is provided for each tool.
  const executor = await initializeAgentExecutor(
    tools,
    model,
    "zero-shot-react-description"
  );
  console.log("Loaded agent.");
  //Specify the prompt
  const input =
    "Who is Beyonce's husband?" +
    " What is his current age raised to the 0.23 power?";
  console.log(`Executing with input "${input}"...`);
  //Run the agent
  const result = await executor.call({ input });
  console.log(`Got output ${result.output}`);
};

run();
Execute the code with the following command
$ npm run dev
Result of the execution
Loaded agent.
Executing with input "Who is Beyonce's husband? What is his current age raised to the 0.23 power?"...
Got output Jay-Z is 53 years old and his age raised to the 0.23 power is 2.4922032039503494.
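You can sanity-check the Calculator tool's arithmetic directly; plain JavaScript gives the same number the agent reported

//53 raised to the 0.23 power, computed directly
console.log(Math.pow(53, 0.23)); //2.4922032039503494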
We will cover Agents in more detail in future tutorials. You can find API documentation for Agents here
Memory
Chains and agents are stateless by default. Memory persists state between calls of a chain or agent, thereby enabling true conversations. Practical conversations are those that can recall previous exchanges or elements of them (eg. facts given in an earlier part of the conversation). This is useful for chatbots.
The following code will show an example of memory.
Copy the following code into src/app.ts
//Import the OpenAI Large Language Model (you can import other models here eg. Cohere)
import { OpenAI } from "langchain/llms";
//Import the BufferMemory module
import { BufferMemory } from "langchain/memory";
//Import the Chains module
import { LLMChain } from "langchain/chains";
//Import the PromptTemplate module
import { PromptTemplate } from "langchain/prompts";
//Load environment variables (populate process.env from .env file)
import * as dotenv from "dotenv";
dotenv.config();

export const run = async () => {
  //Instantiate the BufferMemory passing the memory key for storing state
  const memory = new BufferMemory({ memoryKey: "chat_history" });
  //Instantiate the OpenAI model
  //Pass the "temperature" parameter, which controls the RANDOMNESS of the model's output. A lower temperature will result in more predictable output, while a higher temperature will result in more random output. The temperature parameter is set between 0 and 1, with 0 being the most predictable and 1 being the most random
  const model = new OpenAI({ temperature: 0.9 });
  //Create the template. The template is actually a "parameterized prompt". A "parameterized prompt" is a prompt in which the input parameter names are used and the parameter values are supplied from external input
  //Note the input variables {chat_history} and {input}
  const template = `The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.

Current conversation:
{chat_history}
Human: {input}
AI:`;
  //Instantiate "PromptTemplate" passing the prompt template string initialized above
  const prompt = PromptTemplate.fromTemplate(template);
  //Instantiate LLMChain, which consists of a PromptTemplate, an LLM and memory
  const chain = new LLMChain({ llm: model, prompt, memory });
  //Run the chain passing a value for the {input} variable. The result will be stored in {chat_history}
  const res1 = await chain.call({ input: "Hi! I'm Morpheus." });
  console.log({ res1 });
  //Run the chain again passing a value for the {input} variable. This time, the response from the last run ie. the value in {chat_history} will also be passed as part of the prompt
  const res2 = await chain.call({ input: "What's my name?" });
  console.log({ res2 });
  //BONUS!!
  const res3 = await chain.call({
    input: "Which epic movie was I in and who was my protege?",
  });
  console.log({ res3 });
};

run();
Execute the code with the following command
$ npm run dev
Result of the execution
(The response you will get may be slightly different in wording from what you see below)
{ res1: { text: " Hi Morpheus! It's nice to meet you. I'm an AI created to answer your questions. What can I do for you today?" } }
{ res2: { text: ' Your name is Morpheus. Is there anything else I can help you with?' } }
{ res3: { text: 'You were in The Matrix, and your protege was Neo. Would you like to talk about something else?' } }
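If you are curious what the memory actually accumulated, you can dump it at the end of run() with a sketch like the one below (this assumes BufferMemory exposes the memory interface's loadMemoryVariables method, as it did at the time of writing)

//Peek at the stored conversation history under the "chat_history" memory key
const vars = await memory.loadMemoryVariables({});
console.log(vars.chat_history);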
We will cover Memory in more detail in future tutorials. You can find API documentation for Memory here
We hope this Step-By-Step Tutorial was helpful in getting you started with LangChain.js. We will be digging deeper into the individual modules and use cases in upcoming tutorials. We can't wait. We love LangChain. STAY TUNED!!