What is LangChain?
LangChain oversimplified for newbies.
Our objective is to simplify LangChain for LLM application development newbies.
Before we get started explaining LangChain, it helps to start by understanding the whole LLM (Large Language Model) picture so that we can understand where LangChain fits in the picture and why it’s needed.
The LLM application stack, at present, consists of four distinct layers:
LLM Layer 1: Model Layer
The layer that contains the LLMs themselves, e.g., GPT-3.
LLM Layer 2: Model API Layer
The layer that contains the APIs used to access the LLMs, e.g., the OpenAI API.
LLM Layer 3: Infrastructure Layer
The layer that contains the APIs, components, and frameworks that enable the logical orchestration of LLM inputs and outputs into applications, e.g., LangChain.
LLM Layer 4: Application Layer
The layer that contains the applications that use LLMs, e.g., chatbots.
LangChain is a framework at the infrastructure layer. This layer lets application developers compose LLM applications: applications that combine LLM output with data from other sources to meet real-world needs.
For example, ChatGPT's training data ends in 2021, so it has no access to current 2023 information such as today's temperature. If you ask ChatGPT for a recommendation of what to wear based on today's weather, it will respond with:
Prompt: Based on the weather forecast for Denver, Colorado today, what do you recommend I wear today?
ChatGPT Response: I'm sorry, but as an AI language model, I do not have access to real-time weather information.
To solve this problem, we need a way to get real-time weather information and pass it to ChatGPT when we ask for a recommendation for what to wear. This is where LangChain comes in. LangChain provides a way for us to compose a two-step process that 1) accesses real-time weather data from a third-party component and 2) uses the weather data to create a prompt to ask for a clothing recommendation from ChatGPT. The new prompt will yield a more meaningful response like this:
Prompt: Today's weather forecast for Denver, Colorado is a high of 15 degrees Fahrenheit and a low of -6 degrees Fahrenheit.
Based on the weather forecast for Denver, Colorado today, what do you recommend I wear today?
ChatGPT Response: Based on the weather forecast for Denver, Colorado today, I recommend wearing warm and layered clothing to stay comfortable in the cold temperatures. Here are some specific clothing items to consider:
A heavy winter coat or parka to keep you warm in the cold temperatures.
Thermal or wool socks to keep your feet warm.
Insulated gloves or mittens to protect your hands from the cold.
A warm hat or beanie to keep your head and ears warm.
Thermal or long underwear to keep your core warm.
Scarf or neck warmer to protect your neck from the cold wind.
It's also important to make sure that your clothing is made of materials that can withstand the cold temperatures and that you are wearing appropriate footwear for icy conditions. Stay safe and warm!
This ability to compose prompts by orchestrating the sequence of data inputs from different sources is called "chaining". In this case, we "chained" real-time weather data with our prompt for a recommendation for what to wear.
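The two-step chain described above can be sketched in plain TypeScript. This is an illustration of the chaining idea, not LangChain's actual API: getWeather is a stand-in for a real weather service, and the LLM is passed in as a function so a real ChatGPT call could be swapped in later.

```typescript
// Stand-in function types for the two participants in the chain.
type WeatherFn = (city: string) => Promise<string>;
type LLMFn = (prompt: string) => Promise<string>;

// Step 1: fetch real-time data (stubbed here with fixed values; a real
// implementation would call a weather API).
const getWeather: WeatherFn = async (city) =>
  `Today's weather forecast for ${city} is a high of 15 degrees Fahrenheit ` +
  `and a low of -6 degrees Fahrenheit.`;

// Step 2: inject that data into the prompt, then ask the LLM.
async function clothingChain(city: string, llm: LLMFn): Promise<string> {
  const weather = await getWeather(city);
  const prompt =
    `${weather}\nBased on the weather forecast for ${city} today, ` +
    `what do you recommend I wear today?`;
  return llm(prompt);
}
```

The key design point is that the output of step 1 becomes part of the input to step 2; that data flow is what makes it a "chain".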
For LLMs to take part in solving real-world problems, we need the ability to compose applications that integrate with other sources of information in both simple and complex workflows. LangChain is a framework that makes it possible for us to put LLMs to use to solve real-world problems at scale.
The following are the feature areas of the LangChain framework:
LLMs
A single generic interface to all the different LLMs. You don't have to learn the different interfaces of the different LLMs; just use the LangChain interface.
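Here is a sketch of the idea behind a single generic interface, in plain TypeScript (illustrative only; the class and method names are made up, not LangChain's): each provider is adapted to one call shape, so application code never touches provider-specific details.

```typescript
// One common interface that every LLM provider is adapted to.
interface LLM {
  generate(prompt: string): Promise<string>;
}

// Hypothetical adapters; real ones would wrap the OpenAI API, etc.
class FakeOpenAI implements LLM {
  async generate(prompt: string) { return `openai:${prompt}`; }
}
class FakeCohere implements LLM {
  async generate(prompt: string) { return `cohere:${prompt}`; }
}

// Application code depends only on the LLM interface, so swapping
// providers requires no changes here.
async function summarize(llm: LLM, text: string): Promise<string> {
  return llm.generate(`Summarize: ${text}`);
}
```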
Prompts
Prompt creation, management, and optimization. Prompt templates will change your life.
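To make the prompt-template idea concrete, here is a minimal sketch in plain TypeScript (not LangChain's PromptTemplate): placeholders written in {braces} are filled in from a values map, and a missing value is an error rather than a silently broken prompt.

```typescript
// Fill {placeholders} in a template string from a map of values.
function formatPrompt(template: string, values: Record<string, string>): string {
  return template.replace(/\{(\w+)\}/g, (_match: string, key: string) => {
    if (!(key in values)) throw new Error(`Missing value for {${key}}`);
    return values[key];
  });
}

// The template is written once; only the values change per request.
const template =
  "Based on the weather forecast for {city} today, what do you recommend I wear today?";
```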
Chains
A standard interface for the components that take part in chaining, along with chain sequencing/orchestration and chain execution.
Data Augmented Generation
Used when you want the LLM to incorporate external data in the generation of its response. Very useful when you want data from a company database, summarized text, or Q&A data sent to the LLM for use in generating its response.
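A toy sketch of the pattern, in plain TypeScript (illustrative; real systems use embeddings and vector stores rather than the naive word-overlap retrieval shown here, and the documents are made up): pick the most relevant piece of external data, then stuff it into the prompt so the LLM can ground its answer in it.

```typescript
// A tiny stand-in for a company knowledge base.
const docs = [
  "Our return policy allows returns within 30 days of purchase.",
  "Support hours are 9am to 5pm Mountain Time, Monday through Friday.",
];

// Naive retrieval: score each doc by how many words it shares with
// the question, and return the best-scoring one.
function retrieve(question: string): string {
  const qWords = new Set(question.toLowerCase().split(/\W+/));
  let best = docs[0];
  let bestScore = -1;
  for (const doc of docs) {
    const score = doc.toLowerCase().split(/\W+/)
      .filter((w) => w && qWords.has(w)).length;
    if (score > bestScore) { best = doc; bestScore = score; }
  }
  return best;
}

// The retrieved data is placed into the prompt as context for the LLM.
function augmentedPrompt(question: string): string {
  return `Use this context to answer:\n${retrieve(question)}\n\nQuestion: ${question}`;
}
```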
Agents
One of the coolest features of LangChain. The agent concept uses the LLM to orchestrate the sequence of calls to the components (called "tools") that complete a task. The LLM tells us the order in which to use the different components to arrive at a result. Way too cool!!
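The agent loop can be sketched in plain TypeScript (illustrative, not LangChain's Agent API; the tool names and the planner are made up). The planner stands in for the LLM: it looks at what has happened so far and decides the next "tool:input" step, or says FINISH when done.

```typescript
type Tool = (input: string) => string;

// Hypothetical tools the agent can call.
const tools: Record<string, Tool> = {
  weather: (city) => `High of 15F in ${city}`,
  search: (q) => `Top result for "${q}"`,
};

// Stand-in for the LLM: decides the next step from the scratchpad of
// results so far. A real agent would prompt the LLM for this decision.
function fakePlanner(scratchpad: string[]): string {
  return scratchpad.length === 0 ? "weather:Denver" : "FINISH";
}

function runAgent(planner: (steps: string[]) => string): string[] {
  const scratchpad: string[] = [];
  for (let step = 0; step < 5; step++) {      // cap the loop for safety
    const decision = planner(scratchpad);
    if (decision === "FINISH") break;
    const [name, input] = decision.split(":");
    scratchpad.push(tools[name](input));      // run the chosen tool
  }
  return scratchpad;
}
```

The point is that control flow itself comes from the model: the loop has no fixed sequence of steps, only whatever the planner chooses next.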
Memory
A standard interface for the LLM to "remember" context from one prompt to another. The LLM can recall what was asked two prompts ago, for example.
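A minimal sketch of how memory works under the hood, in plain TypeScript (illustrative, not LangChain's Memory API): past turns are stored and prepended to each new prompt, which is what lets the model "see" earlier questions.

```typescript
// Stores conversation turns and builds prompts that include them.
class ConversationMemory {
  private turns: string[] = [];

  remember(role: "Human" | "AI", text: string): void {
    this.turns.push(`${role}: ${text}`);
  }

  // The new question is sent along with the whole history, so the LLM
  // can refer back to anything said earlier.
  buildPrompt(question: string): string {
    return [...this.turns, `Human: ${question}`, "AI:"].join("\n");
  }
}
```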
Evaluation
Think of this as a "test suite" for LLMs. We want to know whether the LLM is providing correct information. The interesting thing here is that LangChain uses LLMs themselves to perform the evaluation.
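The shape of such a test suite can be sketched in plain TypeScript (illustrative; the grader here is stubbed with a string check standing in for a real LLM-as-judge call): run each question through the application, ask a grader whether the answer matches the expected one, and report the fraction judged correct.

```typescript
// A grader judges whether an actual answer matches the expected one.
// In LangChain-style evaluation this role is played by an LLM.
type Grader = (question: string, expected: string, actual: string) => boolean;

// Stub grader: a simple substring check stands in for an LLM judge.
const fakeGrader: Grader = (_q, expected, actual) =>
  actual.toLowerCase().includes(expected.toLowerCase());

function evaluate(
  cases: { question: string; expected: string }[],
  answer: (q: string) => string,
  grade: Grader,
): number {
  let correct = 0;
  for (const c of cases) {
    if (grade(c.question, c.expected, answer(c.question))) correct++;
  }
  return correct / cases.length;  // fraction of answers judged correct
}
```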
Now that we have simplified LangChain for you, the next step is for you to dig into the LangChain documentation located here.
Next up, you are welcome to learn LangChain with us. We will be delivering simplified LangChain tutorials using LangChainJS.