LangChain Documentation
URL: https://python.langchain.com/docs
At a glance: v1.0 stable release (Oct 2025) · LangGraph production agent framework · 50+ LLM providers supported · open source (MIT license)
What is this resource?
LangChain is an open-source framework designed to make building applications with large language models faster, more modular, and more maintainable. In October 2025, the team released LangChain v1.0 and LangGraph v1.0 simultaneously — signaling production stability after years of rapid iteration. LangChain handles the component layer (prompt templates, model integrations, output parsers, retrievers), while LangGraph handles the orchestration layer for stateful, multi-step agent workflows. As of April 2026, LangGraph is at version 1.1.8 and is the recommended way to build production agents — the older "Agents" abstraction in LangChain itself now delegates to LangGraph under the hood.
LangChain matters because almost every production AI application needs features that go beyond a single API call: persistent conversation memory, the ability to query a database or search the web, structured output parsing, and routing logic that decides which model or tool to use based on the input. Building these from scratch with raw API calls is tedious and error-prone. LangChain provides tested, reusable implementations of all of them, backed by extensive documentation and a large community. The ecosystem also includes LangSmith — a hosted observability platform for logging, tracing, and evaluating LLM calls in production — which integrates directly with both LangChain and LangGraph.
What's in it?
The documentation is organized around LangChain's core abstractions. The Chat Models section covers a unified interface to 50+ LLM providers (OpenAI, Anthropic, Hugging Face, Google, Mistral, and more) through the same Python class structure. Switching providers is a one-line change, so your application logic stays provider-agnostic, as the sketch below shows. The Prompt Templates section shows how to define prompts with variable placeholders that are filled at runtime, making prompts reusable, testable, and easy to version-control.
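To make provider-agnosticism concrete, here is a minimal sketch (model names are illustrative placeholders; it assumes the langchain-openai and langchain-anthropic packages are installed):
from langchain_openai import ChatOpenAI
from langchain_anthropic import ChatAnthropic

# Both classes expose the same interface, so swapping providers is one line:
llm = ChatOpenAI(model="gpt-4o-mini")
# llm = ChatAnthropic(model="claude-3-5-sonnet-latest")  # drop-in replacement

# .invoke() works identically regardless of the backing provider
print(llm.invoke("Summarize LCEL in one sentence.").content)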
Chains and LCEL is where LangChain's power shows most clearly. Using the pipe operator (|), you compose a prompt, model, and parser into a single executable chain: chain = prompt | llm | parser. This is the core pattern for simple, linear workflows. For anything more complex (loops, branching, tool use, human-in-the-loop) you use LangGraph. LangGraph models a workflow as a directed graph where each node is a function and edges, which can be fixed or conditional, determine what runs next; a minimal sketch follows below. It has native support for node caching (skip re-running expensive nodes when inputs haven't changed), deferred nodes (delay a node until all other in-flight branches finish, useful for map-reduce patterns), and pre/post hooks for observability. Agents in LangChain are now built on LangGraph internally.
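A minimal sketch of that graph model, assuming the langgraph package is installed (the node names and routing rule are illustrative, not from the docs):
from typing import TypedDict
from langgraph.graph import StateGraph, START, END

class State(TypedDict):
    question: str
    answer: str

def draft(state: State) -> dict:
    # In a real app this node would call an LLM; here it just stubs an answer.
    return {"answer": f"Draft answer to: {state['question']}"}

def needs_revision(state: State) -> str:
    # Conditional edge: route based on the current state.
    return "revise" if len(state["answer"]) < 10 else END

def revise(state: State) -> dict:
    return {"answer": state["answer"] + " (revised)"}

builder = StateGraph(State)
builder.add_node("draft", draft)
builder.add_node("revise", revise)
builder.add_edge(START, "draft")
builder.add_conditional_edges("draft", needs_revision)
builder.add_edge("revise", END)

graph = builder.compile()
print(graph.invoke({"question": "What is LCEL?", "answer": ""}))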
The Retrieval-Augmented Generation (RAG) section is one of the most practically valuable parts of the documentation. RAG is the architecture behind "chat with your documents" applications: split a document into chunks, embed the chunks as vectors, store them in a vector database (Chroma, Pinecone, FAISS), and retrieve the most relevant chunks at query time to include in the prompt. LangChain has built-in support for dozens of document loaders (PDF, HTML, CSV, code files), text splitters, embedding models, and vector stores; a sketch of the indexing-and-retrieval flow follows below. LangSmith is the companion observability tool: it logs every LLM call, input, output, and latency in your LangChain app, making it far easier to debug and improve prompts in production than reading raw API logs.
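A sketch of that flow, assuming langchain-community, pypdf, and faiss-cpu are installed (the file path and query are placeholders):
from langchain_community.document_loaders import PyPDFLoader
from langchain_text_splitters import RecursiveCharacterTextSplitter
from langchain_openai import OpenAIEmbeddings
from langchain_community.vectorstores import FAISS

docs = PyPDFLoader("report.pdf").load()                         # 1. load
chunks = RecursiveCharacterTextSplitter(
    chunk_size=1000, chunk_overlap=100
).split_documents(docs)                                         # 2. split
vectorstore = FAISS.from_documents(chunks, OpenAIEmbeddings())  # 3. embed + store
retriever = vectorstore.as_retriever(search_kwargs={"k": 4})

# 4. retrieve relevant chunks at query time to include in the prompt
for doc in retriever.invoke("What were Q3 revenues?"):
    print(doc.page_content[:80])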
How is it relevant to your purpose?
Once you're comfortable making basic API calls to an LLM, LangChain becomes the natural next step for building anything more complex. Consider a common scenario: you're building a chatbot that needs to answer questions based on a PDF document the user uploads, remember context across a multi-turn conversation, and occasionally look up current information from the web. Implementing all of that from scratch with raw API calls requires hundreds of lines of custom code and careful handling of edge cases. With LangChain, each of those features is a few lines using built-in components.
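As a hedged sketch of the memory piece, LangGraph's prebuilt ReAct agent plus an in-memory checkpointer gives multi-turn recall in a few lines (the model name is a placeholder; tools are left empty where a retriever or web-search tool would go):
from langgraph.prebuilt import create_react_agent
from langgraph.checkpoint.memory import MemorySaver
from langchain_openai import ChatOpenAI

agent = create_react_agent(
    ChatOpenAI(model="gpt-4o-mini"),   # placeholder model name
    tools=[],                          # add retrieval / web-search tools here
    checkpointer=MemorySaver(),        # persists conversation state per thread
)
config = {"configurable": {"thread_id": "user-42"}}
agent.invoke({"messages": [("human", "My name is Ada.")]}, config)
out = agent.invoke({"messages": [("human", "What's my name?")]}, config)
print(out["messages"][-1].content)     # remembers earlier turns in the thread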
LangChain also has significant professional relevance. It's one of the most popular AI frameworks in the industry, appearing frequently in job descriptions for AI engineering and product development roles. Knowing the framework — not just the raw APIs it wraps — demonstrates a level of applied AI engineering sophistication that matters when presenting projects to employers or in interviews. Even if you ultimately build production applications on raw API calls for performance reasons, understanding how LangChain works gives you a clear mental model for the architecture of AI applications.
When to use LangChain vs LangGraph
A good rule of thumb as of 2025–2026: use LangChain (LCEL) for simple, linear pipelines such as a single prompt, a RAG lookup, or a quick tool call. Use LangGraph for anything with state, loops, or branching: multi-step agents, workflows that need to retry on failure, or applications where different paths run depending on model output. LangGraph is production-hardened and the standard choice for agents; LangChain's older "Agent" abstraction wraps LangGraph now anyway. For observability on either, wire in LangSmith from day one.
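Wiring in LangSmith is mostly configuration; a minimal sketch (the key value is a placeholder, and these are the long-standing environment variable names; check the LangSmith docs for your version):
import os

# Enable tracing before constructing any chains or graphs
os.environ["LANGCHAIN_TRACING_V2"] = "true"
os.environ["LANGCHAIN_API_KEY"] = "lsv2_..."  # placeholder LangSmith API key

# From here on, every chain and graph invocation is traced automatically.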
Recommended Watch
LangChain Crash Course for Beginners
A practical introduction to LangChain's core concepts — covers chains, prompt templates, memory, and RAG with working code examples. Ideal for developers who already know basic Python and the OpenAI API.
Building a Chain with LCEL
Install with pip install langchain langchain-openai. The pipe operator (|) is the core LCEL syntax — it passes output from one component to the next.
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
# 1. Define a reusable prompt template with variables
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a concise technical explainer."),
    ("human", "Explain {concept} to a {level} developer in 2 sentences."),
])
# 2. Initialize the model
llm = ChatOpenAI(model="gpt-5.4-mini", temperature=0)
# 3. Compose the chain with LCEL pipe syntax
chain = prompt | llm | StrOutputParser()
# 4. Invoke with template variables
response = chain.invoke({
    "concept": "vector embeddings",
    "level": "beginner",
})
print(response)
# Chains are reusable — call with different inputs without rewriting anything
response2 = chain.invoke({"concept": "RAG pipelines", "level": "intermediate"})
print(response2)
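Because LCEL chains implement the Runnable interface, streaming and batching come for free; a brief sketch reusing the chain above:
# Stream tokens as they are generated instead of waiting for the full response
for token in chain.stream({"concept": "tokenization", "level": "beginner"}):
    print(token, end="", flush=True)
print()

# Run several inputs in parallel with .batch()
results = chain.batch([
    {"concept": "embeddings", "level": "beginner"},
    {"concept": "attention", "level": "advanced"},
])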