Common Runnable Control and Operation Utilities in LangChain
This guide introduces core runnable operation utilities in LangChain that simplify building flexible, input-compatible LLM application chains.
RunnablePassthrough
The RunnablePassthrough component is part of the langchain_core.runnables module. It passes input values through unmodified between chain steps, which helps align the output format of one step with the input requirements of the next.
from langchain_community.vectorstores import FAISS
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnablePassthrough, RunnableParallel
from langchain_core.output_parsers import StrOutputParser
from langchain_openai import OpenAIEmbeddings, ChatOpenAI
# Initialize FAISS vector store with sample knowledge entries
faiss_knowledge_base = FAISS.from_texts(
["Elon Musk founded SpaceX in 2002"],
embedding=OpenAIEmbeddings()
)
doc_retriever = faiss_knowledge_base.as_retriever()
# RAG prompt expects two input parameters: retrieved context and user question
rag_prompt_template = """Answer the user's question using only the provided context below:
{context}
User Question: {question}
"""
rag_prompt = ChatPromptTemplate.from_template(rag_prompt_template)
llm = ChatOpenAI(model="gpt-3.5-turbo")
# Use RunnablePassthrough to pass raw user query directly to the prompt
rag_query_chain = (
{"context": doc_retriever, "question": RunnablePassthrough()}
| rag_prompt
| llm
| StrOutputParser()
)
The three syntax forms below are fully equivalent ways of defining a parallel runnable mapping:
# Form 1: Native dictionary definition
{"context": doc_retriever, "question": RunnablePassthrough()}
# Form 2: RunnableParallel initialized with keyword arguments
RunnableParallel(context=doc_retriever, question=RunnablePassthrough())
# Form 3: RunnableParallel initialized with a dictionary parameter
RunnableParallel({"context": doc_retriever, "question": RunnablePassthrough()})
itemgetter Utility
itemgetter is a factory function from Python's standard operator module. It returns a callable that extracts the values for the specified keys from an input dictionary, which is convenient for field extraction in multi-input chain scenarios.
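Standalone, itemgetter works like this (pure standard library; the `payload` dict is an illustrative example):

```python
from operator import itemgetter

payload = {"question": "When was SpaceX founded?", "user_id": 42}

# With a single key, the returned callable yields that key's value
get_question = itemgetter("question")
print(get_question(payload))  # -> "When was SpaceX founded?"

# With multiple keys, it yields a tuple of values in the given order
get_both = itemgetter("question", "user_id")
print(get_both(payload))  # -> ('When was SpaceX founded?', 42)
```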
from operator import itemgetter
# Extract the "question" field from input to feed both the retriever and prompt
rag_pipeline = (
{
"context": itemgetter("question") | doc_retriever,
"question": itemgetter("question")
}
| rag_prompt
| llm
| StrOutputParser()
)
RunnableParallel
RunnableParallel (also referred to as RunnableMap) runs multiple independent runnables concurrently on the same input and returns a dictionary whose keys are the runnable names and whose values are the corresponding execution results.
from langchain_core.runnables import RunnableParallel
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI
llm = ChatOpenAI(model="gpt-3.5-turbo")
# Define two independent generation chains
short_joke_generator = ChatPromptTemplate.from_template("Tell a short, funny joke about {subject}") | llm
two_line_poem_creator = ChatPromptTemplate.from_template("Write a 2-line rhyming poem about {subject}") | llm
# Run both chains concurrently
parallel_content_generator = RunnableParallel(joke=short_joke_generator, poem=two_line_poem_creator)
result = parallel_content_generator.invoke({"subject": "coffee"})
print(result)