LangChain Tutorial - Expression Language (LCEL) - Building Intelligent Chains
LangChain provides a flexible and powerful expression language, the LangChain Expression Language (LCEL), for building complex logic chains. By composing different runnable objects, LCEL supports advanced patterns such as sequential chains, nested chains, parallel chains, routing, and dynamic construction, covering a wide range of use cases. This article walks through each of these features and how to implement them.
Sequential Chains
The core capability of LCEL is composing runnables in sequence, where each runnable's output is automatically passed as input to the next. Sequential chains can be built with the pipe operator (|) or the explicit .pipe() method.
Here is a simple example:
from langchain_ollama import OllamaLLM
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
model = OllamaLLM(model="qwen2.5:0.5b")
prompt = ChatPromptTemplate.from_template("tell me a joke about {topic}")
# Compose with |: the prompt's output feeds the model, the model's output feeds the parser
chain = prompt | model | StrOutputParser()
result = chain.invoke({"topic": "bears"})
print(result)
Output:
Here's a bear joke for you:
Why did the bear dissolve in water?
Because it was a polar bear!
In the example above, the prompt template formats the input for the chat model, the model generates a joke, and the output parser converts the result into a string.
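For reference, the same composition can be written with the explicit .pipe() method instead of the | operator; a minimal sketch reusing the objects defined above:
# Equivalent to: prompt | model | StrOutputParser()
chain_piped = prompt.pipe(model).pipe(StrOutputParser())
print(chain_piped.invoke({"topic": "bears"}))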
Nested Chains
Nested chains let us combine multiple chains to build more complex logic. For example, the joke-generating chain can be composed with a second chain that judges how funny the joke is.
analysis_prompt = ChatPromptTemplate.from_template("is this a funny joke? {joke}")
# The dict maps the first chain's output to the `joke` variable of analysis_prompt
composed_chain = {"joke": chain} | analysis_prompt | model | StrOutputParser()
result = composed_chain.invoke({"topic": "bears"})
print(result)
Output:
Haha, that's a clever play on words! Using "polar" to imply the bear dissolved or became polar/polarized when put in water. Not the most hilarious joke ever, but it has a cute, groan-worthy pun that makes it mildly amusing.
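A note on the {"joke": chain} step: when a plain dict appears in a | composition, LCEL coerces it into a RunnableParallel, so the first chain's output lands under the joke key that analysis_prompt expects. The explicit equivalent, as a sketch reusing the objects defined above:
from langchain_core.runnables import RunnableParallel

# Explicit form of the dict shorthand used in composed_chain
composed_chain_explicit = (
    RunnableParallel(joke=chain) | analysis_prompt | model | StrOutputParser()
)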
Parallel Chains
RunnableParallel runs multiple chains in parallel and combines their results into a single dictionary. This is well suited to scenarios where several tasks need to be handled at once.
from langchain_core.runnables import RunnableParallel
joke_chain = ChatPromptTemplate.from_template("tell me a joke about {topic}") | model
poem_chain = ChatPromptTemplate.from_template("write a 2-line poem about {topic}") | model
# Run both chains concurrently on the same input; results are keyed as `joke` and `poem`
parallel_chain = RunnableParallel(joke=joke_chain, poem=poem_chain)
result = parallel_chain.invoke({"topic": "bear"})
print(result)
Output:
{
'joke': "Why don't bears like fast food? Because they can't catch it!",
'poem': "In the quiet of the forest, the bear roams free\nMajestic and wild, a sight to see."
}
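Since the combined object is itself a runnable, it also supports batch processing; a quick usage sketch (the extra topic is illustrative):
# Run the parallel chain over several inputs; returns one result dict per input
results = parallel_chain.batch([{"topic": "bear"}, {"topic": "cat"}])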
Routing
Routing dynamically selects which sub-chain to execute based on the input. LCEL offers two ways to implement routing:
Using a custom function
Dynamic routing via RunnableLambda:
from langchain_core.prompts import PromptTemplate
from langchain_core.runnables import RunnableLambda
chain = (
PromptTemplate.from_template(
"""Given the user question below, classify it as either being about `LangChain`, `Anthropic`, or `Other`.
Do not respond with more than one word.
<question>
{question}
</question>
Classification:"""
)
| OllamaLLM(model="qwen2.5:0.5b")
| StrOutputParser()
)
langchain_chain = PromptTemplate.from_template(
"""You are an expert in langchain. \
Always answer questions starting with "As Harrison Chase told me". \
Respond to the following question:
Question: {question}
Answer:"""
) | OllamaLLM(model="qwen2.5:0.5b")
anthropic_chain = PromptTemplate.from_template(
"""You are an expert in anthropic. \
Always answer questions starting with "As Dario Amodei told me". \
Respond to the following question:
Question: {question}
Answer:"""
) | OllamaLLM(model="qwen2.5:0.5b")
general_chain = PromptTemplate.from_template(
"""Respond to the following question:
Question: {question}
Answer:"""
) | OllamaLLM(model="qwen2.5:0.5b")
# Select a sub-chain based on the classification produced upstream
def route(info):
if "anthropic" in info["topic"].lower():
return anthropic_chain
elif "langchain" in info["topic"].lower():
return langchain_chain
else:
return general_chain
full_chain = {"topic": chain, "question": lambda x: x["question"]} | RunnableLambda(route)
result = full_chain.invoke({"question": "how do I use LangChain?"})
print(result)
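In full_chain, the dict is again coerced into a RunnableParallel: the classification chain fills topic while the lambda copies the original question through, and RunnableLambda(route) then dispatches to the chosen sub-chain. If you prefer, the lambda can be replaced with operator.itemgetter; a stylistically equivalent sketch:
from operator import itemgetter

# itemgetter("question") plays the same role as lambda x: x["question"]
full_chain = {"topic": chain, "question": itemgetter("question")} | RunnableLambda(route)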
Using RunnableBranch
RunnableBranch selects a branch by matching conditions:
from langchain_core.runnables import RunnableBranch
branch = RunnableBranch(
(lambda x: "anthropic" in x["topic"].lower(), anthropic_chain),
(lambda x: "langchain" in x["topic"].lower(), langchain_chain),
general_chain,
)
full_chain = {"topic": chain, "question": lambda x: x["question"]} | branch
result = full_chain.invoke({"question": "how do I use Anthropic?"})
print(result)
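RunnableBranch checks its (condition, runnable) pairs in order and runs the first branch whose condition returns True; the trailing bare runnable (general_chain here) is the default used when no condition matches.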
Dynamic Construction
Dynamic construction generates parts of a chain at runtime based on the input. A function wrapped as a RunnableLambda (here via the @chain decorator) can return a new Runnable, which LCEL then invokes as the next step.
from operator import itemgetter
# Note: importing `chain` (the decorator) shadows the `chain` variable defined earlier
from langchain_core.runnables import chain, RunnablePassthrough
llm = OllamaLLM(model="qwen2.5:0.5b")
contextualize_instructions = """Convert the latest user question into a standalone question given the chat history. Don't answer the question, return the question and nothing else (no descriptive text)."""
contextualize_prompt = ChatPromptTemplate.from_messages(
[
("system", contextualize_instructions),
("placeholder", "{chat_history}"),
("human", "{question}"),
]
)
contextualize_question = contextualize_prompt | llm | StrOutputParser()
@chain
def contextualize_if_needed(input_: dict):
    # With chat history, return the rewriting chain; LCEL invokes the
    # returned Runnable as the next step. Otherwise pass the question through.
    if input_.get("chat_history"):
        return contextualize_question
    else:
        return RunnablePassthrough() | itemgetter("question")
@chain
def fake_retriever(input_: dict):
return "egypt's population in 2024 is about 111 million"
qa_instructions = (
"""Answer the user question given the following context:\n\n{context}."""
)
qa_prompt = ChatPromptTemplate.from_messages(
[("system", qa_instructions), ("human", "{question}")]
)
full_chain = (
RunnablePassthrough.assign(question=contextualize_if_needed).assign(
context=fake_retriever
)
| qa_prompt
| llm
| StrOutputParser()
)
result = full_chain.invoke({
"question": "what about egypt",
"chat_history": [
("human", "what's the population of indonesia"),
("ai", "about 276 million"),
],
})
print(result)
Output:
According to the context provided, Egypt's population in 2024 is estimated to be about 111 million.
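Two details worth noting. RunnablePassthrough.assign(...) keeps the original input dict and adds (or overwrites) the given keys, so question is first rewritten against the history and context is then attached before the QA prompt. And because contextualize_if_needed is itself a runnable, it can be exercised on its own; a quick check (expected result shown as a comment):
# Without chat_history the question passes through unchanged
print(contextualize_if_needed.invoke({"question": "what about egypt"}))
# what about egypt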
Complete Code Example
from operator import itemgetter
from langchain_ollama import OllamaLLM
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
print("\n-----------------------------------\n")
# Simple demo
model = OllamaLLM(model="qwen2.5:0.5b")
prompt = ChatPromptTemplate.from_template("tell me a joke about {topic}")
chain = prompt | model | StrOutputParser()
result = chain.invoke({"topic": "bears"})
print(result)
print("\n-----------------------------------\n")
# Compose demo
analysis_prompt = ChatPromptTemplate.from_template("is this a funny joke? {joke}")
composed_chain = {"joke": chain} | analysis_prompt | model | StrOutputParser()
result = composed_chain.invoke({"topic": "bears"})
print(result)
print("\n-----------------------------------\n")
# Parallel demo
from langchain_core.runnables import RunnableParallel
joke_chain = ChatPromptTemplate.from_template("tell me a joke about {topic}") | model
poem_chain = ChatPromptTemplate.from_template("write a 2-line poem about {topic}") | model
parallel_chain = RunnableParallel(joke=joke_chain, poem=poem_chain)
result = parallel_chain.invoke({"topic": "bear"})
print(result)
print("\n-----------------------------------\n")
# Route demo
from langchain_core.prompts import PromptTemplate
from langchain_core.runnables import RunnableLambda
chain = (
PromptTemplate.from_template(
"""Given the user question below, classify it as either being about `LangChain`, `Anthropic`, or `Other`.
Do not respond with more than one word.
<question>
{question}
</question>
Classification:"""
)
| OllamaLLM(model="qwen2.5:0.5b")
| StrOutputParser()
)
langchain_chain = PromptTemplate.from_template(
"""You are an expert in langchain. \
Always answer questions starting with "As Harrison Chase told me". \
Respond to the following question:
Question: {question}
Answer:"""
) | OllamaLLM(model="qwen2.5:0.5b")
anthropic_chain = PromptTemplate.from_template(
"""You are an expert in anthropic. \
Always answer questions starting with "As Dario Amodei told me". \
Respond to the following question:
Question: {question}
Answer:"""
) | OllamaLLM(model="qwen2.5:0.5b")
general_chain = PromptTemplate.from_template(
"""Respond to the following question:
Question: {question}
Answer:"""
) | OllamaLLM(model="qwen2.5:0.5b")
def route(info):
if "anthropic" in info["topic"].lower():
return anthropic_chain
elif "langchain" in info["topic"].lower():
return langchain_chain
else:
return general_chain
full_chain = {"topic": chain, "question": lambda x: x["question"]} | RunnableLambda(route)
result = full_chain.invoke({"question": "how do I use LangChain?"})
print(result)
print("\n-----------------------------------\n")
# Branch demo
from langchain_core.runnables import RunnableBranch
branch = RunnableBranch(
(lambda x: "anthropic" in x["topic"].lower(), anthropic_chain),
(lambda x: "langchain" in x["topic"].lower(), langchain_chain),
general_chain,
)
full_chain = {"topic": chain, "question": lambda x: x["question"]} | branch
result = full_chain.invoke({"question": "how do I use Anthropic?"})
print(result)
print("\n-----------------------------------\n")
# Dynamic demo
from langchain_core.runnables import chain, RunnablePassthrough
llm = OllamaLLM(model="qwen2.5:0.5b")
contextualize_instructions = """Convert the latest user question into a standalone question given the chat history. Don't answer the question, return the question and nothing else (no descriptive text)."""
contextualize_prompt = ChatPromptTemplate.from_messages(
[
("system", contextualize_instructions),
("placeholder", "{chat_history}"),
("human", "{question}"),
]
)
contextualize_question = contextualize_prompt | llm | StrOutputParser()
@chain
def contextualize_if_needed(input_: dict):
if input_.get("chat_history"):
return contextualize_question
else:
return RunnablePassthrough() | itemgetter("question")
@chain
def fake_retriever(input_: dict):
return "egypt's population in 2024 is about 111 million"
qa_instructions = (
"""Answer the user question given the following context:\n\n{context}."""
)
qa_prompt = ChatPromptTemplate.from_messages(
[("system", qa_instructions), ("human", "{question}")]
)
full_chain = (
RunnablePassthrough.assign(question=contextualize_if_needed).assign(
context=fake_retriever
)
| qa_prompt
| llm
| StrOutputParser()
)
result = full_chain.invoke({
"question": "what about egypt",
"chat_history": [
("human", "what's the population of indonesia"),
("ai", "about 276 million"),
],
})
print(result)
print("\n-----------------------------------\n")
J-LangChain Implementation of the Examples Above
J-LangChain - Building Intelligent Chains
Summary
With sequential chains, nested chains, parallel chains, routing, and dynamic construction, LangChain's LCEL gives developers a powerful toolkit for building complex language tasks. Whether the need is a simple linear flow or intricate dynamic decision-making, LCEL handles it efficiently. Used well, these features let developers quickly assemble efficient, flexible intelligent chains for a wide range of applications.