
Agentic RAG with Elasticsearch and Langchain

By: Han Xiang Choong, from Elastic

A discussion and implementation of an agentic flow for Elastic RAG, in which the LLM chooses whether to call the Elastic KB.

Further reading: Elasticsearch: Document search with a Langchain-based Elasticsearch Agent.

Introduction

Agents are the logical next step in applying LLMs to real-world use cases. This article aims to introduce the concept of agents and their use in RAG workflows. In short, agents represent an extremely exciting area, with the potential for many ambitious applications and use cases.

I hope to cover more of these ideas in future articles. For now, let's see how to implement Agentic RAG using Elasticsearch as our knowledge base and LangChain as our agent framework.

Background

The use of LLMs began with simply prompting an LLM to perform tasks such as answering questions and doing simple calculations.

However, gaps in existing model knowledge meant that LLMs could not be applied to domains requiring specialized expertise, such as enterprise customer service and business intelligence.

Prompting soon gave way to retrieval-augmented generation (RAG), a natural strength of Elasticsearch. RAG is an effective and simple way to quickly supply an LLM with context and factual information at query time. The alternative is a long and expensive retraining process, where success is far from guaranteed.

The main operational advantage of RAG is that it allows updated information to be fed to an LLM application in near real time.

Implementation involves procuring a vector database (such as Elasticsearch), deploying an embedding model (such as ELSER), and calling the search API to retrieve relevant documents.

Once documents are retrieved, they can be inserted into the LLM's prompt, and an answer generated based on their content. This provides context and factuality, both of which the LLM may otherwise lack.
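The retrieve-then-prompt step described above can be sketched in a few lines (a minimal illustration only; `build_rag_prompt` and its inputs are hypothetical names, not part of LangChain or the Elasticsearch client):

```python
def build_rag_prompt(question: str, retrieved_docs: list[str]) -> str:
    """Stuff retrieved documents into the LLM prompt as grounding context."""
    context = "\n\n".join(
        f"[Document {i + 1}]\n{doc}" for i, doc in enumerate(retrieved_docs)
    )
    return (
        "Answer the question using ONLY the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )

# Example: two retrieved article snippets become grounding context.
prompt = build_rag_prompt(
    "Who won the 2020 election?",
    ["Article A text...", "Article B text..."],
)
```

The resulting string is what gets sent to the LLM in place of the bare question.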

The difference between calling an LLM directly, using RAG, and using agents

However, the standard RAG deployment model has a drawback - it is rigid. The LLM cannot choose which knowledge base to pull information from. Nor can it choose to use other tools, such as web search engine APIs like Google or Bing. It cannot check the current weather, cannot use a calculator, and cannot take into account the output of any tool beyond the given knowledge base.

What sets the agentic model apart is choice.

A note on terminology

Tool usage, the term used in the Langchain context, is also known as function calling. For all intents and purposes the two terms are interchangeable - both refer to an LLM being given a set of functions or tools that it can use to supplement its capabilities or affect the world. Please bear with me, as I use "tool usage" throughout the rest of this article.

Choice

Give the LLM decision-making ability and provide it with a set of tools. Based on the state and history of the conversation, the LLM will choose whether to use each tool, and will incorporate the tool's output into its responses.

These tools might be knowledge bases, calculators, web search engines and crawlers - the variety is endless. The LLM becomes able to perform complex actions and tasks, rather than just generating text.

An example agentic flow for researching a specific topic

Let's implement a simple example of an agent. Elastic's core strength is our knowledge base, so this example will focus on working with a relatively large and complex knowledge base by crafting queries more sophisticated than a simple vector search.

Setup

First, define a .env file in the project directory and fill in these fields. I used an Azure OpenAI deployment with GPT-4o for my LLM, and an Elastic Cloud deployment for my knowledge base. My Python version is Python 3.12.4, and I worked on a Macbook.

ELASTIC_ENDPOINT="YOUR ELASTIC ENDPOINT"
ELASTIC_API_KEY="YOUR ELASTIC API KEY"

OPENAI_API_TYPE="azure"
AZURE_OPENAI_ENDPOINT="YOUR AZURE ENDPOINT"
AZURE_OPENAI_API_VERSION="2024-06-01"
AZURE_OPENAI_API_KEY="YOUR AZURE API KEY"
AZURE_OPENAI_GPT4O_MODEL_NAME="gpt-4o"
AZURE_OPENAI_GPT4O_DEPLOYMENT_NAME="YOUR AZURE OPENAI GPT-4o DEPLOYMENT NAME"

You may have to install the following dependencies in your terminal.

pip install langchain openai elasticsearch python-dotenv

Create a python file named chat.py in your project directory, and paste in this snippet to initialize your LLM and the connection to Elastic Cloud:

import os
from dotenv import load_dotenv
load_dotenv()

from langchain.chat_models import AzureChatOpenAI
from langchain.agents import initialize_agent, AgentType, Tool
from langchain.tools import StructuredTool  # Import StructuredTool
from langchain.memory import ConversationBufferMemory
from typing import Optional
from pydantic import BaseModel, Field

# LLM setup
llm = AzureChatOpenAI(
    openai_api_version=os.getenv("AZURE_OPENAI_API_VERSION"),
    azure_deployment=os.getenv("AZURE_OPENAI_GPT4O_DEPLOYMENT_NAME"),
    temperature=0.5,
    max_tokens=4096
)

from elasticsearch import Elasticsearch
# Elasticsearch Setup
try:
    # Elasticsearch setup
    es_endpoint = os.environ.get("ELASTIC_ENDPOINT")
    es_client = Elasticsearch(
        es_endpoint,
        api_key=os.environ.get("ELASTIC_API_KEY")
    )
except Exception as e:
    es_client = None

Hello World! Our first tool

With our LLM and Elastic client initialized and defined, let's do an Elastic version of Hello World. We'll define a function that checks the connection status to Elastic Cloud, and a simple agentic conversation chain to call it.

Define the following function as a langchain Tool. The name and description are a key part of prompt engineering: the LLM relies on them to decide whether to use the tool during a conversation.

# Define a function to check ES status
def es_ping(*args, **kwargs):
    if es_client is None:
        return "ES client is not initialized."
    else:
        try:
            if es_client.ping():
                return "ES ping returning True, ES is connected."
            else:
                return "ES is not connected."
        except Exception as e:
            return f"Error pinging ES: {e}"

es_status_tool = Tool(
    name="ES Status",
    func=es_ping,
    description="Checks if Elasticsearch is connected.",
)

tools = [es_status_tool]

Now, let's initialize a conversation memory component to keep track of the conversation, along with our agent itself.

# Initialize memory to keep track of the conversation
memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)

# Initialize agent
agent_chain = initialize_agent(
    tools,
    llm,
    agent=AgentType.CONVERSATIONAL_REACT_DESCRIPTION,
    memory=memory,
    verbose=True,
)

Finally, let's run the conversation loop with this snippet:

# Interactive conversation with the agent
def main():
    print("Welcome to the chat agent. Type 'exit' to quit.")
    while True:
        user_input = input("You: ")
        if user_input.lower() in ['exit', 'quit']:
            print("Goodbye!")
            break
        response = agent_chain.run(input=user_input)
        print("Assistant:", response)

if __name__ == "__main__":
    main()

In your terminal, run python chat.py to start the conversation.

python chat.py 

Here's how mine went:

You: Hello
Assistant: Hello! How can I assist you today?
You: Is Elastic search connected?

> Entering new AgentExecutor chain...
Thought: Do I need to use a tool? Yes
Action: ES Status
Action Input: 

Observation: ES ping returning True, ES is connected.
Thought:Do I need to use a tool? No
AI: Yes, Elasticsearch is connected. How can I assist you further?

When I asked whether Elasticsearch was connected, the LLM used the ES Status tool, pinged my Elastic Cloud deployment, got back True, and then confirmed that Elastic Cloud was indeed connected.

Congratulations! That's a successful Hello World :)

Note that the observation is the output of the es_ping function. The format and content of this observation are a key part of our prompt engineering, because this is what the LLM uses to decide its next step.

Let's see how to modify this tool for RAG.

Agentic RAG

I recently built a large and complex knowledge base in my Elastic Cloud deployment using the POLITICS dataset, which contains roughly 2.46 million political articles scraped from US news sources. I imported it into Elastic Cloud and embedded it with an elser_v2 inference endpoint, following the process defined in a previous blog.

To deploy an elser_v2 inference endpoint, make sure ML node autoscaling is enabled, then run the following command in the Elastic Cloud console.

PUT _inference/sparse_embedding/elser_v2
{
  "service": "elser",
  "service_settings": {
    "num_allocations": 4,
    "num_threads": 8
  }
}

Now, let's define a new tool that runs a simple semantic search against our politics knowledge base index, which I've called bignews_embedded. This function accepts a search query, inserts it into a standard semantic search query template, and runs the query with Elasticsearch. Once it has the search results, it concatenates the article contents into a single block of text and returns that as the LLM observation.

We limit the number of search results to 3. One advantage of this style of Agentic RAG is that we can formulate an answer over multiple conversational steps. In other words, leading questions can be used to set the stage and context for answering more complex queries. Question answering becomes a fact-grounded conversation rather than one-shot answer generation.

Dates

To highlight a key advantage of using agents, the RAG search function includes a dates parameter in addition to the query. When searching news articles, we may want to restrict results to a specific time range, such as "In 2020" or "Between 2008 and 2012". By adding dates along with a parser, we let the LLM specify a date range for the search.

Simply put, if I specify something like "California wildfires in 2020", I don't want news from 2017 or any other year.
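The date convention the tool relies on can be isolated as a small helper for illustration (`parse_dates` is a hypothetical name; in the actual listing the same logic lives inline in `rag_search`):

```python
def parse_dates(dates: str) -> dict:
    """Parse 'YYYY-MM-DD' or 'YYYY-MM-DD to YYYY-MM-DD' into an
    Elasticsearch range clause on the `date` field."""
    parts = dates.strip().split(" to ")
    if len(parts) == 1:
        start, end = parts[0], parts[0]  # single date: one-day range
    elif len(parts) == 2:
        start, end = parts
    else:
        raise ValueError("Use YYYY-MM-DD or YYYY-MM-DD to YYYY-MM-DD.")
    return {"range": {"date": {"gte": start, "lte": end}}}
```

This clause is appended to the query's bool must list whenever the LLM supplies dates.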

This rag_search function combines a date parser (which extracts dates from the input and adds them to the query) with an Elastic semantic_search query.

# Define the RAG search function
def rag_search(query: str, dates: str):
    if es_client is None:
        return "ES client is not initialized."
    else:
        try:
            # Build the Elasticsearch query
            must_clauses = []

            # If dates are provided, parse and include in query
            if dates:
                # Dates must be in format 'YYYY-MM-DD' or 'YYYY-MM-DD to YYYY-MM-DD'
                date_parts = dates.strip().split(' to ')
                if len(date_parts) == 1:
                    # Single date
                    start_date = date_parts[0]
                    end_date = date_parts[0]
                elif len(date_parts) == 2:
                    start_date = date_parts[0]
                    end_date = date_parts[1]
                else:
                    return "Invalid date format. Please use YYYY-MM-DD or YYYY-MM-DD to YYYY-MM-DD."

                date_range = {
                    "range": {
                        "date": {
                            "gte": start_date,
                            "lte": end_date
                        }
                    }
                }
                must_clauses.append(date_range)

            # Add the main query clause
            main_query = {
                "nested": {
                    "path": "text.inference.chunks",
                    "query": {
                        "sparse_vector": {
                            "inference_id": "elser_v2",
                            "field": "text.inference.chunks.embeddings",
                            "query": query
                        }
                    },
                    "inner_hits": {
                        "size": 2,
                        "name": "bignews_embedded.text",
                        "_source": False
                    }
                }
            }
            must_clauses.append(main_query)

            es_query = {
                "_source": ["text.text", "title", "date"],
                "query": {
                    "bool": {
                        "must": must_clauses
                    }
                },
                "size": 3
            }

            response = es_client.search(index="bignews_embedded", body=es_query)
            hits = response["hits"]["hits"]
            if not hits:
                return "No articles found for your query."
            result_docs = []
            for hit in hits:
                source = hit["_source"]
                title = source.get("title", "No Title")
                text_content = source.get("text", {}).get("text", "")
                date = source.get("date", "No Date")
                doc = f"Title: {title}\nDate: {date}\n{text_content}\n"
                result_docs.append(doc)
            return "\n".join(result_docs)
        except Exception as e:
            return f"Error during RAG search: {e}"

After the full search query runs, the results are concatenated into a single block of text and returned as the "observation" for the LLM to consume.

To account for multiple possible arguments, define the valid input format with pydantic's BaseModel:

class RagSearchInput(BaseModel):
    query: str = Field(..., description="The search query for the knowledge base.")
    dates: str = Field(
        ...,
        description="Date or date range for filtering results. Specify in format YYYY-MM-DD or YYYY-MM-DD to YYYY-MM-DD."
    )

We also need to use StructuredTool to define a multi-input function, using the input format defined above:

# Define the RAG search tool using StructuredTool
rag_search_tool = StructuredTool(
    name="RAG_Search",
    func=rag_search,
    description=(
        "Use this tool to search for information about American politics from the knowledge base. "
        "**Input must include a search query and a date or date range.** "
        "Dates must be specified in this format YYYY-MM-DD or YYYY-MM-DD to YYYY-MM-DD."
    ),
    args_schema=RagSearchInput
)

The description is a critical element of the tool definition, and part of your prompt engineering. It should be comprehensive and detailed, giving the LLM enough context to know when to use the tool and for what purpose.

The description should also include the kind of input the LLM must provide for the tool to be used correctly. Specifying formats and expectations makes a huge difference here.

An uninformative description can seriously impair the LLM's ability to use the tool!

Remember to add the new tool to the list of tools for the agent to use:

tools = [es_status_tool, rag_search_tool]

We also need to further modify the agent with a system prompt, to gain extra control over the agent's behavior. The system prompt is critical for preventing errors related to malformed outputs and function inputs. We need to explicitly state what each function expects and what the model should output, because langchain will throw an error if it sees an improperly formatted LLM response.

We also need to set agent=AgentType.OPENAI_FUNCTIONS to use OpenAI's function calling capability. This lets the LLM interact with functions according to the structured templates we specify.

Note that the system prompt includes stipulations about the exact format of the inputs the LLM should generate, along with concrete examples.

The LLM must detect not only which tool to use, but also the inputs the tool expects! Langchain only handles the function calling / tool usage mechanics; whether they are used correctly is up to the LLM.

agent_chain = initialize_agent(
    tools,
    llm,
    agent=AgentType.OPENAI_FUNCTIONS,
    memory=memory,
    verbose=True,
    handle_parsing_errors=True,
    system_message="""
    You are an AI assistant that helps with questions about American politics using a knowledge base. Be concise, sharp, to the point, and respond in one paragraph.
    You have access to the following tools:
    - **ES_Status**: Checks if Elasticsearch is connected.
    - **RAG_Search**: Use this to search for information in the knowledge base. **Input must include a search query and a date or date range.** Dates must be specified in this format YYYY-MM-DD or YYYY-MM-DD to YYYY-MM-DD.
    **Important Instructions:**
    - **Extract dates or date ranges from the user's question.**
    - **If the user does not provide a date or date range, politely ask them to provide one before proceeding.**
    When you decide to use a tool, use the following format *exactly*:
    Thought: [Your thought process about what you need to do next]
    Action: [The action to take, should be one of [ES_Status, RAG_Search]]
    Action Input: {"query": "the search query", "dates": "the date or date range"}
    If you receive an observation after an action, you should consider it and then decide your next step. If you have enough information to answer the user's question, respond with:
    Thought: [Your thought process]
    Assistant: [Your final answer to the user]
    **Examples:**
    - **User's Question:** "Tell me about the 2020 California wildfires."
      Thought: I need to search for information about the 2020 California wildfires.
      Action: RAG_Search
      Action Input: {"query" : "California wildfires", "dates" : "2020-01-01 to 2020-12-31"}
    - **User's Question:** "What happened during the presidential election?"
      Thought: The user didn't specify a date. I should ask for a date range.
      Assistant: Could you please specify the date or date range for the presidential election you're interested in?
    Always ensure that your output strictly follows one of the above formats, and do not include any additional text or formatting.
    Remember:
    - **Do not** include any text before or after the specified format.
    - **Do not** add extra explanations.
    - **Do not** include markdown, bullet points, or numbered lists unless it is part of the Assistant's final answer.
    Your goal is to assist the user by effectively using the tools when necessary and providing clear and concise answers.
    """
)

Now run python chat.py in your terminal and let's test it!

Testing Agentic RAG

Let's test it with the following query:

Summarize the federal response to california wildfires in 2020

Helpfully, langchain will print the intermediate steps, including the input to the RAG_Search function, the search results, and the final output.

> Entering new AgentExecutor chain...

Invoking: `RAG_Search` with `{'query': 'federal response to California wildfires', 'dates': '2020-01-01 to 2020-12-31'}`


Title: California federal wildfire aid: Newsom says Trump changed mind
Date: 2020-10-16T00:00:00.000Z
President Donald Trump abruptly changed course...

Title: California governor declares state of emergency in multiple counties as wildfires rage
Date: 2020-09-07T00:00:00.000Z
In a statement, Newsom noted that tens...

Title: Trump approves disaster declaration for California, reversing course
Date: 2020-10-16T00:00:00.000Z
California secured a Major Disaster Declaration ...

> Finished chain.

Assistant: The federal response to the California wildfires in 2020 included several key actions and moments:

1. **Initial Denial and Reversal of Federal Aid**: President Trump initially denied California's request for federal aid for the wildfires. However, after Governor Gavin Newsom vowed to appeal the decision, Trump reversed course and approved the Major Disaster Declaration. This declaration allowed California to receive hundreds of millions of dollars in federal aid.

2. **Statewide Emergency Declarations**: Governor Newsom declared a statewide emergency due to the widespread fires and extreme weather conditions. The state worked closely with federal authorities to secure assistance, including Fire Management Assistance Grants from FEMA.

3. **Presidential Major Disaster Declarations**: Newsom secured a Presidential Major Disaster Declaration to bolster the state's emergency response to the Northern California wildfires. This declaration made people in impacted counties eligible for various forms of federal support, including crisis counseling, housing, unemployment assistance, and legal services.

4. **Federal and State Collaboration**: Despite ongoing tensions and disagreements between the state and the Trump administration, particularly over forest management and climate change, federal agencies such as the National Park Service, U.S. Forest Service, and Bureau of Land Management were involved in managing and supporting firefighting efforts in California.

5. **Impact and Scale of Wildfires**: The 2020 wildfire season in California was historically devastating, with more than 8,500 blazes scorching 6,400 square miles, destroying thousands of structures, and claiming lives. The federal aid and disaster declarations were crucial in supporting the state's response and recovery efforts.

Overall, the federal response involved a combination of initial resistance followed by critical support and collaboration to address the unprecedented wildfire crisis in California.

Most notably, the LLM composed a search query and then added a date range spanning from the start to the end of 2020. By restricting search results to the specified year, we ensure that only relevant documents are passed to the LLM.

We could do much more with this, such as constraining results by category, the appearance of certain entities, or relationships to other events.

The possibilities are endless, and I think that's pretty cool!

A note on error handling

In some cases, the LLM may fail to use the right tool/function when it is needed. For example, it might choose to answer a question about current events from its own knowledge instead of using the available knowledge base.

The system prompt and tool/function descriptions must be carefully tested and tuned.

Another option may be to increase the variety of available tools, to raise the likelihood of answers being generated from knowledge base content rather than the LLM's inherent knowledge.

Note that LLMs do fail occasionally; that is a natural consequence of their probabilistic nature. Helpful error messages or disclaimers may also be an important part of the user experience.
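One lightweight way to surface such failures gracefully is a retry wrapper with a fallback disclaimer (a sketch only; `run_with_fallback` is an illustrative helper, not a LangChain API):

```python
def run_with_fallback(run, user_input: str, retries: int = 2) -> str:
    """Call an agent entry point, retrying on errors and falling back
    to a disclaimer so the user never sees a raw stack trace."""
    last_error = None
    for _ in range(retries + 1):
        try:
            return run(user_input)
        except Exception as e:  # agent/tool failures are probabilistic
            last_error = e
    return (f"Sorry, I couldn't complete that request reliably "
            f"(last error: {last_error}). Please try rephrasing.")
```

In the conversation loop, this could wrap the agent call, e.g. `response = run_with_fallback(agent_chain.run, user_input)`.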

Conclusion and future prospects

For me, the main takeaway is the possibility of building more advanced search applications. LLMs may be able to craft very complex search queries on the fly, in the context of a natural-language conversation. This opens a path to dramatically improving the accuracy and relevance of search applications, and is an area I'm excited to explore.

Interactions between knowledge bases and other tools (such as web search engines and monitoring tool APIs), mediated by the LLM, could also enable some exciting and sophisticated use cases. Search results from the KB could be supplemented with real-time information, allowing the LLM to perform effective, time-sensitive reasoning on the fly.

There is also the possibility of multi-agent workflows. In an Elastic context, this could mean multiple agents exploring different sets of knowledge bases to collaboratively construct solutions to complex problems. Perhaps a federated search model, where multiple organizations build collaborative, shared applications, along the lines of federated learning?

An example multi-agent flow

These are some use cases I'd like to explore with Elasticsearch, and I hope you will too.

Until next time!

Appendix: full code for chat.py

import os
from dotenv import load_dotenv
load_dotenv()

from langchain.chat_models import AzureChatOpenAI
from langchain.agents import initialize_agent, AgentType, Tool
from langchain.tools import StructuredTool  # Import StructuredTool
from langchain.memory import ConversationBufferMemory
from typing import Optional
from pydantic import BaseModel, Field

llm = AzureChatOpenAI(
    openai_api_version=os.getenv("AZURE_OPENAI_API_VERSION"),
    azure_deployment=os.getenv("AZURE_OPENAI_GPT4O_DEPLOYMENT_NAME"),
    temperature=0.5,
    max_tokens=4096
)

from elasticsearch import Elasticsearch

try:
    # Elasticsearch setup
    es_endpoint = os.environ.get("ELASTIC_ENDPOINT")
    es_client = Elasticsearch(
        es_endpoint,
        api_key=os.environ.get("ELASTIC_API_KEY")
    )
except Exception as e:
    es_client = None

# Define a function to check ES status
def es_ping(_input):
    if es_client is None:
        return "ES client is not initialized."
    else:
        try:
            if es_client.ping():
                return "ES is connected."
            else:
                return "ES is not connected."
        except Exception as e:
            return f"Error pinging ES: {e}"

# Define the ES status tool
es_status_tool = Tool(
    name="ES_Status",
    func=es_ping,
    description="Checks if Elasticsearch is connected.",
)

# Define the RAG search function
def rag_search(query: str, dates: str):
    if es_client is None:
        return "ES client is not initialized."
    else:
        try:
            # Build the Elasticsearch query
            must_clauses = []

            # If dates are provided, parse and include in query
            if dates:
                # Dates must be in format 'YYYY-MM-DD' or 'YYYY-MM-DD to YYYY-MM-DD'
                date_parts = dates.strip().split(' to ')
                if len(date_parts) == 1:
                    # Single date
                    start_date = date_parts[0]
                    end_date = date_parts[0]
                elif len(date_parts) == 2:
                    start_date = date_parts[0]
                    end_date = date_parts[1]
                else:
                    return "Invalid date format. Please use YYYY-MM-DD or YYYY-MM-DD to YYYY-MM-DD."

                date_range = {
                    "range": {
                        "date": {
                            "gte": start_date,
                            "lte": end_date
                        }
                    }
                }
                must_clauses.append(date_range)

            # Add the main query clause
            main_query = {
                "nested": {
                    "path": "text.inference.chunks",
                    "query": {
                        "sparse_vector": {
                            "inference_id": "elser_v2",
                            "field": "text.inference.chunks.embeddings",
                            "query": query
                        }
                    },
                    "inner_hits": {
                        "size": 2,
                        "name": "bignews_embedded.text",
                        "_source": False
                    }
                }
            }
            must_clauses.append(main_query)

            es_query = {
                "_source": ["text.text", "title", "date"],
                "query": {
                    "bool": {
                        "must": must_clauses
                    }
                },
                "size": 3
            }

            response = es_client.search(index="bignews_embedded", body=es_query)
            hits = response["hits"]["hits"]
            if not hits:
                return "No articles found for your query."
            result_docs = []
            for hit in hits:
                source = hit["_source"]
                title = source.get("title", "No Title")
                text_content = source.get("text", {}).get("text", "")
                date = source.get("date", "No Date")
                doc = f"Title: {title}\nDate: {date}\n{text_content}\n"
                result_docs.append(doc)
            return "\n".join(result_docs)
        except Exception as e:
            return f"Error during RAG search: {e}"

class RagSearchInput(BaseModel):
    query: str = Field(..., description="The search query for the knowledge base.")
    dates: str = Field(
        ...,
        description="Date or date range for filtering results. Specify in format YYYY-MM-DD or YYYY-MM-DD to YYYY-MM-DD."
    )

# Define the RAG search tool using StructuredTool
rag_search_tool = StructuredTool(
    name="RAG_Search",
    func=rag_search,
    description=(
        "Use this tool to search for information about American politics from the knowledge base. "
        "**Input must include a search query and a date or date range.** "
        "Dates must be specified in this format YYYY-MM-DD or YYYY-MM-DD to YYYY-MM-DD."
    ),
    args_schema=RagSearchInput
)

# List of tools
tools = [es_status_tool, rag_search_tool]

# Initialize memory to keep track of the conversation
memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)

agent_chain = initialize_agent(
    tools,
    llm,
    agent=AgentType.OPENAI_FUNCTIONS,
    memory=memory,
    verbose=True,
    handle_parsing_errors=True,
    system_message="""
    You are an AI assistant that helps with questions about American politics using a knowledge base. Be concise, sharp, to the point, and respond in one paragraph.
    You have access to the following tools:

    - **ES_Status**: Checks if Elasticsearch is connected.
    - **RAG_Search**: Use this to search for information in the knowledge base. **Input must include a search query and a date or date range.** Dates must be specified in this format YYYY-MM-DD or YYYY-MM-DD to YYYY-MM-DD.

    **Important Instructions:**

    - **Extract dates or date ranges from the user's question.**
    - **If the user does not provide a date or date range, politely ask them to provide one before proceeding.**

    When you decide to use a tool, use the following format *exactly*:
    Thought: [Your thought process about what you need to do next]
    Action: [The action to take, should be one of [ES_Status, RAG_Search]]
    Action Input: {"query": "the search query", "dates": "the date or date range"}


    If you receive an observation after an action, you should consider it and then decide your next step. If you have enough information to answer the user's question, respond with:
    Thought: [Your thought process]
    Assistant: [Your final answer to the user]

    **Examples:**

    - **User's Question:** "Tell me about the 2020 California wildfires."
      Thought: I need to search for information about the 2020 California wildfires.
      Action: RAG_Search
      Action Input: {"query": "California wildfires", "dates": "2020-01-01 to 2020-12-31"}

    - **User's Question:** "What happened during the presidential election?"
      Thought: The user didn't specify a date. I should ask for a date range.
      Assistant: Could you please specify the date or date range for the presidential election you're interested in?

    Always ensure that your output strictly follows one of the above formats, and do not include any additional text or formatting.

    Remember:

    - **Do not** include any text before or after the specified format.
    - **Do not** add extra explanations.
    - **Do not** include markdown, bullet points, or numbered lists unless it is part of the Assistant's final answer.

    Your goal is to assist the user by effectively using the tools when necessary and providing clear and concise answers.
    """
)

# Interactive conversation with the agent
def main():
    print("Welcome to the chat agent. Type 'exit' to quit.")
    while True:
        user_input = input("You: ")
        if user_input.lower() in ['exit', 'quit']:
            print("Goodbye!")
            break
        # Update method call to address deprecation warning
        response = agent_chain.invoke(input=user_input)
        print("Assistant:", response['output'])

if __name__ == "__main__":
    main()

Elasticsearch is packed with new features to help you build the best search solution for your use case. Dive into our sample notebooks to learn more, start a free cloud trial, or try Elastic on your local machine today.

Original article: https://www.elastic.co/search-labs/blog/rag-agent-tool-elasticsearch-langchain

