
【Open-Source LLM Projects】LLM Development Frameworks - LangChain Tutorials - Basics - v2.0

【1】Build a Simple LLM Application with LCEL

https://python.langchain.com/docs/tutorials/llm_chain/

How to build a simple LLM application with LangChain. Functionality: translate text from English into another language.

Implementation: a single LLM call plus some prompting. A surprising amount of functionality can be built with nothing more than some prompting and one LLM call!

After reading this tutorial, you will have a high-level overview of:

  • Using language models

  • Using PromptTemplates and OutputParsers

  • Using LangChain Expression Language (LCEL) to chain components together

  • Debugging and tracing your application using LangSmith

  • Deploying your application with LangServe

【1.1】Using Language Models

LangChain supports many different language models. ChatModels are instances of LangChain "Runnables", which means they expose a standard interface for interacting with the LLM. To simply call the model, pass a list of messages to the .invoke method.

pip install -qU langchain-openai

# Use the model directly
import getpass
import os

os.environ["OPENAI_API_KEY"] = getpass.getpass()

from langchain_openai import ChatOpenAI

model = ChatOpenAI(model="gpt-4")

# Call the model
from langchain_core.messages import HumanMessage, SystemMessage

messages = [
    SystemMessage(content="Translate the following from English into Italian"),
    HumanMessage(content="hi!"),
]

model.invoke(messages)
"""
AIMessage(content='ciao!', response_metadata={'token_usage': {'completion_tokens': 3, 'prompt_tokens': 20, 'total_tokens': 23},
'model_name': 'gpt-4', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None},
id='run-fc5d7c88-9615-48ab-a3c7-425232b562c5-0')
"""

API Reference:HumanMessage | SystemMessage

If LangSmith is enabled, you can see that this run was logged to LangSmith and inspect the LangSmith trace.
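Enabling tracing is usually just a matter of setting two environment variables before running the code above (a minimal sketch, assuming you already have a LangSmith API key):

import getpass
import os

os.environ["LANGCHAIN_TRACING_V2"] = "true"
os.environ["LANGCHAIN_API_KEY"] = getpass.getpass()  # your LangSmith API key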

【1.2】OutputParsers

The response from the model is an AIMessage, which contains a string response along with other metadata about that response. Often you may just want to work with the string response, which you can extract with a simple output parser.

# Import a simple output parser.
from langchain_core.output_parsers import StrOutputParser

parser = StrOutputParser()

# 1) Save the result of the LLM call, then pass it to the parser
# result = model.invoke(messages)
# parser.invoke(result)
# 'Ciao!'

# 2) "Chain" the model together with this output parser
chain = model | parser
chain.invoke(messages)
# 'Ciao!'

API Reference:StrOutputParser

More commonly, you can "chain" the model together with this output parser. This means the output parser will be invoked every time this chain is called. The chain takes on the input type of the language model (a string or a list of messages) and returns the output type of the output parser (a string).

Chains are easy to create with the | operator; in LangChain, the | operator is used to combine two elements together.
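Because the composed chain is itself a Runnable, it also exposes the standard batch (and async) methods with no extra work. A small sketch using the chain defined above:

chain.batch([messages, messages])  # -> ['Ciao!', 'Ciao!']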

If you now look at LangSmith, you can see that the chain has two steps: first the language model is called, then the result is passed to the output parser. You can inspect the LangSmith trace.

【1.3】Prompt Templates

So far we have been passing a list of messages directly into the language model. Where does that list come from? Usually, it is constructed from a combination of user input and application logic. This application logic typically takes the raw user input and transforms it into a list of messages ready to pass to the language model.

Common transformations include adding a system message or formatting a template with the user input.

PromptTemplates are a concept in LangChain designed to assist with this transformation. They take in raw user input and return data (a prompt) that is ready to pass into a language model.

from langchain_core.prompts import ChatPromptTemplate

# Create a string that will be formatted into a system message
system_template = "Translate the following into {language}:"

# Create the PromptTemplate: a combination of the system_template and a
# simpler template for where to put the text to be translated.
prompt_template = ChatPromptTemplate.from_messages(
    [("system", system_template), ("user", "{text}")]
)

# Try out the prompt template on its own
result = prompt_template.invoke({"language": "italian", "text": "hi"})
result
"""
ChatPromptValue(messages=[SystemMessage(content='Translate the following into italian:'), HumanMessage(content='hi')])
"""

# Access the messages directly
result.to_messages()
"""
[SystemMessage(content='Translate the following into italian:'), HumanMessage(content='hi')]
"""

API Reference:ChatPromptTemplate

【1.4】Chaining together components with LCEL

The prompt template can be combined with the model and the output parser from above using the pipe (|) operator:

chain = prompt_template | model | parser
chain.invoke({"language": "italian", "text": "hi"})
# 'ciao'

This is a simple example of using LangChain Expression Language (LCEL) to chain together LangChain modules.

Chaining with LCEL has several benefits, including optimized streaming and tracing support.
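Streaming, for instance, works on the composed chain with no extra code: because the chain ends in StrOutputParser, .stream yields string chunks (a small sketch; chunk boundaries depend on the model):

for token in chain.stream({"language": "italian", "text": "hi"}):
    print(token, end="")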

If you look at the LangSmith trace, you can see all three components show up in it.

【1.5】Serving with LangServe

Now that we have built an application, we need to serve it, which is where LangServe comes in. LangServe helps developers deploy LangChain chains as a REST API.


This section shows how to deploy the application with LangServe.

We will create a serve.py file. It contains three things:
1. The definition of the chain we just built
2. A FastAPI app
3. A definition of a route from which to serve the chain, which is done with langserve.add_routes

#!/usr/bin/env python
from fastapi import FastAPI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_openai import ChatOpenAI
from langserve import add_routes

# 1. Create prompt template
system_template = "Translate the following into {language}:"
prompt_template = ChatPromptTemplate.from_messages([
    ('system', system_template),
    ('user', '{text}')
])

# 2. Create model
model = ChatOpenAI()

# 3. Create parser
parser = StrOutputParser()

# 4. Create chain
chain = prompt_template | model | parser

# 5. App definition
app = FastAPI(
    title="LangChain Server",
    version="1.0",
    description="A simple API server using LangChain's Runnable interfaces",
)

# 6. Adding chain route
add_routes(
    app,
    chain,
    path="/chain",
)

if __name__ == "__main__":
    import uvicorn

    uvicorn.run(app, host="localhost", port=8000)

API Reference:ChatPromptTemplate | StrOutputParser | ChatOpenAI

pip install "langserve\[all\]"

Run the file:

python serve.py  

You should see the chain being served at http://localhost:8000.

Playground: every LangServe service comes with a simple built-in UI for configuring and invoking the application, with streaming output and visibility into intermediate steps. Head to http://localhost:8000/chain/playground/ to try it out! Pass in the same input as before - {"language": "italian", "text": "hi"} - and it should respond the same as before.
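Under the hood, add_routes also exposes plain REST endpoints such as /chain/invoke. As a sketch (assuming the server above is running, and that the response carries the result under an "output" key, which is LangServe's documented shape), it can be called with nothing more than the requests library:

import requests

resp = requests.post(
    "http://localhost:8000/chain/invoke",
    json={"input": {"language": "italian", "text": "hi"}},
)
print(resp.json()["output"])  # 'ciao'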

Client

Now let's set up a client for interacting with our service programmatically.

We can do this with langserve.RemoteRunnable.

Using this, we can interact with the served chain as if it were running client-side.

from langserve import RemoteRunnable  
remote_chain = RemoteRunnable("http://localhost:8000/chain/")  
remote_chain.invoke({"language": "italian", "text": "hi"})

【2】Build a Chatbot

https://python.langchain.com/docs/tutorials/chatbot/

This tutorial will familiarize you with Chat Models, Prompt Templates, and Chat History.

【2.1】Overview

An example of how to design and implement an LLM-powered chatbot.

This chatbot will be able to have a conversation and remember previous interactions.

The chatbot we build here will only use the language model to have a conversation. Related concepts:
1. Conversational RAG: enable a chatbot experience over an external source of data
2. Agents: build a chatbot that can take actions

This tutorial covers the basics, which will be helpful for those two more advanced topics, but feel free to skip directly to them if you wish.

https://python.langchain.com/docs/tutorials/qa_chat_history/

https://python.langchain.com/docs/tutorials/agents/

【2.2】Quickstart

pip install -qU langchain-openai

import getpass
import os

os.environ["OPENAI_API_KEY"] = getpass.getpass()

from langchain_openai import ChatOpenAI

model = ChatOpenAI(model="gpt-3.5-turbo")

from langchain_core.messages import HumanMessage

model.invoke([HumanMessage(content="Hi! I'm Bob")])
"""
AIMessage(content='Hello Bob! How can I assist you today?', response_metadata={'token_usage': {'completion_tokens': 10, 'prompt_tokens': 12, 'total_tokens': 22},
'model_name': 'gpt-4o-mini', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None},
id='run-d939617f-0c3b-45e9-a93f-13dafecbd4b5-0',
usage_metadata={'input_tokens': 12, 'output_tokens': 10, 'total_tokens': 22})
"""

model.invoke([HumanMessage(content="What's my name?")])
"""
AIMessage(content="I'm sorry, I don't have access to personal information unless you provide it to me. How may I assist you today?", response_metadata={'token_usage': {'completion_tokens': 26, 'prompt_tokens': 12, 'total_tokens': 38},
'model_name': 'gpt-4o-mini', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None},
id='run-47bc8c20-af7b-4fd2-9345-f0e9fdf18ce3-0',
usage_metadata={'input_tokens': 12, 'output_tokens': 26, 'total_tokens': 38})
"""

from langchain_core.messages import AIMessage

# The entire conversation history has to be passed to the model
model.invoke(
    [
        HumanMessage(content="Hi! I'm Bob"),
        AIMessage(content="Hello Bob! How can I assist you today?"),
        HumanMessage(content="What's my name?"),
    ]
)

API Reference: HumanMessage | AIMessage

【2.3】Message persistence

LangGraph implements a built-in persistence layer, making it ideal for chat applications that support multiple conversational turns.

Wrapping our chat model in a minimal LangGraph application allows us to automatically persist the message history, simplifying the development of multi-turn applications.

LangGraph comes with a simple in-memory checkpointer, which we use below. See its documentation for more detail, including how to use different persistence backends (e.g., SQLite or Postgres).

from langgraph.checkpoint.memory import MemorySaver
from langgraph.graph import START, MessagesState, StateGraph

# Define a new graph
workflow = StateGraph(state_schema=MessagesState)

# Define the function that calls the model
def call_model(state: MessagesState):
    response = model.invoke(state["messages"])
    return {"messages": response}

# Define the (single) node in the graph
workflow.add_edge(START, "model")
workflow.add_node("model", call_model)

# Add memory
memory = MemorySaver()
app = workflow.compile(checkpointer=memory)

# We need to create a config that we pass into the runnable every time.
# This lets the application support multiple conversation threads (e.g., multiple users).
config = {"configurable": {"thread_id": "abc123"}}

query = "Hi! I'm Bob."
input_messages = [HumanMessage(query)]
output = app.invoke({"messages": input_messages}, config)
output["messages"][-1].pretty_print()  # output contains all messages in state
"""
================================== Ai Message ==================================
Hi Bob! How can I assist you today?
"""

query = "What's my name?"
input_messages = [HumanMessage(query)]
output = app.invoke({"messages": input_messages}, config)
output["messages"][-1].pretty_print()
"""
================================== Ai Message ==================================
Your name is Bob! How can I help you today?
"""

# Switch to a different thread_id in the config
config = {"configurable": {"thread_id": "abc234"}}
input_messages = [HumanMessage(query)]
output = app.invoke({"messages": input_messages}, config)
output["messages"][-1].pretty_print()
"""
================================== Ai Message ==================================
I'm sorry, but I don't have access to personal information about you unless you provide it. How can I assist you today?
"""

# Switch back to the original thread_id
config = {"configurable": {"thread_id": "abc123"}}
input_messages = [HumanMessage(query)]
output = app.invoke({"messages": input_messages}, config)
output["messages"][-1].pretty_print()
"""
================================== Ai Message ==================================
Your name is Bob! If there's anything else you'd like to discuss or ask, feel free!
"""

For async support, update the call_model node to be an async function and use .ainvoke when invoking the application:

# Async function for node:
async def call_model(state: MessagesState):
    response = await model.ainvoke(state["messages"])
    return {"messages": response}

# Define graph as before:
workflow = StateGraph(state_schema=MessagesState)
workflow.add_edge(START, "model")
workflow.add_node("model", call_model)
app = workflow.compile(checkpointer=MemorySaver())

# Async invocation:
output = await app.ainvoke({"messages": input_messages}, config)
output["messages"][-1].pretty_print()

So far, all we have done is add a simple persistence layer around the model. We can start making the chatbot more complicated and personalized by adding in a prompt template.

【2.4】Prompt templates

Prompt Templates help turn raw user information into a format that the LLM can work with. In this case, the raw user input is just a message, which we pass to the LLM. Let's now make that a bit more complicated.

First, add a system message with some custom instructions (but still taking messages as input). Next, add more input besides just the messages.

To add a system message, we create a ChatPromptTemplate and use MessagesPlaceholder to pass all the messages in.

from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder

prompt = ChatPromptTemplate.from_messages(
    [
        (
            "system",
            "You talk like a pirate. Answer all questions to the best of your ability.",
        ),
        MessagesPlaceholder(variable_name="messages"),
    ]
)

API Reference: ChatPromptTemplate | MessagesPlaceholder

We can now update the application to incorporate this template:

from langgraph.graph import START, MessagesState, StateGraph

workflow = StateGraph(state_schema=MessagesState)

def call_model(state: MessagesState):
    chain = prompt | model
    response = chain.invoke(state)
    return {"messages": response}

workflow.add_edge(START, "model")
workflow.add_node("model", call_model)

memory = MemorySaver()
app = workflow.compile(checkpointer=memory)

Invoke the application in the same way:

config = {"configurable": {"thread_id": "abc345"}}  
query = "Hi! I'm Jim."  input_messages = [HumanMessage(query)]  
output = app.invoke({"messages": input_messages}, config)  
output["messages"][-1].pretty_print()  """  
==================================[1m Ai Message [0m==================================  
Ahoy there, Jim! What brings ye to these treacherous waters today? Be ye seekin’ treasure, tales, or perhaps a bit o’ knowledge? Speak up, matey!  
"""  query = "What is my name?"  input_messages = [HumanMessage(query)]  
output = app.invoke({"messages": input_messages}, config)  
output["messages"][-1].pretty_print()  """==================================[1m Ai Message [0m==================================  
Ye be callin' yerself Jim, if I be hearin' ye correctly! A fine name for a scallywag such as yerself! What else can I do fer ye, me hearty?  
"""  
#假设提示模板现在看起来像这样:  
prompt = ChatPromptTemplate.from_messages(  [  (  "system",  "You are a helpful assistant. Answer all questions to the best of your ability in {language}.",  ),  MessagesPlaceholder(variable_name="messages"),  ]  
)  

Note that we have added a new language input to the prompt. The application now has two parameters: the input messages and the language. Update the application's state to reflect this:

from typing import Sequence

from langchain_core.messages import BaseMessage
from langgraph.graph.message import add_messages
from typing_extensions import Annotated, TypedDict

class State(TypedDict):
    messages: Annotated[Sequence[BaseMessage], add_messages]
    language: str

workflow = StateGraph(state_schema=State)

def call_model(state: State):
    chain = prompt | model
    response = chain.invoke(state)
    return {"messages": [response]}

workflow.add_edge(START, "model")
workflow.add_node("model", call_model)

memory = MemorySaver()
app = workflow.compile(checkpointer=memory)

API Reference: BaseMessage | add_messages

config = {"configurable": {"thread_id": "abc456"}}  
query = "Hi! I'm Bob."  
language = "Spanish"  input_messages = [HumanMessage(query)]  
output = app.invoke(  {"messages": input_messages, "language": language},  config,  
)  
output["messages"][-1].pretty_print()  
"""  
==================================[1m Ai Message [0m==================================  
¡Hola, Bob! ¿Cómo puedo ayudarte hoy?  
"""  
#整个状态是持久化的,可以省略language参数  
query = "What is my name?"  input_messages = [HumanMessage(query)]  
output = app.invoke(  {"messages": input_messages, "language": language},  config,  
)  
output["messages"][-1].pretty_print()  
"""  
==================================[1m Ai Message [0m==================================  
Tu nombre es Bob.  
"""  

【2.5】Managing Conversation History

One important concept to understand when building chatbots is how to manage conversation history. If left unmanaged, the list of messages grows unbounded and may overflow the LLM's context window. It is therefore important to add a step that limits the size of the messages you pass in.

Importantly, this step should happen after loading previous messages from the message history, but before the prompt template.

We can do this by adding a simple step in front of the prompt that modifies the messages key appropriately, and then wrapping that new chain in the message-history class.

LangChain comes with a few built-in helpers for managing a list of messages. Here we use the trim_messages helper to reduce how many messages we send to the model. The trimmer lets us specify how many tokens to keep, along with other parameters such as whether to always keep the system message and whether to allow partial messages:
from langchain_core.messages import SystemMessage, trim_messages

trimmer = trim_messages(
    max_tokens=65,
    strategy="last",
    token_counter=model,
    include_system=True,
    allow_partial=False,
    start_on="human",
)

messages = [
    SystemMessage(content="you're a good assistant"),
    HumanMessage(content="hi! I'm bob"),
    AIMessage(content="hi!"),
    HumanMessage(content="I like vanilla ice cream"),
    AIMessage(content="nice"),
    HumanMessage(content="whats 2 + 2"),
    AIMessage(content="4"),
    HumanMessage(content="thanks"),
    AIMessage(content="no problem!"),
    HumanMessage(content="having fun?"),
    AIMessage(content="yes!"),
]

trimmer.invoke(messages)

API Reference:SystemMessage | trim_messages

# To use the trimmer in our chain, just run it on the messages input before passing it to the prompt.
workflow = StateGraph(state_schema=State)

def call_model(state: State):
    chain = prompt | model
    trimmed_messages = trimmer.invoke(state["messages"])
    response = chain.invoke(
        {"messages": trimmed_messages, "language": state["language"]}
    )
    return {"messages": [response]}

workflow.add_edge(START, "model")
workflow.add_node("model", call_model)

memory = MemorySaver()
app = workflow.compile(checkpointer=memory)

# If we now ask the model our name, it won't know it, since we trimmed that part of the chat history
config = {"configurable": {"thread_id": "abc567"}}
query = "What is my name?"
language = "English"
input_messages = messages + [HumanMessage(query)]
output = app.invoke(
    {"messages": input_messages, "language": language},
    config,
)
output["messages"][-1].pretty_print()
"""
================================== Ai Message ==================================
I don't know your name. If you'd like to share it, feel free!
"""

# But if we ask about information that is within the last few messages, it remembers:
config = {"configurable": {"thread_id": "abc678"}}
query = "What math problem did I ask?"
language = "English"
input_messages = messages + [HumanMessage(query)]
output = app.invoke(
    {"messages": input_messages, "language": language},
    config,
)
output["messages"][-1].pretty_print()
"""
================================== Ai Message ==================================
You asked what 2 + 2 equals.
"""

【2.6】Streaming

All chains expose a .stream method, and chains that use message history are no exception. We can simply use that method to get back a streaming response.

By default, .stream in a LangGraph application streams application steps, in this case the single step of the model response. Setting stream_mode="messages" lets us stream output tokens instead:

from langchain_core.messages import AIMessage, HumanMessage
from langgraph.checkpoint.memory import MemorySaver
from langgraph.graph import START, StateGraph

# Reuse the State schema and call_model node defined above
workflow = StateGraph(state_schema=State)
workflow.add_edge(START, "model")
workflow.add_node("model", call_model)
memory = MemorySaver()
app = workflow.compile(checkpointer=memory)

config = {"configurable": {"thread_id": "abc789"}}
query = "Hi I'm Todd, please tell me a joke."
language = "English"
input_messages = [HumanMessage(query)]

for chunk, metadata in app.stream(
    {"messages": input_messages, "language": language},
    config,
    stream_mode="messages",
):
    if isinstance(chunk, AIMessage):  # Filter to just model responses
        print(chunk.content, end="|")
"""
|Hi| Todd|!| Here|'s| a| joke| for| you|:
|Why| did| the| scare|crow| win| an| award|?
|Because| he| was| outstanding| in| his| field|!||
"""

【3】Build vector stores and retrievers

https://python.langchain.com/docs/tutorials/retrievers/

This tutorial will familiarize you with LangChain's vector store and retriever abstractions.

These abstractions are designed to support retrieval of data, from (vector) databases and other sources, for integration with LLM workflows.

They are important for applications that fetch data to be reasoned over as part of model inference, as in the case of retrieval-augmented generation, or RAG (see the RAG tutorial).

【3.1】Concepts

This guide focuses on retrieval of text data. It covers the following concepts: Documents, Vector stores, and Retrievers.

【3.2】Documents

LangChain implements a Document abstraction, intended to represent a unit of text and associated metadata. It has two attributes:

1. page_content: a string representing the content;
2. metadata: a dict containing arbitrary metadata.

The metadata attribute can capture information about the source of the document, its relationship to other documents, and other information.

Note that an individual Document object often represents a chunk of a larger document.
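Such chunk-level Documents are typically produced by splitting a larger document. A sketch assuming the langchain-text-splitters package; the sample text and chunk sizes are illustrative:

from langchain_core.documents import Document
from langchain_text_splitters import RecursiveCharacterTextSplitter

long_doc = Document(
    page_content="(a long pets handbook...)",  # placeholder text
    metadata={"source": "pets-handbook"},
)
splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=50)
chunks = splitter.split_documents([long_doc])  # each chunk inherits the source metadata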

Let's generate some sample documents:

from langchain_core.documents import Document

documents = [
    Document(
        page_content="Dogs are great companions, known for their loyalty and friendliness.",
        metadata={"source": "mammal-pets-doc"},
    ),
    Document(
        page_content="Cats are independent pets that often enjoy their own space.",
        metadata={"source": "mammal-pets-doc"},
    ),
    Document(
        page_content="Goldfish are popular pets for beginners, requiring relatively simple care.",
        metadata={"source": "fish-pets-doc"},
    ),
    Document(
        page_content="Parrots are intelligent birds capable of mimicking human speech.",
        metadata={"source": "bird-pets-doc"},
    ),
    Document(
        page_content="Rabbits are social animals that need plenty of space to hop around.",
        metadata={"source": "mammal-pets-doc"},
    ),
]

API Reference:Document

【3.3】Vector stores

Vector search is a common way to store and search over unstructured data (such as unstructured text).

The idea is to store numeric vectors associated with the text. Given a query, we can embed it as a vector of the same dimension and use vector similarity metrics to identify related data in the store.

LangChain VectorStore objects contain methods for adding text and Document objects to the store, and for querying them using various similarity metrics. They are often initialized with embedding models, which determine how text data is translated to numeric vectors.

LangChain includes a suite of integrations with different vector store technologies. Some vector stores are hosted by a provider (e.g., various cloud providers) and require specific credentials to use; some (such as Postgres) run in separate infrastructure that can be run locally or via a third party; others can run in-memory for lightweight workloads. Here we demonstrate LangChain VectorStores using Chroma, which includes an in-memory implementation.

To instantiate a vector store, we usually need to provide an embedding model to specify how text should be converted into a numeric vector. Here we use OpenAI embeddings.

from langchain_chroma import Chroma
from langchain_openai import OpenAIEmbeddings

vectorstore = Chroma.from_documents(
    documents,
    embedding=OpenAIEmbeddings(),
)

API Reference:OpenAIEmbeddings

Calling .from_documents here adds the documents to the vector store. VectorStore implements methods for adding documents that can also be called after the object is instantiated.

Most implementations will allow you to connect to an existing vector store, e.g., by providing a client, index name, or other information. See the documentation for a specific integration for details.
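For instance, documents can also be added after instantiation via add_documents, which is part of the base VectorStore interface. A small sketch, starting from an empty Chroma store:

vectorstore = Chroma(embedding_function=OpenAIEmbeddings())  # empty store
vectorstore.add_documents(documents)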

Once we have instantiated a VectorStore that contains documents, we can query it.

VectorStore includes methods for querying:

  • Synchronously and asynchronously;

  • By string query and by vector;

  • With and without returning similarity scores;

  • By similarity and maximum marginal relevance (to balance similarity with diversity in the retrieved results).

The outputs of these methods generally include a list of Document objects.


vectorstore.similarity_search("cat")  """  
[Document(page_content='Cats are independent pets that often enjoy their own space.', metadata={'source': 'mammal-pets-doc'}),  Document(page_content='Dogs are great companions, known for their loyalty and friendliness.', metadata={'source': 'mammal-pets-doc'}),  Document(page_content='Rabbits are social animals that need plenty of space to hop around.', metadata={'source': 'mammal-pets-doc'}),  Document(page_content='Parrots are intelligent birds capable of mimicking human speech.', metadata={'source': 'bird-pets-doc'})]  
"""  await vectorstore.asimilarity_search("cat")  
"""  
[Document(page_content='Cats are independent pets that often enjoy their own space.', metadata={'source': 'mammal-pets-doc'}),  Document(page_content='Dogs are great companions, known for their loyalty and friendliness.', metadata={'source': 'mammal-pets-doc'}),  Document(page_content='Rabbits are social animals that need plenty of space to hop around.', metadata={'source': 'mammal-pets-doc'}),  Document(page_content='Parrots are intelligent birds capable of mimicking human speech.', metadata={'source': 'bird-pets-doc'})]  
"""
#Return scores:  
# Note that providers implement different scores; Chroma here  
# returns a distance metric that should vary inversely with similarity.  vectorstore.similarity_search_with_score("cat")  """  
[(Document(page_content='Cats are independent pets that often enjoy their own space.', metadata={'source': 'mammal-pets-doc'}),  0.3751849830150604),  (Document(page_content='Dogs are great companions, known for their loyalty and friendliness.', metadata={'source': 'mammal-pets-doc'}),  0.48316916823387146),  (Document(page_content='Rabbits are social animals that need plenty of space to hop around.', metadata={'source': 'mammal-pets-doc'}),  0.49601367115974426),  (Document(page_content='Parrots are intelligent birds capable of mimicking human speech.', metadata={'source': 'bird-pets-doc'}),  0.4972994923591614)]  
"""  #根据与嵌入查询的相似性返回文档:  
embedding = OpenAIEmbeddings().embed_query("cat")  
vectorstore.similarity_search_by_vector(embedding)  """  
[Document(page_content='Cats are independent pets that often enjoy their own space.', metadata={'source': 'mammal-pets-doc'}),  Document(page_content='Dogs are great companions, known for their loyalty and friendliness.', metadata={'source': 'mammal-pets-doc'}),  Document(page_content='Rabbits are social animals that need plenty of space to hop around.', metadata={'source': 'mammal-pets-doc'}),  Document(page_content='Parrots are intelligent birds capable of mimicking human speech.', metadata={'source': 'bird-pets-doc'})]  
"""  

【3.4】Retrievers

LangChain VectorStore objects do not subclass Runnable, so they cannot immediately be integrated into LangChain Expression Language chains.

LangChain Retrievers are Runnables, so they implement a standard set of methods (e.g., synchronous and asynchronous invoke and batch operations) and are designed to be incorporated into LCEL chains.

1) We can create a simple version ourselves, without subclassing Retriever. If we choose which method to use for retrieving documents, we can create a runnable easily. Below we build one around the similarity_search method:

from langchain_core.documents import Document
from langchain_core.runnables import RunnableLambda

retriever = RunnableLambda(vectorstore.similarity_search).bind(k=1)  # select top result

retriever.batch(["cat", "shark"])
"""
[[Document(page_content='Cats are independent pets that often enjoy their own space.', metadata={'source': 'mammal-pets-doc'})],
 [Document(page_content='Goldfish are popular pets for beginners, requiring relatively simple care.', metadata={'source': 'fish-pets-doc'})]]
"""

2) Vector stores implement an as_retriever method that generates a retriever, specifically a VectorStoreRetriever. These retrievers include specific search_type and search_kwargs attributes that identify which methods of the underlying vector store to call, and how to parameterize them. For example, we can replicate the above with the following:

retriever = vectorstore.as_retriever(
    search_type="similarity",
    search_kwargs={"k": 1},
)

retriever.batch(["cat", "shark"])
"""
[[Document(page_content='Cats are independent pets that often enjoy their own space.', metadata={'source': 'mammal-pets-doc'})],
 [Document(page_content='Goldfish are popular pets for beginners, requiring relatively simple care.', metadata={'source': 'fish-pets-doc'})]]
"""

API Reference:Document | RunnableLambda

VectorStoreRetriever supports search types of "similarity" (default), "mmr" (maximum marginal relevance, described above), and "similarity_score_threshold". The latter can be used to threshold the documents output by the retriever by similarity score.
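For example, a score-threshold retriever might look like this (a small sketch; the threshold value is illustrative):

retriever = vectorstore.as_retriever(
    search_type="similarity_score_threshold",
    search_kwargs={"score_threshold": 0.5, "k": 4},
)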

Retrievers can easily be incorporated into more complex applications, such as retrieval-augmented generation (RAG) applications that combine a given question with retrieved context into a prompt for an LLM. Below we show a minimal example.

pip install -qU langchain-openai

import getpass
import os

os.environ["OPENAI_API_KEY"] = getpass.getpass()

from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini")

from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnablePassthrough

message = """
Answer this question using the provided context only.

{question}

Context:
{context}
"""

prompt = ChatPromptTemplate.from_messages([("human", message)])

rag_chain = {"context": retriever, "question": RunnablePassthrough()} | prompt | llm

response = rag_chain.invoke("tell me about cats")
print(response.content)
# Cats are independent pets that often enjoy their own space.

API Reference: ChatPromptTemplate | RunnablePassthrough

【3.5】Learn more

Retrieval strategies can be rich and complex. For example:
1. Hard rules and filters can be inferred from the query (e.g., "using documents published after 2020");

2. Documents can be returned that are linked to the retrieved context in some way (e.g., via some document taxonomy);

3. Multiple embeddings can be generated for each unit of context;

4. Results from multiple retrievers can be ensembled (see the sketch after this list);

5. Documents can be weighted, e.g., to weigh recent documents higher.
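As a hedged sketch of ensembling (point 4), assuming the EnsembleRetriever from the langchain package and a second, hypothetical other_retriever:

from langchain.retrievers import EnsembleRetriever

# weights control each retriever's contribution to the fused ranking
ensemble = EnsembleRetriever(
    retrievers=[retriever, other_retriever],  # other_retriever is hypothetical
    weights=[0.5, 0.5],
)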

【4】Build an Agent

https://python.langchain.com/docs/tutorials/agents/

This tutorial will familiarize you with Chat Models, Tools, and Agents.

By themselves, language models can't take actions; they only output text. A big use case for LangChain is building agents.

Agents are systems that use LLMs as reasoning engines to determine which actions to take and the inputs to pass to them.

After executing actions, the results can be fed back into the LLM to determine whether more actions are needed, or whether it is okay to finish.

In this tutorial we build an agent that can interact with a search engine. You will be able to ask this agent questions, watch it call the search tool, and have conversations with it.

【4.1】End-to-end agent

Below is a fully functional agent that uses an LLM to decide which tools to use.

The agent is equipped with a generic search tool and has conversational memory, so it can be used as a multi-turn chatbot.

# Import relevant functionality
from langchain_anthropic import ChatAnthropic
from langchain_community.tools.tavily_search import TavilySearchResults
from langchain_core.messages import HumanMessage
from langgraph.checkpoint.memory import MemorySaver
from langgraph.prebuilt import create_react_agent

# Create the agent
memory = MemorySaver()
model = ChatAnthropic(model_name="claude-3-sonnet-20240229")
search = TavilySearchResults(max_results=2)
tools = [search]
agent_executor = create_react_agent(model, tools, checkpointer=memory)

# Use the agent
config = {"configurable": {"thread_id": "abc123"}}
for chunk in agent_executor.stream(
    {"messages": [HumanMessage(content="hi im bob! and i live in sf")]}, config
):
    print(chunk)
    print("----")

for chunk in agent_executor.stream(
    {"messages": [HumanMessage(content="whats the weather where I live?")]}, config
):
    print(chunk)
    print("----")
"""
{'agent': {'messages': [AIMessage(content="Hello Bob! Since you didn't ask a specific question, I don't need to use any tools to respond. It's nice to meet you. San Francisco is a wonderful city with lots to see and do. I hope you're enjoying living there. Please let me know if you have any other questions!", response_metadata={'id': 'msg_01Mmfzfs9m4XMgVzsCZYMWqH', 'model': 'claude-3-sonnet-20240229', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 271, 'output_tokens': 65}}, id='run-44c57f9c-a637-4888-b7d9-6d985031ae48-0', usage_metadata={'input_tokens': 271, 'output_tokens': 65, 'total_tokens': 336})]}}  
----  
{'agent': {'messages': [AIMessage(content=[{'text': 'To get current weather information for your location in San Francisco, let me invoke the search tool:', 'type': 'text'}, {'id': 'toolu_01BGEyQaSz3pTq8RwUUHSRoo', 'input': {'query': 'san francisco weather'}, 'name': 'tavily_search_results_json', 'type': 'tool_use'}], response_metadata={'id': 'msg_013AVSVsRLKYZjduLpJBY4us', 'model': 'claude-3-sonnet-20240229', 'stop_reason': 'tool_use', 'stop_sequence': None, 'usage': {'input_tokens': 347, 'output_tokens': 80}}, id='run-de7923b6-5ee2-4ebe-bd95-5aed4933d0e3-0', tool_calls=[{'name': 'tavily_search_results_json', 'args': {'query': 'san francisco weather'}, 'id': 'toolu_01BGEyQaSz3pTq8RwUUHSRoo'}], usage_metadata={'input_tokens': 347, 'output_tokens': 80, 'total_tokens': 427})]}}  
----  
{'tools': {'messages': [ToolMessage(content='[{"url": "https://www.weatherapi.com/", "content": "{\'location\': {\'name\': \'San Francisco\', \'region\': \'California\', \'country\': \'United States of America\', \'lat\': 37.78, \'lon\': -122.42, \'tz_id\': \'America/Los_Angeles\', \'localtime_epoch\': 1717238643, \'localtime\': \'2024-06-01 3:44\'}, \'current\': {\'last_updated_epoch\': 1717237800, \'last_updated\': \'2024-06-01 03:30\', \'temp_c\': 12.0, \'temp_f\': 53.6, \'is_day\': 0, \'condition\': {\'text\': \'Mist\', \'icon\': \'//cdn.weatherapi.com/weather/64x64/night/143.png\', \'code\': 1030}, \'wind_mph\': 5.6, \'wind_kph\': 9.0, \'wind_degree\': 310, \'wind_dir\': \'NW\', \'pressure_mb\': 1013.0, \'pressure_in\': 29.92, \'precip_mm\': 0.0, \'precip_in\': 0.0, \'humidity\': 88, \'cloud\': 100, \'feelslike_c\': 10.5, \'feelslike_f\': 50.8, \'windchill_c\': 9.3, \'windchill_f\': 48.7, \'heatindex_c\': 11.1, \'heatindex_f\': 51.9, \'dewpoint_c\': 8.8, \'dewpoint_f\': 47.8, \'vis_km\': 6.4, \'vis_miles\': 3.0, \'uv\': 1.0, \'gust_mph\': 12.5, \'gust_kph\': 20.1}}"}, {"url": "https://www.timeanddate.com/weather/usa/san-francisco/historic", "content": "Past Weather in San Francisco, California, USA \\u2014 Yesterday and Last 2 Weeks. Time/General. Weather. Time Zone. DST Changes. Sun & Moon. Weather Today Weather Hourly 14 Day Forecast Yesterday/Past Weather Climate (Averages) Currently: 68 \\u00b0F. Passing clouds."}]', name='tavily_search_results_json', tool_call_id='toolu_01BGEyQaSz3pTq8RwUUHSRoo')]}}  
----  
{'agent': {'messages': [AIMessage(content='Based on the search results, the current weather in San Francisco is:\n\nTemperature: 53.6°F (12°C)\nConditions: Misty\nWind: 5.6 mph (9 kph) from the Northwest\nHumidity: 88%\nCloud Cover: 100% \n\nThe results provide detailed information like wind chill, heat index, visibility and more. It looks like a typical cool, foggy morning in San Francisco. Let me know if you need any other details about the weather where you live!', response_metadata={'id': 'msg_019WGLbaojuNdbCnqac7zaGW', 'model': 'claude-3-sonnet-20240229', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 1035, 'output_tokens': 120}}, id='run-1bb68bf3-b212-4ef4-8a31-10c830421c78-0', usage_metadata={'input_tokens': 1035, 'output_tokens': 120, 'total_tokens': 1155})]}}  
----  
"""  

【4.2】Define tools

First we need to create the tools we want to use. Our main tool of choice will be Tavily, a search engine.

LangChain has a built-in tool that makes it easy to use the Tavily search engine as a tool.

from langchain_community.tools.tavily_search import TavilySearchResults

search = TavilySearchResults(max_results=2)
search_results = search.invoke("what is the weather in SF")
print(search_results)

# If we want, we can create other tools.
# Once we have all the tools we want, we can put them in a list that we will reference later.
tools = [search]
"""
[{'url': 'https://www.weatherapi.com/',  'content': "{'location': {'name': 'San Francisco', 'region': 'California', 'country': 'United States of America', 'lat': 37.78, 'lon': -122.42, 'tz_id': 'America/Los_Angeles', 'localtime_epoch': 1717238703, 'localtime': '2024-06-01 3:45'}, 'current': {'last_updated_epoch': 1717237800, 'last_updated': '2024-06-01 03:30', 'temp_c': 12.0, 'temp_f': 53.6, 'is_day': 0, 'condition': {'text': 'Mist', 'icon': '//cdn.weatherapi.com/weather/64x64/night/143.png', 'code': 1030}, 'wind_mph': 5.6, 'wind_kph': 9.0, 'wind_degree': 310, 'wind_dir': 'NW', 'pressure_mb': 1013.0, 'pressure_in': 29.92, 'precip_mm': 0.0, 'precip_in': 0.0, 'humidity': 88, 'cloud': 100, 'feelslike_c': 10.5, 'feelslike_f': 50.8, 'windchill_c': 9.3, 'windchill_f': 48.7, 'heatindex_c': 11.1, 'heatindex_f': 51.9, 'dewpoint_c': 8.8, 'dewpoint_f': 47.8, 'vis_km': 6.4, 'vis_miles': 3.0, 'uv': 1.0, 'gust_mph': 12.5, 'gust_kph': 20.1}}"},  {'url': 'https://www.wunderground.com/hourly/us/ca/san-francisco/date/2024-01-06',  'content': 'Current Weather for Popular Cities . San Francisco, CA 58 ° F Partly Cloudy; Manhattan, NY warning 51 ° F Cloudy; Schiller Park, IL (60176) warning 51 ° F Fair; Boston, MA warning 41 ° F ...'}]  
"""

API Reference:TavilySearchResults

【4.3】Using Language Models

Next, let's learn how to use a language model to call tools.

pip install -qU langchain-openai

import getpass
import os

os.environ["OPENAI_API_KEY"] = getpass.getpass()

from langchain_openai import ChatOpenAI

model = ChatOpenAI(model="gpt-4")

from langchain_core.messages import HumanMessage

# Call the language model by passing in a list of messages; .content is a string
response = model.invoke([HumanMessage(content="hi!")])
response.content
# 'Hi there!'

# Use .bind_tools to give the language model knowledge of these tools
model_with_tools = model.bind_tools(tools)

response = model_with_tools.invoke([HumanMessage(content="Hi!")])
print(f"ContentString: {response.content}")
print(f"ToolCalls: {response.tool_calls}")
"""
ContentString: Hello!
ToolCalls: []
"""

# Now call it with some input that would expect a tool to be called
response = model_with_tools.invoke([HumanMessage(content="What's the weather in SF?")])
print(f"ContentString: {response.content}")
print(f"ToolCalls: {response.tool_calls}")
"""
ContentString:
ToolCalls: [{'name': 'tavily_search_results_json', 'args': {'query': 'weather san francisco'}, 'id': 'toolu_01VTP7DUvSfgtYxsq9x4EwMp'}]
"""

API Reference:HumanMessage

Notice that the ContentString now has no text content, but there is a tool call! It wants us to call the Tavily search tool.

This isn't calling the tool yet; it's just telling us to. In order to actually call it, we'll want to create our agent.
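To make this concrete, nothing stops us from executing the requested call by hand; the args dict comes straight from response.tool_calls (a small sketch):

tool_call = response.tool_calls[0]
result = search.invoke(tool_call["args"])  # runs the Tavily search with the model-chosen query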

【4.4】Create the agent

Now that we have defined the tools and the LLM, we can create the agent.

We will use LangGraph to construct the agent. We are using a high-level interface to build the agent here, but a nice thing about LangGraph is that this high-level interface is backed by a low-level, highly controllable API, in case you want to modify the agent logic. We can initialize the agent with the LLM and the tools.

Note that we are passing in model, not model_with_tools. That is because create_react_agent will call .bind_tools for us under the hood.

from langchain_openai import ChatOpenAI

model = ChatOpenAI(model="gpt-4")

from langchain_community.tools.tavily_search import TavilySearchResults

search = TavilySearchResults(max_results=2)
tools = [search]

from langgraph.prebuilt import create_react_agent

agent_executor = create_react_agent(model, tools)

API Reference:create_react_agent

【4.5】Run the agent

We can now run the agent on a few queries! Note that for now, these are all stateless queries (it won't remember previous interactions).

Note that the agent returns the final state at the end of the interaction (which includes any inputs; we will see later how to get only the outputs).

First up, let's see how it responds when there's no need to call a tool:

from langgraph.prebuilt import create_react_agent

# model and tools created as above
agent_executor = create_react_agent(model, tools)

response = agent_executor.invoke({"messages": [HumanMessage(content="hi!")]})
response["messages"]
"""
[HumanMessage(content='hi!', id='a820fcc5-9b87-457a-9af0-f21768143ee3'),  AIMessage(content='Hello!', response_metadata={'id': 'msg_01VbC493X1VEDyusgttiEr1z', 'model': 'claude-3-sonnet-20240229', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 264, 'output_tokens': 5}}, id='run-0e0ddae8-a85b-4bd6-947c-c36c857a4698-0', usage_metadata={'input_tokens': 264, 'output_tokens': 5, 'total_tokens': 269})]  
"""  response = agent_executor.invoke(  {"messages": [HumanMessage(content="whats the weather in sf?")]}  
)  
response["messages"]  
"""  
[HumanMessage(content='whats the weather in sf?', id='1d6c96bb-4ddb-415c-a579-a07d5264de0d'),  AIMessage(content=[{'id': 'toolu_01Y5EK4bw2LqsQXeaUv8iueF', 'input': {'query': 'weather in san francisco'}, 'name': 'tavily_search_results_json', 'type': 'tool_use'}], response_metadata={'id': 'msg_0132wQUcEduJ8UKVVVqwJzM4', 'model': 'claude-3-sonnet-20240229', 'stop_reason': 'tool_use', 'stop_sequence': None, 'usage': {'input_tokens': 269, 'output_tokens': 61}}, id='run-26d5e5e8-d4fd-46d2-a197-87b95b10e823-0', tool_calls=[{'name': 'tavily_search_results_json', 'args': {'query': 'weather in san francisco'}, 'id': 'toolu_01Y5EK4bw2LqsQXeaUv8iueF'}], usage_metadata={'input_tokens': 269, 'output_tokens': 61, 'total_tokens': 330}),  ToolMessage(content='[{"url": "https://www.weatherapi.com/", "content": "{\'location\': {\'name\': \'San Francisco\', \'region\': \'California\', \'country\': \'United States of America\', \'lat\': 37.78, \'lon\': -122.42, \'tz_id\': \'America/Los_Angeles\', \'localtime_epoch\': 1717238703, \'localtime\': \'2024-06-01 3:45\'}, \'current\': {\'last_updated_epoch\': 1717237800, \'last_updated\': \'2024-06-01 03:30\', \'temp_c\': 12.0, \'temp_f\': 53.6, \'is_day\': 0, \'condition\': {\'text\': \'Mist\', \'icon\': \'//cdn.weatherapi.com/weather/64x64/night/143.png\', \'code\': 1030}, \'wind_mph\': 5.6, \'wind_kph\': 9.0, \'wind_degree\': 310, \'wind_dir\': \'NW\', \'pressure_mb\': 1013.0, \'pressure_in\': 29.92, \'precip_mm\': 0.0, \'precip_in\': 0.0, \'humidity\': 88, \'cloud\': 100, \'feelslike_c\': 10.5, \'feelslike_f\': 50.8, \'windchill_c\': 9.3, \'windchill_f\': 48.7, \'heatindex_c\': 11.1, \'heatindex_f\': 51.9, \'dewpoint_c\': 8.8, \'dewpoint_f\': 47.8, \'vis_km\': 6.4, \'vis_miles\': 3.0, \'uv\': 1.0, \'gust_mph\': 12.5, \'gust_kph\': 20.1}}"}, {"url": "https://www.timeanddate.com/weather/usa/san-francisco/hourly", "content": "Sun & Moon. Weather Today Weather Hourly 14 Day Forecast Yesterday/Past Weather Climate (Averages) Currently: 59 \\u00b0F. Passing clouds. (Weather station: San Francisco International Airport, USA). See more current weather."}]', name='tavily_search_results_json', id='37aa1fd9-b232-4a02-bd22-bc5b9b44a22c', tool_call_id='toolu_01Y5EK4bw2LqsQXeaUv8iueF'),  AIMessage(content='Based on the search results, here is a summary of the current weather in San Francisco:\n\nThe weather in San Francisco is currently misty with a temperature of around 53°F (12°C). There is complete cloud cover and moderate winds from the northwest around 5-9 mph (9-14 km/h). Humidity is high at 88%. Visibility is around 3 miles (6.4 km). \n\nThe results provide an hourly forecast as well as current conditions from a couple different weather sources. Let me know if you need any additional details about the San Francisco weather!', response_metadata={'id': 'msg_01BRX9mrT19nBDdHYtR7wJ92', 'model': 'claude-3-sonnet-20240229', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 920, 'output_tokens': 132}}, id='run-d0325583-3ddc-4432-b2b2-d023eb97660f-0', usage_metadata={'input_tokens': 920, 'output_tokens': 132, 'total_tokens': 1052})]  
"""  

【4.6】Streaming Messages

We have seen how to call the agent with .invoke to get a final response.

If the agent is executing multiple steps, that may take a while.

In order to show intermediate progress, we can stream back messages as they occur.

from langgraph.prebuilt import create_react_agent

# model and tools created as above
agent_executor = create_react_agent(model, tools)

for chunk in agent_executor.stream(
    {"messages": [HumanMessage(content="whats the weather in sf?")]}
):
    print(chunk)
    print("----")
"""
{'agent': {'messages': [AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_50Kb8zHmFqPYavQwF5TgcOH8', 'function': {'arguments': '{\n  "query": "current weather in San Francisco"\n}', 'name': 'tavily_search_results_json'}, 'type': 'function'}]}, response_metadata={'token_usage': {'completion_tokens': 23, 'prompt_tokens': 134, 'total_tokens': 157}, 'model_name': 'gpt-4', 'system_fingerprint': None, 'finish_reason': 'tool_calls', 'logprobs': None}, id='run-042d5feb-c2cc-4c3f-b8fd-dbc22fd0bc07-0', tool_calls=[{'name': 'tavily_search_results_json', 'args': {'query': 'current weather in San Francisco'}, 'id': 'call_50Kb8zHmFqPYavQwF5TgcOH8'}])]}}  
----  
{'action': {'messages': [ToolMessage(content='[{"url": "https://www.weatherapi.com/", "content": "{\'location\': {\'name\': \'San Francisco\', \'region\': \'California\', \'country\': \'United States of America\', \'lat\': 37.78, \'lon\': -122.42, \'tz_id\': \'America/Los_Angeles\', \'localtime_epoch\': 1714426906, \'localtime\': \'2024-04-29 14:41\'}, \'current\': {\'last_updated_epoch\': 1714426200, \'last_updated\': \'2024-04-29 14:30\', \'temp_c\': 17.8, \'temp_f\': 64.0, \'is_day\': 1, \'condition\': {\'text\': \'Sunny\', \'icon\': \'//cdn.weatherapi.com/weather/64x64/day/113.png\', \'code\': 1000}, \'wind_mph\': 23.0, \'wind_kph\': 37.1, \'wind_degree\': 290, \'wind_dir\': \'WNW\', \'pressure_mb\': 1019.0, \'pressure_in\': 30.09, \'precip_mm\': 0.0, \'precip_in\': 0.0, \'humidity\': 50, \'cloud\': 0, \'feelslike_c\': 17.8, \'feelslike_f\': 64.0, \'vis_km\': 16.0, \'vis_miles\': 9.0, \'uv\': 5.0, \'gust_mph\': 27.5, \'gust_kph\': 44.3}}"}, {"url": "https://world-weather.info/forecast/usa/san_francisco/april-2024/", "content": "Extended weather forecast in San Francisco. Hourly Week 10 days 14 days 30 days Year. Detailed \\u26a1 San Francisco Weather Forecast for April 2024 - day/night \\ud83c\\udf21\\ufe0f temperatures, precipitations - World-Weather.info."}]', name='tavily_search_results_json', id='d88320ac-3fe1-4f73-870a-3681f15f6982', tool_call_id='call_50Kb8zHmFqPYavQwF5TgcOH8')]}}  
----  
{'agent': {'messages': [AIMessage(content='The current weather in San Francisco, California is sunny with a temperature of 17.8°C (64.0°F). The wind is coming from the WNW at 23.0 mph. The humidity is at 50%. [source](https://www.weatherapi.com/)', response_metadata={'token_usage': {'completion_tokens': 58, 'prompt_tokens': 602, 'total_tokens': 660}, 'model_name': 'gpt-4', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-0cd2a507-ded5-4601-afe3-3807400e9989-0')]}}  
----   
"""  

【4.7】Streaming tokens

In addition to streaming back messages, it is also useful to stream back tokens.

We can do this with the .astream_events method.

from langgraph.prebuilt import create_react_agent

# model and tools created as above
agent_executor = create_react_agent(model, tools)

async for event in agent_executor.astream_events(
    {"messages": [HumanMessage(content="whats the weather in sf?")]}, version="v1"
):
    kind = event["event"]
    if kind == "on_chain_start":
        if (
            event["name"] == "Agent"
        ):  # Was assigned when creating the agent with `.with_config({"run_name": "Agent"})`
            print(
                f"Starting agent: {event['name']} with input: {event['data'].get('input')}"
            )
    elif kind == "on_chain_end":
        if (
            event["name"] == "Agent"
        ):  # Was assigned when creating the agent with `.with_config({"run_name": "Agent"})`
            print()
            print("--")
            print(
                f"Done agent: {event['name']} with output: {event['data'].get('output')['output']}"
            )
    if kind == "on_chat_model_stream":
        content = event["data"]["chunk"].content
        if content:
            # Empty content in the context of OpenAI means
            # that the model is asking for a tool to be invoked.
            # So we only print non-empty content
            print(content, end="|")
    elif kind == "on_tool_start":
        print("--")
        print(
            f"Starting tool: {event['name']} with inputs: {event['data'].get('input')}"
        )
    elif kind == "on_tool_end":
        print(f"Done tool: {event['name']}")
        print(f"Tool output was: {event['data'].get('output')}")
        print("--")
"""
--  
Starting tool: tavily_search_results_json with inputs: {'query': 'current weather in San Francisco'}  
Done tool: tavily_search_results_json  
Tool output was: [{'url': 'https://www.weatherapi.com/', 'content': "{'location': {'name': 'San Francisco', 'region': 'California', 'country': 'United States of America', 'lat': 37.78, 'lon': -122.42, 'tz_id': 'America/Los_Angeles', 'localtime_epoch': 1714427052, 'localtime': '2024-04-29 14:44'}, 'current': {'last_updated_epoch': 1714426200, 'last_updated': '2024-04-29 14:30', 'temp_c': 17.8, 'temp_f': 64.0, 'is_day': 1, 'condition': {'text': 'Sunny', 'icon': '//cdn.weatherapi.com/weather/64x64/day/113.png', 'code': 1000}, 'wind_mph': 23.0, 'wind_kph': 37.1, 'wind_degree': 290, 'wind_dir': 'WNW', 'pressure_mb': 1019.0, 'pressure_in': 30.09, 'precip_mm': 0.0, 'precip_in': 0.0, 'humidity': 50, 'cloud': 0, 'feelslike_c': 17.8, 'feelslike_f': 64.0, 'vis_km': 16.0, 'vis_miles': 9.0, 'uv': 5.0, 'gust_mph': 27.5, 'gust_kph': 44.3}}"}, {'url': 'https://www.weathertab.com/en/c/e/04/united-states/california/san-francisco/', 'content': 'San Francisco Weather Forecast for Apr 2024 - Risk of Rain Graph. Rain Risk Graph: Monthly Overview. Bar heights indicate rain risk percentages. Yellow bars mark low-risk days, while black and grey bars signal higher risks. Grey-yellow bars act as buffers, advising to keep at least one day clear from the riskier grey and black days, guiding ...'}]  
--  
The| current| weather| in| San| Francisco|,| California|,| USA| is| sunny| with| a| temperature| of| |17|.|8|°C| (|64|.|0|°F|).| The| wind| is| blowing| from| the| W|NW| at| a| speed| of| |37|.|1| k|ph| (|23|.|0| mph|).| The| humidity| level| is| at| |50|%.| [|Source|](|https|://|www|.weather|api|.com|/)|  
"""  

【4.8】Adding in memory

from langgraph.checkpoint.memory import MemorySaver

memory = MemorySaver()

# model and tools created as above
agent_executor = create_react_agent(model, tools, checkpointer=memory)

config = {"configurable": {"thread_id": "abc123"}}
for chunk in agent_executor.stream(
    {"messages": [HumanMessage(content="hi im bob!")]}, config
):
    print(chunk)
    print("----")
"""
{'agent': {'messages': [AIMessage(content="Hello Bob! It's nice to meet you again.", response_metadata={'id': 'msg_013C1z2ZySagEFwmU1EsysR2', 'model': 'claude-3-sonnet-20240229', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 1162, 'output_tokens': 14}}, id='run-f878acfd-d195-44e8-9166-e2796317e3f8-0', usage_metadata={'input_tokens': 1162, 'output_tokens': 14, 'total_tokens': 1176})]}}  
----  
"""  

API Reference:MemorySaver

To start a new conversation, change the thread ID that is used.

config = {"configurable": {"thread_id": "xyz123"}}  
for chunk in agent_executor.stream(  {"messages": [HumanMessage(content="whats my name?")]}, config  
):  print(chunk)  print("----")  """  
{'agent': {'messages': [AIMessage(content="I'm afraid I don't actually know your name. As an AI assistant without personal information about you, I don't have a specific name associated with our conversation.", response_metadata={'id': 'msg_01NoaXNNYZKSoBncPcLkdcbo', 'model': 'claude-3-sonnet-20240229', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 267, 'output_tokens': 36}}, id='run-c9f7df3d-525a-4d8f-bbcf-a5b4a5d2e4b0-0', usage_metadata={'input_tokens': 267, 'output_tokens': 36, 'total_tokens': 303})]}}  
----  
"""  # 如何系统的去学习大模型LLM ?大模型时代,火爆出圈的LLM大模型让程序员们开始重新评估自己的本领。 “`AI会取代那些行业`?”“`谁的饭碗又将不保了?`”等问题热议不断。事实上,**`抢你饭碗的不是AI,而是会利用AI的人。`**继`科大讯飞、阿里、华为`等巨头公司发布AI产品后,很多中小企业也陆续进场!**超高年薪,挖掘AI大模型人才!** 如今大厂老板们,也更倾向于会AI的人,普通程序员,还有应对的机会吗?##### 与其焦虑……不如成为「`掌握AI工具的技术人`」,毕竟AI时代,**谁先尝试,谁就能占得先机!****但是LLM相关的内容很多,现在网上的老课程老教材关于LLM又太少。所以现在小白入门就只能靠自学,学习成本和门槛很高。**针对所有自学遇到困难的同学们,我帮大家系统梳理大模型学习脉络,将这份 `LLM大模型资料` 分享出来:包括`LLM大模型书籍、640套大模型行业报告、LLM大模型学习视频、LLM大模型学习路线、开源大模型学习教程`等, 😝有需要的小伙伴,可以 **扫描下方二维码**领取🆓**↓↓↓**> 👉[<font color="#FF0000">CSDN大礼包</font>🎁:全网最全《LLM大模型入门+进阶学习资源包》免费分享<b><font
> color="#177f3e">(安全链接,放心点击)</font></b>]()👈​<img src="https://i-blog.csdnimg.cn/blog_migrate/35a667356d00b606992c228becf1f3a8.png" style="margin: auto" />## 一、LLM大模型经典书籍AI大模型已经成为了当今科技领域的一大热点,那以下这些大模型书籍就是非常不错的学习资源。![在这里插入图片描述](https://i-blog.csdnimg.cn/direct/faf9ba75d043426b8194a174373e2286.jpeg)## 二、640套LLM大模型报告合集这套包含640份报告的合集,涵盖了大模型的理论研究、技术实现、行业应用等多个方面。无论您是科研人员、工程师,还是对AI大模型感兴趣的爱好者,这套报告合集都将为您提供宝贵的信息和启示。(几乎涵盖所有行业)![在这里插入图片描述](https://i-blog.csdnimg.cn/direct/a7e7a295c8e347ebaa1587ff4eb280b7.jpeg)## 三、LLM大模型系列视频教程![在这里插入图片描述](https://i-blog.csdnimg.cn/direct/9035dc7515024ca7af1471d5a502b64b.jpeg)### 四、LLM大模型开源教程(LLaLA/Meta/chatglm/chatgpt)![在这里插入图片描述](https://i-blog.csdnimg.cn/direct/1d16ae011302436c9903270c0129bbbf.jpeg)# LLM大模型学习路线 **↓**### 阶段1:AI大模型时代的基础理解-   **目标**:了解AI大模型的基本概念、发展历程和核心原理。-   **内容**:-   L1.1 人工智能简述与大模型起源-   L1.2 大模型与通用人工智能-   L1.3 GPT模型的发展历程-   L1.4 模型工程-   L1.4.1 知识大模型-   L1.4.2 生产大模型-   L1.4.3 模型工程方法论-   L1.4.4 模型工程实践-   L1.5 GPT应用案例### 阶段2:AI大模型API应用开发工程-   **目标**:掌握AI大模型API的使用和开发,以及相关的编程技能。-   **内容**:-   L2.1 API接口-   L2.1.1 OpenAI API接口-   L2.1.2 Python接口接入-   L2.1.3 BOT工具类框架-   L2.1.4 代码示例-   L2.2 Prompt框架-   L2.3 流水线工程-   L2.4 总结与展望### 阶段3:AI大模型应用架构实践-   **目标**:深入理解AI大模型的应用架构,并能够进行私有化部署。-   **内容**:-   L3.1 Agent模型框架-   L3.2 MetaGPT-   L3.3 ChatGLM-   L3.4 LLAMA-   L3.5 其他大模型介绍### 阶段4:AI大模型私有化部署-   **目标**:掌握多种AI大模型的私有化部署,包括多模态和特定领域模型。-   **内容**:-   L4.1 模型私有化部署概述-   L4.2 模型私有化部署的关键技术-   L4.3 模型私有化部署的实施步骤-   L4.4 模型私有化部署的应用场景这份 `LLM大模型资料` 包括`LLM大模型书籍、640套大模型行业报告、LLM大模型学习视频、LLM大模型学习路线、开源大模型学习教程`等, 😝有需要的小伙伴,可以 **扫描下方二维码**领取🆓**↓↓↓**> 👉[<font color="#FF0000">CSDN大礼包</font>🎁:全网最全《LLM大模型入门+进阶学习资源包》免费分享<b><font
> color="#177f3e">(安全链接,放心点击)</font></b>]()👈​<img src="https://i-blog.csdnimg.cn/blog_migrate/35a667356d00b606992c228becf1f3a8.png" style="margin: auto" />
