autogen_agentchat.agents#
This module initializes the various pre-defined agents provided by the package. BaseChatAgent is the base class for all agents in AgentChat.
- class AssistantAgent(name: str, model_client: ChatCompletionClient, *, tools: List[BaseTool[Any, Any] | Callable[[...], Any] | Callable[[...], Awaitable[Any]]] | None = None, workbench: Workbench | None = None, handoffs: List[Handoff | str] | None = None, model_context: ChatCompletionContext | None = None, description: str = 'An agent that provides assistance with ability to use tools.', system_message: str | None = 'You are a helpful AI assistant. Solve tasks using your tools. Reply with TERMINATE when the task has been completed.', model_client_stream: bool = False, reflect_on_tool_use: bool | None = None, tool_call_summary_format: str = '{result}', tool_call_summary_formatter: Callable[[FunctionCall, FunctionExecutionResult], str] | None = None, output_content_type: type[BaseModel] | None = None, output_content_type_format: str | None = None, memory: Sequence[Memory] | None = None, metadata: Dict[str, str] | None = None)[源代码]#
Bases: BaseChatAgent, Component[AssistantAgentConfig]

An agent that provides assistance with tool use.
The on_messages() method returns a Response in which chat_message is the final response message.

The on_messages_stream() method creates an async generator that produces the inner messages as they are created, and the Response object as the last item before closing the generator.

The BaseChatAgent.run() method returns a TaskResult containing the messages produced by the agent. In the list of messages (messages), the last message is the final response message.

The BaseChatAgent.run_stream() method creates an async generator that produces the inner messages as they are created, and the TaskResult object as the last item before closing the generator.

Note

The caller must only pass the new messages to the agent on each call to the on_messages(), on_messages_stream(), BaseChatAgent.run(), or BaseChatAgent.run_stream() methods. The agent maintains its state between calls to these methods. Do not pass the entire conversation history to the agent on each call.

Warning
The assistant agent is not thread-safe or coroutine-safe. It should not be shared between multiple tasks or coroutines, and it should not call its methods concurrently.
The following diagram shows how the assistant agent works:
Structured output:
If output_content_type is set, the agent will respond with a StructuredMessage instead of a TextMessage in the final response by default.

Note

Currently, setting output_content_type prevents the agent from being able to use the load_component and dump_component methods for serializable configuration. This will be fixed in a future release.
Tool call behavior:
If the model returns no tool call, then the response is immediately returned as a TextMessage or a StructuredMessage (when using structured output) in chat_message.

When the model returns tool calls, they will be executed right away:

When reflect_on_tool_use is False, the tool call results are returned as a ToolCallSummaryMessage in chat_message. You can customise the summary with either a static format string (tool_call_summary_format) or a callable (tool_call_summary_formatter); the callable is evaluated once per tool call.

When reflect_on_tool_use is True, another model inference is made using the tool calls and results, and the final response is returned as a TextMessage or a StructuredMessage (when using structured output) in chat_message.

reflect_on_tool_use is set to True by default when output_content_type is set.

reflect_on_tool_use is set to False by default when output_content_type is not set.

If the model returns multiple tool calls, they will be executed concurrently. To disable parallel tool calls you need to configure the model client. For example, set parallel_tool_calls=False for OpenAIChatCompletionClient and AzureOpenAIChatCompletionClient.
Tip
By default, the tool call results are returned as the response when tool calls are made, so pay close attention to how the tools’ return values are formatted—especially if another agent expects a specific schema.
Use `tool_call_summary_format` for a simple static template.
Use `tool_call_summary_formatter` for full programmatic control (e.g., “hide large success payloads, show full details on error”).
Note: tool_call_summary_formatter is not serializable and will be ignored when an agent is loaded from, or exported to, YAML/JSON configuration files.
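For illustration, here is a minimal sketch of a tool_call_summary_formatter that hides verbose success payloads and surfaces full details only on error. The get_weather tool, the model choice, and the exact summary strings are assumptions made for this example, not part of the package:

import asyncio

from autogen_agentchat.agents import AssistantAgent
from autogen_core import FunctionCall
from autogen_core.models import FunctionExecutionResult
from autogen_ext.models.openai import OpenAIChatCompletionClient


def get_weather(city: str) -> str:
    """Return a canned weather report for the given city."""
    return f"It is always sunny in {city}."


def summarize_tool_call(call: FunctionCall, result: FunctionExecutionResult) -> str:
    # Hide large success payloads; show full details only when the tool call failed.
    if result.is_error:
        return f"Tool {call.name} failed with arguments {call.arguments}: {result.content}"
    return f"Tool {call.name} executed successfully."


async def main() -> None:
    model_client = OpenAIChatCompletionClient(model="gpt-4o")
    agent = AssistantAgent(
        name="assistant",
        model_client=model_client,
        tools=[get_weather],
        tool_call_summary_formatter=summarize_tool_call,
    )
    result = await agent.run(task="What is the weather in Toronto?")
    print(result.messages[-1].content)


asyncio.run(main())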
Handoff behavior:

If a handoff is triggered, a HandoffMessage will be returned in chat_message.

If there are tool calls, they will also be executed right away before returning the handoff.

The tool calls and results are passed to the target agent through context.
Note
If multiple handoffs are detected, only the first handoff is executed. To avoid this, disable parallel tool calls in the model client configuration.
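As a minimal sketch of configuring handoffs, the snippet below registers one Handoff object and one plain agent-name string. The agent names and the OpenAI model client are assumptions for illustration, and, as noted above, the transfer itself is only executed when the team is a Swarm:

from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.base import Handoff
from autogen_ext.models.openai import OpenAIChatCompletionClient

model_client = OpenAIChatCompletionClient(model="gpt-4o")

# Handoff targets can be Handoff objects or plain agent-name strings.
triage_agent = AssistantAgent(
    name="triage",
    model_client=model_client,
    handoffs=[Handoff(target="billing", message="Transferring you to the billing agent."), "support"],
    system_message="Answer directly or hand off to the billing or support agent.",
)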
Limit context size sent to the model:
You can limit the number of messages sent to the model by setting the model_context parameter to a BufferedChatCompletionContext. This will limit the number of recent messages sent to the model and can be useful when the model has a limit on the number of tokens it can process. Another option is to use a TokenLimitedChatCompletionContext, which will limit the number of tokens sent to the model. You can also create your own model context by subclassing ChatCompletionContext.

Streaming mode:

The assistant agent can be used in streaming mode by setting model_client_stream=True. In this mode, the on_messages_stream() and BaseChatAgent.run_stream() methods will also yield ModelClientStreamingChunkEvent messages as the model client produces chunks of response. The chunk messages will not be included in the final response's inner messages.

- Parameters:
name (str) -- The name of the agent.
model_client (ChatCompletionClient) -- The model client to use for inference.
tools (List[BaseTool[Any, Any] | Callable[..., Any] | Callable[..., Awaitable[Any]]] | None, optional) -- The tools to register with the agent.
workbench (Workbench | None, optional) -- The workbench to use for the agent. Tools cannot be used when workbench is set and vice versa.
handoffs (List[HandoffBase | str] | None, optional) -- The handoff configurations for the agent, allowing it to transfer to other agents by responding with a HandoffMessage. The transfer is only executed when the team is in Swarm. If a handoff is a string, it should represent the target agent's name.

model_context (ChatCompletionContext | None, optional) -- The model context for storing and retrieving LLMMessage. It can be preloaded with initial messages. The initial messages will be cleared when the agent is reset.

description (str, optional) -- The description of the agent.
system_message (str, optional) -- The system message for the model. If provided, it will be prepended to the messages in the model context when making an inference. Set to None to disable.
model_client_stream (bool, optional) -- If True, the model client will be used in streaming mode. The on_messages_stream() and BaseChatAgent.run_stream() methods will also yield ModelClientStreamingChunkEvent messages as the model client produces chunks of response. Defaults to False.

reflect_on_tool_use (bool, optional) -- If True, the agent will make another model inference using the tool call and result to generate a response. If False, the tool call result will be returned as the response. By default, if output_content_type is set, this will be True; if output_content_type is not set, this will be False.
output_content_type (type[BaseModel] | None, optional) -- The output content type for StructuredMessage response as a Pydantic model. This will be used with the model client to generate structured output. If this is set, the agent will respond with a StructuredMessage instead of a TextMessage in the final response, unless reflect_on_tool_use is False and a tool call is made.

output_content_type_format (str | None, optional) -- (Experimental) The format string used for the content of a StructuredMessage response.

tool_call_summary_format (str, optional) -- Static format string applied to each tool call result when composing the ToolCallSummaryMessage. Defaults to "{result}". Ignored if tool_call_summary_formatter is provided. When reflect_on_tool_use is False, the summaries for all tool calls are concatenated with a newline ('\n') and returned as the response. Placeholders available in the template: {tool_name}, {arguments}, {result}, {is_error}.

tool_call_summary_formatter (Callable[[FunctionCall, FunctionExecutionResult], str] | None, optional) -- Callable that receives the FunctionCall and its FunctionExecutionResult and returns the summary string. Overrides tool_call_summary_format when supplied and allows conditional logic; for example, emitting a static string like "Tool FooBar executed successfully." on success and a full payload (including all passed arguments, etc.) only on failure. Limitation: the callable is not serializable; values provided via YAML/JSON configs are ignored.

Note

tool_call_summary_formatter is intended for in-code use only. It cannot currently be saved or restored via configuration files.

memory (Sequence[Memory] | None, optional) -- The memory store to use for the agent. Defaults to None.

metadata (Dict[str, str] | None, optional) -- Optional metadata for tracking.
- Raises:
ValueError -- If tool names are not unique.
ValueError -- If handoff names are not unique.
ValueError -- If handoff names are not unique from tool names.
ValueError -- If maximum number of tool iterations is less than 1.
Examples
Example 1: basic agent
The following example demonstrates how to create an assistant agent with a model client and generate a response to a simple task.
import asyncio
from autogen_ext.models.openai import OpenAIChatCompletionClient
from autogen_agentchat.agents import AssistantAgent


async def main() -> None:
    model_client = OpenAIChatCompletionClient(
        model="gpt-4o",
        # api_key = "your_openai_api_key"
    )
    agent = AssistantAgent(name="assistant", model_client=model_client)

    result = await agent.run(task="Name two cities in North America.")
    print(result)


asyncio.run(main())
Example 2: model client token streaming
This example demonstrates how to create an assistant agent with a model client and generate a token stream by setting model_client_stream=True.
import asyncio
from autogen_ext.models.openai import OpenAIChatCompletionClient
from autogen_agentchat.agents import AssistantAgent


async def main() -> None:
    model_client = OpenAIChatCompletionClient(
        model="gpt-4o",
        # api_key = "your_openai_api_key"
    )
    agent = AssistantAgent(
        name="assistant",
        model_client=model_client,
        model_client_stream=True,
    )

    stream = agent.run_stream(task="Name two cities in North America.")
    async for message in stream:
        print(message)


asyncio.run(main())
source='user' models_usage=None metadata={} content='Name two cities in North America.' type='TextMessage'
source='assistant' models_usage=None metadata={} content='Two' type='ModelClientStreamingChunkEvent'
source='assistant' models_usage=None metadata={} content=' cities' type='ModelClientStreamingChunkEvent'
source='assistant' models_usage=None metadata={} content=' in' type='ModelClientStreamingChunkEvent'
source='assistant' models_usage=None metadata={} content=' North' type='ModelClientStreamingChunkEvent'
source='assistant' models_usage=None metadata={} content=' America' type='ModelClientStreamingChunkEvent'
source='assistant' models_usage=None metadata={} content=' are' type='ModelClientStreamingChunkEvent'
source='assistant' models_usage=None metadata={} content=' New' type='ModelClientStreamingChunkEvent'
source='assistant' models_usage=None metadata={} content=' York' type='ModelClientStreamingChunkEvent'
source='assistant' models_usage=None metadata={} content=' City' type='ModelClientStreamingChunkEvent'
source='assistant' models_usage=None metadata={} content=' and' type='ModelClientStreamingChunkEvent'
source='assistant' models_usage=None metadata={} content=' Toronto' type='ModelClientStreamingChunkEvent'
source='assistant' models_usage=None metadata={} content='.' type='ModelClientStreamingChunkEvent'
source='assistant' models_usage=None metadata={} content=' TERMIN' type='ModelClientStreamingChunkEvent'
source='assistant' models_usage=None metadata={} content='ATE' type='ModelClientStreamingChunkEvent'
source='assistant' models_usage=RequestUsage(prompt_tokens=0, completion_tokens=0) metadata={} content='Two cities in North America are New York City and Toronto. TERMINATE' type='TextMessage'
messages=[TextMessage(source='user', models_usage=None, metadata={}, content='Name two cities in North America.', type='TextMessage'), TextMessage(source='assistant', models_usage=RequestUsage(prompt_tokens=0, completion_tokens=0), metadata={}, content='Two cities in North America are New York City and Toronto. TERMINATE', type='TextMessage')] stop_reason=None
Example 3: agent with tools
The following example demonstrates how to create an assistant agent with a model client and a tool, generate a stream of messages for a task, and print the messages to the console using Console.

The tool is a simple function that returns the current time. Under the hood, the function is wrapped in a FunctionTool and used with the agent's model client. The doc string of the function is used as the tool description, the function name is used as the tool name, and the function signature including the type hints is used as the tool arguments.

import asyncio
from autogen_ext.models.openai import OpenAIChatCompletionClient
from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.ui import Console


async def get_current_time() -> str:
    return "The current time is 12:00 PM."


async def main() -> None:
    model_client = OpenAIChatCompletionClient(
        model="gpt-4o",
        # api_key = "your_openai_api_key"
    )
    agent = AssistantAgent(name="assistant", model_client=model_client, tools=[get_current_time])
    await Console(agent.run_stream(task="What is the current time?"))


asyncio.run(main())
Example 4: agent with Model-Context Protocol (MCP) workbench
The following example demonstrates how to create an assistant agent with a model client and an McpWorkbench for interacting with a Model-Context Protocol (MCP) server.

import asyncio

from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.ui import Console
from autogen_ext.models.openai import OpenAIChatCompletionClient
from autogen_ext.tools.mcp import StdioServerParams, McpWorkbench


async def main() -> None:
    params = StdioServerParams(
        command="uvx",
        args=["mcp-server-fetch"],
        read_timeout_seconds=60,
    )

    # You can also use `start()` and `stop()` to manage the session.
    async with McpWorkbench(server_params=params) as workbench:
        model_client = OpenAIChatCompletionClient(model="gpt-4.1-nano")
        assistant = AssistantAgent(
            name="Assistant",
            model_client=model_client,
            workbench=workbench,
            reflect_on_tool_use=True,
        )
        await Console(
            assistant.run_stream(task="Go to https://github.com/microsoft/autogen and tell me what you see.")
        )


asyncio.run(main())
Example 5: agent with structured output and tool
The following example demonstrates how to create an assistant agent with a model client configured to use structured output and a tool. Note that you need to use FunctionTool to create the tool, and strict=True is required for structured output mode. Because the model is configured to use structured output, the output reflection response will be a JSON-formatted string.

import asyncio
from typing import Literal

from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.ui import Console
from autogen_core.tools import FunctionTool
from autogen_ext.models.openai import OpenAIChatCompletionClient
from pydantic import BaseModel


# Define the structured output format.
class AgentResponse(BaseModel):
    thoughts: str
    response: Literal["happy", "sad", "neutral"]


# Define the function to be called as a tool.
def sentiment_analysis(text: str) -> str:
    """Given a text, return the sentiment."""
    return "happy" if "happy" in text else "sad" if "sad" in text else "neutral"


# Create a FunctionTool instance with `strict=True`,
# which is required for structured output mode.
tool = FunctionTool(sentiment_analysis, description="Sentiment Analysis", strict=True)

# Create an OpenAIChatCompletionClient instance that supports structured output.
model_client = OpenAIChatCompletionClient(
    model="gpt-4o-mini",
)

# Create an AssistantAgent instance that uses the tool and model client.
agent = AssistantAgent(
    name="assistant",
    model_client=model_client,
    tools=[tool],
    system_message="Use the tool to analyze sentiment.",
    output_content_type=AgentResponse,
)


async def main() -> None:
    stream = agent.run_stream(task="I am happy today!")
    await Console(stream)


asyncio.run(main())
---------- assistant ----------
[FunctionCall(id='call_tIZjAVyKEDuijbBwLY6RHV2p', arguments='{"text":"I am happy today!"}', name='sentiment_analysis')]
---------- assistant ----------
[FunctionExecutionResult(content='happy', call_id='call_tIZjAVyKEDuijbBwLY6RHV2p', is_error=False)]
---------- assistant ----------
{"thoughts":"The user expresses a clear positive emotion by stating they are happy today, suggesting an upbeat mood.","response":"happy"}
Example 6: agent with bounded model context
The following example shows how to use a BufferedChatCompletionContext that only keeps the last 2 messages (1 user + 1 assistant). A bounded model context is useful when the model has a limit on the number of tokens it can process.

import asyncio

from autogen_agentchat.agents import AssistantAgent
from autogen_core.model_context import BufferedChatCompletionContext
from autogen_ext.models.openai import OpenAIChatCompletionClient


async def main() -> None:
    # Create a model client.
    model_client = OpenAIChatCompletionClient(
        model="gpt-4o-mini",
        # api_key = "your_openai_api_key"
    )

    # Create a model context that only keeps the last 2 messages (1 user + 1 assistant).
    model_context = BufferedChatCompletionContext(buffer_size=2)

    # Create an AssistantAgent instance with the model client and context.
    agent = AssistantAgent(
        name="assistant",
        model_client=model_client,
        model_context=model_context,
        system_message="You are a helpful assistant.",
    )

    result = await agent.run(task="Name two cities in North America.")
    print(result.messages[-1].content)  # type: ignore

    result = await agent.run(task="My favorite color is blue.")
    print(result.messages[-1].content)  # type: ignore

    result = await agent.run(task="Did I ask you any question?")
    print(result.messages[-1].content)  # type: ignore


asyncio.run(main())
Two cities in North America are New York City and Toronto.
That's great! Blue is often associated with calmness and serenity. Do you have a specific shade of blue that you like, or any particular reason why it's your favorite?
No, you didn't ask a question. I apologize for any misunderstanding. If you have something specific you'd like to discuss or ask, feel free to let me know!
Example 7: agent with memory
The following example shows how to use a list-based memory with the assistant agent. The memory is preloaded with some initial content. Under the hood, the memory is used to update the model context before making an inference, using the update_context() method.

import asyncio

from autogen_agentchat.agents import AssistantAgent
from autogen_core.memory import ListMemory, MemoryContent
from autogen_ext.models.openai import OpenAIChatCompletionClient


async def main() -> None:
    # Create a model client.
    model_client = OpenAIChatCompletionClient(
        model="gpt-4o-mini",
        # api_key = "your_openai_api_key"
    )

    # Create a list-based memory with some initial content.
    memory = ListMemory()
    await memory.add(MemoryContent(content="User likes pizza.", mime_type="text/plain"))
    await memory.add(MemoryContent(content="User dislikes cheese.", mime_type="text/plain"))

    # Create an AssistantAgent instance with the model client and memory.
    agent = AssistantAgent(
        name="assistant",
        model_client=model_client,
        memory=[memory],
        system_message="You are a helpful assistant.",
    )

    result = await agent.run(task="What is a good dinner idea?")
    print(result.messages[-1].content)  # type: ignore


asyncio.run(main())
How about making a delicious pizza without cheese? You can create a flavorful veggie pizza with a variety of toppings. Here's a quick idea:

**Veggie Tomato Sauce Pizza**
- Start with a pizza crust (store-bought or homemade).
- Spread a layer of marinara or tomato sauce evenly over the crust.
- Top with your favorite vegetables like bell peppers, mushrooms, onions, olives, and spinach.
- Add some protein if you’d like, such as grilled chicken or pepperoni (ensure it's cheese-free).
- Sprinkle with herbs like oregano and basil, and maybe a drizzle of olive oil.
- Bake according to the crust instructions until the edges are golden and the veggies are cooked.

Serve it with a side salad or some garlic bread to complete the meal! Enjoy your dinner!
Example 8: agent with `o1-mini`
The following example shows how to use o1-mini model with the assistant agent.
import asyncio
from autogen_ext.models.openai import OpenAIChatCompletionClient
from autogen_agentchat.agents import AssistantAgent


async def main() -> None:
    model_client = OpenAIChatCompletionClient(
        model="o1-mini",
        # api_key = "your_openai_api_key"
    )
    # The system message is not supported by the o1 series model.
    agent = AssistantAgent(name="assistant", model_client=model_client, system_message=None)

    result = await agent.run(task="What is the capital of France?")
    print(result.messages[-1].content)  # type: ignore


asyncio.run(main())
Note
The o1-preview and o1-mini models do not support system message and function calling. So the system_message should be set to None and the tools and handoffs should not be set. See o1 beta limitations for more details.
Example 9: agent using a reasoning model with a custom model context
The following example shows how to use a reasoning model (DeepSeek R1) with the assistant agent. The model context is used to filter out the thought field from the assistant message.
import asyncio
from typing import List

from autogen_agentchat.agents import AssistantAgent
from autogen_core.model_context import UnboundedChatCompletionContext
from autogen_core.models import AssistantMessage, LLMMessage, ModelFamily
from autogen_ext.models.ollama import OllamaChatCompletionClient


class ReasoningModelContext(UnboundedChatCompletionContext):
    """A model context for reasoning models."""

    async def get_messages(self) -> List[LLMMessage]:
        messages = await super().get_messages()
        # Filter out thought field from AssistantMessage.
        messages_out: List[LLMMessage] = []
        for message in messages:
            if isinstance(message, AssistantMessage):
                message.thought = None
            messages_out.append(message)
        return messages_out


# Create an instance of the model client for DeepSeek R1 hosted locally on Ollama.
model_client = OllamaChatCompletionClient(
    model="deepseek-r1:8b",
    model_info={
        "vision": False,
        "function_calling": False,
        "json_output": False,
        "family": ModelFamily.R1,
        "structured_output": True,
    },
)

agent = AssistantAgent(
    "reasoning_agent",
    model_client=model_client,
    model_context=ReasoningModelContext(),  # Use the custom model context.
)


async def run_reasoning_agent() -> None:
    result = await agent.run(task="What is the capital of France?")
    print(result)


asyncio.run(run_reasoning_agent())
- component_config_schema#
alias of AssistantAgentConfig

- component_provider_override: ClassVar[str | None] = 'autogen_agentchat.agents.AssistantAgent'#
Override the provider string for the component. This should be used to prevent internal module names from being part of the module name.

- property model_context: ChatCompletionContext#
The model context in use by the agent.

- async on_messages(messages: Sequence[BaseChatMessage], cancellation_token: CancellationToken) Response [source]#
Handles incoming messages and returns a response.

Note

The agent is stateful, and the messages passed to this method should be the new messages since the last call to this method. The agent should maintain its state between calls to this method. For example, if the agent needs to remember the previous messages to respond to the current message, it should store the previous messages in the agent state.

- async on_messages_stream(messages: Sequence[BaseChatMessage], cancellation_token: CancellationToken) AsyncGenerator[BaseAgentEvent | BaseChatMessage | Response, None] [source]#
Process the incoming messages with the assistant agent and yield events/responses as they happen.

- async on_reset(cancellation_token: CancellationToken) None [source]#
Reset the assistant agent to its initialization state.

- property produced_message_types: Sequence[type[BaseChatMessage]]#
The types of messages that the agent produces in the Response.chat_message field. They must be BaseChatMessage types.
- class BaseChatAgent(name: str, description: str)[source]#
Bases: ChatAgent, ABC, ComponentBase[BaseModel]

Base class for a chat agent.

This abstract class provides a base implementation for a ChatAgent. To create a new chat agent, subclass this class and implement on_messages(), on_reset(), and produced_message_types. If streaming is required, also implement the on_messages_stream() method. (A minimal subclass sketch is shown below, after the close() entry.)

An agent is considered stateful and maintains its state between calls to the on_messages() or on_messages_stream() methods. The agent should store its state in the agent instance. The agent should also implement the on_reset() method to reset the agent to its initialization state.

Note

The caller should only pass new messages to the agent on each call to the on_messages() or on_messages_stream() method. Do not pass the entire conversation history to the agent on each call. This design principle must be followed when creating a new agent.

- async close() None [source]#
Releases any resources held by the agent. This is a no-op by default in the BaseChatAgent class. Subclasses can override this method to implement custom close behavior.
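To make the subclassing contract above concrete, here is a minimal sketch of a custom agent. The EchoAgent class is a hypothetical example written for illustration only (it is not part of this package); it implements on_messages(), on_reset(), and produced_message_types:

from typing import Sequence

from autogen_agentchat.agents import BaseChatAgent
from autogen_agentchat.base import Response
from autogen_agentchat.messages import BaseChatMessage, TextMessage
from autogen_core import CancellationToken


class EchoAgent(BaseChatAgent):
    """A hypothetical agent that simply echoes the last message it receives."""

    def __init__(self, name: str) -> None:
        super().__init__(name=name, description="An agent that echoes the last message.")

    @property
    def produced_message_types(self) -> Sequence[type[BaseChatMessage]]:
        return (TextMessage,)

    async def on_messages(
        self, messages: Sequence[BaseChatMessage], cancellation_token: CancellationToken
    ) -> Response:
        # Echo the text of the most recent message back to the caller.
        last = messages[-1].to_text() if messages else "(no message)"
        return Response(chat_message=TextMessage(content=last, source=self.name))

    async def on_reset(self, cancellation_token: CancellationToken) -> None:
        # This agent keeps no state, so there is nothing to reset.
        pass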
- component_type: ClassVar[ComponentType] = 'agent'#
The logical type of the component.

- abstract async on_messages(messages: Sequence[BaseChatMessage], cancellation_token: CancellationToken) Response [source]#
Handles incoming messages and returns a response.

Note

The agent is stateful, and the messages passed to this method should be the new messages since the last call to this method. The agent should maintain its state between calls to this method. For example, if the agent needs to remember the previous messages to respond to the current message, it should store the previous messages in the agent state.

- async on_messages_stream(messages: Sequence[BaseChatMessage], cancellation_token: CancellationToken) AsyncGenerator[BaseAgentEvent | BaseChatMessage | Response, None] [source]#
Handles incoming messages and returns a stream of messages, with the final item being the response. The base implementation in BaseChatAgent simply calls on_messages() and yields the messages in the response.

Note

The agent is stateful, and the messages passed to this method should be the new messages since the last call to this method. The agent should maintain its state between calls to this method. For example, if the agent needs to remember the previous messages to respond to the current message, it should store the previous messages in the agent state.

- async on_pause(cancellation_token: CancellationToken) None [source]#
Called when the agent is paused while running in its on_messages() or on_messages_stream() method. This is a no-op by default in the BaseChatAgent class. Subclasses can override this method to implement custom pause behavior.

- abstract async on_reset(cancellation_token: CancellationToken) None [source]#
Resets the agent to its initialization state.

- async on_resume(cancellation_token: CancellationToken) None [source]#
Called when the agent is resumed from a pause while running in its on_messages() or on_messages_stream() method. This is a no-op by default in the BaseChatAgent class. Subclasses can override this method to implement custom resume behavior.

- abstract property produced_message_types: Sequence[type[BaseChatMessage]]#
The types of messages that the agent produces in the Response.chat_message field. They must be BaseChatMessage types.

- async run(*, task: str | BaseChatMessage | Sequence[BaseChatMessage] | None = None, cancellation_token: CancellationToken | None = None) TaskResult [source]#
Run the agent with the given task and return the result.

- async run_stream(*, task: str | BaseChatMessage | Sequence[BaseChatMessage] | None = None, cancellation_token: CancellationToken | None = None) AsyncGenerator[BaseAgentEvent | BaseChatMessage | TaskResult, None] [source]#
Run the agent with the given task and return a stream of messages, with the final task result as the last item in the stream.
- class CodeExecutorAgent(name: str, code_executor: CodeExecutor, *, model_client: ChatCompletionClient | None = None, model_context: ChatCompletionContext | None = None, model_client_stream: bool = False, max_retries_on_error: int = 0, description: str | None = None, system_message: str | None = DEFAULT_SYSTEM_MESSAGE, sources: Sequence[str] | None = None)[source]#
Bases: BaseChatAgent, Component[CodeExecutorAgentConfig]

(Experimental) An agent that generates and executes code snippets based on user instructions.

Note

This agent is experimental and may change in future releases.

It is typically used within a team with another agent that generates code snippets to be executed, or alone (with a model_client provided) so that it can generate code based on the user query, execute the code, and reflect on the code result.

When used with a model_client, it generates code snippets using the model and executes them using the provided code_executor. The model also reflects on the code execution results, and the agent returns the model's final reflection as the final response.

When used without a model_client, it only executes code blocks found in TextMessage messages and returns the output of the code execution.

Note

Using an AssistantAgent with a PythonCodeExecutionTool is an alternative to this agent. However, that agent's model will have to generate properly escaped code strings as the tool's argument.

- Parameters:
name (str) -- The name of the agent.

code_executor (CodeExecutor) -- The code executor responsible for executing code received in messages (DockerCommandLineCodeExecutor is recommended; see the examples below).

model_client (ChatCompletionClient, optional) -- The model client to use for inference and generating code. If not provided, the agent will only execute code blocks found in input messages. Currently, the model must support structured output mode, which is required for the automatic retry mechanism to work.

model_client_stream (bool, optional) -- If True, the model client will be used in streaming mode. The on_messages_stream() and BaseChatAgent.run_stream() methods will also yield ModelClientStreamingChunkEvent messages as the model client produces chunks of response. Defaults to False.

description (str, optional) -- The description of the agent. If not provided, DEFAULT_AGENT_DESCRIPTION will be used.

system_message (str, optional) -- The system message for the model. If provided, it will be prepended to the messages in the model context when making an inference. Set to None to disable. Defaults to DEFAULT_SYSTEM_MESSAGE. Only used when model_client is provided.

sources (Sequence[str], optional) -- Check only messages from the specified agents for code to execute. This is useful when the agent is part of a group chat and you want to limit code execution to messages from specific agents. If not provided, all messages will be checked for code blocks. Only used when model_client is not provided.

max_retries_on_error (int, optional) -- The maximum number of retries on error. If the code execution fails, the agent will retry up to this number of times. If the code execution still fails after that many retries, the agent will yield a reflection result.
Note

It is recommended that the CodeExecutorAgent use a Docker container to execute code. This ensures that model-generated code is executed in an isolated environment. To use Docker, your environment must have Docker installed and running. Follow the installation instructions for Docker.
Note

The code executor only processes code blocks that are properly formatted in Markdown using triple backticks. For example:

```python
print("Hello World")
```
# or
```sh
echo "Hello World"
```
In this example, we show how to set up a CodeExecutorAgent that uses a DockerCommandLineCodeExecutor to execute code snippets in a Docker container. The work_dir parameter is the local directory where all executed files are first saved before being executed in the Docker container.

import asyncio

from autogen_agentchat.agents import CodeExecutorAgent
from autogen_agentchat.messages import TextMessage
from autogen_ext.code_executors.docker import DockerCommandLineCodeExecutor
from autogen_core import CancellationToken


async def run_code_executor_agent() -> None:
    # Create a code executor agent that uses a Docker container to execute code.
    code_executor = DockerCommandLineCodeExecutor(work_dir="coding")
    await code_executor.start()
    code_executor_agent = CodeExecutorAgent("code_executor", code_executor=code_executor)

    # Run the agent with a given code snippet.
    task = TextMessage(
        content='''Here is some code
```python
print('Hello world')
```
''',
        source="user",
    )
    response = await code_executor_agent.on_messages([task], CancellationToken())
    print(response.chat_message)

    # Stop the code executor.
    await code_executor.stop()


asyncio.run(run_code_executor_agent())
In this example, we show how to set up a CodeExecutorAgent that uses a DeviceRequest to expose a GPU to the container for CUDA-accelerated code execution.

import asyncio

from autogen_agentchat.agents import CodeExecutorAgent
from autogen_agentchat.messages import TextMessage
from autogen_ext.code_executors.docker import DockerCommandLineCodeExecutor
from autogen_core import CancellationToken
from docker.types import DeviceRequest


async def run_code_executor_agent() -> None:
    # Create a code executor agent that uses a Docker container to execute code.
    code_executor = DockerCommandLineCodeExecutor(
        work_dir="coding", device_requests=[DeviceRequest(count=-1, capabilities=[["gpu"]])]
    )
    await code_executor.start()
    code_executor_agent = CodeExecutorAgent("code_executor", code_executor=code_executor)

    # Display the GPU information.
    task = TextMessage(
        content='''Here is some code
```bash
nvidia-smi
```
''',
        source="user",
    )
    response = await code_executor_agent.on_messages([task], CancellationToken())
    print(response.chat_message)

    # Stop the code executor.
    await code_executor.stop()


asyncio.run(run_code_executor_agent())
In the following example, we show how to set up a CodeExecutorAgent without the model_client parameter so that it uses a DockerCommandLineCodeExecutor to execute code blocks generated by other agents in a group chat.

import asyncio

from autogen_ext.code_executors.docker import DockerCommandLineCodeExecutor
from autogen_ext.models.openai import OpenAIChatCompletionClient
from autogen_agentchat.agents import AssistantAgent, CodeExecutorAgent
from autogen_agentchat.conditions import MaxMessageTermination
from autogen_agentchat.teams import RoundRobinGroupChat
from autogen_agentchat.ui import Console

termination_condition = MaxMessageTermination(3)


async def main() -> None:
    model_client = OpenAIChatCompletionClient(model="gpt-4o")

    # Define the Docker CLI code executor.
    code_executor = DockerCommandLineCodeExecutor(work_dir="coding")

    # Start the execution container.
    await code_executor.start()

    code_executor_agent = CodeExecutorAgent("code_executor_agent", code_executor=code_executor)
    coder_agent = AssistantAgent("coder_agent", model_client=model_client)

    groupchat = RoundRobinGroupChat(
        participants=[coder_agent, code_executor_agent], termination_condition=termination_condition
    )

    task = "Write python code to print Hello World!"
    await Console(groupchat.run_stream(task=task))

    # Stop the execution container.
    await code_executor.stop()


asyncio.run(main())
---------- user ----------
Write python code to print Hello World!
---------- coder_agent ----------
Certainly! Here's a simple Python code to print "Hello World!":

```python
print("Hello World!")
```

You can run this code in any Python environment to display the message.
---------- code_executor_agent ----------
Hello World!
In the following example, we show how to set up a CodeExecutorAgent with a model_client so that it can generate its own code without the help of any other agent and execute it in a DockerCommandLineCodeExecutor.

import asyncio

from autogen_ext.code_executors.docker import DockerCommandLineCodeExecutor
from autogen_ext.models.openai import OpenAIChatCompletionClient
from autogen_agentchat.agents import CodeExecutorAgent
from autogen_agentchat.conditions import TextMessageTermination
from autogen_agentchat.ui import Console

termination_condition = TextMessageTermination("code_executor_agent")


async def main() -> None:
    model_client = OpenAIChatCompletionClient(model="gpt-4o")

    # Define the Docker CLI code executor.
    code_executor = DockerCommandLineCodeExecutor(work_dir="coding")

    # Start the execution container.
    await code_executor.start()

    code_executor_agent = CodeExecutorAgent(
        "code_executor_agent", code_executor=code_executor, model_client=model_client
    )

    task = "Write python code to print Hello World!"
    await Console(code_executor_agent.run_stream(task=task))

    # Stop the execution container.
    await code_executor.stop()


asyncio.run(main())
---------- user ----------
Write python code to print Hello World!
---------- code_executor_agent ----------
Certainly! Here is a simple Python code to print "Hello World!" to the console:

```python
print("Hello World!")
```

Let's execute it to confirm the output.
---------- code_executor_agent ----------
Hello World!
---------- code_executor_agent ----------
The code has been executed successfully, and it printed "Hello World!" as expected. If you have any more requests or questions, feel free to ask!
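The sources parameter described above can restrict which agents' messages are scanned for code blocks. A minimal sketch is shown below; the agent name "coder_agent" and the Docker work_dir are assumptions for illustration, reusing the executor setup from the examples above:

from autogen_agentchat.agents import CodeExecutorAgent
from autogen_ext.code_executors.docker import DockerCommandLineCodeExecutor

# Only code blocks authored by "coder_agent" will be executed; messages from
# other participants in the group chat are ignored.
code_executor_agent = CodeExecutorAgent(
    "code_executor_agent",
    code_executor=DockerCommandLineCodeExecutor(work_dir="coding"),
    sources=["coder_agent"],
)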
- DEFAULT_AGENT_DESCRIPTION = 'A Code Execution Agent that generates and executes Python and shell scripts based on user instructions. It ensures correctness, efficiency, and minimal errors while gracefully handling edge cases.'#
- DEFAULT_SYSTEM_MESSAGE = 'You are a Code Execution Agent. Your role is to generate and execute Python code and shell scripts based on user instructions, ensuring correctness, efficiency, and minimal errors. Handle edge cases gracefully. Python code should be provided in ```python code blocks, and sh shell scripts should be provided in ```sh code blocks for execution.'#
- DEFAULT_TERMINAL_DESCRIPTION = 'A computer terminal that performs no other action than running Python scripts (provided to it quoted in ```python code blocks), or sh shell scripts (provided to it quoted in ```sh code blocks).'#
- NO_CODE_BLOCKS_FOUND_MESSAGE = 'No code blocks found in the thread. Please provide at least one markdown-encoded code block to execute (i.e., quoting code in ```python or ```sh code blocks).'#
- classmethod _from_config(config: CodeExecutorAgentConfig) Self [source]#
Create a new instance of the component from a configuration object.

- Parameters:
config (T) -- The configuration object.

- Returns:
Self -- The new instance of the component.

- component_config_schema#
alias of CodeExecutorAgentConfig

- component_provider_override: ClassVar[str | None] = 'autogen_agentchat.agents.CodeExecutorAgent'#
Override the provider string for the component. This should be used to prevent internal module names from being part of the module name.

- async execute_code_block(code_blocks: List[CodeBlock], cancellation_token: CancellationToken) CodeResult [source]#

- property model_context: ChatCompletionContext#
The model context in use by the agent.

- async on_messages(messages: Sequence[BaseChatMessage], cancellation_token: CancellationToken) Response [source]#
Handles incoming messages and returns a response.

Note

The agent is stateful, and the messages passed to this method should be the new messages since the last call to this method. The agent should maintain its state between calls to this method. For example, if the agent needs to remember the previous messages to respond to the current message, it should store the previous messages in the agent state.

- async on_messages_stream(messages: Sequence[BaseChatMessage], cancellation_token: CancellationToken) AsyncGenerator[BaseAgentEvent | BaseChatMessage | Response, None] [source]#
Process the incoming messages with the assistant agent and yield events/responses as they happen.

- async on_reset(cancellation_token: CancellationToken) None [source]#
This is a no-op as the code executor agent has no mutable state.

- property produced_message_types: Sequence[type[BaseChatMessage]]#
The types of messages that the code executor agent produces.
- class MessageFilterAgent(name: str, wrapped_agent: BaseChatAgent, filter: MessageFilterConfig)[source]#
Bases: BaseChatAgent, Component[MessageFilterAgentConfig]

A wrapper agent that filters incoming messages before passing them to the inner agent.

Warning

This is an experimental feature, and the API will change in future releases.

This is useful in scenarios such as multi-agent workflows where an agent should only process a subset of the full message history, for example only the last message from each upstream agent, or only the first message from a particular source.

Filtering is configured via MessageFilterConfig, which supports:
- Filtering by message source (e.g., only messages from "user" or another agent)
- Selecting the first N or last N messages from each source
- Including all messages from a source if position is None

This agent is compatible with both direct message passing and team-based execution such as GraphFlow.

- Example:
>>> agent_a = MessageFilterAgent(
...     name="A",
...     wrapped_agent=some_other_agent,
...     filter=MessageFilterConfig(
...         per_source=[
...             PerSourceFilter(source="user", position="first", count=1),
...             PerSourceFilter(source="B", position="last", count=2),
...         ]
...     ),
... )
- Example use case with a Graph:

Suppose a looping multi-agent graph: A → B → A → B → C.

The requirements are:
- A should only see the user message and the last message from B
- B should see the user message, the last message from A, and its own prior responses (for reflection)
- C should see the user message and the last message from B
Wrap the agents as follows:
>>> agent_a = MessageFilterAgent(
...     name="A",
...     wrapped_agent=agent_a_inner,
...     filter=MessageFilterConfig(
...         per_source=[
...             PerSourceFilter(source="user", position="first", count=1),
...             PerSourceFilter(source="B", position="last", count=1),
...         ]
...     ),
... )

>>> agent_b = MessageFilterAgent(
...     name="B",
...     wrapped_agent=agent_b_inner,
...     filter=MessageFilterConfig(
...         per_source=[
...             PerSourceFilter(source="user", position="first", count=1),
...             PerSourceFilter(source="A", position="last", count=1),
...             PerSourceFilter(source="B", position="last", count=10),
...         ]
...     ),
... )

>>> agent_c = MessageFilterAgent(
...     name="C",
...     wrapped_agent=agent_c_inner,
...     filter=MessageFilterConfig(
...         per_source=[
...             PerSourceFilter(source="user", position="first", count=1),
...             PerSourceFilter(source="B", position="last", count=1),
...         ]
...     ),
... )
Then define the graph:
>>> graph = DiGraph(
...     nodes={
...         "A": DiGraphNode(name="A", edges=[DiGraphEdge(target="B")]),
...         "B": DiGraphNode(
...             name="B",
...             edges=[
...                 DiGraphEdge(target="C", condition="exit"),
...                 DiGraphEdge(target="A", condition="loop"),
...             ],
...         ),
...         "C": DiGraphNode(name="C", edges=[]),
...     },
...     default_start_node="A",
... )
This ensures that each agent sees only what it needs for its decision or action logic.
- classmethod _from_config(config: MessageFilterAgentConfig) MessageFilterAgent [source]#
Create a new instance of the component from a configuration object.

- Parameters:
config (T) -- The configuration object.

- Returns:
Self -- The new instance of the component.

- component_config_schema#
alias of MessageFilterAgentConfig

- component_provider_override: ClassVar[str | None] = 'autogen_agentchat.agents.MessageFilterAgent'#
Override the provider string for the component. This should be used to prevent internal module names from being part of the module name.
- async on_messages(messages: Sequence[BaseChatMessage], cancellation_token: CancellationToken) Response [source]#
Handles incoming messages and returns a response.

Note

The agent is stateful, and the messages passed to this method should be the new messages since the last call to this method. The agent should maintain its state between calls to this method. For example, if the agent needs to remember the previous messages to respond to the current message, it should store the previous messages in the agent state.

- async on_messages_stream(messages: Sequence[BaseChatMessage], cancellation_token: CancellationToken) AsyncGenerator[BaseAgentEvent | BaseChatMessage | Response, None] [source]#
Handles incoming messages and returns a stream of messages, with the final item being the response. The base implementation in BaseChatAgent simply calls on_messages() and yields the messages in the response.

Note

The agent is stateful, and the messages passed to this method should be the new messages since the last call to this method. The agent should maintain its state between calls to this method. For example, if the agent needs to remember the previous messages to respond to the current message, it should store the previous messages in the agent state.

- async on_reset(cancellation_token: CancellationToken) None [source]#
Resets the agent to its initialization state.

- property produced_message_types: Sequence[type[BaseChatMessage]]#
The types of messages that the agent produces in the Response.chat_message field. They must be BaseChatMessage types.
- pydantic model MessageFilterConfig[source]#
Bases: BaseModel
Show JSON schema
{ "title": "MessageFilterConfig", "type": "object", "properties": { "per_source": { "items": { "$ref": "#/$defs/PerSourceFilter" }, "title": "Per Source", "type": "array" } }, "$defs": { "PerSourceFilter": { "properties": { "source": { "title": "Source", "type": "string" }, "position": { "anyOf": [ { "enum": [ "first", "last" ], "type": "string" }, { "type": "null" } ], "default": null, "title": "Position" }, "count": { "anyOf": [ { "type": "integer" }, { "type": "null" } ], "default": null, "title": "Count" } }, "required": [ "source" ], "title": "PerSourceFilter", "type": "object" } }, "required": [ "per_source" ] }
- Fields:
per_source (List[autogen_agentchat.agents._message_filter_agent.PerSourceFilter])
- field per_source: List[PerSourceFilter] [Required]#
- pydantic model PerSourceFilter[source]#
Bases: BaseModel
Show JSON schema
{ "title": "PerSourceFilter", "type": "object", "properties": { "source": { "title": "Source", "type": "string" }, "position": { "anyOf": [ { "enum": [ "first", "last" ], "type": "string" }, { "type": "null" } ], "default": null, "title": "Position" }, "count": { "anyOf": [ { "type": "integer" }, { "type": "null" } ], "default": null, "title": "Count" } }, "required": [ "source" ] }
- Fields:
count (int | None)
position (Literal['first', 'last'] | None)
source (str)
- class SocietyOfMindAgent(name: str, team: Team, model_client: ChatCompletionClient, *, description: str = DEFAULT_DESCRIPTION, instruction: str = DEFAULT_INSTRUCTION, response_prompt: str = DEFAULT_RESPONSE_PROMPT, model_context: ChatCompletionContext | None = None)[source]#
Bases: BaseChatAgent, Component[SocietyOfMindAgentConfig]

An agent that uses an inner team of agents to generate responses.

Each time the agent's on_messages() or on_messages_stream() method is called, it runs the inner team of agents and then uses the model client to generate a response based on the inner team's messages. Once a response is generated, the agent resets the inner team by calling Team.reset().

Limit the context size sent to the model:

You can limit the number of messages sent to the model by setting the model_context parameter to a BufferedChatCompletionContext. This will limit the number of recent messages sent to the model and can be useful when the model has a limit on the number of tokens it can process. You can also create your own model context by subclassing ChatCompletionContext.

- Parameters:
name (str) -- The name of the agent.

team (Team) -- The team of agents to use.

model_client (ChatCompletionClient) -- The model client to use for preparing responses.

description (str, optional) -- The description of the agent.

instruction (str, optional) -- The instruction to use when generating a response using the inner team's messages. Defaults to DEFAULT_INSTRUCTION. It assumes the role of 'system'.

response_prompt (str, optional) -- The response prompt to use when generating a response using the inner team's messages. Defaults to DEFAULT_RESPONSE_PROMPT. It assumes the role of 'system'.

model_context (ChatCompletionContext | None, optional) -- The model context for storing and retrieving LLMMessage. It can be preloaded with initial messages. The initial messages will be cleared when the agent is reset.

Example:
import asyncio

from autogen_agentchat.ui import Console
from autogen_agentchat.agents import AssistantAgent, SocietyOfMindAgent
from autogen_ext.models.openai import OpenAIChatCompletionClient
from autogen_agentchat.teams import RoundRobinGroupChat
from autogen_agentchat.conditions import TextMentionTermination


async def main() -> None:
    model_client = OpenAIChatCompletionClient(model="gpt-4o")

    agent1 = AssistantAgent("assistant1", model_client=model_client, system_message="You are a writer, write well.")
    agent2 = AssistantAgent(
        "assistant2",
        model_client=model_client,
        system_message="You are an editor, provide critical feedback. Respond with 'APPROVE' if the text addresses all feedbacks.",
    )
    inner_termination = TextMentionTermination("APPROVE")
    inner_team = RoundRobinGroupChat([agent1, agent2], termination_condition=inner_termination)

    society_of_mind_agent = SocietyOfMindAgent("society_of_mind", team=inner_team, model_client=model_client)

    agent3 = AssistantAgent(
        "assistant3", model_client=model_client, system_message="Translate the text to Spanish."
    )
    team = RoundRobinGroupChat([society_of_mind_agent, agent3], max_turns=2)

    stream = team.run_stream(task="Write a short story with a surprising ending.")
    await Console(stream)


asyncio.run(main())
- DEFAULT_DESCRIPTION = 'An agent that uses an inner team of agents to generate responses.'#
The default description for a SocietyOfMindAgent.

- Type:
str
- DEFAULT_INSTRUCTION = 'Earlier you were asked to fulfill a request. You and your team worked diligently to address that request. Here is a transcript of that conversation:'#
The default instruction to use when generating a response using the inner team's messages. The instruction will be prepended to the inner team's messages when generating a response using the model. It assumes the role of 'system'.

- Type:
str
- DEFAULT_RESPONSE_PROMPT = 'Output a standalone response to the original request, without mentioning any of the intermediate discussion.'#
The default response prompt to use when generating a response using the inner team's messages. It assumes the role of 'system'.

- Type:
str
- classmethod _from_config(config: SocietyOfMindAgentConfig) Self [source]#
Create a new instance of the component from a configuration object.

- Parameters:
config (T) -- The configuration object.

- Returns:
Self -- The new instance of the component.

- component_config_schema#
alias of SocietyOfMindAgentConfig

- component_provider_override: ClassVar[str | None] = 'autogen_agentchat.agents.SocietyOfMindAgent'#
Override the provider string for the component. This should be used to prevent internal module names from being part of the module name.

- property model_context: ChatCompletionContext#
The model context in use by the agent.
- async on_messages(messages: Sequence[BaseChatMessage], cancellation_token: CancellationToken) Response [source]#
Handles incoming messages and returns a response.

Note

The agent is stateful, and the messages passed to this method should be the new messages since the last call to this method. The agent should maintain its state between calls to this method. For example, if the agent needs to remember the previous messages to respond to the current message, it should store the previous messages in the agent state.

- async on_messages_stream(messages: Sequence[BaseChatMessage], cancellation_token: CancellationToken) AsyncGenerator[BaseAgentEvent | BaseChatMessage | Response, None] [source]#
Handles incoming messages and returns a stream of messages, with the final item being the response. The base implementation in BaseChatAgent simply calls on_messages() and yields the messages in the response.

Note

The agent is stateful, and the messages passed to this method should be the new messages since the last call to this method. The agent should maintain its state between calls to this method. For example, if the agent needs to remember the previous messages to respond to the current message, it should store the previous messages in the agent state.

- async on_reset(cancellation_token: CancellationToken) None [source]#
Resets the agent to its initialization state.

- property produced_message_types: Sequence[type[BaseChatMessage]]#
The types of messages that the agent produces in the Response.chat_message field. They must be BaseChatMessage types.
- class UserProxyAgent(name: str, *, description: str = 'A human user', input_func: Callable[[str], str] | Callable[[str, CancellationToken | None], Awaitable[str]] | None = None)[source]#
Bases: BaseChatAgent, Component[UserProxyAgentConfig]

An agent that can represent a human user through an input function.

This agent can be used to represent a human user in a chat system by providing a custom input function.

Note

Using UserProxyAgent puts a running team in a temporary blocked state until the user responds. So it is important to time out the user input function and cancel via the CancellationToken if the user does not respond. The input function should also handle exceptions and return a default response if needed.

For typical use cases that involve slow human responses, it is recommended to use termination conditions such as HandoffTermination or SourceMatchTermination to stop the running team and return control to the application. You can run the team again after the user input is obtained. This way, the state of the team can be saved and restored when the user responds. See Human-in-the-loop for more information.
- Parameters:
name (str) -- The name of the agent.
description (str, optional) -- The description of the agent.
input_func (Callable[[str], str] | Callable[[str, CancellationToken | None], Awaitable[str]] | None, optional) -- A function that takes a prompt and returns the user's input; it may be synchronous or asynchronous.

For examples of integrating with web and UI frameworks, see:
- Example:
Simple usage case:
import asyncio

from autogen_core import CancellationToken
from autogen_agentchat.agents import UserProxyAgent
from autogen_agentchat.messages import TextMessage


async def simple_user_agent():
    agent = UserProxyAgent("user_proxy")
    response = await asyncio.create_task(
        agent.on_messages(
            [TextMessage(content="What is your name? ", source="user")],
            cancellation_token=CancellationToken(),
        )
    )
    assert isinstance(response.chat_message, TextMessage)
    print(f"Your name is {response.chat_message.content}")
- Example:
Cancellable usage case:
import asyncio
from typing import Any

from autogen_core import CancellationToken
from autogen_agentchat.agents import UserProxyAgent
from autogen_agentchat.messages import TextMessage

token = CancellationToken()
agent = UserProxyAgent("user_proxy")


async def timeout(delay: float):
    await asyncio.sleep(delay)


def cancellation_callback(task: asyncio.Task[Any]):
    token.cancel()


async def cancellable_user_agent():
    try:
        timeout_task = asyncio.create_task(timeout(3))
        timeout_task.add_done_callback(cancellation_callback)
        agent_task = asyncio.create_task(
            agent.on_messages(
                [TextMessage(content="What is your name? ", source="user")],
                cancellation_token=token,
            )
        )
        response = await agent_task
        assert isinstance(response.chat_message, TextMessage)
        print(f"Your name is {response.chat_message.content}")
    except Exception as e:
        print(f"Exception: {e}")
    except BaseException as e:
        print(f"BaseException: {e}")
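For illustration, here is a minimal sketch of a custom asynchronous input function passed via input_func. The web_input function and its canned answer are assumptions made for this example; a real application would typically await a response from a web UI or a message queue instead:

import asyncio

from autogen_agentchat.agents import UserProxyAgent
from autogen_core import CancellationToken


async def web_input(prompt: str, cancellation_token: CancellationToken | None = None) -> str:
    # Stand-in for a real UI call-out: wait briefly, then return a canned answer.
    await asyncio.sleep(0.1)
    return "Alice"


async def main() -> None:
    agent = UserProxyAgent("user_proxy", input_func=web_input)
    result = await agent.run(task="What is your name?")
    print(result.messages[-1].content)


asyncio.run(main())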
- classmethod _from_config(config: UserProxyAgentConfig) Self [source]#
Create a new instance of the component from a configuration object.

- Parameters:
config (T) -- The configuration object.

- Returns:
Self -- The new instance of the component.

- component_config_schema#
alias of UserProxyAgentConfig

- component_provider_override: ClassVar[str | None] = 'autogen_agentchat.agents.UserProxyAgent'#
Override the provider string for the component. This should be used to prevent internal module names from being part of the module name.

- component_type: ClassVar[ComponentType] = 'agent'#
The logical type of the component.
- async on_messages(messages: Sequence[BaseChatMessage], cancellation_token: CancellationToken) Response [source]#
Handles incoming messages and returns a response.

Note

The agent is stateful, and the messages passed to this method should be the new messages since the last call to this method. The agent should maintain its state between calls to this method. For example, if the agent needs to remember the previous messages to respond to the current message, it should store the previous messages in the agent state.

- async on_messages_stream(messages: Sequence[BaseChatMessage], cancellation_token: CancellationToken) AsyncGenerator[BaseAgentEvent | BaseChatMessage | Response, None] [source]#
Handles incoming messages by requesting user input.

- async on_reset(cancellation_token: CancellationToken | None = None) None [source]#
Resets the agent state.

- property produced_message_types: Sequence[type[BaseChatMessage]]#
The types of messages this agent can produce.