Model Clients#

AutoGen provides a suite of built-in model clients for using the ChatCompletion API. All model clients implement the ChatCompletionClient protocol class.

Currently we support the following built-in model clients:

For more information on how to use these model clients, please refer to each client's documentation.
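The benefit of a shared protocol class is that application code can be written against the interface rather than any concrete client. A minimal stdlib-only sketch of the idea (the `CompletionClient`, `FakeClient`, and `ask` names here are hypothetical illustrations, not part of AutoGen):

```python
from typing import Protocol


class CompletionClient(Protocol):
    """Hypothetical stand-in for an interface like ChatCompletionClient."""

    def create(self, prompt: str) -> str: ...


class FakeClient:
    """Any class with a matching create() method satisfies the protocol."""

    def create(self, prompt: str) -> str:
        return f"echo: {prompt}"


def ask(client: CompletionClient, prompt: str) -> str:
    # Works with any client that implements the protocol,
    # so clients can be swapped without changing this code.
    return client.create(prompt)


print(ask(FakeClient(), "hello"))  # → echo: hello
```

Swapping in a different backend then only requires providing another class with the same `create()` signature.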

Logging Model Calls#

AutoGen uses the standard Python logging module to log events such as model calls and responses. The logger name is autogen_core.EVENT_LOGGER_NAME, and the event type is LLMCall.

import logging

from autogen_core import EVENT_LOGGER_NAME

logging.basicConfig(level=logging.WARNING)
logger = logging.getLogger(EVENT_LOGGER_NAME)
logger.addHandler(logging.StreamHandler())
logger.setLevel(logging.INFO)
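If you want to route these events somewhere other than the console, you can attach your own handler instead of a StreamHandler. A stdlib-only sketch that captures records in memory (the logger name here is illustrative; with AutoGen you would pass EVENT_LOGGER_NAME):

```python
import logging


class ListHandler(logging.Handler):
    """Collects log records in a list for later inspection."""

    def __init__(self) -> None:
        super().__init__()
        self.records: list[logging.LogRecord] = []

    def emit(self, record: logging.LogRecord) -> None:
        self.records.append(record)


# Illustrative logger name; use EVENT_LOGGER_NAME with AutoGen.
logger = logging.getLogger("autogen_core.events")
logger.setLevel(logging.INFO)
handler = ListHandler()
logger.addHandler(handler)

logger.info("LLMCall event")
print(len(handler.records))  # → 1
```

The captured records can then be filtered, serialized, or forwarded to your own telemetry pipeline.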

Calling the Model Client#

To call a model client, use the create() method. This example uses the OpenAIChatCompletionClient to call an OpenAI model.

from autogen_core.models import UserMessage
from autogen_ext.models.openai import OpenAIChatCompletionClient

model_client = OpenAIChatCompletionClient(
    model="gpt-4", temperature=0.3
)  # Assumes OPENAI_API_KEY is set in the environment variables.

result = await model_client.create([UserMessage(content="What is the capital of France?", source="user")])
print(result)
finish_reason='stop' content='The capital of France is Paris.' usage=RequestUsage(prompt_tokens=15, completion_tokens=8) cached=False logprobs=None thought=None

Streaming Tokens#

You can use the create_stream() method to create a chat completion request with streaming token chunks.

from autogen_core.models import CreateResult, UserMessage
from autogen_ext.models.openai import OpenAIChatCompletionClient

model_client = OpenAIChatCompletionClient(model="gpt-4o")  # Assumes OPENAI_API_KEY is set in the environment variables.

messages = [
    UserMessage(content="Write a very short story about a dragon.", source="user"),
]

# Create a stream.
stream = model_client.create_stream(messages=messages)

# Iterate over the stream and print the response content.
print("Streamed responses:")
async for chunk in stream:  # type: ignore
    if isinstance(chunk, str):
        # The chunk is a string.
        print(chunk, flush=True, end="")
    else:
        # The final chunk is a CreateResult object.
        assert isinstance(chunk, CreateResult) and isinstance(chunk.content, str)
        # The last response is a CreateResult object containing the complete message.
        print("\n\n------------\n")
        print("The complete response:", flush=True)
        print(chunk.content, flush=True)
Streamed responses:
In the heart of an ancient forest, beneath the shadow of snow-capped peaks, a dragon named Elara lived secretly for centuries. Elara was unlike any dragon from the old tales; her scales shimmered with a deep emerald hue, each scale engraved with symbols of lost wisdom. The villagers in the nearby valley spoke of mysterious lights dancing across the night sky, but none dared venture close enough to solve the enigma.

One cold winter's eve, a young girl named Lira, brimming with curiosity and armed with the innocence of youth, wandered into Elara’s domain. Instead of fire and fury, she found warmth and a gentle gaze. The dragon shared stories of a world long forgotten and in return, Lira gifted her simple stories of human life, rich in laughter and scent of earth.

From that night on, the villagers noticed subtle changes—the crops grew taller, and the air seemed sweeter. Elara had infused the valley with ancient magic, a guardian of balance, watching quietly as her new friend thrived under the stars. And so, Lira and Elara’s bond marked the beginning of a timeless friendship that spun tales of hope whispered through the leaves of the ever-verdant forest.

------------

The complete response:
In the heart of an ancient forest, beneath the shadow of snow-capped peaks, a dragon named Elara lived secretly for centuries. Elara was unlike any dragon from the old tales; her scales shimmered with a deep emerald hue, each scale engraved with symbols of lost wisdom. The villagers in the nearby valley spoke of mysterious lights dancing across the night sky, but none dared venture close enough to solve the enigma.

One cold winter's eve, a young girl named Lira, brimming with curiosity and armed with the innocence of youth, wandered into Elara’s domain. Instead of fire and fury, she found warmth and a gentle gaze. The dragon shared stories of a world long forgotten and in return, Lira gifted her simple stories of human life, rich in laughter and scent of earth.

From that night on, the villagers noticed subtle changes—the crops grew taller, and the air seemed sweeter. Elara had infused the valley with ancient magic, a guardian of balance, watching quietly as her new friend thrived under the stars. And so, Lira and Elara’s bond marked the beginning of a timeless friendship that spun tales of hope whispered through the leaves of the ever-verdant forest.


------------

The token usage was:
RequestUsage(prompt_tokens=0, completion_tokens=0)

Note

The last response in a streaming response is always the final response, of type CreateResult.
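This string-chunks-then-final-result pattern can be sketched with a plain async generator (FakeResult and fake_stream here are hypothetical stand-ins for CreateResult and create_stream):

```python
import asyncio
from dataclasses import dataclass
from typing import AsyncIterator, Union


@dataclass
class FakeResult:
    """Hypothetical stand-in for the final CreateResult."""

    content: str


async def fake_stream() -> AsyncIterator[Union[str, FakeResult]]:
    parts = ["Once ", "upon ", "a time."]
    for part in parts:
        yield part  # intermediate chunks are strings
    yield FakeResult(content="".join(parts))  # last item carries the full message


async def consume() -> str:
    final = ""
    async for chunk in fake_stream():
        if isinstance(chunk, str):
            print(chunk, end="")
        else:
            final = chunk.content
    return final


result = asyncio.run(consume())
print()
```

The consumer only needs an isinstance check to distinguish intermediate chunks from the final result, which is exactly how the create_stream() example above is structured.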

Note

The default usage response returns zero values. To enable usage accounting, see create_stream() for more details.

Structured Output#

Structured output can be enabled by setting the response_format field to a Pydantic BaseModel class in OpenAIChatCompletionClient and AzureOpenAIChatCompletionClient.

Note

Structured output is only available for models that support it. It also requires the model client to support structured output. Currently, OpenAIChatCompletionClient and AzureOpenAIChatCompletionClient support structured output.

from typing import Literal

from pydantic import BaseModel


# The response format for the agent as a Pydantic base model.
class AgentResponse(BaseModel):
    thoughts: str
    response: Literal["happy", "sad", "neutral"]


# Create an agent that uses the OpenAI GPT-4o model with the custom response format.
model_client = OpenAIChatCompletionClient(
    model="gpt-4o",
    response_format=AgentResponse,  # type: ignore
)

# Send a message list to the model and await the response.
messages = [
    UserMessage(content="I am happy.", source="user"),
]
response = await model_client.create(messages=messages)
assert isinstance(response.content, str)
parsed_response = AgentResponse.model_validate_json(response.content)
print(parsed_response.thoughts)
print(parsed_response.response)

# Close the connection to the model client.
await model_client.close()
I'm glad to hear that you're feeling happy! It's such a great emotion that can brighten your whole day. Is there anything in particular that's bringing you joy today? 😊
happy

You can also set the response_format field via the extra_create_args parameter in the create() method to configure structured output for each request.

Caching Model Responses#

autogen_ext implements ChatCompletionCache, which can wrap any ChatCompletionClient. Using this wrapper avoids incurring token usage when querying the underlying client repeatedly with the same prompt.

ChatCompletionCache uses a CacheStore protocol. We have implemented some useful CacheStore variants, including DiskCacheStore and RedisStore.

Here is an example of using diskcache for local caching:

# pip install -U "autogen-ext[openai, diskcache]"
import asyncio
import tempfile

from autogen_core.models import UserMessage
from autogen_ext.cache_store.diskcache import DiskCacheStore
from autogen_ext.models.cache import CHAT_CACHE_VALUE_TYPE, ChatCompletionCache
from autogen_ext.models.openai import OpenAIChatCompletionClient
from diskcache import Cache


async def main() -> None:
    with tempfile.TemporaryDirectory() as tmpdirname:
        # Initialize the original client.
        openai_model_client = OpenAIChatCompletionClient(model="gpt-4o")

        # Then initialize the CacheStore, in this case with diskcache.Cache.
        # You can also use redis like:
        # from autogen_ext.cache_store.redis import RedisStore
        # import redis
        # redis_instance = redis.Redis()
        # cache_store = RedisStore[CHAT_CACHE_VALUE_TYPE](redis_instance)
        cache_store = DiskCacheStore[CHAT_CACHE_VALUE_TYPE](Cache(tmpdirname))
        cache_client = ChatCompletionCache(openai_model_client, cache_store)

        response = await cache_client.create([UserMessage(content="Hello, how are you?", source="user")])
        print(response)  # Should print the response from OpenAI.
        response = await cache_client.create([UserMessage(content="Hello, how are you?", source="user")])
        print(response)  # Should print the cached response.

        await openai_model_client.close()
        await cache_client.close()


asyncio.run(main())
True

Inspecting cached_client.total_usage() (or model_client.total_usage()) before and after a cached response should yield identical counts.

Note that the caching is sensitive to the exact arguments provided to cached_client.create or cached_client.create_stream, so changing the tools or json_output arguments may lead to a cache miss.
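This sensitivity to exact arguments can be illustrated with a toy cache keyed on a serialization of the call arguments (an illustration of the idea only, not ChatCompletionCache's actual implementation):

```python
import json


class ToyCache:
    """Caches results keyed on the exact (serialized) call arguments."""

    def __init__(self) -> None:
        self._store: dict[str, str] = {}
        self.hits = 0
        self.misses = 0

    def create(self, prompt: str, json_output: bool = False) -> str:
        # The key includes every argument, so any change produces a new key.
        key = json.dumps({"prompt": prompt, "json_output": json_output}, sort_keys=True)
        if key in self._store:
            self.hits += 1
            return self._store[key]
        self.misses += 1
        result = f"response to {prompt!r}"  # stand-in for a real model call
        self._store[key] = result
        return result


cache = ToyCache()
cache.create("Hello")                    # miss: first call
cache.create("Hello")                    # hit: identical arguments
cache.create("Hello", json_output=True)  # miss: an argument changed
print(cache.hits, cache.misses)  # → 1 2
```

Because the key covers all arguments, even an argument that does not change the prompt text, such as json_output here, invalidates the cached entry.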

Building an Agent with a Model Client#

Let's create a simple AI agent that can respond to messages using the ChatCompletion API.

from dataclasses import dataclass

from autogen_core import MessageContext, RoutedAgent, SingleThreadedAgentRuntime, message_handler
from autogen_core.models import ChatCompletionClient, SystemMessage, UserMessage
from autogen_ext.models.openai import OpenAIChatCompletionClient


@dataclass
class Message:
    content: str


class SimpleAgent(RoutedAgent):
    def __init__(self, model_client: ChatCompletionClient) -> None:
        super().__init__("A simple agent")
        self._system_messages = [SystemMessage(content="You are a helpful AI assistant.")]
        self._model_client = model_client

    @message_handler
    async def handle_user_message(self, message: Message, ctx: MessageContext) -> Message:
        # Prepare input to the chat completion model.
        user_message = UserMessage(content=message.content, source="user")
        response = await self._model_client.create(
            self._system_messages + [user_message], cancellation_token=ctx.cancellation_token
        )
        # Return the model's response.
        assert isinstance(response.content, str)
        return Message(content=response.content)

The SimpleAgent class is a subclass of autogen_core.RoutedAgent, which conveniently routes messages to the appropriate handlers automatically. It has a single handler, handle_user_message, which handles messages from the user. It uses the ChatCompletionClient to generate a response to the message, then returns the response to the user, following the direct communication model.
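The type-based routing that RoutedAgent provides can be sketched with a plain handler table (a toy illustration, not AutoGen's actual dispatch mechanism):

```python
from dataclasses import dataclass
from typing import Any, Callable, Dict, Type


@dataclass
class TextMessage:
    content: str


@dataclass
class ImageMessage:
    url: str


class ToyRoutedAgent:
    """Routes each incoming message to a handler based on its type."""

    def __init__(self) -> None:
        # Maps message types to their handlers; a decorator like
        # @message_handler could build this table automatically.
        self._handlers: Dict[Type[Any], Callable[[Any], str]] = {
            TextMessage: self._handle_text,
            ImageMessage: self._handle_image,
        }

    def on_message(self, message: Any) -> str:
        return self._handlers[type(message)](message)

    def _handle_text(self, message: TextMessage) -> str:
        return f"text: {message.content}"

    def _handle_image(self, message: ImageMessage) -> str:
        return f"image: {message.url}"


agent = ToyRoutedAgent()
print(agent.on_message(TextMessage(content="hi")))  # → text: hi
```

In AutoGen the @message_handler decorator plays the role of the handler table: each handler's type annotation determines which messages it receives.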

Note

The cancellation_token, of type autogen_core.CancellationToken, is used to cancel asynchronous operations. It is linked to async calls inside the message handler, and the caller can use it to cancel the handler.

# Create the runtime and register the agent.
from autogen_core import AgentId

model_client = OpenAIChatCompletionClient(
    model="gpt-4o-mini",
    # api_key="sk-...", # Optional if you have an OPENAI_API_KEY set in the environment.
)

runtime = SingleThreadedAgentRuntime()
await SimpleAgent.register(
    runtime,
    "simple_agent",
    lambda: SimpleAgent(model_client=model_client),
)
# Start the runtime processing messages.
runtime.start()
# Send a message to the agent and get the response.
message = Message("Hello, what are some fun things to do in Seattle?")
response = await runtime.send_message(message, AgentId("simple_agent", "default"))
print(response.content)
# Stop the runtime processing messages.
await runtime.stop()
await model_client.close()
Seattle is a vibrant city with a wide range of activities and attractions. Here are some fun things to do in Seattle:

1. **Space Needle**: Visit this iconic observation tower for stunning views of the city and surrounding mountains.

2. **Pike Place Market**: Explore this historic market where you can see the famous fish toss, buy local produce, and find unique crafts and eateries.

3. **Museum of Pop Culture (MoPOP)**: Dive into the world of contemporary culture, music, and science fiction at this interactive museum.

4. **Chihuly Garden and Glass**: Marvel at the beautiful glass art installations by artist Dale Chihuly, located right next to the Space Needle.

5. **Seattle Aquarium**: Discover the diverse marine life of the Pacific Northwest at this engaging aquarium.

6. **Seattle Art Museum**: Explore a vast collection of art from around the world, including contemporary and indigenous art.

7. **Kerry Park**: For one of the best views of the Seattle skyline, head to this small park on Queen Anne Hill.

8. **Ballard Locks**: Watch boats pass through the locks and observe the salmon ladder to see salmon migrating.

9. **Ferry to Bainbridge Island**: Take a scenic ferry ride across Puget Sound to enjoy charming shops, restaurants, and beautiful natural scenery.

10. **Olympic Sculpture Park**: Stroll through this outdoor park with large-scale sculptures and stunning views of the waterfront and mountains.

11. **Underground Tour**: Discover Seattle's history on this quirky tour of the city's underground passageways in Pioneer Square.

12. **Seattle Waterfront**: Enjoy the shops, restaurants, and attractions along the waterfront, including the Seattle Great Wheel and the aquarium.

13. **Discovery Park**: Explore the largest green space in Seattle, featuring trails, beaches, and views of Puget Sound.

14. **Food Tours**: Try out Seattle’s diverse culinary scene, including fresh seafood, international cuisines, and coffee culture (don’t miss the original Starbucks!).

15. **Attend a Sports Game**: Catch a Seahawks (NFL), Mariners (MLB), or Sounders (MLS) game for a lively local experience.

Whether you're interested in culture, nature, food, or history, Seattle has something for everyone to enjoy!

The SimpleAgent above always responds with a fresh context that contains only the system message and the latest user message. We can use model context classes from autogen_core.model_context to make the agent "remember" previous conversations. See the Model Context page for more details.
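The "remember" idea can be sketched as a small buffered context that always prepends the system message and keeps only the most recent turns (a toy illustration; see autogen_core.model_context for the real classes, such as BufferedChatCompletionContext):

```python
from collections import deque


class ToyBufferedContext:
    """Keeps the system message plus the last `buffer_size` turns."""

    def __init__(self, system_message: str, buffer_size: int = 4) -> None:
        self._system_message = system_message
        # deque with maxlen silently drops the oldest turn when full.
        self._messages: deque = deque(maxlen=buffer_size)

    def add(self, message: str) -> None:
        self._messages.append(message)

    def get_messages(self) -> list:
        # The system message is always first, followed by the buffered turns.
        return [self._system_message, *self._messages]


ctx = ToyBufferedContext("You are a helpful AI assistant.", buffer_size=2)
for turn in ["turn 1", "turn 2", "turn 3"]:
    ctx.add(turn)
print(ctx.get_messages())  # oldest turn has been dropped
```

A handler would then call something like ctx.add(...) for each message and pass ctx.get_messages() to create(), instead of building the list from scratch.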

API Keys from Environment Variables#

In the examples above, we showed that you can provide the API key through the api_key argument. Importantly, the OpenAI and Azure OpenAI clients use the openai package, which will automatically read the API key from an environment variable if one is not provided.

  • For OpenAI, you can set the OPENAI_API_KEY environment variable.

  • For Azure OpenAI, you can set the AZURE_OPENAI_API_KEY environment variable.

In addition, for Gemini (Beta), you can set the GEMINI_API_KEY environment variable.

This is a good practice to adopt, as it avoids including sensitive API keys in your code.
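If you read the key yourself, a common pattern is to fail fast with a clear error when it is missing rather than letting a request fail later (a sketch; the variable name follows the OpenAI convention above, and the assignment is for demonstration only):

```python
import os


def get_api_key(env_var: str = "OPENAI_API_KEY") -> str:
    """Reads an API key from the environment, raising a clear error if absent."""
    key = os.environ.get(env_var)
    if not key:
        raise RuntimeError(f"{env_var} is not set; export it before running.")
    return key


os.environ["OPENAI_API_KEY"] = "sk-example"  # demonstration only; never hardcode real keys
print(get_api_key()[:3])  # → sk-
```

In a real deployment the variable would be set in the shell or secret manager, and the hardcoded demonstration line above would not exist.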