autogen_ext.models.openai#
- class OpenAIChatCompletionClient(**kwargs: Unpack)[source]#
Bases:
BaseOpenAIChatCompletionClient, Component[OpenAIClientConfigurationConfigModel]
Chat completion client for OpenAI-hosted models.
To use this client, you must install the `openai` extension package:
pip install "autogen-ext[openai]"
This client can also be used with OpenAI-compatible ChatCompletion endpoints. Using this client with non-OpenAI models is untested and not guaranteed to work.
For non-OpenAI models, please first take a look at our `community extensions <https://microsoft.github.io/autogen/dev/user-guide/extensions-user-guide/index.html>`_ for additional model clients.
- Args:
    model (str): Which OpenAI model to use.
    api_key (optional, str): The API key to use. Required if 'OPENAI_API_KEY' is not found in the environment variables.
    organization (optional, str): The organization ID to use.
    base_url (optional, str): The base URL to use. Required if the model is not hosted on OpenAI.
    timeout (optional, float): The timeout for the request in seconds.
    max_retries (optional, int): The maximum number of retries to attempt.
    model_info (optional, ModelInfo): The capabilities of the model. Required if the model name is not a valid OpenAI model.
    frequency_penalty (optional, float):
    logit_bias (optional, dict[str, int]):
    max_tokens (optional, int):
    n (optional, int):
    presence_penalty (optional, float):
    response_format (optional, Dict[str, Any]): The response format. Possible options are:

        # Text response, this is the default.
        {"type": "text"}

        # JSON response, make sure to instruct the model to return JSON.
        {"type": "json_object"}

        # Structured output response, with a pre-defined JSON schema.
        {
            "type": "json_schema",
            "json_schema": {
                "name": "name of the schema, must be an identifier",
                "description": "description for the model",
                # You can convert a Pydantic (v2) model to a JSON schema
                # using its `model_json_schema()` method.
                "schema": "<the JSON schema itself>",
                # Whether to enable strict schema adherence when
                # generating the output. If set to true, the model will
                # always follow the exact schema defined in the `schema`
                # field. Only a subset of JSON Schema is supported.
                # See https://platform.openai.com/docs/guides/structured-outputs
                "strict": False,  # or True
            },
        }

        For structured output, it is recommended to use the `json_output` parameter of the create_stream() method instead of `response_format`. The `json_output` parameter is more flexible and lets you specify a Pydantic model class directly.
    seed (optional, int):
    stop (optional, str | List[str]):
    temperature (optional, float):
    top_p (optional, float):
    user (optional, str):
    default_headers (optional, dict[str, str]): Custom headers; useful for authentication or other custom requirements.
    add_name_prefixes (optional, bool): Whether to prepend the `source` value to the content of each :class:`~autogen_core.models.UserMessage`. E.g., "this is content" becomes "Reviewer said: this is content." This can be useful for models that do not support the `name` field in messages. Defaults to False.
    stream_options (optional, dict): Additional options for streaming. Currently only `include_usage` is supported.
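As a supplementary sketch (the `MovieReview` model is illustrative, not part of this API): a structured-output `response_format` dict can be built from a Pydantic (v2) model via its `model_json_schema()` method, as noted in the comments above:

```python
from pydantic import BaseModel


# Hypothetical output model, for illustration only.
class MovieReview(BaseModel):
    title: str
    rating: int


# Build the response_format dict from the model's JSON schema.
response_format = {
    "type": "json_schema",
    "json_schema": {
        "name": "MovieReview",  # must be an identifier
        "description": "A structured movie review.",
        "schema": MovieReview.model_json_schema(),
        "strict": False,
    },
}
```

In practice, prefer passing the model class itself via the `json_output` parameter of create() or create_stream(), as recommended above.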
Examples:
The following code snippet shows how to use the OpenAI model client:

from autogen_ext.models.openai import OpenAIChatCompletionClient
from autogen_core.models import UserMessage

openai_client = OpenAIChatCompletionClient(
    model="gpt-4o-2024-08-06",
    # api_key="sk-...", # Optional if the OPENAI_API_KEY environment variable is set.
)

result = await openai_client.create([UserMessage(content="What is the capital of France?", source="user")])  # type: ignore
print(result)

# Close the client when done.
# await openai_client.close()
To use the client with a non-OpenAI model, you need to provide the base URL of the model and the model info. For example, to use Ollama:

from autogen_ext.models.openai import OpenAIChatCompletionClient
from autogen_core.models import ModelFamily

custom_model_client = OpenAIChatCompletionClient(
    model="deepseek-r1:1.5b",
    base_url="http://localhost:11434/v1",
    api_key="placeholder",
    model_info={
        "vision": False,
        "function_calling": False,
        "json_output": False,
        "family": ModelFamily.R1,
        "structured_output": True,
    },
)

# Close the client when done.
# await custom_model_client.close()
To use streaming mode:

import asyncio
from autogen_core.models import UserMessage
from autogen_ext.models.openai import OpenAIChatCompletionClient


async def main() -> None:
    # Similar for AzureOpenAIChatCompletionClient.
    model_client = OpenAIChatCompletionClient(model="gpt-4o")  # assuming OPENAI_API_KEY is set in the environment.

    messages = [UserMessage(content="Write a very short story about a dragon.", source="user")]

    # Create a stream.
    stream = model_client.create_stream(messages=messages)

    # Iterate over the stream and print the responses.
    print("Streamed responses:")
    async for response in stream:
        if isinstance(response, str):
            # A partial response is a string.
            print(response, flush=True, end="")
        else:
            # The last response is a CreateResult object with the complete message.
            print("\n\n------------\n")
            print("The complete response:", flush=True)
            print(response.content, flush=True)

    # Close the client when done.
    await model_client.close()


asyncio.run(main())
To use structured output as well as function calling:

import asyncio
from typing import Literal

from autogen_core.models import (
    AssistantMessage,
    FunctionExecutionResult,
    FunctionExecutionResultMessage,
    SystemMessage,
    UserMessage,
)
from autogen_core.tools import FunctionTool
from autogen_ext.models.openai import OpenAIChatCompletionClient
from pydantic import BaseModel


# Define the structured output format.
class AgentResponse(BaseModel):
    thoughts: str
    response: Literal["happy", "sad", "neutral"]


# Define the function to be called as a tool.
def sentiment_analysis(text: str) -> str:
    """Given a text, return the sentiment."""
    return "happy" if "happy" in text else "sad" if "sad" in text else "neutral"


# Create a FunctionTool instance. `strict=True` is required
# for structured output mode.
tool = FunctionTool(sentiment_analysis, description="Sentiment Analysis", strict=True)


async def main() -> None:
    # Create an OpenAIChatCompletionClient instance.
    model_client = OpenAIChatCompletionClient(model="gpt-4o-mini")

    # Generate a response using the tool.
    response1 = await model_client.create(
        messages=[
            SystemMessage(content="Analyze input text sentiment using the tool provided."),
            UserMessage(content="I am happy.", source="user"),
        ],
        tools=[tool],
    )
    print(response1.content)
    # Should be a list of tool calls.
    # [FunctionCall(name="sentiment_analysis", arguments={"text": "I am happy."}, ...)]

    assert isinstance(response1.content, list)
    response2 = await model_client.create(
        messages=[
            SystemMessage(content="Analyze input text sentiment using the tool provided."),
            UserMessage(content="I am happy.", source="user"),
            AssistantMessage(content=response1.content, source="assistant"),
            FunctionExecutionResultMessage(
                content=[
                    FunctionExecutionResult(
                        content="happy",
                        call_id=response1.content[0].id,
                        is_error=False,
                        name="sentiment_analysis",
                    )
                ]
            ),
        ],
        # Use the structured output format.
        json_output=AgentResponse,
    )
    print(response2.content)
    # Should be a structured output.
    # {"thoughts": "The user is happy.", "response": "happy"}

    # Close the client when done.
    await model_client.close()


asyncio.run(main())
To load the client from a configuration:

from autogen_core.models import ChatCompletionClient

config = {
    "provider": "OpenAIChatCompletionClient",
    "config": {"model": "gpt-4o", "api_key": "REPLACE_WITH_YOUR_API_KEY"},
}

client = ChatCompletionClient.load_component(config)

To view the full list of available configuration options, see the :py:class:`OpenAIClientConfigurationConfigModel` class.
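The configuration shown above is plain, JSON-serializable data, so it can be stored in and loaded from a file before being handed to `load_component`. A minimal sketch of the round-trip (no autogen import is assumed here):

```python
import json

# The same shape of configuration as shown above.
config = {
    "provider": "OpenAIChatCompletionClient",
    "config": {"model": "gpt-4o", "api_key": "REPLACE_WITH_YOUR_API_KEY"},
}

# Round-trip through JSON: the config survives serialization unchanged,
# so it can be kept in a .json file and loaded at startup.
restored = json.loads(json.dumps(config))
assert restored == config
```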
- component_type: ClassVar[ComponentType] = 'model'#
The logical type of the component.
- component_config_schema#
- component_provider_override: ClassVar[str | None] = 'autogen_ext.models.openai.OpenAIChatCompletionClient'#
Override the provider string for the component. This should be used to prevent internal module names from being part of the module name.
- _to_config() OpenAIClientConfigurationConfigModel [source]#
Dump the configuration of the current component instance, which can be used to create a new component instance with the same configuration.
- Returns:
T -- The configuration of the component.
- classmethod _from_config(config: OpenAIClientConfigurationConfigModel) Self [source]#
Create a new instance of the component from a configuration object.
- Args:
config (T) -- The configuration object.
- Returns:
Self -- The new instance of the component.
- class AzureOpenAIChatCompletionClient(**kwargs: Unpack)[source]#
Bases:
BaseOpenAIChatCompletionClient, Component[AzureOpenAIClientConfigurationConfigModel]
Chat completion client for Azure OpenAI-hosted models.
To use this client, you must install the `azure` and `openai` extension packages:
pip install "autogen-ext[openai,azure]"
Args:
    model (str): Which OpenAI model to use.
    azure_endpoint (str): The endpoint for the Azure model. Required for Azure models.
    azure_deployment (str): Deployment name for the Azure model. Required for Azure models.
    api_version (str): The API version to use. Required for Azure models.
    azure_ad_token (str): The Azure AD token to use. Provide this or `azure_ad_token_provider` for token-based authentication.
    azure_ad_token_provider (optional, Callable[[], Awaitable[str]] | AzureTokenProvider): The Azure AD token provider to use. Provide this or `azure_ad_token` for token-based authentication.
    api_key (optional, str): The API key to use. Required if using key-based authentication. Optional if you are using Azure AD token-based authentication or the `AZURE_OPENAI_API_KEY` environment variable is set.
    timeout (optional, float): The timeout for the request in seconds.
    max_retries (optional, int): The maximum number of retries to attempt.
    model_info (optional, ModelInfo): The capabilities of the model. Required if the model name is not a valid OpenAI model.
    frequency_penalty (optional, float):
    logit_bias (optional, dict[str, int]):
    max_tokens (optional, int):
    n (optional, int):
    presence_penalty (optional, float):
    response_format (optional, Dict[str, Any]): The response format. Possible options are:

        # Text response, this is the default.
        {"type": "text"}

        # JSON response, make sure to instruct the model to return JSON.
        {"type": "json_object"}

        # Structured output response, with a pre-defined JSON schema.
        {
            "type": "json_schema",
            "json_schema": {
                "name": "name of the schema, must be an identifier",
                "description": "description for the model",
                # You can convert a Pydantic (v2) model to a JSON schema
                # using its `model_json_schema()` method.
                "schema": "<the JSON schema itself>",
                # Whether to enable strict schema adherence when
                # generating the output. If set to true, the model will
                # always follow the exact schema defined in the `schema`
                # field. Only a subset of JSON Schema is supported.
                # See https://platform.openai.com/docs/guides/structured-outputs
                "strict": False,  # or True
            },
        }

        For structured output, it is recommended to use the `json_output` parameter of the create_stream() method instead of `response_format`. The `json_output` parameter is more flexible and lets you specify a Pydantic model class directly.
    seed (optional, int):
    stop (optional, str | List[str]):
    temperature (optional, float):
    top_p (optional, float):
    user (optional, str):
    default_headers (optional, dict[str, str]): Custom headers; useful for authentication or other custom requirements.
To use this client, you must provide your deployment name, Azure Cognitive Services endpoint, and API version. For authentication, you can provide either an API key or an Azure Active Directory (AAD) token credential.
The following code snippet shows how to use AAD authentication. The identity used must be assigned the `Cognitive Services OpenAI User <https://learn.microsoft.com/en-us/azure/ai-services/openai/how-to/role-based-access-control#cognitive-services-openai-user>`_ role.
from autogen_ext.auth.azure import AzureTokenProvider
from autogen_ext.models.openai import AzureOpenAIChatCompletionClient
from azure.identity import DefaultAzureCredential

# Create the token provider.
token_provider = AzureTokenProvider(
    DefaultAzureCredential(),
    "https://cognitiveservices.azure.com/.default",
)

az_model_client = AzureOpenAIChatCompletionClient(
    azure_deployment="{your-azure-deployment}",
    model="{model-name, such as gpt-4o}",
    api_version="2024-06-01",
    azure_endpoint="https://{your-custom-endpoint}.openai.azure.com/",
    azure_ad_token_provider=token_provider,  # Optional if you choose key-based authentication.
    # api_key="sk-...", # For key-based authentication.
)
See the :class:`OpenAIChatCompletionClient` class for other usage examples.
To load the client that uses identity-based authentication from a configuration:

from autogen_core.models import ChatCompletionClient

config = {
    "provider": "AzureOpenAIChatCompletionClient",
    "config": {
        "model": "gpt-4o-2024-05-13",
        "azure_endpoint": "https://{your-custom-endpoint}.openai.azure.com/",
        "azure_deployment": "{your-azure-deployment}",
        "api_version": "2024-06-01",
        "azure_ad_token_provider": {
            "provider": "autogen_ext.auth.azure.AzureTokenProvider",
            "config": {
                "provider_kind": "DefaultAzureCredential",
                "scopes": ["https://cognitiveservices.azure.com/.default"],
            },
        },
    },
}

client = ChatCompletionClient.load_component(config)

To view the full list of available configuration options, see the :py:class:`AzureOpenAIClientConfigurationConfigModel` class.
Note
Right now only `DefaultAzureCredential` is supported, with no additional arguments passed to it.
Note
The Azure OpenAI client by default sets the User-Agent header to `autogen-python/{version}`. To override this, set the environment variable `autogen_ext.models.openai.AZURE_OPENAI_USER_AGENT` to an empty string.
To use the Azure client directly, or for more information, see `here <https://learn.microsoft.com/en-us/azure/ai-services/openai/how-to/managed-identity#chat-completions>`_.
- component_type: ClassVar[ComponentType] = 'model'#
The logical type of the component.
- component_config_schema#
- component_provider_override: ClassVar[str | None] = 'autogen_ext.models.openai.AzureOpenAIChatCompletionClient'#
Override the provider string for the component. This should be used to prevent internal module names from being part of the module name.
- _to_config() AzureOpenAIClientConfigurationConfigModel [source]#
Dump the configuration of the current component instance, which can be used to create a new component instance with the same configuration.
- Returns:
T -- The configuration of the component.
- classmethod _from_config(config: AzureOpenAIClientConfigurationConfigModel) Self [source]#
Create a new instance of the component from a configuration object.
- Args:
config (T) -- The configuration object.
- Returns:
Self -- The new instance of the component.
- class BaseOpenAIChatCompletionClient(client: AsyncOpenAI | AsyncAzureOpenAI, *, create_args: Dict[str, Any], model_capabilities: ModelCapabilities | None = None, model_info: ModelInfo | None = None, add_name_prefixes: bool = False)[source]#
- async create(messages: Sequence[Annotated[SystemMessage | UserMessage | AssistantMessage | FunctionExecutionResultMessage, FieldInfo(annotation=NoneType, required=True, discriminator='type')]], *, tools: Sequence[Tool | ToolSchema] = [], json_output: bool | type[BaseModel] | None = None, extra_create_args: Mapping[str, Any] = {}, cancellation_token: CancellationToken | None = None) CreateResult [source]#
Creates a single response from the model.
- Args:
messages (Sequence[LLMMessage]) -- The messages to send to the model.
tools (Sequence[Tool | ToolSchema], optional) -- The tools to use with the model. Defaults to [].
json_output (Optional[bool | type[BaseModel]], optional) -- Whether to use JSON mode, structured output, or neither. Defaults to None. If set to a Pydantic BaseModel type, it will be used as the output type for structured output. If set to a boolean, it will be used to determine whether to use JSON mode or not. If set to True, make sure to instruct the model to produce JSON output in the instruction or prompt.
extra_create_args (Mapping[str, Any], optional) -- Extra arguments to pass to the underlying client. Defaults to {}.
cancellation_token (Optional[CancellationToken], optional) -- A token for cancellation. Defaults to None.
- Returns:
CreateResult -- The result of the model call.
- async create_stream(messages: Sequence[Annotated[SystemMessage | UserMessage | AssistantMessage | FunctionExecutionResultMessage, FieldInfo(annotation=NoneType, required=True, discriminator='type')]], *, tools: Sequence[Tool | ToolSchema] = [], json_output: bool | type[BaseModel] | None = None, extra_create_args: Mapping[str, Any] = {}, cancellation_token: CancellationToken | None = None, max_consecutive_empty_chunk_tolerance: int = 0) AsyncGenerator[str | CreateResult, None] [source]#
Create a stream of string chunks from the model, ending with a CreateResult.
Extends autogen_core.models.ChatCompletionClient.create_stream() to support the OpenAI API.
In streaming, the default behaviour is not to return token usage counts. See: OpenAI API reference for possible args.
You can set extra_create_args={"stream_options": {"include_usage": True}} (if the accessed API supports it) to return a final chunk with its usage set to a RequestUsage object containing the prompt and completion token counts; all preceding chunks will have usage set to `None`. See: OpenAI API reference on stream options.
- Other examples of supported arguments that can be included in extra_create_args:
temperature (float): Controls the randomness of the output. Higher values (e.g., 0.8) make the output more random, while lower values (e.g., 0.2) make it more focused and deterministic.
max_tokens (int): The maximum number of tokens to generate in the completion.
top_p (float): An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass.
frequency_penalty (float): A value between -2.0 and 2.0 that penalizes new tokens based on their existing frequency in the text so far, decreasing the likelihood of repeated phrases.
presence_penalty (float): A value between -2.0 and 2.0 that penalizes new tokens based on whether they appear in the text so far, encouraging the model to talk about new topics.
- actual_usage() RequestUsage [source]#
- total_usage() RequestUsage [source]#
- count_tokens(messages: Sequence[Annotated[SystemMessage | UserMessage | AssistantMessage | FunctionExecutionResultMessage, FieldInfo(annotation=NoneType, required=True, discriminator='type')]], *, tools: Sequence[Tool | ToolSchema] = []) int [source]#
- remaining_tokens(messages: Sequence[Annotated[SystemMessage | UserMessage | AssistantMessage | FunctionExecutionResultMessage, FieldInfo(annotation=NoneType, required=True, discriminator='type')]], *, tools: Sequence[Tool | ToolSchema] = []) int [source]#
- property capabilities: ModelCapabilities#
- pydantic model AzureOpenAIClientConfigurationConfigModel[source]#
Bases:
BaseOpenAIClientConfigurationConfigModel
Show JSON schema
{ "title": "AzureOpenAIClientConfigurationConfigModel", "type": "object", "properties": { "frequency_penalty": { "anyOf": [ { "type": "number" }, { "type": "null" } ], "default": null, "title": "Frequency Penalty" }, "logit_bias": { "anyOf": [ { "additionalProperties": { "type": "integer" }, "type": "object" }, { "type": "null" } ], "default": null, "title": "Logit Bias" }, "max_tokens": { "anyOf": [ { "type": "integer" }, { "type": "null" } ], "default": null, "title": "Max Tokens" }, "n": { "anyOf": [ { "type": "integer" }, { "type": "null" } ], "default": null, "title": "N" }, "presence_penalty": { "anyOf": [ { "type": "number" }, { "type": "null" } ], "default": null, "title": "Presence Penalty" }, "response_format": { "anyOf": [ { "$ref": "#/$defs/ResponseFormat" }, { "type": "null" } ], "default": null }, "seed": { "anyOf": [ { "type": "integer" }, { "type": "null" } ], "default": null, "title": "Seed" }, "stop": { "anyOf": [ { "type": "string" }, { "items": { "type": "string" }, "type": "array" }, { "type": "null" } ], "default": null, "title": "Stop" }, "temperature": { "anyOf": [ { "type": "number" }, { "type": "null" } ], "default": null, "title": "Temperature" }, "top_p": { "anyOf": [ { "type": "number" }, { "type": "null" } ], "default": null, "title": "Top P" }, "user": { "anyOf": [ { "type": "string" }, { "type": "null" } ], "default": null, "title": "User" }, "stream_options": { "anyOf": [ { "$ref": "#/$defs/StreamOptions" }, { "type": "null" } ], "default": null }, "model": { "title": "Model", "type": "string" }, "api_key": { "anyOf": [ { "format": "password", "type": "string", "writeOnly": true }, { "type": "null" } ], "default": null, "title": "Api Key" }, "timeout": { "anyOf": [ { "type": "number" }, { "type": "null" } ], "default": null, "title": "Timeout" }, "max_retries": { "anyOf": [ { "type": "integer" }, { "type": "null" } ], "default": null, "title": "Max Retries" }, "model_capabilities": { "anyOf": [ { "$ref": "#/$defs/ModelCapabilities" 
}, { "type": "null" } ], "default": null }, "model_info": { "anyOf": [ { "$ref": "#/$defs/ModelInfo" }, { "type": "null" } ], "default": null }, "add_name_prefixes": { "anyOf": [ { "type": "boolean" }, { "type": "null" } ], "default": null, "title": "Add Name Prefixes" }, "default_headers": { "anyOf": [ { "additionalProperties": { "type": "string" }, "type": "object" }, { "type": "null" } ], "default": null, "title": "Default Headers" }, "azure_endpoint": { "title": "Azure Endpoint", "type": "string" }, "azure_deployment": { "anyOf": [ { "type": "string" }, { "type": "null" } ], "default": null, "title": "Azure Deployment" }, "api_version": { "title": "Api Version", "type": "string" }, "azure_ad_token": { "anyOf": [ { "type": "string" }, { "type": "null" } ], "default": null, "title": "Azure Ad Token" }, "azure_ad_token_provider": { "anyOf": [ { "$ref": "#/$defs/ComponentModel" }, { "type": "null" } ], "default": null } }, "$defs": { "ComponentModel": { "description": "\u7ec4\u4ef6\u7684\u6a21\u578b\u7c7b\u3002\u5305\u542b\u5b9e\u4f8b\u5316\u7ec4\u4ef6\u6240\u9700\u7684\u5168\u90e8\u4fe1\u606f\u3002", "properties": { "provider": { "title": "Provider", "type": "string" }, "component_type": { "anyOf": [ { "enum": [ "model", "agent", "tool", "termination", "token_provider", "workbench" ], "type": "string" }, { "type": "string" }, { "type": "null" } ], "default": null, "title": "Component Type" }, "version": { "anyOf": [ { "type": "integer" }, { "type": "null" } ], "default": null, "title": "Version" }, "component_version": { "anyOf": [ { "type": "integer" }, { "type": "null" } ], "default": null, "title": "Component Version" }, "description": { "anyOf": [ { "type": "string" }, { "type": "null" } ], "default": null, "title": "Description" }, "label": { "anyOf": [ { "type": "string" }, { "type": "null" } ], "default": null, "title": "Label" }, "config": { "title": "Config", "type": "object" } }, "required": [ "provider", "config" ], "title": "ComponentModel", "type": 
"object" }, "JSONSchema": { "properties": { "name": { "title": "Name", "type": "string" }, "description": { "title": "Description", "type": "string" }, "schema": { "title": "Schema", "type": "object" }, "strict": { "anyOf": [ { "type": "boolean" }, { "type": "null" } ], "title": "Strict" } }, "required": [ "name" ], "title": "JSONSchema", "type": "object" }, "ModelCapabilities": { "deprecated": true, "properties": { "vision": { "title": "Vision", "type": "boolean" }, "function_calling": { "title": "Function Calling", "type": "boolean" }, "json_output": { "title": "Json Output", "type": "boolean" } }, "required": [ "vision", "function_calling", "json_output" ], "title": "ModelCapabilities", "type": "object" }, "ModelInfo": { "description": "ModelInfo\u662f\u4e00\u4e2a\u5305\u542b\u6a21\u578b\u5c5e\u6027\u4fe1\u606f\u7684\u5b57\u5178\u3002\n\u9884\u671f\u7528\u4e8e\u6a21\u578b\u5ba2\u6237\u7aef\u7684model_info\u5c5e\u6027\u4e2d\u3002\n\n\u968f\u7740\u6211\u4eec\u6dfb\u52a0\u66f4\u591a\u529f\u80fd\uff0c\u9884\u8ba1\u8fd9\u4e2a\u7ed3\u6784\u4f1a\u4e0d\u65ad\u6269\u5c55\u3002", "properties": { "vision": { "title": "Vision", "type": "boolean" }, "function_calling": { "title": "Function Calling", "type": "boolean" }, "json_output": { "title": "Json Output", "type": "boolean" }, "family": { "anyOf": [ { "enum": [ "gpt-41", "gpt-45", "gpt-4o", "o1", "o3", "o4", "gpt-4", "gpt-35", "r1", "gemini-1.5-flash", "gemini-1.5-pro", "gemini-2.0-flash", "gemini-2.5-pro", "gemini-2.5-flashclaude-3-haiku", "claude-3-sonnet", "claude-3-opus", "claude-3-5-haiku", "claude-3-5-sonnet", "claude-3-7-sonnet", "llama-3.3-8b", "llama-3.3-70b", "llama-4-scout", "llama-4-maverick", "codestral", "open-codestral-mamba", "mistral", "ministral", "pixtral", "unknown" ], "type": "string" }, { "type": "string" } ], "title": "Family" }, "structured_output": { "title": "Structured Output", "type": "boolean" }, "multiple_system_messages": { "anyOf": [ { "type": "boolean" }, { "type": "null" } ], "title": 
"Multiple System Messages" } }, "required": [ "vision", "function_calling", "json_output", "family", "structured_output" ], "title": "ModelInfo", "type": "object" }, "ResponseFormat": { "properties": { "type": { "enum": [ "text", "json_object", "json_schema" ], "title": "Type", "type": "string" }, "json_schema": { "anyOf": [ { "$ref": "#/$defs/JSONSchema" }, { "type": "null" } ] } }, "required": [ "type", "json_schema" ], "title": "ResponseFormat", "type": "object" }, "StreamOptions": { "properties": { "include_usage": { "title": "Include Usage", "type": "boolean" } }, "required": [ "include_usage" ], "title": "StreamOptions", "type": "object" } }, "required": [ "model", "azure_endpoint", "api_version" ] }
- Fields:
api_version (str)
azure_ad_token (str | None)
azure_ad_token_provider (autogen_core._component_config.ComponentModel | None)
azure_deployment (str | None)
azure_endpoint (str)
- field azure_ad_token_provider: ComponentModel | None = None#
- pydantic model OpenAIClientConfigurationConfigModel[source]#
Bases:
BaseOpenAIClientConfigurationConfigModel
Show JSON schema
{ "title": "OpenAIClientConfigurationConfigModel", "type": "object", "properties": { "frequency_penalty": { "anyOf": [ { "type": "number" }, { "type": "null" } ], "default": null, "title": "Frequency Penalty" }, "logit_bias": { "anyOf": [ { "additionalProperties": { "type": "integer" }, "type": "object" }, { "type": "null" } ], "default": null, "title": "Logit Bias" }, "max_tokens": { "anyOf": [ { "type": "integer" }, { "type": "null" } ], "default": null, "title": "Max Tokens" }, "n": { "anyOf": [ { "type": "integer" }, { "type": "null" } ], "default": null, "title": "N" }, "presence_penalty": { "anyOf": [ { "type": "number" }, { "type": "null" } ], "default": null, "title": "Presence Penalty" }, "response_format": { "anyOf": [ { "$ref": "#/$defs/ResponseFormat" }, { "type": "null" } ], "default": null }, "seed": { "anyOf": [ { "type": "integer" }, { "type": "null" } ], "default": null, "title": "Seed" }, "stop": { "anyOf": [ { "type": "string" }, { "items": { "type": "string" }, "type": "array" }, { "type": "null" } ], "default": null, "title": "Stop" }, "temperature": { "anyOf": [ { "type": "number" }, { "type": "null" } ], "default": null, "title": "Temperature" }, "top_p": { "anyOf": [ { "type": "number" }, { "type": "null" } ], "default": null, "title": "Top P" }, "user": { "anyOf": [ { "type": "string" }, { "type": "null" } ], "default": null, "title": "User" }, "stream_options": { "anyOf": [ { "$ref": "#/$defs/StreamOptions" }, { "type": "null" } ], "default": null }, "model": { "title": "Model", "type": "string" }, "api_key": { "anyOf": [ { "format": "password", "type": "string", "writeOnly": true }, { "type": "null" } ], "default": null, "title": "Api Key" }, "timeout": { "anyOf": [ { "type": "number" }, { "type": "null" } ], "default": null, "title": "Timeout" }, "max_retries": { "anyOf": [ { "type": "integer" }, { "type": "null" } ], "default": null, "title": "Max Retries" }, "model_capabilities": { "anyOf": [ { "$ref": "#/$defs/ModelCapabilities" }, { 
"type": "null" } ], "default": null }, "model_info": { "anyOf": [ { "$ref": "#/$defs/ModelInfo" }, { "type": "null" } ], "default": null }, "add_name_prefixes": { "anyOf": [ { "type": "boolean" }, { "type": "null" } ], "default": null, "title": "Add Name Prefixes" }, "default_headers": { "anyOf": [ { "additionalProperties": { "type": "string" }, "type": "object" }, { "type": "null" } ], "default": null, "title": "Default Headers" }, "organization": { "anyOf": [ { "type": "string" }, { "type": "null" } ], "default": null, "title": "Organization" }, "base_url": { "anyOf": [ { "type": "string" }, { "type": "null" } ], "default": null, "title": "Base Url" } }, "$defs": { "JSONSchema": { "properties": { "name": { "title": "Name", "type": "string" }, "description": { "title": "Description", "type": "string" }, "schema": { "title": "Schema", "type": "object" }, "strict": { "anyOf": [ { "type": "boolean" }, { "type": "null" } ], "title": "Strict" } }, "required": [ "name" ], "title": "JSONSchema", "type": "object" }, "ModelCapabilities": { "deprecated": true, "properties": { "vision": { "title": "Vision", "type": "boolean" }, "function_calling": { "title": "Function Calling", "type": "boolean" }, "json_output": { "title": "Json Output", "type": "boolean" } }, "required": [ "vision", "function_calling", "json_output" ], "title": "ModelCapabilities", "type": "object" }, "ModelInfo": { "description": "ModelInfo\u662f\u4e00\u4e2a\u5305\u542b\u6a21\u578b\u5c5e\u6027\u4fe1\u606f\u7684\u5b57\u5178\u3002\n\u9884\u671f\u7528\u4e8e\u6a21\u578b\u5ba2\u6237\u7aef\u7684model_info\u5c5e\u6027\u4e2d\u3002\n\n\u968f\u7740\u6211\u4eec\u6dfb\u52a0\u66f4\u591a\u529f\u80fd\uff0c\u9884\u8ba1\u8fd9\u4e2a\u7ed3\u6784\u4f1a\u4e0d\u65ad\u6269\u5c55\u3002", "properties": { "vision": { "title": "Vision", "type": "boolean" }, "function_calling": { "title": "Function Calling", "type": "boolean" }, "json_output": { "title": "Json Output", "type": "boolean" }, "family": { "anyOf": [ { "enum": [ 
"gpt-41", "gpt-45", "gpt-4o", "o1", "o3", "o4", "gpt-4", "gpt-35", "r1", "gemini-1.5-flash", "gemini-1.5-pro", "gemini-2.0-flash", "gemini-2.5-pro", "gemini-2.5-flashclaude-3-haiku", "claude-3-sonnet", "claude-3-opus", "claude-3-5-haiku", "claude-3-5-sonnet", "claude-3-7-sonnet", "llama-3.3-8b", "llama-3.3-70b", "llama-4-scout", "llama-4-maverick", "codestral", "open-codestral-mamba", "mistral", "ministral", "pixtral", "unknown" ], "type": "string" }, { "type": "string" } ], "title": "Family" }, "structured_output": { "title": "Structured Output", "type": "boolean" }, "multiple_system_messages": { "anyOf": [ { "type": "boolean" }, { "type": "null" } ], "title": "Multiple System Messages" } }, "required": [ "vision", "function_calling", "json_output", "family", "structured_output" ], "title": "ModelInfo", "type": "object" }, "ResponseFormat": { "properties": { "type": { "enum": [ "text", "json_object", "json_schema" ], "title": "Type", "type": "string" }, "json_schema": { "anyOf": [ { "$ref": "#/$defs/JSONSchema" }, { "type": "null" } ] } }, "required": [ "type", "json_schema" ], "title": "ResponseFormat", "type": "object" }, "StreamOptions": { "properties": { "include_usage": { "title": "Include Usage", "type": "boolean" } }, "required": [ "include_usage" ], "title": "StreamOptions", "type": "object" } }, "required": [ "model" ] }
- Fields:
base_url (str | None)
organization (str | None)
- pydantic model BaseOpenAIClientConfigurationConfigModel[source]#
Show JSON schema
{ "title": "BaseOpenAIClientConfigurationConfigModel", "type": "object", "properties": { "frequency_penalty": { "anyOf": [ { "type": "number" }, { "type": "null" } ], "default": null, "title": "Frequency Penalty" }, "logit_bias": { "anyOf": [ { "additionalProperties": { "type": "integer" }, "type": "object" }, { "type": "null" } ], "default": null, "title": "Logit Bias" }, "max_tokens": { "anyOf": [ { "type": "integer" }, { "type": "null" } ], "default": null, "title": "Max Tokens" }, "n": { "anyOf": [ { "type": "integer" }, { "type": "null" } ], "default": null, "title": "N" }, "presence_penalty": { "anyOf": [ { "type": "number" }, { "type": "null" } ], "default": null, "title": "Presence Penalty" }, "response_format": { "anyOf": [ { "$ref": "#/$defs/ResponseFormat" }, { "type": "null" } ], "default": null }, "seed": { "anyOf": [ { "type": "integer" }, { "type": "null" } ], "default": null, "title": "Seed" }, "stop": { "anyOf": [ { "type": "string" }, { "items": { "type": "string" }, "type": "array" }, { "type": "null" } ], "default": null, "title": "Stop" }, "temperature": { "anyOf": [ { "type": "number" }, { "type": "null" } ], "default": null, "title": "Temperature" }, "top_p": { "anyOf": [ { "type": "number" }, { "type": "null" } ], "default": null, "title": "Top P" }, "user": { "anyOf": [ { "type": "string" }, { "type": "null" } ], "default": null, "title": "User" }, "stream_options": { "anyOf": [ { "$ref": "#/$defs/StreamOptions" }, { "type": "null" } ], "default": null }, "model": { "title": "Model", "type": "string" }, "api_key": { "anyOf": [ { "format": "password", "type": "string", "writeOnly": true }, { "type": "null" } ], "default": null, "title": "Api Key" }, "timeout": { "anyOf": [ { "type": "number" }, { "type": "null" } ], "default": null, "title": "Timeout" }, "max_retries": { "anyOf": [ { "type": "integer" }, { "type": "null" } ], "default": null, "title": "Max Retries" }, "model_capabilities": { "anyOf": [ { "$ref": "#/$defs/ModelCapabilities" 
}, { "type": "null" } ], "default": null }, "model_info": { "anyOf": [ { "$ref": "#/$defs/ModelInfo" }, { "type": "null" } ], "default": null }, "add_name_prefixes": { "anyOf": [ { "type": "boolean" }, { "type": "null" } ], "default": null, "title": "Add Name Prefixes" }, "default_headers": { "anyOf": [ { "additionalProperties": { "type": "string" }, "type": "object" }, { "type": "null" } ], "default": null, "title": "Default Headers" } }, "$defs": { "JSONSchema": { "properties": { "name": { "title": "Name", "type": "string" }, "description": { "title": "Description", "type": "string" }, "schema": { "title": "Schema", "type": "object" }, "strict": { "anyOf": [ { "type": "boolean" }, { "type": "null" } ], "title": "Strict" } }, "required": [ "name" ], "title": "JSONSchema", "type": "object" }, "ModelCapabilities": { "deprecated": true, "properties": { "vision": { "title": "Vision", "type": "boolean" }, "function_calling": { "title": "Function Calling", "type": "boolean" }, "json_output": { "title": "Json Output", "type": "boolean" } }, "required": [ "vision", "function_calling", "json_output" ], "title": "ModelCapabilities", "type": "object" }, "ModelInfo": { "description": "ModelInfo\u662f\u4e00\u4e2a\u5305\u542b\u6a21\u578b\u5c5e\u6027\u4fe1\u606f\u7684\u5b57\u5178\u3002\n\u9884\u671f\u7528\u4e8e\u6a21\u578b\u5ba2\u6237\u7aef\u7684model_info\u5c5e\u6027\u4e2d\u3002\n\n\u968f\u7740\u6211\u4eec\u6dfb\u52a0\u66f4\u591a\u529f\u80fd\uff0c\u9884\u8ba1\u8fd9\u4e2a\u7ed3\u6784\u4f1a\u4e0d\u65ad\u6269\u5c55\u3002", "properties": { "vision": { "title": "Vision", "type": "boolean" }, "function_calling": { "title": "Function Calling", "type": "boolean" }, "json_output": { "title": "Json Output", "type": "boolean" }, "family": { "anyOf": [ { "enum": [ "gpt-41", "gpt-45", "gpt-4o", "o1", "o3", "o4", "gpt-4", "gpt-35", "r1", "gemini-1.5-flash", "gemini-1.5-pro", "gemini-2.0-flash", "gemini-2.5-pro", "gemini-2.5-flashclaude-3-haiku", "claude-3-sonnet", "claude-3-opus", 
"claude-3-5-haiku", "claude-3-5-sonnet", "claude-3-7-sonnet", "llama-3.3-8b", "llama-3.3-70b", "llama-4-scout", "llama-4-maverick", "codestral", "open-codestral-mamba", "mistral", "ministral", "pixtral", "unknown" ], "type": "string" }, { "type": "string" } ], "title": "Family" }, "structured_output": { "title": "Structured Output", "type": "boolean" }, "multiple_system_messages": { "anyOf": [ { "type": "boolean" }, { "type": "null" } ], "title": "Multiple System Messages" } }, "required": [ "vision", "function_calling", "json_output", "family", "structured_output" ], "title": "ModelInfo", "type": "object" }, "ResponseFormat": { "properties": { "type": { "enum": [ "text", "json_object", "json_schema" ], "title": "Type", "type": "string" }, "json_schema": { "anyOf": [ { "$ref": "#/$defs/JSONSchema" }, { "type": "null" } ] } }, "required": [ "type", "json_schema" ], "title": "ResponseFormat", "type": "object" }, "StreamOptions": { "properties": { "include_usage": { "title": "Include Usage", "type": "boolean" } }, "required": [ "include_usage" ], "title": "StreamOptions", "type": "object" } }, "required": [ "model" ] }
- Fields:
add_name_prefixes (bool | None)
api_key (pydantic.types.SecretStr | None)
default_headers (Dict[str, str] | None)
max_retries (int | None)
model (str)
model_capabilities (autogen_core.models._model_client.ModelCapabilities | None)
model_info (autogen_core.models._model_client.ModelInfo | None)
timeout (float | None)
- field model_capabilities: ModelCapabilities | None = None#
- pydantic model CreateArgumentsConfigModel[source]#
Bases:
BaseModel
Show JSON schema
{ "title": "CreateArgumentsConfigModel", "type": "object", "properties": { "frequency_penalty": { "anyOf": [ { "type": "number" }, { "type": "null" } ], "default": null, "title": "Frequency Penalty" }, "logit_bias": { "anyOf": [ { "additionalProperties": { "type": "integer" }, "type": "object" }, { "type": "null" } ], "default": null, "title": "Logit Bias" }, "max_tokens": { "anyOf": [ { "type": "integer" }, { "type": "null" } ], "default": null, "title": "Max Tokens" }, "n": { "anyOf": [ { "type": "integer" }, { "type": "null" } ], "default": null, "title": "N" }, "presence_penalty": { "anyOf": [ { "type": "number" }, { "type": "null" } ], "default": null, "title": "Presence Penalty" }, "response_format": { "anyOf": [ { "$ref": "#/$defs/ResponseFormat" }, { "type": "null" } ], "default": null }, "seed": { "anyOf": [ { "type": "integer" }, { "type": "null" } ], "default": null, "title": "Seed" }, "stop": { "anyOf": [ { "type": "string" }, { "items": { "type": "string" }, "type": "array" }, { "type": "null" } ], "default": null, "title": "Stop" }, "temperature": { "anyOf": [ { "type": "number" }, { "type": "null" } ], "default": null, "title": "Temperature" }, "top_p": { "anyOf": [ { "type": "number" }, { "type": "null" } ], "default": null, "title": "Top P" }, "user": { "anyOf": [ { "type": "string" }, { "type": "null" } ], "default": null, "title": "User" }, "stream_options": { "anyOf": [ { "$ref": "#/$defs/StreamOptions" }, { "type": "null" } ], "default": null } }, "$defs": { "JSONSchema": { "properties": { "name": { "title": "Name", "type": "string" }, "description": { "title": "Description", "type": "string" }, "schema": { "title": "Schema", "type": "object" }, "strict": { "anyOf": [ { "type": "boolean" }, { "type": "null" } ], "title": "Strict" } }, "required": [ "name" ], "title": "JSONSchema", "type": "object" }, "ResponseFormat": { "properties": { "type": { "enum": [ "text", "json_object", "json_schema" ], "title": "Type", "type": "string" }, 
"json_schema": { "anyOf": [ { "$ref": "#/$defs/JSONSchema" }, { "type": "null" } ] } }, "required": [ "type", "json_schema" ], "title": "ResponseFormat", "type": "object" }, "StreamOptions": { "properties": { "include_usage": { "title": "Include Usage", "type": "boolean" } }, "required": [ "include_usage" ], "title": "StreamOptions", "type": "object" } } }
- Fields:
frequency_penalty (float | None)
logit_bias (Dict[str, int] | None)
max_tokens (int | None)
n (int | None)
presence_penalty (float | None)
response_format (autogen_ext.models.openai.config.ResponseFormat | None)
seed (int | None)
stop (str | List[str] | None)
stream_options (autogen_ext.models.openai.config.StreamOptions | None)
temperature (float | None)
top_p (float | None)
user (str | None)