# Chat messages

We support four types of message objects:

`SystemMessage`
: Instructions and context for the conversation

`UserMessage`
: Messages from end users to the assistant

`AssistantMessage`
: Responses from the AI assistant, potentially including tool calls

`ToolMessage`
: Results from tools invoked during the conversation

Notes on the construction of messages:

- Each message has a `content` field, which can be either a `str` or a list of `Content` objects containing text and/or reasoning. We don't support audio/image/video content yet.
- `AssistantMessage` objects also have a `tool_calls` field, which holds a list of `ToolCall` objects.
## Usage

The easiest way to convert a `dict` into a `ChatMessage` is to use `parse_chat_message`:
```python
from docent.data_models.chat import parse_chat_message

message_data = [
    {
        "role": "user",
        "content": "What is the capital of France?",
    },
    {
        "role": "assistant",
        "content": "Paris",
    },
]

messages = [parse_chat_message(msg) for msg in message_data]
```
The function will automatically raise validation errors if the input message does not conform to the schema.
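For instance, an unsupported `role` is rejected. A minimal sketch (per the `parse_chat_message` reference below, an unknown role raises `ValueError`; Pydantic validation errors for malformed fields are also `ValueError` subclasses):

```python
from docent.data_models.chat import parse_chat_message

try:
    # "narrator" is not a supported role, so parsing should fail
    parse_chat_message({"role": "narrator", "content": "Once upon a time..."})
except ValueError as e:
    print(f"Could not parse message: {e}")
```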
You may also want to create messages manually:
```python
from docent.data_models.chat import (
    SystemMessage,
    UserMessage,
    AssistantMessage,
    ToolMessage,
    ContentText,
    ContentReasoning,
    ToolCall,
    ToolCallContent,
)

messages = [
    SystemMessage(content="You are a helpful assistant."),
    UserMessage(content=[ContentText(text="Help me with this problem.")]),
    AssistantMessage(
        content="I'll help you solve that.",
        tool_calls=[
            ToolCall(
                id="call_123",
                function="calculator",
                arguments={"operation": "add", "a": 5, "b": 3},
                view=ToolCallContent(format="markdown", content="Calculating: 5 + 3"),
            )
        ],
    ),
    ToolMessage(content="8", tool_call_id="call_123", function="calculator"),
]
```
## docent.data_models.chat.message

### ChatMessage (module attribute)

`ChatMessage = Annotated[SystemMessage | UserMessage | AssistantMessage | ToolMessage, Discriminator('role')]`

Type alias for any chat message type, discriminated by the `role` field.
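Because the union is discriminated on `role`, a plain dict can also be validated directly against the alias. A minimal sketch, assuming Pydantic v2's `TypeAdapter` (standard Pydantic usage, not a docent-specific API):

```python
from pydantic import TypeAdapter

from docent.data_models.chat.message import ChatMessage

adapter = TypeAdapter(ChatMessage)
msg = adapter.validate_python({"role": "user", "content": "Hello!"})
print(type(msg).__name__)  # UserMessage, selected via the "role" discriminator
```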
### BaseChatMessage

Bases: `BaseModel`

Base class for all chat message types.

Attributes:

| Name | Type | Description |
| --- | --- | --- |
| `id` | `str \| None` | Optional unique identifier for the message. |
| `content` | `str \| list[Content]` | The message content, either as a string or list of Content objects. |
| `role` | `Literal['system', 'user', 'assistant', 'tool']` | The role of the message sender (system, user, assistant, tool). |

Source code in `docent/data_models/chat/message.py`
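Since `content` may be either a string or a list of `Content` objects, downstream code often needs to normalize it. A hedged sketch (the `message_text` helper below is hypothetical, not part of docent):

```python
from docent.data_models.chat import ContentReasoning, ContentText


def message_text(message) -> str:
    """Collapse a message's content into plain text (hypothetical helper)."""
    if isinstance(message.content, str):
        return message.content
    parts = []
    for item in message.content:
        if isinstance(item, ContentText):
            parts.append(item.text)
        elif isinstance(item, ContentReasoning):
            parts.append(f"[reasoning] {item.reasoning}")
    return "\n".join(parts)
```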
### SystemMessage

Bases: `BaseChatMessage`

System message in a chat conversation.

Attributes:

| Name | Type | Description |
| --- | --- | --- |
| `role` | `Literal['system']` | Always set to "system". |

Source code in `docent/data_models/chat/message.py`
### UserMessage

Bases: `BaseChatMessage`

User message in a chat conversation.

Attributes:

| Name | Type | Description |
| --- | --- | --- |
| `role` | `Literal['user']` | Always set to "user". |
| `tool_call_id` | `list[str] \| None` | Optional list of tool call IDs this message is responding to. |

Source code in `docent/data_models/chat/message.py`
### AssistantMessage

Bases: `BaseChatMessage`

Assistant message in a chat conversation.

Attributes:

| Name | Type | Description |
| --- | --- | --- |
| `role` | `Literal['assistant']` | Always set to "assistant". |
| `model` | `str \| None` | Optional identifier for the model that generated this message. |
| `tool_calls` | `list[ToolCall] \| None` | Optional list of tool calls made by the assistant. |

Source code in `docent/data_models/chat/message.py`
### ToolMessage

Bases: `BaseChatMessage`

Tool message in a chat conversation.

Attributes:

| Name | Type | Description |
| --- | --- | --- |
| `role` | `Literal['tool']` | Always set to "tool". |
| `tool_call_id` | `str \| None` | Optional ID of the tool call this message is responding to. |
| `function` | `str \| None` | Optional name of the function that was called. |
| `error` | `dict[str, Any] \| None` | Optional error information if the tool call failed. |

Source code in `docent/data_models/chat/message.py`
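A failed tool call can be recorded with the `error` field. A minimal sketch (the error dict's keys are illustrative; docent does not prescribe a schema for it):

```python
from docent.data_models.chat import ToolMessage

failed = ToolMessage(
    content="Division by zero",
    tool_call_id="call_456",
    function="calculator",
    # Illustrative error payload; any dict is accepted here
    error={"type": "ZeroDivisionError", "message": "division by zero"},
)
```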
### parse_chat_message

`parse_chat_message(message_data: dict[str, Any] | ChatMessage) -> ChatMessage`

Parse a message dictionary or object into the appropriate ChatMessage subclass.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `message_data` | `dict[str, Any] \| ChatMessage` | A dictionary or ChatMessage object representing a chat message. | *required* |

Returns:

| Name | Type | Description |
| --- | --- | --- |
| `ChatMessage` | `ChatMessage` | An instance of a ChatMessage subclass based on the role. |

Raises:

| Type | Description |
| --- | --- |
| `ValueError` | If the message role is unknown. |

Source code in `docent/data_models/chat/message.py`
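Because the signature also accepts an existing `ChatMessage`, the function can be applied uniformly to mixed input. A minimal sketch:

```python
from docent.data_models.chat import UserMessage, parse_chat_message

mixed = [
    {"role": "system", "content": "You are a helpful assistant."},
    UserMessage(content="Hi there!"),  # already a ChatMessage instance
]
messages = [parse_chat_message(m) for m in mixed]
```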
## docent.data_models.chat.content

### Content (module attribute)

`Content = Annotated[ContentText | ContentReasoning, Discriminator('type')]`

Discriminated union of possible content types using the 'type' field. Can be either ContentText or ContentReasoning.
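For example, an assistant turn can interleave reasoning and visible text in a single `content` list. A minimal sketch (field defaults are an assumption, so `redacted` is passed explicitly):

```python
from docent.data_models.chat import AssistantMessage, ContentReasoning, ContentText

msg = AssistantMessage(
    content=[
        ContentReasoning(reasoning="The user wants the capital of France.", redacted=False),
        ContentText(text="The capital of France is Paris."),
    ]
)
```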
### BaseContent

Bases: `BaseModel`

Base class for all content types in chat messages.

Provides the foundation for different content types with a discriminator field.

Attributes:

| Name | Type | Description |
| --- | --- | --- |
| `type` | `Literal['text', 'reasoning', 'image', 'audio', 'video']` | The content type identifier, used for discriminating between content types. |

Source code in `docent/data_models/chat/content.py`
### ContentText

Bases: `BaseContent`

Text content for chat messages.

Represents plain text content in a chat message.

Attributes:

| Name | Type | Description |
| --- | --- | --- |
| `type` | `Literal['text']` | Fixed as "text" to identify this content type. |
| `text` | `str` | The actual text content. |
| `refusal` | `bool \| None` | Optional flag indicating if this is a refusal message. |

Source code in `docent/data_models/chat/content.py`
### ContentReasoning

Bases: `BaseContent`

Reasoning content for chat messages.

Represents reasoning or thought process content in a chat message.

Attributes:

| Name | Type | Description |
| --- | --- | --- |
| `type` | `Literal['reasoning']` | Fixed as "reasoning" to identify this content type. |
| `reasoning` | `str` | The actual reasoning text. |
| `signature` | `str \| None` | Optional signature associated with the reasoning. |
| `redacted` | `bool` | Flag indicating if the reasoning has been redacted. |

Source code in `docent/data_models/chat/content.py`
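One use of the `signature` and `redacted` fields is to represent reasoning that a provider withholds or signs rather than returning in full. A hedged sketch with illustrative values:

```python
from docent.data_models.chat import ContentReasoning

hidden = ContentReasoning(
    reasoning="",             # no recoverable reasoning text
    signature="sig_abc123",   # illustrative provider-issued signature
    redacted=True,
)
```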
## docent.data_models.chat.tool

### ToolCall (dataclass)

Tool call information.

Attributes:

| Name | Type | Description |
| --- | --- | --- |
| `id` | `str` | Unique identifier for tool call. |
| `type` | `Literal['function'] \| None` | Type of tool call. Can only be "function" or None. |
| `function` | `str` | Function called. |
| `arguments` | `dict[str, Any]` | Arguments to function. |
| `parse_error` | `str \| None` | Error which occurred parsing tool call. |
| `view` | `ToolCallContent \| None` | Custom view of tool call input. |

Source code in `docent/data_models/chat/tool.py`
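The `parse_error` field lets you preserve a tool call whose arguments could not be parsed. A minimal sketch with illustrative values:

```python
from docent.data_models.chat import ToolCall

bad_call = ToolCall(
    id="call_789",
    type="function",
    function="calculator",
    arguments={},  # nothing usable was recovered from the model output
    parse_error="Expected JSON object, got: 'add 5 and 3'",
)
```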
### ToolCallContent

Bases: `BaseModel`

Content to include in tool call view.

Attributes:

| Name | Type | Description |
| --- | --- | --- |
| `title` | `str \| None` | Optional (plain text) title for tool call content. |
| `format` | `Literal['text', 'markdown']` | Format (text or markdown). |
| `content` | `str` | Text or markdown content. |

Source code in `docent/data_models/chat/tool.py`
### ToolParam

Bases: `BaseModel`

A parameter for a tool function.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `name` | | The name of the parameter. | *required* |
| `description` | | A description of what the parameter does. | *required* |
| `input_schema` | | JSON Schema describing the parameter's type and validation rules. | *required* |

Source code in `docent/data_models/chat/tool.py`
### ToolParams

Bases: `BaseModel`

Description of tool parameters object in JSON Schema format.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `type` | | The type of the parameters object, always 'object'. | *required* |
| `properties` | | Dictionary mapping parameter names to their ToolParam definitions. | *required* |
| `required` | | List of required parameter names. | *required* |
| `additionalProperties` | | Whether additional properties are allowed beyond those specified. Always False. | *required* |

Source code in `docent/data_models/chat/tool.py`
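Putting the two models together, a tool's parameter schema might be described like this. A minimal, hedged sketch (whether fields such as `type` and `additionalProperties` have defaults is an assumption, so everything is passed explicitly):

```python
from docent.data_models.chat.tool import ToolParam, ToolParams

params = ToolParams(
    type="object",
    properties={
        "operation": ToolParam(
            name="operation",
            description="Arithmetic operation to perform.",
            input_schema={"type": "string", "enum": ["add", "subtract"]},
        ),
        "a": ToolParam(name="a", description="First operand.", input_schema={"type": "number"}),
        "b": ToolParam(name="b", description="Second operand.", input_schema={"type": "number"}),
    },
    required=["operation", "a", "b"],
    additionalProperties=False,
)
```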
### ToolInfo

Bases: `BaseModel`

Specification of a tool (JSON Schema compatible).

If you are implementing a ModelAPI, most LLM libraries can be passed this object (dumped to a dict) directly as a function specification; the OpenAI provider does this, for example. In some cases the field names don't match up exactly; in that case, call `model_dump()` on the `parameters` field. For example, in the Anthropic provider:

```python
ToolParam(
    name=tool.name,
    description=tool.description,
    input_schema=tool.parameters.model_dump(exclude_none=True),
)
```
Attributes:

| Name | Type | Description |
| --- | --- | --- |
| `name` | `str` | Name of tool. |
| `description` | `str` | Short description of tool. |
| `parameters` | `ToolParams` | JSON Schema of tool parameters object. |
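As a concrete illustration of dumping the spec for a provider, here is a hedged sketch that builds a `ToolInfo` and converts it to a plain dict (the exact shape each provider expects is an assumption; check the provider's documentation):

```python
from docent.data_models.chat.tool import ToolInfo, ToolParam, ToolParams

weather = ToolInfo(
    name="get_weather",
    description="Look up the current weather for a city.",
    parameters=ToolParams(
        type="object",
        properties={
            "city": ToolParam(name="city", description="City name.", input_schema={"type": "string"}),
        },
        required=["city"],
        additionalProperties=False,
    ),
)

# Many providers accept the model dumped to a plain dict as the function spec
function_spec = weather.model_dump(exclude_none=True)
```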