Chat Memory
In xFlow, Chat Memory allows AI agents to retain context across multiple interactions, which is essential for coherent and contextually relevant conversations. Chat Memory supports several memory configurations that can be tailored to specific requirements. This documentation describes the supported memory types and how to configure them.
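At a glance, a memory configuration is a small block attached to an AI Agent State, combining the properties documented below (the values here are illustrative only):
memory:
  memoryId: "session123"
  memoryType: "message_window"
  maxMessages: 10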
Chat Memory Properties
Memory ID
Name: memoryId
Type: string
Description: The unique identifier used to store and retrieve the agent's memory. This ID ensures that the conversation history is correctly associated with the specific AI agent.
Example:
memoryId: "session123"
Memory Type
Name: memoryType
Type: string
Description: Defines the type of memory to use. The supported types are message_window and token_window. The default type is message_window.
Enum Values:
message_window: Retains a fixed number of messages.
token_window: Retains a fixed number of tokens.
Default: message_window
Example:
memoryType: "token_window"
Max Messages
Name: maxMessages
Type: integer
Description: The maximum number of messages to retain in memory. If the memory exceeds this limit, the oldest messages are evicted. This property is used when the memory type is message_window.
Example:
maxMessages: 10
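For example, with maxMessages: 10, once an eleventh message is added to the conversation history, the oldest message is dropped so that only the ten most recent messages remain.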
Max Tokens
Name: maxTokens
Type: integer
Description: The maximum number of tokens to retain in memory. The memory retains as many of the most recent messages as fit within the maxTokens limit. If an older message does not fit within the limit, it is evicted in its entirety. This property is used when the memory type is token_window.
Example:
maxTokens: 1000
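For illustration (the token counts are hypothetical), suppose maxTokens: 1000 and the four most recent messages use 200, 300, 350, and 400 tokens, newest first. The three newest messages fit within the limit (850 tokens in total), but adding the fourth would exceed it, so that message is evicted in its entirety rather than truncated.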
Types of Chat Memory
Message Window Memory
message_window memory retains a fixed number of recent messages. This type of memory is useful when the exact number of past interactions needs to be remembered, regardless of the length of the individual messages.
Configuration Example:
memory:
  memoryId: "session123"
  memoryType: "message_window"
  maxMessages: 10
Token Window Memory
token_window memory retains a fixed number of tokens. This type of memory is beneficial when controlling the total size of the memory is more important than the number of messages. It ensures that the memory does not exceed a certain number of tokens, making it easier to manage the cost and performance of the AI agent.
Configuration Example:
memory:
  memoryId: "session123"
  memoryType: "token_window"
  maxTokens: 1000
Usage Example
Here is an example of how to configure Chat Memory in an AI Agent State within an xFlow workflow:
- name: ExampleAIState
  type: aiagent
  agentName: ExampleAgent
  aiModel: gpt-4o
  systemMessage: "You are an assistant designed to provide accurate answers."
  userMessage: '${ "User: " + .request.question }'
  output:
    {
      "type": "object",
      "properties": {
        "response": {
          "type": "string",
          "description": "The AI's response to the user question"
        }
      },
      "required": ["response"]
    }
  maxToolExecutions: 5
  memory:
    memoryId: "session123"
    memoryType: "message_window"
    maxMessages: 10
  tools:
    - name: SEARCH_DOCUMENTS
      description: "Search for relevant documents based on the user's query."
      parameters:
        {
          "type": "object",
          "properties": {
            "query": {
              "type": "string",
              "description": "The search query"
            }
          },
          "required": ["query"]
        }
      output:
        {
          "type": "object",
          "properties": {
            "documents": {
              "type": "array",
              "items": {
                "type": "string",
                "format": "uri"
              }
            }
          },
          "required": ["documents"]
        }
  agentOutcomes:
    - condition: '${ $agentOutcome.returnValues.response != null }'
      transition: SuccessState
    - condition: '${ $agentOutcome.returnValues.response == null }'
      transition: ErrorState
For more detailed information and advanced configurations, refer to the AI Agent State spec.