LLM Configuration
Developers can customize their AI agents by defining specific configurations for the LLMs those agents use. This documentation explains how to define and customize LLM configurations, including specifying the provider, the API key, and overriding model parameters.
LLM Configuration Properties
The LLM configuration allows developers to specify various properties to customize the behavior and functionality of the AI language models. Below are the key properties that can be defined, along with examples for each:
Provider
Name:
provider
Type:
string
Description: The name of the provider offering the AI language model services. This could be openai, groq, google, etc.
Example:
provider: "openai"
API Key
Name:
apiKey
Type:
string
Description: The API key used to authenticate and access the AI language model services provided by the chosen provider.
Example:
apiKey: "your-api-key"
Override Parameters
Name:
overrideParams
Type:
object
Description: The parameters to override for the provider. These parameters allow developers to fine-tune the model's behavior and output. This property is optional.
Example:
overrideParams:
  model: "gpt-4o"
  temperature: 0.7
  max_tokens: 1000
Override Parameters Details:
Model: The specific model to use for generating responses.
Example:
model: "gpt-4o"
Temperature: The sampling temperature, influencing the randomness of the model's output.
Example:
temperature: 0.7
Top_p: The top-p sampling parameter, affecting the diversity of the generated responses.
Example:
top_p: 0.9
n: The number of completions to generate for each prompt.
Example:
n: 3
Logprobs: The number of log probabilities to return for each generated token.
Example:
logprobs: 5
Echo: Whether to echo back the prompt in the completion.
Example:
echo: true
Stop: Sequences where the model should stop generating further tokens.
Example:
stop: ["\n", "###"]
Max_tokens: The maximum number of tokens to generate.
Example:
max_tokens: 1000
Presence_penalty: The presence penalty parameter, which penalizes tokens that have already appeared, encouraging the model to introduce new topics.
Example:
presence_penalty: 0.6
Frequency_penalty: The frequency penalty parameter, which penalizes tokens in proportion to how often they have already appeared, reducing verbatim repetition.
Example:
frequency_penalty: 0.5
Logit_bias: Logit bias configuration, allowing adjustment of the probability distribution of tokens.
Example:
logit_bias:
  "50256": -100
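Taken together, the individual parameters above can be combined in a single overrideParams block. A sketch (the values here are illustrative, not recommendations):

```yaml
overrideParams:
  model: "gpt-4o"
  temperature: 0.7       # moderate randomness
  top_p: 0.9             # nucleus-sampling cutoff
  n: 1                   # one completion per prompt
  stop: ["\n", "###"]    # stop generating at either sequence
  max_tokens: 1000       # cap on generated tokens
  presence_penalty: 0.6
  frequency_penalty: 0.5
```

Only the parameters you set are overridden; anything omitted falls back to the provider's defaults.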
Multi LLMs Configuration
Name:
multiLLMsConfig
Type:
object
Description: The strategy to use when multiple AI language models are employed. This can include fallback mechanisms, load balancing, and other techniques to ensure robust and reliable AI performance. See Multi LLM Configuration for more details.
Example:
multiLLMsConfig:
  strategy: "fallback"
  models:
    - provider: "openai"
      apiKey: "your-openai-api-key"
    - provider: "anthropic"
      apiKey: "your-anthropic-api-key"
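The description above also mentions load balancing. Assuming the strategy keyword for it is "loadbalance" (the exact value may differ; check the Multi LLM Configuration page), a sketch would be:

```yaml
multiLLMsConfig:
  strategy: "loadbalance"   # hypothetical keyword; verify against Multi LLM Configuration
  models:
    - provider: "openai"
      apiKey: "your-openai-api-key"
    - provider: "groq"
      apiKey: "your-groq-api-key"
```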
Usage Example
To use a custom LLM configuration in your AI Agent State, you can define the configuration in the state definition. Here is an example of how to configure an AI Agent State using a custom LLM configuration:
- name: CustomAIState
  type: aiagent
  agentName: CustomAgent
  aiModel: gpt-4o
  systemMessage: "You are an assistant designed to provide accurate answers."
  userMessage: '${ "User: " + .request.question }'
  output:
    {
      "type": "object",
      "properties": {
        "response": {
          "type": "string",
          "description": "The AI's response to the user question"
        }
      },
      "required": ["response"]
    }
  maxToolExecutions: 5
  memory:
    memoryId: "session123"
    memoryType: "message_window"
    maxMessages: 10
  tools:
    - name: SEARCH_DOCUMENTS
      description: "Search for relevant documents based on the user's query."
      parameters:
        {
          "type": "object",
          "properties": {
            "query": {
              "type": "string",
              "description": "The search query"
            }
          },
          "required": ["query"]
        }
      output:
        {
          "type": "object",
          "properties": {
            "documents": {
              "type": "array",
              "items": {
                "type": "string",
                "format": "uri"
              }
            }
          },
          "required": ["documents"]
        }
  llmConfig:
    provider: "openai"
    apiKey: "your-api-key"
    overrideParams:
      model: "gpt-4o"
      temperature: 0.7
      max_tokens: 1000
    multiLLMsConfig:
      strategy: "fallback"
      models:
        - provider: "openai"
          apiKey: "your-openai-api-key"
        - provider: "anthropic"
          apiKey: "your-anthropic-api-key"
  agentOutcomes:
    - condition: '${ $agentOutcome.returnValues.response != null }'
      transition: SuccessState
    - condition: '${ $agentOutcome.returnValues.response == null }'
      transition: ErrorState
For more detailed information and advanced configurations, refer to the AI Agent State spec.