LLM Configuration
Developers can customize their AI agents by defining specific configurations for the LLMs they use. This documentation provides a detailed explanation of how to define and customize LLM configurations, including specifying the provider, supplying the API key, and overriding specific parameters.
LLM Configuration Properties
The LLM configuration allows developers to specify various properties to customize the behavior and functionality of the AI language models. Below are the key properties that can be defined, along with examples for each:
Provider
Name: provider
Type: string
Description: The name of the provider offering the AI language model services. This could be openai, groq, google, etc.
Example:
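For instance, a configuration targeting OpenAI could set the provider as follows (a minimal sketch; the llmConfig variable name is illustrative):

```typescript
// Select OpenAI as the provider for this agent's LLM.
const llmConfig = {
  provider: "openai",
};
```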
API Key
Name: apiKey
Type: string
Description: The API key used to authenticate and access the AI language model services provided by the chosen provider.
Example:
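A minimal sketch of supplying the key; reading it from an environment variable (shown here with Node.js's process.env) rather than hard-coding it is the usual practice:

```typescript
// Authenticate with the chosen provider. Avoid hard-coding keys in
// source; load them from the environment instead.
const llmConfig = {
  provider: "openai",
  apiKey: process.env.OPENAI_API_KEY,
};
```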
Override Parameters
Name: overrideParams
Type: object
Description: The parameters to override for the provider. These parameters allow developers to fine-tune the model's behavior and output. This property is optional.
Example:
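A sketch combining the properties above; the parameter values are illustrative, and the individual parameters are described below:

```typescript
// Override a few generation parameters for the chosen provider.
const llmConfig = {
  provider: "openai",
  apiKey: process.env.OPENAI_API_KEY,
  overrideParams: {
    model: "gpt-4o",
    temperature: 0.7,
    max_tokens: 1000,
  },
};
```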
Override Parameters Details:
model: The specific model to use for generating responses. Example: model: "gpt-4o"
temperature: The sampling temperature, influencing the randomness of the model's output. Example: temperature: 0.7
top_p: The top-p (nucleus) sampling parameter, affecting the diversity of the generated responses. Example: top_p: 0.9
n: The number of completions to generate for each prompt. Example: n: 3
logprobs: The number of log probabilities to return for each generated token. Example: logprobs: 5
echo: Whether to echo back the prompt in the completion. Example: echo: true
stop: Sequences where the model should stop generating further tokens. Example: stop: ["\n", "###"]
max_tokens: The maximum number of tokens to generate. Example: max_tokens: 1000
presence_penalty: Penalizes tokens that have already appeared, encouraging the model to move on to new topics. Example: presence_penalty: 0.6
frequency_penalty: Penalizes tokens in proportion to how often they have already appeared, reducing verbatim repetition. Example: frequency_penalty: 0.5
logit_bias: A map from token IDs to bias values, adjusting the probability of specific tokens appearing in the output. Example:
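As a sketch, logit_bias maps token IDs (as strings) to bias values, typically between -100 (effectively ban the token) and 100 (effectively force it); the token ID below is purely illustrative:

```typescript
// Strongly suppress one specific token (50256 is <|endoftext|> in the
// GPT-2/GPT-3 tokenizer; token IDs vary by model).
const overrideParams = {
  logit_bias: { "50256": -100 },
};
```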
Multi LLMs Configuration
Name: multiLLMsConfig
Type: object
Example:
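A sketch of what this object might look like, with one LLM configuration per named entry; the chat and summarize keys are assumptions for illustration, not prescribed names:

```typescript
// Define several named LLM configurations side by side, so different
// tasks can use different providers and parameters.
const multiLLMsConfig = {
  chat: {
    provider: "openai",
    apiKey: process.env.OPENAI_API_KEY,
    overrideParams: { model: "gpt-4o", temperature: 0.7 },
  },
  summarize: {
    provider: "groq",
    apiKey: process.env.GROQ_API_KEY,
    overrideParams: { temperature: 0.2 },
  },
};
```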
Usage Example
To use a custom LLM configuration in your AI Agent State, you can define the configuration in the state definition. Here is an example of how to configure an AI Agent State using a custom LLM configuration:
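The exact state-definition API depends on the library, so the shape below is a hypothetical sketch showing where the LLM configuration slots into a state definition; the name field and overall state layout are assumptions:

```typescript
// Hypothetical agent state definition embedding a custom LLM configuration.
const agentState = {
  name: "supportAgent",
  llmConfig: {
    provider: "openai",
    apiKey: process.env.OPENAI_API_KEY,
    overrideParams: {
      model: "gpt-4o",
      temperature: 0.7,
      max_tokens: 1000,
      stop: ["\n", "###"],
    },
  },
};
```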