LLM Configuration

Developers can customize AI agents by defining configurations for the LLMs they use. This page explains how to define and customize an LLM configuration, including specifying the provider, supplying the API key, and overriding individual model parameters.
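
As a quick orientation, the LLM configuration is a YAML object attached to the AI Agent State (see the Usage Example below). Here is a minimal sketch combining the properties documented in the following sections; the values are placeholders:

    llmConfig: 
      provider: "openai"
      apiKey: "your-api-key"
      overrideParams: 
        model: "gpt-4o"
        temperature: 0.7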

LLM Configuration Properties

The LLM configuration allows developers to specify various properties to customize the behavior and functionality of the AI language models. Below are the key properties that can be defined, along with examples for each:

Provider

  • Name: provider

  • Type: string

  • Description: The name of the provider offering the AI language model services, such as openai, groq, or google.

  • Example:

    provider: "openai"

API Key

  • Name: apiKey

  • Type: string

  • Description: The API key used to authenticate and access the AI language model services provided by the chosen provider.

  • Example:

    apiKey: "your-api-key"

Override Parameters

  • Name: overrideParams

  • Type: object

  • Description: The parameters to override for the provider. These parameters allow developers to fine-tune the model's behavior and output. This property is optional.

  • Example:

    overrideParams: 
      model: "gpt-4o"
      temperature: 0.7
      max_tokens: 1000
  • Override Parameters Details (a combined example follows this list):

    • Model: The specific model to use for generating responses.

      • Example: model: "gpt-4o"

    • Temperature: The sampling temperature, influencing the randomness of the model's output.

      • Example: temperature: 0.7

    • Top_p: The top-p sampling parameter, affecting the diversity of the generated responses.

      • Example: top_p: 0.9

    • n: The number of completions to generate for each prompt.

      • Example: n: 3

    • Logprobs: The number of log probabilities to return for each generated token.

      • Example: logprobs: 5

    • Echo: Whether to echo back the prompt in the completion.

      • Example: echo: true

    • Stop: Sequences where the model should stop generating further tokens.

      • Example: stop: ["\n", "###"]

    • Max_tokens: The maximum number of tokens to generate.

      • Example: max_tokens: 1000

    • Presence_penalty: The presence penalty parameter, which penalizes tokens that have already appeared at all, encouraging the model to introduce new content.

      • Example: presence_penalty: 0.6

    • Frequency_penalty: The frequency penalty parameter, which penalizes tokens in proportion to how often they have already appeared, reducing verbatim repetition.

      • Example: frequency_penalty: 0.5

    • Logit_bias: A logit bias map from token IDs to bias values, raising or lowering the likelihood of specific tokens in the output.

      • Example:

        logit_bias: 
          "50256": -100

Multi LLMs Configuration

  • Name: multiLLMsConfig

  • Type: object

  • Description: The strategy to use when multiple AI language models are employed. This can include strategies such as fallback mechanisms, load balancing, and other techniques to ensure robust and reliable AI performance. More details at Multi LLM Configuration.

  • Example:

    multiLLMsConfig: 
      strategy: "fallback"
      models: 
        - provider: "openai"
          apiKey: "your-openai-api-key"
        - provider: "anthropic"
          apiKey: "your-anthropic-api-key"

Usage Example

To use a custom LLM configuration in an AI Agent State, define it under the state's llmConfig property. The following example configures an AI Agent State with a custom LLM configuration:

- name: CustomAIState
  type: aiagent
  agentName: CustomAgent
  aiModel: gpt-4o
  systemMessage: "You are an assistant designed to provide accurate answers."
  userMessage: '${ "User: " + .request.question }'
  output: 
    {
      "type": "object",
      "properties": {
          "response": {
              "type": "string",
              "description": "The AI's response to the user question"
          }
      },
      "required": ["response"]
    }
  maxToolExecutions: 5
  memory: 
    memoryId: "session123"
    memoryType: "message_window"
    maxMessages: 10
  tools:
    - name: SEARCH_DOCUMENTS
      description: "Search for relevant documents based on the user's query."
      parameters: 
        {
          "type": "object",
          "properties": {
              "query": {
                  "type": "string",
                  "description": "The search query"
              }
          },
          "required": ["query"]
        }
      output: 
        {
            "type": "object",
            "properties": {
                "documents": {
                    "type": "array",
                    "items": {
                        "type": "string",
                        "format": "uri"
                    }
                }
            },
            "required": ["documents"]
        }
  llmConfig: 
    provider: "openai"
    apiKey: "your-api-key"
    overrideParams: 
      model: "gpt-4o"
      temperature: 0.7
      max_tokens: 1000
    multiLLMsConfig: 
      strategy: "fallback"
      models: 
        - provider: "openai"
          apiKey: "your-openai-api-key"
        - provider: "anthropic"
          apiKey: "your-anthropic-api-key"
  agentOutcomes:
    - condition: '${ $agentOutcome.returnValues.response != null }'
      transition: SuccessState
    - condition: '${ $agentOutcome.returnValues.response == null }'
      transition: ErrorState

For more detailed information and advanced configurations, refer to the AI Agent State spec.