
Predefined LLM

In xFlow, the AI Agent State leverages predefined large language model (LLM) configurations to streamline the integration of AI capabilities into business workflows. These configurations are designed for performance and reliability, incorporating strategies such as automatic fallback between providers. This page details the predefined configurations that developers can select by setting the aiModel parameter in the AI Agent State.

Common Parameters

All predefined LLM configurations in xFlow share the following common parameters:

  • Temperature: 0.0 (minimizes randomness in model responses)

  • Timeout: PT60S (an ISO-8601 duration of 60 seconds)

Predefined LLM Configurations

1. GPT-4o

  • aiModel: gpt-4o

  • Strategy: single

  • Provider: openai

  • Model: gpt-4o

2. LLaMA3-70B-8192

  • aiModel: llama3-70b-8192

  • Strategy: Fallback on status codes 429 and 400

  • Providers:

    • Primary: groq

    • Secondary: openai

  • Models:

    • Primary: llama3-70b-8192

    • Fallback: gpt-4o

3. Mixtral-8x7B-32768

  • aiModel: mixtral-8x7b-32768

  • Strategy: Fallback on status codes 429 and 400

  • Providers:

    • Primary: groq

    • Secondary: openai

  • Models:

    • Primary: mixtral-8x7b-32768

    • Fallback: gpt-4o

4. Gemini-1.5-Pro-Latest

  • aiModel: gemini-1.5-pro-latest

  • Strategy: Fallback on status codes 429 and 400

  • Providers:

    • Primary: google

    • Secondary: openai

  • Models:

    • Primary: gemini-1.5-pro-latest

    • Fallback: gpt-4o
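Putting the pieces together, a predefined configuration such as llama3-70b-8192 can be pictured as follows. This is a hypothetical rendering for illustration only; the exact schema is internal to xFlow (see the LLM Configuration page), but the values shown are those listed above.

```yaml
# Hypothetical sketch of the predefined llama3-70b-8192 configuration.
aiModel: llama3-70b-8192
strategy:
  type: fallback
  onStatusCodes: [429, 400]   # retry on the fallback provider for these codes
providers:
  - provider: groq            # primary
    model: llama3-70b-8192
  - provider: openai          # fallback
    model: gpt-4o
# Shared defaults common to all predefined configurations:
temperature: 0.0
timeout: PT60S                # ISO-8601 duration: 60 seconds
```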

Usage Example

To use one of these predefined LLM configurations in your AI Agent State, set the aiModel parameter in the state definition. The following example configures an AI Agent State with the gpt-4o model:

- name: ExampleAIState
  type: aiagent
  agentName: ExampleAgent
  aiModel: gpt-4o
  systemMessage: "You are an assistant designed to provide accurate answers."
  userMessage: '${ "User: " + .request.question }'
  output: 
    {
      "type": "object",
      "properties": {
          "response": {
              "type": "string",
              "description": "The AI's response to the user question"
          }
      },
      "required": ["response"]
    }
  maxToolExecutions: 5
  memory: 
    memoryId: "session123"
    memoryType: "message_window"
    maxMessages: 10
  tools:
    - name: SEARCH_DOCUMENTS
      description: "Search for relevant documents based on the user's query."
      parameters: 
        {
          "type": "object",
          "properties": {
              "query": {
                  "type": "string",
                  "description": "The search query"
              }
          },
          "required": ["query"]
        }
      output: 
        {
            "type": "object",
            "properties": {
                "documents": {
                    "type": "array",
                    "items": {
                        "type": "string",
                        "format": "uri"
                    }
                }
            },
            "required": ["documents"]
        }
  agentOutcomes:
    - condition: '${ $agentOutcome.returnValues.response != null }'
      transition: SuccessState
    - condition: '${ $agentOutcome.returnValues.response == null }'
      transition: ErrorState
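For models with a fallback strategy, usage is identical: only the aiModel value changes, and xFlow handles provider fallback transparently. A minimal sketch using the llama3-70b-8192 configuration (state and agent names here are hypothetical):

```yaml
# Hypothetical minimal AI Agent State. If groq returns HTTP 429 or 400,
# xFlow falls back to gpt-4o on openai per the predefined configuration.
- name: FallbackExampleState
  type: aiagent
  agentName: FallbackExampleAgent
  aiModel: llama3-70b-8192
  systemMessage: "You are an assistant designed to provide accurate answers."
  userMessage: '${ "User: " + .request.question }'
  agentOutcomes:
    - condition: '${ $agentOutcome.returnValues.response != null }'
      transition: SuccessState
    - condition: '${ $agentOutcome.returnValues.response == null }'
      transition: ErrorState
```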

Last updated 11 months ago