Core Concepts
AI Services
How AI Services Connect with Each Other
AI Services within xBot are designed to work cohesively by passing information through a defined execution state. The key points of connection include:
Service Chaining: The output of one service is incorporated into the AI Flow execution state, which then serves as the input for the next service in the sequence.
Execution State Management: The AI Flow execution state acts as a dynamic data structure that captures all relevant information as the flow progresses. Each service's output is merged back into this state, ensuring that all subsequent services have access to the most up-to-date data.
Context Preservation: By continuously updating the execution state, xBot ensures that each service operates with a full understanding of the context, enabling accurate and context-aware processing of user queries and tasks.
This mechanism allows xBot to maintain a consistent and coherent flow of data across all AI services, ensuring efficient and reliable operations.
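To make the chaining mechanism concrete, the following Python sketch shows one way an execution state could be threaded through a sequence of services; the service names and state keys are illustrative only and not part of the xBot API.

```python
# Illustrative sketch of service chaining through a shared execution state.
# The service functions and state keys below are hypothetical, not xBot APIs.

def knowledge_base_agent(state: dict) -> dict:
    # Produce an answer from the user's query (stubbed here).
    return {"kb_answer": f"Answer for: {state['user_message']}"}

def ai_agent(state: dict) -> dict:
    # Refine the knowledge-base answer using the accumulated context.
    return {"final_answer": state["kb_answer"].upper()}

def run_flow(user_message: str) -> dict:
    state = {"user_message": user_message}   # initial AI Flow execution state
    for service in (knowledge_base_agent, ai_agent):
        output = service(state)
        state.update(output)                 # merge each service's output back in
    return state

print(run_flow("What are your opening hours?"))
```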
Common Configuration
Each AI service has its own specific configuration that dictates how it operates. However, these services also share common configurations that ensure consistency and smooth integration within the AI Flow. These common configurations include:
Default Transition
Transition Mechanism: AI services within the flow are chained together by transitions. Each transition can be defined by specific conditions that determine the next service to execute.
Fallback Transition: If none of the specified conditions are met, the default transition will be taken. This ensures that the AI Flow continues to progress even when the expected conditions are not satisfied.
Flow Termination: Transitions can either lead to another service or specify that the current service is the end of the AI Flow.
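As a rough illustration of how conditional and default transitions might be evaluated, consider the sketch below; the field names and service names are assumptions for illustration, not the exact xBot schema.

```python
# Hypothetical transition evaluation; field and service names are illustrative.
transitions = [
    {"condition": lambda state: state.get("intent") == "order_status", "next": "OrderStatusAgent"},
    {"condition": lambda state: state.get("intent") == "complaint", "next": "ComplaintAgent"},
]
default_transition = {"next": "KnowledgeBaseAgent"}   # taken when no condition matches

def next_service(state: dict):
    # A transition could also mark the end of the AI Flow (e.g., by returning None).
    for transition in transitions:
        if transition["condition"](state):
            return transition["next"]
    return default_transition["next"]

print(next_service({"intent": "complaint"}))   # -> ComplaintAgent
print(next_service({"intent": "greeting"}))    # -> KnowledgeBaseAgent (default)
```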
Output Answering
Response Configuration: This configuration specifies whether a response message should be sent to the end user during the current conversation.
Content Specification: The content of the response message is defined by the answerMessage expression, allowing the AI to generate contextually appropriate responses based on the data within the AI Flow execution state.
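A minimal sketch of an output-answering configuration is shown below; apart from answerMessage, which is named above, the keys, the sample expression, and the simplified evaluator are assumptions for illustration.

```python
# Hypothetical output-answering configuration; key names are assumptions
# apart from answerMessage, which is named in the documentation above.
output_answering = {
    "sendAnswer": True,                  # send a response message to the end user
    "answerMessage": ".kbAgent.answer",  # jq-style expression over the execution state
}

# Sample execution state and a tiny stand-in evaluator (plain dict access
# instead of a real jq engine, to keep the sketch self-contained).
state = {"kbAgent": {"answer": "We are open 9am-6pm, Monday to Friday."}}

def eval_simple_path(path: str, data: dict):
    value = data
    for key in path.lstrip(".").split("."):
        value = value[key]
    return value

if output_answering["sendAnswer"]:
    print(eval_simple_path(output_answering["answerMessage"], state))
```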
User Message & System Message
System Message: A crucial configuration that sets the context for the AI agent. It acts as a guideline for the AI, instructing it on how to behave and what role to assume during the interaction. This message is typically used to define the AI service's identity, tone, and the type of assistance it is expected to provide.
User Message: Represents the input from the user that the AI service needs to process. It captures the user's query or command, which the AI will analyze to generate an appropriate response. This message is dynamically constructed based on the user's interaction with the system.
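For illustration, the sketch below assembles a system message and a user message into the prompt structure typically consumed by an LLM; the wording of the messages is hypothetical.

```python
# Illustrative assembly of system and user messages; the content is made up.
system_message = (
    "You are a customer-support assistant for ACME Corp. "
    "Answer politely and only about ACME products."
)

def build_user_message(state: dict) -> str:
    # Dynamically constructed from the user's interaction with the system.
    return state["user_message"]

messages = [
    {"role": "system", "content": system_message},
    {"role": "user", "content": build_user_message({"user_message": "Where is my order #123?"})},
]
print(messages)
```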
This common configuration framework ensures that all AI services within xBot operate harmoniously, with clear guidelines on transitions, responses, and message handling.
Supported AI Services
Knowledge Base Agent Service
The Knowledge Base Agent in xBot is a Retrieval-Augmented Generation (RAG) based implementation designed to retrieve knowledge from predefined sources such as Q&A, S3, and the xFile DMS solution. It simplifies the most common use case of a chatbot system, which involves querying knowledge bases to answer user questions accurately and efficiently.
Key features of the Knowledge Base Agent include:
RAG-Based Retrieval: This approach allows the agent to pull relevant information from multiple sources, ensuring comprehensive responses based on the most pertinent data available.
Simplified Query Handling: The Knowledge Base Agent is optimized to simplify the process of querying knowledge sources, making it an ideal tool for chatbot systems that need to provide quick and reliable answers to user inquiries.
Configurable Knowledge Bases: Users can define a list of knowledge bases that the agent will use, tailoring the system to specific needs and ensuring that the most relevant information is always prioritized.
Q&A Knowledge Base Priority: The Q&A knowledge base is given top priority in the retrieval process because it contains pre-prepared and verified knowledge, ensuring that the most accurate and reliable answers are provided first.
This setup ensures that the Knowledge Base Agent effectively leverages the available data sources, delivering high-quality, contextually relevant responses to user queries.
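A hedged sketch of what a Knowledge Base Agent configuration could look like is shown below; the field names and identifiers are illustrative and not the exact xBot schema.

```python
# Hypothetical Knowledge Base Agent configuration; field names are illustrative.
knowledge_base_agent_config = {
    "knowledgeBases": [
        {"type": "QA",    "id": "faq-customer-support"},   # checked first: pre-verified answers
        {"type": "S3",    "id": "s3://acme-product-docs"},
        {"type": "XFILE", "id": "contracts-workspace"},
    ],
    "retrieval": {
        "topK": 5,               # number of chunks retrieved per query
        "prioritizeQA": True,    # Q&A knowledge base is consulted before the others
    },
}
```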
AI Agent Service
The AI Agent Service is a powerful component within the xBot platform, designed to integrate AI-driven decision-making and automation into AI Flows. By leveraging Large Language Models (LLMs), the AI Agent Service enables the creation of intelligent agents capable of understanding, processing, and responding to complex queries. This capability significantly enhances the efficiency and effectiveness of business processes, allowing for more sophisticated and context-aware interactions.
LLM Configuration
The LLM Configuration specifies the settings for the large language model that powers the AI Agent Service. Key aspects of the configuration include:
Model Selection: Choosing the specific LLM (e.g., GPT-4) that best suits the application.
Temperature: Adjusting the degree of randomness in the model's responses, allowing for more creative or more conservative output.
Max Tokens: Limiting the length of the responses generated by the model, ensuring that outputs are concise and relevant.
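For example, an LLM configuration might resemble the following sketch; the key names mirror the settings listed above, but the exact schema is an assumption.

```python
# Hypothetical LLM configuration for an AI Agent Service.
llm_config = {
    "model": "gpt-4",      # model selection
    "temperature": 0.2,    # lower = more conservative, higher = more creative
    "maxTokens": 512,      # cap the length of generated responses
}
```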
Agent Outcome
The Agent Outcome defines the actions that should be taken based on the AI Agent's responses. These outcomes specify conditions, actions, and transitions that dictate how the AI Flow progresses. Agent Outcomes serve several critical purposes within the AI Flow:
Conditional Execution: By defining specific conditions, the flow can execute actions only when certain criteria are met. This allows for dynamic and responsive flows that adapt based on the AI Agent's outputs.
Flow Control: The finish property within Agent Outcomes allows for precise control over the workflow's execution. This property determines whether the AI Agent Service should continue processing further or terminate after achieving a particular outcome.
AI Service Transitions: The transition property within Agent Outcomes defines the next steps in the flow, ensuring a smooth and logical progression based on the AI service's outcomes. This ensures that the flow remains coherent and effective, moving to the appropriate next stage depending on the agent's decision.
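The sketch below illustrates how Agent Outcomes might be declared, using jq-style condition expressions together with the finish and transition properties described above; the action and service names are hypothetical.

```python
# Hypothetical Agent Outcome definitions; conditions are jq-style expressions,
# and the finish/transition keys follow the terms used in this section.
agent_outcomes = [
    {
        "condition": '.agent.decision == "escalate"',
        "actions": ["createSupportTicket"],    # tool/function to invoke
        "finish": True,                        # stop the AI Agent Service here
        "transition": "HumanHandoverService",  # next service in the AI Flow
    },
    {
        "condition": '.agent.decision == "answered"',
        "actions": [],
        "finish": True,
        "transition": None,                    # end of the AI Flow
    },
]
```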
The AI Agent Service, with its sophisticated LLM capabilities and flexible outcome management, plays a central role in enabling intelligent and adaptive AI Flows within xBot.
AI Proxy Agent Service
The AI Proxy Agent Service is designed to facilitate the integration of AI agents developed using any framework or programming language into xBot AI Flows. This component acts as an intermediary, allowing for seamless communication and execution of AI agents regardless of their underlying technology stack.
Purpose of AI Proxy Agent Service
The AI Proxy Agent Service serves as a crucial bridge between xBot AI Flows and externally developed AI agents. Its primary purposes include:
Interoperability: The AI Proxy Agent enables AI agents built with different frameworks and languages to be integrated smoothly into xBot AI Flows. This promotes interoperability and flexibility, allowing organizations to leverage a wide range of AI technologies within a single, cohesive platform.
Abstraction: The service provides an abstraction layer that simplifies the interaction between the AI Flow and the external AI agent. By hiding the complexities of the underlying AI agent implementation, it makes integration more straightforward and less prone to errors.
Scalability: The AI Proxy Agent supports distributed execution and scaling of AI agent tasks, leveraging xBot's AI Flow capabilities to ensure that even large-scale AI operations can be handled efficiently.
Standardization: It ensures that all interactions between xBot and the integrated AI agents follow a standardized protocol. This standardization simplifies the management and maintenance of AI agent integrations, ensuring consistent and reliable performance across different AI services.
Supported AI Agent Frameworks
The AI Proxy Agent Service can integrate AI agents developed using a wide variety of frameworks and languages. Supported frameworks and languages include, but are not limited to:
LangChain: A popular framework for building applications powered by large language models, including conversational AI, widely used for its flexibility and scalability.
Spring AI: An extension of the Spring framework, tailored for developing AI applications in Java. It allows for seamless integration of AI functionalities within enterprise-level applications.
LlamaIndex: A data framework for connecting large language models to external data sources, providing tools for indexing, retrieval, and query orchestration.
Custom Implementations: The AI Proxy Agent can also integrate AI agents developed using custom code in languages such as Python, JavaScript, Java, and others. This ensures that any AI agent, regardless of how it was built, can be incorporated into the xBot platform.
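To make the integration idea concrete, the sketch below shows one way an externally developed agent could expose an HTTP endpoint for a proxy to call; the route, payload shape, and port are purely illustrative and do not describe xBot's actual proxy protocol.

```python
# Purely illustrative external agent endpoint; the route and payload shape are
# assumptions, not xBot's actual proxy protocol.
from http.server import BaseHTTPRequestHandler, HTTPServer
import json

class ExternalAgentHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        body = json.loads(self.rfile.read(int(self.headers["Content-Length"])))
        # Any framework (LangChain, LlamaIndex, custom code, ...) could produce
        # the reply here; a static answer keeps the sketch self-contained.
        reply = {"output": f"Processed: {body.get('input', '')}"}
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(json.dumps(reply).encode())

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), ExternalAgentHandler).serve_forever()
```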
The AI Proxy Agent's flexibility and comprehensive support for various frameworks make it an essential component for organizations looking to integrate diverse AI technologies within xBot, ensuring that all AI agents can work together seamlessly within a unified AI Flow.
Advanced Pre-defined Flows
Advanced Pre-defined Flows are specialized, customized workflows tailored to address complex use cases within the xBot platform. These flows are designed based on specific customer requirements and provide a high level of functionality that meets unique business needs.
Key characteristics of Advanced Pre-defined Flows include:
Customization: These flows are created by the xBot development team in response to specific customer requests, allowing for highly specialized and targeted solutions.
Complex Workflows: They are designed to handle intricate processes, combining multiple AI services and external integrations to achieve the desired outcomes.
Non-Editable: Once created, Pre-defined Flows cannot be edited by the end-user. This ensures the integrity and stability of the complex workflows, preventing accidental changes that could disrupt the intended functionality.
These flows provide a powerful tool for businesses that require customized solutions beyond the standard capabilities of xBot, offering robust and reliable automation for complex tasks.
Tools
Tools in xBot represent the concept of executable components that the large language model (LLM) can use to perform specialized tasks and interact with external systems. These tools enable AI Agents to go beyond simple data processing by executing actions that impact external environments or trigger complex workflows.
Key aspects of Tools include:
Executable Actions: The execution of a tool is defined by a list of actions. Each action is linked to a Function (as described in the Functions section), which specifies the operation to be performed. These actions can be carried out in either synchronous (Sync) or asynchronous (Async) mode, depending on the requirements:
Sync Mode: The action waits for the function to complete before proceeding to the next step, ensuring that operations are completed in a specific order.
Async Mode: The action triggers the function and proceeds without waiting for completion, allowing for parallel execution and more efficient processing of tasks.
Action Sequencing and Parallelism: Tools provide flexibility in how actions are executed. Actions can be performed one after another (sequentially) or simultaneously (in parallel), depending on the complexity and demands of the task.
Distributed Execution: xBot supports distributed execution of tool actions. When a tool action is triggered, it can be executed by a distributed worker within the xBot backend. This built-in support ensures that actions are performed efficiently, leveraging xBot's distributed architecture to handle multiple tasks concurrently. This capability enhances the platform's scalability and performance, particularly when dealing with large-scale or time-sensitive operations.
By leveraging Tools, the AI Agents in xBot can execute a broad range of operations, making the AI Flow more robust and versatile. Tools are a crucial part of the platform's ability to automate complex processes and interact seamlessly with external systems, ensuring that business processes are managed efficiently and effectively.
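The sketch below illustrates the Sync/Async distinction with a hypothetical tool whose actions are dispatched by a minimal executor; the tool, action, and function names are made up for illustration.

```python
# Hypothetical tool definition plus a minimal executor for Sync/Async actions;
# names and fields are illustrative, not the xBot schema.
import concurrent.futures

create_ticket_tool = {
    "name": "create_ticket",
    "actions": [
        {"function": "validateCustomer", "mode": "SYNC"},   # must finish before the next step
        {"function": "createJiraTicket", "mode": "ASYNC"},  # triggered without waiting
        {"function": "notifyOnSlack",    "mode": "ASYNC"},  # runs in parallel with the one above
    ],
}

def call_function(name: str) -> str:
    return f"{name} done"    # stand-in for a real Function invocation

def execute_tool(tool: dict) -> None:
    with concurrent.futures.ThreadPoolExecutor() as pool:
        for action in tool["actions"]:
            if action["mode"] == "SYNC":
                print(call_function(action["function"]))        # wait for completion
            else:
                pool.submit(call_function, action["function"])  # proceed without waiting

execute_tool(create_ticket_tool)
```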
Functions
Functions in xBot are used to describe services and their operations that need to be invoked during AI Flow execution. They serve as the core building blocks for defining how and when various service operations should be executed within the flow, as well as specifying the data parameters that should be passed to these services if required.
Key aspects of Functions include:
Service Invocation: Functions define the operations of services that are to be executed during the AI Flow. These functions can be referenced by AI Service action definitions, which specify exactly when the service operations should be invoked during the flow. This allows for precise control over the execution order and timing of service interactions.
Data Parameters: When a Function is invoked, it may require specific data parameters to perform its operation. These parameters can include various types of data, such as request bodies, headers, query parameters, or other contextual information relevant to the service being called. This ensures that the service operation is executed with the necessary context and inputs.
Support for RESTful Services: xBot supports the invocation of RESTful services, which can be defined in two main ways:
OpenAPI Specification: Functions can be defined using an OpenAPI spec, which provides a standardized and comprehensive description of the RESTful service, including endpoints, methods, parameters, and response formats.
Simple Definition: For more straightforward cases, functions can be defined using a simple definition that specifies essential REST API details, such as the URL, HTTP method (GET, POST, PUT, DELETE, etc.), headers, query parameters, and request body. This approach is suitable for services that do not require the full complexity of an OpenAPI spec but still need to be clearly defined and invoked within the AI Flow.
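For illustration, a simple REST definition and an OpenAPI-based definition might be declared along the following lines; the field names, URLs, and operation IDs are assumptions, not the exact xBot schema.

```python
# Hypothetical "simple definition" of a RESTful Function.
get_order_status = {
    "name": "getOrderStatus",
    "type": "SIMPLE",
    "url": "https://api.example.com/orders/{orderId}",
    "method": "GET",
    "headers": {"Authorization": "Bearer ${API_TOKEN}"},
    "queryParams": {"includeHistory": "false"},
}

# An OpenAPI-based definition might instead just reference the spec.
lookup_customer = {
    "name": "lookupCustomer",
    "type": "OPENAPI",
    "spec": "https://api.example.com/openapi.json",
    "operationId": "getCustomerById",
}
```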
By using Functions, xBot ensures that AI Flows are capable of interacting with external services in a structured and controlled manner, allowing for sophisticated automation and integration with a wide range of systems. Functions provide the flexibility to define both complex and simple service interactions, making them a critical component of the platform's ability to handle diverse business needs.
Memory
Memory in xBot is a crucial feature that enables AI agents to retain context across multiple interactions. This capability is essential for creating coherent and contextually relevant conversations, allowing the AI agent to remember details from previous exchanges and use that information to enhance future interactions.
Chat Memory Concept
Chat Memory ensures that AI agents can maintain a continuous and logical dialogue with users by storing key details and conversation history. This allows the AI agent to respond more accurately, taking into account the context of previous messages and interactions.
Message Window Mode
xBot supports a message_window mode for memory management, where the memory retains a fixed number of recent messages. This type of memory is particularly useful when it is important to remember a specific number of past interactions, regardless of the length of individual messages. By focusing on a fixed window of recent exchanges, the AI agent can keep the conversation relevant and up-to-date without being overwhelmed by the full history.
Memory ID
The Memory ID is the unique identifier used to store and retrieve the agent's memory. This ID ensures that the conversation history is correctly associated with the specific AI agent and the particular session. By default, the Memory ID is set to the conversation ID, linking the memory directly to the ongoing dialogue. This default setup allows the AI agent to maintain a consistent context throughout the conversation, ensuring that all relevant information is available whenever needed.
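A minimal sketch of a memory configuration, assuming illustrative key names, could look like this:

```python
# Hypothetical memory configuration; key names are illustrative.
memory_config = {
    "mode": "message_window",   # retain a fixed number of recent messages
    "windowSize": 10,           # how many recent messages to keep
    "memoryId": "$conversationContext.conversationId",  # default: the conversation ID
}
```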
With the Memory feature, xBot provides a robust mechanism for maintaining conversational continuity, making the AI interactions more natural, personalized, and effective. This capability is particularly valuable in scenarios where context and history play a significant role in delivering accurate and meaningful responses.
Knowledge Bases
The Knowledge Bases system in xBot is designed to handle various types of data sources, ensuring that AI agents can efficiently retrieve and process information. This system is crucial for enabling the AI agents to provide accurate and context-aware responses based on the data stored within these knowledge bases.
xBot supports multiple types of knowledge bases, including:
Q&A: A structured repository of question-and-answer pairs, prioritized for its pre-prepared and verified content. This type of knowledge base is particularly useful for handling frequently asked questions and standard queries.
S3: An object storage service that allows AI agents to access unstructured data such as documents, images, and other file types. S3 provides a scalable and reliable source for retrieving large volumes of data.
xFile: A Document Management System (DMS) provided by A4B that supports a comprehensive set of DMS features. xFile includes a file and folder hierarchy, distributed document storage, metadata extraction, permissions, and workflow management, making it a robust source for document retrieval and management within AI Flows.
By integrating these diverse knowledge bases, xBot ensures that AI agents have access to a wide range of data, enabling them to respond to user queries with the most relevant and accurate information available. This capability is fundamental to the platform's ability to support sophisticated and responsive AI-driven interactions.
Variables & Expressions
In xBot AI Flows, variables and expressions play a crucial role in manipulating and selecting data throughout the flow. They allow for dynamic interaction with the flow's execution state and AI service data, enabling the creation of more flexible and responsive AI-driven processes.
Key aspects of Variables & Expressions include:
Expression Usage: Variables and expressions can be used within the AI Flow to select or manipulate data from the flow’s execution state or specific AI services. This capability is essential for dynamically adapting the flow's behavior based on the current context and data inputs.
Expression Syntax: By default, all expressions in xBot should be defined using the jq version 1.6 syntax. jq is a powerful tool for working with JSON data, allowing users to filter, map, and transform JSON content with ease. More detailed information on jq can be found in its official manual.
Built-in Variables: xBot provides a set of built-in common variables that are readily available for selecting and manipulating execution state data. In addition to these, AI Service-specific variables are also defined, allowing for direct interaction with data produced or required by specific services within the flow. Detailed descriptions of these variables and their usage will be provided in the Variables & Expressions documentation page.
By leveraging Variables & Expressions, users can create highly dynamic and adaptable AI Flows that respond intelligently to a wide range of inputs and conditions. This flexibility is key to building complex, real-time AI-driven processes that meet specific business needs.
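As a small illustration, the snippet below evaluates a jq expression against a sample execution state using the third-party jq Python bindings (pip install jq); the state keys are illustrative, not xBot built-in variables.

```python
# Evaluating a jq expression against a sample execution state with the
# third-party "jq" bindings; the state keys below are made up.
import jq

execution_state = {
    "customer": {"fullName": "Jane Doe"},
    "kbAgent": {"answer": "Your order ships tomorrow."},
}

expression = '"Hi " + .customer.fullName + ", " + .kbAgent.answer'
print(jq.compile(expression).input(execution_state).first())
# -> Hi Jane Doe, Your order ships tomorrow.
```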
Agent Execution Context
The Agent Execution Context in xBot is a critical component that encapsulates all the relevant information and state required for the AI Agent to process and respond to user requests effectively. This context ensures that the AI Agent operates with full awareness of the security, user, and conversation-specific details, enabling it to perform tasks accurately and securely.
The Agent Execution Context includes the following key components:
$authContext: This represents the security context that the bot is using to execute the AI Flow. It contains the authenticated username that the bot is operating under, the associated email address, and a list of permissions defining what actions the bot is allowed to perform on behalf of the user. Additionally, any other security-related data required for secure interactions with external systems or services is included in this context. This setup is vital for maintaining secure and compliant operations throughout the AI Flow.
$customer: This variable holds information about the user interacting with xBot through a bot channel. It includes the user's full name, phone number, and email address. This information enables the AI Agent to tailor responses and actions based on the specific user, enhancing the overall interaction experience.
$conversationContext: This contains information about the ongoing conversation between the user and xBot. It includes a unique conversation ID, which allows the AI Agent to track and manage interactions across different sessions, and records the bot channel (such as ZALO, TEAMS, or WEB) through which the user is interacting. The current state of the conversation is also maintained, indicating where the user is within the dialogue flow, along with any other details that help the AI Agent preserve context and continuity. This ensures a seamless and coherent user experience.
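For illustration, the Agent Execution Context might carry data shaped roughly like the sketch below; the values are made up and the exact structure is an assumption based on the description above.

```python
# Illustrative shape of the Agent Execution Context; values are fictitious and
# the exact structure is an assumption.
agent_execution_context = {
    "$authContext": {
        "username": "bot-service-account",
        "email": "bot@example.com",
        "permissions": ["orders:read", "tickets:create"],
    },
    "$customer": {
        "fullName": "Jane Doe",
        "phoneNumber": "+84 90 000 0000",
        "email": "jane.doe@example.com",
    },
    "$conversationContext": {
        "conversationId": "conv-12345",
        "channel": "ZALO",
        "state": "AWAITING_ORDER_ID",
    },
}
```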
By leveraging the Agent Execution Context, xBot ensures that each AI Flow is executed with a comprehensive understanding of the security, user, and conversation-specific details. This context is essential for delivering accurate, context-aware, and secure AI-driven interactions.
Secrets