Knowledge Base Agent

Overview

The Knowledge Base Agent is an AI agent that answers user queries using Retrieval-Augmented Generation (RAG): it retrieves relevant content from one or more configured knowledge bases and combines it with LLM generation to produce accurate, context-aware responses grounded in stored documents and information.

Configuration Details

| Field | Description | Type | Required | Default |
| --- | --- | --- | --- | --- |
| name | The unique name of the knowledge base agent within the workflow. | String | Yes | - |
| start | Indicates if this is the starting agent of the workflow. | Boolean | No | false |
| transition | Defines the transition to the next flow node after this agent completes. See TransitionDef. | Object | No | - |
| answerAfterFinish | Whether the agent should provide an answer message after finishing execution. | Boolean | No | false |
| answerMessage | Custom message to display when the agent finishes (if answerAfterFinish is true). | String | No | - |
| systemMessage | System message that defines the agent's role and behavior context for knowledge base interactions. | String | No | "You are a helpful AI Assistant." |
| userMessage | User message template that defines how user queries are processed against the knowledge base. Can include expressions. | String | Yes | - |
| userIdentityExpression | Expression to identify the user context for personalized knowledge base responses. | String | No | - |
| llmConfig | Advanced LLM configuration for the knowledge base agent, including temperature and token limits. See LLMConfigDef. | Object | No | - |
| memory | Memory configuration for maintaining conversation context across knowledge base interactions. See ChatMemoryDef. | Object | No | - |
| knowledgeBase | Knowledge base configuration, including vector store, embedding model, and retrieval settings. See KnowledgeBaseDef. | Object | Yes | - |
| ragMode | RAG (Retrieval-Augmented Generation) mode for knowledge retrieval and response generation. | String | No | "NAIVE" |
| loginRequired | Whether user authentication is required to access the knowledge base. | Boolean | No | - |
| streaming | Whether responses should be streamed back to the user for real-time feedback. | Boolean | No | - |
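
A minimal configuration only needs the required fields (name, userMessage, and knowledgeBase); every other field falls back to its default. The sketch below reuses the "kbagent" type value and the {{...}} expression syntax from the full example at the end of this page; the agent name and knowledge base code are illustrative.

```json
{
  "name": "docsAgent",
  "type": "kbagent",
  "userMessage": "{{userQuestion}}",
  "knowledgeBase": {
    "knowledgeBaseCodes": ["product-docs"]
  }
}
```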


Common Configuration Objects

ChatMemoryDef

Configures how AI agents manage conversation memory, including short-term message windows, memory optimization through summarization, and long-term memory for persistent context across sessions.

| Field | Description | Type | Required | Default |
| --- | --- | --- | --- | --- |
| memoryId | Unique identifier for the memory session. | String | No | - |
| memoryType | Type of memory management strategy. | String | No | "message_window" |
| maxMessages | Maximum number of messages to retain in memory. | Integer | No | 10 |
| maxTokens | Maximum number of tokens to retain in memory (alternative to maxMessages). | Integer | No | - |
| memoryOptimizer | Configuration for memory optimization strategies like summarization. See MemoryOptimizerDef. | Object | No | - |
| longTermMemory | Configuration for long-term memory that persists across sessions. See LongTermMemoryDef. | Object | No | - |
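
Short-term memory can be bounded either by message count (maxMessages) or by token count (maxTokens). A sketch of a token-bounded memory block, with an illustrative memoryId:

```json
"memory": {
  "memoryId": "kb-session-tokens",
  "memoryType": "message_window",
  "maxTokens": 4000
}
```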

MemoryOptimizerDef

Memory Optimizer Definition. Configures strategies for optimizing memory usage through summarization and other compression techniques to maintain relevant context while reducing memory footprint.

| Field | Description | Type | Required |
| --- | --- | --- | --- |
| summarization | Configuration for conversation summarization to compress older messages. See ConversationSummarizationDef. | Object | No |

ConversationSummarizationDef

Conversation Summarization Definition. Configures how older conversation messages are summarized to compress memory while retaining important context and recent interactions.

| Field | Description | Type | Required | Default |
| --- | --- | --- | --- | --- |
| retainedMessages | Number of recent messages to retain without summarization. | Integer | No | 2 |
| promptMessage | Custom prompt for guiding the summarization process. | String | No | - |

LongTermMemoryDef

Long Term Memory Definition. Configures persistent memory that survives across conversation sessions. Supports different memory types (semantic, episodic, procedural) and scopes (global, local) for various use cases.

| Field | Description | Type | Required | Default |
| --- | --- | --- | --- | --- |
| memoryId | Unique identifier for the long-term memory instance. | String | No | - |
| scope | Scope of the long-term memory: 'global' (shared across agents) or 'local' (agent-specific). | String | No | - |
| memoryType | Type of long-term memory: semantic (facts/knowledge), episodic (events/experiences), or procedural (skills/processes). | String | No | "episodic" |

LLMConfigDef

LLM Configuration Definition. Defines the configuration for AI Language Models used by AI agents. Includes provider settings, authentication, and parameter overrides for fine-tuning model behavior.

| Field | Description | Type | Required |
| --- | --- | --- | --- |
| provider | The name of the LLM provider. | String | No |
| apiKey | The API key to access the AI Language Model provider. | String | No |
| overrideParams | Parameters to override for the provider. These settings take precedence over default model parameters. See LLMParamsDef. | Object | No |

LLMParamsDef

LLM Parameters Definition. Defines configuration parameters for Language Model behavior including sampling settings, token limits, penalties, and other model-specific options that control AI response generation.

| Field | Description | Type | Required | Default |
| --- | --- | --- | --- | --- |
| model | The specific model to use for generating responses. | String | No | - |
| temperature | The sampling temperature (0.0 to 2.0). Higher values make output more random, lower values more focused. | Double | No | - |
| topP | The top-p sampling parameter (0.0 to 1.0). Controls nucleus sampling for token selection. | Double | No | - |
| n | The number of completions to generate for each prompt. | Integer | No | - |
| logprobs | The number of log probabilities to return for each token. | Integer | No | - |
| echo | Whether to echo back the prompt in the response. | Boolean | No | - |
| stop | Sequences where the model should stop generating further tokens. | List | No | [] |
| maxTokens | The maximum number of tokens to generate in the response. | Integer | No | - |
| presencePenalty | Presence penalty parameter (-2.0 to 2.0). Positive values penalize new tokens based on whether they appear in the text so far. | Double | No | - |
| frequencyPenalty | Frequency penalty parameter (-2.0 to 2.0). Positive values penalize new tokens based on their existing frequency in the text. | Double | No | - |
| logitBias | Logit bias configuration. Maps token IDs to bias values (-100 to 100) to increase or decrease the likelihood of specific tokens. | Map<String, Integer> | No | - |
| required | Additional required parameters specific to the model or provider. | Object | No | - |
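
The full example overrides model, temperature, maxTokens, and presencePenalty; the remaining parameters follow the same pattern. A sketch that tunes nucleus sampling and stop sequences instead (the model name and values are illustrative):

```json
"overrideParams": {
  "model": "gpt-4o-mini",
  "topP": 0.9,
  "frequencyPenalty": 0.2,
  "stop": ["\n\nUser:"]
}
```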

KnowledgeBaseDef

Configuration for AI agents that use knowledge bases for RAG (Retrieval-Augmented Generation). Defines how the agent retrieves information from knowledge bases, including search strategies, retrieval configurations, and reranking options to provide accurate, context-aware responses.

| Field | Description | Type | Required | Default |
| --- | --- | --- | --- | --- |
| queryStrategy | Strategy for querying the knowledge base. | String | No | - |
| knowledgeBaseCodes | List of knowledge base codes/identifiers to search against. | List | No | [] |
| ragConfig | RAG (Retrieval-Augmented Generation) configuration, including history and retrieval settings. | Object | No | default RagConfig |
| multiModalSupport | Whether the knowledge base supports multi-modal content (text, images, etc.). | Boolean | No | - |
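
Because ragConfig falls back to a default RagConfig, a knowledge base block can be as small as the codes to search. A sketch (the knowledge base code is illustrative; "semantic" matches the queryStrategy used in the full example):

```json
"knowledgeBase": {
  "queryStrategy": "semantic",
  "knowledgeBaseCodes": ["faq-kb"]
}
```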

RagConfig

RAG (Retrieval-Augmented Generation) Configuration. Defines how the knowledge base retrieval and generation process works, including conversation history management and retrieval parameters.

| Field | Description | Type | Required | Default |
| --- | --- | --- | --- | --- |
| history | Configuration for managing conversation history in the RAG context. See RagHistoryConfig. | Object | No | default RagHistoryConfig |
| retriever | Configuration for the document retrieval component of RAG. See RagRetrieverConfig. | Object | No | default RagRetrieverConfig |

RagHistoryConfig

RAG History Configuration. Controls how conversation history is managed and used in RAG processing.

| Field | Description | Type | Required | Default |
| --- | --- | --- | --- | --- |
| messageLimit | Maximum number of previous messages to include in the RAG context for better conversation continuity. | Integer | No | 20 |

RagRetrieverConfig

RAG Retriever Configuration. Defines how documents are retrieved from the knowledge base, including search parameters, scoring thresholds, and advanced features like reranking and query rewriting.

| Field | Description | Type | Required | Default |
| --- | --- | --- | --- | --- |
| maxResults | Maximum number of documents to retrieve from the knowledge base. | Integer | No | 5 |
| minScore | Minimum similarity score threshold for document retrieval (0.0 to 1.0). | Double | No | 0.3 |
| kbLocalRerank | Whether to use local knowledge base reranking (deprecated; use rerankConfig instead). | Boolean | No | false |
| rerankConfig | Configuration for document reranking to improve retrieval quality. See RerankConfig. | Object | No | - |
| rewriteQueryPrompt | Custom prompt for query rewriting (deprecated; use rewriteQueryConfig instead). | String | No | - |
| rewriteQueryConfig | Configuration for automatic query rewriting to improve search results. See RewriteQueryConfig. | Object | No | - |
| includeDocReference | Whether to include document references in the response for citation purposes. | Boolean | No | false |
| prefetchDocs | Whether to prefetch documents for faster access during retrieval. | Boolean | No | true |
| allowLLMSearch | Whether to allow LLM-based search in addition to traditional retrieval methods. | Boolean | No | false |
| maxSearchToolCalls | Maximum number of search tool calls allowed during retrieval. | Integer | No | 2 |
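
The retriever in the full example tunes result count, scoring, reranking, and query rewriting; the prefetch and LLM-search switches follow the same pattern. A sketch with illustrative values:

```json
"retriever": {
  "maxResults": 8,
  "minScore": 0.5,
  "prefetchDocs": false,
  "allowLLMSearch": true,
  "maxSearchToolCalls": 3
}
```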

RewriteQueryConfig

| Field | Description | Type | Required | Default |
| --- | --- | --- | --- | --- |
| disabled | Whether query rewriting is disabled. | Boolean | No | false |
| rewriteQueryPrompt | Prompt for query rewriting. | String | No | - |
| minQueryWords | Minimum number of words required for query rewriting. | Integer | No | -1 |
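
Query rewriting can also be switched off entirely when user queries are already precise:

```json
"rewriteQueryConfig": {
  "disabled": true
}
```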

RerankConfig

| Field | Description | Type | Required | Default |
| --- | --- | --- | --- | --- |
| useExternalRerank | Whether to use an external reranker. | Boolean | No | true |
| perKbRerank | Whether to rerank results per knowledge base. | Boolean | No | false |

TransitionDef

Defines the flow control mechanism for moving between different states or agents in a workflow. Specifies either a target destination or workflow termination, enabling complex branching and routing logic in AI agent conversations.

| Field | Description | Type | Required | Default |
| --- | --- | --- | --- | --- |
| end | Whether this transition marks the end of the workflow execution. When true, the workflow terminates; when false, execution continues to the target agent. | Boolean | No | false |
| targetId | The unique identifier of the target agent or node to transition to. Must correspond to a valid agent name defined in the workflow. Ignored if 'end' is true. | String | No | - |
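
The full example below routes to a follow-up agent via targetId. To terminate the workflow after this agent instead, set end to true:

```json
"transition": {
  "end": true
}
```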

Example Configuration

```json
{
  "name": "documentQAAgent",
  "type": "kbagent",
  "start": false,
  "systemMessage": "You are a document Q&A assistant. Use the knowledge base to provide accurate, well-sourced answers based on company documentation.",
  "userMessage": "Based on the company documentation, please answer: {{userQuestion}}",
  "userIdentityExpression": "{{user.department}} - {{user.role}}",
  "llmConfig": {
    "provider": "openai",
    "apiKey": "sk-1234567890abcdef",
    "overrideParams": {
      "model": "gpt-4o",
      "temperature": 0.7,
      "maxTokens": 1000,
      "presencePenalty": 0.1
    }
  },
  "memory": {
    "memoryId": "kb-agent-session-123",
    "memoryType": "message_window",
    "maxMessages": 20,
    "maxTokens": 4000,
    "memoryOptimizer": {
      "summarization": {
        "retainedMessages": 5,
        "promptMessage": "Summarize the key points from this knowledge base conversation"
      }
    },
    "longTermMemory": {
      "memoryId": "user-long-term-kb-123",
      "memoryType": "episodic",
      "scope": "local"
    }
  },
  "knowledgeBase": {
    "queryStrategy": "semantic",
    "knowledgeBaseCodes": ["product-docs", "faq-kb", "user-manual"],
    "multiModalSupport": true,
    "ragConfig": {
      "history": {
        "messageLimit": 10
      },
      "retriever": {
        "maxResults": 5,
        "minScore": 0.3,
        "includeDocReference": true,
        "rerankConfig": {
          "useExternalRerank": true,
          "perKbRerank": false
        },
        "rewriteQueryConfig": {
          "disabled": false,
          "rewriteQueryPrompt": "Rewrite this query to be more specific and searchable: {query}",
          "minQueryWords": 3
        }
      }
    }
  },
  "ragMode": "NAIVE",
  "streaming": false,
  "loginRequired": true,
  "transition": {
    "targetId": "followupAgent"
  },
  "answerAfterFinish": false,
  "answerMessage": "Knowledge base search completed successfully"
}
```
