An AI Agent is the task executor within the ecosystem.
It combines three main elements:
  • AI Model: responsible for reasoning and language generation.
  • Tools: which expand the agent’s capacity, allowing it to act on the real world.
  • Knowledge Bases: which provide reliable context to ground the responses.
In practice, the AI Agent is the decision layer: it receives a goal, interprets the user’s input, and decides when to consult a knowledge base, when to trigger a tool, and when to respond with the model alone.

How do they work?

  1. Receiving input: the agent receives the AgentPayload (user text, files, conversation context).
  2. Internal reasoning: it evaluates whether the model alone is enough or whether knowledge bases and tools should be consulted.
  3. Execution of actions: it calls tools (e.g., fetching data, querying an API) or retrieves context from knowledge bases.
  4. Integration of results: it combines the external observations with the model’s reasoning.
  5. Final response: it delivers a well-grounded answer to the user.
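A minimal sketch of this loop in TypeScript is shown below. It assumes the Agent interface defined later on this page (together with its Tool and KnowledgeBase types); the AgentPayload fields and the helper functions (reason, queryKnowledgeBases, callTools, generateResponse) are illustrative assumptions, not the platform’s actual API.

// Sketch only: helper names and AgentPayload fields are assumptions for illustration.
interface AgentPayload {
  text: string;
  files?: string[];
  context?: string[];
}

interface Plan {
  needsTools: boolean;
  needsKnowledge: boolean;
}

// Hypothetical helpers, stubbed so the sketch type-checks on its own.
declare function reason(agent: Agent, payload: AgentPayload): Promise<Plan>;
declare function queryKnowledgeBases(kbs: Agent["knowledgeBases"], query: string): Promise<string[]>;
declare function callTools(tools: Agent["tools"], query: string): Promise<string[]>;
declare function generateResponse(agent: Agent, payload: AgentPayload, observations: string[]): Promise<string>;

async function runAgent(agent: Agent, payload: AgentPayload): Promise<string> {
  // Steps 1-2: receive the payload and decide whether the model alone is enough.
  const plan = await reason(agent, payload);

  // Step 3: call tools and/or retrieve grounding context from knowledge bases.
  const observations: string[] = [];
  if (plan.needsKnowledge) {
    observations.push(...(await queryKnowledgeBases(agent.knowledgeBases, payload.text)));
  }
  if (plan.needsTools) {
    observations.push(...(await callTools(agent.tools, payload.text)));
  }

  // Steps 4-5: combine observations with the model’s reasoning and answer the user.
  return generateResponse(agent, payload, observations);
}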

Best practices for defining agents

  • Well-defined objective (objective): ensure that each agent has a specific purpose (e.g., “customer support” vs. “technical assistant”).
  • Clear persona (role): align the agent’s tone of voice with the field of use (e.g., support needs to be empathetic, engineering can be more technical).
  • Adequate model configuration (temperature, reasoning): adjust to balance creativity, precision, and reasoning ability.
  • Responsible delegation (can.delegate): use only when there are multiple agents and it is safe to redistribute tasks.
  • Tools with minimal scope (tools): enable only what is necessary to reduce the risks of misuse.
  • Relevant knowledge bases (knowledgeBases): do not overwhelm the agent with irrelevant content.
  • Governance by sectors (sectors): clearly delineate who can use each agent.
  • Purposeful multimodality (multimodal): enable it only when images, audio, or video are essential inputs.
The temperature of an AI agent, just like reasoning, affects the response! It defines the degree of randomness, ranging from 0.0 (more consistent and formal responses) to 1.0 (more creative and diverse responses). Aim for a balance between the extremes and validate which setting best suits your scenario.
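For example, a support agent that must stay consistent would typically run at a low temperature, while a brainstorming agent can use a higher one. The values below are hypothetical and use the Agent interface from the next section:

// Hypothetical temperature choices, not official recommendations.
const factualSupport: Partial<Agent> = {
  name: "Plans FAQ",
  temperature: 0.2, // low randomness: consistent, formal answers
};

const creativeAssistant: Partial<Agent> = {
  name: "Campaign Ideas",
  temperature: 0.9, // high randomness: more creative, diverse answers
};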

Orkeia AI Agent Interface

export interface Agent {
  id?: string;
  name: string;
  model: string | AI;
  enabled: boolean;
  auto: boolean;
  reasoning: boolean;
  publisher: string;
  temperature: number;
  role: string;
  objective: string;
  referencies: string;
  "can.delegate": boolean;
  "can.code": boolean;
  multimodal: boolean;
  sectors: string[];
  tools: string[] | Tool[];
  knowledgeBases: string[] | KnowledgeBase[];
}

Example of Orkeia AI Agent JSON

{
  "name": "Orkeia Support",
  "model": "gpt-4-0613",
  "enabled": true,
  "auto": false,
  "reasoning": true,
  "publisher": "Orkeia",
  "temperature": 0.3,
  "role": "Friendly attendant",
  "objective": "Answer customer questions about plans",
  "referencies": "internal docs, plans KB",
  "can.delegate": false,
  "can.code": false,
  "multimodal": false,
  "sectors": ["support"],
  "tools": ["http-fetcher"],
  "knowledgeBases": ["kb_internal_policies"]
}
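In a TypeScript project that imports the Agent interface above, a configuration like this can be consumed as in the sketch below. It is an illustration only; rawConfig is a hypothetical placeholder for however the JSON is actually loaded.

// Sketch: consuming the configuration above (rawConfig is a hypothetical placeholder).
declare const rawConfig: string;

const agent = JSON.parse(rawConfig) as Agent;

// Dotted property names such as "can.delegate" must be read with bracket notation:
if (agent["can.delegate"]) {
  // safe to redistribute tasks to other agents
}

console.log(agent.name, agent.temperature); // "Orkeia Support" 0.3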