Agents in the Atthene Multi-Agent System can be enhanced with various capabilities to handle complex tasks. This guide covers all available capabilities and how to configure them.

Documentation Index
Fetch the complete documentation index at: https://docs.atthene.com/llms.txt
Use this file to discover all available pages before exploring further.
Tools
Tools extend agent capabilities by providing access to external systems, APIs, and computational resources. Only ReAct agents (react_agent) support tool calling.
We’re actively working on integrating with the Model Context Protocol (MCP) to expand the available tools and enable seamless integration with external services.
Available Tools
- tavily_search (Web Search, configurable) - Search engine optimized for comprehensive, accurate results from the web.
- tavily_extract (Content Extraction, configurable) - Extracts comprehensive content from web pages based on URLs.
- google_search (Google Search, configurable) - Gemini-grounded Google Search with AI-synthesized answers and citations.
- calculator (Calculator) - Performs basic mathematical calculations safely.
- randomizer (Randomizer, configurable) - Generates random integers within a configurable range for dynamic workflows.
- wikipedia_search (Wikipedia Search) - Search Wikipedia and get article summaries on any topic.
- arxiv_search (ArXiv Search) - Search academic papers on ArXiv.org across scientific fields.
- youtube_search (YouTube Search) - Search YouTube for videos and content.
- python_repl (Python REPL) - Execute Python code in a safe environment.
- memory (Memory Management, configurable) - Explicitly save information to designated memory spaces for long-term retention.
- schedule_agent_task (Schedule Agent Task) - Create scheduled tasks (cronjobs) for agents to run at specific times.
- manage_agent_tasks (Manage Agent Tasks) - Query, update, enable/disable, or delete existing scheduled tasks.
Configuring Tools
Tools can be added in two ways: simple (just the tool name) or with configuration (for tools that support it).

Simple Tool Configuration

Most tools can be added by just specifying their name.

Advanced Tool Configuration

Some tools support optional configuration for customization.

Configurable tools:
- tavily_search - Web search with advanced options
- tavily_extract - Content extraction with format options
- google_search - Gemini model, temperature, and timeout options
- randomizer - Default min/max range
- memory - Specific memory modules to expose
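For illustration, the two styles might look like this in an agent definition (the tools, name, and config keys are assumptions inferred from this page; see the YAML Configuration reference for the authoritative schema):

```yaml
# Simple: list tool names only (illustrative schema)
tools:
  - calculator
  - wikipedia_search
---
# Advanced: attach a config block to tools that support it
tools:
  - name: tavily_search
    config:
      max_results: 10
      include_answer: true
```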
Tool Configuration Reference
tavily_search Configuration
Available options:
- max_results (integer, default: 5) - Maximum number of search results
- include_answer (boolean, default: false) - Include a short answer to the query
- include_raw_content (string, default: "markdown") - Include cleaned HTML content ("markdown" or "text")
- include_image_descriptions (boolean, default: false) - Include image descriptions
- country (string, optional) - Boost results from a specific country (e.g., "US", "UK")
- auto_parameters (boolean, default: false) - Enable automatic parameter configuration
- timeout (integer, default: 30) - Request timeout in seconds
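As a sketch, several of these options combined (option names and defaults are taken from the list above; the surrounding tools/name/config structure is an assumption):

```yaml
tools:
  - name: tavily_search
    config:
      max_results: 5
      include_answer: true
      include_raw_content: "markdown"
      country: "US"
      timeout: 30
```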
tavily_extract Configuration
Available options:
- format (string, default: "markdown") - Output format: "markdown" or "text"
- timeout (integer, default: 30) - Request timeout in seconds
google_search Configuration
Available options:
- model (string, default: "gemini-2.5-flash") - Gemini model for search grounding
- temperature (float, default: 0.7) - Sampling temperature (0.0-2.0)
- max_output_tokens (integer, default: 8192) - Maximum response tokens
- timeout (integer, default: 60) - Request timeout in seconds
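A sketch of a tuned google_search entry (option names from the list above; the wrapper keys are assumed):

```yaml
tools:
  - name: google_search
    config:
      model: "gemini-2.5-flash"
      temperature: 0.3        # lower for factual search grounding
      max_output_tokens: 8192
      timeout: 60
```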
randomizer Configuration
Available options:
- min_value (integer, default: 0) - Default minimum value (inclusive)
- max_value (integer, default: 100) - Default maximum value (inclusive)
memory Configuration
Available options:
- modules (list of strings, optional) - List of memory module names to expose to this tool. When specified, only these modules are available. When omitted, all modules with auto_save: false are exposed.
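A hedged sketch restricting the tool to specific modules (the module names are placeholders, and the wrapper keys are assumed):

```yaml
tools:
  - name: memory
    config:
      modules:
        - project_notes      # hypothetical module name
        - user_preferences   # hypothetical module name
```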
Tool Calling Configuration
max_iterations - Maximum number of ReAct reasoning-action-observation cycles. Set directly on the agent (not in a nested object). Tool calling is always enabled for react_agent agents when tools are provided.

Recommended values:
- Simple tasks: 3-5 iterations
- Research tasks: 5-10 iterations
- Complex analysis: 10-15 iterations
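Putting this together, a minimal sketch (max_iterations sits directly on the agent, as stated above; the tools key is an assumption):

```yaml
agent_type: "react_agent"
max_iterations: 8        # research task: 5-10 recommended
tools:
  - tavily_search
```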
Tool Usage Examples
Web Search and Research
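A plausible sketch of such an agent, under the illustrative schema assumptions used throughout this page (the name field and prompt wording are hypothetical):

```yaml
name: research-assistant   # hypothetical identifier
agent_type: "react_agent"
max_iterations: 10         # research tasks: 5-10 iterations
prompt_config:
  system_prompt: |
    You are a research assistant. Use web search and ArXiv to
    gather sources, then summarize findings with citations.
tools:
  - tavily_search
  - arxiv_search
  - wikipedia_search
```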
Data Analysis with Calculator
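A plausible sketch of a calculation-focused agent (same schema caveats; the prompt wording is illustrative):

```yaml
agent_type: "react_agent"
max_iterations: 5          # simple tasks: 3-5 iterations
prompt_config:
  system_prompt: |
    You analyze numeric data. Use the calculator tool for all
    arithmetic rather than computing results yourself.
tools:
  - calculator
  - python_repl
```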
Streaming
Streaming enables real-time delivery of agent responses, tool calls, and reasoning steps to end users.

Streaming Configuration
- enable_streaming - Enable streaming of agent responses.
- show_output_to_user - Stream agent text output to users in real time.
- show_tool_to_user - Show tool calling events and results to users.
- show_reasoning - Show the agent's internal reasoning and thought process. This flag has a dual purpose: on models that support thinking (currently Gemini and Vertex AI Anthropic models), it also activates the model's internal reasoning mode, sending enable_thinking=True to the provider. The thinking content is then streamed via ThinkingStart, ThinkingContent, and ThinkingEnd events. (Note: this field is system-managed and controlled by the platform UI, so it is automatically stripped from LLM-generated configs.)
- show_memory - Show memory query events to users. Controls visibility of MemoryCall* events emitted during memory retrieval operations.
- send_preview_snippet - Generate and send an LLM-powered agent introduction snippet before the main response. When enabled, the agent produces a brief preview of what it is about to do, streamed via AgentIntroductionStart, AgentIntroductionContent, and AgentIntroductionEnd events. (Note: this field is system-managed and controlled by the platform UI, so it is automatically stripped from LLM-generated configs.)
- Main chatbox flag - Identifies whether this is the main chatbox agent. Used by the frontend to determine which agent's output should be rendered in the primary chat area. Propagated in agent introduction events. (Note: this field is system-managed and controlled by the platform UI, so it is automatically stripped from LLM-generated configs.)
Streaming Examples
Standard User-Facing Streaming
- ✅ Agent responses (streaming)
- ✅ Tool calls and results
- ✅ Memory retrieval events
- ❌ Internal reasoning
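Expressed with the streaming flags documented on this page (their exact placement in the agent config is an assumption), this profile might be:

```yaml
enable_streaming: true
show_output_to_user: true
show_tool_to_user: true
show_memory: true
show_reasoning: false    # hide internal reasoning from end users
```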
Development/Debug Mode
- ✅ Agent responses (streaming)
- ✅ Tool calls and results
- ✅ Internal reasoning steps (also enables model thinking for Gemini)
- ✅ Memory retrieval events
Main Chatbox Agent with Preview
- ✅ Agent introduction snippet (preview of what it will do)
- ✅ Agent responses (streaming, rendered in primary chat area)
- ✅ Tool calls and results
Background Processing
Streaming Events
When streaming is enabled, the system emits these event types:

Text Message Events
- TextMessageStart - Agent begins generating a response (includes backend-generated message ID)
- TextMessageContent - Streaming content chunks (delta text)
- TextMessageEnd - Response generation complete
Thinking Events (show_reasoning)
- ThinkingStart - Agent begins reasoning (only for models with thinking support)
- ThinkingContent - Streaming thinking/reasoning chunks
- ThinkingEnd - Reasoning complete

These events are emitted only when show_reasoning: true and the model supports thinking (currently Gemini and Vertex AI Anthropic models).

Run Events
- RunStarted - Agent execution begins
- RunFinished - Agent execution completes successfully
- RunError - Agent execution encounters an error
LLM Events (tracing only)
- LLMCallStarted - LLM API call initiated (model, provider, temperature)
- LLMCallCompleted - LLM call finished (includes token usage, cost, duration)
- LLMCallError - LLM call failed
Tool Events (show_tool_to_user)
- ToolCallStart - Tool execution begins (tool name, agent name)
- ToolCallArgs - Tool argument data (streamed)
- ToolCallResult - Tool execution result with content
- ToolCallEnd - Tool execution complete
- ToolCallError - Tool execution failed (AI continues processing)
Knowledge Base Events
- KnowledgeBaseCallStart - KB retrieval begins
- KnowledgeBaseCallArgs - Query and config details
- KnowledgeBaseCallResult - Retrieved results
- KnowledgeBaseCallEnd - Retrieval complete
- KnowledgeBaseCallError - Retrieval failed
Memory Events (show_memory)
- MemoryCallStart - Memory query begins
- MemoryCallResult - Retrieved memory items
- MemoryCallEnd - Memory query complete
- MemoryCallError - Memory query failed
Agent Introduction Events (send_preview_snippet)
- AgentIntroductionStart - Introduction snippet begins
- AgentIntroductionContent - Introduction content
- AgentIntroductionEnd - Introduction complete
Knowledge Base Integration
Agents can access knowledge bases to retrieve domain-specific information and documents. Both LLM agents and ReAct agents support knowledge base integration.

Available Knowledge Base Types
milvus
Milvus Vector Database. Production-ready vector database with dense vector semantic search and multi-tenancy support.
Additional vector database integrations will be supported in future releases.
Configuration
Add knowledge bases directly to agent configuration.

Configuration Reference
Knowledge Base Instance Fields
Required fields:
- name (string) - Instance identifier (alphanumeric, hyphens, underscores only)
- knowledge_base_type (string) - "milvus" or "uknow"

Optional fields:
- enabled (boolean, default: true) - Toggle KB on/off without removing it
- id (string, optional) - KnowledgeBase model ID (for Milvus DB lookup)
- config (object, default: {}) - Adapter-specific retrieval parameters
- description (string, optional) - Human-readable description
Milvus config Fields
- top_k (integer, default: 10) - Number of results to return (1-1000)
- strategy (string, default: "dense") - Search strategy: "dense", "hybrid", "bm25", "hybrid_rrf"
- search_ef (integer, optional) - HNSW search parameter (higher = more accurate but slower)
- score_threshold (float, optional) - Minimum similarity score (0.0-1.0)
- offset (integer, default: 0) - Number of results to skip for pagination
- embedding_provider (string, default: "mistral") - "mistral", "azure_openai", or "telekom_otc"
- embedding_model (string, default: "mistral-embed") - Embedding model name
- query_model (string, optional) - LLM model for query expansion
- max_num_query_expansions (integer, default: 0) - Number of expanded queries (0-5)
- filter_expression (string, optional) - Raw Milvus filter expression
- filters (object, optional) - Haystack-style metadata filters
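An illustrative Milvus KB entry (knowledge_bases as the top-level key and the instance name are assumptions; field names come from the reference above):

```yaml
knowledge_bases:               # illustrative top-level key
  - name: product-docs         # hypothetical instance name
    knowledge_base_type: "milvus"
    enabled: true
    config:
      top_k: 10
      strategy: "hybrid"
      score_threshold: 0.5
      embedding_provider: "mistral"
      embedding_model: "mistral-embed"
```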
UKnow config Fields
- username (string, required) - Cloud storage account email
- drive_key (string, required) - Storage type: "SP" (SharePoint), "ONEDRIVEP" (OneDrive), "GOOGLEDRIVE", "CONFLUENCE"
- path_filter (string, default: "") - Restrict search to folder path
- drive_ids (list of strings, default: []) - Specific drive IDs to search
- query_model (string, optional) - LLM for query processing
- max_num_query_expansions (integer, default: 0) - Number of expanded queries
- search_options.search_type (string, default: "similarity") - "similarity", "similarity_score_threshold", "mmr"
- search_options.fetch_k (integer, default: 5) - Number of results
- search_options.lambda_mult (float, default: 0.5) - MMR lambda multiplier
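An illustrative UKnow entry (same caveats as above; the account, path, and instance name are placeholders):

```yaml
knowledge_bases:               # illustrative top-level key
  - name: sharepoint-docs      # hypothetical instance name
    knowledge_base_type: "uknow"
    config:
      username: "service-account@example.com"    # placeholder
      drive_key: "SP"                            # SharePoint
      path_filter: "/Shared Documents/Policies"  # placeholder path
      search_options:
        search_type: "mmr"
        fetch_k: 5
        lambda_mult: 0.5
```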
LLM Configuration
Fine-tune language model behavior for different use cases.

Model Selection
- gpt-4o - Azure OpenAI (text + image)
- gemini-2.5-flash / gemini-2.5-pro - Google Gemini (text, image, audio, video + thinking)
- gemini-3-flash / gemini-3.1-flash-lite / gemini-3.1-pro - Gemini 3 & 3.1 (text, image, audio, video + thinking, 64K output)
- claude-opus-4-6 / claude-sonnet-4-6 - Vertex AI Anthropic (text, image + thinking)
- mistral-large / mistral-small - Mistral AI (text + image)
- Llama-3.3-70B / Qwen3-30B / claude-sonnet-4 - Telekom OTC
Temperature Control
Temperature controls the randomness and creativity of responses:

| Temperature | Use Case | Example |
|---|---|---|
| 0.0 - 0.3 | Factual, deterministic | Data analysis, fact retrieval |
| 0.4 - 0.7 | Balanced | General conversation, Q&A |
| 0.8 - 1.2 | Creative | Content writing, brainstorming |
| 1.3 - 2.0 | Highly creative | Creative writing, poetry |
Token Limits
- Short responses: 500-1000 tokens
- Standard responses: 1000-2000 tokens
- Long-form content: 2000-4000 tokens
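A sketch combining these knobs (llm_config and max_tokens are assumed key names, not confirmed by this page):

```yaml
llm_config:          # assumed key name
  model: "gemini-2.5-flash"
  temperature: 0.3   # factual/deterministic range
  max_tokens: 1500   # standard response length; field name assumed
```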
Advanced Capabilities
Multi-Agent Coordination
Supervisor agents coordinate multiple specialized agents. Supervisors can coordinate both LLM agents and ReAct agents, creating hybrid teams with diverse capabilities.
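A hypothetical sketch only: the supervisor and LLM agent type identifiers below are placeholders (consult the Agent Types reference for the real names), as is the agents key:

```yaml
name: research-team
agent_type: "supervisor_agent"   # placeholder type name
agents:                          # assumed key for child agents
  - name: researcher
    agent_type: "react_agent"
    tools:
      - tavily_search
  - name: writer
    agent_type: "llm_agent"      # placeholder type name
```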
Agent Handoffs
In supervisor architectures, agents can hand off tasks to each other. Define the coordination strategy in prompt_config.system_prompt.
Capability Combinations
Research + Analysis Agent
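One way such a combination might look, under the illustrative schema used elsewhere on this page:

```yaml
agent_type: "react_agent"
max_iterations: 12               # complex analysis: 10-15
tools:
  - tavily_search
  - arxiv_search
  - calculator
  - python_repl
knowledge_bases:                 # illustrative key
  - name: internal-research      # hypothetical instance
    knowledge_base_type: "milvus"
```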
Creative Writing Agent
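A sketch for a creative profile (llm_config is an assumed key name; the temperature value follows the table above):

```yaml
agent_type: "react_agent"
prompt_config:
  system_prompt: You are a creative writing assistant.
tools:
  - wikipedia_search   # light background research
llm_config:            # assumed key name
  temperature: 1.0     # creative range (0.8-1.2)
```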
Best Practices
Tool Selection
Test agents with minimal tools first, then add more as needed.
Streaming Configuration
Performance Optimization
System Prompts
Troubleshooting
Agent not using tools
Possible causes:
- Agent type is not react_agent (only ReAct agents support tools)
- System prompt doesn't mention tool usage
- max_iterations is too low
- Tools are not registered in the agent registry
Solution: Set agent_type: "react_agent", update prompt_config.system_prompt to encourage tool use, and increase max_iterations.

Infinite tool calling loops
Cause: Agent repeatedly calls tools without reaching a conclusion.

Solution: Lower max_iterations and improve the system prompt to guide the agent toward conclusions.

Streaming not working
Possible causes:
- enable_streaming is false
- show_output_to_user is false
- Network/connection issues
Knowledge base not returning results
Possible causes:
- Knowledge base not properly configured
- enabled is false
- Empty or unindexed knowledge base
Next Steps
- YAML Configuration - Complete YAML configuration reference
- Agent Types - Learn about different agent types
- API Reference - Explore the API for programmatic access