Atthene’s PromptConfig provides sophisticated prompt engineering capabilities through dynamic context control, few-shot examples, and intelligent history management. This enables precise control over what information reaches your LLM.
```yaml
prompt_config:
  system_prompt: string            # Core behavioral instructions (supports template variables)
  include_user_input: bool         # Include current user message (default: true)
  include_history: bool | int      # History control (default: true)
  include_media_in_history: bool   # Include media in history messages (default: false)
  few_shot_examples: list          # Teaching examples (default: [])
```
Atthene supports dynamic template variables in your system_prompt using {{ variable }} syntax. Variables are substituted recursively before the prompt is sent to the LLM, enabling powerful dynamic context injection.
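As a minimal illustration of the syntax (the prompt wording here is invented for the example):

```yaml
prompt_config:
  system_prompt: |
    You are a support assistant.
    The customer's latest message is: "{{ user_input }}"
  include_user_input: false  # Already injected via the template variable
```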
Prompt Reusability: Store common instructions in the prompt library and reference them across multiple agents. Update once, apply everywhere.
Variables in Prompt Library: You can use template variables inside prompt library prompts! For example, a stored prompt can contain {{ user_input }}, {{ history }}, or even other {{ prompts.* }} references. The system resolves them recursively (up to 10 iterations).

Example Prompt Library Entry:
```yaml
Name: "contextual_greeting"
Content: |
  You are a helpful assistant.

  The user just said: "{{ user_input }}"

  Recent conversation:
  {{ history[3] }}

  Respond appropriately based on context.
```
```yaml
prompt_config:
  system_prompt: |
    ## Recent Conversation
    {{ history[5] }}

    ## Task
    Continue the conversation naturally, maintaining context and tone.
  include_history: false  # Using template variable instead
```
Output Format:
```
user: What's the weather today?
assistant: It's sunny with a high of 75°F.
user: Should I bring an umbrella?
assistant: No need for an umbrella today!
```
History vs include_history: You can use {{ history }} template variable for formatted history in your system prompt, or use include_history: true to add messages to the message list. They serve different purposes.
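To make the distinction concrete, here is a sketch of the two approaches side by side (the prompt text and window sizes are illustrative):

```yaml
# Approach 1: formatted history embedded as text in the system prompt
prompt_config:
  system_prompt: |
    ## Context
    {{ history[5] }}

    Answer the user's question using the context above.
  include_history: false  # History already injected via the template variable

# Approach 2: raw messages appended to the message list
prompt_config:
  system_prompt: Answer the user's question.
  include_history: 5  # Last 5 messages sent as structured messages
```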
Access the complete execution trace, including internal messages, tool calls, and tool results.
| Variable | Description | Includes |
| --- | --- | --- |
| `{{ full_history }}` | Complete execution trace | User + Internal + Tool calls + Results |
| `{{ full_history[N] }}` | Last N messages (all types) | Same, limited to N messages |
Example:
```yaml
agents:
  - name: execution_analyzer
    prompt_config:
      system_prompt: |
        ## Complete Execution Trace
        {{ full_history[20] }}

        ## Analysis Task
        Analyze the tool usage patterns and identify:
        1. Which tools were called
        2. Success/failure rates
        3. Optimization opportunities
      include_history: false
      include_user_input: false
```
Output Format:
```
user: Search for Python tutorials
assistant: I'll search for that information.
tool: search_web(query="Python tutorials")
tool_result: Found 10 results: [...]
assistant: Here are the top Python tutorials I found...
```
Full History vs History: {{ full_history }} includes internal messages and tool execution details. Use it for debugging, analysis, or when agents need to understand the complete execution flow.
Agent Coordination: Structured outputs enable sophisticated multi-agent workflows where downstream agents can access precise, typed data from upstream agents.
Access the last message from a specific agent (non-structured output).
| Pattern | Description |
| --- | --- |
| `{{ agent_name.output }}` | Last message from specified agent |
Example:
```yaml
agents:
  - name: researcher
    agent_type: llm_agent
    prompt_config:
      system_prompt: Research the topic and provide findings
  - name: writer
    agent_type: llm_agent
    prompt_config:
      system_prompt: |
        ## Research Findings
        {{ researcher.output }}

        ## Task
        Write a comprehensive article based on these findings.
        Cite sources and maintain accuracy.

edges:
  - from: START
    to: researcher
  - from: researcher
    to: writer
```
```yaml
agents:
  - name: customer_support
    agent_type: llm_agent
    prompt_config:
      system_prompt: |
        You are a helpful customer support agent.
        Your goal is to resolve customer issues efficiently and professionally.
        Always be polite and empathetic.
```
```yaml
agents:
  - name: data_analyst
    agent_type: llm_agent
    prompt_config:
      system_prompt: |
        # Role
        You are an expert data analyst specializing in financial data.

        # Capabilities
        - Analyze trends and patterns
        - Generate statistical insights
        - Create data visualizations
        - Provide actionable recommendations

        # Guidelines
        1. Always cite data sources
        2. Use statistical terminology correctly
        3. Provide confidence levels for predictions
        4. Format numbers with appropriate precision

        # Output Format
        Structure your analysis with:
        - Executive Summary
        - Key Findings
        - Detailed Analysis
        - Recommendations
```
Use Markdown formatting in system prompts for better structure. LLMs understand headings, lists, and emphasis.
````yaml
agents:
  - name: code_reviewer
    agent_type: llm_agent
    prompt_config:
      system_prompt: Review code for quality and best practices
      few_shot_examples:
        - input: |
            def calculate_total(items):
                total = 0
                for item in items:
                    total += item['price']
                return total
          output: |
            **Issues Found:**
            1. No input validation (items could be None)
            2. Missing type hints
            3. No error handling for missing 'price' key

            **Improved Version:**
            ```python
            def calculate_total(items: list[dict]) -> float:
                if not items:
                    return 0.0
                return sum(item.get('price', 0.0) for item in items)
            ```
        - input: |
            def process_data(data):
                return [x * 2 for x in data if x > 0]
          output: |
            **Code Quality: Good**

            **Strengths:**
            - Concise list comprehension
            - Clear filtering logic

            **Suggestions:**
            - Add type hints: `def process_data(data: list[int]) -> list[int]:`
            - Add docstring explaining the transformation
````
### Few-Shot Best Practices

<AccordionGroup>
  <Accordion title="Example Selection">
    - **Diversity**: Cover different scenarios and edge cases
    - **Relevance**: Match your actual use cases
    - **Quality**: Show ideal outputs, not just correct ones
    - **Quantity**: 2-5 examples usually sufficient (more isn't always better)
  </Accordion>
  <Accordion title="Input/Output Format">
    - **Consistency**: Use same format across all examples
    - **Clarity**: Make inputs and outputs clearly distinguishable
    - **Completeness**: Show full expected output format
    - **Realism**: Use realistic data, not toy examples
  </Accordion>
  <Accordion title="Common Patterns">
    - **Classification**: Show examples for each class
    - **Extraction**: Demonstrate different entity types
    - **Transformation**: Show input-output mappings
    - **Reasoning**: Include step-by-step thought process
  </Accordion>
</AccordionGroup>

## History Management

Control how much conversation history is included in prompts.

### Include All History (Default)

```yaml
prompt_config:
  system_prompt: You are a helpful assistant
  include_history: true  # Include all previous messages
```
Use Case: Long conversations where full context is needed

Token Impact: High (all messages included)
```yaml
# Include all conversation history
agents:
  - name: therapist_bot
    agent_type: llm_agent
    prompt_config:
      system_prompt: You are a supportive therapist
      include_history: true  # Need full session context
```
```yaml
prompt_config:
  system_prompt: Generate a summary of the conversation
  include_user_input: false  # Exclude current user message
  include_history: true      # But include all history
```
Use Case:
- Conversation summarization
- Automatic analysis without user prompt
- Background processing tasks
When include_user_input: false, the agent processes only the conversation history without the latest user message. This is useful for automated tasks triggered by system events.
Important for ReAct Agents: Messages with a run_id in their metadata are ALWAYS included in the prompt, regardless of include_history or include_user_input settings. This ensures tool execution results are visible to the agent in reasoning loops.
When a ReAct agent executes tools, the results are tagged with the current run_id. These messages bypass all filtering to ensure the agent can see:
Tool call requests
Tool execution results
Error messages from failed tools
Example Flow:
```yaml
agents:
  - name: research_agent
    agent_type: react_agent
    prompt_config:
      system_prompt: Research topics using available tools
      include_history: 3  # Only last 3 messages
      # But tool results and agent reasoning are ALWAYS included regardless
    tools:
      - search_web
      - read_url

# Execution:
# 1. User: "Research AI trends"
# 2. Agent thinks: "I'll search for AI trends"
# 3. Tool call: search_web(query="AI trends")
# 4. Tool result: [10 search results] <- ALWAYS visible
# 5. Agent sees tool result and continues reasoning
```
Why This Matters:
- Without tool results, ReAct agents can't complete their reasoning loop
- Tool results must be visible even if history is limited
```yaml
agents:
  - name: conversation_summarizer
    agent_type: llm_agent
    prompt_config:
      system_prompt: |
        Summarize the conversation into key points.
        Focus on decisions made and action items.
      include_history: true      # Need full conversation
      include_user_input: false  # Triggered automatically
      few_shot_examples:
        - input: |
            User: I need to update my billing info
            Agent: Sure, I can help with that
            User: My new card ends in 1234
            Agent: Updated successfully
          output: |
            **Summary:**
            - Customer requested billing update
            - New card ending in 1234 added
            - Update completed successfully
```
Atthene automatically validates your prompt configuration:
Empty Configuration Warning
Warning: All components disabled (no system prompt, no examples, no history, no user input)

Fix: Enable at least one component
History Without User Input
Warning: History enabled but user input disabled

Behavior: Only conversation history sent to LLM, not current question

Use Case: Intentional for summarization/analysis
User Input Without History
Info: User input enabled but history disabled

Behavior: Only current question sent (stateless)

Use Case: Classification, single-turn tasks
Few-Shot Format Validation
Error: Missing `input` or `output` in examples

Fix: Ensure each example has both fields
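For reference, a minimal well-formed example entry looks like this (the intent labels are illustrative):

```yaml
few_shot_examples:
  - input: "I want a refund"   # required field
    output: "intent: refund"   # required field
```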
```yaml
name: customer_support_system
description: Multi-agent customer support with optimized prompts
architecture: workflow

agents:
  - name: intent_router
    agent_type: llm_agent
    llm_config:
      model: mistral-small-latest
      temperature: 0.2
    prompt_config:
      system_prompt: |
        Classify customer intent into one of these categories:
        - billing: Payment, invoices, refunds
        - technical: Product issues, bugs, errors
        - general: Questions, information requests
      include_history: false  # Stateless classification
      few_shot_examples:
        - input: "I was charged twice for my subscription"
          output: "billing"
        - input: "The app keeps crashing on startup"
          output: "technical"
        - input: "What are your business hours?"
          output: "general"

  - name: billing_specialist
    agent_type: llm_agent
    llm_config:
      model: gpt-4o
      temperature: 0.5
    prompt_config:
      system_prompt: |
        You are a billing specialist. Handle payment issues professionally.

        **Capabilities:**
        - Check payment history
        - Process refunds
        - Update billing information

        **Guidelines:**
        - Always verify customer identity
        - Explain charges clearly
        - Offer solutions proactively
      include_history: 10              # Recent conversation context
      include_media_in_history: false  # Text only
      few_shot_examples:
        - input: "Why was I charged $50?"
          output: |
            I can help you understand this charge. Let me check your account.

            The $50 charge is for your monthly premium subscription that
            renewed on [date]. This is the standard rate for the premium plan.

            Would you like me to review your subscription details?

  - name: technical_specialist
    agent_type: llm_agent
    llm_config:
      model: gpt-4o
      temperature: 0.3
    prompt_config:
      system_prompt: |
        You are a technical support specialist. Solve product issues efficiently.

        **Troubleshooting Process:**
        1. Understand the issue
        2. Ask clarifying questions
        3. Provide step-by-step solutions
        4. Verify resolution

        **Tone:** Patient, clear, technical but accessible
      include_history: 15             # Need full technical context
      include_media_in_history: true  # Screenshots important
      few_shot_examples:
        - input: "The app won't load"
          output: |
            I'll help you resolve this. Let's troubleshoot:

            1. First, try force-closing the app and reopening it
            2. If that doesn't work, check your internet connection
            3. Ensure you have the latest app version

            Can you try step 1 and let me know if it works?

edges:
  - from: START
    to: intent_router
  - from: intent_router
    condition: "Route to {intent_router.output}"
    condition_type: literal
    possible_outputs: [billing_specialist, technical_specialist]
```
Issue: "PromptConfig has no content"

Cause: All components disabled or empty

Solution: Enable at least one component (system_prompt, examples, history, or user_input)
Few-Shot Validation Error
Issue: "Example missing 'input' or 'output' field"

Cause: Incorrect example format

Solution: Ensure each example has both input and output keys
```yaml
prompt_config:
  system_prompt: Classify user intent
  include_history: false
  include_user_input: true
  few_shot_examples:
    - input: "Cancel my order"
      output: "intent: cancel"
```
```yaml
prompt_config:
  system_prompt: You are a helpful assistant
  include_history: 10  # Last 10 messages
  include_user_input: true
  include_media_in_history: false
```
```yaml
# Agent 1: Analyzer
prompt_config:
  system_prompt: Analyze the input
structured_output:
  enabled: true
  output_name: analysis

# Agent 2: Responder
prompt_config:
  system_prompt: |
    Analysis: {{ analyzer.analysis }}

    Respond based on this analysis.
```
```yaml
prompt_config:
  system_prompt: |
    {{ history }}

    Summarize the above conversation.
  include_history: false     # Using template variable
  include_user_input: false  # Auto-triggered
```