Atthene’s PromptConfig provides sophisticated prompt engineering capabilities through dynamic context control, few-shot examples, and intelligent history management. This enables precise control over what information reaches your LLM.
```yaml
prompt_config:
  system_prompt: string            # Core behavioral instructions (supports template variables)
  include_user_input: bool         # Include current user message (default: true)
  include_history: bool | int      # History control (default: true)
  include_media_in_history: bool   # Include media in history (default: false, i.e. media is stripped)
  include_internal_messages: bool  # Include internal agent outputs (default: true)
  few_shot_examples: list          # Teaching examples (default: [])
```
Atthene supports dynamic template variables in your system_prompt using {{ variable }} syntax. Variables are substituted recursively before the prompt is sent to the LLM, enabling powerful dynamic context injection.
Prompt Reusability: Store common instructions in the prompt library and reference them across multiple agents. Update once, apply everywhere.
Variables in Prompt Library: You can use template variables inside prompt library prompts! For example, a stored prompt can contain {{ user_input }}, {{ history }}, or even other {{ prompts.* }} references. The system resolves them recursively (up to 10 iterations).

Example Prompt Library Entry:
```yaml
Name: "contextual_greeting"
Content: |
  You are a helpful assistant.

  The user just said: "{{ user_input }}"

  Recent conversation:
  {{ history[3] }}

  Respond appropriately based on context.
```
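The recursive resolution described above can be modeled as a fixed-point loop: substitute placeholders, and repeat until nothing changes or the iteration cap is hit. This is an illustrative Python sketch, not Atthene's actual implementation; the regex and flat variable lookup are simplifying assumptions.

```python
import re

def resolve_template(text: str, variables: dict, max_iterations: int = 10) -> str:
    """Repeatedly substitute {{ name }} placeholders until stable.

    Stops after max_iterations to guard against circular references,
    mirroring the documented 10-iteration cap. Unknown placeholders
    are left as-is.
    """
    pattern = re.compile(r"\{\{\s*([\w.\[\]]+)\s*\}\}")
    for _ in range(max_iterations):
        replaced = pattern.sub(
            lambda m: str(variables.get(m.group(1), m.group(0))), text
        )
        if replaced == text:  # fixed point reached, nothing left to expand
            break
        text = replaced
    return text

# A stored prompt can reference another variable, which resolves on a later pass:
ctx = {"greeting": "Hello, {{ user_input }}!", "user_input": "Ada"}
resolve_template("{{ greeting }}", ctx)  # → "Hello, Ada!"
```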
Important: Structured output from agents is NOT included in history variables. Use {{ agent_name.output.field }} template variables to access structured data.
Example:
```yaml
prompt_config:
  system_prompt: |
    ## Recent Conversation
    {{ history[5] }}

    ## Task
    Continue the conversation naturally, maintaining context and tone.
  include_history: false  # Using template variable instead
```
Output Format:
```
user: What's the weather today?
assistant: It's sunny with a high of 75°F.
user: Should I bring an umbrella?
assistant: No need for an umbrella today!
```
Structured Output Exclusion: The history variables only include user-visible text messages. Structured output data from agents with structured_output enabled is excluded from these history arrays and must be accessed via specific template variables.
History vs include_history: You can use {{ history }} template variable for formatted history in your system prompt, or use include_history: true to add messages to the message list. They serve different purposes.
Access the complete execution trace, including internal messages, tool calls, and tool results.
| Variable | Description | Includes |
| --- | --- | --- |
| `{{ full_history }}` | Complete execution trace | User + Internal + Tool calls + Results |
| `{{ full_history[N] }}` | Last N messages (all types) | Same, limited to N messages |
Example:
```yaml
agents:
  - name: execution_analyzer
    prompt_config:
      system_prompt: |
        ## Complete Execution Trace
        {{ full_history[20] }}

        ## Analysis Task
        Analyze the tool usage patterns and identify:
        1. Which tools were called
        2. Success/failure rates
        3. Optimization opportunities
      include_history: false
      include_user_input: false
```
Output Format:
```
user: Search for Python tutorials
assistant: I'll search for that information.
tool: search_web(query="Python tutorials")
tool_result: Found 10 results: [...]
assistant: Here are the top Python tutorials I found...
```
Full History vs History: {{ full_history }} includes internal messages and tool execution details. Use it for debugging, analysis, or when agents need to understand the complete execution flow.
Agent Coordination: Structured outputs enable sophisticated multi-agent workflows where downstream agents can access precise, typed data from upstream agents.
Complex types: When a substituted value is a list or dict, it is automatically serialized to JSON in the prompt.
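The serialization rule above can be sketched in Python (an illustrative model of the documented behavior; the helper name `render_value` is an assumption, not an Atthene API):

```python
import json

def render_value(value) -> str:
    """Lists and dicts are serialized to JSON before substitution;
    everything else is stringified as-is."""
    if isinstance(value, (list, dict)):
        return json.dumps(value, ensure_ascii=False)
    return str(value)

render_value({"status": "ok", "items": [1, 2]})  # → '{"status": "ok", "items": [1, 2]}'
render_value(42)                                 # → '42'
```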
Access the last message from a specific agent (non-structured output).
| Pattern | Description |
| --- | --- |
| `{{ agent_name.output }}` | Last message from the specified agent |
Example:
```yaml
agents:
  - name: researcher
    agent_type: llm_agent
    prompt_config:
      system_prompt: Research the topic and provide findings
  - name: writer
    agent_type: llm_agent
    prompt_config:
      system_prompt: |
        ## Research Findings
        {{ researcher.output }}

        ## Task
        Write a comprehensive article based on these findings.
        Cite sources and maintain accuracy.

edges:
  - from: START
    to: researcher
  - from: researcher
    to: writer
```
Access variables defined in the system’s variables section. These are populated from user inputs, agent variable_assignments, or defaults.
| Pattern | Description |
| --- | --- |
| `{{ variables.name }}` | Value of a custom variable |
Example:
```yaml
variables:
  customer_name:
    type: "str"
    description: "Customer name"
  priority:
    type: "str"
    default: "medium"

agents:
  - name: support_agent
    agent_type: llm_agent
    prompt_config:
      system_prompt: |
        Customer: {{ variables.customer_name }}
        Priority: {{ variables.priority }}

        Help the customer with their request.
```
Variables are the bridge between structured output and prompt substitution. Use variable_assignments on agents to populate variables from structured output, then reference them with {{ variables.name }} in downstream agents. See Variables System for details.
```yaml
agents:
  - name: customer_support
    agent_type: llm_agent
    prompt_config:
      system_prompt: |
        You are a helpful customer support agent.
        Your goal is to resolve customer issues efficiently and professionally.
        Always be polite and empathetic.
```
```yaml
agents:
  - name: data_analyst
    agent_type: llm_agent
    prompt_config:
      system_prompt: |
        # Role
        You are an expert data analyst specializing in financial data.

        # Capabilities
        - Analyze trends and patterns
        - Generate statistical insights
        - Create data visualizations
        - Provide actionable recommendations

        # Guidelines
        1. Always cite data sources
        2. Use statistical terminology correctly
        3. Provide confidence levels for predictions
        4. Format numbers with appropriate precision

        # Output Format
        Structure your analysis with:
        - Executive Summary
        - Key Findings
        - Detailed Analysis
        - Recommendations
```
Use Markdown formatting in system prompts for better structure. LLMs understand headings, lists, and emphasis.
````yaml
agents:
  - name: code_reviewer
    agent_type: llm_agent
    prompt_config:
      system_prompt: Review code for quality and best practices
      few_shot_examples:
        - input: |
            def calculate_total(items):
                total = 0
                for item in items:
                    total += item['price']
                return total
          output: |
            **Issues Found:**
            1. No input validation (items could be None)
            2. Missing type hints
            3. No error handling for missing 'price' key

            **Improved Version:**
            ```python
            def calculate_total(items: list[dict]) -> float:
                if not items:
                    return 0.0
                return sum(item.get('price', 0.0) for item in items)
            ```
        - input: |
            def process_data(data):
                return [x * 2 for x in data if x > 0]
          output: |
            **Code Quality:** Good

            **Strengths:**
            - Concise list comprehension
            - Clear filtering logic

            **Suggestions:**
            - Add type hints: `def process_data(data: list[int]) -> list[int]:`
            - Add a docstring explaining the transformation
````
### Few-Shot Best Practices

<AccordionGroup>
  <Accordion title="Example Selection">
    - **Diversity**: Cover different scenarios and edge cases
    - **Relevance**: Match your actual use cases
    - **Quality**: Show ideal outputs, not just correct ones
    - **Quantity**: 2-5 examples are usually sufficient (more isn't always better)
  </Accordion>
  <Accordion title="Input/Output Format">
    - **Consistency**: Use the same format across all examples
    - **Clarity**: Make inputs and outputs clearly distinguishable
    - **Completeness**: Show the full expected output format
    - **Realism**: Use realistic data, not toy examples
  </Accordion>
  <Accordion title="Common Patterns">
    - **Classification**: Show examples for each class
    - **Extraction**: Demonstrate different entity types
    - **Transformation**: Show input-output mappings
    - **Reasoning**: Include the step-by-step thought process
  </Accordion>
</AccordionGroup>

## History Management

Control how much conversation history is included in prompts.

### Include All History (Default)

```yaml
prompt_config:
  system_prompt: You are a helpful assistant
  include_history: true  # Include all previous messages
```
**Use Case:** Long conversations where full context is needed

**Token Impact:** High - all messages included
```yaml
# Include all conversation history
agents:
  - name: therapist_bot
    agent_type: llm_agent
    prompt_config:
      system_prompt: You are a supportive therapist
      include_history: true  # Need full session context
```
```yaml
prompt_config:
  system_prompt: Generate a summary of the conversation
  include_user_input: false  # Exclude current user message
  include_history: true      # But include all history
```
**Use Cases:**
- Conversation summarization
- Automatic analysis without a user prompt
- Background processing tasks
When include_user_input: false, the agent processes only the conversation history without the latest user message. This is useful for automated tasks triggered by system events.
Control whether outputs from internal agents (those with streaming_config.show_output_to_user: false) are included in the conversation history sent to the LLM.
```yaml
prompt_config:
  system_prompt: You are a coordinator agent
  include_internal_messages: true  # See all agent outputs
```
**Behavior:** Both user-facing messages and internal agent outputs appear in the conversation history.

**Use Case:** Agents that need full awareness of the multi-agent workflow state.
```yaml
prompt_config:
  system_prompt: You are a customer-facing assistant
  include_internal_messages: false  # Only see user-visible messages
```
**Behavior:** Only user messages and messages from agents with show_output_to_user: true appear in history. Internal agent outputs are filtered out.

**Use Case:** Customer-facing agents that should only see the user conversation, not background processing.
Internal messages are outputs from agents with streaming_config.show_output_to_user: false. These agents run in the background without emitting text to the end user.
Important for ReAct Agents: Messages with a run_id in their metadata are ALWAYS included in the prompt, regardless of include_history or include_user_input settings. This ensures tool execution results are visible to the agent in reasoning loops.
When a ReAct agent executes tools, the results are tagged with the current run_id. These messages bypass all filtering to ensure the agent can see:
- Tool call requests
- Tool execution results
- Error messages from failed tools
Example Flow:
```yaml
agents:
  - name: research_agent
    agent_type: react_agent
    prompt_config:
      system_prompt: Research topics using available tools
      include_history: 3  # Only last 3 messages
      # But tool results and agent reasoning are ALWAYS included regardless
    tools:
      - search_web
      - read_url

# Execution:
# 1. User: "Research AI trends"
# 2. Agent thinks: "I'll search for AI trends"
# 3. Tool call: search_web(query="AI trends")
# 4. Tool result: [10 search results] ← ALWAYS visible
# 5. Agent sees tool result and continues reasoning
```
**Why This Matters:**
- Without tool results, ReAct agents can't complete their reasoning loop
- Tool results must be visible even if history is limited
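The run_id bypass can be sketched as a filter that applies the history window but always re-admits messages tagged with the current run. This is an illustrative Python model of the documented behavior, not Atthene's implementation; the function name and message shape are assumptions.

```python
def select_history(messages: list[dict], limit: int, current_run_id: str) -> list[dict]:
    """Apply an include_history-style window of the last `limit` messages,
    but always keep messages whose metadata carries the current run_id
    (tool calls and tool results in a ReAct loop)."""
    windowed = {id(m) for m in messages[-limit:]}
    return [
        m for m in messages
        if id(m) in windowed
        or m.get("metadata", {}).get("run_id") == current_run_id
    ]

msgs = [
    {"role": "user", "content": "Research AI trends"},
    {"role": "tool", "content": "10 results", "metadata": {"run_id": "run-1"}},
    {"role": "assistant", "content": "Summary so far"},
]
# With limit=1 only the last message fits the window, but the
# tool result from run-1 survives the filter anyway.
select_history(msgs, 1, "run-1")
```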
```yaml
agents:
  - name: conversation_summarizer
    agent_type: llm_agent
    prompt_config:
      system_prompt: |
        Summarize the conversation into key points.
        Focus on decisions made and action items.
      include_history: true      # Need full conversation
      include_user_input: false  # Triggered automatically
      few_shot_examples:
        - input: |
            User: I need to update my billing info
            Agent: Sure, I can help with that
            User: My new card ends in 1234
            Agent: Updated successfully
          output: |
            **Summary:**
            - Customer requested billing update
            - New card ending in 1234 added
            - Update completed successfully
```
Atthene automatically validates your prompt configuration:
Empty Configuration Warning
**Warning:** All components disabled (no system prompt, no examples, no history, no user input)

**Fix:** Enable at least one component
History Without User Input
**Warning:** History enabled but user input disabled

**Behavior:** Only conversation history is sent to the LLM, not the current question

**Use Case:** Intentional for summarization/analysis
User Input Without History
**Info:** User input enabled but history disabled

**Behavior:** Only the current question is sent (stateless)

**Use Case:** Classification, single-turn tasks
Few-Shot Format Validation
**Error:** Missing 'input' or 'output' in examples

**Fix:** Ensure each example has both fields
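The validation rules above can be sketched as a single check function. This is an illustrative Python model of the documented checks, not Atthene's validator; the function name, message strings, and config shape are assumptions.

```python
def validate_prompt_config(cfg: dict) -> list[str]:
    """Return warnings/errors mirroring the documented validation rules."""
    issues = []
    # Empty configuration: every component disabled or empty
    if not any([cfg.get("system_prompt"),
                cfg.get("few_shot_examples"),
                cfg.get("include_history", True),
                cfg.get("include_user_input", True)]):
        issues.append("warning: all prompt components disabled")
    # History without user input (intentional for summarization/analysis)
    if cfg.get("include_history", True) and not cfg.get("include_user_input", True):
        issues.append("warning: history enabled but user input disabled")
    # Few-shot format: each example needs both fields
    for example in cfg.get("few_shot_examples", []):
        if "input" not in example or "output" not in example:
            issues.append("error: few-shot example missing 'input' or 'output'")
    return issues

validate_prompt_config({"system_prompt": "Classify intent",
                        "few_shot_examples": [{"input": "a", "output": "b"}]})
# → [] (valid configuration)
```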
```yaml
name: customer_support_system
description: Multi-agent customer support with optimized prompts
architecture: workflow

agents:
  - name: intent_router
    agent_type: llm_agent
    llm_config:
      model: mistral-small-latest
      temperature: 0.2
    prompt_config:
      system_prompt: |
        Classify customer intent into one of these categories:
        - billing: Payment, invoices, refunds
        - technical: Product issues, bugs, errors
        - general: Questions, information requests
      include_history: false  # Stateless classification
      few_shot_examples:
        - input: "I was charged twice for my subscription"
          output: "billing"
        - input: "The app keeps crashing on startup"
          output: "technical"
        - input: "What are your business hours?"
          output: "general"

  - name: billing_specialist
    agent_type: llm_agent
    llm_config:
      model: gpt-4o
      temperature: 0.5
    prompt_config:
      system_prompt: |
        You are a billing specialist. Handle payment issues professionally.

        **Capabilities:**
        - Check payment history
        - Process refunds
        - Update billing information

        **Guidelines:**
        - Always verify customer identity
        - Explain charges clearly
        - Offer solutions proactively
      include_history: 10              # Recent conversation context
      include_media_in_history: false  # Text only
      few_shot_examples:
        - input: "Why was I charged $50?"
          output: |
            I can help you understand this charge. Let me check your account.

            The $50 charge is for your monthly premium subscription that
            renewed on [date]. This is the standard rate for the premium plan.

            Would you like me to review your subscription details?

  - name: technical_specialist
    agent_type: llm_agent
    llm_config:
      model: gpt-4o
      temperature: 0.3
    prompt_config:
      system_prompt: |
        You are a technical support specialist. Solve product issues efficiently.

        **Troubleshooting Process:**
        1. Understand the issue
        2. Ask clarifying questions
        3. Provide step-by-step solutions
        4. Verify resolution

        **Tone:** Patient, clear, technical but accessible
      include_history: 15             # Need full technical context
      include_media_in_history: true  # Screenshots important
      few_shot_examples:
        - input: "The app won't load"
          output: |
            I'll help you resolve this. Let's troubleshoot:

            1. First, try force-closing the app and reopening it
            2. If that doesn't work, check your internet connection
            3. Ensure you have the latest app version

            Can you try step 1 and let me know if it works?

edges:
  - from: START
    to: intent_router
  - from: intent_router
    condition: "Route to {intent_router.output}"
    condition_type: literal
    possible_outputs: [billing_specialist, technical_specialist]
```
**Issue:** "PromptConfig has no content"

**Cause:** All components disabled or empty

**Solution:** Enable at least one component (system_prompt, examples, history, or user_input)
Few-Shot Validation Error
**Issue:** "Example missing 'input' or 'output' field"

**Cause:** Incorrect example format

**Solution:** Ensure each example has both input and output keys
```yaml
prompt_config:
  system_prompt: Classify user intent
  include_history: false
  include_user_input: true
  few_shot_examples:
    - input: "Cancel my order"
      output: "intent: cancel"
```
```yaml
prompt_config:
  system_prompt: You are a helpful assistant
  include_history: 10  # Last 10 messages
  include_user_input: true
  include_media_in_history: false
```
```yaml
# Agent 1: Analyzer
prompt_config:
  system_prompt: Analyze the input
structured_output:
  enabled: true
  schema:
    summary:
      type: "str"
      description: "Analysis summary"

# Agent 2: Responder
prompt_config:
  system_prompt: |
    Analysis: {{ analyzer.output.summary }}

    Respond based on this analysis.
```
```yaml
prompt_config:
  system_prompt: |
    {{ history }}

    Summarize the above conversation.
  include_history: false     # Using template variable
  include_user_input: false  # Auto-triggered
```