Core Concepts
Understanding the fundamental architecture of Aegeantic
Architecture Overview
Aegeantic is built around discrete, composable components that work together to enable agent execution. Each component has a single, well-defined responsibility:
Storage Layer
Provides persistent storage via RocksDB or in-memory storage for testing. All agent state is stored here.
Context Manager
Manages versioned context with automatic iteration tracking. Wraps storage and provides version control.
Pattern System
Extracts structured segments (tools, reasoning, responses) from LLM output using configurable patterns.
Tool System
Executes tools with validation, timeout handling, and multi-mode execution (thread, process, async).
Event System
Streams real-time events for all operations: LLM chunks, tool execution, pattern detection, status changes.
Context Management
Context is the core state management system in Aegeantic. Every piece of agent state is stored in a versioned context.
Key Features
- Automatic Versioning: Every write creates a new version, preserving full history
- Iteration Tracking: Global iteration counter tracks agent execution steps
- Tombstone Deletion: Deletions create tombstone versions, preserving history
- Pattern-based Access: List keys with prefix matching for efficient queries
Context Operations
# Set creates a new version (accepts string or bytes)
context.set("llm_output", "Agent response")
# Update overwrites current version (for streaming)
context.update("streaming_content", "Partial text...")
# Get returns auto-decoded string (most common use case)
text = context.get("llm_output")
print(text) # "Agent response"
# Get with specific version
old_text = context.get("llm_output", version=1)
# Get binary data (images, pickled objects, etc.)
binary_data = context.get_bytes("model_weights")
# Get full record with metadata (version, iteration, timestamp)
record = context.get_record("llm_output")
print(record.value) # bytes
print(record.version) # 1
print(record.iteration) # Current iteration
# Get full history
history = context.get_history("llm_output", max_versions=10)
# List all keys with prefix
keys = context.list_keys(prefix="tool:")
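The versioning and tombstone-deletion semantics described under Key Features can be modeled in a few lines of plain Python. This is an illustrative sketch of the behavior only, not Aegeantic's storage implementation:

```python
# Illustrative model of versioned keys with tombstone deletion.
# Not Aegeantic's implementation — just the semantics.
class VersionedStore:
    def __init__(self):
        self._history = {}  # key -> list of (version, value_or_None)

    def set(self, key, value):
        versions = self._history.setdefault(key, [])
        versions.append((len(versions) + 1, value))  # every write = new version

    def delete(self, key):
        # A tombstone is just another version marking deletion,
        # so earlier versions remain readable.
        versions = self._history.setdefault(key, [])
        versions.append((len(versions) + 1, None))

    def get(self, key, version=None):
        versions = self._history.get(key, [])
        if not versions:
            return None
        if version is None:
            return versions[-1][1]  # latest value (None if tombstoned)
        return versions[version - 1][1]

store = VersionedStore()
store.set("llm_output", "draft")
store.set("llm_output", "final")
store.delete("llm_output")
print(store.get("llm_output"))             # None (tombstone)
print(store.get("llm_output", version=2))  # "final" — history preserved
```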
Iterations
Iterations provide a global timeline for agent execution. Each agent step increments the iteration counter, creating a chronological record of all operations.
Iteration Manager
iteration_mgr = IterationManager(storage)
# Get current iteration
current = iteration_mgr.get() # 0
# Increment iteration
next_iter = iteration_mgr.next() # 1
# Register event markers
iteration_mgr.register_event("agent_started")
AgentRunner auto-increments iterations on each step. Disable with auto_increment_iteration=False in AgentConfig.
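The counter semantics above are simple enough to model directly. The sketch below mirrors the get/next/register_event calls — an illustrative stand-in, not Aegeantic's IterationManager:

```python
# Toy model of a global iteration counter with event markers.
# Illustrative only — not Aegeantic's IterationManager.
class ToyIterationManager:
    def __init__(self):
        self._current = 0
        self._events = []  # (iteration, event_name) markers

    def get(self):
        return self._current

    def next(self):
        self._current += 1
        return self._current

    def register_event(self, name):
        self._events.append((self._current, name))

mgr = ToyIterationManager()
print(mgr.get())                     # 0
mgr.register_event("agent_started")  # marker recorded at iteration 0
print(mgr.next())                    # 1 — one agent step has run
```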
Patterns
Patterns define how to extract structured segments from unstructured LLM output.
Pattern Types
- TOOL: Tool invocations (parsed as JSON or line format)
- REASONING: Internal reasoning steps
- RESPONSE: Final response to user
Default Pattern Set
# Default patterns use XML-style tags
<tool>
{
  "name": "search",
  "arguments": {"query": "Python"}
}
</tool>
<reasoning>
I need to search for Python information first.
</reasoning>
<response>
Here's what I found about Python...
</response>
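Conceptually, extracting these tagged segments amounts to matching start/end tag pairs in the output. A minimal non-streaming sketch using regular expressions (Aegeantic's pattern system works incrementally on LLM chunks; this only illustrates the end result):

```python
import json
import re

# Minimal, non-streaming sketch of tag-based segment extraction.
# Aegeantic's real pattern engine processes chunks incrementally.
def extract_segments(text):
    segments = []
    for tag in ("tool", "reasoning", "response"):
        for match in re.finditer(rf"<{tag}>(.*?)</{tag}>", text, re.DOTALL):
            content = match.group(1).strip()
            if tag == "tool":
                content = json.loads(content)  # tool bodies are expected as JSON
            segments.append((tag, content))
    return segments

output = """<tool>
{"name": "search", "arguments": {"query": "Python"}}
</tool>
<response>Here's what I found.</response>"""
print(extract_segments(output))
# [('tool', {'name': 'search', 'arguments': {'query': 'Python'}}),
#  ('response', "Here's what I found.")]
```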
Custom Patterns
Create custom pattern sets for different LLM formats:
from Aegeantic import PatternSet, Pattern, SegmentType
custom_patterns = PatternSet(
    name="custom",
    patterns=[
        Pattern(
            name="tool",
            start_tag="```json",
            end_tag="```",
            segment_type=SegmentType.TOOL,
            expected_format="json"
        ),
        Pattern(
            name="thinking",
            start_tag="<think>",
            end_tag="</think>",
            segment_type=SegmentType.REASONING
        )
    ],
    default_response_behavior="all_remaining"
)
# Register with your pattern-system instance
patterns.register_pattern_set(custom_patterns)
Tools
Tools are functions that agents can execute. Aegeantic provides validation, timeout handling, and multiple execution modes.
Tool Definition
from Aegeantic import ProcessingMode, create_tool

def calculate(args):
    a = args["a"]
    b = args["b"]
    operation = args.get("operation", "add")
    if operation == "add":
        return {"result": a + b}
    elif operation == "multiply":
        return {"result": a * b}
    else:
        return {"error": f"unknown operation: {operation}"}

tool = create_tool(
    name="calculate",
    func=calculate,
    input_schema={
        "validator": "simple",
        "required": ["a", "b"],
        "fields": {
            "a": {"type": "int"},
            "b": {"type": "int"},
            "operation": {"type": "str"}
        }
    },
    timeout_seconds=10.0,
    processing_mode=ProcessingMode.THREAD
)
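The input_schema above declares a "simple" validator with required fields and per-field types. A plausible model of that check is sketched below; the behavior shown is an assumption for illustration, not Aegeantic's actual validator:

```python
# Illustrative model of a "simple" schema validator.
# The exact semantics of Aegeantic's validator are assumed here.
TYPE_MAP = {"int": int, "str": str, "float": float, "bool": bool}

def validate_simple(schema, args):
    errors = []
    for field in schema.get("required", []):
        if field not in args:
            errors.append(f"missing required field: {field}")
    for field, spec in schema.get("fields", {}).items():
        if field in args and not isinstance(args[field], TYPE_MAP[spec["type"]]):
            errors.append(f"{field}: expected {spec['type']}")
    return errors

schema = {
    "validator": "simple",
    "required": ["a", "b"],
    "fields": {"a": {"type": "int"}, "b": {"type": "int"},
               "operation": {"type": "str"}},
}
print(validate_simple(schema, {"a": 1, "b": 2}))  # []
print(validate_simple(schema, {"a": "one"}))      # missing "b", wrong type for "a"
```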
Processing Modes
- THREAD: Execute in thread pool (default, best for I/O-bound tools)
- PROCESS: Execute in process pool (CPU-bound tools, isolation)
- ASYNC: Execute as async coroutine (native async tools)
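The three modes map naturally onto Python's standard concurrency primitives. A rough sketch of how a dispatcher might run a tool in each mode with a timeout — standard-library building blocks only, not Aegeantic's executor:

```python
import asyncio
from concurrent.futures import ProcessPoolExecutor, ThreadPoolExecutor

# Rough sketch of per-mode dispatch with timeouts.
# Illustrative only — not Aegeantic's executor.
def run_in_thread(func, args, timeout):
    # THREAD mode: good for I/O-bound tools.
    with ThreadPoolExecutor(max_workers=1) as pool:
        return pool.submit(func, args).result(timeout=timeout)

def run_in_process(func, args, timeout):
    # PROCESS mode: isolation for CPU-bound tools; func must be picklable.
    with ProcessPoolExecutor(max_workers=1) as pool:
        return pool.submit(func, args).result(timeout=timeout)

async def run_async(coro_func, args, timeout):
    # ASYNC mode: native async tools.
    return await asyncio.wait_for(coro_func(args), timeout=timeout)

def add_tool(args):
    return {"result": args["a"] + args["b"]}

print(run_in_thread(add_tool, {"a": 2, "b": 3}, timeout=10.0))  # {'result': 5}
```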
Events
Everything in Aegeantic flows through events. Events enable streaming, real-time monitoring, and reactive patterns.
Core Event Types
The most commonly encountered events during basic agent execution:
- LLMChunkEvent - Individual chunks from LLM
- LLMCompleteEvent - LLM generation complete
- PatternStartEvent - Pattern opening tag detected
- PatternEndEvent - Pattern complete with content
- ToolStartEvent - Tool execution started
- ToolEndEvent - Tool execution complete
- StatusEvent - Agent status change
- StepCompleteEvent - Agent step complete with final result
Consuming Events
async for event in runner.step_stream(user_input):
    match event.type:
        case "llm_chunk":
            print(event.chunk, end="")
        case "tool_start":
            print(f"\n[Tool: {event.tool_name}]")
        case "step_complete":
            result = event.result
            print(f"\nStatus: {result.status}")
Agent Lifecycle
Understanding the agent execution flow:
- Prompt Building: Construct prompt from context using input_mapping
- LLM Generation: Stream chunks from LLM provider
- Pattern Extraction: Detect and extract patterns in real-time
- Tool Execution: Execute detected tools (sequential or concurrent)
- Context Update: Store results using output_mapping
- Result Return: Emit StepCompleteEvent with final result
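The six stages above can be sketched end-to-end as a toy step function. Every name here is hypothetical (a stubbed LLM, a single inline tool, a plain dict for context) and stands in for Aegeantic's real components:

```python
import json
import re

# Toy end-to-end model of one agent step, mirroring the six lifecycle
# stages. All names are hypothetical — this is not Aegeantic's API.
def toy_llm(prompt):
    # 2. LLM generation (stubbed): emits a tool call, then a response.
    return ('<tool>{"name": "echo", "arguments": {"text": "hi"}}</tool>'
            '<response>Done.</response>')

def toy_step(context, user_input):
    prompt = f"User: {user_input}"                                 # 1. prompt building
    output = toy_llm(prompt)
    tools = re.findall(r"<tool>(.*?)</tool>", output, re.DOTALL)   # 3. pattern extraction
    for raw in tools:                                              # 4. tool execution
        call = json.loads(raw)
        context[f"tool:{call['name']}"] = call["arguments"]["text"]
    response = re.search(r"<response>(.*?)</response>", output, re.DOTALL).group(1)
    context["llm_output"] = response                               # 5. context update
    return {"status": "complete", "response": response}            # 6. result return

ctx = {}
result = toy_step(ctx, "say hi")
print(result["response"])  # Done.
```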
Next Steps
- Agent System - Configure agents and execution
- Patterns - Deep dive into pattern extraction
- Tools - Create and manage tools
- Logic Flows - Build multi-step agent loops