Event System
Real-time event streaming for agent execution monitoring
Overview
Aegeantic provides a comprehensive event system with 21 event types that stream real-time updates during agent execution, including graph orchestration. Events enable reactive UIs, monitoring, debugging, and custom workflows built on agent execution.
All events are delivered through standard AsyncIterator interfaces, making it easy to build responsive, real-time applications.
Event Types
LLMChunkEvent
Emitted for each chunk generated by the LLM during streaming.
event.type = "llm_chunk"
event.chunk = "Hello" # Individual chunk
event.step_id = "uuid..." # Step identifier
LLMCompleteEvent
Emitted when LLM generation completes.
event.type = "llm_complete"
event.full_output = "Complete LLM response..."
event.step_id = "uuid..."
PatternStartEvent
Emitted when a pattern's opening tag is detected.
event.type = "pattern_start"
event.pattern_name = "tool"
event.pattern_type = SegmentType.TOOL
event.step_id = "uuid..."
PatternContentEvent
Emitted for partial pattern content during streaming (if stream_pattern_content=True).
event.type = "pattern_content"
event.pattern_name = "reasoning"
event.content = "Partial text..."
event.is_partial = True
event.step_id = "uuid..."
PatternEndEvent
Emitted when a pattern's closing tag is detected.
event.type = "pattern_end"
event.pattern_name = "tool"
event.pattern_type = SegmentType.TOOL
event.full_content = "Complete pattern content"
event.step_id = "uuid..."
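Because the three pattern events arrive as a start/content/end sequence, a consumer can accumulate streamed content per pattern and then replace it with the authoritative `full_content` from `pattern_end`. A minimal sketch, using `SimpleNamespace` stubs in place of real event objects (the `PatternAccumulator` class is illustrative, not part of the library):

```python
from types import SimpleNamespace

class PatternAccumulator:
    """Collects partial content between pattern_start and pattern_end."""
    def __init__(self):
        self.buffers = {}   # pattern_name -> list of partial strings
        self.finished = {}  # pattern_name -> full content

    def handle(self, event):
        if event.type == "pattern_start":
            self.buffers[event.pattern_name] = []
        elif event.type == "pattern_content":
            self.buffers.setdefault(event.pattern_name, []).append(event.content)
        elif event.type == "pattern_end":
            # pattern_end carries the complete content, so prefer it
            self.finished[event.pattern_name] = event.full_content

# Stubbed event sequence for illustration
acc = PatternAccumulator()
acc.handle(SimpleNamespace(type="pattern_start", pattern_name="reasoning"))
acc.handle(SimpleNamespace(type="pattern_content", pattern_name="reasoning", content="Partial "))
acc.handle(SimpleNamespace(type="pattern_end", pattern_name="reasoning", full_content="Partial text"))
print(acc.finished["reasoning"])  # Partial text
```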
ToolStartEvent
Emitted when tool execution begins.
event.type = "tool_start"
event.tool_name = "search"
event.arguments = {"query": "Python"}
event.iteration = 1
event.call_id = "unique_id"
event.step_id = "uuid..."
ToolOutputEvent
Emitted for tool output (full or streaming).
event.type = "tool_output"
event.tool_name = "search"
event.output = {"results": [...]}
event.is_partial = False
event.call_id = "unique_id"
event.step_id = "uuid..."
ToolEndEvent
Emitted when tool execution completes.
event.type = "tool_end"
event.tool_name = "search"
event.result = ToolResult(...) # Full result object
event.call_id = "unique_id"
event.step_id = "uuid..."
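Since `tool_start`, `tool_output`, and `tool_end` share a `call_id`, a consumer can reconstruct each tool call's lifecycle even when calls interleave. A small sketch with stubbed events (`group_tool_events` is a hypothetical helper):

```python
from collections import defaultdict
from types import SimpleNamespace

def group_tool_events(events):
    """Group tool lifecycle events by their shared call_id."""
    calls = defaultdict(list)
    for event in events:
        if event.type in {"tool_start", "tool_output", "tool_end"}:
            calls[event.call_id].append(event.type)
    return dict(calls)

# Stubbed event sequence for illustration
events = [
    SimpleNamespace(type="tool_start", call_id="c1"),
    SimpleNamespace(type="tool_output", call_id="c1"),
    SimpleNamespace(type="tool_end", call_id="c1"),
]
print(group_tool_events(events))  # {'c1': ['tool_start', 'tool_output', 'tool_end']}
```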
ToolDecisionEvent
Emitted when a tool is accepted or rejected by verification callback.
event.type = "tool_decision"
event.tool_name = "delete_file"
event.call_id = "unique_id"
event.accepted = False
event.rejection_reason = "User denied permission"
event.verification_duration_ms = 1250.5
event.step_id = "uuid..."
ToolValidationEvent
Emitted when tool argument validation fails.
event.type = "tool_validation"
event.tool_name = "search"
event.validation_errors = [
{"field": "query", "message": "Required field missing"}
]
event.step_id = "uuid..."
StatusEvent
Emitted when agent status changes.
event.type = "status"
event.status = AgentStatus.TOOL_EXECUTED
event.message = "Tool execution complete"
event.step_id = "uuid..."
ContextWriteEvent
Emitted when context is updated (if incremental_context_writes=True).
event.type = "context_write"
event.key = "agent_output"
event.value_preview = "First 100 chars..."
event.version = 3
event.iteration = 1
event.step_id = "uuid..."
ErrorEvent
Emitted when errors occur during execution.
event.type = "error"
event.error_type = "tool_execution_error"
event.error_message = "Tool 'search' timed out"
event.recoverable = False
event.partial_data = None # Optional partial content
event.step_id = "uuid..."
StepCompleteEvent
Emitted when an agent step completes. Contains the final AgentStepResult.
event.type = "step_complete"
event.result = AgentStepResult(...) # Complete result
event.step_id = "uuid..."
RetryEvent
Emitted when an operation is retried due to failure.
event.type = "retry"
event.operation_type = "llm" # "llm" | "tool" | "custom"
event.operation_name = "api_call"
event.attempt = 2
event.max_attempts = 3
event.error = "Connection timeout"
event.next_delay_seconds = 4.0
RateLimitEvent
Emitted when a rate limiter acquires a token.
event.type = "rate_limit"
event.operation_name = "api_call"
event.acquired_at = 1234567890.123
event.tokens_remaining = 9.5
ContextHealthEvent
Emitted when context health check detects an issue.
event.type = "context_health"
event.check_type = "size" # "size" | "version_count" | "growth_rate"
event.key = "conversation_history"
event.current_value = 15000.0
event.threshold = 10000.0
event.recommended_action = "Consider summarization"
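Retry, rate-limit, and context-health events lend themselves to operational monitoring rather than per-step UI updates. A counter-based monitor might look like this (stubbed events; the `ReliabilityMonitor` class is illustrative):

```python
from collections import Counter
from types import SimpleNamespace

class ReliabilityMonitor:
    """Tallies retry, rate-limit, and context-health events for dashboards."""
    def __init__(self):
        self.counts = Counter()
        self.health_warnings = []

    def observe(self, event):
        if event.type == "retry":
            self.counts[f"retry:{event.operation_type}"] += 1
        elif event.type == "rate_limit":
            self.counts["rate_limit"] += 1
        elif event.type == "context_health":
            self.counts[f"health:{event.check_type}"] += 1
            self.health_warnings.append(event.recommended_action)

monitor = ReliabilityMonitor()
monitor.observe(SimpleNamespace(type="retry", operation_type="llm"))
monitor.observe(SimpleNamespace(type="context_health", check_type="size",
                                recommended_action="Consider summarization"))
print(dict(monitor.counts))  # {'retry:llm': 1, 'health:size': 1}
```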
GraphStartEvent
Emitted when graph execution begins.
event.type = "graph_start"
event.graph_id = "data_pipeline"
event.total_nodes = 5
event.timestamp = 1234567890.123
GraphNodeStartEvent
Emitted when a graph node begins execution.
event.type = "graph_node_start"
event.graph_id = "data_pipeline"
event.node_id = "fetch_data"
event.parents = ["validate"] # List of parent node IDs
event.timestamp = 1234567890.123
GraphNodeCompleteEvent
Emitted when a graph node completes (success, failure, or skipped).
event.type = "graph_node_complete"
event.graph_id = "data_pipeline"
event.node_id = "fetch_data"
event.status = GraphNodeStatus.COMPLETED # COMPLETED | FAILED | SKIPPED
event.error_message = None # Set if status is FAILED
event.timestamp = 1234567890.123
GraphCompleteEvent
Emitted when entire graph execution completes.
event.type = "graph_complete"
event.graph_id = "data_pipeline"
event.status = "success" # "success" | "partial_failure" | "failed"
event.stats = {
"completed": 4,
"failed": 1,
"skipped": 0,
"pending": 0
}
event.timestamp = 1234567890.123
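The four graph events can be reduced to a per-node status map plus the run's final status, which is enough to drive a pipeline progress view. A sketch using plain-string statuses as stand-ins for the `GraphNodeStatus` enum:

```python
from types import SimpleNamespace

def summarize_graph_run(events):
    """Reduce graph_* events to a node->status map and the final run status."""
    nodes, final = {}, None
    for event in events:
        if event.type == "graph_node_complete":
            nodes[event.node_id] = event.status
        elif event.type == "graph_complete":
            final = event.status
    return nodes, final

# Stubbed event sequence for illustration
events = [
    SimpleNamespace(type="graph_start", graph_id="data_pipeline", total_nodes=2),
    SimpleNamespace(type="graph_node_complete", node_id="fetch_data", status="completed"),
    SimpleNamespace(type="graph_node_complete", node_id="transform", status="failed"),
    SimpleNamespace(type="graph_complete", status="partial_failure"),
]
nodes, final = summarize_graph_run(events)
print(nodes, final)  # {'fetch_data': 'completed', 'transform': 'failed'} partial_failure
```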
Consuming Events
Basic Event Loop
async for event in runner.step_stream(user_input):
# Access common event attributes
print(f"Event type: {event.type}")
print(f"Step ID: {event.step_id}")
# Handle specific events
if event.type == "llm_chunk":
print(event.chunk, end="")
elif event.type == "step_complete":
result = event.result
print(f"\nFinal status: {result.status}")
Pattern Matching (Python 3.10+)
async for event in runner.step_stream(user_input):
match event.type:
case "llm_chunk":
print(event.chunk, end="")
case "tool_start":
print(f"\n[Executing {event.tool_name}...]")
case "tool_end":
if event.result.success:
print(f"[{event.tool_name} completed]")
else:
print(f"[{event.tool_name} failed: {event.result.error_message}]")
case "error":
print(f"\nError: {event.error_message}")
case "step_complete":
print(f"\nCompleted in iteration {event.result.iteration}")
Event Filtering
async def filter_events(event_stream, event_types):
async for event in event_stream:
if event.type in event_types:
yield event
# Only process specific event types
filtered = filter_events(
runner.step_stream(user_input),
{"tool_start", "tool_end", "step_complete"}
)
async for event in filtered:
print(f"Tool event: {event.type}")
Building Reactive UIs
Streaming to Web UI
import json
from fastapi import FastAPI
from fastapi.responses import StreamingResponse
app = FastAPI()
async def event_generator(user_input):
async for event in runner.step_stream(user_input):
# Convert event to JSON for client
event_data = {
"type": event.type,
"data": {}
}
if event.type == "llm_chunk":
event_data["data"] = {"chunk": event.chunk}
elif event.type == "tool_start":
event_data["data"] = {
"tool": event.tool_name,
"args": event.arguments
}
elif event.type == "step_complete":
event_data["data"] = {
"status": event.result.status.value,
"response": event.result.segments.response
}
yield f"data: {json.dumps(event_data)}\n\n"
@app.post("/chat")
async def chat(message: str):
return StreamingResponse(
event_generator(message),
media_type="text/event-stream"
)
Progress Tracking
class ProgressTracker:
def __init__(self):
self.chunks = 0
self.tools_executed = 0
self.errors = []
async def track(self, event_stream):
async for event in event_stream:
if event.type == "llm_chunk":
self.chunks += 1
elif event.type == "tool_end":
self.tools_executed += 1
elif event.type == "error":
self.errors.append(event.error_message)
yield event
def get_stats(self):
return {
"chunks_generated": self.chunks,
"tools_executed": self.tools_executed,
"error_count": len(self.errors)
}
tracker = ProgressTracker()
async for event in tracker.track(runner.step_stream(user_input)):
pass
print(tracker.get_stats())
Event Logging
Structured Logging
import logging
logger = logging.getLogger("agent_events")
async for event in runner.step_stream(user_input):
if event.type == "tool_start":
logger.info(
f"Tool execution started",
extra={
"tool_name": event.tool_name,
"arguments": event.arguments,
"step_id": event.step_id
}
)
elif event.type == "error":
logger.error(
f"Agent error",
extra={
"error_type": event.error_type,
"message": event.error_message,
"recoverable": event.recoverable
}
)
Step IDs
Events emitted within an agent step include a step_id that uniquely identifies that step (operational events such as retry, rate limit, and graph events carry their own identifiers instead). Use step_id to correlate events from the same execution:
step_events = {}
async for event in runner.step_stream(user_input):
step_id = event.step_id
if step_id not in step_events:
step_events[step_id] = []
step_events[step_id].append(event)
# All events from same step grouped together
for step_id, events in step_events.items():
print(f"Step {step_id}: {len(events)} events")
Error Handling
Graceful Error Recovery
async for event in runner.step_stream(user_input):
if event.type == "error":
if event.recoverable:
print(f"Warning: {event.error_message}")
# Continue processing
else:
print(f"Fatal error: {event.error_message}")
break
elif event.type == "step_complete":
if event.result.status == AgentStatus.ERROR:
print(f"Step failed: {event.result.error_message}")
Best Practices
- Always consume the event stream fully to avoid blocking
- Use step_id to correlate related events
- Handle errors gracefully with try/except around event processing
- Filter events early to reduce processing overhead
- Log important events for debugging and monitoring
- Use pattern matching for clean event handling (Python 3.10+)
- Build reactive UIs by streaming events to clients
- Track metrics using event data (chunks, tool calls, timing)
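A consumption loop that follows several of these practices at once (drain fully, filter early, handle errors, close the stream) can be sketched as below. The `consume` helper is illustrative, and a stubbed async generator stands in for `runner.step_stream`:

```python
import asyncio
from types import SimpleNamespace

async def consume(event_stream, wanted=None):
    """Fully drain an event stream, filtering early and collecting errors."""
    seen, errors = [], []
    try:
        async for event in event_stream:
            if wanted and event.type not in wanted:
                continue  # filter early to reduce processing overhead
            if event.type == "error":
                errors.append(event.error_message)
                if not event.recoverable:
                    break  # stop on fatal errors
            seen.append(event.type)
    finally:
        # Ensure the underlying generator is closed even on early exit
        await event_stream.aclose()
    return seen, errors

async def fake_stream():
    # Stand-in for runner.step_stream(user_input)
    yield SimpleNamespace(type="llm_chunk", chunk="Hi")
    yield SimpleNamespace(type="error", error_message="timeout", recoverable=True)
    yield SimpleNamespace(type="step_complete")

seen, errors = asyncio.run(consume(fake_stream(), {"llm_chunk", "error", "step_complete"}))
print(seen, errors)  # ['llm_chunk', 'error', 'step_complete'] ['timeout']
```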
Next Steps
- Agent System - Configure event streaming options
- Tools - Tool execution events
- Patterns - Pattern detection events
- Logic Flows - Multi-step event handling