API Reference

Complete reference for all classes, functions, and types in the Agentic Framework.

Complete Coverage: This reference documents all public APIs. For usage examples and patterns, see the topic-specific documentation pages.

Core Module

Core agent execution engine and configuration.

Agent

Main agent class combining all components.

class Agent:
    def __init__(
        self,
        config: AgentConfig,
        context: ContextManager,
        patterns: PatternRegistry,
        tools: ToolRegistry,
        provider: LLMProvider
    )

AgentRunner

Executes agent steps with event streaming.

# Synchronous execution
result = runner.step(user_input, processing_mode=ProcessingMode.THREAD)

# Streaming execution
async for event in runner.step_stream(user_input):
    if isinstance(event, StepCompleteEvent):
        result = event.result

AgentConfig

Configuration object for agent behavior.

Field                      Type                   Default      Description
agent_id                   str                    required     Unique identifier
tools_allowed              list[str]              []           Allowed tool names
pattern_set                str                    "default"    Pattern set to use
concurrent_tool_execution  bool                   False        Execute tools concurrently
max_partial_buffer_size    int                    10_000_000   Maximum buffer size for streaming patterns
input_mapping              list[dict[str, Any]]   []           Context key mappings
output_mapping             list[tuple[str, str]]  []           Output storage mappings

AgentStatus

Execution status enumeration.

AgentStepResult

Result from a single agent step.

result.status                        # AgentStatus
result.raw_output                    # Complete LLM output
result.segments                      # ExtractedSegments
result.tool_results                  # list[ToolResult]
result.iteration                     # int
result.error_message                 # str | None
result.error_type                    # str | None
result.partial_malformed_patterns    # dict[str, str] | None
result.tool_decisions                # list[ToolExecutionDecision]

ExtractedSegments

Structured segments from LLM output.

segments.tools          # list[ToolCall]
segments.reasoning      # list[str]
segments.response       # str | None
segments.parse_errors   # dict[str, str]

Context Module

Versioned key-value storage with iteration tracking.

ContextManager

Manages versioned context state.

# Set value (creates new version)
context.set("key", "value")

# Update value (overwrites current version, no new version)
context.update("key", "value")

# Delete key (marks as deleted)
context.delete("key")

# Get as UTF-8 string (most common)
value = context.get("key")

# Get binary data
binary = context.get_bytes("key")

# Get with metadata
record = context.get_record("key")

# Get history
history = context.get_history("key", max_versions=10)

# List keys with prefix
keys = context.list_keys(prefix="tool:")
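
The set/update distinction above can be illustrated with a minimal versioned-store sketch. This is plain Python for intuition only, not the ContextManager implementation (which also tracks iterations, timestamps, byte values, and deletion markers):

```python
# Minimal sketch of versioned key-value semantics: set() creates a new
# version, update() overwrites the current one in place.
class VersionedStore:
    def __init__(self):
        self._data = {}  # key -> list of versioned values

    def set(self, key, value):
        # Append a new version.
        self._data.setdefault(key, []).append(value)

    def update(self, key, value):
        # Overwrite the latest version without creating a new one.
        versions = self._data.setdefault(key, [None])
        versions[-1] = value

    def get(self, key):
        versions = self._data.get(key)
        return versions[-1] if versions else None

    def get_history(self, key, max_versions=10):
        return self._data.get(key, [])[-max_versions:]

store = VersionedStore()
store.set("key", "v1")        # version 1
store.set("key", "v2")        # version 2
store.update("key", "v2b")    # still version 2, value replaced
print(store.get("key"))           # v2b
print(store.get_history("key"))   # ['v1', 'v2b']
```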

ContextRecord

Versioned context entry with metadata.

record.value         # bytes
record.version       # int
record.iteration     # int
record.timestamp     # float

IterationManager

Manages global iteration counter.

iteration_mgr.get()                    # Get current
iteration_mgr.next()                   # Increment and return
iteration_mgr.register_event("name")   # Log event

Tools Module

Tool creation, registration, and execution.

create_tool

Create a tool from a function.

from agentic import create_tool, ProcessingMode

def my_tool(args):
    return {"result": args["input"]}

tool = create_tool(
    name="my_tool",
    func=my_tool,
    description="Tool description",
    input_schema={"validator": "simple", ...},
    output_schema={"validator": "simple", ...},
    timeout_seconds=10.0,
    processing_mode=ProcessingMode.THREAD
)

ProcessingMode

Tool execution modes.

ToolRegistry

Global tool registration and management.

tools = ToolRegistry()
tools.register(tool)
tools.exists("tool_name")
tool = tools.get("tool_name")  # Get tool by name (returns Tool | None)
tools.unregister("tool_name")  # Remove tool (returns bool)
all_tools = tools.list()
definitions = tools.get_definitions()

ToolResult

Tool execution result.

result.name              # str
result.output            # Any
result.success           # bool
result.error_message     # str | None
result.execution_time    # float
result.iteration         # int
result.call_id           # str

Patterns Module

Pattern extraction system for structured output.

PatternRegistry

Manages pattern sets.

patterns = PatternRegistry(storage)
patterns.register_pattern_set(pattern_set)
pattern_set = patterns.get_pattern_set("default")

PatternSet

Collection of patterns.

from agentic import PatternSet, Pattern, SegmentType

custom_set = PatternSet(
    name="custom",
    patterns=[
        Pattern(
            name="tool",
            start_tag="<tool>",
            end_tag="</tool>",
            segment_type=SegmentType.TOOL,
            expected_format="json"
        )
    ]
)

Built-in Pattern Sets

from agentic import (
    create_default_pattern_set,
    create_xml_tools_pattern_set,
    create_json_tools_pattern_set,
    create_backtick_tools_pattern_set
)
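
For intuition, the tag-delimited extraction these pattern sets configure can be sketched in a few lines. This is a simplification; the real extractor also handles streaming chunks, partial buffers, and parse errors:

```python
import re

def extract_segments(text, start_tag, end_tag):
    """Return the contents of every start_tag...end_tag span in text.
    Simplified illustration of tag-delimited pattern extraction."""
    pattern = re.escape(start_tag) + r"(.*?)" + re.escape(end_tag)
    return re.findall(pattern, text, flags=re.DOTALL)

output = 'Thinking...<tool>{"name": "search"}</tool> done.'
print(extract_segments(output, "<tool>", "</tool>"))
# ['{"name": "search"}']
```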

Logic Module

Conditional execution and logic flows.

LogicRunner

Manages iterative agent execution with conditions.

from agentic import LogicRunner, LogicConfig, LogicCondition

config = LogicConfig(
    logic_id="my_flow",
    max_iterations=10,
    stop_conditions=[
        LogicCondition(
            pattern_set="default",
            pattern_name="done",
            match_type="contains",
            target="response",
            evaluation_point="auto"  # "auto" | "llm_chunk" | "llm_complete" | "step_complete" etc.
        )
    ]
)

runner = LogicRunner(agent_runner, context, patterns, config)
results = runner.run(initial_input)

Helper Functions

loop_n_times(runner, context, patterns, n=5)
loop_until_pattern(runner, context, patterns, "default", "done")
loop_until_regex(runner, context, patterns, r"COMPLETE")
stop_on_error(runner, context, patterns)

Multi-Agent Module

Multi-agent coordination patterns.

AgentChain

Sequential agent execution.

chain = AgentChain(
    agents=[("agent1", agent1), ("agent2", agent2)],
    config=AgentChainConfig()
)

async for event in chain.execute("initial input"):
    pass

SupervisorPattern

Supervisor delegates to workers.

supervisor = SupervisorPattern(
    supervisor=supervisor_agent,
    workers={"worker1": agent1, "worker2": agent2}
)

async for event in supervisor.execute("task"):
    pass

ParallelPattern

Parallel agent execution with merging.

parallel = ParallelPattern(
    agents={"agent1": agent1, "agent2": agent2},
    merger=merger_agent,
    config=ParallelConfig(merge_strategy="agent")
)

async for event in parallel.execute_and_merge("query"):
    pass

DebatePattern

Multi-round agent debate.

debate = DebatePattern(
    agents={"agent1": agent1, "agent2": agent2},
    moderator=moderator_agent
)

async for event in debate.converge("topic"):
    pass

Graph Module

DAG-based workflow orchestration with dynamic scheduling and failure handling.

GraphRunner

Executes a directed acyclic graph of nodes with concurrency control.

from agentic import GraphRunner, GraphConfig, GraphNode

config = GraphConfig(
    graph_id="data_pipeline",
    max_concurrency=4,
    failure_strategy="allow_independent",  # or "fail_fast", "always_run"
    persist_state=True,
    state_context_key="graph:pipeline:state"
)

graph = GraphRunner(config, context, rate_limiter)

# Add nodes with dependencies
graph.add_node(GraphNode("fetch", agent_runner))
graph.add_node(GraphNode("process", agent_runner), ["fetch"])

# Streaming execution
async for event in graph.run_stream():
    if isinstance(event, GraphCompleteEvent):
        print(f"Graph {event.status}: {event.stats}")

# Batch execution
statuses = graph.run()  # dict[str, GraphNodeStatus]

GraphConfig

Configuration for graph execution.

Field              Type        Default      Description
graph_id           str         required     Unique graph identifier
max_concurrency    int         8            Maximum parallel nodes
failure_strategy   str         "fail_fast"  "fail_fast", "allow_independent", or "always_run"
persist_state      bool        False        Write final state to context
state_context_key  str | None  None         Custom key for state (default: "graph:{graph_id}:state")

GraphNode

Executable node in the graph. Supports AgentRunner, LogicRunner, or custom async callables.

from agentic import GraphNode, RetryConfig

# Agent node with output capture
node1 = GraphNode(
    id="fetch_data",
    executable=agent_runner,
    output_key="fetched_data",                      # Store result in context
    output_selector=lambda r: r.segments.response,
    retry_config=RetryConfig(max_attempts=3),
    failure_mode="fail",                            # or "soft_fail"
    run_on_failure=False                            # Skip this node if upstream fails (default)
)

# Logic flow node
node2 = GraphNode("validate", logic_runner)

# Custom callable node
async def merge_results(ctx: ContextManager) -> AsyncIterator[BaseEvent]:
    data1 = ctx.get("output1")
    data2 = ctx.get("output2")
    ctx.set("merged", f"{data1}\n{data2}")
    yield StatusEvent(AgentStatus.OK, "Merged")

node3 = GraphNode("merge", merge_results)

# Cleanup node (always runs)
cleanup = GraphNode(
    "cleanup",
    cleanup_agent,
    run_on_failure=True  # Runs even on upstream failure
)

GraphNodeStatus

Node execution status enum.

Failure Strategies

Control how the graph handles node failures with GraphConfig.failure_strategy. "fail_fast" stops scheduling new nodes after the first failure. "allow_independent" keeps running nodes that do not depend on a failed node. "always_run" runs every remaining node regardless of upstream failures; individual nodes can also opt in with run_on_failure=True.
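
The scheduling decision each strategy implies can be sketched as follows. This is illustrative only, not the GraphRunner implementation: given a dependency map and a set of failed nodes, which remaining nodes may still run?

```python
# deps maps node -> set of upstream node ids; failed is the set of
# nodes that have failed so far.
def runnable_nodes(deps, failed, strategy):
    def depends_on_failed(node):
        # Walk upstream dependencies looking for a failed ancestor.
        stack, seen = list(deps.get(node, set())), set()
        while stack:
            up = stack.pop()
            if up in failed:
                return True
            if up not in seen:
                seen.add(up)
                stack.extend(deps.get(up, set()))
        return False

    candidates = [n for n in deps if n not in failed]
    if strategy == "fail_fast":
        return [] if failed else candidates
    if strategy == "allow_independent":
        return [n for n in candidates if not depends_on_failed(n)]
    return candidates  # "always_run"

deps = {"fetch": set(), "process": {"fetch"}, "report": {"process"}, "audit": set()}
print(runnable_nodes(deps, {"fetch"}, "fail_fast"))          # []
print(runnable_nodes(deps, {"fetch"}, "allow_independent"))  # ['audit']
print(runnable_nodes(deps, {"fetch"}, "always_run"))         # ['process', 'report', 'audit']
```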

Graph Visualization

Export graph structure for documentation and debugging.

from agentic import to_mermaid, to_dot

# Export to Mermaid format
mermaid = to_mermaid(graph, include_metadata=True)
print(mermaid)
# flowchart TD
#     fetch["fetch"]
#     process["process"]
#     fetch --> process

# Export to Graphviz DOT format
dot = to_dot(graph, include_metadata=True)
# Render: dot -Tpng graph.dot -o graph.png

Events Module

Event system for streaming agent execution.

Event Types

Event classes emitted during streaming execution, including StepCompleteEvent, StatusEvent, RetryEvent, and GraphCompleteEvent.

Resilience Module

Retry logic and rate limiting.

retry_stream

Wrap any async iterator with retry logic.

from agentic import retry_stream, RetryConfig

async for item in retry_stream(
    stream_fn=my_async_generator,
    config=RetryConfig(
        max_attempts=3,
        base_delay=1.0,
        max_delay=60.0,
        backoff="exponential",
        jitter=True,
        retry_on=(TimeoutError, ConnectionError)
    ),
    operation_name="my_operation"
):
    if isinstance(item, RetryEvent):
        print(f"Retrying after {item.next_delay_seconds}s...")
    else:
        # Process normal output
        print(item)

resilient_stream

Combined retry and rate limiting wrapper.

from agentic import resilient_stream, RetryConfig, RateLimiter, RateLimitConfig

limiter = RateLimiter(RateLimitConfig(
    requests_per_second=10.0,
    burst_size=10
))

async for item in resilient_stream(
    stream_fn=my_async_generator,
    retry_config=RetryConfig(max_attempts=3),
    rate_limiter=limiter,
    operation_name="llm_call"
):
    # Handle events and output
    pass

RateLimiter

Token bucket rate limiter.

from agentic import RateLimiter, RateLimitConfig

limiter = RateLimiter(RateLimitConfig(
    requests_per_second=10.0,
    burst_size=10
))

await limiter.acquire()  # Blocks until token available
await call_api()
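
The token-bucket algorithm behind RateLimiter can be sketched synchronously. This illustrates the algorithm only, not the library's async implementation:

```python
import time

class TokenBucket:
    """Simplified token bucket: `rate` tokens refill per second up to
    `burst` capacity; try_acquire() consumes one token if available."""
    def __init__(self, rate, burst):
        self.rate = rate
        self.burst = burst
        self.tokens = float(burst)
        self.last = time.monotonic()

    def _refill(self):
        # Add tokens for elapsed time, capped at burst capacity.
        now = time.monotonic()
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now

    def try_acquire(self):
        self._refill()
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

bucket = TokenBucket(rate=10.0, burst=2)
print(bucket.try_acquire())  # True  (2 tokens available)
print(bucket.try_acquire())  # True
print(bucket.try_acquire())  # False (bucket drained; refills at 10/s)
```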

RetryConfig

Configuration for retry behavior.

RetryConfig(
    max_attempts=3,              # Maximum retry attempts
    backoff="exponential",       # "exponential" | "linear" | "constant"
    base_delay=1.0,              # Initial delay in seconds
    max_delay=60.0,              # Maximum delay cap
    jitter=True,                 # Add random jitter
    retry_on=(TimeoutError,)     # Exception types to retry on
)
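
The delay schedule these fields produce can be sketched as follows (jitter omitted for determinism; an illustration of the policy, not the library's internals):

```python
def retry_delay(attempt, backoff="exponential", base_delay=1.0, max_delay=60.0):
    """Delay in seconds before retry `attempt` (1-based)."""
    if backoff == "exponential":
        delay = base_delay * (2 ** (attempt - 1))   # 1, 2, 4, 8, ...
    elif backoff == "linear":
        delay = base_delay * attempt                # 1, 2, 3, 4, ...
    else:  # "constant"
        delay = base_delay
    return min(delay, max_delay)                    # cap at max_delay

print([retry_delay(a) for a in range(1, 8)])
# [1.0, 2.0, 4.0, 8.0, 16.0, 32.0, 60.0]
```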

RateLimitConfig

Configuration for rate limiting.

RateLimitConfig(
    requests_per_second=10.0,    # Rate limit per second
    requests_per_minute=None,    # Optional per-minute limit
    requests_per_hour=None,      # Optional per-hour limit
    burst_size=10                # Token bucket burst capacity
)

Validation Module

Format-agnostic validation system.

ValidatorRegistry

Manages validators.

registry = ValidatorRegistry()
is_valid, errors = registry.validate(data, schema)

simple_validator

Built-in lightweight validator.

schema = {
    "validator": "simple",
    "required": ["field1"],
    "fields": {
        "field1": {
            "type": "str",
            "min_length": 1,
            "max_length": 100,
            "pattern": r"^[a-z]+$"
        },
        "field2": {
            "type": "int",
            "min": 0,
            "max": 100
        }
    }
}
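
A sketch of how such a schema might be applied. This illustrates the semantics of the rules above, not the framework's actual validator:

```python
import re

def validate_simple(data, schema):
    """Check a dict against a 'simple'-style schema; return (is_valid, errors)."""
    errors = []
    # Required fields must be present.
    for field in schema.get("required", []):
        if field not in data:
            errors.append(f"{field}: required field missing")
    type_map = {"str": str, "int": int}
    for field, rules in schema.get("fields", {}).items():
        if field not in data:
            continue
        value = data[field]
        expected = type_map.get(rules.get("type"))
        if expected and not isinstance(value, expected):
            errors.append(f"{field}: expected {rules['type']}")
            continue
        if isinstance(value, str):
            if "min_length" in rules and len(value) < rules["min_length"]:
                errors.append(f"{field}: too short")
            if "max_length" in rules and len(value) > rules["max_length"]:
                errors.append(f"{field}: too long")
            if "pattern" in rules and not re.fullmatch(rules["pattern"], value):
                errors.append(f"{field}: pattern mismatch")
        if isinstance(value, int):
            if "min" in rules and value < rules["min"]:
                errors.append(f"{field}: below minimum")
            if "max" in rules and value > rules["max"]:
                errors.append(f"{field}: above maximum")
    return (not errors, errors)

schema = {"validator": "simple", "required": ["field1"],
          "fields": {"field1": {"type": "str", "pattern": r"^[a-z]+$"}}}
print(validate_simple({"field1": "abc"}, schema))  # (True, [])
print(validate_simple({"field1": "ABC"}, schema))  # (False, ['field1: pattern mismatch'])
```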

passthrough_validator

Validator that skips all validation checks.

schema = {
    "validator": "passthrough"
}
# Always returns (True, []) regardless of input

Storage Module

Storage backends for context persistence.

RocksDBStorage

Production storage backend.

from agentic import RocksDBStorage, StorageConfig

storage = RocksDBStorage(StorageConfig(base_dir="./context"))
storage.initialize()

InMemoryStorage

In-memory storage for testing.

from agentic import InMemoryStorage, StorageConfig

storage = InMemoryStorage(StorageConfig())
storage.initialize()

Logging Module

Structured logging utilities.

get_logger

Get structured logger for a module.

from agentic import get_logger

logger = get_logger(__name__)
logger.info("message", extra={"key": "value"})

LLMProvider Protocol

Interface for LLM provider integration.

Required Methods

class MyProvider:
    def generate(
        self,
        prompt: PromptType,
        **kwargs
    ) -> str:
        # Return complete response (synchronous)
        pass

    async def stream(
        self,
        prompt: PromptType,
        **kwargs
    ) -> AsyncIterator[str]:
        # Yield chunks as they arrive (async)
        pass
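
A minimal provider satisfying this protocol might look like the following. It is a toy test double (the chunking scheme is arbitrary), useful for exercising agents without a real LLM backend:

```python
import asyncio

class EchoProvider:
    """Toy provider satisfying the protocol: echoes the prompt back."""
    def generate(self, prompt, **kwargs):
        return f"echo: {prompt}"

    async def stream(self, prompt, **kwargs):
        text = f"echo: {prompt}"
        for i in range(0, len(text), 4):   # yield arbitrary 4-char chunks
            yield text[i:i + 4]

provider = EchoProvider()
print(provider.generate("hello"))  # echo: hello

async def main():
    chunks = [chunk async for chunk in provider.stream("hello")]
    print("".join(chunks))  # echo: hello

asyncio.run(main())
```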

Core Dataclasses

Additional dataclasses used throughout the framework.

ToolCall

Represents a tool invocation extracted from agent output.

@dataclass
class ToolCall:
    name: str                    # Tool name
    arguments: dict[str, Any]    # Tool arguments
    raw_segment: str             # Raw extracted text
    iteration: int               # Iteration number
    call_id: str = ""           # Unique call identifier

ToolExecutionDecision

Tracks complete lifecycle of a detected tool call.

@dataclass
class ToolExecutionDecision:
    tool_call: ToolCall
    verification_required: bool
    accepted: bool
    rejection_reason: str | None = None
    verification_duration_ms: float = 0.0
    executed: bool = False
    result: ToolResult | None = None

ValidationError

Validation error details.

@dataclass
class ValidationError:
    field: str           # Field name that failed
    message: str         # Error message
    value: Any = None  # The invalid value

ContextHealthCheck

Configuration for context health monitoring.

@dataclass
class ContextHealthCheck:
    check_type: str              # "size" | "version_count" | "growth_rate"
    key_pattern: str             # Glob pattern: "llm_output:*", "*"
    threshold: float
    action: str = "warn"       # "warn" | "stop"
    evaluation_point: str = "step_complete"
    max_versions_limit: int = 10000

AgentChainConfig

Configuration for sequential agent chains.

@dataclass
class AgentChainConfig:
    pass_mode: str = "response"     # "response" | "full_context" | "tool_results" | "custom"
    transform_fn: Callable[[AgentStepResult], str] | None = None
    prepend_context: bool = True
    context_template: str = "Previous agent ({agent_id}) output:\n{output}\n\n"

SupervisorConfig

Configuration for supervisor-worker pattern.

@dataclass
class SupervisorConfig:
    delegation_pattern_name: str = "delegate"
    worker_key: str = "to"
    task_key: str = "task"
    max_delegation_rounds: int = 10

ParallelConfig

Configuration for parallel agent execution.

@dataclass
class ParallelConfig:
    merge_strategy: str = "agent"    # "agent" | "concat" | "voting"
    merge_template: str = "Synthesize these perspectives:\n\n{perspectives}"
    timeout_seconds: float = 120.0

DebateConfig

Configuration for multi-round agent debate.

@dataclass
class DebateConfig:
    max_rounds: int = 5
    consensus_detector: Callable[[list[str]], bool] | None = None
    moderator_prompt_template: str = "Summarize the consensus from this debate:\n{history}"

StorageConfig

Configuration for storage layer.

@dataclass
class StorageConfig:
    base_dir: Path | str = "./context"
    db_name_prefix: str = "context"
    app_id: str | None = None  # Optional custom application ID

Next Steps