# Quick Start

Get up and running with Aegeantic in 5 minutes.
## Installation

Install Aegeantic from PyPI:

```bash
pip install aegeantic
```
For development:

```bash
git clone https://github.com/LOGQS/agentic_framework.git
cd agentic_framework
pip install -e .
```
Note: Requires Python 3.9+. RocksDB (via `rocksdict`) is installed automatically for persistent storage. For testing, you can use `InMemoryStorage`, which requires no additional dependencies.
## Your First Agent

### 1. Import Core Components

```python
from agentic import (
    Agent, AgentConfig, AgentRunner,
    ContextManager, IterationManager,
    PatternRegistry, ToolRegistry,
    RocksDBStorage, StorageConfig,
    create_default_pattern_set,
)
```
### 2. Initialize Storage & Context

Set up persistent storage and context management:

```python
# Initialize RocksDB storage
storage = RocksDBStorage(StorageConfig(
    base_dir="./agent_data",
    db_name_prefix="my_agent",
))
storage.initialize()

# Create iteration manager and context
iteration_mgr = IterationManager(storage)
context = ContextManager(storage, iteration_mgr)
```
### 3. Set Up Patterns & Tools

Register patterns for output extraction and tools for agent capabilities:

```python
from agentic import create_tool, ProcessingMode

# Register default patterns (tool, reasoning, response)
patterns = PatternRegistry(storage)
patterns.register_pattern_set(create_default_pattern_set())

# Create tool registry
tools = ToolRegistry()

# Register your tools (example)
def search_web(args):
    query = args.get("query")
    # Your search implementation
    return {"results": [f"Result for: {query}"]}

search_tool = create_tool(
    name="search",
    func=search_web,
    input_schema={
        "validator": "simple",
        "required": ["query"],
        "fields": {
            "query": {"type": "str", "min_length": 1},
        },
    },
    processing_mode=ProcessingMode.THREAD,
)
tools.register(search_tool)
```
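The `simple` validator schema above is declarative: required keys plus per-field constraints. Conceptually, validation amounts to checking each rule against the incoming arguments. The sketch below is an illustrative stand-in, not the framework's actual validator; the function name `validate_simple` is hypothetical.

```python
def validate_simple(schema, args):
    """Illustrative check mirroring the declarative schema above
    (hypothetical; not the framework's actual validator)."""
    errors = []
    # Every required key must be present
    for key in schema.get("required", []):
        if key not in args:
            errors.append(f"missing required field: {key}")
    # Per-field rules apply only to keys that were provided
    for key, rules in schema.get("fields", {}).items():
        if key not in args:
            continue
        value = args[key]
        if rules.get("type") == "str" and not isinstance(value, str):
            errors.append(f"{key}: expected str")
        min_len = rules.get("min_length")
        if min_len is not None and isinstance(value, str) and len(value) < min_len:
            errors.append(f"{key}: shorter than min_length={min_len}")
    return errors

schema = {
    "validator": "simple",
    "required": ["query"],
    "fields": {"query": {"type": "str", "min_length": 1}},
}
print(validate_simple(schema, {"query": "python"}))  # []
print(validate_simple(schema, {"query": ""}))        # min_length violation
```

An empty error list means the tool call would be dispatched; anything else would be surfaced as a validation failure before your function runs.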
### 4. Create LLM Provider

Implement the LLMProvider protocol for your LLM:

```python
class MyLLMProvider:
    def generate(self, prompt, **kwargs):
        # Call your LLM API (synchronous)
        return "LLM response text"

    async def stream(self, prompt, **kwargs):
        # Default implementation: yield the full response as one chunk.
        # Override with real streaming for production.
        text = self.generate(prompt, **kwargs)
        yield text

provider = MyLLMProvider()
```
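Because the fallback `stream` simply yields the full `generate` output as a single chunk, you can sanity-check a provider with plain asyncio before wiring it into the runner. A minimal, self-contained check (the `collect` helper is ours, not part of the framework):

```python
import asyncio

class MyLLMProvider:
    def generate(self, prompt, **kwargs):
        # Stub response; call your real LLM API here
        return "LLM response text"

    async def stream(self, prompt, **kwargs):
        # Fallback: yield the full response as one chunk
        text = self.generate(prompt, **kwargs)
        yield text

async def collect(provider, prompt):
    # Gather every chunk the provider streams for a prompt
    return [chunk async for chunk in provider.stream(prompt)]

chunks = asyncio.run(collect(MyLLMProvider(), "hello"))
print(chunks)  # ['LLM response text']
```

A real provider would yield many partial chunks; this check just confirms the async-generator shape of the protocol is right.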
### 5. Configure & Create Agent

```python
config = AgentConfig(
    agent_id="my_agent",
    tools_allowed=["search"],
    pattern_set="default",
    input_mapping=[
        {"context_key": "literal:You are a helpful assistant.", "order": 0},
    ],
    output_mapping=[
        ("agent_output", "set_latest"),
    ],
)

agent = Agent(config, context, patterns, tools, provider)
runner = AgentRunner(agent)
```
### 6. Execute Agent

```python
# Single-step execution
result = runner.step("Search for Python async patterns")

print(f"Status: {result.status}")
print(f"Response: {result.segments.response}")
print(f"Tools executed: {len(result.tool_results)}")
```
## Using Streaming

For real-time updates and reactive UIs, use streaming execution:

```python
import asyncio

async def run_agent_streaming():
    async for event in runner.step_stream("Analyze this data"):
        if event.type == "llm_chunk":
            print(event.chunk, end="")
        elif event.type == "tool_start":
            print(f"\n[Executing tool: {event.tool_name}]")
        elif event.type == "step_complete":
            print(f"\n\nFinal status: {event.result.status}")

asyncio.run(run_agent_streaming())
```
## Next Steps
- Core Concepts - Understand context, iterations, and patterns
- Agent System - Deep dive into agent configuration
- Tools - Learn about tool creation and execution modes
- Patterns - Customize output extraction patterns
- Logic Flows - Build conditional agent loops