# Agent System

Understanding agent configuration, execution, and lifecycle.
## Agent Components

An agent is composed of five main components:

1. **AgentConfig**: Configuration object defining agent behavior, tools, patterns, and I/O mapping
2. **ContextManager**: Manages versioned state storage with iteration tracking
3. **PatternRegistry**: Registry of pattern sets for extracting structured data from LLM output
4. **ToolRegistry**: Registry of available tools the agent can execute
5. **LLMProvider**: Provider client implementing `generate()` and `stream()`
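Of these components, the provider is the one you most often supply yourself. As a rough sketch of that interface (assumed signatures based on the description above; `EchoProvider` is a made-up test double, not part of the library):

```python
from typing import Iterator, Protocol

class LLMProvider(Protocol):
    """Sketch of the provider interface: generate() returns a full
    completion, stream() yields chunks. Signatures are assumptions."""
    def generate(self, prompt: str) -> str: ...
    def stream(self, prompt: str) -> Iterator[str]: ...

class EchoProvider:
    """Toy provider for local testing: echoes the prompt back token by token."""
    def generate(self, prompt: str) -> str:
        return f"echo: {prompt}"

    def stream(self, prompt: str) -> Iterator[str]:
        yield from f"echo: {prompt}".split()
```

A stand-in like this is useful for exercising agent wiring without making real LLM calls.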
## AgentConfig Reference

Complete configuration options for agents:

| Parameter | Type | Description |
|---|---|---|
| `agent_id` | `str` | Unique identifier for the agent |
| `tools_allowed` | `list[str]` | List of tool names this agent can execute |
| `tool_name_mapping` | `dict[str, str]` | Map public tool names to internal registry names |
| `validate_tool_arguments` | `bool` | Enable argument validation (default: `True`) |
| `input_mapping` | `list[dict]` | Define how to build prompts from context |
| `output_mapping` | `list[tuple]` | Define how to store agent outputs in context |
| `pattern_set` | `str` | Name of the pattern set to use (default: `"default"`) |
| `auto_increment_iteration` | `bool` | Auto-increment the iteration counter (default: `True`) |
| `processing_mode` | `ProcessingMode` | Default tool execution mode (`THREAD`, `PROCESS`, `ASYNC`) |
| `incremental_context_writes` | `bool` | Write partial outputs during streaming (default: `False`) |
| `stream_pattern_content` | `bool` | Emit pattern content before the end tag (default: `False`) |
| `concurrent_tool_execution` | `bool` | Execute tools concurrently during LLM streaming (default: `False`) |
| `on_tool_detected` | `Callable` | Callback for tool verification (accept/reject) |
| `tool_verification_timeout` | `float \| None` | Timeout for the verification callback, in seconds |
| `tool_verification_on_timeout` | `str` | `"accept"` or `"reject"` if verification times out |
| `prompt_builder` | `Callable` | Custom prompt-building function |
## Input Mapping

Define how prompts are built from context. Each entry specifies a context key and its order:

> **Note:** The `role` and `order` fields only take effect when using `prompt_builder=create_message_prompt_builder()`. The default prompt builder uses only `context_key` and concatenates entries in list order.
```python
input_mapping=[
    {
        "context_key": "literal:You are a helpful AI assistant.",
        "role": "system",
        "order": 0
    },
    {
        "context_key": "conversation_history",
        "role": "user",
        "order": 1
    },
    {
        "context_key": "tool_results",
        "role": "assistant",
        "order": 2
    }
]
```
**Literal Values:** Prefix a `context_key` with `"literal:"` to use static content instead of a context lookup.
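The default builder's behavior (resolve each key, honor the `"literal:"` prefix, concatenate in list order) can be sketched in plain Python. This is an illustration only: `build_default_prompt` is a hypothetical name, and the separator between entries is an assumption.

```python
def build_default_prompt(context: dict, input_mapping: list[dict]) -> str:
    """Sketch of the default prompt builder: ignores role/order, resolves
    the "literal:" prefix, skips missing keys, joins in list order."""
    parts = []
    for entry in input_mapping:
        key = entry["context_key"]
        if key.startswith("literal:"):
            parts.append(key[len("literal:"):])
        else:
            value = context.get(key)
            if value:
                parts.append(str(value))
    return "\n\n".join(parts)  # separator is an assumption
```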
## Output Mapping

Define how agent outputs are stored in context:
```python
output_mapping=[
    ("agent_output", "set_latest"),     # Raw output
    ("agent_response", "set_response"), # Extracted response segment
    ("reasoning_log", "set_reasoning"), # Reasoning segments
    ("tool_results", "set_tools"),      # Tool execution results
    ("conversation", "append_version")  # Append to existing
]
```
### Operation Types

- `set_latest`: Store the raw LLM output
- `set_response`: Store the extracted response segment
- `set_reasoning`: Store reasoning segments
- `set_tools`: Store tool execution results as JSON
- `append_version`: Append to an existing context value
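The semantics of these operations can be illustrated against a plain dict. This is a sketch, not the library's implementation: the real `ContextManager` also versions each write, and the result-field names used here are assumptions.

```python
import json

def apply_output_mapping(context: dict, mapping: list[tuple], result: dict) -> None:
    """Illustrative semantics of the five operation types, applied to a
    dict-backed context. Field names in `result` are assumed for the sketch."""
    for context_key, operation in mapping:
        if operation == "set_latest":
            context[context_key] = result["raw_output"]
        elif operation == "set_response":
            context[context_key] = result["response"]
        elif operation == "set_reasoning":
            context[context_key] = result["reasoning"]
        elif operation == "set_tools":
            # Tool results are serialized to JSON per the description above
            context[context_key] = json.dumps(result["tool_results"])
        elif operation == "append_version":
            context[context_key] = context.get(context_key, "") + result["raw_output"]
```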
## Tool Verification

Enable human-in-the-loop or programmatic tool approval:
```python
def verify_tool(tool_call: ToolCall) -> bool:
    # Inspect the tool call before execution
    print(f"Tool: {tool_call.name}")
    print(f"Args: {tool_call.arguments}")

    # Return True to accept, False to reject
    if tool_call.name == "delete_file":
        return input("Allow deletion? (y/n): ") == "y"
    return True

config = AgentConfig(
    agent_id="safe_agent",
    tools_allowed=["search", "delete_file"],
    on_tool_detected=verify_tool,
    tool_verification_timeout=30.0,
    tool_verification_on_timeout="reject"
)
```
## Custom Prompt Builder

For advanced prompt construction, provide a custom builder:
```python
from agentic import PromptObject

def custom_prompt_builder(context, config, user_input):
    # Build a custom prompt structure
    system = "You are a specialized assistant."
    messages = []

    # Add conversation history
    history = context.get("history")
    if history:
        messages.append({
            "role": "assistant",
            "content": history
        })

    # Add user input
    if user_input:
        messages.append({
            "role": "user",
            "content": user_input
        })

    return PromptObject(
        system=system,
        messages=messages,
        metadata={"temperature": 0.7}
    )

config = AgentConfig(
    agent_id="custom_agent",
    prompt_builder=custom_prompt_builder
)
```
## Agent Execution

### Single Step (Batch)

```python
result = runner.step("What's the weather?")
print(result.segments.response)
```
### Streaming

```python
async for event in runner.step_stream("What's the weather?"):
    if event.type == "llm_chunk":
        print(event.chunk, end="")
```
### Result Structure

```python
# AgentStepResult
result.status          # AgentStatus enum
result.raw_output      # Full LLM output
result.segments        # ExtractedSegments
result.tool_results    # List[ToolResult]
result.iteration       # Iteration number
result.tool_decisions  # Tool verification details
```
### Status Values

- `OK`: Execution successful
- `WAITING_FOR_VERIFICATION`: Tool awaiting approval
- `WAITING_FOR_TOOL`: Tool execution in progress
- `TOOL_EXECUTED`: Tools executed successfully
- `TOOLS_REJECTED`: All tools rejected by verification
- `VALIDATION_ERROR`: Tool argument validation failed
- `DONE`: Agent completed with no further action
- `ERROR`: Execution error occurred
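A driving loop typically inspects the status after each step to decide whether to continue. The sketch below mirrors the status values listed above with a local stand-in enum (the real `AgentStatus` comes from the library), and the choice of which statuses are terminal is an assumption for illustration:

```python
from enum import Enum, auto

# Local stand-in mirroring the status values documented above;
# in real code, import AgentStatus from the library instead.
class AgentStatus(Enum):
    OK = auto()
    WAITING_FOR_VERIFICATION = auto()
    WAITING_FOR_TOOL = auto()
    TOOL_EXECUTED = auto()
    TOOLS_REJECTED = auto()
    VALIDATION_ERROR = auto()
    DONE = auto()
    ERROR = auto()

def is_terminal(status: AgentStatus) -> bool:
    """Assumed loop semantics: DONE and ERROR end the run; other statuses
    (e.g. TOOL_EXECUTED) leave room for another step with tool results fed back."""
    return status in {AgentStatus.DONE, AgentStatus.ERROR}
```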
## Next Steps

- **Tools**: Creating and configuring tools
- **Patterns**: Pattern extraction system
- **Logic Flows**: Multi-step agent loops
- **Events**: Working with the event stream