Tools
Creating, configuring, and executing agent tools
Overview
Tools are functions that agents can execute. Agentic provides a robust tool system with validation, timeout handling, multiple execution modes, and streaming support.
Creating Tools
Simple Tool
from agentic import create_tool, ProcessingMode

def get_current_time(args):
    from datetime import datetime
    return {
        "time": datetime.now().isoformat()
    }

time_tool = create_tool(
    name="get_time",
    func=get_current_time,
    description="Get the current time"
)
Tool with Validation
def calculate(args):
    a = args["a"]
    b = args["b"]
    op = args.get("operation", "add")
    if op == "add": return {"result": a + b}
    elif op == "subtract": return {"result": a - b}
    elif op == "multiply": return {"result": a * b}
    elif op == "divide":
        if b == 0:
            raise ValueError("Cannot divide by zero")
        return {"result": a / b}

calc_tool = create_tool(
    name="calculate",
    func=calculate,
    input_schema={
        "validator": "simple",
        "required": ["a", "b"],
        "fields": {
            "a": {"type": "int"},
            "b": {"type": "int"},
            "operation": {
                "type": "str",
                "pattern": "^(add|subtract|multiply|divide)$"
            }
        }
    },
    timeout_seconds=5.0
)
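The wrapped function is still a plain Python callable, so it can be sanity-checked directly before the tool is registered. The function is repeated below so the snippet stands alone:

```python
def calculate(args):
    # Same function as above, repeated so this snippet is self-contained
    a = args["a"]
    b = args["b"]
    op = args.get("operation", "add")
    if op == "add": return {"result": a + b}
    elif op == "subtract": return {"result": a - b}
    elif op == "multiply": return {"result": a * b}
    elif op == "divide":
        if b == 0:
            raise ValueError("Cannot divide by zero")
        return {"result": a / b}

# Direct calls, no tool wrapper involved
assert calculate({"a": 2, "b": 3}) == {"result": 5}
assert calculate({"a": 6, "b": 3, "operation": "divide"}) == {"result": 2.0}
```

Testing the bare function first makes it easier to tell validation errors apart from bugs in the tool logic itself.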
Processing Modes
Tools can execute in different modes depending on their characteristics:
THREAD (Default)
Best for I/O-bound operations (API calls, file I/O, database queries):
tool = create_tool(
    name="fetch_data",
    func=fetch_from_api,
    processing_mode=ProcessingMode.THREAD
)
PROCESS
Best for CPU-bound operations (heavy computation, data processing):
tool = create_tool(
    name="process_image",
    func=expensive_computation,
    processing_mode=ProcessingMode.PROCESS
)
ASYNC
For native async functions:
import aiohttp

async def fetch_async(args):
    async with aiohttp.ClientSession() as session:
        async with session.get(args["url"]) as response:
            return await response.json()

tool = create_tool(
    name="fetch",
    func=fetch_async,
    processing_mode=ProcessingMode.ASYNC
)
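The three modes correspond to standard Python execution strategies. As a library-independent sketch (not Agentic's actual internals), a dispatcher might route a tool callable like this; `square` and `run_in_mode` are illustrative names:

```python
import asyncio
from concurrent.futures import ProcessPoolExecutor, ThreadPoolExecutor

def square(args):
    # Stand-in for a blocking tool function
    return {"result": args["x"] ** 2}

async def run_in_mode(func, args, mode):
    # Route the callable according to its processing mode
    if mode == "async":
        return await func(args)  # func must be an async function here
    loop = asyncio.get_running_loop()
    pool_cls = ThreadPoolExecutor if mode == "thread" else ProcessPoolExecutor
    with pool_cls(max_workers=1) as pool:
        return await loop.run_in_executor(pool, func, args)

result = asyncio.run(run_in_mode(square, {"x": 4}, "thread"))
# result == {"result": 16}
```

The trade-off this sketch captures: a thread pool keeps the event loop responsive during blocking I/O, while a process pool sidesteps the GIL for CPU-heavy work at the cost of pickling arguments and results.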
Streaming Tools
Tools can support streaming outputs for long-running operations:
import asyncio

class StreamingTool:
    async def run_stream(self, inputs):
        # Yield progressive outputs
        for i in range(10):
            await asyncio.sleep(0.5)
            yield {"progress": i * 10, "status": "processing"}
        yield {"progress": 100, "status": "complete"}

tool = Tool(
    definition=ToolDefinition(
        name="long_task",
        input_schema={},
        output_schema={}
    ),
    callable_func=StreamingTool()
)
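Consuming a streaming tool means iterating its async generator. A self-contained sketch with shortened sleeps (`collect` is an illustrative helper, not a framework API):

```python
import asyncio

class StreamingTool:
    async def run_stream(self, inputs):
        # Yield progressive outputs, then a final completion marker
        for i in range(2):
            await asyncio.sleep(0.01)
            yield {"progress": i * 50, "status": "processing"}
        yield {"progress": 100, "status": "complete"}

async def collect(tool):
    # Drain the stream into a list; a real consumer might forward
    # each chunk to the UI as it arrives
    updates = []
    async for chunk in tool.run_stream({}):
        updates.append(chunk)
    return updates

updates = asyncio.run(collect(StreamingTool()))
# updates[-1] == {"progress": 100, "status": "complete"}
```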
Tool Registry
Manage available tools through the registry:
from agentic import ToolRegistry
tools = ToolRegistry()

# Register tools
tools.register(time_tool)
tools.register(calc_tool)

# Check if tool exists
if tools.exists("calculate"):
    print("Calculator available")

# List all tools
all_tools = tools.list()

# Get tool definitions (for LLM prompts)
definitions = tools.get_definitions()
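Conceptually the registry is a name-to-tool map. A minimal stand-in (`MiniRegistry` is illustrative, not the real `ToolRegistry`) shows the shape of the four operations above:

```python
class MiniRegistry:
    """Sketch of a tool registry: a dict keyed by tool name."""

    def __init__(self):
        self._tools = {}

    def register(self, name, func, description=""):
        self._tools[name] = {"func": func, "description": description}

    def exists(self, name):
        return name in self._tools

    def list(self):
        return sorted(self._tools)

    def get_definitions(self):
        # The definitions (name + description) are what an LLM prompt needs;
        # the callable itself stays server-side
        return [{"name": n, "description": t["description"]}
                for n, t in sorted(self._tools.items())]

reg = MiniRegistry()
reg.register("get_time", lambda args: {}, "Get the current time")
```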
Tool Validation
Use the validation system to ensure tool arguments are correct:
Simple Validator
input_schema = {
    "validator": "simple",
    "required": ["url", "method"],
    "fields": {
        "url": {
            "type": "str",
            "pattern": "^https?://",
            "max_length": 2048
        },
        "method": {
            "type": "str",
            "pattern": "^(GET|POST|PUT|DELETE)$"
        },
        "timeout": {
            "type": "int",
            "min": 1,
            "max": 300
        }
    }
}
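To make the rule semantics concrete, here is a minimal plain-Python validator that enforces the same kinds of rules. It is a sketch of the behavior, not Agentic's actual validator:

```python
import re

def validate_simple(schema, args):
    """Sketch of a 'simple' validator; returns a list of error strings."""
    errors = []
    for field in schema.get("required", []):
        if field not in args:
            errors.append(f"missing required field: {field}")
    types = {"str": str, "int": int}
    for name, rules in schema.get("fields", {}).items():
        if name not in args:
            continue  # optional field not supplied
        value = args[name]
        expected = types.get(rules.get("type"))
        if expected and not isinstance(value, expected):
            errors.append(f"{name}: expected {rules['type']}")
            continue
        if "pattern" in rules and not re.search(rules["pattern"], value):
            errors.append(f"{name}: does not match pattern")
        if "max_length" in rules and len(value) > rules["max_length"]:
            errors.append(f"{name}: too long")
        if "min" in rules and value < rules["min"]:
            errors.append(f"{name}: below minimum")
        if "max" in rules and value > rules["max"]:
            errors.append(f"{name}: above maximum")
    return errors

errors = validate_simple(
    {"required": ["url"], "fields": {"url": {"type": "str", "pattern": "^https?://"}}},
    {"url": "ftp://example.com"},
)
# errors == ["url: does not match pattern"]
```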
Custom Validator Function
input_schema = {
    "validator": "simple",
    "fields": {
        "email": {
            "type": "str",
            "validator_func": lambda v: "@" in v and "." in v
        }
    }
}
Timeout Handling
Tools automatically time out after the specified duration:
tool = create_tool(
    name="slow_operation",
    func=slow_func,
    timeout_seconds=30.0  # Timeout after 30 seconds
)

# On timeout, the result has success=False and an error_message
result = tool.run(args, iteration=1)
if not result.success:
    print(f"Tool failed: {result.error_message}")
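Deadline enforcement like this is typically built on waiting for a future with a timeout. A library-independent sketch (`run_with_timeout` is an illustrative helper, not a framework API):

```python
import time
from concurrent.futures import ThreadPoolExecutor, TimeoutError as FutureTimeout

def run_with_timeout(func, args, timeout_seconds):
    """Run func(args) in a worker thread; report failure past the deadline."""
    with ThreadPoolExecutor(max_workers=1) as pool:
        future = pool.submit(func, args)
        try:
            return {"success": True, "output": future.result(timeout=timeout_seconds)}
        except FutureTimeout:
            return {"success": False,
                    "error_message": f"timed out after {timeout_seconds}s"}

def slow_func(args):
    time.sleep(0.2)  # simulate a slow operation
    return {"done": True}

result = run_with_timeout(slow_func, {}, timeout_seconds=0.05)
# result["success"] is False
```

Note that a running thread cannot be forcibly killed in Python; the deadline bounds how long the caller waits, not how long the worker runs.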
Tool Name Mapping
Map public tool names (that LLMs see) to internal registry names:
# Register tool with internal name
tools.register(create_tool(
name="internal_search_v2",
func=search_func
))
# Map public name to internal name
config = AgentConfig(
agent_id="my_agent",
tools_allowed=["internal_search_v2"],
tool_name_mapping={
"search": "internal_search_v2" # LLM uses "search"
}
)
# LLM output: <tool>{"name": "search", ...}</tool>
# Framework executes: internal_search_v2
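Resolution itself amounts to a dictionary lookup with a pass-through default; a sketch of the idea (`resolve_tool_name` is an illustrative helper):

```python
def resolve_tool_name(public_name, tool_name_mapping):
    """Map the name the LLM emitted to the internal registry name."""
    # Unmapped names fall through unchanged
    return tool_name_mapping.get(public_name, public_name)

mapping = {"search": "internal_search_v2"}
assert resolve_tool_name("search", mapping) == "internal_search_v2"
assert resolve_tool_name("calculate", mapping) == "calculate"
```

This indirection lets you version or rename tools internally without retraining prompts that reference the public names.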
Concurrent Tool Execution
Execute tools concurrently as they're detected during LLM streaming:
config = AgentConfig(
    agent_id="fast_agent",
    tools_allowed=["search", "calculate"],
    concurrent_tool_execution=True  # Execute immediately
)

# Tools execute as soon as their closing </tool> tag is detected,
# without waiting for the rest of the LLM response
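The payoff is the same as scheduling coroutines with `asyncio.gather`: total latency approaches the slowest call rather than the sum. A framework-independent sketch with fake tools:

```python
import asyncio
import time

async def fake_tool(name, delay):
    # Stand-in for an async tool call
    await asyncio.sleep(delay)
    return {"name": name, "success": True}

async def run_concurrently():
    # Both calls run at once, so total time is roughly max(delay), not sum(delay)
    return await asyncio.gather(
        fake_tool("search", 0.1),
        fake_tool("calculate", 0.1),
    )

start = time.perf_counter()
results = asyncio.run(run_concurrently())
elapsed = time.perf_counter() - start  # close to 0.1s, not 0.2s
```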
Tool Results
Tool execution returns a ToolResult object:
result = tool.run(args, iteration=1)

result.name            # Tool name
result.output          # Return value (dict, str, bytes, list)
result.success         # True if successful
result.error_message   # Error message if failed
result.execution_time  # Execution time in seconds
result.iteration       # Iteration when executed
result.call_id         # Unique call identifier
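The fields above can be pictured as a small dataclass; `ToolResultSketch` below is illustrative and may differ from the real `ToolResult`:

```python
from dataclasses import dataclass
from typing import Any, Optional

@dataclass
class ToolResultSketch:
    name: str                     # tool name
    output: Any                   # return value (dict, str, bytes, list)
    success: bool                 # True if the call succeeded
    error_message: Optional[str]  # populated only on failure
    execution_time: float         # seconds
    iteration: int                # agent loop iteration when executed
    call_id: str                  # unique call identifier

result = ToolResultSketch(
    name="calculate", output={"result": 5}, success=True,
    error_message=None, execution_time=0.002, iteration=1, call_id="call-001",
)
```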
Best Practices
- Always provide clear descriptions for tools to guide LLM usage
- Use validation schemas to catch invalid arguments early
- Set appropriate timeouts to prevent hanging operations
- Choose the right processing mode for your tool's workload
- Return structured data (dicts) when possible for better parsing
- Handle errors gracefully and return meaningful error messages
- Use streaming for long-running operations to provide progress feedback
Next Steps
- Validation - Deep dive into validation schemas
- Patterns - How tools are extracted from output
- Agent System - Configuring tool access