Metadata-Version: 2.4
Name: synqed
Version: 1.1.27
Summary: Synqed - A wrapper around A2A for simplified multi-agent systems interaction and communication
Author: Synq Team
License: Proprietary
License-File: LICENSE
Requires-Python: >=3.10
Requires-Dist: a2a-sdk[http-server]==0.3.12
Requires-Dist: aiohttp>=3.8.0
Requires-Dist: cryptography>=41.0.0
Requires-Dist: httpx>=0.24.0
Requires-Dist: pydantic>=2.0.0
Requires-Dist: redis>=5.0.0
Requires-Dist: uvicorn>=0.20.0
Provides-Extra: all
Requires-Dist: a2a-sdk[all]==0.3.12; extra == 'all'
Provides-Extra: dev
Requires-Dist: mypy>=2.0.0; extra == 'dev'
Requires-Dist: pytest-asyncio>=0.21.0; extra == 'dev'
Requires-Dist: pytest-cov>=4.0.0; extra == 'dev'
Requires-Dist: pytest>=7.0.0; extra == 'dev'
Requires-Dist: ruff>=0.1.0; extra == 'dev'
Provides-Extra: grpc
Requires-Dist: a2a-sdk[grpc]==0.3.12; extra == 'grpc'
Provides-Extra: sql
Requires-Dist: a2a-sdk[sql]==0.3.12; extra == 'sql'
Description-Content-Type: text/markdown

# Synqed Python API library

[![Python Version](https://img.shields.io/badge/python-3.10+-blue.svg)](https://www.python.org/downloads/)

Synqed enables true **AI-to-AI interaction** and **multi-agent collaboration**.

Agents can talk to each other, collaborate, coordinate, delegate tasks, and solve problems together—letting you build actual **multi-agent** systems where agents truly work as a team.

### 🤝 True Collaboration, Not Just Delegation

Unlike traditional multi-agent systems that just assign tasks in parallel, Synqed enables **genuine collaboration** where agents:
- 👀 See what other agents are working on
- 💬 Provide feedback to each other
- 🔄 Refine their work based on peer input
- 🎯 Create integrated, cohesive solutions together

All seamless. All autonomous.

Synqed also lets agents from any provider—OpenAI, Anthropic, Google, or local models—communicate as part of the same system.

### 🌐 Universal Substrate

Synqed acts as a **universal substrate** for AI agents. Any agent that speaks the A2A (Agent-to-Agent) protocol can join a Synqed workspace, regardless of how it was built:

- ✅ Mix Synqed agents with agents built using `a2a-python` SDK
- ✅ Mix Synqed agents with agents from ANY framework that implements A2A
- ✅ Route transparently - agents don't know if peers are local or remote
- ✅ No wrapping or adaptation needed - just routing!

See `examples/universal_demo/` for a working demo mixing local Synqed agents with remote A2A agents.

---

## 🚀 Quick Links

- **[Complete Examples](#complete-examples)** - Working code in `examples/` directory
- **[Getting Started](#installation)** - Install and run your first agent
- **[Multi-Agent Collaboration](#-multi-agent-collaboration)** - Agent-to-agent communication
- **[Execution Patterns](#execution-patterns)** - Sequential, parallel, and hierarchical
- **[API Documentation](https://github.com/SynqLabs/synqed-samples/blob/main/api/python/README.md)** - Full API reference

---

## Documentation

For full API documentation, see the [Python API reference](https://github.com/SynqLabs/synqed-samples/blob/main/api/python/README.md).

## Installation

```bash
# Install from PyPI
pip install synqed
```

Synqed works with the following LLM providers. Install your preferred provider:

```bash
pip install openai                  # For OpenAI (GPT-4, GPT-4o, etc.)
pip install anthropic               # For Anthropic (Claude)
pip install google-generativeai     # For Google (Gemini)
```

### Environment Setup

Most examples use environment variables for API keys. Create a `.env` file:

```bash
# For OpenAI examples
OPENAI_API_KEY='your-openai-api-key'

# For Anthropic examples (most examples use this)
ANTHROPIC_API_KEY='your-anthropic-api-key'

# For Google examples
GOOGLE_API_KEY='your-google-api-key'
```

Install `python-dotenv` to load environment variables:

```bash
pip install python-dotenv
```
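Once installed, load the `.env` file at the top of your script before reading any keys. A minimal sketch (falling back to variables already exported in the shell if `python-dotenv` is not installed):

```python
import os

try:
    from dotenv import load_dotenv  # provided by the python-dotenv package
    load_dotenv()  # reads KEY=value pairs from ./.env into os.environ
except ImportError:
    pass  # no python-dotenv: rely on variables already set in the environment

# Returns None if the key is not set anywhere
api_key = os.getenv("OPENAI_API_KEY")
```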

## Usage


### Quick Start: Your First Agent

The fastest way to get started is with the included examples:

```bash
# Clone or navigate to the examples directory
cd examples/intro

# Start your first agent (Terminal 1)
python synqed_agent.py

# Connect a client (Terminal 2)
python synqed_client.py
```

**Congratulations!** You just ran your first AI agent.

Want to build from scratch? Here's a minimal example:

```python
import asyncio
import os
import synqed

async def agent_logic(context):
    """Your agent's brain - this is where the magic happens."""
    user_message = context.get_user_input()
    
    # Use any LLM you want
    from openai import AsyncOpenAI
    client = AsyncOpenAI(api_key=os.getenv("OPENAI_API_KEY"))
    
    response = await client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": user_message}
        ]
    )
    
    return response.choices[0].message.content

async def main():
    # Create your agent
    agent = synqed.Agent(
        name="MyFirstAgent",
        description="A helpful AI assistant",
        skills=["general_assistance", "question_answering"],
        executor=agent_logic
    )
    
    # Start the server
    server = synqed.AgentServer(agent, port=8000)
    print(f"Agent running at {agent.url}")
    await server.start()

if __name__ == "__main__":
    asyncio.run(main())
```

See `examples/intro/synqed_agent.py` for a complete working example with detailed comments.

---

### Understanding Agent Logic Functions

Your **agent logic function** is where you define your agent's behavior. For single-agent use cases, it receives a context object and returns a response string. For multi-agent collaboration, it returns a structured dict.

#### Single-Agent Logic (with executor parameter):

```python
async def agent_logic(context):
    """
    Args:
        context: RequestContext with methods:
            - get_user_input() → str: User's message
            - get_task() → Task: Full task object
            - get_message() → Message: Full message object
    
    Returns:
        str: Agent's response
    """
    user_message = context.get_user_input()
    
    # Implement any logic:
    # - Call LLMs (OpenAI, Anthropic, Google)
    # - Query databases
    # - Call external APIs
    
    return "Agent response"

# Create agent with executor parameter
agent = synqed.Agent(
    name="MyAgent",
    description="A helpful assistant",
    executor=agent_logic  # Single-agent mode
)
```

#### Multi-Agent Logic (with logic parameter):

```python
async def agent_logic(context: synqed.AgentLogicContext) -> dict:
    """
    Args:
        context: AgentLogicContext with:
            - latest_message: Latest incoming message
            - memory: Agent's message history
            - get_conversation_history(): Formatted conversation
            - build_response(): Helper to build responses
            - workspace: Current workspace
            - agent_name: Agent's name
    
    Returns:
        dict: {"send_to": "TargetAgent", "content": "message"}
    """
    latest = context.latest_message
    if not latest:
        return context.build_response("OtherAgent", "Ready!")
    
    # Get conversation history
    history = context.get_conversation_history()
    
    # Use any LLM to generate response
    # ... (call your LLM here)
    
    # Return structured response
    return context.build_response("TargetAgent", "My response")

# Create agent with logic parameter
agent = synqed.Agent(
    name="MyAgent",
    description="Collaborative agent",
    logic=agent_logic,  # Multi-agent mode
    default_target="OtherAgent"
)
```

See `examples/intro/synqed_agent.py` for single-agent examples and `examples/intro/workspace.py` for multi-agent examples.


### Client Configuration

The client connects to a running agent so that you (or another agent) can send it requests and receive responses.

```python
import synqed

# Default configuration
client = synqed.Client("http://localhost:8000")

# Custom timeout
client = synqed.Client(
    agent_url="http://localhost:8000",
    timeout=120.0  # 2 minutes (default is 60)
)

# Disable streaming
client = synqed.Client(
    agent_url="http://localhost:8000",
    streaming=False
)

# Override per-request
async with synqed.Client("http://localhost:8000") as client:
    response = await client.with_options(timeout=30.0).ask("Quick question")
```

---

## 🤝 Multi-Agent Collaboration

Synqed's **workspace-based messaging system** enables true agent-to-agent communication where agents:
- Maintain their own server-side message memory
- Exchange structured messages within workspaces
- Collaborate naturally through iterative communication
- Work together without conversation history blobs

### Architecture

The system consists of four core components:

1. **Agent**: Agent with built-in memory and logic functions
2. **Workspace**: Logical routing domain where agents collaborate
3. **WorkspaceExecutionEngine**: Executes agents with event-driven scheduling
4. **AgentLogicContext**: Provides conversation history and message building helpers

### Basic Two-Agent Collaboration

See `examples/intro/workspace.py` for a complete working example of Writer and Editor collaborating:

```bash
cd examples/intro
python workspace.py
```

Here's a simplified version showing the key concepts:

```python
import asyncio
import os
from synqed import Agent, AgentLogicContext

async def writer_logic(context: AgentLogicContext) -> dict:
    """Writer agent logic."""
    latest = context.latest_message
    if not latest:
        return context.build_response("Editor", "I'm ready!")
    
    # Get conversation history automatically
    conversation_text = context.get_conversation_history()
    
    # Use any LLM to generate response
    # ... (call your LLM here)
    
    # Return structured response
    return context.build_response("Editor", "Here's my draft...")

async def editor_logic(context: AgentLogicContext) -> dict:
    """Editor agent logic."""
    latest = context.latest_message
    if not latest:
        return context.build_response("Writer", "I'm ready!")
    
    # Get conversation history
    conversation_text = context.get_conversation_history()
    
    # Process and provide feedback
    return context.build_response("Writer", "Great work! Here's feedback...")

# For complete setup and execution, see examples/intro/workspace.py
```

### Agent Logic Functions

Agent logic functions receive an `AgentLogicContext` with:
- `context.memory`: Agent's message memory
- `context.latest_message`: Latest incoming message
- `context.get_conversation_history()`: Auto-formatted conversation history
- `context.build_response()`: Helper for structured responses
- `context.workspace`: Current workspace reference
- `context.agent_name`: The agent's name

Logic functions must return a dict with `"send_to"` and `"content"` keys:

```python
async def agent_logic(context: AgentLogicContext) -> dict:
    # Access memory
    latest = context.latest_message
    all_messages = context.memory.get_messages()
    
    # Get formatted conversation history (includes parsing of JSON messages)
    conversation_text = context.get_conversation_history()
    
    # Use any LLM to generate response
    # ... (your LLM call here)
    
    # Build response using helper
    return context.build_response("TargetAgent", "Message content")
```

See `examples/intro/workspace.py` for complete examples of agent logic functions.

### Key Benefits

✅ **True Agent-to-Agent Communication**: Agents send structured messages directly to each other  
✅ **Server-Side Memory**: Each agent maintains its own message history  
✅ **Workspace Routing**: Messages are routed through workspaces, enabling hierarchical collaboration  
✅ **Structured Responses**: All responses follow JSON format with `send_to` and `content`  
✅ **Event-Driven Execution**: WorkspaceExecutionEngine runs agents efficiently with automatic scheduling  
✅ **Parallel Execution**: Multiple workspaces can execute simultaneously for true parallelism  

See `examples/intro/workspace.py` for a complete two-agent collaboration example.  
See `examples/multi-agentic/` for advanced multi-team examples.

---

## Modern Orchestration Pattern

The modern approach uses **WorkspaceExecutionEngine** with **PlannerLLM** for intelligent multi-agent orchestration:

```python
import os
import synqed
from pathlib import Path

# Create planner for intelligent task routing
planner = synqed.PlannerLLM(
    provider="anthropic",
    api_key=os.environ["ANTHROPIC_API_KEY"],
    model="claude-sonnet-4-5"
)

# Create workspace manager
workspace_manager = synqed.WorkspaceManager(
    workspaces_root=Path("/tmp/synqed_workspaces")
)

# Create execution engine
execution_engine = synqed.WorkspaceExecutionEngine(
    planner=planner,
    workspace_manager=workspace_manager,
    enable_display=True,
    max_agent_turns=10
)

# Execute multi-agent collaboration
# (workspace_id comes from a workspace created via the workspace manager)
await execution_engine.run(workspace_id)
```

See `examples/multi-agentic/sequential_two_teams.py` and `examples/multi-agentic/parallel_three_teams.py` for complete examples.

---

## Legacy Orchestrator API

> **Note**: The Orchestrator class below is deprecated. For new projects, use the 
> WorkspaceExecutionEngine pattern shown above and in the examples.

The legacy **Orchestrator** uses an LLM to analyze tasks and intelligently route them to the most suitable agents.

### Basic Orchestration

```python
import synqed
import os

# Create orchestrator with LLM-powered routing
orchestrator = synqed.Orchestrator(
    provider=synqed.LLMProvider.OPENAI,
    api_key=os.environ.get("OPENAI_API_KEY"),
    model="gpt-4o"
)

# Register your specialized agents to the orchestrator
orchestrator.register_agent(research_agent.card, "http://localhost:8001")
orchestrator.register_agent(coding_agent.card, "http://localhost:8002")
orchestrator.register_agent(writing_agent.card, "http://localhost:8003")

# Orchestrator automatically selects the best agent(s) for the task
result = await orchestrator.orchestrate(
    "Research recent AI developments and write a technical summary"
)

print(f"Selected: {result.selected_agents[0].agent_name}")
print(f"Confidence: {result.selected_agents[0].confidence:.0%}")
print(f"Reasoning: {result.selected_agents[0].reasoning}")
```

### Supported LLM Providers

```python
import os
import synqed

# OpenAI
synqed.Orchestrator(
    provider=synqed.LLMProvider.OPENAI,
    api_key=os.environ.get("OPENAI_API_KEY"),
    model="model-here" 
)

# Anthropic
synqed.Orchestrator(
    provider=synqed.LLMProvider.ANTHROPIC,
    api_key=os.environ.get("ANTHROPIC_API_KEY"),
    model="model-here"
)

# Google
synqed.Orchestrator(
    provider=synqed.LLMProvider.GOOGLE,
    api_key=os.environ.get("GOOGLE_API_KEY"),
    model="model-here"
)
```

### Orchestration Configuration

```python
import os
import synqed

orchestrator = synqed.Orchestrator(
    provider=synqed.LLMProvider.OPENAI,
    api_key=os.environ.get("OPENAI_API_KEY"),
    model="gpt-4o",
    temperature=0.7,     # Sampling temperature (higher = more varied output)
    max_tokens=2000      # Maximum response length in tokens
)
)
```

---

## Multi-Agent Delegation

The **TaskDelegator** coordinates multiple agents working together on complex tasks:

```python
import synqed
import os

# Create orchestrator for intelligent routing
orchestrator = synqed.Orchestrator(
    provider=synqed.LLMProvider.OPENAI,
    api_key=os.environ.get("OPENAI_API_KEY"),
    model="gpt-4o"
)

# Create delegator
delegator = synqed.TaskDelegator(orchestrator=orchestrator)

# Register specialized agents (local or remote)
delegator.register_agent(agent=research_agent)
delegator.register_agent(agent=coding_agent)
delegator.register_agent(agent=writing_agent)

# Agents automatically collaborate on complex tasks
result = await delegator.submit_task(
    "Research microservices patterns and write implementation guide"
)
```

---

## 🤝 Agent Collaboration (NEW!)

Beyond simple delegation, Synqed enables **true agent collaboration** where agents actively interact, provide feedback, and refine their work together.

### Collaborative Workspace

The **OrchestratedWorkspace** creates a temporary environment where agents collaborate through structured phases:

```python
import os
import synqed

# Create orchestrator
orchestrator = synqed.Orchestrator(
    provider=synqed.LLMProvider.OPENAI,
    api_key=os.environ.get("OPENAI_API_KEY"),
    model="gpt-4o"
)

# Create collaborative workspace
workspace = synqed.OrchestratedWorkspace(
    orchestrator=orchestrator,
    enable_agent_discussion=True  # 🔑 Enables true collaboration!
)

# Register specialized agents
workspace.register_agent(research_agent)
workspace.register_agent(design_agent)
workspace.register_agent(development_agent)

# Agents will collaborate in 4 phases:
# 1. Share initial proposals
# 2. Provide peer feedback
# 3. Refine based on feedback
# 4. Produce integrated solution
result = await workspace.execute_task(
    "Design a new mobile app feature for habit tracking"
)
```

### Collaboration Phases

When `enable_agent_discussion=True`, agents go through structured collaboration:

**Phase 1: Kickoff** - All agents see the full context and team assignments

**Phase 2: Proposals** - Each agent shares their initial approach
```
🔬 Researcher: "I'll analyze user behavior patterns..."
🎨 Designer: "I'll create an intuitive daily tracking interface..."
💻 Developer: "I'll implement a notification system..."
```

**Phase 3: Peer Feedback** - Agents review and provide feedback
```
🔬 Researcher → Designer: "Great UI! Consider gamification based on my findings..."
🎨 Designer → Developer: "Can we use push notifications for streak reminders?"
💻 Developer → Researcher: "Your data suggests we need offline sync..."
```

**Phase 4: Refinement** - Agents refine work based on feedback
```
Each agent incorporates peer insights into their final deliverable
```

### Delegation vs. Collaboration

```python
# ❌ Traditional delegation (parallel, independent)
workspace = synqed.OrchestratedWorkspace(
    orchestrator=orchestrator,
    enable_agent_discussion=False  # Faster, but no interaction
)

# ✅ True collaboration (sequential phases, interactive)
workspace = synqed.OrchestratedWorkspace(
    orchestrator=orchestrator,
    enable_agent_discussion=True   # Slower, but higher quality
)
```

### Accessing Collaboration Data

```python
result = await workspace.execute_task(task)

# View all agent interactions
for msg in result.workspace_messages:
    print(f"{msg['sender_name']}: {msg['content']}")

# Count feedback exchanges
feedback_count = len([m for m in result.workspace_messages 
                     if 'feedback' in m.get('metadata', {})])
print(f"Agents exchanged {feedback_count} feedback messages")
```

### When to Use Collaboration

✅ **Use collaboration when:**
- Task requires multiple perspectives
- Quality matters more than speed
- Agents have complementary skills
- Integration is important

❌ **Use delegation when:**
- Tasks are independent
- Speed is critical
- Simple, straightforward tasks

📚 **Learn More**: See [AGENT_COLLABORATION_GUIDE.md](./AGENT_COLLABORATION_GUIDE.md) for detailed documentation.

---

### Remote Agent Registration

Register agents running anywhere:

```python
# Register remote agent
delegator.register_agent(
    agent_url="https://specialist-agent.example.com",
    agent_card=agent_card  # Optional pre-loaded card
)
```

---

## Workspace Architecture

Synqed uses **Workspaces** as the fundamental unit of agent collaboration. A workspace is a logical routing domain where agents communicate and coordinate.

### Core Components

- **Workspace**: Container for agents and their message routing
- **WorkspaceManager**: Creates and manages workspace lifecycle
- **WorkspaceExecutionEngine**: Executes agents with event-driven scheduling
- **AgentRuntimeRegistry**: Global registry for agent prototypes

### Working with Workspaces

The modern workspace pattern (see `examples/intro/workspace.py` and `examples/multi-agentic/`):

```python
import os
import synqed
from pathlib import Path

# Step 1: Register agent prototypes
synqed.AgentRuntimeRegistry.register("Agent1", agent1)
synqed.AgentRuntimeRegistry.register("Agent2", agent2)

# Step 2: Create workspace manager
workspace_manager = synqed.WorkspaceManager(
    workspaces_root=Path("/tmp/synqed_workspaces")
)

# Step 3: Create planner for orchestration
planner = synqed.PlannerLLM(
    provider="anthropic",
    api_key=os.environ["ANTHROPIC_API_KEY"],
    model="claude-sonnet-4-5"
)

# Step 4: Create execution engine
execution_engine = synqed.WorkspaceExecutionEngine(
    planner=planner,
    workspace_manager=workspace_manager,
    enable_display=True,
    max_agent_turns=10
)

# Step 5: Create workspace and send initial message
workspace = await workspace_manager.create_workspace(
    task_tree_node=task_node,
    parent_workspace_id=None
)

await workspace.route_message("USER", "Agent1", "Task description", manager=workspace_manager)

# Step 6: Execute
await execution_engine.run(workspace.workspace_id)
```

See complete examples in `examples/multi-agentic/` for full implementations.

### Legacy Workspace API

> **Note**: The basic Workspace class below has been replaced by WorkspaceManager + WorkspaceExecutionEngine.
> For new projects, use the pattern shown above.

The legacy **Workspace** provides a collaborative environment where agents can work together, share resources, and coordinate on complex tasks.

```python
import synqed

# Create a workspace
workspace = synqed.Workspace(
    name="Content Creation",
    description="Collaborative space for research and writing"
)

# Add agents to workspace
workspace.add_agent(research_agent)
workspace.add_agent(writing_agent)

# Start collaboration
await workspace.start()

# Execute collaborative task
results = await workspace.collaborate(
    "Research AI trends and write a comprehensive article"
)

# View results
for agent_name, response in results.items():
    print(f"{agent_name}: {response}")

# Clean up
await workspace.close()
```

### Hierarchical Workspaces

Synqed supports parent-child workspace relationships for complex orchestration:

```python
# Create root workspace
root_workspace = await workspace_manager.create_workspace(
    task_tree_node=root_task_node,
    parent_workspace_id=None
)

# Create child workspaces
child_workspace_1 = await workspace_manager.create_workspace(
    task_tree_node=child_task_node_1,
    parent_workspace_id=root_workspace.workspace_id
)

child_workspace_2 = await workspace_manager.create_workspace(
    task_tree_node=child_task_node_2,
    parent_workspace_id=root_workspace.workspace_id
)
```

See `examples/multi-agentic/sequential_two_teams.py` and `parallel_three_teams.py` for complete hierarchical workspace examples.

### Legacy Workspace Features

```python
# Create workspace with orchestrator for intelligent routing
orchestrator = synqed.Orchestrator(
    provider=synqed.LLMProvider.OPENAI,
    api_key=os.environ.get("OPENAI_API_KEY"),
    model="gpt-4o"
)

workspace = synqed.Workspace(
    name="Smart Collaboration",
    enable_persistence=True,  # Save workspace state
    auto_cleanup=False        # Keep artifacts
)

workspace.add_agent(agent1)
workspace.add_agent(agent2)
workspace.add_agent(agent3)

await workspace.start()

# Orchestrator selects best agents for the task
results = await workspace.collaborate(
    "Complex multi-step task",
    orchestrator=orchestrator
)
```

### Sharing Artifacts and State

```python
# Share data between agents
workspace.add_artifact(
    name="data.json",
    artifact_type="data",
    content={"key": "value"},
    created_by="agent1"
)

# Set shared state
workspace.set_shared_state("project_id", "proj-123")

# Get artifacts
artifacts = workspace.get_artifacts(artifact_type="data")

# Get shared state
project_id = workspace.get_shared_state("project_id")
```

### Direct Agent Communication

```python
# Send message to specific agent
response = await workspace.send_message_to_agent(
    participant_id="agent-123",
    message="Analyze this data"
)

# Broadcast to all agents
responses = await workspace.broadcast_message(
    "Please provide status updates"
)
```

For detailed workspace documentation, see the [Workspace Guide](https://github.com/SynqLabs/synqed-samples/blob/main/api/python/WORKSPACE.md).

---

## Execution Patterns

Synqed supports different execution patterns for multi-agent collaboration:

### Sequential Collaboration
Agents work together in turn-based cycles, passing work sequentially:
```
USER → Agent1 → Agent2 → Agent3 → USER
```
**Use when**: Tasks have dependencies, agents need to build on each other's work  
**Example**: `examples/multi-agentic/sequential_two_teams.py`

### Parallel Execution  
Multiple agents or teams work simultaneously using broadcast delegation:
```
                    ┌─→ Team1 (works in parallel)
Coordinator ────────┼─→ Team2 (works in parallel)
                    └─→ Team3 (works in parallel)
                           ↓
                    Coordinator synthesizes
```
**Use when**: Tasks are independent, speed is important  
**Example**: `examples/multi-agentic/parallel_three_teams.py`  
**Speedup**: Potential ~3x speedup with 3 parallel teams
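The fan-out/fan-in shape above maps directly onto `asyncio.gather`. A minimal sketch with placeholder coroutines standing in for the real `execution_engine.run(workspace_id)` calls (the team names and sleep durations are illustrative, not part of the Synqed API):

```python
import asyncio

async def run_team(name: str, seconds: float) -> str:
    # stand-in for execution_engine.run(workspace_id) on one team's workspace
    await asyncio.sleep(seconds)
    return f"{name} done"

async def main() -> list[str]:
    # Launch all three teams concurrently; total wall time is roughly
    # the slowest team, not the sum -- the source of the ~3x speedup.
    # gather() returns results in argument order.
    return await asyncio.gather(
        run_team("Team1", 0.1),
        run_team("Team2", 0.1),
        run_team("Team3", 0.1),
    )

results = asyncio.run(main())
```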

### Hierarchical Workspaces
Organize agents in parent-child workspaces for complex orchestration:
```
Root Workspace (Orchestrator)
  ├─ Child Workspace 1 (Team A)
  └─ Child Workspace 2 (Team B)
```
**Use when**: Large teams, natural hierarchy, subteam isolation  
**Example**: Both `sequential_two_teams.py` and `parallel_three_teams.py`

### Mixed Local/Remote Agents
Combine agents built with Synqed and external A2A agents in the same workspace:
```
Synqed Workspace
  ├─ Local Agent (Synqed)
  ├─ Local Agent (Synqed)
  └─ Remote Agent (A2A protocol, any framework)
```
**Use when**: Integrating existing A2A agents, cross-ecosystem collaboration  
**Example**: `examples/universal_demo/universal_substrate_demo.py`

---

## Complete Examples

The `examples/` directory contains fully working examples demonstrating different aspects of Synqed:

### 📚 Getting Started (`examples/intro/`)

**Basic Agent Setup:**
- `synqed_agent.py` - Create and run your first AI agent with streaming support
- `synqed_client.py` - Connect to agents using both `ask()` and `stream()` methods
- `agent_card.py` - Fetch and display agent capabilities and metadata

**Multi-Agent Collaboration:**
- `workspace.py` - Two agents (Writer + Editor) collaborating in a workspace using the inbox-based messaging system

```bash
# Run the basic examples
cd examples/intro
python synqed_agent.py    # Terminal 1 - start the agent
python synqed_client.py   # Terminal 2 - connect as client

# Run workspace collaboration
python workspace.py
```

### 🚀 Advanced Multi-Agent Systems (`examples/multi-agentic/`)

**Parallel Three Teams** (`parallel_three_teams.py`)
- Demonstrates TRUE parallel execution with broadcast delegation
- 1 coordinator broadcasts to 3 research teams simultaneously
- Each team has 3 agents (Lead + Senior + Junior) who collaborate internally
- Teams work in parallel for 3x speedup potential
- Total: 10 agents across 4 workspaces

```bash
cd examples/multi-agentic
python parallel_three_teams.py
```

**Sequential Two Teams** (`sequential_two_teams.py`)
- Orchestrator pattern with hierarchical workspace delegation
- Project Manager coordinates Research Team and Development Team
- Each team has 3 specialized agents working together
- Total: 7 agents across 3 workspaces (1 root + 2 child teams)

```bash
cd examples/multi-agentic
python sequential_two_teams.py
```

### 🌐 Universal Substrate (`examples/universal_demo/`)

**Key Concept**: Synqed is a universal substrate that can route to ANY agent speaking A2A protocol, regardless of how it was built.

**Code Review A2A Agent** (`code_review_a2a_agent.py`)
- A standalone A2A agent built with `a2a-python` SDK (NOT Synqed)
- Runs as independent HTTP server on port 8001
- Demonstrates that Synqed can route to agents from ANY ecosystem

**Universal Substrate Demo** (`universal_substrate_demo.py`)
- Mixes local Synqed agents with remote A2A agents in the same workspace
- Coordinator (Synqed) → LocalWriter (Synqed) → RemoteCodeAgent (A2A)
- Shows transparent routing across different agent frameworks
- No wrapping or adaptation needed - just routing!

```bash
cd examples/universal_demo
python universal_substrate_demo.py
```

### 📋 Example Requirements

All examples require:
```bash
pip install synqed anthropic python-dotenv
```

Universal substrate examples additionally require:
```bash
pip install a2a-sdk aiohttp
```

Create a `.env` file in the example directory:
```
ANTHROPIC_API_KEY='your-key-here'
```

---

## Summary

Synqed provides a complete framework for building multi-agent AI systems:

### 🎯 Key Features
- **True Multi-Agent Collaboration**: Agents communicate, provide feedback, and refine work together
- **Flexible Execution**: Sequential, parallel, and hierarchical patterns
- **Universal Substrate**: Route to any A2A-compliant agent
- **Memory Management**: Each agent maintains its own conversation history
- **Event-Driven**: Efficient execution with automatic scheduling

### 📚 Learning Path
1. Start with `examples/intro/synqed_agent.py` - Create your first agent
2. Try `examples/intro/workspace.py` - Two agents collaborating
3. Explore `examples/multi-agentic/sequential_two_teams.py` - Hierarchical teams
4. Learn `examples/multi-agentic/parallel_three_teams.py` - Parallel execution
5. Discover `examples/universal_demo/universal_substrate_demo.py` - Cross-framework integration

### 🔗 Resources
- [Complete Examples](#complete-examples) - Working code in `examples/` directory
- [API Documentation](https://github.com/SynqLabs/synqed-samples/blob/main/api/python/README.md) - Full API reference
- [GitHub Repository](https://github.com/SynqLabs/synqed) - Source code and issues

---

Copyright © 2025 Synq Team. All rights reserved.

