Metadata-Version: 2.4
Name: linden
Version: 0.2.0
Summary: A Python framework for building AI agents with multi-provider LLM support, persistent memory, and function calling capabilities.
Author-email: Matteo Stabile <matteo.stabile2@gmail.com>
License-Expression: MIT
Project-URL: Homepage, https://github.com/matstech/linden
Project-URL: Bug Tracker, https://github.com/matstech/linden/issues
Classifier: Programming Language :: Python :: 3
Classifier: Operating System :: OS Independent
Requires-Python: >=3.10
Description-Content-Type: text/markdown
License-File: LICENSE
Requires-Dist: altair==5.5.0
Requires-Dist: annotated-types==0.7.0
Requires-Dist: anthropic==0.64.0
Requires-Dist: anyio==4.10.0
Requires-Dist: argcomplete==3.6.2
Requires-Dist: attrs==25.3.0
Requires-Dist: backoff==2.2.1
Requires-Dist: bleach==6.2.0
Requires-Dist: blinker==1.9.0
Requires-Dist: boto3==1.38.22
Requires-Dist: botocore==1.38.22
Requires-Dist: build==1.3.0
Requires-Dist: cachetools
Requires-Dist: certifi==2025.8.3
Requires-Dist: charset-normalizer==3.4.3
Requires-Dist: click==8.2.1
Requires-Dist: cohere==5.15.0
Requires-Dist: colorama==0.4.6
Requires-Dist: contourpy==1.3.1
Requires-Dist: coverage==7.10.3
Requires-Dist: cvxopt==1.3.2
Requires-Dist: cycler==0.12.1
Requires-Dist: deepdiff==8.3.0
Requires-Dist: Deprecated==1.2.18
Requires-Dist: deprecation==2.1.0
Requires-Dist: distro==1.9.0
Requires-Dist: docopt==0.6.2
Requires-Dist: docstring_parser==0.17.0
Requires-Dist: docutils==0.22
Requires-Dist: elastic-transport==9.1.0
Requires-Dist: elasticsearch==9.1.0
Requires-Dist: eval_type_backport==0.2.2
Requires-Dist: faiss-cpu==1.12.0
Requires-Dist: fasta2a==0.2.6
Requires-Dist: fastavro==1.11.1
Requires-Dist: filelock==3.19.1
Requires-Dist: fonttools==4.56.0
Requires-Dist: fsspec==2025.7.0
Requires-Dist: gitdb==4.0.12
Requires-Dist: GitPython==3.1.45
Requires-Dist: google-auth
Requires-Dist: google-genai==1.16.1
Requires-Dist: graphviz==0.20.3
Requires-Dist: griffe==1.7.3
Requires-Dist: groq==0.31.0
Requires-Dist: grpcio==1.74.0
Requires-Dist: h11==0.16.0
Requires-Dist: h2==4.3.0
Requires-Dist: hf-xet==1.1.8
Requires-Dist: hpack==4.1.0
Requires-Dist: httpcore==1.0.9
Requires-Dist: httpx==0.28.1
Requires-Dist: httpx-sse==0.4.0
Requires-Dist: huggingface-hub==0.34.4
Requires-Dist: hyperframe==6.1.0
Requires-Dist: id==1.5.0
Requires-Dist: idna==3.10
Requires-Dist: importlib_metadata==8.6.1
Requires-Dist: iniconfig==2.1.0
Requires-Dist: intervaltree==3.1.0
Requires-Dist: jaraco.classes==3.4.0
Requires-Dist: jaraco.context==6.0.1
Requires-Dist: jaraco.functools==4.3.0
Requires-Dist: Jinja2==3.1.6
Requires-Dist: jiter==0.10.0
Requires-Dist: jmespath==1.0.1
Requires-Dist: joblib==1.5.1
Requires-Dist: jsonschema==4.25.1
Requires-Dist: jsonschema-specifications==2025.4.1
Requires-Dist: kaggle==1.7.4.2
Requires-Dist: kagglehub==0.3.10
Requires-Dist: keyring==25.6.0
Requires-Dist: kiwisolver==1.4.8
Requires-Dist: logfire-api==3.16.0
Requires-Dist: lxml==5.4.0
Requires-Dist: markdown-it-py==3.0.0
Requires-Dist: MarkupSafe==3.0.2
Requires-Dist: matplotlib==3.10.1
Requires-Dist: mcp==1.9.1
Requires-Dist: mdurl==0.1.2
Requires-Dist: mem0ai==0.1.116
Requires-Dist: mistralai==1.7.1
Requires-Dist: more-itertools==10.7.0
Requires-Dist: mplcursors==0.6
Requires-Dist: mpmath==1.3.0
Requires-Dist: narwhals==2.1.2
Requires-Dist: networkx==3.5
Requires-Dist: nh3==0.3.0
Requires-Dist: numpy==2.3.2
Requires-Dist: ollama==0.5.3
Requires-Dist: openai==1.101.0
Requires-Dist: opentelemetry-api==1.33.1
Requires-Dist: orderly-set==5.3.0
Requires-Dist: packaging==25.0
Requires-Dist: pandas==2.3.2
Requires-Dist: pillow==11.3.0
Requires-Dist: pipreqs==0.4.13
Requires-Dist: pluggy==1.6.0
Requires-Dist: pm4py==2.7.15.2
Requires-Dist: portalocker==3.2.0
Requires-Dist: posthog==6.6.1
Requires-Dist: prompt_toolkit==3.0.51
Requires-Dist: protobuf==5.29.5
Requires-Dist: pyarrow==21.0.0
Requires-Dist: pyasn1==0.6.1
Requires-Dist: pyasn1_modules==0.4.2
Requires-Dist: pydantic==2.11.7
Requires-Dist: pydantic-ai==0.2.6
Requires-Dist: pydantic-ai-slim==0.2.6
Requires-Dist: pydantic-evals==0.2.6
Requires-Dist: pydantic-graph==0.2.6
Requires-Dist: pydantic-settings==2.9.1
Requires-Dist: pydantic_core==2.33.2
Requires-Dist: pydeck==0.9.1
Requires-Dist: pydotplus==2.0.2
Requires-Dist: Pygments==2.19.2
Requires-Dist: pyparsing==3.2.1
Requires-Dist: pyproject_hooks==1.2.0
Requires-Dist: pytest==8.4.1
Requires-Dist: pytest-asyncio==1.1.0
Requires-Dist: pytest-cov==6.2.1
Requires-Dist: pytest-mock==3.14.1
Requires-Dist: python-dateutil==2.9.0.post0
Requires-Dist: python-dotenv==1.1.1
Requires-Dist: python-multipart==0.0.20
Requires-Dist: python-slugify==8.0.4
Requires-Dist: python-telegram-bot==22.1
Requires-Dist: pytz==2025.2
Requires-Dist: PyYAML==6.0.2
Requires-Dist: qdrant-client==1.15.1
Requires-Dist: readme_renderer==44.0
Requires-Dist: referencing==0.36.2
Requires-Dist: regex==2025.7.34
Requires-Dist: requests==2.32.5
Requires-Dist: requests-toolbelt==1.0.0
Requires-Dist: rfc3986==2.0.0
Requires-Dist: rich==14.0.0
Requires-Dist: rpds-py==0.27.0
Requires-Dist: rsa==4.9.1
Requires-Dist: s3transfer==0.13.0
Requires-Dist: safetensors==0.6.2
Requires-Dist: scikit-learn==1.7.1
Requires-Dist: scipy==1.16.1
Requires-Dist: seaborn==0.13.2
Requires-Dist: semantic-version==2.10.0
Requires-Dist: sentence-transformers==4.0.2
Requires-Dist: setuptools==80.9.0
Requires-Dist: six==1.17.0
Requires-Dist: smmap==5.0.2
Requires-Dist: sniffio==1.3.1
Requires-Dist: sortedcontainers==2.4.0
Requires-Dist: SQLAlchemy==2.0.43
Requires-Dist: sse-starlette==2.3.5
Requires-Dist: starlette==0.46.2
Requires-Dist: streamlit==1.48.1
Requires-Dist: sympy==1.14.0
Requires-Dist: tenacity==9.1.2
Requires-Dist: text-unidecode==1.3
Requires-Dist: threadpoolctl==3.6.0
Requires-Dist: tokenizers==0.21.4
Requires-Dist: toml==0.10.2
Requires-Dist: torch==2.8.0
Requires-Dist: tornado==6.5.2
Requires-Dist: tqdm==4.67.1
Requires-Dist: transformers==4.55.4
Requires-Dist: twine==6.1.0
Requires-Dist: types-requests==2.32.0.20250515
Requires-Dist: typing-inspection==0.4.1
Requires-Dist: typing_extensions==4.14.1
Requires-Dist: tzdata==2025.2
Requires-Dist: urllib3==2.5.0
Requires-Dist: uvicorn==0.34.2
Requires-Dist: wcwidth==0.2.13
Requires-Dist: webencodings==0.5.1
Requires-Dist: websockets==15.0.1
Requires-Dist: wheel==0.45.1
Requires-Dist: wrapt==1.17.2
Requires-Dist: yarg==0.1.10
Requires-Dist: zipp==3.21.0
Dynamic: license-file

# Linden

<div align="center">
<img src="https://raw.githubusercontent.com/matstech/linden/main/doc/logo.png" alt="Linden Logo" width="200"/>
</div>

<div align="center">
  <p><em>A Python framework for building AI agents with multi-provider LLM support, persistent memory, and function calling capabilities.</em></p>
</div>

## Table of Contents

- [Overview](#overview)
- [Features](#features)
- [Installation](#installation)
- [Requirements](#requirements)
- [Quick Start](#quick-start)
  - [Basic Agent Setup](#basic-agent-setup)
  - [Agent with Function Calling](#agent-with-function-calling)
  - [Streaming Responses](#streaming-responses)
  - [Structured Output with Pydantic](#structured-output-with-pydantic)
- [Configuration](#configuration)
  - [Environment Variables](#environment-variables)
- [Architecture](#architecture)
  - [Core Components](#core-components)
  - [Memory Architecture](#memory-architecture)
  - [Function Tool Definition](#function-tool-definition)
- [Advanced Usage](#advanced-usage)
  - [Multi-Turn Conversations](#multi-turn-conversations)
  - [Error Handling and Retries](#error-handling-and-retries)
  - [Memory Management](#memory-management)
  - [Provider-Specific Features](#provider-specific-features)
- [API Reference](#api-reference)
  - [AgentRunner](#agentrunner)
  - [Memory Classes](#memory-classes)
  - [Configuration](#configuration-1)
- [Error Types](#error-types)
- [Contributing](#contributing)
- [License](#license)
- [Support](#support)

## Overview

Linden is a comprehensive AI agent framework that provides a unified interface for interacting with multiple Large Language Model (LLM) providers including OpenAI, Anthropic, Groq, and Ollama. It features persistent conversation memory, automatic tool/function calling, and robust error handling for building production-ready AI applications.

## Features

- **Multi-Provider LLM Support**: Seamless integration with OpenAI, Anthropic, Groq, and Ollama
- **Persistent Memory**: Long-term conversation memory using FAISS vector storage and embeddings
- **Function Calling**: Automatic parsing and execution of tools with Google-style docstring support
- **Streaming Support**: Real-time response streaming for interactive applications
- **Thread-Safe Memory**: Concurrent agent support with isolated memory per agent
- **Configuration Management**: Flexible TOML-based configuration with environment variable support
- **Type Safety**: Full Pydantic model support for structured outputs
- **Error Handling**: Comprehensive error handling with retry mechanisms

## Installation

```bash
pip install linden
```

## Requirements

- Python >= 3.10
- Key dependencies (installed automatically):
  - `openai` - OpenAI API client
  - `anthropic` - Anthropic API client
  - `groq` - Groq API client
  - `ollama` - Ollama local LLM client
  - `pydantic` - Data validation and serialization
  - `mem0ai` - Memory management
  - `docstring_parser` - Function documentation parsing

## Quick Start

### Basic Agent Setup

```python
from linden.core import AgentRunner, Provider

# Create a simple agent
agent = AgentRunner(
    user_id="user123",
    name="assistant",
    model="gpt-4",
    temperature=0.7,
    system_prompt="You are a helpful AI assistant.",
    client=Provider.OPENAI
)

# Ask a question
response = agent.run("What is the capital of France?")
print(response)
```

### Agent with Function Calling

```python
def get_weather(location: str, units: str = "celsius") -> str:
    """Get current weather for a location.
    
    Args:
        location (str): The city name or location
        units (str, optional): Temperature units (celsius/fahrenheit). Defaults to celsius.
        
    Returns:
        str: Weather information
    """
    return f"The weather in {location} is 22°{units[0].upper()}"

# Create agent with tools
agent = AgentRunner(
    user_id="user123",
    name="weather_bot",
    model="gpt-4",
    temperature=0.7,
    system_prompt="You are a weather assistant.",
    tools=[get_weather],
    client=Provider.OPENAI
)

response = agent.run("What's the weather in Paris?")
print(response)
```

### Streaming Responses

```python
# Stream responses for real-time interaction
for chunk in agent.run("Tell me a story", stream=True):
    print(chunk, end="", flush=True)
```

### Structured Output with Pydantic

```python
from pydantic import BaseModel

class PersonInfo(BaseModel):
    name: str
    age: int
    occupation: str

agent = AgentRunner(
    user_id="user123",
    name="extractor",
    model="gpt-4",
    temperature=0.1,
    system_prompt="Extract person information from text.",
    output_type=PersonInfo,
    client=Provider.OPENAI
)

result = agent.run("John Smith is a 30-year-old software engineer.")
print(f"Name: {result.name}, Age: {result.age}")
```

## Configuration

Create a `config.toml` file in your project root:

```toml
[models]
dec = "gpt-4"
tool = "gpt-4"
extractor = "gpt-3.5-turbo"
speaker = "gpt-4"

[openai]
api_key = "your-openai-api-key"
timeout = 30

[anthropic]
api_key = "your-anthropic-api-key"
timeout = 30
max_tokens = 1024  # example

[groq]
base_url = "https://api.groq.com/openai/v1"
api_key = "your-groq-api-key" 
timeout = 30

[ollama]
timeout = 60

[memory]
path = "./memory_db"
collection_name = "agent_memories"
```

### Environment Variables

Set your API keys as environment variables:

```bash
export OPENAI_API_KEY="your-openai-api-key"
export ANTHROPIC_API_KEY="your-anthropic-api-key"
export GROQ_API_KEY="your-groq-api-key"
```

## Architecture

### Core Components

#### AgentRunner
The main agent orchestrator that handles:
- LLM interaction and response processing
- Tool calling and execution
- Memory management
- Error handling and retries
- Streaming and non-streaming responses

#### Memory System
- **AgentMemory**: Per-agent conversation history and semantic search
- **MemoryManager**: Thread-safe singleton for shared vector storage
- **Persistent Storage**: FAISS-based vector database for long-term memory
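
The isolation model can be pictured with a minimal in-memory stand-in (a toy sketch, not Linden's implementation: the real store is a FAISS vector index queried by embedding similarity, while this one keys a plain dict by agent id and does substring matching):

```python
from collections import defaultdict

class ToyMemoryStore:
    """In-memory stand-in for the shared store: one record list per agent_id."""
    def __init__(self):
        self._records = defaultdict(list)

    def record(self, agent_id: str, message: str) -> None:
        self._records[agent_id].append(message)

    def search(self, agent_id: str, query: str) -> list[str]:
        # The real store ranks by embedding similarity; here we substring-match.
        return [m for m in self._records[agent_id] if query.lower() in m.lower()]

store = ToyMemoryStore()
store.record("agent1", "User likes coffee")
store.record("agent2", "User likes tea")

# Each agent only sees its own records.
print(store.search("agent1", "likes"))  # ['User likes coffee']
```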

#### AI Clients
Abstract interface with concrete implementations:
- **OpenAiClient**: OpenAI GPT models
- **AnthropicClient**: Anthropic Claude models
- **GroqClient**: Groq inference API
- **Ollama**: Local LLM execution

#### Function Calling
- Automatic parsing of Google-style docstrings
- JSON Schema generation for tool descriptions
- Type-safe argument parsing and validation
- Error handling for tool execution
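
The schema-generation step can be illustrated with a minimal, self-contained sketch. It derives an OpenAI-style tool description from a function's signature using only the standard library; Linden's actual implementation additionally parses Google-style docstrings via `docstring_parser` to fill in per-parameter descriptions:

```python
import inspect
import typing

# Map common Python annotations to JSON Schema type names.
_JSON_TYPES = {str: "string", int: "integer", float: "number",
               bool: "boolean", dict: "object", list: "array"}

def function_to_tool_schema(fn) -> dict:
    """Build an OpenAI-style tool description from a function signature."""
    sig = inspect.signature(fn)
    hints = typing.get_type_hints(fn)
    properties, required = {}, []
    for name, param in sig.parameters.items():
        py_type = hints.get(name, str)
        properties[name] = {"type": _JSON_TYPES.get(py_type, "string")}
        if param.default is inspect.Parameter.empty:
            required.append(name)  # no default value -> required argument
    doc = inspect.getdoc(fn)
    return {
        "type": "function",
        "function": {
            "name": fn.__name__,
            "description": doc.splitlines()[0] if doc else "",
            "parameters": {"type": "object",
                           "properties": properties,
                           "required": required},
        },
    }

def get_weather(location: str, units: str = "celsius") -> str:
    """Get current weather for a location."""
    ...

schema = function_to_tool_schema(get_weather)
```

Here `location` ends up in `required` while `units` does not, because only `units` has a default value.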

### Memory Architecture

The memory system uses a shared FAISS vector store with agent isolation:

```python
# Each agent has isolated memory
agent1 = AgentRunner(name="agent1", ...)
agent2 = AgentRunner(name="agent2", ...)

# Memories are automatically isolated by agent_id
agent1.run("Remember I like coffee")
agent2.run("Remember I like tea")

# Each agent only retrieves its own memories
```

### Function Tool Definition

Functions must use Google-style docstrings for automatic parsing:

```python
def search_database(query: str, limit: int = 10, filters: dict = None) -> list:
    """Search the knowledge database.
    
    Args:
        query (str): The search query string
        limit (int, optional): Maximum results to return. Defaults to 10.
        filters (dict, optional): Additional search filters:
            category (str): Filter by category
            date_range (str): Date range in ISO format
            
    Returns:
        list: List of search results with metadata
    """
    # Implementation here
    pass
```

## Advanced Usage

### Multi-Turn Conversations

```python
agent = AgentRunner(
    user_id="user123",
    name="chat_bot",
    model="gpt-4",
    temperature=0.7,
    client=Provider.OPENAI
)

# Conversation maintains context automatically
agent.run("My name is Alice")
agent.run("What's my name?")  # Will remember "Alice"
agent.run("Tell me about my previous question")  # Has full context
```

### Error Handling and Retries

```python
agent = AgentRunner(
    user_id="user123",
    name="robust_agent",
    model="gpt-4", 
    temperature=0.7,
    retries=3  # Retry failed calls up to 3 times
)

try:
    response = agent.run("Complex query that might fail")
except ToolError as e:  # ToolError / ToolNotFound are Linden's tool errors (see Error Types)
    print(f"Tool execution failed: {e.message}")
except ToolNotFound as e:
    print(f"Tool not found: {e.message}")
```
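
A retry loop of this general shape is the common pattern behind a `retries` parameter (a hedged sketch, not Linden's actual code; the backoff policy and the broad `except Exception` are illustrative, and a real client would catch provider-specific errors):

```python
import time

def run_with_retries(call, retries: int = 3, base_delay: float = 0.5):
    """Retry `call` up to `retries` times with exponential backoff."""
    last_error = None
    for attempt in range(retries):
        try:
            return call()
        except Exception as exc:  # a real client catches specific error types
            last_error = exc
            time.sleep(base_delay * 2 ** attempt)
    raise last_error

# Example: a flaky call that succeeds on the third attempt.
attempts = {"n": 0}
def flaky():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise RuntimeError("transient failure")
    return "ok"

result = run_with_retries(flaky, base_delay=0.01)
print(result)  # ok
```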

### Memory Management

```python
# Reset agent memory
agent.reset()

# Add context without user interaction
agent.add_to_context("Important context information", persist=True)

# Get conversation history
history = agent.memory.get_conversation("Current query")
```

### Provider-Specific Features

```python
# Use Anthropic Claude models
claude_agent = AgentRunner(
    user_id="user123",
    name="claude_agent",
    model="claude-3-opus-20240229",
    client=Provider.ANTHROPIC
)

# Use local Ollama models
local_agent = AgentRunner(
    user_id="user123",
    name="local_agent",
    model="llama2",
    client=Provider.OLLAMA
)

# Use Groq for fast inference
fast_agent = AgentRunner(
    user_id="user123",
    name="fast_agent", 
    model="mixtral-8x7b-32768",
    client=Provider.GROQ
)
```

## API Reference

### AgentRunner

#### Constructor Parameters
- `user_id` (str): Unique identifier for the user
- `name` (str): Unique agent identifier
- `model` (str): LLM model name
- `temperature` (float): Response randomness (0-1)
- `system_prompt` (str, optional): System instruction
- `tools` (list[Callable], optional): Available functions
- `output_type` (BaseModel, optional): Structured output schema
- `client` (Provider): LLM provider selection
- `retries` (int): Maximum retry attempts

#### Methods
- `run(user_question: str, stream: bool = False)`: Execute agent query
- `reset()`: Clear conversation history
- `add_to_context(content: str, persist: bool = False)`: Add contextual information

### Memory Classes

#### AgentMemory
- `record(message: str, persist: bool = False)`: Store message
- `get_conversation(user_input: str)`: Retrieve relevant context
- `reset()`: Clear agent memory

#### MemoryManager (Singleton)
- `get_memory()`: Access shared memory instance
- `get_all_agent_memories(agent_id: str = None)`: Retrieve stored memories
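
The singleton pattern behind a shared `MemoryManager` can be sketched as follows (a hedged illustration of the usual double-checked-locking idiom, not Linden's actual implementation):

```python
import threading

class SingletonMemoryManager:
    """Double-checked-locking singleton, as commonly used for a shared store."""
    _instance = None
    _lock = threading.Lock()

    def __new__(cls):
        if cls._instance is None:            # fast path without the lock
            with cls._lock:
                if cls._instance is None:    # re-check while holding the lock
                    cls._instance = super().__new__(cls)
                    cls._instance.memories = {}
        return cls._instance

a = SingletonMemoryManager()
b = SingletonMemoryManager()
print(a is b)  # True
```

Every caller, on any thread, gets the same instance, which is what keeps the vector store shared while per-agent isolation is enforced at query time.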

### Configuration

#### ConfigManager
- `initialize(config_path: str | Path)`: Load configuration file
- `get(config_path: Optional[str | Path] = None)`: Get configuration instance
- `reload()`: Refresh configuration from file

## Error Types

- `ToolNotFound`: Requested function not available
- `ToolError`: Function execution failed
- `ValidationError`: Pydantic model validation failed
- `RequestException`: HTTP/API communication error

## Contributing

1. Fork the repository
2. Create a feature branch (`git checkout -b feature/new-feature`)
3. Commit your changes (`git commit -am 'Add new feature'`)
4. Push to the branch (`git push origin feature/new-feature`)
5. Create a Pull Request

## License

This project is licensed under the MIT License - see the LICENSE file for details.

## Support

- GitHub Issues: [https://github.com/matstech/linden/issues](https://github.com/matstech/linden/issues)
- Documentation: [https://github.com/matstech/linden](https://github.com/matstech/linden)
