Metadata-Version: 2.4
Name: hapax
Version: 0.1.1
Summary: Python SDK for Hapax LLM Gateway with OpenLit observability
Author-email: Teilo Millet <teilo@example.com>
License: MIT
License-File: LICENSE
Requires-Python: >=3.8
Requires-Dist: openai>=1.0.0
Requires-Dist: openlit>=0.1.0
Requires-Dist: typing-extensions>=4.0.0
Provides-Extra: dev
Requires-Dist: black>=22.0; extra == 'dev'
Requires-Dist: isort>=5.0; extra == 'dev'
Requires-Dist: mypy>=1.0; extra == 'dev'
Requires-Dist: pytest>=7.0; extra == 'dev'
Description-Content-Type: text/markdown

# Hapax Python SDK

Hapax is a toolkit for building, deploying, and monitoring LLM-powered applications. This Python SDK provides decorators and utilities to instrument your LLM functions with observability powered by OpenLit.

## Features

- **LLM Function Decorators**: Easily mark functions that interact with LLMs using the `@llm_function` decorator
- **OpenLit Observability**: Built-in tracing and metrics for your LLM operations
- **Rich Metadata**: Capture detailed information about your LLM requests, responses, and performance
- **Easy Integration**: Works with popular LLM providers like OpenAI
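
Conceptually, `@llm_function` wraps an ordinary Python function and associates LLM metadata (name, model, provider) with it. The sketch below is a hypothetical, standalone illustration of that decorator pattern, not the actual hapax implementation:

```python
import functools

def llm_function(name: str, model: str, provider: str):
    """Illustrative sketch only: attach LLM metadata to a function."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            # The real SDK would also record traces/metrics here.
            return func(*args, **kwargs)
        # Expose the declared metadata on the wrapped function.
        wrapper.llm_metadata = {"name": name, "model": model, "provider": provider}
        return wrapper
    return decorator

@llm_function(name="demo", model="gpt-3.5-turbo", provider="openai")
def demo(prompt: str) -> str:
    return prompt.upper()
```

The real decorator additionally hooks into OpenLit tracing; see the Quick Start below for actual usage.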

## Installation

```bash
pip install hapax
```

## Quick Start

1. Set up your environment:
```bash
# Set your OpenAI API key
export OPENAI_API_KEY=your-api-key-here

# Start the observability stack (requires Docker)
docker compose up -d
```
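
Before running the examples, you can check that the key is actually visible to Python. The helper below is hypothetical (`require_api_key` is not part of the SDK), shown only to illustrate the environment check:

```python
import os

def require_api_key() -> str:
    """Return the OpenAI API key from the environment, or fail loudly."""
    key = os.environ.get("OPENAI_API_KEY")
    if not key:
        raise RuntimeError(
            "OPENAI_API_KEY is not set; run `export OPENAI_API_KEY=...` first"
        )
    return key
```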

2. Create your first LLM function:
```python
from hapax import llm_function
import openlit
from openai import OpenAI

# Initialize OpenLit with the collector endpoint
openlit.init(otlp_endpoint="http://127.0.0.1:4328")

# Initialize OpenAI client
client = OpenAI()

@llm_function(
    name="generate_text",
    model="gpt-3.5-turbo",
    provider="openai"
)
def generate_text(prompt: str) -> str:
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}]
    )
    return response.choices[0].message.content

# Use the function with tracing
with openlit.start_trace(name="my_llm_request") as trace:
    prompt = "Write a haiku about coding"
    result = generate_text(prompt)
    trace.set_metadata({
        "gen_ai.prompt": prompt,
        "gen_ai.completion": result
    })
```

## Project Structure

- `hapax/`: Core SDK implementation
  - `decorators.py`: LLM function decorators and tracing utilities
  - `types.py`: Type definitions and interfaces
- `examples/`: Example applications
  - `simple_completion.py`: Basic example of generating text with tracing
  - `evaluation_example.py`: Advanced example with text generation and analysis

## Observability Setup

### Prerequisites
- Docker and Docker Compose
- Python 3.8+
- OpenAI API key

### Starting the Observability Stack

1. The project includes a `docker-compose.yaml` that sets up:
   - OpenTelemetry Collector
   - Jaeger UI for trace visualization

2. Start the services:
```bash
docker compose up -d
```

3. Access the interfaces:
   - Jaeger UI: http://localhost:16687

### Trace Attributes

Hapax captures rich metadata about your LLM operations:

- `gen_ai.system`: LLM provider (e.g., "openai")
- `gen_ai.request.model`: Model name (e.g., "gpt-3.5-turbo")
- `gen_ai.operation.name`: Operation type (e.g., "text_generation")
- `gen_ai.prompt`: Input prompt
- `gen_ai.completion`: Generated text
- `gen_ai.usage.*`: Token usage metrics
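
You can also attach these attributes yourself via `trace.set_metadata`, as in the Quick Start. The values below are illustrative placeholders following the `gen_ai.*` naming convention:

```python
# Hypothetical values; in practice these come from your actual request/response.
metadata = {
    "gen_ai.system": "openai",
    "gen_ai.request.model": "gpt-3.5-turbo",
    "gen_ai.operation.name": "text_generation",
    "gen_ai.prompt": "Write a haiku about coding",
    "gen_ai.completion": "<generated text>",
    "gen_ai.usage.total_tokens": 0,  # fill in from the provider's usage stats
}

# Inside a trace, you would call: trace.set_metadata(metadata)
```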

## Examples

### Simple Text Generation

See `examples/simple_completion.py` for a basic example of generating haikus with tracing.

### Text Generation with Analysis

See `examples/evaluation_example.py` for an advanced example that includes:
- Text generation
- Content analysis
- Detailed trace attributes
- Nested traces for complex operations

## Known Issues

1. The LangChain integration warning can be safely ignored if you are not using LangChain:
```
Failed to instrument langchain: No module named 'langchain_community'
```

2. The metrics setup warning is a known issue that doesn't affect tracing functionality:
```
OpenLIT metrics setup failed. Metrics will not be available
```

## Contributing

We welcome contributions! Feel free to open a pull request.

## License

This project is licensed under the MIT License - see the LICENSE file for details.