Metadata-Version: 2.4
Name: video-lens
Version: 0.5.10
Summary: Video analysis lens for the modular assessment platform - extracts frames, transcripts, and quality metrics
Author-email: Michael Borck <michael@example.com>
Classifier: Development Status :: 3 - Alpha
Classifier: Intended Audience :: Education
Classifier: License :: OSI Approved :: MIT License
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Requires-Python: >=3.11
Requires-Dist: anthropic>=0.7.0
Requires-Dist: faster-whisper>=0.10.0
Requires-Dist: ffmpeg-python>=0.2.0
Requires-Dist: gradio>=4.8.0
Requires-Dist: jinja2>=3.1.0
Requires-Dist: numpy>=1.24.0
Requires-Dist: opencv-python>=4.8.0
Requires-Dist: pandas>=2.1.0
Requires-Dist: pillow>=10.1.0
Requires-Dist: pydantic-settings>=2.1.0
Requires-Dist: pydantic>=2.5.0
Requires-Dist: pytesseract>=0.3.10
Requires-Dist: python-dotenv>=1.0.0
Requires-Dist: pyyaml>=6.0.1
Requires-Dist: rich>=13.7.0
Requires-Dist: spacy>=3.7.0
Requires-Dist: torch>=2.2.0
Requires-Dist: torchvision>=0.17.0
Requires-Dist: tqdm>=4.66.0
Requires-Dist: transformers>=4.36.0
Requires-Dist: typer>=0.9.0
Requires-Dist: weasyprint>=60.0
Provides-Extra: api
Requires-Dist: google-generativeai>=0.3.0; extra == 'api'
Requires-Dist: openai>=1.12.0; extra == 'api'
Provides-Extra: build
Requires-Dist: build>=1.0.0; extra == 'build'
Requires-Dist: twine>=4.0.0; extra == 'build'
Provides-Extra: dev
Requires-Dist: basedpyright>=1.8.0; extra == 'dev'
Requires-Dist: pre-commit>=3.5.0; extra == 'dev'
Requires-Dist: pytest-asyncio>=0.21.0; extra == 'dev'
Requires-Dist: pytest-cov>=4.1.0; extra == 'dev'
Requires-Dist: pytest>=7.4.0; extra == 'dev'
Requires-Dist: ruff>=0.1.6; extra == 'dev'
Requires-Dist: types-pillow>=10.2.0; extra == 'dev'
Requires-Dist: types-pyyaml>=6.0.12; extra == 'dev'
Requires-Dist: types-requests>=2.31.0; extra == 'dev'
Requires-Dist: types-setuptools>=80.9.0; extra == 'dev'
Provides-Extra: gpu
Requires-Dist: torch>=2.2.0; extra == 'gpu'
Requires-Dist: torchvision>=0.17.0; extra == 'gpu'
Description-Content-Type: text/markdown

# DeepBrief

[![PyPI version](https://badge.fury.io/py/deep-brief.svg)](https://pypi.org/project/deep-brief/)
[![Python 3.11+](https://img.shields.io/badge/python-3.11+-blue.svg)](https://www.python.org/downloads/)
[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)

A video analysis application that helps students, educators, and professionals analyze presentations by combining speech transcription, visual analysis, and AI-powered feedback.

> **Status**: Phase 1 MVP in development. Core infrastructure complete, video processing pipeline in progress.

## Features

- **Video Processing**: Support for MP4, MOV, AVI, and WebM formats
- **Speech Analysis**: Automatic transcription with speaking rate and filler word detection
- **Visual Analysis**: Scene detection with frame captioning and quality assessment
- **AI Feedback**: Actionable insights and recommendations for improvement
- **Professional Reports**: Interactive HTML and structured JSON outputs
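
As a rough illustration of the speech-analysis idea (a toy sketch, not DeepBrief's actual implementation, which transcribes audio with faster-whisper), speaking rate and filler-word counts can be computed from a transcript in a few lines of stdlib Python:

```python
import re
from collections import Counter

# Single-word fillers used for this toy example; a real detector would
# also handle multi-word phrases like "you know".
FILLERS = {"um", "uh", "like", "basically", "actually"}

def analyze_transcript(text: str, duration_seconds: float) -> dict:
    """Compute word count, speaking rate (words per minute), and filler counts."""
    words = re.findall(r"[a-z']+", text.lower())
    wpm = len(words) / (duration_seconds / 60)
    fillers = Counter(w for w in words if w in FILLERS)
    return {"words": len(words), "wpm": round(wpm, 1), "fillers": dict(fillers)}

result = analyze_transcript("Um so like this is um a test", duration_seconds=30.0)
print(result)  # e.g. {'words': 8, 'wpm': 16.0, 'fillers': {'um': 2, 'like': 1}}
```

A typical presentation target is roughly 130-160 WPM; flagging rates far outside that band is the kind of actionable feedback the real pipeline aims for.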

## Installation

### Prerequisites

- Python 3.11 or higher
- ffmpeg (for video processing)

### Option 1: Install from PyPI (recommended for users)

```bash
pip install deep-brief
```

### Option 2: Install from source (for development)

```bash
# Install uv (fast Python package manager)
curl -LsSf https://astral.sh/uv/install.sh | sh

# Clone the repository
git clone https://github.com/michael-borck/deep-brief.git
cd deep-brief

# Create virtual environment and install
uv venv
source .venv/bin/activate  # On Windows: .venv\Scripts\activate
uv pip install -e ".[dev]"
```

### Installing ffmpeg

**macOS:**
```bash
brew install ffmpeg
```

**Ubuntu/Debian:**
```bash
sudo apt update && sudo apt install ffmpeg
```

**Windows:**
Download from [https://ffmpeg.org/download.html](https://ffmpeg.org/download.html)
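
To confirm ffmpeg is reachable before analyzing videos, you can check your PATH from Python (a convenience sketch, not part of deep-brief itself):

```python
import shutil
import subprocess

def check_ffmpeg() -> bool:
    """Return True if an ffmpeg executable is on PATH."""
    return shutil.which("ffmpeg") is not None

if check_ffmpeg():
    # Print the first line of `ffmpeg -version`, e.g. "ffmpeg version 6.1 ..."
    out = subprocess.run(["ffmpeg", "-version"], capture_output=True, text=True)
    print(out.stdout.splitlines()[0])
else:
    print("ffmpeg not found - install it before running deep-brief")
```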

## Quick Start

```bash
# Show available commands
deep-brief --help

# Check version
deep-brief version

# Launch web interface (coming soon)
deep-brief analyze

# Analyze a specific video (CLI mode - coming soon)
deep-brief analyze video.mp4 --output ./reports
```

**Current Status**: The CLI framework is complete. Video processing features are in active development.

## Development

This project uses modern Python tooling and follows strict quality standards:

- **uv** for fast package management
- **ruff** for formatting and linting
- **basedpyright** for strict type checking
- **pytest** for testing with coverage
- **pyproject.toml** for all configuration (no setup.py)

### Development Setup

```bash
# Clone and setup
git clone https://github.com/michael-borck/deep-brief.git
cd deep-brief
uv venv && source .venv/bin/activate
uv pip install -e ".[dev]"

# Verify setup
deep-brief --help
pytest -v
```

### Code Quality Standards

```bash
# Format code
ruff format .

# Lint code  
ruff check .

# Type checking (strict mode)
basedpyright

# Run tests with coverage
pytest -v

# Run all quality checks
ruff format . && ruff check . && basedpyright && pytest -v
```

### Project Structure

```
src/deep_brief/          # Main package
├── core/                # Video processing pipeline
├── analysis/            # Speech and visual analysis  
├── reports/             # Report generation
├── interface/           # Gradio web interface
└── utils/               # Configuration and utilities

tests/                   # Test suite (mirrors src structure)
docs/                    # Documentation and specs
tasks/                   # Development task tracking
config/                  # Configuration files
```
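
Configuration files live under `config/`, and the settings layer is built on pydantic-settings. As a stdlib-only sketch of the pattern (the field names and `DEEP_BRIEF_` prefix here are illustrative assumptions, not deep-brief's actual schema), defaults are declared once and overridden from the environment:

```python
import os
from dataclasses import dataclass

@dataclass
class AnalysisSettings:
    """Illustrative settings object; the real schema uses pydantic-settings."""
    model_name: str = "base"
    frame_sample_rate: int = 2  # frames sampled per second of video
    output_dir: str = "./reports"

    @classmethod
    def from_env(cls, prefix: str = "DEEP_BRIEF_") -> "AnalysisSettings":
        # Environment variables like DEEP_BRIEF_OUTPUT_DIR override defaults.
        kwargs = {}
        for name, cast in (("model_name", str), ("frame_sample_rate", int), ("output_dir", str)):
            raw = os.environ.get(prefix + name.upper())
            if raw is not None:
                kwargs[name] = cast(raw)
        return cls(**kwargs)

os.environ["DEEP_BRIEF_FRAME_SAMPLE_RATE"] = "5"
settings = AnalysisSettings.from_env()
print(settings)  # frame_sample_rate comes from the environment, the rest are defaults
```

pydantic-settings provides the same env-override behavior declaratively, plus validation; the sketch above only shows the shape of the idea.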

### Current Development Phase

- ✅ **Phase 0**: Project setup, packaging, PyPI publication
- 🚧 **Phase 1**: Core video processing pipeline (in progress)
- 📋 **Phase 2**: Enhanced analysis features
- 📋 **Phase 3**: Advanced AI features

See `tasks/tasks-prd-phase1-mvp.md` for detailed task tracking.

## Links

- **PyPI**: https://pypi.org/project/deep-brief/
- **GitHub**: https://github.com/michael-borck/deep-brief
- **Documentation**: Coming soon

## License

MIT License - see LICENSE file for details.

## Contributing

Contributions are welcome! Please read the development guidelines in `CLAUDE.md` for our coding standards and toolchain requirements.