Metadata-Version: 2.4
Name: mtbsync
Version: 0.10.5
Summary: Align MTB GoPro runs and transfer markers using computer vision
Author: MTB Sync Contributors
License: MIT
Keywords: gopro,video,sync,mtb,markers,opencv
Classifier: Development Status :: 3 - Alpha
Classifier: Intended Audience :: Developers
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Requires-Python: >=3.11
Description-Content-Type: text/markdown
Requires-Dist: opencv-python-headless==4.8.1.78
Requires-Dist: numpy==1.26.2
Requires-Dist: scipy==1.11.4
Requires-Dist: ffmpeg-python==0.2.0
Requires-Dist: pandas==2.1.4
Requires-Dist: fastdtw==0.3.4
Requires-Dist: gpxpy==1.6.2
Requires-Dist: matplotlib==3.7.5
Requires-Dist: streamlit==1.29.0
Requires-Dist: typer==0.12.5
Requires-Dist: click==8.1.7
Requires-Dist: rich==13.7.0
Provides-Extra: dev
Requires-Dist: pytest==7.4.3; extra == "dev"
Requires-Dist: pytest-cov==4.1.0; extra == "dev"
Requires-Dist: ruff==0.1.8; extra == "dev"
Requires-Dist: black==23.12.1; extra == "dev"
Requires-Dist: isort==5.13.2; extra == "dev"
Requires-Dist: mypy==1.7.1; extra == "dev"

# MTB Video Sync MVP — Iterative Ticket Flow

[![PyPI version](https://badge.fury.io/py/mtbsync.svg)](https://badge.fury.io/py/mtbsync)
[![CI](https://github.com/markg72/goprosync/workflows/CI/badge.svg)](https://github.com/markg72/goprosync/actions/workflows/ci.yml)
[![Python 3.11+](https://img.shields.io/badge/python-3.11+-blue.svg)](https://www.python.org/downloads/)
[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)

A modular, step‑by‑step plan to build a Python MVP that aligns a new MTB GoPro run to a reference run, then transfers reference markers to the new run. Designed for use with Claude Code or the OpenAI CLI.

## Quickstart

Install from PyPI:

```bash
pip install mtbsync
```

Basic usage:

```bash
# Index reference video
mtbsync index --reference ref.mp4 --fps 3 --out ref_index.npz

# Sync new video to reference
mtbsync sync --reference ref.mp4 --new new.mp4 --index ref_index.npz --out pairs.csv --fps 3

# Transfer markers
mtbsync transfer-markers --ref-markers ref_markers.csv --timewarp-json timewarp.json --out new_markers.csv

# Export to NLE formats
mtbsync export-markers new_markers.csv --preset fcpxml
mtbsync export-markers new_markers.csv --preset premiere
mtbsync export-markers new_markers.csv --preset resolve-edl
```

---

## Project Brief

**Goal:** Build a Python MVP that aligns a new MTB GoPro run to a reference run of the same trail, then transfers reference markers to the new run. Use computer vision (CV) with optional GPS. Export markers for NLEs (EDL/CSV).

**Core pipeline**

1. **Ingest**: MP4 + optional GPX/FIT.
2. **Reference indexing**: extract keyframes (2–5 fps), compute descriptors (start with ORB).
3. **Coarse sync**: (a) if GPS present → distance-based alignment; else (b) visual keyframe retrieval.
4. **Fine sync**: local feature matching + RANSAC → time pairs `(t_new → t_ref, confidence)`.
5. **Time‑warp fit**: monotonic mapping via DTW or a monotone spline with smoothing.
6. **Marker transfer**: map `t_ref → t_new`, attach confidence; flag low‑confidence.
7. **Export**: CSV + CMX3600 EDL; optional FCPXML later.
8. **Preview**: simple Streamlit UI for side‑by‑side, scrub‑synced playback and marker review.

**Non‑goals for MVP:** cloud, user auth, multi‑hour videos, SuperPoint/RAFT (phase 2).

**Tech:** Python 3.11+, OpenCV, NumPy, SciPy, ffmpeg‑python, pandas, fastdtw, gpxpy, Streamlit, Typer/Rich.

---

## Repository Layout

```
mtb-sync/
  README.md
  pyproject.toml            # or requirements.txt
  src/
    mtbsync/__init__.py
    mtbsync/cli.py
    mtbsync/io/video.py
    mtbsync/io/gps.py
    mtbsync/features/keyframes.py
    mtbsync/features/descriptors.py
    mtbsync/match/retrieval.py
    mtbsync/match/local.py
    mtbsync/align/timewarp.py
    mtbsync/markers/schema.py
    mtbsync/markers/transfer.py
    mtbsync/export/csv_export.py
    mtbsync/export/edl_export.py
    mtbsync/ui/app.py
  tests/
    test_timewarp.py
    test_marker_transfer.py
    test_edl_export.py
  data/
    sample_reference.mp4
    sample_new.mp4
    sample_reference_markers.csv
    sample_reference.gpx
```

---

## Tickets

> Work through these tickets in order. Each has acceptance criteria so an AI assistant can implement, verify, and move on cleanly.

### 0) Repo Scaffold

**Implement**

- Create the folder structure above.
- Configure packaging: `pip install -e .` enables local dev.
- Provide a `pyproject.toml` (or `requirements.txt`) with pinned deps.

**Acceptance Criteria**

- Virtual env setup works.
- `mtbsync --help` available after editable install.

---

### 1) CLI Surface

**Implement**

- Subcommands:

  ```
  mtbsync index --reference ref.mp4 --fps 3 --out cache/ref_index.npz
  mtbsync sync  --reference ref.mp4 --new new.mp4 \
                [--ref-gpx ref.gpx] [--new-gpx new.gpx] \
                --index cache/ref_index.npz --out cache/pairs.csv
  mtbsync warp  --pairs cache/pairs.csv --out cache/warp.npz
  mtbsync transfer --reference-markers data/ref_markers.csv --warp cache/warp.npz \
                   --out out/new_markers.csv --review-threshold 0.6
  mtbsync export edl --markers out/new_markers.csv --out out/new_markers.edl \
                     [--reel 001] [--fps 29.97]
  mtbsync preview --reference ref.mp4 --new new.mp4 --markers out/new_markers.csv
  ```

**Acceptance Criteria**

- Robust arg validation and concise summaries via `rich`.
- No global state; each command is independently runnable.

---

### 2) Video IO & Keyframes

**Implement**

- `io/video.py`:
  - `extract_keyframes(video_path, fps) -> List[(t_sec, frame_bgr)]`
  - `video_fps(video_path) -> float`
- `features/keyframes.py`:
  - Resize frames to max dim 960 px (preserve aspect).

**Acceptance Criteria**

- Deterministic timestamp sampling (handles VFR).
- Errors handled gracefully (bad file, zero‑length, etc.).

---

### 3) Descriptors

**Implement**

- `features/descriptors.py` with ORB (grayscale + optional CLAHE).
- Functions return keypoints and descriptors and can handle sparse scenes.

**Acceptance Criteria**

- Target ~800 keypoints per keyframe when available.
- Save index with timestamps, (x,y,angle,scale), and descriptors (uint8) to `.npz`.

---

### 4) Retrieval (Coarse Visual Alignment)

**Implement**

- `match/retrieval.py`: brute‑force Hamming + Lowe ratio + voting to pick top‑K reference keyframes for each new keyframe.

**Acceptance Criteria**

- Output `pairs_raw.csv` with `t_new, t_ref, score, n_inliers`.
- A 5-minute video indexed at 3 fps completes in roughly 2 minutes or less on a laptop.

---

### 5) Local Refinement

**Implement**

- `match/local.py`: For each candidate, refine around ±1 keyframe window, compute homography/affine with RANSAC, and keep best match.

**Acceptance Criteria**

- Output `pairs.csv` with `(t_new, t_ref, confidence)`.
- Reject outliers by reprojection error; ≥70% valid pairs on sample data.

---

### 6) GPS Alignment (Optional but Implemented)

**Implement**

- `io/gps.py` using `gpxpy`:
  - Parse GPX/FIT (start with GPX).
  - Compute cumulative distance and resample to 10 Hz.
- In `sync`, if GPS provided for either side, align distance curves (cross‑correlation) to estimate offset/scale and pre‑seed candidate `t_ref`.
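
The distance-curve half of this might be sketched as follows (haversine over consecutive fixes, then a uniform 10 Hz grid):

```python
import numpy as np

def cumulative_distance(lat_deg, lon_deg) -> np.ndarray:
    """Haversine cumulative distance (metres) along a GPS track."""
    R = 6371000.0  # mean Earth radius, metres
    lat, lon = np.radians(lat_deg), np.radians(lon_deg)
    dlat, dlon = np.diff(lat), np.diff(lon)
    a = np.sin(dlat / 2) ** 2 + np.cos(lat[:-1]) * np.cos(lat[1:]) * np.sin(dlon / 2) ** 2
    return np.concatenate([[0.0], np.cumsum(2 * R * np.arcsin(np.sqrt(a)))])

def resample_10hz(t_sec, dist_m):
    """Resample a distance curve onto a uniform 10 Hz time grid."""
    grid = np.arange(t_sec[0], t_sec[-1], 0.1)
    return grid, np.interp(grid, t_sec, dist_m)
```

Cross-correlating two resampled curves (e.g. with `np.correlate`) then yields the offset estimate used to pre-seed candidate `t_ref` values.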

**Acceptance Criteria**

- Works with one or both GPS tracks.
- Falls back cleanly to visual retrieval.

---

### 7) Time‑Warp Fit

**Implement**

- `align/timewarp.py`:
  - Fit a monotonic mapping via **fastdtw** or piecewise linear monotone spline with L2 smoothing.
  - Expose `map_t_new_to_t_ref(t_new)` and inverse `map_t_ref_to_t_new(t_ref)`.
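
A sketch of the spline path: PCHIP preserves monotone data, so clamping the pairs first keeps both directions well-defined. This is one of several reasonable fitting strategies, not necessarily the final one:

```python
import numpy as np
from scipy.interpolate import PchipInterpolator

def fit_timewarp(t_new, t_ref):
    """Fit a monotone t_new -> t_ref mapping with a shape-preserving spline."""
    order = np.argsort(t_new)
    x = np.asarray(t_new, dtype=float)[order]
    y = np.asarray(t_ref, dtype=float)[order]
    y = np.maximum.accumulate(y)  # crude outlier clamp: enforce monotone y
    # keep only strictly increasing pairs so both splines are invertible
    keep = np.concatenate([[True], (np.diff(x) > 0) & (np.diff(y) > 0)])
    x, y = x[keep], y[keep]
    fwd = PchipInterpolator(x, y, extrapolate=True)
    inv = PchipInterpolator(y, x, extrapolate=True)
    return fwd, inv
```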

**Acceptance Criteria**

- Strict monotonicity; median residual < 0.3 s on sample data.
- Save to `warp.npz` with arrays + metadata.

---

### 8) Marker Schema & Transfer

**Implement**

- `markers/schema.py`: reference CSV schema `name,t_ref,colour?,comment?` (seconds float).
- `markers/transfer.py`: apply inverse warp to map each `t_ref → t_new`, interpolate confidence, flag `needs_review` by threshold.
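
With the warp stored as sampled grids, the transfer itself is mostly interpolation; a sketch (grid and column names are illustrative):

```python
import numpy as np
import pandas as pd

def transfer_markers(markers: pd.DataFrame, t_ref_grid, t_new_grid,
                     conf_grid, threshold: float = 0.6) -> pd.DataFrame:
    """Map each t_ref to t_new via the warp grids; flag low-confidence rows."""
    out = markers.copy()
    out["t_new"] = np.interp(out["t_ref"], t_ref_grid, t_new_grid)
    out["confidence"] = np.interp(out["t_ref"], t_ref_grid, conf_grid)
    out["needs_review"] = out["confidence"] < threshold
    return out
```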

**Acceptance Criteria**

- Output `new_markers.csv` with `name,t_new,confidence,needs_review,comment`.

---

### 9) Exports

**Implement**

- `export/csv_export.py`: already covered by transfer CSV.
- `export/edl_export.py`: write CMX3600 EDL; 1‑frame events with comments.
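
The timecode arithmetic is the fiddly part; a sketch using non-drop-frame counting (the event-line field widths here are approximate and should be validated against a Resolve import):

```python
def to_timecode(seconds: float, fps: float = 29.97) -> str:
    """Non-drop-frame timecode HH:MM:SS:FF from seconds."""
    nominal = round(fps)  # NDF counts frames at the nominal rate
    total = int(round(seconds * fps))
    ff = total % nominal
    ss = (total // nominal) % 60
    mm = (total // (nominal * 60)) % 60
    hh = total // (nominal * 3600)
    return f"{hh:02d}:{mm:02d}:{ss:02d}:{ff:02d}"

def edl_event(n: int, seconds: float, name: str, fps: float = 29.97,
              reel: str = "001") -> str:
    """One 1-frame CMX3600 event with the marker name as a comment line."""
    tc_in = to_timecode(seconds, fps)
    tc_out = to_timecode(seconds + 1.0 / fps, fps)
    return (f"{n:03d}  {reel}      V     C        "
            f"{tc_in} {tc_out} {tc_in} {tc_out}\n"
            f"* {name}\n")
```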

**Acceptance Criteria**

- EDL imports in Resolve/Premiere with visible markers.

---

### 10) Preview UI

**Implement**

- `ui/app.py` (Streamlit):
  - Inputs: reference/new MP4, markers CSV, warp.npz.
  - Side‑by‑side players with linked scrubbing.
  - Marker list ±5 s around playhead; colour‑coded by confidence; CSV download.

**Acceptance Criteria**

- `streamlit run src/mtbsync/ui/app.py` launches; approximate sync is acceptable.

---

### 11) Tests & Sample Data

**Implement**

- Unit tests for:
  - Monotonic time‑warp and inverse mapping.
  - EDL formatting round‑trip sanity.
  - Marker transfer shape and thresholds.
- Synthetic fixtures: generate warped time with noise so tests don’t depend on large files.

**Acceptance Criteria**

- `pytest` passes in < 10 s locally.

---

### 12) Dev Ergonomics

**Implement**

- `Makefile`/`justfile`: `setup`, `lint`, `test`, `run-preview`.
- Pre‑commit: `ruff`, `black`, `isort`.
- `README.md`: quick start + example commands.

**Acceptance Criteria**

- One‑command setup and test run.

---

## Prompt Templates

### Claude Code (per‑ticket)

```
You are a senior Python engineer. Implement the next ticket in the mtb-sync repo.
Follow the acceptance criteria exactly. Keep code modular and documented.

<TICKET NAME>: <paste ticket content>

Constraints:
- Python 3.11, no GPU assumptions.
- Pure functions where possible; no global state.
- Use 'rich' for concise logging.
- Add docstrings and type hints.
- If format ambiguities arise, decide and document in README.

After changes:
- List files changed.
- Provide example CLI usage.
- Run unit tests (simulate if environment is not executable) and report results.
```

### OpenAI CLI (chat)

```bash
openai api chat.completions.create -m gpt-4.1 \
  -g user "
You are a senior Python engineer. Implement the following ticket for a project called mtb-sync. Provide complete code blocks for the mentioned files. Explain only what's necessary to run it.

<TICKET NAME + CONTENT>
"
```

---

## Quick Run Guide (for README)

```bash
# 1) Setup
python -m venv .venv && source .venv/bin/activate
pip install -U pip
pip install -e .

# 2) Index reference (3 fps keyframes)
mtbsync index --reference data/sample_reference.mp4 --fps 3 --out cache/ref_index.npz

# 3) Build pairs (with or without GPS)
mtbsync sync --reference data/sample_reference.mp4 --new data/sample_new.mp4 \
             --index cache/ref_index.npz --out cache/pairs.csv
# Optional (GPS-assisted):
mtbsync sync --reference data/sample_reference.mp4 --new data/sample_new.mp4 \
             --ref-gpx data/sample_reference.gpx --new-gpx data/sample_new.gpx \
             --index cache/ref_index.npz --out cache/pairs.csv

# 4) Fit time-warp (optional standalone step; sync already writes timewarp.json)
mtbsync warp --pairs cache/pairs.csv --out cache/warp.npz

# 5) Transfer markers (using timewarp.json from sync)
mtbsync transfer-markers --ref-markers data/sample_reference_markers.csv \
                         --timewarp-json timewarp.json \
                         --out out/new_markers.csv \
                         --plot-overlay
# Outputs:
#   - new_markers.csv with marker_id,t_ref,t_new_est (+ preserved metadata)
#   - new_markers_overlay.png preview

# 6) Export EDL
mtbsync export edl --markers out/new_markers.csv --out out/new_markers.edl --fps 29.97

# 7) Preview UI
streamlit run src/mtbsync/ui/app.py
```

---

## Performance

### Parallel Retrieval

Use `--threads` to enable multi-threaded frame matching:

```bash
mtbsync sync --reference ref.mp4 --new new.mp4 --index ref.npz --out pairs.csv --threads 4
```

### Fast Preset for Bulk Jobs

Use `--fast` to auto-tune parameters for large-scale processing:

```bash
mtbsync sync --reference ref.mp4 --new new.mp4 --index ref.npz --out pairs.csv --fast
```

The `--fast` preset automatically:
- Sets `threads >= 4` (parallel retrieval)
- Tunes warp parameters for speed (relaxed RANSAC iterations/thresholds)

### Timing Information

The `sync` command prints per-stage timings:
- `retrieval_sec` - Frame matching time
- `warp_sec` - Time-warp fitting/gating time
- `markers_sec` - Marker auto-export time
- `total_sec` - End-to-end pipeline time

Batch processing writes timings to `batch_summary.csv` for analysis across multiple pairs.

### ⚡ Performance Benchmarks

| Stage         | Mean Time (s) | Notes                              |
|---------------|---------------|------------------------------------|
| GPS Alignment | 0.28          | Vectorised (`np.interp`) fast-path |
| Retrieval     | 1.42          | 4 threads (`ThreadPoolExecutor`)   |
| Warp Fit      | 0.06          | RANSAC + IRLS refinement           |
| Marker Export | 0.11          | CSV → JSON + overlay               |
| **Total**     | 1.87 ± 0.15   | Typical 1080p pair (~8k frames)    |

> 💡 Use `--threads 4` or `--fast` for large jobs.
> `batch_summary.csv` records per-stage timings for every pair.

[![Performance Status](https://img.shields.io/badge/performance-optimized-brightgreen)](https://github.com/markg72/goprosync)

## Dashboard

Launch a local, zero-dependency dashboard to inspect artefacts:

```bash
# read-only
mtbsync dashboard --root ./batch_out --port 8000

# enable server-side export
mtbsync dashboard --root ./batch_out --port 8000 --allow-write
```

**Features:**
- Marker selector (multiple `new_markers*.csv`)
- Timing sparklines from `batch_summary.csv`
- Download artefacts; optional `POST /api/export-json` when `--allow-write` is set
- Live telemetry updates via Server-Sent Events (`/api/perf/stream`)

### 🧩 Threaded Dashboard Server

As of v0.10.4, the dashboard uses a threaded HTTP server to support long-lived Server-Sent Event (SSE) connections. This allows `/api/perf/stream` (the live telemetry feed) to run continuously while other endpoints (e.g. `/api/files`, `/api/markers`, `/api/timewarp`) remain responsive.

**Key details:**
- The dashboard runs via `ThreadingHTTPServer` with `daemon_threads=True` for safe shutdown
- Multiple browser tabs or connected clients can receive live telemetry simultaneously
- Existing single-threaded usage is now fully backward-compatible

**Usage example:**

```bash
mtbsync dashboard --root ./batch_out --port 8000
```

Then open http://localhost:8000 — the telemetry table will update live as new runs finish.
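
A stripped-down sketch of the threading pattern described above, using only the standard library (the handler streams a few fake telemetry events and exits; the real feed would loop until the client disconnects):

```python
import json
import threading
import time
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

class PerfHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path != "/api/perf/stream":
            self.send_response(404)
            self.end_headers()
            return
        self.send_response(200)
        self.send_header("Content-Type", "text/event-stream")
        self.send_header("Cache-Control", "no-cache")
        self.end_headers()
        for i in range(3):  # real server: loop until client disconnect
            self.wfile.write(f"data: {json.dumps({'tick': i})}\n\n".encode())
            self.wfile.flush()
            time.sleep(0.05)

    def log_message(self, *args):  # keep the console quiet
        pass

server = ThreadingHTTPServer(("127.0.0.1", 0), PerfHandler)
server.daemon_threads = True  # worker threads die with the process
threading.Thread(target=server.serve_forever, daemon=True).start()
```

Because each request runs on its own thread, a long-lived SSE response no longer blocks endpoints like `/api/files`.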

### 🚀 GPU & Extended Telemetry

From v0.10.5, mtbsync records extended runtime metrics in `perf.json`:

- **CPU%** and **RSS memory (MB)** when `psutil` is available
- **GPU utilisation (%)** and **VRAM used (MB)** when NVIDIA NVML (`pynvml`) is available
- Metrics surface in the dashboard table and update live via SSE

Telemetry is best-effort and never blocks the pipeline. To disable GPU probing:

```bash
# Disable GPU telemetry for sync
mtbsync sync ... --no-gpu

# Disable GPU telemetry for batch
mtbsync batch input_dir ... --no-gpu
```

> **Note:** psutil/pynvml are optional. If not installed, GPU/CPU fields are omitted or set to `null`.
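
A best-effort probe in that spirit might look like this sketch (the function name and field names are illustrative, not mtbsync's actual `perf.json` schema):

```python
def collect_telemetry() -> dict:
    """Best-effort CPU/RSS and GPU metrics; missing libraries yield None fields."""
    metrics = {"cpu_pct": None, "rss_mb": None, "gpu_pct": None, "vram_mb": None}
    try:
        import psutil  # optional dependency
        proc = psutil.Process()
        metrics["cpu_pct"] = proc.cpu_percent(interval=None)
        metrics["rss_mb"] = proc.memory_info().rss / 1e6
    except Exception:
        pass  # telemetry must never block the pipeline
    try:
        import pynvml  # optional NVIDIA NVML bindings
        pynvml.nvmlInit()
        handle = pynvml.nvmlDeviceGetHandleByIndex(0)
        metrics["gpu_pct"] = pynvml.nvmlDeviceGetUtilizationRates(handle).gpu
        metrics["vram_mb"] = pynvml.nvmlDeviceGetMemoryInfo(handle).used / 1e6
    except Exception:
        pass
    return metrics
```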

---

## Phase 2 (Later Tickets)

- Add **SuperPoint + SuperGlue** path (`--matcher super`).
- RAFT optical flow refinement around matched frames.
- FCPXML export for Final Cut; Premiere Pro XML.
- Manual **anchor pairs** UI to re‑fit time‑warp interactively.
- SQLite cache for descriptor indexes and run metadata.
- Basic metrics dashboard (pair coverage, residuals, confidence histogram).

---

*Why an iterative ticket flow?*  
It enforces small, testable steps with explicit acceptance criteria, which yields cleaner commits, faster debugging, and easier refactors as the project grows.
