Palabra Benchmark Report

Generated: {{ generated_at }}

Summary

Total Chunks

{{ summary.total_chunks }}

Duration

{{ "%.1f"|format(summary.total_duration) }}s

Sentences Transcribed

{{ summary.chunks_with_validated }}

Sentences Translated

{{ summary.chunks_with_translation }}
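The summary context consumed by the fields above could be built from per-chunk records along these lines. This is a minimal sketch: only the template variables (`total_chunks`, `total_duration`, `chunks_with_validated`, `chunks_with_translation`) come from the report itself; the per-chunk record keys (`duration`, `validated_text`, `translated_text`) are assumed names.

```python
# Sketch: assemble the `summary` dict the report template renders.
# Record keys (duration, validated_text, translated_text) are assumptions.
def build_summary(chunks):
    return {
        "total_chunks": len(chunks),
        "total_duration": sum(c.get("duration", 0.0) for c in chunks),
        "chunks_with_validated": sum(1 for c in chunks if c.get("validated_text")),
        "chunks_with_translation": sum(1 for c in chunks if c.get("translated_text")),
    }

chunks = [
    {"duration": 1.2, "validated_text": "hola", "translated_text": "hello"},
    {"duration": 0.8, "validated_text": "mundo", "translated_text": None},
]
summary = build_summary(chunks)
```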

Latency Statistics

Metric | Samples | Min | P25 | P50 | P75 | P90 | P95 | P99 | Max | Mean ± StDev
{% for metric_key, metric_label in metric_labels.items() %}
{% if metric_key in statistics %}
{% set s = statistics[metric_key] %}
{% set badge_class = "badge-good" if s.p95 <= 5.0 else ("badge-warning" if s.p95 <= 8.0 else "badge-bad") %}
{{ metric_label }} | {{ s.count }} | {{ "%.3f"|format(s.min) }}s | {{ "%.3f"|format(s.p25) }}s | {{ "%.3f"|format(s.p50) }}s | {{ "%.3f"|format(s.p75) }}s | {{ "%.3f"|format(s.p90) }}s | {{ "%.3f"|format(s.p95) }}s | {{ "%.3f"|format(s.p99) }}s | {{ "%.3f"|format(s.max) }}s | {{ "%.3f"|format(s.mean) }}s ± {{ "%.3f"|format(s.stdev) }}s
{% endif %}
{% endfor %}
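The per-metric statistics objects rendered above could be computed as follows. This is a sketch, not the report's actual implementation: the linear-interpolation percentile method is an assumption, while the field names (`count`, `min`, `max`, `mean`, `stdev`, `p25`–`p99`) come from the template.

```python
import statistics

def percentile(sorted_vals, p):
    """Percentile via linear interpolation between ranks (assumed method)."""
    if len(sorted_vals) == 1:
        return sorted_vals[0]
    k = (len(sorted_vals) - 1) * p / 100.0
    lo = int(k)
    hi = min(lo + 1, len(sorted_vals) - 1)
    return sorted_vals[lo] + (sorted_vals[hi] - sorted_vals[lo]) * (k - lo)

def latency_stats(samples):
    """Build one statistics entry with the fields the template expects."""
    vals = sorted(samples)
    return {
        "count": len(vals),
        "min": vals[0],
        "max": vals[-1],
        "mean": statistics.fmean(vals),
        "stdev": statistics.stdev(vals) if len(vals) > 1 else 0.0,
        **{f"p{p}": percentile(vals, p) for p in (25, 50, 75, 90, 95, 99)},
    }
```

Per the badge logic in the template, a metric's p95 is classed as good at or below 5.0 s, warning at or below 8.0 s, and bad above that.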

Percentile Comparison

Latency Over Time