Compression Rate
{{ report_data.compression_rate|default("0.0") }}x
{{ report_data.compression_improvement|default("0.0") }}% smaller
Student Accuracy
{{ report_data.student_accuracy|default("0.0") }}%
{{ report_data.accuracy_drop|default("0.0") }}% vs teacher
Speed-up Factor
{{ report_data.speedup_factor|default("0.0") }}x
{{ report_data.inference_time_reduction|default("0.0") }}% faster
Memory Saved
{{ report_data.memory_saved|default("0.0") }} MB
{{ report_data.memory_reduction|default("0.0") }}% reduction
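The four metric cards above are simple ratios of teacher-vs-student statistics. A minimal sketch of how `report_data` values like these could be derived; the `distillation_metrics` helper and its `size_mb`/`accuracy`/`latency_ms` keys are hypothetical, not part of the report pipeline:

```python
def distillation_metrics(teacher, student):
    """Derive the report-card metrics from raw teacher/student stats.

    Both arguments are dicts with (hypothetical) keys:
    size_mb, accuracy, latency_ms.
    """
    return {
        # Compression Rate card: how many times smaller the student is
        "compression_rate": round(teacher["size_mb"] / student["size_mb"], 1),
        "compression_improvement": round(
            (1 - student["size_mb"] / teacher["size_mb"]) * 100, 1),
        # Student Accuracy card: absolute accuracy and drop vs teacher
        "student_accuracy": student["accuracy"],
        "accuracy_drop": round(teacher["accuracy"] - student["accuracy"], 1),
        # Speed-up Factor card: latency ratio and percent reduction
        "speedup_factor": round(teacher["latency_ms"] / student["latency_ms"], 1),
        "inference_time_reduction": round(
            (1 - student["latency_ms"] / teacher["latency_ms"]) * 100, 1),
        # Memory Saved card: absolute MB saved and percent reduction
        "memory_saved": round(teacher["size_mb"] - student["size_mb"], 1),
        "memory_reduction": round(
            (1 - student["size_mb"] / teacher["size_mb"]) * 100, 1),
    }
```

A dict of this shape could be passed to the template as `report_data`, with the `|default` filters covering any missing keys.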

Distillation Process Timeline

Configuration Summary

Distillation Method {{ report_data.distillation_method|default("Knowledge Distillation") }}
Teacher Model {{ report_data.teacher_model|default("Not specified") }}
Student Architecture {{ report_data.student_architecture|default("Not specified") }}
Temperature {{ report_data.temperature|default("3.0") }}
Alpha (KD Loss Weight) {{ report_data.alpha|default("0.7") }}
Training Epochs {{ report_data.epochs|default("100") }}
Batch Size {{ report_data.batch_size|default("32") }}
Learning Rate {{ report_data.learning_rate|default("0.001") }}
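The Temperature and Alpha settings above parameterize the standard knowledge-distillation loss: logits are softened with temperature T before comparing teacher and student, and alpha weights that soft-target term against the hard-label cross-entropy. A minimal NumPy sketch under those assumptions (function names are illustrative, not the report pipeline's API):

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax; higher T flattens the distribution."""
    z = logits / temperature
    z = z - z.max()                      # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def kd_loss(student_logits, teacher_logits, true_label,
            temperature=3.0, alpha=0.7):
    """Combined distillation loss:
    alpha * T^2 * KL(teacher_soft || student_soft) + (1 - alpha) * hard CE.
    The T^2 factor keeps soft-target gradients on the same scale as T varies.
    """
    p_teacher = softmax(teacher_logits, temperature)
    p_student = softmax(student_logits, temperature)
    # KL divergence between softened teacher and student distributions
    soft = np.sum(p_teacher * (np.log(p_teacher) - np.log(p_student)))
    # Standard cross-entropy against the ground-truth label (T = 1)
    hard = -np.log(softmax(student_logits)[true_label])
    return alpha * (temperature ** 2) * soft + (1 - alpha) * hard
```

With the defaults shown (T = 3.0, alpha = 0.7, matching the table above), 70% of the loss comes from matching the teacher's softened distribution and 30% from the ground-truth labels.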

Model Size Comparison

Training Progress

Quick Insights

{% if report_data.insights %}
{% for insight in report_data.insights %}
{{ insight.message }}
{% endfor %}
{% else %}
The distillation process compressed the model by {{ report_data.compression_rate|default("N/A") }}x while retaining {{ report_data.accuracy_retention|default("N/A") }}% of the teacher's accuracy.
The student model runs {{ report_data.speedup_factor|default("N/A") }}x faster at inference and uses {{ report_data.memory_saved|default("N/A") }} MB less memory.
{% if report_data.warnings %}
{% for warning in report_data.warnings %}
{{ warning }}
{% endfor %}
{% endif %}
{% endif %}
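The expressions throughout this template rely on Jinja's `|default` filter to fall back gracefully when a metric is missing from `report_data`. A minimal rendering sketch, assuming the third-party `jinja2` package is installed:

```python
from jinja2 import Template  # third-party: pip install jinja2

# A single metric expression lifted from the template above.
tmpl = Template('{{ report_data.compression_rate|default("0.0") }}x')

# Missing key -> Jinja yields Undefined, so the default kicks in.
print(tmpl.render(report_data={}))                          # 0.0x
print(tmpl.render(report_data={"compression_rate": 4.2}))   # 4.2x
```

Because `report_data.compression_rate` falls back to item lookup on dicts, a plain dict works as the rendering context; any key left out simply renders its default.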