- Article
A Multi-Teacher Knowledge Distillation Framework with Aggregation Techniques for Lightweight Deep Models
- Ahmed Hamdi, Hassan N. Noura and Joseph Azar
Knowledge Distillation (KD) is a machine learning technique in which a compact student model learns to replicate the performance of a larger teacher model by mimicking its output predictions. Multi-Teacher Knowledge Distillation extends this paradigm...
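
To make the idea concrete, the snippet below is a minimal PyTorch sketch of a multi-teacher distillation loss. It is not the aggregation technique proposed in this article: it simply averages the teachers' temperature-softened output distributions into one soft target and mixes a KL-divergence term with the usual cross-entropy on the hard labels. The function name `multi_teacher_kd_loss` and the hyperparameters `temperature` and `alpha` are illustrative assumptions, not values from the paper.

```python
# Minimal multi-teacher KD sketch (assumption: teachers aggregated by simple averaging).
import torch
import torch.nn.functional as F

def multi_teacher_kd_loss(student_logits, teacher_logits_list, labels,
                          temperature=4.0, alpha=0.5):
    """Blend averaged teacher soft targets with the ground-truth loss."""
    # Soft targets: average the teachers' temperature-scaled distributions.
    teacher_probs = torch.stack(
        [F.softmax(t / temperature, dim=-1) for t in teacher_logits_list]
    ).mean(dim=0)

    # Student distribution at the same temperature (log-probs for KL divergence).
    student_log_probs = F.log_softmax(student_logits / temperature, dim=-1)

    # Distillation term, scaled by T^2 as in Hinton et al. (2015).
    kd_loss = F.kl_div(student_log_probs, teacher_probs,
                       reduction="batchmean") * temperature ** 2

    # Supervised term on the hard labels.
    ce_loss = F.cross_entropy(student_logits, labels)

    return alpha * kd_loss + (1.0 - alpha) * ce_loss

# Example usage with random tensors standing in for model outputs.
if __name__ == "__main__":
    batch, num_classes = 8, 10
    student = torch.randn(batch, num_classes)
    teachers = [torch.randn(batch, num_classes) for _ in range(3)]
    labels = torch.randint(0, num_classes, (batch,))
    print(multi_teacher_kd_loss(student, teachers, labels))
```

In practice, the weighting between teachers and the value of `alpha` are design choices; uniform averaging is only the simplest baseline that more elaborate aggregation schemes build on.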

