Episode Details


Compressing deep learning models: distillation (Ep.104)

Episode 104, published 5 years, 11 months ago
Description

Running large deep learning models on limited hardware or edge devices is often prohibitive. However, there are methods that compress large models by orders of magnitude while maintaining comparable accuracy at inference time.

In this episode I explain one of the earliest such methods: knowledge distillation.
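The core idea is compact enough to sketch in a few lines: a small student network is trained to match the temperature-softened output distribution of a large teacher, in addition to the usual hard labels. Below is a minimal PyTorch sketch of the standard distillation loss from Hinton et al. (2015); the function name and the hyperparameter values are illustrative, not taken from the episode.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=4.0, alpha=0.5):
    """Hinton-style knowledge distillation loss (illustrative sketch).

    Combines cross-entropy on the hard labels with a KL divergence
    between the temperature-softened teacher and student
    distributions. `temperature` and `alpha` are typical example
    values, not settings discussed in the episode.
    """
    # Soften both output distributions with the temperature.
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    log_soft_student = F.log_softmax(student_logits / temperature, dim=-1)

    # The KL term is scaled by T^2 so its gradient magnitude stays
    # comparable to the hard-label term as the temperature changes.
    kd_term = F.kl_div(log_soft_student, soft_teacher,
                       reduction="batchmean") * temperature ** 2

    # Standard cross-entropy against the ground-truth labels.
    ce_term = F.cross_entropy(student_logits, labels)

    return alpha * kd_term + (1 - alpha) * ce_term
```

In practice the teacher is frozen and only the student's parameters receive gradients, so the teacher logits would be computed under `torch.no_grad()`.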

Come join us on Slack.

