Make Stochastic Gradient Descent Fast Again (Ep. 113)

Episode 113 · Published 5 years, 9 months ago
Description

There is definitely room for improvement in the stochastic gradient descent family of algorithms. In this episode I explain a relatively simple method that has been shown to improve on the Adam optimizer. But watch out: this approach does not generalize well.
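
For context (the description does not name the method, so no attempt is made to reproduce it here), below is a minimal Python sketch of the two baselines it mentions: plain SGD and the Adam optimizer (Kingma & Ba, 2015). Function names, hyperparameter defaults, and the toy problem are illustrative assumptions, not taken from the episode.

    import numpy as np

    def sgd_step(w, grad, lr=0.01):
        # Plain stochastic gradient descent: step against the gradient.
        return w - lr * grad

    def adam_step(w, grad, m, v, t, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
        # One Adam update: scale the step by bias-corrected running
        # estimates of the gradient's first and second moments.
        # t is the 1-based step count.
        m = beta1 * m + (1 - beta1) * grad
        v = beta2 * v + (1 - beta2) * grad ** 2
        m_hat = m / (1 - beta1 ** t)
        v_hat = v / (1 - beta2 ** t)
        return w - lr * m_hat / (np.sqrt(v_hat) + eps), m, v

    # Toy usage: minimize f(w) = (w - 3)^2, whose gradient is 2 * (w - 3).
    w, m, v = 0.0, 0.0, 0.0
    for t in range(1, 501):
        w, m, v = adam_step(w, 2 * (w - 3), m, v, t, lr=0.1)
    # w is now close to the minimizer 3.0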

Join our Discord channel and chat with us.
