Brain Networks Laboratory (Choe Lab)

How to find hyper-parameters more efficiently

Apr 3, 2018

https://arxiv.org/abs/1803.09820v1

A disciplined approach to neural network hyper-parameters: Part 1 – learning rate, batch size, momentum, and weight decay

Leslie N. Smith

Although deep learning has produced dazzling successes for applications of image, speech, and video processing in the past few years, most training is done with suboptimal hyper-parameters, requiring unnecessarily long training times. Setting the hyper-parameters remains a black art that requires years of experience to acquire. This report proposes several efficient ways to set the hyper-parameters that significantly reduce training time and improve performance. Specifically, this report shows how to examine the training and validation/test loss curves for subtle clues of underfitting and overfitting, and suggests guidelines for moving toward the optimal balance point. Then it discusses how to increase/decrease the learning rate/momentum to speed up training. Our experiments show that it is crucial to balance every manner of regularization for each dataset and architecture. Weight decay is used as a sample regularizer to show how its optimal value is tightly coupled with the learning rate and momentum.
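
The report's first step for choosing the learning rate is Smith's "LR range test": train for a few hundred iterations while the learning rate grows from a very small value to a very large one, record the loss at each step, and take the maximum learning rate from just below the point where the loss starts to diverge. Below is a minimal sketch of the idea, assuming PyTorch-style `model`, `optimizer`, `loss_fn`, and `train_loader` placeholders (this is not code from the paper):

```python
import math

def lr_range_test(model, optimizer, loss_fn, train_loader,
                  min_lr=1e-7, max_lr=10.0, num_iters=200):
    """Sweep the learning rate exponentially from min_lr to max_lr,
    recording (lr, loss) pairs; a good max_lr for real training sits
    just below where the recorded loss starts to blow up."""
    gamma = (max_lr / min_lr) ** (1.0 / num_iters)  # per-step LR multiplier
    lr, history = min_lr, []
    data_iter = iter(train_loader)
    for _ in range(num_iters):
        try:
            inputs, targets = next(data_iter)
        except StopIteration:          # restart the loader if it runs out
            data_iter = iter(train_loader)
            inputs, targets = next(data_iter)
        for group in optimizer.param_groups:
            group["lr"] = lr           # apply the current sweep LR
        optimizer.zero_grad()
        loss = loss_fn(model(inputs), targets)
        loss.backward()
        optimizer.step()
        history.append((lr, loss.item()))
        if not math.isfinite(loss.item()):
            break                      # loss diverged; end the sweep early
        lr *= gamma
    return history                     # plot loss vs. lr to pick max_lr
```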

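The abstract's "increase/decrease the learning rate/momentum" refers to the report's 1cycle policy: a single cycle in which the learning rate ramps linearly up to max_lr and back down, while momentum moves in the opposite direction. The sketch below is simplified (the report also ends training with a short phase that anneals the learning rate well below its starting value, omitted here), and the div_factor and momentum bounds are illustrative assumptions rather than values prescribed verbatim:

```python
def one_cycle(step, total_steps, max_lr,
              div_factor=10.0, max_momentum=0.95, base_momentum=0.85):
    """Triangular 1cycle schedule: LR rises linearly from
    max_lr/div_factor to max_lr over the first half of training and
    falls back over the second half, while momentum does the reverse
    (high -> low -> high)."""
    base_lr = max_lr / div_factor
    half = total_steps / 2.0
    # Fraction of the way toward the peak: 0 at the ends, 1 at mid-training.
    frac = step / half if step <= half else (total_steps - step) / half
    lr = base_lr + frac * (max_lr - base_lr)
    momentum = max_momentum - frac * (max_momentum - base_momentum)
    return lr, momentum

# Usage inside a training loop: push the scheduled values into the
# optimizer before each step (SGD with momentum assumed).
# for step in range(total_steps):
#     lr, mom = one_cycle(step, total_steps, max_lr=0.1)
#     for group in optimizer.param_groups:
#         group["lr"], group["momentum"] = lr, mom
```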
