Parallel and distributed algorithms have become a necessity for modern machine learning tasks. In this work, we focus on parallel asynchronous gradient descent and propose a zealous variant that minimizes processor idle time to achieve a substantial speedup. We then study this algorithm experimentally in the context of training a restricted Boltzmann machine on a large collaborative filtering task.
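For readers without access to the full paper, the sketch below illustrates only the general idea of asynchronous parallel gradient descent: several workers repeatedly compute gradients on their own data shards and apply them to a shared parameter vector immediately, without waiting for one another. It is not the zealous variant or the restricted Boltzmann machine setup studied in the paper; the least-squares objective, the synthetic data, and all names (worker, step_size, shards, ...) are illustrative assumptions, and a real implementation would use processes or MPI rather than Python threads to obtain an actual speedup.

import threading

import numpy as np

# Synthetic least-squares problem; sizes and values are arbitrary.
rng = np.random.default_rng(0)
n_samples, n_features = 10_000, 20
X = rng.normal(size=(n_samples, n_features))
y = X @ rng.normal(size=n_features) + 0.1 * rng.normal(size=n_samples)

w = np.zeros(n_features)    # shared parameter vector, updated by all workers
step_size = 1e-3
batch_size = 32
steps_per_worker = 2_000
n_workers = 4


def worker(shard, seed):
    """Repeatedly compute a mini-batch gradient on this worker's data shard
    and apply it to the shared parameters immediately, without waiting for
    (or synchronizing with) the other workers."""
    global w
    local_rng = np.random.default_rng(seed)
    Xs, ys = X[shard], y[shard]
    for _ in range(steps_per_worker):
        idx = local_rng.integers(0, len(Xs), size=batch_size)
        xb, yb = Xs[idx], ys[idx]
        grad = 2.0 * xb.T @ (xb @ w - yb) / batch_size   # least-squares gradient
        w -= step_size * grad                            # in-place, lock-free update


shards = np.array_split(np.arange(n_samples), n_workers)
threads = [threading.Thread(target=worker, args=(s, i)) for i, s in enumerate(shards)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print("final training loss:", np.mean((X @ w - y) ** 2))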
Disciplines:
Computer science
Author, co-author:
Louppe, Gilles; Université de Liège - ULiège > Department of Electrical Engineering and Computer Science (Montefiore Institute) > Systems and Modeling
Geurts, Pierre; Université de Liège - ULiège > Department of Electrical Engineering and Computer Science (Montefiore Institute) > Systems and Modeling
Language:
English
Title:
A zealous parallel gradient descent algorithm
Publication date:
11 December 2010
Event name:
NIPS 2010 Workshop on Learning on Cores, Clusters and Clouds