Reference : Convergence Analysis of Decentralized ASGD
E-prints/Working papers : Already available on another site
Engineering, computing & technology : Computer science
http://hdl.handle.net/10993/56001
Convergence Analysis of Decentralized ASGD
English
Dalle Lucca Tosi, Mauro [University of Luxembourg > Faculty of Science, Technology and Medicine (FSTM) > Department of Computer Science (DCS)]
Theobald, Martin [University of Luxembourg > Faculty of Science, Technology and Medicine (FSTM) > Department of Computer Science (DCS)]
7-Sep-2023
No
[en] SGD ; asynchronous ; decentralized ; ASGD
[en] Over the last decades, Stochastic Gradient Descent (SGD) has been studied intensively by the Machine Learning community. Despite its versatility and excellent performance, optimizing large models via SGD is still a time-consuming task. To reduce training time, it is common to distribute the training process across multiple devices. Recently, it has been shown that asynchronous SGD (ASGD) always converges faster than mini-batch SGD. However, despite these improvements in the theoretical bounds, most ASGD convergence-rate proofs still rely on a centralized parameter server, which is prone to becoming a bottleneck when the gradient computations are scaled out across many distributed processes.

In this paper, we present a novel convergence-rate analysis for decentralized and asynchronous SGD (DASGD) which requires neither partial synchronization among nodes nor restrictive network topologies. Specifically, we provide a bound of O(σε^-2) + O(Q S_avg ε^-3/2) + O(S_avg ε^-1) for the convergence rate of DASGD, where S_avg is the average staleness between models, Q is a constant that bounds the norm of the gradients, and ε is a (small) error that is allowed within the bound. Furthermore, when gradients are not bounded, we prove the convergence rate of DASGD to be O(σε^-2) + O(√(Ŝ_avg Ŝ_max) ε^-1), with Ŝ_avg and Ŝ_max representing loose versions of the average and maximum staleness, respectively. Our convergence proof holds for a fixed stepsize and any non-convex, homogeneous, and L-smooth objective function. We anticipate that our results will be of high relevance for the adoption of DASGD by a broad community of researchers and developers.
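For illustration of the setting described in the abstract, the following is a minimal, hypothetical Python sketch of a decentralized asynchronous SGD loop: each worker updates its own model copy without global synchronization and averages with a randomly chosen peer. The toy quadratic objective, the random-neighbor gossip step, and all parameter names are assumptions made for this sketch only; it is not the algorithm or proof setup analyzed in the paper.

    # Illustrative sketch of decentralized asynchronous SGD (DASGD).
    # Assumptions (not from the paper): a toy quadratic objective,
    # random-neighbor gossip averaging, and a fixed stepsize.
    import threading
    import random
    import numpy as np

    DIM, WORKERS, STEPS, STEPSIZE = 10, 4, 1000, 0.01
    target = np.random.randn(DIM)                  # minimizer of the toy objective
    models = [np.zeros(DIM) for _ in range(WORKERS)]
    locks = [threading.Lock() for _ in range(WORKERS)]

    def stochastic_gradient(x):
        # Gradient of 0.5 * ||x - target||^2 plus noise (plays the role of σ).
        return (x - target) + 0.1 * np.random.randn(DIM)

    def worker(i):
        for _ in range(STEPS):
            with locks[i]:
                local = models[i].copy()
            grad = stochastic_gradient(local)      # computed on a possibly stale copy
            with locks[i]:
                models[i] -= STEPSIZE * grad       # asynchronous local update
            j = random.randrange(WORKERS)          # gossip with a random peer;
            if j != i:                             # no global synchronization barrier
                with locks[min(i, j)]:
                    with locks[max(i, j)]:         # fixed lock order avoids deadlock
                        avg = 0.5 * (models[i] + models[j])
                        models[i], models[j] = avg.copy(), avg.copy()

    threads = [threading.Thread(target=worker, args=(i,)) for i in range(WORKERS)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print("distance to optimum:", np.linalg.norm(np.mean(models, axis=0) - target))

In this sketch, the delay between reading a model copy and applying the resulting gradient is what the staleness terms S_avg, Ŝ_avg, and Ŝ_max in the bounds above quantify.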
Researchers
https://arxiv.org/pdf/2309.03754.pdf
https://arxiv.org/abs/2309.03754
FnR ; FNR12252781 > Andreas Zilian > DRIVEN > Data-driven Computational Modelling And Applications > 01/09/2018 > 28/02/2025 > 2017

There is no file associated with this reference.

