Tian to develop approaches to efficient distributed learning
Zhi Tian, Professor, Electrical and Computer Engineering, will receive funding from the National Science Foundation for a project in which she will develop communication-efficient and robust approaches to distributed stochastic optimization, enabling collaborative learning from private data in big-data computing.
In distributed collaborative learning, each of several machines has access only to its own private samples, while seeking to collaboratively learn a common model that describes all of these samples. Building on the distributed stochastic optimization framework, the goal of this project is to jointly optimize the expected objective with respect to the shared learning model while minimizing (1) overall runtime, (2) communication costs, and (3) the total number of samples used.
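To make the setting concrete, here is a minimal sketch of distributed stochastic optimization: each worker holds private samples and all workers collaboratively fit one shared model by exchanging only gradients, never raw data. The least-squares objective, worker count, and all parameter names are illustrative assumptions, not the project's actual formulation.

```python
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])  # ground-truth model (illustrative)

# Each of 4 workers keeps its own private samples (never shared).
workers = []
for _ in range(4):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + 0.1 * rng.normal(size=50)
    workers.append((X, y))

w = np.zeros(2)  # shared learning model, updated collaboratively
lr = 0.05
for step in range(200):
    grads = []
    for X, y in workers:
        # Each worker computes a stochastic gradient on a private mini-batch...
        idx = rng.integers(0, len(y), size=10)
        Xb, yb = X[idx], y[idx]
        grads.append(Xb.T @ (Xb @ w - yb) / len(yb))
    # ...and only the gradients (not the samples) are aggregated.
    w -= lr * np.mean(grads, axis=0)

print(np.round(w, 2))  # approaches true_w without any data leaving a worker
```

Every message here costs bandwidth, which is why reducing the number and size of these exchanges is central to the project.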
For this project, Tian will introduce a communication-censoring framework to effectively reduce message exchanges among distributed nodes, while globally optimizing a shared learning model with provable convergence, even in the absence of any central coordination or synchrony.
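The censoring idea can be sketched as follows: a node transmits its update only when it differs enough from the last value it sent; otherwise the receiver reuses the stale copy and the message is saved. The threshold rule and all names below are illustrative assumptions, not the project's actual algorithm.

```python
import numpy as np

def censored_send(update, last_sent, threshold):
    """Return (value_for_receiver, did_transmit)."""
    if np.linalg.norm(update - last_sent) > threshold:
        return update, True    # informative enough: transmit the new value
    return last_sent, False    # censored: reuse stale value, save a message

# Demo: as local updates shrink over iterations, most messages get censored.
last = np.zeros(2)
sent = 0
for t in range(50):
    u = np.array([1.0, 1.0]) / (t + 1)  # updates decay as the model converges
    last, tx = censored_send(u, last, threshold=0.05)
    sent += tx
print(sent)  # far fewer than 50 transmissions
```

The point of the accompanying theory is that, with a suitably chosen threshold schedule, this saving comes without sacrificing convergence to the globally optimal shared model.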
She will also develop new mechanisms, theory, and algorithms for federated machine learning in which the distributed nodes communicate in either a centrally coordinated mode or a fully decentralized mode.
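The two communication modes can be contrasted with a toy consensus example: a central server averages all local models at once, whereas decentralized nodes repeatedly average only with their neighbors and still reach the same consensus. The ring topology and uniform weights are assumptions chosen for illustration.

```python
models = [1.0, 3.0, 5.0, 7.0]  # each node's local model (scalar for simplicity)

# Centrally coordinated (federated) mode: a server averages all local models.
server_avg = sum(models) / len(models)

# Fully decentralized mode: each node averages with its two ring neighbors;
# repeated rounds drive every node toward the same consensus value.
x = models[:]
for _ in range(30):
    x = [(x[(i - 1) % 4] + x[i] + x[(i + 1) % 4]) / 3 for i in range(4)]

print(server_avg, x)  # decentralized values converge to the server average
```

The decentralized mode trades a single round of global communication for many local rounds, which motivates the project's efficiency-versus-robustness analysis.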
Additionally, Tian will conduct rigorous analyses to delineate the convergence conditions, convergence rates, and tradeoff between efficiency and robustness.
This research will help prepare students for work in science and engineering in the era of big data.
Tian will receive $423,100 for this research. Funding will begin in October 2020 and will end in late September 2023.