Document Type
Conference Proceeding
Publication Date
1-1-2024
Journal / Book Title
Advances in Neural Information Processing Systems
Abstract
Split federated learning (SFL) is a recent distributed approach for collaborative model training among multiple clients. In SFL, a global model is typically split into two parts: clients train one part in a parallel, federated manner, while a main server trains the other. Despite recent research on SFL algorithm development, a convergence analysis of SFL is missing from the literature, and this paper aims to fill that gap. The analysis of SFL can be more challenging than that of federated learning (FL), due to the potential dual-paced updates at the clients and the main server. We provide a convergence analysis of SFL for strongly convex and general convex objectives on heterogeneous data. The convergence rates are O(1/T) and O(1/∛T), respectively, where T denotes the total number of rounds for SFL training. We further extend the analysis to non-convex objectives and to the scenario where some clients may be unavailable during training. Experiments validate our theoretical results and show that SFL outperforms FL and split learning (SL) when data is highly heterogeneous across a large number of clients.
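To make the dual-paced update pattern described in the abstract concrete, below is a minimal sketch of SFL training rounds in PyTorch. It is illustrative only, not the paper's implementation: the model split, the synthetic heterogeneous data, the hyperparameters, and the helper names (make_client_part, make_server_part) are all assumptions. The main server updates its part once per client interaction, while the client-side parts are federated-averaged once per round, which is what makes the two sides update at different paces.

```python
# Illustrative SFL sketch (not the paper's code). Assumed names:
# make_client_part / make_server_part, and a synthetic regression task.
import copy
import torch
import torch.nn as nn

torch.manual_seed(0)

def make_client_part():            # client-side part of the split model
    return nn.Sequential(nn.Linear(10, 16), nn.ReLU())

def make_server_part():            # server-side part of the split model
    return nn.Sequential(nn.Linear(16, 1))

server_part = make_server_part()
server_opt = torch.optim.SGD(server_part.parameters(), lr=0.1)

n_clients = 4
client_parts = [make_client_part() for _ in range(n_clients)]

# Synthetic heterogeneous data: each client samples from a shifted distribution.
client_data = []
for k in range(n_clients):
    x = torch.randn(32, 10) + k
    y = x.sum(dim=1, keepdim=True)
    client_data.append((x, y))

loss_fn = nn.MSELoss()

for rnd in range(5):               # T rounds of SFL training
    for k, (x, y) in enumerate(client_data):
        client_opt = torch.optim.SGD(client_parts[k].parameters(), lr=0.1)

        # Client forward pass; the cut-layer activations are sent to the server.
        smashed = client_parts[k](x)
        server_in = smashed.detach().requires_grad_()

        # Main server completes the forward/backward pass and updates its part
        # (one server update per client: the faster of the two paces).
        server_opt.zero_grad()
        loss = loss_fn(server_part(server_in), y)
        loss.backward()
        server_opt.step()

        # Server returns the cut-layer gradient; client backpropagates locally.
        client_opt.zero_grad()
        smashed.backward(server_in.grad)
        client_opt.step()

    # Fed server averages the client-side parts once per round (FedAvg step:
    # the slower pace).
    avg_state = copy.deepcopy(client_parts[0].state_dict())
    for key in avg_state:
        avg_state[key] = torch.stack(
            [cp.state_dict()[key] for cp in client_parts]).mean(dim=0)
    for cp in client_parts:
        cp.load_state_dict(avg_state)

print("final loss on client 0:",
      loss_fn(server_part(client_parts[0](client_data[0][0])),
              client_data[0][1]).item())
```

The detach()/requires_grad_() step at the cut layer lets the server backpropagate through its part alone and hand the cut-layer gradient back to the client, so neither side needs the other's parameters or raw data.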
Journal ISSN / Book ISBN
105000493690 (Scopus)
Montclair State University Digital Commons Citation
Han, Pengchao; Huang, Chao; Tian, Geng; Tang, Ming; and Liu, Xin, "Convergence Analysis of Split Federated Learning on Heterogeneous Data" (2024). School of Computing Faculty Scholarship and Creative Works. 24.
https://digitalcommons.montclair.edu/computing-facpubs/24