Preprint Article, Version 1 (not peer-reviewed; preserved in Portico)

Convergence Rate Analysis of Non-I.I.D. SplitFed Learning with Partial Worker Participation and Auxiliary Networks

Version 1 : Received: 2 September 2024 / Approved: 2 September 2024 / Online: 4 September 2024 (08:29:42 CEST)

How to cite: Talebi, A. Convergence Rate Analysis of Non-I.I.D. SplitFed Learning with Partial Worker Participation and Auxiliary Networks. Preprints 2024, 2024090335. https://doi.org/10.20944/preprints202409.0335.v1

Abstract

In conventional Federated Learning (FL), clients collaborate to train a model coordinated by a central server, with the aim of speeding up learning. However, this approach imposes significant computational and communication burdens on clients, particularly for complex models. Moreover, although FL strives to protect client privacy, the server's access to the local and global models raises security concerns. To address these challenges, Split Learning (SL) partitions the model into a client-side part and a server-side part, but it suffers from inefficiency because clients must participate sequentially. SplitFed Learning (SFL) overcomes this limitation by combining the parallelism of FL with the model-splitting strategy of SL, enabling multiple clients to train simultaneously. Our main contribution is a theoretical analysis of SFL that, for the first time, covers non-i.i.d. datasets, non-convex loss functions, and both full and partial client participation. We provide convergence proofs for a state-of-the-art SFL algorithm under the standard assumptions of conventional FL convergence analysis. Our results show that the SFL algorithm recovers the linear convergence rate of conventional FL, with the distinction that increasing the number of local steps or the number of clients may not speed up convergence in SFL.
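To make the training pattern described in the abstract concrete, the following is a minimal PyTorch sketch of one SplitFed-style round with partial worker participation and a client-side auxiliary head. The cut point, layer sizes, FedAvg-style averaging of the client-side parts, and the use of a local auxiliary loss as the client's training signal are all illustrative assumptions, one plausible realization of the setup the abstract describes, not the paper's exact algorithm.

```python
# Illustrative sketch of a SplitFed-style round (assumed details, not the
# paper's algorithm): clients train the bottom of the model with a local
# auxiliary head, the server trains the top, and client parts are averaged.
import copy
import torch
import torch.nn as nn

torch.manual_seed(0)

CLIENT_PART = nn.Sequential(nn.Linear(20, 16), nn.ReLU())          # below the cut
SERVER_PART = nn.Sequential(nn.Linear(16, 8), nn.ReLU(), nn.Linear(8, 2))

def make_client():
    # Each client holds a copy of the client-side model plus a small auxiliary
    # head, so it can compute a local loss without waiting for server gradients.
    return {"body": copy.deepcopy(CLIENT_PART), "aux": nn.Linear(16, 2)}

clients = [make_client() for _ in range(4)]
server = copy.deepcopy(SERVER_PART)
criterion = nn.CrossEntropyLoss()

def local_step(client, x, y, lr=0.1):
    # Client forward pass up to the cut layer; the auxiliary head supplies the
    # local training signal (one way to realize "auxiliary networks").
    params = list(client["body"].parameters()) + list(client["aux"].parameters())
    opt = torch.optim.SGD(params, lr=lr)
    opt.zero_grad()
    smashed = client["body"](x)
    criterion(client["aux"](smashed), y).backward()
    opt.step()
    return smashed.detach()                  # "smashed data" sent to the server

def server_step(smashed, y, lr=0.1):
    # Server trains its part on the received activations; these steps are
    # independent across clients and could run in parallel.
    opt = torch.optim.SGD(server.parameters(), lr=lr)
    opt.zero_grad()
    loss = criterion(server(smashed), y)
    loss.backward()
    opt.step()
    return loss.item()

def aggregate(participating):
    # FedAvg-style averaging of the client-side parts; with partial
    # participation only the sampled subset contributes each round.
    avg = copy.deepcopy(participating[0]["body"].state_dict())
    for key in avg:
        avg[key] = torch.stack(
            [c["body"].state_dict()[key] for c in participating]).mean(0)
    for c in clients:
        c["body"].load_state_dict(avg)

# One illustrative run over synthetic batches (stand-ins for non-i.i.d. data).
for rnd in range(3):
    participating = clients[:2]               # partial worker participation
    for c in participating:
        x, y = torch.randn(32, 20), torch.randint(0, 2, (32,))
        loss = server_step(local_step(c, x, y), y)
    aggregate(participating)
    print(f"round {rnd}: server loss {loss:.3f}")
```

Because the auxiliary head decouples the clients' local updates from the server's backward pass, the clients can proceed in parallel, which is the key difference from plain SL's sequential client participation.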

Keywords

SplitFed Learning; Convergence Theory; Federated Learning; Auxiliary Networks; Machine Learning

Subject

Computer Science and Mathematics, Computer Science
