Game theory in split learning

Faculty Advisor

Chao Huang

Start Date

April 25, 2025, 1:30 PM

End Date

April 25, 2025, 2:29 PM

Description

Split Learning is a distributed deep learning paradigm in which multiple clients collaboratively train a shared model through a central server while preserving data privacy. Each client processes its own data and transmits only intermediate activations or gradients, so raw data never leaves the client. In this work, we cast client participation in Split Learning as a game-theoretic optimization problem. Each client independently selects a participation probability to maximize its utility, balancing the benefit of contributing to overall model performance against the computational and communication costs incurred. Key factors in this decision include individual resource constraints, potential incentives tied to improved model accuracy, and the broader impact of participation on the learning process. The server, in turn, must secure a sufficient level of participation, which is critical for stable model convergence and effective training dynamics. To capture this interaction between clients and the server, we formulate the problem as a non-cooperative game and seek a Nash equilibrium, in which every client’s strategy is optimal given the strategies adopted by the others. We propose an iterative best-response algorithm to compute the equilibrium participation levels, allowing clients to update their strategies dynamically based on both observed interactions and anticipated responses from other participants. This framework provides a structured approach to understanding how strategic behavior and resource limitations among clients influence the stability and efficiency of distributed training.
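
The proposed iterative best-response procedure can be illustrated with a small numerical sketch. The Python snippet below assumes a hypothetical utility in which client i earns a logarithmic benefit a_i * log(1 + total participation) from aggregate participation and pays a linear cost c_i * p_i; the utility form, the coefficients a and c, and the convergence tolerance are illustrative assumptions, not the model from the poster.

import numpy as np

def best_response(i, p, a, c):
    """Client i's best response given the others' participation probabilities.

    Assumed (illustrative) utility:
        u_i(p_i, p_-i) = a_i * log(1 + p_i + sum_{j != i} p_j) - c_i * p_i
    The log term rewards aggregate participation (a better shared model);
    the linear term charges computation/communication cost. Setting the
    derivative a_i / (1 + p_i + S) - c_i to zero and projecting onto
    [0, 1] yields the closed-form best response below.
    """
    S = p.sum() - p[i]                       # others' total participation
    return float(np.clip(a[i] / c[i] - 1.0 - S, 0.0, 1.0))

def iterative_best_response(a, c, tol=1e-6, max_iters=1000):
    """Round-robin best-response dynamics; a fixed point is a Nash equilibrium."""
    n = len(a)
    p = np.full(n, 0.5)                      # start from uniform participation
    for _ in range(max_iters):
        p_prev = p.copy()
        for i in range(n):                   # each client updates in turn
            p[i] = best_response(i, p, a, c)
        if np.max(np.abs(p - p_prev)) < tol:
            break
    return p

# Example: three clients with heterogeneous (hypothetical) benefit/cost ratios.
a = np.array([4.0, 3.0, 2.0])                # benefit weights
c = np.array([1.0, 1.5, 2.0])                # per-round costs
print(iterative_best_response(a, c))         # equilibrium participation levels

With these example parameters the dynamics converge in two rounds, and only the client with the highest benefit-to-cost ratio participates at equilibrium; this free-riding outcome illustrates why the server may need incentives to secure a sufficient level of participation.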

Comments

Poster presentation at the 2025 Student Research Symposium.
