Document Type

Conference Proceeding

Publication Date

1-1-2024

Journal / Book Title

Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

Abstract

In competitive distributed learning, organizations face the challenge of collaboratively training machine learning models without sharing sensitive raw data, while competing for the same customer base with model-based services. Federated learning is an extensively studied distributed learning approach, but it has been shown to discourage collaboration in a competitive environment. The reason is that the shared global model is a public good, which can lead to intense competition among organizations and hence weak incentives for collaboration. To address this issue, this paper uses SplitFed learning (SFL) for model training and proposes an accuracy-shaping mechanism to incentivize inter-organizational collaboration. SFL divides the global model into two components: one trained by the organizations and the other by a main server. After convergence, the mechanism introduces customized noise into the main server’s model, enabling the provision of differentiated models to each organization. Both our theoretical analysis and numerical experiments validate the efficacy of SFL and the proposed mechanism, showing significant improvements in both model accuracy and social welfare at equilibrium.
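The accuracy-shaping idea described above can be illustrated with a minimal sketch: after convergence, the server produces one copy of its sub-model per organization, each perturbed with noise of a different magnitude so that organizations receive models of differentiated accuracy. The function name, the use of Gaussian noise, and the per-organization scales are illustrative assumptions, not the paper's exact mechanism.

```python
import numpy as np

def perturb_server_model(weights, noise_scales, seed=0):
    """Return one noised copy of the server-side sub-model per organization.

    weights: list of numpy arrays (the main server's sub-model after convergence).
    noise_scales: one standard deviation per organization; a larger scale
    degrades that organization's copy more, yielding differentiated accuracy.
    Hypothetical illustration: Gaussian noise is an assumption here.
    """
    rng = np.random.default_rng(seed)
    return [
        [w + rng.normal(0.0, scale, size=w.shape) for w in weights]
        for scale in noise_scales
    ]

# Toy two-layer server model; org 0 gets a nearly clean copy, org 1 a noisier one.
server_weights = [np.ones((2, 2)), np.zeros(3)]
models = perturb_server_model(server_weights, noise_scales=[0.01, 0.5])
```

In this sketch the noise scales would be chosen by the incentive mechanism so that each organization's resulting model accuracy reflects its contribution to training.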

DOI

10.1007/978-3-031-72347-6_10

Journal ISSN / Book ISBN

85205300801 (Scopus)
