Document Type

Conference Proceeding

Publication Date

1-1-2023

Journal / Book Title

Proceedings of the IEEE Global Communications Conference (GLOBECOM)

Abstract

Distributed ensemble learning (DEL) involves training multiple models at distributed learners and then combining their predictions to improve performance. Existing studies focus on algorithm development but ignore the important issue of incentives, without which self-interested learners may be unwilling to participate. We aim to fill this gap by presenting a first study of incentive mechanism design in DEL. The mechanism specifies both the training data and the reward for learners with heterogeneous computation and communication costs. One challenge is that it is unclear how learners' diversity (in terms of training data) contributes to the ensemble accuracy. To this end, we decompose the ensemble accuracy into a diversity-precision tradeoff to guide the mechanism design. Another challenge is that the mechanism design is a mixed-integer program with a large search space. To address this, we propose an alternating algorithm that iteratively updates each learner's training data size and reward. We prove that the algorithm converges and has complexity polynomial in the number of learners. Numerical results using the MNIST dataset are consistent with our analysis. Interestingly, we show that the mechanism may prefer a lower level of learner diversity to achieve a higher ensemble accuracy. Our code is made publicly available.
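Below is a minimal, hypothetical sketch of the alternating idea described in the abstract: coordinate-wise updates of each learner's integer training data size, with rewards set to cover heterogeneous per-sample costs under a budget. The objective `accuracy_proxy`, the cost model, and the budget constraint are illustrative assumptions, not the paper's actual formulation.

```python
# Hypothetical sketch of an alternating incentive-mechanism heuristic for DEL.
# All symbols (accuracy_proxy, costs_per_sample, budget) are illustrative
# assumptions, not the formulation used in the paper.

import numpy as np

def accuracy_proxy(n, diversity_weight=0.3):
    """Toy stand-in for a diversity-precision decomposition:
    precision grows with each learner's data size, diversity with
    the spread of sizes across learners."""
    precision = np.mean(1.0 - np.exp(-n / 1000.0))
    diversity = np.std(n) / (np.mean(n) + 1e-9)
    return precision + diversity_weight * diversity

def alternating_mechanism(costs_per_sample, budget, n_max=5000, iters=50):
    """Alternately update each learner's (integer) data size; rewards are
    set to just cover each learner's cost (individual rationality), and
    total payments must stay within the budget."""
    K = len(costs_per_sample)
    n = np.full(K, 100)                            # initial data sizes
    for _ in range(iters):
        updated = False
        for k in range(K):                         # coordinate-wise integer search
            best_nk, best_obj = n[k], -np.inf
            for cand in range(0, n_max + 1, 100):
                trial = n.copy()
                trial[k] = cand
                pay = trial * costs_per_sample     # reward = cost incurred
                if pay.sum() > budget:
                    continue
                obj = accuracy_proxy(trial)
                if obj > best_obj:
                    best_obj, best_nk = obj, cand
            if best_nk != n[k]:
                n[k] = best_nk
                updated = True
        if not updated:                            # converged: no learner changes
            break
    rewards = n * costs_per_sample
    return n, rewards

if __name__ == "__main__":
    sizes, rewards = alternating_mechanism(
        costs_per_sample=np.array([0.002, 0.004, 0.003]), budget=20.0)
    print("data sizes:", sizes, "rewards:", np.round(rewards, 2))
```

Each outer pass visits every learner once, so the per-iteration cost scales with the number of learners times the size of each learner's candidate grid, consistent with the polynomial complexity claimed in the abstract.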

DOI

10.1109/GLOBECOM54140.2023.10436862

Journal ISSN / Book ISBN

85187405858 (Scopus)
