Classifier Fusion using Shared Sampling Distribution for Boosting
We present a new framework for classifier fusion that uses a shared sampling distribution to obtain a weighted classifier ensemble. The weight-update process is self-regularizing, as classifiers trained on the disjoint views in subsequent iterations rectify the bias introduced by any classifier in preceding iterations. We provide theoretical guarantees that our approach yields better results than boosting each view separately. The results outperform other classifier fusion strategies on a well-known texture image database.
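The core idea can be illustrated with a minimal sketch: AdaBoost-style rounds cycle through disjoint feature views, but all views read from and update a single shared sampling distribution over the training examples, so a mistake made on one view raises those examples' weights for the classifiers trained on the other views. The function names (`shared_boost`, `fit_stump`), the round-robin view schedule, and the decision-stump base learner are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def fit_stump(X, y, D):
    """Exhaustively pick the decision stump minimizing weighted error under D."""
    best_err, best = np.inf, None
    for j in range(X.shape[1]):
        for thr in np.unique(X[:, j]):
            for pol in (1, -1):
                pred = pol * np.where(X[:, j] > thr, 1, -1)
                err = D[pred != y].sum()
                if err < best_err:
                    best_err, best = err, (j, thr, pol)
    return best

def stump_predict(stump, X):
    j, thr, pol = stump
    return pol * np.where(X[:, j] > thr, 1, -1)

def shared_boost(views, y, T=20):
    """Boosting over disjoint views with one shared sampling distribution.

    views: list of feature matrices (one per view, same row order)
    y: labels in {-1, +1}
    """
    n = len(y)
    D = np.full(n, 1.0 / n)      # shared distribution seen by every view
    ensemble = []                # (view index, stump, vote weight alpha)
    for t in range(T):
        v = t % len(views)       # round-robin schedule over views (assumption)
        stump = fit_stump(views[v], y, D)
        pred = stump_predict(stump, views[v])
        err = np.clip(D[pred != y].sum(), 1e-10, 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)
        # Shared update: examples misclassified on this view gain weight
        # for the classifiers trained on the other views in later rounds.
        D *= np.exp(-alpha * y * pred)
        D /= D.sum()
        ensemble.append((v, stump, alpha))
    return ensemble

def predict_ensemble(ensemble, views):
    """Fuse all per-view classifiers by their weighted vote."""
    agg = sum(a * stump_predict(s, views[v]) for v, s, a in ensemble)
    return np.sign(agg)
```

Because every round reweights the same distribution, no single view's bias can dominate: the next round's classifier, trained on a different view, concentrates on exactly the examples the previous one got wrong.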
MSU Digital Commons Citation
Barbu, Costin; Iqbal, Raja; and Peng, Jing, "Classifier Fusion using Shared Sampling Distribution for Boosting" (2005). Department of Computer Science Faculty Scholarship and Creative Works. 160.