Classifier Fusion using Shared Sampling Distribution for Boosting

Document Type

Conference Proceeding

Publication Date

12-1-2005

Abstract

We present a new framework for classifier fusion that uses a shared sampling distribution to obtain a weighted classifier ensemble. The weight-update process is self-regularizing, as classifiers trained on the disjoint views in subsequent iterations rectify the bias introduced by any classifier in preceding iterations. We provide theoretical guarantees that our approach yields better results than performing boosting separately on each view. The results are shown to outperform other classifier fusion strategies on a well-known texture image database.
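
The abstract does not specify the algorithmic details. The following is a minimal sketch, assuming an AdaBoost-style procedure in which weak classifiers trained on disjoint feature views all reweight a single shared sampling distribution; the function names, weak learner, and view-alternation schedule are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch: boosting with a shared sampling distribution across
# disjoint feature views. Not the paper's actual algorithm.
import numpy as np
from sklearn.tree import DecisionTreeClassifier


def shared_distribution_boost(X_views, y, n_rounds=10):
    """X_views: list of arrays, one per disjoint feature view (same rows).
    y: labels in {-1, +1}. Returns a list of (view_index, classifier, alpha)."""
    n = len(y)
    w = np.full(n, 1.0 / n)               # shared sampling distribution
    ensemble = []
    for t in range(n_rounds):
        v = t % len(X_views)               # alternate over the disjoint views
        clf = DecisionTreeClassifier(max_depth=1)
        clf.fit(X_views[v], y, sample_weight=w)
        pred = clf.predict(X_views[v])
        err = np.clip(np.dot(w, pred != y), 1e-10, 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)
        # Every classifier, regardless of its view, reweights the same
        # distribution, so later views concentrate on examples misclassified
        # by earlier ones -- the self-regularizing effect described above.
        w *= np.exp(-alpha * y * pred)
        w /= w.sum()
        ensemble.append((v, clf, alpha))
    return ensemble


def predict(ensemble, X_views):
    """Weighted-vote prediction over all views' classifiers."""
    score = sum(a * clf.predict(X_views[v]) for v, clf, a in ensemble)
    return np.sign(score)
```

In this sketch the only coupling between views is the shared weight vector, which contrasts with running a separate boosting distribution per view and fusing the resulting ensembles afterwards.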

DOI

10.1109/ICDM.2005.40
