Exact model averaging with naive Bayesian classifiers
Dash D, Cooper GF. Exact model averaging with naive Bayesian classifiers. In: Proceedings of the International Conference on Machine Learning (2002) 91-98.
The naive Bayesian classifier is a well-established mathematical model whose simplicity, speed, and accuracy have made it a popular choice for classification in AI and engineering. In this paper we show that, given N features of interest, it is possible to perform tractable exact model averaging (MA) over all 2^N possible feature-set models. In fact, we show that it is possible to calculate parameters for a single naive classifier C* such that C* produces predictions equivalent to those obtained by the full model averaging, and we show that C* can be constructed with the same time and space complexity required to construct a single naive classifier with MAP parameters. We present experimental results showing that the MA classifier typically outperforms the MAP classifier on simulated data, and we characterize how the relative performance varies with the number of variables, the number of training records, and the complexity of the generating distribution. Finally, we examine the performance of the MA naive model on the real-world ALARM and HEPAR networks and show that MA improves classification there as well.
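To make the averaging concrete, the following sketch illustrates brute-force exact model averaging over all 2^N feature subsets for binary data: each subset defines a naive Bayes model, models are weighted by their Bayesian marginal likelihood (Beta-Bernoulli conjugate priors), and the final prediction is the weight-averaged posterior P(C=1 | x). This is an illustrative exponential-time baseline, not the paper's O(N) closed-form construction of C*; all function names and the Beta(1,1) priors are assumptions for the sketch, and features outside a subset are modeled as class-independent so that subsets are compared on the same joint distribution.

```python
import itertools
import math

def beta_log_ml(n1, n0, a=1.0, b=1.0):
    """Log marginal likelihood of n1 ones / n0 zeros under a Beta(a, b) prior."""
    return (math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b)
            + math.lgamma(a + n1) + math.lgamma(b + n0)
            - math.lgamma(a + b + n1 + n0))

def log_marglik(data, subset, n_feats):
    """Log marginal likelihood of the data under one feature-subset model."""
    n_c1 = sum(c for _, c in data)
    ll = beta_log_ml(n_c1, len(data) - n_c1)            # class node
    for j in range(n_feats):
        if j in subset:
            for cv in (0, 1):                           # feature depends on class
                n1 = sum(1 for x, c in data if c == cv and x[j] == 1)
                n0 = sum(1 for x, c in data if c == cv and x[j] == 0)
                ll += beta_log_ml(n1, n0)
        else:                                           # feature independent of class
            n1 = sum(1 for x, _ in data if x[j] == 1)
            ll += beta_log_ml(n1, len(data) - n1)
    return ll

def subset_predict(data, subset, x):
    """P(C=1 | x) under one subset model, with Laplace-smoothed parameters."""
    n = len(data)
    n_c1 = sum(c for _, c in data)
    logp = {0: math.log((n - n_c1 + 1) / (n + 2)),
            1: math.log((n_c1 + 1) / (n + 2))}
    for cv in (0, 1):
        n_cv = sum(1 for _, c in data if c == cv)
        for j in subset:
            n1 = sum(1 for xi, c in data if c == cv and xi[j] == 1)
            p1 = (n1 + 1) / (n_cv + 2)
            logp[cv] += math.log(p1 if x[j] == 1 else 1.0 - p1)
    m = max(logp.values())
    z = sum(math.exp(v - m) for v in logp.values())
    return math.exp(logp[1] - m) / z

def ma_predict(data, x, n_feats):
    """Exact model-averaged P(C=1 | x): enumerate all 2^N feature subsets."""
    log_ws, preds = [], []
    for r in range(n_feats + 1):
        for subset in itertools.combinations(range(n_feats), r):
            log_ws.append(log_marglik(data, set(subset), n_feats))
            preds.append(subset_predict(data, subset, x))
    m = max(log_ws)
    ws = [math.exp(w - m) for w in log_ws]
    return sum(w * p for w, p in zip(ws, preds)) / sum(ws)

# Tiny example: feature 0 perfectly tracks the class, feature 1 is noise.
data = [((1, 0), 1), ((1, 1), 1), ((0, 0), 0), ((0, 1), 0)]
p = ma_predict(data, (1, 0), n_feats=2)
```

The paper's contribution is that this exponential enumeration collapses: the weighted average over all subsets can be reproduced by a single naive classifier with suitably adjusted parameters, built in the same time as one MAP classifier.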