UWEE Tech Report Series

Necessary Intransitive Likelihood Ratio Classifiers


Gang Ji, Jeff Bilmes

Keywords: classifiers, pattern classification, intransitive theory, game theory


In any pattern classification task, errors are introduced because of the difference between the true generative model and the one obtained via model estimation. One approach to this problem uses more training data and more accurate (but often more complicated) models. In previous work, we addressed this problem differently, by compensating (post log-likelihood ratio) for the difference between the true and estimated model scores. This was done by adding a bias term to the log-likelihood ratio, computed from an initial pass over the test data, thereby producing an intransitive classifier. In this work, we extend the previous theory by noting that the correction term used before was sufficient but not necessary for perfect correction. We derive weaker (necessary) conditions that still lead to perfect correction and might therefore be more easily attainable. We test a number of new schemes on an isolated-word speech recognition task as well as on the UCI machine learning data sets. Results show that with bias terms calculated in this new way, classification accuracy substantially improves over both the baseline and our previous results.
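The two-pass structure described above can be sketched in a minimal form. The code below assumes 1-D Gaussian class models with known variance; the class i "beats" class j when the bias-corrected log-likelihood ratio is positive, and the class with the most pairwise wins is predicted. The bias estimator shown (recentering each pairwise decision boundary from a first zero-bias pass over the test data) is a hypothetical illustration of the two-pass idea, not the estimator derived in the report.

```python
import numpy as np


def pairwise_llr_classify(x, means, sigma, bias):
    """Intransitive pairwise classifier: class i beats class j when
    log p_i(x) - log p_j(x) + b_ij > 0. The class with the most
    pairwise wins is returned."""
    k = len(means)
    wins = np.zeros(k)
    for i in range(k):
        for j in range(i + 1, k):
            # Gaussian log-likelihood ratio (constants cancel)
            llr = (-0.5 * ((x - means[i]) / sigma) ** 2
                   + 0.5 * ((x - means[j]) / sigma) ** 2)
            if llr + bias[i, j] > 0:
                wins[i] += 1
            else:
                wins[j] += 1
    return int(np.argmax(wins))


def first_pass_bias(xs, means, sigma):
    """Hypothetical bias estimator: run an initial zero-bias pass over
    the test data, then, for each pair (i, j), shift the decision
    boundary so the observed ratio scores are recentered. Only the
    two-pass structure mirrors the report; the actual estimators differ."""
    k = len(means)
    pred0 = np.array(
        [pairwise_llr_classify(x, means, sigma, np.zeros((k, k))) for x in xs]
    )
    bias = np.zeros((k, k))
    for i in range(k):
        for j in range(i + 1, k):
            sel = xs[(pred0 == i) | (pred0 == j)]
            if sel.size == 0:
                continue
            llr = (-0.5 * ((sel - means[i]) / sigma) ** 2
                   + 0.5 * ((sel - means[j]) / sigma) ** 2)
            pos = llr[llr > 0]
            neg = llr[llr <= 0]
            mi = pos.mean() if pos.size else 0.0
            mj = neg.mean() if neg.size else 0.0
            bias[i, j] = -(mi + mj) / 2.0  # recenter the boundary
    return bias
```

For example, with estimated means `[0.0, 2.0, 4.0]`, a zero bias matrix reduces the rule to the plain maximum-likelihood pairwise tournament, and `first_pass_bias` then produces the per-pair corrections applied in a second classification pass.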
