APPROXIMATE STATISTICAL TESTS FOR COMPARING SUPERVISED CLASSIFICATION LEARNING ALGORITHMS

Authors
T. G. Dietterich
Citation
T. G. Dietterich, APPROXIMATE STATISTICAL TESTS FOR COMPARING SUPERVISED CLASSIFICATION LEARNING ALGORITHMS, Neural Computation, 10(7), 1998, pp. 1895-1923
Citations number
18
Subject categories
Computer Science Artificial Intelligence
Journal title
Neural Computation
ISSN journal
08997667
Volume
10
Issue
7
Year of publication
1998
Pages
1895 - 1923
Database
ISI
SICI code
0899-7667(1998)10:7<1895:ASTFCS>2.0.ZU;2-Z
Abstract
This article reviews five approximate statistical tests for determining whether one learning algorithm outperforms another on a particular learning task. These tests are compared experimentally to determine their probability of incorrectly detecting a difference when no difference exists (type I error). Two widely used statistical tests are shown to have high probability of type I error in certain situations and should never be used: a test for the difference of two proportions and a paired-differences t test based on taking several random train-test splits. A third test, a paired-differences t test based on 10-fold cross-validation, exhibits somewhat elevated probability of type I error. A fourth test, McNemar's test, is shown to have low type I error. The fifth test is a new test, 5 x 2 cv, based on five iterations of twofold cross-validation. Experiments show that this test also has acceptable type I error. The article also measures the power (ability to detect algorithm differences when they do exist) of these tests. The cross-validated t test is the most powerful. The 5 x 2 cv test is shown to be slightly more powerful than McNemar's test. The choice of the best test is determined by the computational cost of running the learning algorithm. For algorithms that can be executed only once, McNemar's test is the only test with acceptable type I error. For algorithms that can be executed 10 times, the 5 x 2 cv test is recommended, because it is slightly more powerful and because it directly measures variation due to the choice of training set.
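The two tests the abstract recommends can be sketched in code. A minimal sketch follows, using the standard formulas: McNemar's chi-square statistic with continuity correction, and the 5 x 2 cv paired t statistic built from the per-replication error-rate differences. The function names and any sample inputs are illustrative assumptions, not part of this record.

```python
import math


def mcnemar_chi2(n01, n10):
    """McNemar's test statistic with continuity correction.

    n01: count of examples misclassified by algorithm A but not by B.
    n10: count of examples misclassified by B but not by A.
    Under the null hypothesis (equal error rates) the statistic is
    approximately chi-square distributed with 1 degree of freedom.
    """
    return (abs(n01 - n10) - 1) ** 2 / (n01 + n10)


def five_by_two_cv_t(diffs):
    """Paired t statistic for the 5 x 2 cv test.

    diffs: five pairs (p_i1, p_i2), where p_ij is the difference in
    error rates between the two algorithms on fold j of replication i
    of twofold cross-validation.
    Under the null hypothesis the statistic is approximately
    t-distributed with 5 degrees of freedom.
    """
    assert len(diffs) == 5
    s2 = []  # per-replication variance estimates
    for p1, p2 in diffs:
        p_bar = (p1 + p2) / 2.0
        s2.append((p1 - p_bar) ** 2 + (p2 - p_bar) ** 2)
    # The numerator uses only the fold-1 difference of replication 1.
    return diffs[0][0] / math.sqrt(sum(s2) / 5.0)
```

In use, |t| would be compared against a t-table with 5 degrees of freedom (and the McNemar statistic against a chi-square table with 1 degree of freedom) at the chosen significance level.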