Laber, Eric B., and Susan A. Murphy. 2011. "Adaptive Confidence Intervals for the Test Error in Classification." Journal of the American Statistical Association, 106(495): 904-913.
The estimated test error of a learned classifier is the most commonly reported measure of classifier performance. However, constructing a high-quality point estimator of the test error has proved to be very difficult. Furthermore, common interval estimators (e.g., confidence intervals) are based on the point estimator of the test error and thus inherit all the difficulties associated with the point estimation problem. As a result, these confidence intervals do not reliably deliver nominal coverage. In contrast, we directly construct the confidence interval by using smooth data-dependent upper and lower bounds on the test error. We prove that, for linear classifiers, the proposed confidence interval automatically adapts to the nonsmoothness of the test error, is consistent under fixed and local alternatives, and does not require that the Bayes classifier be linear. Moreover, the method provides nominal coverage on a suite of test problems using a range of classification algorithms and sample sizes. This article has supplementary material online.
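To make the abstract's starting point concrete, here is a minimal sketch of the standard point estimator of test error and the naive Wald-type confidence interval built from it, which is the kind of interval the paper argues does not reliably deliver nominal coverage. The labels and predictions here are simulated placeholders, and this sketch is not the authors' adaptive method, only the baseline it improves on.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical held-out labels and classifier predictions (simulated here).
n = 200
y_true = rng.integers(0, 2, size=n)
y_pred = rng.integers(0, 2, size=n)

# Point estimate of test error: the fraction of misclassified examples.
errors = (y_true != y_pred).astype(float)
err_hat = errors.mean()

# Naive 95% Wald interval from the binomial standard error. Because it is
# built directly on err_hat, it inherits the point estimator's difficulties.
se = np.sqrt(err_hat * (1.0 - err_hat) / n)
ci = (err_hat - 1.96 * se, err_hat + 1.96 * se)
```

The proposed method instead constructs the interval directly from smooth data-dependent upper and lower bounds on the test error, rather than centering it on `err_hat` as above.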
DOI: 10.1198/jasa.2010.tm10053 (Full Text)
PMCID: PMC3285493 (PubMed Central)