Asymptotic Bayes-optimality under sparsity of some multiple testing procedures

Citation
Bogdan, Małgorzata et al., Asymptotic Bayes-optimality under sparsity of some multiple testing procedures, The Annals of Statistics, 39(3), 2011, pp. 1551-1579
Journal title
The Annals of Statistics
ISSN journal
0090-5364
Volume
39
Issue
3
Year of publication
2011
Pages
1551-1579
Abstract
Within a Bayesian decision theoretic framework we investigate some asymptotic optimality properties of a large class of multiple testing rules. A parametric setup is considered, in which observations come from a normal scale mixture model and the total loss is assumed to be the sum of losses for individual tests. Our model can be used for testing point null hypotheses, as well as to distinguish large signals from a multitude of very small effects. A rule is defined to be asymptotically Bayes optimal under sparsity (ABOS) if, within our chosen asymptotic framework, the ratio of its Bayes risk to that of the Bayes oracle (a rule which minimizes the Bayes risk) converges to one. Our main interest is in the asymptotic scheme where the proportion p of "true" alternatives converges to zero. We fully characterize the class of fixed threshold multiple testing rules which are ABOS, and hence derive conditions for the asymptotic optimality of rules controlling the Bayesian False Discovery Rate (BFDR). We finally provide conditions under which the popular Benjamini-Hochberg (BH) and Bonferroni procedures are ABOS, and show that for a wide class of sparsity levels the threshold of the former can be approximated by a nonrandom threshold. It turns out that while the choice of asymptotically optimal FDR levels for BH depends on the relative cost of a type I error, it is almost independent of the level of sparsity. Specifically, we show that when the number of tests m increases to infinity, then BH with the FDR level chosen in accordance with the assumed loss function is ABOS in the entire range of sparsity parameters p ∝ m^(−β), with β ∈ (0, 1].
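
As an informal illustration of the step-up rule discussed in the abstract, the following is a minimal Python sketch of the Benjamini-Hochberg procedure applied to data simulated from a sparse normal scale mixture. The FDR level alpha = 0.05 and the simulation parameters (number of tests m, sparsity level p_sparse, signal scale tau) are arbitrary illustrative choices, not the loss-calibrated FDR level analyzed in the paper.

import numpy as np
from scipy.stats import norm

def benjamini_hochberg(pvalues, alpha=0.05):
    """Benjamini-Hochberg step-up procedure.

    Returns a boolean array marking which hypotheses are rejected
    at nominal FDR level alpha.
    """
    p = np.asarray(pvalues, dtype=float)
    m = p.size
    order = np.argsort(p)                      # indices of p-values, ascending
    ranked = p[order]
    # Find the largest k with p_(k) <= (k/m) * alpha and reject the k smallest p-values.
    thresholds = alpha * np.arange(1, m + 1) / m
    below = np.nonzero(ranked <= thresholds)[0]
    rejected = np.zeros(m, dtype=bool)
    if below.size > 0:
        k = below[-1]                          # last ranked index satisfying the bound
        rejected[order[: k + 1]] = True
    return rejected

# Toy usage: a small fraction of the m means carries a large signal, the rest are null.
# These parameter values are illustrative assumptions, not taken from the paper.
rng = np.random.default_rng(0)
m, p_sparse, tau = 10_000, 0.01, 4.0
is_signal = rng.random(m) < p_sparse
x = rng.normal(0.0, 1.0, m) + is_signal * rng.normal(0.0, tau, m)
pvals = 2 * norm.sf(np.abs(x))                 # two-sided p-values under N(0, 1) nulls
rej = benjamini_hochberg(pvals, alpha=0.05)
print(rej.sum(), "rejections;", (rej & is_signal).sum(), "true signals found")

In a sparsity regime of the kind considered in the abstract, with the proportion of true alternatives of order m^(−β) for β ∈ (0, 1], the random BH threshold can, per the paper's results, be approximated by a nonrandom threshold; the sketch above only demonstrates the mechanics of the step-up rule itself.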