Doubly penalized estimation in additive regression with high-dimensional data

Citation
Zhiqiang Tan and Cun-hui Zhang, Doubly penalized estimation in additive regression with high-dimensional data, Annals of Statistics, 47(5), 2019, pp. 2567-2600
Journal title
Annals of Statistics
ISSN journal
0090-5364
Volume
47
Issue
5
Year of publication
2019
Pages
2567 - 2600
Database
ACNP
SICI code
Abstract
Additive regression extends linear regression by modeling the signal of a response as a sum of functions of covariates of relatively low complexity. We study penalized estimation in high-dimensional nonparametric additive regression, where functional semi-norms are used to induce smoothness of the component functions and their empirical L2 norms are used to induce sparsity. The functional semi-norms can be of Sobolev or bounded-variation type and are allowed to differ among the individual component functions. We establish oracle inequalities for the predictive performance of such methods under three simple technical conditions: a sub-Gaussian condition on the noise, a compatibility condition on the design and the functional classes under consideration, and an entropy condition on the functional classes. For random designs, the sample compatibility condition can be replaced by its population version under an additional condition that ensures suitable convergence of empirical norms. In homogeneous settings where the complexities of the component functions are of the same order, our results provide a spectrum of minimax convergence rates, ranging from the so-called slow rate, which requires no compatibility condition, to the fast rate under hard sparsity or certain Lq sparsity, which allows many small components in the true regression function. These results significantly broaden and sharpen existing ones in the literature.
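To make the construction described in the abstract concrete, the display below is a minimal sketch of a doubly penalized least-squares criterion consistent with that description: each component function carries a functional semi-norm penalty (Sobolev or bounded-variation type) to induce smoothness and an empirical L2-norm penalty to induce sparsity. The tuning-parameter notation (rho_j, lambda_j) and the exact form of the objective are illustrative assumptions, not quoted from the paper.

% Sketch (assumed notation): doubly penalized estimation in additive regression.
% \|f_j\|_{F_j} is a functional semi-norm inducing smoothness of component j;
% \|f_j\|_n is the empirical L2 norm inducing sparsity; rho_j, lambda_j are
% illustrative tuning parameters, possibly different across components.
\[
  (\hat f_1, \dots, \hat f_p)
  = \operatorname*{arg\,min}_{f_1, \dots, f_p}
    \Bigg\{ \frac{1}{n} \sum_{i=1}^n \Big( Y_i - \sum_{j=1}^p f_j(X_{ij}) \Big)^2
      + \sum_{j=1}^p \Big( \rho_j \, \| f_j \|_{F_j} + \lambda_j \, \| f_j \|_n \Big) \Bigg\},
  \qquad
  \| f_j \|_n = \Big\{ \frac{1}{n} \sum_{i=1}^n f_j^2(X_{ij}) \Big\}^{1/2}.
\]

In this sketch, setting rho_j to control smoothness and lambda_j to control sparsity reflects the "doubly penalized" structure named in the title; allowing the semi-norms \|.\|_{F_j} to differ across j corresponds to the abstract's statement that the penalties may vary among individual component functions.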