Estimation of time-varying parameters in statistical models: An optimization approach

Citation
D. Bertsimas et al., Estimation of time-varying parameters in statistical models: An optimization approach, MACH LEARN, 35(3), 1999, pp. 225-245
Number of citations
8
Subject categories
AI Robotics and Automatic Control
Journal title
MACHINE LEARNING
ISSN journal
0885-6125
Volume
35
Issue
3
Year of publication
1999
Pages
225 - 245
Database
ISI
SICI code
0885-6125(199906)35:3<225:EOTPIS>2.0.ZU;2-S
Abstract
We propose a convex optimization approach to solving the nonparametric regression estimation problem when the underlying regression function is Lipschitz continuous. This approach is based on the minimization of the sum of empirical squared errors, subject to the constraints implied by Lipschitz continuity. The resulting optimization problem has a convex objective function and linear constraints, and as a result is efficiently solvable. The estimated function computed by this technique is proven to converge to the underlying regression function uniformly and almost surely as the sample size grows to infinity, thus providing a very strong form of consistency. We also propose a convex optimization approach to the maximum likelihood estimation of unknown parameters in statistical models, where the parameters depend continuously on some observable input variables. For a number of classical distributional forms, the objective function in the underlying optimization problem is convex and the constraints are linear. These problems are, therefore, also efficiently solvable.
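The regression formulation described in the abstract can be sketched as a small constrained least-squares problem. The following is a minimal illustration, not the authors' implementation: it estimates fitted values at the sample points by minimizing the sum of squared errors subject to Lipschitz constraints, using SciPy's general-purpose SLSQP solver in place of a dedicated convex-programming method. In one dimension, with inputs sorted, enforcing the Lipschitz bound between consecutive points implies it for all pairs.

```python
import numpy as np
from scipy.optimize import minimize


def lipschitz_regression(x, y, L):
    """Estimate f(x_i) by minimizing sum_i (y_i - f_i)^2 subject to
    |f_{i+1} - f_i| <= L * (x_{i+1} - x_i) over sorted sample points.

    The objective is convex (quadratic) and the constraints are linear,
    matching the structure described in the abstract.
    """
    x, y = np.asarray(x, float), np.asarray(y, float)
    order = np.argsort(x)
    xs, ys = x[order], y[order]
    n = len(xs)
    gaps = np.diff(xs)  # x_{i+1} - x_i, nonnegative after sorting

    def objective(f):
        return np.sum((f - ys) ** 2)

    # Two one-sided linear inequalities per consecutive pair of points.
    cons = []
    for i in range(n - 1):
        cons.append({'type': 'ineq',
                     'fun': lambda f, i=i: L * gaps[i] - (f[i + 1] - f[i])})
        cons.append({'type': 'ineq',
                     'fun': lambda f, i=i: L * gaps[i] + (f[i + 1] - f[i])})

    res = minimize(objective, ys.copy(), constraints=cons, method='SLSQP')

    # Return fitted values in the original (unsorted) order of x.
    fitted = np.empty(n)
    fitted[order] = res.x
    return fitted


# Toy usage: noisy alternating data, Lipschitz constant L = 1.
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([0.0, 5.0, 0.0, 5.0])
f = lipschitz_regression(x, y, L=1.0)
```

The fitted values obey the slope bound (no consecutive pair differs by more than `L` times its gap) while fitting the data strictly better than the best constant function, which is always feasible. In practice one would pass this quadratic program to a dedicated QP or convex solver rather than SLSQP; the sketch only demonstrates the structure of the problem.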