We study proximal level methods for convex optimization that use projections onto successive approximations of level sets of the objective corresponding to estimates of the optimal value. We show that they enjoy almost optimal efficiency estimates. We give extensions for solving convex constrained problems, convex-concave saddle-point problems, and variational inequalities with monotone operators. We present several variants, establish their efficiency estimates, and discuss possible implementations. In particular, our methods require bounded storage, in contrast to the original level methods of Lemaréchal, Nemirovskii, and Nesterov.
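For orientation, a minimal sketch of the level-set projection step that such methods build on, in the standard notation of the classical level method (the symbols below are ours, not the paper's): the cutting-plane model \(\check f_k\) is built from subgradients \(g^j \in \partial f(x^j)\), the level \(\ell_k\) is placed between the current lower and upper bounds on the optimal value with a fixed parameter \(\lambda \in (0,1)\), and the next iterate is the projection of \(x^k\) onto the level set of the model over the feasible set \(X\):
\[
  \check f_k(x) = \max_{0 \le j \le k} \bigl\{ f(x^j) + \langle g^j, x - x^j \rangle \bigr\},
  \qquad
  \ell_k = f^{k}_{\mathrm{low}} + \lambda \bigl( f^{k}_{\mathrm{up}} - f^{k}_{\mathrm{low}} \bigr),
\]
\[
  x^{k+1} = \operatorname*{arg\,min}_{x \in X} \Bigl\{ \tfrac{1}{2}\, \| x - x^k \|^2 \;:\; \check f_k(x) \le \ell_k \Bigr\}.
\]
Keeping storage bounded then amounts to aggregating or dropping old cutting planes so that the projection subproblem stays small.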