The nonlinear rescaling principle (NRP) consists of transforming the objective function and/or the constraints of a given constrained optimization problem into another problem that is equivalent to the original one in the sense that their sets of optimal solutions coincide. A nonlinear transformation, parameterized by a positive scalar parameter and based on a smooth scaling function, is used to transform the constraints. Methods based on the NRP consist of sequential unconstrained minimization of the classical Lagrangian for the equivalent problem, followed by an explicit formula updating the Lagrange multipliers. We first show that the NRP leads naturally to proximal methods with an entropy-like kernel, defined by the conjugate of the scaling function, and establish that the two methods are dually equivalent for convex constrained minimization problems. We then study the convergence properties of the nonlinear rescaling algorithm and of the corresponding entropy-like proximal methods for convex constrained optimization problems. Special cases of the nonlinear rescaling algorithm are presented; in particular, a new class of exponential penalty-modified barrier function methods is introduced.