We coin the term augmented penalty method to refer to an augmented Lagrangian-like method in which the penalty parameter is driven to zero instead of being kept bounded away from zero. The spirit of the algorithm is thus that of a classical penalty method in which an estimate of the Lagrange multipliers is updated. For the classical augmented Lagrangian function (using the quadratic loss penalty term) applied to equality-constrained optimization, Gould (1989, SIAM J. Numer. Anal. 26, 107) obtained a two-step superlinear local convergence result using a constant Lagrange multiplier estimate.
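For reference, one standard form of this function (sign conventions vary in the literature) for the problem of minimizing $f(x)$ subject to $c(x) = 0$ is
\[
L_\mu(x, \lambda) = f(x) + \lambda^{T} c(x) + \frac{1}{2\mu}\, \lVert c(x) \rVert^{2},
\]
where $\mu > 0$ is the penalty parameter: augmented Lagrangian methods keep $\mu$ bounded away from zero, whereas an augmented penalty method drives $\mu \to 0$, so that the weight $1/(2\mu)$ on the quadratic loss term grows without bound.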
Dussault et al. have generalized those results to inequality-constrained optimization using other penalty terms. On the other hand, convergence and rate-of-convergence results for augmented Lagrangian methods concern the convergence of the dual variables, usually without relation to the actual effort required to obtain the approximate solutions of the unconstrained primal minimizations.
In this paper we consider several updating rules for the Lagrange multiplier estimates, and we obtain rate-of-convergence results for both the primal and the dual variables: two-step superlinear convergence of order alpha, with alpha < 2. In this result, each iteration uses the solution of a single primal-dual linear system.
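As an illustration (the specific updating rules analyzed in the paper are not reproduced here), the classical first-order rule of Hestenes and Powell, applied to the function displayed above, is
\[
\lambda_{k+1} = \lambda_k + \frac{c(x_k)}{\mu_k},
\]
and a primal-dual linear system of the kind referred to above is, in its plain Newton/KKT form,
\[
\begin{pmatrix} \nabla_{xx}^{2} L(x_k, \lambda_k) & \nabla c(x_k) \\ \nabla c(x_k)^{T} & 0 \end{pmatrix}
\begin{pmatrix} \Delta x \\ \Delta \lambda \end{pmatrix}
= - \begin{pmatrix} \nabla f(x_k) + \nabla c(x_k)\, \lambda_k \\ c(x_k) \end{pmatrix},
\]
possibly carrying a regularization term in $\mu_k$ in the lower-right block.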
While this does not improve on the rate of convergence of simple penalty methods, it may alleviate some cancellation errors in internal computations.
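To make the overall scheme concrete, here is a minimal numerical sketch on a toy equality-constrained quadratic program; the problem data, the schedule by which $\mu$ is decreased, and the absence of any safeguards are illustrative assumptions, not the algorithm of the paper.

```python
# Augmented penalty iteration on a toy problem: minimize 0.5*x^T Q x
# subject to A x = b, with the penalty parameter mu driven to zero and a
# first-order multiplier update. Toy data; exact solution x* = (0.5, 0.5),
# lambda* = -0.5.
import numpy as np

Q = np.eye(2)                # objective Hessian: f(x) = 0.5 * x^T Q x
A = np.array([[1.0, 1.0]])   # constraint Jacobian: A x = b
b = np.array([1.0])

lam = np.zeros(1)            # Lagrange multiplier estimate
mu = 1.0                     # penalty parameter, driven to zero below

for k in range(8):
    # For this quadratic model, minimizing the augmented Lagrangian
    #   0.5 x^T Q x + lam^T (A x - b) + (1/(2 mu)) ||A x - b||^2
    # over x reduces to one linear solve.
    x = np.linalg.solve(Q + A.T @ A / mu, A.T @ (b / mu - lam))
    lam = lam + (A @ x - b) / mu   # first-order multiplier update
    mu *= 0.1                      # drive the penalty parameter to zero
    print(k, x, lam, np.linalg.norm(A @ x - b))
```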