The Lagrangian function in the conventional theory for solving constrained optimization problems is a linear combination of the cost and constraint functions. Typically, the optimality conditions based on linear Lagrangian theory are either necessary or sufficient, but not both unless the underlying cost and constraint functions are also convex.
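In the standard notation, for the problem \(\min_x f(x)\) subject to \(g_i(x) \le 0\), \(i = 1, \dots, m\), this linear combination is
\[
L(x, \lambda) = f(x) + \sum_{i=1}^{m} \lambda_i g_i(x), \qquad \lambda_i \ge 0,
\]
whose associated Karush-Kuhn-Tucker conditions are necessary under a constraint qualification and become sufficient only when \(f\) and the \(g_i\) are convex.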
We propose a somewhat different approach for solving a nonconvex inequality constrained optimization problem, based on a nonlinear Lagrangian function. This leads to optimality conditions that are both necessary and sufficient, without any convexity assumption.
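To indicate what is meant by a nonlinear Lagrangian, one construction studied in the literature (given here purely as an illustration, and not necessarily the exact form adopted in this paper) replaces the weighted sum by a max-type convolution,
\[
L(x, \lambda) = \max\{\, f(x),\ \lambda_1 g_1(x),\ \dots,\ \lambda_m g_m(x) \,\},
\]
which combines the cost and constraint functions nonlinearly and is not tied to any convexity structure.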
Subsequently, under appropriate assumptions, the optimality conditions derived from the new nonlinear Lagrangian approach are used to obtain an equivalent root-finding problem. By appropriately defining a dual optimization problem and an alternative dual problem, we show that a zero duality gap always holds, regardless of convexity, in contrast to the case of linear Lagrangian duality.
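For comparison, recall the dual problem generated by the linear Lagrangian \(L(x, \lambda)\) above (standard background, not the dual constructed in this paper): with the dual function
\[
\theta(\lambda) = \inf_{x} L(x, \lambda), \qquad \lambda \ge 0,
\]
weak duality gives \(\sup_{\lambda \ge 0} \theta(\lambda) \le \inf\{ f(x) : g_i(x) \le 0,\ i = 1, \dots, m \}\), and for nonconvex problems this inequality can be strict, i.e. a nonzero duality gap can occur.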