Methods for finding feasible points are needed in some numerical optimization algorithms to initiate the search for a local minimum. Other methods perform more efficiently if a nearly feasible starting point is used. In addition, a feasible solution may be acceptable as the final design in some practical applications. Therefore, constraint correction methods are quite useful. Such methods, based on penalty functions and the primal approach, are described and discussed. To gain insight, the methods are first analyzed on a two-variable problem. Several structural design problems are then used to study the numerical behavior of the methods and compare their performance. It is concluded that, in general, the primal methods are more efficient than the penalty function methods. Also, no method has been found that is guaranteed to find a feasible point for general constrained nonlinear problems starting from an arbitrary point. In case a method fails, random points should be used to restart the procedure to search for a feasible point.
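The penalty-function idea with random restarts can be sketched as follows. This is a minimal illustration, not the paper's algorithm: the constraints are toy examples, and plain gradient descent with backtracking is assumed as the unconstrained minimizer (the paper does not prescribe one). A point is declared feasible when the exterior quadratic penalty, which is zero exactly on the feasible set, drops below a tolerance.

```python
import random

# Toy constraints g_i(x) <= 0 for illustration (not from the paper):
# stay inside a circle of radius 2 and keep x[0] >= 1.
def constraints(x):
    return [x[0] ** 2 + x[1] ** 2 - 4.0,  # circle: x0^2 + x1^2 <= 4
            1.0 - x[0]]                   # half-plane: x0 >= 1

def penalty(x):
    # Exterior quadratic penalty: zero exactly when x is feasible.
    return sum(max(0.0, g) ** 2 for g in constraints(x))

def num_grad(f, x, h=1e-6):
    # Central-difference gradient, to keep the sketch library-free.
    g = []
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        g.append((f(xp) - f(xm)) / (2.0 * h))
    return g

def find_feasible(x0, tol=1e-9, iters=2000, restarts=5, seed=0):
    # Minimize the penalty by gradient descent with backtracking; if the
    # run stalls, restart from a random point, as the abstract recommends.
    rng = random.Random(seed)
    x = list(x0)
    for _ in range(restarts + 1):
        for _ in range(iters):
            p = penalty(x)
            if p <= tol:
                return x  # (nearly) feasible point found
            g = num_grad(penalty, x)
            t = 1.0
            while t > 1e-12:  # backtrack until the penalty decreases
                xn = [xi - t * gi for xi, gi in zip(x, g)]
                if penalty(xn) < p:
                    break
                t *= 0.5
            x = xn
        x = [rng.uniform(-5.0, 5.0) for _ in x0]  # random restart
    return None  # no guarantee of success, consistent with the conclusion

x = find_feasible([-3.0, 3.0])  # start from an infeasible point
print(x, penalty(x))
```

Because both toy constraints are convex, the penalty is convex and descent reliably drives it to zero here; for general nonlinear constraints the iteration can stall in a local minimum of the penalty, which is precisely when the random restarts matter.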