Classical unidimensional scaling presents a difficult combinatorial task. A procedure formulated as a nonlinear programming (NLP) model is proposed to solve this problem. The new method can be implemented with standard mathematical programming software. Unlike traditional procedures that minimize either the sum of squared errors (the L2 norm) or the sum of absolute errors (the L1 norm), the proposed method can minimize the error under any Lp norm for 1 <= p < infinity. Extensions of the NLP formulation to a multidimensional scaling problem under the city-block model are also discussed.
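To make the setting concrete, the following is a minimal illustrative sketch, not the paper's NLP formulation: it fits a one-dimensional configuration to a symmetric dissimilarity matrix by minimizing the Lp error with a general-purpose optimizer (SciPy's `minimize`, here with Nelder-Mead and random restarts, since the objective is nonsmooth and has many local optima from the underlying combinatorial ordering problem). The function name `uds_lp` and the restart scheme are this sketch's own choices.

```python
import numpy as np
from scipy.optimize import minimize

def uds_lp(D, p=2.0, n_starts=10):
    """Fit coordinates x so that |x_i - x_j| approximates D[i, j],
    minimizing the sum of |error|**p over all pairs i < j.
    Returns a local optimum only; restarts reduce (not remove) that risk."""
    n = D.shape[0]
    iu = np.triu_indices(n, k=1)      # upper-triangular pair indices
    target = D[iu]

    def loss(x):
        fitted = np.abs(x[:, None] - x[None, :])[iu]
        return np.sum(np.abs(fitted - target) ** p)

    best = None
    for seed in range(n_starts):
        rng = np.random.default_rng(seed)
        x0 = rng.standard_normal(n) * D.max()   # scale start to the data
        res = minimize(loss, x0, method="Nelder-Mead",
                       options={"maxiter": 20000,
                                "xatol": 1e-8, "fatol": 1e-10})
        if best is None or res.fun < best.fun:
            best = res
    return best.x

# Dissimilarities generated exactly by points 0, 1, 3 on a line.
D = np.array([[0., 1., 3.],
              [1., 0., 2.],
              [3., 2., 0.]])
x = uds_lp(D, p=2.0)
```

Because the configuration is identified only up to translation and reflection, the fit is judged by the recovered pairwise distances, not the coordinates themselves. Changing `p` (e.g. `p=1.0`) swaps the error criterion without altering the rest of the code, which is the flexibility the abstract highlights.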