Benchmarking is the quantitative method most commonly used when managers contemplate procuring a large business information system. It consists of running a group of representative applications on the systems offered by vendors to validate their claims. The implementation of benchmarking can be very costly, as users need to convert, run, and test applications on several partially compatible computer systems. Benchmarking works well in modern database management system (DBMS)-oriented applications because system performance is more a function of the database structure and activities than of the complexity of the application code. Earlier research focused primarily on designing various benchmarks for database systems; the decision problem of finding an optimal mix of benchmarks has largely been overlooked. In this paper, we examine the problem of defining the most economical process for generating and evaluating the appropriate mix of benchmarks to be used across the contending information systems. Our analytical approach considers information-gathering priorities, acquisition and execution costs, resource consumption, and overall time requirements. We present a multiobjective decision-making approach for deriving the optimal mix of benchmarks; this approach reflects the major organizational objectives in more than simple one-dimensional numerical terms. A practical example illustrates the utility of this approach in evaluating a client-server relational database system.