Software applications are now a mission-critical source of competitive advantage for most companies. They are also a source of great risk, as the Y2K bug has made clear. Yet many line managers still haven't confronted software issues, partly because they aren't sure how best to define the quality of the applications in their IT infrastructures. Some companies, such as Wal-Mart and the Gap, have successfully integrated the software in their networks, but most have accumulated an unwieldy number of incompatible applications, all designed to perform the same tasks.
The authors provide a framework for measuring the performance of software in a company's IT portfolio. Quality traditionally has been measured according to a product's ability to meet certain specifications; other views of quality have since emerged that measure a product's adaptability to customers' needs and its ability to encourage innovation. To judge software quality properly, the authors argue, managers must measure applications against all three approaches.
Understanding the domain of a software application is an important part of that process. The domain is the body of knowledge about a user's needs and expectations for a product. Software domains change frequently based on how a consumer chooses to use, for example, Microsoft Word or a spreadsheet application. The domain can also be influenced by general changes in technology, such as the development of a new software platform. Thus, applications can't be judged only according to whether they conform to specifications. The authors discuss how to identify domain characteristics and software risks and suggest ways to reduce the variability of software domains.