Software engineering standards abound. If we include the work of the major national standards bodies throughout the world, there are in fact more than 250 software engineering standards. The existence of these standards raises some important questions. How do we know which practices to standardize? Since projects sometimes produce less-than-desirable products, are the standards not working, or simply being ignored? Perhaps the answer is that standards have codified approaches whose effectiveness has not been rigorously and scientifically demonstrated.

This article reports on the results of the Smartie project (Standards and Methods Assessment Using Rigorous Techniques in Industrial Environments), a collaborative effort to propose a widely applicable procedure for the objective assessment of standards used in software development. The authors hope that Smartie will enable the identification of standards whose use is most likely to lead to improvements in some aspect of software development processes and products. The authors discuss how to evaluate a standard for its applicability and objectivity. They then describe the results of a major industrial case study involving the reliability and maintainability of almost two million lines of code. Their research suggests that small, simple changes to the way standards are written, and especially to data collection standards, can significantly improve the quality of information about what is going on in a system and with a project.