Device scaling has led to the blurring of the boundary between design and test: marginalities introduced by design tool approximations can cause failures when aggressive designs are subjected to process variation. Larger die sizes are more vulnerable to intra-die variations, invalidating analyses based on a fixed set of process corners. These trends are eroding the predictability of test quality based on stuck-at fault coverage. Industry studies have shown that an at-speed functional test with poor stuck-at fault coverage can be a better DPM (defects per million) screen than a set of scan tests with very high stuck-at fault coverage. Contrary to conventional wisdom, we have observed that a test set with high stuck-at fault coverage is not necessarily good at detecting faults that model actual failure mechanisms. One approach to addressing the test quality crisis is to rethink the fault model that is at the core of these tests. Targeting realistic fault models is a challenge that spans the design, test, and manufacturing domains: the extraction of realistic faults has to analyze the design at the physical and circuit levels of abstraction while taking into account the failure modes observed during manufacture. Practical fault models need to be defined that adequately model failing behavior while remaining amenable to automatic test generation. The addition of these fault models places increasing performance and capacity demands on already stressed test generation and fault simulation tools. A new generation of analysis and test generation tools is needed to address the challenge of defect-based test. We provide a detailed discussion of the process technology trends that are responsible for next-generation test problems, and present a test automation infrastructure being developed at Intel to meet the challenge.