Consider the qualitative approach to evaluation design (as opposed to measurement) to be typified by a case study with a sample of just one. Although there have certainly been elaborate and emphatic defenses of the qualitative approach to program evaluation, such defenses rarely attempt to qualify the approach explicitly and rigorously as a method of impact analysis. The present paper makes that attempt. The problem with seeking to advance a qualitative method of impact analysis is that impact is a matter of causation, and a nonquantitative approach to design is apparently not well suited to the task of establishing causal relations. The root of the difficulty is located in the counterfactual definition of causality, which is our only broadly accepted formal definition of causality for social science.
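For concreteness, a minimal sketch of that counterfactual formalization in potential-outcomes notation (the notation is an assumption here; the paper does not spell out the formalization it has in mind): for a unit $u$, a program $T$ is said to cause outcome $Y$ when the outcome under the program differs from the outcome that would have obtained without it,

\[
\tau_u = Y_u(1) - Y_u(0),
\]

where $Y_u(1)$ and $Y_u(0)$ are the potential outcomes with and without the program. Only one of the two can ever be observed for any given unit, which is why, on this definition, a sample of one appears unable to support causal inference and comparison-group designs seem indispensable.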
It is not, however, the only definition we use informally. Another definition, labeled "physical causality," is widely used in practice and has recently been formalized. Physical causality can be applied to the present problem. For example, it explains the persuasiveness of Scriven's "Modus Operandi" approach to causal inference in evaluation. Under this conceptualization, a tailored case study design with a sample size of one becomes in principle as strong a basis for making inferences about program impact as a randomized experiment. Crucial for the application of the method to program evaluation is the finding that people's "operative reasons" for doing what they do are the physical causes of their actions. Lastly, it is shown that external validity using this qualitative approach would have exceptional strengths.