Sometimes implementing evaluation designs that involve random assignment or strong nonrandomized comparison group designs is not feasible. When this is the case, several other types of comparisons may offer credible, though not conclusive, evidence on program effects. This article describes three approaches: comparisons of the outcomes of the treatment group with a national sample, comparisons of the outcomes of program participants and nonparticipants, and dose-response analyses. It illustrates their use in the evaluation of the School-Based Adolescent Health Care Program. The findings suggest that if outcomes are measured before and after the intervention, comparisons of treatment group outcomes to outcomes for a national sample may provide valid estimates of program effects. The other two types of comparisons produced implausible and unstable estimates of program effects. Because of the selection bias inherent in these two methods, researchers cannot count on being able to produce plausible estimates of program effects with such comparisons.