Surveys that require users to evaluate or make judgments about information systems and their effect on specific work activities can produce misleading results if respondents do not interpret or answer questions in the ways intended by the researcher. This paper provides a framework for understanding both the cognitive activities and the errors and biases in judgment that can result when users are asked to categorize a system, explain its effects, or predict their own future actions and preferences with respect to use of a system. Specific suggestions are offered for wording survey questions and response categories so as to elicit more precise and reliable responses. In addition, possible sources of systematic bias are discussed, using examples drawn from published IS research. Recommendations are made for further research aimed at better understanding how and to what extent judgment biases could affect the results of IS surveys.