Solving any inverse problem requires understanding the uncertainties in the data in order to know what it means to fit the data. We also need methods to incorporate data-independent prior information to eliminate unreasonable models that fit the data. Both of these issues involve subtle choices that may significantly influence the results of inverse calculations. The specification of prior information is especially controversial. How does one quantify information? What does it mean to know something about a parameter a priori? In this tutorial we discuss Bayesian and frequentist methodologies that can be used to incorporate information into inverse calculations. In particular, we show that apparently conservative Bayesian choices, such as representing interval constraints by uniform probabilities (as is commonly done when using genetic algorithms, for example), may lead to artificially small uncertainties. We also describe tools from statistical decision theory that can be used to characterize the performance of inversion algorithms.
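As a minimal illustration (not taken from the tutorial itself) of how uniform priors on interval constraints can shrink uncertainties, consider a hypothetical model with N parameters, each known only to lie in [-1, 1]. Replacing that hard constraint with independent Uniform(-1, 1) priors implies that a derived quantity such as the parameter average concentrates near zero with standard deviation about 1/sqrt(3N), far tighter than the interval [-1, 1] that the constraint alone permits:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: N model parameters, each constrained only to [-1, 1].
# The "conservative" Bayesian choice assigns each an independent Uniform(-1, 1) prior.
N = 100
samples = rng.uniform(-1.0, 1.0, size=(100_000, N))

# Derived quantity: the average of the parameters. The interval constraint
# alone allows any average in [-1, 1], but the uniform priors concentrate
# it near 0 with standard deviation ~ 1/sqrt(3N).
avg = samples.mean(axis=1)
empirical_std = avg.std()
theoretical_std = 1.0 / np.sqrt(3 * N)  # std of Uniform(-1,1) is 1/sqrt(3)

print(f"empirical std of average:   {empirical_std:.4f}")
print(f"theoretical std ~ 1/sqrt(3N): {theoretical_std:.4f}")
```

For N = 100 the implied standard deviation is roughly 0.058, so the prior, not the data, has effectively ruled out most of the interval: the "uninformative" choice carries strong information about derived quantities.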