In many applications, the use of Bayesian probability theory is problematic: the information needed for feasible calculation is unavailable. There are different methodologies for dealing with this problem, e.g., maximum entropy and Dempster-Shafer theory. If one can make independence assumptions, many of the problems disappear, and in fact this is often the method of choice even when it is obviously incorrect. The notion of independence, however, is a 0-1 concept, which implies that human guesses about its validity will not lead to robust systems. In this paper, we propose a fuzzy formulation of this concept. It should lend itself to probabilistic updating formulas by allowing heuristic estimation of the "degree of independence." We show how this can be applied to compute a new notion of conditional probability (which we call "extended conditional probability"). Given information, one typically has the choice of full conditioning on it (standard dependence) or ignoring it (standard independence).
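As a sketch of these two extremes, write $\lambda \in [0,1]$ for the degree of conditioning (illustrative notation, not necessarily the paper's); any parameterized conditional probability $P_\lambda$ should then satisfy the boundary conditions

\[
P_0(A \mid B) = P(A) \quad \text{(ignore the information)}, \qquad
P_1(A \mid B) = P(A \mid B) \quad \text{(full conditioning)} .
\]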
We list some desiderata for extending this to allow a degree of conditioning. We then show how our formulation of degree of independence leads to a formula fulfilling these desiderata. After describing this formula, we show how it compares with other possible formulations of parameterized independence. In particular, we compare it to a linear interpolant, to a higher power of a linear interpolant, and to a notion originally presented by Hummel and Manevitz [Tenth Int. Joint Conf. on Artificial Intelligence, 1987].
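For concreteness, one plausible reading of the first two alternatives (the formulas are not spelled out here, so these are assumptions) is

\[
P^{\mathrm{lin}}_\lambda(A \mid B) = (1-\lambda)\,P(A) + \lambda\,P(A \mid B),
\qquad
P^{\mathrm{pow}}_\lambda(A \mid B) = (1-\lambda^k)\,P(A) + \lambda^k\,P(A \mid B), \quad k > 1,
\]

both of which meet the boundary conditions above.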
Interestingly, it turns out that a transformation of the Hummel-Manevitz method and our "fuzzy" method are close approximations of each other. Two examples illustrate how fuzzy independence and extended conditional probability might be applied. The first shows how linguistic probabilities result from treating fuzzy independence as a linguistic variable. The second is an industrial example of troubleshooting on the shop floor.
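As a minimal sketch of the first example, the code below maps a linguistic degree of independence to a numeric $\lambda$ and interpolates between $P(A)$ and $P(A \mid B)$. The linguistic labels, their numeric degrees, and the use of the simple linear interpolant (a stand-in, not the paper's own formula) are all assumptions for illustration.

```python
# Illustrative sketch only: the labels, their numeric degrees, and the
# linear-interpolant update are assumptions, not the paper's formula.

LINGUISTIC_DEGREE = {          # hypothetical linguistic variable mapped
    "independent": 0.0,        # to a degree of conditioning lambda
    "mostly independent": 0.25,
    "somewhat dependent": 0.5,
    "mostly dependent": 0.75,
    "dependent": 1.0,
}

def extended_conditional(p_a: float, p_a_given_b: float, label: str) -> float:
    """Interpolate between ignoring B (lambda=0) and full conditioning (lambda=1)."""
    lam = LINGUISTIC_DEGREE[label]
    return (1 - lam) * p_a + lam * p_a_given_b

# Example: a shop-floor fault with prior 0.1 that would rise to 0.6 under
# full conditioning on a symptom judged "mostly dependent" on the fault.
print(extended_conditional(0.1, 0.6, "mostly dependent"))  # prints 0.475
```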