The notion of algorithmic complexity was developed by Kolmogorov (1965) and Chaitin (1966) independently of one another and of Solomonoff's notion (1964) of algorithmic probability. Given a Turing machine T, the (prefix) algorithmic complexity of a string S is the length of the shortest input to T which would cause T to output S and stop. The Solomonoff probability of S given T is the probability that a random binary string of 0s and 1s will result in T producing an output having S as a prefix. We attempt to establish a parallel between a restricted (two-part) version of the Kolmogorov model and the minimum message length approach to statistical inference and machine learning of Wallace and Boulton (1968), in which an 'explanation' of a data string is modelled as a two-part message, the first part stating a general hypothesis about the data and the second encoding details of the data not implied by the hypothesis. Solomonoff's model is tailored to prediction rather than inference: it considers not just the most likely explanation, but gives weight to all explanations according to their posterior probability. However, as the amount of data increases, we typically expect the most likely explanation to have a dominant weighting in the prediction.
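The three quantities described above can be written compactly; the following display is an illustrative sketch (the symbols K_T, P_T, and len are our notation, not that of the cited authors):

```latex
% Prefix algorithmic complexity of string S relative to Turing machine T:
% length |p| of the shortest program p causing T to output S and halt.
K_T(S) = \min \{\, |p| \;:\; T(p) = S \,\}

% Solomonoff's algorithmic probability of S given T: total probability
% of random binary programs whose output begins with S.
P_T(S) = \sum_{p \,:\, S \text{ is a prefix of } T(p)} 2^{-|p|}

% Two-part MML message length: a hypothesis H is stated first, then the
% data S is encoded using a code that is optimal if H is true.
\mathrm{len}(S) = \mathrm{len}(H) + \mathrm{len}(S \mid H)
```

On this reading, the restricted two-part form of the Kolmogorov model corresponds to minimizing the sum in the last line over hypotheses H, while Solomonoff's predictive model retains the full posterior-weighted sum in the second line.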