A major problem in machine learning is that of inductive bias: how to choose a learner's hypothesis space so that it is large enough to contain a solution to the problem being learnt, yet small enough to ensure reliable generalization from reasonably-sized training sets. Typically such bias is supplied by hand through the skill and insights of experts. In this paper a model for automatically learning bias is investigated. The central assumption of the model is that the learner is embedded within an environment of related learning tasks. Within such an environment the learner can sample from multiple tasks, and hence it can search for a hypothesis space that contains good solutions to many of the problems in the environment. Under certain restrictions on the set of all hypothesis spaces available to the learner, we show that a hypothesis space that performs well on a sufficiently large number of training tasks will also perform well when learning novel tasks in the same environment. Explicit bounds are also derived demonstrating that learning multiple tasks within an environment of related tasks can potentially give much better generalization than learning a single task.