Multitask Learning is an approach to inductive transfer that improves generalization by using the domain information contained in the training signals of related tasks as an inductive bias. It does this by learning tasks in parallel while using a shared representation; what is learned for each task can help other tasks be learned better. This paper reviews prior work on MTL, presents new evidence that MTL in backprop nets discovers task relatedness without the need for supervisory signals, and presents new results for MTL with k-nearest neighbor and kernel regression.

In this paper we demonstrate multitask learning in three domains. We explain how multitask learning works, and show that there are many opportunities for multitask learning in real domains. We present an algorithm and results for multitask learning with case-based methods like k-nearest neighbor and kernel regression, and sketch an algorithm for multitask learning in decision trees. Because multitask learning works, can be applied to many different kinds of domains, and can be used with different learning algorithms, we conjecture there will be many opportunities for its use on real-world problems.
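To make the shared-representation idea concrete, the following is a minimal sketch of multitask backprop in plain NumPy: one hidden layer is shared by all task outputs, so gradients from every task's training signal shape the same internal representation. The synthetic tasks, network sizes, and learning rate are illustrative assumptions, not details from the paper.

```python
# A minimal sketch of multitask backprop with a shared hidden layer.
# All names (n_hidden, the synthetic tasks, etc.) are illustrative.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: both tasks depend on the same underlying feature,
# so a shared representation can help each task.
X = rng.normal(size=(200, 5))
Y = np.stack([np.sin(X[:, 0]) + 0.1 * rng.normal(size=200),  # task 1
              np.sin(X[:, 0]) * X[:, 1]], axis=1)            # task 2

n_in, n_hidden, n_tasks = 5, 16, 2
W1 = rng.normal(scale=0.1, size=(n_in, n_hidden))     # shared hidden layer
W2 = rng.normal(scale=0.1, size=(n_hidden, n_tasks))  # one output per task

lr = 0.01
for step in range(2000):
    H = np.tanh(X @ W1)   # shared representation
    P = H @ W2            # outputs for all tasks, computed in parallel
    err = P - Y           # per-task errors
    # Backprop: gradients from every task flow into the shared weights W1,
    # which is how one task's training signal biases the others.
    gW2 = H.T @ err / len(X)
    gH = err @ W2.T * (1 - H ** 2)
    gW1 = X.T @ gH / len(X)
    W2 -= lr * gW2
    W1 -= lr * gW1

print("per-task MSE:", (err ** 2).mean(axis=0))
```

Because W1 receives gradients from both output units, a task with a stronger or cleaner training signal can pull the hidden features toward structure the other task also needs.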
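For case-based methods, one simple way to realize multitask learning is to let all tasks jointly select the shared locality parameters. The sketch below chooses a kernel-regression bandwidth by leave-one-out error summed over all tasks rather than on the main task alone; the synthetic tasks and bandwidth grid are assumptions for illustration, not the paper's exact algorithm.

```python
# A hedged sketch of multitask kernel regression: the kernel bandwidth is
# the shared resource, tuned by leave-one-out error over all related tasks.
import numpy as np

rng = np.random.default_rng(1)
X = rng.uniform(-2.0, 2.0, size=(60, 1))
Y = np.stack([np.sin(2 * X[:, 0]),                    # main task
              np.cos(2 * X[:, 0])], axis=1)           # related extra task
Y += 0.2 * rng.normal(size=(60, 2))

def loo_predictions(bandwidth):
    """Leave-one-out Nadaraya-Watson predictions for all tasks at once."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2.0 * bandwidth ** 2))
    np.fill_diagonal(K, 0.0)  # leave each point out of its own fit
    return K @ Y / K.sum(axis=1, keepdims=True)

bandwidths = np.geomspace(0.05, 2.0, 30)
# Single-task: pick the bandwidth by LOO error on the main task alone.
stl = min(bandwidths,
          key=lambda b: ((loo_predictions(b)[:, 0] - Y[:, 0]) ** 2).mean())
# Multitask: pick the bandwidth by LOO error summed over all related tasks.
mtl = min(bandwidths,
          key=lambda b: ((loo_predictions(b) - Y) ** 2).mean())
print(f"single-task bandwidth: {stl:.3f}, multitask bandwidth: {mtl:.3f}")
```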
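Similarly, a multitask decision tree can score candidate splits by the error reduction summed across all tasks, so related tasks jointly determine the tree's structure. The helper below is a hypothetical sketch of that criterion, not the algorithm sketched in the paper.

```python
# A hedged sketch of a multitask split criterion for regression trees:
# each split is scored by the squared error summed over all tasks.
# best_multitask_split and the synthetic tasks are illustrative assumptions.
import numpy as np

def sse(Y):
    """Sum of squared errors of predicting each task's mean."""
    return ((Y - Y.mean(axis=0)) ** 2).sum()

def best_multitask_split(X, Y):
    """Return (feature, threshold, total SSE) minimizing SSE over all tasks."""
    best = (None, None, sse(Y))
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j])[:-1]:  # skip max so neither side is empty
            left = X[:, j] <= t
            score = sse(Y[left]) + sse(Y[~left])
            if score < best[2]:
                best = (j, t, score)
    return best

rng = np.random.default_rng(2)
X = rng.normal(size=(100, 3))
# Two related tasks driven by the same feature, so they prefer the same split.
Y = np.stack([(X[:, 0] > 0).astype(float),
              (X[:, 0] > 0) * 2.0 - 1.0], axis=1)
print(best_multitask_split(X, Y))
```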