Language acquisition in the absence of explicit negative evidence: how important is starting small?

Citation
D.L.T. Rohde and D.C. Plaut, Language acquisition in the absence of explicit negative evidence: how important is starting small?, COGNITION, 72(1), 1999, pp. 67-109
Citations number
77
Subject categories
Psychology
Journal title
COGNITION
ISSN journal
0010-0277
Volume
72
Issue
1
Year of publication
1999
Pages
67 - 109
Database
ISI
SICI code
0010-0277(19990825)72:1<67:LAITAO>2.0.ZU;2-D
Abstract
It is commonly assumed that innate linguistic constraints are necessary to learn a natural language, based on the apparent lack of explicit negative evidence provided to children and on Gold's proof that, under assumptions of virtually arbitrary positive presentation, most interesting classes of languages are not learnable. However, Gold's results do not apply under the rather common assumption that language presentation may be modeled as a stochastic process. Indeed, Elman (Elman, J.L., 1993. Learning and development in neural networks: the importance of starting small. Cognition 48, 71-99) demonstrated that a simple recurrent connectionist network could learn an artificial grammar with some of the complexities of English, including embedded clauses, based on performing a word prediction task within a stochastic environment. However, the network was successful only when either embedded sentences were initially withheld and only later introduced gradually, or when the network itself was given initially limited memory which only gradually improved. This finding has been taken as support for Newport's 'less is more' proposal, that child language acquisition may be aided rather than hindered by limited cognitive resources. The current article reports on connectionist simulations which indicate, to the contrary, that starting with simplified inputs or limited memory is not necessary in training recurrent networks to learn pseudo-natural languages; in fact, such restrictions hinder acquisition as the languages are made more English-like by the introduction of semantic as well as syntactic constraints. We suggest that, under a statistical model of the language environment, Gold's theorem and the possible lack of explicit negative evidence do not implicate innate, linguistic-specific mechanisms. Furthermore, our simulations indicate that special teaching methods or maturational constraints may be unnecessary in learning the structure of natural language. (C) 1999 Elsevier Science B.V. All rights reserved.
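
Illustrative sketch
For concreteness, the word-prediction setup described in the abstract can be sketched as below. This is an illustrative reconstruction in PyTorch, not the authors' or Elman's simulation code; the toy corpus, vocabulary size, and hyperparameters are assumptions made only for the example.

import torch
import torch.nn as nn

class SimpleRecurrentNet(nn.Module):
    # An Elman-style simple recurrent network: embedding -> tanh RNN ->
    # linear readout over the vocabulary, trained to predict the next word.
    def __init__(self, vocab_size, embed_dim=16, hidden_dim=32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.rnn = nn.RNN(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, tokens):
        hidden, _ = self.rnn(self.embed(tokens))   # hidden state at each position
        return self.out(hidden)                    # logits for the following word

# Hypothetical toy corpus: sentences from an artificial grammar, encoded as
# integer word indices, with 0 marking end-of-sentence.
corpus = torch.tensor([[1, 3, 2, 4, 0],
                       [1, 5, 2, 6, 0]])
inputs, targets = corpus[:, :-1], corpus[:, 1:]    # target is the next word

model = SimpleRecurrentNet(vocab_size=7)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

for step in range(200):
    logits = model(inputs)
    loss = loss_fn(logits.reshape(-1, 7), targets.reshape(-1))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

In this sketch, a "starting small" regime of the kind examined in the article would correspond to restricting the training corpus to simple sentences (or limiting the network's effective memory) early in training and relaxing that restriction later.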