A Tsallis-statistics-based generalization of the gradient descent dynamics (using non-extensive cost functions), recently introduced by one of us, is proposed as a learning rule for a simple perceptron. The resulting Langevin equations are solved numerically for different values of the index q (q = 1 and q ≠ 1 correspond to the extensive and non-extensive cases, respectively) and for different cost functions. The results are compared with the learning curve (mean error versus time) obtained from a learning experiment carried out with human beings, showing excellent agreement for values of q slightly above unity. This fact illustrates the possible importance of including some degree of non-locality (non-extensivity) in computational learning procedures whenever one wants to mimic human behaviour.
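The dynamics described above can be sketched numerically. The following is a minimal sketch, not the authors' actual implementation: it assumes a q-deformed Langevin drift in which the ordinary gradient of the cost E is rescaled by the Tsallis factor [1 - (1-q) beta E]^(-1), a form that reduces to standard noisy gradient descent at q = 1 and whose stationary distribution is the Tsallis one; the quadratic cost, the teacher perceptron, and all parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 50        # input dimension of the perceptron
P = 200       # number of training examples
eta = 0.05    # learning rate / time step
T = 0.01      # Langevin noise temperature
beta = 1.0    # inverse temperature entering the q-deformation
q = 1.1       # Tsallis index; q = 1 recovers the extensive (standard) case
steps = 2000

# A teacher perceptron supplies the target labels (illustrative setup)
w_teacher = rng.standard_normal(N)
X = rng.standard_normal((P, N)) / np.sqrt(N)
y = np.sign(X @ w_teacher)

def cost(w):
    """Quadratic perceptron cost (one possible choice of cost function)."""
    return 0.5 * np.mean((np.tanh(X @ w) - y) ** 2)

def grad(w):
    """Gradient of the quadratic cost with respect to the weights."""
    out = np.tanh(X @ w)
    return (X.T @ ((out - y) * (1.0 - out ** 2))) / P

def q_drift(w):
    """q-deformed drift: the ordinary gradient rescaled by the Tsallis
    factor. For q = 1 the factor equals 1 and plain gradient descent is
    recovered; for q < 1 the factor must be kept positive (cutoff)."""
    factor = max(1.0 - (1.0 - q) * beta * cost(w), 1e-12)
    return -beta * grad(w) / factor

# Euler-Maruyama integration of the Langevin equation
w = rng.standard_normal(N)
errors = []
for t in range(steps):
    noise = np.sqrt(2.0 * T * eta) * rng.standard_normal(N)
    w = w + eta * q_drift(w) + noise
    errors.append(cost(w))   # learning curve: mean error versus time

print(f"final mean error: {errors[-1]:.4f}")
```

Recording cost(w) at each step yields a learning curve (mean error versus time) of the kind compared against the human-learning experiment; values of q slightly above 1 strengthen the drift at high error, which is the non-extensive effect the abstract refers to.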