T.D. Sanger, NEURAL-NETWORK LEARNING CONTROL OF ROBOT MANIPULATORS USING GRADUALLY INCREASING TASK DIFFICULTY, IEEE Transactions on Robotics and Automation, 10(3), 1994, pp. 323-333
Trajectory Extension Learning is an incremental method for training an artificial neural network to approximate the inverse dynamics of a robot manipulator. Training data near a desired trajectory is obtained by slowly varying a parameter of the trajectory from a region of easy solvability of the inverse dynamics toward the desired behavior. The parameter can be average speed, path shape, feedback gain, or any other controllable variable. As learning proceeds, an approximate solution to the local inverse dynamics for each value of the parameter is used to guide learning for the next value of the parameter. Convergence conditions are given for two variations on the algorithm. Examples are shown of application to a real 2-joint direct-drive robot arm and a simulated 3-joint redundant arm, both using simulated equilibrium point control.
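
The abstract describes an incremental scheme in which a task parameter is swept from an easy setting toward the desired one, with the network trained at each step warm-starting the next. The following is a minimal sketch of that idea only; the toy inverse-dynamics stand-in, the small MLP, the choice of average speed as the parameter, and the schedule are all illustrative assumptions, not the paper's implementation or experimental setup.

```python
# Hedged sketch of parameter-homotopy training in the spirit of Trajectory
# Extension Learning. `toy_inverse_dynamics`, the MLP sizes, and the speed
# schedule are assumptions for illustration only.
import numpy as np

rng = np.random.default_rng(0)

def toy_inverse_dynamics(q, qd, qdd):
    # Stand-in for the true (unknown) inverse dynamics: torque = f(q, qd, qdd).
    return 2.0 * qdd + 0.5 * np.sin(q) + 0.1 * qd

def trajectory(t, speed):
    # Desired joint trajectory; `speed` is the slowly varied task parameter.
    q = np.sin(speed * t)
    qd = speed * np.cos(speed * t)
    qdd = -speed**2 * np.sin(speed * t)
    return q, qd, qdd

class MLP:
    """Small feedforward approximator of the inverse dynamics."""
    def __init__(self, n_in=3, n_hidden=32):
        self.W1 = rng.normal(0.0, 0.3, (n_in, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0.0, 0.3, (n_hidden, 1))
        self.b2 = np.zeros(1)

    def forward(self, X):
        self.H = np.tanh(X @ self.W1 + self.b1)
        return self.H @ self.W2 + self.b2

    def train(self, X, y, lr=1e-2, epochs=200):
        # Plain batch gradient descent on squared torque-prediction error.
        y = y.reshape(-1, 1)
        for _ in range(epochs):
            err = self.forward(X) - y
            gW2 = self.H.T @ err / len(X)
            gb2 = err.mean(axis=0)
            dH = (err @ self.W2.T) * (1.0 - self.H**2)
            gW1 = X.T @ dH / len(X)
            gb1 = dH.mean(axis=0)
            self.W2 -= lr * gW2; self.b2 -= lr * gb2
            self.W1 -= lr * gW1; self.b1 -= lr * gb1

net = MLP()
t = np.linspace(0.0, 2.0 * np.pi, 200)

# Gradually increase the task parameter (here: average speed) from an easy
# value toward the target; the network trained at each step serves as the
# starting point for learning at the next parameter value.
for speed in np.linspace(0.2, 1.0, 9):
    q, qd, qdd = trajectory(t, speed)
    X = np.stack([q, qd, qdd], axis=1)
    tau = toy_inverse_dynamics(q, qd, qdd)   # data gathered near this trajectory
    net.train(X, tau)                        # warm-started from previous speed
    rms = np.sqrt(np.mean((net.forward(X).ravel() - tau) ** 2))
    print(f"speed={speed:.2f}  torque RMS error={rms:.4f}")
```

In the paper's setting the data at each parameter value would come from running the actual (or simulated) arm under the current approximate controller rather than from a closed-form target as in this toy example.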