EXPERIMENTS WITH REINFORCEMENT LEARNING IN PROBLEMS WITH CONTINUOUS STATE AND ACTION SPACES

Citation
J.C. Santamaria et al., EXPERIMENTS WITH REINFORCEMENT LEARNING IN PROBLEMS WITH CONTINUOUS STATE AND ACTION SPACES, Adaptive Behavior, 6(2), 1997, pp. 163-217
Citations number
26
Journal title
Adaptive Behavior
ISSN journal
10597123
Volume
6
Issue
2
Year of publication
1997
Pages
163 - 217
Database
ISI
SICI code
1059-7123(1997)6:2<163:EWRLIP>2.0.ZU;2-K
Abstract
A key element in the solution of reinforcement learning problems is the value function. The purpose of this function is to measure the long-term utility or value of any given state. The function is important because an agent can use this measure to decide what to do next. A common problem in reinforcement learning when applied to systems having continuous state and action spaces is that the value function must operate with a domain consisting of real-valued variables, which means that it should be able to represent the value of infinitely many state and action pairs. For this reason, function approximators are used to represent the value function when a closed-form solution of the optimal policy is not available. In this article, we extend a previously proposed reinforcement learning algorithm so that it can be used with function approximators that generalize the value of individual experiences across both state and action spaces. In particular, we discuss the benefits of using sparse coarse-coded function approximators to represent value functions and describe in detail three implementations: cerebellar model articulation controllers, instance-based, and case-based. Additionally, we discuss how function approximators having different degrees of resolution in different regions of the state and action spaces may influence the performance and learning efficiency of the agent. We propose a simple and modular technique that can be used to implement function approximators with nonuniform degrees of resolution so that the value function can be represented with higher accuracy in important regions of the state and action spaces. We performed extensive experiments in the double-integrator and pendulum swing-up systems to demonstrate the proposed ideas.
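The sparse coarse coding mentioned in the abstract (of which the cerebellar model articulation controller, CMAC, is one instance) can be illustrated with a minimal tile-coding sketch: several overlapping, offset grids each map a continuous (state, action) pair to one active tile, and the value is the sum of the active tiles' weights, so nearby pairs share tiles and generalize to one another. This is a generic illustration of the technique, not the authors' implementation; the class name, the 1-D state and 1-D action, and all parameters are illustrative assumptions.

```python
# Sketch of a sparse coarse-coded (tile-coding) value-function approximator
# over a continuous state-action space. All names and the 1-D state / 1-D
# action setup are assumptions for illustration only.

class TileCoder:
    def __init__(self, n_tilings=8, tiles_per_dim=8, lo=0.0, hi=1.0):
        self.n_tilings = n_tilings
        self.tiles_per_dim = tiles_per_dim
        self.lo, self.hi = lo, hi
        # One weight table per tiling; each tiling covers (state, action).
        self.weights = [[[0.0] * tiles_per_dim for _ in range(tiles_per_dim)]
                        for _ in range(n_tilings)]

    def _active_tiles(self, s, a):
        # Each tiling is shifted by a fraction of a tile width, so nearby
        # (s, a) pairs activate overlapping tiles and share learned value.
        scale = self.tiles_per_dim / (self.hi - self.lo)
        for t in range(self.n_tilings):
            offset = t / self.n_tilings
            i = min(int((s - self.lo) * scale + offset), self.tiles_per_dim - 1)
            j = min(int((a - self.lo) * scale + offset), self.tiles_per_dim - 1)
            yield t, i, j

    def value(self, s, a):
        # Sparse sum: exactly one active tile per tiling.
        return sum(self.weights[t][i][j] for t, i, j in self._active_tiles(s, a))

    def update(self, s, a, target, alpha=0.1):
        # Spread the correction toward the target across the active tiles,
        # as in a TD-style learning rule.
        error = target - self.value(s, a)
        step = alpha * error / self.n_tilings
        for t, i, j in self._active_tiles(s, a):
            self.weights[t][i][j] += step
```

Repeated updates at one (state, action) pair raise the estimated value there, and, because the tilings overlap, also at nearby pairs; distant pairs that share no tiles remain unaffected. Nonuniform resolution, as proposed in the article, would correspond to using finer tilings in the important regions of the space.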