In this paper, a new learning method based on the concept of cell-to-cell mapping is developed to generate a controller automatically from experimental data without using a system model. The continuous state space is first divided into many discrete cells, and the dynamics of the system are then approximated by the cell mapping. A series of learning signals is applied to excite the system; the experimental data are collected in a data buffer and then processed into a control table that stores the experienced dynamics of the system. The control table is optimized and updated through learning to improve controller performance. Generalized controls, called pseudo-experience production rules, are used to fill empty entries in the control table. Finally, the control table serves as a controller for the system via a table look-up scheme.
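The cell discretization and table look-up described above can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the class name, grid bounds, cell counts, and the default-control fallback for empty entries are all assumptions introduced here.

```python
import numpy as np

class CellController:
    """Hypothetical sketch of a cell-mapping control table (illustrative only)."""

    def __init__(self, lo, hi, n_cells):
        self.lo = np.asarray(lo, float)      # lower bounds of the state space
        self.hi = np.asarray(hi, float)      # upper bounds of the state space
        self.n = np.asarray(n_cells, int)    # number of cells per dimension
        self.table = {}                      # control table: cell index -> control

    def cell_index(self, x):
        # Map a continuous state to the index of its discrete cell.
        frac = (np.asarray(x, float) - self.lo) / (self.hi - self.lo)
        idx = np.clip((frac * self.n).astype(int), 0, self.n - 1)
        return tuple(idx)

    def record(self, x, u):
        # Store an experienced control for the cell containing state x.
        self.table[self.cell_index(x)] = u

    def control(self, x, default=0.0):
        # Table look-up: return the stored control for this cell, or a
        # default for an empty entry (which a pseudo-experience production
        # rule would otherwise fill).
        return self.table.get(self.cell_index(x), default)

# Usage: a 10x10 cell grid over [-1, 1] x [-1, 1].
ctrl = CellController(lo=[-1.0, -1.0], hi=[1.0, 1.0], n_cells=[10, 10])
ctrl.record([0.25, -0.35], u=0.5)
print(ctrl.control([0.26, -0.33]))  # nearby state in the same cell -> 0.5
print(ctrl.control([0.9, 0.9]))     # empty cell -> default 0.0
```

In a learning loop, `record` would be driven by the buffered experimental data, and the stored entries would be revised as better controls are experienced for each cell.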