Title: A fuzzy decision tree-based robust Markov game controller for robot manipulators
Authors: Hitesh Shah, M. Gopal
Addresses: Department of Electrical Engineering, IIT-Delhi, New Delhi 110016, India (both authors)
Abstract: The two-player zero-sum Markov game framework offers an effective platform for designing robust controllers. In Markov game-based learning, theoretical convergence of the learning process with a function approximator cannot be guaranteed. However, fusing Q-learning with a decision tree (DT) function approximator has shown good learning performance and more reliable convergence. It scales better to larger input spaces with lower memory requirements, and can solve problems that are infeasible using table lookup. This motivates us to introduce a DT function approximator into the Markov game reinforcement learning (RL) framework. This approach works, but it handles only discrete actions. In realistic applications, it is imperative to deal with continuous state–action spaces. In this paper, we propose a Markov game framework for continuous state–action space systems using a fuzzy DT as the function approximator. Simulation experiments on a two-link robot manipulator bring out the advantages of the proposed structure in terms of robust performance and computational efficiency.
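To illustrate the learning rule underlying the framework the abstract describes, the following is a minimal sketch of a minimax-Q update for a zero-sum Markov game with a tabular Q-function. All names and the simplification are ours: the state value is approximated here with pure strategies (max over own actions of min over opponent actions), whereas the full minimax-Q algorithm solves a small linear program for the mixed-strategy value, and the paper replaces the table with a fuzzy DT approximator.

```python
import numpy as np

def minimax_q_update(Q, s, a, o, r, s_next, alpha=0.1, gamma=0.9):
    """One minimax-Q update for a zero-sum Markov game.

    Q: array of shape (n_states, n_actions, n_opponent_actions),
       holding Q(s, a, o) for the learning agent.
    s, a, o, r, s_next: state, own action, opponent action, reward,
       and next state of one observed transition.

    The next-state value is approximated with pure strategies:
        V(s') = max_a min_o Q(s', a, o)
    (the exact algorithm computes the mixed-strategy minimax value
    via a linear program).
    """
    v_next = np.max(np.min(Q[s_next], axis=1))  # max_a min_o Q(s', a, o)
    Q[s, a, o] += alpha * (r + gamma * v_next - Q[s, a, o])
    return Q
```

Starting from an all-zero table, a transition with reward 1.0 moves the visited entry toward the reward by the learning-rate fraction `alpha`.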
Keywords: Markov game; RL control; fuzzy Q-learning; fuzzy decision trees; robust control; robot manipulators; robot control; reinforcement learning; simulation.
DOI: 10.1504/IJAAC.2010.035528
International Journal of Automation and Control, 2010 Vol.4 No.4, pp.417 - 439
Published online: 30 Sep 2010