Fuzzy decision tree function approximation in reinforcement learning
Online publication date: Sun, 04-Apr-2010
by Hitesh Shah, M. Gopal
International Journal of Artificial Intelligence and Soft Computing (IJAISC), Vol. 2, No. 1/2, 2010
Abstract: Recent results on the convergence of reinforcement learning control algorithms with function approximators have shown that decision-tree-based reinforcement learning provides good learning performance and more reliable convergence than the neural network approach. It scales better to larger input spaces, has lower memory requirements, and can solve problems that are infeasible using table lookup. However, decision-tree-based reinforcement learning can deal with only discrete actions, whereas realistic applications require continuous states and actions. In this paper, we propose fuzzy-decision-tree-based reinforcement learning, which addresses these limitations of decision-tree-based learning. We compare our approach with a decision-tree-based function approximator on two benchmark problems: the inverted pendulum stabilisation problem and the two-link robot manipulator tracking problem.
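To illustrate the general idea behind fuzzy decision tree function approximation, the sketch below shows a tiny fuzzy decision tree used as a Q-value approximator. Instead of crisp splits, each internal node computes a sigmoid membership in its branches, so every leaf contributes to Q(s, a) with some degree and the estimate is smooth in the state. All names, thresholds, and the TD-style update rule here are illustrative assumptions for exposition, not the authors' exact formulation.

```python
import math

def sigmoid(x, steepness=5.0):
    """Fuzzy branch membership: soft version of the crisp test x >= 0."""
    return 1.0 / (1.0 + math.exp(-steepness * x))

class FuzzyNode:
    def __init__(self, feature, threshold, left, right):
        self.feature = feature      # index into the state vector
        self.threshold = threshold  # soft split point on that feature
        self.left = left            # subtree favoured when feature < threshold
        self.right = right          # subtree favoured when feature >= threshold

class Leaf:
    def __init__(self, n_actions):
        self.q = [0.0] * n_actions  # one Q estimate per discrete action

def leaf_memberships(node, state, degree=1.0):
    """Return [(leaf, membership)] pairs; degrees multiply along each path."""
    if isinstance(node, Leaf):
        return [(node, degree)]
    mu_right = sigmoid(state[node.feature] - node.threshold)
    return (leaf_memberships(node.left, state, degree * (1.0 - mu_right)) +
            leaf_memberships(node.right, state, degree * mu_right))

def q_value(tree, state, action):
    """Q(s, a) as the membership-weighted average of leaf estimates."""
    pairs = leaf_memberships(tree, state)
    total = sum(mu for _, mu in pairs)
    return sum(mu * leaf.q[action] for leaf, mu in pairs) / total

def td_update(tree, state, action, target, alpha=0.1):
    """Distribute a TD error over the leaves in proportion to membership."""
    pairs = leaf_memberships(tree, state)
    total = sum(mu for _, mu in pairs)
    error = target - q_value(tree, state, action)
    for leaf, mu in pairs:
        leaf.q[action] += alpha * (mu / total) * error
```

Because the complementary sigmoid branches partition each node's membership, the leaf degrees always sum to one, and repeated `td_update` calls drive the weighted estimate toward the target while neighbouring regions of the state space share information through overlapping memberships.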