Bootstrapping learning from abstract models in games
by Purvag Patel; Norman Carver; Shahram Rahimi
International Journal of Bio-Inspired Computation (IJBIC), Vol. 5, No. 4, 2013

Abstract: Computer gaming environments are real-time, dynamic, and complex, with incomplete knowledge of the world. Agents in such environments require detailed models of the world if they are to learn effective policies. Machine learning techniques such as reinforcement learning can become intractable with large, detailed world models. In this paper we tackle the well-known problem of slow convergence in reinforcement learning with detailed world models, specifically in video games. We propose first training the agents with an abstract model of the world and then using the resulting policy to initialise the system prior to training the agent with the detailed model of the world. This paper reports results from applying the proposed technique to the classic arcade game Asteroids. Our experiments show that an agent can quickly learn a policy with the abstract model, and that when this policy's learned values are used to initialise the detailed model, learning with the detailed model converges faster.
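
Only the abstract is freely available, but the two-phase scheme it outlines, tabular learning over an abstract state space followed by seeding the detailed model's value table with the result, can be sketched in a few lines. The sketch below is an illustration, not the authors' implementation: the action set, the environment interface (env_reset/env_step), the state abstraction functions, and all hyperparameters are assumptions.

    import random
    from collections import defaultdict

    # A minimal sketch of the two-phase idea the abstract describes,
    # under assumed names and interfaces (not the paper's code):
    #   Phase 1: tabular Q-learning over a coarse, abstract state space.
    #   Phase 2: seed the detailed model's Q-table with the abstract
    #            values, so detailed-model learning starts from a prior.

    ACTIONS = ["thrust", "turn_left", "turn_right", "fire"]  # Asteroids-style

    class BootstrappedQTable(dict):
        """Q-table over detailed states whose unseen entries are seeded
        from a previously learned abstract Q-table. `project` maps a
        detailed state key to its abstract counterpart (an assumption)."""
        def __init__(self, q_abstract, project):
            super().__init__()
            self.q_abstract = q_abstract
            self.project = project
        def __missing__(self, s):
            # First visit to a detailed state: copy the abstract values in.
            self[s] = dict(self.q_abstract[self.project(s)])
            return self[s]

    def q_learning(env_reset, env_step, state_fn, episodes,
                   alpha=0.1, gamma=0.99, epsilon=0.1, q=None):
        """Tabular Q-learning. `state_fn` maps a raw observation to a state
        key; `env_step(a)` is assumed to return (observation, reward, done)."""
        if q is None:
            q = defaultdict(lambda: {a: 0.0 for a in ACTIONS})
        for _ in range(episodes):
            obs, done = env_reset(), False
            while not done:
                s = state_fn(obs)
                if random.random() < epsilon:        # epsilon-greedy exploration
                    a = random.choice(ACTIONS)
                else:
                    a = max(q[s], key=q[s].get)
                obs, reward, done = env_step(a)
                s2 = state_fn(obs)
                target = reward + gamma * max(q[s2].values()) * (not done)
                q[s][a] += alpha * (target - q[s][a])
        return q

Training would then run in two passes, something like q_abs = q_learning(reset, step, abstract_state, 5000) followed by q_learning(reset, step, detailed_state, 5000, q=BootstrappedQTable(q_abs, project)), where abstract_state, detailed_state, and project are hypothetical functions binning the game state at two resolutions.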

Online publication date: Mon, 31-Mar-2014
