An integration model for Texas Hold'em
Online publication date: Fri, 27-Oct-2023
by Yajie Wang; Shengyu Han; Zhihao Wei; Zhonghui Shi
International Journal of Computing Science and Mathematics (IJCSM), Vol. 18, No. 3, 2023
Abstract: Texas Hold'em is a representative incomplete-information game. Existing research that computes a Nash equilibrium as a Texas Hold'em strategy suffers from high resource consumption and overly conservative play. To address these problems, an integration model combining deep learning and reinforcement learning is proposed. First, to reduce the storage resources consumed by Texas Hold'em's large state space, a long short-term memory (LSTM) network is designed to predict game results. Because the LSTM takes the win rate and historical action information as input, a convolutional neural network (CNN) is designed to predict the current win rate. Second, to give the strategy the ability to adjust dynamically, a deep Q-network (DQN) generates the strategy from the results predicted by the LSTM. Finally, an agent is implemented to provide training data for the LSTM. Experimental results show that the model wins more chips, demonstrating that it can serve as a solution for incomplete-information games.
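The abstract describes a three-stage pipeline: a CNN estimates the current win rate, an LSTM consumes that win rate together with historical actions to predict the game result, and a DQN turns the prediction into a betting strategy. The sketch below wires these stages together in PyTorch. All layer sizes, the card encoding (a 4x13 suit-by-rank grid), the history length, and the three-action space (fold/call/raise) are illustrative assumptions, not the paper's actual architecture.

```python
# Minimal sketch of the integration model, assuming a 4x13 card encoding,
# an 8-step action history, and a fold/call/raise action space.
import torch
import torch.nn as nn

N_ACTIONS = 3          # assumed action space: fold / call / raise
HIST_LEN = 8           # assumed length of the historical-action window

class WinRateCNN(nn.Module):
    """CNN that predicts the current win rate from the card state."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Flatten(),
        )
        self.head = nn.Sequential(nn.Linear(32 * 4 * 13, 1), nn.Sigmoid())

    def forward(self, cards):                    # cards: (B, 1, 4, 13)
        return self.head(self.features(cards))   # win rate in [0, 1]

class ResultLSTM(nn.Module):
    """LSTM that predicts the game result from the win rate concatenated
    with the historical action information at each time step."""
    def __init__(self, action_dim=N_ACTIONS, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(1 + action_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)          # predicted chip outcome

    def forward(self, win_rates, actions):        # (B, T, 1), (B, T, A)
        out, _ = self.lstm(torch.cat([win_rates, actions], dim=-1))
        return self.head(out[:, -1])              # last-step prediction

class DQN(nn.Module):
    """Q-network mapping (win rate, LSTM prediction) to action values."""
    def __init__(self, action_dim=N_ACTIONS):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, action_dim))

    def forward(self, state):                     # state: (B, 2)
        return self.net(state)

# Wiring the three components together for a single decision:
cnn, lstm, dqn = WinRateCNN(), ResultLSTM(), DQN()
cards = torch.zeros(1, 1, 4, 13)                  # placeholder card encoding
history = torch.zeros(1, HIST_LEN, N_ACTIONS)     # placeholder action history
win = cnn(cards)                                  # (1, 1) predicted win rate
wins = win.unsqueeze(1).expand(-1, HIST_LEN, -1)  # repeat over time steps
pred = lstm(wins, history)                        # predicted game result
q = dqn(torch.cat([win, pred], dim=-1))           # Q-values over actions
action = q.argmax(dim=-1)                         # greedy action selection
```

In this sketch the DQN's state is just the two scalar predictions; a training loop would add the usual replay buffer, target network, and epsilon-greedy exploration on top, with the self-play agent mentioned in the abstract supplying the LSTM's training data.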