Learning To Play Strong Poker



Poker is an interesting test-bed for artificial intelligence research. It is a game of imperfect knowledge, where multiple competing agents must deal with risk management, opponent modeling, unreliable information, and deception, much like decision-making applications in the real world. Opponent modeling is one of the most difficult problems in decision-making applications, and in poker it is essential to achieving high performance. This paper describes and evaluates the implicit and explicit learning in the poker program Loki. Loki implicitly "learns" sophisticated strategies by selectively sampling likely cards for the opponents and then simulating the remainder of the game. The program uses explicit learning to observe its opponents, construct opponent models, and dynamically adapt its play to exploit patterns in the opponents' play. The result is a program capable of playing reasonably strong poker, but considerable research remains to be done before it can play at a world-class level.
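The selective-sampling simulation mentioned above can be illustrated with a minimal sketch. The helper functions passed in here, `hand_weight` (the opponent model's relative likelihood that a given opponent holds a candidate hand) and `rollout` (playing the rest of the hand out once for a candidate action), are hypothetical placeholders rather than Loki's actual interfaces; the sketch only shows the general structure: deal opponents hands biased by the opponent model, simulate each candidate action many times, and pick the action with the best average outcome.

```python
import random

def choose_action(my_hand, board, deck, opponents, hand_weight, rollout,
                  trials=400):
    """Selective-sampling simulation (illustrative sketch, not Loki's code).

    For each trial, deal every opponent a hand drawn in proportion to the
    weight the opponent model assigns it, then play the remainder of the
    game out once per candidate action.  Return the action with the
    highest average outcome (e.g. bets won) over all trials.
    """
    totals = {"fold": 0.0, "call": 0.0, "raise": 0.0}
    for _ in range(trials):
        remaining = list(deck)
        sampled = {}
        for opp in opponents:
            # Enumerate the two-card hands still possible and weight each
            # by how consistent it is with this opponent's observed play.
            candidates = [(a, b) for i, a in enumerate(remaining)
                                 for b in remaining[i + 1:]]
            weights = [hand_weight(opp, h) for h in candidates]
            hand = random.choices(candidates, weights=weights, k=1)[0]
            sampled[opp] = hand
            remaining = [c for c in remaining if c not in hand]
        for action in totals:
            totals[action] += rollout(my_hand, sampled, board, action)
    return max(totals, key=lambda a: totals[a] / trials)
```

Because the sampling is biased by the opponent models, the simulated outcomes reflect what the opponents are likely to hold given how they have bet, rather than a uniform deal over all unseen cards.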

The artificial intelligence community has recently benefited from the tremendous publicity generated by the development of chess, checkers and Othello programs that are capable of defeating the best human players. However, there is an important difference between these board games and popular card games like bridge and poker. In the board games, players always have complete knowledge of the entire game state since it is visible to both participants. This property allows high performance to be achieved by a brute-force search of the game tree. In contrast, bridge and poker involve imperfect information since the other players’ cards are not known, and search alone is insufficient to play these games well. Dealing with imperfect information is the main reason why progress on developing strong bridge and poker programs has lagged behind the advances in other games. However, it is also the reason why these games promise higher potential research benefits.

Poker has a rich history of scientific investigation. Economists and mathematicians have applied a variety of analytical techniques to certain poker-related problems. However, since “real” poker is too complex for this approach, they have studied simplified variants ([15] for example). Other individuals, including expert players with a penchant for mathematics, have gained considerable insight about “real” poker by using partial mathematical analyses, simulation, and ad-hoc expert experience ([18] is a popular example).

Until recently, the computing science community has largely ignored poker. However, poker has a number of attributes that make it an interesting domain for artificial intelligence (AI) research. These attributes include imperfect knowledge, multiple competing agents, risk management, opponent modeling, deception, and dealing with unreliable information. All of these are challenging dimensions to a difficult problem.

There are two ways that poker can be used as an interesting testbed for artificial intelligence research. One approach is to use simplified variants that are easier to analyze. For example, Findler worked on and off for 20 years on a poker-playing program for simplified 5-card draw poker [7]. He modeled human cognitive processes and built a program that could learn. The danger with this approach is that simplification can remove the challenging components of the problem that are of interest to AI researchers. A variant of this approach is to look at a subset of the game, and try to address each component in isolation. Several attempts have been made to apply machine learning techniques to individual aspects of poker (some examples include [19,21,6]).

The second approach, and the one that we advocate, is to tackle the entire problem: choose a real variant of poker and address all the considerations necessary to build a program that performs at a level comparable to that of the best human players. Clearly this is the most ambitious approach, but also the one that promises the most exciting research opportunities.

Recently, Koller and Pfeffer have been investigating poker from a theoretical point of view [13]. They present a new algorithm for finding optimal randomized strategies in two-player imperfect information competitive games. This is done in their Gala system, a tool for specifying and solving problems of imperfect information. Their system builds decision trees to find the optimal game-theoretic strategy. However, the tree sizes prompted the authors to state that "…we are nowhere close to being able to solve huge games such as full-scale poker, and it is unlikely that we will ever be able to do so." In theory, their approach could be used to build an optimal poker player for a real variant of poker. In practice, it will require too many computational resources unless further improvements are discovered.
