Deep Learning has revolutionized the whole discipline of machine learning, heavily impacting fields such as Computer Vision, Natural Language Processing, and other domains concerned with the processing of raw inputs. Nonetheless, Deep Networks are still difficult to interpret, and their inference process is all but transparent. Moreover, there are still challenging tasks for Deep Networks: contexts where success depends on structured knowledge that cannot easily be provided to the networks in a standardized way.

However, Artificial Intelligence has become well known for computer programs beating grandmasters in games such as Chess and Othello. Nevertheless, there are still many board games for which computers have not been able to achieve a level of play equivalent to human players. One such game is Morabaraba, an ancient African board game. No prior research has been conducted on creating an artificial player for this indigenous game. This research aims at introducing a design for an artificial player for the Morabaraba game. An attempt is made to systematically determine an optimal strategy for the game using game theory. The study posits that the artificial player will use the optimal strategy to play and win the game. It proves that an equilibrium point exists between two Morabaraba strategies and shows that, for fixed utility values in a game matrix, there exists an equilibrium point at which the strategy of both players remains unchanged. The paper endorses the view that this equilibrium strategy describes a Morabaraba player's moves and could be used in the implementation of an artificial player. With the obtained results, the research recommends the implementation of an artificial player that does not counterattack. The author presents an investigation into how to find an optimal strategy for the Morabaraba game, which will benefit the implementation of an artificial player.
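The equilibrium argument above can be illustrated with a small sketch. The payoff values below are purely hypothetical (they are not taken from the paper); the code only shows what a pure-strategy equilibrium, i.e. a saddle point of a zero-sum game matrix, looks like: a cell that is the minimum of its row and the maximum of its column, so neither player gains by changing strategy unilaterally.

```python
# Hypothetical payoff matrix for the row player; rows and columns stand for
# two Morabaraba strategies (e.g. "counterattack" vs. "do not counterattack").
# The numbers are illustrative assumptions, not values from the study.
payoffs = [
    [2, 4],
    [3, 5],
]

def find_saddle_points(m):
    """Return (row, col) cells that are pure-strategy equilibria:
    each is the minimum of its row and the maximum of its column."""
    points = []
    for i, row in enumerate(m):
        for j, v in enumerate(row):
            column = [m[k][j] for k in range(len(m))]
            if v == min(row) and v == max(column):
                points.append((i, j))
    return points

print(find_saddle_points(payoffs))  # → [(1, 0)]
```

At the saddle point (1, 0) the row player's guaranteed payoff equals the column player's best cap, so the strategy pair of both players remains unchanged, matching the fixed-point property the abstract describes.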
Making computers that can play games against human opponents has been a challenge since the beginning of research into Artificial Intelligence. The strong solutions of Nine Men's Morris and its variant, Lasker, are well-known results (the starting positions are draws). We solved these games, and calculated extended strong solutions for them: the game-theoretic values of all possible game states that could be reached from certain starting positions where the number of stones to be placed by the players is different from the standard rules. We also solved the previously unsolved third variant, Morabaraba, with interesting results: most of the starting positions where the players can place an equal number of stones (including the standard starting position) are wins for the first player (as opposed to the above games, where these are usually draws). We also developed a multi-valued retrograde analysis, and used it as a basis for an algorithm for solving these games ultra-strongly. When our program is playing against a fallible opponent, it has a greater chance of achieving a better result than the game-theoretic value, compared to randomly selecting between "just strongly" optimal moves. When a program is playing based only on the strong solution, it is surprisingly easy for the opponent to achieve the game-theoretic value. Earlier ultra-strong solutions used local heuristics or learning during games, but we incorporated our algorithm into the retrograde analysis.
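The retrograde analysis mentioned above can be sketched on a toy game. This is not the authors' multi-valued algorithm or their Morabaraba solver; it is a minimal single-valued version on an assumed stand-in game (take 1 or 2 stones, taking the last stone wins), showing the core idea: assign values to terminal positions first, then propagate WIN/LOSS backwards through predecessor links until every reachable position is labelled.

```python
from collections import deque

# Toy stand-in game: from n stones a move removes 1 or 2; the side to move
# at n == 0 has no stones to take and has lost.
N = 12
succ = {n: [n - k for k in (1, 2) if n - k >= 0] for n in range(N + 1)}
pred = {n: [] for n in range(N + 1)}
for n, moves in succ.items():
    for s in moves:
        pred[s].append(n)

value = {0: "LOSS"}                         # terminal position, seeds the sweep
remaining = {n: len(succ[n]) for n in range(N + 1)}  # unresolved successors
queue = deque([0])

while queue:                                # backward propagation
    p = queue.popleft()
    for q in pred[p]:
        if q in value:
            continue
        if value[p] == "LOSS":              # any losing successor => q is a WIN
            value[q] = "WIN"
            queue.append(q)
        else:
            remaining[q] -= 1
            if remaining[q] == 0:           # all successors are WINs => q is a LOSS
                value[q] = "LOSS"
                queue.append(q)

print([value[n] for n in range(7)])
# → ['LOSS', 'WIN', 'WIN', 'LOSS', 'WIN', 'WIN', 'LOSS']
```

The appeal of working backwards from terminals, rather than searching forward, is that each position is labelled exactly once; this is what makes strong solutions of games with large but enumerable state spaces, such as the Morris family, feasible.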