<p>First you can check <a href="https://github.com/THeK3nger/Cobra-Draughts/blob/master/cobradraughts/core/DraughtsBrain.py" rel="nofollow">this piece of code</a> for a checkers AI that I wrote years ago. The interesting part is the last function (<code>alphabeta</code>). (It's in Python, but you can read it as <em>pseudocode</em>.)</p> <p>Obviously I cannot teach you all the alpha/beta theory because it can be a little tricky, but maybe I can give you some practical tips.</p> <p><strong>Evaluation Function</strong></p> <p>This is one of the key points of a good min/max alpha/beta algorithm (and of any other informed search algorithm). Writing a good heuristic function is the artistic part of AI development. You have to know the game well and talk with expert players to understand which board features are important for answering the question: <em>How good is this position for player X?</em></p> <p>You have already identified some good features, like mobility, stability and free corners. However, note that the evaluation function has to be fast because it will be called many times.</p> <p>A basic evaluation function is</p> <pre><code>H = f1 * w1 + f2 * w2 + ... + fn * wn </code></pre> <p>where <code>f</code> is a feature score (for example the number of free corners) and <code>w</code> is a corresponding weight that says <em>how much the feature f matters in the total score</em>.</p> <p>There is only one way to find the weight values: experience and experiments. ;)</p> <p><strong>The Basic Algorithm</strong></p> <p>Now you can start with the algorithm. The first step is understanding game-tree navigation. In my AI I simply used the main board as a blackboard where the AI can try the moves.</p> <p>For example, we start with the board in a certain configuration <strong>B1</strong>.</p> <p><em>Step 1: get all the available moves</em>. You have to find all the moves applicable to B1 for a given player.
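As a concrete sketch of the weighted evaluation sum, here is a minimal version in Python. The feature names, scores and weights below are made up for illustration; only the formula comes from the answer.

```python
# Sketch of a weighted evaluation function H = f1*w1 + f2*w2 + ... + fn*wn.
# Feature values and weights are hypothetical examples, not tuned numbers.

def board_score(features, weights):
    """Combine per-feature scores into a single heuristic value H."""
    return sum(f * w for f, w in zip(features, weights))

# Example: mobility, stability and free corners for some position.
features = [12, 3, 2]           # f1..f3: raw feature scores
weights = [1.0, 2.5, 10.0]      # w1..w3: found by experiments
print(board_score(features, weights))  # 12*1.0 + 3*2.5 + 2*10.0 = 39.5
```

Finding good weights is exactly the "experience and experiments" part: you adjust them until the AI's play matches what strong players consider good positions.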
In my code this is done by <code>self.board.all_move(player)</code>. It returns a list of moves.</p> <p><em>Step 2: apply the move and start the recursion</em>. Assume that the function has returned three moves (<strong>M1</strong>, <strong>M2</strong>, <strong>M3</strong>).</p> <ol> <li>Take the first move M1 and apply it to obtain a new board configuration B11.</li> <li>Apply the algorithm recursively to the new configuration (find all the moves applicable in B11, apply them, recurse on the result, ...).</li> <li>Undo the move to restore the B1 configuration.</li> <li>Take the next move M2 and apply it to obtain a new board configuration B12.</li> <li>And so on.</li> </ol> <p>NOTE: Step 3 works only if all the moves are reversible. Otherwise you have to find another solution, such as allocating a new board for each move.</p> <p>In code:</p> <pre><code>for mov in moves:
    self.board.apply_action(mov)
    v = max(v, self.alphabeta(alpha, beta, level - 1, self._switch_player(player), weights))
    self.board.undo_last()
</code></pre> <p><em>Step 3: stop the recursion</em>. This tree is very deep, so you have to put a search limit on the algorithm. A simple way is to stop the recursion after <code>n</code> levels. For example, I start with <strong>B1</strong>, <code>max_level=2</code> and <code>current_level=max_level</code>.</p> <ol> <li>From B1 (current_level 2) I apply, for example, the M1 move to obtain B11.</li> <li>From B11 (current_level 1) I apply, for example, the M2 move to obtain B112.</li> <li>B112 is a "current_level 0" board configuration, so I stop the recursion. I return the evaluation function value of B112 and come back to level 1.</li> </ol> <p>In code:</p> <pre><code>if level == 0:
    value = self.board.board_score(weights)
    return value
</code></pre> <p>Now... the standard algorithm pseudocode returns the value of the best <em>leaf</em>. But I want to know which move brings me to the best leaf!
To do this you have to find a way to map leaf values to moves. For example, you can save move sequences: starting from B1, the sequence (M1 M2 M3) brings the player to board B123 with value -1; the sequence (M1 M2 M2) brings the player to board B122 with value 2; and so on... Then you can simply select the move that brings the AI to the best position.</p> <p>I hope this can be helpful.</p> <p>EDIT: Some notes on <strong>alpha-beta</strong>. The alpha-beta algorithm is hard to explain without graphical examples. For this reason I want to link one of the most detailed alpha-beta pruning explanations I've ever found: <strong><a href="http://www.cs.ucla.edu/~rosen/161/notes/alphabeta.html" rel="nofollow">this one</a></strong>. I think I cannot really do better than that. :)</p> <p>The key point is: alpha-beta pruning adds two bounds to the nodes of MIN-MAX. These bounds can be used to decide whether a sub-tree should be expanded or not.</p> <p>These bounds are:</p> <ul> <li><em>Alpha</em>: the maximum lower bound of the possible solutions.</li> <li><em>Beta</em>: the minimum upper bound of the possible solutions.</li> </ul> <p>If, during the computation, we find a situation in which <code>Beta &lt; Alpha</code>, we can stop the computation for that sub-tree.</p> <p>Obviously, check the previous link to understand how it works. ;)</p>
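The two ideas above, tracking which root move led to the best leaf and pruning when the bounds cross, can be combined in one short sketch. The tree here is a hand-made nested list, not a real game; function names are mine, not from the linked code.

```python
# Alpha-beta with root move tracking on a toy tree: an inner node is a
# list of children, a leaf is a number. best_move() remembers which
# root move produced the best value; alphabeta() prunes sub-trees as
# soon as beta <= alpha, because the opponent would never allow them.

import math

def alphabeta(node, alpha, beta, maximizing):
    if isinstance(node, (int, float)):      # leaf: return its value
        return node
    if maximizing:
        value = -math.inf
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)       # raise the lower bound
            if beta <= alpha:               # cutoff for this sub-tree
                break
        return value
    else:
        value = math.inf
        for child in node:
            value = min(value, alphabeta(child, alpha, beta, True))
            beta = min(beta, value)         # lower the upper bound
            if beta <= alpha:
                break
        return value

def best_move(children):
    """At the root, map each move to its sub-tree value and keep the best."""
    best, best_mov = -math.inf, None
    for mov, child in enumerate(children):
        v = alphabeta(child, -math.inf, math.inf, False)
        if v > best:
            best, best_mov = v, mov
    return best_mov, best

# Two root moves: M0 leads to leaves [3, 5], M1 leads to leaves [2, 9].
print(best_move([[3, 5], [2, 9]]))  # M0 wins: min(3, 5) = 3 > min(2, 9) = 2
```

Note that the root loop keeps the move itself, not just the value; that is exactly the "map leaf values to moves" idea, done in the simplest possible way.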