In a 6-vs-6 Pokemon battle, you have up to 9 choices every turn: 4 moves and 5 possible switches. Sometimes there is also the option to Mega Evolve or use a Z-Move, which can increase the number of options to 13. As your opponent also has ~9 choices, there are ~81 possible ways the next turn could go. That means there are 81^5 ≈ 3,500,000,000 ways the game could evolve over just the next 5 turns. This calculation assumes that you know your opponent's sets, and it doesn't even take into account the randomness from things like damage rolls and freezes. Simulating all these lines with the Pokemon Showdown implementation would take hundreds of years.
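As a quick back-of-the-envelope check of those numbers:

```python
# Sanity-checking the branching factor of a 6v6 singles battle.
moves, switches = 4, 5
options = moves + switches              # up to 9 choices per player per turn
outcomes_per_turn = options ** 2        # both players choose simultaneously: 81
five_turns = outcomes_per_turn ** 5     # 81**5 = 3486784401, roughly 3.5 billion

print(outcomes_per_turn, five_turns)    # 81 3486784401
```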
Luckily, there are some ways to cut this down. One option is to use Monte Carlo tree search (MCTS). With this approach, you begin by just selecting random moves for both players until the game ends. And if this leads to you winning, then you go: “Oh, these were some good moves I chose there, I should select them more often!” If you do this long enough, you will eventually get a pretty good understanding of what the best moves are in a particular situation, even though you only ever look at a tiny fraction of all possible playouts. MCTS is often used in combination with reinforcement learning.
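To make the random-playout idea concrete, here is a minimal sketch on a toy game (Nim: take 1–3 stones, whoever takes the last stone wins), standing in for a full battle simulator. This is flat Monte Carlo rather than full MCTS with a tree policy, but the core loop is the same: play random moves until the game ends and prefer the moves that led to more wins.

```python
import random

def legal_moves(stones):
    return [m for m in (1, 2, 3) if m <= stones]

def random_playout(stones, my_turn):
    # Select random moves for both players until the game ends.
    while stones > 0:
        stones -= random.choice(legal_moves(stones))
        my_turn = not my_turn
    return not my_turn        # the player who took the last stone has won

def best_move(stones, n_playouts=2000):
    # Estimate each move's win rate from random playouts, pick the best.
    win_rate = {}
    for move in legal_moves(stones):
        wins = sum(random_playout(stones - move, my_turn=False)
                   for _ in range(n_playouts))
        win_rate[move] = wins / n_playouts
    return max(win_rate, key=win_rate.get)

print(best_move(5))   # taking 1 leaves the opponent in a losing position
```

Even this crude version finds the right move, because bad moves lose more of their random playouts on average.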
Another popular approach for searching the game tree is the minimax algorithm. Or rather, for non-deterministic games like Pokemon, the expectiminimax algorithm. This algorithm always selects the move that minimises the expected loss (= the chance of the opponent winning) in the worst-case scenario (= the opponent selecting the best response to your move choice). This ‘always play safe’ strategy is not optimal, because it is deterministic and can therefore be exploited very easily. Nevertheless, you can get quite far with it, as demonstrated in this Pokemon AI implementation from pmariglia.
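A sketch of expectiminimax on a tiny hand-built tree, with made-up numbers: leaves are estimated win chances, and chance nodes average their children weighted by probability, which is how randomness like a 90%-accurate move gets handled.

```python
def expectiminimax(node):
    if isinstance(node, (int, float)):
        return node                                   # leaf: estimated win chance
    kind, children = node
    if kind == "max":                                 # our decision
        return max(expectiminimax(c) for c in children)
    if kind == "min":                                 # opponent's decision
        return min(expectiminimax(c) for c in children)
    # "chance" node: children are (probability, subtree) pairs
    return sum(p * expectiminimax(c) for p, c in children)

# A risky 90%-accurate attack vs. a safe option (all values made up):
risky = ("chance", [(0.9, 0.8), (0.1, 0.1)])   # hit -> 0.8, miss -> 0.1
safe = 0.6
print(expectiminimax(("max", [risky, safe])))  # 0.9*0.8 + 0.1*0.1 = 0.73
```

Here the risky move wins out because its expected value (0.73) beats the safe line (0.6), even though its worst case is much worse.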
When using this approach, you can make use of alpha-beta pruning. This simply means you stop examining a branch of the search tree as soon as that branch can be proven to be worse than one of the alternatives you have already evaluated.
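Here is a minimal sketch of minimax with alpha-beta pruning over a hypothetical hard-coded toy tree (a real bot would generate the children from a battle simulator instead). We maximise our win chance, the opponent minimises it; as soon as a branch is provably worse than an alternative, its remaining children are skipped.

```python
import math

def alphabeta(node, maximising, alpha=-math.inf, beta=math.inf):
    if isinstance(node, (int, float)):
        return node                       # leaf: evaluated game state
    value = -math.inf if maximising else math.inf
    for child in node:
        v = alphabeta(child, not maximising, alpha, beta)
        if maximising:
            value, alpha = max(value, v), max(alpha, v)
        else:
            value, beta = min(value, v), min(beta, v)
        if alpha >= beta:
            break                         # provably worse: prune the rest
    return value

# Our 3 possible moves, each answered by 2 opponent replies (made-up values):
tree = [[0.3, 0.7], [0.5, 0.9], [0.2, 0.6]]
print(alphabeta(tree, maximising=True))   # 0.5
```

In the third branch, once the opponent reply worth 0.2 is seen, the 0.6 leaf is never examined: that branch can no longer beat the 0.5 we already secured.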
So what can we take away from this to improve in competitive Pokemon as a player?
One interesting detail of alpha-beta pruning is that the number of branches you can prune depends heavily on the order in which you evaluate the moves. If you always evaluate from worst to best move, you get essentially no benefit from alpha-beta pruning compared to vanilla minimax search. But if you always evaluate from best to worst move, you can search roughly twice as deep with the same amount of computation. So: always try to evaluate the most promising moves first?
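A small experiment makes the effect visible: count how many leaves alpha-beta actually evaluates on a random depth-4 tree with 3 moves per turn, once with children sorted best-first and once worst-first. For the ordering I cheat with an oracle (the true minimax value); a real engine would use a cheap heuristic instead.

```python
import math
import random

def minimax(node, maximising):
    if isinstance(node, float):
        return node
    values = [minimax(child, not maximising) for child in node]
    return max(values) if maximising else min(values)

def alphabeta(node, maximising, alpha, beta, counter):
    if isinstance(node, float):
        counter[0] += 1                      # count every leaf we evaluate
        return node
    value = -math.inf if maximising else math.inf
    for child in node:
        v = alphabeta(child, not maximising, alpha, beta, counter)
        if maximising:
            value, alpha = max(value, v), max(alpha, v)
        else:
            value, beta = min(value, v), min(beta, v)
        if alpha >= beta:
            break                            # prune the remaining children
    return value

def ordered(node, maximising, best_first):
    # Recursively sort children by their true minimax value (oracle ordering).
    if isinstance(node, float):
        return node
    children = [ordered(c, not maximising, best_first) for c in node]
    children.sort(key=lambda c: minimax(c, not maximising),
                  reverse=(maximising == best_first))
    return children

def build(depth, branching=3):
    return (random.random() if depth == 0
            else [build(depth - 1, branching) for _ in range(branching)])

random.seed(1)
tree = build(4)
leaf_counts = {}
for label, best_first in (("best-first", True), ("worst-first", False)):
    counter = [0]
    alphabeta(ordered(tree, True, best_first), True, -math.inf, math.inf, counter)
    leaf_counts[label] = counter[0]
    print(label, counter[0], "of", 3 ** 4, "leaves evaluated")
```

With perfect ordering the number of evaluated leaves approaches the theoretical minimum, which is roughly the square root of the full leaf count; that square-root saving is exactly what lets you search about twice as deep.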
Then again, I think these search tree algorithms are not very helpful for improving human play. I have the feeling that people use more of a heuristic-based approach. And the excellent metagrok Pokemon AI proves that you can play decently without doing any search at all.
I think you become a great Pokemon player not by going deep into the search tree, but by developing excellent abilities for evaluating the game state. If you know, for example, that you need your currently active Porygon2 to stop a potential Garchomp sweep in the late game, you only need to think about which switch to make, not about which attack would be best.
And that’s it for now with my series about ‘The math behind competitive Pokemon’.
I have a few more ideas, but I feel that the longer I try to keep this series alive, the more detached it will become from any actual playing advice.
I’m still interested in building strong Pokemon AI. More than a year ago, I started a first attempt together with Christopher Wolff, but we didn’t get quite as far as we would have liked, and in the meantime even better Pokemon AI frameworks have been developed. But now that I have finished my life simulation game Cell Tune, I can hopefully devote some time to developing a better AI in the coming months.
If you are interested in building Pokemon AI (or just chatting about it), you can join our PokeML discord channel here.