I'm a TA for an AI course at my university. The students recently had to deliver and demonstrate a system that beats 2048. Most used minimax with alpha-beta pruning, considering all possible moves and all possible placements of a 2 or 4 tile. That can make a bot a bit too cautious, so some used expectimax instead, weighting each outcome by the probability of it happening.
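For anyone curious, here is a minimal expectimax sketch for 2048. The board representation, search depth, and placeholder heuristic are my own choices for illustration, not anyone's actual submission:

```python
def slide_left(row):
    """Slide and merge one row to the left, 2048-style."""
    tiles = [t for t in row if t != 0]
    merged, i = [], 0
    while i < len(tiles):
        if i + 1 < len(tiles) and tiles[i] == tiles[i + 1]:
            merged.append(tiles[i] * 2)
            i += 2
        else:
            merged.append(tiles[i])
            i += 1
    return merged + [0] * (len(row) - len(merged))

def move(board, direction):
    """Apply a move (0=left, 1=right, 2=up, 3=down) to a 4x4 tuple-of-tuples board."""
    if direction in (2, 3):                 # transpose so vertical moves become horizontal
        board = tuple(zip(*board))
    rows = []
    for row in board:
        r = list(row)
        if direction in (1, 3):             # reverse so right/down become left
            r.reverse()
        r = slide_left(r)
        if direction in (1, 3):
            r.reverse()
        rows.append(tuple(r))
    new = tuple(rows)
    if direction in (2, 3):
        new = tuple(zip(*new))
    return new

def empty_cells(board):
    return [(r, c) for r in range(4) for c in range(4) if board[r][c] == 0]

def heuristic(board):
    # Placeholder heuristic: favour boards with more empty cells (swap in your own).
    return len(empty_cells(board))

def expectimax(board, depth, player_turn):
    if depth == 0:
        return heuristic(board)
    if player_turn:
        # Max node: try all four moves, skipping ones that change nothing.
        best = None
        for d in range(4):
            new = move(board, d)
            if new != board:
                v = expectimax(new, depth - 1, False)
                best = v if best is None else max(best, v)
        return best if best is not None else heuristic(board)
    # Chance node: average over all spawn positions, weighting a 2 at 90% and a 4 at 10%.
    cells = empty_cells(board)
    if not cells:
        return heuristic(board)
    total = 0.0
    for (r, c) in cells:
        for value, prob in ((2, 0.9), (4, 0.1)):
            rows = [list(row) for row in board]
            rows[r][c] = value
            total += prob * expectimax(tuple(map(tuple, rows)), depth - 1, True)
    return total / len(cells)

def best_move(board, depth=3):
    scored = []
    for d in range(4):
        new = move(board, d)
        if new != board:
            scored.append((expectimax(new, depth - 1, False), d))
    return max(scored)[1] if scored else None
```

Unlike minimax, the chance node averages over spawns instead of assuming the worst-case placement, which is exactly why it plays less cautiously.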
Those who had simpler heuristics did better. Combining 4-5 heuristics is hard, since you have to weight them against each other. The "gradients" mentioned here alone produced good results for most students. Of the ~50 people, most managed to demonstrate to me that they could get a 2048 tile within a time limit. Some even got 8k and 16k tiles.
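One common way to implement the "gradient" heuristic is a weight matrix that decays away from one corner in a snake pattern, so the score rewards keeping the biggest tiles anchored there. A rough sketch (the exact weights here are made up):

```python
# Snake-shaped weight matrix: largest weight in the top-left corner,
# decreasing along a zig-zag path so merge chains line up naturally.
GRADIENT = (
    (15, 14, 13, 12),
    ( 8,  9, 10, 11),
    ( 7,  6,  5,  4),
    ( 0,  1,  2,  3),
)

def gradient_score(board):
    """Score a 4x4 board by summing tile value times positional weight."""
    return sum(board[r][c] * GRADIENT[r][c]
               for r in range(4) for c in range(4))
```

A board with its 2048 tile in the weighted corner scores far higher than the same tile anywhere else, which is all the heuristic needs to steer the search.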
I think most of them got the "Tetris effect" from watching their bot play a few rounds, tweaking, running it again, etc. for a few days. Probably watched blocks sliding around while making food etc. :p
In my case it actually lost, but from watching it play I can tell it made very questionable decisions on some moves, and it plays a lot less defensively than I usually do. It got really far, only a couple of moves away from winning. It had actually assembled every piece needed to win; it just failed to merge them together into a 2048. Pretty incredible!
The brilliance of this post is not that an AI program can beat another program, but in whether a humanly conceivable algorithm of this length can beat the raw cognitive power of human players. I'd seriously dig that.
I let the algorithm run to the end: 78992 points. Not only did I win, I got a 4096 tile (which is black, btw) and another 2048 tile. It died very close to reaching an 8192 tile.
Really like that game too. Did you try it with alpha-beta pruning? It should considerably speed up the "look into the future" part compared to plain minimax.
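For reference, this is minimax with alpha-beta pruning over an abstract game tree; the `children` and `value` callbacks are placeholders you'd wire up to a 2048 move generator and heuristic:

```python
def alphabeta(node, depth, alpha, beta, maximizing, children, value):
    """Minimax with alpha-beta pruning. `children(node)` yields successor
    nodes; `value(node)` scores a leaf or depth-limited position."""
    kids = children(node)
    if depth == 0 or not kids:
        return value(node)
    if maximizing:
        best = float("-inf")
        for child in kids:
            best = max(best, alphabeta(child, depth - 1, alpha, beta,
                                       False, children, value))
            alpha = max(alpha, best)
            if beta <= alpha:
                break          # prune: the minimizer will never allow this branch
        return best
    best = float("inf")
    for child in kids:
        best = min(best, alphabeta(child, depth - 1, alpha, beta,
                                   True, children, value))
        beta = min(beta, best)
        if beta <= alpha:
            break              # prune: the maximizer already has something better
    return best
```

The pruned result is identical to plain minimax; the speedup comes from skipping subtrees that can no longer affect the root choice, which lets you search deeper in the same time budget.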
Agreed. Another way to optimize it is by running the animations and the AI on separate threads. The deeper the search tree, the better the AI performs, irrespective of the heuristic used.
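A rough sketch of that split, assuming a queue between a background search thread and the UI thread; the worker body here is a stand-in for the real search loop, not actual 2048 code:

```python
import queue
import threading
import time

def ai_worker(moves_out):
    """Background thread: compute moves without ever blocking the UI."""
    for step in range(3):
        time.sleep(0.01)               # stand-in for an expensive tree search
        moves_out.put(f"move-{step}")  # hand each finished move to the UI
    moves_out.put(None)                # sentinel: no more moves (game over)

moves = queue.Queue()
threading.Thread(target=ai_worker, args=(moves,), daemon=True).start()

played = []
while True:
    m = moves.get()                    # "UI" thread: animate each move as it arrives
    if m is None:
        break
    played.append(m)
```

The queue decouples the two loops, so animation speed no longer caps how deep the search can go.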
Also a very interesting read.
Thanks for reading the post. :)
As the write-up says, this is way more addictive. I'm cheering it along every move it makes.
I got way too hooked on the game but never actually managed to beat it. This makes me wonder whether I could write something that could beat it faster than I can.
https://ov3y.github.io/2048-AI/
Someone should benchmark these against each other
This looks cool. I have the same addiction. What's the max score you've reached, by yourself and by the AI?