A lot of "artificial intelligence" programs in gaming these days are more about positional evaluation and canned responses than any real learning. One of my favorite recent AI articles, Skynet Meets the Swarm, about the progress of the "Berkeley Overmind" StarCraft AI, mostly reveals it to be a collection of scripts. It wins with micro-controlled Mutalisks and never really goes off-script.
Meanwhile, chess computers have gotten good enough to beat grandmasters regularly, but they're able to do so because of advances in hardware -- not because they're learning. As far as I'm aware, chess engines still rely on brute force, grinding through a vast tree of board positions (plus opening books and endgame tablebases) rather than incorporating any new knowledge. Even in go/baduk/weiqi, where brute force is said to be impossible, computers have been making serious advances lately, but probably only because of Monte Carlo tree search -- playing out huge numbers of random games and following the "widest path" -- and better hardware.
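For context, here's roughly what that brute-force core looks like: a fixed-depth alpha-beta search over the game tree with a hand-tuned static evaluation at the leaves. This is only a generic sketch (the evaluate, legal_moves, and apply helpers are placeholders I'm assuming, not any real engine's API), but it shows why faster hardware translates directly into deeper search rather than into learning.

```python
# Generic fixed-depth alpha-beta search, the brute-force core chess engines
# build on (real engines add opening books, endgame tablebases, move
# ordering, transposition tables, and so on).  evaluate(), legal_moves(),
# and apply() are hypothetical placeholders, not a real engine's API.

def alphabeta(position, depth, alpha, beta, maximizing):
    moves = legal_moves(position)
    if depth == 0 or not moves:
        return evaluate(position)        # static, hand-tuned score -- no learning here
    if maximizing:
        best = float("-inf")
        for move in moves:
            score = alphabeta(apply(position, move), depth - 1, alpha, beta, False)
            best = max(best, score)
            alpha = max(alpha, best)
            if alpha >= beta:            # prune: the opponent would never allow this line
                break
        return best
    else:
        best = float("inf")
        for move in moves:
            score = alphabeta(apply(position, move), depth - 1, alpha, beta, True)
            best = min(best, score)
            beta = min(beta, best)
            if alpha >= beta:
                break
        return best
```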
The real breakthrough will come when a truly heuristic approach surfaces: one that can evaluate a move in terms of the influence it has on the current board (rather than against millions of past records) and assemble its own set of "best practices," improving over time from its own experience. Has that happened and I missed it? Or are we just throwing more and more CPUs at the same old algorithms?
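To make concrete what I mean, here's a toy sketch of an evaluator that scores the current board from its own influence features and then tunes its weights from the outcomes of games it has actually played. Everything here -- the feature names, the board encoding as a dict of (x, y) -> +1/-1, the gradient-style update -- is my own illustration, not a description of any existing program.

```python
import math

# Toy example only: a board evaluator built from "influence" features whose
# weights are adjusted after each finished game, so the program accumulates
# its own best practices instead of consulting a database of past games.
# Features, board encoding, and update rule are all my own assumptions.

SIZE = 9
weights = {"own_stones": 1.0, "own_liberties": 0.3}   # the learned parameters

def features(board, player):
    """board maps (x, y) -> +1 or -1; returns simple influence features for player."""
    stones = sum(1 for owner in board.values() if owner == player)
    liberties = 0
    for (x, y), owner in board.items():
        if owner != player:
            continue
        for n in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= n[0] < SIZE and 0 <= n[1] < SIZE and n not in board:
                liberties += 1
    return {"own_stones": stones, "own_liberties": liberties}

def evaluate(board, player):
    f = features(board, player)
    raw = sum(weights[k] * f[k] for k in weights)
    return math.tanh(raw / 10.0)          # squashed to (-1, 1), comparable to a win/loss

def learn_from_game(history, winner, lr=0.01):
    """history: list of (board, player) pairs from one game; winner: +1 or -1."""
    for board, player in history:
        target = 1.0 if player == winner else -1.0
        value = evaluate(board, player)
        error = target - value
        f = features(board, player)
        for k in weights:
            # gradient step: nudge each weight so this position's score moves
            # toward the actual game result
            weights[k] += lr * error * (1.0 - value * value) * f[k] / 10.0
```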
If you know of any articles about AI that go into depth about the strategies used under the hood, post them here.