Imagined Thinking Machines: Artificial Intelligence in Video Games
Published on Apr 2, 2015 by Stephanie Rizzo
When Taito introduced the now classic arcade game Space Invaders in 1978, it was among the first video games to make use of rudimentary artificial intelligence (A.I.) in the form of stored movement patterns. A lot has changed since then. Consoles have gotten faster, and computerized opponents have evolved to behave in new and interesting ways. Yet when it comes to most games, developers still rely on hard-coded methods for setting a challenge level. This is because players want A.I. that’s good, but not too good.
“It’s very difficult to find that exact break-even point of what is challenging enough for a player versus what’s going to make them feel frustrated and quit,” says Ed Younskevicius, a Course Director in the Game Development program who teaches a class on artificial intelligence in games. “Our goal is to try to fool the player a little bit into thinking the computer is smarter than it actually is.”
To do that, developers employ algorithms to dictate the computer’s behavior. There are dozens of approaches out there, but two of the most popular are finite state machines and behavior trees.
Finite state machines are a simple form of A.I. built around the idea that each object or character in a game has a certain number of activities, or states, that it’s able to perform, and that moving from one state to another requires a trigger. For instance, a standard guard A.I. might have three states: patrol, alert, and attack. In the patrol state, the guard walks along a preset path. Once the player makes a noise, that noise triggers a shift from the patrol state to the alert state. If the player fires a shot or moves closer, the state changes again, from alert to attack; but if the player moves out of the guard’s line of sight, the guard will eventually return to the patrol state.
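The guard described above can be sketched in a few lines of Python. This is a minimal, hypothetical implementation; the state names and triggers come from the example, while the class and method names are illustrative.

```python
from enum import Enum, auto

class GuardState(Enum):
    PATROL = auto()
    ALERT = auto()
    ATTACK = auto()

class Guard:
    """A three-state guard: patrol -> alert -> attack, with fallbacks."""
    def __init__(self):
        self.state = GuardState.PATROL

    def update(self, heard_noise=False, sees_player=False):
        # Each transition fires only on its specific trigger.
        if self.state == GuardState.PATROL and heard_noise:
            self.state = GuardState.ALERT
        elif self.state == GuardState.ALERT:
            if sees_player:
                self.state = GuardState.ATTACK
            elif not heard_noise:
                # Nothing suspicious anymore: drift back to patrolling.
                self.state = GuardState.PATROL
        elif self.state == GuardState.ATTACK and not sees_player:
            # Lost sight of the player: fall back to alert.
            self.state = GuardState.ALERT
        return self.state
```

Because every state and trigger is explicit, a machine like this is easy to debug: the guard can only ever be doing one thing, and there is a finite list of events that can change that.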
For more complex A.I., developers commonly use behavior trees, a system of hierarchical tasks that controls the way a computerized opponent makes decisions. The “tree” is made up of basic command nodes (leaves) and higher-level command nodes (branches), with more complex tasks at the top and simpler tasks at the bottom. Returning to our example of a standard guard A.I., let’s assume that “patrol” is the first task it’s able to select. The algorithm might break that patrol task into smaller, simpler actions—walking around, for instance, or picking up a suspicious item. Each task breaks down into smaller ones until the A.I. reaches a point where it can execute a single, simple action. So if our guard is shooting at a player and runs out of ammo, that might signal the guard to start moving through the tasks associated with a retreat.
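A toy behavior tree can be built from just two branch types: a sequence, which runs its children in order and fails if any child fails, and a selector, which tries children in order until one succeeds. The sketch below is a simplified, hypothetical version of the guard's attack-or-retreat decision; production engines use much richer node sets.

```python
SUCCESS, FAILURE = "success", "failure"

class Leaf:
    """A single, simple action at the bottom of the tree."""
    def __init__(self, name, action):
        self.name, self.action = name, action
    def tick(self, blackboard):
        return self.action(blackboard)

class Sequence:
    """Branch node: runs children in order; fails if any child fails."""
    def __init__(self, *children):
        self.children = children
    def tick(self, blackboard):
        for child in self.children:
            if child.tick(blackboard) == FAILURE:
                return FAILURE
        return SUCCESS

class Selector:
    """Branch node: tries children in order until one succeeds."""
    def __init__(self, *children):
        self.children = children
    def tick(self, blackboard):
        for child in self.children:
            if child.tick(blackboard) == SUCCESS:
                return SUCCESS
        return FAILURE

# The guard attacks only while it has ammo; otherwise it retreats.
has_ammo = Leaf("has ammo?", lambda bb: SUCCESS if bb["ammo"] > 0 else FAILURE)
shoot    = Leaf("shoot",     lambda bb: SUCCESS)
retreat  = Leaf("retreat",   lambda bb: SUCCESS)

guard_tree = Selector(Sequence(has_ammo, shoot), retreat)
```

When the guard runs out of ammo, the sequence fails at the `has_ammo` check, and the selector automatically falls through to `retreat`—exactly the kind of graceful handoff between tasks described above.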
While neither of these formulas can be classified as actual “thinking,” they do give the player the impression that something more intelligent is going on.
“People will try to apply meaning to something,” says Ed. “I’ve seen many post-mortems of games where the A.I. programmer will say, ‘I watched these videos of people playing, and they said that the bot did something for a specific reason. But I didn’t code that.’ So just a little bit of randomness thrown in goes a long way.”
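The kind of light randomness Ed describes can be as simple as the hypothetical helper below: the bot usually takes its top-ranked action, but occasionally picks one at random. Players tend to read that unpredictability as intent. (All names here are illustrative.)

```python
import random

def pick_action(ranked_actions, wildcard_chance=0.1):
    """Usually return the best-ranked action, but occasionally
    choose one at random to keep the bot from feeling robotic."""
    if random.random() < wildcard_chance:
        return random.choice(ranked_actions)
    return ranked_actions[0]
```

Tuning `wildcard_chance` is one lever for that break-even point between challenging and frustrating: too high and the bot looks erratic, too low and it becomes predictable.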
If the ultimate goal is to keep the player entertained, then it’s important for developers to build in responses to every potential move a player might make in order to suggest intelligence. If a game involves a computerized companion, such as Ellie in The Last of Us, that companion should always work with the player rather than getting in the way. Ed suggests paying particular attention to pathfinding, or the way a character moves through the world of the game. A bot that behaves illogically can pull a player out of the game.
“We want bots to do stupid things,” laughs Ed. “But only stupid things that an actual human being would do. If [a bot] does something really stupid like walk into a wall, it looks horrible.”
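Pathfinding itself is a well-studied problem. The sketch below uses breadth-first search on a small tile grid; real games typically use A* over navigation meshes, but the core idea—never route a bot through a wall—is the same. The grid format and function name are assumptions for illustration.

```python
from collections import deque

def find_path(grid, start, goal):
    """Breadth-first search on a grid of strings where '#' is a wall.
    Returns a list of (row, col) cells from start to goal, or None."""
    rows, cols = len(grid), len(grid[0])
    queue = deque([start])
    came_from = {start: None}  # also serves as the visited set
    while queue:
        cell = queue.popleft()
        if cell == goal:
            # Walk back through came_from to rebuild the route.
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] != '#' and (nr, nc) not in came_from):
                came_from[(nr, nc)] = cell
                queue.append((nr, nc))
    return None  # unreachable: the bot should give up, not walk into walls
```

A search like this guarantees the bot only ever steps on walkable tiles, which is precisely what keeps it from the wall-walking behavior that breaks immersion.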
While A.I. didn’t change much during the first quarter-century of gaming, the last decade has seen radical shifts in technology. Earlier this year, software from Google’s DeepMind artificial intelligence group successfully taught itself to play 22 arcade games—including Space Invaders—without ever being exposed to the rules of gameplay. Gaming companies are now able to apply similar technology to computerized opponents. Microsoft Studios’ open-world racing game Forza Horizon 2 features a neural network system that collects and stores player data as a digital avatar, essentially learning player behavior as the game progresses. These “drivatars” live in the cloud, where they can be called upon to compete against other players, adopting driving styles similar to the human opponents they learned from. It’s a system not unlike a neural network in the brain, and could give us insight into the future of A.I. in gaming.
Despite these advancements, Ed suggests starting simple.
“I have a motto in my class, which is 'do it dumber.' The best strategies are often hard-coded simple strategies, and they often perform better because as you’re coding, you’ll be able to understand them and make sure there are no bugs. If you want to do something more complex down the road, then go nuts. But if you do the simple thing first, it’ll give you a nice baseline for testing other approaches.”