Xiao-Li Meng writes about the trade-off between efficiency and robustness (bottom of page 208, left side). A solution that works all the time is likely to be inefficient. You can greatly optimize it by making a few assumptions, but then it only works when those assumptions hold. His example is finding a parked car. If you have ever forgotten where you parked at a mall, you know the problem. The robust solution is always to park in the same spot (or very near it), and the way you guarantee that spot is available is to pick the worst one. No one is competing for the back of the lot or the farthest corner of the parking structure. You could park in the spot closest to your destination, but that spot varies not only with your destination but also with lot crowding, who just left, and so on.
In our games, we refer to mobs as having an AI, but we mean that in a very broad sense of AI. They have a few basic behavioral commands and the equivalent of a few buttons to push. Really fancy fights involve unvarying, scripted dances. A few even aspire to pre-planned reactions to certain events, but let’s not tax the system too much.
This is far from an artificial general intelligence that could hold a conversation, but it usually works just fine. The goblin is not expected to do much: close and stab. There are some details about its aggro range and its use of the standard aggro system, but there is no depth, and it really does not matter for the 10 seconds the goblin will be alive. More complex encounters maintain their fidelity by limiting the variables: they fight in limited arenas with closed doors, reset conditions, and things like rage timers to sweep up problems.
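To put a number on how little that is, here is a toy sketch of a “close and stab” brain as a tiny state machine. It is written in Python purely for illustration; every name, range, and threshold is invented rather than taken from any actual game.

```python
# Toy sketch of a "close and stab" mob brain as a tiny state machine.
# Every name, range, and threshold here is invented for illustration.
from dataclasses import dataclass

AGGRO_RANGE = 10.0   # notice players within this distance
ATTACK_RANGE = 1.5   # melee reach
LEASH_RANGE = 40.0   # reset condition: dragged too far from spawn

@dataclass
class Goblin:
    x: float
    y: float
    spawn: tuple
    state: str = "idle"

    def _dist(self, px: float, py: float) -> float:
        return ((self.x - px) ** 2 + (self.y - py) ** 2) ** 0.5

    def tick(self, px: float, py: float) -> str:
        """One AI update: idle -> chase -> attack, with a leash reset."""
        to_player = self._dist(px, py)
        to_home = self._dist(*self.spawn)

        if to_home > LEASH_RANGE:
            self.state = "reset"        # walk home, heal to full
        elif to_player <= ATTACK_RANGE:
            self.state = "attack"       # stab
        elif to_player <= AGGRO_RANGE or self.state in ("chase", "attack"):
            self.state = "chase"        # close
        else:
            self.state = "idle"
        return self.state

gob = Goblin(x=0.0, y=0.0, spawn=(0.0, 0.0))
print(gob.tick(5.0, 0.0))   # within aggro range -> "chase"
```

That is the whole decision loop: a few distance checks and four states, which is plenty for a goblin with a 10-second life expectancy.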
Take a step or two outside the assumed parameters, however, and the simple AI has no idea how to vary its behavior. It sphexishly follows its programming even when that programming works against its ostensible goals. You can kite enemies right past your perfectly safe allies. They get caught on rocks or run laps around buildings instead of making an ankle-high hop. You can turn their powers against them, and they will not stop following a script that has become suicidal.
I occasionally wonder how Deep Blue or one of the other chess supercomputers would react to blatant cheating. Replace one of your pawns with a rook mid-game or take two moves in a row. A human player will smack you and tell you to stop being an idiot. Does the computer even have the parameters to deal with that? I would expect an error and refusal to continue.
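For what it is worth, chess software does at least validate moves. A quick sketch using the python-chess library (my choice for illustration; Deep Blue’s internals are not public) shows the error and refusal I would expect:

```python
# Sketch: chess software refuses input outside its rules rather than
# adapting to it. Uses the python-chess library (pip install chess).
import chess

board = chess.Board()
board.push_san("e4")        # White's legal opening move is accepted

try:
    # It is Black's turn; sneaking in a second White move is illegal.
    board.push_san("d4")
except ValueError as err:
    print("Refused:", err)  # an error and refusal to continue, as expected
```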
: Zubon
H/T to Andrew Gelman for the Xiao-Li Meng link.
Shamus from Twenty Sided is doing a series on AI at the moment, worth reading up on:
http://www.shamusyoung.com/twentysidedtale/?p=4113
(and a few of the other recent posts)
CoH’s scripting is pretty good, all things considered. Some of the things the critters do are surprising, if you’re in their zone of consideration. Kill off the human Vhaz members, and the zombies become stupider and lose track of you feet away. Don’t kill ’em, and they’ll follow you for miles. Stay behind a mob? For a minute, you’re fine. Two minutes, and they’ll notice you and aggro.
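For illustration only, a toy version of a timed-notice rule like that might look like the following; the thresholds and the handler effect are my guesses, not CoH’s actual numbers.

```python
# Toy version of a timed-notice rule. Thresholds and the "human
# handlers" effect are guesses for illustration, not CoH's real values.

def notices_lurker(seconds_behind: float, handlers_alive: bool) -> bool:
    """Mobs eventually notice someone loitering behind them; the
    zombies get worse at it once their human handlers are dead."""
    threshold = 90.0 if handlers_alive else 300.0
    return seconds_behind >= threshold

print(notices_lurker(60.0, True))    # one minute: you're fine -> False
print(notices_lurker(120.0, True))   # two minutes: noticed -> True
```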
I’d love to see MMO mobs with even a slightly realistic level of intelligence.
Starter areas packed with anti-social wolves would be a thing of the past.
Pulling a single orc out of that camp? Forget about it. The rest of the camp would see that one orc getting his face smashed and either come running with blades drawn or scatter like chickens.
Tank classes would be more or less useless in parties when attacking anything with half a brain. Kill the healer, then the DPS, leaving the slow guy in full plate armor who keeps taunting you for last.
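A sketch of that kill order, just to show how little “half a brain” it takes; the role weights here are invented:

```python
# Toy sketch of "kill the healer first" targeting. The role weights
# are invented; a real game would also weigh threat and distance.
THREAT_ORDER = {"healer": 0, "dps": 1, "tank": 2}  # lower = die sooner

def pick_target(party: list[dict]) -> dict:
    """Ignore taunts; go straight for the softest dangerous target."""
    return min(party, key=lambda m: THREAT_ORDER[m["role"]])

party = [
    {"name": "Gruk", "role": "tank"},
    {"name": "Zap", "role": "dps"},
    {"name": "Mend", "role": "healer"},
]
print(pick_target(party)["name"])  # -> Mend
```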
Well, obviously the terrain would need to be more interactive, or the healing less necessary. Maybe healing realistically happens only after the battle? Or the healer could hide behind a tree so the enemies don’t notice him ’til it’s too late?