Dev Blog #6: Simulating human behaviour

Introduction

Some observations about NPC behaviour while playing games:

  • A soldier in an RTS stands idle while his buddies around him are being shot down, but he doesn’t seem to care at all.
  • A soldier storms a stronghold single-handedly even though it is an obviously hopeless endeavour.
  • You round a corner in an FPS and, even though the NPC has his back to you, he shoots (and hits) you in that instant.

Every gamer can list at least a dozen similar situations that are annoying and can spoil the fun.
What went wrong? The game A.I. wasn’t designed to simulate believable human behaviour, but rather a perfect, emotionless automaton.

Game design

A big general design rule for us at BitBunch in the development of Chain of Command is to design in big numbers and aim for emergent behaviour, rather than building complex designs that try to cover every contingency.
This means applying only indirect control, which comes with its own challenges, but when done right it provides a wealth of behaviour.
It is much like modelling a flock of birds by giving each individual bird simple behaviour, instead of trying to come up with one complex system that describes flocking as a whole.

So instead of designing complex, top-level, purpose-built behaviour, give each unit a set of basic behaviours to select from when responding to environmental conditions.

A.I. design

One area we spent much time on before starting development of Chain of Command was A.I.: not path-finding or obstacle avoidance, but rather simulating human behaviour. The result was the following premise:

  • The purpose of the brain (cortex) is to predict.
  • The set-up is like this: environment -> sensors -> brain -> action -> (changed environment)

Create an A.I. in three steps:

  1. List the conclusions you want the A.I. to pick from
  2. List all environment elements needed to distinctly differentiate between those conclusions
  3. Teach the A.I. which environmental situation merits which conclusion
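As a rough sketch (all names and values below are invented for illustration, not taken from Chain of Command), the three steps might translate into data like this:

```python
# Step 1: the conclusions the A.I. can pick from.
CONCLUSIONS = ["attack", "retreat", "hold_position"]

# Step 2: environment elements that help tell those conclusions apart.
FEATURES = ["enemies_visible", "allies_nearby", "under_fire"]

# Step 3: training examples pairing an environment snapshot with the
# conclusion it should merit (feature values normalised to 0..1).
TRAINING_SET = [
    ({"enemies_visible": 0.2, "allies_nearby": 0.8, "under_fire": 0.0}, "attack"),
    ({"enemies_visible": 0.9, "allies_nearby": 0.1, "under_fire": 1.0}, "retreat"),
    ({"enemies_visible": 0.0, "allies_nearby": 0.5, "under_fire": 0.0}, "hold_position"),
]
```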

The environment

The environment is a description of what a unit perceives, from the perspective of an A.I. agent. This description enables the agent to distinctly pick the most appropriate conclusion.

Sensors

For humans these would be our five senses; for the A.I. we only use vision and hearing.

The brain

The brain (cortex) is implemented using a neural-network topology capable of “recording associations” from the input the sensors pick up from the environment. The output of this brain is the set of conclusions we talked about earlier. These conclusions result in actions that alter the environment.
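The actual network is not described here, but a minimal stand-in for such an “association recorder” could be a single-layer network trained with the delta rule. The class below is a sketch under that assumption, not the implementation used in Chain of Command:

```python
class Cortex:
    """Minimal associative 'brain': a single-layer network that maps
    sensor inputs to a score per conclusion. Illustrative sketch only."""

    def __init__(self, n_inputs, conclusions, lr=0.1):
        self.conclusions = conclusions
        self.lr = lr
        # One weight vector (bias first) per conclusion, all zeroed.
        self.weights = {c: [0.0] * (n_inputs + 1) for c in conclusions}

    def _score(self, conclusion, inputs):
        w = self.weights[conclusion]
        return w[0] + sum(wi * xi for wi, xi in zip(w[1:], inputs))

    def conclude(self, inputs):
        # There is always a best-fitting conclusion for any input.
        return max(self.conclusions, key=lambda c: self._score(c, inputs))

    def train(self, inputs, target):
        # Delta rule: push the target's score toward 1, the others toward 0.
        for c in self.conclusions:
            desired = 1.0 if c == target else 0.0
            err = desired - self._score(c, inputs)
            self.weights[c][0] += self.lr * err
            for i, xi in enumerate(inputs):
                self.weights[c][i + 1] += self.lr * err * xi
```

Because the state of this brain is just its weight table, storing and restoring it is trivial, and evaluating `conclude` is a handful of multiplications per conclusion.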

Actions

In Chain of Command, every action consists of a movement to some location. Meanwhile, the local environment for a unit dictates what reaction is triggered. A perfect battle would be one that is won solely by manoeuvring without firing a single shot.
A simplified example: a unit’s conclusion is “attack”; the action could be “move toward the nearest detected enemy”. Once the unit is in firing range of the enemy, firing is triggered. Meanwhile, the unit can change its conclusion at any time, causing it to move to a different location. For example, during battle the conclusion changes to “retreat”; the new position to move towards could be “the nearest allied unit” or “away from the nearest enemy”.
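The conclusion-to-movement mapping described above could be sketched like this (the function names, positions, and speed are illustrative assumptions, not the game’s actual API):

```python
def pick_destination(conclusion, unit_pos, nearest_enemy, nearest_ally):
    """Map a conclusion to a movement target (positions are (x, y) tuples)."""
    if conclusion == "attack":
        return nearest_enemy        # move toward the nearest detected enemy
    if conclusion == "retreat":
        return nearest_ally         # fall back to the nearest allied unit
    return unit_pos                 # e.g. "hold_position": stay put

def step_toward(pos, target, speed=1.0):
    """Advance one tick toward the target; firing is triggered separately
    by the local environment (e.g. once an enemy is in range)."""
    dx, dy = target[0] - pos[0], target[1] - pos[1]
    dist = (dx * dx + dy * dy) ** 0.5
    if dist <= speed:
        return target
    return (pos[0] + speed * dx / dist, pos[1] + speed * dy / dist)
```

Re-evaluating the conclusion every tick is what lets a unit abandon “attack” for “retreat” mid-manoeuvre: only the destination changes, while the movement and firing rules stay the same.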

Benefits

The biggest problem with purpose-built A.I. is that it only covers situations the A.I. was explicitly designed for, which is a lot of work and will never be complete.

A.I. Bootcamp

The A.I. research environment.

Contrary to traditional A.I. approaches, our approach has the following advantages:

  1. There is always a best fitting conclusion for any situation
  2. Events from the past influence conclusions in the future
  3. Running the A.I. is very cheap
  4. The “state” of the brain can easily be stored and restored
  5. Since the system is a generic “association recorder”, any relationship between environment and conclusions can be trained.
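Point 4 follows naturally if the brain’s state is just a table of weights; a hypothetical sketch of storing and restoring such a table (assuming the dict-of-lists layout used in the earlier sketch):

```python
import json

def save_brain(weights, path):
    """Persist a brain's weight table, e.g. {"attack": [0.5, -0.25], ...}."""
    with open(path, "w") as f:
        json.dump(weights, f)

def load_brain(path):
    """Restore a previously saved weight table."""
    with open(path) as f:
        return json.load(f)
```

Since dicts of float lists round-trip through JSON unchanged, a unit’s learned associations can be saved with the game and restored later.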

Due to the large number of units in “Chain of Command”, a simple set of conclusions and accompanying actions can result in realistic emergent behaviour. Combining this with simulations of important facets of battle, such as communications, logistics, military hierarchy, and supplies, gives a much more complete model of WW2 warfare.

Comments:
  • Matt Morley

    I understand what you are saying about this A.I. approach being better for a game like this. It sounds like it will produce some very dynamic solutions despite the simplicity of the methodology. But my question is, why don’t more developers do it this way?

    • georgebitbunch

      That’s a question you have to ask those developers, although there are signals that adaptive A.I. approaches are getting more and more attention nowadays. The main reason why you would NOT want to do it this way is the unpredictability of the approach. If a developer has a more scripted game environment, you want the A.I. to do exactly what it is supposed to do in order for certain scripted events to have the intended effect. This approach to A.I. might lead to scripted events never being triggered because the A.I. decided to react differently from what the designer anticipated.