Artificial Intelligence Simulation Engines 2008 Chalmers University - - PowerPoint PPT Presentation



SLIDE 1

08-11-24 Simulation Engines 2008, Markus Larsson 1

Artificial Intelligence

Simulation Engines 2008 Chalmers University of Technology

Markus Larsson markus.larsson@slxgames.com

SLIDE 2

History of AI

- AI concerns itself with understanding intelligent entities
- Unlike psychology or philosophy, AI also deals with how to build these intelligent entities
- Young research area
  - Defined in 1956
  - However, it has connections to ideas of classical Greek philosophers such as Plato and Aristotle
- Has gone through many turbulent phases
  - Almost childish enthusiasm in the early days
  - A depressive state after a while
  - A more realistic outlook today

SLIDE 3

AI in games

- Most computer games are played against some form of opponent
- When not playing against another human, a computer-controlled opponent is often needed
- AI in games provides the human player with a challenging opponent or ally without requiring the presence of another human
- The keyword is “challenging”
  - Not necessarily proficient or complex
  - Do not use unnecessarily advanced techniques
  - Often the AI needs to be tunable for different difficulty levels
- There are no style points for being true to the field of AI; cheap tricks are good

SLIDE 4

Agents

- An agent is an autonomous and independent entity that, much like a human being:
  - Collects information about its surroundings
  - Draws conclusions
  - Makes decisions
  - Executes actions
- Very useful in AI
  - An easy and natural semantic conception of a sentient game entity
  - Lends itself well to object-oriented design

SLIDE 5

A model for AI in games

A useful model for our continued discussion:

- Perception
  - The agent collects information about its surroundings using its “senses”
- Decision
  - The agent analyzes the collected data, builds an understanding of the situation, and then makes a decision
- Action
  - Given the decision, the agent translates it into a number of separate steps needed to accomplish the goal
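The perception-decision-action cycle above can be sketched in a few lines. The `Agent` class, its world representation, and the specific percepts and actions are invented for illustration, not from the slides.

```python
# A minimal sketch of the perception-decision-action cycle.
# World states, percepts, and actions are hypothetical examples.

class Agent:
    def perceive(self, world):
        """Collect information about the surroundings ("senses")."""
        return {"enemy_visible": world.get("enemy_visible", False)}

    def decide(self, percepts):
        """Analyze the collected data and make a decision."""
        return "attack" if percepts["enemy_visible"] else "patrol"

    def act(self, decision):
        """Translate the decision into concrete low-level steps."""
        steps = {"attack": ["aim", "fire"], "patrol": ["move_to_waypoint"]}
        return steps[decision]

    def update(self, world):
        """One iteration of the continuous perception-decision-action cycle."""
        return self.act(self.decide(self.perceive(world)))

agent = Agent()
print(agent.update({"enemy_visible": True}))   # ['aim', 'fire']
print(agent.update({}))                        # ['move_to_waypoint']
```

In a game, `update` would be called once per AI tick for every agent.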

SLIDE 6

Cheating

- The golden rule of game AI: cheat as much as you can get away with!
- There is no incentive for AI programmers to “play straight”
  - The job is to create a worthy opponent
  - No rules of conduct
- Cheating can be done in all parts of the model

SLIDE 7

Cheating

- Perception
  - The most basic cheat is to give the computer access to an internal representation of the world instead of having it interpret the world itself
  - It is often useful to give the AI more information than the player has access to (exact positions, etc.)
- Decision
  - Cheating is more difficult here
  - Agents that are not visible to the player can often skip the decision phase entirely

SLIDE 8

Action

- It can often be useful to let the AI work with a different set of rules than the players
- For instance, a computer-controlled combat pilot might use a more simplified flight model than human players do
- If you cheat, make sure it is not obvious to the player!
- The ultimate cheat is to script behaviors

SLIDE 9

Perception

- Perception provides the agent with information about its surrounding environment using sensors
- Can be anything from a simple photosensitive sensor that detects light to a full vision system
- In the context of games, agents rarely perceive the environment on their own; instead they consult shared structures such as the scene graph
- Topics of interest
  - Identify different ways to access an enemy position
  - Find places to hide
  - Identify threats (windows, doors, etc.)

SLIDE 10

Perception

- Perceptive tasks depend on the type of game, of course, but they tend to relate to perceiving the topology of the environment
- Manual topology markup
  - Level editors add topological information to the 3D world (places to hide, places to shoot from, patrol routes, pathfinding information, etc.)
- Automatic topology analysis
  - Analysis of the world to automatically identify access points, paths, hiding places, etc.
  - Often at least partially done in a pre-processing step

SLIDE 11

Decision

- Given our input from the perception phase, we want to make a decision
- Finite state machines
  - Rule-based transition system
- Fuzzy state machines
  - Rule-based transition system based on fuzzy logic
- Artificial life
  - Simulation of artificial life forms and behavior
- Neural networks
  - Network structure for interpreting input and producing output (learning architecture)
SLIDE 12

Action

- In the decision phase, we come up with a general decision as our high-level plan
- In the action phase, we execute the plan through low-level actions
- Example
  - An AI infantry commander decides to “take hill 241”. The action phase then translates this into first finding the shortest path to hill 241 (staying in cover from enemy fire), issuing movement commands to his soldiers, assuming a combat formation when approaching the hill, and then taking a defensive position once the hill has been secured.
- Pathfinding
  - Finding the shortest path from point A to point B given a number of constraints. Might need to take coordination between multiple agents into consideration.
- Multi-level agents
  - Modern AI systems often need multiple layers for controlling low-level and high-level actions

SLIDE 13

Finite state machines

- A suitable technique for implementing simple rule-based agents
- Consists of a set of states and a collection of transitions for each state
- A transition consists of a trigger input, which initiates the state transition, a destination state, and an output
- The FSM also has a start state from which it begins execution
- FSMs are often drawn as state transition diagrams
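The structure just described (states, a start state, and transitions keyed on trigger input, each with a destination state and an output) can be sketched directly. The guard states and triggers below are made-up examples.

```python
# A minimal finite state machine: transitions map (state, trigger)
# to (destination state, output). Guard example is hypothetical.

class FSM:
    def __init__(self, start, transitions):
        self.state = start  # start state from which execution begins
        # transitions: {(state, trigger): (next_state, output)}
        self.transitions = transitions

    def handle(self, trigger):
        """Follow the transition for (current state, trigger), if any."""
        key = (self.state, trigger)
        if key in self.transitions:
            self.state, output = self.transitions[key]
            return output
        return None  # no matching transition: stay in the current state

guard = FSM("patrol", {
    ("patrol", "see_enemy"): ("attack", "draw weapon"),
    ("attack", "enemy_dead"): ("patrol", "holster weapon"),
    ("attack", "low_health"): ("flee", "run away"),
})
print(guard.handle("see_enemy"))  # draw weapon
print(guard.state)                # attack
```

This table-driven form is also easy for designers to read, matching the benefit noted on the next slide.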

SLIDE 14

FSMs for AI

- Benefits
  - Good control over the agent's behavior
  - Easy to implement
  - The model is easy to understand for designers
- Drawbacks
  - Hard (and time-consuming) to write exhaustively
  - No emergent behavior; the agent will only do what we tell it to do. We cannot hope to get holistic effects of rules acting together.
  - Deterministic: the agent is easy to predict and its behavior could potentially be exploited

SLIDE 15

Example: Finite state machine

SLIDE 16

Fuzzy logic

- One of the main features of an FSM is that it is deterministic
  - A desirable property in many systems
  - Not necessarily a good thing in AI
  - Can create predictable behavior
- Natural solution
  - Make our FSM non-deterministic
  - For a given input, an output is chosen at random or by an internal weighting function
  - A fuzzy state machine
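The non-deterministic variant can be sketched by replacing the single destination per trigger with a weighted choice among destinations. The states and weights below are illustrative only.

```python
# A sketch of a "fuzzy" (non-deterministic) state machine: for a given
# input, the next state is chosen by an internal weighting function.

import random

class FuzzyFSM:
    def __init__(self, start, transitions):
        self.state = start
        # transitions: {(state, trigger): [(next_state, weight), ...]}
        self.transitions = transitions

    def handle(self, trigger):
        options = self.transitions.get((self.state, trigger))
        if not options:
            return self.state  # no transition: stay put
        states, weights = zip(*options)
        self.state = random.choices(states, weights=weights)[0]
        return self.state

guard = FuzzyFSM("patrol", {
    # On seeing an enemy the guard usually attacks but sometimes flees,
    # which makes its behavior harder for the player to exploit.
    ("patrol", "see_enemy"): [("attack", 0.8), ("flee", 0.2)],
})
print(guard.handle("see_enemy"))  # attack (80%) or flee (20%)
```

The weighting function here is a fixed table; in a real game it could depend on health, morale, or difficulty level.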

SLIDE 17

Autonomous agents

- Definition from Russell & Norvig, 1995
  - An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment using effectors
- Works very well with the model we have previously mentioned

SLIDE 18

Autonomous agents

- Autonomous agents take the information they perceive into account when forming and carrying out their decisions
- Non-autonomous agents simply discard sensory input
- We will examine three types of agents
  - Reactive agents
  - Reactive agents with state
  - Goal-based agents

SLIDE 19

Reactive agents

- A reactive agent is the simplest form of agent; it reacts to a situation purely according to a set of rules for action and reaction
- For each update, the agent searches its database of rules until it finds one that matches the current situation, then executes the action associated with that rule
- Can easily be implemented using an FSM
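The rule-matching update loop just described can be sketched as a first-match scan over a rule database. The rules and situation keys are made-up examples.

```python
# A reactive agent's rule database: each rule is (condition, action).
# The agent executes the action of the first rule matching the situation.

rules = [
    (lambda s: s.get("enemy_near") and s.get("low_health"), "flee"),
    (lambda s: s.get("enemy_near"), "attack"),
    (lambda s: True, "patrol"),  # fallback rule matching any situation
]

def react(situation):
    """Return the action of the first matching rule (one update step)."""
    for condition, action in rules:
        if condition(situation):
            return action

print(react({"enemy_near": True}))                      # attack
print(react({"enemy_near": True, "low_health": True}))  # flee
print(react({}))                                        # patrol
```

Note that rule order matters: more specific rules must come before more general ones.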

SLIDE 20

Reactive agents with state

- In many cases, it is not sufficient to base behavior on input alone
- We might need some kind of state (memory)
- Example
  - A driver looks in the rear-view mirror from time to time. When changing lanes, the driver needs to take both the remembered information from the mirror and the information from looking forward into consideration.

SLIDE 21

Goal-based agents

- Sometimes state and rules are not sufficient
  - We need a goal to decide the most useful course of action
- This gives rise to goal-based agents, which not only have a rule database but also select actions with a higher-level goal in mind
- Implies that the agent needs to know the consequences Y of performing an action X
  - The decision process becomes one of searching or planning given a set of actions and consequences
- Goal-based agents allow for emergent behavior
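When consequences of actions are known (action X in a state leads to state Y), the decision process becomes a search for an action sequence that reaches the goal. A minimal sketch, with invented states and actions:

```python
# Goal-based decision as search: breadth-first search over known
# action consequences, returning a plan (a list of actions).

from collections import deque

# consequences: {state: {action: next_state}}
consequences = {
    "at_base":   {"walk_to_hill": "at_hill", "walk_to_bridge": "at_bridge"},
    "at_bridge": {"cross": "at_hill"},
    "at_hill":   {"dig_in": "hill_secured"},
}

def plan(start, goal):
    """Breadth-first search for a shortest action sequence to the goal."""
    frontier = deque([(start, [])])
    visited = {start}
    while frontier:
        state, actions = frontier.popleft()
        if state == goal:
            return actions
        for action, nxt in consequences.get(state, {}).items():
            if nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, actions + [action]))
    return None  # goal unreachable

print(plan("at_base", "hill_secured"))  # ['walk_to_hill', 'dig_in']
```

Emergent behavior arises because the plan is not authored: it falls out of the search over whatever actions happen to be available.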

SLIDE 22

Multi-level agent structures

- Another way to achieve emergent behavior using agent technology is to create hierarchical structures of agents acting on different abstraction levels
SLIDE 23

SOAR

- State, operator and result
- A goal-based agent architecture used for building complex rule-based agents incorporating goal planning, searching, and learning
- The SOAR architecture consists of
  - A set of user-written if-then rules
  - An input link transferring game knowledge into the AI
  - A working memory consisting of intermediate data and a stack of current goals
  - An output link allowing SOAR to act on the game world
- SOAR implements the decision phase

SLIDE 24

Example: SOAR QuakeBot

- SOAR uses a hierarchical rule database that allows for decision decomposition
- As you descend the hierarchy, the operators become more and more specific

SLIDE 25

Artificial life

- A subfield of AI that deals with various ways of simulating life to achieve emergent behavior
- Different techniques
  - Agents
    - The agent concept is an instance of a-life
  - Evolutionary algorithms
    - Genetic algorithms: apply selection, mutation, and recombination operators to abstract representations of a problem, usually a binary string of fixed length
    - Genetic programming: tree-like representations of computer programs, with selection, mutation, and recombination operators defined over those trees to optimize the programs
  - Cellular automata
    - A discrete model studied in computability theory and mathematics. Consists of an infinite, regular grid of cells, each in one of a finite number of states (e.g. the Game of Life)
  - Ad-hoc approaches
    - Simulation of life using ad-hoc techniques or combinations of techniques, such as in The Sims
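The genetic-algorithm recipe above (selection, mutation, recombination on fixed-length bit strings) can be sketched on the standard "OneMax" toy problem, which is an illustrative choice, not from the slides.

```python
# A minimal genetic algorithm on fixed-length binary strings.
# Fitness = number of 1-bits (OneMax); all parameters are demo values.

import random

random.seed(1)
LENGTH, POP, GENERATIONS = 16, 20, 40

def fitness(bits):
    return sum(bits)  # OneMax: more 1-bits is fitter

def crossover(a, b):
    """Single-point recombination of two parents."""
    point = random.randrange(1, LENGTH)
    return a[:point] + b[point:]

def mutate(bits, rate=0.05):
    """Flip each bit independently with a small probability."""
    return [b ^ 1 if random.random() < rate else b for b in bits]

population = [[random.randint(0, 1) for _ in range(LENGTH)]
              for _ in range(POP)]
for _ in range(GENERATIONS):
    # Selection: keep the fitter half of the population as parents
    population.sort(key=fitness, reverse=True)
    parents = population[:POP // 2]
    # Recombination + mutation to refill the population
    children = [mutate(crossover(*random.sample(parents, 2)))
                for _ in range(POP - len(parents))]
    population = parents + children

best = max(population, key=fitness)
print(fitness(best))  # close to the maximum of 16 after evolution
```

Because the fittest parents are carried over unchanged (elitism), the best fitness never decreases between generations.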

SLIDE 26

Evolved virtual creatures

- Karl Sims created Evolved Virtual Creatures in 1994
- A huge number of virtual creatures, each consisting of a number of movable 3D objects, are created
- Each creature is tested for utility in fulfilling a task (swimming, jumping, running, etc.) and the most successful specimens are chosen for reproduction
- Movie

SLIDE 27

Flocking behavior

- Flocking is an a-life discipline concerned with behavioral modeling of a large population of individuals interacting with each other
- Commonly used (e.g. in The Lion King, for the first time in a motion picture)
- Similarities with particle systems in computer graphics, but includes some kind of sociological model for the individuals in the population
- Craig Reynolds created the “boids” system in 1987
  - Separation
  - Alignment
  - Cohesion
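The three boid rules named above can be sketched as one 2D update step per boid: steer away from crowding neighbors (separation), toward their average heading (alignment), and toward their center of mass (cohesion). The weights and neighbor radius are arbitrary demo values.

```python
# One update step of Reynolds-style boids in 2D.
# Each boid is [x, y, vx, vy]; weights/radius are illustrative.

def step(boids, radius=5.0, w_sep=0.05, w_ali=0.05, w_coh=0.01):
    new = []
    for b in boids:
        neighbors = [o for o in boids if o is not b and
                     (o[0]-b[0])**2 + (o[1]-b[1])**2 < radius**2]
        vx, vy = b[2], b[3]
        if neighbors:
            n = len(neighbors)
            # Cohesion: steer toward the neighbors' center of mass
            cx = sum(o[0] for o in neighbors) / n
            cy = sum(o[1] for o in neighbors) / n
            vx += w_coh * (cx - b[0]); vy += w_coh * (cy - b[1])
            # Alignment: steer toward the neighbors' average velocity
            avx = sum(o[2] for o in neighbors) / n
            avy = sum(o[3] for o in neighbors) / n
            vx += w_ali * (avx - b[2]); vy += w_ali * (avy - b[3])
            # Separation: steer away from each nearby neighbor
            for o in neighbors:
                vx += w_sep * (b[0] - o[0]); vy += w_sep * (b[1] - o[1])
        new.append([b[0] + vx, b[1] + vy, vx, vy])
    return new

flock = [[0, 0, 1, 0], [1, 1, 0.5, 0.5], [2, 0, 1, -0.5]]
flock = step(flock)
print(flock[0])
```

Run repeatedly, these purely local rules produce the group-level motion the next slide calls emergent behavior.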

SLIDE 28

Flocking behavior

- These seemingly simple behavioral rules give rise to distinct emergent behavior

SLIDE 29

Boids demo

Reynolds demo

SLIDE 30

3D topology analysis

- 3D topology analysis is the analysis of the 3D world for the purpose of “understanding” it
  - For threat assessment, pathfinding, or higher-level reasoning and behavior
- A non-trivial task, traditionally avoided in favor of markup information added manually by a human level designer
- Recent games are starting to pay attention to the process

SLIDE 31

Pathfinding

- Pathfinding is the problem of finding a path P from point A to point B on a (2D or 3D) map
- We often impose a number of additional constraints on P; for instance that P is optimal, takes other dynamic objects into consideration, etc.
- A standard topological analysis problem of special interest to most 3D games
SLIDE 32

Dijkstra's shortest path algorithm

- Dijkstra's shortest path algorithm generates the shortest path to all other destination nodes in the graph, including the node we are interested in
- Dijkstra's algorithm operates on a weighted graph
  - If we only have a two-dimensional map, we must find a way of representing it as a graph
  - Normally we treat it as a dense graph where each grid position is a node
  - The cost of a node depends on the terrain type associated with the node
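The grid-as-graph idea can be sketched directly: each grid cell is a node, and the cost of entering a cell depends on its terrain type. The terrain grid and cost values below are invented for illustration.

```python
# Dijkstra's algorithm on a grid treated as a dense weighted graph,
# with terrain-dependent node costs (demo values).

import heapq

terrain_cost = {".": 1, "~": 5}  # plain vs. swamp
grid = ["..~.",
        ".~~.",
        "...."]

def dijkstra(start, goal):
    """Return the cheapest movement cost from start to goal."""
    dist = {start: 0}
    queue = [(0, start)]  # priority queue ordered by distance so far
    while queue:
        d, (r, c) = heapq.heappop(queue)
        if (r, c) == goal:
            return d
        if d > dist[(r, c)]:
            continue  # stale queue entry, a shorter path was found
        for nr, nc in ((r+1, c), (r-1, c), (r, c+1), (r, c-1)):
            if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]):
                nd = d + terrain_cost[grid[nr][nc]]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    heapq.heappush(queue, (nd, (nr, nc)))
    return None

print(dijkstra((0, 0), (0, 3)))  # 7
```

Here the algorithm weighs a short route through swamp against a longer detour over plain terrain.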

SLIDE 33

Dijkstra's shortest path algorithm

- Another solution is to have the level designer build the path data graph when designing the level
- Weights can then be computed from the Euclidean distance between vertices

SLIDE 34

A* algorithm

(Figure: comparison of BFS, Dijkstra's algorithm, and A*)

SLIDE 35

A* algorithm

- The A* algorithm is often seen as “magic” by novice programmers, but it is nothing more than an elegant search algorithm
- Can be used for more things than pathfinding
- A heuristic algorithm
  - g(n) = the cost of moving from the start to n
  - h(n) = the heuristically estimated cost from n to the goal
  - For each iteration, examine the vertex with the lowest f(n) = g(n) + h(n)
- A greedy algorithm that attempts to find a global optimum through local optimization
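The f(n) = g(n) + h(n) rule above can be sketched on a uniform-cost grid, using the Manhattan distance as h(n). The grid layout is an invented example.

```python
# A* grid search: expand the frontier vertex with the lowest
# f(n) = g(n) + h(n). '#' cells are walls; moves cost 1.

import heapq

def astar(grid, start, goal):
    """Return the length of a shortest path on a uniform-cost grid."""
    def h(n):  # Manhattan-distance heuristic, admissible for 4-way moves
        return abs(n[0] - goal[0]) + abs(n[1] - goal[1])

    g = {start: 0}
    queue = [(h(start), start)]  # ordered by f(n) = g(n) + h(n)
    while queue:
        f, (r, c) = heapq.heappop(queue)
        if (r, c) == goal:
            return g[(r, c)]
        for nr, nc in ((r+1, c), (r-1, c), (r, c+1), (r, c-1)):
            if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) \
                    and grid[nr][nc] != "#":
                ng = g[(r, c)] + 1
                if ng < g.get((nr, nc), float("inf")):
                    g[(nr, nc)] = ng
                    heapq.heappush(queue, (ng + h((nr, nc)), (nr, nc)))
    return None  # goal unreachable

grid = ["....",
        ".##.",
        "...."]
print(astar(grid, (0, 0), (2, 3)))  # 5
```

Structurally this is Dijkstra's algorithm with the queue re-ordered by the heuristic estimate, which is exactly the relationship the next slide spells out.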

SLIDE 36

Heuristics in A*

- The heuristic function plays an important role in the performance of A*
  - If h(n) is always 0, A* becomes Dijkstra's algorithm
  - If h(n) is always lower than (or equal to) the cost of moving from n to the goal, then A* is guaranteed to find a shortest path
  - If h(n) is exactly equal to the cost of moving from n to the goal, then A* will only follow the best path and never expand anything else, making it very fast
- The heuristic function is often either the Manhattan distance or the plain Euclidean distance
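The two heuristics named above are one-liners:

```python
# The two common A* heuristics: Manhattan distance (admissible for
# 4-way grid movement) and plain Euclidean distance.

import math

def manhattan(a, b):
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def euclidean(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

print(manhattan((0, 0), (3, 4)))  # 7
print(euclidean((0, 0), (3, 4)))  # 5.0
```

Note that Manhattan distance overestimates when diagonal movement is allowed, which would break the shortest-path guarantee stated above.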

SLIDE 37

3D pathfinding in RenderWare

- RenderWare automatically generates graphs for pathfinding in three steps
- Exploration
  - The 3D world is exhaustively explored using nodes while obeying the physics model of the game (preventing the creation of unnecessary nodes)
- Optimization
  - The generated path is optimized down to a minimum of path nodes
- Edge creation
  - More edges are added to the existing nodes to produce the final path data

SLIDE 38

Learning architectures

- The subject of learning is central in AI and provides appealing possibilities for a computer game
- If the computer-controlled opponent (or ally) can learn over time and become more proficient at its task, the player will have a much more believable and challenging game experience
- Most learning algorithms do not operate in real time, but advances have been made
- Outside of a few special games, learning architectures are very uncommon, but this may change soon

SLIDE 39

Neural networks

- Neural networks are the most common learning architecture in AI
- Essentially mimic networks of human neurons
- Artificial neurons are arranged into large interconnected networks which work together to recognize patterns on their inputs and produce results on their outputs
- Each connection has an associated weight, and the learning phase consists of adjusting individual weights given some training examples to produce the desired result
- Neural networks were once announced to be the solution to all AI problems, but this has proved to be exaggerated
  - Useful in a lot of areas nonetheless
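The weight-adjustment idea above can be sketched with the smallest possible network: a single artificial neuron (a perceptron) trained on a toy AND-gate dataset. The dataset, learning rate, and epoch count are illustrative choices, not from the slides.

```python
# A single artificial neuron (perceptron) whose weights are adjusted
# from training examples until it produces the desired results.

def predict(weights, bias, x):
    """Fire (output 1) if the weighted input sum exceeds the threshold."""
    return 1 if sum(w * xi for w, xi in zip(weights, x)) + bias > 0 else 0

def train(samples, epochs=20, rate=0.1):
    weights, bias = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, target in samples:
            error = target - predict(weights, bias, x)
            # Adjust each weight in proportion to its input and the error
            weights = [w + rate * error * xi for w, xi in zip(weights, x)]
            bias += rate * error
    return weights, bias

and_gate = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
weights, bias = train(and_gate)
print([predict(weights, bias, x) for x, _ in and_gate])  # [0, 0, 0, 1]
```

Real game applications would use multi-layer networks, but the training loop is the same idea: nudge weights toward whatever reduces the error on the examples.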

SLIDE 40

Example: The Sims

- The Sims (Maxis, led by Will Wright) is one of the most successful games ever made
- Sims are modeled using a-life
- Uses something called smart terrain
  - All objects in the world embed the behaviors and actions associated with the object in the object itself
  - The object also contains information about the consequences of an action (e.g. playing with a soccer ball will decrease the boredom of a Sim and also give it physical exercise)
- Smart terrain allows for easy bottom-up construction of the game, as well as easily extending it with new objects
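The smart-terrain idea can be sketched as objects that advertise their own actions and consequences, with the Sim simply picking whichever advertised action best relieves its current needs. Object names, needs, and numbers below are invented for illustration, not Maxis data.

```python
# Smart terrain sketch: objects advertise actions and their effects on
# a Sim's needs; the Sim picks the action with the greatest relief.

objects = [
    {"name": "soccer_ball", "action": "play",
     "effects": {"boredom": -30, "energy": -10}},
    {"name": "fridge", "action": "eat",
     "effects": {"hunger": -40, "boredom": -5}},
]

def choose(needs):
    """Pick the object whose advertised effects reduce needs the most."""
    def relief(obj):
        # Only count reduction of needs the Sim actually has right now
        return sum(-delta for need, delta in obj["effects"].items()
                   if delta < 0 and needs.get(need, 0) > 0)
    return max(objects, key=relief)

print(choose({"hunger": 80, "boredom": 10})["action"])  # eat
print(choose({"boredom": 90})["action"])                # play
```

Adding a new object to the game means adding one entry to the world; no agent code changes, which is the bottom-up extensibility the slide describes.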

SLIDE 41

Example: Black and White

- Black and White (Lionhead Studios, led by Peter Molyneux) features large virtual worlds where the player (a god) controls a tribe of his people in their fight for survival
- The player also has an avatar in the world in the shape of a huge creature that can be taught various kinds of behavior, allowing it to help (or break!) your own efforts
- The B&W creature is modeled using a-life principles
  - It has a number of desires (hunger, curiosity, fatigue, etc.) and a number of actions encoded in the world
  - For instance, a hungry creature might pick up a villager and eat him
- The player either rewards or punishes the creature for its actions, and by doing so increases or decreases the creature's urge for a specific action
- To make the creature eat only enemy villagers, you would first reward the creature when it eats a villager and then punish it when it eats one of your own villagers, thereby refining its mental model
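The reward/punish loop can be sketched as feedback that nudges the creature's urge to repeat an action on a given target type. The urge table and feedback values are invented for illustration; the actual game uses a richer learned model.

```python
# Reward/punish feedback adjusting per-(action, target) urges,
# clamped to [0, 1]. All names and numbers are hypothetical.

urges = {("eat", "enemy_villager"): 0.5, ("eat", "own_villager"): 0.5}

def feedback(action, target, reward):
    """Reward (+) or punishment (-) shifts the urge for that action."""
    urges[(action, target)] += reward
    urges[(action, target)] = max(0.0, min(1.0, urges[(action, target)]))

feedback("eat", "enemy_villager", +0.25)  # stroke the creature
feedback("eat", "own_villager", -0.25)    # slap the creature
print(urges[("eat", "enemy_villager")])   # 0.75
print(urges[("eat", "own_villager")])     # 0.25
```

Keeping separate urges per target type is what lets the creature learn "eat enemy villagers, not my own" rather than a blanket rule about eating.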
SLIDE 42

Example: Black and White

- The internal AI solves this by distinguishing between individual attributes of various instances
- In essence, this is implemented using a decision tree internal to the creature AI, which gets dynamically updated as the creature learns new things

SLIDE 43

Words of advice

- Always explain AI decisions extremely well
  - Examples: radio chatter in Half-Life, grunts in Halo
- Cheat!
  - Then cheat some more...

SLIDE 44

Summary

- Artificial intelligence is an aspect that more and more games are focusing on
- AI in games is more about creating tunable and challenging opponents using whatever means necessary than about being true to the field of AI
- Agents are a useful concept for game AI (semantics and object orientation)
- A typical AI model includes perception, decision, and action in a continuous cycle
- Interesting AI topics include artificial life, evolutionary algorithms, neural networks, agents, etc.
- 3D pathfinding is a very important aspect of AI in games