Simulation Engines TDA571|DIT030 Artificial Intelligence - Tommaso Piazza - PowerPoint PPT Presentation


SLIDE 1

Simulation Engines TDA571|DIT030 Artificial Intelligence

Tommaso Piazza

SLIDE 2
IDC | Interaction Design Collegium

Administrative stuff

  • Next week AI presents
  • 6 out of 7 groups have AI
  • If you don’t have AI but do have networks, you will be presenting on Wednesday next week; the lecture on networks will be on Monday
  • Probably no lectures on the 30/11 and 2/12
  • They will take place on the 7/12 and 8/12 instead

SLIDE 3

History of AI

  • AI concerns itself with understanding intelligent entities
  • Unlike psychology or philosophy, AI deals with how to build these intelligent entities as well
  • Young research area
  • Defined in 1956
  • However, has connections to definitions by classic Greek philosophers such as Plato and Aristotle
  • Has gone through many turbulent phases
  • Almost childish enthusiasm in the early days
  • Depressive state after a while
  • More realistic outlook today

SLIDE 4

AI in games

  • Most computer games are played against some form of opponent
  • When not playing against another human, in many cases a computer-controlled opponent is needed
  • AI in games provides the human player with a challenging opponent or ally without requiring the presence of another human
  • A keyword is “challenging”
  • Not necessarily proficient or complex
  • Do not use unnecessarily advanced techniques
  • Often the AI needs to be tunable for different difficulty levels
  • There are no style points for being true to the field of AI. Cheap tricks are good

SLIDE 5

Agents

  • An agent is an autonomous and independent entity that, much like a human being:
  • Collects information about its surroundings
  • Draws conclusions
  • Makes decisions
  • Executes actions
  • Very useful in AI
  • Easy and natural semantic conception of a sentient game entity
  • Lends itself well to object-oriented design

SLIDE 6

A model for AI in games

  • A useful model for our continued discussion
  • Perception
  • The agent collects information about its surroundings using its “senses”
  • Decision
  • The agent analyzes the collected data, builds an understanding of the situation, and then makes a decision
  • Action
  • Given the decision, the agent translates it into a number of separate steps needed to accomplish the goal
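The three phases can be sketched as a tiny update cycle. This is a minimal illustration, not any engine's actual API; the class, the percept keys, and the decision values are all invented for this example.

```python
# Sketch of the perception-decision-action cycle. All names are illustrative.

class Agent:
    def __init__(self, name):
        self.name = name

    def perceive(self, world):
        # Perception: collect information about the surroundings ("senses").
        return {"enemy_visible": world.get("enemy_distance", 999) < 10}

    def decide(self, percept):
        # Decision: analyze the collected data and make a decision.
        return "attack" if percept["enemy_visible"] else "patrol"

    def act(self, decision):
        # Action: translate the decision into concrete steps.
        return f"{self.name} executes: {decision}"

agent = Agent("guard")
percept = agent.perceive({"enemy_distance": 5})
print(agent.act(agent.decide(percept)))  # guard executes: attack
```

In a real game, each phase would be one stage of the per-frame update for every AI entity.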

SLIDE 7

Cheating

  • The golden rule of game AI
  • Cheat as much as you can get away with!
  • There is no incentive for AI programmers to “play straight”
  • The job is to create a worthy opponent
  • No rules of conduct
  • Cheating can be done in all parts of the model

SLIDE 8

Cheating

  • Perception
  • The most basic cheat is to give the computer access to an internal representation of the world instead of having it interpret the world itself
  • It is often useful to give the AI more information than what the player has access to (exact positions, etc)
  • Decision
  • A more difficult area for cheating
  • Agents that are not visible to the player can often entirely ignore the decision phase

SLIDE 9

Action

  • Action
  • It can often be useful to work with different sets of rules than the players
  • For instance, a computer-controlled combat pilot might use a more simplified flight model than human players
  • If you cheat, make sure it is not obvious to the player!
  • The ultimate cheat is to script behaviors

SLIDE 10

Perception

  • Perception provides the agent with information about its surrounding environment using sensors
  • Can be anything from a simple photosensitive sensor that detects light to a full vision system
  • In the context of games, agents rarely perceive the environment on their own; instead they look at the common scene graphs, etc
  • Topics of interest
  • Identify different ways to access an enemy position
  • Find places to hide
  • Identify threats (windows, doors, etc)

SLIDE 11

Perception

  • Perceptive tasks depend on the type of game, of course, but tasks tend to relate to the perception of the topology of the environment
  • Manual topology markup
  • Level editors add topological information to the 3D world (places to hide, places to shoot, patrol routes, pathfinding information, etc)
  • Automatic topology analysis
  • Analysis of the world to automatically identify access points, paths, hiding places, etc
  • Often at least partially done in a pre-processing step

SLIDE 12

Decision

  • Given our input from the perception phase, we want to make a decision
  • Finite state machines
  • Rule-based transition system
  • Fuzzy state machine
  • Rule-based transition system based on fuzzy logic
  • Artificial life
  • Simulation of artificial life forms and behavior
  • Neural network
  • Network structure for interpreting input and giving output (learning architecture)

SLIDE 13

Action

  • In the decision phase, we come up with a general decision for our high-level plan
  • In the action phase, we execute the plan through low-level actions
  • Example
  • An AI infantry commander comes up with the decision to “take hill 241”. The action phase then translates this into first finding the shortest path to hill 241 (staying in cover from enemy fire), issuing the movement commands to his soldiers, assuming a combat formation when approaching the hill, and then taking a defensive position once the hill has been secured
  • Pathfinding
  • Finding the shortest path from point A to point B given a number of constraints. Might need to take coordination between multiple agents into consideration
  • Multi-level agents
  • Modern AI systems often need multiple layers for controlling low-level and high-level actions

SLIDE 14

Finite state machines

  • Suitable technique for implementing simple rule-based agents
  • Consists of a set of states and a collection of transitions for each state
  • A transition consists of a trigger input, which initiates the state transition, a destination state, and an output
  • The FSM also has a start state from which it begins execution
  • FSMs are often drawn as state transition diagrams
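The structure described above — states, a start state, and per-state transitions carrying a trigger, a destination, and an output — fits in a few lines. A minimal sketch; the guard states and triggers are invented for illustration.

```python
# A small finite state machine: transitions are keyed by (state, trigger)
# and carry a destination state plus an output. Names are illustrative.

class FSM:
    def __init__(self, start):
        self.state = start          # the start state
        self.transitions = {}       # (state, trigger) -> (next_state, output)

    def add(self, state, trigger, next_state, output):
        self.transitions[(state, trigger)] = (next_state, output)

    def feed(self, trigger):
        # Unknown triggers leave the state unchanged and produce no output.
        next_state, output = self.transitions.get(
            (self.state, trigger), (self.state, None))
        self.state = next_state
        return output

guard = FSM("patrol")
guard.add("patrol", "see_enemy", "attack", "draw weapon")
guard.add("attack", "enemy_dead", "patrol", "holster weapon")

print(guard.feed("see_enemy"))  # draw weapon
print(guard.state)              # attack
```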

SLIDE 15

FSMs for AI

  • Benefits
  • Good control over the agent's behavior
  • Easy to implement
  • Model is easy to understand for designers
  • Drawbacks
  • Hard (and time-consuming) to write exhaustively
  • No emergent behavior; the agent will only do what we tell it to do. We cannot hope to get holistic effects of rules acting together
  • Deterministic: the agent is easy to predict and its behavior could potentially be exploited

SLIDE 16

Example: Finite state machine

SLIDE 17

Fuzzy logic

  • One of the main features of an FSM is that it is deterministic
  • A desirable effect in many systems
  • Not necessarily a good thing in AI
  • Can create predictable behavior
  • Natural solution
  • Make our FSM non-deterministic
  • For a given input, any output can be chosen at random or by an internal weighting function
  • Fuzzy state machine
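One simple way to realize this is to store several candidate destinations per (state, trigger) pair and pick among them with a weighting function. A sketch, assuming weighted random selection as the "internal weighting function"; the NPC states and weights are invented.

```python
import random

# Sketch of a fuzzy state machine: instead of one fixed transition per
# (state, trigger), several weighted candidates are stored and one is
# chosen at random according to its weight. Names are illustrative.

class FuzzyFSM:
    def __init__(self, start, rng=None):
        self.state = start
        self.transitions = {}  # (state, trigger) -> [(weight, next_state), ...]
        self.rng = rng or random.Random()

    def add(self, state, trigger, weight, next_state):
        self.transitions.setdefault((state, trigger), []).append((weight, next_state))

    def feed(self, trigger):
        candidates = self.transitions.get((self.state, trigger))
        if not candidates:
            return self.state  # unknown trigger: stay put
        states = [s for _, s in candidates]
        weights = [w for w, _ in candidates]
        self.state = self.rng.choices(states, weights=weights)[0]
        return self.state

npc = FuzzyFSM("idle", rng=random.Random(0))  # seeded for reproducibility
npc.add("idle", "hear_noise", 3, "investigate")
npc.add("idle", "hear_noise", 1, "flee")
print(npc.feed("hear_noise"))  # "investigate" or "flee", weighted 3:1
```

Tuning the weights per difficulty level is one way to make the same machine feel more or less predictable.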

SLIDE 18

Autonomous agents

  • Definition from Russell & Norvig, 1995
  • An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment using effectors
  • Works very well with the model we have previously mentioned

SLIDE 19

Autonomous agents

  • Autonomous agents take the information they perceive into account when forming and carrying out their decisions
  • Non-autonomous agents simply discard sensory input
  • We will examine three types of agents
  • Reactive agents
  • Reactive agents with state
  • Goal-based agents

SLIDE 20

Reactive agents

  • A reactive agent is the simplest form of agent and reacts to a situation purely according to a set of rules for action and reaction
  • For each update, the agent searches its database of rules until it finds one that matches the current situation, then executes the appropriate action associated with the rule
  • Can be easily implemented using an FSM

SLIDE 21

Reactive agents with state

  • In many cases, it is not sufficient to base behavior on input alone
  • We might need some kind of state (memory)
  • Example
  • A driver looks in the rear-view mirror from time to time. When changing lanes, the driver needs to take both the information from looking in the mirror and the information from looking forward into consideration

SLIDE 22

Goal-based agents

  • Sometimes state and rules are not sufficient
  • We need a goal to decide the most useful course of action
  • This gives rise to goal-based agents, which do not only have a rule database but also select actions with a higher-level goal in mind
  • Implies that the agent needs to know the consequence Y of performing an action X
  • The decision process becomes one of searching or planning given a set of actions and consequences
  • Goal-based agents allow for emergent behavior
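The "searching or planning given a set of actions and consequences" view can be sketched as a breadth-first search over states, where each state lists the actions available in it and the state each action leads to. The tiny domain below (echoing the "take hill 241" example) is invented for illustration.

```python
from collections import deque

# Sketch of goal-based decision making as search: the agent knows the
# consequence of each action in each state and searches for an action
# sequence that reaches the goal. The domain is invented.

def plan(start, goal, actions):
    """actions: dict state -> list of (action_name, resulting_state)."""
    frontier = deque([(start, [])])  # (current state, actions taken so far)
    seen = {start}
    while frontier:
        state, path = frontier.popleft()
        if state == goal:
            return path
        for action, result in actions.get(state, []):
            if result not in seen:
                seen.add(result)
                frontier.append((result, path + [action]))
    return None  # goal unreachable

world = {
    "at_base":      [("march", "at_hill_base")],
    "at_hill_base": [("assault", "hill_taken"), ("retreat", "at_base")],
}
print(plan("at_base", "hill_taken", world))  # ['march', 'assault']
```

Because BFS explores shortest action sequences first, the returned plan is minimal in the number of actions; a weighted search would be needed if actions had differing costs.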

SLIDE 23

Multi-level agent structures

  • Another way to achieve emergent behavior using agent technology is to create hierarchical structures of agents acting on different abstraction levels

SLIDE 24

SOAR

  • State, operator and result
  • Goal-based agent architecture used for building complex rule-based agents incorporating goal planning, searching and learning
  • The SOAR architecture consists of
  • A set of user-written if-then rules
  • An input link transferring game knowledge into the AI
  • A working memory consisting of intermediate data and a stack of current goals
  • An output link allowing SOAR to act on the game world
  • SOAR implements the decision phase

SLIDE 25

Example: SOAR QuakeBot

  • SOAR uses a hierarchical rule database that allows for decision decomposition
  • As you descend the hierarchy, the operators become more and more specific

SLIDE 26

Artificial life

  • A subfield of AI that deals with various ways of simulating life to achieve emergent behavior
  • Different techniques
  • Agents
  • The agent concept is an instance of a-life
  • Evolutionary algorithms
  • Genetic algorithms
  • Apply selection, mutation and recombination operators to abstract representations of a problem, usually a binary string of fixed length
  • Genetic programming
  • Tree-like representations of computer programs, with selection, mutation, and recombination operators defined over those tree-like representations to optimize the programs
  • Cellular automata
  • A discrete model studied in computability theory and mathematics. Consists of an infinite, regular grid of cells, each in one of a finite number of states (such as the Game of Life)
  • Ad-hoc approaches
  • Simulation of life using some ad-hoc techniques or combinations of techniques, such as what is used in The Sims

SLIDE 27

Flocking behavior

  • Flocking is an a-life discipline concerned with the behavioral modeling of a large population of individuals interacting with each other
  • Commonly used (e.g. in The Lion King, where it appeared for the first time in a motion picture)
  • Similarities with particle systems in computer graphics, but includes some kind of sociological model for the individuals in the population
  • Craig Reynolds created the “boids” system in 1987
  • Separation
  • Alignment
  • Cohesion
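Reynolds' three rules can be sketched for a single boid: steer away from close neighbors (separation), match their average velocity (alignment), and move toward their average position (cohesion). This is a simplified 2D sketch with equal rule weights and no neighborhood radius; vectors are plain tuples.

```python
# Sketch of the three boid rules for one individual. Positions and
# velocities are (x, y) tuples; weights are simplified to 1 each.

def vec_add(a, b): return (a[0] + b[0], a[1] + b[1])
def vec_sub(a, b): return (a[0] - b[0], a[1] - b[1])
def vec_scale(a, k): return (a[0] * k, a[1] * k)

def steering(boid_pos, boid_vel, neighbors):
    """neighbors: list of (position, velocity) of nearby flockmates."""
    n = len(neighbors)
    if n == 0:
        return (0.0, 0.0)
    sep = (0.0, 0.0)
    avg_vel = (0.0, 0.0)
    avg_pos = (0.0, 0.0)
    for pos, vel in neighbors:
        sep = vec_add(sep, vec_sub(boid_pos, pos))  # separation: push away
        avg_vel = vec_add(avg_vel, vel)
        avg_pos = vec_add(avg_pos, pos)
    alignment = vec_sub(vec_scale(avg_vel, 1 / n), boid_vel)  # match velocity
    cohesion = vec_sub(vec_scale(avg_pos, 1 / n), boid_pos)   # move to center
    # Combine the three rules with equal weights (a tunable choice).
    return vec_add(sep, vec_add(alignment, cohesion))

print(steering((0, 0), (1, 0), [((2, 0), (1, 0)), ((0, 2), (1, 0))]))
# (-1.0, -1.0): pushed away from both neighbors, pulled toward their center
```

In a full simulation this steering vector would be clamped and integrated into each boid's velocity every frame.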

SLIDE 28

Flocking behavior

  • These seemingly simple behavioral rules give rise to distinct emergent behavior

SLIDE 29

Boids demo

  • Reynolds demo
  • http://www.red3d.com/cwr/boids/
  • Check out the demo in the OpenSteer Library
  • http://opensteer.sourceforge.net/

SLIDE 30

Evolved virtual creatures

  • Karl Sims created Evolved Virtual Creatures in 1994
  • A huge number of virtual creatures consisting of a number of movable 3D objects are created
  • Each creature is tested for utility in fulfilling a task (swimming, jumping, running, etc) and the most successful specimens are chosen for reproduction

SLIDE 31

http://www.archive.org/details/sims_evolved_virtual_creatures_1994

SLIDE 32

3D topology analysis

  • 3D topology analysis is the analysis of the 3D world for the purpose of “understanding” it
  • For threat assessment, pathfinding, or higher-level reasoning and behavior
  • Non-trivial task that is traditionally avoided in favor of markup information added manually by a human level designer
  • Recent games are starting to pay attention to the process

SLIDE 33

Pathfinding

  • Pathfinding is the problem of finding a path P from point A to point B on a (2D or 3D) map
  • We often impose a number of additional constraints on P; for instance that P is optimal, takes other dynamic objects into consideration, etc
  • Let's take a look at standard topological analysis algorithms that are of special interest to most 3D games

SLIDE 34

Dijkstra's shortest path algorithm

  • Dijkstra's shortest path algorithm generates the shortest path to all other destination nodes in the graph, including the node we are interested in
  • Dijkstra's algorithm operates on a weighted graph
  • If we only have a two-dimensional map, we must find a way of representing it as a graph
  • Normally we treat it as a dense graph where each grid position is a node
  • The cost of a node depends on the terrain type associated with the node
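The grid-as-graph idea can be sketched directly: each cell is a node, its four neighbors are its edges, and the cost of entering a cell comes from its terrain type. A minimal priority-queue version of Dijkstra; the terrain cost values are invented for illustration.

```python
import heapq

# Sketch of Dijkstra's algorithm on a grid map, treating each grid position
# as a node whose entry cost depends on its terrain type.

def dijkstra(grid, start):
    """grid: 2D list of per-cell entry costs. Returns dict of shortest
    distances from start to every reachable cell."""
    rows, cols = len(grid), len(grid[0])
    dist = {start: 0}
    heap = [(0, start)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if d > dist.get((r, c), float("inf")):
            continue  # stale queue entry, a shorter path was already found
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + grid[nr][nc]  # cost of entering the neighbor cell
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    heapq.heappush(heap, (nd, (nr, nc)))
    return dist

terrain = [[1, 1, 1],
           [5, 5, 1],   # a band of expensive terrain, e.g. swamp
           [1, 1, 1]]
dist = dijkstra(terrain, (0, 0))
print(dist[(2, 0)])  # 6: going around the swamp costs the same as through it here
```

Note that, as the slide says, this computes distances to all cells, which is wasteful when only one destination matters; that is the motivation for A* below.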

SLIDE 35

Dijkstra's shortest path algorithm

  • Another solution is to have the level designer build the path data graph when designing the level
  • Weights can then be computed from the Euclidean distance between vertices

SLIDE 36

A* algorithm

  • The A* algorithm is often seen as “magic” by novice programmers, but is nothing more than an elegant search algorithm
  • Can be used for more things than pathfinding
  • Heuristic algorithm
  • g(n) = the cost of moving from the start to n
  • h(n) = the heuristically estimated cost from n to the goal
  • For each iteration, examine the vertex with the lowest f(n) = g(n) + h(n)
  • Greedy algorithm that attempts to find a global optimum through local optimization
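The iteration described above — always expand the open vertex with the lowest f(n) = g(n) + h(n) — can be sketched on a 4-connected grid with unit step costs. The Manhattan distance serves as h(n) here; the wall layout is invented.

```python
import heapq

# Sketch of A* on a grid with unit step costs: expand the open node with
# the lowest f(n) = g(n) + h(n). Manhattan distance never overestimates
# on a 4-connected grid, so the returned path is shortest.

def a_star(walls, start, goal, rows, cols):
    def h(n):  # heuristic: Manhattan distance to the goal
        return abs(n[0] - goal[0]) + abs(n[1] - goal[1])

    open_heap = [(h(start), 0, start, [start])]  # (f, g, node, path)
    best_g = {start: 0}
    while open_heap:
        f, g, node, path = heapq.heappop(open_heap)
        if node == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (node[0] + dr, node[1] + dc)
            if 0 <= nxt[0] < rows and 0 <= nxt[1] < cols and nxt not in walls:
                if g + 1 < best_g.get(nxt, float("inf")):
                    best_g[nxt] = g + 1
                    heapq.heappush(
                        open_heap, (g + 1 + h(nxt), g + 1, nxt, path + [nxt]))
    return None  # goal unreachable

path = a_star(walls={(1, 1)}, start=(0, 0), goal=(2, 2), rows=3, cols=3)
print(len(path) - 1)  # 4 steps: the wall at (1, 1) forces no detour here
```

With h(n) set to zero this degenerates into the Dijkstra behavior from the previous slide, which is exactly the relationship the next slide spells out.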

SLIDE 37

Heuristics in A*

  • The heuristic function plays an important role in the performance of A*
  • If h(n) is always 0, A* becomes Dijkstra's algorithm
  • If h(n) is always lower than (or equal to) the cost of moving from n to the goal, then A* is guaranteed to find a shortest path
  • If h(n) is exactly equal to the cost of moving from n to the goal, then A* will only follow the best path and never expand anything else, making it very fast
  • The heuristic function is often either the “Manhattan distance” or the plain Euclidean distance
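The two heuristics mentioned are one-liners. Manhattan distance suits 4-directional grid movement; Euclidean distance suits free movement. Each never overestimates under its own movement model, which is the admissibility condition from the second bullet above.

```python
import math

# The two common A* heuristics for 2D nodes given as (x, y) pairs.

def manhattan(a, b):
    # Sum of axis distances: exact for 4-directional unit-cost movement.
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def euclidean(a, b):
    # Straight-line distance: a lower bound for any movement model.
    return math.hypot(a[0] - b[0], a[1] - b[1])

print(manhattan((0, 0), (3, 4)))  # 7
print(euclidean((0, 0), (3, 4)))  # 5.0
```

Since Euclidean distance is always less than or equal to Manhattan distance, it is the safer (but less informed, hence slower) choice when diagonal movement is allowed.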

SLIDE 38

3D pathfinding in RenderWare

  • RenderWare automatically generates graphs for pathfinding in three steps
  • Exploration
  • The 3D world is exhaustively explored using nodes while obeying the physics model of the game (preventing creation of unnecessary nodes)
  • Optimization
  • The generated path is optimized into a minimum of path nodes
  • Edge creation
  • More edges are added to the existing nodes to produce the final path data

SLIDE 39

Learning architectures

  • The subject of learning is central in AI and provides appealing possibilities for a computer game
  • If the computer-controlled opponent (or ally) can learn over time and become more proficient at its task, the player will have a much more believable and challenging game experience
  • Most learning algorithms do not operate in real-time, but advances have been made recently
  • Outside of a few special games, learning architectures are very uncommon, but this may change soon

SLIDE 40

Neural networks

  • Neural networks are the most common learning architectures in AI
  • Essentially mimic networks of human neurons
  • Artificial neurons are arranged into large interconnected networks which work together to recognize patterns on their inputs and produce a result on their outputs
  • Each connection has an associated weight, and the learning phase consists of adjusting individual weights given some training examples to produce the desired result
  • Neural networks were once announced to be the solution to all AI problems, but this has proved to be exaggerated
  • Still useful in a lot of areas nonetheless
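The weight-adjustment idea can be shown in its smallest form: a single artificial neuron (a perceptron) trained on the OR function. This is a sketch of the learning rule only, not a full multi-layer network; the learning rate and epoch count are arbitrary choices.

```python
# A single-neuron sketch of the learning phase: nudge each weight in
# proportion to its input and the output error, over many passes.

def step(x):
    # Threshold activation: fire (1) when the weighted sum is positive.
    return 1 if x > 0 else 0

def train(samples, lr=0.1, epochs=50):
    w = [0.0, 0.0]
    bias = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = step(w[0] * x1 + w[1] * x2 + bias)
            err = target - out
            # Perceptron learning rule: adjust weights toward the target.
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            bias += lr * err
    return w, bias

or_data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train(or_data)
print([step(w[0] * x1 + w[1] * x2 + b) for (x1, x2), _ in or_data])  # [0, 1, 1, 1]
```

A single neuron can only learn linearly separable patterns (famously, not XOR), which is why practical networks stack many neurons in layers.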

SLIDE 41

Example: The Sims

  • The Sims (Maxis, led by Will Wright) is one of the most successful games ever made
  • Sims are modeled using a-life
  • Uses something called smart terrain
  • All objects in the world embed the behaviors and actions associated with an object in the object itself
  • The object also contains information about the consequences of an action (e.g. playing with a soccer ball will decrease the boredom of a Sim and also give it physical exercise)
  • Smart terrain allows for easy bottom-up construction of the game, as well as easily being able to extend it with new objects
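The smart-terrain idea can be sketched abstractly: each object advertises its action and that action's effects on an agent's needs, and the agent simply picks the object whose advertised effects best reduce its current needs. Everything here (class names, need names, the scoring rule) is invented to illustrate the concept, not taken from The Sims' actual implementation.

```python
# Sketch of "smart terrain": objects carry their own actions and
# consequences; the agent queries objects rather than knowing about them.

class SmartObject:
    def __init__(self, name, action, effects):
        self.name = name
        self.action = action
        self.effects = effects  # need -> change (negative = reduces the need)

class Sim:
    def __init__(self):
        self.needs = {"boredom": 8, "energy": 3}  # higher = more urgent

    def choose(self, objects):
        # Score each object by how much it reduces the most urgent needs.
        def score(obj):
            return sum(self.needs[n] * -delta for n, delta in obj.effects.items())
        return max(objects, key=score)

ball = SmartObject("soccer ball", "play", {"boredom": -4})
bed = SmartObject("bed", "sleep", {"energy": -2})
sim = Sim()
print(sim.choose([ball, bed]).action)  # play (boredom is the urgent need)
```

The bottom-up property the slide mentions falls out directly: adding a new object type requires no change to the Sim class at all.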

SLIDE 42

Example: Black and White

  • Black and White (Lionhead Studios, led by Peter Molyneux) features large virtual worlds where the player (a god) controls a tribe of his people in their fight for survival
  • The player also has an avatar in the world in the shape of a huge creature that can be taught various kinds of behavior that allow it to help (or break!) your own efforts
  • The B&W creature is modeled using a-life principles
  • It has a number of desires (hunger, curiosity, fatigue, etc) and a number of actions encoded in the world
  • For instance, a hungry creature might pick up a villager and eat him
  • The player either rewards or punishes the creature for its actions and by doing so increases or decreases the creature's urge for a specific action
  • To make the creature only eat enemy villagers, you would first reward the creature when it eats a villager and then punish it when it eats one of your own villagers, thereby refining its mental model

SLIDE 43

Example: Black and White

  • The internal AI solves this by distinguishing between individual attributes of various instances
  • In essence, this is implemented using a decision tree internal to the creature AI, which gets dynamically updated as the creature learns new things

SLIDE 44

Words of advice

  • Always explain AI decisions extremely well
  • Examples:
  • Radio chatter in Half Life
  • Grunts in Halo

SLIDE 45

Summary

  • Artificial intelligence is one aspect more and more games are focusing on
  • AI in games is more about creating tunable and challenging opponents using whatever means necessary than about being true to the field of AI
  • Agents are a useful concept for game AI (due to semantics and object-orientation)
  • A typical AI model includes perception, decision and action in a continuous cycle
  • Interesting AI topics include artificial life, evolutionary algorithms, neural networks, agents, etc
  • 3D pathfinding is a very important aspect of AI in games
