CS 480/680: GAME ENGINE PROGRAMMING - ARTIFICIAL INTELLIGENCE
3/7/2013 Santiago Ontañón santi@cs.drexel.edu https://www.cs.drexel.edu/~santi/teaching/2013/CS480-680/intro.html
Game AI in Educational Game
A CS RA position @ 20 hrs/week for two terms in the Digital Media program. Experience Management for educational games. Skills: basic knowledge in AI; C# and/or JavaScript; Unity 3D experience is a plus. If you are interested, contact me.
Outline
- Student Presentations
- Game AI
- Movement
- Path-finding
- Scripting
- Board Games
- Learning
- Project Discussion
Student Presentations
- Michael Parenzan:
- “Game Audio Design Patterns”
- William Stern:
- “Classic Super Mario 64 Third-Person Control and Animation”
- Ryan Daugherty:
- “Time Bounded A*: Efficient Real Time Pathfinding”
- Andrew Townsley
- “Time and Consistency Management for Multiserver-Based
MMORPGs”
- Thomas Burdak:
- “Vector and Plane Tricks”
Outline
- Student Presentations
- Game AI
- Movement
- Path-finding
- Scripting
- Board Games
- Learning
- Project Discussion
Game Engine Architecture
[Layered diagram: Hardware -> Drivers -> OS -> SDKs -> Platform Independence Layer -> Utility Layer -> Resource Management -> Game Engine Functionalities -> Game-Specific code; lower layers are game engine dependencies]
Game Engine Architecture
Subsystems: Rendering Engine, Animation Engine, Collisions, Physics, Audio, Online Multiplayer, Profiling & Debugging, Gameplay Foundations (Game Loop), Artificial Intelligence, Scripting
What is Game AI?
- Artificial Intelligence for Computer Games
- Different from traditional AI
Traditional AI: optimality, efficiency
Game AI: fun, artificial “stupidity”
Examples of Game AI
“Pac-Man” (1980). First ever game to feature AI. AI: finite state machine
Examples of Game AI
“Double Dragon” AI: Finite State Machines
Examples of Game AI
A chess AI needs to provide a collection of difficulty levels. Only one of them, the hardest (to be played only against grand-masters), falls into the realm of traditional AI.
Examples of Game AI
“Left 4 Dead 2” AI Director adjusts game pace to ensure desired dramatic effects
Examples of Game AI
“Starcraft II” Strategy, planning, path- finding, economics, etc.
AI Interface with the Game Engine
[Diagram: Game State, Collision, and Input feed the AI's World Interface (perception), which feeds Strategy -> Decision Making -> Movement]
Example: RPG Game
- Consider Morrowind (Elder Scrolls III)
Example: RPG Game
[Diagram: Game State, Collision + Line of Sight, and Input feed the AI]
Perception: navigation mesh, quests’ status, interaction events (hits, etc.)
Strategy / Decision Making: scripts
Movement: A* + animation
Example: RPG Game
The perception layer provides the triggers and conditions used in the scripts (basically, we need all the information needed to test the conditions in the scripts, or an interface to do so).
Example: RPG Game
Movement information is needed basically for A*, although some scripts might check for things like “if there is a path between A and B then …”.
Polling
- Simplest idea:
- No special perception layer
- Whenever the AI wants some piece of information, it just asks the
Game State / Collision / Line of Sight modules about it
- Benefits:
- Easy
- Problems:
- Inefficient
- The same information might be asked for multiple times
- Most of the time the player does not collide with enemies, why
check it at every frame?
- Hard to debug
Message Passing
- Event-based approach
- Wait for an event message to arrive rather than constantly
checking for conditions to be true at each cycle (e.g. check if the player is in line of sight):
- When something relevant happens (e.g. player is in line of sight), a
message is sent to all the AIs that care about this condition.
- Centralized “event manager”
Event Manager
[Diagram: a Checking Engine feeds an Event Queue; a Registry maps event types to the Event Listeners AI1, AI2, …, AIn]
Event Manager
AIs register for certain events, like collisions with certain objects, etc.
Event Manager
A single checking engine checks for the events for which any AI is registered.
Event Manager
Certain events do not need to be checked for, since the originating entity sends them directly (e.g. state changes in another AI).
Outline
- Student Presentations
- Game AI
- Movement
- Path-finding
- Scripting
- Board Games
- Learning
- Project Discussion
Game AI Architecture
[Diagram: World Interface (perception) -> Strategy -> Decision Making -> Movement]
Steering Behaviors
- Basic building blocks for continuous movement
- Whole family of steering behaviors
- Widespread use among commercial computer games
Steering Behaviors: Uses
Steering Behaviors: Uses
Decision Making Movement
?
Steering Behaviors: Uses
Decision Making: in car racing games, Decision Making is typically hard-coded. The game designers create a set of waypoints on the track (or in the track pieces), and cars go to them in order.
Steering Behaviors: Uses
Movement is in charge of driving the car to each of the waypoints, avoiding opponents, braking, accelerating, turning, etc.
Steering Behaviors: Uses
- Not just racing games
- Any games with vehicles (helicopters, tanks, planes,
boats)
- Or even characters moving in a 3D environment
(continuous movement) with inertia (e.g. sports games)
- Most FPS games just assume there is no inertia and characters
can move in any direction at any time (bad physics!)
Basic Steering Behaviors
- Seek
- Flee
- Arrive
- Align
- Velocity Matching
Steering Behaviors
- Defined as methods that return the acceleration that the
body/vehicle controlled by the AI needs to have during the next execution frame:
- Input: position, orientation, speed of the AI, target
- Output: acceleration
- They are executed once per game cycle
- Some return linear acceleration (accelerate north at 3 m/s²),
some return angular acceleration (turn right at 2 rad/s²)
Seek
- Move towards a (potentially moving) target
Target
Seek
- Move towards a (potentially moving) target
Target Difference D = E - S S: Start coordinates E: End coordinates V: Current Speed
Seek
Seek(character, E)
  D = E - character.position
  ND = D / |D|
  A = ND * maxAcceleration
  Return A
[Diagram: target at E; character at character.position moving with character.velocity; D = E - character.position; ND is D normalized; A points from the character toward the target]
Composite Steering Behaviors
- Pursue and Evade
- Face
- Looking where you are going
- Wander
- Path Following
- Separation
- Collision Avoidance
- Obstacle/Wall Avoidance
- General Combination
Steering Behaviors in Vehicles
- As defined so far, steering behaviors assume:
- Character/vehicle under control can exert a force at an arbitrary
angle
- The direction of movement is independent of the direction being
faced
- None of those assumptions are satisfied by vehicles
- Are Steering behaviors still useful then?
Motor Control Layer
- Steering Behaviors generate the “desired accelerations”.
An underlying “motor layer” translates that into commands like “accelerate, brake, turn right, turn left”:
[Diagram: World Interface (perception) -> Strategy -> Decision Making -> Movement, where Movement is split into Steering Behaviors -> Motor Control]
Output Filtering
- Idea:
- Use the steering behavior to produce an acceleration request
- Project the request onto the accelerations that the vehicle at hand
can perform, and ignore the rest
[Diagram: the steering request vector, the set of accelerations the vehicle can perform, and the projection of the request onto that set]
Outline
- Student Presentations
- Game AI
- Movement
- Path-finding
- Scripting
- Board Games
- Learning
- Project Discussion
Pathfinding
- Problem:
- Finding a path for a
character/unit to move from point A to point B
- One of the most common
AI requirements in games
- Especially critical in RTS games, since there are lots of units
A B
Pathfinding
- Simplest scenario:
- Single character
- Non-real time
- Grid
- Static world
- Solution: A*
- Complex scenario:
- Multiple characters (overlapping paths)
- Real time
- Continuous map
- Dynamic world
Pathfinding is a Problem!
- Even in modern commercial games:
- http://www.youtube.com/watch?v=lw9G-8gL5o0&feature=player_embedded
Quantization and Localization
- Pathfinding computations are performed with an
abstraction over the game map (a graph, composed of nodes and links)
- Quantization:
- Game map coordinates -> graph node
- Localization:
- Graph node -> Game map coordinates
Tile Graphs
- Divide the game map into equal tiles (squares, hexagons, etc.)
- Typical in RTS games
Navigation Meshes (Navmesh)
- By far the most widely used
- Use the level geometry as the pathfinding graph
- Floor is made out of triangles:
- Use the center of each “floor” triangle as a characteristic point
Navigation Meshes (Navmesh)
- One characteristic point is only connected to neighboring ones
Navigation Meshes (Navmesh)
- The validity of the graph depends on the level designer. It
is her responsibility to author proper triangles for navigation:
Pathfinding
- Simplest scenario:
- Single character
- Non-real time
- Grid
- Static world
- Solution: A*
- Complex scenario:
- Multiple characters/Dynamic world: D* / D* Lite
- Real time: TBA* / LRTA*
- Continuous map: Quantization
Outline
- Student Presentations
- Game AI
- Movement
- Path-finding
- Scripting
- Board Games
- Learning
- Project Discussion
Scripting
- Most Game AI is scripted
- Game Engine allows game developers to specify the
behavior of characters in some scripting language:
- Lua
- Scheme
- Python
- Etc.
- Game specific languages
Scripting
- Advantages:
- Gives control to the game designers (characters do whatever the
game designers want)
- Disadvantages:
- Labor intensive: the behavior of each character needs to be
predefined for all situations
- Rigid:
- If the player finds a hole in one of the behaviors, she can exploit it again
and again
- Not adaptive to what the player does (unless scripted to be so)
Finite State Machines
[State diagram: states Harvest Minerals, Build Barracks, Train Marines, Attack Enemy, Explore, Train SCVs; transitions fire on conditions such as “4 SCVs harvesting”, “less than 4 SCVs”, “barracks built”, “4 marines & enemy seen”, “enemy seen”, “4 marines & enemy unseen”, “no marines”]
Finite State Machines
- Easy to implement:
switch(state) {
  case START:
    if (numSCVs < 4) state = TRAIN_SCVs;
    if (numHarvestingSCVs >= 4) state = BUILD_BARRACKS;
    Unit *SCV = findIdleSCV();
    Unit *mineral = findClosestMineral(SCV);
    SCV->harvest(mineral);
    break;
  case TRAIN_SCVs:
    if (numSCVs >= 4) state = START;
    Unit *base = findIdleBase();
    base->train(UnitType::SCV);
    break;
  case BUILD_BARRACKS:
    …
}
Finite State Machines
- Good for simple AIs
- Become unmanageable for complex tasks
- Hard to maintain
Finite State Machines (Add a new state)
[The same state diagram as before, with the question: where does a new state fit in?]
Finite State Machines (Add a new state)
[The state diagram extended with a new state, Attack Inside Enemy, triggered by “enemy inside base”; transitions to and from it must be added to many existing states]
Hierarchical Finite State Machines
[Diagram: a two-state FSM: “Standard Strategy” and “Attack Inside Enemy”, switching on “enemy inside base” / “no enemy inside base”]
- FSM inside of the state of another FSM
- As many levels as needed
- Can alleviate complexity problem to some extent
Hierarchical Finite State Machines
[Diagram: the original FSM nested inside the “Standard Strategy” super-state, with “Attack Inside Enemy” as a sibling state at the top level]
- FSM inside of the state of another FSM
- As many levels as needed
- Can alleviate complexity problem to some extent
Behavior Trees
- Combination of techniques (some of them we covered last
week):
- Hierarchical state machines
- Scheduling
- Automated planning
- Action Execution
- Increasingly popular in commercial games
- Strength:
- Visual and easy to understand way to author behaviors and
decisions for characters without having programming knowledge
Behavior Trees Appeared in Halo 2
Halo 2 (2004)
Example Behavior Tree
Behavior Tree Basics
- A behavior tree (BT) captures the “behavior” or “decision
mechanism” of a character in a game
- At each frame (if synchronous):
- The game engine executes one cycle of the BT:
- As a side effect of execution, a BT executes actions (that control the character)
- The basic component of a behavior tree is a task
- At each game cycle, a cycle of a task is executed
- It returns success, failure, error, etc.
- As a side effect of execution they might execute things in the game
- Three basic types of tasks:
- Conditions
- Actions
- Composites
Behavior Tree Tasks
Generic (the same for all games): Sequence, Selector
Domain-dependent (each game defines its own): Actions, Conditions
Example Execution:
- Goal: Make a character move right when the player is
near
[Tree: Sequence -> (Is Player Near?, Move Right)]
What If There Are Obstacles?
- Goal: Make a character move right when the player is
near, even if the door is closed
[Tree: Sequence -> (Is Player Near?, Selector -> (Sequence -> (Is Door Open?, Move Right), Sequence -> (Is Door Closed?, Move to Door, Open Door, Move Right)))]
Outline
- Student Presentations
- Game AI
- Movement
- Path-finding
- Scripting
- Board Games
- Learning
- Project Discussion
Board Games
- Main characteristic: turn-based
- The AI has a lot of time to decide the next move
Board Games
- Not just chess…
Game Tree
[Game tree: current situation at the root, then player 1 actions, then player 2 actions, with utilities U(s) at the leaves]
- Game trees capture the effects of successive action
executions:
Game Tree
Pick the action that leads to the state with maximum expected utility, after taking into account what the other players might do.
Game Tree
In this example, we look ahead only one player 1 action and one player 2 action, but we could grow the tree arbitrarily deep.
Minimax Principle
[Game tree: current situation, player 1 actions, player 2 actions, with leaf utilities U(s) = -1, 0, -1, 0, 0, 0]
- Positive utility is good for player 1, and negative for player 2
- Player 1 chooses actions that maximize U, player 2 chooses
actions that minimize U
Minimax Principle
[Game tree: player 1 action (max) at the root, player 2 action (min) below; leaf utilities -1, 0, -1, 0, 0, 0; the min values propagated up to player 1’s options are -1, -1, 0]
- Positive utility is good for player 1, and negative for player 2
- Player 1 chooses actions that maximize U, player 2 chooses
actions that minimize U
Minimax Algorithm
Minimax(state, player, MAX_DEPTH)
  IF MAX_DEPTH == 0 RETURN (U(state), null)
  BestAction = null
  BestScore = null
  FOR Action in actions(player, state)
    (Score, Action2) = Minimax(result(Action, state), nextplayer(player), MAX_DEPTH - 1)
    IF BestScore == null || (player == 1 && Score > BestScore) || (player == 2 && Score < BestScore)
      BestScore = Score
      BestAction = Action
  ENDFOR
  RETURN (BestScore, BestAction)
Successes of Minimax
- Deep Blue defeated Kasparov in Chess (1997)
- Checkers was completely solved by Jonathan Schaeffer (2007):
- If neither player makes mistakes, the game is a draw (like tic-tac-toe)
- Go:
- Using a variant of minimax based on Monte Carlo search (UCT),
in 2011 the program Zen19S reached 4 dan (professional humans
are rated from 1 to 9 dan)
Interesting Uses of Minimax
- “bastet” (Bastard Tetris)
Beyond Minimax
- Alpha-beta search (improvement over Minimax)
- Max^n: for multiplayer (more than 2) games
- Large games: Monte-Carlo search (UCT)
Outline
- Student Presentations
- Game AI
- Movement
- Path-finding
- Scripting
- Board Games
- Learning
- Project Discussion
Machine Learning
- Branch of AI that studies how to:
- Infer general knowledge from data: supervised learning
- Infer behavior from data: learning from demonstration
- Find hidden structure in data: unsupervised learning
- Infer behavior from trial and error (data): reinforcement learning
- Underlying principle is inductive inference:
- E.g. after seeing 100 times that objects fall down, we can infer by
induction the general law of gravity.
- We cannot deduce gravity from observation, we can only induce it.
Machine Learning Methods:
- K-Nearest Neighbor
- Decision Trees (ID3, C4.5)
- Linear Regression
- Bayesian Models (Naïve Bayes)
- Boosting (AdaBoost)
- Kernel Methods (SVM)
- Neural Networks (MLP)
- Reinforcement Learning (Q-learning, etc.)
- Clustering (K-means, Spectral Clustering, etc.)
- Etc.
Uses of Learning in Games
- Driving Games:
- Learn good driving behavior from observing humans
- Fine-tune parameters of physics simulation, or of car handling
parameters
- RTS Games:
- Automatically balance the game
- RPG/FPS Games:
- Believable movements
- Others:
- Specific game genres possible only with learning (training games)
- Automatically adapt to player strategies
- Learning player models
Example Usages: Black & White
Learning in Black & White
[Belief-Desire-Intention architecture: Beliefs, Desires, and Opinions feed an Intention (abstract plan) -> Specific Plan -> Action List]
Learning in Black & White
Desires: represented as hand-crafted perceptrons (single-layer neural networks). Given the current situation, they activate more or less, triggering certain desires. Example: hunger. The structure of the perceptrons is hard-coded, but the parameters are learned.
Learning in Black & White
Opinions: represented as learned decision trees, one per desire. They capture towards which object the creature should express each desire. Example: hunger. The creature will learn a decision tree from the player’s feedback about what can be eaten.
Learning in Black & White
Beliefs: just lists of properties of the objects in the game, used for learning opinions.
Learning in Black & White
Intention: which desire will be attempted, and towards which object, e.g. “destroy town” or “eat human”.
Learning in Black & White
Specific Plan: the desire, plus the list of objects that will be used, e.g. “destroy town by throwing a stone”.
Learning in Black & White
Action List: the specific list of actions to execute the plan.
Learning in Black & White
- BDI model to simulate character personality
- Learning is deployed at specific points to adapt the BDI
model to player preferences
- Quite advanced, for commercial game standards!
Outline
- Student Presentations
- Game AI
- Movement
- Path-finding
- Scripting
- Board Games
- Learning
- Project Discussion
Links to Interesting Game Videos
- Magicka:
- https://www.youtube.com/watch?v=RVrQ8fBOG_w
- QWOP:
- https://www.youtube.com/watch?v=gB8OqCRtTDA
- Surgeon Simulator:
- http://www.youtube.com/watch?v=Y2F3ZWEEbF4&feature=youtu.be
- Katamari:
- https://www.youtube.com/watch?v=cwhFH75OCDs
Remember that next week:
- Final Project Deliverable:
- Demo of your Game Engine:
- 9 minutes per team:
- For both online and in-class students
- It will be timed. After 9 minutes exactly I will stop the presentation.
- Source Code
- Document:
- Updated document from deliverable 3.
- Submission procedure:
- Email to (copy both):
- Santiago Ontañón santi@cs.drexel.edu
- Stephen Lombardi sal64@drexel.edu
- Subject: CS480-680 Project Deliverable 4 Group #