Representing People in Virtual Environments
Will Steptoe 17th November 2009
What's in this lecture?
First Hour: Overview and Applications. State-of-the-Art, Social Agency, Human Behaviour, Realism, Applications, Agency (Agents and Avatars), CVEs, Avatar Control.
Second Hour: Technical Aspects and Demonstration
INTRODUCTION
State of the Art
– Real-time: Heavy Rain (Quantic Dream, 2009)
– Pre-rendered: The Curious Case of Benjamin Button (David Fincher, 2008)
Virtual Humans
Convincing virtual humans are a problem beyond raw computing power: the more real they look, the more real we expect them to behave.
To get them perfectly right, we would have to completely understand human perception in reality!
So what should we aim for with virtual humans?
Social Agency and the ELIZA effect
People have a strong tendency to view computer systems and applications as social agents, reading far more understanding than is warranted into symbols and graphical displays.
"Individuals mindlessly apply social rules and expectations to computers" – Nass and Moon, 2000.
This was first documented by Weizenbaum (1966) when performing user studies with ELIZA, a computer program for the study of natural language communication between man and machine.
SOCIAL AGENCY
Acting as an intermediary between participants and the system, ELIZA simulated a Rogerian psychotherapist by rephrasing input statements from the user and returning them as questions (e.g. "I feel depressed" becomes "Why do you think you are feeling depressed?").
Weizenbaum observed participants becoming emotionally engaged when 'communicating' with ELIZA, and some even asked to be left alone with the system.
This became known as the 'ELIZA effect', and may be considered a precursor to many observations found in the VE literature concerning presence (place illusion) and copresence.
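ELIZA's rephrasing strategy can be sketched in a few lines. The patterns and response templates below are illustrative inventions, not Weizenbaum's original script:

```python
import re

# Minimal ELIZA-style sketch: match a pattern in the user's statement
# and reflect it back as a question. Rules here are invented examples.
RULES = [
    (re.compile(r"i feel (.+)", re.I), "Why do you think you are feeling {0}?"),
    (re.compile(r"i am (.+)", re.I),   "How long have you been {0}?"),
]

def respond(statement: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(statement)
        if match:
            return template.format(match.group(1).rstrip(".!"))
    return "Please tell me more."  # fallback when nothing matches

print(respond("I feel depressed."))  # -> Why do you think you are feeling depressed?
```

The striking observation is how little machinery is needed to trigger social responses in users.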
The Fear of Public Speaking
An experiment on anxiety associated with public speaking: participants gave a talk to a virtual audience; the speaker talked about his favourite subject: cables. The audience of agents responded at appropriate intervals.
Pertaub, D.-P., Slater, M., and Barker, C. (2002). An experiment on public speaking anxiety in response to three different types of virtual audience. Presence: Teleoperators and Virtual Environments, 11(1):68-78.
The Fear of public speaking
– Positive, Negative and Mixed
maintained gaze, clapped hands, etc.
the table, avoided eye contact, and finally walked out
responses and gradually turned positive
SOCIAL AGENCY
Realistic responses in VEs?
– Speaker confidence correlated with the perceived good mood of the agents.
– Participants showed anxiety responses with the negatively inclined audience: sweating and stammering, and vocal protests at the agent behaviours.
– Even agents of modest fidelity can elicit significant user responses when they mimic real-life context-appropriate behaviours.
Categories of behavioural cues
– Voice: tone, pitch, loudness…
– Gaze: the most studied behavioural cue due to its role in communication
– Facial expressions: probably the most intense social signallers
– Gestures: numerous, and varying with culture for instance
– Posture: culture and gender dependent
Argyle, M. (1998). Bodily Communication. Methuen & Co Ltd, second edition.
HUMAN BEHAVIOUR
Facial expression
– Facial action parameters (most basic units)
– Phonemes (mouth shapes for lip-sync)
– Principal component analysis
Gesture
– Speakers accompany speech with gestures
– Also back-channel gestures by listeners (e.g. head nod)
– Several types, e.g. beat, iconic
Posture
– Postures characteristic of particular emotions have been observed
– Often animated by choosing from (or blending between) a library of gestures
– Communicates emotion and interpersonal attitude
Coulson, M. (2004). Attributing emotion to static body postures: Recognition accuracy, confusions, and viewpoint dependence. Journal of Nonverbal Behavior, 28(2):117–139.
Measuring Success
Presence and copresence are common measures, though there are caveats.
– Presence: the extent to which sensory data projected within a virtual environment replaces the sensory data from the physical world; quantified by rating the individuals' sense of presence during the experience.
– Copresence: the extent to which participants act and respond to the agents as if they were real.
– Measured by Subjective means (questionnaires, interviews) and Objective means (physiological, behavioural).
Subjective means
– Questionnaires and interviews
– Various questionnaires exist: http://www.presence-research.org
– Caveats: reliance on the individual's accurate post-hoc recall, their processing and rationalisations of their experience in the VE, and varying interpretations of the word 'presence'
Objective: Responses to stimuli
– Subconscious responses and strategies
– Neural responses to social situations
– Psychological responses
– Physiological responses: electromyography, respiratory activity
– Behavioural responses, varying with emotional state, gender etc.
– How do we interpret the data and results?
Uncanny Valley
Mori's hypothesis: as the appearance of robots (and other facsimiles) of humans approaches that of real humans, it can provoke revulsion among human observers.
– Controversial: it's not very rigorous or scientific, and many people don't believe it
– There are problems, but maybe it captures something
REALISM
The Uncanny Valley
DreamWorks reduced the realism of Princess Fiona (Shrek):
“…she was beginning to look too real, and the effect was getting distinctly unpleasant.”
Final Fantasy movie:
“…it begins to get grotesque. You start to feel like you're puppeteering a corpse”
Uncanny Valley
– Up to a point, the more human-like a character, the more people like it.
– Beyond that point it starts to get disturbing: corpses are used a lot as metaphors.
– Compared with appearance, movement is more important.
Different Types of Realism
– Appearance: what it looks like (pictures, film, games, VEs)
– Movement: how it moves, animation (film, games, VEs)
– Interaction: how it responds and interacts (games, VEs)
Mismatch in Realism
The uncanny valley effect may arise when movement and behavioural realism do not match graphical realism: something that looks human but does not act like a human.
Appearance vs. Behaviour
Vinayagamoorthy, V., Garau, M., Steed, A., and Slater, M. (2004). An eye gaze model for dyadic interaction in an immersive virtual environment: Practice and experience. Computer Graphics Forum, 23(1):1-11.
Appearance vs. Behaviour
Experimental design (participant pairs per condition):

                 Cartoon-Form           Higher-Fidelity
Random gaze      3 ♂ pairs, 3 ♀ pairs   3 ♂ pairs, 3 ♀ pairs
Inferred* gaze   3 ♂ pairs, 3 ♀ pairs   3 ♂ pairs, 3 ♀ pairs

Perceived quality of communication (Appearance × Behaviour):

                 Cartoon-Form   Higher-Fidelity
Random gaze      High           Low
Inferred* gaze   Low            High
Garau, M., Slater, M., Vinayagamoorthy, V., Brogni, A., Steed, A., and Sasse, A. M. (2003). The impact of avatar realism and eye gaze control on the perceived quality of communication in a shared immersive virtual environment. In Proceedings of CHI 2003.
Appearance vs. Behaviour
– Inferred gaze improved responses to the more visually-realistic avatars.
– For the cartoon-form avatar, the more complex gaze model had a negative effect on participant response.
– The differences between the gaze models were very subtle: saccadic velocity and fixation durations.
– Responses depend on the match between the type of avatar and the fidelity of the gaze model.
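An 'inferred' gaze model of the kind studied here can be sketched as a sampler of fixation targets and durations conditioned on whether the avatar's user is speaking or listening. The parameter values below are invented for illustration; the published models used measured duration distributions:

```python
import random

# Hypothetical gaze model sketch: mean fixation durations (seconds) for
# looking 'at partner' vs 'away', per conversational state. Values invented.
MEAN_FIXATION = {
    "speaking":  (1.8, 2.1),
    "listening": (2.5, 1.6),
}

def next_fixation(state: str, looking_at_partner: bool):
    """Return the next gaze target and a sampled fixation duration."""
    at_mean, away_mean = MEAN_FIXATION[state]
    if looking_at_partner:
        # Currently at partner: next fixation is away, sampled duration.
        return "away", random.expovariate(1.0 / away_mean)
    return "partner", random.expovariate(1.0 / at_mean)

target, duration = next_fixation("speaking", looking_at_partner=True)
print(target, round(duration, 2))
```

Driving such a model from audio (who is speaking) is what made the "inferred" condition behaviourally plausible without any eye tracking.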
Realism vs. Believability
Believability may be a better goal than realism for virtual humans:
– Not how much a character is objectively like a human
– How much we feel it is, and respond to it as if it is
– Bugs Bunny is very believable
– But don't turn into an anti-realism zealot!
Characters in Virtual Environments
– Much of our response to virtual environments is a response to characters.
– Characters populate virtual environments.
– In many applications characters are the primary content.
– Characters are much of what makes environments interesting.
APPLICATIONS OF VIRTUAL CHARACTERS
Applications of Virtual Characters
Games
Non-player characters are generally there either to be shot, or to have more complex interactions with. Player-characters represent the user.
Online Virtual Worlds
Users are represented by avatars – an iconic representation of a human. Interaction via text, voice and nonverbal (scripted animation) means.
Immersive VEs
Users are embodied by avatars – natural body movement is mapped to avatar animation.
Multi-user worlds
Avatars representing other users are central to multi-user worlds (the most important feature?)
Social norms of gaze and distance behaviour are preserved in VEs: male-male dyads maintain greater interpersonal distance than female-female dyads, male-male dyads maintain less eye contact than female-female dyads, and decreases in interpersonal distance are compensated for with gaze avoidance (Yee 2007).
This is consistent with equilibrium theory, which specifies an inverse relationship between mutual gaze and interpersonal distance.
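The inverse gaze/distance relationship can be sketched as a toy compensation rule. The function and its parameters are hypothetical illustrations, not taken from Yee (2007) or Argyle:

```python
# Hypothetical sketch of equilibrium-style compensation: when interpersonal
# distance drops below a comfortable level, mutual gaze is reduced.
# All parameter values are invented for illustration.
def gaze_fraction(distance: float,
                  comfort_distance: float = 1.2,
                  base_gaze: float = 0.6) -> float:
    """Fraction of time in mutual gaze, reduced at close range."""
    if distance >= comfort_distance:
        return base_gaze
    # Scale gaze down linearly as the other person gets closer.
    return base_gaze * (distance / comfort_distance)

print(gaze_fraction(1.5))  # -> 0.6
print(gaze_fraction(0.6))  # -> 0.3
```

A rule of this shape could drive an avatar's gaze behaviour from tracked interpersonal distance alone.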
Immersive VR
Immersive VR presents life-size, real-time characters that may be agents (autonomous), avatars (other human users) or hybrids.
Agency: Avatars and Agents
Virtual characters can play many roles, but there are two primary types:
– Avatars: representations of you, or other people; user-controlled (tracked)
– Agents: others, that you interact with; computer-controlled
– Hybrids: part tracked, part simulated
AGENCY
– Agent behaviour is programmed, not tracked.
– Avatars also need some programmed behaviour: blinking at the least.
– Systems may make inferences about 'mood' to determine aspects of avatar behaviour.
– Tracking cannot capture every aspect of the human's behaviour, so much must be inferred and programmed.
– Avatar: tracking. Agent: programming. Hybrid: a mixture of both.
– An agent's behaviour is entirely programmed.
– An avatar's behaviour is determined by the behaviour of the real tracked human.
– But typically in VR only head and one hand movements are tracked!
Interactive Behaviour
Questions to ask about an interactive character:
– In what ways do we interact with a character?
– How does the character respond?
– How is it controlled?
Agents
Agents range from simple game NPCs to complex conversational agents (from dialogue trees to spoken interaction).
AGENTS
Agents: Game NPCs
– Behaviours: moving, shooting, simple conversation
– Techniques: finite state machines, scripts, path planning
Virtual Humans: Agents
Agents are entirely program-controlled rather than representing an on-line human. These examples are from virtual fashion shows: http://www.miralab.unige.ch
Agents: Embodied Conversational Agents
– Input: speech conversation, gestures etc., tracking data
– Complex conversational behaviour driven by AI methods
Inferring Behaviour: Animation imitating life
– Controllers of behaviour in accordance with internal states
– Creating unique identities
– Controlling behaviour
– Interpersonal relationships and attitudes
Lasseter, J. (1987). Principles of traditional animation applied to 3d computer animation. ACM SIGGRAPH Computer Graphics, 21(4):35–44.
Avatars
In networked environments, the avatar is the fundamental mediator of the visual interaction, functioning both to identify users and to communicate nonverbal behaviour including position, identification, focus of attention, gesture and actions (Thalmann 1999).
AVATARS
Avatars typically take human form, reflecting their status as a representation of a human user, and critically enable a direct relationship between the user's natural bodily movement and the corresponding animation of their avatar embodiment in the VE.
Avatars let us share virtual environments with other people:
– Simulation of real events, training, entertainment, shared VEs
– People may be represented entirely synthetically, as in shared (networked) VEs
Collaborative Virtual Environments (CVEs)
CVEs may be immersive or non-immersive (i.e. desktop).
– In immersive systems, avatars embody the real tracked person in terms of spatial representation (where they are, what they are looking at) and behavioural representation (what they are doing).
– In non-immersive systems, avatar control is performed by standard input devices, as no tracking is available.
AVATARS IN CVEs
Avatars and Identity
Avatars are a means of identity creation:
– Appearance, clothes, hair, sometimes animation
– Have a different appearance, personality or gender
– Explore hidden sides of yourself
– Some people feel their avatar is "More Me" than their physical self
Avatars as social tools
– Avatars support social interaction
– They carry nonverbal communication (body language)
– But users use them unrealistically, or often not at all
Avatars in Immersive CVEs
Immersive CVEs allow spatial interaction more easily than other telecommunication systems such as video; spatiality is a natural feature of the real world.
AVATARS IN IMMERSIVE CVEs
Avatar Mediated Communication
Hardware and software work together to approximate reality: avatars are driven by tracking of the head and one hand at least, as these are the prerequisite tracking devices used in immersive systems.
CVE hardware and software are usually decoupled. This means that the same software will operate across different hardware configurations.
Asymmetric collaboration is possible (e.g. a user in a CAVE and a user on a mobile device). Each user views and interacts via input devices appropriate to their hardware.
Controlling avatars
Full body tracking
AVATAR CONTROL
Minimal Tracking for IK in VR
Head and hand tracking provide a minimal configuration for IK representing the movements of a human in VR.
– www.cis.upenn.edu/~hollick/presence/presence.html
Tracking a few points is sufficient to reasonably reconstruct the approximate body configuration in real-time.
Nonverbal Expression – tracking vs simulation
A key limiting factor of avatar-mediated communication (AMC) is the lack of nonverbal communication (NVC): avatars are primitive when compared to video of real people. In AMC, NVC behaviours can be modelled (thus forming a hybrid avatar-agent), but models are unable to communicate the subtleties of human behaviour and the unpredictability of social interaction. Moreover, models are not faithful to the controlling user's behaviour, so may communicate incorrect signals.
Tracking is the preferred solution, but is difficult. Implement tracked behaviour according to priority.
Problems with Controlling Avatars
– In desktop systems, users must choose between either selecting a gesture from a menu or typing in a piece of text for the character to say. This means the subtle connections and synchronisations between speech and gestures are lost.
– Users must consciously choose which gesture to perform at a given moment. As much of our expressive behaviour is subconscious, the user will simply not know what the appropriate behaviour to perform at a given time is. [BodyChat, Vilhjalmsson, H. and Cassell, J., 1998]
– Menu-driven interfaces tend to support explicit displays of emotion, whereas Thórisson and Cassell (1998) have shown that envelope displays (subtle gestures and actions that regulate the flow of a dialogue and establish mutual focus and attention) are more important in conversation.
– Directly tracking the user's face or body does not help, as the user resides in a different space from that of the avatar, and so features such as direction of gaze will not map over appropriately. [BodyChat, Vilhjalmsson, H. and Cassell, J., 1998]
Solutions
– Combine verbal and nonverbal behaviour in a single interface (e.g. through text chat)
– Make avatars partly autonomous, and indirectly controlled by users
[BodyChat, Vilhjalmsson, H. and Cassell, J., 1998]
Solutions: Spark
– Avatars share a graphical environment
– The system analyses text-chat input for interactional information
– This information is used to generate behaviour
Solutions: PIAVCA
Pipeline (from the slide diagram): user interaction and a script database feed a motion queue of operators, combined with speech generation, gaze, posture shifts and proxemics as concurrent behaviours to produce the final animation.
Speech movements form multi-modal utterances.
Designing virtual humans
– With perceived realism, believability…
– Inducing realistic/lifelike responses
– Supporting social relationships
GOAL OF VIRTUAL HUMANS
Designing behaviour
Designing behaviour is challenging. A virtual human's behaviour should convey a perceived (and plausible) psychological state:
– Or the near-true internal state of the person being represented
– Dependent on many factors
The design process is often approached in an ad-hoc manner.
– For instance: in social interactions within VEs, the more visually realistic the virtual human, the more naturalistic users expect it to act.
Summary
– Virtual humans are compelling, and necessary to represent social situations.
– Representing people is a complex problem, which should be adapted to the application.
– More realism is not always a good thing.
– Avatars and agents differ in their use of tracking or simulation of behaviour. The design of these should consider many factors, again including the application.
– Hybrid avatar-agents combine tracked data and inferred state.
– We can evaluate the creation of Virtual Humans using objective measures.
End of Part 1: 3ds Max Demo
Technical Aspects of Virtual Characters
– Polygon meshes, rendering
– Skeletal animation, mesh morphing, physical simulation
INTRODUCTION
Graphics
Standard real-time graphics techniques apply here; rendering quality contributes to visual realism.
Modelling
A scanned body results in a huge mesh, which can be rendered at different resolutions (numbers of polygons).
Animation – bones and morphs
Body Animation
The body is animated via an underlying skeleton.
MOTION CAPTURE
Marker-based Capture
Body motion-capture data can be applied to any model, but facial motion capture is more specific to a particular model.
Skeletal Animation
– The character mesh is deformed according to the motion of the skeleton
– The body is treated as rigid bones connected by rotating joints (first approximation)
– Soft tissue such as muscle and fat is covered briefly later
SKELETAL ANIMATION
Typical Skeleton
– Points are joints; lines are rigid links (bones)
– The root defines the global placement (position and rotation offset from the origin)
– The character is animated by rotating joints and moving and rotating the root
Forward Kinematics (FK)
The position of a point on the body (e.g. P2) is computed by concatenating rotations (R0, R1) and offsets (O0, O1, O2) along the skeleton.
Start with the point in the frame of the link above it: translate by the link's offset, then rotate by its joint. Go up to its parent and iterate until you get to the root.
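The FK procedure above can be sketched for a planar chain of rotating joints. The link lengths and joint angles used in the example are illustrative assumptions:

```python
import math

# Sketch of forward kinematics for a planar joint chain: concatenate each
# joint rotation and link offset outwards from the root.
def fk_planar(link_lengths, joint_angles):
    """Return the end-effector (x, y) for a chain of rotating joints."""
    x = y = 0.0
    total_angle = 0.0
    for length, angle in zip(link_lengths, joint_angles):
        total_angle += angle                 # concatenate rotations
        x += length * math.cos(total_angle)  # apply the link offset
        y += length * math.sin(total_angle)
    return x, y

# Two links of length 1; both joints at 0 radians -> end point at (2, 0).
print(fk_planar([1.0, 1.0], [0.0, 0.0]))  # -> (2.0, 0.0)
```

The full 3D case replaces the accumulated angle with a product of rotation matrices, but the structure of the loop is the same.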
– FK is not always convenient: often we want to specify the positions of a character's hands, not the rotations of its joints.
– Inverse kinematics: calculating the required rotations of joints needed to put a hand (or other body part) in a given position.
Inverse Kinematics
– A geometric method (secretly matrices underneath): solve for the joint rotations (R0, R1) that place the end effector at the target position Pt, given the offsets (O1, O2).
Inverse Kinematics
– Ambiguities between multiple solutions are resolved by applying specific constraints
– IK solvers are provided by many animation systems
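One standard iterative IK solver is cyclic coordinate descent (CCD). The lecture does not prescribe a particular method, so this is one illustrative choice, shown for a planar chain:

```python
import math

# Sketch of cyclic coordinate descent (CCD) IK for a planar joint chain:
# repeatedly rotate each joint so the end effector swings towards the target.
def ccd_ik(lengths, angles, target, iterations=50):
    """Adjust joint angles so the end effector approaches the target."""
    angles = list(angles)
    for _ in range(iterations):
        for j in reversed(range(len(angles))):
            # Forward kinematics: positions of every joint and the end.
            pts = [(0.0, 0.0)]
            total = 0.0
            for length, angle in zip(lengths, angles):
                total += angle
                pts.append((pts[-1][0] + length * math.cos(total),
                            pts[-1][1] + length * math.sin(total)))
            end, pivot = pts[-1], pts[j]
            # Rotate joint j to align the end effector with the target.
            to_end = math.atan2(end[1] - pivot[1], end[0] - pivot[0])
            to_target = math.atan2(target[1] - pivot[1], target[0] - pivot[0])
            angles[j] += to_target - to_end
    return angles

# Reach towards (1, 1) with two unit-length links.
solution = ccd_ik([1.0, 1.0], [0.3, 0.3], target=(1.0, 1.0))
```

Constraints (e.g. joint limits) can be enforced by clamping each angle after its update, which is one way the ambiguity between multiple solutions is resolved in practice.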
Representation and Format
– Skeleton structure forms a scene graph
– Scene graph embodies a set of joints
– A mesh overlays the scene graph
– As the skeletal structure moves, the mesh must deform appropriately (otherwise there are holes)
MPEG4 example
http://ligwww.epfl.ch/~maurel/Thesis98.html
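One common way to make the mesh deform with the skeleton is linear blend skinning, sketched here in 2D. The slides do not name a specific deformation method, so this is an illustrative choice with invented example data:

```python
import math

# Sketch of linear blend skinning: each vertex position is a weighted sum
# of the positions produced by its influencing joints' world transforms.
def transform(point, angle, translation):
    """Rotate a 2D point, then translate it (a joint's world transform)."""
    x, y = point
    c, s = math.cos(angle), math.sin(angle)
    return (c * x - s * y + translation[0],
            s * x + c * y + translation[1])

def skin_vertex(rest_pos, influences):
    """Blend the skinned positions from each influencing joint.

    influences: list of (weight, joint_angle, joint_translation);
    weights should sum to 1.
    """
    x = y = 0.0
    for weight, angle, trans in influences:
        px, py = transform(rest_pos, angle, trans)
        x += weight * px
        y += weight * py
    return (x, y)

# A vertex influenced equally by two joints: one identity, one translated.
print(skin_vertex((1.0, 0.0), [(0.5, 0.0, (0.0, 0.0)),
                               (0.5, 0.0, (1.0, 0.0))]))  # -> (1.5, 0.0)
```

Blending transforms per vertex is what closes the holes that would otherwise open at joints if each mesh region followed a single bone rigidly.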
Facial Animation
– The face has no convenient underlying structure like a skeleton
– Faces are animated as meshes of vertices
– But it is impractical to hand-animate individual vertices
MORPH TARGET ANIMATION
Morph Targets
– Each facial expression is represented by a separate mesh
– Each target has the same number of vertices as the original mesh, but with different positions
– Animation blends between these expressions (called Morph Targets)
Take a weighted sum of the morph targets to get the output mesh:

    v_i = Σ_{t ∈ morph_targets} w_t · t_i

where t_i is vertex i of morph target t, w_t is the weight of target t, and v_i is vertex i of the output mesh.
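The weighted sum can be sketched directly. The meshes here are tiny invented vertex lists, not real face data:

```python
# Sketch of morph-target blending: each output vertex is a weighted sum of
# the corresponding vertices of the targets (all meshes share vertex order).
def blend(targets, weights):
    n_verts = len(targets[0])
    out = []
    for i in range(n_verts):
        x = sum(w * mesh[i][0] for mesh, w in zip(targets, weights))
        y = sum(w * mesh[i][1] for mesh, w in zip(targets, weights))
        out.append((x, y))
    return out

neutral = [(0.0, 0.0), (1.0, 0.0)]
smile   = [(0.0, 0.2), (1.0, 0.4)]
# Halfway between neutral and smile (weights sum to 1).
print(blend([neutral, smile], [0.5, 0.5]))  # -> [(0.0, 0.1), (1.0, 0.2)]
```

Keeping the weights summing to 1 (with the neutral mesh as one of the targets) avoids the face drifting away from its rest shape as expressions are mixed.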
Using Morph Targets
– A simple and widely used technique
– Sufficient for most facial animation (though more sophisticated methods go beyond that)
Summary
– Characters are represented as 'skinned' skeletal scene graphs, representing sets of joints that link to the geometry.
– Forward kinematics determines body configuration given joint angles; inverse kinematics determines joint angles from requirements for end-effectors.
– Morph-target deformation is often used for facial animation.
END!