SLIDE 1

Virtual Actors

Machine emulation of character gesture behaviour as portrayed by human actors

  • By Sri Sri Perangur

Supervised by Dr. Suresh Manandhar

SLIDE 2

Project aims

Objective: To produce automated human gesturing behaviour for personification of humanoids.

  • 1. Develop a structure to identify the semantics expressed in a gesture (see the record sketch after this list)

Example gestures: "OK", "That way"

  • 2. Identify the set of stimuli that influence a person’s gesture style the most:

Psychological study + Social psychology study + Video analysis

  • 3. System development model:

Role theory + Film production

  • 4. Machine learning of gestures:
  • Annotation
  • Machine learning methods: C4.5, Naïve Bayes & S.V.M., with 5- & 10-fold cross-validation
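The "structure" of aim 1 and the annotation of aim 4 meet in one record per annotated utterance. A minimal sketch in Python, with field names that mirror the dialogue-annotation table on slides 7 and 8 rather than the project's actual schema:

```python
from dataclasses import dataclass

@dataclass
class GestureAnnotation:
    """One annotated utterance: dialogue context plus observed gestures.

    Field names are illustrative, mirroring the dialogue-annotation
    table on slides 7 and 8; they are not the project's real schema.
    """
    dialogue_act: str      # e.g. "Question", "State"
    emotion: str           # e.g. "Annoyed", "Happy", "None"
    f_gesture_type: str    # facial gesture type, e.g. "Emblem" or "None"
    head_movement: str     # e.g. "Forward" or "None"
    eyebrow_movement: str  # FACS-style action unit, e.g. "AU3", or "None"
    b_gesture_type: str    # body gesture type, or "None"

# One row of the annotation table as a record:
example = GestureAnnotation(
    dialogue_act="Question", emotion="Annoyed", f_gesture_type="Emblem",
    head_movement="None", eyebrow_movement="AU3", b_gesture_type="None")
```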
SLIDE 3

Why Gestures? Definition, use, importance

  • Gestures are a person’s memories and thoughts rendered visible.

Hand and Mind [McNeill, 2009]
Gesture and Thought [McNeill, 2006]
The Face [Ekman, 2003]
Human-human nonverbal communication [Ekman & Friesen, 1991]

SLIDE 4

Gesture interpretation

Influencing factors:

  • Long-term factors: cultural use, regional use, …
  • Immediate factors: recent history of events, relationship between speaker and audience, environment, …

Classification difficulties arise due to:

  • Multiple classifications of a gesture
  • Overlapping of gestures, …

Thus gesture interpretation can be very ambiguous without context and a history of context.

SLIDE 5

Why Virtual ‘Actors’?

  • Emulation of human emotion and thought, not replication
  • Emulation of natural environmental stimuli = film set
  • Expression emulation: no feeling, just acting = film acting

Role theory: humans hold social positions + humans play a ‘role’

SLIDE 6

Annotation: Dialogue annotation

SLIDE 7

Annotation: Dialogue annotation

Gesture annotation:

| Dialogue Act | Emotion | F_Gesture Type | F_Gesture Binary | Head Movement | Head Binary | Eyebrow Movement | Eyebrow Binary | Eye Movement | Eye Binary | Lip Movement | Lip Binary | B_Gesture Type | B_Gesture Binary | Body Gesture | Body Part |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Question | Annoyed | Emblem | Y | None | N | AU3 | Y | None | N | None | N | None | N | None | None |
| State | Annoyed | None | N | Forward | Y | None | N | None | N | None | N | None | N | None | None |

Sentence-context annotation:

| Sentence | Self-conscious | S_Positive | S_Negative | Public | Private | Friend | Colleague | Acquaintance | Stranger |
|---|---|---|---|---|---|---|---|---|---|
| Is it all gone Kit? | N | N | Y | Y | N | N | Y | N | N |
| Hey, let's go. | N | N | N | Y | N | N | Y | N | N |
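To learn from rows like these, each categorical cell must become a numeric feature. A minimal sketch, assuming scikit-learn purely for illustration (the deck itself points at Weka's J48, so this is a stand-in, not the project's toolchain):

```python
from sklearn.feature_extraction import DictVectorizer

# Context features per utterance (inputs) and one annotation column to
# predict (target), here F_Gesture Type. All values are illustrative.
rows = [
    {"dialogue_act": "Question", "emotion": "Annoyed", "setting": "Private"},
    {"dialogue_act": "State",    "emotion": "Annoyed", "setting": "Private"},
]
targets = ["Emblem", "None"]

# One-hot encode the categorical features into a numeric matrix.
vec = DictVectorizer(sparse=False)
X = vec.fit_transform(rows)
print(vec.get_feature_names_out())
# e.g. ['dialogue_act=Question', 'dialogue_act=State', 'emotion=Annoyed', ...]
```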

SLIDE 8

Annotation: Dialogue annotation


SLIDE 9

Annotation: Gesture semantics model

SLIDE 10

Annotation: Overview

Basic setup: Environmental influencing factors + Emotion + Dialogue act

Movements: Facial gesture type + Facial movements + Body gesture type + Body movements

Emotion distribution in the annotated data: Embarrassed (0.28%), Impressed (0.28%), Instruct (0.28%), Secretive (0.28%), Amused (0.57%), Disappointment (0.57%), Sympathy (0.57%), Excitement (1.71%), Please (1.71%), Vengeful (1.99%), Confused (2.28%), Surprised (2.56%), Angry (2.85%), Arrogant (2.85%), Thinking (3.13%), Joke (3.42%), Concern (4.56%), Annoyed (6.27%), Sad (9.12%), Interest (11.68%), Happy (21.08%), None (21.37%)
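Distributions like this are worth computing before training, since "None" is already the largest single class and any classifier must beat it as a baseline. A minimal, illustrative way to produce such percentages from a list of emotion labels (toy data, not the project's corpus):

```python
from collections import Counter

emotions = ["None", "Happy", "Interest", "Happy", "None", "Sad"]  # toy labels
counts = Counter(emotions)
total = sum(counts.values())
for label, n in counts.most_common():
    print(f"{label}: {n / total:.2%}")
```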

SLIDE 11

Machine learning stage

[Chart: Face gesture (Y/N)]

SLIDE 12

Machine learning stage

[Chart: Face gesture type (classes)]

SLIDE 13

Machine learning stage

[Chart: Head movement (Y/N) …]

SLIDE 14

Machine learning stage

[Chart: Head movement (classes) …]

Machine learning methods implemented (a cross-validation sketch follows this list):

  • Naïve Bayes
  • Support Vector Machine (S.V.M.)
  • 5-fold method, i.e. training on 80% of the data, testing on 20%
  • 10-fold method, i.e. training on 90% of the data, testing on 10%
  • C4.5 algorithm (known as J48 in its Java implementation)
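As a hedged sketch of the evaluation described above, the snippet below runs 5-fold and 10-fold cross-validation over the three classifier families. It uses scikit-learn stand-ins: DecisionTreeClassifier is CART rather than C4.5 (the faithful C4.5/J48 lives in Weka), and the data here is random, not the annotated corpus:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier  # CART, a stand-in for C4.5/J48
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))      # toy feature matrix
y = rng.integers(0, 2, size=200)    # toy binary target, e.g. Head Binary (Y/N)

models = {
    "C4.5-like tree": DecisionTreeClassifier(random_state=0),
    "Naive Bayes": GaussianNB(),
    "SVM": SVC(),
}
# 5-fold: each split trains on 80% of the data and tests on 20%;
# 10-fold: each split trains on 90% and tests on 10%.
for folds in (5, 10):
    for name, model in models.items():
        scores = cross_val_score(model, X, y, cv=folds)
        print(f"{folds}-fold {name}: mean accuracy {scores.mean():.2%}")
```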

SLIDE 15

Predicting gesture
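Once one classifier has been trained per output column (facial gesture type, head movement, and so on), predicting a gesture for an unseen utterance is a vectorise-and-predict step. A self-contained, illustrative sketch on toy data, with the same scikit-learn stand-ins as before:

```python
from sklearn.feature_extraction import DictVectorizer
from sklearn.tree import DecisionTreeClassifier

# Toy training rows: context features and the F_Gesture Type target.
rows = [
    {"dialogue_act": "Question", "emotion": "Annoyed"},
    {"dialogue_act": "State",    "emotion": "Annoyed"},
    {"dialogue_act": "Question", "emotion": "Happy"},
]
gestures = ["Emblem", "None", "Emblem"]

vec = DictVectorizer()
model = DecisionTreeClassifier(random_state=0)
model.fit(vec.fit_transform(rows), gestures)

# Predict the facial gesture type for a new utterance.
new_utterance = {"dialogue_act": "Question", "emotion": "Sad"}
print(model.predict(vec.transform([new_utterance]))[0])  # e.g. "Emblem"
```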

SLIDE 16

Prediction accuracy: Facial expressions

Facial gesture prediction accuracy

[Bar chart; recovered values, likely grouped per task in the order J48 / Naïve Bayes / S.V.M.: 89.24% / 88.67% / 90.08%, 49.01% / 49.01% / 50.99%, 39.38% / 39.38% / 39.66%, 85.55% / 75.92% / 85.55%]

SLIDE 17

Body gesture predictions

Accuracy of body movement predictions:
[Bar chart: Body Gesture J48 69.97%, Body Gesture Naïve Bayes 69.69%, Body Gesture SVM 70.54%]

Distribution of values between None and Non-None classes:
[Bar chart over Initial, Body J48, Body Naïve Bayes, Body SVM; recovered counts: 247, 353, 315, 350 (None) and 106, 38, 3 (Non-None)]
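Given how skewed the None/Non-None split is, the ~70% accuracies above are best read against the majority-class baseline. Taking the initial split as 247 None versus 106 Non-None (one reading of the recovered counts), the baseline works out to almost exactly the reported J48 accuracy:

```python
# Majority-class baseline: always predict "None".
none_count, non_none_count = 247, 106  # initial split read from the chart
total = none_count + non_none_count    # 353 annotated instances
baseline = none_count / total
print(f"Majority-class baseline: {baseline:.2%}")  # 69.97%
```

A classifier that answers "None" every time already scores about 69.97%, which is why the distribution of predictions between None and Non-None matters as much as raw accuracy here.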

SLIDE 18

Conclusion & Future work

Machine learning of human gesture behaviour is possible!

Future work:

  • Mapping gestures to the word semantics within a sentence
  • Predicting accurate gesture timing and duration relative to the speech rate
  • Gesture science + machine learning + Kinect + Stanford Parser → a step closer to humanoid personification

SLIDE 19

Any Questions?

Contact: Sri Sri Perangur at srisri.perangur@gmail.com or sp574@york.ac.uk or sp574@cs.york.ac.uk

A few of the research materials used:

[1] D. McNeill, Hand and Mind: What Gestures Reveal about Thought. The University of Chicago Press, 2009, p. 11.
[2] K. R. Gibson and T. Ingold, Tools, Language and Cognition in Human Evolution. Cambridge University Press, 1993, p. 483.
[6] Dr. S. Manandhar, “AMADEUS: Slide1.” [Online]. Available: http://www.cs.york.ac.uk/amadeus/projects/uda/slide01.html.
[7] K. M. Knutson, E. M. McClellan, and J. Grafman, “Observing social gestures: an fMRI study,” Experimental Brain Research, vol. 188, no. 2, pp. 187-198, Jun. 2008.
[8] R. B. Zajonc, “Feeling and thinking: Preferences need no inferences.”
[9] R. W. Picard, “Affective computing for HCI,” in Proceedings of HCI International, 1999.
[10] Xadhoom, “Facial expressions.” [Online]. Available: http://xadhoom.deviantart.com/art/3D-course-facial-expressions-3011857.
[11] M. Montaner, B. López, and J. L. de la Rosa, “Developing trust in recommender agents,” in Proceedings of the First International Joint Conference on Autonomous Agents and Multiagent Systems, Part 1 - AAMAS ’02, 2002, p. 304.
[13] V. Trehan, “Gesture Mark-up and an Analysis of Spontaneous Gestures,” Artificial Intelligence, vol. 802, 2003.
[14] J. M. Pim, “Modelling and prediction of spontaneous gestures,” 2006.
[15] V. P. Richmond, J. C. McCroskey, and S. K. Payne, Nonverbal Behaviour in Interpersonal Relations. Englewood Cliffs, NJ, 1987.
[16] M. L. Knapp, Nonverbal Communication in Human Interaction. New York: Holt, Rinehart and Winston, 1972.
[17] L. A. Malandro et al., Nonverbal Communication. Reading, U.K.: Addison-Wesley, 1983.
[18] D. McNeill, Gesture and Thought. Continuum, 2006.