Emotion Recognition Based on Signal Processing - SHREEKANT MARWADI (PowerPoint PPT presentation)



SLIDE 1

Emotion Recognition Based on Signal Processing

SHREEKANT MARWADI

SLIDE 2

Why Emotion Recognition in HCI?

1

Natural way of interaction for humans

2

Computers will understand human input more precisely and respond accordingly

3

Ease interaction between human and computers

SLIDE 3

How Do We Recognise Emotions?

  • P. Ekman’s 6 basic emotions (universal)

✓ Emotions from the speaker’s tone
✓ Emotions from facial expressions
✓ Emotions from body gestures

SLIDE 4

Differentiating Emotions

Acted vs. Spontaneous

Emotion expression depends on:

➢ Gender
➢ Age group
➢ Cultural diversity

These factors are why recognizing emotions is difficult for a computer.

SLIDE 5

Emotion Recognition Models

Uni-modal:
➢ Speech emotion recognition
➢ Facial emotion recognition
➢ Body gesture recognition

Bi-modal

Multi-modal

User-dependent vs. user-independent

SLIDE 6

Speech Emotion Recognition

Verbal communication carries 45% of the emotion information:
➢ Voice intonation: 38%
➢ Actual words: 7%

Availability of sufficient datasets is a major concern.

SLIDE 7

Speech Emotion Recognition

Tree diagram for types of Features:
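The tree diagram itself did not survive extraction, but the acoustic branch of such feature trees typically covers measures like frame energy, zero-crossing rate and pitch. A minimal pure-NumPy sketch of those three classic acoustic features, where the 220 Hz synthetic tone stands in for real speech and the frame length / hop size are illustrative choices, not values from the slides:

```python
import numpy as np

def frame_signal(x, frame_len, hop):
    """Slice a 1-D signal into overlapping frames."""
    n = 1 + (len(x) - frame_len) // hop
    return np.stack([x[i * hop : i * hop + frame_len] for i in range(n)])

def short_time_energy(frames):
    """Mean squared amplitude per frame."""
    return (frames ** 2).mean(axis=1)

def zero_crossing_rate(frames):
    """Fraction of adjacent-sample sign changes per frame."""
    return (np.abs(np.diff(np.sign(frames), axis=1)) > 0).mean(axis=1)

def pitch_autocorr(frame, sr, fmin=60, fmax=400):
    """Crude pitch estimate: autocorrelation peak within a plausible lag range."""
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = sr // fmax, sr // fmin
    lag = lo + np.argmax(ac[lo:hi])
    return sr / lag

sr = 16000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 220 * t)      # 220 Hz synthetic "voice"
frames = frame_signal(tone, 1024, 512)
f0 = pitch_autocorr(frames[0], sr)      # ≈ 220 Hz (quantized to an integer lag)
```

Real systems would add spectral features (e.g. MFCCs) on top of these, but the framing-then-per-frame-statistic pattern is the same.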

SLIDE 8

Acoustic Analysis vs. Combining Acoustic with Linguistic Analysis

➢ Recognition accuracy: 74.2% (acoustic only)
➢ Recognition accuracy: 59.6% (linguistic only)
➢ Recognition accuracy: 92% (combination)
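The gain from combining the two streams can be sketched as a simple score-level (late) fusion: each classifier emits a per-class probability vector, and the fused score is their weighted average. Everything below, the emotion set, the posteriors and the 0.6 weight, is hypothetical and only illustrates the mechanism:

```python
import numpy as np

EMOTIONS = ["anger", "joy", "neutral", "sadness"]

def fuse_scores(p_acoustic, p_linguistic, w=0.6):
    """Weighted late fusion of two per-class probability vectors
    (assumed outputs of separately trained acoustic and linguistic
    classifiers), renormalized to sum to 1."""
    p = w * np.asarray(p_acoustic) + (1 - w) * np.asarray(p_linguistic)
    return p / p.sum()

# Hypothetical posteriors for one utterance:
p_ac  = [0.50, 0.20, 0.20, 0.10]   # acoustic model leans "anger"
p_lng = [0.40, 0.10, 0.15, 0.35]   # linguistic model also favours "anger"
fused = fuse_scores(p_ac, p_lng)
label = EMOTIONS[int(np.argmax(fused))]   # "anger"
```

The weight would normally be tuned on held-out data; when both streams agree, as here, fusion simply reinforces the shared decision, and when they disagree it arbitrates between them.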

SLIDE 9

Applications

➢ Smart call centres
➢ Sorting voicemail
➢ Lie detection
➢ Improving intelligent assistants such as Siri and Google Now, etc.

✓ Enables natural interaction with the computer by speaking instead of using traditional input devices, so the machine understands not only the verbal content but also the emotion behind it.

SLIDE 10

Facial Emotion Recognition

➢ Contains the major share of emotion information
➢ Sufficient datasets are available

SLIDE 11

Applications

➢ Intelligent online tutoring systems
➢ Detecting driver emotions
➢ Smart computer/mobile interfaces, etc.

SLIDE 12

Multi-modal Emotion Recognition

  • L. Kessous’ multimodal emotion recognition:

➢ Acoustic analysis for speech emotion recognition
➢ Best-probability approach for decision-level fusion
➢ Overall system performance improved
➢ No universal dataset available
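The "best probability approach" for decision-level fusion can be read as: let each modality classify independently, then keep the decision of the modality that is most confident. A small sketch under that reading, where the modality names and posterior values are made up for illustration:

```python
import numpy as np

EMOTIONS = ["anger", "joy", "neutral", "sadness"]

def best_probability_fusion(modality_posteriors):
    """Decision-level fusion: pick the modality whose top class has the
    highest posterior, and output that modality's decision."""
    name, probs = max(modality_posteriors.items(), key=lambda kv: max(kv[1]))
    return name, EMOTIONS[int(np.argmax(probs))]

# Hypothetical per-modality posteriors for one observation:
posteriors = {
    "speech":  [0.35, 0.30, 0.20, 0.15],
    "face":    [0.10, 0.70, 0.10, 0.10],  # most confident modality
    "gesture": [0.25, 0.40, 0.20, 0.15],
}
modality, label = best_probability_fusion(posteriors)   # "face", "joy"
```

Unlike feature-level fusion, this keeps the per-modality classifiers independent, which is convenient when, as the slide notes, no universal multimodal dataset is available to train a joint model.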

SLIDE 13

Overall Performance Comparison of Uni-modal, Bi-modal and Multi-modal Systems

Percentage of instances correctly classified in L. Kessous’ experiment:

➢ Speech emotion: 57.1%
➢ Facial emotion: 48.3%
➢ Body gesture: 67.1%
➢ Bi-modal combinations: 62.5%, 75%, 65%
➢ Multi-modal: 78.3%

SLIDE 14

Current Technologies

Affectiva

An artificial intelligence startup that can read your mind. It predicts attitudes and actions based on facial expressions, and is used by advertisers to monitor and assess customer reactions to their ads and products. Affectiva developed a way for computers to recognize human emotions from facial cues: its technology lets applications use a webcam to track a user’s smirks, smiles, frowns and furrows, which measure the user’s levels of surprise, amusement or confusion.

SLIDE 15

Emovu Driver Monitor System (Eyeris)

Feeling sad, angry? Your future car will know.

It determines whether the driver is angry, sad, happy, surprised, fearful, disgusted or expressing no emotion.

Some of the features of Emovu DMS:
➢ Detecting a fear reaction when the brakes are applied
➢ Detecting sleepiness while driving
➢ Pre-crash actions, such as tightening seat belts or preparing braking
➢ Correlating driver emotions to particular locations

An autonomous car of the future could actually take over the driving if it felt its human wasn't up to the task.

SLIDE 16

References:

I. P. Ekman, “Universals and cultural differences in facial expressions of emotion,” in Proc. Nebraska Symp. Motivation, vol. 19, pp. 207–283, 1971.
II. S. Ramakrishnan, “Speech emotion recognition approaches in human computer interaction,” Springer Science+Business Media, LLC, 2 September 2011.
III. L. Kessous, G. Castellano, and G. Caridakis, “Multimodal emotion recognition in speech-based interaction using facial expression, body gesture, and acoustic analysis,” Journal on Multimodal User Interfaces, vol. 3, pp. 33–48, 2009.
IV. https://en.wikipedia.org/wiki/Emotion_recognition