

slide-1
SLIDE 1

FACIAL ANIMATIONS

COMPUTER GRAPHICS SEMINAR PRIIT PALUOJA

slide-2
SLIDE 2

OUTLINE

2

HUMAN ANATOMY GENERAL FRAMEWORK DATA-DRIVEN TECHNIQUES CONCLUSION

slide-3
SLIDE 3

OUTLINE

3

HUMAN ANATOMY GENERAL FRAMEWORK DATA-DRIVEN TECHNIQUES CONCLUSION

slide-4
SLIDE 4

HUMAN ANATOMY

4

slide-5
SLIDE 5

5

Image: Wikipedia

slide-6
SLIDE 6

SKIN [1]

  • 1. Age
  • 2. Sex
  • 3. Race
  • 4. Thickness
  • 5. Environment
  • 6. Disease

6

slide-7
SLIDE 7

SKULL [1]

  • 1. Age
  • 2. Sex
  • 3. Race
  • 4. Geographically distant locations

7

Image: Wikipedia

slide-8
SLIDE 8

MUSCULAR ANATOMY [1]

IMAGE: WIKIPEDIA

8

slide-9
SLIDE 9

9

slide-10
SLIDE 10

VASCULAR SYSTEMS

10

Image: https://www.dummies.com/education/science/anatomy/veins-arteries-and-lymphatics-of-the-face/

slide-11
SLIDE 11

NOT COVERED

  • Eyes
  • Lips
  • Teeth
  • Tongue

11

slide-12
SLIDE 12

OUTLINE

12

HUMAN ANATOMY GENERAL FRAMEWORK DATA-DRIVEN TECHNIQUES CONCLUSION

slide-13
SLIDE 13

GENERAL FRAMEWORK

13

slide-14
SLIDE 14

AIM [1]

14

  • Realistic animation in real time
  • Minimal manual handling
  • Adaptability to any individual's face

slide-15
SLIDE 15

15

Figure: [1]

slide-16
SLIDE 16

INTERPOLATION

  • Addition of a number into the middle of a series [4]
  • Calculated based on the numbers before and after it

[4]

16

slide-17
SLIDE 17

INTERPOLATION IN COMPUTER GRAPHICS

Fill in frames between the key frames [5]

17
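The frame-filling idea above can be sketched in a few lines: a minimal, hypothetical example of linearly interpolating a parameter between two key frames (the function names and values are illustrative, not from [1] or [5]).

```python
# Linear interpolation between two key-frame values; t runs over [0, 1].
def lerp(key0, key1, t):
    return key0 + (key1 - key0) * t

def in_between_frames(key0, key1, n):
    """Fill in n frames between two key frames (endpoints included)."""
    return [lerp(key0, key1, i / (n - 1)) for i in range(n)]
```

The same `lerp` applies per coordinate when the keyed parameter is a vertex position rather than a scalar.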

slide-18
SLIDE 18

18

Figure: [1]

slide-19
SLIDE 19

19

Figure: [1]

slide-20
SLIDE 20

20

Figure: [1]

slide-21
SLIDE 21

21

Figure: [1]

slide-22
SLIDE 22

22

Figure: [1]

slide-23
SLIDE 23

SHAPE INTERPOLATION [1]

  • 1. Interpolation over a normalized time interval
  • 2. Polygonal meshes approximate expressions

23
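Applied to meshes, the two points on this slide combine into a short sketch: blend two expression meshes vertex-by-vertex over a normalized time interval. The vertex correspondence (identical ordering in both meshes) is an assumption of the sketch.

```python
# Shape interpolation between two polygonal expression meshes, each given
# as a list of (x, y, z) vertex tuples with matching vertex order.
def interpolate_mesh(mesh_a, mesh_b, t):
    """Blend meshes over normalized time t: t=0 gives mesh_a, t=1 gives mesh_b."""
    return [
        tuple(a + (b - a) * t for a, b in zip(va, vb))
        for va, vb in zip(mesh_a, mesh_b)
    ]
```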

slide-24
SLIDE 24

PRACTICAL CONSIDERATIONS?

24

slide-25
SLIDE 25

SHAPE INTERPOLATION [1]

  • 1. Problematic in cases that involve scaling or rotation
  • 2. Computationally light
  • 3. Labor-intensive

25

slide-26
SLIDE 26

PARAMETERIZATION [1]

  • Enhancement
  • Facial geometry in parts
  • Facial configurations
  • Not practical in complex models

26

slide-27
SLIDE 27

27

slide-28
SLIDE 28

PARAMETERIZATION [1]

  • Enhancement
  • Facial geometry in parts
  • Facial configurations
  • Not practical in complex models

28

slide-29
SLIDE 29

PARAMETERIZATION [1]

  • Enhancement
  • Facial geometry in parts
  • Facial configurations
  • Not practical in complex models?

29

slide-30
SLIDE 30

30

slide-31
SLIDE 31

CAN WE DO BETTER?

31

slide-32
SLIDE 32

MUSCLE-BASED MODELLING [1]

32

Figure: [1]

slide-33
SLIDE 33

33

Image: en.wikipedia.org/wiki/Spring_(device)#/media/File:Ressort_de_compression.jpg

slide-34
SLIDE 34

MUSCLE-BASED MODELLING [1] (1980)

  • Mass-spring model
  • Connects skin, muscle and bone nodes
  • Spring network connects the 38 regional muscles with action units

34
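A mass-spring model of this kind reduces to Hooke's law plus numerical integration. The sketch below shows one node and one spring in 1-D with explicit Euler stepping; the masses, stiffness and rest length are made-up illustrative values, not parameters from [1].

```python
# Hooke's law for one spring (1-D, node 1 lies to the right of node 0):
# force on node 0 is proportional to how far the spring is stretched.
def spring_force(x0, x1, rest_length, k):
    return k * ((x1 - x0) - rest_length)

# One explicit-Euler step for a single node of mass m under force f.
def step(x, v, f, m, dt):
    v_new = v + (f / m) * dt
    return x + v_new * dt, v_new
```

In a full model, skin, muscle and bone nodes are connected by many such springs, and muscle activation shifts the rest lengths to drive the expression.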

slide-35
SLIDE 35

FACIAL ACTION CODING SYSTEM [6]

  • 1. Allows nearly any anatomically possible facial expression to be coded manually

  • 2. Specific action units (AU) can produce the expression
  • 3. Manual is over 500 pages in length

35
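To make the AU idea concrete, here is a toy lookup from action-unit sets to expressions. The AU combinations are commonly cited EMFACS-style examples; exact sets vary between sources, so treat them as illustrative rather than the FACS manual's definitions.

```python
# Illustrative AU combinations for some basic expressions (EMFACS-style).
EXPRESSION_AUS = {
    "happiness": {6, 12},        # cheek raiser + lip corner puller
    "sadness":   {1, 4, 15},     # inner brow raiser + brow lowerer + lip corner depressor
    "anger":     {4, 5, 7, 23},  # brow lowerer + upper lid raiser + lid tightener + lip tightener
}

def expression_for(active_aus):
    """Return the first expression whose AU set is contained in active_aus."""
    for name, required in EXPRESSION_AUS.items():
        if required <= set(active_aus):
            return name
    return None
```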

slide-36
SLIDE 36

Source: Wikipedia

36

slide-37
SLIDE 37

MUSCLE-BASED MODELLING [1] (1990)

  • 1. Anatomically-based muscle and physically-based tissue model

  • 2. Spring mesh: skin, fatty tissues and muscles

37

slide-38
SLIDE 38

38

slide-39
SLIDE 39

PRACTICAL EXAMPLE

39

slide-40
SLIDE 40

40

slide-41
SLIDE 41

OUTLINE

41

HUMAN ANATOMY GENERAL FRAMEWORK DATA-DRIVEN TECHNIQUES CONCLUSION

slide-42
SLIDE 42

DATA-DRIVEN TECHNIQUES [1]

  • 1. Image-Based Techniques
  • 2. Speech-Driven Techniques
  • 3. Performance-Driven Animation

42

slide-43
SLIDE 43

IMAGE-BASED TECHNIQUES

  • 1. Facial surface and position data are captured from images

  • 2. The depth of the model can be calculated

43

slide-44
SLIDE 44

THE MATRIX RELOADED [2]

44

Image: [2]

slide-45
SLIDE 45

MOTIVATION [2]

  • Create a 3-d recording of the real actor's performance that can be played back from various angles and under different lighting conditions
  • This allows geometry, texture, lighting and movement to be extracted

45

slide-46
SLIDE 46

THE MATRIX RELOADED [2]

  • Array of five synchronized cameras
  • Sony/Panavision HDW-F900 cameras with workstations
  • Images in uncompressed digital format
  • Hard disks at data rates close to 1 GB/sec

46

slide-47
SLIDE 47

THE MATRIX RELOADED [2]

  • 1. Project a vertex of the model into each of the cameras

  • 2. Track the motion of the vertex in 2-d
  • 3. At each frame estimate the 3-d position
  • 4. Measure flow error and propagate

47
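Step 3 above can be sketched with standard linear (DLT) triangulation: given each camera's 3×4 projection matrix and the tracked 2-d position of the vertex, solve a small least-squares system for its 3-d position. The projection matrices here are assumed known from calibration; the actual pipeline in [2] additionally weights and propagates the flow error (step 4).

```python
import numpy as np

def triangulate(projections, points_2d):
    """Estimate a 3-d point from its 2-d tracks in several calibrated cameras.

    projections: list of 3x4 camera projection matrices.
    points_2d: list of (u, v) image positions of the same vertex.
    """
    rows = []
    for P, (u, v) in zip(projections, points_2d):
        # Each view contributes two linear constraints on the homogeneous point.
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    A = np.array(rows)
    _, _, vt = np.linalg.svd(A)      # null-space vector = best homogeneous solution
    X = vt[-1]
    return X[:3] / X[3]              # de-homogenize
```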

slide-48
SLIDE 48

THE MATRIX RELOADED [2]

  • 1. Project a vertex of the model into each of the cameras

  • 2. Track the motion of the vertex in 2-d
  • 3. At each frame estimate the 3-d position
  • 4. Measure flow error and propagate

48

slide-49
SLIDE 49

THE MATRIX RELOADED [2]

  • 1. Project a vertex of the model into each of the cameras

  • 2. Track the motion of the vertex in 2-d
  • 3. At each frame estimate the 3-d position
  • 4. Measure flow error and propagate

49

slide-50
SLIDE 50

THE MATRIX RELOADED [2]

  • 1. Project a vertex of the model into each of the cameras

  • 2. Track the motion of the vertex in 2-d
  • 3. At each frame estimate the 3-d position
  • 4. Measure flow error and propagate

50

slide-51
SLIDE 51

RESULT [2]

Reconstruction of the path of each vertex through 3-d space over time

51

slide-52
SLIDE 52

52

slide-53
SLIDE 53

WHAT IF?

SPEECH

53

slide-54
SLIDE 54

END-TO-END LEARNING FOR 3D FACIAL ANIMATION FROM SPEECH [3]

  • 1. Input: sequence of speech spectrograms
  • 2. Output: facial action unit intensities

54
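The input/output mapping on this slide can be sketched as a tiny forward pass: a spectrogram frame in, per-AU intensities out. Everything here is made up for illustration (layer sizes, random weights, two dense layers); the actual model in [3] is a learned deep network, not this sketch.

```python
import numpy as np

# Assumed dimensions: 128 spectrogram frequency bins, 64 hidden units,
# 17 action units -- illustrative values only.
N_FREQ, N_HIDDEN, N_AUS = 128, 64, 17

rng = np.random.default_rng(0)
W1 = rng.standard_normal((N_HIDDEN, N_FREQ)) * 0.01
W2 = rng.standard_normal((N_AUS, N_HIDDEN)) * 0.01

def predict_aus(spectrogram_frame):
    """Map one spectrogram frame to AU intensities in (0, 1)."""
    h = np.maximum(W1 @ spectrogram_frame, 0.0)   # ReLU hidden layer
    return 1.0 / (1.0 + np.exp(-(W2 @ h)))        # sigmoid output per AU
```

In the real system the weights come from end-to-end training on paired speech and AU annotations, and the input is a sequence of frames rather than a single one.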

slide-55
SLIDE 55

ARTIFICIAL NEURAL NETWORKS

  • Figure: https://en.wikipedia.org/wiki/Artificial_neural_network#/media/File:Colored_neural_network.svg

55

slide-56
SLIDE 56

56

slide-57
SLIDE 57

Image: Wikipedia

57

slide-58
SLIDE 58

58

Figure: [3]

slide-59
SLIDE 59

Figure: [3]

59

slide-60
SLIDE 60

Figure: [3]

60

slide-61
SLIDE 61

Figure: [3]

61

Labels, model outputs, different models

slide-62
SLIDE 62

62

slide-63
SLIDE 63

PERFORMANCE-DRIVEN ANIMATION

Based on motion data

63

slide-64
SLIDE 64

64

slide-65
SLIDE 65

OUTLINE

65

HUMAN ANATOMY GENERAL FRAMEWORK DATA-DRIVEN TECHNIQUES CONCLUSION

slide-66
SLIDE 66

CONCLUSION

66

slide-67
SLIDE 67

67

  • Realistic animation in real time
  • Minimal manual handling
  • Adaptability to any individual's face

slide-68
SLIDE 68

68

bit.ly/vikt4

slide-69
SLIDE 69

69

DEMO: bit.ly/vikt6

slide-70
SLIDE 70

WHICH ACTION UNITS (AU) CORRESPOND TO…

  • 1. … happiness?
  • 2. … sadness?
  • 3. … anger?
  • 4. … fear?

70

slide-71
SLIDE 71

71

DEMO: bit.ly/vikt6

Fig: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3402717/

slide-72
SLIDE 72

SOURCES

  • 1. DOI: 10.7763/IJCTE.2013.V5.770
  • 2. DOI: 10.1145/1198555.1198596
  • 3. DOI: 10.1145/3242969.3243017
  • 4. https://dictionary.cambridge.org/dictionary/english/interpolation
  • 5. https://en.wikipedia.org/wiki/Interpolation_(computer_graphics)
  • 6. https://en.wikipedia.org/wiki/Facial_Action_Coding_System

72

slide-73
SLIDE 73

FACIAL ANIMATIONS

COMPUTER GRAPHICS SEMINAR PRIIT PALUOJA