

SLIDE 1

ForgetMeNot: Memory-Aware Forensic Facial Sketch Matching

Authors: Ouyang, Hospedales, Song, Li Slides by Josh Kelle

SLIDE 2

Overview

  • VIPSL dataset
  • experiment goals
  • experiment results
  • conclusion

SLIDE 3

VIPSL Dataset

  • Photographs of 200 faces with neutral expression
  • Each photo was sketched by 5 different artists

[Figure: example sketches of the same face by artists A–E]

SLIDE 4

Artist Style

[Figure: example sketches by Artist A and Artist B, side by side]

SLIDE 5

Goal: re-sketch in a different style

[Diagram: input sketch from artist A → Gaussian Process → output sketch in the style of artist B]

SLIDE 6
HOG representation

[Diagram: input sketch → input HOG features → Gaussian Process → output HOG features → invert HOG → output sketch]
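The HOG step of this pipeline can be computed with scikit-image. The image size and HOG parameters below are illustrative assumptions, chosen only so that the descriptor comes out 2560-dimensional to match the next slide; the deck does not state the actual settings:

```python
import numpy as np
from skimage.feature import hog

# Illustrative parameters only; the deck doesn't give the exact HOG setup.
sketch = np.random.rand(72, 72)  # stand-in for a grayscale sketch image

features = hog(
    sketch,
    orientations=10,          # gradient orientation bins
    pixels_per_cell=(8, 8),   # 9x9 grid of cells on a 72x72 image
    cells_per_block=(2, 2),   # overlapping 2x2-cell blocks, 8x8 of them
)
print(features.shape)  # (2560,) = 8*8 blocks * 2*2 cells * 10 orientations
```

Inverting HOG features back to an image (the last arrow in the diagram) requires a separate HOG-inversion method and is not part of scikit-image's API.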

SLIDE 7

Training the GP

  • Treat each HOG image as a vector in ℝ²⁵⁶⁰.
  • Use PCA to reduce this to ℝ¹⁵⁰, although this didn’t produce a noticeable improvement.
  • GP: ℝ¹⁵⁰ → ℝ¹⁵⁰
  • Then convert GP output back to ℝ²⁵⁶⁰ HOG space.

[Figure: training pairs X = { sketches from artist A }, Y = { corresponding sketches from artist B }]
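This training recipe can be sketched with scikit-learn. The synthetic data, sample count, kernel, and noise level here are stand-in assumptions; only the dimensions (ℝ²⁵⁶⁰ → ℝ¹⁵⁰ → GP → ℝ¹⁵⁰ → ℝ²⁵⁶⁰) come from the slide:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)
n_train, hog_dim, pca_dim = 160, 2560, 150

# Synthetic stand-ins for paired HOG vectors: artist A inputs, artist B targets.
X_hog = rng.normal(size=(n_train, hog_dim))
Y_hog = 0.8 * X_hog + 0.1 * rng.normal(size=(n_train, hog_dim))

# Separate PCA bases for the input and output HOG spaces: R^2560 -> R^150.
pca_x = PCA(n_components=pca_dim).fit(X_hog)
pca_y = PCA(n_components=pca_dim).fit(Y_hog)

# GP regression R^150 -> R^150; the kernel and noise level are assumptions.
gp = GaussianProcessRegressor(kernel=RBF(length_scale=50.0), alpha=1e-2)
gp.fit(pca_x.transform(X_hog), pca_y.transform(Y_hog))

# Predict for unseen artist-A sketches, then map back to R^2560 HOG space.
X_new = rng.normal(size=(5, hog_dim))
Y_pred_hog = pca_y.inverse_transform(gp.predict(pca_x.transform(X_new)))
print(Y_pred_hog.shape)  # (5, 2560)
```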

SLIDE 8

Results for A→B model

[Figure panels: input, GP prediction, ground truth]

  • The prediction’s gradients look less sharp, which is good.
  • I was surprised to see more gradients around the outside of the head.

SLIDE 9

Results for A→B model

  • It looks like the GP is smoothing too much.
  • Hypothesis: the GP is putting too much emphasis on the mean face.

[Figure panels: input, GP prediction, ground truth]

SLIDE 10

Reverse direction: B to A

[Figure: GP_AB (A→B) and GP_BA (B→A) predictions; A has more gradient activity than B]

SLIDE 11

Quantifying Style Similarity

  • Measure similarity of sketch style by L2 distance in HOG space:

    err(A, B) = (1/N) Σᵢ ‖xᵢ(A) − xᵢ(B)‖₂

    where xᵢ(A) is the HOG representation of the i-th sketch from artist A
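Assuming the metric is the mean per-sketch L2 distance between HOG vectors, a minimal numpy version looks like this (the function name is hypothetical):

```python
import numpy as np

def style_error(hog_a, hog_b):
    """Mean L2 distance between two sets of HOG vectors.

    hog_a, hog_b: arrays of shape (n_sketches, hog_dim), where row i of each
    array describes the i-th sketch (e.g. artist A vs artist B, or a GP
    prediction vs its ground truth).
    """
    return float(np.mean(np.linalg.norm(hog_a - hog_b, axis=1)))

# Toy check: identical sets have zero error.
a = np.ones((10, 2560))
print(style_error(a, a))  # 0.0
```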

SLIDE 12

Quantifying Style Similarity

[Figure: lowest A→B error test case (err = 91) and highest A→B error test case (err = 176); panels: A, B, A→B prediction]

SLIDE 13

Which artists have similar style?

  • For each pair of artists X→Y, measure average prediction error.

        A       B       C       D       E
  A     –       129.52  119.99  119.27  125.82
  B     129.52  –       120.80  121.32  122.95
  C     119.99  120.80  –       114.05  121.02
  D     119.27  121.32  114.05  –       104.03
  E     125.82  122.95  121.02  104.03  –

D and E are most similar (104.03); A and B are most different (129.52)
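Those extremes can be read off the matrix programmatically; the values below are copied from the table, with NaN on the diagonal so an artist is never paired with itself:

```python
import numpy as np

artists = list("ABCDE")
D = np.array([
    [np.nan, 129.52, 119.99, 119.27, 125.82],
    [129.52, np.nan, 120.80, 121.32, 122.95],
    [119.99, 120.80, np.nan, 114.05, 121.02],
    [119.27, 121.32, 114.05, np.nan, 104.03],
    [125.82, 122.95, 121.02, 104.03, np.nan],
])

# nanargmin/nanargmax skip the NaN diagonal and index the flattened matrix.
i, j = np.unravel_index(np.nanargmin(D), D.shape)
most_similar = (artists[i], artists[j])
i, j = np.unravel_index(np.nanargmax(D), D.shape)
most_different = (artists[i], artists[j])
print(most_similar, most_different)  # ('D', 'E') ('A', 'B')
```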

SLIDE 14

Which artists have similar style?

        A       B       C       D       E
  A     –       129.52  119.99  119.27  125.82
  B     129.52  –       120.80  121.32  122.95
  C     119.99  120.80  –       114.05  121.02
  D     119.27  121.32  114.05  –       104.03
  E     125.82  122.95  121.02  104.03  –

SLIDE 15

Chaining

[Diagram: input sketch from artist A → GP_AB → reconstructed sketch in the style of artist B → GP_BC → reconstructed sketch in the style of artist C → …]

SLIDE 16

Chaining

  • Does chaining reduce error?
  • Average E→C error is 121.
  • avg_err(E→D) = 104, avg_err(D→C) = 114
  • Compare error between direct E→C vs the E→D→C chain.

[Plot: per-test-case error over the test set index, marking where chaining improved and where it did not, with the average]
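The direct-vs-chained comparison can be sketched by composing two fitted models. Ridge regressors and synthetic data stand in for the deck's GP models here; only the structure (fit E→C, E→D, D→C, then chain) follows the slide:

```python
import numpy as np
from sklearn.linear_model import Ridge  # stand-in for the GP regressors

rng = np.random.default_rng(1)
n, d = 120, 150  # PCA-reduced dimension, as on the training slide

# Synthetic paired data for artists E, D, C.
E = rng.normal(size=(n, d))
D = 0.9 * E + 0.05 * rng.normal(size=(n, d))
C = 0.9 * D + 0.05 * rng.normal(size=(n, d))

train, test = slice(0, 100), slice(100, n)
gp_EC = Ridge().fit(E[train], C[train])  # direct E -> C
gp_ED = Ridge().fit(E[train], D[train])  # E -> D
gp_DC = Ridge().fit(D[train], C[train])  # D -> C

def avg_err(pred, target):
    # Mean per-sketch L2 distance, as in the style-similarity slide.
    return float(np.mean(np.linalg.norm(pred - target, axis=1)))

direct = avg_err(gp_EC.predict(E[test]), C[test])
chained = avg_err(gp_DC.predict(gp_ED.predict(E[test])), C[test])
print(direct, chained)
```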


SLIDE 18

Chaining

(best and worst case example)

[Figure: E, D, and C sketches for the test case where chaining improved the most and the one where it improved the least]

SLIDE 19

Chaining

(best test case example)

[Figure panels: E→D→C chained prediction vs direct E→C prediction]

SLIDE 20

Chaining

(worst test case example)

[Figure panels: E→D→C chained prediction vs direct E→C prediction]

SLIDE 21

Chaining

  • Differences are too slight to see in the HOG images.
  • Errors are around 100 and error differences around 3, so even the most extreme gains and losses are only about 3%.
  • I’m not convinced chaining significantly improves results.

SLIDE 22

Conclusions

  • Gaussian Processes can be used to learn the relation between sketch images.
  • It’s not perfect. More data or a different feature space may help.
  • The authors’ use of multi-task learning helped alleviate the problem of small data.