slide-1
SLIDE 1

Texture Synthesis

Daniel Cohen-Or

slide-2
SLIDE 2

Texture Weathering

slide-3
SLIDE 3

(image: texture weathering example)

The slides are based on Efros and Freeman, “Image Quilting for Texture Synthesis and Transfer”.

slide-4
SLIDE 4


slide-5
SLIDE 5


slide-6
SLIDE 6


slide-7
SLIDE 7

The Goal of Texture Synthesis

  • Given a finite sample (large enough) of some texture, the goal is to synthesize other samples from that same texture.

(figure: input image → SYNTHESIS → generated image, approximating the true (infinite) texture)

slide-8
SLIDE 8
slide-9
SLIDE 9

The Challenge

(examples: repeated, stochastic, both)

Need to model the whole spectrum: from repeated to stochastic texture.

slide-10
SLIDE 10

Texture Types

slide-11
SLIDE 11

Texture model

  • Stationary: under a proper window size, the observable portion always appears similar.

  • Local: each pixel is predictable from a small set of neighboring pixels and independent of the rest of the image.

slide-12
SLIDE 12

Non-Stationary

slide-13
SLIDE 13

Non-Stationary

slide-14
SLIDE 14

Texture Synthesis for Graphics

  • Inspired by Texture Analysis and Psychophysics

– [Heeger & Bergen,’95] – [DeBonet,’97] – [Portilla & Simoncelli,’98]

  • …but didn’t work well for structured textures

– [Efros & Leung,’99]

  • (originally proposed by [Garber,’81])
slide-15
SLIDE 15
“By Example” Texture Synthesis

  • Input texture example.
  • Input patch boundary.
  • Fill boundary with texture.

slide-16
SLIDE 16

Texture Synthesis by Non-Parametric Sampling

  • Generate English-looking text using N-grams,

[Shannon,’48]

  • Assuming a Markov chain on letters:

– P( letter | preceding N−1 letters )

E N C Y C L O P E D I ?

slide-17
SLIDE 17

Synthesizing English-looking text

  • [Shannon,’48] proposed a way to generate English-looking text using N-grams:

– Assume a generalized Markov model
– Use a large text to compute prob. distributions of each letter given N−1 previous letters
– Starting from a seed, repeatedly sample this Markov chain to generate new letters
– Also works for whole words

WE NEED TO EAT CAKE
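The N-gram scheme above is easy to sketch. This is an illustrative character-level version; the tiny corpus and seed string are made up for the demo:

```python
import random
from collections import defaultdict

def build_ngram_model(text, n):
    """Map each (n-1)-letter context to the letters observed after it."""
    model = defaultdict(list)
    for i in range(len(text) - n + 1):
        context, nxt = text[i:i + n - 1], text[i + n - 1]
        model[context].append(nxt)
    return model

def generate(model, seed, length):
    """Starting from a seed, repeatedly sample the Markov chain."""
    out = list(seed)
    context_len = len(seed)
    for _ in range(length):
        context = "".join(out[-context_len:])
        choices = model.get(context)
        if not choices:          # dead end: this context never occurred
            break
        out.append(random.choice(choices))
    return "".join(out)

corpus = "WE NEED TO EAT CAKE AND WE NEED TO EAT WELL "
model = build_ngram_model(corpus, n=3)
print(generate(model, seed="WE", length=40))
```

Sampling `random.choice` from the raw list of observed successors reproduces the empirical conditional distribution without ever building an explicit probability table.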

slide-18
SLIDE 18

Unit of Synthesis

  • Letter-by-letter: Used to name planets in

early 80s game “Elite”.

  • Word-by-word: M.V. Shaney (Bell Labs)

using alt.singles corpus.

– “As I've commented before, really relating to someone involves standing next to impossible.”
– “One morning I shot an elephant in my arms and kissed him.”
– “I spent an interesting evening recently with a grain of salt.”

slide-19
SLIDE 19

Mark V. Shaney (Bell Labs)


Notice how well local structure is preserved!

Now, instead of letters let’s try pixels…

slide-20
SLIDE 20

Efros & Leung 99*

* A.A. Efros, T.K. Leung; “Texture synthesis by non-parametric sampling”; ICCV 1999. (originally proposed by [Garber,’81])

Non-parametric sampling: assuming the Markov property, compute P( p | N(p) ).

slide-21
SLIDE 21

Non-parametric Sampling

  • P( p | N(p) )

– Explicit probability tables infeasible.
– Instead, search the input image for similar neighbourhoods – that’s our histogram for p.

slide-22
SLIDE 22

Sample Output

?

Efros & Leung 99 - Algorithm

  • Causal neighborhood – neighboring pixels with known values.

slide-23
SLIDE 23

Efros & Leung ’99

To synthesize p, just pick one match at random.

(figure: input image; non-parametric sampling; synthesizing a pixel p)
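The per-pixel step just described can be sketched as follows. This is a grayscale toy version: the window scan, masked SSD, and the "keep all matches close to the best" threshold follow the spirit of Efros & Leung, but details such as the Gaussian weighting of the neighborhood are omitted:

```python
import numpy as np

rng = np.random.default_rng(0)

def best_matches(texture, neighborhood, mask, eps=0.1):
    """Scan the input texture for windows similar to the partially known
    neighborhood; `mask` marks which neighborhood pixels are already known."""
    w = neighborhood.shape[0]
    h, width = texture.shape
    scored = []
    for y in range(h - w + 1):
        for x in range(width - w + 1):
            patch = texture[y:y + w, x:x + w]
            d = np.sum(((patch - neighborhood) ** 2) * mask)
            scored.append((d, patch[w // 2, w // 2]))
    dmin = min(d for d, _ in scored)
    # all candidates within (1 + eps) of the best match form our
    # empirical histogram for P( p | N(p) )
    return [center for d, center in scored if d <= dmin * (1 + eps) + 1e-12]

def synthesize_pixel(texture, neighborhood, mask, eps=0.1):
    """Fill one pixel: pick one of the good matches at random."""
    return rng.choice(best_matches(texture, neighborhood, mask, eps))
```

In the full algorithm this step runs once per unknown pixel, growing outward from the seed region, which is exactly why the method is so slow.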

slide-24
SLIDE 24

Efros & Leung ’99

  • The algorithm

– Very simple
– Surprisingly good results
– Synthesis is easier than analysis!
– … but very slow

  • Optimizations and Improvements

– [Wei & Levoy,’00] (based on [Popat & Picard,’93])
– [Harrison,’01]
– [Ashikhmin,’01]
– PatchMatch [Barnes et al. 2009]

slide-25
SLIDE 25

Chaos Mosaic [Xu, Guo & Shum, ’00]

  • Process: 1) tile input image; 2) pick random blocks and place them in random locations; 3) smooth edges

(figure: input → idea → result)

Used in Lapped Textures [Praun et al.,’00]

slide-26
SLIDE 26

Chaos Mosaic [Xu, Guo & Shum, ’00]

Of course, it doesn’t work for structured textures.

(figure: input → result)

slide-27
SLIDE 27

Multi-Resolution Pyramids*

Example texture pyramid

* L.-Y. Wei, M. Levoy; “Fast Texture Synthesis using Tree-structured Vector Quantization”; SIGGRAPH 2000.

Output texture

slide-28
SLIDE 28

Extension to 3D Textures

  • Motion both in space and time

– fire, smoke, ocean waves.

  • How to synthesize?

– extend 2D algorithm to 3D.

slide-29
SLIDE 29

The Problems of Causal Scanning

  • Scanning order:

– Efros & Leung (1): pixels with the most known neighbors.
– Wei & Levoy (2): raster scan.

  • These are “causal” scans.

(1) A.A. Efros, T.K. Leung; “Texture synthesis by non-parametric sampling”; ICCV 1999. (originally proposed by [Garber,’81])
(2) L.-Y. Wei, M. Levoy; “Fast Texture Synthesis using Tree-structured Vector Quantization”; SIGGRAPH 2000.

slide-30
SLIDE 30

The Problems of Causal Scanning

  • Can grow garbage.
  • No natural means of refining the synthesis.
  • Cannot be parallelized.
  • Problems are made worse for synthesis of 3D space-time volumes (a.k.a. video)…

A.A.Efros, T.K.Leung; “Texture synthesis by non-parametric sampling”; ICCV99.

slide-31
SLIDE 31

Image Quilting

  • Idea: combine the random block placement of Chaos Mosaic with the spatial constraints of Efros & Leung.

  • Observation: neighboring pixels are highly correlated.
slide-32
SLIDE 32

Efros & Leung ’99 extended

Idea: unit of synthesis = block

  • Exactly the same, but now we want P( B | N(B) )
  • Much faster: synthesize all pixels in a block at once
  • Not the same as multi-scale!

(figure: input image; non-parametric sampling; synthesizing a block B)

slide-33
SLIDE 33

Input texture

B1 B2 – random placement of blocks

B1 B2 – neighboring blocks constrained by overlap

B1 B2 – minimal error boundary cut

slide-34
SLIDE 34
Minimal error boundary

Overlapping blocks define a vertical boundary region; the overlap error is the squared difference between the two blocks over that region, and the minimal error boundary is the cheapest cut through this error surface.
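The minimal error boundary can be computed with a short dynamic program over the overlap error surface. A sketch, assuming grayscale blocks and a vertical overlap region:

```python
import numpy as np

def min_error_boundary_cut(overlap_old, overlap_new):
    """Dynamic-programming cut through the overlap error surface.
    Returns, per row, the column where the seam passes: pixels left of
    the seam keep the old block, pixels right of it take the new block."""
    err = (overlap_old.astype(float) - overlap_new.astype(float)) ** 2
    h, w = err.shape
    cost = err.copy()
    # forward pass: cheapest path cost reaching each cell from the top
    for i in range(1, h):
        for j in range(w):
            lo, hi = max(j - 1, 0), min(j + 2, w)
            cost[i, j] += cost[i - 1, lo:hi].min()
    # backtrack from the cheapest bottom-row cell
    seam = np.empty(h, dtype=int)
    seam[-1] = int(np.argmin(cost[-1]))
    for i in range(h - 2, -1, -1):
        j = seam[i + 1]
        lo, hi = max(j - 1, 0), min(j + 2, w)
        seam[i] = lo + int(np.argmin(cost[i, lo:hi]))
    return seam
```

This is the same seam-finding recurrence that later shows up in seam carving: each row's choice is constrained to be within one column of the row below it, so the cut stays connected.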
slide-35
SLIDE 35

Our Philosophy

  • The “Corrupt Professor’s Algorithm”:

– Plagiarize as much of the source image as you can
– Then try to cover up the evidence

  • Rationale:

– Texture blocks are by definition correct samples of texture, so the only problem is connecting them together.

slide-36
SLIDE 36

Image Quilting Algorithm

– Pick the size of block and size of overlap
– Synthesize blocks in raster order
– Search input texture for a block that satisfies the overlap constraints (above and left)

  • Easy to optimize using NN search [Liang et al., ’01]

– Paste new block into the resulting texture

  • Use dynamic programming to compute the minimal error boundary cut
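The raster-order loop above can be sketched as follows. This is a simplified grayscale version that pastes the chosen block directly and omits the boundary cut; `block`, `overlap`, and `out_blocks` are illustrative parameter names:

```python
import numpy as np

rng = np.random.default_rng(0)

def quilt(texture, block, overlap, out_blocks, eps=0.1):
    """Image quilting, minus the boundary cut: raster-order synthesis
    where each new block must match the already-synthesized overlap."""
    step = block - overlap
    size = step * out_blocks + overlap
    out = np.zeros((size, size))
    h, w = texture.shape
    for by in range(out_blocks):
        for bx in range(out_blocks):
            y, x = by * step, bx * step
            if by == 0 and bx == 0:
                sy = rng.integers(h - block + 1)
                sx = rng.integers(w - block + 1)
            else:
                # score candidates by SSD over the overlap (above and left)
                scored = []
                for cy in range(h - block + 1):
                    for cx in range(w - block + 1):
                        cand = texture[cy:cy + block, cx:cx + block]
                        d = 0.0
                        if bx > 0:
                            d += np.sum((cand[:, :overlap] - out[y:y + block, x:x + overlap]) ** 2)
                        if by > 0:
                            d += np.sum((cand[:overlap, :] - out[y:y + overlap, x:x + block]) ** 2)
                        scored.append((d, cy, cx))
                dmin = min(s[0] for s in scored)
                good = [(cy, cx) for d, cy, cx in scored if d <= dmin * (1 + eps) + 1e-12]
                sy, sx = good[rng.integers(len(good))]
            out[y:y + block, x:x + block] = texture[sy:sy + block, sx:sx + block]
    return out
```

In the full algorithm, the paste step would blend the new block along the minimal error boundary cut instead of overwriting the overlap outright.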

See https://www.youtube.com/watch?v=t6DzioKuVEs

Video

slide-37
SLIDE 37
slide-38
SLIDE 38
slide-39
SLIDE 39
slide-40
SLIDE 40
slide-41
SLIDE 41
slide-42
SLIDE 42
slide-43
SLIDE 43
slide-44
SLIDE 44

Failures

(Chernobyl Harvest)

slide-45
SLIDE 45

input image

(comparison: Portilla & Simoncelli; Wei & Levoy; Image Quilting; Xu, Guo & Shum)

slide-46
SLIDE 46

(comparison: Portilla & Simoncelli; Wei & Levoy; Image Quilting; Xu, Guo & Shum)

input image

slide-47
SLIDE 47

(comparison: Portilla & Simoncelli; Wei & Levoy; Image Quilting; Xu, Guo & Shum)

input image

Homage to Shannon!

slide-48
SLIDE 48

Synthesis in Action

slide-49
SLIDE 49
slide-50
SLIDE 50

Synthesis by Optimization

The pixels are all synthesized in parallel, not in a particular order; iterate until convergence.

  • Y. Wexler, E. Shechtman, M. Irani; “Space-Time Video Completion”; CVPR 2004.
  • V. Kwatra, I. Essa, A. Bobick, N. Kwatra; “Texture Optimization for Example-based Synthesis”; SIGGRAPH 2005.

slide-51
SLIDE 51

Synthesis by Optimization

Synthesized Texture Exemplar

slide-52
SLIDE 52

Synthesis by Optimization

Synthesized Texture Exemplar

Patches overlap!!

slide-53
SLIDE 53

Synthesis by Optimization

Exemplar Synthesized Texture

Average
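One iteration of this search-and-average scheme can be sketched like so. It is a toy grayscale version of the Wexler / Kwatra idea (nearest-exemplar-patch search, then averaging the overlapping matches), not their exact energy or weighting:

```python
import numpy as np

def optimize_step(exemplar, synth, patch, step):
    """One optimization iteration: for every overlapping patch of the
    synthesized image, find its nearest exemplar patch, then rebuild the
    image by averaging all the overlapping matched patches."""
    h, w = synth.shape
    eh, ew = exemplar.shape
    acc = np.zeros_like(synth, dtype=float)
    cnt = np.zeros_like(synth, dtype=float)
    cands = [(cy, cx) for cy in range(eh - patch + 1)
                      for cx in range(ew - patch + 1)]
    for y in range(0, h - patch + 1, step):
        for x in range(0, w - patch + 1, step):
            window = synth[y:y + patch, x:x + patch]
            # nearest exemplar patch under SSD
            d, by, bx = min(
                (np.sum((exemplar[cy:cy + patch, cx:cx + patch] - window) ** 2), cy, cx)
                for cy, cx in cands)
            acc[y:y + patch, x:x + patch] += exemplar[by:by + patch, bx:bx + patch]
            cnt[y:y + patch, x:x + patch] += 1
    return acc / np.maximum(cnt, 1)
```

Because patches overlap, each pixel receives several votes; the averaging is what makes the update parallel over all pixels, and also what causes the blurring addressed on the following slides.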

slide-54
SLIDE 54

Histogram Matching, Kopf et al., SIGGRAPH 2006

Exemplar Synthesis Exemplar Synthesis
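A rank-based sketch of histogram matching: force the synthesized values to take on the exemplar's value distribution. Kopf et al. work per histogram bin with weighting inside the optimization, so treat this as the generic technique rather than their exact scheme:

```python
import numpy as np

def match_histogram(synth, exemplar):
    """Rank the synthesized pixels, then replace each by the exemplar
    value of the same rank (exact histogram matching)."""
    flat = synth.ravel()
    order = np.argsort(flat)
    # tile/truncate the exemplar values to the synthesized size, then sort
    target = np.sort(np.resize(exemplar.ravel(), flat.size))
    out = np.empty_like(flat, dtype=exemplar.dtype)
    out[order] = target
    return out.reshape(synth.shape)
```

The point of the slide sequence: averaging overlapping patches washes out the exemplar's statistics, and a histogram-matching step pulls them back.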

slide-55
SLIDE 55

Histogram Matching

slide-56
SLIDE 56

Histogram Matching

slide-57
SLIDE 57

Histogram Matching

slide-58
SLIDE 58

Histogram Matching

slide-59
SLIDE 59

+ =

Application: Texture Transfer

  • Try to explain one object with bits and pieces of another object:

slide-60
SLIDE 60

Texture Transfer

Constraint Texture sample

slide-61
SLIDE 61
Texture Transfer

  • Take the texture from one image and “paint” it onto another object.

Same as texture synthesis, except with an additional constraint:

  • 1. Consistency of texture
  • 2. Similarity to the image being “explained”
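In Efros & Freeman's texture transfer, the two constraints combine into a single block-matching error as a weighted sum; the helper name below is ours, and the two SSD inputs are assumed to be precomputed:

```python
def transfer_error(block_overlap_ssd, block_corr_ssd, alpha):
    """Texture-transfer block error: alpha weighs texture consistency
    (the usual quilting overlap SSD) against similarity between the
    block's correspondence map and the target image's correspondence map."""
    return alpha * block_overlap_ssd + (1 - alpha) * block_corr_ssd
```

Sweeping alpha trades off faithful texture against faithfully "explaining" the target image.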

slide-62
SLIDE 62

+ =

slide-63
SLIDE 63

Source texture Target image Source correspondence image Target correspondence image

slide-64
SLIDE 64

+ =

slide-65
SLIDE 65

= +

slide-66
SLIDE 66
slide-67
SLIDE 67

?

Image analogies (filter by example)

A : A’ :: B : ?  (A is to A’ as B is to B’)

slide-68
SLIDE 68

A1,…,An : A1’,…,An’ :: B : ?

(figure: pairs A1→A1’, A2→A2’, A3→A3’; B → B’)

slide-69
SLIDE 69

input → output

texture segmentation: drawing with color-coded textures

slide-70
SLIDE 70

Applications –Artistic Filters (Cont.)

Source Pair: Target Pairs:

slide-71
SLIDE 71

“Texture By Numbers”

  • By color-labeling source image parts, a realistic synthesized image can be created

A B A` B`

Video

slide-72
SLIDE 72

Fragment-based Image Completion (SIGGRAPH ’03)

slide-73
SLIDE 73

Fragment-based Image Completion (SIGGRAPH ’03)

slide-74
SLIDE 74

Completion process

confidence and color at different time steps and scales

time

scale

slide-75
SLIDE 75

Results

slide-76
SLIDE 76

input image completion

Results

slide-77
SLIDE 77

Results

slide-78
SLIDE 78

Video Completion

Y. Wexler, E. Shechtman, M. Irani; “Space-Time Video Completion”; CVPR 2004.

slide-79
SLIDE 79

Time-varying Weathering in Texture Space Rachele Bellini, Yanir Kleiman, Daniel Cohen-Or SIGGRAPH 2016

slide-80
SLIDE 80

(Neural) Texture Synthesis

slide-81
SLIDE 81


Given a sample patch of some texture, can we generate a bigger image of the same texture?

Input Output
slide-82
SLIDE 82


Wei and Levoy, “Fast Texture Synthesis using Tree-structured Vector Quantization”, SIGGRAPH 2000 Efros and Leung, “Texture Synthesis by Non-parametric Sampling”, ICCV 1999

Neural Style Transfer

slide-83
SLIDE 83
  • 1. Pretrain CNN on ImageNet (VGG-19)
  • 2. Feed in a texture, record activations
  • 3. Compute a Gram matrix for each layer
  • 4. Initialize image from random noise
  • 5. Feed image through CNN, computing Gram matrices
  • 6. Compute loss as the sum of L2 distances between Gram matrices
  • 7. Back-propagate to get a gradient for the input image
  • 8. Update input image
  • 9. Repeat 5–8 until convergence
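Steps 3 and 6 can be made concrete with a few lines of NumPy. The normalization constant here is a common choice, not necessarily the exact factor used by Gatys et al.:

```python
import numpy as np

def gram_matrix(features):
    """Gram matrix of a CNN activation volume (C, H, W): channel-by-channel
    correlations that discard all spatial arrangement, keeping only texture
    statistics."""
    c, h, w = features.shape
    f = features.reshape(c, h * w)
    return f @ f.T / (c * h * w)   # normalize by layer size (a common choice)

def style_loss(feats_a, feats_b):
    """Sum over layers of squared Gram-matrix differences (step 6)."""
    return sum(np.sum((gram_matrix(a) - gram_matrix(b)) ** 2)
               for a, b in zip(feats_a, feats_b))
```

Because the Gram matrix throws away spatial positions, matching it reproduces the texture's statistics without copying its layout.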

slide-84
SLIDE 84

Style Loss

L.A. Gatys, A.S. Ecker, and M. Bethge. Texture Synthesis Using Convolutional Neural Networks. Advances in Neural Information Processing Systems 28 (May 2015)
slide-85
SLIDE 85

Style Loss cont.

L.A. Gatys, A.S. Ecker, and M. Bethge. Texture Synthesis Using Convolutional Neural Networks. Advances in Neural Information Processing Systems 28 (May 2015)
slide-86
SLIDE 86


Reconstructing from higher layers recovers larger features from the input texture

Gatys et al, “Texture Synthesis using Convolutional Neural Networks”, NIPS 2015


slide-87
SLIDE 87

Style Transfer: Feature Inversion + Texture Synthesis

slide-88
SLIDE 88


Feature reconstruction Texture synthesis (Gram reconstruction)

Figure credit: Johnson et al, “Perceptual Losses for Real-Time Style Transfer and Super-Resolution”, ECCV 2016


slide-89
SLIDE 89

(figure: Content Image + Style Image = Stylized Result)

Gatys et al, “A Neural Algorithm of Artistic Style”, arXiv 2015; Gatys et al, “Image Style Transfer using Convolutional Neural Networks”, CVPR 2016

Given a content image and a style image, find a new image that

  • Matches the CNN features of the content image (feature reconstruction)
  • Matches the Gram matrices of the style image (texture synthesis)

Combine feature reconstruction from Mahendran et al with Neural Texture Synthesis from Gatys et al, using the same CNN!


slide-90
SLIDE 90

Gatys et al, “Image Style Transfer using Convolutional Neural Networks”, CVPR 2016

1. Pretrain CNN
2. Compute features for content image
3. Compute Gram matrices for style image
4. Randomly initialize new image
5. Forward new image through CNN
6. Compute style loss (L2 distance between Gram matrices) and content loss (L2 distance between features)
7. Loss is weighted sum of style and content losses
8. Backprop to image
9. Take a gradient step
10. GOTO 5
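The optimization loop (steps 4–10) in miniature, with a toy stand-in for the CNN so the whole thing runs in a few lines. Everything about the "features" here is illustrative: the content feature is the image itself, and the "style" term only matches the mean value:

```python
import numpy as np

def stylize(content, style, alpha=1.0, beta=1.0, lr=0.1, iters=200):
    """Toy steps 4-10: gradient descent on the image itself, minimizing a
    weighted sum of a content term and a (toy) style term."""
    rng = np.random.default_rng(0)
    img = rng.standard_normal(content.shape)           # 4. random init
    for _ in range(iters):                             # 10. GOTO 5
        g_content = 2 * (img - content)                # grad of ||img - content||^2
        g_style = 2 * (img.mean() - style.mean()) / img.size  # toy style grad
        img -= lr * (alpha * g_content + beta * g_style)      # 9. gradient step
    return img
```

The real method is the same loop with VGG features and Gram matrices in place of these toy terms; the key structural point is that the *image*, not any network weight, is the optimization variable.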


slide-91
SLIDE 91

Neural Style Transfer


Gatys et al, “Image Style Transfer using Convolutional Neural Networks”, CVPR 2016

slide-92
SLIDE 92

Neural Style Transfer


slide-93
SLIDE 93

Neural Style Transfer: Style / Content Tradeoff

(left: more weight to content loss; right: more weight to style loss)


Justin Johnson, “neural-style”, https://github.com/jcjohnson/neural-style

slide-94
SLIDE 94

Neural Style Transfer: Style Scale

(larger style image vs. smaller style image)

Resizing the style image before running the style transfer algorithm can transfer different types of features.


Justin Johnson, “neural-style”, https://github.com/jcjohnson/neural-style

slide-95
SLIDE 95

Neural Style Transfer: Multiple Style Images

Mix style from multiple images by taking a weighted average of Gram matrices
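The weighted average of Gram matrices is essentially one line; sketched here with a hypothetical helper name:

```python
import numpy as np

def mixed_gram(grams, weights):
    """Blend styles: weighted average of the style images' Gram matrices,
    with the weights normalized to sum to 1."""
    weights = np.asarray(weights, dtype=float)
    weights /= weights.sum()
    return sum(w * g for w, g in zip(weights, grams))
```

The blended Gram matrices then replace the single style target in the usual style loss.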


Justin Johnson, “neural-style”, https://github.com/jcjohnson/neural-style

slide-96
SLIDE 96

Neural Style Transfer: Multiple Style Images

More “Scream” More “Starry Night”


Justin Johnson, “neural-style”, https://github.com/jcjohnson/neural-style

slide-97
SLIDE 97


Fast Style Transfer

Problem: style transfer is slow; it needs hundreds of forward + backward passes through VGG.

Solution: train a feedforward network to perform style transfer!

slide-98
SLIDE 98


Fast Style Transfer

(1) Train a feedforward network for each style
(2) Use a pretrained CNN to compute the same losses as before
(3) After training, stylize images using a single forward pass

Works real-time at test-time!

Johnson et al, “Perceptual Losses for Real-Time Style Transfer and Super-Resolution”, ECCV 2016

slide-99
SLIDE 99


Fast Style Transfer

Gatys Ours Gatys Ours

Johnson et al, “Perceptual Losses for Real-Time Style Transfer and Super-Resolution”, ECCV 2016 https://github.com/jcjohnson/fast-neural-style

Works real-time on video!

slide-100
SLIDE 100

Learning to transfer style

Huang, Xun; Belongie, Serge; “Arbitrary Style Transfer in Real-time with Adaptive Instance Normalization”; ICCV, Venice, Italy, 2017 (Oral).