

SLIDE 1

2D Computer Graphics

Diego Nehab, Summer 2020

IMPA

SLIDE 2

Anti-aliasing and texture mapping

SLIDES 3-5

Anti-aliasing

Let f be a function and ψ an anti-aliasing filter
The value of pixel p_i is given by

p_i = (f ∗ ψ)(i) = ∫_{−∞}^{∞} f(t) ψ(i − t) dt

How to compute the integral when f is a vector graphics illustration?

SLIDES 6-11

Analytic antialiasing

Assume a box filter, a single layer, a solid color, and a simple polygon

  • Clip the polygon against the box centered at each pixel
  • Compute the weighted area using Green's theorem from Calculus

It is possible to clip edges, not the shapes

  • + general piecewise polynomial filters [Duff, 1989]
  • + curved edges [Manson and Schaefer, 2013]

What about polygons with self-intersections? What about spatially varying colors? What about multiple opaque layers? What about transparency?
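Before those complications, a minimal Python sketch of the simple case (not from the slides; all names are illustrative): clip the polygon to the unit pixel square with Sutherland-Hodgman, then evaluate the covered area with the shoelace formula, which is Green's theorem specialized to polygons.

```python
# Sketch: analytic box-filter coverage of a simple polygon at one pixel.
# Clip the polygon to the pixel square, then integrate the area with the
# shoelace formula (Green's theorem applied to a polygon boundary).

def clip_halfplane(poly, inside, intersect):
    """One Sutherland-Hodgman pass against a half-plane."""
    out = []
    for i, cur in enumerate(poly):
        prev = poly[i - 1]
        if inside(cur):
            if not inside(prev):
                out.append(intersect(prev, cur))
            out.append(cur)
        elif inside(prev):
            out.append(intersect(prev, cur))
    return out

def clip_to_pixel(poly, px, py):
    """Clip polygon to the unit box centered at pixel (px, py)."""
    x0, x1, y0, y1 = px - 0.5, px + 0.5, py - 0.5, py + 0.5
    def at_x(a, b, x):  # edge intersection with vertical line x
        t = (x - a[0]) / (b[0] - a[0])
        return (x, a[1] + t * (b[1] - a[1]))
    def at_y(a, b, y):  # edge intersection with horizontal line y
        t = (y - a[1]) / (b[1] - a[1])
        return (a[0] + t * (b[0] - a[0]), y)
    poly = clip_halfplane(poly, lambda p: p[0] >= x0, lambda a, b: at_x(a, b, x0))
    poly = clip_halfplane(poly, lambda p: p[0] <= x1, lambda a, b: at_x(a, b, x1))
    poly = clip_halfplane(poly, lambda p: p[1] >= y0, lambda a, b: at_y(a, b, y0))
    poly = clip_halfplane(poly, lambda p: p[1] <= y1, lambda a, b: at_y(a, b, y1))
    return poly

def shoelace_area(poly):
    """Signed area: 1/2 * sum of (x_{i-1} y_i - x_i y_{i-1})."""
    return 0.5 * sum(poly[i - 1][0] * p[1] - p[0] * poly[i - 1][1]
                     for i, p in enumerate(poly))

# The box filter has unit area, so coverage equals the clipped area.
tri = [(-0.25, -0.25), (2.0, 0.0), (0.0, 2.0)]
print(abs(shoelace_area(clip_to_pixel(tri, 0.0, 0.0))))
```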

SLIDES 12-16

Popular hack

Assume a path P_i with constant color ⟨f_i, α_i⟩
Assume blending over the background ⟨b_i, β_i⟩
Assume an anti-aliasing filter ψ with support Ω
Define the coverage o of P_i at pixel p

o = ∫_Ω [u − p ∈ P_i] ψ(u) du

The new background ⟨b_{i+1}, β_{i+1}⟩ is

⟨b_{i+1}, β_{i+1}⟩ = ⟨f_i, α_i · o⟩ ⊕ ⟨b_i, β_i⟩
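A minimal sketch of the hack in Python, assuming premultiplied RGBA colors and the standard Porter-Duff over operator for ⊕; names are illustrative.

```python
# Sketch: coverage-scaled "over" blending (the popular hack).
# Colors are premultiplied RGBA tuples (r, g, b, a).

def over(src, dst):
    """Porter-Duff over for premultiplied colors: src + (1 - a_src) * dst."""
    return tuple(s + (1.0 - src[3]) * d for s, d in zip(src, dst))

def blend_with_coverage(fill, coverage, background):
    """Scale the layer by its pixel coverage o, then composite over."""
    src = tuple(c * coverage for c in fill)   # <f_i, a_i * o>, premultiplied
    return over(src, background)

bg = (1.0, 1.0, 1.0, 1.0)                 # opaque white background
red = (1.0, 0.0, 0.0, 1.0)                # opaque red path
print(blend_with_coverage(red, 0.4, bg))  # pixel 40% covered by the path
```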

SLIDES 17-19

Problems with hack

Visible seams at perfectly abutting layers, weird halos
This is called the correlated mattes problem
It also either blends in linear, or antialiases in gamma
Must blend in gamma and antialias in linear [Nehab and Hoppe, 2008]

⟨b_{i+1}, β_{i+1}⟩ = γ( γ⁻¹(⟨f_i, α_i⟩ ⊕ ⟨b_i, β_i⟩) · o + γ⁻¹(⟨b_i, β_i⟩) · (1 − o) )
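A sketch of the corrected pipeline, assuming non-premultiplied colors and a simple 2.2 power curve standing in for the exact sRGB transfer function:

```python
# Sketch: blend in gamma, anti-alias in linear [Nehab and Hoppe, 2008].
# gamma() and degamma() use a 2.2 power curve (an assumption; real sRGB
# uses a piecewise curve).

def degamma(c):   # gamma-encoded -> linear
    return tuple(x ** 2.2 for x in c)

def gamma(c):     # linear -> gamma-encoded
    return tuple(x ** (1.0 / 2.2) for x in c)

def over_gamma(src, dst, alpha):
    """Non-premultiplied over in gamma space: a*src + (1-a)*dst."""
    return tuple(alpha * s + (1.0 - alpha) * d for s, d in zip(src, dst))

def blend_antialias(fg, alpha, bg, coverage):
    """b' = gamma( degamma(fg (+) bg) * o + degamma(bg) * (1 - o) )."""
    blended = degamma(over_gamma(fg, bg, alpha))   # blend happens in gamma
    base = degamma(bg)
    lin = tuple(coverage * u + (1.0 - coverage) * v   # coverage mix in linear
                for u, v in zip(blended, base))
    return gamma(lin)

print(blend_antialias((1.0, 0.0, 0.0), 1.0, (1.0, 1.0, 1.0), 0.5))
```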
SLIDES 20-26

Probability in 2 slides

A random variable X is a function that maps outcomes to numbers
The associated cumulative distribution function F_X is such that

F_X(a) = P[X ≤ a]

i.e., it measures the probability that the numerical value is at most a.
The associated probability density function f_X is such that

F_X(a) = ∫_{−∞}^{a} f_X(t) dt

i.e., its integral is the cumulative distribution function.
The associated expectation E[X] (or mean µ_X) is

E[X] = ∫_{−∞}^{∞} t f_X(t) dt = µ_X    (1)

i.e., the mean value weighted by the probability density function.

SLIDES 27-31

Probability in 2 slides

The associated variance var(X) = σ_X² is

var(X) = E[(X − µ_X)²] = E[X²] − E²[X] = σ_X²

and the standard deviation is σ_X.
Both measure how much the random variable deviates from the mean
The sample average is

X̄_n = (1/n)(X_1 + X_2 + · · · + X_n)

Law of large numbers: X̄_n → µ_X for n → ∞
Variance of the sample average (for independent X_i):

var(X̄_n) = var((1/n) ∑ X_i) = (1/n²) ∑ var(X_i) = σ_X²/n
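A quick numeric sanity check of var(X̄_n) = σ_X²/n, using uniform variates on [0, 1] (variance 1/12); purely illustrative.

```python
# Sketch: the variance of the sample average shrinks as sigma^2 / n.
import random

def sample_average(n):
    return sum(random.random() for _ in range(n)) / n  # X uniform on [0, 1]

def empirical_var(values):
    m = sum(values) / len(values)
    return sum((v - m) ** 2 for v in values) / len(values)

sigma2 = 1.0 / 12.0   # variance of U[0, 1]
for n in (1, 4, 16, 64):
    trials = [sample_average(n) for _ in range(20000)]
    print(n, empirical_var(trials), sigma2 / n)   # columns should match
```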

SLIDES 32-35

Monte Carlo integration

Start by expressing an integral as the expectation of a random variable
Estimate the expectation by the sample mean
Rely on the law of large numbers
Let X be such that the support of f_X is Ω

∫_Ω g(t) dt = ∫_Ω (g(t)/f_X(t)) f_X(t) dt = E[g(X)/f_X(X)] ≈ (1/n) ∑_{i=1}^{n} g(X_i)/f_X(X_i)

This is the basis of supersampling
The solution to our anti-aliasing problems
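A minimal sketch of the estimator, here integrating sin over [0, π] (exact value 2) with X uniform on the interval; names are illustrative.

```python
# Sketch: Monte Carlo estimate of an integral as an expectation.
import math
import random

def mc_integrate(g, sample, pdf, n=100000):
    """(1/n) * sum of g(X_i) / f_X(X_i), with X_i drawn from pdf."""
    return sum(g(x) / pdf(x) for x in (sample() for _ in range(n))) / n

est = mc_integrate(math.sin,
                   lambda: random.uniform(0.0, math.pi),  # X ~ U[0, pi]
                   lambda t: 1.0 / math.pi)               # f_X constant
print(est)  # approximately 2
```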

SLIDES 36-40

Supersampling

Let g : R² → RGB map positions to linear color
Consider an anti-aliasing kernel ψ
The linear color at pixel p is

c(p) = ∫_Ω g(p − q) ψ(q) dq = E[g(p − X) ψ(X)/f_X(X)] ≈ (1/n) ∑_{i=1}^{n} g(p − X_i) ψ(X_i)/f_X(X_i)

When ψ = β⁰ is the box, f_X = 1 with support Ω = [−1/2, 1/2]²

c(p) ≈ (1/n) ∑_{i=1}^{n} g(p − X_i)
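A sketch of the box-filter special case: the pixel color reduces to a plain average of g at jittered positions inside the pixel. The scene function is a made-up stand-in.

```python
# Sketch: box-filter supersampling of one pixel. With psi = box and
# X uniform on [-1/2, 1/2]^2, the estimator is a plain average of g.
import random

def shade_pixel(g, px, py, n=16):
    acc = (0.0, 0.0, 0.0)
    for _ in range(n):
        xi = random.uniform(-0.5, 0.5)
        yi = random.uniform(-0.5, 0.5)
        c = g(px - xi, py - yi)       # g maps positions to linear color
        acc = tuple(a + ci for a, ci in zip(acc, c))
    return tuple(a / n for a in acc)

# Hypothetical scene: red left half-plane over white.
def scene(x, y):
    return (1.0, 0.0, 0.0) if x < 10.0 else (1.0, 1.0, 1.0)

print(shade_pixel(scene, 10.0, 5.0))  # roughly half red, half white
```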

SLIDES 41-43

Biased estimator

An estimator is unbiased if its expected value is correct
The Monte Carlo estimator is unbiased in this sense

c(p) ≈ (1/n) ∑_{i=1}^{n} g(p − X_i) ψ(X_i)/f_X(X_i)

It often makes sense to use a biased estimator to reduce variance

c(p) ≈ [∑_{i=1}^{n} g(p − X_i) ψ(X_i)/f_X(X_i)] / [∑_{i=1}^{n} ψ(X_i)/f_X(X_i)]
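A sketch of the normalized estimator with a Gaussian ψ and uniform samples (so the constant f_X cancels in the ratio); the kernel weights are renormalized to sum to one, trading a small bias for lower variance. Parameters are illustrative.

```python
# Sketch: normalized (biased) estimator with a Gaussian kernel.
import math
import random

def shade_pixel_weighted(g, px, py, n=64, radius=1.5, sigma=0.5):
    num = [0.0, 0.0, 0.0]
    den = 0.0
    for _ in range(n):
        xi = random.uniform(-radius, radius)   # uniform f_X cancels out
        yi = random.uniform(-radius, radius)
        w = math.exp(-(xi * xi + yi * yi) / (2.0 * sigma * sigma))  # psi(X_i)
        c = g(px - xi, py - yi)
        for k in range(3):
            num[k] += w * c[k]
        den += w
    return tuple(v / den for v in num)   # weights forced to sum to one

scene = lambda x, y: (1.0, 0.0, 0.0) if x < 10.0 else (1.0, 1.0, 1.0)
print(shade_pixel_weighted(scene, 10.0, 5.0))
```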

SLIDES 44-52

Importance sampling

What happens if we choose f_X(t) ∝ g(t)?

∫_Ω g(t) dt = E[g(X)/f_X(X)] = E[α] = α

since f_X = g/α makes the ratio g(X)/f_X(X) = α constant. We only need one sample!
Unfortunately, we need to normalize g to transform it into a PDF
For that, we need to divide it by its integral
This integral is exactly what we are trying to compute!
However, we can often make f_X almost proportional to g
This is importance sampling
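A toy sketch of the ideal case: g(t) = t² on [0, 1] with proposal f_X(t) = 3t² exactly proportional to g, sampled by inverting the CDF. Every sample then returns the exact integral 1/3; in practice f_X can only approximate g.

```python
# Sketch: importance sampling with a perfectly proportional proposal.
import random

def importance_estimate(n=1000):
    # Target: integral of g(t) = t^2 on [0, 1] (exact value 1/3).
    # Proposal pdf f_X(t) = 3 t^2 has CDF t^3, inverted as u^(1/3).
    samples = ((1.0 - random.random()) ** (1.0 / 3.0) for _ in range(n))
    return sum((x * x) / (3.0 * x * x) for x in samples) / n

print(importance_estimate())  # exactly 1/3, from any number of samples
```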

slide-52
SLIDE 52

Importance sampling

What happens if we choose fX(t) ∝ g(t)?

g(t) dt = E g(X) fX(X)

  • = E[α] = g(X)

f(X) We only need one sample! Unfortunately, we need to normalize g to transform it into a PDF For that, we need to divide it by its integral This integral is exactly what we are trying to compute! However, we can often make fX almost proportional to g This is importance sampling

11

slide-53
SLIDE 53

Better sample distributions

Many different point distributions have fX = 1/AΩ in Ω Uniform, stratifjed, low-discrepancy (e.g. Poisson disk, Lloyd relaxation) Variance of Xn is not the same for all of them!

12

slide-54
SLIDE 54

Better sample distributions

Many different point distributions have fX = 1/AΩ in Ω Uniform, stratifjed, low-discrepancy (e.g. Poisson disk, Lloyd relaxation) Variance of Xn is not the same for all of them!

12

slide-55
SLIDE 55

Better sample distributions

Many different point distributions have fX = 1/AΩ in Ω Uniform, stratifjed, low-discrepancy (e.g. Poisson disk, Lloyd relaxation) Variance of Xn is not the same for all of them!

12
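A sketch comparing uniform and stratified (jittered-grid) samples on the quarter-disk indicator, whose average should approach π/4 ≈ 0.785; both estimators share f_X = 1/A_Ω, but the stratified one typically lands closer.

```python
# Sketch: uniform vs. stratified samples on [0, 1)^2.
import random

def uniform_samples(n):
    return [(random.random(), random.random()) for _ in range(n)]

def stratified_samples(k):
    """k*k jittered samples: one uniform sample per grid cell."""
    return [((i + random.random()) / k, (j + random.random()) / k)
            for i in range(k) for j in range(k)]

def average(g, pts):
    return sum(g(x, y) for x, y in pts) / len(pts)

g = lambda x, y: 1.0 if x * x + y * y < 1.0 else 0.0   # quarter disk
print(average(g, uniform_samples(256)))
print(average(g, stratified_samples(16)))   # usually closer to pi/4
```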

SLIDES 56-71

[Figures: Regular, Uniform, Stratified, and Blue-noise sampling patterns at 16, 64, 256, and 1024 samples]

SLIDES 72-77

Better anti-aliasing kernels

Box, Linear, Gaussian, Keys, Lanczos, Cardinal B-spline

[Plots for each kernel: impulse response on x ∈ [−4, 4] and frequency response on ω ∈ [−2π, 2π], in linear and dB scales]

SLIDES 78-80

Generalized sampling

[Diagram: input → discretization (sampling, continuous analysis) → digital filtering → reconstruction (mixed synthesis) → output]

Cardinal cubic B-spline
Needs sample sharing for variance reduction and speed

SLIDE 81

Texturing

Assuming good reconstruction and prefilter kernels,

  • Upsampling needs only reconstruction
  • Downsampling needs only prefiltering

SLIDES 82-84

[Figures: upsampling with Box, Linear, and Cardinal cubic B-spline reconstruction]

SLIDES 85-90

Texturing

Assuming good reconstruction and prefilter kernels,

  • Upsampling needs only reconstruction
  • Downsampling needs only prefiltering

Reconstruction is easy, prefiltering is difficult
Non-uniform resampling

  • Reconstruct when locally upsampling
  • Prefilter when locally downsampling
  • The Jacobian of the map from screen to texture coordinates decides

Approximate solution for isotropic downsampling: mipmaps
Otherwise, use anisotropic filtering
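A sketch of the Jacobian rule as commonly implemented on GPUs (an assumption, not from the slides): the mipmap level is the log2 of the longest texture-space footprint axis, clamped so magnification falls back to plain reconstruction at level 0.

```python
# Sketch: choosing a mipmap level from the screen-to-texture Jacobian.
# The Jacobian's columns are the texture-space derivatives (du/dx, dv/dx)
# and (du/dy, dv/dy) of the mapping; names are illustrative.
import math

def mip_level(dudx, dvdx, dudy, dvdy):
    rho = max(math.hypot(dudx, dvdx), math.hypot(dudy, dvdy))
    return max(0.0, math.log2(rho))   # < 1 texel/pixel: level 0, reconstruct

print(mip_level(4.0, 0.0, 0.0, 4.0))  # minifying 4x in both axes: level 2
```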

SLIDE 91

References

  • E. C. Anderson. Monte Carlo methods and importance sampling. UC Berkeley, 1999. Lecture notes for Stat 578C.
  • T. Duff. Polygon scan conversion by exact convolution. In Jacques André and Roger D. Hersch, editors, Raster Imaging and Digital Typography, pages 154–168. Cambridge University Press, 1989.
  • J. Manson and S. Schaefer. Analytic rasterization of curves with polynomial filters. Computer Graphics Forum (Proceedings of Eurographics), 32(2pt4):499–507, 2013.
  • D. Nehab and H. Hoppe. Random-access rendering of general vector graphics. ACM Transactions on Graphics (Proceedings of ACM SIGGRAPH Asia 2008), 27(5):135, 2008.
  • D. Nehab and H. Hoppe. A fresh look at generalized sampling. Foundations and Trends in Computer Graphics and Vision, 8(1):1–84, 2014.