2D Computer Graphics
Diego Nehab
Summer 2020, IMPA
Anti-aliasing and texture mapping
Anti-aliasing

Let f be a function and ψ an anti-aliasing filter
The value of pixel p_i is given by

  p_i = (f ∗ ψ)(i) = ∫_{−∞}^{∞} f(t) ψ(i − t) dt

How do we compute this integral when f is a vector graphics illustration?
Analytic antialiasing

Assume a box filter, a single layer, a solid color, and a simple polygon
- Clip the polygon against the box centered at each pixel
- Compute the weighted area using Green's theorem from calculus
It is possible to clip individual edges, not whole shapes
- Extends to general piecewise polynomial filters [Duff, 1989]
- Extends to curved edges [Manson and Schaefer, 2013]
What about polygons with self-intersections? What about spatially varying colors? What about multiple opaque layers? What about transparency?
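To make the Green's theorem step concrete, here is a minimal Python sketch (not from the slides): it clips the polygon to the unit box centered at a pixel with Sutherland–Hodgman clipping, then returns the box-filter coverage as the area of the clipped polygon via the shoelace formula. All names are illustrative.

```python
def signed_area(poly):
    """Signed area of a polygon via Green's theorem (shoelace formula)."""
    a = 0.0
    n = len(poly)
    for i in range(n):
        x0, y0 = poly[i]
        x1, y1 = poly[(i + 1) % n]
        a += x0 * y1 - x1 * y0
    return 0.5 * a

def clip_halfplane(poly, inside, intersect):
    """Sutherland-Hodgman clip of a polygon against one half-plane."""
    out = []
    n = len(poly)
    for i in range(n):
        cur, nxt = poly[i], poly[(i + 1) % n]
        if inside(cur):
            out.append(cur)
            if not inside(nxt):
                out.append(intersect(cur, nxt))
        elif inside(nxt):
            out.append(intersect(cur, nxt))
    return out

def box_coverage(poly, px, py):
    """Fraction of the unit box centered at (px, py) covered by poly."""
    # Clip against the four box edges x = px ± 1/2 and y = py ± 1/2.
    for axis, sign, bound in ((0, +1.0, px - 0.5), (0, -1.0, px + 0.5),
                              (1, +1.0, py - 0.5), (1, -1.0, py + 0.5)):
        def inside(v, axis=axis, sign=sign, bound=bound):
            return sign * (v[axis] - bound) >= 0.0
        def intersect(p, q, axis=axis, bound=bound):
            t = (bound - p[axis]) / (q[axis] - p[axis])
            return (p[0] + t * (q[0] - p[0]), p[1] + t * (q[1] - p[1]))
        poly = clip_halfplane(poly, inside, intersect)
        if not poly:
            return 0.0
    return abs(signed_area(poly))  # box has unit area, so area = coverage
```

For example, a pixel box fully inside the polygon yields coverage 1, and a box crossed by one straight edge yields the area of the resulting trapezoid.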
Popular hack

Assume a path P_i with constant color ⟨f_i, α_{f_i}⟩
Assume blending over the background ⟨b_i, α_{b_i}⟩
Assume an anti-aliasing filter ψ with support Ω
Define the coverage o of P_i at pixel p as

  o = ∫_Ω [u − p ∈ P_i] ψ(u) du

The new background ⟨b_{i+1}, α_{i+1}⟩ is

  ⟨b_{i+1}, α_{i+1}⟩ = ⟨f_i, α_i · o⟩ ⊕ ⟨b_i, α_i⟩
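A minimal sketch of the hack in Python, assuming (this is not in the slides) that colors are premultiplied-alpha RGBA tuples: scaling a premultiplied color by the coverage o implements ⟨f_i, α_i · o⟩, and ⊕ is the usual "over" operator.

```python
def over(src, dst):
    """Standard 'over' operator for premultiplied-alpha RGBA tuples."""
    return tuple(s + (1.0 - src[3]) * d for s, d in zip(src, dst))

def composite_layer(fg, bg, coverage):
    """Blend path color fg over background bg with partial coverage."""
    src = tuple(c * coverage for c in fg)  # scales both color and alpha
    return over(src, bg)

# 60% coverage of an opaque red path over an opaque white background:
print(composite_layer((1, 0, 0, 1), (1, 1, 1, 1), 0.6))  # (1.0, 0.4, 0.4, 1.0)
```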
Problems with hack

Visible seams at perfectly abutting layers, weird halos
This is called the correlated mattes problem
The hack also either blends in linear, or antialiases in gamma
Must blend in gamma and antialias in linear [Nehab and Hoppe, 2008]

  ⟨b_{i+1}, β_{i+1}⟩ = γ( γ⁻¹(⟨f_i, α_i⟩ ⊕ ⟨b_i, β_i⟩) · o + γ⁻¹(⟨b_i, β_i⟩) · (1 − o) )
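A sketch of the hybrid rule above, under simplifying assumptions that are not in the slides: an opaque background, non-premultiplied colors, and a plain power-law γ = 2.2 instead of the sRGB transfer function.

```python
GAMMA = 2.2  # assumption; real pipelines use the sRGB curve

def to_linear(c):
    return tuple(x ** GAMMA for x in c)

def to_gamma(c):
    return tuple(x ** (1.0 / GAMMA) for x in c)

def over_gamma(fg, a, bg):
    """'over' in gamma space, non-premultiplied color fg with alpha a."""
    return tuple(a * f + (1.0 - a) * b for f, b in zip(fg, bg))

def hybrid_blend(fg, alpha, bg, o):
    """Blend layers in gamma space; mix by coverage o in linear space."""
    covered = to_linear(over_gamma(fg, alpha, bg))   # γ⁻¹(f ⊕ b)
    uncovered = to_linear(bg)                        # γ⁻¹(b)
    mixed = tuple(o * c + (1.0 - o) * u for c, u in zip(covered, uncovered))
    return to_gamma(mixed)                           # back through γ
```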
Probability in 2 slides

A random variable X is a function that maps outcomes to numbers
The associated cumulative distribution function F_X is such that

  F_X(a) = P[X ≤ a]

i.e., it measures the probability that the numerical value is at most a
The associated probability density function f_X is such that

  F_X(a) = ∫_{−∞}^{a} f_X(t) dt

i.e., its integral is the cumulative distribution function
The associated expectation E[X] (or mean µ_X) is

  E[X] = ∫_{−∞}^{∞} t f_X(t) dt = µ_X    (1)

i.e., the mean value weighted by the probability density function
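As a concrete instance of these definitions (an added example, not from the slides): for X uniform on [0, 1], F_X(a) = a and f_X(t) = 1 on [0, 1], so E[X] = ∫₀¹ t dt = 1/2.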
Probability in 2 slides

The associated variance var(X) = σ²_X is

  var(X) = E[(X − µ_X)²] = E[X²] − E²[X] = σ²_X

and the standard deviation is σ_X
They measure how much the random variable deviates from the mean
The sample average is

  X̄_n = (1/n)(X_1 + X_2 + ⋯ + X_n)

Law of large numbers: X̄_n → µ_X for n → ∞
Variance of the sample average:

  var(X̄_n) = var((1/n) Σ_i X_i) = (1/n²) Σ_i var(X_i) = σ²_X / n
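A quick numeric check of var(X̄_n) = σ²_X / n (illustrative, not from the slides), using X uniform on [0, 1], for which σ²_X = 1/12:

```python
import random

def sample_average(n):
    return sum(random.random() for _ in range(n)) / n

def empirical_variance(n, trials=10000):
    xs = [sample_average(n) for _ in range(trials)]
    mean = sum(xs) / trials
    return sum((x - mean) ** 2 for x in xs) / trials

for n in (1, 4, 16, 64):
    # empirical variance should track (1/12)/n as n grows
    print(n, empirical_variance(n), (1.0 / 12.0) / n)
```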
Monte Carlo integration

Start by expressing an integral as the expectation of a random variable
Estimate the expectation by the sample mean
Rely on the law of large numbers
Let X be such that the support of f_X is Ω

  ∫_Ω g(t) dt = ∫_Ω (g(t) / f_X(t)) f_X(t) dt = E[g(X) / f_X(X)] ≈ (1/n) Σ_{i=1}^{n} g(X_i) / f_X(X_i)

This is the basis of supersampling
The solution to our anti-aliasing problems
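A minimal sketch of the estimator with X uniform on Ω = [0, 1], so f_X = 1; the integrand here is just a placeholder:

```python
import math
import random

def mc_integrate(g, n):
    """Estimate ∫_0^1 g(t) dt by the sample mean of g(X_i), X_i ~ U[0,1]."""
    return sum(g(random.random()) for _ in range(n)) / n

# Example: ∫_0^1 sin(πt) dt = 2/π ≈ 0.6366
print(mc_integrate(lambda t: math.sin(math.pi * t), 100000))
```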
Supersampling

Let g : ℝ² → RGB map positions to linear color
Consider an anti-aliasing kernel ψ
The linear color at pixel p is

  c(p) = ∫_Ω g(p − q) ψ(q) dq = E[g(p − X) ψ(X) / f_X(X)] ≈ (1/n) Σ_{i=1}^{n} g(p − X_i) ψ(X_i) / f_X(X_i)

When ψ = β⁰ is the box, f_X = 1 with support Ω = [−1/2, 1/2]², and

  c(p) ≈ (1/n) Σ_{i=1}^{n} g(p − X_i)
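A sketch of box-filter supersampling for one pixel; the scene function g (a disk indicator here) is a stand-in for evaluating the illustration at a point:

```python
import random

def g(x, y):
    """Placeholder scene: white disk of radius 0.8 on a black background."""
    return 1.0 if x * x + y * y < 0.64 else 0.0

def supersample(px, py, n=256):
    """c(p) ≈ (1/n) Σ g(p − X_i) with X_i uniform in [−1/2, 1/2]²."""
    total = 0.0
    for _ in range(n):
        dx = random.random() - 0.5
        dy = random.random() - 0.5
        total += g(px - dx, py - dy)
    return total / n
```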
Biased estimator

An estimator is unbiased if its expected value is correct
The Monte Carlo estimator is unbiased in this sense

  c(p) ≈ (1/n) Σ_{i=1}^{n} g(p − X_i) ψ(X_i) / f_X(X_i)

It often makes sense to use a biased estimator to reduce variance

  c(p) ≈ [ Σ_{i=1}^{n} g(p − X_i) ψ(X_i) / f_X(X_i) ] / [ Σ_{i=1}^{n} ψ(X_i) / f_X(X_i) ]
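A sketch of the normalized (biased) estimator with a Gaussian ψ and uniform samples; since f_X is constant it cancels in the ratio. The parameters (σ, support radius) are arbitrary choices, not from the slides:

```python
import math
import random

def psi(x, y, sigma=0.5):
    """Gaussian anti-aliasing weight (unnormalized)."""
    return math.exp(-(x * x + y * y) / (2.0 * sigma * sigma))

def filtered_sample(g, px, py, n=256, r=1.5):
    """Weighted sample sum divided by the sum of weights."""
    num = den = 0.0
    for _ in range(n):
        # uniform on [−r, r]², so f_X = 1/(2r)² cancels in the ratio
        dx = (2.0 * random.random() - 1.0) * r
        dy = (2.0 * random.random() - 1.0) * r
        w = psi(dx, dy)
        num += g(px - dx, py - dy) * w
        den += w
    return num / den if den > 0.0 else 0.0
```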
Importance sampling

What happens if we choose f_X(t) ∝ g(t)? Then f_X = g/α with α = ∫_Ω g(t) dt, so g(X)/f_X(X) = α for every sample, and

  ∫_Ω g(t) dt = E[g(X) / f_X(X)] = E[α] = g(X) / f_X(X)

We only need one sample!
Unfortunately, we need to normalize g to transform it into a PDF
For that, we need to divide it by its integral
This integral is exactly what we are trying to compute!
However, we can often make f_X almost proportional to g
This is importance sampling
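A sketch comparing uniform sampling with importance sampling for ∫₀¹ t² dt = 1/3, choosing f_X(t) = 2t (roughly proportional to g) and sampling it by CDF inversion (F(t) = t², so X = √U):

```python
import random

def estimate_uniform(n):
    """Plain Monte Carlo: X ~ U[0,1], estimator g(X) = X²."""
    return sum(random.random() ** 2 for _ in range(n)) / n

def estimate_importance(n):
    """Importance sampling: X ~ f_X(t) = 2t, estimator g(X)/f_X(X) = X/2."""
    total = 0.0
    for _ in range(n):
        x = random.random() ** 0.5  # CDF inversion of F(t) = t²
        total += x / 2.0
    return total / n
```

Both converge to 1/3, but the per-sample variance drops from 4/45 ≈ 0.089 (uniform) to 1/72 ≈ 0.014 (importance).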
Better sample distributions

Many different point distributions have f_X = 1/A_Ω in Ω
Uniform, stratified, low-discrepancy (e.g. Poisson disk, Lloyd relaxation)
The variance of X̄_n is not the same for all of them!
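A minimal stratified (jittered) sampler on [0, 1]² as a sketch: each of the k² grid cells gets one uniform sample, so f_X is still constant on the unit square, but clumping is reduced and the variance of the sample average drops.

```python
import random

def stratified_samples(k):
    """One uniform sample in each cell of a k×k grid over [0,1]²."""
    return [((i + random.random()) / k, (j + random.random()) / k)
            for i in range(k) for j in range(k)]

pts = stratified_samples(4)  # 16 samples
```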
[Figures: renderings with 16, 64, 256, and 1024 samples, each comparing Regular, Uniform, Stratified, and Blue noise sample distributions]
Better anti-aliasing kernels

[Figures: the Box, Linear, Gaussian, Keys, Lanczos, and Cardinal B-spline kernels, each shown with its frequency response]
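For reference, minimal Python definitions of three of these kernels (standard 1D formulas, applied separably in 2D; the Lanczos window a = 2 is a choice):

```python
import math

def box(x):
    return 1.0 if abs(x) <= 0.5 else 0.0

def tent(x):  # the "linear" kernel
    return max(0.0, 1.0 - abs(x))

def lanczos(x, a=2):
    """lanczos(x) = sinc(x) sinc(x/a) for |x| < a, else 0."""
    if x == 0.0:
        return 1.0
    if abs(x) >= a:
        return 0.0
    px = math.pi * x
    return a * math.sin(px) * math.sin(px / a) / (px * px)
```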
Generalized sampling

[Diagram: input → continuous analysis → sampling/discretization → digital filtering → reconstruction/mixed synthesis → output]

Cardinal cubic B-spline
Needs sample sharing for variance reduction and speed
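As a sketch, the (non-cardinal) cubic B-spline kernel used above; generalized sampling makes it interpolating by first running an inverse digital filter over the samples (see [Nehab and Hoppe], "A fresh look at generalized sampling"):

```python
def bspline3(x):
    """Cubic B-spline kernel: smooth, but not interpolating by itself."""
    x = abs(x)
    if x < 1.0:
        return (4.0 - 6.0 * x * x + 3.0 * x ** 3) / 6.0
    if x < 2.0:
        return (2.0 - x) ** 3 / 6.0
    return 0.0
```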
Texturing

Assuming good reconstruction and prefilter kernels,
- Upsampling needs only reconstruction
- Downsampling needs only prefiltering
[Figures: Box, Linear, and Cardinal cubic B-spline upsampling]
Texturing

Assuming good reconstruction and prefilter kernels,
- Upsampling needs only reconstruction
- Downsampling needs only prefiltering
Reconstruction is easy, prefiltering is difficult
Non-uniform resampling
- Reconstruct when locally upsampling
- Prefilter when locally downsampling
- The Jacobian of the map from screen to texture coordinates decides
Approximate solution for isotropic downsampling: mipmaps
Otherwise, use anisotropic filtering
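A sketch of how the Jacobian can drive mipmap level selection; the max-footprint rule is the common isotropic heuristic, and the names are assumptions, not from the slides:

```python
import math

def mip_level(dudx, dvdx, dudy, dvdy):
    """Jacobian entries are texel offsets per pixel step in x and y."""
    fx = math.hypot(dudx, dvdx)      # texel footprint along screen x
    fy = math.hypot(dudy, dvdy)      # texel footprint along screen y
    rho = max(fx, fy)                # isotropic approximation
    return max(0.0, math.log2(rho))  # level 0 when magnifying
```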
References

- E. C. Anderson. Monte Carlo methods and importance sampling. Lecture notes for Stat 578C, UC Berkeley, 1999.
- T. Duff. Polygon scan conversion by exact convolution. In Jacques André and Roger D. Hersch, editors, Raster Imaging and Digital Typography, pages 154–168. Cambridge University Press, 1989.
- J. Manson and S. Schaefer. Analytic rasterization of curves with polynomial filters. Computer Graphics Forum (Proceedings of Eurographics), 32(2pt4):499–507, 2013.
- D. Nehab and H. Hoppe. Random-access rendering of general vector graphics. ACM Transactions on Graphics (Proceedings of ACM SIGGRAPH Asia 2008), 27(5):135, 2008.
- D. Nehab and H. Hoppe. A fresh look at generalized sampling. Foundations and Trends in Computer Graphics and Vision, 8(1):1–84, 2014.