Machine Learning Basics
I2DL: Prof. Niessner, Prof. Leal-Taixé 1
Machine Learning Task
Challenges in image classification:
– Pose
– Appearance
– Illumination
– Occlusions
– Background clutter
Representation
Task: image classification. Experience: data.
Supervised learning: learn from labeled data (e.g., images labeled DOG or CAT).
Unsupervised learning: discover the structure of the data without labels (clustering, PCA, etc.).
Experience: data, split into training data and test data. Underlying assumption: train and test data come from the same distribution.
Task: image classification. Experience: data. Performance measure: accuracy.
Reinforcement learning: agents interact with an environment and learn from a reward signal.
Nearest neighbor (NN) classifier: assign the label of the closest training sample by some distance measure (e.g., dog).
k-NN classifier: assign the majority label among the k nearest training samples (e.g., cat).
Source: https://commons.wikimedia.org/wiki/File:Data3classes.png
How does the NN classifier perform on training data? Which classifier is more likely to perform best on test data?
(Figure: the data; NN classifier decision regions; 5-NN classifier decision regions.)
L1 distance: $\lVert \mathbf{x} - \mathbf{c} \rVert_1$; L2 distance: $\lVert \mathbf{x} - \mathbf{c} \rVert_2$ (between a test sample $\mathbf{x}$ and a training sample $\mathbf{c}$).
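The NN and k-NN rules above can be sketched in a few lines of plain Python; this is an illustrative toy implementation (the function name, distance parameterization, and data are my own, not from the lecture):

```python
from collections import Counter

def knn_predict(train_X, train_y, x, k=1, p=2):
    """Classify x by majority vote among its k nearest training samples.

    p=1 gives the L1 (Manhattan) distance, p=2 the L2 (Euclidean) distance.
    """
    dists = sorted(
        (sum(abs(a - b) ** p for a, b in zip(xi, x)) ** (1 / p), yi)
        for xi, yi in zip(train_X, train_y)
    )
    top_k = [label for _, label in dists[:k]]
    return Counter(top_k).most_common(1)[0][0]

# Toy data: two clusters
train_X = [(0.0, 0.0), (0.1, 0.2), (1.0, 1.0), (0.9, 1.1), (1.1, 0.9)]
train_y = ["dog", "dog", "cat", "cat", "cat"]

print(knn_predict(train_X, train_y, (0.2, 0.1), k=1))  # nearest neighbor vote
print(knn_predict(train_X, train_y, (0.6, 0.6), k=3))  # majority of 3 neighbors
```

With k=1 the prediction is exactly the label of the single closest sample; larger k smooths the decision boundary, as the slides' NN vs. 5-NN figure illustrates.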
Finding your hyperparameters: split the data into train (60%), validation (20%), and test (20%). Other splits are also possible (e.g., 80%/10%/10%). The test set is only used once!
Cross-validation: split the training data into N folds; in each run (Run 1, Run 2, …, Run N) a different fold serves as validation and the rest as training.
Why do cross-validation? Why not just train and test?
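The N-fold splitting scheme can be sketched as a small index generator (a minimal version under my own naming; real pipelines usually shuffle the indices first):

```python
def k_fold_splits(n_samples, n_folds):
    """Yield (train_indices, val_indices) for each of the n_folds runs.

    Each run holds out one contiguous fold for validation and trains on
    the rest; the last fold absorbs any remainder.
    """
    fold_size = n_samples // n_folds
    indices = list(range(n_samples))
    for run in range(n_folds):
        start = run * fold_size
        stop = (run + 1) * fold_size if run < n_folds - 1 else n_samples
        val_idx = indices[start:stop]
        train_idx = indices[:start] + indices[stop:]
        yield train_idx, val_idx

for train_idx, val_idx in k_fold_splits(10, 5):
    print(val_idx)  # [0, 1], then [2, 3], ..., then [8, 9]
```

Every sample is used for validation exactly once across the N runs, which is why cross-validation gives a more reliable hyperparameter estimate than a single train/validation split.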
What are the pros and cons of using linear decision boundaries?
Inputs $\mathbf{x}$; output $y$.
Training: a learner takes data points $\{\mathbf{x}_{1:n}, y_{1:n}\}$, i.e., inputs (e.g., images, measurements) and labels (e.g., cat/dog), and produces model parameters $\boldsymbol{\theta}$; these can be the parameters of a neural network.
Testing: a predictor takes a new input $\mathbf{x}_{n+1}$ together with the learned parameters $\boldsymbol{\theta}$ and outputs the estimation $\hat{y}_{n+1}$.
Linear prediction from input data (features) $x_{jk}$, with weights (i.e., model parameters) $\theta_k$ and input dimension $d$:

$$\hat{y}_j = \sum_{k=1}^{d} x_{jk}\,\theta_k$$
Adding a bias term $\theta_0$:

$$\hat{y}_j = \theta_0 + \sum_{k=1}^{d} x_{jk}\,\theta_k = \theta_0 + x_{j1}\theta_1 + x_{j2}\theta_2 + \dots + x_{jd}\theta_d$$
Example: predicting the temperature of a building from four inputs, $x_1$ = outside temperature, $x_2$ = number of people, $x_3$ = sun exposure, $x_4$ = level of humidity, with weights $\theta_1, \dots, \theta_4$ and bias $\theta_0$.
In matrix notation:

$$\begin{pmatrix}\hat{y}_1\\\hat{y}_2\\\vdots\\\hat{y}_n\end{pmatrix} = \theta_0 + \begin{pmatrix}x_{11} & \cdots & x_{1d}\\ x_{21} & \cdots & x_{2d}\\ \vdots & \ddots & \vdots\\ x_{n1} & \cdots & x_{nd}\end{pmatrix}\cdot\begin{pmatrix}\theta_1\\\theta_2\\\vdots\\\theta_d\end{pmatrix}$$

Prepending a column of ones absorbs the bias into the parameter vector:

$$\begin{pmatrix}\hat{y}_1\\\hat{y}_2\\\vdots\\\hat{y}_n\end{pmatrix} = \begin{pmatrix}1 & x_{11} & \cdots & x_{1d}\\ 1 & x_{21} & \cdots & x_{2d}\\ \vdots & \vdots & \ddots & \vdots\\ 1 & x_{n1} & \cdots & x_{nd}\end{pmatrix}\begin{pmatrix}\theta_0\\\theta_1\\\vdots\\\theta_d\end{pmatrix} \;\Rightarrow\; \hat{\mathbf{y}} = \mathbf{X}\boldsymbol{\theta}$$

Input features $\mathbf{X}$ (one sample has $d$ features), model parameters $\boldsymbol{\theta}$ ($d$ weights and 1 bias), prediction $\hat{\mathbf{y}}$.
Worked example (temperature model), plugging concrete numbers into $\hat{\mathbf{y}} = \mathbf{X}\boldsymbol{\theta}$:

$$\begin{pmatrix}\hat{y}_1\\\hat{y}_2\end{pmatrix} = \begin{pmatrix}1 & 25 & 50 & 2\\ 1 & -10 & 50 & 10\end{pmatrix}\cdot\begin{pmatrix}0.2\\0.64\\1\\0.14\end{pmatrix}$$
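The prediction $\hat{\mathbf{y}} = \mathbf{X}\boldsymbol{\theta}$ is just one dot product per sample; a plain-Python sketch (the feature values and their column meanings are illustrative, not necessarily the lecture's exact numbers):

```python
def predict(X, theta):
    """Linear model y_hat = X . theta, each row of X starting with a 1 for the bias."""
    return [sum(x_k * t_k for x_k, t_k in zip(row, theta)) for row in X]

# Illustrative samples: columns = [bias, outside temperature, humidity, people]
X = [[1, 25, 50, 2],
     [1, -10, 50, 10]]
theta = [0.2, 0.64, 1, 0.14]
print(predict(X, theta))  # two predicted temperatures
```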
How do we find the parameters of the model?
Data points $\mathbf{X}$, model parameters $\boldsymbol{\theta}$, estimation $\hat{y}$, labels (ground truth) $y$.
Loss function: measures how good my estimation is (how good my model is) and tells the optimization method how to make it better.
Optimization: changes the model in order to improve the loss function (i.e., to improve my estimation).
Prediction: Temperature
Minimizing this loss, also called the objective function, energy, or cost function:

$$J(\boldsymbol{\theta}) = \frac{1}{n}\sum_{j=1}^{n}\left(\hat{y}_j - y_j\right)^2$$
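The cost above computed directly on a pair of toy vectors (function name is my own):

```python
def mse_cost(y_hat, y):
    """J = (1/n) * sum_j (y_hat_j - y_j)^2 over n samples."""
    return sum((p - t) ** 2 for p, t in zip(y_hat, y)) / len(y)

print(mse_cost([2.0, 3.0], [1.0, 5.0]))  # ((2-1)^2 + (3-5)^2) / 2 = 2.5
```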
Linear least squares: fit a linear model to the data. This is a convex problem, so it has a minimum that is unique:

$$\min_{\boldsymbol{\theta}} J(\boldsymbol{\theta}) = \frac{1}{n}\sum_{j=1}^{n}\left(\hat{y}_j - y_j\right)^2$$
The estimation comes from the linear model ($n$ training samples):

$$J(\boldsymbol{\theta}) = \frac{1}{n}\sum_{j=1}^{n}\left(\mathbf{x}_j\boldsymbol{\theta} - y_j\right)^2$$
$$\min_{\boldsymbol{\theta}} J(\boldsymbol{\theta}) = \frac{1}{n}\sum_{j=1}^{n}\left(\hat{y}_j - y_j\right)^2 = \frac{1}{n}\sum_{j=1}^{n}\left(\mathbf{x}_j\boldsymbol{\theta} - y_j\right)^2$$

In matrix notation ($n$ training samples, each input vector of size $d$, and $n$ labels):

$$\min_{\boldsymbol{\theta}} J(\boldsymbol{\theta}) = (\mathbf{X}\boldsymbol{\theta} - \mathbf{y})^T(\mathbf{X}\boldsymbol{\theta} - \mathbf{y})$$
More on matrix notation in the next exercise session.
Convex problem: the optimum is where the derivative vanishes:

$$\frac{\partial J(\boldsymbol{\theta})}{\partial \boldsymbol{\theta}} = 0$$
True output: temperature of the building. Inputs: outside temperature, number of people, …
We have found an analytical solution to a convex problem
Details in the exercise session!
$$\frac{\partial J(\boldsymbol{\theta})}{\partial \boldsymbol{\theta}} = 2\mathbf{X}^T\mathbf{X}\boldsymbol{\theta} - 2\mathbf{X}^T\mathbf{y} = 0 \quad\Rightarrow\quad \boldsymbol{\theta} = (\mathbf{X}^T\mathbf{X})^{-1}\mathbf{X}^T\mathbf{y}$$
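The closed-form solution can be checked on a tiny problem. This sketch solves the normal equations $\mathbf{X}^T\mathbf{X}\boldsymbol{\theta} = \mathbf{X}^T\mathbf{y}$ by Gaussian elimination rather than forming the inverse explicitly (all function names and data are illustrative):

```python
def transpose(M):
    return [list(col) for col in zip(*M)]

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b_i] for row, b_i in zip(A, b)]  # augmented matrix
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(M[r][i]))  # pivot row
        M[i], M[p] = M[p], M[i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            M[r] = [m_r - f * m_i for m_r, m_i in zip(M[r], M[i])]
    x = [0.0] * n
    for i in reversed(range(n)):  # back substitution
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def least_squares(X, y):
    """theta = (X^T X)^{-1} X^T y, via the normal equations."""
    Xt = transpose(X)
    XtX = matmul(Xt, X)
    Xty = [sum(r * v for r, v in zip(row, y)) for row in Xt]
    return solve(XtX, Xty)

# Noise-free data generated by y = 1 + 2*x; first column of ones = bias
X = [[1, 0], [1, 1], [1, 2], [1, 3]]
y = [1, 3, 5, 7]
print(least_squares(X, y))  # recovers theta = (1, 2) up to rounding
```

On noise-free data the estimate recovers the generating parameters exactly (up to floating-point rounding), which is a quick sanity check for the derivation.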
Parametric family of distributions: the observations come from the true underlying distribution $p_{data}(\mathbf{y} \mid \mathbf{X})$; the model distribution $p_{model}(\mathbf{y} \mid \mathbf{X}, \boldsymbol{\theta})$ is controlled by the parameter(s) $\boldsymbol{\theta}$.
Maximum likelihood estimation fits the model given observations by finding the parameter values that maximize the likelihood of making the observations:

$$\boldsymbol{\theta}^{ML} = \arg\max_{\boldsymbol{\theta}} \; p_{model}(\mathbf{y} \mid \mathbf{X}, \boldsymbol{\theta})$$
Assuming the samples are independent and generated by the same probability distribution (the "i.i.d." assumption):

$$p_{model}(\mathbf{y} \mid \mathbf{X}, \boldsymbol{\theta}) = \prod_{j=1}^{n} p_{model}(y_j \mid \mathbf{x}_j, \boldsymbol{\theta})$$
$$\boldsymbol{\theta}^{ML} = \arg\max_{\boldsymbol{\theta}} \prod_{j=1}^{n} p_{model}(y_j \mid \mathbf{x}_j, \boldsymbol{\theta}) = \arg\max_{\boldsymbol{\theta}} \sum_{j=1}^{n} \log p_{model}(y_j \mid \mathbf{x}_j, \boldsymbol{\theta})$$

using the logarithmic property $\log(ab) = \log a + \log b$; the logarithm is monotonic, so it does not change the maximizer.
What shape does our probability distribution have?
Assuming a Gaussian (normal) distribution centered at the model output:

$$y_j = \mathcal{N}(\mathbf{x}_j\boldsymbol{\theta}, \sigma^2) = \mathbf{x}_j\boldsymbol{\theta} + \mathcal{N}(0, \sigma^2)$$

Gaussian with mean $\mu$:

$$p(y_j) = \frac{1}{\sqrt{2\pi\sigma^2}}\, e^{-\frac{1}{2\sigma^2}(y_j - \mu)^2}, \qquad y_j \sim \mathcal{N}(\mu, \sigma^2)$$
Plugging in the mean $\mu = \mathbf{x}_j\boldsymbol{\theta}$:

$$p(y_j \mid \mathbf{x}_j, \boldsymbol{\theta}) = (2\pi\sigma^2)^{-1/2}\, e^{-\frac{1}{2\sigma^2}(y_j - \mathbf{x}_j\boldsymbol{\theta})^2}$$
Substituting this into the original problem:

$$\boldsymbol{\theta}^{ML} = \arg\max_{\boldsymbol{\theta}} \sum_{j=1}^{n} \log p_{model}(y_j \mid \mathbf{x}_j, \boldsymbol{\theta})$$
$$\sum_{j=1}^{n} \log\left[(2\pi\sigma^2)^{-\frac{1}{2}}\, e^{-\frac{1}{2\sigma^2}(y_j - \mathbf{x}_j\boldsymbol{\theta})^2}\right] = \sum_{j=1}^{n} -\frac{1}{2}\log(2\pi\sigma^2) + \sum_{j=1}^{n} -\frac{1}{2\sigma^2}\left(y_j - \mathbf{x}_j\boldsymbol{\theta}\right)^2$$

Canceling $\log$ and $e$, and switching to matrix notation:

$$-\frac{n}{2}\log(2\pi\sigma^2) - \frac{1}{2\sigma^2}\,(\mathbf{y} - \mathbf{X}\boldsymbol{\theta})^T(\mathbf{y} - \mathbf{X}\boldsymbol{\theta})$$
The first term does not depend on $\boldsymbol{\theta}$, so maximizing the log-likelihood is equivalent to minimizing the least-squares cost. How can we find the estimate of theta? Details in the exercise session!

$$\frac{\partial J(\boldsymbol{\theta})}{\partial \boldsymbol{\theta}} = 0 \quad\Rightarrow\quad \boldsymbol{\theta} = (\mathbf{X}^T\mathbf{X})^{-1}\mathbf{X}^T\mathbf{y}$$

Maximum likelihood estimation with a Gaussian noise model recovers the least squares estimate (given the assumptions).
Regression: the output is continuous (e.g., the temperature of a room). Classification:
– Binary classification: output is either 0 or 1
– Multi-class classification: set of N classes
CAT classifier
The sigmoid function squashes the linear model output into $(0, 1)$, so it can be interpreted as a probability:

$$\sigma(x) = \frac{1}{1 + e^{-x}}, \qquad \sigma(\mathbf{x}_j\boldsymbol{\theta}) = p(y_j = 1 \mid \mathbf{x}_j, \boldsymbol{\theta})$$

(Inputs $x_0, x_1, x_2$ with weights $\theta_0, \theta_1, \theta_2$ and a constant input 1 for the bias.)
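The sigmoid itself, as a quick numerical check:

```python
import math

def sigmoid(x):
    """Squashes any real input into (0, 1), so the output reads as a probability."""
    return 1.0 / (1.0 + math.exp(-x))

print(sigmoid(0.0))   # 0.5: maximal uncertainty
print(sigmoid(4.0))   # close to 1
print(sigmoid(-4.0))  # close to 0
```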
The prediction of our model:

$$\hat{y}_j = \sigma(\mathbf{x}_j\boldsymbol{\theta}), \qquad \hat{\mathbf{y}} = p(\mathbf{y} = \mathbf{1} \mid \mathbf{X}, \boldsymbol{\theta}) = \prod_{j=1}^{n} p(y_j = 1 \mid \mathbf{x}_j, \boldsymbol{\theta})$$
Model for coins, the Bernoulli trial:

$$p(y \mid \rho) = \rho^{y}\,(1-\rho)^{1-y} = \begin{cases} \rho, & \text{if } y = 1 \\ 1-\rho, & \text{if } y = 0 \end{cases}$$
Applying the Bernoulli model with success probability $\hat{y}_j$ (the true labels $y_j$ are 0 or 1, while the sigmoid prediction $\hat{y}_j$ is continuous):

$$p(\mathbf{y} \mid \mathbf{X}, \boldsymbol{\theta}) = \prod_{j=1}^{n} \hat{y}_j^{\,y_j}\,(1-\hat{y}_j)^{(1-y_j)}$$
Maximum likelihood again:

$$\boldsymbol{\theta}^{ML} = \arg\max_{\boldsymbol{\theta}} \; \log p(\mathbf{y} \mid \mathbf{X}, \boldsymbol{\theta}) = \arg\max_{\boldsymbol{\theta}} \; \log \prod_{j=1}^{n} \hat{y}_j^{\,y_j}\,(1-\hat{y}_j)^{(1-y_j)}$$
Taking the logarithm of the product:

$$\log p(\mathbf{y} \mid \mathbf{X}, \boldsymbol{\theta}) = \sum_{j=1}^{n} \log\left[\hat{y}_j^{\,y_j}\,(1-\hat{y}_j)^{(1-y_j)}\right] = \sum_{j=1}^{n} y_j \log \hat{y}_j + (1-y_j)\log(1-\hat{y}_j)$$
Per-sample loss:

$$\mathcal{L}(\hat{y}_j, y_j) = y_j \log \hat{y}_j + (1-y_j)\log(1-\hat{y}_j); \qquad y_j = 1: \; \mathcal{L}(\hat{y}_j, 1) = \log \hat{y}_j$$
Maximize! For $y_j = 1$ we want $\log \hat{y}_j$ to be large; since the logarithm is a monotonically increasing function, we also want a large $\hat{y}_j$ (1 is the largest value our model's estimate can take!).
For $y_j = 0$, $\mathcal{L}(\hat{y}_j, 0) = \log(1 - \hat{y}_j)$: we want $\log(1 - \hat{y}_j)$ to be large, so we want $\hat{y}_j$ to be small (0 is the smallest value our model's estimate can take!).
Referred to as the binary cross-entropy (BCE) loss; its multi-class generalization, the cross-entropy loss (also called softmax loss), appears later in the course:

$$\mathcal{L}(\hat{y}_j, y_j) = y_j \log \hat{y}_j + (1-y_j)\log(1-\hat{y}_j)$$
Turning the maximization into a minimization of the cost:

$$C(\boldsymbol{\theta}) = -\frac{1}{n}\sum_{j=1}^{n} \mathcal{L}(\hat{y}_j, y_j) = -\frac{1}{n}\sum_{j=1}^{n} \left[y_j \log \hat{y}_j + (1-y_j)\log(1-\hat{y}_j)\right], \qquad \hat{y}_j = \sigma(\mathbf{x}_j\boldsymbol{\theta})$$
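The binary cross-entropy cost $C(\boldsymbol{\theta})$ end to end, sigmoid included (toy data and all names are my own, for illustration):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def bce_cost(X, y, theta):
    """C(theta) = -(1/n) * sum_j [y_j*log(y_hat_j) + (1-y_j)*log(1-y_hat_j)],
    with y_hat_j = sigmoid(x_j . theta)."""
    total = 0.0
    for x_j, y_j in zip(X, y):
        y_hat = sigmoid(sum(a * b for a, b in zip(x_j, theta)))
        total += y_j * math.log(y_hat) + (1 - y_j) * math.log(1 - y_hat)
    return -total / len(X)

X = [[1, -2.0], [1, -1.0], [1, 1.0], [1, 2.0]]  # column of ones = bias
y = [0, 0, 1, 1]
print(bce_cost(X, y, [0.0, 0.0]))  # uninformed model: every y_hat = 0.5
print(bce_cost(X, y, [0.0, 3.0]))  # a good separator: much lower cost
```

With all-zero parameters every prediction is 0.5, giving a cost of log 2 ≈ 0.693; a parameter vector that separates the two classes drives the cost toward 0.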
Gradient descent – later on!
Linear models can approximate surprisingly complex phenomena, e.g., weather:
– A linear combination of time-of-day, day-of-year, etc. is often pretty good
A harder case: coronavirus infections
– Exponential spread at the beginning
– Plateaus when a certain portion of the population is infected/immune
(Source: https://www.worldometers.info/coronavirus)
Think about good features:
– #coronavirus_infections cannot be > #total_population
– Munich housing prices seem exponential though
– Jumping towards our first Neural Networks and Computational Graphs
– https://medium.com/@zstern/k-fold-cross-validation-explained-5aeba90ebb3
– https://towardsdatascience.com/train-test-split-and-cross-validation-in-python-80b61beca4b6
– Pattern Recognition and Machine Learning