AMLD – Deep Learning in PyTorch
- 1. Introduction
François Fleuret http://fleuret.org/amld/ February 10, 2018
ÉCOLE POLYTECHNIQUE FÉDÉRALE DE LAUSANNE
François Fleuret AMLD – Deep Learning in PyTorch / 1. Introduction 2 / 57
>>> from torchvision import datasets
>>> cifar = datasets.CIFAR10('./data/cifar10/', train=True, download=True)
Files already downloaded and verified
>>> x = torch.from_numpy(cifar.train_data)[43].transpose(2, 0).transpose(1, 2)
>>> x.size()
torch.Size([3, 32, 32])
>>> x.narrow(1, 0, 4).narrow(2, 0, 12)

(0 ,.,.) =
   99  98 100 103 105 107 108 110 114 115 117 118
  100 100 102 105 107 109 110 112 115 117 119 120
  104 104 106 109 111 112 114 116 119 121 123 124
  109 109 111 113 116 117 118 120 123 124 127 128

(1 ,.,.) =
  166 165 167 169 171 172 173 175 176 178 179 181
  166 164 167 169 169 171 172 174 176 177 179 180
  169 167 170 171 171 173 174 176 178 179 182 183
  170 169 172 173 175 176 177 178 179 181 183 184

(2 ,.,.) =
  198 196 199 200 200 202 203 204 205 206 208 209
  195 194 197 197 197 199 200 201 202 203 206 207
  197 195 198 198 198 199 201 202 203 204 206 207
  197 196 199 198 198 199 200 201 203 204 207 208
[torch.ByteTensor of size 3x4x12]
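The `train_data` attribute and the double `transpose` above are version-specific; in more recent PyTorch the same height × width × channel to channel × height × width reordering is usually written with a single `permute` call. A minimal sketch on a synthetic image tensor:

```python
import torch

# A synthetic 32x32 RGB image stored height x width x channel,
# the layout in which CIFAR provides its images
img_hwc = torch.arange(32 * 32 * 3, dtype=torch.uint8).view(32, 32, 3)

# Reorder to channel x height x width in one call
img_chw = img_hwc.permute(2, 0, 1)
```

`permute` returns a view with the dimensions reordered in one step, which is easier to read than chained `transpose` calls.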
[Figure: scatter plot of systolic blood pressure (mmHg) vs. age (years)]
[Figure: the same systolic blood pressure (mmHg) vs. age (years) data, with a fitted model overlaid]
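The figures above illustrate the basic setup: given observed (age, blood pressure) pairs, fit a model to the data. A minimal sketch of an ordinary least-squares line fit in closed form (the sample values below are made up for illustration, not the data from the plot):

```python
# Hypothetical (age, systolic blood pressure) samples, for illustration only
ages = [20, 30, 40, 50, 60, 70]
pressures = [115, 120, 126, 133, 141, 148]

n = len(ages)
mean_x = sum(ages) / n
mean_y = sum(pressures) / n

# Ordinary least squares for the line y = a * x + b
a = sum((x - mean_x) * (y - mean_y) for x, y in zip(ages, pressures)) \
    / sum((x - mean_x) ** 2 for x in ages)
b = mean_y - a * mean_x

# The fitted line can then predict at unseen ages
predicted_at_45 = a * 45 + b
```

The fitted line passes through the point of means, so predicting at the mean age returns the mean pressure exactly.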
[Figure 1 from W. S. McCulloch and W. Pitts, "A Logical Calculus of the Ideas Immanent in Nervous Activity" (1943): diagrams of simple neural nets]
[Figure: Hubel and Wiesel's hierarchy model: retina → LGB → simple → complex → lower-order hypercomplex → higher-order hypercomplex → "grandmother cell", with the visual and association areas indicated]
[Fig. 1 caption: correspondence between the hierarchy model by Hubel and Wiesel, and the neural network of the neocognitron]

The input connections are shifted in parallel from cell to cell; hence, all the cells in a single cell-plane have receptive fields of the same function, but at different positions. We will use the notation U_Sl(k_l, n) to represent the output of an S-cell in the k_l-th S-plane in the l-th module, and U_Cl(k_l, n) to represent the output of a C-cell in the k_l-th C-plane in that module, where n is the two-dimensional co-ordinate representing the position of the cell's receptive field in the input layer.

Figure 2 is a schematic diagram illustrating the interconnections between layers. Each tetragon drawn with heavy lines represents an S-plane or a C-plane, and each vertical tetragon drawn with thin lines, in which S-planes or C-planes are enclosed, represents an S-layer or a C-layer. In Fig. 2, a cell of each layer receives afferent connections from the cells within the area enclosed by the ellipse in its preceding layer. To be exact, for the S-cells, the ellipses in Fig. 2 do not show the connecting area but the connectable area of the S-cells: not all the interconnections coming from the ellipses are necessarily formed, because the synaptic connections incoming to the S-cells have plasticity. For the sake of simplicity, Fig. 2 shows only some of these connections; all the cells in a cell-plane have input synapses of the same spatial distribution, as shown in Fig. 3, and only the positions of the presynaptic cells are shifted in parallel from cell to cell.
[Figure: diagram distinguishing modifiable synapses and unmodifiable synapses]

Since the cells in the network are interconnected in a cascade, as shown in Fig. 2, the deeper the layer is, the larger the receptive field of each cell of that layer becomes. The density of the cells in each cell-plane is determined so as to decrease in accordance with the increase of the size of the receptive fields; hence, the total number of cells in each cell-plane decreases with the depth of the cell-plane in the network. In the last module, the receptive field of each C-cell becomes so large as to cover the whole area of input layer U0, and each C-plane is determined so as to have only one C-cell. The S-cells and C-cells are excitatory cells; that is, all the efferent synapses from these cells are excitatory. Although it is not shown in Fig. 2, we also have […]

[Fig. 3 caption: illustration of the input interconnections to the cells within a single cell-plane]
[Fig. 2 caption: schematic diagram illustrating the interconnections between layers in the neocognitron]
[Figure: LeNet-5 architecture. Input 32x32 → convolutions → C1: feature maps 6@28x28 → subsampling → S2: f. maps 6@14x14 → convolutions → C3: f. maps 16@10x10 → subsampling → S4: f. maps 16@5x5 → full connection → C5: layer 120 → full connection → F6: layer 84 → Gaussian connections → OUTPUT 10]
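The architecture above can be sketched in a few lines of PyTorch. This is a loose modern rendering, not the original network: ReLU activations and max-pooling stand in for LeNet-5's sigmoid units, learned subsampling, sparse C3 connectivity, and Gaussian output connections:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LeNet5(nn.Module):
    """Sketch of the LeNet-5 layer structure with modern activations."""
    def __init__(self):
        super().__init__()
        self.c1 = nn.Conv2d(1, 6, kernel_size=5)    # 1@32x32 -> 6@28x28
        self.c3 = nn.Conv2d(6, 16, kernel_size=5)   # 6@14x14 -> 16@10x10
        self.c5 = nn.Linear(16 * 5 * 5, 120)        # 16@5x5 flattened -> 120
        self.f6 = nn.Linear(120, 84)                # 120 -> 84
        self.out = nn.Linear(84, 10)                # 84 -> 10 classes

    def forward(self, x):
        x = F.max_pool2d(F.relu(self.c1(x)), 2)     # S2: 6@14x14
        x = F.max_pool2d(F.relu(self.c3(x)), 2)     # S4: 16@5x5
        x = x.view(x.size(0), -1)
        x = F.relu(self.c5(x))
        x = F.relu(self.f6(x))
        return self.out(x)
```

Each spatial reduction halves the map size by pooling, so the 32x32 input reaches the classifier as a 16x5x5 block.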
[Two-column excerpt from LeCun et al., "Gradient-Based Learning Applied to Document Recognition" (1998), describing convolutional networks and the layer-by-layer structure of LeNet-5: weight sharing, sub-sampling, and the sizes and connection counts of layers C1, S2, and C3]

François Fleuret AMLD – Deep Learning in PyTorch / 1. Introduction 13 / 57
[Figure: ResNet-34 architecture. image → 7x7 conv, 64, /2 → pool, /2 → 6 × (3x3 conv, 64) → 3x3 conv, 128, /2 → 7 × (3x3 conv, 128) → 3x3 conv, 256, /2 → 11 × (3x3 conv, 256) → 3x3 conv, 512, /2 → 5 × (3x3 conv, 512) → avg pool → fc 1000]
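The repeated pairs of 3x3 convolutions in this figure are residual blocks: each pair learns a correction that is added to its own input. A minimal sketch of the basic block, assuming batch normalization and an identity shortcut (i.e. equal input and output channels, stride 1; the strided, channel-doubling variant needs a projection shortcut):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResBlock(nn.Module):
    """Basic residual block: two 3x3 convolutions plus an identity shortcut."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)

    def forward(self, x):
        y = F.relu(self.bn1(self.conv1(x)))
        y = self.bn2(self.conv2(y))
        return F.relu(x + y)  # shortcut: output = input + learned residual
```

Because `padding=1` keeps the spatial size constant, the addition `x + y` is always well defined, and stacking many such blocks leaves the tensor shape unchanged.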
[Figure: image classification accuracy (%) vs. year, 2010–2018, marking Krizhevsky et al. (2012), Graham (2015), Real et al. (2018), and human performance]
method                        top-1 err.   top-5 err.
VGG [41] (ILSVRC'14)          –            –
GoogLeNet [44] (ILSVRC'14)    –            –
VGG [41] (v5)                 24.4         7.1
PReLU-net [13]                21.59        5.71
BN-inception [16]             21.99        5.81
ResNet-34 B                   21.84        5.71
ResNet-34 C                   21.53        5.60
ResNet-50                     20.74        5.25
ResNet-101                    19.87        4.60
ResNet-152                    19.38        4.49

Table 4. Error rates (%) of single-model results on the ImageNet validation set (except † reported on the test set).

method                        top-5 err. (test)
VGG [41] (ILSVRC'14)          7.32
GoogLeNet [44] (ILSVRC'14)    6.66
VGG [41] (v5)                 6.8
PReLU-net [13]                4.94
BN-inception [16]             4.82
ResNet (ILSVRC'15)            3.57

Table 5. Error rates (%) of ensembles. The top-5 error is on the test set of ImageNet and reported by the test server.
[Figure: FLOPs per USD, 1960–2020, log scale]

                      TFlops (10^12)   Price   GFlops per $
Intel i7-6700K             0.2         $344         0.6
AMD Radeon R-7 240         0.5          $55         9.1
NVIDIA GTX 750 Ti          1.3         $105        12.3
AMD RX 480                 5.2         $239        21.6
NVIDIA GTX 1080            8.9         $699        12.7
[Figure: bytes per USD, 1980–2020, log scale]
[Figure: top-1 accuracy (%) vs. operations (G-Ops) for common architectures]

[Figure: forward time per image (ms) vs. batch size, on linear and log scales, for BN-NIN, GoogLeNet, Inception-v3, AlexNet, BN-AlexNet, VGG-16, VGG-19, ResNet-18, ResNet-34, ResNet-50, and ResNet-101]
            Language(s)            License        Main backer
PyTorch     Python                 BSD            Facebook
Caffe2      C++, Python            Apache         Facebook
TensorFlow  Python, C++            Apache         Google
MXNet       Python, C++, R, Scala  Apache         Amazon
CNTK        Python, C++            MIT            Microsoft
Torch       Lua                    BSD            Facebook
Theano      Python                 BSD
Caffe       C++                    BSD 2 clauses
import torch
from torch import nn
from torch.nn import functional as F

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(1, 32, kernel_size=5)
        self.conv2 = nn.Conv2d(32, 64, kernel_size=5)
        self.fc1 = nn.Linear(256, 200)
        self.fc2 = nn.Linear(200, 10)

    def forward(self, x):
        x = F.relu(F.max_pool2d(self.conv1(x), kernel_size=3))
        x = F.relu(F.max_pool2d(self.conv2(x), kernel_size=2))
        x = x.view(-1, 256)
        x = F.relu(self.fc1(x))
        x = self.fc2(x)
        return x

model = Net()

# Normalize the input in place
mu, std = train_input.data.mean(), train_input.data.std()
train_input.data.sub_(mu).div_(std)

criterion, bs, eta = nn.CrossEntropyLoss(), 100, 1e-1

model.cuda()
criterion.cuda()
train_input, train_target = train_input.cuda(), train_target.cuda()

for e in range(10):
    for b in range(0, nb_train_samples, bs):
        output = model(train_input.narrow(0, b, bs))
        loss = criterion(output, train_target.narrow(0, b, bs))
        model.zero_grad()
        loss.backward()
        # Gradient step (eta is the learning rate)
        for p in model.parameters():
            p.data.sub_(eta * p.grad.data)
Given training pairs (x_n, y_n), n = 1, …, N, we fit a polynomial of degree D,

    f(x; α) = α_0 + α_1 x + … + α_D x^D,

by minimizing the mean squared error

    ℒ(α) = (1/N) Σ_{n=1}^{N} ( f(x_n; α) − y_n )².
def fit_polynomial(D, x, y):
    N = x.size(0)
    X = Tensor(N, D + 1)
    Y = Tensor(N, 1)
    # Exercise: avoid the n loop
    for n in range(N):
        for d in range(D + 1):
            X[n, d] = x[n]**d
        Y[n, 0] = y[n]
    # LAPACK's GEneralized Least-Square
    alpha, _ = torch.gels(Y, X)
    return alpha
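The exercise (avoiding the n loop) can be solved by building the Vandermonde matrix with broadcasting. Note that `torch.gels` was later removed from PyTorch; a sketch of the same routine against the modern `torch.linalg.lstsq` API:

```python
import torch

def fit_polynomial(D, x, y):
    # Vandermonde design matrix: X[n, d] = x[n] ** d, no explicit loop
    X = x.view(-1, 1) ** torch.arange(D + 1, dtype=x.dtype)
    # Least-squares solve of X alpha = y
    return torch.linalg.lstsq(X, y.view(-1, 1)).solution

# Recover the coefficients of f(x) = 1 + 2x + 3x^2 from exact samples
x = torch.linspace(-1, 1, 50)
y = 1 + 2 * x + 3 * x**2
alpha = fit_polynomial(2, x, y)
```

With noiseless samples the fit recovers the generating coefficients up to floating-point error.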