
Lecture 19: Variational Auto-encoders — Scribes: Ankur Bambhanoliya, Donald Hamnett - PowerPoint PPT Presentation



  1. Lecture 19: Variational Auto-encoders. Scribes: Ankur Bambhanoliya, Donald Hamnett

  2. Motivation: Inferring Latent Variables from Images. Dataset: MNIST, images of handwritten digits. Goal: infer two variables: 1. Digit labels y ∈ {0, …, 9}; 2. Style variables z ∈ ℝ^D.

  3. Deep Generative Models. Assume all digits are equally frequent. Idea 1: Use a neural network to define a generative model:
       y_n ~ Discrete(0.1, …, 0.1)                (Digit; can supervise)
       z_n ~ Normal(0, I)                          (Style)
       x_n ~ Bernoulli(μ_θ(y_n, z_n))              (Image; μ_θ is a neural network)
     Questions: 1. How do we train this? 2. How do we do inference (no supervision)?
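The generative story on this slide can be sketched in plain NumPy. The layer sizes and the randomly initialized weights below are illustrative stand-ins, not the trained network from the lecture:

```python
import numpy as np

rng = np.random.default_rng(0)
D, H, P = 2, 16, 784  # style dim, hidden units, pixels (28x28 MNIST) -- illustrative sizes

# Hypothetical decoder weights: random stand-ins for a trained network mu_theta.
W1 = rng.normal(0, 0.1, (H, 10 + D)); b1 = np.zeros(H)
W2 = rng.normal(0, 0.1, (P, H));      b2 = np.zeros(P)

def mu_theta(y_onehot, z):
    """Neural network mapping (digit, style) to Bernoulli pixel means."""
    h = np.tanh(W1 @ np.concatenate([y_onehot, z]) + b1)
    return 1.0 / (1.0 + np.exp(-(W2 @ h + b2)))  # sigmoid keeps means in (0, 1)

# Ancestral sampling, following the slide's generative story.
y = rng.choice(10, p=np.full(10, 0.1))   # y_n ~ Discrete(0.1, ..., 0.1)
z = rng.normal(0.0, 1.0, size=D)         # z_n ~ Normal(0, I)
probs = mu_theta(np.eye(10)[y], z)
x = rng.binomial(1, probs)               # x_n ~ Bernoulli(mu_theta(y_n, z_n))
```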

  4. Training Deep Generative Models. Idea 2: Use stochastic gradient ascent on a lower bound to approximate maximum-likelihood estimation. Generative model with network weights θ; data drawn from an unknown true distribution p_data(x). Minimizing KL(p_data(x) || p(x | θ)) is equivalent to maximizing
       E_{p_data}[ log p(x | θ) ] ≈ (1/N) Σ_{n=1}^N log p(x_n | θ),   x_n ~ p_data.

  5. Training Deep Generative Models (continued). Introduce a variational distribution q(y, z | x; φ), a proposal that depends on x, and maximize the lower bound
       L(θ, φ) = E_{p_data}[ E_{q(y,z|x;φ)}[ log p(x, y, z | θ) − log q(y, z | x; φ) ] ]
               = E_{p_data}[ log p(x | θ) − KL( q(y, z | x; φ) || p(y, z | x, θ) ) ],
     with equality L(θ, φ) = E_{p_data}[ log p(x | θ) ] when q(y, z | x; φ) = p(y, z | x, θ). Gradients can be estimated from samples:
       ∇_θ L(θ, φ) = E_{p_data}[ E_{q(y,z|x;φ)}[ ∇_θ log p(x, y, z | θ) ] ].
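A minimal sanity check of the bound, using a toy linear-Gaussian model (a stand-in, not the MNIST model) in which log p(x) is tractable, so the Monte Carlo ELBO can be compared against the exact marginal likelihood:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy model where log p(x) is tractable: z ~ N(0,1), x|z ~ N(z,1)  =>  x ~ N(0,2).
x = 1.0

def log_norm(v, mean, var):
    return -0.5 * (np.log(2 * np.pi * var) + (v - mean) ** 2 / var)

log_px = log_norm(x, 0.0, 2.0)  # exact marginal likelihood

# A deliberately mismatched variational proposal q(z|x) = N(0, 1);
# the true posterior here is N(x/2, 1/2).
m, s = 0.0, 1.0
z = rng.normal(m, s, size=100_000)                        # z ~ q
log_joint = log_norm(z, 0.0, 1.0) + log_norm(x, z, 1.0)   # log p(x, z)
log_q = log_norm(z, m, s ** 2)
elbo = np.mean(log_joint - log_q)  # Monte Carlo estimate of L

# The gap log p(x) - L is exactly KL(q(z|x) || p(z|x)) = 0.403... for this q.
```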

  6. Training Deep Generative Models. Idea 3: Learn to perform variational inference. Combining 1 + 2: perform gradient ascent on both θ and φ:
       1. Maximum likelihood (generative model): θ* = argmax_θ log p_θ(x), approximated via L(θ, φ);
       2. Variational inference (approximate posterior): φ* = argmin_φ KL( q(y, z | x; φ) || p(y, z | x, θ) ).

  7. Training Deep Generative Models. Idea 4: Use a neural network to define the variational distribution, a.k.a. the inference model:
       y_n ~ Discrete( π_φ(x_n) )                       (Digit)
       z_n ~ Normal( μ_φ(x_n, y_n), σ_φ(x_n, y_n) )     (Style)
       q(y_n, z_n | x_n; φ)                             (Inference model; π_φ, μ_φ, σ_φ are neural networks)

  8. Variational Auto-encoders. Objective: Learn a deep generative model and a corresponding inference model by optimizing
       L(θ, φ) = E_{q(y,z|x;φ)}[ log ( p(x, y, z | θ) / q(y, z | x; φ) ) ].

  9. Intermezzo: Auto-encoders. x_n → z_n → x_n (all variables continuous).

  10. Encoder: Mapping from image x to latent code z. Multi-layer perceptron (e.g. 784 → 256 → 2 units):
        h_{n,j} = σ( Σ_i w^h_{ji} x_{n,i} + b^h_j )
        z_{n,j} = Σ_i w^z_{ji} h_{n,i} + b^z_j     (linear activation for the output layer)

  11. Decoder: Mapping from latent code z to image x. Multi-layer perceptron:
        h_{n,i} = σ( Σ_j w^h_{ij} z_{n,j} + b^h_i )
        x̂_{n,i} = σ( Σ_j w^x_{ij} h_{n,j} + b^x_i )
      Loss: binary cross-entropy
        L(w, b) = − Σ_n Σ_i [ x_{n,i} log x̂_{n,i} + (1 − x_{n,i}) log(1 − x̂_{n,i}) ],
      minimized with SGD.
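The encoder/decoder pair and the cross-entropy loss can be sketched as follows. The weights are random stand-ins for parameters that SGD would learn, and the 784-pixel input assumes 28×28 MNIST images:

```python
import numpy as np

rng = np.random.default_rng(2)
P, H, D = 784, 256, 2  # pixel, hidden, and code dimensions

sigmoid = lambda a: 1.0 / (1.0 + np.exp(-a))

# Randomly initialized weights stand in for parameters learned by SGD.
We, be = rng.normal(0, 0.01, (H, P)), np.zeros(H)
Wz, bz = rng.normal(0, 0.01, (D, H)), np.zeros(D)
Wd, bd = rng.normal(0, 0.01, (H, D)), np.zeros(H)
Wx, bx = rng.normal(0, 0.01, (P, H)), np.zeros(P)

def encode(x):
    h = sigmoid(We @ x + be)
    return Wz @ h + bz            # linear output layer for the code z

def decode(z):
    h = sigmoid(Wd @ z + bd)
    return sigmoid(Wx @ h + bx)   # pixel means in (0, 1)

def bce(x, x_hat, eps=1e-12):
    """Binary cross-entropy loss from the slide."""
    return -np.sum(x * np.log(x_hat + eps) + (1 - x) * np.log(1 - x_hat + eps))

x = rng.binomial(1, 0.5, size=P).astype(float)   # a fake binarized image
loss = bce(x, decode(encode(x)))
```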

  12. Auto-encoder: Learned Latent Codes

  13. Variational Auto-encoder: Treat the latent code as a random variable.

  14. Variational Auto-encoder: Treat the latent code as a random variable.
      Inference model (Encoder) q(z_n | x_n; φ):
        h_n = σ( W^h x_n + b^h )
        μ_n = W^μ h_n + b^μ
        σ_n = exp( W^σ h_n + b^σ )
        z_n ~ Normal( μ_n, σ_n² )
      Generative model (Decoder) p(x_n, z_n; θ):
        z_n ~ Normal(0, I)
        x_n ~ Bernoulli( μ_θ(z_n) )
      Objective: L(θ, φ) = E_{q(z|x;φ)}[ log p(x_n, z_n | θ) − log q(z_n | x_n; φ) ]
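A sketch of the inference model's forward pass, following the slide's equations; the weights below are illustrative random values in place of trained parameters:

```python
import numpy as np

rng = np.random.default_rng(3)
P, H, D = 784, 256, 2
sigmoid = lambda a: 1.0 / (1.0 + np.exp(-a))

# Encoder weights (random stand-ins for trained parameters phi).
Wh, bh = rng.normal(0, 0.01, (H, P)), np.zeros(H)
Wm, bm = rng.normal(0, 0.01, (D, H)), np.zeros(D)
Ws, bs = rng.normal(0, 0.01, (D, H)), np.zeros(D)

def encoder(x):
    """q(z|x): returns mean and standard deviation of a diagonal Gaussian."""
    h = sigmoid(Wh @ x + bh)
    mu = Wm @ h + bm
    sigma = np.exp(Ws @ h + bs)   # exp guarantees sigma > 0, as on the slide
    return mu, sigma

x = rng.binomial(1, 0.5, size=P).astype(float)
mu, sigma = encoder(x)
z = rng.normal(mu, sigma)         # z_n ~ Normal(mu_n, sigma_n^2)
```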

  15. Variational Auto-encoder: Learned Latent Codes

  16. Variational Auto-encoder vs Auto-encoder

  17. Training: The Reparameterization Trick. Compute the gradient on a batch of images x^b ~ Uniform(x_1, …, x_N), b = 1, …, B:
        ∇_φ L(θ, φ) ≈ (1/B) Σ_b ∇_φ E_{q(z | x^b; φ)}[ log p(x^b, z | θ) − log q(z | x^b; φ) ]
      REINFORCE-style estimator (analogue of BBVI):
        ∇_φ L ≈ (1/B)(1/S) Σ_{b,s} ∇_φ log q(z^{b,s} | x^b; φ) [ log p(x^b, z^{b,s} | θ) − log q(z^{b,s} | x^b; φ) ],   z^{b,s} ~ q(z | x^b; φ)
      Problem: this estimator will have very high variance.
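The high variance of the score-function estimator is easy to see on a toy problem (a stand-in, not the VAE objective): take q(z; φ) = Normal(φ, 1) and f(z) = −z², so the exact gradient of E_q[f(z)] with respect to φ is −2φ:

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy objective: E_{q(z;phi)}[f(z)] with q = Normal(phi, 1) and f(z) = -z**2.
# The exact gradient is d/dphi E[-z^2] = d/dphi -(phi^2 + 1) = -2*phi.
phi, S = 1.5, 200_000
z = rng.normal(phi, 1.0, size=S)

# Score-function (REINFORCE / BBVI-style) estimator:
#   f(z) * d/dphi log q(z; phi) = f(z) * (z - phi)   for unit-variance Gaussians.
score_grads = (-z ** 2) * (z - phi)
estimate = score_grads.mean()

# The mean matches -2*phi, but the per-sample spread is large, which is
# exactly the variance problem the slide points out.
```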

  18. Training: The Reparameterization Trick. Idea: sample z using a parameterized, deterministic transformation of noise:
        ε^{b,s} ~ Normal(0, I)
        z^{b,s} = z(ε^{b,s}; x^b, φ) = μ_φ(x^b) + σ_φ(x^b) ⊙ ε^{b,s}
      (μ_φ, σ_φ are neural networks.) Result: reparameterized estimator
        ∇_{θ,φ} L(θ, φ) ≈ (1/B)(1/S) Σ_{b,s} ∇_{θ,φ} [ log p(x^b, z(ε^{b,s}; x^b, φ) | θ) − log q( z(ε^{b,s}; x^b, φ) | x^b; φ ) ]
      In practice S = 1 is often enough.
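A toy illustration of the trick (a stand-in problem, not the VAE objective): with q(z; φ) = Normal(φ, 1) and f(z) = −z², the exact gradient of E_q[f(z)] is −2φ, and the pathwise (reparameterized) estimator recovers it with low per-sample variance even for a small number of samples:

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy problem: q(z; phi) = Normal(phi, 1), f(z) = -z**2, exact gradient -2*phi.
phi, S = 1.5, 1_000
eps = rng.normal(0.0, 1.0, size=S)
z = phi + eps                  # reparameterization: z = z(eps; phi) = phi + 1 * eps

# Pathwise gradient: d f(z)/d phi = f'(z) * dz/dphi = -2*z * 1.
reparam_grads = -2.0 * z
estimate = reparam_grads.mean()

# Per-sample standard deviation is about 2 here, versus roughly 7 for the
# score-function estimator on the same problem -- hence S = 1 often suffices.
```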

  19. Variational Auto-encoders (recap). Objective: Learn a deep generative model and a corresponding inference model by optimizing
        L(θ, φ) = E_{q(y,z|x;φ)}[ log ( p(x, y, z | θ) / q(y, z | x; φ) ) ].

  20. Two model variants:
        q_φ(z_n | x_n), p_θ(x_n | z_n): a continuous z_n encodes both style and digit.
        q_φ(y_n, z_n | x_n), p_θ(x_n | y_n, z_n): a discrete y_n encodes the digit, a continuous z_n encodes the style.

  21. Disentangled Representations: Learn interpretable features. (Figure: reconstructed images x_n as a single latent dimension z_{n,i} is varied over {−3, 0, 3} with the other dimensions fixed.)
