
Dense Random Fields - Philipp Krähenbühl, Stanford University - PowerPoint PPT Presentation

Dense Random Fields. Philipp Krähenbühl, Stanford University.

Zoo of computer vision problems: [image collage; visible labels include bottle, tiger, kitten, wood, car, skin, paper, pumpkin, cloth, "playing tennis", and the caption "Emma in her hat looking super cute"]


  1-5. Filtering
[Figure: 4x6 grid of variables v1 ... v24 with a filtered output ṽi.]
Pros:
• Propagates information over large distances
  • up to 1/3 of the image
(see the sketch below)
Cons:
• No probabilistic interpretation
• No joint inference
• No learning
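The deck shows no code; as a minimal sketch of what label propagation by pure filtering can look like (the per-class `scores` array, the `sigma` value, and the use of `scipy.ndimage.gaussian_filter` are all assumptions for illustration, not the speaker's method):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def filter_and_relabel(scores, sigma=20.0):
    """Propagate label evidence by smoothing each class's score map.

    scores: (K, H, W) array of per-class scores (e.g. classifier outputs).
    sigma:  filter width in pixels; a large sigma propagates information
            over large distances, the 'pro' listed on the slide.
    Returns an (H, W) label map.
    """
    # Smooth every class channel independently; this is pure filtering,
    # so there is no probabilistic interpretation, no joint inference,
    # and no learning -- the 'cons' listed above.
    smoothed = np.stack([gaussian_filter(s, sigma=sigma) for s in scores])
    return smoothed.argmax(axis=0)
```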

  6-8. Dense Random Fields [image slides]

  9-10. Dense Random Fields

E(X) = \sum_i \psi_i(X_i) + \sum_{i,j \in N} \psi_{ij}(X_i, X_j)
        (unary term)          (pairwise term)

• Every node is connected to every other node
• Connections are weighted differently
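As a concrete reading of this energy, here is a minimal sketch that evaluates E(X) for a given labeling. The dense `unary` and `pairwise` tables are hypothetical; note that storing an explicit (N, N) pairwise table is exactly what the "very large model" slides below say is infeasible at image scale.

```python
import numpy as np

def dense_crf_energy(labels, unary, pairwise):
    """E(X) = sum_i psi_i(X_i) + sum_{i,j in N} psi_ij(X_i, X_j).

    labels:   (N,) label assignment X, one label per variable.
    unary:    (N, K) table, unary[i, l] = psi_i(X_i = l).
    pairwise: (N, N, K, K) table, pairwise[i, j, a, b] = psi_ij(a, b).
              In a *dense* field every pair (i, j) is present.
    """
    n = labels.shape[0]
    energy = unary[np.arange(n), labels].sum()
    # Every node is connected to every other node: iterate over all
    # unordered pairs once (the i > j convention of the later slides).
    for i in range(n):
        for j in range(i):
            energy += pairwise[i, j, labels[i], labels[j]]
    return energy
```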

  11-12. Dense Random Fields [image slides]

  13-19. Dense Random Fields
Pros:
• Long range interactions
• No shrinking bias
• Probabilistic interpretation
• Parameter learning
• Combine with other models

  20-23. Dense Random Fields
Cons:
• Very large model
  • 50,000-100,000 variables
  • billions of pairwise terms
• Traditional inference is very slow
  • MCMC "converges" in 36 h
  • GraphCuts and alpha-expansion: no convergence in 3 days

  24. Dense Random Fields
• Efficient inference: 0.2 s / image
• Pairwise term: a linear combination of Gaussians

  25-28. Dense Random Fields

E(X) = \sum_i \psi_i(X_i) + \sum_{i>j} \psi_{ij}(X_i, X_j)

\psi_{ij}(X_i, X_j) = \sum_m k^{(m)}(f_i, f_j) \, \mu^{(m)}(X_i, X_j)

• Gaussian kernel k^{(m)}
• Label compatibility µ^{(m)}, e.g.:

  µ      | GRASS  SHEEP  WATER  …
  GRASS  |   0      1      1    …
  SHEEP  |   1      0     10    …
  WATER  |   1     10      0    …
  …      |   …      …      …    0
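A minimal sketch of this decomposition for a single pair of variables (the shared feature vector `f` and the per-kernel bandwidths are simplifying assumptions; on the slide each kernel may use its own feature subset):

```python
import numpy as np

def gaussian_kernel(f_i, f_j, sigma):
    """One kernel: k(f_i, f_j) = exp(-|f_i - f_j|^2 / (2 sigma^2))."""
    return np.exp(-np.sum((f_i - f_j) ** 2) / (2.0 * sigma ** 2))

def pairwise_potential(x_i, x_j, f_i, f_j, sigmas, mus):
    """psi_ij(X_i, X_j) = sum_m k^(m)(f_i, f_j) * mu^(m)(X_i, X_j).

    sigmas: one bandwidth per kernel m.
    mus:    list of (K, K) label-compatibility matrices mu^(m),
            e.g. the GRASS/SHEEP/WATER table above.
    """
    return sum(gaussian_kernel(f_i, f_j, s) * mu[x_i, x_j]
               for s, mu in zip(sigmas, mus))
```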

  29-36. Dense Random Fields

\psi_{ij}(X_i, X_j) = \mu_1(X_i, X_j) \exp\left( -\frac{|s_i - s_j|^2}{2\sigma_\alpha^2} - \frac{|c_i - c_j|^2}{2\sigma_\beta^2} \right) + \mu_2(X_i, X_j) \exp\left( -\frac{|s_i - s_j|^2}{2\sigma_\gamma^2} \right)

• Label compatibility µ
  • Potts model: µ(X_i, X_j) = [X_i ≠ X_j]

    µ      | GRASS  SHEEP  WATER  …
    GRASS  |   0      1      1    1
    SHEEP  |   1      0      1    1
    WATER  |   1      1      0    1
    …      |   1      1      1    0

  • Learned from data: the off-diagonal entries become free parameters (shown as "?" on the slide)
• Appearance kernel
  • Color-sensitive: (c_i - c_j)^2 compares the colors of pixels i and j
• Smoothness kernel
  • Local smoothness
  • Discourages single-pixel noise
(Both kernels are sketched in code below.)
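A minimal sketch of this two-kernel potential for one pixel pair, directly transcribing the formula above (argument names and the RGB color assumption are illustrative):

```python
import numpy as np

def pairwise_potential(x_i, x_j, s_i, s_j, c_i, c_j, mu1, mu2,
                       sigma_alpha, sigma_beta, sigma_gamma):
    """Appearance + smoothness potential from the slide.

    s_*: 2-D pixel positions; c_*: pixel colors (e.g. RGB);
    mu1, mu2: (K, K) label-compatibility matrices (Potts or learned).
    """
    ds2 = np.sum((s_i - s_j) ** 2)   # squared spatial distance
    dc2 = np.sum((c_i - c_j) ** 2)   # squared color difference

    # Appearance kernel: color-sensitive; pixels with similar color
    # are encouraged to take compatible labels even when far apart.
    appearance = np.exp(-ds2 / (2 * sigma_alpha ** 2)
                        - dc2 / (2 * sigma_beta ** 2))

    # Smoothness kernel: purely spatial; discourages single-pixel noise.
    smoothness = np.exp(-ds2 / (2 * sigma_gamma ** 2))

    return mu1[x_i, x_j] * appearance + mu2[x_i, x_j] * smoothness
```

With `mu1 = mu2 = 1 - np.eye(K)`, both compatibilities reduce to the Potts model [X_i ≠ X_j] from the slide.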

  37-38. Efficient inference

E(X) = \sum_i \psi_i(X_i) + \sum_{i>j} \psi_{ij}(X_i, X_j)

Find the most likely assignment (MAP):

P(X) = \frac{1}{Z} \exp(-E(X)), \qquad \hat{x} = \arg\max_X P(X)
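The transcript ends at the MAP objective. The efficient inference behind the quoted 0.2 s/image is, per the accompanying paper (Krähenbühl and Koltun, "Efficient Inference in Fully Connected CRFs with Gaussian Edge Potentials", NIPS 2011), a mean-field approximation whose message-passing step reduces to Gaussian filtering. Below is a naive sketch of the mean-field updates, with the fast filtering replaced by an explicit (N, N) kernel matrix; this O(N²K) form is only tractable for tiny N and is meant to show the update rule, not the speaker's implementation.

```python
import numpy as np

def mean_field(unary, kernel, compat, iters=10):
    """Naive mean-field for a dense CRF (illustrative only).

    unary:  (N, K) unary potentials psi_i(l).
    kernel: (N, N) precomputed Gaussian kernel values k(f_i, f_j).
    compat: (K, K) label compatibility mu(l, l').
    Returns approximate marginals Q of shape (N, K);
    Q.argmax(axis=1) gives the approximate MAP labeling.
    """
    k = kernel.copy()
    np.fill_diagonal(k, 0.0)           # no self-interaction (j != i)
    q = np.exp(-unary)                 # initialize from the unaries
    q /= q.sum(axis=1, keepdims=True)
    for _ in range(iters):
        msg = k @ q                    # sum_j k(f_i, f_j) Q_j(l')
        pair = msg @ compat.T          # sum_l' mu(l, l') msg_i(l')
        q = np.exp(-unary - pair)      # update and renormalize
        q /= q.sum(axis=1, keepdims=True)
    return q
```

The actual speed-up comes from replacing `k @ q` with high-dimensional Gaussian filtering (the paper uses the permutohedral lattice), which cuts the message-passing cost from quadratic to roughly linear in the number of pixels.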

