  1. Bounding boxes for weakly supervised segmentation: Global constraints get close to full supervision. MIDL 2020, Montréal, Paper O-001. Hoel Kervadec, Jose Dolz, Shanshan Wang, Eric Granger, Ismail Ben Ayed. July 6, 2020. ÉTS Montréal. hoel@kervadec.science https://github.com/LIVIAETS/boxes_tightness_prior

  2–5. Presentation overview
  • On the (un)certainty of weak labels
  • Tightness prior: application to bounding boxes
  • Constraining a deep network during training
  • Results and conclusion

  6. On the (un)certainty of weak labels

  7. Weak labels. Blue: background; green: foreground; no color: unknown. Full labels are expensive, but weak labels are difficult to use.

  8–11. Constrained-CNN losses, with points [Kervadec et al., MedIA'19]
  Partial cross-entropy on the foreground pixels, with a size constraint:

  $\min_\theta \sum_{p \in \Omega_L} -\log(s_p^\theta) \quad \text{s.t.} \quad a \le \sum_{p \in \Omega} s_p^\theta \le b$

  where $\theta$ denotes the network parameters, $\Omega$ the image space, $\Omega_L \subset \Omega$ the labeled pixels, $p \in \Omega$ a pixel, and $s_p^\theta$ the foreground probability at $p$.
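
  To make the formulation concrete, here is a minimal PyTorch sketch of this point-supervised loss. The constraint is relaxed here into a quadratic penalty in the spirit of the cited MedIA'19 paper; the function and argument names, and the weight `lam`, are illustrative, not the authors' exact implementation.

    import torch

    def partial_ce_with_size(probs, labeled_mask, a, b, lam=1e-2):
        # probs:        (B, H, W) foreground probabilities s_p in (0, 1)
        # labeled_mask: (B, H, W) bool, True on the few annotated fg pixels
        # a, b:         scalar lower / upper bounds on the object size
        eps = 1e-8
        # Partial cross-entropy: -log(s_p) over the labeled pixels Omega_L only.
        ce = -torch.log(probs[labeled_mask] + eps).mean()
        # Soft size: sum of probabilities over the whole image Omega.
        size = probs.sum(dim=(1, 2))
        # Quadratic penalty (illustrative choice), zero whenever a <= size <= b.
        penalty = (torch.clamp(a - size, min=0) ** 2
                   + torch.clamp(size - b, min=0) ** 2).mean()
        return ce + lam * penalty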

  12–14. Constrained-CNN losses, with points [Kervadec et al., MedIA'19]
  It works well, but requires fairly precise size information (a, b). How can we realistically get it? A bounding box gives a natural upper bound on the size.

  15–18. But we cannot do the opposite with a box
  Partial cross-entropy on the background pixels, with a size constraint:

  $\min_\theta \sum_{p \in \Omega_O} -\log(1 - s_p^\theta) \quad \text{s.t.} \quad \sum_{p \in \Omega} s_p^\theta \le |\Omega_I|$

  where $\Omega_O$ is the outside of the box, $\Omega_I$ the inside of the box, and $1 - s_p^\theta$ the background probability.
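
  The failure the next slides diagnose can be seen directly in code. A minimal sketch of this box-only loss, with the size constraint relaxed into a penalty (our choice; names and `lam` are illustrative):

    import torch

    def naive_box_loss(probs, box_mask, lam=1e-2):
        # probs:    (B, H, W) foreground probabilities
        # box_mask: (B, H, W) bool, True inside the bounding box (Omega_I)
        eps = 1e-8
        # Background cross-entropy on Omega_O, the pixels outside the box.
        ce = -torch.log(1 - probs[~box_mask] + eps).mean()
        # Penalize only violations of the upper bound sum_p s_p <= |Omega_I|.
        size = probs.sum(dim=(1, 2))
        box_area = box_mask.sum(dim=(1, 2)).float()
        penalty = torch.clamp(size - box_area, min=0).mean()
        return ce + lam * penalty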

  19–22. Why does it not work?

  $\min_\theta \sum_{p \in \Omega_O} -\log(1 - s_p^\theta) \quad \text{s.t.} \quad \sum_{p \in \Omega} s_p^\theta \le |\Omega_I|$

  It introduces a massive imbalance in training: for instance, on a 256×256 image with a 40×40 box, the loss supervises 63,936 background pixels and not a single foreground one. There is no explicit supervision to predict foreground. Result: the network predicts only background.

  23. Dirty solution – Mixed labels. We could mix the two kinds of labels, but that defeats the purpose of having fewer annotations.

  24. Dirty solution – Ugly heuristic. Or use a heuristic: the center of the box is always foreground.

  25–26. Dirty solution – Ugly heuristic. Hypothesis: the same part of the box always belongs to the foreground. Does it hold for more complex, deformable objects? If the camel moves, our heuristic will be wrong.

  27. Tightness prior

  28. Tightness prior. The classical tightness prior [Lempitsky et al., ICCV'09] states that any line inside the box and parallel to its sides will cross the camel at some point.

  29. Tightness prior. This can be generalized: a segment of width w will cross the camel at least w times.

  30. Formal definition

  31–34. Formal definition

  $\sum_{p \in s_l} y_p \ge w \quad \forall s_l \in S_L$

  where $S_L := \{s_l\}$ is the set of segments, $w$ the width of a segment, and $y_p \in \{0, 1\}$ the true label for pixel $p$.
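
  A small NumPy sketch of what this constraint means on a ground-truth mask: every band of width w parallel to the box sides must contain at least w foreground pixels. Whether the bands tile the box or slide across it is an implementation choice; the helper below (our naming) tiles it.

    import numpy as np

    def tightness_satisfied(mask, box, w):
        # mask: (H, W) binary ground-truth mask
        # box:  (y0, y1, x0, x1) tight bounding box, exclusive upper bounds
        # w:    band width in pixels
        y0, y1, x0, x1 = box
        inside = mask[y0:y1, x0:x1]
        h, wd = inside.shape
        # Horizontal bands: rows [i, i+w) must contain >= w foreground pixels.
        for i in range(0, h - w + 1, w):
            if inside[i:i + w, :].sum() < w:
                return False
        # Vertical bands: columns [j, j+w) must contain >= w foreground pixels.
        for j in range(0, wd - w + 1, w):
            if inside[:, j:j + w].sum() < w:
                return False
        return True

    # Example: a square object with a tight box satisfies the prior for w = 1.
    m = np.zeros((10, 10), dtype=int); m[3:7, 3:7] = 1
    assert tightness_satisfied(m, (3, 7, 3, 7), w=1)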

  35–38. Updating the formulation
  We can update our bounding-box supervision model:

  $\min_\theta L_O(\theta) \quad \text{s.t.} \quad \sum_{p \in \Omega} s_p^\theta \le |\Omega_I| \quad \text{and} \quad \sum_{p \in s_l} s_p^\theta \ge w \;\; \forall s_l \in S_L$

  where $L_O$ is the loss outside the box and $\sum_{p \in s_l} s_p^\theta$ is a sum over continuous values. This gives an optimization problem with dozens of constraints.

  39–40. On constrained deep networks during training
  Penalty methods such as [Kervadec et al., MedIA'19] or tweaked Lagrangian methods [Nandwani et al., 2019; Pathak et al., 2015] crumble with many competing constraints. Recent work on the extended log-barrier [Kervadec et al., 2019b] is much more robust:

  41. Extended log-barrier
  The extended log-barrier is integrated directly into the loss function.

  Model to optimize: $\min_x L(x) \quad \text{s.t.} \quad z \le 0$
  Model with extended log-barrier: $\min_x L(x) + \tilde{\psi}_t(z)$
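
  The closed form of $\tilde{\psi}_t$ comes from [Kervadec et al., 2019b]: a standard log-barrier where the constraint is comfortably satisfied, extended linearly where it is violated, so gradients stay finite everywhere. The PyTorch wrapper below is our sketch of it.

    import math
    import torch

    def ext_log_barrier(z, t=5.0):
        # psi~_t(z) = -(1/t) * log(-z)                 if z <= -1/t^2
        #           = t*z - (1/t)*log(1/t^2) + 1/t     otherwise
        thresh = -1.0 / (t * t)
        # torch.where evaluates both branches, so clamp keeps log() finite.
        log_branch = -(1.0 / t) * torch.log(-torch.clamp(z, max=thresh))
        lin_branch = t * z - (1.0 / t) * math.log(1.0 / (t * t)) + 1.0 / t
        return torch.where(z <= thresh, log_branch, lin_branch)

  In practice, t is typically increased over the epochs, so the barrier approximates the hard constraints more and more tightly as training progresses.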

  42–45. Final model

  $\min_\theta \; L_O(\theta) \;+\; \lambda \sum_{s_l \in S_L} \tilde{\psi}_t\left(w - \sum_{p \in s_l} s_p^\theta\right) \;+\; \tilde{\psi}_t\left(\sum_{p \in \Omega} s_p^\theta - |\Omega_I|\right)$

  Two simple hyper-parameters: the weight $\lambda$ for the tightness prior, and $t$, common to all constraints.
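
  Putting the pieces together, a per-image sketch of the final loss, reusing ext_log_barrier from above; the tensor layout, helper names, and default hyper-parameter values are illustrative, not the authors' exact implementation.

    import torch

    def box_tightness_loss(probs, box_mask, segments, w, lam=1e-2, t=5.0):
        # probs:    (H, W) foreground probabilities for one image
        # box_mask: (H, W) bool, True inside the bounding box (Omega_I)
        # segments: list of (H, W) bool masks, one per band s_l of width w
        eps = 1e-8
        # L_O: cross-entropy toward background outside the box.
        l_o = -torch.log(1 - probs[~box_mask] + eps).mean()
        # Size constraint in "z <= 0" form: sum_p s_p - |Omega_I| <= 0.
        size = probs.sum()
        loss = l_o + ext_log_barrier(size - box_mask.sum().float(), t)
        # Tightness constraints: w - sum_{p in s_l} s_p <= 0 for each band.
        for seg in segments:
            loss = loss + lam * ext_log_barrier(w - probs[seg].sum(), t)
        return loss

  The segment masks are generated inside the box, parallel to its sides, as on the earlier slides; here they are simply passed in precomputed.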

  46. Evaluation and results

  47–48. Datasets and baseline
  Evaluated on two datasets:
  • PROMISE12: prostate segmentation [Litjens et al., 2014]
  • ATLAS: ischemic stroke lesions [Liew et al., 2018]
  We use DeepCut [Rajchl et al., 2016] as the baseline for comparison.
