
Computer Vision for Camera Traps - PowerPoint PPT Presentation



  1. Sara Beery, CompSust Open Improving Graduate Seminar, April 3rd, 2020. Computer Vision for Camera Traps: Leveraging Practitioner Insight to Build Solutions for Real-World Challenges

  2. Big goal: monitoring biodiversity, globally and in real time.

  3. Big goal: monitoring biodiversity, globally and in real time. How can we contribute?

  4. Camera traps

  5. Camera traps: ● 1,000s of organizations ● 10,000s of projects ● 1,000,000s of camera traps ● 100,000,000s of images (estimates by Eric Fegraus, Conservation International)

  6. Camera traps: ● 1,000s of organizations ● 10,000s of projects ● 1,000,000s of camera traps ● 100,000,000s of images. For example: Idaho Department of Fish and Game alone has 5 years of unprocessed, unlabeled data, around 5 million images (estimates by Eric Fegraus, Conservation International)

  7. Camera trap data is challenging

  8. All these images have an animal in them

  9. State-of-the-art models don't generalize. [Plot: error vs. number of training examples, log-log axes, cis-locations vs. trans-locations.] Recognition in Terra Incognita, Beery et al., ECCV 2018

  10. Class-agnostic detectors generalize best: MegaDetector, Microsoft AI for Earth. Efficient Pipeline for Automating Species ID in new Camera Trap Projects, Beery et al., BiodiversityNext 2019. https://github.com/microsoft/CameraTraps/blob/master/megadetector.md
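The slide's two-stage idea, a general class-agnostic animal detector feeding a lightweight per-project species classifier, can be sketched as follows. `run_detector`, `classify_crop`, and the 0.8 confidence threshold are hypothetical stand-ins, not MegaDetector's actual API; only the control flow (filter empty frames, then classify confident detections) reflects the pipeline described here.

```python
# Two-stage camera-trap pipeline sketch: a class-agnostic detector filters
# empty frames, then a per-project classifier names the species in each box.
# run_detector / classify_crop are toy stand-ins, not a real detector API.

CONF_THRESHOLD = 0.8  # illustrative; real projects tune this

def run_detector(image):
    """Stand-in for a class-agnostic animal detector (e.g. MegaDetector).
    Returns (box, confidence) pairs; boxes are (x0, y0, x1, y1)."""
    return image.get("detections", [])

def classify_crop(image, box):
    """Stand-in for a project-specific species classifier on one crop."""
    return image.get("species", "unknown")

def process(images):
    results = []
    for image in images:
        confident = [(box, conf) for box, conf in run_detector(image)
                     if conf >= CONF_THRESHOLD]
        if not confident:
            results.append((image["id"], "empty"))  # drop empty frames early
        for box, conf in confident:
            results.append((image["id"], classify_crop(image, box)))
    return results

# Toy batch: a frame with only a low-confidence detection, and one with an animal.
batch = [
    {"id": 1, "detections": [((0, 0, 10, 10), 0.3)]},
    {"id": 2, "detections": [((5, 5, 50, 50), 0.95)], "species": "impala"},
]
print(process(batch))  # [(1, 'empty'), (2, 'impala')]
```

Because the detector is class-agnostic, the same first stage can be reused across projects, and only the small classifier needs project-specific training data.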

  11. [Image-only slide]

  12. Rare classes are hard. [Plot: error vs. number of training examples, log-log axes, cis-locations vs. trans-locations.] Recognition in Terra Incognita, Beery et al., ECCV 2018

  13. Camera traps are static, and objects of interest are habitual

  14. Synthetic data improves rare-class performance Synthetic Examples Improve Generalization for Rare Classes, Beery et al., WACV 2020

  15. Camera traps are static, and objects of interest are habitual

  16. Human labeling method

  17. Human labeling method

  18. Human labeling method

  19. Human labeling method

  20. Human labeling method

  21. Human labeling method: Impala!

  22. Camera traps are static, and objects of interest are habitual. Human practitioners use this information; can we build a machine learning model that does the same? Context R-CNN: Long Term Context for Per-Camera Object Detection, Beery et al., CVPR 2020

  23. Camera traps are static, and objects of interest are habitual. 1. Improve per-location object classification: these are probably the same species, and if we're confident about one, that should help us classify the other.

  24. Camera traps are static, and objects of interest are habitual. 1. Improve per-location object classification. 2. Ignore salient false positives: these rocks have not moved in a month; they're probably not animals.

  25. Contextual memory strategy: ● Extract features offline ● Reduce feature size ● Curate features ● Maintain spatiotemporal information. Context R-CNN: Long Term Context for Per-Camera Object Detection, Beery et al., CVPR 2020
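A minimal sketch of the memory curation these bullets describe, with illustrative names and sizes rather than the paper's exact configuration: features come from a frozen extractor run offline (stubbed here with random vectors), are shrunk by a fixed projection standing in for learned reduction, and are stored with timestamps so the later attention step has spatiotemporal information to work with.

```python
import numpy as np

# Per-camera memory-bank curation sketch (illustrative sizes, not the paper's):
# extract features offline with a frozen backbone (stubbed with random
# vectors), shrink them with a fixed projection so months of context fit in
# memory, and keep timestamps so attention can reason over space and time.

FEATURE_DIM = 2048   # assumed backbone output size
REDUCED_DIM = 256    # assumed stored size after reduction

rng = np.random.default_rng(0)
projection = rng.standard_normal((FEATURE_DIM, REDUCED_DIM)) / np.sqrt(FEATURE_DIM)

def extract_feature(image):
    """Stand-in for a frozen detector backbone run offline."""
    return rng.standard_normal(FEATURE_DIM)

def reduce_feature(feature):
    """Project the feature down so the whole bank stays small."""
    return feature @ projection

def build_memory_bank(images_with_timestamps):
    return [
        {"feature": reduce_feature(extract_feature(image)), "timestamp": t}
        for image, t in images_with_timestamps
    ]

# One entry per past image at this camera, e.g. captured a day apart.
bank = build_memory_bank([("img_a", 0.0), ("img_b", 86400.0)])
print(len(bank), bank[0]["feature"].shape)  # 2 (256,)
```

Reducing each stored feature by roughly 8x is what makes keeping weeks or months of per-camera context tractable on modest hardware.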

  26. Use attention to incorporate context. Context R-CNN: Long Term Context for Per-Camera Object Detection, Beery et al., CVPR 2020
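The attention step can be sketched as single-head scaled dot-product attention: the current detection's feature is the query, memory features serve as keys and values, and a softmax gives each memory entry influence proportional to its relevance. The dimensions and single-head form are illustrative simplifications, not Context R-CNN's exact architecture.

```python
import numpy as np

# Single-head scaled dot-product attention over a camera's memory bank:
# the current box's feature is the query; memory features act as both keys
# and values, so the output is a relevance-weighted summary of past context.

def attend(query, memory):
    """query: (d,); memory: (n, d). Returns (context (d,), weights (n,))."""
    d = query.shape[0]
    scores = memory @ query / np.sqrt(d)   # relevance of each memory entry
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()               # softmax over memory entries
    context = weights @ memory             # weighted combination of values
    return context, weights

query = np.array([1.0, 1.0, 1.0, 1.0])
memory = np.array([
    [1.0, 1.0, 1.0, 1.0],      # looks like the query (same animal, earlier)
    [0.0, 0.0, 0.0, 0.0],      # uninformative entry
    [-1.0, -1.0, -1.0, -1.0],  # dissimilar entry
])
context, weights = attend(query, memory)
print(weights.argmax())  # 0: the matching entry gets the most weight
```

Because the weights are learned from similarity rather than fixed recency, a sighting from weeks ago can outweigh yesterday's empty frames, which is exactly the behavior the next slide describes.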

  27. Context is incorporated based on relevance. Context R-CNN: Long Term Context for Per-Camera Object Detection, Beery et al., CVPR 2020

  28. Related work: long-term temporal context in video. ● Shvets et al., Leveraging Long-Range Temporal Relationships Between Proposals for Video Object Detection ● Wu et al., Sequence Level Semantics Aggregation for Video Object Detection ● Wu et al., Long-Term Feature Banks for Detailed Video Understanding ● Deng et al., Object Guided External Memory Network for Video Object Detection

  29. Datasets: ● Snapshot Serengeti (SS): 225 cameras, 3.4M images, 48 classes, Eastern African game preserve ● Caltech Camera Traps (CCT): 140 cameras, 243K images, 18 classes, American Southwestern urban wildlife ● CityCam (CC): 17 cameras, 60K images, 10 vehicle classes, traffic cameras from NYC. Context R-CNN: Long Term Context for Per-Camera Object Detection, Beery et al., CVPR 2020

  30. Results. SS: Snapshot Serengeti; CCT: Caltech Camera Traps; CC: CityCam

  31. Improves predominantly on challenging cases

  32. Attention is temporally adaptive to relevance

  33. Snapshot Serengeti mAP improves for all classes

  34. Background classes are learned without supervision

  35. Static passive monitoring sensors: ● Sparse, irregular frame rate ● Power, computational, and memory constraints ● Much of the data is "empty"

  36. Big goal: monitoring biodiversity, globally and in real time. How can we contribute?

  37. Current Biodiversity AI Competitions: ● iWildCam 2020 (https://www.kaggle.com/c/iwildcam-2020-fgvc7): global camera traps (WCS) + RS ● GeoLifeCLEF 2020 (https://www.imageclef.org/GeoLifeCLEF2020): 2M species observations + RS + LC + covariates

  38. Acknowledgements: Caltech Vision Lab, AI for Earth
