

  1. Neural Networks applied to Blending Challenges
  Sowmya Kamath, Patricia Burchat
  Blending Workshop, 15 August 2018

  2. Blending & Neural Networks
  ● Object detection and instance segmentation are active areas of research in computer vision applications.
  ● Neural networks could potentially contribute to solutions for several additional blending challenges:
    ○ Identifying blends that are too “blended” to perform meaningful deblending.
    ○ Identifying shredded objects.
    ○ Deblending.
    ○ Identifying unrecognized blends.

  3. Convolutional Neural Network in 1 minute
  ● Category of neural network effective in image recognition and classification.
  ● “Convolves” the image with a kernel (filter).
  ● Different kernels extract different features.
  ● Convolve each layer’s feature map with more kernels to learn complex features.
  ● The network learns the kernel values during training.
  http://deeplearning.stanford.edu/wiki/index.php/Feature_extraction_using_convolution
  http://cs.nyu.edu/~fergus/tutorials/deep_learning_cvpr12/
  http://web.eecs.umich.edu/~honglak/icml09-ConvolutionalDeepBeliefNetworks.pdf
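
To make the convolution step concrete, here is a minimal sketch in Python that convolves a toy image with a hand-written edge kernel. The image, kernel values, and use of SciPy are illustrative assumptions; in a trained CNN the kernel values are learned, not hand-chosen.

```python
import numpy as np
from scipy.signal import convolve2d

# Toy 6x6 grayscale "image" with a bright vertical stripe.
image = np.zeros((6, 6))
image[:, 2:4] = 1.0

# A simple vertical-edge kernel; in a CNN these values are
# learned during training rather than hand-chosen.
kernel = np.array([[1.0, 0.0, -1.0],
                   [1.0, 0.0, -1.0],
                   [1.0, 0.0, -1.0]])

# "Convolving" the image with the kernel yields a feature map
# that responds strongly at the stripe's edges.
feature_map = convolve2d(image, kernel, mode="valid")
print(feature_map)
```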

  4. Mask R-CNN
  Currently using an existing neural-network framework, Mask Region-based Convolutional Neural Network (Mask R-CNN) [1], to perform detection and segmentation.
  ● Input: RGB image
  ● Output for each object:
    ○ Label
    ○ Bounding box (x, y, h, w)
    ○ Segmentation mask
  [1] https://github.com/facebookresearch/detectron, https://arxiv.org/abs/1703.06870
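
The slide references Facebook's Detectron implementation. As a rough illustration of the input/output contract described above, the sketch below uses the torchvision port of Mask R-CNN instead (an assumption, not the authors' setup), with COCO-pretrained weights standing in for a trained deblending model.

```python
import torch
from torchvision.models.detection import maskrcnn_resnet50_fpn

model = maskrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

# One RGB image, shape (3, H, W), values in [0, 1].
image = torch.rand(3, 128, 128)

with torch.no_grad():
    (output,) = model([image])

# For each detected object: a class label, a bounding box,
# and a per-pixel segmentation mask, as listed on the slide.
# Note torchvision returns boxes as corners (x1, y1, x2, y2)
# rather than the (x, y, h, w) notation used on the slide.
print(output["labels"])        # (N,) class indices
print(output["boxes"])         # (N, 4) bounding boxes
print(output["masks"].shape)   # (N, 1, H, W) soft masks
```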

  5. Mask R-CNN
  ● Project each proposal onto a 7 x 7 grid.
  ● Separate heads then predict the classification, bounding box, and segmentation mask.
  http://cs231n.stanford.edu/slides/2017/cs231n_2017_lecture11.pdf
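
This fixed-grid projection is the RoIAlign operation from the Mask R-CNN paper. A minimal sketch using torchvision.ops.roi_align (an assumed stand-in for the Detectron internals; the shapes and feature-map scale below are illustrative):

```python
import torch
from torchvision.ops import roi_align

# A batch of one feature map: (batch, channels, H, W).
features = torch.rand(1, 256, 32, 32)

# One proposal in (batch_index, x1, y1, x2, y2) format,
# given in the coordinates of the original input image.
proposals = torch.tensor([[0.0, 10.0, 10.0, 100.0, 80.0]])

# Resample each proposal onto a fixed 7 x 7 grid regardless of
# its size; spatial_scale maps image coords to feature-map coords
# (here the feature map is 1/8 the input resolution, an assumption).
pooled = roi_align(features, proposals, output_size=(7, 7),
                   spatial_scale=1.0 / 8.0)
print(pooled.shape)  # torch.Size([1, 256, 7, 7])
```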

  6. What it was developed for → What we need it for:
  1. Opaque objects → (Semi-)transparent objects
  2. Sharp edges → No sharp edges
  3. Large objects → Some objects as small as the pixel scale
  4. Good image quality → Lower SNR
  5. Same resolution in RGB bands → Resolution can vary between filters
  6. Scenes where vertical & horizontal directions have special meaning → Image analysis that is directionally agnostic

  7. Training data
  ● Simulated images of two-galaxy pairs with varying overlap, 0.6 - 2 arcsec apart.
  ● Bulge+disk Sersic galaxies from CatSim, drawn with the WeakLensingDeblending package.
  ● i < 24, 10-year LSST depth.
  ● gri bands → RGB.
  ● 18,000 pairs (72,000 images with data augmentation).
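
As a rough illustration of what one such training pair looks like, here is a minimal sketch that draws two overlapping Sersic galaxies directly with GalSim. This is a stand-in for the authors' pipeline (CatSim inputs rendered via the WeakLensingDeblending package); the Sersic parameters, fluxes, PSF, and pixel scale are all illustrative assumptions.

```python
import galsim

PIXEL_SCALE = 0.2  # arcsec/pixel, LSST-like (assumption)

# Two toy Sersic galaxies; parameters are illustrative,
# not drawn from CatSim.
gal1 = galsim.Sersic(n=4, half_light_radius=0.6, flux=1e4)
gal2 = galsim.Sersic(n=1, half_light_radius=0.8, flux=5e3)

# Offset the second galaxy by 1 arcsec, inside the 0.6-2 arcsec
# separation range used for the training pairs.
pair = gal1 + gal2.shift(1.0, 0.0)

# Convolve with a simple atmospheric PSF and render one band;
# repeating per band (g, r, i) and stacking gives the RGB input
# described on the slide.
psf = galsim.Kolmogorov(fwhm=0.7)
image = galsim.Convolve(pair, psf).drawImage(
    nx=64, ny=64, scale=PIXEL_SCALE)
print(image.array.shape)  # (64, 64)
```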

  8. Examples of successful detections
  Green = true segmentation; Red = CNN segmentation. Panels show truth and network output.

  9. Examples of unsuccessful detections
  Green = true segmentation; Red = CNN segmentation.
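
A common way to quantify how well the red (CNN) segmentation matches the green (true) one in examples like these is intersection-over-union (IoU). The toy masks and the 0.5 success threshold below are illustrative conventions, not the authors' evaluation code.

```python
import numpy as np

def mask_iou(true_mask: np.ndarray, pred_mask: np.ndarray) -> float:
    """Intersection-over-union of two boolean segmentation masks."""
    intersection = np.logical_and(true_mask, pred_mask).sum()
    union = np.logical_or(true_mask, pred_mask).sum()
    return float(intersection) / float(union) if union else 0.0

# Toy masks: the prediction covers only part of the true object.
true_mask = np.zeros((64, 64), dtype=bool)
true_mask[20:40, 20:40] = True
pred_mask = np.zeros((64, 64), dtype=bool)
pred_mask[25:45, 20:40] = True

iou = mask_iou(true_mask, pred_mask)
# A common convention counts a detection as successful if IoU
# exceeds some threshold, e.g. 0.5 (assumption).
print(f"IoU = {iou:.2f}, success = {iou > 0.5}")
```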

  10. Future Work on Deblending with Neural Networks
  ● Optimize parameters and threshold values to reduce false positives.
  ● Most architectures are built for input images with three bands (RGB) => modify so that all six bands of LSST images can be utilized (see the sketch after this list).
  ● Modify the end layers of the network to output pixel values for individual galaxies instead of segmentation maps.
  ● Include different kinds of sources (types of galaxies, stars, image artefacts?) and perform classification as well.
  ● Investigate using space-based images (HST) as truth and Hyper Suprime-Cam (HSC) images as input for training and measuring performance.
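
One way the three-band limitation could be addressed, sketched against the torchvision Mask R-CNN port (an assumption; the authors used Detectron): replace the backbone's first convolution so it accepts six input channels, and widen the per-channel normalization statistics to match. The layer path and statistics below are illustrative.

```python
import torch
from torch import nn
from torchvision.models.detection import maskrcnn_resnet50_fpn

# Untrained model with two classes (background + galaxy, assumption).
model = maskrcnn_resnet50_fpn(weights=None, num_classes=2)

# Swap the ResNet stem's 3-channel conv for a 6-channel one
# (ugrizy instead of RGB); the new weights train from scratch.
model.backbone.body.conv1 = nn.Conv2d(
    6, 64, kernel_size=7, stride=2, padding=3, bias=False)

# The detection transform normalizes per channel, so it also
# needs six means/stds (placeholder values).
model.transform.image_mean = [0.0] * 6
model.transform.image_std = [1.0] * 6

# A six-band LSST-like cutout now passes through the network.
model.eval()
with torch.no_grad():
    (out,) = model([torch.rand(6, 128, 128)])
print(out["boxes"].shape)
```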

  11. Residual Detection
  ● Could unrecognized blends be detected in the residual images?
  ● Aim: run the Mask R-CNN detection network, with residual images + the Scarlet model as input, to predict undetected source locations (a sketch of assembling that input follows below).
  ● Work in progress.
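
A minimal sketch of how such an input could be assembled, under the assumption that the Scarlet model image is already in hand. The arrays here are random placeholders, and Scarlet's own API is not shown; the channel ordering is an assumption.

```python
import numpy as np

# Placeholder arrays standing in for one band of the observed
# image and the corresponding Scarlet model of detected sources.
observed = np.random.rand(64, 64).astype(np.float32)
scarlet_model = np.random.rand(64, 64).astype(np.float32)

# The residual carries whatever the model failed to explain,
# e.g. flux from an undetected source in a blend.
residual = observed - scarlet_model

# Stack residual + model as channels of a single network input,
# as proposed on the slide.
network_input = np.stack([residual, scarlet_model], axis=0)
print(network_input.shape)  # (2, 64, 64)
```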

  12. Undetected Objects
  ● Object detection in the current pipeline has been developed for single-band images.
    ○ It does not use color information!
  ● Some objects may be undetected due to blending.
  Red X: true center of brighter source. Blue X: true center of dimmer source. Green O: DM stack detected center.

  13. Undetected Objects and Scarlet
  ● Deblending with Scarlet requires prior information about the number of objects and their centers.
  ● Scarlet's symmetry and monotonicity constraints cause these undetected objects to be modelled into the detected source.
  Red X: true center of brighter source. Blue X: true center of dimmer source. Green O: detected center. Red O: Scarlet fit center.

  14. Undetected Objects in the Residuals
  ● This leads to dipole patterns in the residual image.
  Red X: true center of brighter source. Blue X: true center of dimmer source. Green O: detected center. Red O: Scarlet fit center.

  15. Red X: true center of brighter source. Blue X: true center of dimmer source. Green O: detected center. Red O: Scarlet fit center. Yellow square: undetected source location to be predicted by the network.
  Results Coming Soon!
