Neural Networks applied to Blending Challenges
Sowmya Kamath, Patricia Burchat
Blending Workshop, 15 August 2018
Blending & Neural Networks
- Object detection and instance segmentation is an active area of research in computer vision applications.
- Neural Networks could potentially contribute to a solution to several additional blending challenges:
  ○ Identifying blends that are too “blended” to perform meaningful deblending.
  ○ Identifying shredded objects.
  ○ Deblending.
  ○ Identifying unrecognized blends.
Convolutional Neural Network in 1 minute

- Category of Neural Network effective in image recognition and classification.
- “Convolves” the image with a kernel (filter).
- Different kernels extract different features.
- Convolving each layer's feature map with more kernels learns increasingly complex features.
- The network learns the kernel values during training.

http://deeplearning.stanford.edu/wiki/index.php/Feature_extraction_using_convolution
http://cs.nyu.edu/~fergus/tutorials/deep_learning_cvpr12/
http://web.eecs.umich.edu/~honglak/icml09-ConvolutionalDeepBeliefNetworks.pdf
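To make the “convolve the image with a kernel” step concrete, here is a minimal NumPy sketch. The edge-detection kernel and toy image are illustrative; in a trained CNN the kernel values are learned, not hand-picked.

```python
import numpy as np

def convolve2d(image, kernel):
    """Valid-mode 2D convolution of a single-channel image with a kernel."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    # Flip the kernel in both axes for a true convolution
    # (cross-correlation, common in CNN libraries, skips this flip).
    k = kernel[::-1, ::-1]
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = np.sum(image[y:y + kh, x:x + kw] * k)
    return out

# A vertical-edge (Sobel-like) kernel: responds where pixel
# values change from left to right.
edge_kernel = np.array([[1, 0, -1],
                        [2, 0, -2],
                        [1, 0, -1]], dtype=float)

# Toy "image": bright left half, dark right half.
img = np.zeros((5, 5))
img[:, :3] = 1.0
feature_map = convolve2d(img, edge_kernel)
```

The feature map is large (in magnitude) only where the kernel overlaps the bright-to-dark transition, which is exactly the “different kernels extract different features” idea above.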
Mask R-CNN
Currently using an existing neural-network framework, Mask Region-based Convolutional Neural Network (Mask R-CNN) [1], to perform detection and segmentation.

Input:
- RGB Image
Output for each object:
- Label
- Bounding Box (x, y, h, w)
- Segmentation Mask
[1] https://github.com/facebookresearch/detectron, https://arxiv.org/abs/1703.06870
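A hedged sketch of the per-object output listed above. The `DetectedObject` container and its field names are illustrative, not Mask R-CNN's actual API; they simply mirror the label / bounding box / mask structure on the slide.

```python
from dataclasses import dataclass

import numpy as np

@dataclass
class DetectedObject:
    """One detection: label, bounding box, and per-pixel mask
    (field names are illustrative, not the framework's API)."""
    label: str          # predicted class, e.g. "galaxy"
    score: float        # detection confidence in [0, 1]
    bbox: tuple         # (x, y, h, w) in pixel coordinates
    mask: np.ndarray    # boolean per-pixel segmentation mask

# Example: a single detection on a 64 x 64 postage stamp.
mask = np.zeros((64, 64), dtype=bool)
mask[20:40, 24:44] = True   # 20 x 20 pixel object footprint
det = DetectedObject(label="galaxy", score=0.97,
                     bbox=(24, 20, 20, 20), mask=mask)
```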
Mask R-CNN
[Diagram: each region proposal is projected onto a 7 x 7 grid, then passed to heads that output the classification, bounding box, and segmentation mask.]
http://cs231n.stanford.edu/slides/2017/cs231n_2017_lecture11.pdf
What it was developed for:
1. Opaque objects
2. Sharp edges
3. Large objects
4. Good image quality
5. Same resolution in RGB bands
6. Scenes where vertical & horizontal directions have special meaning

What we need it for:
1. (Semi-)transparent objects
2. No sharp edges
3. Some objects as small as the pixel scale
4. Lower SNR
5. Resolution can vary between filters
6. Image analysis that is directionally agnostic
Training data
- Simulated images of two-galaxy pairs with varying overlap.
  ○ Pairs 0.6–2 arcsec apart.
- Bulge+disk Sersic galaxies from CatSim, drawn with the WeakLensingDeblending package.
- i < 24, 10-year LSST depth.
- gri bands → RGB.
- 18,000 pairs (72,000 images with data augmentation).
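One way the gri → RGB step could look: a minimal sketch using a Lupton-style arcsinh stretch. The function name and the `stretch` and `Q` parameters are illustrative assumptions, not necessarily the scaling used for these training images.

```python
import numpy as np

def gri_to_rgb(g, r, i, stretch=0.5, Q=8.0):
    """Map g, r, i band images to an RGB display image with a
    Lupton-style arcsinh stretch (parameters are illustrative)."""
    # Astronomical convention: the reddest band (i) drives the red channel.
    bands = np.stack([i, r, g], axis=-1).astype(float)   # R, G, B order
    total = bands.sum(axis=-1, keepdims=True)
    total[total == 0] = 1e-9                              # avoid divide-by-zero
    scale = np.arcsinh(Q * total / stretch) / Q           # compress bright pixels
    rgb = bands * scale / total
    return np.clip(rgb, 0.0, 1.0)

# Toy 8 x 8 stamps, one per band (a red-ish source: i > r > g).
g = np.full((8, 8), 0.03)
r = np.full((8, 8), 0.05)
i = np.full((8, 8), 0.08)
rgb = gri_to_rgb(g, r, i)
```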
Truth Network Output
Examples of successful detections
Green = true segmentation Red = CNN segmentation
Examples of unsuccessful detections
Green = true segmentation Red = CNN segmentation
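One simple way to quantify the agreement between the true (green) and CNN (red) segmentations is the mask intersection-over-union; `mask_iou` below is an illustrative helper, not part of the pipeline shown here.

```python
import numpy as np

def mask_iou(true_mask, pred_mask):
    """Intersection-over-union of two boolean segmentation masks.
    1.0 means perfect overlap; 0.0 means no overlap."""
    intersection = np.logical_and(true_mask, pred_mask).sum()
    union = np.logical_or(true_mask, pred_mask).sum()
    return intersection / union if union > 0 else 0.0

# Toy example: a predicted mask offset from the true one.
true_mask = np.zeros((32, 32), dtype=bool)
true_mask[8:24, 8:24] = True
pred_mask = np.zeros((32, 32), dtype=bool)
pred_mask[12:28, 12:28] = True
iou = mask_iou(true_mask, pred_mask)
```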
Future Work on Deblending with Neural Networks
- Optimize parameters and threshold values to reduce false positives.
- Most architectures are built for input images with three bands (RGB).
  => Modify so that all six bands of LSST images can be utilized.
- Modify end layers of the network to output pixel values for individual galaxies instead of segmentation maps.
- Include different kinds of sources (types of galaxies, stars, image artefacts?) and perform classification as well.
- Investigate using space-based images (HST) as truth & Hyper Suprime-Cam (HSC) as input images for training and measuring performance.
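The six-band modification above amounts to giving the first-layer kernels six input channels instead of three. A minimal NumPy sketch (the kernel bank and its values are illustrative, not a real network's weights):

```python
import numpy as np

def conv_first_layer(image, kernels):
    """Valid-mode convolution of a multi-band image (H, W, C) with a
    bank of kernels (N, kh, kw, C). Summing over all C input channels
    is what lets the same architecture ingest 6-band images rather
    than 3-band RGB."""
    n, kh, kw, c = kernels.shape
    h, w, in_c = image.shape
    assert in_c == c, "kernel channel count must match the image"
    out = np.zeros((h - kh + 1, w - kw + 1, n))
    for k in range(n):
        for y in range(out.shape[0]):
            for x in range(out.shape[1]):
                out[y, x, k] = np.sum(image[y:y + kh, x:x + kw, :] * kernels[k])
    return out

image = np.ones((8, 8, 6))        # toy 6-band (ugrizy) stamp
kernels = np.ones((4, 3, 3, 6))   # 4 filters, each spanning all 6 bands
features = conv_first_layer(image, kernels)
```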
Residual Detection
- Could unrecognized blends be detected on the residual images?
- Aim: Run the Mask R-CNN detection network, with residual images + the Scarlet model as input, to predict undetected source locations.
- Work in progress.
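A minimal sketch of what a residual-based network input could look like: stack the observed image, the deblender model, and their residual, so that an undetected source survives only in the residual channel. `build_residual_input` and the toy stamps are illustrative, not the work-in-progress implementation.

```python
import numpy as np

def build_residual_input(image, model):
    """Stack image, model, and residual into a 3-channel array
    (an illustrative stand-in for combining residual images with
    the deblender model as network input)."""
    residual = image - model
    return np.stack([image, model, residual], axis=-1)

# Toy example: the fit captured only the bright source at (8, 8),
# so the faint source at (8, 11) survives only in the residual.
image = np.zeros((16, 16))
image[8, 8] = 1.0    # detected, bright source
image[8, 11] = 0.4   # undetected, faint source
model = np.zeros((16, 16))
model[8, 8] = 1.0    # model of the detected source only

x = build_residual_input(image, model)
peak = np.unravel_index(np.argmax(x[..., 2]), x[..., 2].shape)
```

The brightest residual pixel sits at the undetected source's location, which is precisely the signal the detection network is meant to pick up.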
Undetected Objects
- Object detection in the current pipeline has been developed for single-band images.
  ○ It does not use color information!
- Some objects may be undetected due to blending.
Red X: true center of brighter source
Blue X: true center of dimmer source
Green O: DM stack detected center
Undetected Objects and Scarlet
- Deblending with Scarlet requires prior information about the number of objects and their centers.
- Scarlet's symmetry and monotonicity constraints cause these undetected objects to be modelled into the detected source.
Red X: true center of brighter source
Blue X: true center of dimmer source
Green O: detected center
Red O: Scarlet fit center
Undetected Objects in the Residuals
- This leads to dipole patterns in the residual image.
Red X: true center of brighter source
Blue X: true center of dimmer source
Green O: detected center
Red O: Scarlet fit center
Yellow square: undetected source location to be predicted by the network
Results Coming Soon!