

SLIDE 1

Neural Networks applied to Blending Challenges

Sowmya Kamath, Patricia Burchat Blending Workshop, 15th August, 2018

SLIDE 2

Blending & Neural Networks

  • Object detection and instance segmentation are active areas of research in computer vision applications.
  • Neural networks could potentially contribute to solutions for several additional blending challenges:
    ○ Identifying blends that are too “blended” for meaningful deblending.
    ○ Identifying shredded objects.
    ○ Deblending.
    ○ Identifying unrecognized blends.


SLIDE 4

Convolutional Neural Network in 1 minute

  • Category of neural network effective in image recognition and classification.

  • “Convolves” image with a kernel (filter).

http://deeplearning.stanford.edu/wiki/index.php/Feature_extraction_using_convolution

SLIDE 5

Convolutional Neural Network in 1 minute

http://cs.nyu.edu/~fergus/tutorials/deep_learning_cvpr12/

  • Category of neural network effective in image recognition and classification.
  • “Convolves” image with a kernel (filter).
  • Different kernels extract different features.

SLIDE 6

Convolutional Neural Network in 1 minute

http://web.eecs.umich.edu/~honglak/icml09-ConvolutionalDeepBeliefNetworks.pdf

  • Category of neural network effective in image recognition and classification.
  • “Convolves” image with a kernel (filter).
  • Different kernels extract different features.
  • Convolve each layer’s feature map with more kernels to learn complex features.
  • The network learns the kernel values during training.
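To make the convolution step concrete, here is a minimal NumPy/SciPy sketch. The image and kernel values below are invented for illustration; in a CNN, as noted above, the kernel values are learned during training rather than fixed by hand.

```python
import numpy as np
from scipy.signal import convolve2d

# Toy 5x5 "image" (a plus-shaped blob) -- illustrative values only.
image = np.array([
    [0, 0, 1, 0, 0],
    [0, 1, 1, 1, 0],
    [1, 1, 1, 1, 1],
    [0, 1, 1, 1, 0],
    [0, 0, 1, 0, 0],
], dtype=float)

# A hand-picked 3x3 edge-detection kernel, standing in for a learned one.
kernel = np.array([
    [-1, -1, -1],
    [-1,  8, -1],
    [-1, -1, -1],
], dtype=float)

# Each output pixel is the kernel-weighted sum of the pixels beneath it.
feature_map = convolve2d(image, kernel, mode="valid")
print(feature_map)
```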

SLIDE 7

Mask R-CNN

Currently using an existing neural-network framework, the Mask Region-based Convolutional Neural Network (Mask R-CNN) [1], to perform detection and segmentation.

Input:

  • RGB Image

Output for each object:

  • Label
  • Bounding Box (x, y, h, w)
  • Segmentation Mask

[1] https://github.com/facebookresearch/detectron, https://arxiv.org/abs/1703.06870
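The deck uses the Detectron implementation linked in [1]. Purely to illustrate the same per-object outputs (label, bounding box, segmentation mask), here is a minimal sketch using torchvision's independent Mask R-CNN implementation; note torchvision reports boxes as (x1, y1, x2, y2) rather than (x, y, h, w).

```python
import torch
import torchvision

# Pretrained (COCO) Mask R-CNN from torchvision -- an independent
# implementation, shown only to illustrate the input/output contract.
model = torchvision.models.detection.maskrcnn_resnet50_fpn(pretrained=True)
model.eval()

# One RGB image: shape (3, H, W), values in [0, 1]. Random placeholder here.
image = torch.rand(3, 256, 256)

with torch.no_grad():
    (prediction,) = model([image])

print(prediction["labels"])       # class label per detected object
print(prediction["boxes"])        # bounding box (x1, y1, x2, y2) per object
print(prediction["masks"].shape)  # one segmentation mask per object
```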

SLIDE 8

Mask R-CNN

[Architecture diagram: each region proposal is projected onto a 7 × 7 grid, then passed to three output heads: classification, bounding box, and segmentation mask.]

http://cs231n.stanford.edu/slides/2017/cs231n_2017_lecture11.pdf
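The "project each proposal onto a 7 × 7 grid" step in the diagram corresponds to RoIAlign. A minimal sketch using torchvision.ops.roi_align, with placeholder feature-map and proposal values:

```python
import torch
from torchvision.ops import roi_align

# Backbone feature map for one image: (batch, channels, H, W). Placeholder values.
features = torch.rand(1, 256, 50, 50)

# One region proposal per row: (batch_index, x1, y1, x2, y2),
# given in input-image coordinates.
proposals = torch.tensor([[0.0, 40.0, 40.0, 360.0, 280.0]])

# Project each proposal onto a fixed 7 x 7 grid of the feature map.
# spatial_scale maps image coordinates to feature-map coordinates
# (here a 400-pixel image reduced to 50 feature cells -> scale 1/8).
pooled = roi_align(features, proposals, output_size=(7, 7), spatial_scale=1.0 / 8.0)
print(pooled.shape)  # torch.Size([1, 256, 7, 7])
```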

SLIDE 9

What it was developed for:
  1. Opaque objects
  2. Sharp edges
  3. Large objects
  4. Good image quality
  5. Same resolution in RGB bands
  6. Scenes where vertical & horizontal directions have special meaning

What we need it for:
  1. (Semi-)transparent objects
  2. No sharp edges
  3. Some objects as small as the pixel scale
  4. Lower SNR
  5. Resolution can vary between filters
  6. Image analysis that is directionally agnostic

SLIDE 10

Training data

  • Simulated images of two-galaxy pairs with varying overlap.
  • Pairs 0.6–2 arcsec apart.
  • Bulge+disk Sersic galaxies from CatSim, drawn with the WeakLensingDeblending package.
  • i < 24, 10-year LSST depth.
  • gri bands → RGB.
  • 18,000 pairs (72,000 images with data augmentation).
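The pairs above are drawn with the WeakLensingDeblending package. As a rough illustration of what rendering one bulge+disk pair involves, here is a hypothetical GalSim sketch; all parameter values are invented, and this is not the package's actual pipeline.

```python
import galsim

# Hypothetical parameters -- real values come from the CatSim catalog.
psf = galsim.Kolmogorov(fwhm=0.7)  # arcsec
pixel_scale = 0.2                  # arcsec/pixel, LSST-like

# Bulge+disk galaxy: two Sersic components.
bulge = galsim.Sersic(n=4, half_light_radius=0.5, flux=100.0)
disk = galsim.Sersic(n=1, half_light_radius=1.0, flux=300.0)
galaxy1 = bulge + disk
galaxy2 = (bulge + disk).shift(1.2, 0.0)  # second galaxy 1.2 arcsec away

# Blend the pair, convolve with the PSF, and render one postage stamp.
blend = galsim.Convolve(galaxy1 + galaxy2, psf)
image = blend.drawImage(nx=64, ny=64, scale=pixel_scale)
```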
SLIDE 11

Examples of successful detections

[Image grid: left column = truth, right column = network output.]

Green = true segmentation; red = CNN segmentation.

SLIDE 12

Examples of unsuccessful detections

Green = true segmentation; red = CNN segmentation.

SLIDE 13

Future Work on Deblending with Neural Networks

  • Optimize parameters and threshold values to reduce false positives.
  • Most architectures are built for input images with three bands (RGB) => modify so that all six bands of LSST images can be utilized (see the sketch after this list).
  • Modify the end layers of the network to output pixel values for individual galaxies instead of segmentation maps.
  • Include different kinds of sources (types of galaxies, stars, image artefacts?) and perform classification as well.
  • Investigate using space-based images (HST) as truth & Hyper Suprime-Cam (HSC) images as input for training and measuring performance.
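For the three-band limitation above, one common workaround is to swap the backbone's first convolution for a six-channel one. Sketched here with torchvision's Mask R-CNN as a stand-in; this is not necessarily the modification the authors will make.

```python
import torch
import torchvision

model = torchvision.models.detection.maskrcnn_resnet50_fpn(pretrained=True)

# The backbone's stem expects 3 input channels (RGB); swap in a
# 6-channel first convolution for the six LSST bands (ugrizy).
old_conv = model.backbone.body.conv1
model.backbone.body.conv1 = torch.nn.Conv2d(
    6, old_conv.out_channels,
    kernel_size=old_conv.kernel_size,
    stride=old_conv.stride,
    padding=old_conv.padding,
    bias=False,
)

# The input transform also normalizes per channel, so its statistics
# must be widened to 6 values (placeholder numbers here).
model.transform.image_mean = [0.0] * 6
model.transform.image_std = [1.0] * 6
```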

SLIDE 14

Residual Detection

  • Could unrecognized blends be detected in the residual images?
  • Aim: run the Mask R-CNN detection network, with residual images + the Scarlet model as input, to predict undetected source locations (a minimal sketch of assembling this input follows).
  • Work in progress.
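A minimal sketch of assembling that network input, assuming the observed cutout and the fitted Scarlet model are available as same-shape arrays; all names and values here are illustrative.

```python
import numpy as np

# Illustrative placeholders: the observed blend and the Scarlet fit,
# both as (bands, height, width) arrays on the same pixel grid.
observed = np.random.rand(3, 64, 64)
scarlet_model = np.random.rand(3, 64, 64)

# Residual = data minus model; an undetected source that the fit
# absorbed shows up as leftover structure in this image.
residual = observed - scarlet_model

# Stack residual and model as input channels for the detection network.
network_input = np.concatenate([residual, scarlet_model], axis=0)
```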
SLIDE 15

Undetected Objects

  • Object detection in the current pipeline has been developed for single-band images:
    ○ The detection algorithms do not use color information!
  • Some objects may be undetected due to blending.

Red X: true center of brighter source; Blue X: true center of dimmer source; Green O: DM-stack detected center.

SLIDE 16

Undetected Objects and Scarlet

  • Deblending with Scarlet requires prior information about the number of objects and their centers.
  • Scarlet’s symmetry and monotonicity constraints cause these undetected objects to be modelled into the detected source.

Red X: true center of brighter source; Blue X: true center of dimmer source; Green O: detected center; Red O: Scarlet fit center.
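To see why the constraints absorb an undetected neighbour, here is a toy NumPy illustration of a symmetry constraint, a deliberate simplification of Scarlet's actual optimization: projecting the model to be 180°-symmetric about the detected center mirrors the neighbour's flux to the opposite side, leaving a dipole in the residual.

```python
import numpy as np

def symmetrize(model):
    """Project an image onto 180-degree rotational symmetry about its
    central pixel (a toy stand-in for Scarlet's symmetry constraint)."""
    return 0.5 * (model + model[::-1, ::-1])

# Toy scene: bright source at the center, faint undetected
# neighbour offset to one side.
y, x = np.mgrid[-15:16, -15:16]
bright = np.exp(-(x**2 + y**2) / 8.0)
faint = 0.3 * np.exp(-((x - 5) ** 2 + y**2) / 8.0)
scene = bright + faint

# A single symmetric "source" fit at the bright center absorbs half the
# neighbour's flux and mirrors it to the other side...
model = symmetrize(scene)

# ...so the residual is antisymmetric: positive at the neighbour's true
# position, negative at the mirrored position -- a dipole pattern.
residual = scene - model
```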

SLIDE 17

Undetected Objects in the Residuals

  • This leads to dipole patterns in the residual image.

Red X: true center of brighter source; Blue X: true center of dimmer source; Green O: detected center; Red O: Scarlet fit center.

SLIDE 18

Red X: true center of brighter source; Blue X: true center of dimmer source; Green O: detected center; Red O: Scarlet fit center; Yellow square: undetected source location to be predicted by the network.

SLIDE 19

Results Coming Soon!

Red X: true center of brighter source; Blue X: true center of dimmer source; Green O: detected center; Red O: Scarlet fit center; Yellow square: undetected source location to be predicted by the network.