Generating Neural Networks for Microscopy Segmentation


  1. Generating Neural Networks for Microscopy Segmentation. Matthew Guay, November 1, 2017.

  2. ELECTRON MICROSCOPY Electron microscopy (EM) produces nanometer-scale images.

  3. SERIAL BLOCK-FACE IMAGING Serial block-face scanning electron microscopy (SBF-SEM): Image huge 3D samples by repeated cutting and scanning. (Denk, Horstmann 2004)

  4. SBF-SEM APPLICATIONS SBF-SEM images provide new insight into the organization of complex biological systems. Connectomics (vimeo.com/101018819) Systems biology (Pokrovskaya et al., 2016)

  5–6. IMAGE SEGMENTATION Image segmentation: Partition image pixels into labeled regions corresponding to image content. [Figure: natural-image and EM-image examples.]

  7. AUTOMATING EM SEGMENTATION Manual segmentation is infeasible for large SBF-SEM images. Automated segmentation: algorithmically classify each pixel, then manually correct. A segmentation algorithm is practical when manually correcting its output is much faster than manual segmentation from scratch.

  8. BIOLOGICAL SEGMENTATION CHALLENGES A practical segmentation algorithm requires high (> 99.9%) accuracy despite: noise + small objects, difficult label assignment.

  9. DEEP LEARNING FOR SEGMENTATION Sliding window network: Convolutional neural network classifies one pixel at a time (Ciresan et al., 2012). Encoder-decoder network (U-net): Convolutional encoding and decoding paths classify large image patches at once.
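
A sketch of the sliding-window idea, assuming a small hypothetical CNN rather than the exact Ciresan et al. network: extract a patch around each pixel and let the network label the patch's center.

```python
import numpy as np
import tensorflow as tf

PATCH = 33       # patch side length; the center pixel receives the label
N_CLASSES = 2

# Hypothetical small classifier, not the Ciresan et al. architecture.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 5, activation="relu", input_shape=(PATCH, PATCH, 1)),
    tf.keras.layers.MaxPool2D(),
    tf.keras.layers.Conv2D(64, 5, activation="relu"),
    tf.keras.layers.MaxPool2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(N_CLASSES, activation="softmax"),
])

def segment(image):
    """Classify every pixel by sliding a patch window over the image.
    (In practice patches would be batched to bound memory use.)"""
    half = PATCH // 2
    padded = np.pad(image, half, mode="reflect")
    patches = np.stack([
        padded[r:r + PATCH, c:c + PATCH]
        for r in range(image.shape[0])
        for c in range(image.shape[1])
    ])[..., None]                           # (n_pixels, PATCH, PATCH, 1)
    probs = model.predict(patches, verbose=0)
    return probs.argmax(axis=1).reshape(image.shape)
```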

  10. ENCODER-DECODER NETWORKS Encoding path: Convolution, pooling operations decompose an image into a multiscale collection of features. Decoding path: Convolution, transposed convolution operations synthesize a new image from encoder features. (Ronneberger et al., 2015)
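
A minimal encoder-decoder in this style, sketched with the Keras functional API (a simplified illustration, not Ronneberger et al.'s exact architecture):

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_unet(input_shape=(256, 256, 1), n_classes=2, n_kernels=32):
    inputs = tf.keras.Input(shape=input_shape)

    # Encoding path: convolve, then pool to decompose into coarser scales.
    e0 = layers.Conv2D(n_kernels, 3, padding="same", activation="relu")(inputs)
    p0 = layers.MaxPool2D()(e0)
    e1 = layers.Conv2D(2 * n_kernels, 3, padding="same", activation="relu")(p0)
    p1 = layers.MaxPool2D()(e1)

    # Bottleneck between the two paths.
    b = layers.Conv2D(4 * n_kernels, 3, padding="same", activation="relu")(p1)

    # Decoding path: transposed convolutions restore resolution, and skip
    # connections pass encoder features across to the decoder.
    u1 = layers.Conv2DTranspose(2 * n_kernels, 2, strides=2)(b)
    d1 = layers.Conv2D(2 * n_kernels, 3, padding="same", activation="relu")(
        layers.concatenate([u1, e1]))
    u0 = layers.Conv2DTranspose(n_kernels, 2, strides=2)(d1)
    d0 = layers.Conv2D(n_kernels, 3, padding="same", activation="relu")(
        layers.concatenate([u0, e0]))

    # Per-pixel class probabilities.
    outputs = layers.Conv2D(n_classes, 1, activation="softmax")(d0)
    return tf.keras.Model(inputs, outputs)
```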

  11. BUILDING ENCODER-DECODER NETWORKS Many design choices are required to build encoder-decoder network architectures:
  - Convolution kernel size
  - Convolution kernels per layer
  - Convolution layers per stack
  - Use dropout?
  - Use batch normalization?
  - Convolution layer regularization
  Design choices can be represented as numeric hyperparameters (HPs). Architecture design ⇔ HP space search.
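
One way to encode such choices numerically, as a dict of feasible regions. The HP names match ones used later in the deck, but the dict encoding and the region bounds are made-up examples, not genenet's actual API:

```python
# Feasible regions for architecture HPs (illustrative values).
hp_space = {
    "n_convkernels":     range(16, 129),    # convolution kernels per layer
    "n_convlayers":      range(1, 6),       # convolution layers per stack
    "n_stacks":          range(1, 5),       # stacks per path
    "input_size":        [48, 150, 254],    # training patch side length
    "log_learning_rate": (-6.0, -2.0),      # continuous range
    "use_dropout":       [0, 1],            # booleans become {0, 1}
    "use_batchnorm":     [0, 1],
}
```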

  12. ALGORITHMIC NETWORK DESIGN Two optimization problems arise when applying neural networks to a problem domain. Learning: Optimize network weight parameters. Parameter ranges are continuous and the objective function is (sub)differentiable; evaluation is cheap, so optimize with backpropagation. Architecture design: Optimize network HPs. Ranges mix continuous and discrete values and the objective function is not differentiable; evaluation is expensive, so optimization is an unstructured search.
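
For contrast, the cheap half in code: one backpropagation step on the weights, written in plain TensorFlow (generic training code, not genenet-specific):

```python
import tensorflow as tf

optimizer = tf.keras.optimizers.Adam(1e-4)

def train_step(model, images, labels):
    # Forward pass under a gradient tape, then backpropagate.
    with tf.GradientTape() as tape:
        probs = model(images, training=True)
        loss = tf.reduce_mean(
            tf.keras.losses.sparse_categorical_crossentropy(labels, probs))
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss
```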

  13. THE GENENET LIBRARY genenet: Build, train, and deploy encoder-decoder networks for segmentation using Python and TensorFlow. Goal: simple network design for humans and algorithms. Builds computation graphs from Gene graphs.

  14. THE GENE GRAPH Computation graph: a sequence of functions mapping network input to output. Gene graph: a height-n tree of Genes that builds a computation graph. Gene: a Gene graph node; each builds a subgraph (module) in the computation graph.
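
A schematic of the recursive build, assuming a simplified Gene class (an illustration of the idea, not genenet's actual implementation):

```python
class Gene:
    def __init__(self, name, children=()):
        self.name = name
        self.children = list(children)   # empty for leaf Genes (height 0)

    @property
    def height(self):
        return 0 if not self.children else 1 + max(c.height for c in self.children)

    def build(self, tensor):
        """Map an input tensor to this Gene's module output."""
        if not self.children:            # leaf: build a small module
            return self._build_module(tensor)
        for child in self.children:      # internal: chain child constructions
            tensor = child.build(tensor)
        return tensor

    def _build_module(self, tensor):
        raise NotImplementedError        # e.g. a conv-layer Gene adds a conv op
```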

  15. THE GENE GRAPH Leaf Genes (height 0) build small modules. Internal Genes (height i > 0) assemble their child Genes' constructions into larger modules. The root Gene (height n) assembles a full computation graph in TensorFlow.

  16. [Figure: a Gene graph (net → encode/decode paths → stacks → layers → edge Genes) alongside the computation graph it builds.]

  17–21. [Figures: the same Gene graph with the EdgeGene, ConvLayerGene, StackGene, PathGene, and NetGene nodes highlighted in turn.]

  22. HP CALCULATION WITH GENENET Consider a height-i Gene g_i with ancestors g_{i+1}, ..., g_n. An HP h (e.g. n_convlayers) has value h_i at Gene g_i. Each g_i tracks a delta value Δh_i, and h_i = Δh_i + Δh_{i+1} + ··· + Δh_n.
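
A sketch of that sum, assuming each Gene stores a `deltas` dict and a `parent` reference (illustrative attributes, not genenet's actual API):

```python
def hp_value(gene, hp_name):
    """Sum delta values on the path from this Gene up to the root:
    h_i = Δh_i + Δh_{i+1} + ... + Δh_n."""
    total = 0
    node = gene
    while node is not None:              # walk g_i, g_{i+1}, ..., g_n
        total += node.deltas.get(hp_name, 0)
        node = node.parent               # the root's parent is None
    return total
```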

  23. RANDOM NETWORK GENERATION The delta scheme allows for easy random network generation. Choose feasible regions for HPs (one for height n, another for heights i < n), then sample values from height n downward. Changing Δh_n affects h for the whole Gene graph; changing Δh_0 affects h only for g_0 and its descendants.
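
A sketch of the sampling pass, using the same illustrative Gene attributes as above (randint suits integer HPs such as n_convlayers; a continuous HP would use random.uniform):

```python
import random

def sample_deltas(root, hp_name, root_region, child_region):
    # Sample the root's delta from the height-n feasible region; this
    # shifts h for the whole Gene graph.
    root.deltas[hp_name] = random.randint(*root_region)
    # Sample every descendant's delta from the height-i < n region; each
    # choice only perturbs that Gene and its descendants.
    stack = list(root.children)
    while stack:
        node = stack.pop()
        node.deltas[hp_name] = random.randint(*child_region)
        stack.extend(node.children)
```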

  24. ENSEMBLE SEGMENTATION ALGORITHMS Classifier ensemble: Take some classifiers and average their predictions. For EM segmentation, form an ensemble from high-performing neural networks. Diverse network architectures contribute to high ensemble ambiguity, improving performance (Krogh, Vedelsby 1995).
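
The averaging step in a few lines (a minimal sketch; names are illustrative):

```python
import numpy as np

def ensemble_predict(prob_maps):
    """prob_maps: list of (H, W, n_classes) softmax outputs, one per network."""
    mean_probs = np.mean(prob_maps, axis=0)    # average member predictions
    return mean_probs.argmax(axis=-1)          # (H, W) label image
```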

  25. PRELIMINARY SBF-SEM RESULTS Our lab imaged a human platelet sample with a Gatan 3View. Goal: Segment cells and 5 organelle types in a 250 × 2000 × 2000 volume.

  26. TRAINING ON BIOWULF Biowulf: NIH high-performance computing cluster. Create training jobs with Bash, load Singularity containers on Biowulf nodes, and run genenet scripts. Train networks on NVIDIA K80 GPUs.

  27. SEGMENTATION NETWORK TRAINING Lab members manually segmented a 50 × 800 × 800 subvolume. We trained 80 random networks for 100,000 iterations on Biowulf. Mutable HPs: n_convkernels, n_stacks, n_convlayers, input_size, log_learning_rate, and regularization HPs.

  28. [Figure: architecture diagrams for four sampled networks, labeled 0905_46, 22, 1, and 40 on the slide, with statistics:]
  - 534,717 params, 13 layers, input shape [48, 48], 110 kernels, log learning rate -3.88
  - 352,510 params, 22 layers, input shape [48, 48], 53 kernels, log learning rate -5.2
  - network 1: 259,447 params, 49 layers, input shape [150, 150], 30 kernels, log learning rate -2.79
  - network 40: 16,946,302 params, 62 layers, input shape [254, 254], 87 kernels, log learning rate -4.39

  29. RANDOM NETWORK PERFORMANCE Below: comparison of random network validation performance (adjusted Rand score) with the original U-net (Ronneberger et al., 2015). Nine networks outperformed the original U-net.
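
The adjusted Rand score between two segmentations can be computed by flattening both label images, e.g. with scikit-learn (a toy example, not the evaluation code used for these results):

```python
import numpy as np
from sklearn.metrics import adjusted_rand_score

# Toy 2 x 3 label images standing in for ground-truth and predicted
# segmentations.
truth = np.array([[0, 0, 1],
                  [0, 1, 1]])
pred  = np.array([[0, 0, 1],
                  [1, 1, 1]])

print(adjusted_rand_score(truth.ravel(), pred.ravel()))
```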

  30. RANDOM NETWORK PERFORMANCE

  31. ENSEMBLE PERFORMANCE Strategy: form an ensemble from the best N networks and evaluate on validation data. N = 4 performs best.
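
A sketch of the best-N strategy (illustrative helper, not genenet code): rank the trained networks by validation score and ensemble only the top N probability maps.

```python
import numpy as np

def best_n_ensemble(prob_maps, val_scores, n=4):
    """prob_maps: list of per-network (H, W, n_classes) probability maps;
    val_scores: one validation score per network."""
    order = np.argsort(val_scores)[::-1]       # best-scoring networks first
    top = [prob_maps[i] for i in order[:n]]
    return np.mean(top, axis=0).argmax(axis=-1)
```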
