Automatic classification of galaxy morphology with ZEST+

Automatic classification of galaxy morphology with ZEST+
Mariano Ciccolini, Institut für Astronomie, ETHZ
PSI-LTP Theory seminar, ETH Zürich - IfA


  1-4. Pattern Recognition: Supervised Learning

  [Figure: scatter plot of Graphics rate (0-0.5) vs. Equations rate (0-0.5) for papers labelled hep or astro, plus one unclassified point "?"]

  x ∈ Π ⇔ w · x + b = 0
  f(x) = sgn(w · x + b)
  (x_i, y_i) in training sample ⇒ y_i = f(x_i)
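The decision rule on this slide is a one-liner in code. A minimal sketch in Python; the weight vector and bias below are made-up values for illustration, not a trained classifier:

```python
# Linear decision function f(x) = sgn(w . x + b).
# w, b are illustrative values, not fitted to real data.

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def f(x, w, b):
    """Return +1 (astro) or -1 (hep) for feature vector x."""
    return 1 if dot(w, x) + b >= 0 else -1

# x = (equations rate, graphics rate); hypothetical hyperplane.
w, b = (-4.0, 4.0), 0.0
print(f((0.4, 0.1), w, b))   # equation-heavy paper -> -1 (hep)
print(f((0.1, 0.4), w, b))   # graphics-heavy paper -> +1 (astro)
```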

  5-15. Pattern Recognition: Separating hyperplanes

  [Figure: the same hep/astro scatter, with candidate separating lines and the margin δ between the two classes]

  x ∈ Π ⇔ w · x + b = 0
  x ∈ Π± ⇔ w · x + b = ±1
  ⇒ Margin: δ = 2 / ‖w‖
  f(x) = sgn(w · x + b)
  (x_i, y_i) in training sample ⇒ y_i = f(x_i)
  w · x_i + b ≤ −1, i: hep
  w · x_j + b ≥ +1, j: astro
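The margin formula δ = 2/‖w‖ can be checked with a concrete number; the weight vector here is again a made-up example:

```python
import math

def margin(w):
    """Margin between the planes w.x + b = +1 and w.x + b = -1:
    delta = 2 / ||w||."""
    return 2.0 / math.sqrt(sum(wi * wi for wi in w))

print(margin((3.0, 4.0)))  # ||w|| = 5, so delta = 0.4
```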

  16. Optimal hyperplane: Formal approach

  Training sample: (x_1, y_1), …, (x_m, y_m) ∈ X × {±1}, X ⊂ ℝ^N, with (x_i, y_i) drawn from P(x, y).
  Goal: find f : X → {±1} such that f(x_i) = y_i on the training sample, and for a new draw P(x, y) → (x, y) ∈ X, f(x) = y.

  17. Optimal hyperplane: Formal approach (Karush-Kuhn-Tucker)

  minimize τ(w) = ½ ‖w‖²
  s.t. y_i((w · x_i) + b) ≥ 1, i = 1, …, m

  Lagrangian:
  L(w, b, α) = ½ ‖w‖² − Σ_{i=1}^m α_i (y_i((w · x_i) + b) − 1)

  KKT conditions:
  Σ_{i=1}^m α_i y_i = 0
  w = Σ_{i=1}^m α_i y_i x_i
  0 ≤ α_i, i = 1, …, m
  0 ≤ y_i(w · x_i + b) − 1
  0 = α_i (y_i(w · x_i + b) − 1)

  f(x) = sgn( Σ_{i=1}^m y_i α_i (x · x_i) + b )

  18. Optimal hyperplane: Formal approach (Wolfe dual problem)

  maximize W(α) = Σ_{i=1}^m α_i − ½ Σ_{i,j=1}^m α_i α_j y_i y_j (x_i · x_j)
  s.t. α_i ≥ 0, i = 1, …, m, and Σ_{i=1}^m α_i y_i = 0

  f(x) = sgn( Σ_{i=1}^m y_i α_i (x · x_i) + b )
  b = 1 − Σ_{i=1}^m α_i y_i (x_i · x_j), with α_j ≠ 0
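In the dual form the classifier only needs the training points with α_i ≠ 0 (the support vectors). A minimal sketch with hypothetical support vectors and multipliers:

```python
def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def f_dual(x, svs, b):
    """f(x) = sgn(sum_i y_i alpha_i (x . x_i) + b).
    svs: list of (alpha_i, y_i, x_i) tuples with alpha_i != 0."""
    s = sum(a * y * dot(x, xi) for a, y, xi in svs) + b
    return 1 if s >= 0 else -1

# Hypothetical support vectors (alpha, y, x) and bias, for illustration.
svs = [(1.0, +1, (0.1, 0.4)), (1.0, -1, (0.4, 0.1))]
b = 0.0
print(f_dual((0.0, 0.5), svs, b))  # graphics-heavy point -> +1
```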

  19-22. Optimal hyperplane: Non-linearly separable data

  [Figure: the hep/astro scatter with overlapping classes; slack variables ξ_i, ξ_j mark points violating the margin]

  The hard constraints w · x_i + b ≤ −1 (i: hep) and w · x_j + b ≥ +1 (j: astro), i.e. y_i(w · x_i + b) ≥ +1, are relaxed with slack variables:
  ξ_i ≥ 0, i = 1, …, m
  y_i(w · x_i + b) ≥ 1 − ξ_i

  23. Optimal hyperplane: Non-linearly separable data

  minimize τ(w) = ½ ‖w‖² + C Σ_{i=1}^m ξ_i
  s.t. ξ_i ≥ 0 and y_i(w · x_i + b) ≥ 1 − ξ_i, i = 1, …, m

  maximize W(α) = Σ_{i=1}^m α_i − ½ Σ_{i,j=1}^m α_i α_j y_i y_j (x_i · x_j)
  s.t. 0 ≤ α_i ≤ C, i = 1, …, m, and Σ_{i=1}^m α_i y_i = 0

  f(x) = sgn( Σ_{i=1}^m y_i α_i (x · x_i) + b )
  b = 1 − Σ_{i=1}^m α_i y_i (x_i · x_j), with 0 < α_j < C ⇒ ξ_j = 0
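For a fixed (w, b) the smallest feasible slack is ξ_i = max(0, 1 − y_i(w · x_i + b)), which makes the soft-margin objective easy to evaluate directly. The data and hyperplane below are illustrative values:

```python
def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def slack(w, b, x, y):
    """Smallest xi satisfying y*(w.x + b) >= 1 - xi and xi >= 0."""
    return max(0.0, 1.0 - y * (dot(w, x) + b))

def objective(w, b, data, C):
    """Soft-margin primal: tau(w) = 1/2 ||w||^2 + C * sum_i xi_i."""
    return 0.5 * dot(w, w) + C * sum(slack(w, b, x, y) for x, y in data)

# Two well-classified points and one exactly on the hyperplane.
data = [((0.1, 0.4), +1), ((0.4, 0.1), -1), ((0.3, 0.3), +1)]
w, b, C = (-4.0, 4.0), 0.0, 1.0
print(objective(w, b, data, C))  # -> 17.0 (16 margin term + 1 slack)
```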

  24. Support Vector Machines: Mapping data

  [Figure: a map Φ takes data from input space (x_1, x_2) to feature space (x̃_1, x̃_2), where the two classes become linearly separable]

  25-28. Support Vector Machines: Kernel methods

  X ⊂ ℝ^N, (x_1, y_1), …, (x_m, y_m) ∈ X × {±1}, with (x_i, y_i) drawn from P(x, y)
  f : X → {±1} with f(x_i) = y_i; for P(x, y) → (x, y) ∈ X, f(x) = y

  Kernel: k : X × X → ℝ, (x, x′) → k(x, x′)
  Feature map: Φ : X → F ⊂ ℝ^Ñ, x → x̃
  k(x, x′) ≡ k(x̃, x̃′) = x̃ · x̃′ = Φ(x) · Φ(x′)

  In the dual problem and decision function, every dot product x_i · x_j is replaced by k(x_i, x_j):
  maximize W(α) = Σ_{i=1}^m α_i − ½ Σ_{i,j=1}^m α_i α_j y_i y_j k(x_i, x_j)
  s.t. 0 ≤ α_i ≤ C, i = 1, …, m, and Σ_{i=1}^m α_i y_i = 0
  f(x) = sgn( Σ_{i=1}^m y_i α_i k(x, x_i) + b )

  29-32. Support Vector Machines: Kernel methods

  linear:     K(x_i, x_j) = x_i · x_j
  polynomial: K(x_i, x_j) = (γ x_i · x_j + r_0)^d
  RBF:        K(x_i, x_j) = exp(−γ ‖x_i − x_j‖²)
  sigmoid:    K(x_i, x_j) = tanh(γ x_i · x_j + r_0)
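The four kernels above can be written out in a few lines; γ, r_0, and d below are arbitrary sample values, not tuned hyperparameters:

```python
import math

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def k_linear(xi, xj):
    return dot(xi, xj)

def k_poly(xi, xj, gamma=1.0, r0=1.0, d=2):
    return (gamma * dot(xi, xj) + r0) ** d

def k_rbf(xi, xj, gamma=1.0):
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(xi, xj)))

def k_sigmoid(xi, xj, gamma=1.0, r0=0.0):
    return math.tanh(gamma * dot(xi, xj) + r0)

x, y = (1.0, 0.0), (0.0, 1.0)
print(k_linear(x, y))  # 0.0 (orthogonal vectors)
print(k_poly(x, y))    # (0 + 1)^2 = 1.0
print(k_rbf(x, y))     # exp(-2)
```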

  33. Support Vector Machines: Error constraints

  Empirical and expected risk:
  R_emp[f] = (1/m) Σ_{i=1}^m ½ |f(x_i) − y_i|
  R[f] = ∫ ½ |f(x) − y| dP(x, y)

  R[f] ≤ R_emp[f] + φ(h/m, log(η)/m)
  φ(h/m, log(η)/m) = √[ (h (log(2m/h) + 1) − log(η/4)) / m ]

  h: capacity
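The capacity term φ can be evaluated numerically to see how the bound loosens with capacity h; the values of m, h, and η below are illustrative choices:

```python
import math

def phi(m, h, eta):
    """Confidence term of the risk bound:
    sqrt((h * (log(2m/h) + 1) - log(eta/4)) / m)."""
    return math.sqrt((h * (math.log(2 * m / h) + 1) - math.log(eta / 4)) / m)

# Larger capacity h widens the gap between empirical and expected risk.
print(phi(m=10000, h=100, eta=0.05))
print(phi(m=10000, h=1000, eta=0.05))
```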

  34-36. Multiclass Support Vector Machines

  2-class SVM generalisation to k classes.

  one-versus-one:
  • Train k(k−1)/2 SVMs
  • Classification: poll (majority vote)

  one-versus-all:
  • Train k SVMs
  • Classification: max{decision function}
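One-versus-one voting can be sketched as follows; the pairwise decision function below is a toy stand-in (nearest hypothetical class "centre") for a trained 2-class SVM:

```python
from collections import Counter
from itertools import combinations

def ovo_classify(x, classes, decide):
    """One-versus-one: one SVM per unordered pair, k(k-1)/2 machines.
    Each pairwise decision votes; the poll winner is returned.
    decide(a, b, x) stands in for the trained (a vs b) SVM."""
    votes = Counter(decide(a, b, x) for a, b in combinations(classes, 2))
    return votes.most_common(1)[0][0]

# Toy stand-in: classify a 1-D feature by nearest class centre
# (hypothetical values, not real morphology coefficients).
centres = {"early": 0.0, "late": 0.5, "irregular": 1.0}

def decide(a, b, x):
    return a if abs(x - centres[a]) < abs(x - centres[b]) else b

classes = list(centres)
print(len(list(combinations(classes, 2))))  # k(k-1)/2 = 3 SVMs for k = 3
print(ovo_classify(0.45, classes, decide))  # -> "late"
```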

  37. Support Vector Machines in HEP

  P. Vannerem et al., Classifying LEP data with support vector algorithms (hep-ex/9905027)
  In e+e− → q q̄:
  • Charm tagging: ANN and SVM give consistent results
  • Muon identification

  A. Vaiciulis, SVM in analysis of Top quark production (Nucl. Instrum. Meth. A502 (2003) 492)
  Signal and background efficiency consistent with the best set of cuts

  TMVA: Toolkit for Multivariate Analysis
  Integrated machine-learning environment for CERN's ROOT
  http://tmva.sourceforge.net

  38. Summary

  Morphological classes: 1 Early, 2 Late, 3 Irregular
  Non-parametric coefficients: C, A, S, …
  Class M = M(C, A, S, G, M_20, …)
  Support Vector Machines: train the SVM, then classify a very large catalogue.

  39. ZEST+: The Zurich Estimator of Structural Types

  • The evolution of ZEST+
  • General approach
  • Details
  • Applications: COSMOS
  • Challenges ahead

  40. The evolution of ZEST+

  ZEST: C. Scarlata & M. Carollo (2007)
  • C, A, G, M_20, ε, and Sérsic n.
  • PCA analysis: 3D classification grid.
  • IDL application, no public release.

  ZEST+:
  • First C++ version, without classification (E. Weihs).
  • Further modifications (T. Bschorr).
  • Complete rewrite, new features, SVM classification (M.C.)

  41. ZEST+ Architecture

  Catalogue → Initialisation → Pre-processing → Characterisation → Coefficients → Classification → Morphologies

  42-45. ZEST+: Pre-processing

  • Basic segmentation
  • Image cleaning
  • Segmentation refinement

  46. ZEST+: Segmentation refinement

  Galaxy's centre: centre of asymmetry
  Galaxy's size: Petrosian radius

  η(R) = (∫_0^R 2πR′ I(R′) dR′) / (πR² I(R))
  η(R_p^α) = 1/α, with R_p ≡ R_p^{0.2}
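The Petrosian radius can be found numerically from the ratio η(R) above. A sketch assuming a hypothetical exponential surface-brightness profile and a simple grid search for the radius where η(R) = 1/0.2 = 5:

```python
import math

def eta(R, I, n=2000):
    """eta(R) = (int_0^R 2*pi*r I(r) dr) / (pi R^2 I(R)):
    mean surface brightness inside R over surface brightness at R."""
    dr = R / n
    integral = sum(2 * math.pi * (i + 0.5) * dr * I((i + 0.5) * dr) * dr
                   for i in range(n))
    return integral / (math.pi * R * R * I(R))

def petrosian_radius(I, alpha=0.2, r_max=20.0, step=0.01):
    """Smallest R with eta(R) >= 1/alpha; for a declining profile eta
    grows outward, and R_p is defined by eta(R_p) = 1/alpha."""
    R = step
    while R < r_max and eta(R, I) < 1.0 / alpha:
        R += step
    return R

# Hypothetical exponential disc profile, scale length h = 1.
I_exp = lambda r: math.exp(-r)
print(petrosian_radius(I_exp))  # a few scale lengths
```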

  47-51. ZEST+: Characterisation

  • Diagnostics: centre iterations, R_p roots, background, signal-to-noise, negative pixels, contamination
  • Structure analysis: C, A, S, G, M_20
  • Substructure analysis: selection; ε, C, A, G, M_20
  • Save results: Id, …, ε, C, A, S, M_20, G (structure); C, A, G, M_20 (substructure); errors

  52-54. ZEST+: Coefficients revisited

  Differences from the ideal case:
  • Negative pixels
  • Low signal-to-noise ratio
  • Background artefacts

  A = A_0 − A_bkg
  S = S_0 − S_bkg
  G: G(I_j) → G(|I_j|)

  55. ZEST+: Substructure analysis

  1. Self-subtract a smoothed image.
  2. Threshold and eliminate isolated pixels.
  3. Measure all morphological coefficients.
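The first two steps above can be sketched on a 1-D pixel row; the boxcar width and threshold below are arbitrary illustrative choices, not the values ZEST+ uses:

```python
def boxcar(signal, width=3):
    """Simple moving-average smoothing (edges use a shorter window)."""
    n, half = len(signal), width // 2
    out = []
    for i in range(n):
        window = signal[max(0, i - half):min(n, i + half + 1)]
        out.append(sum(window) / len(window))
    return out

def substructure(signal, threshold=0.5):
    """1. Self-subtract the smoothed signal.
       2. Threshold; drop isolated pixels (no flagged neighbour).
       Returns indices of substructure pixels."""
    residual = [s - m for s, m in zip(signal, boxcar(signal))]
    flagged = [i for i, r in enumerate(residual) if r > threshold]
    keep = {i for i in flagged if (i - 1 in flagged) or (i + 1 in flagged)}
    return sorted(keep)

# Smooth profile plus a two-pixel "clump" at indices 4-5 (made-up data).
row = [1.0, 1.0, 1.0, 1.0, 4.0, 4.0, 1.0, 1.0, 1.0]
print(substructure(row))  # -> [4, 5]
```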

  56. ZEST+: Classification

  Algorithm: Support Vector Machines
  Implementation: libSVM-3.88 (C. Chang & C. Lin 2001)
  • Stand-alone SVM C applications.
  • SVM library.
  • Kernels: linear, polynomial, RBF, sigmoid, and user-provided.
  • Multiclass algorithm: one-versus-one.
  • Supports SVM probabilities.

  57. ZEST+: Classification

  Catalogue → Pre-processing → Characterisation (C, A, S, G, M_20) → libSVM interface (with SVM info and external data: z, B/D, etc.) → Morphologies

  58. Recent ZEST+ applications

  • Automatic pre-processing of large datasets.
  • Mask preparation for use with external fitting programs.
  • Petrosian-radius calculation for simulated galaxies.
  • Morphological data for a recent VLT proposal.
  • Study of star-forming E/S0 galaxies.
  • Substructure identification for a tidal-features study.
  • Morphological analysis in the COSMOS survey.

  59. Applications: COSMOS

  Objective: probe galaxy formation and evolution as a function of z and large-scale-structure (LSS) environment.
  • NASA HST Treasury Project with ACS
  • Largest HST survey
  • 2 square degree equatorial field
  • 2 million objects to I_AB ≈ 27 mag
  • Up to z ~ 5
