
Outline: Parametric shape optimization · Intrinsic dimensionality · Meta-modeling in eigenbasis · Optimization in eigenbasis · Conclusions

Bayesian Optimization in Reduced Eigenbases

David Gaudrie¹, Rodolphe Le Riche², Victor Picheny³, Benoît Enaux¹, Vincent Herbert¹

¹Groupe PSA  ²CNRS LIMOS, École Nationale Supérieure des Mines de Saint-Étienne  ³Prowler.io

PGMO Days, Saclay, December 4th 2019

Parametric shape optimization

Industrial context: Groupe PSA aims to optimize systems such as vehicle shapes, combustion chamber designs, ...

min_{x∈X} f(x), where x are the CAD parameters of a shape Ω_x.

Other CAD parameters → other shapes. (Figures: NACA 22 airfoil, electric machine rotor.)

Expensive computer code

Restricted number of evaluations (≈ 100-200) and limited knowledge of f(·) ⇒ use of surrogate models (Gaussian Processes) ⇒ Bayesian optimization.

Curse of dimensionality: x ∈ X ⊂ R^d with d of the order of 50 ⇒ too many design variables with regard to the available budget. The computation time of a shape Ω_x is negligible compared to the evaluation of f(x). Assumption: the shapes lie in a δ < d dimensional manifold.

⇒ Discover the δ-dimensional manifold, build the kriging model, and perform the optimization in it: rather than using the CAD parameters, work directly with relevant shapes.

Shape space dimension

CAD parameter vector x ∈ R^d, but δ < d dimensions should approximate Ω := {Ω_x, x ∈ X} precisely enough. (Figure: NACA 22 airfoils.)

The x_j's have a heterogeneous and local influence on Ω_x. Consider the designs in a high-dimensional feature space Φ via a mapping φ : X → Φ. A good choice of φ ⇒ possible to retrieve a δ-dimensional manifold.

High-dimensional shape mapping

Φ ⊂ R^D: high-dimensional space of shape representation. Three φ's are compared for their ability to reveal the δ-dimensional manifold in the corresponding Φ:
- Characteristic function χ(Ω_x) [Raghavan et al., 2013],
- Signed distance to the contour ε(∂Ω_x) [Raghavan et al., 2014],
- Discretization of the contour D(∂Ω_x) [Stegmann and Gomez, 2002].

PCA on high-dimensional shape representation

Sample N designs x^(i) ~ U(X) and build Φ ∈ R^{N×D} with the Φ^(i) = φ(x^(i)) in rows. Apply a PCA to Φ: the “eigenshapes” v_j form an orthonormal shape basis {v_1, ..., v_D} of Φ with decreasing importance,

φ(x^(i)) = Φ^(i) = φ̄ + Σ_{j=1}^D α_j^(i) v_j,

where φ̄ is the mean shape ⇒ consider the eigenbasis coordinates α^(i) := (α_1^(i), ..., α_D^(i))^⊤ = V^⊤(Φ^(i) − φ̄).

The first α_j's describe the shapes globally, as opposed to the x_j's ⇒ an opportunity to distinguish a lower-dimensional manifold in Φ.

Remark: this is a “reversed” kernel PCA: a PCA in an ad-hoc feature space Φ accessed through the input-related mapping φ (unknown kernel k).
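The eigenshape construction can be sketched in a few lines. This is an illustrative sketch, not the authors' code: toy data lying on a 3-dimensional linear manifold stands in for real contour discretizations φ(x^(i)) ∈ R^D.

```python
import numpy as np

# Sketch of the eigenshape decomposition: PCA on N shape representations
# phi(x^(i)) in R^D, yielding eigenshapes v_j and coordinates alpha^(i).
rng = np.random.default_rng(0)

N, D = 200, 60                       # N sampled designs, D-dimensional mapping
# Toy stand-in for phi(x^(i)): points on a 3-dimensional linear manifold.
Phi = rng.normal(size=(N, 3)) @ rng.normal(size=(3, D))

phi_bar = Phi.mean(axis=0)                     # mean shape
C = (Phi - phi_bar).T @ (Phi - phi_bar) / N    # empirical covariance
lam, V = np.linalg.eigh(C)                     # eigh returns ascending order
lam, V = lam[::-1], V[:, ::-1]                 # sort by decreasing importance

alpha = (Phi - phi_bar) @ V                    # eigenbasis coordinates alpha^(i)
# Exact reconstruction: phi(x^(i)) = phi_bar + sum_j alpha_j^(i) v_j
assert np.allclose(Phi, phi_bar + alpha @ V.T)
```

Since the toy data has intrinsic dimension 3, only the first three eigenvalues are (numerically) nonzero, which is exactly the manifold-detection behavior the slides rely on.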

Dimension reduction

Simple idea: choose δ such that Σ_{j=1}^δ λ_j / Σ_{j=1}^D λ_j > T.

This is a geometric criterion only; it does not take the output y into account. The truncated reconstruction φ_{1:δ} := φ̄ + Σ_{j=1}^δ α_j v_j may erase features that impact y.

⇒ Build a surrogate model in the eigenbasis using the δ ≪ d variables, selected among the D, that influence y the most; do not completely omit the remaining eigenshapes.
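The geometric truncation rule above is a one-liner. A minimal sketch, with an assumed threshold T and made-up eigenvalues:

```python
import numpy as np

# Geometric truncation: keep the smallest delta such that the cumulated
# eigenvalue ratio sum_{j<=delta} lambda_j / sum_j lambda_j exceeds T.
def choose_delta(lam, T=0.99):
    """lam: PCA eigenvalues sorted in decreasing order."""
    ratio = np.cumsum(lam) / np.sum(lam)
    return int(np.searchsorted(ratio, T) + 1)   # 1-based dimension count

lam = np.array([5.0, 3.0, 1.5, 0.3, 0.15, 0.05])  # hypothetical spectrum
print(choose_delta(lam, T=0.90))  # smallest delta explaining > 90% variance
```

As the slide notes, this criterion ignores y entirely, which motivates the output-aware selection that follows.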

Active components selection

Anisotropic GP: θ_j → +∞ ⇒ dimension α_j has no influence on the output y. Variable selection: force the growth of the θ_j's by maximizing the penalized log-likelihood [Yi et al., 2011]

pl_λ(α^(1:n), y^(1:n); ϑ) := l(α^(1:n), y^(1:n); ϑ) − λ‖θ^{−1}‖_1

w.r.t. the GP hyper-parameters ϑ, where l is the log-likelihood and θ^{−1} = (1/θ_1, ..., 1/θ_D)^⊤. The active dimensions j, those with small enough θ_j, define α^a ∈ R^δ; the remaining dimensions define α^ā ∈ R^{D−δ}.


Zonal anisotropic Gaussian Process

Zonal anisotropy [Allard et al., 2016] for the surrogate model in the eigenshape basis:

Y(α) = Y^a(α^a) + Y^ā(α^ā), with α = [α^a, α^ā] and α^a ∈ R^δ the active dimensions.

Y^a(·): main-effect, anisotropic GP ⇒ δ + 1 hyper-parameters ϑ^a = (θ_1, ..., θ_δ, σ²_a) to be estimated.

Y^ā(·): sparse, residual-effect, isotropic GP ⇒ 2 hyper-parameters ϑ^ā = (θ_ā, σ²_ā) to be estimated.

y is primarily modeled through Y^a(·) and locally refined by Y^ā(·).
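Because the two GPs are independent, the additive model above amounts to summing two kernels: an anisotropic one on the δ active coordinates and an isotropic one on the rest. A minimal sketch (Gaussian kernels assumed; the slide does not specify the kernel family):

```python
import numpy as np

# Zonal additive kernel: k(a1, a2) = k_aniso(alpha^a blocks) + k_iso(alpha^abar
# blocks), matching Y = Y^a + Y^abar for independent GPs.
def zonal_kernel(a1, a2, delta, theta_a, sig2_a, theta_abar, sig2_abar):
    """a1, a2: eigenbasis coordinates, shapes (n1, D) and (n2, D)."""
    act1, act2 = a1[:, :delta], a2[:, :delta]        # active block alpha^a
    bar1, bar2 = a1[:, delta:], a2[:, delta:]        # inactive block alpha^abar
    d2_a = (((act1[:, None, :] - act2[None, :, :]) / theta_a) ** 2).sum(-1)
    d2_b = ((bar1[:, None, :] - bar2[None, :, :]) ** 2).sum(-1) / theta_abar ** 2
    return sig2_a * np.exp(-0.5 * d2_a) + sig2_abar * np.exp(-0.5 * d2_b)

rng = np.random.default_rng(2)
X1, X2 = rng.normal(size=(4, 10)), rng.normal(size=(3, 10))
K = zonal_kernel(X1, X2, delta=3, theta_a=np.ones(3), sig2_a=1.0,
                 theta_abar=5.0, sig2_abar=0.1)      # assumed hyper-parameters
```

Taking σ²_ā ≪ σ²_a encodes "primarily modeled through Y^a, locally refined by Y^ā".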

Surrogate model validation and comparison

NACA 22 lift metamodeling (n observations) with GPs built over different spaces, and the additive GP [Gaudrie et al., 2019]. R² coefficient on a validation set:

n     GP(X)   GP(α_{1:20})   GP(α_{1:2})   GP(α_{1:3})   GP(α_{1:6})   GP(α^a)   AddGP(α^a + α^ā)
50    0.956   0.973          0.714         0.935         0.950         0.970     0.984
100   0.975   0.989          0.708         0.938         0.962         0.981     0.992
200   0.987   0.995          0.515         0.954         0.968         0.993     0.996

(The n = 20 row was partially lost in extraction; the recovered values are 0.857, 0.907, 0.930, 0.935 and 0.957, with uncertain column alignment.)

Optimizing with eigenshapes

Goal: min_{x∈X} f(x).

Use the GP to save calls to the expensive f(·): find shapes with low Y(α) | y^(1) = f(α^(1)), ..., y^(t) = f(α^(t)) ⇒ iterative maximization of the Expected Improvement [Jones et al., 1998],

α^{(t+1)*} = arg max_{α∈A} EI(α),

where A ⊂ R^D is the manifold of admissible α's: ∀α ∈ A, ∃x ∈ X such that α = V^⊤(φ(x) − φ̄).
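For reference, the Expected Improvement being maximized has the standard closed form under a Gaussian posterior. A minimal sketch for minimization, taking the posterior mean m and standard deviation s as given:

```python
import numpy as np
from scipy.stats import norm

# Closed-form Expected Improvement [Jones et al., 1998] for minimization:
# EI = (y_min - m) * Phi(z) + s * phi(z), with z = (y_min - m) / s.
def expected_improvement(m, s, y_min):
    s = np.maximum(s, 1e-12)          # guard against zero posterior variance
    z = (y_min - m) / s
    return (y_min - m) * norm.cdf(z) + s * norm.pdf(z)

m = np.array([0.0, 1.0, -0.5])        # posterior means at three candidates
s = np.array([1.0, 0.1, 0.0])         # posterior standard deviations
ei = expected_improvement(m, s, y_min=0.0)
```

The third candidate (mean below the incumbent, no uncertainty) gets EI equal to its guaranteed improvement, 0.5; the second (mean well above, small s) gets essentially zero.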

Redefinition of improvement

Take advantage of the dimension reduction (α^a ∈ R^δ): the maximization of EI w.r.t. α^a is completed by a cheap optimization w.r.t. α^ā.

Maximize EI([α^a, α^ā]) over R^D → maximize EI([α^a, α]) over R^{δ+1}, where α ∈ R is the coordinate along a (D − δ)-dimensional random line a in the α^ā space [Wang et al., 2013]. The maximizer [α^{a*}, α*] yields

α^{(t+1)*} = [α^{a*}, α* a] = [α^{a*}, α* a_1, ..., α* a_{D−δ}].

Solve the pre-image problem x^{(t+1)} = arg min_{x∈X} ‖V^⊤(φ(x) − φ̄) − α^{(t+1)*}‖²_2 and evaluate x^{(t+1)} ⇒ y^{(t+1)} = f(x^{(t+1)}). Compute the eigenshape coordinates α^{(t+1)} and update the GP with (α^{(t+1)}, y^{(t+1)}).
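The completion step above can be sketched as follows. This is a hedged illustration with made-up dimensions: the inactive part of the candidate is a single scalar t times a fixed random line a, and drawing a_j ~ N(0, λ_j) (as suggested later in the REMBO slide) shapes the line by the inactive eigenvalues.

```python
import numpy as np

# Completion of the EI maximizer: search over [alpha^a, t] in R^{delta+1},
# then expand the scalar t along a random line a in the inactive subspace.
rng = np.random.default_rng(3)
delta, D = 3, 20                                 # assumed dimensions
lam_bar = 1.0 / np.arange(delta + 1, D + 1)      # toy inactive eigenvalues
a = rng.normal(0.0, np.sqrt(lam_bar))            # random line, a_j ~ N(0, lam_j)

def assemble(alpha_a, t):
    """Full D-dimensional candidate [alpha^a, t * a_1, ..., t * a_{D-delta}]."""
    return np.concatenate([alpha_a, t * a])

alpha_star = assemble(rng.normal(size=delta), t=0.7)
```

The resulting `alpha_star` plays the role of α^{(t+1)*}; the pre-image step then searches X for the CAD parameters whose eigenshape coordinates are closest to it.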

EI maximization

A is unknown and not a hyper-rectangle; it is approximated by the point set A_N := {α^(i)}_{i=1,...,N} ⇒ maximize the EI while staying close enough to A_N. This might suffer from a weak approximation A_N, since new designs are restricted to its vicinity. An alternative: replication.

EI maximization: replication strategy

The promoted α^{(t+1)*} is not restricted to A: the pre-image solution x^{(t+1)} has shape coordinates α^{(t+1)} ≠ α^{(t+1)*}. Replication strategy: update the GP with both α^{(t+1)} and α^{(t+1)*} if their shape representations φ^{(t+1)} and φ^{(t+1)*} are different. With the replication strategy, the variance at the EI maximizers vanishes and the design space gets better explored.

Example: NACA 22 airfoil drag minimization

Faster decrease of the objective function in the reduced eigenshape basis compared with the standard approach in the CAD parameter space. Smoother airfoils are obtained because a shape basis is considered instead of a combination of local parameters.

Conclusions

- Parametric shape optimization with Gaussian Processes.
- Principal Component Analysis in a high-dimensional feature space Φ, through a shape mapping φ ⇒ shape eigenbasis.
- Penalized MLE for the selection of relevant eigenshapes.
- Zonal anisotropy for the GP: emphasize retained components without omitting remaining dimensions.
- Bayesian optimization: optimize sharply w.r.t. the active components, and coarsely w.r.t. the remaining ones.

References

Allard, D., Senoussi, R., and Porcu, E. (2016). Anisotropy models for spatial data. Mathematical Geosciences, 48(3):305–328.

Gaudrie, D., Le Riche, R., Picheny, V., Enaux, B., and Herbert, V. (2019). Modeling and optimization with Gaussian processes in reduced eigenbases, extended version. arXiv preprint arXiv:1908.11272.

Jones, D. R., Schonlau, M., and Welch, W. J. (1998). Efficient global optimization of expensive black-box functions. Journal of Global Optimization, 13(4):455–492.

Raghavan, B., Breitkopf, P., Tourbier, Y., and Villon, P. (2013). Towards a space reduction approach for efficient structural shape optimization. Structural and Multidisciplinary Optimization, 48(5):987–1000.

Raghavan, B., Le Quilliec, G., Breitkopf, P., Rassineux, A., Roelandt, J.-M., and Villon, P. (2014). Numerical assessment of springback for the deep drawing process by level set interpolation using shape manifolds. International Journal of Material Forming, 7(4):487–501.

Stegmann, M. B. and Gomez, D. D. (2002). A brief introduction to statistical shape analysis. Informatics and Mathematical Modelling, Technical University of Denmark, DTU, 15(11).

Wang, Z., Zoghi, M., Hutter, F., Matheson, D., and De Freitas, N. (2013). Bayesian optimization in high dimensions via random embeddings. In Twenty-Third International Joint Conference on Artificial Intelligence.

Yi, G., Shi, J., and Choi, T. (2011). Penalized Gaussian process regression and classification for high-dimensional nonlinear data. Biometrics, 67(4):1285–1294.

Thank you for your attention. Do you have any questions?

PCA to retrieve effective dimension

Three easy test cases: a circle described by d = 1 to 3 CAD parameters (x_1; then x_1, x_2; then x_1, x_2, x_3). The samples A_N = {α^(i)}_{i=1,...,N} are plotted in their eigenvector basis V for each of the three mappings: characteristic function, signed distance, and discretization of the contour. (Figure: 3 × 3 grid of scatter plots in the first three eigen-dimensions; axis values omitted.)

For the discretization, Σ_{j=1}^d λ_j ≈ Σ_{j=1}^D λ_j: the first d eigenvalues already carry almost all of the variance, so the effective dimension d is retrieved.
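This test is easy to reproduce. Below is a hedged toy version (the parameter ranges and the discretization size are assumptions, not taken from the slides) using the contour-discretization mapping for a circle with d = 3 parameters: center (x_1, x_2) and radius x_3.

```python
import numpy as np

# Toy version of the appendix test: discretize random circles, run PCA on the
# contour coordinates, and check that the first d = 3 eigenvalues carry
# essentially all of the variance (the effective dimension is retrieved).
rng = np.random.default_rng(4)
N, m = 300, 64                                   # designs, contour points
t = np.linspace(0, 2 * np.pi, m, endpoint=False)

x = rng.uniform(0.5, 1.5, size=(N, 3))           # d = 3: center x1, x2; radius x3
Phi = np.concatenate([x[:, [0]] + x[:, [2]] * np.cos(t),
                      x[:, [1]] + x[:, [2]] * np.sin(t)], axis=1)  # D = 2m

Phi_c = Phi - Phi.mean(axis=0)                   # centered shape matrix
lam = np.linalg.svd(Phi_c, compute_uv=False) ** 2 / N   # PCA eigenvalues
print(lam[:4] / lam.sum())                       # ratios of leading eigenvalues
```

Because the discretized contour depends linearly on (x_1, x_2, x_3), the centered shape matrix has rank 3 and the trailing eigenvalues vanish, which is exactly the Σ_{j=1}^d λ_j ≈ Σ_{j=1}^D λ_j behavior shown on the slide.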
Multi-element shapes

Multi-element shapes: some permuted discretized vectors σ ∘ φ lead to exactly the same shape ⇒ enhance the metamodels by providing this information. σ is not directly visible in V ⇒ permutation-invariant kriging in the α space. A_σ: y-invariant permutation matrices in Φ ⇒ use a kernel k which is invariant to A_σ's counterpart in V, V_σ := V^⊤ A_σ^{−1} V.

Heart-rectangle problem

Objective function f(x) = ‖Ω_t − Ω_x̃‖²_2, where x̃ := x − (x_1 + 2.5, x_2 + 2.5, 0, ..., 0)^⊤ is the centered design, and Ω_t, Ω_x̃ are the nodal coordinates of the shapes.

Alternative EI maximization

Maximize the EI in X instead of A: max_{α∈A} EI(α) → max_{x∈X} EI(V^⊤(φ(x) − φ̄)).

Advantages: stays on the admissible manifold; no pre-image problem; surrogate models in a shape basis depending on each objective (e.g. PLS for data-driven reduction in each objective).

Drawbacks: dimension d ≫ δ + 1; ∇φ? ⇒ ∇EI?

Other alternatives: averaging w.r.t. α^ā; α^a maximization plus Monte Carlo maximization w.r.t. α^ā, or minimax in α^ā; ...

REMBO

Standard REMBO [Wang et al., 2013]: embedding matrix A ∈ R^{D×d_e} with d_e ≤ d ≪ D, entries of x in [−1, 1]. In our case the α^ā are inactive ⇒ d_e = 0 ⇒ use a random line a ∈ R^{(D−δ)×1} instead. Drawing a_j ~ N(0, λ_j) gives a random line which takes the structure of α^ā into account.

What about PLS?

PLS provides output-related dimension reduction, but: it is a linear method; only n eigenshapes are available (shape reconstruction error); the v_j are no longer orthonormal; and in a multi-objective problem, Y_1(·), ..., Y_m(·) have to share the same input space (the α space) for the maximization of EI's multi-objective counterpart.