SLIDE 1

Reconstruction and pattern recognition via the Petitot model

J.P. Gauthier, U. Boscain, Dario Prandi

University of Toulon and Ecole Polytechnique, Paris

January 2015

J.P. Gauthier, U. Boscain, Dario Prandi (University of Toulon and Ecole Polytechnique, Paris), Reconstruction and pattern recognition via the Petitot model, January 2015, 1 / 36

SLIDE 2

Plan

  • The Petitot model
  • The hypoelliptic diffusion and the semi-discrete diffusion
  • The lifts
  • Chu duality and Moore groups
  • The case of compact groups
  • The case of SE(2, N)
  • Pattern recognition and texture discrimination
  • A few results

SLIDE 3

Papers

  • U. Boscain, J. Duplaix, J.P. Gauthier, F. Rossi, Anthropomorphic image reconstruction via hypoelliptic diffusion, SIAM J. on Control and Optimization (SICON), 2012.
  • U. Boscain, J.P. Gauthier, D. Prandi, A. Remizov, Hypoelliptic diffusion and human vision: a semi-discrete new twist, SIAM J. on Imaging Sciences, 2014.
  • J.P. Gauthier, J. Miteran, F. Smach, Generalized Fourier descriptors with application to pattern recognition in SVM context, J. of Mathematical Imaging and Vision, 30, 2008.
  • And the book by J. Petitot: "Vers une neurogéométrie de la vision", Éd. de l'École Polytechnique, 2006.

SLIDE 4

The Petitot Model

In the visual cortex V1, groups of neurons are sensitive to both positions and directions.

SLIDE 5

Anthropomorphic vision-1

The model is:

\dot{x} = \cos(\theta)\,u, \quad \dot{y} = \sin(\theta)\,u, \quad \dot{\theta} = v,

J(u, v) = \int_0^T \big(u(t)^2 + v(t)^2\big)\, dt \to \min.

SLIDE 6

Anthropomorphic vision-2

To this model is associated a (hypoelliptic) diffusion equation:

\frac{d\Psi}{dt} = L\Psi, \quad L\Psi(x, y, \theta) = \tfrac{1}{2}\Big(\big(\cos(\theta)\,\tfrac{\partial}{\partial x} + \sin(\theta)\,\tfrac{\partial}{\partial y}\big)^2 + \tfrac{\partial^2}{\partial \theta^2}\Big)\Psi(x, y, \theta).

This corresponds to passing to a stochastic problem, exciting the system by two independent Brownian motions:

dx_t = \cos(\theta_t)\, du_t, \quad dy_t = \sin(\theta_t)\, du_t, \quad d\theta_t = dv_t.

Geodesics can be computed using the PMP; they are given by classical Jacobi elliptic functions. There are very close relations between the sub-Riemannian distance, the geodesics, and the small-time asymptotics of the heat kernel (for instance, \lim_{t\to 0} t \log(P_t(x)) = -\tfrac{1}{4}\, d(0, x)^2).
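The stochastic system above is easy to simulate directly. The following Euler-Maruyama sketch (path count, step count, and seed are our choices, not part of the talk) checks two elementary properties: θ_t is a standard Brownian motion, and x_t is a mean-zero martingale.

```python
import numpy as np

rng = np.random.default_rng(3)
n_paths, n_steps, T = 10_000, 200, 1.0
dt = T / n_steps

# Brownian increments du, dv driving the system
du = rng.normal(0.0, np.sqrt(dt), (n_paths, n_steps))
dv = rng.normal(0.0, np.sqrt(dt), (n_paths, n_steps))

# Euler-Maruyama for dx = cos(theta) du, dy = sin(theta) du, dtheta = dv
theta = np.cumsum(dv, axis=1)
theta_prev = np.hstack([np.zeros((n_paths, 1)), theta[:, :-1]])  # Ito: left endpoint
x = np.cumsum(np.cos(theta_prev) * du, axis=1)
y = np.cumsum(np.sin(theta_prev) * du, axis=1)

var_theta = theta[:, -1].var()   # Var(theta_T) should be close to T
mean_x = x[:, -1].mean()         # E[x_T] should be close to 0
```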

The heat kernel (fundamental solution) can be computed using noncommutative harmonic analysis over the group SE(2). It is given as a series of Mathieu functions:

SLIDE 7

Anthropomorphic vision-3

P_t(g) = \int_0^{+\infty} \Big( \sum_{n=0}^{+\infty} e^{a_n^{\lambda} t}\, \big\langle \mathrm{ce}_n(\theta, \tfrac{\lambda^2}{4}),\ \kappa^{\lambda}(X, \theta)\, \mathrm{ce}_n(\theta, \tfrac{\lambda^2}{4}) \big\rangle + \sum_{n=0}^{+\infty} e^{b_n^{\lambda} t}\, \big\langle \mathrm{se}_n(\theta, \tfrac{\lambda^2}{4}),\ \kappa^{\lambda}(X, \theta)\, \mathrm{se}_n(\theta, \tfrac{\lambda^2}{4}) \big\rangle \Big)\, \lambda\, d\lambda. \quad (1)

Due to the small number of pinwheels (≈ 20), the model is probably in fact semi-discrete, with stochastic equation:

dz_t = \begin{pmatrix} \cos(\theta_t) \\ \sin(\theta_t) \end{pmatrix} dw_t,

SLIDE 8

Fokker-Planck with jumps-1

where θ is a jump process and z = (x, y). Set \Lambda_N = (\lambda_{i,j}), i, j = 0, \ldots, N-1, where \lambda_{i,j} = \lim_{t\to 0} \tfrac{1}{t}\, P[\theta_t = e_j \mid \theta_0 = e_i] for i \neq j, with e_j = \tfrac{2j\pi}{N}, and \lambda_{i,i} = -\sum_{j \neq i} \lambda_{i,j}. Then \Lambda_N is the infinitesimal generator of the process θ.

We assume a Markov process in which the law of the first jump time is exponential, with parameter λ (to be specified later on), and the jump has probability 1/2 on each side.

Then we get a Poisson process, and the probability of k jumps between 0 and t is:

P[k \text{ jumps}] = \frac{(\lambda t)^k}{k!}\, e^{-\lambda t}.
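This jump-count law is easy to check by simulation; a minimal Monte Carlo sketch (the rate, horizon, and sample size are our choices):

```python
import math
import numpy as np

lam, t = 3.0, 1.0
rng = np.random.default_rng(0)

def count_jumps(lam, t, rng):
    """Count jumps on [0, t]: sum i.i.d. exponential waiting times
    of parameter lam until the running clock exceeds t."""
    n, clock = 0, rng.exponential(1 / lam)
    while clock <= t:
        n += 1
        clock += rng.exponential(1 / lam)
    return n

samples = np.array([count_jumps(lam, t, rng) for _ in range(100_000)])
freq2 = np.mean(samples == 2)                                 # empirical P[2 jumps]
p2 = (lam * t) ** 2 / math.factorial(2) * math.exp(-lam * t)  # (lam t)^2/2! e^{-lam t}
```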

SLIDE 9

Fokker-Planck with jumps-2

So that:

P[\theta_t = e_{i+1} \mid \theta_0 = e_i] = \tfrac{1}{2}\big[\lambda t + \tfrac{1}{2}\lambda^2 t^2 + \ldots\big]\, e^{-\lambda t}, \quad P[\theta_t = e_{i+2} \mid \theta_0 = e_i] = \tfrac{1}{4}\big[\tfrac{1}{2}\lambda^2 t^2 + \ldots\big]\, e^{-\lambda t},

with the convention that e_i is modulo N. So that \lambda_{i,i+1} = \lambda_{i,i-1} = \tfrac{1}{2}\lambda, and \lambda_{i,i} = -\lambda.

Then the infinitesimal generator of the semigroup associated with (z_t, \theta_t) is of the form: L_N \Psi(z, e_i) = (A\Psi)_i(z) + (\Lambda_N \Psi(z, \cdot))_i, where \Psi_j(z) = \Psi(z, e_j), and,

SLIDE 10

Fokker-Planck with jumps-3

(A\Psi)_i(z) = A\Psi(z, e_i) = \tfrac{1}{2}\big(\cos(e_i)\,\tfrac{\partial}{\partial x} + \sin(e_i)\,\tfrac{\partial}{\partial y}\big)^2 \Psi(x, y, e_i),

(\Lambda_N \Psi(z, \cdot))_i = \sum_{j=0}^{N-1} \lambda_{i,j}\, \Psi_j(z) = \tfrac{\lambda}{2}\big(\Psi_{i-1}(z) - 2\Psi_i(z) + \Psi_{i+1}(z)\big).

Then, if we set \lambda = \tfrac{N^2}{4\pi^2}, we get:

(\Lambda_N \Psi(z, \cdot))_i = \tfrac{1}{2}\, \frac{\Psi_{i-1}(z) - 2\Psi_i(z) + \Psi_{i+1}(z)}{(2\pi/N)^2} = \tfrac{1}{2}\, \tfrac{\partial^2}{\partial \theta^2} \Psi(z, e_i) + O\big(\tfrac{1}{N}\big).
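This second-difference limit is easy to verify numerically; a small sketch (N and the test function cos θ are our choices):

```python
import numpy as np

N = 64
lam = N**2 / (4 * np.pi**2)          # jump rate lambda = N^2 / (4 pi^2)
e = 2 * np.pi * np.arange(N) / N     # angles e_j = 2 j pi / N

# Generator Lambda_N: rate lambda/2 to each neighbour, -lambda on the diagonal
Lam = -lam * np.eye(N)
idx = np.arange(N)
Lam[idx, (idx + 1) % N] = lam / 2
Lam[idx, (idx - 1) % N] = lam / 2

# (Lambda_N psi)_i should approximate (1/2) psi''(e_i); take psi = cos
psi = np.cos(e)
err = np.max(np.abs(Lam @ psi - 0.5 * (-np.cos(e))))
```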

SLIDE 11

Fokker-Planck with jumps-4

At the limit we get:

L\Psi(z, \theta) = \tfrac{1}{2}\Big(\big(\cos(\theta)\,\tfrac{\partial}{\partial x} + \sin(\theta)\,\tfrac{\partial}{\partial y}\big)^2 + \tfrac{\partial^2}{\partial \theta^2}\Big)\Psi(x, y, \theta),

which is our diffusion equation, while the exact Fokker-Planck equation with a small number of angles is:

\frac{dp_j}{dt}(t, z) = \tfrac{1}{2}\big(\cos(e_j)\,\tfrac{\partial}{\partial x} + \sin(e_j)\,\tfrac{\partial}{\partial y}\big)^2 p_j(t, z) + \tfrac{\lambda}{2}\big(p_{j-1}(t, z) - 2 p_j(t, z) + p_{j+1}(t, z)\big).

SLIDE 12

Heat kernel via representations-1.

The group law of SE(2, N) is: (z, e_i) ∗ (w, e_j) = (z + R_i w, e_{i+j}), where R is the rotation of angle 2π/N.

It is a Moore group!! Unitary irreducible representations are given by Mackey's imprimitivity theorem. They work on Mackey's orbits, which are all Z/NZ. They are parametrized by the orbits of the action of the discrete rotations on the plane, i.e. the dual is the "slice of camembert" S_N (with the topology of the dual, you have to fold it in order to get the "french fries cone" F_N). Let λ, ν parametrize S_N. Then the unitary irreducible representations are given by:

\chi^{\lambda,\nu}(z, e_r) = \mathrm{diag}_k\big(e^{i\langle V_{\lambda,\nu},\, R^k z\rangle}\big)\, S^r,

where S is the shift mod N of the components in C^N. Also, V_{\lambda,\nu} = (\lambda \cos(\nu), \lambda \sin(\nu)). The Plancherel measure is λ dλ dν.
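The group law and the representation formula can be checked numerically. In this sketch (our illustration, with arbitrary λ, ν) the shift convention (Sv)_k = v_{k+1} is our choice of orientation; with it, χ^{λ,ν} comes out a unitary homomorphism:

```python
import numpy as np

N = 6
t = 2 * np.pi / N
R = np.array([[np.cos(t), -np.sin(t)],
              [np.sin(t),  np.cos(t)]])          # rotation by 2 pi / N

def mul(g1, g2):
    """Group law of SE(2,N): (z, i) * (w, j) = (z + R^i w, i + j mod N)."""
    (z, i), (w, j) = g1, g2
    return (z + np.linalg.matrix_power(R, i) @ w, (i + j) % N)

# Shift matrix S with (S v)_k = v_{k+1 mod N}
S = np.zeros((N, N))
for k in range(N):
    S[k, (k + 1) % N] = 1.0

def chi(g, lam=1.3, nu=0.4):
    """Irrep chi^{lam,nu}(z, e_r) = diag_k(exp(i <V, R^k z>)) S^r."""
    z, r = g
    V = np.array([lam * np.cos(nu), lam * np.sin(nu)])
    D = np.diag([np.exp(1j * V @ (np.linalg.matrix_power(R, k) @ z))
                 for k in range(N)])
    return D @ np.linalg.matrix_power(S, r)

rng = np.random.default_rng(0)
g1 = (rng.standard_normal(2), 2)
g2 = (rng.standard_normal(2), 5)
hom_err = np.max(np.abs(chi(mul(g1, g2)) - chi(g1) @ chi(g2)))
```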

SLIDE 13

Heat kernel via representations-2.

The GFT transforms our hypoelliptic equation into a continuous sum of (elliptic) ones. In the following, M^{λ,ν} is an N × N matrix:

\frac{dM^{\lambda,\nu}}{dt} = \Lambda_N M^{\lambda,\nu} - \mathrm{diag}_k\big[\lambda^2 \cos^2(e_k - \nu)\big]\, M^{\lambda,\nu} = \tilde{A}_{\lambda,\nu}\, M^{\lambda,\nu}.

This is a matrix Mathieu-type diffusion. Via the inverse GFT we get:

p_t(z, e_r) = \int_{S_N} \mathrm{trace}\big[e^{\tilde{A}_{\lambda,\nu} t}\, \mathrm{diag}_k\big(e^{i\langle V_{\lambda,\nu},\, R^k z\rangle}\big)\, S^r\big]\, \lambda\, d\lambda\, d\nu.

This is the jump heat kernel: a much simpler formula than in the case of SE(2).

SLIDE 14

The algorithm-1.

We could start with the kernel. What we do now is a bit less economic, but more understandable.

1. First, take the ordinary Fourier transform with respect to the space variable z. Write w for the dual variable to z, and set w = (λ cos(θ), λ sin(θ)). The diffusion becomes, at w:

\frac{dU}{dt} = \Lambda_N U - \mathrm{diag}_k\big[\lambda^2 \cos^2(e_k - \theta)\big]\, U.

Here we integrate too many Mathieu equations, but this step can be improved on.

2. Integrate with respect to t.
3. Take the ordinary inverse Fourier transform.

If you do this, you get the exact solution of the discrete diffusion.
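The steps above fit in a short NumPy sketch. This is our illustration, not the authors' code: the uniform lift over the angle channels, the grid size, and the 1/2 factors (taken from the limit equation dp/dt = ½(cos e_k ∂x + sin e_k ∂y)² p + Λ_N p) are our choices.

```python
import numpy as np

def hypoelliptic_diffuse(img, N=4, t=0.2):
    """FFT-based integration of the semi-discrete diffusion (sketch).

    1. lift the image uniformly over N angle channels;
    2. FFT in space;
    3. at each spatial frequency w, integrate the N x N Mathieu-type
       system dU/dt = (Lambda_N - (1/2) diag_k[(w.(cos e_k, sin e_k))^2]) U
       exactly, via a symmetric eigendecomposition;
    4. inverse FFT and project back by summing the channels.
    """
    M = img.shape[0]                          # assume a square M x M image
    e = 2 * np.pi * np.arange(N) / N          # angles e_k = 2 k pi / N
    lam = N**2 / (4 * np.pi**2)               # jump rate from the slides
    idx = np.arange(N)
    Lam = -lam * np.eye(N)                    # Lambda_N = (lam/2) * 2nd difference
    Lam[idx, (idx + 1) % N] = lam / 2
    Lam[idx, (idx - 1) % N] = lam / 2

    P_hat = np.fft.fft2(np.repeat(img[None] / N, N, axis=0), axes=(1, 2))
    freqs = 2 * np.pi * np.fft.fftfreq(M)     # dual variable w
    out = np.empty_like(P_hat)
    for a, wx in enumerate(freqs):
        for b, wy in enumerate(freqs):
            drift = (wx * np.cos(e) + wy * np.sin(e)) ** 2
            A = Lam - 0.5 * np.diag(drift)    # generator at this frequency
            vals, vecs = np.linalg.eigh(A)    # A is real symmetric
            out[:, a, b] = vecs @ (np.exp(t * vals) * (vecs.T @ P_hat[:, a, b]))
    return np.real(np.fft.ifft2(out, axes=(1, 2))).sum(axis=0)
```

Applied to a point mass, the output keeps the total mass (the zero-frequency system is generated by Λ_N alone, whose rows sum to zero) while spreading the peak.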

SLIDE 15

The algorithm-2.

In fact, first start from the discrete diffusion:

\frac{dp_k}{dt}(t, z) = \tfrac{1}{2}\big(\cos(e_k)\,\tfrac{\partial}{\partial x} + \sin(e_k)\,\tfrac{\partial}{\partial y}\big)^2 p_k(t, z) + \tfrac{\lambda}{2}\big(p_{k-1}(t, z) - 2 p_k(t, z) + p_{k+1}(t, z)\big).

1. Take a space discretization and the ordinary finite-differences approximation, to get:

\frac{dp_k}{dt}(t, z_{i,j}) = \tfrac{1}{2}\big(\cos(e_k)\, A_{\partial/\partial x} + \sin(e_k)\, A_{\partial/\partial y}\big)^2 p_k(t, z) + \tfrac{\lambda}{2}\big(p_{k-1}(t, z) - 2 p_k(t, z) + p_{k+1}(t, z)\big). \quad (D)

This is a very big linear differential system (512 × 512 × N).

2. Take the double FFT, to get a completely parallel system \frac{d\hat{P}_{i,j}}{dt}(t) = \hat{D}_{i,j}\, \hat{P}_{i,j}.
3. Integrate the 512 × 512 Mathieu-like equations with respect to time.
4. Take the inverse double FFT.

SLIDE 16

The algorithm-3.

We assume that the initial image is infinite and doubly periodic.

Theorem

This algorithm provides the exact solution to the space-discretized system (D):

\frac{dp_k}{dt}(t, z_{i,j}) = \tfrac{1}{2}\big(\cos(e_k)\, A_{\partial/\partial x} + \sin(e_k)\, A_{\partial/\partial y}\big)^2 p_k(t, z) + \tfrac{\lambda}{2}\big(p_{k-1}(t, z) - 2 p_k(t, z) + p_{k+1}(t, z)\big). \quad (D)

Using the heat kernel, one could integrate far fewer Mathieu-like equations: only one for each point of the slice S_N.

SLIDE 17

More precisely-1.

x, y : 0 \to \sqrt{M}, \quad x = \tfrac{k-1}{\sqrt{M}}, \quad y = \tfrac{l-1}{\sqrt{M}}.

\mathrm{FFT}_M(u)_{k,l} = \frac{1}{M} \sum_{r,s=1}^{M} u_{r,s}\, e^{-2\pi i \frac{(k-1)(r-1) + (l-1)(s-1)}{M}},

and conversely

u_{k,l} = \frac{1}{M} \sum_{r,s=1}^{M} \mathrm{FFT}_M(u)_{r,s}\, e^{2\pi i \frac{(k-1)(r-1) + (l-1)(s-1)}{M}}.

Then the discretized operator ∂/∂x is mapped to:

\widehat{\Big(\frac{\partial u}{\partial x}\Big)}_k = \frac{1}{\sqrt{M}} \sum_{r=1}^{M} \frac{u_{r+1} - u_{r-1}}{2/\sqrt{M}}\, e^{-2\pi i \frac{(k-1)(r-1)}{M}}
= \frac{\sqrt{M}}{2}\Big[\Big(\frac{1}{\sqrt{M}}\sum_{r=1}^{M} u_{r+1}\, e^{-2\pi i \frac{(k-1)r}{M}}\Big)\, e^{2\pi i \frac{k-1}{M}} - \Big(\frac{1}{\sqrt{M}}\sum_{r=1}^{M} u_{r-1}\, e^{-2\pi i \frac{(k-1)(r-2)}{M}}\Big)\, e^{-2\pi i \frac{k-1}{M}}\Big]
= \frac{\sqrt{M}}{2}\, \hat{u}_k\, \big(e^{2\pi i \frac{k-1}{M}} - e^{-2\pi i \frac{k-1}{M}}\big) = i \sqrt{M}\, \hat{u}_k\, \sin\Big(\frac{2\pi(k-1)}{M}\Big).
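The final identity can be checked numerically. The sketch below uses NumPy's unnormalized FFT and 0-based indices (the constant prefactor of the transform cancels in the identity, so the conventions do not matter):

```python
import numpy as np

M = 64
rng = np.random.default_rng(1)
u = rng.standard_normal(M)            # periodic samples, grid step h = 1/sqrt(M)

# central difference (u_{r+1} - u_{r-1}) / (2h) with h = 1/sqrt(M)
h = 1 / np.sqrt(M)
du = (np.roll(u, -1) - np.roll(u, 1)) / (2 * h)

# symbol of the discretized d/dx: multiplication by i sqrt(M) sin(2 pi k / M)
symbol = 1j * np.sqrt(M) * np.sin(2 * np.pi * np.arange(M) / M)
err = np.max(np.abs(np.fft.fft(du) - symbol * np.fft.fft(u)))
```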

SLIDE 18

More precisely-2.

Then cos(θ) ∂/∂x + sin(θ) ∂/∂y is mapped, at frequency (k, l), to multiplication by

i\sqrt{M}\Big(\sin\big(\tfrac{2\pi(k-1)}{M}\big)\cos(\theta) + \sin(\theta)\sin\big(\tfrac{2\pi(l-1)}{M}\big)\Big),

and:

\Big[\tfrac{\partial^2}{\partial\theta^2} + \alpha\big(\cos(\theta)\,\tfrac{\partial}{\partial x} + \sin(\theta)\,\tfrac{\partial}{\partial y}\big)^2\Big] u \ \to_{k,l}\ \tfrac{\partial^2}{\partial\theta^2}\hat{u}_{k,l} - \alpha M\Big(\sin\big(\tfrac{2\pi(k-1)}{M}\big)\cos(\theta) + \sin(\theta)\sin\big(\tfrac{2\pi(l-1)}{M}\big)\Big)^2 \hat{u}_{k,l}.

Finally, the diffusion becomes:

2\,\frac{\partial \hat{u}^r_{k,l}}{\partial t} = \alpha\, \frac{\hat{u}^{r-1}_{k,l} - 2\hat{u}^r_{k,l} + \hat{u}^{r+1}_{k,l}}{(2\pi/N)^2} - \beta M\Big(\sin\big(\tfrac{2\pi(k-1)}{M}\big)\cos(e_r) + \sin(e_r)\sin\big(\tfrac{2\pi(l-1)}{M}\big)\Big)^2 \hat{u}^r_{k,l}.

One can see the natural projectivization (due to the square).

SLIDE 19

Relation to kernel and to dual-1.

The (limit) diffusion is, after the space Fourier transform:

\frac{d\hat{u}}{dt} = \frac{\partial^2 \hat{u}}{\partial\theta^2} - \rho^2 \cos^2(\omega - \theta)\, \hat{u},

or, for each fixed ρ, ω:

\frac{d\hat{u}_{\rho,\omega}(\theta, t)}{dt} = \frac{\partial^2 \hat{u}_{\rho,\omega}}{\partial\theta^2} - \rho^2 \cos^2(\omega - \theta)\, \hat{u}_{\rho,\omega}.

Set \bar{u}_{\rho,\omega}(\omega - \theta, t) = \hat{u}_{\rho,\omega}(\theta, t), \tilde{\theta} = \omega - \theta, to obtain:

\frac{d\bar{u}_{\rho,\omega}(\tilde{\theta}, t)}{dt} = \frac{\partial^2 \bar{u}_{\rho,\omega}(\tilde{\theta}, t)}{\partial\tilde{\theta}^2} - \rho^2 \cos^2(\tilde{\theta})\, \bar{u}_{\rho,\omega}(\tilde{\theta}, t).

This means that we need to compute resolvents only along the dual half-line. In the discrete case, we need to compute resolvents only at each point of the slice of camembert.

SLIDE 20

Based upon this diffusion and certain heuristic complements, we get nice algorithms for image completion.

SLIDE 21

Another point of view.

In relation with what follows (for pattern recognition), there is a different point of view. There are finite-dimensional subspaces (of arbitrarily large dimension) of the space of almost-periodic functions over SE(2, N) that are invariant under the diffusion operator: any finite direct sum of N-dimensional spaces of irreducible representations of SE(2, N). In restriction to these spaces, the diffusion (not the discretization, but the exact diffusion) can be EXACTLY integrated by the previous algorithm.

SLIDE 22

The lifts 1

These considerations work in the general context of a semidirect product G = N ⋉ H of a compact group N with an abelian locally compact group H, where the Haar measure on H is invariant under the action R of N. In that case, all linear left-invariant lifts L from L²(H) to L²(G) such that f ↦ Lf(e, 0) is densely defined and bounded in L² norm are of the form:

Lf(n, X) = [(R_n \Psi) * f](X), \quad \text{where } (R_n \Psi)(X) = \Psi(R_{n^{-1}} X),\ \Psi \in L^2(H).

Moreover, such a map L is injective iff

\int_N |\hat{\Psi}(R_n \chi)|^2\, dn > 0 \quad \text{for a.e. } \chi \in \hat{H}.

Example: we can take for Ψ a standard orientation filter (a Gabor filter, for instance).

SLIDE 23

The lifts 2

Define, for f ∈ L²(H) and χ ∈ Ĥ, ω_f(χ) ∈ L²(N) by ω_f(χ)(n) = \hat{f}(R_{n^{-1}} χ).

Problem: the Fourier transform \widehat{Lf}(T^\chi) of a left-invariant lift is always a rank-one operator:

\widehat{Lf}(T^\chi) = \omega_f(\bar{\chi})^* \otimes \omega_\Psi(\chi).

This will be a big problem later on. From this point of view, there are better lifts than the left-invariant ones. For instance, if H = R², the "cyclic lift"

f^c(n, X) = f(R_n X + X_c), \quad X_c = \frac{1}{\hat{f}(0)} \int_H X f(X)\, dX.
SLIDE 24

Chu Duality 1

Chu duality is an extension of Tannaka duality that works for certain noncompact groups. Locally compact groups that have Chu duality: abelian, compact, Moore. Remark: not all MAP groups have Chu duality (Roeder's example). The group SE(2, N) is Moore, and hence has Chu duality.

Chu dual: for G a topological group, RP^N(G) denotes the set of all N-dimensional continuous unitary representations of G in C^N, with the compact-open topology, and RP(G) is the topological sum of the RP^N(G) over N ≥ 1. RP^N(G) is second countable provided that G is so. RP(G) is called the Chu dual of G.

SLIDE 25

Quasi representations

A quasi-representation of G is a continuous map Q from RP(G) to the topological sum U = ∪_{n≥1} U(n) of all unitary groups, with the following properties:

(Q1) Q(R) ∈ U(n(R)),
(Q2) Q(R ⊕ R′) = Q(R) ⊕ Q(R′),
(Q3) Q(R ⊗ R′) = Q(R) ⊗ Q(R′),
(Q4) Q(U⁻¹RU) = U⁻¹ Q(R) U,

for all R, R′ ∈ RP(G) and U ∈ U(n(R)). Denote by RPˇ(G) the set of all quasi-representations of G, endowed with the compact-open topology. RPˇ(G) is called the Chu quasi-dual of G. Set E(R) = Id_{n(R)} and Q⁻¹(R) = Q(R⁻¹). Then RPˇ(G) is a Hausdorff topological group, with E as its identity. For g ∈ G set ǧ(R) = R(g), and consider Ω : G → RPˇ(G), Ω(g) = ǧ. Ω is a continuous homomorphism, injective provided that G is MAP.

SLIDE 26

Chu duality again

Def: the group G has the Chu duality property if Ω is a topological isomorphism.

Theorem

Moore groups have the Chu duality property. Not all MAP groups have the Chu duality property: SE(2, N) has the Chu duality property, while SE(2) has not. Chu duality is a (topological) generalization of Tannaka duality for compact groups (which is itself an analog of Pontryagin duality for abelian groups).

SLIDE 27

Pattern recognition: the bispectral principle

Let G be a locally compact group, and let Ĝ denote as usual the dual of G, i.e. the set of (equivalence classes of) unitary irreducible representations of G. For f ∈ L²(G), define I_f : Ĝ × Ĝ → C by

I_f(\lambda_1, \lambda_2) = \hat{f}(\lambda_1) \otimes \hat{f}(\lambda_2) \circ \hat{f}(\lambda_1 \otimes \lambda_2)^*,

where \hat{f}(\lambda) is the Fourier transform of f and \lambda_1 \otimes \lambda_2 is the tensor-product representation. By the properties of the Fourier transform, I_f is invariant under the left action of G on L²(G).

Theorem

Let G be separable, and abelian, compact, or Moore (with certain restrictions). Then there is a residual subset R of L²(G) such that I_f = I_h implies h = g_0 · f for some g_0 ∈ G. That is, over the very big subset R of L²(G), functions are separated modulo translations by the invariants I_f.

SLIDE 29

Proof of the bispectral principle (compact case, sketch)

The generic set R is the subset of L²(G) such that the Fourier transform \hat{f}(λ) is invertible for all λ ∈ Ĝ. Assume I_f = I_h. Applying it with λ_1 = λ and λ_2 = T, the trivial representation, gives \hat{f}(λ) ∘ \hat{f}(λ)^* = \hat{h}(λ) ∘ \hat{h}(λ)^*. It follows that \hat{h}(λ) = \hat{f}(λ) U(λ) for some unitary operator U(λ). The map U extends uniquely to a map RP(G) → U by requiring Q2 (commutation with ⊕). Due to the definition of the Fourier transform, the map U meets Q4 (commutation with unitary equivalences). Property Q3 (commutation with tensor products) is obtained from the equality I_f = I_h and the definition of the generic set R. It follows that U is a quasi-representation, and by Chu (Tannaka) duality there is g ∈ G such that U = ǧ. Then \hat{h}(λ) = \hat{f}(λ)\, λ(g), and by the elementary property of the Fourier transform, h = g · f.

SLIDE 30

In the case of 1-dimensional signals f(t), the bispectral invariants are just

B(\lambda_1, \lambda_2) = \hat{f}(\lambda_1)\, \hat{f}(\lambda_2)\, \overline{\hat{f}(\lambda_1 + \lambda_2)}.

Note that B(\lambda_1, 0)/\hat{f}(0) is just the power spectral density of the signal, and B(\lambda_1, \lambda_2) contains the missing phase information. The B(\lambda_1, \lambda_2) are used in several areas of signal processing; for instance: Dubnov S., Tishby N. and Cohen D. (1997), "Polyspectra as Measures of Sound Texture and Timbre", Journal of New Music Research 26: 277–314. The I_f(\lambda_1, \lambda_2) (the bispectral invariants) are the generalization of these to 2-D signals (more precisely, to their lifts). It is expected that they contain all the information modulo motions.
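A discrete sketch of these 1-D invariants and their two basic properties (signal length, shift, and seed are our choices; the conjugate on the third factor matches the adjoint in the definition of I_f):

```python
import numpy as np

def bispectrum(f):
    """B(k1, k2) = F(k1) F(k2) conj(F(k1 + k2)) for a 1-D discrete
    signal, with frequency indices taken mod len(f)."""
    F = np.fft.fft(f)
    k = np.arange(len(f))
    return F[:, None] * F[None, :] * np.conj(F[(k[:, None] + k[None, :]) % len(f)])

rng = np.random.default_rng(2)
f = rng.standard_normal(32)
g = np.roll(f, 7)                                            # cyclic translate of f

shift_err = np.max(np.abs(bispectrum(f) - bispectrum(g)))    # translation invariance
psd_err = np.max(np.abs(bispectrum(f)[:, 0] / np.fft.fft(f)[0]
                        - np.abs(np.fft.fft(f)) ** 2))       # B(k,0)/F(0) = |F(k)|^2
```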

SLIDE 31

The case of SE(2,N) 1

Define

I_2^{\lambda_1, \lambda_2, k}(f) = \big\langle \omega_f(\lambda_1 + k\cdot\lambda_2),\ \omega_f(\lambda_1) \odot \omega_f(k\cdot\lambda_2) \big\rangle_{L^2(N)},

where ⊙ is the pointwise product. Then we have the following relation:

\big[\widehat{Lf}(\lambda_1 \otimes \lambda_2) \circ \widehat{Lf}(\lambda_1)^* \otimes \widehat{Lf}(\lambda_2)^*\big] \cdot F(u, u_{k^{-1}}) = I_2^{\lambda_1, \lambda_2, k}(f)\ \hat{\Psi}\big(u^{-1}(\lambda_1 + k\cdot\lambda_2)\big).

From the bispectral principle, we could expect that the I_2^{\lambda_1, \lambda_2, k} form a (weakly) complete set of invariants of the action of SE(2, N) on L²(H), but the argument does not go through, since the Fourier transform operators all have rank one. Completeness is still a conjecture, and we have arguments suggesting it could be false.

SLIDE 32

The case of SE(2,N) 2

Define \tilde{I}_2^{\lambda_1, \lambda_2, k}(f) = \big[\widehat{Lf}(\lambda_1 \otimes \lambda_2) \circ \widehat{Lf}(\lambda_1)^* \otimes \widehat{Lf}(R_{-k}\lambda_2)^*\big]. We say that ω_f(λ) is cyclic if \{S^n \omega_f(\lambda)\} is a basis of L²(N). A function f ∈ L²(H) is said to be weakly cyclic if ω_f(λ) is cyclic for almost all λ ∈ Ĥ. Note that the \tilde{I}_2^{\lambda_1, \lambda_2, k} are not invariant under translations (they work for centered functions only). We can prove:

Theorem (T)

Assume that Ψ is weakly cyclic and \hat{\Psi}(\lambda) \neq 0 for almost all λ ∈ Ĥ. Let f, g be weakly cyclic functions with compact support having the same \tilde{I}_2 invariants. Then there is n ∈ N such that f = R_n g.

The sketch of the proof is the same as the general proof for compact groups: we construct a quasi-representation of G, but a lot of complicated details appear. A key point is the induction-reduction theorem (an analog of the Clebsch-Gordan decomposition for compact groups).

SLIDE 33

Texture discrimination 1

Let K be a (finite or countable) subset of R², stable under the action of Z_N. It is natural to consider images whose lifts are almost-periodic functions f over SE(2, N),

f(n, x) = \sum_{h \in K} a(n, h)\, e^{i\langle R_n h,\, x\rangle},

lying in the Besicovitch class B². Those are called "texture images". The theory above can be adapted to these spaces of functions, and an analog of Theorem (T) above can be proved. Moreover, if K is finite, this space can be used (as was pointed out) to solve the diffusion equation exactly (first part of the talk): it is a finite-dimensional invariant subspace of the action of SE(2, N), and naturally a sum of spaces of irreducible representations. The (analog of the) conjecture of completeness of the invariants I_2^{\lambda_1, \lambda_2, k} is false (we have a counterexample).

SLIDE 34

A few results 1

Bispectral invariants have good properties with respect to scale: let f_\alpha(x) := f(\alpha x); then

B(\lambda_1, \lambda_2)(f_\alpha) = \frac{1}{\alpha^6}\, B\Big(\frac{\lambda_1}{\alpha}, \frac{\lambda_2}{\alpha}\Big)(f).

We can use this relation to eliminate scale effects. At this point we use a standard strategy: the bispectral invariants are used together with a learning machine (an SVM) to realize "pseudo 3D" pattern recognition. (Vapnik, Vladimir N., The Nature of Statistical Learning Theory, Springer-Verlag, 1995.) Also, for texture discrimination, it is very natural to use the almost-periodic context.

SLIDE 35

A few results 2

Some time ago, we obtained a series of very nice results on standard academic databases, comparing well with standard strategies.

SLIDE 36

A few results 3

Face detection on the ORL database

SLIDE 37

Work on texture discrimination is ongoing. We thank you for your attention.
