Computational statistics and machine learning methods for the life sciences, with an application to COVID-19 - PowerPoint PPT Presentation


slide-1
SLIDE 1

Computational statistics and machine learning methods for the life sciences, with an application to COVID-19

Gonzalo E. Mena, May 20th, 2020

Data Science Initiative and Statistics Department, Harvard University

slide-2
SLIDE 2

Opening words (caveats)

  • An academic talk: a summary of applied work in the life sciences (neuroscience).
  • At the end: some methods for COVID-19. More questions than answers. I will show some very preliminary, uncalibrated data and analyses.
  • Objective: to encourage discussion and to motivate work in the area and the use of Bayesian models.
  • Spanglish

slide-3
SLIDE 3

World's current situation

  • Several large-scale imaging and stimulation technologies, to read and write neural activity.
  • Consensus on relevance. Goal: to develop new experimental tools that will revolutionize our understanding of the brain.

Major bottleneck: data analysis capabilities are far below high-throughput data collection rates (TBs/hour). We cannot fully exploit the potential of these technologies.

slide-6
SLIDE 6

This talk: (Neural) Data Science + COVID-19 (at the end)

Claim

The dialogue between the life sciences (neuroscience) and Statistics/Mathematics/Computation is of mutual benefit. Here: Bayesian statistics.

slide-7
SLIDE 7

Large-scale Spike Sorting with Stimulation Artifacts

slide-8
SLIDE 8

Introduction

Overarching goal: stimulation and recording in large multi-electrode arrays (MEA) to read and write neural activity, in order to achieve control.

[Figure: experimental setup (lens, saline, retina); array dimensions 0.9 mm and 1.85 mm, electrodes of 8-15 µm at 60 µm pitch; electrical stimulation and physiological recording.]

  • For control, we need to know the stimulus → response map, fast.
  • Large-scale, online data analysis: 512 electrodes at 20 kHz, ∼50 GB/hour.
  • Scientific and clinical significance: development of a high-resolution retinal prosthesis.

slide-9
SLIDE 9

Tailored activation

Goal: to generate artificial vision by eliciting arbitrary patterns of neural activity with tailored stimuli.


slide-10
SLIDE 10

Tailored activation

Question: Is it possible to activate only the colored neurons?


slide-11
SLIDE 11

Tailored activation

Easier question: is it possible to activate only neuron A?


slide-12
SLIDE 12

Tailored activation

Stimulating with a pulse of 0.5µA on the electrode around the soma does not activate neuron A.


slide-13
SLIDE 13

Tailored activation

However, stimulating with 1.0µA does activate the neuron.


slide-14
SLIDE 14

Tailored activation

Further, stimulating with 1.5µA also activates nearby neuron B, through its axon.


slide-15
SLIDE 15

Tailored activation

Activation curves summarize the responsiveness of neurons. They are inferred from many trials at increasing stimulus strengths, as in the sketch below.
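One common way to infer such curves is to fit a sigmoid to the per-amplitude activation frequencies. A minimal sketch, assuming logistic curves and synthetic trial data (the function names, amplitudes, and fitting choice are illustrative, not the talk's exact procedure):

```python
import numpy as np
from scipy.optimize import curve_fit

def activation_curve(amp, thresh, slope):
    """Logistic activation probability as a function of stimulus amplitude."""
    return 1.0 / (1.0 + np.exp(-(amp - thresh) / slope))

# Hypothetical trials: spike (1) / no spike (0) at increasing amplitudes (µA).
amps = np.repeat(np.linspace(0.2, 2.0, 10), 20)  # 20 trials per amplitude
rng = np.random.default_rng(0)
spikes = rng.binomial(1, activation_curve(amps, thresh=1.0, slope=0.15))

# Fit by least squares on the per-amplitude empirical activation frequencies.
uniq = np.unique(amps)
freq = np.array([spikes[amps == a].mean() for a in uniq])
(thresh_hat, slope_hat), _ = curve_fit(activation_curve, uniq, freq, p0=[1.0, 0.1])
print(f"estimated threshold ≈ {thresh_hat:.2f} µA, slope ≈ {slope_hat:.2f}")
```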

slide-16
SLIDE 16

Stimulation artifacts

Major hurdle: electrical stimuli are sensed at the electrodes as artifacts, stymying the identification of neural activity.

  • Artifacts are much larger than spikes and overlap temporally with them.

Current solutions break down: the analysis can take a human weeks, and is not online.

slide-20
SLIDE 20

Stimulation Artifacts

Problem: the data contain a nuisance parameter A,

Y = A + s + ε,

with recorded traces Y, artifact A, neural activity s, and noise ε. To infer s we need to know A.

Solution: impose structure and prior knowledge on A, s, and ε so that Â and ŝ can be resolved.

slide-22
SLIDE 22

Neural activity structure

  • Spike sorting of spontaneous activity to identify neurons.
  • This provides us with templates (the spike, or action-potential, waveforms).

slide-23
SLIDE 23

The structure of stimulation artifacts

  • Properties are revealed by silencing neural activity.
  • The artifact decays smoothly with distance from the stimulating electrode and has a peak in time. It increases with stimulus strength, and doesn't change if the stimulus is the same.

Non-linear and non-stationary, but smooth and structured.

slide-26
SLIDE 26

Crafting a principled solution

Consider the model Y = A + s + ε.

  • Data Y = Y_{t,e,j,i} over time (1 ≤ t ≤ T), space (electrode, 1 ≤ e ≤ E), strength (1 ≤ j ≤ J) and trial (1 ≤ i ≤ I) dimensions.

Imposing structure

  • Represent neural activity s with Toeplitz matrices (shapes) and binary vectors (timings).
  • A Gaussian process (GP) encodes prior knowledge of the artifact, A ∼ GP(0, K_θ), and borrows strength.
  • Problem: n ≈ 10^6 artifact variables; O(n^3) exact GP inference does not scale.
  • Solution: a Kronecker decomposition, K_{(θ,φ²)} = ρ K_t ⊗ K_e ⊗ K_j + φ²I, as sketched after this list.
  • Each kernel must represent smoothness and non-stationarity.
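The Kronecker decomposition is what restores scalability: the eigendecomposition of K is assembled from the eigendecompositions of the three small factors, so solves against K never touch the full n × n matrix. A minimal NumPy sketch, assuming squared-exponential kernels and toy sizes (both are placeholder assumptions, not the paper's kernels):

```python
import numpy as np

def rbf(x, ell):
    """Squared-exponential kernel matrix on 1-D inputs x."""
    d = x[:, None] - x[None, :]
    return np.exp(-0.5 * (d / ell) ** 2)

# Hypothetical sizes and hyperparameters, for illustration only.
T, E, J = 40, 16, 8
rho, phi2 = 1.0, 0.1
Kt = rbf(np.arange(T, dtype=float), ell=3.0)
Ke = rbf(np.arange(E, dtype=float), ell=2.0)
Kj = rbf(np.arange(J, dtype=float), ell=1.5)

# K = rho * (Qt Λt Qt') ⊗ (Qe Λe Qe') ⊗ (Qj Λj Qj') + phi2*I, so computing
# K^{-1} y costs O(T^3 + E^3 + J^3) plus matrix products, never O((TEJ)^3).
eigs, vecs = zip(*(np.linalg.eigh(K) for K in (Kt, Ke, Kj)))
lam = rho * np.einsum('t,e,j->tej', *eigs).ravel() + phi2  # eigenvalues of full K

def kron_matvec(Qs, y):
    """Multiply (Q1 ⊗ Q2 ⊗ Q3) @ y without forming the Kronecker product."""
    A = y.reshape(T, E, J)
    A = np.einsum('ti,iej->tej', Qs[0], A)
    A = np.einsum('ei,tij->tej', Qs[1], A)
    A = np.einsum('ji,tei->tej', Qs[2], A)
    return A.ravel()

def K_inv_times(y):
    """Compute K^{-1} y via the Kronecker eigendecomposition."""
    z = kron_matvec([Q.T for Q in vecs], y)  # rotate into the eigenbasis
    return kron_matvec(list(vecs), z / lam)  # scale and rotate back

y = np.random.default_rng(0).normal(size=T * E * J)
print(K_inv_times(y).shape)  # (T*E*J,)
```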

slide-31
SLIDE 31

Algorithm

Goal: obtain Â, ŝ from the model Y = A + s + ε, A ∼ GP(0, K_θ̂).

  • Produce estimates sequentially in j (strength).
  • Rationale: at the lowest strengths A is better behaved and easier to estimate.
  • The initial guess Â⁰_{j+1} is the extrapolation from Â_{[1,j]}.
  • Given j, alternate between maximizing p(s_j | Y_j, Â_j, θ̂) for ŝ_j and maximizing p(A_j | Y_j, ŝ_j, θ̂) for Â_j.
  • ŝ_{j,i} given Â_j: the spike trains s^n_{j,i} = T^n b^n_{j,i} are parameterized by binary timing vectors b^n_{j,i}; do greedy template matching,

min over the b^n_{j,i} of ‖(Y_{j,i} − Â_j) − Σ_n T^n b^n_{j,i}‖².

  • Â_j given ŝ_j via filtering (the posterior mean) of the spike-subtracted traces.

A sketch of the greedy template-matching step follows.
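A minimal sketch of greedy template matching on one artifact-subtracted trace (the template shapes, sizes, and stopping rule here are illustrative assumptions, not the paper's exact implementation):

```python
import numpy as np

def greedy_template_matching(residual, templates, max_spikes=5):
    """Greedy matching pursuit: repeatedly subtract the template placement
    that most reduces the squared norm of the artifact-subtracted trace.

    residual  : (T,) trace with the artifact estimate already subtracted, Y - Â
    templates : list of (L,) action-potential waveforms T^n
    Returns the inferred (neuron, time) spike pairs.
    """
    residual = residual.copy()
    spikes = []
    for _ in range(max_spikes):
        best = None  # (gain, neuron, time)
        for n, w in enumerate(templates):
            L = len(w)
            for t in range(len(residual) - L + 1):
                seg = residual[t:t + L]
                # Reduction in ||residual||^2 from subtracting w at time t.
                gain = 2 * seg @ w - w @ w
                if best is None or gain > best[0]:
                    best = (gain, n, t)
        gain, n, t = best
        if gain <= 0:  # no placement improves the fit; stop
            break
        residual[t:t + len(templates[n])] -= templates[n]
        spikes.append((n, t))
    return spikes

# Toy usage with a hypothetical template and a synthetic trace.
rng = np.random.default_rng(1)
w0 = np.array([0.0, -1.0, 2.0, -0.5])
trace = rng.normal(scale=0.05, size=30)
trace[10:14] += w0
print(greedy_template_matching(trace, [w0]))  # expect [(0, 10)]
```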

slide-32
SLIDE 32

Example of sorting


slide-33
SLIDE 33

Large-scale automatic analysis

Gray dots indicate human judgement.


slide-34
SLIDE 34

Population results

1,713,233 trials.

  • Accuracy greater than 99.5%, with agreement in latencies as well.¹
  • Past: weeks → Now: ≈15 minutes. Compatible with online control experiments.
  • Enhanced the capabilities of the technology.

¹Mena et al., PLOS Computational Biology, 2017.

slide-35
SLIDE 35

Probabilistic neural identity inference in C. elegans

slide-36
SLIDE 36

The relevance of C. elegans


slide-40
SLIDE 40

A data processing pipeline

  • Raw data: 5D point processes (space × time × color).
  • First step: finding neurons.
  • Second step: identifying neurons.

slide-41
SLIDE 41

Find neurons with the help of color

Brainbow (Lichtman and Sanes, 2008): stochastic coloring of neurons.

Tamily Weissman, 2008 Photomicrography Competition.

slide-42
SLIDE 42

Neural identification: NeuroPAL

NeuroPAL: a Neuronal Polychromatic Atlas of Landmarks for whole-brain imaging in C. elegans. Now 'deterministic' colors.

[Figure: NeuroPAL worm; head (lateral and dorsal-ventral views), midbody and tail (dorsal-ventral views), with labeled ganglia and the ventral nerve cord; scale bars 100 µm and 10 µm.]

slide-43
SLIDE 43

A canonical representation

From training worms in their individual spaces w_i, define a latent canonical space z via affine transformations f_i.

slide-44
SLIDE 44

Probabilistic Atlas

In the canonical space, neurons are represented as z = (z_n) with

p(z) = ∏_{n=1}^{N} p(z_n) = ∏_{n=1}^{N} (2π)^{−d/2} |Σ_n|^{−1/2} exp(−(1/2)(z_n − µ_n)ᵀ Σ_n⁻¹ (z_n − µ_n)).

[Figure: the atlas over the head (ventral view), with labeled neurons (AIAL, AIAR, AIML, ..., VD1, VD2); L-R and A-P axes; scale bar 5 µm.]

slide-45
SLIDE 45

Probabilistic neural identification

  • There are always mistakes. How can we use this setup to model uncertainty, i.e. a distribution over permutations P?
  • In the canonical space, the observed data are y = Pz. This induces the posterior p(P | y, {µ, Σ}) ∝ exp(⟨P, log L⟩), with log L_{n,m} = −(1/2)(z_n − µ_m)ᵀ Σ_n⁻¹ (z_n − µ_m).
  • A deterministic assignment amounts to finding the maximum likelihood, i.e. solving max_P ⟨P, log L⟩.
  • Probabilistically, we proceed with marginal inference, i.e. the matrix ρ = E(P): the probability of each neuron having each label.

slide-49
SLIDE 49

Variational Inference with the Sinkhorn Permanent

  • p(P | L) = (1/perm(L)) exp(⟨P, log L⟩); the normalizing constant is the permanent of L, perm(L), a #P-hard problem (Valiant, 1979).
  • Inference of ρ = E(P) ⇐⇒ computation of perm(L):

log perm(L) = sup_{µ ∈ B} ⟨log L, µ⟩ − A*(µ),

with ρ = µ achieving the supremum over the Birkhoff polytope B, and A*(µ) the entropy dual function (Wainwright and Jordan, 2008).

  • Variational inference: replace A*(µ) ≈ ⟨µ, log µ⟩; then

log perm_S(L) = sup_{µ ∈ B} ⟨log L, µ⟩ − ⟨µ, log µ⟩,   (1)

and the ρ = µ achieving this supremum is obtained by the Sinkhorn algorithm, as sketched below.
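A minimal NumPy sketch of the Sinkhorn iteration behind Eq. (1): alternately normalize the rows and columns of exp(log L), here in log space for numerical stability, and read off the doubly stochastic ρ. The sizes and likelihood matrix are hypothetical, and `sinkhorn_marginals` / `log_perm_sinkhorn` are illustrative names, not an API from the talk:

```python
import numpy as np

def sinkhorn_marginals(log_L, n_iters=200):
    """Sinkhorn approximation to the marginal matrix rho = E(P).

    Alternate row and column normalizations of exp(log_L) (in log space)
    until the result is approximately doubly stochastic; the fixed point
    solves the entropy-regularized problem in Eq. (1).
    """
    log_mu = log_L.copy()
    for _ in range(n_iters):
        log_mu -= np.logaddexp.reduce(log_mu, axis=1, keepdims=True)  # rows
        log_mu -= np.logaddexp.reduce(log_mu, axis=0, keepdims=True)  # cols
    return np.exp(log_mu)

def log_perm_sinkhorn(log_L, rho):
    """Value of the variational objective <log L, rho> - <rho, log rho>."""
    mask = rho > 0
    return np.sum(rho * log_L) - np.sum(rho[mask] * np.log(rho[mask]))

# Toy usage: 4 detected neurons, 4 candidate labels, hypothetical likelihoods.
rng = np.random.default_rng(0)
log_L = rng.normal(size=(4, 4))
rho = sinkhorn_marginals(log_L)
print(rho.sum(axis=0), rho.sum(axis=1))  # both ≈ all-ones (doubly stochastic)
print(log_perm_sinkhorn(log_L, rho))     # approximate log perm(L)
```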

slide-53
SLIDE 53

The GUI


slide-54
SLIDE 54

Some machine learning methods arising from the above

slide-55
SLIDE 55

Sinkhorn Networks

Artificial neural networks are useful for predicting discrete data, e.g. a label. What about permutations?

Problem statement: how to decode scrambled objects X̃ into non-scrambled X (e.g. a jigsaw puzzle).

  • Data are pairs (X_i, X̃_i) where the X̃_i are constructed by permuting pieces of X_i: X_i = P⁻¹_{θ,X̃_i} X̃_i, with P_{θ,X̃} ≈ M(g(X̃, θ)), M being the (hard, non-differentiable) matching operator applied to the network output g.
  • To train, replace M with the Sinkhorn operator at temperature τ: S(g(X̃, θ)/τ) ≈ M(g(X̃, θ)), as illustrated below.
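To see why the relaxation works, here is a small sketch comparing the Sinkhorn operator S(X/τ) with the hard matching operator M(X), the latter computed with scipy's linear_sum_assignment (the matrix and temperatures are arbitrary): as τ → 0, S(X/τ) approaches M(X), while staying differentiable for τ > 0.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def sinkhorn_op(X, tau, n_iters=100):
    """Sinkhorn operator S(X/tau): exponentiate, then alternately normalize
    rows and columns to obtain a doubly stochastic matrix. (In practice this
    is done in log space for numerical stability.)"""
    M = np.exp(X / tau)
    for _ in range(n_iters):
        M /= M.sum(axis=1, keepdims=True)
        M /= M.sum(axis=0, keepdims=True)
    return M

def matching_op(X):
    """Hard matching operator M(X): the permutation maximizing <P, X>."""
    rows, cols = linear_sum_assignment(-X)  # scipy minimizes, so negate
    P = np.zeros_like(X)
    P[rows, cols] = 1.0
    return P

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 5))
for tau in (1.0, 0.1, 0.01):
    err = np.abs(sinkhorn_op(X, tau) - matching_op(X)).max()
    print(f"tau={tau}: max deviation from hard matching = {err:.3f}")
```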

slide-56
SLIDE 56

Deep Generative Models

  • One of the most exciting applications of deep learning is generating unseen data from a training dataset, e.g. GANs, VAEs.
  • In either case, a vector z is sampled from a noise distribution and passed through a neural network g_θ(z).
  • Instead: to generate an object, first generate a set of pieces p and then a permutation π of those pieces that will assemble the object o.
  • Like going to Home Center Sodimac: you get the pieces and the instruction manual.

slide-60
SLIDE 60

New Deep Generative Methodology

The usual generative modeling perspective vs. the new generative modeling perspective: π is a permutation (approximated with the Sinkhorn algorithm).

slide-62
SLIDE 62

Same number from different pieces


slide-63
SLIDE 63

Different numbers from same pieces


slide-64
SLIDE 64

Statistical modeling of COVID-19

  • Even if all the data are made public, they do not give us the complete picture. The records are 'naive'.
  • Can the true situation be reconstructed at fine spatio-temporal scales, given the records?
  • Besides being incomplete, the records are biased and delayed. The biases and delays may depend on time and space.
  • Predictions and decision-making are sensitive to these numbers.
  • Tests can become a limited resource. Can we "extrapolate" from situations where testing is better?
  • How can different types of data (cases, ICU, deaths, history, the MOVID-19 syndromic surveillance survey, tests for other diseases, mobility, internet searches) be integrated to reconstruct the true situation?
  • Here: hierarchical Bayesian models

slide-65
SLIDE 65

The data from @perez and the date of symptom onset

Sophisticated artificial intelligence methods make it possible to turn the histograms in the epidemiological situation reports into a database. Very important: when each case's symptoms started matters more than cases tabulated by their report date.

slide-66
SLIDE 66

Nowcasting ('ahorasticar')

To understand transmission, one must know how many people are getting sick at each moment. Cases appear only after diagnosis. NobBS: Nowcasting by Bayesian Smoothing (McGough et al., 2020).

Line and confidence bands based on NobBS. Warning: this model has not been evaluated/calibrated for this context.

slide-67
SLIDE 67

Some equations

  • Y_t = Σ_d Y_{t,d}, where Y_{t,d} = new cases from day t reported with a delay of d days.
  • Bayesian model: Y_{t,d} ∼ Poisson(λ_{t,d}), log(λ_{t,d}) = α_t + log(β_d).
  • To nowcast, one must infer α_t, which is given a random-walk (Brownian-motion) prior, α_t ∼ N(α_{t−1}, τ²).
  • β_d is the reporting-delay distribution, with a Dirichlet prior.

A generative sketch of this model follows.
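To make the model concrete, here is a minimal NumPy simulation of its generative side (all sizes and hyperparameters are hypothetical). It builds the reporting triangle Y_{t,d} and shows why recent days are undercounted, which is exactly the gap nowcasting fills:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes and hyperparameters, for illustration only.
T, D = 60, 10          # days and maximum reporting delay
tau = 0.1              # random-walk scale for alpha_t
beta = rng.dirichlet(np.ones(D))                 # reporting-delay distribution
alpha = np.cumsum(rng.normal(0, tau, T)) + 3.0   # alpha_t ~ N(alpha_{t-1}, tau^2)

# Reporting triangle: Y[t, d] ~ Poisson(lambda), log lambda = alpha_t + log beta_d.
lam = np.exp(alpha[:, None] + np.log(beta)[None, :])
Y = rng.poisson(lam)

# "Now" we only observe entries reported before day T (t + d < T): recent days
# are incomplete, and nowcasting must infer alpha_t for them.
observed = np.where(np.add.outer(np.arange(T), np.arange(D)) < T, Y, 0)
print("true totals (last 5 days):     ", Y.sum(axis=1)[-5:])
print("observed so far (last 5 days): ", observed.sum(axis=1)[-5:])
```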

slide-68
SLIDE 68

Spatial processes may be key

Controlling the epidemic may require understanding changes in finer-scale dynamics.

slide-69
SLIDE 69

Can this be done at a finer spatial scale?

Warning: Santiago is not Chile.

slide-70
SLIDE 70

Can this be done at a finer spatial scale?

In some municipalities (comunas), cases seem to be reported later.

slide-71
SLIDE 71

The dependence of reporting delays on other variables

R ≈ −0.45

  • A (possibly spurious) negative correlation between poverty and timely case confirmation.
  • Hypothesis: differential delays in responding could be evidence of (a proxy for) underreporting (Stoner et al., JASA 2019); if results take a long time, it may mean that other people are not accessing tests.

slide-72
SLIDE 72

The reality of testing I

Test availability is also a function of space.

slide-73
SLIDE 73

The reality of testing II

Positivity rates could also indicate problems.

slide-74
SLIDE 74

Some equations II

  • How to include underreporting in space-time?
  • Y_{t,s} = Σ_d Y_{t,s,d}, where Y_{t,s,d} = new cases from day t reported with a delay of d days, at location s (municipality, region, etc.). In reality there are Z_{t,s,d} cases, each reported with probability π_{s,t}.
  • Bayesian model: Y_{t,s,d} | Z_{t,s,d} ∼ Binomial(Z_{t,s,d}, π_{s,t}), Z_{t,s,d} ∼ Poisson(λ_{t,s,d}), log(λ_{t,s,d}) = α_{t,s} + log(β_{s,d}).
  • π_{s,t} ∼ f(x_{s,t}) represents the probability of reporting and depends on other regressors (delay in responding, positivity rate, etc.): "borrow strength" from where testing is better.
  • To nowcast, one must infer α_t = (α_{t,s})_s, which is given a random-walk (Brownian-motion) prior, α_t ∼ N(α_{t−1}, Σ_s), where Σ_s represents the spatial correlation structure.

A generative sketch extending the previous one with binomial thinning follows.
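A minimal extension of the earlier simulation, adding locations and the Binomial thinning Y | Z ∼ Binomial(Z, π). Sizes, π values, and the independent per-location random walks are illustrative assumptions (a spatial Σ_s would couple the walks across locations):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical sizes: T days, S locations, D delays (illustration only).
T, S, D = 60, 5, 10
tau = 0.1
beta = rng.dirichlet(np.ones(D), size=S)                      # per-location delays
alpha = np.cumsum(rng.normal(0, tau, (T, S)), axis=0) + 3.0   # independent walks

# Latent true counts Z and thinned observed counts Y.
lam = np.exp(alpha[:, :, None] + np.log(beta)[None, :, :])
Z = rng.poisson(lam)                           # Z_{t,s,d} ~ Poisson(lambda_{t,s,d})
pi = rng.uniform(0.4, 0.9, size=S)             # stand-in for pi_{s,t} = f(x_{s,t})
Y = rng.binomial(Z, pi[None, :, None])         # Y | Z ~ Binomial(Z, pi)

print("true cases per location:    ", Z.sum(axis=(0, 2)))
print("reported cases per location:", Y.sum(axis=(0, 2)))
```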

slide-75
SLIDE 75

Beyond this model

  • How to add other types of information: deaths, excess deaths, syndromic surveillance, MOVID-19, epidemiological surveillance of other viruses, internet searches, etc.
  • A more complex model, or a meta-analysis of several small models? (boosting)

slide-76
SLIDE 76

Final words

  • Thanks to Liam Paninski, EJ Chichilnisky, Sasi Madugula, Jasper Snoek, Jonathan Niles-Weed, Nishal Shah, Eviatar Yemini, Erdem Varol, Amin Nejatbakhsh.
  • Thanks to Pamela Martínez, Joaquín Fontbona, Alejandro Maass, Mauricio Santillana, Fred Lu, Oliver Stoner, Pablo Martínez, Orlando Rivera, Phyllis Ju, Jorge Pérez, Cristóbal Cuadrado, Gonzalo Contador, the Ministry of Science GitHub team, etc.
  • It is possible to help!