slide-1
SLIDE 1

Outline: SUZero MML Talk

  • Interspeech talk (for Ewald)
  • Explain one technique in a bit more detail
  • The experience of a coding sprint
slide-2
SLIDE 2

Unsupervised acoustic unit discovery for speech synthesis using discrete latent-variable neural networks

Interspeech 2019, Graz, Austria

Ryan Eloff, André Nortje, Benjamin van Niekerk, Avashna Govender, Leanne Nortje, Arnu Pretorius, Elan van Biljon, Ewald van der Westhuizen, Lisa van Staden, Herman Kamper

Stellenbosch University, South Africa & University of Edinburgh, UK

https://github.com/kamperh/suzerospeech2019

slide-6
SLIDE 6

Advances in speech recognition

  • Addiction to text: 2000 hours of transcribed speech audio; ∼350M/560M words of text [Xiong et al., TASLP’17]
  • Sometimes not possible, e.g., for unwritten languages
  • Very different from the way human infants learn language

1 / 35

slide-7
SLIDE 7

Zero-Resource Speech Challenges (ZRSC)

2 / 35


slide-10
SLIDE 10

ZRSC 2019: Text-to-speech without text

[Figure: speech → acoustic model → discrete symbol sequence (e.g. 7 11 26 31) → waveform generator, conditioned on a target voice → synthesised speech: “the dog ate the ball”]

3 / 35

slide-14
SLIDE 14

What do we get for training?

No labels :)

Figure adapted from: http://zerospeech.com/2019

4 / 35

slide-15
SLIDE 15

Approach: Compress, decode and synthesise

[Figure: MFCCs x_{1:T} → encoder → h_{1:N} → discretise → z_{1:N} → decoder + speaker-ID embedding → filterbanks ŷ_{1:T} → FFTNet vocoder → waveform; the encoder and discretisation form the compression model, the decoder and vocoder the symbol-to-speech module]

5 / 35

slide-16
SLIDE 16

Approach: Compress, decode and synthesise

[Figure: same pipeline, with the decoder conditioned on an embedding of the training speaker]

5 / 35

slide-17
SLIDE 17

Approach: Compress, decode and synthesise

[Figure: same pipeline, with the decoder conditioned on an embedding of the target speaker]

5 / 35


slide-20
SLIDE 20

Discretisation methods

  • Straight-through estimation (STE) binarisation: threshold each element, z_k = 1 if h_k ≥ 0, otherwise z_k = −1
  • Categorical variational autoencoder (CatVAE): Gumbel-softmax, z_k = e^{(h_k + g_k)/τ} / Σ_{j=1}^{K} e^{(h_j + g_j)/τ}
  • Vector-quantised variational autoencoder (VQ-VAE): choose the closest codebook embedding e

[Figure: the example vector h = (0.9, −0.1, 0.3, 0.7, −0.8) discretised by each method]

6 / 35
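The three discretisation bottlenecks can be sketched in a few lines of numpy. This is an illustrative toy using the slide's example vector, not the team's implementation: the codebook here holds scalars for readability, whereas a real VQ-VAE quantises whole frame vectors.

```python
import numpy as np

rng = np.random.default_rng(0)
h = np.array([0.9, -0.1, 0.3, 0.7, -0.8])  # encoder output h from the slide's example

# 1. STE binarisation: hard threshold each element
z_ste = np.where(h >= 0, 1.0, -1.0)

# 2. CatVAE: Gumbel-softmax over K = 5 symbols with temperature tau
tau = 1.0
g = rng.gumbel(size=h.shape)        # Gumbel(0, 1) noise g_k
z_cat = np.exp((h + g) / tau)
z_cat /= z_cat.sum()                # soft, approximately one-hot vector

# 3. VQ-VAE: replace each element with its closest codebook entry
#    (toy scalar codebook; real VQ-VAE quantises whole vectors)
codebook = np.array([0.8, -0.2, 0.3, 0.5, -0.6])
z_vq = codebook[np.argmin(np.abs(h[:, None] - codebook[None, :]), axis=1)]

print(z_ste)  # [ 1. -1.  1.  1. -1.]
print(z_vq)   # [ 0.8 -0.2  0.3  0.8 -0.6]
```

Only the forward passes are shown; each method needs its own trick (STE, Gumbel relaxation, or codebook-loss terms) to get gradients through the discrete step during training.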

slide-21
SLIDE 21

Neural network architectures

  • Encoder: Convolutional layers, each layer with a stride of 2
  • Decoder: Transposed convolutions mirroring encoder
  • Waveform generation: FFTNet autoregressive vocoder
  • Also experimented with WaveNet: Sometimes gave noisy output
  • Bitrate: Set by number of symbols K and number of striding layers

7 / 35
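The bitrate bullet can be made concrete. Assuming input features at 100 frames per second (an assumption, not stated on the slide), an encoder that downsamples by a factor d emits one of K symbols every d frames, giving an upper bound of (100/d)·log₂K bits per second; the official challenge bitrate uses the empirical symbol entropy, which is why the reported numbers are lower.

```python
import math

def max_bitrate_bps(K, downsample, frames_per_sec=100):
    """Upper bound on bitrate: symbols per second times bits per symbol (log2 K)."""
    return (frames_per_sec / downsample) * math.log2(K)

# Hypothetical settings mirroring the -x4 / -x8 naming
print(max_bitrate_bps(K=512, downsample=4))  # 225.0
print(max_bitrate_bps(K=512, downsample=8))  # 112.5
```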

slide-24
SLIDE 24

Evaluation

Human evaluation metrics:

  • Mean opinion score (MOS)
  • Character error rate (CER)
  • Similarity to the target speaker’s voice

Objective evaluation metrics:

  • ABX discrimination
  • Bitrate

Two evaluation languages:

  • English: Used for development
  • Indonesian: Held out “surprise language”

8 / 35
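ABX discrimination can be illustrated with a toy: given triplets where A and X come from the same category and B from another, the error is the fraction of triplets where X is not closer to A than to B. (The real ZRSC evaluation computes DTW distances between frame sequences of minimal-pair triphones; this sketch just uses Euclidean distance on made-up vectors.)

```python
import numpy as np

def abx_error(triplets, dist=lambda u, v: np.linalg.norm(u - v)):
    """Fraction of (A, B, X) triplets where X, which shares A's
    category, is NOT closer to A than to B."""
    wrong = sum(dist(x, a) >= dist(x, b) for a, b, x in triplets)
    return wrong / len(triplets)

# Toy 2-D "features": A and X are from one category, B from another
triplets = [
    (np.array([0.0, 1.0]), np.array([5.0, 5.0]), np.array([0.2, 1.1])),
    (np.array([4.9, 5.2]), np.array([0.1, 0.9]), np.array([5.0, 5.0])),
]
print(abx_error(triplets))  # 0.0 — both triplets discriminated correctly
```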

slide-25
SLIDE 25

ABX on English with speaker conditioning

[Bar chart: ABX error (%) for STE, VQ-VAE and CatVAE, without and with speaker conditioning]

9 / 35

slide-29
SLIDE 29

ABX on English for different compression rates

[Bar chart: ABX error (%) for STE, VQ-VAE and CatVAE with symbol-set sizes 64, 256 and 512, at no downsampling, ×4 and ×8 downsampling; bars annotated with the corresponding bitrates]

10 / 35

slide-30
SLIDE 30

Official evaluation results

Model          CER (%)   MOS [1,5]   Similarity [1,5]   ABX (%)   Bitrate

English:
DPGMM-Merlin      75        2.50           2.97            35.6        72
VQ-VAE-x8         75        2.31           2.49            25.1        88
VQ-VAE-x4         67        2.18           2.51            23.0       173
Supervised        44        2.77           2.99            29.9        38

Indonesian:
DPGMM-Merlin      62        2.07           3.41            27.5        75
VQ-VAE-x8         58        1.94           1.95            17.6        69
VQ-VAE-x4         60        1.96           1.76            14.5       140
Supervised        28        3.92           3.95            16.1        35

11 / 35

slide-31
SLIDE 31

Synthesised examples

[Table of audio examples: for English and Indonesian, inputs, synthesised outputs and target-speaker recordings for VQ-VAE-x4 and VQ-VAE-x4-new; audio playable in the original slides]

12 / 35

slide-32
SLIDE 32

Conclusions

  • Speaker conditioning consistently improves performance
  • Different discretisation methods are similar (VQ-VAE slightly better)
  • Different models difficult to compare because of bitrate
  • Future: Does discretisation actually benefit feature learning?

13 / 35

slide-33
SLIDE 33

https://github.com/kamperh/suzerospeech2019

slide-34
SLIDE 34

https://github.com/kamperh/suzerospeech2019 (Update coming soon)

slide-35
SLIDE 35

Straight-through estimation (STE) binarisation

  • STE binarisation: z_k = 1 if h_k ≥ 0, otherwise z_k = −1
  • For backpropagation we need ∂J/∂h
  • For a single element: ∂J/∂h_k = (∂z_k/∂h_k)(∂J/∂z_k)
  • What is ∂z_k/∂h_k with z_k = threshold(h_k)? Cannot solve directly: the threshold’s derivative is zero almost everywhere
  • Idea: If z_k ≈ h_k then we could use ∂J/∂h_k ≈ ∂J/∂z_k

[Figure: h = (0.9, −0.1, 0.3, 0.7, −0.8) thresholded element-wise to z = (1, −1, 1, 1, −1), highlighting h_4 → z_4]

15 / 35

slide-36
SLIDE 36

Straight-through estimation (STE) binarisation

As an example, let us say h_k = 0.7:

[Figure: number line from −1 to 1 with h_k = 0.7 marked]

16 / 35

slide-37
SLIDE 37

Straight-through estimation (STE) binarisation

Instead of direct thresholding, let us set z_k = 1 with probability 0.85 and z_k = −1 with probability 0.15:

[Figure: number line from −1 to 1 with sampled values of z_k]

Estimated mean of z_k over 500 samples: 0.668

17 / 35
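The 500-sample experiment is easy to reproduce. A quick sketch (the slide's 0.668 came from its own random draw, so any rerun gives a slightly different estimate near 0.7):

```python
import numpy as np

rng = np.random.default_rng(1)
h_k = 0.7
p = (1 + h_k) / 2                      # P(z_k = 1) = 0.85
samples = rng.choice([1.0, -1.0], size=500, p=[p, 1 - p])
print(samples.mean())                  # ≈ 0.7; the slide's draw gave 0.668
```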

slide-38
SLIDE 38

Straight-through estimation (STE) binarisation

  • So, instead of direct thresholding, we set z_k = h_k + ε, where ε is sampled noise:
      ε = 1 − h_k with probability (1 + h_k)/2, or ε = −h_k − 1 with probability (1 − h_k)/2
  • Since ε is zero-mean, the derivative of the expected value of z_k is ∂E[z_k]/∂h_k = 1
  • Therefore, gradients are passed unchanged through the thresholding operation: ∂J/∂h ≈ ∂J/∂z

18 / 35
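A minimal numpy sketch of the whole trick (illustrative, not the team's code): sample z stochastically in the forward pass, and copy the gradient straight through in the backward pass. Averaging many forward samples recovers h, confirming that the noise ε is zero-mean.

```python
import numpy as np

def ste_binarise(h, rng):
    """Forward pass: z = h + eps, i.e. z = 1 with prob (1 + h)/2, else -1."""
    return np.where(rng.random(h.shape) < (1 + h) / 2, 1.0, -1.0)

def ste_backward(grad_wrt_z):
    """Backward pass: pass the gradient through unchanged (dJ/dh ~ dJ/dz)."""
    return grad_wrt_z

rng = np.random.default_rng(0)
h = np.array([0.9, -0.1, 0.3, 0.7, -0.8])

# Because eps is zero-mean, the sample mean of z converges to h
z_mean = np.mean([ste_binarise(h, rng) for _ in range(20000)], axis=0)
print(np.round(z_mean, 2))
```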

slide-39
SLIDE 39

Outcome of ZRSC 2019

19 / 35

slide-40
SLIDE 40

Coding sprint:

Stellenbosch University ZeroSpeech (SUZero) Team

slide-41
SLIDE 41

Why do we have ten authors on this paper?

Ryan Eloff, André Nortje, Benjamin van Niekerk, Avashna Govender, Leanne Nortje, Arnu Pretorius, Elan van Biljon, Ewald van der Westhuizen, Lisa van Staden, Herman Kamper

21 / 35

slide-47
SLIDE 47

Planned structure

  • Original idea: Arnu had a sprint for some other work
  • Duration: Two weeks (probably longer, but then you can leave)
  • Two teams, with tech support (Elan) and someone crying (Herman)
  • Compression team: Arnu, Ryan, André, Leanne
  • Synthesis team: Ewald, Benji, Lisa, Avashna
  • Teams would work in parallel

22 / 35

slide-48
SLIDE 48

Planned structure

  • Herman talks to team leaders every day
  • Daily stand-ups within each of the teams
  • Slack: All communication
  • Trello: Track tasks with boards (backlog, in-progress, done)
  • Bitbucket: Version control using git, pull requests need to be reviewed

23 / 35

slide-60
SLIDE 60

Promises beforehand: What will you get from this?

  • You can leave after two weeks (check with your supervisors)
  • Have fun
  • Learn something about speech!
  • Learn some software engineering skills . . . maybe
  • Do something in a group
  • Learn where the DSP and MediaLabs are
  • Maybe a paper . . . probably . . . almost certainly
  • Worst case: Pizza and beer

24 / 35

slide-70
SLIDE 70

What actually happened

  • Two teams ∼
  • Herman talks to team leaders every day
  • Daily stand-ups within each of the teams
  • Slack: All communication :(
  • Some people didn’t respond; different time zones complicated things

27 / 35

slide-85
SLIDE 85

What actually happened

  • You can leave after two weeks
  • But for Ryan, André, Benji . . . almost two months, up to day of submission deadline :(

  • Have fun ∼
  • Learn something about speech!
  • Learn some software engineering skills . . . maybe
  • Do something in a group
  • Learn where the DSP and MediaLabs are :(
  • Maybe a paper . . . probably . . . almost certainly
  • Pizza and beer

31 / 35

slide-93
SLIDE 93

What we learned: Things we did that worked

  • Planning beforehand (Herman did some prototyping)
  • Role assignment beforehand
  • Make expectations clear upfront (e.g. authors on paper and order)
  • Using team leaders to deal with big team
  • Flexible in restructuring things on the fly (based on listening to recommendations from team)

  • Pizza and beer (Gino’s delivers)

32 / 35

slide-98
SLIDE 98

What we learned: Things we did that didn’t work

  • Some roles weren’t clear enough (especially for first-year masters students)
  • Some people had other stuff going on in the first two weeks
  • We focussed on intermediate evaluations which turned out not to be that important in the end
  • Don’t do this in the first two weeks of Systems and Signals 414 lectures

33 / 35

slide-103
SLIDE 103

What we learned: What we would do differently

  • Smaller team: Can’t have stand-ups with 10 people; maybe apply the one-pizza rule
  • Every team member should have a specific purpose
  • Locations: If possible, have everyone in one central place (a lab, not a small room)
  • Get through the pipeline faster: Idea, model, implement, evaluate

34 / 35

slide-110
SLIDE 110

Conclusions about sprint

  • Frustrating but fun at the same time
  • (New) students got to know each other

Low-resource speech and language (LSL) and Lego group

  • Everyone learned something
  • Pizza and beer

35 / 35