Advanced Deep Learning for Computer Vision



SLIDE 1
  • Prof. Leal-Taixé and Prof. Niessner

Advanced Deep Learning for Computer Vision

SLIDE 2

The Team

Lecturers

  • Prof. Dr. Laura Leal-Taixé
  • Prof. Dr. Matthias Niessner

Tutors

  • Tim Meinhardt
  • Ji Hou
  • Maxim Maximov
  • Dave Chen

SLIDE 3

What is this course about

  • Presentation of advanced Deep Learning methods for various Computer Vision tasks
  • Focus on new methods, some of them presented only this year! There will be extra references and many opportunities for you to dig deeper into the topics
  • Research-oriented course

SLIDE 4

While we go over new methods…

  • You have to come up with your own ideas to solve a specific vision problem!
  • Strong focus on the practical side: a semester-long project where you can put all the knowledge into practice

SLIDE 5

Course organization

SLIDE 6

About the lecture

  • Theory: 12 lectures
    – Every Monday, 10:00-11:30h
    – Seminar Room, 02.13.010
  • Practical:
    – Project to be done in groups of 2 (non-negotiable!)
    – Presentations during the semester
    – Wednesdays, 14:00-15:30h (Seminar Room, 02.09.023)
    – Final poster presentation

https://dvl.in.tum.de/teaching/adl4cv-ws19/

SLIDE 7

Grading system

  • Exam: 27th February, 13:30-14:30
  • Review: 2 review sessions
  • Practical part = 2/3 of the grade
  • Exam = 1/3 of the grade

https://dvl.in.tum.de/teaching/adl4cv-ws19/

SLIDE 8

Project deadline

  • 21.10., today: project presentation
  • 23.10.: project assignments (projects <-> TAs)
  • 30.10., midnight: deliver a 1-page abstract of your idea for the project
  • Until 6.11.: evaluation of the project and feedback

SLIDE 9

Project evaluation

  • Presentations: everyone needs to attend!
  • First presentation: first results, challenges
    – 04.12.: Groups #1
    – 11.12.: Groups #2

SLIDE 10

Project evaluation

  • Presentations: everyone needs to attend!
  • Second presentation: almost final results, new things you tried
    – 08.01.: Groups #1
    – 15.01.: Groups #2

SLIDE 11

Project evaluation

  • Presentations: everyone needs to attend!
  • 04.02.: final deadline on report (deadline noon)
    – Max 4 pages using CVPR template
  • Final presentation = POSTER
    – Date: 05.02., 13:00-16:00

SLIDE 12

Grading system

  • Exam = 1/3 of the grade
  • Practical part = 2/3 of the grade
    – Presentations (2 oral pres. + 1 poster) = 1/3
    – Final report = 1/3
    – Code/submission = 1/3
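The weighting above is a plain weighted average. A minimal sketch (illustrative only; the example grade values and the 1.0-5.0 scale are assumptions, the slides only fix the weights):

```python
def final_grade(exam, presentations, report, code):
    """Combine partial grades with the weights from the slides:
    the exam counts 1/3; the practical part counts 2/3 and is itself
    split evenly into presentations, final report, and code/submission."""
    practical = (presentations + report + code) / 3
    return exam * (1 / 3) + practical * (2 / 3)

# Hypothetical example on the German 1.0 (best) to 5.0 scale:
print(round(final_grade(2.0, 1.3, 1.7, 1.0), 2))  # → 1.56
```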

SLIDE 13

Following up with the projects

  • Each project will be assigned to a TA and you will have weekly office hours to discuss the progress
  • These will be announced after the projects are approved

SLIDE 14

Slides

  • Moodle is set up! The lecture will NOT be recorded.
  • Slides will be posted on Moodle and on the website: https://dvl.in.tum.de/teaching/adl4cv-ws19/
  • Questions regarding organization of the course: adl4cv@dvl.in.tum.de
    – Emails to our individual addresses will not be answered!

SLIDE 15

Teams

  • Teams of two per project!
  • Moodle is set up!
  • If you do not have a team:
    – Chat after the lecture
    – Post it on Moodle

SLIDE 16

Project Ideas / Directions

SLIDE 17

3D Scene Understanding

Ji Hou

SLIDE 18

Project Directions

  • 3D Detection/Segmentation/Instance/Completion on various 3D data

SLIDE 19

Project Directions

  • 3D Detection on a Single RGB-D Image
    – Song, Shuran, and Jianxiong Xiao. "Deep sliding shapes for amodal 3d object detection in rgb-d images." Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2016.
    – Qi, Charles R., et al. "Frustum pointnets for 3d object detection from rgb-d data." Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2018.
    – Qi, Charles R., et al. "Deep Hough Voting for 3D Object Detection in Point Clouds." arXiv preprint arXiv:1904.09664 (2019).
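Not on the slides, but useful context for these projects: 3D detectors are usually scored by intersection-over-union (IoU) between predicted and ground-truth boxes. A minimal sketch for the axis-aligned case (the `(xmin, ymin, zmin, xmax, ymax, zmax)` box format is an assumption for illustration; the cited methods actually predict oriented boxes):

```python
def iou_3d(a, b):
    """Intersection-over-union of two axis-aligned 3D boxes,
    each given as (xmin, ymin, zmin, xmax, ymax, zmax)."""
    inter = 1.0
    for i in range(3):
        # Overlap along each axis, clamped at zero for disjoint boxes.
        inter *= max(0.0, min(a[i + 3], b[i + 3]) - max(a[i], b[i]))

    def volume(box):
        return (box[3] - box[0]) * (box[4] - box[1]) * (box[5] - box[2])

    union = volume(a) + volume(b) - inter
    return inter / union if union > 0 else 0.0

# Two unit cubes overlapping over half their extent along x:
print(iou_3d((0, 0, 0, 1, 1, 1), (0.5, 0, 0, 1.5, 1, 1)))  # ≈ 0.333
```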

SLIDE 20

Project Directions

  • Lifting 2D detection to 3D
    – Srivastava, Siddharth, Frederic Jurie, and Gaurav Sharma. "Learning 2D to 3D Lifting for Object Detection in 3D for Autonomous Vehicles." arXiv preprint arXiv:1904.08494 (2019).
    – Kulkarni, Nilesh, et al. "3D-RelNet: Joint Object and Relational Network for 3D Prediction." arXiv preprint arXiv:1906.02729 (2019).
    – http://www.cvlibs.net/datasets/kitti/

SLIDE 21

Project Directions

  • Instance Segmentation/Completion on 3D reconstruction
    – Hou, Ji, Angela Dai, and Matthias Nießner. "3D-SIS: 3d semantic instance segmentation of rgb-d scans." Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2019.
    – Hou, Ji, Angela Dai, and Matthias Nießner. "3D-SIC: 3D Semantic Instance Completion for RGB-D Scans." arXiv preprint arXiv:1904.12012 (2019).

SLIDE 22

Project Directions

  • 3D Detection on Multi-Views
    – Chen, Xiaozhi, et al. "Multi-view 3d object detection network for autonomous driving." Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2017.
    – Single View + Merging

SLIDE 23

Project Directions

  • How to combine geometry and color (and radar)
    – Dai, Angela, and Matthias Nießner. "3DMV: Joint 3d-multi-view prediction for 3d semantic scene segmentation." Proceedings of the European Conference on Computer Vision (ECCV). 2018.
    – Jaritz, Maximilian, Jiayuan Gu, and Hao Su. "Multi-view PointNet for 3D Scene Understanding." arXiv preprint arXiv:1909.13603 (2019).

SLIDE 24

Project Directions

  • 3D Reconstruction from RGB image(s)
    – Choy, Christopher B., et al. "3D-R2N2: A unified approach for single and multi-view 3d object reconstruction." European Conference on Computer Vision. Springer, Cham, 2016.
    – Fan, Haoqiang, Hao Su, and Leonidas J. Guibas. "A point set generation network for 3d object reconstruction from a single image." Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2017.

SLIDE 25

3D vision and NLP

Dave Z. Chen

SLIDE 26

Project Directions

  • 3D Cross-modal Retrieval: Bridging the Gap between 3D Objects and Natural Language Descriptions
    – Chen et al. "Text2Shape: Generating Shapes from Natural Language by Learning Joint Embeddings." arXiv preprint. 2018.
    – Han et al. "Y2Seq2Seq: Cross-Modal Representation Learning for 3D Shape and Text by Joint Reconstruction and Prediction of View and Word Sequences." The AAAI Conference on Artificial Intelligence. 2018.
    – Tutor: Dave Z. Chen
    – Contact: zhenyu.chen@tum.de

SLIDE 27

Project Directions

  • Automatic Description Generation for 3D CAD models
    – Xu et al. "Show, Attend and Tell: Neural Image Caption Generation with Visual Attention." Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2016.
    – Lu et al. "Knowing When to Look: Adaptive Attention via A Visual Sentinel for Image Captioning." Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2017.
    – Tutor: Dave Z. Chen
    – Contact: zhenyu.chen@tum.de

SLIDE 28

Project Directions

  • Scan2Cap: Generating descriptions for objects in 3D scenes
    – Xu et al. "Show, Attend and Tell: Neural Image Caption Generation with Visual Attention." Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2016.
    – Lu et al. "Knowing When to Look: Adaptive Attention via A Visual Sentinel for Image Captioning." Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2017.
    – Tutor: Dave Z. Chen
    – Contact: zhenyu.chen@tum.de

SLIDE 29

Project Directions

  • Object Localization in 3D scenes using Natural Language
    – Hu et al. "Natural Language Object Retrieval." Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2016.
    – Hu et al. "Segmentation from Natural Language Expressions." Proceedings of the European Conference on Computer Vision. 2016.
    – Tutor: Dave Z. Chen
    – Contact: zhenyu.chen@tum.de

SLIDE 30

Project Directions

  • Grounding referring expressions in 3D scenes with multimodal data
    – Hu et al. "Natural Language Object Retrieval." Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2016.
    – Dai, Angela, and Matthias Nießner. "3DMV: Joint 3d-multi-view prediction for 3d semantic scene segmentation." Proceedings of the European Conference on Computer Vision. 2018.
    – Tutor: Dave Z. Chen
    – Contact: zhenyu.chen@tum.de

SLIDE 31

Segmentation and tracking

Tim Meinhardt

SLIDE 32

Project Directions

  • Video object segmentation (single/multiple objects)
SLIDE 33

Project Directions

  • Video object segmentation (single/multiple objects)
    – Bringing OSVOS to real-world pedestrian tracking scenarios:
      • One-Shot Video Object Segmentation. S. Caelles, K.-K. Maninis, J. Pont-Tuset, L. Leal-Taixe, D. Cremers, and L. Van Gool. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017.
      • Video Object Segmentation Without Temporal Information. K.-K. Maninis, S. Caelles, Y. Chen, J. Pont-Tuset, L. Leal-Taixe, D. Cremers, and L. Van Gool. Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2017.
    – Related work:
      • OnAVOS: Online Adaptation of Convolutional Neural Networks for Video Object Segmentation. P. Voigtlaender, B. Leibe, BMVC 2017.
    – Datasets: MOTS: Multi-Object Tracking and Segmentation. Paul Voigtlaender, Michael Krause, Aljoša Ošep, Jonathon Luiten, Berin Balachandar Gnana Sekar, Andreas Geiger, Bastian Leibe. CVPR 2019.
    – Tutor: Tim Meinhardt
    – Contact: tim.meinhardt@tum.de
SLIDE 34

Project Directions

  • Video object segmentation (single/multiple objects)
    – Enhancing OSVOS for multi-object segmentation:
    – Related work:
      • One-Shot Video Object Segmentation. S. Caelles, K.-K. Maninis, J. Pont-Tuset, L. Leal-Taixe, D. Cremers, and L. Van Gool. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017.
      • Video Object Segmentation Without Temporal Information. K.-K. Maninis, S. Caelles, Y. Chen, J. Pont-Tuset, L. Leal-Taixe, D. Cremers, and L. Van Gool. Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2017.
      • OnAVOS: Online Adaptation of Convolutional Neural Networks for Video Object Segmentation. P. Voigtlaender, B. Leibe, BMVC 2017.
      • CINM: CNN in MRF: Video Object Segmentation via Inference in a CNN-Based Higher-Order Spatio-Temporal MRF. L. Bao, B. Wu, W. Liu, CVPR 2018.
    – Tutor: Tim Meinhardt
    – Contact: tim.meinhardt@tum.de
SLIDE 35

Project Directions

  • Multiple object tracking in real-world scenarios
SLIDE 36

Project Directions

  • Multiple object tracking in real-world scenarios
    – Meta-learning for:
      • Tracking without bells and whistles. Philipp Bergmann, Tim Meinhardt, and Laura Leal-Taixe. IEEE International Conference on Computer Vision (ICCV), 2019.
    – Related work:
      • Collaborative Deep Reinforcement Learning for Multi-Object Tracking. Liangliang Ren, Jiwen Lu, Zifeng Wang, Qi Tian, Jie Zhou. ECCV 2018.
    – Tutor: Tim Meinhardt
    – Contact: tim.meinhardt@tum.de
SLIDE 37

Project Directions

  • Multiple object tracking in real-world scenarios
    – Building an appearance model for:
      • Tracking without bells and whistles. Philipp Bergmann, Tim Meinhardt, and Laura Leal-Taixe. IEEE International Conference on Computer Vision (ICCV), 2019.
    – Tutor: Tim Meinhardt
    – Contact: tim.meinhardt@tum.de
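As background for the tracking projects: the step most trackers share is associating detections across frames. A greedy sketch that matches boxes by 2D overlap (the `(x1, y1, x2, y2)` box format and the 0.5 threshold are illustrative choices, not the method of the cited paper):

```python
def iou(a, b):
    """2D IoU of boxes given as (x1, y1, x2, y2)."""
    w = max(0, min(a[2], b[2]) - max(a[0], b[0]))
    h = max(0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = w * h
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def associate(tracks, detections, threshold=0.5):
    """Greedily match each track to its highest-IoU unused detection.
    Returns {track_index: detection_index}; unmatched tracks/detections
    would end or start tracks in a full tracker."""
    matches, used = {}, set()
    for t, tbox in enumerate(tracks):
        best, best_iou = None, threshold
        for d, dbox in enumerate(detections):
            if d not in used and iou(tbox, dbox) > best_iou:
                best, best_iou = d, iou(tbox, dbox)
        if best is not None:
            matches[t] = best
            used.add(best)
    return matches

tracks = [(0, 0, 10, 10), (20, 20, 30, 30)]
detections = [(21, 21, 31, 31), (1, 0, 11, 10)]
print(associate(tracks, detections))  # → {0: 1, 1: 0}
```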
SLIDE 38

Image Post-Processing, Rendering and Interpretability

Maxim Maximov

SLIDE 39

Project Directions

  • Topic: Stereo Matching
  • Problem: Matching blurry images
  • Main Points:
    – How partially blurry images can be matched with sharp ones (for different tasks)
    – Estimate between 2 images: a disparity map OR camera localization OR some other metric
  • Related Work\Info:
    – "The Unreasonable Effectiveness of Deep Features as a Perceptual Metric"
    – "Efficient Deep Learning for Stereo Matching"
    – "Cascade Residual Learning: A Two-stage Convolutional Neural Network for Stereo Matching"
    – "DeMoN: Depth and Motion Network for Learning Monocular Stereo"
    – "Learning Monocular Depth by Distilling Cross-domain Stereo Networks"
    – Other works for stereo matching
  • Tutor: Maxim (maxim.maximov@tum.de)
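For intuition on the disparity estimation these projects build on: classical stereo matching compares patches along the same scanline and picks the shift with the lowest matching cost. A tiny 1D block-matching sketch (the toy data, window size, and out-of-bounds penalty are illustrative assumptions, not from the slides):

```python
def disparity_1d(left, right, window=1, max_disp=4):
    """For each pixel in the left scanline, find the horizontal shift
    (disparity) that minimizes the sum of absolute differences (SAD)
    against the right scanline. Deep stereo networks learn this
    matching cost instead of hand-coding it."""
    disparities = []
    for x in range(len(left)):
        best_d, best_cost = 0, float("inf")
        for d in range(max_disp + 1):
            # Compare a small window around x in left with x - d in right.
            cost = 0
            for dx in range(-window, window + 1):
                xl, xr = x + dx, x - d + dx
                if 0 <= xl < len(left) and 0 <= xr < len(right):
                    cost += abs(left[xl] - right[xr])
                else:
                    cost += 255  # penalize out-of-bounds comparisons
            if cost < best_cost:
                best_d, best_cost = d, cost
        disparities.append(best_d)
    return disparities

# A bright feature at index 4 in the left scanline appears at index 2
# in the right scanline, i.e. disparity 2:
left = [0, 0, 0, 0, 200, 0, 0, 0]
right = [0, 0, 200, 0, 0, 0, 0, 0]
print(disparity_1d(left, right)[4])  # → 2
```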

SLIDE 40

Project Directions

  • Post-processing methods: de-blurring, stabilization, stylization, etc.

SLIDE 41

Project Directions

  • Topic: Image Processing
  • Problem: Video Stabilization
  • Main Points:
    – Temporally coherent & sharp
    – Only from videos
  • Related Work\Info:
    – "Burst Image Deblurring Using Permutation Invariant Convolutional Neural Networks"
    – Google Approach
    – "Deep Online Video Stabilization"
    – "Deep Multi-scale Convolutional Neural Network for Dynamic Scene Deblurring"
    – "DeblurGAN: Blind Motion Deblurring Using Conditional Adversarial Networks"
    – Other motion deblurring papers
  • Tutor: Maxim (maxim.maximov@tum.de)

SLIDE 42
Project Directions

  • Use CNN to render 2D Images (Video)
  • Estimate components of a rendered image

SLIDE 43

Project Directions

  • Topic: Neural Rendering
  • Problem: Image Rendering from intermediate renders
  • Main Points:
    – "Render" an RGB image based on masks, normals, depth, RGB, etc.
    – Make it realistic (appearance, shadow)
    – Different options (regular approach, focus on light\shadows, GAN, video)
  • Related Work\Info:
    – "Geometric Image Synthesis"
    – "Photographic Image Synthesis with Cascaded Refinement Networks"
    – "IGNOR: Image-guided Neural Object Rendering"
    – "NVS Machines: Learning Novel View Synthesis with Fine-grained View Control"
    – Other image synthesis papers
  • Tutor: Maxim (maxim.maximov@tum.de)

SLIDE 44

Project Directions

  • Topic: Representation and 3D Scene Understanding
  • Problem: Representation of a scene reconstruction network
  • Main Points:
    – How to fuse representations from different viewpoints
    – Open topic
  • Related Work\Info:
    – "Neural scene representation and rendering"
    – "Inverting Visual Representations with Convolutional Networks"
    – "Learning to Generate Chairs, Tables and Cars with Convolutional Networks"
    – "Learning a Probabilistic Latent Space of Object Shapes via 3D Generative-Adversarial Modeling"
    – "Neural Discrete Representation Learning"
    – "DeepVoxels: Learning Persistent 3D Feature Embeddings"
    – Other 3D reconstruction papers with latent representation
  • Tutor: Maxim (maxim.maximov@tum.de)

SLIDE 45

Project Directions

  • Topic: Inverse Rendering
  • Problem: Illumination estimation
  • Main Points:
    – Use RGB (+ optionally depth)
    – How a mirror ball would look given an image
    – Or\and estimate a shadow map
  • Related Work\Info:
    – "Neural Inverse Rendering of an Indoor Scene from a Single Image"
    – "DeepLight: Learning Illumination for Unconstrained Mobile Mixed Reality"
    – "What Is Around The Camera?"
    – "LIME: Live Intrinsic Material Estimation"
    – "Learning to Reconstruct Shape and Spatially-Varying Reflectance from a Single Image"
    – AR selfie method
    – Other inverse-rendering papers
  • Tutor: Maxim (maxim.maximov@tum.de)

SLIDE 46

Project Directions

  • Topic: Interpretability\Generalization
  • Problem: Deep Learning Model Interpretability
  • Main Points:
    – Analysis + visualization
    – Common problems
    – Open topic
    – Much related work
  • Related Work\Info:
    – GitHub: pytorch-cnn-vizualization
    – Building blocks of interpretability
    – "The elephant in the room", etc.
    – Many other papers
  • Tutor: Maxim (maxim.maximov@tum.de)
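One of the simplest interpretability probes in this space is occlusion sensitivity: slide a masking patch over the input and record how much the model's score drops. A minimal sketch with a stand-in scoring function (`score`, the patch size, and the toy image are hypothetical placeholders for a real network and input):

```python
def occlusion_map(image, score, patch=2, fill=0):
    """Occlusion sensitivity: mask a patch at each position and record
    the drop in the model's score. Large drops mark regions the model
    relies on. `image` is a 2D list of intensities."""
    base = score(image)
    h, w = len(image), len(image[0])
    heat = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            occluded = [row[:] for row in image]  # copy the image
            for yy in range(y, min(y + patch, h)):
                for xx in range(x, min(x + patch, w)):
                    occluded[yy][xx] = fill
            heat[y][x] = base - score(occluded)  # score drop
    return heat

# Toy "model": responds only to the pixel at (1, 1).
score = lambda img: img[1][1]
img = [[0, 0, 0], [0, 9, 0], [0, 0, 0]]
heat = occlusion_map(img, score)
# The drop is largest wherever the patch covers (1, 1).
```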

SLIDE 47

General topics

SLIDE 48

Fake image generation/detection

  • Generative adversarial networks for video generation
    – Vondrick, Carl, Hamed Pirsiavash, and Antonio Torralba. "Generating videos with scene dynamics." Advances in Neural Information Processing Systems. 2016.
    – Kalchbrenner, Nal, et al. "Video pixel networks." arXiv preprint arXiv:1610.00527 (2016).
    – Wang, Ting-Chun, et al. "Video-to-Video Synthesis." arXiv preprint arXiv:1808.06601 (2018).

SLIDE 49

Fake image generation/detection

  • DeepFakes++: forgery generation and detection
    – Rössler, Andreas, et al. "FaceForensics: A Large-scale Video Dataset for Forgery Detection in Human Faces." arXiv preprint arXiv:1803.09179 (2018).
    – Kim, Hyeongwoo, et al. "Deep Video Portraits." arXiv preprint arXiv:1805.11714 (2018).
    – DeepFakes

SLIDE 50

Project Timeline

  • Oct 21st Project Introduction (second half of lecture)
  • Oct 23rd Projects Assignments (to TAs)
  • Oct 30th Abstract Submissions (midnight)
  • Until Nov 6th -> Feedback Projects
  • 4th + 11th December -> First presentations (group #1, #2)
  • 8th + 15th January -> Second presentation (group #1, #2)
  • Feb 4th -> Deadline report (noon)
  • Feb 5th -> Poster Presentation (Regular ex time slot)


SLIDE 51

Next lectures

  • Monday 4th November, 10-11:30h: Siamese Networks
    – No lecture next week!!! (ICCV)
  • This Wednesday, 14-15:30h: Meeting here to assign projects!

SLIDE 52

See you on Wednesday
