

SLIDE 1

Multimodal Machine Learning

SLIDE 2

Main Goal

Define a common taxonomy for multimodal machine learning and provide an overview of research in this area

SLIDE 3

Introduction: Preliminary Terms

Modality: the way in which something happens or is experienced.

Multimodal machine learning (MML): building models that process and relate information from multiple modalities.

SLIDE 4

History of MML

Audio-Visual Speech Recognition (AVSR)

  • McGurk effect
  • Visual information improved performance when the speech signal was noisy

Multimedia content indexing and retrieval

  • Searching visual and multimodal content directly

Multimodal interaction

  • Understanding human multimodal behaviors (facial expressions, speech, etc.) during social interactions

Media description

  • Image captioning
  • Challenging problem to evaluate

SLIDE 5

Five Main Challenges of MML

1. Representation – representing and summarizing multimodal data
2. Translation – mapping from one modality to another (e.g., image captioning)
3. Alignment – identifying the corresponding elements between modalities (e.g., matching recipe steps to the correct video frames)
4. Fusion – joining information from multiple modalities to make a prediction (e.g., using lip motion and speech to predict spoken words)
5. Co-learning – transferring knowledge between modalities, their representations, and their predictive models

These challenges need to be tackled for the field to progress.

SLIDE 6

Representation

Multimodal representation: a representation of data using information from multiple entities (an image, a word or sentence, an audio sample, etc.). We need to represent multimodal data in a meaningful way to build good models. This is challenging because multimodal data are heterogeneous.

SLIDE 7

Joint vs. Coordinated Representation

[Figure slide: joint representations project all modalities into a single shared space; coordinated representations keep a separate space per modality, related by constraints.] Example constraints: minimize cosine distance, maximize correlation.

SLIDE 8

Joint Representation

Mostly used when multimodal data is present during both training and inference. Methods:

  • Simple concatenation
  • Neural networks
  • Probabilistic graphical models
  • Sequential representation

Neural networks are often pre-trained using an autoencoder on unsupervised data.
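To make the concatenation + neural network route concrete, here is a minimal sketch (PyTorch; the feature sizes and layer widths are made-up assumptions, not from the slides): each modality is encoded separately, the codes are concatenated, and a shared layer produces the joint embedding.

```python
import torch
import torch.nn as nn

class JointRepresentation(nn.Module):
    """Joint multimodal representation: encode each modality,
    concatenate the codes, then map them into one shared embedding."""
    def __init__(self, audio_dim=40, visual_dim=512, joint_dim=128):
        super().__init__()
        # Modality-specific encoders (sizes are illustrative).
        self.audio_enc = nn.Sequential(nn.Linear(audio_dim, 64), nn.ReLU())
        self.visual_enc = nn.Sequential(nn.Linear(visual_dim, 64), nn.ReLU())
        # Shared layer over the concatenated unimodal codes.
        self.joint = nn.Linear(64 + 64, joint_dim)

    def forward(self, audio, visual):
        a = self.audio_enc(audio)
        v = self.visual_enc(visual)
        return self.joint(torch.cat([a, v], dim=-1))

model = JointRepresentation()
z = model(torch.randn(8, 40), torch.randn(8, 512))  # -> (8, 128) joint codes
```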

SLIDE 9

Coordinated Representation

Similarity models: enforce similarity between representations by minimizing the distance between modalities in the coordinated space.

Structured coordinated space models: enforce additional constraints between modalities. Example: cross-modal hashing, whose constraints are:

  • An N-dimensional Hamming space
  • The same object from different modalities has to have a similar hash code
  • Similarity-preserving
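A minimal sketch of a similarity-based coordinated space (PyTorch; the dimensions and the exact loss form are illustrative assumptions): each modality keeps its own projection, and training pulls matching pairs together by minimizing cosine distance in the shared space.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Separate projections: each modality keeps its own encoder,
# but the outputs live in one coordinated space.
image_proj = nn.Linear(512, 128)
text_proj = nn.Linear(300, 128)

img = F.normalize(image_proj(torch.randn(8, 512)), dim=-1)
txt = F.normalize(text_proj(torch.randn(8, 300)), dim=-1)

# Enforce similarity: minimize cosine distance (1 - cosine similarity)
# between representations of the same underlying object.
loss = (1 - (img * txt).sum(dim=-1)).mean()
loss.backward()
```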

SLIDE 10

Translation: Mapping from one modality to another (e.g., image captioning)

Example-based: use a dictionary to translate between modalities.

Generative: construct a model that translates between modalities.

SLIDE 11

Example-Based Translation

Combination-based: combine retrievals from the dictionary in a meaningful way to create a better translation. The combination rules are often hand-crafted or heuristic.

Retrieval-based: use the retrieved translation without modification. Problem: an extra processing step (e.g., re-ranking of retrieved translations) is often required, because similarity in the unimodal space does not always mean a good translation. Solution: use an intermediate semantic space for the similarity comparison. This performs better because the space reflects both modalities and allows bi-directional translation, but it requires manual construction or learning of the space, which needs large training dictionaries.

SLIDE 12

Generative Translation

Constructing models that perform multimodal translation from a unimodal source. This requires the ability to both understand the source and generate the target.

Grammar-based: detect high-level concepts from the source and generate a target using a pre-defined grammar. More likely to generate logically correct targets, but the translations are formulaic and the pipelines for concept detection are complex. Example: video description of who did what to whom, where, and how.

Encoder-decoder: encode the source modality into a latent representation, then decode that representation into the target modality in one pass. Encoders: RNNs, DBNs, CNNs. Decoders: RNNs, LSTMs. May end up memorizing the data and requires lots of training data.

Continuous generation: generate the target modality at every timestep based on a stream of source-modality inputs. Models: HMMs, RNNs, encoder-decoders.
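As an illustration of the encoder-decoder family, a heavily simplified sketch (PyTorch; the GRU cells, vocabulary size, and dimensions are all assumptions): the source modality is compressed into one latent state, which then conditions token-by-token decoding of the target.

```python
import torch
import torch.nn as nn

class EncoderDecoder(nn.Module):
    """One-pass generative translation: encode source features
    into a latent state, decode target tokens conditioned on it."""
    def __init__(self, feat_dim=512, hidden=256, vocab=10000):
        super().__init__()
        self.encoder = nn.GRU(feat_dim, hidden, batch_first=True)
        self.embed = nn.Embedding(vocab, hidden)
        self.decoder = nn.GRU(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab)

    def forward(self, src_feats, tgt_tokens):
        _, h = self.encoder(src_feats)   # latent representation of the source
        e = self.embed(tgt_tokens)
        dec, _ = self.decoder(e, h)      # decode conditioned on the latent
        return self.out(dec)             # logits over target tokens

model = EncoderDecoder()
logits = model(torch.randn(2, 20, 512), torch.randint(0, 10000, (2, 15)))
```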

SLIDE 13

Translation Evaluation: A Major Challenge

There are often multiple correct translations. Evaluation methods:

  • Human evaluation – impractical and biased
  • BLEU, ROUGE, Meteor, CIDEr – low correlation to human judgment, require a high number of reference translations
  • Retrieval – better reflects human judgments: rank the available captions and assess if the correct captions get a high rank
  • Visual question-answering for image captioning – ambiguity in questions and answers, question bias
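As a small illustration of the metric problem (NLTK; the captions are invented), BLEU scores a perfectly reasonable paraphrase near zero when only one reference caption is available:

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

references = [["a", "dog", "runs", "on", "the", "beach"]]
candidate = ["a", "puppy", "sprints", "along", "the", "sand"]

smooth = SmoothingFunction().method1
print(sentence_bleu(references, candidate, smoothing_function=smooth))
# Very low score, despite describing the same image: a reasonable
# paraphrase is punished when only one reference is available.
```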

SLIDE 14

Alignment

“Finding relationships and correspondences between sub-components of instances from two or more modalities.” Examples:

  • Given an image and a caption, find the areas of the image corresponding to the caption.
  • Given a movie, align it to the book chapters it was based on.
SLIDE 15

Explicit Alignment (unsupervised and supervised)

Unsupervised: no direct alignment labels. Supervised: direct alignment labels. Most approaches are inspired by work on statistical machine translation and genome sequence alignment. When there is no similarity metric between modalities, canonical correlation analysis (CCA) is used to map the modalities to a shared space: CCA finds the linear combinations of the data that maximize their correlation (a minimal CCA sketch follows the list below). Example applications:

  • Spoken words ↔ visual objects in images
  • Movie shots and scenes ↔ screenplay
  • Recipes ↔ cooking videos
  • Speakers ↔ videos
  • Sentences ↔ video frames
  • Image regions ↔ phrases
  • Speakers in audio ↔ locations in video
  • Objects in 3D scenes ↔ nouns in text
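A minimal CCA sketch with scikit-learn on random stand-in features (a real application would use, e.g., audio and visual descriptors): CCA projects both views into a shared space where their correlation is maximized, making cross-modal distances meaningful.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
audio = rng.normal(size=(500, 40))   # stand-in audio features
visual = rng.normal(size=(500, 64))  # stand-in visual features

cca = CCA(n_components=10)
A, V = cca.fit_transform(audio, visual)  # projections into the shared space

# Per-component correlations on the training data; meaningful
# structure appears when the two views are genuinely paired.
corrs = [np.corrcoef(A[:, i], V[:, i])[0, 1] for i in range(10)]
```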
SLIDE 16

Audio-Visual Speaker Diarization Based on Spatiotemporal Bayesian Fusion

SLIDE 17

Implicit Alignment

Used as an intermediate step for another task. Does not rely on supervised alignment examples; instead, the data is latently aligned during model training. Useful for speech recognition, machine translation, media description, and visual question-answering. Example: aligning words and image regions before performing image retrieval based on text descriptions. Difficulties in alignment:

  • Few datasets with explicitly annotated alignments
  • Difficult to design similarity metrics between modalities
  • There may exist 0, 1, or many correct alignments
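One standard way to obtain latent alignment, sketched below with made-up dimensions (PyTorch), is soft attention: each word scores every image region, and the resulting attention weights serve as an implicit word-to-region alignment learned as a by-product of the downstream task.

```python
import torch
import torch.nn.functional as F

regions = torch.randn(36, 512)  # image-region features (e.g., detector outputs)
words = torch.randn(12, 512)    # word embeddings projected to the same size

# Soft attention: score each word against each region.
scores = words @ regions.T / (512 ** 0.5)  # (12, 36)
align = F.softmax(scores, dim=-1)          # implicit word -> region alignment
attended = align @ regions                 # per-word visual context, (12, 512)
```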
SLIDE 18

Fusion

  • Early fusion – features are integrated immediately (e.g., by concatenation)
  • Late fusion – each modality makes an independent decision; decisions are combined by averaging, voting schemes, weighted combinations, or other ensemble techniques
  • Hybrid fusion – exploits the advantages of both
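A minimal scikit-learn sketch contrasting the two basic schemes on synthetic stand-in features: early fusion concatenates features before a single classifier; late fusion trains one classifier per modality and averages their decisions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_audio = rng.normal(size=(300, 40))
X_video = rng.normal(size=(300, 64))
y = rng.integers(0, 2, size=300)

# Early fusion: concatenate features, train a single model.
early = LogisticRegression(max_iter=1000).fit(np.hstack([X_audio, X_video]), y)

# Late fusion: independent per-modality models, average their probabilities.
clf_a = LogisticRegression(max_iter=1000).fit(X_audio, y)
clf_v = LogisticRegression(max_iter=1000).fit(X_video, y)
late_probs = (clf_a.predict_proba(X_audio) + clf_v.predict_proba(X_video)) / 2
late_pred = late_probs.argmax(axis=1)
```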

SLIDE 19

Fusion Techniques

Multiple kernel learning (MKL):

  • An extension of kernel support vector machines
  • Kernels function as similarity functions between data points
  • Modality-specific kernels allow for better fusion (see the sketch after this list)
  • Application: musical artist similarity ranking from acoustic, semantic, and social view data (McFee et al., Learning Multi-modal Similarity)

Neural networks (RNN/LSTM):

  • Can learn the multimodal representation and the fusion component end-to-end
  • Achieve good performance, but require large datasets and are less interpretable
  • Applications: audio-visual emotion classification, neural image captioning
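Full MKL learns the kernel weights jointly with the classifier; as a simplified stand-in (scikit-learn, synthetic data, hand-fixed weights), the sketch below combines one kernel per modality and feeds the mixture to an SVM through the precomputed-kernel interface.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics.pairwise import rbf_kernel

rng = np.random.default_rng(0)
X_acoustic = rng.normal(size=(100, 20))  # stand-in acoustic features
X_semantic = rng.normal(size=(100, 50))  # stand-in semantic features
y = rng.integers(0, 2, size=100)

# One kernel per modality; proper MKL would learn the mixing weights.
K = 0.6 * rbf_kernel(X_acoustic) + 0.4 * rbf_kernel(X_semantic)
svm = SVC(kernel="precomputed").fit(K, y)
```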
SLIDE 20

SLIDE 21

Co-learning

Modeling a resource-poor modality by exploiting a resource-rich modality. Used to address a lack of annotated data, noisy data, and unreliable labels. Can generate more labeled data, but can also lead to overfitting.

SLIDE 22

Co-learning examples

Transfer learning application: using text to improve visual representations for image classification by coordinating CNN features with word2vec features.

Conceptual grounding: learning meanings/concepts based on vision, sound, or smell (not just on language).

Zero-shot learning (ZSL): recognizing a class without having seen a labeled example of it. ZSL example: using an intermediate semantic space to predict unseen words people are thinking about from fMRI data (see the sketch below).
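A minimal sketch of that ZSL recipe (scikit-learn; the data, dimensions, and ridge-regression mapping are illustrative assumptions): learn a map from input features into the semantic space, then label a new example by its nearest class embedding, which can be a class never seen in training.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_train, feat_dim, sem_dim = 500, 100, 50
X = rng.normal(size=(n_train, feat_dim))  # e.g., fMRI or image features
S = rng.normal(size=(n_train, sem_dim))   # semantic codes of the seen labels

# Step 1: learn a map from input features into the semantic space.
reg = Ridge(alpha=1.0).fit(X, S)

# Step 2: classify by nearest class embedding, including unseen classes.
class_embeddings = rng.normal(size=(20, sem_dim))  # seen + unseen class codes
s_hat = reg.predict(rng.normal(size=(1, feat_dim)))
pred = np.argmin(np.linalg.norm(class_embeddings - s_hat, axis=1))
```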

SLIDE 23

Zero-Shot Learning with Semantic Output Codes

SLIDE 24

Grounding Semantics in Olfactory Perception

“This work opens up interesting possibilities in analyzing smell and even taste. It could be applied in a variety of settings beyond semantic similarity, from chemical information retrieval to metaphor interpretation to cognitive modelling. A speculative blue-sky application based on this, and other multi-modal models, would be an NLG application describing a wine based on its chemical composition, and perhaps other information such as its color and country of origin.”

SLIDE 25

Paper Critique

This paper is very thorough in its survey of MML challenges and the approaches researchers have taken to them. MML is central to the advancement of AI, so progress in this area is essential. Future research directions include any MML projects that make headway in the five challenge areas.

SLIDE 26

Questions?