

1. Preprocessing and Dimensionality Reduction
   Jérémy Fix, CentraleSupélec
   jeremy.fix@centralesupelec.fr, 2017

2. Where to get data: you need datasets
   You can use open datasets, for example for experimenting with a new ML algorithm:
   • UCI ML Repository: http://archive.ics.uci.edu/ml/
   • Kaggle competitions, e.g. https://www.kaggle.com/c/diabetic-retinopathy-detection
   • specific well-known datasets for specific ML problems
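As an illustration (not from the slides), a minimal sketch of fetching an open dataset programmatically with scikit-learn's fetch_openml helper; the dataset name "mnist_784" is just one example of what OpenML hosts:

    # Fetch an open dataset from OpenML via scikit-learn (sketch).
    from sklearn.datasets import fetch_openml

    X, y = fetch_openml("mnist_784", version=1, return_X_y=True, as_frame=False)
    print(X.shape, y.shape)  # (70000, 784) (70000,)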

3. Some available datasets: face expression classification
   • 48x48 pixel grayscale images of faces
   • labels: 0=Angry, 1=Disgust, 2=Fear, 3=Happy, 4=Sad, 5=Surprise, 6=Neutral
   • 28K train; 3K public test, another 3K final test
   Kaggle, ICML 2013

4. Some available datasets: object localization/detection
   PascalVOC2012: 20 classes; 20,000 train images, 20,000 validation, 11,000 test
   Average image size: 469x387 pixels, RGB
   Classes: person / bird, cat, cow, dog, horse, sheep / aeroplane, bicycle, boat, bus, car, motorbike, train / bottle, chair, dining table, potted plant, sofa, tv-monitor
   http://host.robots.ox.ac.uk/pascal/VOC/

5. Some available datasets: object localization/detection
   ImageNet, ILSVRC2014: 1000 classes; 1.2M train images, 50K validation, 100K test
   Average image size: 482x415 pixels, RGB
   ImageNet Large Scale Visual Recognition Challenge, Russakovsky et al. (2015)

6. Some available datasets: object localization/detection
   Open Images Dataset: https://github.com/openimages/dataset
   • ≈ 9M automatically labelled images, 4M human-validated
   • 80M bounding boxes, 6000 classes
   • both meta labels (e.g. vehicle) and fine-grained labels (e.g. Honda NSX)

7. Some available datasets: object segmentation
   COCO 2017: 200K images, 80 classes, 500K masks
   http://cocodataset.org/

8. Some available datasets: recommendation systems
   MovieLens, Netflix Prize, Anime Recommendations Database
   MovieLens 20M:
   • 27K movies rated by 138K users
   • 5-star ratings in half-star increments (0.0, 0.5, ...)
   • 20M ratings
   • metadata (e.g. genre)
   • links to IMDb to enrich the metadata
   https://grouplens.org/datasets/movielens/

9. Some available datasets: automatic speech recognition
   TIMIT, VoxForge, ...
   TIMIT:
   • 630 speakers, eight American English dialects
   • time-aligned orthographic, phonetic, and word transcriptions
   • a 16 kHz speech waveform file for each utterance
   https://catalog.ldc.upenn.edu/ldc93s1

10. Some available datasets: sentiment analysis and automatic translation
    Sentiment analysis, Large Movie Review Dataset (IMDB):
    • 25K reviews for training, 25K reviews for testing
    • movie reviews (sentences), each with a rating in [1, 10]
    • aim: are the reviews on a given product positive or negative?
    Maas (2011), Learning Word Vectors for Sentiment Analysis
    Automatic translation, datasets from the European Parliament (Europarl):
    • single-language datasets (language modelling)
    • parallel corpora (translation), e.g. French-English (2M sentences), Czech-English (650K sentences), ...
    Philipp Koehn (2005), Europarl: A Parallel Corpus for Statistical Machine Translation

11. Make your own dataset
    You have a specific problem, so you may need to collect data on your own:
    • crawl the web (e.g. the Twitter API, ...)
    • for supervised learning: assign labels (Mechanical Turk, or domain experts, e.g. for classifying tumors)
    • ensure you have collected sufficient features

12. Preprocessing
    We need vectors, appropriately scaled, without missing values.

13. Preprocessing data
    Data are not necessarily vectorial:
    • ordinal or categorical features: poor/fair/excellent; male/female
    • text documents: bag of words / word embeddings
    Even if vectorial:
    • missing data: check how missing values are indicated (-9, ' ', ...) → imputation of missing values
    • feature scaling

14. Your data might not be vectorial: ordinal and categorical features
    Ordinal values have an order:
      Ordinal feature value:   poor | fair | excellent
      Numerical feature value:  -1  |  0   |    1
    Categorical values do not have an order, so use a one-hot encoding:
      Categorical value: American     | Spanish      | German       | French
      Numerical value:   [1, 0, 0, 0] | [0, 1, 0, 0] | [0, 0, 1, 0] | [0, 0, 0, 1]
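Both encodings can be obtained with scikit-learn (a sketch under that assumption; note that OrdinalEncoder maps to 0, 1, 2 rather than the slide's -1, 0, 1, which is just a shift):

    # Encode ordinal and categorical features (sketch; scikit-learn assumed).
    import numpy as np
    from sklearn.preprocessing import OneHotEncoder, OrdinalEncoder

    quality = np.array([["poor"], ["excellent"], ["fair"]])
    # Ordinal: impose the order poor < fair < excellent -> 0 < 1 < 2.
    ordinal = OrdinalEncoder(categories=[["poor", "fair", "excellent"]])
    print(ordinal.fit_transform(quality).ravel())  # [0. 2. 1.]

    nationality = np.array([["American"], ["German"], ["Spanish"]])
    # One-hot: no order imposed ('sparse_output' was named 'sparse' before scikit-learn 1.2).
    onehot = OneHotEncoder(sparse_output=False)
    print(onehot.fit_transform(nationality))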

15. Your data might not be vectorial: text documents
    Vectorial representation of text documents, Bag of Words:
    • define a vocabulary V, |V| = n
    • for each document, build a vector x so that x_i is the frequency of the word V_i
    e.g. V = {I, in, love, metz, machinelearning, study}
    "I love machine learning and love metz too." → x = [1, 0, 2, 1, 1, 0]
    "I love studying machine learning in Metz." → x = [1, 1, 1, 1, 1, 1]
    This does not take word order into account → n-grams, but these lead to sparser representations.
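A minimal bag-of-words sketch with scikit-learn's CountVectorizer (an assumption, not the slides' tooling; unlike the slide, "machine" and "learning" count as two separate tokens here, and a custom token pattern keeps one-letter words like "I"):

    # Bag-of-words document-term counts (sketch).
    from sklearn.feature_extraction.text import CountVectorizer

    docs = [
        "I love machine learning and love metz too.",
        "I love studying machine learning in Metz.",
    ]
    vectorizer = CountVectorizer(token_pattern=r"(?u)\b\w+\b")
    X = vectorizer.fit_transform(docs)         # sparse document-term matrix
    print(vectorizer.get_feature_names_out())  # the learned vocabulary V
    print(X.toarray())                         # word frequencies per document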

16. Your data might not be vectorial: text documents
    Word/sentence embeddings (e.g. word2vec, GloVe, fastText).
    Continuous Bag of Words (CBOW): predict a word given its context
    • input and output are coded with one-hot vectors
    • the hidden layer gives the word representation
    This captures some semantic information. For sentences: tweet2vec, sentence2vec, averaging word vectors.
    See also Bayesian approaches (e.g. Latent Dirichlet Allocation).
    Pennington (2014), GloVe: Global Vectors for Word Representation; Mikolov (2013), Efficient Estimation of Word Representations in Vector Space; https://fasttext.cc/
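A CBOW training sketch with gensim (an assumption, not from the slides; parameter names follow gensim 4.x, where sg=0 selects CBOW):

    # Train CBOW word embeddings on a toy corpus (sketch; gensim assumed installed).
    from gensim.models import Word2Vec

    corpus = [
        ["i", "love", "machine", "learning"],
        ["i", "love", "studying", "machine", "learning", "in", "metz"],
    ]
    model = Word2Vec(corpus, vector_size=50, window=2, min_count=1, sg=0)
    print(model.wv["machine"].shape)         # (50,) dense word vector
    print(model.wv.most_similar("machine"))  # neighbors in embedding space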

17. Some features might be missing
    Missing features: either
    • completely drop the samples with missing attributes, or the dimensions that have missing values,
    • or try to impute, i.e. set a value in place of the missing attribute.
    For missing value imputation there are plenty of methods:
    • global: assign the mean, median, or most frequent value of the attribute
    • local: based on the k-nearest neighbors, decide which value to impute
    The bias you may introduce by imputing a value depends on the causes of the missing values; see Silva (2014), A brief review of the main approaches for treatment of missing data.
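Both families of methods are available in scikit-learn (a sketch under that assumption):

    # Global (mean) vs. local (k-NN) imputation of missing values (sketch).
    import numpy as np
    from sklearn.impute import KNNImputer, SimpleImputer

    X = np.array([[1.0, 2.0], [np.nan, 3.0], [7.0, 6.0]])
    print(SimpleImputer(strategy="mean").fit_transform(X))  # column mean
    print(KNNImputer(n_neighbors=2).fit_transform(X))       # neighbors' mean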

18. Some vectorial data might not be appropriately scaled
    Why feature scaling matters:
    • dimensions with the largest variations dominate Euclidean distances (e.g. in nearest neighbors)
    • when gradient descent is involved, feature scaling makes convergence faster (the loss becomes closer to circularly symmetric)
    • when regularization is involved, we would like a single regularization coefficient, independent of the scale of the features

19. Feature scaling
    Given $x_i \in \mathbb{R}^d$, you can normalize by:
    • min/max scaling: $x'_{i,j} = \frac{x_{i,j} - \min_k x_{k,j}}{\max_k x_{k,j} - \min_k x_{k,j}}, \quad \forall i, \forall j \in [0, d-1]$
    • z-score normalization: $x'_{i,j} = \frac{x_{i,j} - \mu_j}{\sigma_j}$, with $\mu_j = \frac{1}{N} \sum_k x_{k,j}$ and $\sigma_j = \sqrt{\frac{1}{N} \sum_k (x_{k,j} - \mu_j)^2}$
    Your statistics must be computed from the training set and then also applied to the test data.
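In scikit-learn terms (a sketch under that assumption), the fit/transform split enforces exactly this train-then-apply discipline:

    # Fit scalers on the training set only, then apply them to the test set.
    import numpy as np
    from sklearn.preprocessing import MinMaxScaler, StandardScaler

    X_train = np.array([[1.0, 200.0], [2.0, 300.0], [3.0, 400.0]])
    X_test = np.array([[1.5, 250.0]])

    zscore = StandardScaler()              # (x - mu_j) / sigma_j per feature
    X_train_z = zscore.fit_transform(X_train)
    X_test_z = zscore.transform(X_test)    # reuse the training-set statistics

    minmax = MinMaxScaler()                # maps each training feature to [0, 1]
    X_train_m = minmax.fit_transform(X_train)
    X_test_m = minmax.transform(X_test)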

20. Dimensionality reduction

21. Dimensionality reduction: what/why/how?
    What: optimally transform $x_i \in \mathbb{R}^n$ into $z_i \in \mathbb{R}^d$ so that $d \ll n$. It remains to define what "optimally transform" means.
    Why:
    • visualization of the data
    • interpretability of the predictor
    • speed up the algorithms whose complexity depends on n
    • the data may occupy a manifold of lower dimensionality than n
    • curse of dimensionality: data quickly get sparse, models may overfit
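One classic instance of such a transform is PCA, optimal in the least-squares reconstruction sense; a minimal sketch with scikit-learn (an assumption; PCA itself is not detailed on this slide):

    # Project n-dimensional data onto its d leading principal components (sketch).
    from sklearn.datasets import load_digits
    from sklearn.decomposition import PCA

    X, _ = load_digits(return_X_y=True)    # n = 64 pixel features
    pca = PCA(n_components=2)              # d = 2, d << n
    Z = pca.fit_transform(X)
    print(Z.shape)                         # (1797, 2)
    print(pca.explained_variance_ratio_)   # variance captured per component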

22. Dimensionality reduction: why? Data analysis/visualization
    How are your data distributed? How entangled are your classes? Do we have discriminative features?
    [Figure: t-SNE embedding of MNIST, van der Maaten et al.]
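A visualization sketch in the spirit of that figure (scikit-learn's t-SNE on the small digits dataset rather than full MNIST, to keep it fast):

    # 2-D t-SNE embedding of the digits dataset for visualization (sketch).
    import matplotlib.pyplot as plt
    from sklearn.datasets import load_digits
    from sklearn.manifold import TSNE

    X, y = load_digits(return_X_y=True)
    Z = TSNE(n_components=2, init="pca", random_state=0).fit_transform(X)
    plt.scatter(Z[:, 0], Z[:, 1], c=y, s=5, cmap="tab10")
    plt.show()                             # class structure appears as clusters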

23. Dimensionality reduction: why? Interpretability of the predictor
    e.g. why does this predictor say the tumor is malignant?
    [Figure: two predictors compared, real risk = 0.92 ± 0.06 vs. real risk = 0.92 ± 0.05]
    UCI ML Breast Cancer Wisconsin (Diagnostic) dataset; real risk estimated by 10-fold CV.
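A sketch of how such an estimate is obtained (the slides do not say which predictor was used; logistic regression is an assumption here):

    # Estimate the real risk by 10-fold cross-validation (sketch).
    from sklearn.datasets import load_breast_cancer
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    X, y = load_breast_cancer(return_X_y=True)
    model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    scores = cross_val_score(model, X, y, cv=10)  # 10-fold accuracy
    print(scores.mean(), scores.std())            # mean ± spread over the folds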
