

  1. Labeling Information Enhancement for Multi-label Learning with Low-rank Subspace. An Tao*, Ning Xu, and Xin Geng, Southeast University, China

  2. Outline: 1. Introduction; 2. The LIEML Algorithm; 3. Experiments; 4. Conclusion

  3. Introduction 1

  4. Introduction. In traditional multi-label learning, the features are the pixels in the picture and the label set is Y = { sky, water, cloud, beach, plant, house }. Labeling information in multi-label learning is categorical in essence: each label is regarded as either relevant or irrelevant to the instance. We call such a label a logical label.

  5. Introduction. However, two different pictures can share the same logical label set Y = { sky, water, cloud, beach, plant, house }. Q: The two pictures cannot be distinguished with only logical labels. A: To describe the pictures better, we extend the logical label to be numerical. This new label is called a numerical label.

  6. Introduction. Specification of the logical label: the logical label vector 𝒛 has one element in {−1, 1} per label; an element equal to 1 means the label is relevant to the instance, and an element equal to −1 means it is irrelevant. Specification of the numerical label: in the numerical label vector, a positive element means the label is relevant, a negative element means it is irrelevant, and the absolute value of an element reflects the degree to which the label describes the instance.
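The contrast between the two label types can be sketched in a few lines of Python; the label set follows the slides, while the numerical values below are made-up illustrations of the sign-plus-magnitude convention, not values from the paper:

```python
# Minimal sketch: logical vs. numerical labels for one beach image.
# The numerical values are hypothetical, chosen only for illustration.
import numpy as np

labels = ["sky", "water", "cloud", "beach", "plant", "house"]

# Logical label: each label is only relevant (+1) or irrelevant (-1).
z = np.array([1, 1, 1, 1, -1, -1])

# Numerical label: the sign keeps relevance/irrelevance, the magnitude
# expresses how strongly each label describes this particular picture.
mu = np.array([0.9, 0.7, 0.2, 0.8, -0.3, -0.6])

for name, zi, mi in zip(labels, z, mu):
    print(f"{name:>6}: logical {zi:+d}, numerical {mi:+.1f}")
```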

  7. Introduction. Overview of our LIEML algorithm for multi-label learning. Step 1: Label Enhancement (LE) converts the logical labels into numerical labels. Step 2: Predictive model induction learns a multi-label model from the features together with the numerical labels. LE can be seen as a data preprocessing step that facilitates learning a better model.

  8. The LIEML Algorithm 2

  9. The LIEML Algorithm. Symbol definitions: the training dataset, the i-th instance vector, the i-th logical label vector, and the i-th numerical label vector. Label Enhancement linear model: the numerical label vector of an instance is produced from its features by a linear model parameterized by a weight matrix and a bias vector.

  10. The LIEML Algorithm. Label Enhancement linear model: for convenience of description, the model is rewritten in a compact form. We then construct a stacked matrix Z, which serves as the target matrix of Label Enhancement.
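As a rough illustration of this step, the sketch below produces numerical label vectors with a linear map and stacks them into Z. Absorbing the bias by appending a constant-1 feature, reading Z as the stack of numerical label vectors, and the toy sizes and random values are all assumptions made only for the example, not the paper's exact formulation:

```python
# Sketch of a linear Label Enhancement model (assumed convention: the bias is
# absorbed into the weight matrix by appending a constant 1 to each instance).
import numpy as np

rng = np.random.default_rng(0)
n, d, c = 5, 4, 6                         # toy sizes: instances, features, labels

X = rng.normal(size=(n, d))               # instance matrix
X_hat = np.hstack([X, np.ones((n, 1))])   # append 1 so the bias rides along
W_hat = rng.normal(size=(d + 1, c))       # weight matrix with the bias row included

Z = X_hat @ W_hat                         # row i = numerical label vector of instance i
print(Z.shape)                            # (n, c): the stacked target matrix Z
```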

  11. The LIEML Algorithm. Label Enhancement: the enhanced labels should not deviate too much from the original ones. The optimization problem of LE therefore includes L(Z), a logistic loss function that prevents the label values in Z from deviating too much from the original values.

  12. The LIEML Algorithm. Label Enhancement should increase the labeling information, not deviate too much from the logical labels, and keep Rank(Z) low; the low-rank term is built from the nuclear norm and a squared function. Low-rank assumption. Q: Why construct the stacked matrix Z? A: We assume that the stacked matrix Z lies in an underlying low-rank subspace, so Z is an underlying low-rank matrix.

  13. The LIEML Algorithm. Label Enhancement combines the two requirements, staying close to the logical labels and keeping Rank(Z) low, which yields the target function T1 for optimization.
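The precise formula for T1 appears only on the slide image, so the sketch below is an assumption-laden reconstruction of the two ingredients the slides name: a logistic-loss fidelity term that keeps Z consistent with the logical labels, and a nuclear-norm term that encourages Z to be low-rank. The trade-off weight lam and the exact combination are hypothetical:

```python
# Sketch of a T1-style objective: logistic-loss fidelity + nuclear-norm
# low-rank term. The exact form and weighting used by LIEML are not
# reproduced here.
import numpy as np

def le_objective(Z, Z_logical, lam=1.0):
    # Fidelity: small when each numerical value agrees in sign (and scale)
    # with the corresponding logical label in {-1, +1}.
    fidelity = np.sum(np.logaddexp(0.0, -Z_logical * Z))
    # Low-rank surrogate: nuclear norm = sum of singular values of Z.
    nuclear = np.sum(np.linalg.svd(Z, compute_uv=False))
    return fidelity + lam * nuclear
```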

  14. The LIEML Algorithm. Predictive Model Induction: we build the learning model through an adapted regressor based on MSVR. The target function T2 that we wish to minimize involves a nonlinear transformation of x into a higher-dimensional feature space.
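As background, the kind of loss that MSVR-style regressors minimize can be sketched as below: a quadratic epsilon-insensitive penalty on the norm of each instance's whole residual vector plus a squared-norm regularizer. The feature map, constants, and variable names are illustrative assumptions rather than the exact T2 of the paper:

```python
# Sketch of an MSVR-style objective over all label outputs jointly.
import numpy as np

def msvr_objective(W, b, Phi, U, C=1.0, eps=0.1):
    # Phi: (n, D) instances in the transformed feature space, U: (n, c) targets.
    R = U - (Phi @ W + b)                          # residual vector per instance
    u = np.linalg.norm(R, axis=1)                  # norm of each residual vector
    loss = np.where(u > eps, (u - eps) ** 2, 0.0)  # quadratic eps-insensitive loss
    return 0.5 * np.sum(W ** 2) + C * np.sum(loss)
```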

  15. The LIEML Algorithm

  16. Experiments 3

  17. Experiments. Experiment configuration: ten benchmark multi-label data sets; six well-established multi-label learning algorithms as comparison methods (BR, CLR, ECC, RAKEL, LP, and ML²); five evaluation metrics widely used in multi-label learning (Ranking loss, One-error, Hamming loss, Coverage, and Average precision).
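For reference, one of the listed metrics, Hamming loss, is the fraction of instance-label pairs whose predicted relevance disagrees with the ground truth; a minimal sketch with made-up toy matrices follows:

```python
# Hamming loss: fraction of mismatched instance-label pairs.
import numpy as np

def hamming_loss(Y_true, Y_pred):
    # Y_true, Y_pred: (n, c) arrays with entries in {-1, +1}.
    return float(np.mean(Y_true != Y_pred))

Y_true = np.array([[1, -1, 1], [-1, 1, 1]])
Y_pred = np.array([[1, 1, 1], [-1, 1, -1]])
print(hamming_loss(Y_true, Y_pred))   # 2 mismatches out of 6 pairs -> 0.333...
```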

  18. Experiments Experimental Results

  19. Experiments Experimental Results

  20. Experiments. Experimental results: LIEML ranks 1st in most cases. The LE model in LIEML is linear while the one in ML² is nonlinear, so it is not easy for LIEML to beat ML² with the simpler linear model; the results of the experiments nevertheless validate the effectiveness of our LIEML algorithm for multi-label learning.

  21. Conclusion 4

  22. Conclusion. Major contribution: this paper proposes a novel multi-label learning method named LIEML, which enhances the labeling information by extending logical labels into numerical labels. The labeling information is enhanced by leveraging the underlying low-rank structure of the stacked matrix. More information: my personal website: https://antao.netlify.com/ ; our lab website: http://palm.seu.edu.cn/

  23. Thank You!

  24. The LIEML Algorithm. Label Enhancement: to optimize the target function T1, we alternate between two steps. Step 1: gradient descent. Step 2: shrinkage. To improve the speed of convergence, we begin with a large value µ1 for µ and decay it over the iterations.
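The shrinkage step is presumably the standard proximal update for a nuclear-norm term, i.e. soft-thresholding the singular values of Z; the sketch below is written under that assumption, with the threshold tau standing in for whatever value the slide's update prescribes:

```python
# Singular-value soft-thresholding, the usual "shrinkage" operator for
# nuclear-norm terms. Whether LIEML uses exactly this update is an assumption.
import numpy as np

def svd_shrink(Z, tau):
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)   # shrink the singular values toward zero
    return U @ np.diag(s_shrunk) @ Vt
```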

  25. The LIEML Algorithm. Predictive Model Induction: to minimize T2(Θ; m), we use an iterative quasi-Newton method called Iterative Re-Weighted Least Squares (IRWLS). The objective is approximated by a quadratic problem that can be solved in closed form, and the solution for the next iteration, Θ(k+1) and m(k+1), is obtained via a line search algorithm along the direction from the current (Θ, m).
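A generic IRWLS-style iteration for a loss of this kind can be sketched as follows; the per-instance weighting rule, the fixed bias, and the simple backtracking line search are illustrative assumptions, not the paper's exact derivation:

```python
# One IRWLS-style step: weight each instance by its current residual, solve the
# weighted least-squares problem, then line-search toward that solution.
import numpy as np

def irwls_step(W, b, Phi, U, C=1.0, eps=0.1, step_sizes=(1.0, 0.5, 0.25, 0.1)):
    def objective(Wc):
        u = np.linalg.norm(U - (Phi @ Wc + b), axis=1)
        return 0.5 * np.sum(Wc ** 2) + C * np.sum(np.where(u > eps, (u - eps) ** 2, 0.0))

    # Per-instance weights derived from the quadratic eps-insensitive loss.
    u = np.linalg.norm(U - (Phi @ W + b), axis=1)
    a = np.zeros_like(u)
    active = u > eps
    a[active] = 2.0 * C * (u[active] - eps) / u[active]

    # Weighted least squares for a candidate weight matrix (bias kept fixed here).
    D = np.diag(a)
    W_ls = np.linalg.solve(Phi.T @ D @ Phi + np.eye(Phi.shape[1]), Phi.T @ D @ (U - b))

    # Backtracking line search along the direction W_ls - W.
    best = W
    for eta in step_sizes:
        cand = W + eta * (W_ls - W)
        if objective(cand) < objective(best):
            best = cand
    return best
```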

  26. Experiments Experiment Configuration

  27. Experiments. Experiment configuration. Reference: Zhi-Hua Zhou and Min-Ling Zhang. "Multi-label Learning." (2017): 875–881.

  28. Introduction. In traditional multi-label learning, an instance is described by some features, a model is learned from them, and for a new instance the model predicts Yes or No for each label: Label 1, Label 2, ..., Label d.
