Thirty-seventh International Conference on Machine Learning
Adversarial Nonnegative Matrix Factorization
Lei Luo, Yanfu Zhang, Heng Huang
Electrical and Computer Engineering, University of Pittsburgh; JD Finance America Corporation


SLIDE 1

Adversarial Nonnegative Matrix Factorization

Lei Luo, Yanfu Zhang, Heng Huang
Electrical and Computer Engineering, University of Pittsburgh; JD Finance America Corporation
luoleipitt@gmail.com

Thirty-seventh International Conference on Machine Learning

SLIDE 2

Outline

➢ Background
➢ Motivation
➢ Our Work
➢ Experiments

SLIDE 3

Outline

➢ Background
➢ Motivation
➢ Our Work
➢ Experiments

SLIDE 4

Background

➢ Nonnegative matrix factorization (NMF) is a prevalent nonnegative dimensionality reduction method
➢ Applications: feature extraction, video tracking, image processing, and document clustering
➢ Popular models: standard NMF, RNMF (Truncated Cauchy NMF)
➢ What is the aim of nonnegative matrix factorization?
  ➢ It factorizes an m×N-dimensional matrix Y into the product AX of two nonnegative matrices, where A has n columns and n is generally small
➢ What makes nonnegative matrix factorization successful?
  ➢ Successfully fitting the noise term
  ➢ Novel training approaches in model design
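As a toy, self-contained sketch of the factorization Y ≈ AX (classical Lee–Seung multiplicative updates for the Frobenius objective, not the model proposed in this talk; all names are illustrative):

```python
import numpy as np

def nmf_multiplicative(Y, n, iters=500, seed=0, eps=1e-10):
    """Factorize a nonnegative m x N matrix Y as Y ~= A @ X, with A (m x n)
    and X (n x N) nonnegative, via Lee-Seung multiplicative updates."""
    rng = np.random.default_rng(seed)
    m, N = Y.shape
    A = rng.random((m, n)) + eps
    X = rng.random((n, N)) + eps
    for _ in range(iters):
        X *= (A.T @ Y) / (A.T @ A @ X + eps)   # coefficient update (keeps X >= 0)
        A *= (Y @ X.T) / (A @ X @ X.T + eps)   # basis update (keeps A >= 0)
    return A, X

Y = np.random.default_rng(1).random((20, 30))  # synthetic nonnegative data
A, X = nmf_multiplicative(Y, n=5)
rel_err = np.linalg.norm(Y - A @ X) / np.linalg.norm(Y)
```

Because the updates are multiplicative and all factors start positive, nonnegativity of A and X is preserved automatically at every step.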

SLIDE 5

Outline

➢ Background
➢ Motivation
➢ Our Work
➢ Experiments

SLIDE 6

Motivation

➢ Limitations of some existing methods
  ➢ Existing methods are only suitable for some special types of noise, e.g., Laplacian or Cauchy noise, and so cannot offer flexibility against worst-case (i.e., adversarial) perturbations of data points
➢ Our method
  ➢ We introduce a novel Adversarial Nonnegative Matrix Factorization (ANMF) model by emphasizing potential test adversaries that are beyond the pre-defined constraints

SLIDE 7

Outline

➢ Background
➢ Motivation
➢ Our Work
➢ Experiments

SLIDE 8

Our work

➢ NMF can be formulated as:

  min_{A ≥ 0, X ≥ 0} ‖Y − AX‖²_F    (1)

Assumptions:
1. The learned feature data A and the given data Y are drawn from an unknown distribution at training time. The test data can be generated either from the same distribution as the training data, or from a modification of that distribution produced by an attacker.
2. The action of the learner is to select the parameters of Eq. (1). The attacker has an instance-specific target and encourages the prediction made by the learner on the modified instance to be close to this target.
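A purely illustrative sketch of this learner–attacker interaction: given a hypothetical fixed nonnegative basis A, the attacker blends a test instance toward the reconstruction of its target code, which pulls the learner's nonnegative coding toward that target. The blend is a crude stand-in for the attacker's actual best response (Theorem 1), which we do not reproduce; `A`, `y`, `x_target`, and `alpha` are all made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.random((10, 4))                      # hypothetical learned nonnegative basis
y = rng.random(10)                           # a clean test instance
x_target = np.array([1.0, 0.0, 0.0, 0.0])   # attacker's instance-specific target code

def nn_code(A, y, iters=300, eps=1e-10):
    # Nonnegative coding x = argmin_{x >= 0} ||y - A x||^2, via multiplicative updates.
    x = np.full(A.shape[1], 0.5)
    for _ in range(iters):
        x *= (A.T @ y) / (A.T @ A @ x + eps)
    return x

# Crude attacker: move the instance toward the target's reconstruction A @ x_target,
# staying nonnegative -- not the paper's closed-form construction.
alpha = 0.8
y_adv = np.clip((1 - alpha) * y + alpha * (A @ x_target), 0.0, None)

x_clean = nn_code(A, y)      # learner's prediction on the clean instance
x_adv = nn_code(A, y_adv)    # prediction on the modified instance, pulled toward x_target
```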

SLIDE 9

Our work

➢ The cost functions of the learner (C_l) and the attacker (C_a) are estimated by:
➢ Ultimately, our model is expressed as: (2)

Theorem 1. Given X, the best response of the attacker is: (3)

SLIDE 10

Since (3) involves the inverse of a complicated matrix, it is difficult to solve problem (2) by directly substituting (3) into (2). To mitigate this limitation, we treat (3) as a constraint of (2), which leads to the following problem:

(4)
(5)

SLIDE 11

Theoretical Analysis

We define the empirical reconstruction error of NMF as follows:
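The definition itself was rendered as an image and lost; one plausible version (the average squared residual of the best nonnegative coding of each sample — the paper's exact normalization may differ) can be sketched as:

```python
import numpy as np

def nn_code(A, y, iters=500, eps=1e-10):
    # x = argmin_{x >= 0} ||y - A x||^2 via multiplicative updates
    x = np.full(A.shape[1], 0.5)
    for _ in range(iters):
        x *= (A.T @ y) / (A.T @ A @ x + eps)
    return x

def empirical_reconstruction_error(A, Y):
    # Average squared residual of the best nonnegative coding of each sample
    # (one plausible definition; the paper's normalization may differ).
    residuals = [np.linalg.norm(Y[:, i] - A @ nn_code(A, Y[:, i])) ** 2
                 for i in range(Y.shape[1])]
    return float(np.mean(residuals))

rng = np.random.default_rng(0)
A = rng.random((6, 3))
Y = A @ rng.random((3, 4))                   # samples with an exact nonnegative coding
err = empirical_reconstruction_error(A, Y)   # near zero, since Y is exactly representable
```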

SLIDE 12

➢ The proposed algorithm: we apply the Alternating Direction Method of Multipliers (ADMM) optimization algorithm to solve our problem
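As a generic, simplified illustration (not Algorithm 1 from the paper): one alternating sub-problem — solving for X with A fixed, under the splitting X = Z with Z ≥ 0 — has the classic ADMM form (cf. the Hajinezhad et al. reference). All names and parameter values here are illustrative.

```python
import numpy as np

def admm_nn_ls(A, Y, rho=1.0, iters=300):
    """min_X 0.5 * ||Y - A X||_F^2  s.t.  X >= 0, via ADMM with splitting X = Z, Z >= 0."""
    n, N = A.shape[1], Y.shape[1]
    Z = np.zeros((n, N))
    U = np.zeros((n, N))                      # scaled dual variable
    G = A.T @ A + rho * np.eye(n)             # system matrix, fixed across iterations
    AtY = A.T @ Y
    for _ in range(iters):
        X = np.linalg.solve(G, AtY + rho * (Z - U))   # unconstrained ridge-like step
        Z = np.maximum(X + U, 0.0)                    # projection onto the nonnegative orthant
        U += X - Z                                    # dual ascent on the constraint X = Z
    return Z                                          # Z is nonnegative by construction

rng = np.random.default_rng(0)
A = rng.random((8, 3))
Y = A @ rng.random((3, 5))        # data with an exact nonnegative coding
X_hat = admm_nn_ls(A, Y)
rel = np.linalg.norm(Y - A @ X_hat) / np.linalg.norm(Y)
```

A full solver would alternate an analogous update for A and handle the additional ANMF constraints; only the X-step is sketched here.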
SLIDE 13

Our work

Convergence Analysis: Theorem 4. Let the sequence generated by Algorithm 1 satisfy the stated condition. Then any accumulation point of this sequence is a KKT point of problem (5).

SLIDE 14

Outline

➢ Background
➢ Motivation
➢ Our Work
➢ Experiments

SLIDE 15

Experiments

SLIDE 16

Experiments

SLIDE 17

Experiments

SLIDE 18

References

  • Guan, N., Liu, T., Zhang, Y., Tao, D., and Davis, L. S. Truncated Cauchy non-negative matrix factorization for robust subspace learning. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017.
  • Farnia, F., Zhang, J. M., and Tse, D. Generalizable adversarial training via spectral normalization. arXiv preprint arXiv:1811.07457, 2018.
  • Hajinezhad, D., Chang, T.-H., Wang, X., Shi, Q., and Hong, M. Nonnegative matrix factorization using ADMM: algorithm and convergence analysis. In IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 4742–4746. IEEE, 2016.
  • ……
SLIDE 19