

  1. What's behind this model? Fernando Martínez-Plumed, Raül Fabra, Cèsar Ferri, José Hernández-Orallo, Mª José Ramírez Quintana

  2. Context: Security Issues and Machine Learning
  • Machine learning is being increasingly used in confidential and security-sensitive applications (such as spam filtering, fraud detection, malware classification, network anomaly detection):
  • models are being deployed with publicly accessible query interfaces.
  • it is assumed that data can be actively manipulated by an intelligent, adaptive adversary.
  • An adversary that can learn the model can also often evade detection.

  3. Adversarial Learning
  • The adversary knows the model type (logistic regression, decision tree, etc.):
  • Model extraction: the adversary's goal is to extract an equivalent or near-equivalent ML model. For instance, if f(x) returns just a class label, this matches traditional learning-theory settings with membership queries.

  4. Adversarial Learning
  • Black-box oracle access with membership queries that return just the predicted class label.
  • Membership queries can be used to find points close to f's decision boundary.
  • Idea: sample m points, query the oracle, and train a model f' on these samples.
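The extraction idea on this slide (sample m points, query the oracle, train f' on the answers) can be sketched as follows. This is an illustrative sketch, not the authors' implementation: the oracle model, the substitute family, and all sizes are assumptions chosen for the example.

```python
# Sketch of membership-query model extraction: sample m points, query the
# black-box oracle for class labels only, and train a substitute model f'.
# Oracle family, substitute family and sample sizes are illustrative choices.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# The "oracle": a model we only access through .predict (class labels).
X, y = make_classification(n_samples=500, n_features=4, random_state=0)
oracle = DecisionTreeClassifier(random_state=0).fit(X, y)

# Membership queries: m random points in the input space.
m = 2000
queries = rng.uniform(X.min(axis=0), X.max(axis=0), size=(m, X.shape[1]))
labels = oracle.predict(queries)          # only class labels, no probabilities

# Train the substitute model f' on the query/label pairs.
f_prime = LogisticRegression(max_iter=1000).fit(queries, labels)

# Agreement between f' and the oracle on fresh random points.
test = rng.uniform(X.min(axis=0), X.max(axis=0), size=(1000, X.shape[1]))
agreement = (f_prime.predict(test) == oracle.predict(test)).mean()
print(f"oracle/substitute agreement: {agreement:.2f}")
```

Concentrating the queries near f's decision boundary, as the slide suggests, would make the same budget of m queries far more informative than the uniform sampling used here.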

  5. Adversarial Learning
  • Examples of attack techniques for different families of learning techniques:
  • SVMs: Biggio, B., Corona, I., Nelson, B., Rubinstein, B. I., Maiorca, D., Fumera, G., ... & Roli, F. (2014). Security evaluation of support vector machines in adversarial environments. In Support Vector Machines Applications (pp. 105–153). Springer International Publishing.
  • DTs and ensembles of DTs: Cui, Z., et al. (2015). Optimal action extraction for random forests and boosted trees. In Proceedings of the 21st ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. ACM.
  • Deep neural networks: Wang, Q., Guo, W., Zhang, K., Ororbia II, A. G., Xing, X., Liu, X., & Giles, C. L. Learning adversary-resistant deep neural networks. arXiv:1612.01401.
  • Radford, A., Metz, L., & Chintala, S. (2015). Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv:1511.06434.

  6. Detecting the ML family
  • The adversary does NOT know the ML model type.
  • Model characteristics extraction: the adversary's goal is to extract the type of ML model used, as well as its intrinsic characteristics, so that the model can be evaded or its weaknesses, vulnerabilities or gaps exploited.

  7. Detecting the ML family
  • Model characteristics extraction:
  • Machine learning family -> layout of the decision boundaries.
  • Feature space significance -> which input attributes are more important: those varying the output more (more discriminating), or those requiring more magnitude or range to do so (difficulty).
  • Attribute transformations -> their effect on the boundaries and on the model family.
  We plan to start with a small set of ML families (decision trees, rule sets, linear discriminants).
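The feature-significance idea above can be probed purely through the query interface: vary one attribute at a time and count how often the oracle's label flips. The oracle, the probe count, and the uniform perturbation below are all illustrative assumptions, not the paper's protocol.

```python
# Black-box probe of feature significance: resample one attribute at a time
# and count how often the oracle's predicted label changes. Attributes whose
# variation flips the output more often are treated as more discriminating.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)
X, y = make_classification(n_samples=500, n_features=4, n_informative=2,
                           n_redundant=0, random_state=1)
oracle = DecisionTreeClassifier(max_depth=3, random_state=1).fit(X, y)

def flip_rate(oracle, X, feature, n_probes=200):
    """Fraction of probes where resampling `feature` changes the label."""
    idx = rng.integers(0, len(X), size=n_probes)
    base = X[idx].copy()
    before = oracle.predict(base)
    perturbed = base.copy()
    perturbed[:, feature] = rng.uniform(X[:, feature].min(),
                                        X[:, feature].max(), size=n_probes)
    after = oracle.predict(perturbed)
    return (before != after).mean()

rates = [flip_rate(oracle, X, j) for j in range(X.shape[1])]
print("flip rate per attribute:", np.round(rates, 2))
```

In this synthetic setup only two of the four attributes are informative, so their flip rates should dominate; the magnitude of perturbation needed to flip a label would similarly estimate the slide's "difficulty" notion.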

  8. Detecting the ML family
  • Evaluation pipeline (flattened architecture diagram; recoverable structure):
  • Oracle models trained on artificial datasets (D1, D2, …, DN) with several ML algorithms (decision stump, decision tree, logistic regression, ...), giving classifiers C1, C2, …, CN.
  • Query strategies: uniform, optimum size, Papernot.
  • Mimetic datasets and mimetic models (decision trees) built from the oracle's answers.
  • Meta-feature extraction: λ1, λ2, …, λX from the mimetic model and ß1, ß2, …, ßY from the mimetic dataset (M1, M2, …, MN), compared against the original dataset.
  • Meta-learning over these meta-features for algorithm identification and algorithm recommendation.
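A minimal end-to-end sketch of this pipeline, under heavy simplifying assumptions: only two oracle families, only two meta-features (depth and leaf count of the mimetic tree), and training accuracy instead of a proper evaluation. None of these choices come from the slides.

```python
# Sketch of the evaluation pipeline: query artificial oracles from known ML
# families, train a mimetic decision tree on each query set, extract
# meta-features from the mimetic tree, and fit a meta-level classifier that
# identifies the oracle's family. Families, meta-features and sizes are
# illustrative assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(2)

def mimetic_meta_features(oracle, n_queries=500, n_features=4):
    """Uniform queries -> oracle labels -> mimetic tree -> meta-features."""
    Q = rng.uniform(-3, 3, size=(n_queries, n_features))
    mimetic = DecisionTreeClassifier(random_state=0).fit(Q, oracle.predict(Q))
    return [mimetic.get_depth(), mimetic.get_n_leaves()]

meta_X, meta_y = [], []
for seed in range(20):
    X, y = make_classification(n_samples=300, n_features=4, random_state=seed)
    for family, oracle in [(0, LogisticRegression(max_iter=1000).fit(X, y)),
                           (1, DecisionTreeClassifier(random_state=seed).fit(X, y))]:
        meta_X.append(mimetic_meta_features(oracle))
        meta_y.append(family)

# Meta-level classifier: meta-features of the mimetic tree -> oracle family.
meta_clf = DecisionTreeClassifier(random_state=0).fit(meta_X, meta_y)
acc = (meta_clf.predict(meta_X) == np.array(meta_y)).mean()
print(f"training accuracy of the family identifier: {acc:.2f}")
```

The intuition being exercised: a mimetic tree that imitates a linear model tends to need many axis-aligned splits (deep, many leaves), while one that imitates another tree recovers its structure cheaply, so even these two crude meta-features carry a family signal.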

  9.–12. Mimetic Models (figure slides; images not preserved in this transcript)

  13. Inspecting IP models
  • Model characteristics extraction:
  • Are there relational patterns? -> e.g. X1 == X2.
  • Is the model recursive? -> exploiting recursive patterns can be a source of security issues.
  • Attribute transformations -> are complex features addressed by propositionalisation?
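A relational pattern such as X1 == X2 can in principle be detected through the same black-box interface: compare the oracle's answers on points that satisfy the relation with answers on the same points perturbed off it. The oracle below is a hypothetical stand-in with a hard-coded relational rule, purely for illustration.

```python
# Black-box probe for a relational pattern (X1 == X2): a large gap in positive
# rate between points on and off the relation suggests the model tests the
# relation itself rather than each attribute independently.
# The oracle is a hypothetical stand-in, not a model from the slides.
import numpy as np

rng = np.random.default_rng(3)

def oracle(X):
    """Hypothetical black box: positive iff the first two attributes match."""
    return np.isclose(X[:, 0], X[:, 1]).astype(int)

n = 500
on = rng.uniform(-1, 1, size=(n, 3))
on[:, 1] = on[:, 0]                         # force the relation X1 == X2
off = on.copy()
off[:, 1] += rng.uniform(0.1, 0.5, size=n)  # break the relation slightly

gap = oracle(on).mean() - oracle(off).mean()
print(f"positive-rate gap on vs off the relation: {gap:.2f}")
```

A model whose behaviour depends only on each attribute's marginal value would show a near-zero gap under this probe, since the perturbations are small.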

  14. Any idea, collaboration, … will be welcome.
