

  1. Adversarial AI in Cyber Security (資訊安全中的人工智能對抗) 張佳彥

  2. WHO AM I • Joined Trend Micro in 2009 – Infra Developer – Threat Researcher – Machine Learning Researcher • Joined the XGen ML project in 2015 • Now leading the Machine Learning Research/Operations team of XGen

  3. Agenda • What is Machine Learning? • What is Adversarial Machine Learning? • Adversarial ML Methodologies • Possible Countermeasures • Conclusions

  4. Machine Learning & Adversarial Machine Learning

  5. XGen ML – Layered protection

  6. What is Machine Learning

  7. What is Adversarial Machine Learning Adversarial machine learning is a technique employed in the field of machine learning which attempts to fool models through malicious input. – Wikipedia

  8. What is Adversarial Machine Learning Image Recognition

  9. What is Adversarial Machine Learning Image Recognition

  10. What is Adversarial Machine Learning Spam Detection: spammers pad spam content with benign "word salad" to dilute the spam signal
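The word-salad trick on the slide above can be sketched with a toy unigram Naive Bayes filter. The word probabilities below are made-up illustration values, not from any real corpus; the point is only that enough "hammy" padding words outweigh the spam evidence.

```python
import math

# Toy unigram Naive Bayes spam filter. Probabilities are illustrative.
spam_probs = {"free": 0.8, "winner": 0.7, "cash": 0.6, "meeting": 0.1, "report": 0.1}
ham_probs  = {"free": 0.1, "winner": 0.05, "cash": 0.1, "meeting": 0.6, "report": 0.5}

def spam_score(words):
    """Log-odds of spam vs. ham; positive means 'classified as spam'."""
    score = 0.0
    for w in words:
        score += math.log(spam_probs.get(w, 0.3)) - math.log(ham_probs.get(w, 0.3))
    return score

spam = ["free", "winner", "cash"]
print(spam_score(spam) > 0)    # True: flagged as spam

# "Word salad": pad the same message with benign vocabulary until the
# accumulated ham evidence outweighs the spam words.
padded = spam + ["meeting", "report"] * 3
print(spam_score(padded) > 0)  # False: slips past the filter
```

The same spammy words are still present; the classifier is fooled purely by the added benign content.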

  11. Adversarial ML Methodologies

  12. Adversarial ML Methodologies • Evasion Attack • Black box • White box • Model stealing • Poisoning Attack

  13. Adversarial ML Methodologies (diagram) Evasion: the standard ML pipeline (training set → train → model → predict); at prediction time a crafted input makes the model misclassify

  14. Adversarial ML Methodologies (diagram) Poisoning: the same pipeline, but the attacker injects poisoned samples (e.g. dogs labeled as cats) into the training set, so the trained model misclassifies

  15. Evasion • Black Box: the attacker can only probe the model with inputs and observe its outputs • White Box: the attacker knows the model's parameters in detail
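In the white-box case, knowing the parameters lets the attacker move each feature directly against its weight. A minimal sketch on a linear classifier (weights and inputs are made-up illustration values; for a linear model this is exactly the fast-gradient-sign idea):

```python
# White-box evasion on a linear classifier score(x) = w.x + b.
# score > 0 means "malicious". Weights are illustrative.
w = [2.0, -1.0, 3.0]
b = -0.5

def score(x):
    return sum(wi * xi for wi, xi in zip(w, x)) + b

x = [0.6, 0.2, 0.4]   # score = 1.7 -> flagged as malicious
eps = 0.3

# Knowing w, step each feature by eps against the sign of its weight.
x_adv = [xi - eps * (1 if wi > 0 else -1) for wi, xi in zip(w, x)]

print(score(x))       # positive -> flagged
print(score(x_adv))   # score drops by eps * sum(|w|); now below 0 -> evades
```

A black-box attacker cannot do this one-shot computation and must fall back on query-based search, as the next slides show.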

  16. Black Box Evasion: Iterative Random Attack Evasion success ratio = 1/1000
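The iterative random attack can be sketched as blind trial and error against an opaque model: perturb the sample at random, query, and keep the first variant that evades. The threshold "model" and feature values below are stand-ins, not any real detector.

```python
import random

random.seed(0)

# Opaque model the attacker can only query: detects when the feature
# sum exceeds a threshold (a stand-in for a real black-box classifier).
def model_is_detected(x):
    return sum(x) > 10.0

def random_attack(x, tries=1000):
    """Randomly perturb one feature per try; return the first evasion found."""
    for _ in range(tries):
        cand = list(x)
        i = random.randrange(len(cand))
        cand[i] += random.uniform(-1.0, 1.0)
        if not model_is_detected(cand):
            return cand
    return None

sample = [3.0, 3.5, 4.0]        # sum = 10.5 -> detected
adv = random_attack(sample)
print(adv is not None)          # True: an evading variant was found
```

Each try is independent, so the hit rate stays low; the genetic algorithm on the next slide reuses what previous queries learned.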

  17. Black Box Evasion: Genetic Algorithm (diagram): start from a baseline (seed) sample, apply one of n possible random changes (its "DNA"), probe the model, select the lowest-scoring candidates as the next generation, and repeat for N generations. Evasion success ratio = 1/100
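The generate-probe-select loop from the diagram can be sketched as a simple evolutionary search (mutation and selection only; the scoring model, threshold, and sample values are stand-ins):

```python
import random

random.seed(1)

THRESHOLD = 10.0

def model_score(x):
    """Opaque model: the attacker only sees this score (lower = less malicious)."""
    return sum(x)

def mutate(x):
    """One random change to one feature: the 'DNA' of a candidate."""
    y = list(x)
    i = random.randrange(len(y))
    y[i] += random.uniform(-0.5, 0.5)
    return y

def genetic_evasion(seed_sample, pop=20, keep=5, generations=30):
    population = [mutate(seed_sample) for _ in range(pop)]
    for _ in range(generations):
        # Probe the model and select the lowest-scoring candidates.
        population.sort(key=model_score)
        if model_score(population[0]) <= THRESHOLD:
            return population[0]           # evasion found
        survivors = population[:keep]
        population = [mutate(random.choice(survivors)) for _ in range(pop)]
    return None

adv = genetic_evasion([4.0, 4.0, 4.0])     # score 12, above threshold
print(adv is not None)                     # True: evasion found
```

Because each generation keeps the most promising variants, far fewer queries are wasted than with purely random probing, matching the slide's jump from roughly 1/1000 to 1/100.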

  18. Poisoning Attack • Online training
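A model that keeps training online is exposed to poisoning: the attacker feeds it mislabeled samples until its decision boundary drifts. A minimal 1-D sketch with a nearest-centroid classifier (all numbers are illustrative):

```python
# Online nearest-centroid classifier: updated with every labeled sample.
class OnlineCentroid:
    def __init__(self):
        self.sums = {"ham": 0.0, "spam": 0.0}
        self.counts = {"ham": 0, "spam": 0}

    def update(self, x, label):
        self.sums[label] += x
        self.counts[label] += 1

    def predict(self, x):
        centroids = {c: self.sums[c] / self.counts[c] for c in self.sums}
        return min(centroids, key=lambda c: abs(x - centroids[c]))

model = OnlineCentroid()
for x in [0.0, 1.0, 2.0]:
    model.update(x, "ham")
for x in [8.0, 9.0, 10.0]:
    model.update(x, "spam")
print(model.predict(6.0))    # "spam" before poisoning

# Attacker submits mid-range values mislabeled as "ham", dragging the
# ham centroid toward spam territory.
for x in [5.0, 5.5, 6.0]:
    model.update(x, "ham")
print(model.predict(6.0))    # "ham" after poisoning
```

This is why the countermeasure slides pair poisoning with data/label quality control: online updates must be vetted before they reach the model.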

  19. Countermeasures

  20. Adversarial ML Countermeasures • Evasion Attack - Black box • Abuse Protection • Model Retrain • Reactive • Proactive (GAN) • Evasion Attack - White box • Data/feature/model protection • Poisoning Attack • Data/Label quality control


  22. Adversarial ML Countermeasures

  23. Adversarial ML Countermeasures • Evasion Attack - Black box • Abuse Protection • Model Retrain • Reactive • Proactive • Evasion Attack - White box • Data/feature/model protection • Poisoning Attack • Data/Label quality control

  24. Adversarial ML Countermeasures A security company's model identifies malware; hackers generate malware crafted to cheat the classifier

  25. Adversarial ML Countermeasures Reactive model retrain
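Reactive retraining means waiting until an evasion is caught (e.g. by a sandbox or an analyst), then adding that sample with its true label and retraining. A minimal 1-D sketch with a midpoint-threshold classifier (values are illustrative):

```python
# Train a 1-D threshold classifier: the boundary is the midpoint
# between the two class means.
def train(benign, malicious):
    mid = (sum(benign) / len(benign) + sum(malicious) / len(malicious)) / 2
    return lambda x: "malicious" if x >= mid else "benign"

benign = [1.0, 2.0, 3.0]
malicious = [8.0, 9.0, 10.0]
model = train(benign, malicious)

adv = 5.0                         # crafted to sit under the boundary
print(model(adv))                 # "benign" -> evasion succeeds

malicious.append(adv)             # caught later, labeled correctly
model = train(benign, malicious)  # reactive retrain
print(model(adv))                 # "malicious" -> same trick now fails
```

The weakness the deck points out is timing: the attack has to succeed at least once before a reactive retrain can learn from it.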

  26. Adversarial ML Countermeasures Proactive model retrain
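Proactive retraining flips that around: the defender generates evasive variants of its own malicious samples in advance and trains on them before any attacker does. The earlier slides suggest a GAN as the generator; the sketch below uses plain random perturbation as a stand-in, on the same toy threshold classifier (values are illustrative):

```python
import random

random.seed(2)

def train(benign, malicious):
    mid = (sum(benign) / len(benign) + sum(malicious) / len(malicious)) / 2
    return lambda x: "malicious" if x >= mid else "benign"

benign = [1.0, 2.0, 3.0]
malicious = [8.0, 9.0, 10.0]

baseline = train(benign, malicious)
print(baseline(5.2))   # "benign": a would-be evasion near the old boundary

# Defender-generated near-boundary malware variants, still labeled malicious
# (a GAN could produce these; random perturbation is the stand-in here).
synthetic = [m - random.uniform(2.0, 4.0) for m in malicious]
hardened = train(benign, malicious + synthetic)
print(hardened(5.2))   # "malicious": caught after the proactive retrain
```

The cost, as the next slide's hair-length example hints, is that pushing the boundary outward can also start flagging legitimate samples.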

  27. Adversarial ML Countermeasures What if hair length is an important feature?

  28. Adversarial ML Countermeasures • Trade-offs • Robustness vs. Accuracy • Proactive vs. Reactive • Fast vs. Confident



  32. Conclusions

  33. Conclusions • Almost all models can be cheated • Find possible vulnerabilities and take proper actions • This is an endless battle • Pros: global visibility and excellent operations • Cons: a single false negative (FN) can cause damage

  34. Conclusions • There is no silver bullet for cyber security • Dynamic and fast response are the key

  35. Thank You
