  1. Towards Adversarial Phishing Detection
     T. K. Panum, K. Hageman, R. R. Hansen, J. M. Pedersen
     13th USENIX Workshop on Cyber Security Experimentation and Test
     Long Paper (Position) – August 10, 2020

  2. Motivation
     Phishing Attacks
     - Advances in technical security measures cause users to become the target of exploits
     - Phishing attacks have exploited users for over two decades
     - Numerous countermeasures have been developed to fight the problem
     Contradictory Effectiveness (Marchal et al., 2018)
     - Multiple reports claim the frequency of attacks remains high (or is increasing)
     - State-of-the-art detection solutions report impressive evaluation measures [1]
     - Causes: biased evaluations and infeasible deployment
     Adversarial Robustness
     - Few methods evaluate their performance against attacks that actively seek to evade the proposed detection solution
     [1] Accuracy of ≥ 99.9%; false positive rates of ≤ 1%.

  3. Adversarial Robustness
     Adaptive attacks
     - Adaptive phishing attacks are attacks that remain undetected by a given detection solution, yet maintain the functional properties of phishing attacks
     - They exist due to the discrepancy between the model and reality
     [Figure: the set of phishing attacks covered by the detection solution (its adversarial robustness) versus the true set of phishing attacks]
     - Given that solutions are likely to face adaptive attacks in a practical setting, evaluations should seek to quantify performance against both observed attacks and adaptive attacks towards these (Ho et al., 2019)

  4. Phishing Environments
     - Attacks have existed across multiple environments
     - We formalize the shared properties of such environments as:
     Environment for Phishing Attacks: A messaging environment for which messages within this environment can fulfill the three axioms: Impersonating, Inducive, and Scalable.
     Messaging Environment: An environment in which messages can be exchanged using a channel across multiple senders and recipients.
     Message: Contains some content and relates to a sender and a recipient.

  5. Axioms [1]
     Lastdrager et al.'s definition of phishing attacks: "Phishing is a scalable act of deception whereby impersonation is used to obtain information from a target."
     Impersonating: Should deceive the recipient into trusting the fake identity of the sender
     Inducive: Should induce some form of action that allows the attacker to obtain information
     Scalable: Crafting the attack should be inexpensive (time, $)
     [1] These are merely abstract classes of information required to infer phishing, and thereby do not put logical constraints on the ability to obtain this information for concrete applications.
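
As an aside (not part of the slides), the formalization can be read as a small data model; the class and predicate names below are hypothetical and only sketch how the axioms compose.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Message:
    """A message in a messaging environment: content tied to a sender and a recipient."""
    sender: str
    recipient: str
    content: bytes

# The three axioms as abstract predicates over a message; concrete detection
# strategies differ in how (and whether) they can evaluate each one.
Axiom = Callable[[Message], bool]

def is_phishing(msg: Message, impersonating: Axiom, inducive: Axiom, scalable: Axiom) -> bool:
    """A message constitutes a phishing attack when it fulfills all three axioms."""
    return impersonating(msg) and inducive(msg) and scalable(msg)
```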

  6. Assessment of Adversarial Robustness
     - Examine the extent to which existing detection solutions have accounted for adversarial robustness
     - Selected works cover influential and recent publications
     - Derived four commonly used strategies for detecting attacks: Visual Similarity, Reverse Search Credibility, Channel Meta-information, Statistical Modeling
     - Discuss these strategies and their ability to account for the identified axioms
     - Demonstrate techniques for creating perturbations that enable attacks to avoid detection

  7. Visual Similarity
     Phishing attribute: Sharing visual identity with an already observed benign message while originating from a different source.
     Axiom coverage: Impersonating (covered), Inducive (not covered), Scalable (covered)
     - Based on reflecting human perception in a computational setting
     - Known to be a challenging and unsolved problem
     - Incomplete coverage of axioms

  8. Example: Normalized Compression Distance (Chen et al.)
     - Compares visual similarity as the intersection over union of byte compressions
     Simple attack (sketched below):
     1. Use a color space that aligns closely with human color perception
     2. Perturb all colors by small steps (±1%)
     - The attack remains imperceptible yet effectively breaks NCD: NCD(x, x′) − NCD(x, x) = −0.96 ± 0.01
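
For illustration, a rough sketch of the attack follows. The compressor (zlib), the standard NCD formula (the slide's intersection-over-union variant differs in detail), and the skimage-based CIELAB conversion are assumptions rather than the authors' implementation.

```python
import zlib
import numpy as np
from skimage import color  # assumed helper for RGB <-> CIELAB conversion

def ncd(x: bytes, y: bytes) -> float:
    """Normalized Compression Distance between two byte strings.
    zlib stands in for whichever compressor Chen et al. actually use."""
    cx, cy = len(zlib.compress(x)), len(zlib.compress(y))
    cxy = len(zlib.compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)

def perturb_colors(rgb: np.ndarray, step: float = 0.01, seed: int = 0) -> np.ndarray:
    """Shift every pixel by roughly 1% of its channel range in CIELAB, a color
    space that aligns closely with human color perception. The change is
    visually imperceptible but alters the underlying bytes, and with them any
    compression-based similarity."""
    rng = np.random.default_rng(seed)
    lab = color.rgb2lab(rgb)                   # expects float RGB in [0, 1]
    ranges = np.array([100.0, 255.0, 255.0])   # approximate L, a, b channel ranges
    lab += step * ranges * rng.choice([-1.0, 1.0], size=lab.shape)
    return np.clip(color.lab2rgb(lab), 0.0, 1.0)
```

Comparing ncd(x_bytes, x_bytes) with ncd(x_bytes, perturbed_bytes) on the serialized screenshots then illustrates how a tiny perturbation shifts the measure, analogous to the gap reported on the slide.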

  9. Reverse Search Credibility
     [Figure: a signature ("test, document, demo") derived from example.com is submitted to a search engine; if the site appears in the results it is deemed not phishing, else phishing]
     Phishing attribute: Absence of a given website in the most relevant search results returned by querying search engines with a signature derived from the given website.
     Axiom coverage: Impersonating (unclear), Inducive (unclear), Scalable (covered)
     - Relies on credibility scoring using search engines
     - Search engines are black boxes → uncertainty
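
A minimal sketch of this strategy is shown below; the `search` callable and `top_k` cut-off are hypothetical stand-ins for whatever search-engine API and ranking depth a concrete system would use.

```python
from typing import Callable, Sequence

def reverse_search_flags_phishing(
    signature: str,
    site_domain: str,
    search: Callable[[str], Sequence[str]],  # maps a query to an ordered list of result domains
    top_k: int = 10,
) -> bool:
    """Reverse-search credibility heuristic: a legitimate site is expected to
    appear among the most relevant results for a signature derived from it,
    so absence from the top results is treated as evidence of phishing."""
    top_results = list(search(signature))[:top_k]
    return site_domain not in top_results
```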

  10. Channel Meta-information
      Strategy: Constrain the information used for inference to only be within the scope of the channel, ignoring the content of the respective messages.
      Phishing attribute (case: Web): URLs resembling a URL from a known benign source.
      Axiom coverage: Impersonating (partially covered), Inducive (not covered), Scalable (covered)
      - Given that inducement depends on the content of messages, predictiveness using this strategy signals bias
      - Incomplete coverage of axioms
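
For illustration only (not the authors' method), a channel-only check might flag URLs whose domain closely resembles, but is not identical to, a known benign domain; the allow-list and similarity threshold below are invented.

```python
from difflib import SequenceMatcher

KNOWN_BENIGN_DOMAINS = ["example.com", "example-bank.com"]  # illustrative allow-list

def resembles_known_benign(domain: str, threshold: float = 0.85) -> bool:
    """Channel-only heuristic: flag a domain that looks like a known benign
    domain without being identical to it. Only the URL string (channel
    meta-information) is inspected; the page content is never consulted."""
    for benign in KNOWN_BENIGN_DOMAINS:
        similar = SequenceMatcher(None, domain, benign).ratio() >= threshold
        if domain != benign and similar:
            return True
    return False
```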

  11. Statistical Modeling
      Strategy: Given a dataset containing information related to messages, and the presence of attacks within them, approximate a function f(x) that can detect attacks.
      Axiom coverage: Impersonating (partially covered), Inducive (partially covered), Scalable (covered)
      - Highly dynamic strategy, delimited by the information in the used dataset
      - Selecting a model is often a trade-off between complexity and interpretability
      - Parameters are selected using empirical performance, assuming generalization to out-of-distribution inputs
      - Complex functions can be on the order of millions of parameters; WhiteNet (Abdelnabi et al., 2019): ≥ 100M
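
As a toy sketch of the strategy (synthetic features and labels, not the paper's data), approximating f(x) from a labelled dataset could look like this:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic placeholder dataset: X holds per-message features, y marks whether
# each message contains a phishing attack (1) or is benign (0).
rng = np.random.default_rng(0)
X = rng.normal(size=(1_000, 8))
y = (X[:, 0] + 0.5 * X[:, 1] + 0.1 * rng.normal(size=1_000) > 0).astype(int)

# Approximate f(x) from the dataset; a linear model trades capacity for
# interpretability, while a deep network such as WhiteNet does the opposite.
f = LogisticRegression().fit(X[:800], y[:800])
print("held-out accuracy:", f.score(X[800:], y[800:]))
```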

  12. WhiteNet (Abdelnabi et al., 2019)

      Table: Precision (closest match) for WhiteNet and our replica model across perturbations created using the FGSM attack for various threat models ε.

      Model                  | Unperturbed | ε = 0.005 | ε = 0.01
      Traditional training   |             |           |
        WhiteNet             | 81.0%       | 72.8%     | 62.5%
        WhiteNet (replica)   | 87.8%       | 30.0%     | 24.6%
      Adversarial training   |             |           |
        WhiteNet             | 81.0%       | 79.0%     | 73.1%
        WhiteNet (replica)   | 90.3%       | 33.3%     | 30.8%

      - Model: Siamese Deep Neural Network (DNN) with ≥ 100M parameters
      - Given two visual representations of websites, it yields a similarity measure
      - Adversarial examples (AE) are a known vulnerability of DNNs
      - Found the stated robustness towards AE to be inaccurate, likely due to under-sampling during the creation of attacks
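
For reference, a one-step FGSM perturbation against a Siamese similarity model could be sketched as below (PyTorch; the `embed` interface, loss, and preprocessing are assumptions and not WhiteNet's actual pipeline):

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(embed, screenshot, target_embedding, epsilon=0.005):
    """One-step FGSM against a Siamese similarity model: nudge the screenshot's
    embedding away from the impersonated target's embedding while keeping the
    pixel change inside an L-infinity ball of radius epsilon."""
    x = screenshot.clone().detach().requires_grad_(True)
    similarity = F.cosine_similarity(embed(x), target_embedding).sum()
    similarity.backward()
    # Step against the similarity gradient: the pages look "different" to the
    # model, while epsilon ~ 0.005 keeps the change imperceptible to a human.
    x_adv = (x - epsilon * x.grad.sign()).clamp(0.0, 1.0)
    return x_adv.detach()
```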

  13. Design Guidelines
      We introduce a set of design guidelines for future detection solutions to follow:
      Accessible: Provide a widely available implementation (statistical models: weights and/or dataset). Benefit: allows for continuous evaluations, both empirical and adaptive.
      Explicit Attributes: Clarify how information from the input space is used to infer attacks (complex statistical models: attribution methods).
      Align with Axioms: Focus on using functional properties of attacks for detection; in their absence, predictiveness stems from bias (symptoms, not cause).

  14. Thank You!
      Thanks for listening!
      Thomas Kobber Panum (tkp@es.aau.dk)
      PhD Student, Department of Electronic Systems, Aalborg University, Denmark
