Quantitative Cybersecurity: Breach Prediction and Incentive Design


  1. Quantitative Cybersecurity: Breach Prediction and Incentive Design
  Mingyan Liu
  Joint work with Yang Liu, Armin Sarabi, Parinaz Naghizadeh, Michael Bailey, and Manish Karir

  2. Threats to Internet security and availability
  From unintentional to intentional, random to financially driven:
  • misconfiguration
  • mismanagement
  • botnets, worms, spam, DoS attacks, ...
  Typical countermeasures are host-based:
  • blacklisting malicious hosts, used for filtering/blocking
  • installing solutions on individual hosts, e.g., intrusion detection
  They are also heavily detection-based:
  • even when detection succeeds, it may come too late
  • damage control happens post-breach

  3. Our vision
  Assess networks as a whole, not individual hosts:
  • a network is typically governed by consistent policies
  • system administration changes on a longer time scale
  • resources and expertise change on a longer time scale
  • this consistency (though dynamic) leads to predictability
  From a policy perspective, this leads to proactive security policies and enables incentive mechanisms, many of which can only be applied at the network/organization level.

  4. More specifically
  To what extent can we quantify and assess the security posture of a network or organization?
  • enterprise risk management
  • prioritizing resources and taking proactive action
  • third-party/vendor validation
  To what extent can we use such assessments to design better incentive mechanisms?
  • incentives properly tied to actual security posture and security interdependence

  5. Outline of the talk
  • An incident forecasting framework and results
    • a way to quantify security posture and security risks
    • data sources and processing
    • a supervised learning approach
  • Risk assessment as a form of "public monitoring"
    • enables inter-temporal incentives for enforcing long-term security information sharing agreements
  • Risk assessment as a form of "pre-screening"
    • enables judicious premium discrimination in cyber insurance to mitigate moral hazard

  6. An incident forecasting framework
  Desirable features:
  • Scalability: we rely solely on externally observed data.
  • Robustness: the data will be noisy and incomplete, and not all of it is under our control.
  Key steps (a sketch of the resulting pipeline follows below):
  • Tap into a diverse set of data capturing different aspects of a network's security posture, varying by source and by type (explicit vs. latent).
  • Follow a supervised learning framework.
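A minimal sketch of the supervised-learning pipeline just described, under assumptions the slides do not state: the loader helpers are hypothetical, and the choice of a random forest classifier is illustrative rather than the talk's actual model.

    # Illustrative pipeline: features from externally observed data, labels
    # from incident reports. load_features/load_labels are hypothetical helpers
    # returning one row / one label per organization.
    from sklearn.ensemble import RandomForestClassifier

    X_train, y_train = load_features("2013-08", "2013-12"), load_labels("2013-08", "2013-12")
    X_test, y_test = load_features("2014-01", "2014-12"), load_labels("2014-01", "2014-12")

    clf = RandomForestClassifier(n_estimators=500, random_state=0)
    clf.fit(X_train, y_train)                      # train on the past
    risk_score = clf.predict_proba(X_test)[:, 1]   # incident likelihood per organization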

  7. Security posture data
  Malicious activity data: a set of 11 reputation blacklists (RBLs)
  • daily collections of IPs seen engaging in some malicious activity
  • three malicious activity types: spam, phishing, scan
  Mismanagement symptoms
  • deviations from known best practices; indicators of a lack of policy or expertise
  • misconfigured HTTPS certificates, DNS (resolver and source port), mail servers, BGP
  (One such symptom feature is sketched below.)
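Each mismanagement symptom is ultimately a per-organization fraction; a minimal sketch of that computation, assuming we already have the organization's address set and the set of flagged hosts:

    # Fraction of an organization's hosts exhibiting a symptom
    # (e.g., serving an untrusted HTTPS certificate).
    def symptom_fraction(org_ips, flagged_ips):
        """org_ips: set of the organization's IPs; flagged_ips: set of IPs
        flagged for the symptom. Returns the fraction of org hosts flagged."""
        if not org_ips:
            return 0.0
        return len(org_ips & flagged_ips) / len(org_ips)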

  8. Cyber incident data
  Three incident datasets: Hackmageddon, the Web Hacking Incidents Database (WHID), and the VERIS Community Database (VCDB).

  Incident type    SQLi   Hijacking   Defacement   DDoS
  Hackmageddon     38     9           97           59
  WHID             12     5           16           45

  Incident type    Crimeware   Cyber Esp.   Web app.   Else
  VCDB             59          16           368        213

  9. Datasets at a glance

  Category                 Collection period   Datasets
  Mismanagement symptoms   Feb'13 - Jul'13     Open Recursive Resolvers, DNS Source Port, BGP misconfiguration, Untrusted HTTPS, Open SMTP Mail Relays
  Malicious activities     May'13 - Dec'14     CBL, SBL, SpamCop, UCEPROTECT, WPBL, SURBL, PhishTank, hpHosts, Darknet scanners list, Dshield, OpenBL
  Incident reports         Aug'13 - Dec'14     VERIS Community Database, Hackmageddon, Web Hacking Incidents Database

  • Mismanagement and malicious activity data are used to extract features.
  • Incident reports are used to generate labels for training and testing.

  10. Data pre-processing
  Conservative processing of incident reports:
  • remove irrelevant or ambiguous cases, e.g., a robbery at a liquor store, "something happened", etc.
  Challenge: aligning the data, both in time and in space (a spatial-alignment sketch follows below):
  • security posture data records information at the host (IP-address) level
  • cyber incident reports are associated with an organization
  • alignment is non-trivial: address reallocation, hosting services, etc.
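One way to picture the spatial alignment step: map each observed IP address to an owning organization via longest-prefix match over known address blocks. The block-to-organization mapping below is hypothetical, and as the slide notes, a real pipeline must also handle reallocation over time and shared hosting.

    import ipaddress

    # Hypothetical mapping from CIDR blocks to organizations.
    blocks = {
        ipaddress.ip_network("192.0.2.0/24"): "ExampleCorp",
        ipaddress.ip_network("198.51.100.0/22"): "ExampleISP",
    }

    def ip_to_org(ip_str):
        """Return the owning organization via longest-prefix match, or None."""
        ip = ipaddress.ip_address(ip_str)
        matches = [net for net in blocks if ip in net]
        if not matches:
            return None
        return blocks[max(matches, key=lambda net: net.prefixlen)]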

  11. Primary and secondary features
  Mismanagement symptoms:
  • five symptoms, each measured as a fraction
  • these symptoms carry predictive power (see the ECDF sketch below)
  [Figure: empirical CDFs of "% untrusted HTTPS" and "% open resolver", plotted separately for victim and non-victim organizations.]
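A plot like the one above is straightforward to reproduce; a minimal sketch, assuming victim_values and nonvictim_values are hypothetical arrays holding one symptom fraction per organization:

    import numpy as np
    import matplotlib.pyplot as plt

    def ecdf(values):
        """Empirical CDF: sorted values vs. cumulative probability."""
        x = np.sort(values)
        return x, np.arange(1, len(x) + 1) / len(x)

    for values, label in [(victim_values, "Victim org."),
                          (nonvictim_values, "Non-victim org.")]:
        plt.plot(*ecdf(values), label=label)
    plt.xlabel("% untrusted HTTPS")
    plt.ylabel("CDF")
    plt.legend()
    plt.show()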

  12. Malicious activity time series
  • Three time series per organization over an observation period: spam, phishing, scan.
  • Features aggregated over recent windows, e.g., the most recent 60 days vs. the most recent 14 days.
  [Figure: example spam, phishing, and scan time series (daily counts of listed IPs) over a 60-day period.]
  Secondary features:
  • measure persistence and responsiveness (sketched below)
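A sketch of what such secondary features might look like; the definitions below (run lengths of "bad" days, and a size-normalized average magnitude) are plausible stand-ins for the talk's persistence and responsiveness measures, not its exact formulas:

    import numpy as np

    def bad_durations(daily_counts, threshold=0):
        """Lengths of consecutive runs of days with activity above threshold;
        long runs suggest persistent (slowly remedied) malicious activity."""
        runs, length = [], 0
        for count in daily_counts:
            if count > threshold:
                length += 1
            elif length:
                runs.append(length)
                length = 0
        if length:
            runs.append(length)
        return runs

    def normalized_magnitude(daily_counts, org_size):
        """Average daily listed-IP count, normalized by organization size."""
        return float(np.mean(daily_counts)) / org_size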

  13. A look at their predictive power
  [Figure: empirical CDFs of normalized "good" magnitude and "bad" duration, plotted separately for victim and non-victim organizations.]

  14. Training subjects
  A subset of victim organizations, the incident group:
  • training-testing ratio, e.g., a 70-30 or 50-50 split
  • split strictly according to time: use the past to predict the future

               Hackmageddon      VCDB              WHID
  Training     Oct 13 - Dec 13   Aug 13 - Dec 13   Jan 14 - Mar 14
  Testing      Jan 14 - Feb 14   Jan 14 - Dec 14   Apr 14 - Nov 14

  A random subset of non-victim organizations, the non-incident group:
  • random sub-sampling is necessary to avoid class imbalance; the procedure is repeated over different random subsets (see the sketch below)
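A sketch of that balanced-training loop, under assumptions: the array names are hypothetical, the classifier is the one from the earlier pipeline sketch, and the 20 repetitions are an arbitrary illustrative count.

    import numpy as np

    rng = np.random.default_rng(seed=0)
    scores = []
    for _ in range(20):  # repeat over different random non-victim subsets
        idx = rng.choice(len(nonvictim_X), size=len(victim_X), replace=False)
        X = np.vstack([victim_X, nonvictim_X[idx]])
        y = np.concatenate([np.ones(len(victim_X)), np.zeros(len(idx))])
        clf.fit(X, y)  # clf as in the earlier pipeline sketch
        scores.append(clf.score(X_test, y_test))
    print("mean test accuracy:", np.mean(scores))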

  15. Prediction performance
  [Figure: ROC curves (true positive rate vs. false positive rate) for classifiers trained on VCDB, Hackmageddon, WHID, and all datasets combined.]
  Example of desirable operating points of the classifier (reading such points off the ROC curve is sketched below):

  Accuracy              Hackmageddon   VCDB   WHID   All
  True positive (TP)    96%            88%    80%    88%
  False positive (FP)   10%            10%    5%     4%
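A minimal sketch of choosing an operating point from classifier scores, e.g., the largest true-positive rate whose false-positive rate stays at or below 10%; y_test and risk_score are the hypothetical arrays from the earlier pipeline sketch.

    from sklearn.metrics import roc_curve

    fpr, tpr, thresholds = roc_curve(y_test, risk_score)
    ok = fpr <= 0.10                      # candidate operating points
    best = tpr[ok].argmax()               # highest TP rate among them
    print("TP = %.2f at FP = %.2f (score threshold %.3f)"
          % (tpr[ok][best], fpr[ok][best], thresholds[ok][best]))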
