

  1. Machine Learning Prediction of Blood Alcohol Content: A Digital Signature of Behavior KIRSTIN ASCHBACHER, PH.D. ASSOCIATE PROFESSOR, DIVISION OF CARDIOLOGY, SCHOOL OF MEDICINE, UNIVERSITY OF CALIFORNIA, SAN FRANCISCO KIRSTIN.ASCHBACHER@UCSF.EDU

  2. The Cost of Excessive Alcohol Use § A leading cause of preventable death § 1 in 10 deaths among adults, ages 20-64 § US costs: $224 billion in 2006 § High comorbidity with other mental health disorders (e.g., PTSD/MDD) https://www.cdc.gov/features/alcohol-deaths/index.html

  3. How a BACtrack Works § Consumed alcohol is absorbed into the bloodstream § Alcohol in the bloodstream moves across the membranes of the lung’s air sacs (alveoli). § The concentration of the alcohol in the alveolar air is directly related to the concentration in the blood. § As the alveolar air is exhaled, the alcohol in it can be detected by the breath alcohol testing device. https://www.bactrack.com/pages/bactrack-consumption-report

  4. External Validation of Accuracy Summary: Compared the accuracy of 3 smartphone-paired breathalyzers against a police-grade breathalyzer and against blood alcohol levels. Conclusions: Two devices – including BACtrack – were deemed accurate relative to the police-grade device, with differences in BAC of +/- 0.01. BACtrack was as closely related to blood alcohol levels as the police-grade device. http://injuryprevention.bmj.com/content/injuryprev/23/Suppl_1/A15.1.full.pdf

  5. The Business Need → The Data Product 1. Target Markets & Pain Points: Some users/health providers would like tools to help make alcohol use safer 2. Data Product: If we could predict when a given user will have a BAC >= .08, we could target them with messaging, or offer a chat-bot/coach 3. BAC Detection → Real-time Messaging

  6. Machine Learning Prediction of Blood Alcohol Content: K Aschbacher, R Avram, G Tison, K Rutledge, M Pletcher, J Olgin, G Marcus • Objective: To identify a digital signature of self-monitored BAC levels that predicts the times, locations, and circumstances under which a user is likely to exceed the legal BAC driving limit of 0.08%. • Methods: >1 million observations from 33,452 distinct users of the BACtrack device (accuracy comparable to police-grade devices). Behavioral, timestamp, and geolocation data. Machine learning was conducted by fitting data to a Gradient Boosted Classification Tree (GBCT), using train/cv/test partitions (a sketch of the partitioning follows below).
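
As a rough illustration of that partitioning step, here is a minimal sketch in Python; the file path and column names ("bac", etc.) are hypothetical stand-ins for the actual BACtrack export, not the real schema.

```python
# Minimal sketch of the train/dev(cv)/test partitioning described above.
# The file path and column names are hypothetical.
import pandas as pd
from sklearn.model_selection import train_test_split

df = pd.read_csv("bactrack_observations.csv")
X = df.drop(columns=["bac"])
y = (df["bac"] >= 0.08).astype(int)  # label: at or above the legal limit

# Hold out a test set first, then split the remainder into train and dev.
X_rest, X_test, y_rest, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)
X_train, X_dev, y_train, y_dev = train_test_split(
    X_rest, y_rest, test_size=0.25, stratify=y_rest, random_state=42)
```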

  7. Are BACtrack data relevant to health at scale?

  8. Is there an association between BAC levels and Motor Vehicle Death Rates?

  9. Some of the states with the highest death rates have the fewest BACtrack users… more rural?

  10. Higher BAC levels are associated with a higher death rate, but more so in states with fewer users

  11. Predicted MV death rate for any given value of BAC and n-users

  12. Data Wrangling

  13. Data Management → Clean & Organize → Machine Learning (workflow: ssh + conda + tmux + jupyter)

  14. Data Security • Data is collected anonymously from users of the BACtrack app, which syncs with BACtrack smartphone-enabled breathalyzers • Data is viewed in aggregate only and comes from users with data storage activated and location services turned on; it does not represent data from all users • We use AWS Redshift VPC security groups and S3 data encryption methods, and the data itself is de-identified • We analyze data on a secure cluster and interface with the data via ssh

  15. Machine Learning: Gradient Boosted Classification with XGBoost

  16. Gradient Boosted Classification 1. Weak learners (trees) are combined to make strong learners 2. Generalization capability is high 3. Overfitting is low – especially with cross-validation & tuning 4. Handles missing data well 5. Models non-linearities 6. Can be productionized 7. Our Label/Outcome: BAC < .08 versus BAC >= .08
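
Point 7 above defines the binary label. Below is a minimal sketch of fitting an XGBoost classifier to it, reusing the hypothetical partitions from the earlier sketch; these settings are illustrative, not the tuned values reported later in the deck.

```python
# Minimal XGBoost fit for the binary label (BAC < .08 vs BAC >= .08).
# Reuses the hypothetical X_train/X_dev/y_train/y_dev partitions above.
import xgboost as xgb
from sklearn.metrics import roc_auc_score

model = xgb.XGBClassifier(n_estimators=100, learning_rate=0.3, max_depth=6)
model.fit(X_train, y_train)
dev_auc = roc_auc_score(y_dev, model.predict_proba(X_dev)[:, 1])
print(f"Dev ROC-AUC: {dev_auc:.4f}")
```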

  17. An Unfortunate Example of a Decision Tree https://homes.cs.washington.edu/~tqchen/pdf/BoostedTree.pdf

  18. Ensemble Learning Methods combine weak learners to create strong learners • Boosting Methods compute a set of weights for each training example at each level of the tree • Higher weights are given to incorrectly classified examples • Hence the tree attempts to find features to explain those examples at the next round https://homes.cs.washington.edu/~tqchen/pdf/BoostedTree.pdf
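
To make the reweighting idea concrete, here is a toy AdaBoost-style weight update in NumPy. Note this is the classic example-weight scheme the bullet describes; XGBoost's gradient boosting achieves the same "focus on the mistakes" effect via gradients of the loss rather than explicit per-example weights.

```python
# Toy AdaBoost-style reweighting: misclassified examples get larger
# weights, so the next weak learner focuses on explaining them.
# (XGBoost reaches a similar effect through gradients, not explicit weights.)
import numpy as np

y_true = np.array([1, 0, 1, 1, 0])
y_pred = np.array([1, 0, 0, 1, 1])          # weak learner misses 2 examples
w = np.full(len(y_true), 1 / len(y_true))   # start with uniform weights

err = np.sum(w * (y_pred != y_true))        # weighted error rate = 0.4
alpha = 0.5 * np.log((1 - err) / err)       # this learner's vote weight
w *= np.exp(alpha * (y_pred != y_true))     # upweight the mistakes
w /= w.sum()                                # renormalize to sum to 1
print(w)                                    # mistakes now weigh ~0.22 vs ~0.18
```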

  19. Problem: We don’t have a lot of features! 1. BAC level 2. Timestamps 3. User-entered data: BAC guess (a user’s subjective guess prior to measuring, which is then “validated”), photos (sparse), notes (sparse) 4. Geolocation (lat/lon) and zip codes
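
Even with few raw fields, the timestamps alone can be expanded into circadian and weekly features. A small sketch, with hypothetical column names:

```python
# Sketch of expanding the raw timestamp into time-based features.
# The column name "timestamp" is a hypothetical stand-in for the schema.
import pandas as pd

df["ts"] = pd.to_datetime(df["timestamp"])
df["hour"] = df["ts"].dt.hour              # circadian signal
df["dayofweek"] = df["ts"].dt.dayofweek    # 0 = Monday ... 6 = Sunday
df["is_fri_sat"] = df["dayofweek"].isin([4, 5]).astype(int)  # weekend nights
```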

  20. The Neurocircuitry of Reward & Addiction • Neuroadaptations drive behavior change over time, characterized by: • Reactivity to cues/ triggers • Loss of pleasure/ seeking stress relief • Deficits in self-regulatory systems • When, where, and for whom? • Circadian variation in self-monitoring • Geographic variation in boredom/stress • The longer you’re engaging the more entrenched this pattern may be for you Volkow et al., N Engl J Med 2016. Substance Abuse and Mental Health Services Administration (US); Office of the Surgeon General (US). Facing Addiction in America: The Surgeon General's Report on Alcohol, Drugs, and Health [Internet]. Washington (DC): US Department of Health and Human Services; 2016 Nov. Figure 2.3, The Three Stages of the Addiction Cycle and the Brain Regions Associated with Them. Available from: https://www.ncbi.nlm.nih.gov/books/NBK424849/figure/ch2.f3/

  21. What’s the Digital Signature of a Habit? Definition: ◦ “An acquired behavior pattern … regularly followed until it has become almost involuntary.” Pattern: ◦ Frequency → Time ◦ Triggers → Time/Location ◦ Engagement with self-monitoring (reflects reward value of tracking)

  22. Patterns in Time • Temporal patterns are not investigated as often in traditional scientific studies • Self-monitoring has a temporal signature • API-connected devices capture time-based signatures

  23. When do People Monitor? • As expected, users are more likely to self-monitor on weekends and in the evenings • Specifically, users monitor their BAC about 5-6 times more often on Friday and Saturday nights, compared to weekday work-hours • Surprise! There’s a self-monitoring bump on weekdays around 7am … Eye-openers? (Note: values inside the graph are in units of thousands – e.g., 1.6 = 1.6k or 1600 measurements for that day and hour).
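
The day-by-hour counts behind a heatmap like this reduce to a single groupby; a sketch using the hypothetical hour/day-of-week features derived earlier:

```python
# Day-by-hour self-monitoring counts, as visualized on this slide.
# Uses the hypothetical "dayofweek"/"hour" features derived earlier.
counts = (df.groupby(["dayofweek", "hour"])
            .size()
            .unstack(fill_value=0))   # rows = day of week, cols = hour 0-23
print(counts / 1000)                  # slide reports cells in thousands
```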

  24. When is BAC highest? • Supporting data validity, users have higher measured BAC levels on weekends • And in the evenings … • Interestingly, measured BAC levels peak around 1-2am … when the bars tend to close … • BAC self-monitoring peaks in the “wee hours” of the evening • Tuesday is the “soberest” day

  25. Is Location a Trigger for Drinking? • Many animal studies of alcohol use “conditioned place preference” (CPP). • When you pair alcohol with a certain place, an animal learns to prefer that place. • This suggests that places (locations) can be cues for alcohol consumption.

  26. Getting distances from Geolocation: • To scale efficiently – do as much as you can with tools like Redshift • AWS Redshift does a lot of things … even trigonometry
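
The distance feature itself is the great-circle (haversine) distance between consecutive measurements. A Python sketch follows; the same formula runs in Redshift SQL with its SIN/COS/ASIN/RADIANS functions and a LAG window over each user's timestamps. Column names here are hypothetical.

```python
# Haversine (great-circle) distance between each measurement and the
# user's prior one. Column names ("user_id", "lat", "lon", "ts") are
# hypothetical stand-ins for the actual schema.
import numpy as np

def haversine_km(lat1, lon1, lat2, lon2):
    """Distance in km between two (lat, lon) pairs given in degrees."""
    lat1, lon1, lat2, lon2 = map(np.radians, (lat1, lon1, lat2, lon2))
    a = (np.sin((lat2 - lat1) / 2) ** 2
         + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371.0 * np.arcsin(np.sqrt(a))   # mean Earth radius in km

df = df.sort_values(["user_id", "ts"])
prev = df.groupby("user_id")[["lat", "lon"]].shift(1)   # prior measurement
df["prior_distance_km"] = haversine_km(df["lat"], df["lon"],
                                       prev["lat"], prev["lon"])
```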

  27. Big Data can suffer from the problem of: Garbage In – Garbage Out

  28. Striking but Accurate

  29. Shorter distances since the prior measurement predict higher BAC levels • The distance a user has traveled between subsequent BAC measures helps predict subsequent BAC levels • INTERPRETATION: The highest BAC is predicted if, since your last measure, you traveled less than 1.5 km (and your distance data was not missing, i.e., > -999) • Conditioned Place Preference suggests that short distances will be associated with higher BAC values • However, we may need to restrict this to distances between drinking episodes rather than measurements
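
One practical detail, assuming the -999 sentinel encodes missing distances as the slide suggests: converting it to NaN lets XGBoost route missing values natively (point 4 on slide 16).

```python
# Assumption: -999 is a sentinel for a missing distance, per the slide.
# NaN lets XGBoost learn a default split direction for missing values.
import numpy as np

df["prior_distance_km"] = df["prior_distance_km"].replace(-999, np.nan)
```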

  30. Evaluating & Optimizing Performance of Gradient Boosted Classification Trees with XGBoost 1. Performance under default settings 2. Class Balancing 3. Tuning learning rate along with the number of trees & max depth 4. Iterative Feature Engineering 5. Final Model Performance & Interpretation

  31. Balancing Classes

  32. Default Model Performance & Impact of Class Balancing

      DEV SET RESULTS (N=97,327)   Imbalanced Classes   Balanced Classes
      ROC-AUC                      82.65%               82.70%
      Accuracy                     77.97%               73.28%
      High BAC F1-Score            55%                  63%
      Precision                    69%                  53%
      Recall                       46%                  77%

      • Default settings are: 10 estimators, learning_rate=.3, max_depth=6
      • Balancing: positive scale weight = 2.38
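
The "positive scale weight = 2.38" corresponds to XGBoost's scale_pos_weight parameter, conventionally set to the negative/positive class ratio. A sketch using the slide's default settings:

```python
# Class balancing via XGBoost's scale_pos_weight, set to the ratio of
# negatives to positives (the slide reports 2.38). Other settings match
# the slide's defaults: 10 estimators, learning_rate=.3, max_depth=6.
import xgboost as xgb

ratio = (y_train == 0).sum() / (y_train == 1).sum()   # ~2.38 per the slide
balanced = xgb.XGBClassifier(n_estimators=10, learning_rate=0.3,
                             max_depth=6, scale_pos_weight=ratio)
balanced.fit(X_train, y_train)
```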

  33. Tuning Hyperparameters
      • Used a 3-fold CV to evaluate
      • Higher learning rates → fewer trees (n_estimators); faster!!
      • However, maybe worse AUC
      • Tune them together
      • Also consider complexity of trees – ‘max_depth’

      Learning Rate   Max_Depth   Best # of Trees   CV-AUC
      1.0             12          4                 81.38%
      1.0             6           30                82.63%
      0.3             12          82                84.27%
      0.3             6           ~489              84.30%
      0.1             12          >500              84.93%
      0.1             6           >500              84.21%
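
A sketch of that grid using xgboost's built-in 3-fold cross-validation, with early stopping choosing the best number of trees for each (learning rate, depth) pair; the parameter values mirror the table above, the rest is illustrative:

```python
# 3-fold CV over learning rate (eta) and max_depth; early stopping picks
# the best number of trees per setting, mirroring the table above.
import xgboost as xgb

dtrain = xgb.DMatrix(X_train, label=y_train)
for eta in (1.0, 0.3, 0.1):
    for depth in (12, 6):
        cv = xgb.cv({"objective": "binary:logistic", "eval_metric": "auc",
                     "eta": eta, "max_depth": depth},
                    dtrain, num_boost_round=500, nfold=3,
                    early_stopping_rounds=20, seed=42)
        print(f"eta={eta} depth={depth} trees={len(cv)} "
              f"cv-auc={cv['test-auc-mean'].iloc[-1]:.4f}")
```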

  34. Model Development is Iterative: Using Feature Importances to inform feature engineering
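
Reading off which features the model leans on closes the loop back to feature engineering; a short sketch, assuming the fitted classifier from the earlier examples:

```python
# Rank features by importance to guide the next engineering iteration.
# Assumes the fitted `model` and `X_train` from the earlier sketches.
import pandas as pd

imp = pd.Series(model.feature_importances_, index=X_train.columns)
print(imp.sort_values(ascending=False).head(10))
```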
