
Presentation of a Scientific Paper: Naive Bayes Models for Probability Estimation

  1. Presentation of a Scientific Paper: Naive Bayes Models for Probability
     Estimation, by Daniel Lowd and Pedro Domingos.
     Presented by Bertrand Dechoux, Aalborg University.
     Outline: Motivation, The Contribution, Summary.

  2. Motivation: probability estimation.
     Goal: probabilistic inference over a set of variables.
     One solution: a Bayesian network, which gives a visual representation of
     causality and a compact representation of the joint probability
     distribution.
     Issues: probabilistic inference is NP-hard, and approximate methods are
     not predictable enough.
     Focus: naive Bayes models.

  3. Bayesian networks: generalities.
     A Bayesian network is a DAG (nodes, edges, conditional probabilities).
     Chain rule: P(X1, ..., Xn) = ∏_i P(Xi | pa(Xi)).
     Learning covers both the structure and the parameters.
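
To make the chain rule concrete, here is a minimal Python sketch that evaluates a joint probability as the product of per-node conditionals. The two-node network (Rain, WetGrass) and all its probabilities are illustrative assumptions, not from the slides:

```python
# Each node's CPT maps (its value, its parents' values) -> probability.
cpts = {
    "Rain":     lambda v, pa: {True: 0.2, False: 0.8}[v],
    "WetGrass": lambda v, pa: {(True, True): 0.9, (True, False): 0.1,
                               (False, True): 0.1, (False, False): 0.9}[(v, pa["Rain"])],
}
parents = {"Rain": [], "WetGrass": ["Rain"]}

def joint(assignment):
    """Chain rule: P(x1, ..., xn) = prod_i P(xi | pa(xi))."""
    p = 1.0
    for var, value in assignment.items():
        pa = {q: assignment[q] for q in parents[var]}
        p *= cpts[var](value, pa)
    return p

print(joint({"Rain": True, "WetGrass": True}))  # 0.2 * 0.9 = 0.18
```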

  4. Naive Bayes models: a simple (equivalent?) BN.
     [diagram: mixture variable C with observed children X1, X2 and hidden
     children Z]
     Inference: P(x1, x2) = ∑_c P(c) P(x1 | c) P(x2 | c) ∑_z P(z | c), where
     each hidden child sums out to 1, so answering a query over variables Xq
     costs O(k |Xq|) for k components.
     Learning: the state space of C and the parameters.
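
A minimal Python sketch of this inference, with illustrative parameters: summing out the mixture variable C touches each component once per query variable, which is where the O(k |Xq|) cost comes from, and unobserved children never appear because they sum to 1.

```python
prior = [0.6, 0.4]                       # P(c) for k = 2 components
cond = [                                 # cond[c][var][value] = P(var=value | c)
    {"x1": {0: 0.7, 1: 0.3}, "x2": {0: 0.5, 1: 0.5}},
    {"x1": {0: 0.2, 1: 0.8}, "x2": {0: 0.9, 1: 0.1}},
]

def marginal(query):
    """P(Xq = xq) = sum_c P(c) * prod_{i in Xq} P(xi | c)."""
    total = 0.0
    for c, p_c in enumerate(prior):
        p = p_c
        for var, value in query.items():   # only queried variables appear
            p *= cond[c][var][value]
        total += p
    return total

print(marginal({"x1": 1, "x2": 0}))  # 0.6*0.3*0.5 + 0.4*0.8*0.9 = 0.378
```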

  5. The NBE algorithm: adding new components (prior).
     The prior P(C) is updated roughly uniformly: previous P(C) → updated
     P(C), so that the new components receive a share of the probability mass.
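
The slide does not spell out the update rule, so the following Python sketch only illustrates one plausible "roughly uniform" scheme, which is an assumption rather than the paper's exact formula: each of the m new components gets weight 1/(k+m), and the k old weights are rescaled so the prior still sums to 1.

```python
def add_components_to_prior(prior, m):
    """Assumed scheme: new components share mass uniformly with old ones."""
    k = len(prior)
    scale = k / (k + m)                  # shrink old weights ...
    return [p * scale for p in prior] + [1.0 / (k + m)] * m  # ... add new ones

print(add_components_to_prior([0.5, 0.5], 2))  # [0.25, 0.25, 0.25, 0.25]
```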

  6. The NBE algorithm: adding new components (posterior).
     d is a case in the database and introduces a bias:
     P(Xi | c) ∝ P̃(Xi = xi | d) + λ × P̃(Xi).
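
A minimal Python sketch of the formula above: the new component's conditionals are an indicator on the case d's values, smoothed toward the empirical marginals by λ and renormalized. The λ value and variable names are illustrative.

```python
def init_component(d, marginals, lam=0.2):
    """d: {var: value}; marginals: {var: {value: empirical P~(var=value)}}."""
    cond = {}
    for var, m in marginals.items():
        # indicator on d's value, plus lambda times the marginal
        raw = {v: (1.0 if d[var] == v else 0.0) + lam * m[v] for v in m}
        z = sum(raw.values())            # normalize so each CPT sums to 1
        cond[var] = {v: r / z for v, r in raw.items()}
    return cond

marginals = {"x1": {0: 0.6, 1: 0.4}}
print(init_component({"x1": 1}, marginals))  # {'x1': {0: 0.1, 1: 0.9}}
```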

  7. The NBE algorithm: pruning existing components.
     Keep only the components responsible for 99.9% of P(C). Since
     P(X = x) = ∑_{c=1..k} P(c) ∏_{i=1..|X|} P(xi | c),
     this amounts to removing low-weight components.
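
A minimal Python sketch of this pruning step, with illustrative weights: keep the heaviest components until 99.9% of P(C) is covered, then renormalize. The kept indices stand in for the components' conditional tables.

```python
def prune(prior, mass=0.999):
    """Return the indices and renormalized weights of the kept components."""
    order = sorted(range(len(prior)), key=lambda c: prior[c], reverse=True)
    kept, total = [], 0.0
    for c in order:                      # heaviest components first
        kept.append(c)
        total += prior[c]
        if total >= mass:                # 99.9% of P(C) covered
            break
    norm = sum(prior[c] for c in kept)
    return kept, [prior[c] / norm for c in kept]

print(prune([0.70, 0.25, 0.0495, 0.0005]))  # drops the 0.0005 component
```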

  8. The NBE algorithm: a wrapper around Expectation-Maximisation.
     Input: P(C), P(Xi | C).
     Convergence: optimize the log-likelihood L(M | D) up to a threshold σ.
     Output: a local optimum for P(C), P(Xi | C).
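
For reference, a minimal Python sketch of one standard EM step for this mixture (textbook EM over the same parameterization, not necessarily the paper's exact implementation): the E-step computes each component's responsibility for each case, and the M-step re-estimates P(C) and P(Xi | C) from the weighted counts.

```python
from math import prod

def em_step(data, prior, cond, values):
    """data: list of {var: value}; values: {var: list of possible values}."""
    k = len(prior)
    # E-step: responsibilities r[n][c] proportional to P(c) * prod_i P(xi | c)
    resp = []
    for d in data:
        w = [prior[c] * prod(cond[c][v][d[v]] for v in d) for c in range(k)]
        z = sum(w)
        resp.append([x / z for x in w])
    # M-step: re-estimate parameters from weighted counts
    n = len(data)
    new_prior = [sum(r[c] for r in resp) / n for c in range(k)]
    new_cond = []
    for c in range(k):
        nc = sum(r[c] for r in resp)
        new_cond.append({
            var: {val: sum(r[c] for d, r in zip(data, resp) if d[var] == val) / nc
                  for val in values[var]}
            for var in values
        })
    return new_prior, new_cond

data = [{"x1": 0}, {"x1": 1}, {"x1": 1}]
prior, cond = em_step(data, [0.5, 0.5],
                      [{"x1": {0: 0.6, 1: 0.4}}, {"x1": {0: 0.3, 1: 0.7}}],
                      {"x1": [0, 1]})
print(prior)
```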

  9. The NBE algorithm:
     input: training set T and hold-out set H
     initialize M with one component
     while L(M | H) improves:
         add k new components to M
         while L(M | H) improves:
             EM step
             if L(M | H) improves: M_best ← M
             every 5 cycles, prune components of M
         M ← M_best
         prune components of M
         k ← 2k
     two EM steps on M_best using H and T
     output M_best

  10. Benchmarks.
      Datasets: 47 from the UCI repository and 2 for collaborative filtering.
      Compared algorithms: NBE (the proposed one), WinMine (which performs
      structure search), and a baseline naive Bayes model with one component.

  11. Results.
      Learning time: equivalent; EM (NBE) versus structure search (WinMine).
      Modeling accuracy: equivalent; for a random Bayesian network, an
      'equivalent' naive Bayes network can be found.
      Query speed and accuracy: not equivalent; NBE is most of the time at
      least as accurate as Gibbs sampling and belief propagation, and
      decidedly faster.

  12. Summary.
      Naive Bayes networks are, arguably, better than random Bayesian networks
      for probability estimation; NBE is an algorithm for finding such naive
      Bayes networks.
      Open discussion: knowledge extraction.
      Open issues: theory (the evidence is empirical only), the influence of
      hidden variables, and the importance of memory usage.

