1. Command line completion (CLC): an illustration of learning and decision making using the imprecise Dirichlet model
   Erik Quaeghebeur


2. Classical CLC in action

    login: erik
    Password:
    Last login: Tue Feb 17 08:24:47 on tty1
    command-prompt$ log<TAB>
    logger login logname logout
    command-prompt$ logn<TAB>
    command-prompt$ logname<ENTER>
    erik
    command-prompt$ ls<ENTER>
    mail/ logic.dvi logic.tex
    command-prompt$ dvips log<TAB>
    logic.dvi logic.tex
    command-prompt$ dvips logic.d<TAB>
    command-prompt$ dvips logic.dvi _
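   The two classical completion actions shown above can be sketched in a few lines of Python; the function name and return convention are illustrative, not part of the presentation.

    # Classical CLC: given a typed prefix, either return the unique
    # completion or list all possible completions.
    def complete(prefix, commands):
        matches = sorted(c for c in commands if c.startswith(prefix))
        if len(matches) == 1:
            return ("complete", matches[0])  # unique completion: fill it in
        return ("list", matches)             # ambiguous: show the candidates

    commands = ["logger", "login", "logname", "logout", "ls", "dvips"]
    print(complete("log", commands))   # ('list', ['logger', 'login', 'logname', 'logout'])
    print(complete("logn", commands))  # ('complete', 'logname')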


3. Properties of classical CLC

   Two completion action types: list the possible completions, or return the unique completion.
   Rule-based: allows for context dependency, and requires a categorized database of commands.
   User independent: reliable, but does not take command history into account.
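   The rule-based, context-dependent behaviour can be illustrated with a toy categorized database; the dict layout and the position-based rule below are assumptions made for the sketch, not the actual shell machinery.

    # Rule-based context dependency: at the start of the line complete
    # command names; after a command such as dvips, complete file names.
    # The categorized "database" is sketched as a dict of completion sources.
    sources = {
        "command": ["logger", "login", "logname", "logout", "ls", "dvips"],
        "file": ["mail/", "logic.dvi", "logic.tex"],
    }

    def candidates(line):
        words = line.split(" ")
        category = "command" if len(words) == 1 else "file"
        return [w for w in sources[category] if w.startswith(words[-1])]

    print(candidates("dvips log"))  # ['logic.dvi', 'logic.tex']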


4. Complementing classical CLC

   We want to take the command history into account:
   whenever there are multiple possible completions,
   by building and updating a model of the user's behavior,
   to add completion action types, such as returning the 'best guess' completion on the command line, listing a set of 'best guesses', or listing all possible completions, but ordered.
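   A naive frequency-count version of the ordered-list action might look as follows; the helper name ranked_completions is illustrative (the presentation's actual proposal uses the imprecise Dirichlet model developed below).

    from collections import Counter

    # History-aware completion: order all possible completions by how
    # often the user has executed them so far.
    def ranked_completions(prefix, commands, history):
        counts = Counter(history)
        matches = [c for c in commands if c.startswith(prefix)]
        return sorted(matches, key=lambda c: counts[c], reverse=True)

    history = ["ls", "logname", "ls", "logname", "login"]
    print(ranked_completions("log", ["logger", "login", "logname", "logout"], history))
    # ['logname', 'login', 'logger', 'logout']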


5. The set of possible completions

   Two illustrative completions:

    command-prompt$ ha<TAB>
    halt hash

   $\Rightarrow \Omega_{\text{ha}} = \{\text{halt}, \text{hash}\} \ni \omega_{\text{ha}}$

    command-prompt$ pin<TAB>
    pine ping pinky

   $\Rightarrow \Omega_{\text{pin}} = \{\text{pine}, \text{ping}, \text{pinky}\} \ni \omega_{\text{pin}}$
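   In code, $\Omega_{\text{prefix}}$ is just the match set; a one-line sketch:

    commands = ["halt", "hash", "pine", "ping", "pinky"]

    # Ω_ha = {halt, hash}; ω_ha is whichever element the user actually intends.
    omega_ha = {c for c in commands if c.startswith("ha")}
    print(omega_ha)  # {'halt', 'hash'} (in some order)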

6. The user as a multinomial process

   Model of the user's behavior:
   A priori, there is a fixed probability $t_{\text{command}}$ for every command.
   After typing part of a command, the remaining possible completions are chosen with the corresponding conditional probabilities.

   Graphical representation of a user, as a point in the simplex:
   [Figure: the simplex $\Delta_{\text{ha}}$ with vertices halt and hash, marked at $(t_{\text{halt}}, t_{\text{hash}}) = (\frac{1}{4}, \frac{3}{4})$.]
   [Figure: the simplex $\Delta_{\text{pin}}$ with vertices pine, ping and pinky, marked at $(t_{\text{pine}}, t_{\text{ping}}, t_{\text{pinky}}) = (\frac{1}{3}, \frac{1}{3}, \frac{1}{3})$.]
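   A minimal sketch of this user model; the chances are chosen so that the conditional probabilities match the two figures above, but the concrete numbers and names are illustrative.

    import random

    # A user as a multinomial process: a fixed chance t[c] for every
    # command c.  After a prefix is typed, the next command is drawn from
    # these chances conditioned on the remaining possible completions.
    t = {"halt": 0.1, "hash": 0.3, "pine": 0.2, "ping": 0.2, "pinky": 0.2}

    def draw_completion(prefix, t):
        omega = {c: p for c, p in t.items() if c.startswith(prefix)}  # Ω_prefix
        total = sum(omega.values())
        return random.choices(list(omega), weights=[p / total for p in omega.values()])[0]

    print(draw_completion("ha", t))   # halt with chance 1/4, hash with chance 3/4
    print(draw_completion("pin", t))  # pine, ping, pinky each with chance 1/3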

7. The user as a Markov process

   Model of the user's behavior:
   A priori, there is a fixed probability $t_{\text{command} \mid \text{previous}}$ for every command and every previously typed command.
   After typing part of a command, the remaining possible completions are chosen with the corresponding conditional probabilities for the previous command.

   Graphical representation of a user:
   [Figure: one simplex per previous command, e.g. $\Delta_{\text{pin} \mid \text{halt}}$ and $\Delta_{\text{pin} \mid \text{hash}}$, each with its own marked point.]
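   The multinomial sketch extends directly: one chance vector per previous command. The numbers below are illustrative, not taken from the presentation.

    import random

    # The user as a Markov process: the chances depend on the previously
    # executed command, so there is one simplex per previous command.
    t_given = {
        "halt": {"pine": 0.6, "ping": 0.3, "pinky": 0.1},  # point in Δ_pin|halt
        "hash": {"pine": 0.2, "ping": 0.2, "pinky": 0.6},  # point in Δ_pin|hash
    }

    def draw_completion_markov(prefix, previous, t_given):
        # Condition on the prefix within the simplex of the previous command.
        omega = {c: p for c, p in t_given[previous].items() if c.startswith(prefix)}
        total = sum(omega.values())
        return random.choices(list(omega), weights=[p / total for p in omega.values()])[0]

    print(draw_completion_markov("pin", "halt", t_given))  # pine with chance 0.6, ...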

8. Knowledge about the user's behavior

   Three models:

   An exact model: $t_{\text{command}}$ is known for all commands.

   A precise Dirichlet model (PDM): the uncertainty about the exact model is determined by a Dirichlet distribution $\mathrm{Di}(\vec{\vartheta} \mid h, \vec{t})$.
   [Figure: a Dirichlet density over the simplex with vertices halt and hash; its mean is the marked point $\vec{t} = P_{\mathrm{Di}}(\vec{\vartheta} \mid h, \vec{t})$.]
   $$P_{\mathrm{Di}}(X \mid h, \vec{t}) = \int_{\Delta_{\text{ha}}} X(\vec{\vartheta})\, \mathrm{Di}(\vec{\vartheta} \mid h, \vec{t})\, \mathrm{d}\vec{\vartheta}.$$

   An imprecise Dirichlet model (IDM): the uncertainty is determined by a set of Dirichlet distributions, one for every $\vec{t}$ in a set $T$.
   [Figure: the set $T$ inside the simplex with vertices halt and hash, with lower and upper means $\underline{\vec{t}} = \underline{P}(\vec{\vartheta} \mid h, T)$ and $\overline{\vec{t}} = \overline{P}(\vec{\vartheta} \mid h, T)$.]
   $$\underline{P}_{\mathrm{Di}}(X \mid h, T) = \inf_{\vec{t} \in T} P_{\mathrm{Di}}(X \mid h, \vec{t}), \qquad \overline{P}_{\mathrm{Di}}(X \mid h, T) = \sup_{\vec{t} \in T} P_{\mathrm{Di}}(X \mid h, \vec{t}).$$
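   For a gamble that is linear in the chance vector (for example the indicator of a command), the Dirichlet prevision reduces to $\sum_c x_c t_c$, so the IDM bounds can be computed by optimizing over the extreme points of $T$. A minimal sketch, assuming $T$ is given by its vertices:

    # Lower and upper previsions under an IDM, for gambles linear in the
    # chance vector (their Dirichlet prevision is then x · t).  The inf
    # and sup of a linear function over a polytope T are attained at its
    # vertices, so it suffices to enumerate them.
    def prevision(x, t):
        return sum(x[c] * t[c] for c in x)

    def idm_bounds(x, T_vertices):
        values = [prevision(x, t) for t in T_vertices]
        return min(values), max(values)

    # Gamble: the indicator of 'halt' -> bounds on the probability of 'halt'.
    x = {"halt": 1.0, "hash": 0.0}
    T = [{"halt": 0.2, "hash": 0.8}, {"halt": 0.5, "hash": 0.5}]
    print(idm_bounds(x, T))  # (0.2, 0.5)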

9. Observations, sufficient statistics, and ...

   Observations: (a sequence of) executed commands for the multinomial model, or (a sequence of) consecutively executed commands for the Markov model.

   Keep what is relevant for the model, the sufficient statistic: the number of occurrences $\vec{n}$ of the commands, or the number of occurrences $N$ of the transitions between commands.
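   Both sufficient statistics are plain counts; a sketch with an illustrative history:

    from collections import Counter

    history = ["ls", "logname", "ls", "pine", "ls", "pine"]

    # Sufficient statistic for the multinomial model: occurrence counts n.
    n = Counter(history)

    # Sufficient statistic for the Markov model: transition counts N.
    N = Counter(zip(history, history[1:]))

    print(n)  # Counter({'ls': 3, 'pine': 2, 'logname': 1})
    print(N)  # Counter({('ls', 'pine'): 2, ('ls', 'logname'): 1, ...})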

10. ... Likelihood functions

   Likelihood function: the likelihood of an exact model given the observations. It is a multinomial distribution $L_{\vec{n}}(\vec{\vartheta})$, or a Whittle distribution $L_N(\Theta)$, proportional to the product of the $L_{\vec{n}}(\vec{\vartheta})$ for each of the previous commands.
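   The slide leaves the multinomial likelihood implicit; in its standard form,
   $$L_{\vec{n}}(\vec{\vartheta}) \propto \prod_{c \in \Omega} \vartheta_c^{n_c},$$
   and the Whittle likelihood for the Markov model multiplies one such factor per previous command, with $\vec{n}$ the corresponding row of the transition-count matrix $N$.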

11. Learning using a PDM/IDM

   Updating a Dirichlet distribution using Bayes' rule:
   $$f(\vec{\vartheta} \mid h, \vec{t}, \vec{n}) = \frac{\mathrm{Di}(\vec{\vartheta} \mid h, \vec{t})\, L_{\vec{n}}(\vec{\vartheta})}{P(L_{\vec{n}} \mid h, \vec{t})} = \mathrm{Di}(\vec{\vartheta} \mid h_n, \vec{t}_n), \qquad h_n = h + n, \quad \vec{t}_n = \frac{h\vec{t} + \vec{n}}{h + n}.$$
   [Figure: prior and posterior densities over the simplex with vertices halt and hash, the posterior shifted toward the observed counts.]

   Updating a PDM is updating the underlying distribution:
   $$P(X \mid h, \vec{t}) \xrightarrow{\ \vec{n}\ } P(X \mid h_n, \vec{t}_n).$$
   [Figure: the simplex with vertices pine, ping and pinky; the observations move the mean from $\vec{t}$ to $\vec{t}_n$.]

   Updating an IDM comes down to updating the corresponding set of PDMs:
   $$\underline{P}(X \mid h_n, T_n) = \inf\Bigl\{ P(X \mid h_n, \vec{t}_n) \Bigm| h_n = h + n,\ \vec{t}_n = \frac{h\vec{t} + \vec{n}}{h + n},\ \vec{t} \in T \Bigr\}.$$
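   A minimal sketch of the conjugate update, carrying the Dirichlet hyperparameters as the pair $(h, \vec{t})$ as on the slide; the counts are illustrative.

    # Conjugate updating of a Dirichlet (h, t) with observed counts n:
    # h_n = h + n_total,  t_n = (h * t + n) / (h + n_total).
    def update(h, t, counts):
        n_total = sum(counts.values())
        h_n = h + n_total
        t_n = {c: (h * t[c] + counts.get(c, 0)) / h_n for c in t}
        return h_n, t_n

    # Updating an IDM: update every extreme point of T with the same counts.
    def update_idm(h, T_vertices, counts):
        updated = [update(h, t, counts) for t in T_vertices]
        return updated[0][0], [t_n for _, t_n in updated]  # h_n is shared

    h, t = 2, {"halt": 0.5, "hash": 0.5}
    print(update(h, t, {"halt": 1, "hash": 3}))  # (6, {'halt': 0.33..., 'hash': 0.66...})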
