Preference Elicitation for Interface Optimization (Krzysztof Gajos) - PowerPoint Presentation

SLIDE 1

Preference Elicitation for Interface Optimization

Krzysztof Gajos and Daniel S. Weld

University of Washington, Seattle

SLIDE 2

Krzysztof Gajos

SLIDE 3

Motivation: Supple Model-Based Interface Renderer

[Diagram] Functional Interface Spec. (a hierarchy of state variables and methods) + Device Model (screen size, available widgets & interaction modes) + User Trace (a model of an individual user’s behavior, or that of a group, e.g. { <Root, …>, <LeftLight:Power, off, on>, <Vent, 1, 3>, <Projector:Input, video, computer>, … } ) → Decision-Theoretic Optimization → Custom Interface Rendering

SLIDE 4

[Gajos & Weld, IUI’04]

Supple Output

SLIDE 5

Factor weights:

  • Container: 0.0
  • Tab Pane: 100.0
  • Popup: 1.0
  • Spinner for integers: 5.0
  • Spinner (domain size): 49.5238
  • Spinner for non-integers: 6.0
  • Slider: 45.7143
  • Progress bar: 0.0
  • Checkbox: 0.0
  • Radio button: 0.5
  • Horizontal radio button: 10.0
  • Radio button (>=4 values): 0.0
  • Radio button (>=8 values): 74.2857
  • Radio button for booleans: 14.2857

[Gajos & Weld, IUI’04]

Supple Depends on Weights

SLIDE 6

RIA

[Zhou +, UIST’04; IUI’05]

SLIDE 7

[Zhou +, UIST’04; IUI’05]

SLIDE 8

BusyBody

Expected Cost of Interruption = Σ_i P(I_i) · C(I_i)

where P(I_i) is the probability of interruptibility state I_i, and C(I_i) is the cost of interrupting if the user is in state I_i.

[Horvitz +, CSCW’04]

SLIDE 9

BusyBody

Expected Cost of Interruption = Σ_i P(I_i) · C(I_i)

P(I_i): probability of interruptibility state I_i. C(I_i): cost of interrupting if the user is in state I_i; this needs to be elicited from the user for every interruptibility state I_i.

[Horvitz +, CSCW’04]
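The expected-cost computation above is just a probability-weighted sum over interruptibility states. A minimal sketch (illustrative only, not BusyBody’s actual code; names are mine):

```python
def expected_cost_of_interruption(p, c):
    """Expected cost of interruption as on the slide above:
    sum over interruptibility states I_i of P(I_i) * C(I_i).

    p: probability of each interruptibility state (should sum to 1).
    c: cost of interrupting the user in each state (elicited from the user).
    """
    return sum(p_i * c_i for p_i, c_i in zip(p, c))
```

For example, `expected_cost_of_interruption([0.2, 0.8], [10, 1])` returns `2.8`: a rarely occupied but expensive state still dominates the expectation.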

SLIDE 10

[Agrawala +, SIGGRAPH’01]

LineDrive

SLIDE 11

Arnauld: A Tool for Preference Elicitation

[Diagram: Arnauld supplies Weights to the Optimizing UI Application]

Raises the level of abstraction:

– instead of directly choosing weights,
– designers now interact with concrete outcomes

SLIDE 15

Benefits

  • Saves Developers Time
    – by a factor of 2-3
  • Improves Quality of Weights
    – learned weights outperform hand-tuned ones
  • Users May Want to Override Default Params
    – individual preferences
    – multiple uses

SLIDE 16

Our Contributions

  • Implemented the Arnauld system for preference elicitation
    – applicable to most optimization-based HCI applications
    – implemented on SUPPLE
  • Based on two interaction methods for eliciting preferences
  • Developed a fast machine learning algorithm that learns the best set of weights from user feedback
    – enables interactive elicitation
  • Investigated two query generation algorithms
    – keeps elicitation sessions short

SLIDE 17

Outline

  • Motivation
  • Elicitation techniques
    – Example critiquing
    – Active elicitation
  • User responses → constraints
  • Learning from user responses
  • Generating queries
  • Results & Conclusions
SLIDE 18

Example Critiquing

SLIDE 19

Via Customization Facilities

Click!

SLIDE 20

Result of Customization: Provides a Training Example!

[screenshots: “before” vs. “after” interface; the customized “after” interface is preferred]

SLIDE 21

Example Critiquing

  • Exploits natural interaction occurring during the process of customizing the interface
  • Effective when the cost function is almost correct

But…

  • Can be tedious during early stages of the parameter-learning process
  • Requires customization support to be provided by the UI system (e.g. RIA, SUPPLE, etc.)

SLIDE 22

Active Elicitation

SLIDE 23

Active Elicitation UI in Two Parts

Structure provided by ARNAULD

SLIDE 24

Active Elicitation UI in Two Parts

Content provided by the interface system for which we are learning weights

SLIDE 25

Active Elicitation

  • Convenient during early stages of the parameter-learning process
  • Binary comparison queries are easy for the user
  • Doesn’t require any additional support from the UI system for which parameters are generated

But…

  • Doesn’t allow the designer to direct the learning process
  • Choice of the best question is tricky

SLIDE 26

Limitations of Isolated Feedback

Both examples so far provided feedback of the form “All else being equal, I prefer sliders to combo boxes.”

But what if using a better widget in one place makes another part of the interface crummy?!

SLIDE 27

In isolation, sliders are preferred, but using them may cause badness elsewhere.

SLIDE 28

Situated Feedback with Active Elicitation

SLIDE 29

Situated Feedback with Example Critiquing

SLIDE 30

Summary of Elicitation Interactions

[2×2 matrix: interaction style (Active Elicitation vs. Example Critiquing) × feedback type (Situated vs. Isolated)]

SLIDE 31

Outline

  • Motivation
  • Elicitation techniques
  • User responses → constraints
  • Learning from user responses
  • Generating queries
  • Results & Conclusions
SLIDE 32

Turning User Responses Into Constraints

All systems studied had linearly decomposable cost functions; these can be expressed as:

cost(interface) = Σ_{k=1}^{K} u_k · f_k(interface)

where f_k(interface) is a “factor” reflecting the presence, absence, or intensity of some interface property, and u_k is the weight associated with that factor.
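Concretely, the cost is just a weighted sum of factor values. A minimal sketch (function and variable names are mine, not SUPPLE’s):

```python
def cost(weights, factors):
    """Linearly decomposable cost function from the formula above:
    cost(interface) = sum_k u_k * f_k(interface).

    weights: the u_k, one per factor (e.g. the Slider weight 45.7143 from SLIDE 5).
    factors: the f_k(interface), reflecting the presence, absence, or
             intensity of each interface property for a given rendering.
    """
    return sum(u_k * f_k for u_k, f_k in zip(weights, factors))
```

E.g. with weights `[100.0, 45.7143]` for (Tab Pane, Slider) and factor values `[1, 0]` for a rendering that uses a tab pane but no slider, `cost` returns `100.0`.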

SLIDE 33

From User Responses to Constraints

E.g., the user prefers a horizontal slider to a combo box:

cost( [combo-box interface] ) ≥ cost( [slider interface] )

With factor values f_combo_box = 1 and f_combo_box_for_numbers = 1 for the first interface, and f_slider = 1 and f_horizontal_slider = 1 for the second, this becomes a linear constraint over the weights:

Σ_{k=1}^{K} u_k · f_k(interface_1) ≥ Σ_{k=1}^{K} u_k · f_k(interface_2)
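Since the cost is linear in the weights, each response reduces to a single difference vector over factor values. A hedged sketch of the encoding (my formulation, not code from Arnauld):

```python
def response_to_constraint(f_preferred, f_other):
    """Encode 'the user preferred interface A over interface B' as the
    difference vector d = f(B) - f(A) over the factor values.

    The learned weights u must then satisfy  u . d >= 0,
    i.e. the preferred interface has the lower cost.
    """
    return [fb - fa for fa, fb in zip(f_preferred, f_other)]
```

With factors ordered (combo box, slider), preferring the slider interface `[0, 1]` over the combo-box one `[1, 0]` yields `d = [1, -1]`, i.e. the constraint u_combo_box ≥ u_slider.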

SLIDE 34

Outline

  • Motivation
  • Elicitation techniques
  • User responses → constraints
  • Learning from user responses
  • Generating queries
  • Results & Conclusions
SLIDE 35

Learning Algorithm

Given constraints of the form:

Σ_{k=1}^{K} u_k · f_k(interface_1) ≥ Σ_{k=1}^{K} u_k · f_k(interface_2)

find values of the weights u_k:

  • satisfying a maximum number of constraints
  • and by the greatest amount

SLIDE 36

Our Approach

Use a max-margin approach (essentially a linear Support Vector Machine).

Reformulate the constraints as:

Σ_{k=1}^{K} u_k · f_k(interface_1) ≥ Σ_{k=1}^{K} u_k · f_k(interface_2) + margin − slack_i

SLIDE 37

Our Approach

Use a max-margin approach (essentially a linear Support Vector Machine).

Reformulate the constraints as:

Σ_{k=1}^{K} u_k · f_k(interface_1) ≥ Σ_{k=1}^{K} u_k · f_k(interface_2) + margin − slack_i

slack_i: a per-constraint slack that accommodates unsatisfiable constraints. margin: a shared margin by which all constraints are satisfied.

SLIDE 38

Learning as Optimization

Set up an optimization problem that maximizes:

margin − Σ_i slack_i

subject to the constraints:

Σ_{k=1}^{K} u_k · f_k(interface_1) ≥ Σ_{k=1}^{K} u_k · f_k(interface_2) + margin − slack_i

SLIDE 39

Learning as Optimization

Set up an optimization problem that maximizes:

margin − Σ_i slack_i

subject to the constraints:

Σ_{k=1}^{K} u_k · f_k(interface_1) ≥ Σ_{k=1}^{K} u_k · f_k(interface_2) + margin − slack_i

Solved with standard linear programming methods in less than 250 ms.
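As a concrete sketch of this formulation (not the authors’ implementation; the [0, 1] bound on the weights, which keeps the margin from growing without limit, is my own assumption), the linear program can be set up with scipy:

```python
import numpy as np
from scipy.optimize import linprog

def learn_weights(diffs):
    """Max-margin weight learning as described above: maximize
    (margin - sum_i slack_i) subject to  u . d_i >= margin - slack_i,
    where d_i = f(interface_1) - f(interface_2) for a response saying
    interface_2 is preferred (so interface_1 should cost more).
    """
    diffs = np.asarray(diffs, dtype=float)
    n, k = diffs.shape
    # Variable layout: x = [u_1..u_K, margin, slack_1..slack_N]
    # Objective: maximize margin - sum(slack)  ->  minimize -margin + sum(slack)
    c = np.concatenate([np.zeros(k), [-1.0], np.ones(n)])
    # u . d_i >= margin - slack_i   ->   -u . d_i + margin - slack_i <= 0
    A_ub = np.hstack([-diffs, np.ones((n, 1)), -np.eye(n)])
    b_ub = np.zeros(n)
    # Assumed normalization u_k in [0, 1]; margin and slacks non-negative.
    bounds = [(0, 1)] * k + [(0, None)] * (1 + n)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
    return res.x[:k]
```

With factors ordered (combo box, slider) and a single response preferring the slider interface, `d = [1, -1]`, and the learned combo-box weight comes out higher than the slider weight, as desired.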

SLIDE 40

Outline

  • Motivation
  • Elicitation techniques
  • User responses → constraints
  • Learning from user responses
  • Generating queries
  • Results & Conclusions
SLIDE 41

Generating Queries

  • An important part of Active Elicitation
    – like the game of 20 questions, order is key
  • Optimality is intractable
  • Introducing two heuristic methods
    – searching the ℝⁿ space of weights
      • general method: applies to all optimization-based UIs
    – searching the space of semantic differences
      • faster
      • requires tighter integration with the UI application
SLIDE 42

Generating Queries

  • Why is it important?
    – like the game of 20 questions, order is key
  • Optimality is intractable
  • Introducing two heuristic methods
    – searching the ℝⁿ space of weights
      • general method: applies to all optimization-based UIs
    – searching the space of semantic differences
      • faster
      • requires tighter integration with the UI application
SLIDE 43

Visualizing the Search through the ℝⁿ Space of Weights

A binary preference question cleaves the space

SLIDE 44

Answering a Question Creates a Region

[figure: the preferred region of weight space]

SLIDE 45

Midway through the Q/A Process…

What is the best immediate (greedy) question for cleaving?

SLIDE 46

Good Heuristics for Cleaving

  1. As close to the centroid as possible
  2. Perpendicular to the longest axis of the region
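The two heuristics can be scored directly in weight space. A sketch under my own assumptions (sampling-based, with an ad-hoc score combining the two criteria; this is not the authors’ algorithm):

```python
import numpy as np

def pick_query(candidate_diffs, region_samples):
    """Greedy query selection following the two heuristics above.

    candidate_diffs: difference vectors d = f(interface_1) - f(interface_2);
        each candidate question cleaves weight space with the hyperplane u . d = 0.
    region_samples: weight vectors sampled from the current feasible region.

    Prefers hyperplanes that (1) pass close to the region's centroid and
    (2) are perpendicular to its longest axis (normal aligned with it).
    """
    samples = np.asarray(region_samples, dtype=float)
    centroid = samples.mean(axis=0)
    # Longest axis of the region ~ first principal component of the samples.
    _, _, vt = np.linalg.svd(samples - centroid)
    longest_axis = vt[0]
    best, best_score = None, -np.inf
    for d in candidate_diffs:
        normal = np.asarray(d, dtype=float)
        normal = normal / np.linalg.norm(normal)
        dist_to_centroid = abs(normal @ centroid)  # heuristic 1: small is good
        alignment = abs(normal @ longest_axis)     # heuristic 2: large is good
        score = alignment - dist_to_centroid       # ad-hoc combination
        if score > best_score:
            best, best_score = d, score
    return best
```

For a region elongated along the first weight axis, this picks the candidate whose hyperplane cuts across that axis near the centroid.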

SLIDE 47

Outline

  • Motivation
  • Elicitation techniques
  • User responses → constraints
  • Learning from user responses
  • Generating queries
  • Results & Conclusions
SLIDE 48

Informal User Study

  • Four users
    – two Supple developers
    – two “sophisticated users” (i.e., programmers without Supple experience)
  • Developers asked to hand-build the cost function
    – hand-coding took 2-3 times longer
    – the resulting function was “wrong” 35% of the time!
  • Using Arnauld to create the cost function
    – got a robust cost function in 10-15 minutes
    – all said Arnauld was much easier & more accurate

SLIDE 49

Learning Rate

[chart: ratio of learned function to ideal vs. number of elicitation steps, for the different question-generation algorithms]

SLIDE 50

Sensitivity to Noise

[chart: ratio of learned function to ideal vs. number of elicitation steps, with 10% user errors]

SLIDE 51

Related Work

  • Gamble Queries
    – Outcome_x vs. p·Best + (1−p)·Worst
  • Bayesian Learning [Chajewska, ICML’01]
    – too slow for interactive use: 40 seconds (too slow) or 1 second (large error)

SLIDE 52

Conclusions

  • Implemented the Arnauld system for preference elicitation
    – applicable to most optimization-based HCI applications
    – saves developers time
    – creates better weights
  • Based on two interaction methods
    – Example Critiquing
    – Active Elicitation
    – investigated two query generation algorithms
  • Novel machine learning algorithm
    – learns good weights from user feedback
    – fast enough for interactive elicitation