Evaluating Interface Designs - SE3830 (PowerPoint presentation)

SLIDE 1

Evaluating Interface Designs

SE3830 - Jay Urbain

Credits: Ben Shneiderman, Catherine Plaisant, Roger J. Chapman

SLIDE 2
SLIDE 3
SLIDE 4

This graph depicts the relationships between 2367 people

SLIDE 5

How to evaluate Interface Designs?

Find employer/employee:

  • http://www.ashtonstaffing.com

Expectations:

  • 1. Effectively communicates corporate values
  • 2. Clear user flows and “calls to action”
  • 3. Supports power users versus browser types
  • 4. Spotlights and links key objectives and site goals
  • 5. Overall appropriate tone for brand strategy
  • 6. Easy to scan and read, maximized page density
SLIDE 6

How to evaluate Interface Designs?

Find doc: (try Google first)

  • http://doctor.mcw.edu/search_form.php
    – lumbar disc herniation milwaukee
    – Lumbar Endoscopic Discectomy MCW

Expectations:

  • 1. Finds an MCW doc appropriate for user needs
  • 2. Effectively communicates quality care – most skilled, latest tech, & they care about you.
  • 3. Effectively communicates corporate values – I would think this would be quality of care!
  • 4. Clear user flows and “calls to action”
  • 5. Supports power users versus browser types
  • 6. Spotlights and links support key objectives and site goals
  • 7. Overall appropriate tone for brand strategy
  • 8. Easy to scan and read, maximized page density

SLIDE 7

How to evaluate Interface Designs?

Find friend:

  • http://www.facebook.com/
  • Expectations?
  • 1. Find “appropriate” friend, make friend
  • 2. …
SLIDE 8

Evaluating Interface Designs

  • Expert Reviews
  • Usability Testing
  • Surveys
  • Acceptance Tests
  • Continuing Assessment
SLIDE 9

Introduction

  • Designers can become so entranced with their creations that they may fail to evaluate them adequately.
  • We become too close to our work.
  • Experienced designers have attained the wisdom and humility to know that extensive testing and outside feedback are a necessity.

SLIDE 10

Introduction

  • What determinants drive the evaluation plan?

SLIDE 11

Introduction

  • Determinants of the evaluation plan:
    – stage of design
    – novelty of project
    – diversity of expected users
    – criticality of the interface
    – costs of product, and finances available for testing
    – time available
    – experience of the design and evaluation team
  • Length of evaluation plans might be a few days to years.
  • Costs could range from 20% of a project down to 5%.
SLIDE 12

Introduction (cont.)

  • It used to be just a good idea to get ahead of the competition by focusing on usability and doing testing.
  • Rapid growth of interest in usability means that failure to test can be highly risky.
  • Usability is expected!
    – Competition
    – Failed contracts
    – Liability

SLIDE 13

Challenges in Usability Testing

What challenges might we face in usability testing?

SLIDE 14

Challenges in Usability Testing

1. Perfection is not possible in complex human endeavors.
  • Planning must include continuing methods to assess and repair problems during the lifecycle of an interface.
2. Even though problems may continue to be found in prototype testing, you eventually need to deliver a product.
  • Shoot the engineer ;-)
3. Most testing methods account for normal usage, but performance in unpredictable situations with high levels of input can be difficult to test.
  • Web services, air-traffic control, critical care
  • Need testing methods to deal with stressful situations
SLIDE 15

Expert Reviews

  • Informal demos to colleagues or customers can provide useful feedback.
  • More formal expert reviews have proven to be more effective.
  • Expert reviews can entail anywhere from a few hours to weeks of effort.
  • Training may be required to explain the task domain.
SLIDE 16

Expert Reviews Methods

  • Variety of expert review methods to choose from:
    – Heuristic evaluation
      • 8 Golden Rules
      • Nielsen's 10 rules
    – Guidelines review
      • Select from many guidelines
      • Can be very time consuming
    – Consistency inspection
      • Within and across products
      • Pinup method
    – Cognitive walkthrough (scenarios)
      • Especially effective for rare, critical tasks, & error conditions
    – Formal usability inspection
      • Use moderator to present interface for review/feedback
SLIDE 17

Expert Reviews Golden Rules

Heuristic evaluation: Shneiderman's 8 Golden Rules

1. Strive for consistency
   – Consistent sequences, commands, and terminology
2. Cater to universal usability
   – Enable frequent users to use shortcuts
3. Offer informative feedback
   – For every operator action, there should be some system feedback.
4. Design dialogs to yield closure
   – Sequences of actions should be organized into groups with a beginning, middle, and end.
5. Prevent errors
6. Permit easy reversal of actions
7. Support internal locus of control
8. Reduce short-term memory load
   – Keep displays simple, consolidate multiple-page displays, reduce window-motion frequency, and allow sufficient training time.

SLIDE 18

Expert Reviews Heuristics

Nielsen's Usability Heuristics:
1. Visibility of system status
2. Match between system and the real world
3. User control and freedom
4. Consistency and standards
5. Error prevention
6. Recognition rather than recall
7. Flexibility and efficiency of use
8. Aesthetic and minimalist design
9. Help users recognize, diagnose, and recover from errors
10. Help and documentation

SLIDE 19

Expert Reviews Guidelines

Android UI Design Guidelines
  • http://developer.android.com/guide/practices/ui_guidelines/index.html

Apple Human Interface Guidelines
  • http://developer.apple.com/library/mac/#documentation/UserExperience/Conceptual/AppleHIGuidelines/XHIGIntro/XHIGIntro.html

iOS Human Interface Guidelines
  • http://developer.apple.com/library/ios/#documentation/userexperience/conceptual/mobilehig/Introduction/Introduction.html

Windows User Experience Interaction Guidelines
  • http://msdn.microsoft.com/en-us/library/aa511258.aspx

UI Design with Qt
  • http://doc.qt.nokia.com/latest/qt-gui-concepts.html
SLIDE 20

Expert Reviews Limitations

  • Expert reviews should be scheduled at several points in the development process.
  • Different experts tend to find different problems in an interface, so multiple reviewers can be highly effective.
  • A major problem with expert reviews is the lack of domain understanding.
  • Even experienced expert reviewers can have difficulty knowing how typical users, especially first-time users, will really behave.

SLIDE 21

Expert Reviews (cont.)

  • Define rating scale:
SLIDE 22

Expert Reviews (cont.)

  • Homepage:
SLIDE 23

Expert Reviews (cont.)

  • Homepage:
SLIDE 24

Expert Reviews (cont.)

  • Aggregated score of features:
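The rating-scale and aggregated-score slides can be sketched in code. A minimal sketch of aggregating expert-review ratings, assuming a hypothetical 1-5 scale per feature, simple averaging, and an invented 3.0 flag threshold (feature names and scores are made up for illustration):

```python
from statistics import mean

# Hypothetical expert-review ratings: each of three reviewers scores
# each feature on a 1-5 scale (5 = fully meets the heuristic).
ratings = {
    "consistency":          [4, 5, 4],
    "informative feedback": [2, 3, 2],
    "error prevention":     [5, 4, 4],
}

def aggregate(ratings):
    """Average each feature's scores and flag features averaging below 3.0."""
    summary = {feature: round(mean(scores), 2)
               for feature, scores in ratings.items()}
    flagged = [f for f, avg in summary.items() if avg < 3.0]
    return summary, flagged

summary, flagged = aggregate(ratings)
print(summary)   # per-feature averages
print(flagged)   # features to prioritize for redesign
```

Flagged features would then be prioritized for redesign before the next review round.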

SLIDE 25

Usability Testing and Laboratories

SLIDE 26

Usability Testing and Laboratories (cont.)

  • Emergence of usability testing and laboratories started in the early 1980s.
  • Influenced by advertising and market research.
  • Usability testing can speed up (or slow down) many projects, and produce significant cost savings.
  • A basic usability lab would have two areas:
    – One for the participants to do their work
    – Another, separated by a half-silvered mirror, for testers, observers, or a camera.
  • Goals
    – Test UI hypotheses, find errors, and validate theories
    – Refine user interfaces rapidly

SLIDE 27

Usability Testing and Laboratories (cont.)

  • Participants should be chosen to represent the intended user communities:
    – domain knowledge, computing background, task experience, motivation, education, and level of ability with the natural language used.
  • Professional practice is to ask all subjects to read and sign a privacy, consent, and disclosure statement.

SLIDE 28

Usability Testing and Laboratories Techniques

Techniques:

  • Have users execute a task list.
  • After a suitable period of time to execute the task list, invite general comments from the users.
  • Invite users to think aloud about what they are doing.
  • Working pairs typically elicit more feedback.
  • Designer/testers should be supportive, but not take over or give instructions, and should listen for clues.

SLIDE 29

Usability Testing and Laboratories Techniques (cont.)

  • Videotaping participants performing tasks is valuable for later review, and for showing designers or managers the problems that users encounter.
  • Many variant forms of usability testing have been tried:
    – Paper mockups
    – Discount usability testing
    – Competitive usability testing
    – Universal usability testing
    – Field tests and portable labs
    – Remote usability testing
    – Can-you-break-this tests

SLIDE 30

Usability Testing and Laboratories Techniques (cont.)

  • Problems of usability tests?
SLIDE 31

Usability Testing and Laboratories Techniques (cont.)

  • Limitations of usability tests
    – Emphasize first-time usage
    – Coverage of user-interface features can be limited
    – Difficult to project short-term testing to long-term usage
    – Helpful to supplement with expert reviews and longer-term monitoring
  • Proponents of activity theory believe that realistic test environments are necessary to evaluate information appliances, ambient technologies, and mobile devices.

SLIDE 32

Survey Instruments

  • Written user surveys are a familiar, inexpensive, and generally acceptable companion for usability tests and expert reviews.
  • Keys to successful surveys
    – Clear goals in advance
    – Development of focused items that help attain the goals
  • Survey goals can be tied to the components of the Objects and Action Interface model (previous lecture) of interface design.
  • Users could be asked for their subjective impressions about specific aspects of the interface, such as the representation of:
    – task domain objects and actions
    – syntax of inputs and design of displays.

SLIDE 33

Survey Instruments (cont.)

  • Other survey goals would ascertain:
    – users' background (age, gender, origins, education, income)
    – experience with computers (specific applications or software packages, length of time, depth of knowledge)
    – job responsibilities (decision-making influence, managerial roles, motivation)
    – personality style (introvert vs. extrovert, risk taking vs. risk aversive, early vs. late adopter, systematic vs. opportunistic)
    – reasons for not using an interface (inadequate services, too complex, too slow)
    – familiarity with features (printing, macros, shortcuts, tutorials)
    – the state of their feelings after using an interface (confused vs. clear, frustrated vs. in-control, bored vs. excited).

SLIDE 34

Surveys (cont.)

  • Online surveys avoid the cost of printing and the extra effort needed for distribution and collection of paper forms.
  • Many people prefer to answer a brief survey displayed on a screen, instead of filling in and returning a printed form,
    – although there is a potential bias in the sample.
  • Disadvantages:
    – Getting people to do them (pay them!)
    – Time required to put a good survey together
    – Normalizing results across users (pay them to get enough users)
    – Biased results based on who is willing to fill out the survey (people who need money ;-)
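The "normalizing results across users" point can be sketched as a per-respondent z-score, which removes each rater's personal generosity bias before comparing survey items. A minimal sketch (respondents, items, and ratings are all invented for illustration):

```python
from statistics import mean, pstdev

# Hypothetical 1-5 ratings from three respondents on three survey items.
responses = {
    "r1": {"ease": 5, "speed": 4, "layout": 5},  # generous rater
    "r2": {"ease": 3, "speed": 2, "layout": 2},  # harsh rater
    "r3": {"ease": 4, "speed": 3, "layout": 4},
}

def normalize(responses):
    """Z-score each respondent's ratings against their own mean/stdev."""
    out = {}
    for rid, items in responses.items():
        vals = list(items.values())
        m, s = mean(vals), pstdev(vals)
        out[rid] = {k: (v - m) / s if s else 0.0 for k, v in items.items()}
    return out

z = normalize(responses)
# After normalization, "speed" sits below every respondent's personal
# average, regardless of how generous each rater was overall:
print(all(z[r]["speed"] < 0 for r in z))  # True
```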

SLIDE 35

Acceptance Test

  • Architecting hardware and software systems requires objective and measurable goals for evaluating system performance (qualities).
  • If the system fails to meet these acceptance criteria, the system must be reworked until success is demonstrated.
  • Rather than the vague and misleading criterion of "user friendly," measurable criteria for the user interface can be established:
    – Time to learn specific functions
    – Speed of task performance
    – Rate of errors by users
    – Human retention of commands over time
    – Subjective user satisfaction
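The measurable criteria above can be encoded as explicit pass/fail thresholds. A minimal sketch; the criterion names, thresholds, and measured values are invented for illustration:

```python
# Hypothetical acceptance criteria: each measured value must stay under
# a "max" threshold or reach a "min" threshold for the test to pass.
criteria = {
    "time_to_learn_min":  (30.0, "max"),   # learn core functions in <= 30 min
    "task_time_sec":      (120.0, "max"),  # complete benchmark task in <= 2 min
    "error_rate":         (0.05, "max"),   # <= 5% erroneous actions
    "satisfaction_score": (4.0, "min"),    # >= 4.0 on a 1-5 survey scale
}

measured = {
    "time_to_learn_min": 25.0,
    "task_time_sec": 140.0,
    "error_rate": 0.03,
    "satisfaction_score": 4.2,
}

def acceptance_test(criteria, measured):
    """Return the list of criteria that fail; an empty list means accept."""
    failures = []
    for name, (threshold, kind) in criteria.items():
        value = measured[name]
        ok = value <= threshold if kind == "max" else value >= threshold
        if not ok:
            failures.append(name)
    return failures

print(acceptance_test(criteria, measured))  # ['task_time_sec']
```

Any non-empty failure list means the interface is reworked and re-tested, per the slide above.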

SLIDE 36

Acceptance Test (cont.)

  • In a large system, there may be eight or ten such tests to carry out on different components of the interface and with different user communities.
  • Once acceptance testing has been successful, there may be a period of field testing before release.

SLIDE 37

Evaluation During Active Use

  • Successful active use requires constant attention from dedicated managers, user-services personnel, and maintenance staff.
  • Perfection is not attainable, but percentage improvements are possible.
  • Observation.
  • Interviews and focus-group discussions
    – Interviews with individual users can be productive because the interviewer can pursue specific issues of concern.
    – Group discussions are valuable to ascertain the universality of comments.

SLIDE 38

Evaluation During Active Use (cont.)

  • Continuous user-performance data logging
    – The software architecture should make it easy for system managers to collect data about:
      • The patterns of system usage
      • Speed of user performance
      • Rate of errors
      • Frequency of requests for online assistance
    – A major benefit is guidance to system maintainers in optimizing performance and reducing costs for all participants.
  • Online or telephone consultants
    – Many users feel reassured if they know there is human assistance available.
    – On some network systems, the consultants can monitor the user's computer and see the same displays that the user sees.
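The continuous data-logging bullet can be sketched as a tiny in-memory logger that derives the metrics named above (usage volume, speed, error rate, help requests). Field names and events are invented; a real system would persist records to a database rather than a list:

```python
import json
import time

# Minimal sketch of continuous user-performance logging.
log = []

def log_event(user, action, duration_sec, error=False):
    """Append one usage record with a timestamp."""
    log.append({"user": user, "action": action, "t": time.time(),
                "duration": duration_sec, "error": error})

def usage_report(log):
    """Derive the slide's metrics from the raw event log."""
    n = len(log)
    errors = sum(1 for e in log if e["error"])
    help_requests = sum(1 for e in log if e["action"] == "help")
    avg_duration = sum(e["duration"] for e in log) / n
    return {"events": n, "error_rate": errors / n,
            "help_requests": help_requests,
            "avg_duration_sec": round(avg_duration, 2)}

log_event("u1", "search", 3.2)
log_event("u1", "help", 1.0)
log_event("u2", "search", 5.0, error=True)
print(json.dumps(usage_report(log)))
```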

SLIDE 39

Evaluation During Active Use (cont.)

  • Online suggestion box or e-mail trouble reporting
    – Electronic mail to the maintainers or designers.
    – For some users, writing a letter may be seen as requiring too much effort.
  • Discussion groups and newsgroups
    – Permit postings of open messages and questions
    – Some are independent, e.g., America Online and Yahoo!
    – Topic list
    – Sometimes moderators
    – Social systems
    – Comments and suggestions should be encouraged.

SLIDE 40

Controlled Psychologically-oriented Experiments

  • Scientific and engineering progress is often achieved by improved techniques for precise measurement.
  • Progress in the design of interfaces will be achieved as researchers and practitioners evolve suitable human-performance measures and techniques.

SLIDE 41

Controlled Psychologically-oriented Experiments (cont.)

  • The outline of the scientific method as applied to human-computer interaction might comprise these tasks:
    – Deal with a practical problem and consider the theoretical framework (research)
    – State a lucid and testable hypothesis
    – Identify a small number of independent variables that are to be manipulated
    – Carefully choose the dependent variables that will be measured
    – Select subjects and assign subjects to groups
    – Control for biasing factors (non-representative sample of subjects or selection of tasks, inconsistent testing procedures)
    – Apply statistical methods to data analysis
    – Resolve the practical problem, refine the theory, repeat, and give advice to future researchers

SLIDE 42

Controlled Psychologically-oriented Experiments (cont.)

  • Controlled experiments can help fine-tune the human-computer interface of actively used systems.
  • Performance could be compared with the control group.
  • Dependent measures could include performance times, user-subjective satisfaction, error rates, and user retention over time.
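Comparing a new design against the control group, as described above, can be sketched with Welch's two-sample t statistic over task-completion times. The data below are invented for illustration; a full analysis would also derive a p-value from the t distribution (e.g., via scipy):

```python
from statistics import mean, variance

def welch_t(a, b):
    """Welch's two-sample t statistic (unequal variances assumed)."""
    va, vb = variance(a), variance(b)  # sample variances
    return (mean(a) - mean(b)) / ((va / len(a) + vb / len(b)) ** 0.5)

# Hypothetical task-completion times in seconds: control vs. new design.
control = [52.1, 48.7, 55.3, 50.0, 53.8, 49.5]
new_ui  = [44.2, 46.8, 43.1, 45.7, 42.9, 47.3]

t = welch_t(control, new_ui)
# A large positive t suggests the new design's times are reliably lower.
print(round(t, 2))
```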

SLIDE 43

Summary

  • Interface developers evaluate their designs by conducting expert reviews, usability tests, surveys, and acceptance tests.
  • Once interfaces are released, developers perform continuous performance evaluations using interviews, observation, surveys, or by logging user performance.
  • If you don't measure user performance, you are not focusing on usability.
  • Must establish a relationship of trust with the user community.
  • Special attention is required for novice users and users with disabilities.
  • Create social mechanisms for feedback, including surveys, interviews, discussion groups, user meetings, etc.

SLIDE 44
SLIDE 45

The 8 golden rules of interface design

1. Strive for consistency
2. Cater to universal usability
3. Offer informative feedback
4. Design dialogs to yield closure
5. Prevent errors
6. Permit easy reversal of actions
7. Support internal locus of control
8. Reduce short-term memory load

SLIDE 46

Heuristics

Nielsen's Usability Heuristics:
1. Visibility of system status
2. Match between system and the real world
3. User control and freedom
4. Consistency and standards
5. Error prevention
6. Recognition rather than recall
7. Flexibility and efficiency of use
8. Aesthetic and minimalist design
9. Help users recognize, diagnose, and recover from errors
10. Help and documentation
SLIDE 47