Human-Computer Interaction 16. Expert Evaluation - PowerPoint PPT Presentation


  1. Human-Computer Interaction 16. Expert Evaluation

  2. Recap: Think Aloud
  Ask the user to Think Aloud while performing tasks on a system.
  • You watch and learn how the user thinks about the task and where the user has problems using it.
  • Observers are asked to objectively take notes of everything that users say, without attempting to interpret their actions and words.
  • Test sessions are often audio and video taped.

  3. Recap: Focus Group
  An exploratory research method used to help researchers gather in-depth, qualitative information about participants' attitudes and perceptions relating to concepts, products, services, or programs.
  • Advantages
    o Quick and relatively easy to set up.
    o The group dynamic can provide useful information that individual data collection does not provide (a big difference from an interview).
  • Disadvantages
    o The discussion can be dominated or sidetracked by a few individuals.
    o Data analysis is time consuming and needs to be well planned in advance.
    o Does not provide valid information at the individual level.

  4. Recap: A/B Testing
  To compare two (or more) versions of a web page to see which one performs better. You compare two web pages by showing the two variants (let's call them A and B) to similar visitors at the same time.
  • Tracking the things users point at with the cursor and click on
  • Tracking where they've been and where they go
  • Advantages
    o Subjects decisions to data rather than to the HiPPO (Highest Paid Person's Opinion)
    o Great for optimizing copy, colors, placements, shapes, imagery
    o You can run hundreds of tests simultaneously
  • Risks
    o Hard to test completely new products or features
    o If many aspects are subject to fluctuation, what is the 'actual' site?
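The "performs better" comparison above can be sketched as a simple significance check on click-through counts. This is an illustrative Python sketch, not part of the slides: the function name, the example counts, and the choice of a two-proportion z-test are all assumptions.

```python
from math import sqrt

def ab_z_score(clicks_a, visits_a, clicks_b, visits_b):
    """Two-proportion z-test: is variant B's click rate different from A's?"""
    p_a = clicks_a / visits_a
    p_b = clicks_b / visits_b
    # Pooled click rate under the null hypothesis (no real difference).
    p = (clicks_a + clicks_b) / (visits_a + visits_b)
    se = sqrt(p * (1 - p) * (1 / visits_a + 1 / visits_b))
    return (p_b - p_a) / se

# |z| > 1.96 corresponds to p < 0.05 (two-sided); the counts are invented.
z = ab_z_score(clicks_a=120, visits_a=1000, clicks_b=160, visits_b=1000)
print(f"z = {z:.2f}, significant: {abs(z) > 1.96}")
```

In practice you would let a testing platform handle assignment and statistics; the point is only that the decision rests on observed data rather than opinion.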

  5. Today's agenda
  • Expert evaluation
    o Cognitive Walkthrough
    o Heuristic Evaluation

  6. Why do evaluation?
  If we build a product, a service, an interface, etc., how do we know:
  • Whether it's any good?
  • Whether the interface (between a system and a user) meets requirements and criteria?
  • Whether the users are able to complete all important tasks?
  → Test Usability

  7. What is usability?
  "The effectiveness, efficiency and satisfaction with which a specified set of users can achieve a specified set of tasks in a particular environment." (ISO)
  The 5 E's, with example measures:
  • Effective: Can users reach their goals? Find what they are looking for? Do what they want to do?
  • Efficient: How fast can they pursue their goals? - Number of steps
  • Engaging: Will they use it again? Recommend it to others? - Number of revisits; satisfaction
  • Error tolerant - Number of errors; recovering from errors
  • Easy to learn - Amount of effort to learn

  8. Identify the relative importance of evaluation factors

  9. Museum website

  10. Museum exhibition

  11. Evaluation factors
  What about…
  • A self-service filling and payment system for a gas station
  • An on-board ship data analysis system for geologists to search for oil
  • A fashion clothing website
  • A college online course system

  12. When to evaluate?

  13. When to evaluate?
  • Throughout the design process
  • From the first descriptions, sketches, etc. of user needs through to the final product
  • Design proceeds through iterative cycles of "design - test - redesign"
  • Evaluation is a key ingredient for a successful design
  [Diagram: paper sketches → wireframing → interactive prototyping → coding, with user testing after each stage]

  14. How to evaluate?
  • Asking users - users' opinions: how do users think they will perform on a system?
    o Interview/Survey/Observation
    o Think aloud
    o A/B testing
    o Focus group
  • Asking experts - how do experts think the users will perform on a system?
    o Cognitive Walkthrough
    o Heuristic Evaluation

  15. Cognitive Walkthrough

  16. Cognitive Walkthrough
  A usability evaluation method in which one or more evaluators work through a series of tasks and ask a set of questions from the perspective of the user. The focus of the cognitive walkthrough is on understanding the system's learnability for new or infrequent users.
  • To see whether or not a new user can easily carry out tasks within a given system
  • A task-specific approach to usability
  • Premise: most users prefer to learn a product by doing things with it rather than by reading a manual or following a set of instructions.

  17. Define the tasks and actions needed
  First, you need to define the tasks. Then you need a complete, written list of the actions needed to complete each task.
  E.g., Task: Create a customized voicemail message on an iPhone
  Actions:
  1. Tap Voicemail
  2. Tap Greeting
  3. Tap Custom
  4. Tap Record and speak your greeting
  5. When you finish, tap Stop
  6. To listen to your greeting, tap Play
  7. To re-record, repeat steps 4 and 5
  8. Tap Save
  Sometimes defining the tasks is all you need to do to realize there is a problem with the interface. (e.g., http://buenavista.typepad.com/buena_vista/2007/06/the_mobile_user.html)

  18. Three Questions to be Asked
  The cognitive walkthrough is structured around three questions that you ask of every step (or action) in the task, before, during and after each step. If you find a problem, you make a note and then move on to the next step of the task.
  1. Visibility: Is the control for the action visible to the user?
  2. Affordance: Is there a strong link between the control and the action? (Will the user notice that the correct action is available?)
  3. Feedback: Is feedback appropriate? (Will the user properly interpret the system response?)
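One way to keep the note-taking disciplined is to record a yes/no verdict for each of the three questions at every action. A minimal Python sketch, assuming an invented data layout (the actions reuse the voicemail example from the previous slide; this is not a standard tool):

```python
# The three cognitive-walkthrough questions, asked of every action.
QUESTIONS = (
    "Visibility: is the control for the action visible?",
    "Affordance: is there a strong link between control and action?",
    "Feedback: will the user properly interpret the system response?",
)

def walkthrough(actions, answers):
    """Collect a note for every (action, question) pair answered 'no'.

    answers[i] is a tuple of three booleans for action i, one per question.
    """
    problems = []
    for action, verdicts in zip(actions, answers):
        for question, ok in zip(QUESTIONS, verdicts):
            if not ok:
                problems.append((action, question))
    return problems

actions = ["Tap Voicemail", "Tap Greeting", "Tap Custom"]
answers = [(True, True, True), (True, False, True), (True, True, True)]
for action, question in walkthrough(actions, answers):
    print(f"{action}: PROBLEM -> {question}")
```

The output is exactly the list of noted problems, ready for the follow-up discussion.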

  19. Q1. Visibility: Is the control for the action visible to the user?
  This question finds problems with hidden or obscured controls (e.g., is the button visible?), and issues with context-sensitive menus or controls buried too deep within a navigation system. It also identifies controls that are non-standard or unintuitive.

  20. Q2. Affordance: Is there a strong link between the control and the action?
  Will the user notice that the correct action is available? This question finds problems with ambiguous or jargon terms, or with other controls that look like a better choice.

  21. Q3. Feedback: Is feedback appropriate?
  Will the user properly interpret the system response? This question finds problems when feedback is missing, easy to miss, too brief, poorly worded, inappropriate or ambiguous. For example, does the system prompt users to take the next step in the task?

  22. Who should conduct a Cognitive Walkthrough?
  Basically, any UI expert can conduct a cognitive walkthrough; however, there is a risk that someone who is already familiar with your jargon, language and system will miss things that someone who lacks that familiarity would find. If you have to use someone who is very familiar with the product, make sure they have user personas to hand, to guide them to "walk a mile in the user's shoes".

  23. Heuristic Evaluation

  24. Heuristic Evaluation
  A heuristic is a principle or "rule of thumb" that can be used to identify usability problems in interaction design: a researcher walks through a product, compares it to the heuristics, and makes their own assessment as to whether the product follows these rules of thumb or not.
  • To see whether or not a given system has any usability flaws
  • A more holistic usability inspection
  • Developed by Jakob Nielsen (1994)
  • Can be performed on a working UI or on sketches

  25. Heuristic Evaluation: Steps
  1. Know what you will test and how: before you begin any form of usability testing or user research, it is essential to have an objective for your testing (articulate it).
  2. Understand users: you also need some background on your users. This form of testing doesn't involve users, but your evaluators need to be able to act on behalf of the user.
  3. Hold a briefing session to tell the experts what to do, and provide them with task descriptions.
  4. Evaluation, in which:
     - each expert works separately;
     - experts take one pass to get a feel for the product;
     - then a second pass to focus on specific features.
  5. Hold a debriefing session in which the experts work together to prioritize problems.
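The debriefing step (5) is often done by prioritizing on severity ratings. A Python sketch, assuming a 0-4 severity scale (0 = not a problem, 4 = usability catastrophe, the scale Nielsen commonly recommends) and invented ratings from three evaluators:

```python
from statistics import mean

# Hypothetical problems and per-evaluator severity ratings (0-4 scale).
ratings = {
    "No feedback after 'Save'": [3, 4, 3],
    "Jargon term in the menu": [1, 2, 1],
    "Cancel button hard to find": [4, 3, 4],
}

def prioritize(ratings):
    """Sort problems by mean severity, highest first, for the debriefing."""
    return sorted(ratings.items(), key=lambda kv: mean(kv[1]), reverse=True)

for problem, scores in prioritize(ratings):
    print(f"{mean(scores):.1f}  {problem}")
```

Averaging across evaluators smooths out individual bias, which is why the steps above have each expert rate separately before the group debrief.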

  26. 1. Visibility of system status
  Keep users informed about what is going on. Example: response time.
  • 0.1 sec: no special indicators needed
  • 1.0 sec: user tends to lose track of the data
  • 10 sec: maximum delay if the user is to stay focused on the action
  • Short delays: hourglass cursor
  • Long delays: use percent-done progress bars
  • Overestimating the remaining time is usually better than underestimating
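A percent-done progress bar of the kind the slide recommends can be sketched in a few lines of Python (the function name, bar width, and the carriage-return rendering trick are implementation choices, not anything the slides prescribe):

```python
import sys
import time

def render_bar(step, total, width=30):
    """Return a text progress bar such as '[#####----------]  17%'."""
    done = int(width * step / total)
    pct = 100 * step / total
    return f"[{'#' * done}{'-' * (width - done)}] {pct:3.0f}%"

for i in range(1, 51):
    # '\r' rewrites the same terminal line, giving a live percent-done bar.
    sys.stdout.write("\r" + render_bar(i, 50))
    sys.stdout.flush()
    time.sleep(0.01)  # simulated work
print()
```

Keeping the bar visibly moving is what reassures the user during long delays; a frozen bar is worse than none.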

  27. 1. Visibility of system status
  Users should always be aware of what is going on, so that they can make informed decisions.
  - Provide redundant information

  28. 2. Match between system and real world
  The elements and terms used in your system should match those used in the real world as closely as possible.
  • Speak the users' language
  • Follow real-world conventions
  • Pay attention to metaphors

  29. 3. User control and freedom
  Users don't like to be trapped! Strategies:
  • Provide a Cancel button (or Esc key) for dialogs
  • Make the Cancel button responsive!
  • Offer "exits" for mistaken choices: undo, redo
  • Don't force the user down fixed paths
  • Don't make important irreversible actions easy to perform
  • Provide clearly marked "emergency exit" signs
  • Ask for confirmation whenever you can, without being annoying or overprotective
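The undo/redo "exit" in the list above is commonly implemented with two stacks: one of past states, one of undone states. A minimal sketch (class name and the string states are illustrative assumptions):

```python
class UndoHistory:
    """Two-stack undo/redo: a standard way to give users an 'emergency exit'."""

    def __init__(self, initial):
        self.state = initial
        self._undo = []  # past states
        self._redo = []  # states undone, available for redo

    def apply(self, new_state):
        self._undo.append(self.state)
        self._redo.clear()  # a new action invalidates the redo branch
        self.state = new_state

    def undo(self):
        if self._undo:
            self._redo.append(self.state)
            self.state = self._undo.pop()
        return self.state

    def redo(self):
        if self._redo:
            self._undo.append(self.state)
            self.state = self._redo.pop()
        return self.state

doc = UndoHistory("hello")
doc.apply("hello world")
doc.undo()   # back to "hello"
doc.redo()   # forward to "hello world"
print(doc.state)
```

Because undo and redo are cheap and safe, the user can explore freely instead of being trapped by a mistaken choice.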
