Human-Computer Interaction
- 16. Expert Evaluation
Recap: Think Aloud
Ask the user to think aloud while performing tasks on a system. Observers watch and learn how the user thinks about the task and where the user has problems using it. Observers record what users say, without attempting to interpret their actions and words.
Recap: Focus Groups
An exploratory research method used to help researchers gather in-depth, qualitative information about participants' attitudes and perceptions relating to concepts, products, services, or programs. The group interaction produces data and insights that individual data collection does not provide (a big difference from interviews).
Recap: A/B Testing
Comparing two (or more) versions of a web page to see which one performs better, by showing the variants (A and B) to similar visitors at the same time.
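As a minimal sketch of the assignment step (all names here are illustrative, not from the slides), visitors can be split between variants with a stable hash of their id, so a returning visitor always sees the same version:

```python
import hashlib

def assign_variant(visitor_id: str, variants=("A", "B")) -> str:
    """Deterministically map a visitor to a variant.

    Hashing the visitor id (rather than using random.choice) ensures
    the same visitor always sees the same page version across visits.
    """
    digest = hashlib.sha256(visitor_id.encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]

# Similar visitors are split between the versions at the same time:
counts = {"A": 0, "B": 0}
for i in range(1000):
    counts[assign_variant(f"visitor-{i}")] += 1
```

A real experiment would also need to log which variant each visitor saw and compare a success metric between the two groups.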
Evaluation asks:
- Is it any good?
- Does the interface (between a system and user) meet requirements and criteria?
- Are the users able to complete all important tasks?
Usability: "The effectiveness, efficiency and satisfaction with which a specified set of users can achieve a specified set of tasks in a particular environment." (ISO)
What about evaluating only the final product? Problems found that late are expensive to fix and may require a "redesign".
Evaluate at every stage: paper sketches, wireframing, interactive prototyping, and coding, with user testing after each.
- Users' opinions: how do users think they will perform on a system?
- Experts' opinions: how do experts think the users will perform on a system?
Cognitive walkthrough: a usability evaluation method in which one or more evaluators work through a series of tasks and ask a set of questions from the perspective of the user. The focus of the cognitive walkthrough is on understanding the system's learnability for new or infrequent users: people who learn by exploring the system rather than by reading a manual or following a set of instructions.
First, you need to define the tasks. Then you need a complete, written list of the actions needed to complete each task.

Example task: create a customized voicemail message on an iPhone.
Actions:
1. Tap Voicemail
2. Tap Greeting
3. Tap Custom
4. Tap Record and speak your greeting
5. When you finish, tap Stop
6. To listen to your greeting, tap Play
7. To re-record, repeat steps 4 and 5
8. Tap Save
Sometimes defining the tasks is all you need to do to realize there is a problem with the interface.
(e.g., http://buenavista.typepad.com/buena_vista/2007/06/the_mobile_user.html)
The cognitive walkthrough is structured around three questions that you ask of every step (or action) in the task, before, during and after each step. If you find a problem, you make a note and then move on to the next step of the task.
1. Visibility: Is the control for the action visible to the user?
2. Affordance: Is there a strong link between the control and the action? (Will the user notice that the correct action is available?)
3. Feedback: Is feedback appropriate? (Will the user properly interpret the system response?)
1. Visibility finds problems with hidden or obscured controls (e.g., is the button visible?), with context-sensitive menus, and with controls buried too deep within a navigation system. Controls that are non-standard or unintuitive are identified here as well.
2. Affordance (will the user notice that the correct action is available?) finds problems with ambiguous or jargon terms, or with other controls that look like a better choice.
3. Feedback (will the user properly interpret the system response?) finds problems when feedback is missing, easy to miss, too brief, poorly worded, inappropriate, or ambiguous. For example, does the system prompt users to take the next step in the task?
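One way to keep walkthrough notes consistent is to record, for every action in the task, the answer to each of the three questions. A sketch of such a record (the structure and names are my own, not part of the method):

```python
from dataclasses import dataclass, field

# The three questions asked of every step in the task.
QUESTIONS = ("visibility", "affordance", "feedback")

@dataclass
class StepResult:
    action: str
    answers: dict            # question -> True (ok) or False (problem)
    notes: list = field(default_factory=list)

def problems(results):
    """Collect every (action, question) pair flagged as a problem."""
    return [(r.action, q) for r in results
            for q in QUESTIONS if not r.answers.get(q, True)]

walkthrough = [
    StepResult("Tap Voicemail",
               {"visibility": True, "affordance": True, "feedback": True}),
    StepResult("Tap Custom",
               {"visibility": False, "affordance": True, "feedback": True},
               notes=["Control hidden behind the Greeting screen"]),
]
```

`problems(walkthrough)` then lists exactly which step failed which question, which feeds directly into the note-taking the method calls for.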
Basically, any UI expert can conduct a cognitive walkthrough; however, there is a risk that someone who is already familiar with your jargon, language and system is going to miss things that someone who lacks that familiarity would find. If you have to use someone who is very familiar with the product, make sure they have user personas to hand – to try and guide them to “walk a mile in the user’s shoes”.
Heuristic: a principle or "rule of thumb" that can be used to identify usability problems in interaction design. In a heuristic evaluation, a researcher walks through a product, compares it to the heuristics, and makes their own assessment as to whether the product follows these rules of thumb.
1. Know what you will test and how: before you begin any form of usability testing or user research, articulate an objective for your testing.
2. Understand the users: you also need some background on your users. This form of testing doesn't involve users, so your evaluators need to be able to act on behalf of the user.
3. Hold a briefing session to tell the experts what to do, and provide them with task descriptions.
4. Evaluation: each expert works separately, taking one pass to get a feel for the product and a second pass to focus on specific features.
5. Hold a debriefing session in which the experts work together to prioritize the problems found.
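The debriefing step can be supported by a simple aggregation: each expert assigns a severity rating to each problem found, and problems are ranked by average severity. A hypothetical sketch (the rating scale, problem names, and function are illustrative, not prescribed by the method):

```python
from collections import defaultdict

def prioritize(findings):
    """findings: list of (expert, problem, severity 0-4) tuples.

    Returns (problem, mean severity) pairs, highest severity first,
    so the team can fix the worst problems first.
    """
    scores = defaultdict(list)
    for _expert, problem, severity in findings:
        scores[problem].append(severity)
    ranked = sorted(scores.items(),
                    key=lambda kv: sum(kv[1]) / len(kv[1]),
                    reverse=True)
    return [(p, sum(s) / len(s)) for p, s in ranked]

findings = [
    ("expert-1", "label uses internal jargon", 3),
    ("expert-2", "label uses internal jargon", 4),
    ("expert-1", "no feedback after saving", 2),
]
ranking = prioritize(findings)
```

Averaging across evaluators reflects why more than one evaluator is used: a problem several experts rate as severe is unlikely to be one evaluator's idiosyncrasy.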
Visibility of system status: keep users informed about what is going on (for example, show progress when response times are long, and acknowledge every action). Users should always be aware of what is going on.
Match between system and the real world: the elements and terms used in your system should match those used in the real world as closely as possible.
User control and freedom: users don't like to be trapped! Provide strategies such as undo, cancel, and clearly marked exits so users are free to perform and reverse actions, without being annoying or overprotective.
Consistency and standards: be consistent and follow accepted industry standards in your site design. There are many accepted conventions on the Internet.
Help users recover from errors: give a precise description of what the error is, why it occurred, and possible solutions for recovering from it.
Error prevention: eliminate error-prone conditions, or check for them and ask for confirmation. Aid users in specifying correct input.
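The three parts of a good error message (what happened, why, and how to recover) can be kept together with a small helper. An illustrative sketch, not from the slides; the field names and example are mine:

```python
def error_message(what: str, why: str, recovery: str) -> str:
    """Compose an error message with all three recommended parts:
    what the error is, why it occurred, and how to recover."""
    return f"{what} {why} {recovery}"

def validate_age(raw: str):
    """Aid the user in specifying correct input: check before acting.

    Returns None when the input is valid, otherwise a precise,
    recoverable error message.
    """
    if not raw.strip().isdigit():
        return error_message(
            "Age was not saved.",
            "The value must be a whole number (e.g., 34).",
            "Please remove any letters or symbols and try again.")
    return None  # no error
```

Validating before acting is the error-prevention half; the structured message is the error-recovery half.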
Recognition rather than recall: minimize the user's memory load by making objects, actions, and options visible.
This matters especially when it comes to finding content on your site. Flexibility and efficiency of use: provide accelerators so that users can achieve their goals in an efficient manner.
Aesthetic and minimalist design: do not offer more than is required for the user to perform a task, and be aesthetically pleasing. Apply Occam's razor: remove or hide irrelevant or rarely needed information, since it competes with important information on screen. Present information in a natural order.
Help and documentation: help should be easy to search, focused on the user's task, and should list concrete steps to carry out.
Advantages
- Expert evaluation is relatively quick and cheap, and does not require recruiting users. This is especially true if you are only going to use a single evaluator.
- Experienced evaluators can offer informed insight into the user experience.

Problems
- A single evaluator will miss many problems, so the use of more than one evaluator is recommended.
- Findings are expert judgments and may be open to debate.
- Skilled evaluators are scarce, so you may need to use less skilled evaluators, whose findings may not be as valuable.
Usability heuristics for mobile computing:
1. Visibility of system status and losability/findability of the mobile device
2. Match between system and the real world
3. Consistency and mapping (standards)
4. Good ergonomics and minimalist design
5. Ease of input, screen readability and glancability
6. Flexibility, efficiency of use and personalization
7. Aesthetic, privacy and social conventions
8. Realistic error management
Student discussion:

Christine: In terms of the set of mobile usability heuristics, I immediately thought of certain things while reading each heuristic. For instance, when reading the description for Heuristic 2, match between system and the real world, I instantly thought of the ability of the system to adapt to the real-world environment in terms of lighting. For example, when a person walks out of a building into bright light, their phone's brightness setting should automatically adjust so the user can see the information on the screen. In addition, Heuristic 5, ease of input, screen readability and glancability, made me think of Apple and how they have mastered this over the years.

Beatrice: Though this article is about 12 years old and mobile phones have gone through major changes, I'm not surprised that the heuristics it identified still apply, because heuristics derive mostly from human behavior, not technology... Apple was able to follow some of the article's heuristics with this feature: the aesthetic of a minimalist design to unlock it, and thoroughly conquering the ease-of-input heuristic by reducing or avoiding the need for the user to use both hands. Thus, I think that standard heuristic evaluation for user experience and technology is slow to change, but there must be specific considerations with the advancement of features in mobile computing.

Seiji: Though mobile computing was far less advanced than it is today, I was surprised to find that most of the mobile heuristics mentioned in this paper can still be applied to today's mobile devices. However, there are probably many more heuristics needed to spot all problems, because technology has advanced so much since 2006... I wonder whether this set of mobile heuristics has been standardized like Nielsen's heuristics, or whether they were only used for this particular research paper.