“Humans are so overwhelmed with technology that usability and the entire user experience are essential to keeping a product afloat.”
Usability is an attribute of a product that describes how easy the product is to use. A user problem, then, is a barrier a person encounters when using a product. These problems are typically discovered and categorized in usability testing, which can save a good idea from failing in app form, such as the social media app Forecast. Usability testing can identify areas to improve, surface the areas of greatest frustration or value, and validate your product.
How does usability testing work? Typically, you want at least five participants. Fewer participants can mean fewer discovered issues and a skewed perspective of what takes priority, while too many participants can be costly. Research indicates that five participants will typically reveal up to 85% of problems, and will almost certainly reveal the highest-priority issues.
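That 85% figure comes from Nielsen and Landauer's mathematical model (cited below): the proportion of problems found by n participants is 1 − (1 − L)^n, where L is the probability that a single user uncovers any given problem, roughly 0.31 across the projects they studied. A minimal sketch of that model, assuming L = 0.31:

```python
# Nielsen & Landauer (1993) model: proportion of usability problems
# found with n participants, where L is the probability that a single
# user reveals a given problem (about 0.31 in their data).
def problems_found(n, L=0.31):
    return 1 - (1 - L) ** n

for n in (1, 3, 5, 10, 15):
    print(f"{n:2d} participants -> {problems_found(n):.1%} of problems")
```

With these assumptions, five participants land right around the 85% mark, and the curve flattens quickly after that, which is why adding more participants yields diminishing returns.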
Effective usability testing tells us not only what problems a user encounters, but also whether each problem is a show-stopper or a minor cosmetic issue.
Essentially, if the user is unable to complete a task at all, the problem is critical and must be fixed right away.
Many problems merely delay a user from completing a task, so how do we prioritize them? The first step is to compile a master list of all the problems found, which also means consolidating problems that are essentially the same but worded differently by evaluators as they interpret testing sessions.
Fixing usability problems may involve a complete overhaul of the system, which can get expensive, so it is important to determine which problems are the most important to fix right away. Keep in mind that cosmetic errors may rank as low priority, but an abundance of them will seriously damage your product’s credibility.
One way to do this is to look at the time delay alone, which can range from a few milliseconds to several seconds. A user might notice a spelling error, but it rarely delays them for more than a fraction of a second. If they cannot find the search tool, however, the delay is much longer, which makes it a high-priority usability problem.
There is another layer of complexity here: what task is actually being asked of the user? If the task involves safety in any way, the consequences of failure should also factor into its priority.
A few seconds could mean a lot more when designing an emergency system for university campuses, for instance.
This can be measured by looking at overall errors in usability testing. A user may have completed a task, but what percentage of users completed it incorrectly?
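As a rough sketch of that measurement, assuming hypothetical session records in which we note whether each participant completed the task and, if so, whether they completed it correctly:

```python
# Hypothetical usability-test session results: did each participant
# complete the task, and was the completion correct?
sessions = [
    {"completed": True,  "correct": True},
    {"completed": True,  "correct": False},
    {"completed": True,  "correct": True},
    {"completed": False, "correct": False},
    {"completed": True,  "correct": False},
]

completed = [s for s in sessions if s["completed"]]
# Share of completed attempts that were done incorrectly.
error_rate = sum(not s["correct"] for s in completed) / len(completed)

print(f"Completion rate: {len(completed) / len(sessions):.0%}")
print(f"Error rate among completions: {error_rate:.0%}")
```

Tracking the error rate separately from the completion rate matters: a task everyone "completes" can still hide a serious problem if half of those completions are wrong.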
We should not dismiss these as mere “human errors,” because a user experience expert can predict such instances and put safeguards in place to prevent them.
For instance, Gmail reminds you to attach a document if you mention “attached” anywhere in your email. This has saved me on many occasions. The designers accounted for the possibility of user error and put an extra safeguard in place.
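A minimal sketch of that kind of safeguard, with a hypothetical `should_warn` check and a keyword list of my own choosing (this is not Gmail's actual implementation):

```python
import re

# Hypothetical attachment-reminder safeguard: warn before sending if
# the body mentions an attachment but no file is actually attached.
ATTACHMENT_HINTS = re.compile(r"\b(attached|attachment|enclosed)\b", re.IGNORECASE)

def should_warn(body: str, attachments: list) -> bool:
    return bool(ATTACHMENT_HINTS.search(body)) and not attachments

print(should_warn("Please see the attached report.", []))
print(should_warn("Report attached.", ["report.pdf"]))
```

The point is not the regex itself but the design principle: anticipate a predictable user error and intercept it before it causes harm.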
So when evaluators take this master list of problems and rate them by priority, they should weigh the complexities involved in the decision: delays, errors, and impact on the product’s credibility. A single evaluator will not always prioritize reliably, because of subjectivity, so priority scores should be averaged across at least three evaluators.
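As a sketch of that averaging step, assuming Nielsen's 0-to-4 severity scale (cited below) and hypothetical ratings from three evaluators:

```python
from statistics import mean

# Hypothetical ratings on Nielsen's 0-4 severity scale
# (0 = not a problem, 4 = usability catastrophe), one list per problem,
# one score per evaluator.
ratings = {
    "search tool hard to find": [4, 3, 4],
    "spelling error on home page": [1, 2, 1],
    "confirmation dialog wording unclear": [2, 2, 3],
}

# Average across evaluators, then sort highest priority first.
prioritized = sorted(
    ((mean(scores), problem) for problem, scores in ratings.items()),
    reverse=True,
)
for avg, problem in prioritized:
    print(f"{avg:.2f}  {problem}")
```

Averaging smooths out any one evaluator's bias, and sorting by the averaged score gives the team a defensible fix-first order.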
Usability testing adds cost and time to fix the problems it uncovers. Without it, however, you may release a product to market that will fail. Humans are so overwhelmed with technology that usability and the entire user experience are essential to keeping a product afloat.
Once a product fails, users can be very unforgiving, and the company may never bounce back from a failed product launch.
It is worth the time and investment to ensure that a product provides a good user experience, so that it will have longevity in the market.
Please see the links below for more information:
Nielsen, J. (n.d.). Severity ratings for usability problems. Retrieved February 2, 2015, from Useit.com: http://www.useit.com/papers/heuristic/severityrating.html
Travis, D. (2009). How to prioritize usability problems. Retrieved February 2, 2015, from Userfocus.com: http://www.userfocus.co.uk/articles/prioritise.html
Nielsen, J. (2012). Usability 101: Introduction to usability. Retrieved February 2, 2015, from Nngroup.com: http://www.nngroup.com/articles/usability-101-introduction-to-usability/
Nielsen, J., & Landauer, T. K. (1993, May). A mathematical model of the finding of usability problems. In Proceedings of the INTERACT’93 and CHI’93 conference on Human factors in computing systems (pp. 206-213). ACM.
Keenan, S. L., Hartson, H. R., Kafura, D. G., & Schulman, R. S. (1999). The usability problem taxonomy: A framework for classification and analysis. Empirical Software Engineering, 4, 71-104.
Skov, M. B., & Stage, J. (2005, November). Supporting problem identification in usability evaluations. In Proceedings of the 17th Australia conference on Computer-Human Interaction: Citizens Online: Considerations for Today and the Future (pp. 1-9). Computer-Human Interaction Special Interest Group (CHISIG) of Australia.
Lavery, D., Cockton, G., & Atkinson, M. P. (1997). Comparison of evaluation methods using structured usability problem reports. Behaviour and Information Technology, 16(4), 246-266.
Ehmke, C., & Wilson, S. (2007, September). Identifying web usability problems from eye-tracking data. In Proceedings of the 21st British HCI Group Annual Conference on People and Computers: HCI… but not as we know it, 1, 119-128.
Sim, G., & Read, J. C. (2010, September). The damage index: an aggregation tool for usability problem prioritisation. In Proceedings of the 24th BCS Interaction Specialist Group Conference (pp. 54-61). British Computer Society.