Identifying & prioritizing usability barriers in a beta haptic design application

Organization

Haptic technology | Corporate-sponsored academic project

Activities

Expert evaluation | Usability testing | Moderation guide design | Survey design | Task analysis

Summary

For this project, my team and I worked with an organization that builds haptic technology and was about to launch a new beta product for designing haptic experiences. No usability work had been done previously, and the release was just a few months away. Our objective was to identify major usability issues and recommend high-impact, low-effort mitigations that could be addressed before release. After completing an expert review of the product and conducting usability testing with 10 target users, we found not only many major usability barriers but also a fundamental mismatch between the product and target users' expectations. Ultimately, the organization opted to delay the product's launch.

Identifying the users & the product goals

The product we would be evaluating was still in development and not feature complete. The organization had built it as part of an ecosystem of haptic technology. Previously, the organization had designed the actual haptic experiences for its customers; the new design tool would allow those customers (who are not haptics experts) to design haptic experiences themselves. During our initial conversations with the organization, it became clear that although they had considered the different roles involved in design and implementation, they had not specified a particular product user.

We worked with the organization to narrow the focus to user interface designers, who would be working most extensively with the product. We also formulated a list of relevant characteristics for this user. Although skilled at learning and working with complex design programs, they would not necessarily have engineering experience or specific expertise in haptics. Our evaluation and testing work would adopt the perspective of this target user.

Focusing evaluations through functional analysis

The product was a complex application with many capabilities. To prioritize issues, we first needed to understand and prioritize the product's functionality. Based on our conversations with the organization, we outlined the product's core user goals as reviewing, creating, editing, and sharing haptic designs. We then completed a functional analysis of the application to identify the functions and use cases related to these goals. With support from the organization, we formulated a set of representative tasks and scenarios that addressed these prioritized functions and use cases. This work allowed us to develop a targeted framework for our expert evaluation as well as the test plan and moderation guide for our usability tests.

Identifying issues through expert review

Next, we each conducted an independent expert review of the application in the form of a cognitive walkthrough to surface task-focused issues. Since the product was not feature complete, we wanted to understand barriers to overall user success as broadly as possible rather than cataloguing discrete issues. In a cognitive walkthrough, the evaluator adopts the perspective of the target user (an interface designer) who is new to the system and attempting to complete a set of tasks.

Each task is broken into a specific order of operations (steps). For each step, a series of questions about user success is asked, and the outcome is designated "success," "partial success," or "fail." Barriers to success are recorded as issues and given severity ratings based on their impact on the user's ability to accomplish the step and on the overall product goals.
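
As an illustration of this scoring scheme, here is a minimal sketch of how a step-level walkthrough record could be structured. It is not the tooling we actually used; the field names and the 0-4 severity scale are assumptions for illustration only.

```python
from dataclasses import dataclass, field
from enum import Enum

class Outcome(Enum):
    SUCCESS = "success"
    PARTIAL = "partial success"
    FAIL = "fail"

@dataclass
class StepResult:
    description: str                                  # e.g. "Locate the editing panel" (hypothetical step)
    outcome: Outcome                                   # the step's success designation
    issues: list[str] = field(default_factory=list)   # barriers observed at this step
    severity: int = 0                                  # assumed scale: 0 = no issue .. 4 = "Catastrophe"

@dataclass
class TaskWalkthrough:
    task: str                                          # e.g. "Edit an existing haptic design"
    steps: list[StepResult]

    def worst_severity(self) -> int:
        """Highest severity observed across the task's steps."""
        return max((s.severity for s in self.steps), default=0)
```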

After synthesizing the data from our four individual evaluations, we had uncovered 64 usability issues, including 15 rated with our highest severity rating, "Catastrophe." The issues were summarized in three categories: 1) missing functionality needed for editing and iteration; 2) lack of context and direction for the user; and 3) divergence from standard design patterns for basic functions.

Designing usability tests

Using the findings from the expert review, we next designed a usability test that could validate those findings and surface new ones. Since the expert review pointed to potential issues with users' understanding of the system and their mental model, we wanted testing to address both discovery- and assessment-level questions:

  1. To what degree does the product support interface designers with limited or no haptics experience in generating, iterating on, and sharing haptic designs with stakeholders?

  2. What do target users (interface designers) expect from a haptic design application?


To answer these questions, we designed the test to measure user effectiveness, efficiency, and satisfaction, collecting relevant performance, preference, and satisfaction data.

Remote testing with a beta product

Participants were recruited from professional networks using a screening questionnaire. Like our target users, all 10 participants had user interface design expertise but limited to no haptic design experience. The usability test consisted of participants completing four task scenarios with the product while thinking aloud. After completing each task, participants answered a single ease-of-use question and a set of post-task debrief questions to help us better understand their thought process. At the end of the session, each participant completed the System Usability Scale (SUS) questionnaire, with the moderator asking follow-up questions where relevant. Since one of our main objectives was to understand users' mental models, the moderator also asked follow-up questions throughout the session.

All usability testing sessions were conducted remotely over Zoom with a moderator and a data logger, and each lasted 45-60 minutes. Participants were not able to run the product directly on their own computers, so the moderator granted each participant remote-control access to the moderator's computer running the application. Since some elements of the product had not yet been developed, the moderator needed to simulate certain functions ("you have now activated [feature] and you are seeing [described]"). Given the complexity of the test setup, our pilot test was especially valuable for surfacing issues with our test logistics!

Synthesizing the data

We entered all the data from our observation guide into a spreadsheet and calculated each task's overall success rate by averaging the success rates of its steps. We also compiled and organized participant quotes and researcher observations to distill high-level insights. We found low success rates and failing satisfaction ratings, and throughout the sessions participant sentiment pointed to deep frustration with the product. Testing validated the issues we had found in the expert evaluation, but it also underscored the importance of user satisfaction and expectations. Confusion and lack of confidence had cascading consequences, and participants struggled especially when a feature seemed to match a familiar design pattern but did not. Based on these findings, and seeing the actual user impact, we increased the severity ratings of some issues.
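
As a minimal sketch of this roll-up arithmetic: the 0.5 weighting of partial successes is an assumption for illustration, while the SUS scoring follows the standard published formula (odd items contribute response minus 1, even items contribute 5 minus response, and the total is multiplied by 2.5 to give a 0-100 score).

```python
# Assumed weighting of step outcomes; we worked in a spreadsheet rather than code.
STEP_SCORES = {"success": 1.0, "partial success": 0.5, "fail": 0.0}

def task_success_rate(step_outcomes: list[str]) -> float:
    """Average the per-step scores to get a task's overall success rate."""
    return sum(STEP_SCORES[o] for o in step_outcomes) / len(step_outcomes)

def sus_score(responses: list[int]) -> float:
    """Score one participant's 10-item SUS questionnaire (responses are 1-5)."""
    assert len(responses) == 10
    # Odd-numbered items contribute (response - 1); even-numbered items contribute (5 - response).
    contributions = [(r - 1) if i % 2 == 0 else (5 - r) for i, r in enumerate(responses)]
    return sum(contributions) * 2.5  # scales the total to 0-100

# Example: one task with four observed steps, and one participant's SUS responses
print(task_success_rate(["success", "partial success", "fail", "success"]))  # 0.625
print(sus_score([4, 2, 4, 1, 5, 2, 4, 1, 4, 2]))                             # 82.5
```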

Framing the recommendations

Our findings pointed to a significant mismatch between the target user's expectations and the product. Knowing that we had bad news to deliver, we prepared our presentation carefully to both convey the severity of the issues and acknowledge where things had gone well. We also made clear that many of the issues could be addressed relatively easily, and we prioritized our recommendations into near-term and long-term work. Although it was a difficult conversation, the organization appreciated the insights. Ultimately, however, it opted to delay the launch of the product.