A methodical look at our approach, based on listening to experiences and analyzing them in order to improve them.
At TSW we talk about experience analysis, which means taking care of people’s experiences and welcoming their accounts in order to improve future digital, physical or hybrid moments.
In many cases, especially when we are dealing with a digital product (sites, apps, prototypes), the way we investigate this experience is a usability test, a methodology that allows us to assess a specific context in terms of both pleasantness and effectiveness.
In reality, however, we can also analyze a non-digital experience with a usability test.
In a usability test we generally define tasks, which are nothing more than the usage scenarios the digital product was conceived for, and we ask the selected participants to try to complete them. It is precisely they, the people, who are the fundamental fulcrum of the experience: they represent the potential future users.
Every user (a word whose Latin origin refers to someone who uses a tool) brings with them the needs we want to satisfy through our product. A participant’s feedback is priceless and is the key to designing efficiently.
With this method we can identify any experience barriers that arise while people attempt to complete the tasks. Generally, between 5 and 8 participants are enough to surface more than 80% of usability barriers.
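The 5-to-8-participant figure echoes the classic Nielsen–Landauer problem-discovery model, in which each participant is assumed to independently uncover a fixed share of the usability problems present (the commonly cited average is about 31%; that constant is an assumption from the literature, not a TSW figure). A minimal sketch of the model:

```python
# Sketch of the Nielsen–Landauer problem-discovery model.
# Assumption: each participant independently finds a share L of the
# usability barriers; L = 0.31 is the commonly cited average, used here
# purely for illustration.

def problems_found(n: int, L: float = 0.31) -> float:
    """Expected share of usability problems surfaced by n participants."""
    return 1 - (1 - L) ** n

for n in range(1, 9):
    print(f"{n} participants -> {problems_found(n):.0%} of problems")
```

With these assumptions, 5 participants already surface over 80% of the problems, and the curve flattens quickly afterwards, which is why small panels are usually enough.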
Starting from these assumptions, which always hold for this type of analysis, let’s introduce the concept of unmoderated usability tests. This method involves sending a link (generated by a dedicated platform) that submits the tasks remotely to people recruited ad hoc.
People who decide to participate read each task and try to complete it. The results can be categorized as percentages of “Success” or “Failure” for the individual tasks. There is also a space for comments, in case someone is particularly motivated. This approach generally requires a large number of participants, although by its very nature it can be very cheap.
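Aggregating those raw outcomes into per-task success percentages is straightforward. The task names and results below are invented for illustration; real platforms export similar data in their own formats:

```python
# Sketch: turning raw unmoderated-test outcomes into per-task success rates.
# The (task, outcome) pairs below are purely illustrative.
from collections import defaultdict

results = [
    ("find product", "success"), ("find product", "failure"),
    ("find product", "success"), ("checkout", "failure"),
    ("checkout", "failure"), ("checkout", "success"),
]

counts = defaultdict(lambda: {"success": 0, "failure": 0})
for task, outcome in results:
    counts[task][outcome] += 1

for task, c in counts.items():
    total = c["success"] + c["failure"]
    print(f"{task}: {c['success'] / total:.0%} success ({total} participants)")
```

Note how much is lost in this reduction: the numbers say nothing about hesitation, confusion or frustration along the way, which is exactly the gap discussed next.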
But what are we leaving out by using the remote variant? We deal only with the mere possibility of completing a task, without taking into consideration the pleasure of the experience. The level of investigation is undoubtedly more superficial.
This is why we prefer moderated usability tests: they place the person at the center of a broader listening process, one that also evaluates the quality of what is experienced, not just its effectiveness.
The role of our researchers, when they act as moderators, is literally to stay close to our client’s customer, empathizing with them and enabling them to tell us everything they perceive, even the uncomfortable parts. The moderator must encourage “thinking aloud”, a sort of stream of consciousness that is stimulated whenever the participant would normally stop commenting: instead, they are invited to complete their thoughts, making it easier for us to understand.
This is how this type of investigation allows us to go deep into people’s experiences and understand why things happen, or don’t, and why they happen in that particular way. Understanding the why will be the basis for building a genuinely new alternative. The proposed solution will then truly solve the problem, including its underlying logic, rather than merely being an attempt to fix an interface issue technically.
I’ll give you an emblematic example. The Back, Reply and Share functions are often represented with similar arrows.
If I presented them individually in an unmoderated test, I would probably get very good results in any combination of tasks (e.g. “Go back”, “Forward”). That result could well be dictated by the absence of alternatives rather than by our interface. In a moderated test, on the other hand, any perplexity could easily be intercepted. As a result, the element and the interaction with it could actually be improved.
Try to imagine this reasoning on complex dynamics such as the purchase decision. Wouldn’t you really like to understand a little more about why people behave the way they do?
For this reason, in our vision and interpretation of usability tests, we cannot give up a real understanding of our end user’s experience: it would mean losing one of the greatest values we can deliver. And that’s why you’re unlikely to hear us propose unmoderated tests.