What are cognitive biases? Can we recognize them? Do they affect our work? Do they influence the perception of our product or our communication messages? Let’s try to answer these questions.
Cognitive biases are an inevitable, fundamental part of human life. Or rather: mental shortcuts are a natural, spontaneous strategy. These patterns, called heuristics, are adopted by most people to simplify reality and the enormous amount of information it presents to us. But heuristics are not yet the infamous cognitive biases everyone talks about.
We speak of bias when a systematic error creeps into the scheme we use to make decisions: a cognitive distortion, a flaw in reasoning, which does not necessarily lead us to the wrong decision, but which nonetheless shapes the decision-making path we follow, whether consciously or unconsciously.
We have already covered the theoretical side of cognitive biases in a dedicated article, where we introduced some of the most common: availability bias, confirmation bias, and the halo effect.
Today we would like to offer a new list of five biases, the ones we encounter most often. Let's be clear: in our work we try to improve every experience for the people who live it. Concretely: a website, an app, a point of sale, an entertainment venue, and so on.
So in analyzing these experiences, both through the eyes of our experts and by listening to the people who live them, we come across cognitive biases that influence those experiences, for better or for worse. We also recognize some of them because we, as people, apply them unconsciously.
Knowing all of them is impossible (there are more than 200), but knowing at least the most important ones allows us to identify them and reflect on whether they help or harm our way of working or our business.
Let's start with the most common one, already mentioned in our previous article. Confirmation bias is widespread both online and offline and belongs to the group of biases that filter information: this pattern leads us to seek out, give weight to, or notice only the information that confirms our initial thesis.
In our work, the risk of unconsciously seeking confirmation of our own opinion lurks in every interview question and in every session we moderate, whether a workshop or a focus group. Having already noticed evidence that our expertise and experience recognize as plausible, we may be tempted to ask questions that simply confirm what we have observed.
This is where the researcher's skill shows: even with hypotheses already in mind, they must listen actively and remain open to a diversity of opinions. The trick: questions must never contain or suggest the answer we would like to hear.
In short, let's avoid what are called leading questions. "Don't you also find that the buttons on a site are more intuitive if they tell you exactly where you will land?" becomes: "How do you find this button? Where do you expect to land if you click it?" and so on.
Apophenia is the tendency to find patterns and connections in a random set of data, facts, or information: seeing a pattern where there is only randomness, in short.
This can happen, for example, during qualitative research: our clients hear the opinions of the people involved in an interview or a usability test and think these explain why a page is rarely visited, a piece of content is unpopular, or a campaign generates no leads. In reality, what we gain by listening to people are precious clues for forming hypotheses: the basis for the quantitative research phase that validates those theses.
To assess the likelihood of a link, a correlation, or a cause-and-effect relationship, we must always rely on the numbers in the literature, which for an online questionnaire, for example, indicate 120 responses per type of person.
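To give a sense of why a figure like 120 responses matters, here is a minimal sketch of the standard margin-of-error formula for a proportion. The assumptions (simple random sampling, 95% confidence, worst-case split of 50/50) are ours for illustration, not a description of any specific study design:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate margin of error for a proportion estimated from n responses.

    Assumes a simple random sample and a 95% confidence level (z = 1.96).
    p = 0.5 is the worst case, giving the widest possible interval.
    """
    return z * math.sqrt(p * (1 - p) / n)

# With ~120 responses per persona, the worst-case margin of error
# on any percentage is roughly +/- 9 points.
print(f"{margin_of_error(120):.1%}")  # → 8.9%
```

Quadrupling the sample only halves the margin, which is why sample sizes are taken from the literature rather than guessed.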
A communication campaign flops, and you naturally say: "it was obvious it would go like this!" This is hindsight bias, part of the category of biases that try to make sense of the world: it occurs when you treat the outcome of an event that has already happened as obvious or predictable, even though, before it happened, you had no concrete elements to support that hypothesis.
This is why we rely on listening through qualitative and quantitative research, as well as on the web: we don't proceed blindly, and we can't say "well, I knew it would go this way".
We encounter anchoring bias when we have too much information available and must make a decision: we often rely on the first piece of information we encountered, which then influences how we judge everything that follows.
We encounter it when, during research, we investigate the perceived value of a product or service. Take the real estate sector, for example, and in particular the portal of an agency we have worked with. When someone searches for a rental property, the listing that best matches their search, and therefore has the highest "findability", is displayed first and, through anchoring, sets the person's expectations.
If the first property presented has a high price, perhaps above the market average for its characteristics, the potential tenant will evaluate the subsequent properties, priced more in line with the market average, and perceive them as better deals.
Outcome bias occurs when the quality of a decision is judged by its outcome rather than by the soundness of the reasoning behind it.
This happens, for example, when a client asks us to create a landing page but has no intention of investing in research, in this case a test to evaluate its effectiveness. The landing page is then designed on the basis of experience, skill, and common sense: all valid ingredients.
But without involving people, we have no certainty that it is the best possible result. The landing page may perform well anyway, and the client may tell us: the test wasn't worth doing, it works regardless. In reality we should ask ourselves: are we sure it could not have worked even better?
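One common way to put the question "could it have worked even better?" on a quantitative footing is to compare two landing-page variants with an A/B test and a two-proportion z-test. The sketch below uses entirely hypothetical traffic and conversion numbers, purely to show the mechanics:

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """z-statistic comparing two conversion rates, using a pooled standard error."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical numbers: variant A converts 50/1000 visitors, variant B 80/1000.
z = two_proportion_z(50, 1000, 80, 1000)
print(f"z = {z:.2f}")  # |z| > 1.96 suggests a real difference at ~95% confidence
```

Without a test like this, "it performs well" tells us nothing about the variant we never tried.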
These five are just a few examples of the biases we encounter in our daily lives. Knowing them, and knowing how to spot them in real contexts, is part of the skill set that comes from the study of and research on human beings, which at TSW is carried out with the research team. All this, combined with technologies and spaces where we let our clients experience their customers' experiences firsthand, allows us to design better products and services together with companies and people, in both physical and digital contexts.