We have launched a project to analyze how generative AI describes and compares offerings in the decision-making process, starting from people’s information needs.

Today, people discover, compare, and evaluate brands and solutions in environments that didn’t exist until recently. Interaction spaces are changing, search methods are evolving, and generative artificial intelligence is becoming one of the places where expectations and selection criteria are formed.
In this scenario, Verti Assicurazioni, a direct subsidiary of the Mapfre group, has entrusted us with a project based on our proprietary AMPLIF-AI infrastructure. The goal is not to establish a new channel, but to understand a deeper dynamic: when intermediation occurs through generative systems, what matters is not just being present, but understanding how the brand is described and positioned in competitive comparisons.
Measuring the brand experience in LLMs means observing what happens before a choice is made and understanding how people’s information experience is constructed: the needs, expectations, and criteria that guide their search and decision-making.
In the insurance industry, as in many other fields, decisions require comparison and verification. People analyze options, seek clarification, and evaluate conditions. This phase is increasingly occurring through generative systems that collect and synthesize information, comparing different solutions.
In this context, the answer is not just information. It is content that highlights certain aspects rather than others, associates the brand with particular characteristics, and places it in direct comparison with specific competitors, defining strengths and weaknesses. It is through this comparative synthesis that decisions begin to take shape.
From this awareness came Verti’s need to transform this evidence into a method of continuous observation: measuring and interpreting over time how its positioning is reconstructed within these environments.
AMPLIF-AI rests on a precise methodological premise: before querying generative models, it is necessary to understand how people use them. Each project therefore begins with a qualitative research phase with real people, aimed at observing how generative artificial intelligence enters their information-seeking and comparison journeys.
We analyze how customers and prospects use these tools in the decision-making process: what questions they ask the models, when they interrogate them, and what criteria emerge when evaluating different offers. This phase allows us to identify people’s real information needs and understand how AI intermediation is changing the customer journey.
Starting from these insights, we configure our proprietary AMPLIF-AI infrastructure, adapting it to the competitive context. The questions that emerge from the research become the basis for a system of structured queries that amplifies these logics across the main large language models.
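As an illustration of how research questions can become a structured query set, here is a minimal sketch. The templates, brand names, and markets are invented placeholders (in the real project these come from the qualitative research phase), and the code is an assumption for illustration, not AMPLIF-AI’s actual implementation.

```python
from itertools import product

# Invented example templates; in practice these are derived from the
# questions real customers and prospects ask generative models.
TEMPLATES = [
    "What are the pros and cons of {brand} car insurance in {market}?",
    "Compare {brand} with other direct insurers in {market}.",
]

BRANDS = ["Verti", "CompetitorA", "CompetitorB"]  # placeholder competitor set
MARKETS = ["Italy"]

def build_queries(templates, brands, markets):
    """Expand every template against every brand/market combination,
    producing the structured query set to submit to the models."""
    return [
        {"template": t, "brand": b, "market": m,
         "prompt": t.format(brand=b, market=m)}
        for t, b, m in product(templates, brands, markets)
    ]

queries = build_queries(TEMPLATES, BRANDS, MARKETS)
print(len(queries))  # 2 templates x 3 brands x 1 market = 6 queries
```

Systematic expansion of this kind is what makes the results replicable: the same query set can be re-run periodically to track how responses change over time.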
The system submits thousands of queries to the models, building a large, replicable database that lets us observe how the brand is represented in their responses. Specifically, we analyze which aspects are highlighted, which characteristics are associated with the brand, and how it is positioned against specific competitors.
The results are organized into dedicated dashboards and interpreted strategically, resulting in operational recommendations: updating information assets, reviewing communication priorities, and continuously monitoring the brand’s positioning in the generative systems.
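The submission and aggregation steps described above can be sketched as follows. The `ask_model` stub stands in for real LLM API calls, and the mention count is one simple, hypothetical example of a metric a dashboard could track; none of the names or prompts reflect the actual system.

```python
from collections import Counter

# Hypothetical prompts produced by the query-expansion step; in production
# there would be thousands, one per template/brand/market combination.
PROMPTS = [
    "What are the pros and cons of Verti car insurance?",
    "Compare Verti with CompetitorA for a young driver in Italy.",
    "Is CompetitorB cheaper than Verti in Italy?",
]

BRANDS = ["Verti", "CompetitorA", "CompetitorB"]  # placeholder competitor set

def ask_model(prompt: str) -> str:
    # Stub standing in for a real LLM API call; the actual system queries
    # each provider's API and persists responses in a database.
    return f"[model answer to: {prompt}]"

def collect_responses(prompts):
    """Submit every prompt and keep (prompt, response) pairs for analysis."""
    return [(p, ask_model(p)) for p in prompts]

def mention_counts(records, brands):
    """Count in how many responses each brand appears: a basic visibility
    metric that a dashboard could monitor over repeated runs."""
    return Counter(b for _, resp in records for b in brands if b in resp)

records = collect_responses(PROMPTS)
counts = mention_counts(records, BRANDS)
```

Running the same collection on a schedule and comparing the resulting metrics over time is what turns a one-off snapshot into the continuous monitoring the project calls for.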
The infrastructure is hosted on GDPR-compliant systems, ensuring data control, traceability, and governance.
The ongoing transformation does not distinguish between digital and traditional organizations. More and more exploration paths begin with generative interaction that synthesizes information, compares alternatives, and offers an initial overview of the available offerings.
The question is no longer whether these spaces will become relevant, but how to understand their logic and integrate it into communication and product strategies.
AMPLIF-AI was born for this: to combine listening and measurement in a single methodological framework, applying to generative systems the same approach that has always guided our projects — starting from observing people’s behavior to understand how decision-making processes are changing.