Not all information related to this case is shared, especially images and data, in order to respect the confidentiality of Activobank's business. In this case, sharing images of screens and test results was not possible.
Problem
Activobank is an innovative, mostly digital bank. Its signature is making all banking processes simple. It revolutionized the Portuguese banking landscape by simplifying processes that other banks claimed were impossible, all within strict rules of security and data protection. It was the first bank to have a dedicated UX Research lab, fully equipped with eye-tracking technology used extensively for several purposes. I led that lab and conducted eye-tracking studies for the Design, Product, and Marketing departments.
For this particular case, the idea was to add a new banner to the app, and the designers wanted to know its best position. I responded, "Do you even know if our users see the banners in our app?". The phenomenon of banner blindness is well documented, but it has mainly been proven in desktop settings, not so much on mobile. Since the visual field is narrowed when looking at a mobile device, there is a belief that in this context banners are seen. Seen, not only perceived. The difference between "seen" and "perceived" is about attention. When something is seen, the user's attention is directed toward the element, thus influencing decisions, thoughts, and feelings. If it is only perceived, it is captured by the eye but not long enough to get the user's attention, hence the "blindness" effect.
The initial research question was therefore already biased, because it assumed users saw the banners in the app. So we decided to test first whether they actually saw the banners, and how these could eventually influence their decisions, thoughts, and feelings. The banners were in two locations in the app, but both carried the same information: a promotion of personal credit with a lower interest rate.


What was done
I devised an experiment where we would have two versions of the same app:
- Without banners
- With banners
I also created two different scenarios that set the stage for the task section of the test:
- It is payday, and the user goes to the bank app to check if the salary is already available in their bank account;
- The user has a large expense to pay, and goes to the app to see how much money is available, only to find that they don't have any money to pay that bill.
The second scenario served to prime the user towards a need for money, and therefore to be more prone to see the personal credit banner. The first scenario was the control condition, where no priming exists. The separation between having a banner or not was meant to reveal differences in task completion success and effectiveness depending on the banner's presence. Since I arranged 30 participants for each group, normality could be assumed (with n = 30 per group, the sampling distribution of the mean approaches normality), and an ANOVA was conducted to look for significant differences between groups, using a significance level of p = 0.05.
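As an illustration of that analysis, here is a minimal sketch of a one-way ANOVA across the four groups; the effectiveness scores below are placeholder values, not the study's data:

```python
# A minimal sketch of the between-groups comparison, assuming one task
# effectiveness score per participant. All values are illustrative.
from scipy import stats

# Hypothetical scores for the four independent groups
# (app version x scenario); the real study had n = 30 per group.
no_banner_control = [0.91, 0.88, 0.95, 0.90, 0.87]
no_banner_primed = [0.89, 0.92, 0.90, 0.86, 0.93]
banner_control = [0.84, 0.87, 0.82, 0.88, 0.85]
banner_primed = [0.83, 0.86, 0.81, 0.87, 0.84]

# One-way ANOVA across the four groups, alpha = 0.05.
f_stat, p_value = stats.f_oneway(
    no_banner_control, no_banner_primed, banner_control, banner_primed
)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Significant difference between at least two groups.")
else:
    print("No significant difference detected.")
```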
I had four independent user groups, distributed by app version and scenario, and conducted a quantitative comparative study between the four groups. The dependent variable was, of course, seeing the banner. What I defined as "seeing" the banner was a sustained visual focus on the banner area for no less than 200 ms. This is roughly the minimum time it takes for an image to get from the eye to the visual cortex, and thus plausibly trigger the user's attention.
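To make that criterion concrete, here is a minimal sketch of how "seen" could be computed from raw gaze samples; the sample format, AOI coordinates, and function names are illustrative assumptions, not the lab's actual tooling:

```python
# A minimal sketch of the "seen" criterion, assuming gaze samples as
# (timestamp_ms, x, y) tuples and a rectangular banner area of interest.
from typing import Iterable, Tuple

BANNER_AOI = (0, 900, 1080, 1080)  # hypothetical (x0, y0, x1, y1) in pixels
SEEN_THRESHOLD_MS = 200            # minimum sustained focus to count as "seen"

def in_aoi(x: float, y: float, aoi: Tuple[int, int, int, int]) -> bool:
    x0, y0, x1, y1 = aoi
    return x0 <= x <= x1 and y0 <= y <= y1

def banner_seen(samples: Iterable[Tuple[float, float, float]]) -> bool:
    """Return True if gaze stayed inside the banner AOI for >= 200 ms."""
    dwell_start = None
    for t, x, y in samples:
        if in_aoi(x, y, BANNER_AOI):
            if dwell_start is None:
                dwell_start = t                    # a new dwell inside the AOI begins
            elif t - dwell_start >= SEEN_THRESHOLD_MS:
                return True                        # sustained focus: counts as "seen"
        else:
            dwell_start = None                     # gaze left the AOI; reset the dwell
    return False
```

In practice, the eye-tracker's own software computes fixations directly; this sketch only illustrates the 200 ms rule that separated "seen" from merely "perceived".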
Furthermore, for the second scenario group, I ran a pre-test interview where I set the scenario and asked participants what they would do in this situation. Before moving on to the task section, I said that, unfortunately, no friend or family member could loan them the money (in case their answer was to ask a friend or family member for it). I repeated this question after the task section was complete, when they had already been exposed to the banner, to test if the banner had some effect on them, even if they didn't see it.
The chosen tasks were the top tasks users perform in the app, and one of these tasks is actually to look for the bank's offer of credit products.
Eye-tracking technology was used to capture all eye movements during the task section. After each task, I conducted a Retrospective Think Aloud protocol (the only viable way to run a Think Aloud protocol in an eye-tracking test, since talking during the tasks would distort the eye movements being recorded) to gather qualitative feedback about participants' thoughts, feelings, and decisions.


Impact
The results were staggering but, paradoxically, not surprising. I designed the experiment to direct one group's attention towards the personal credit theme precisely because, if even that group didn't see the banner, there could be no doubt about the banner blindness effect. Here are the results:
- No user saw the banner, regardless of its location and of whether they were primed to be interested in personal credit. The primed group didn't change their initial response about what they would do; they never put forward the new hypothesis of getting a personal loan;
- Most users didn’t even look at the banner – perception – as if they knew this was irrelevant information without needing their visual attention directed to it. This shows a learning effect: they saw the banner in the past, and now they avoid looking at that area altogether;
- Users showed a decrease in the success and effectiveness of task completion, especially on the screen where the banner was bigger. The decrease was not highly significant, but it shows that the banner is at least somewhat damaging to the overall UX, especially considering that users open the bank's app every day.
Because of this research, both the Design and Product departments decided not to add any new banners, and they removed the banner from the location where users' performance decreased. Unfortunately, it was a half-won battle: the banner on the main screen was not removed. Beyond this immediate impact, all the departments involved recognized the value of UX Research in general and started to use eye-tracking, along with other research methods, for strategic decisions, relying more on science and users than on educated guesses.