At Best Buys, we knew our homepage had much room for improvement. Earlier in the year, we evaluated how Best Buys was performing with our users. During this study, we identified users who struggled to understand who we are, who writes our content, and the breadth of content our editors cover. These issues would cause friction in building a user base that returns to Best Buys. My colleague Raysa Marcelino was tasked with creating a new homepage to address these issues, while my role was to validate whether the new design had resolved them before development.
To help validate the homepage, I relied on Useberry, a platform that helps recruit participants and build testing plans for users to go through unmoderated.
The testing plan was set up in four parts: the first two centred on tasks followed by questions, and the third and fourth were questionnaires. All tasks were tested on both desktop and mobile.
The first task was an open exploration. The premise was that a user had landed on the Best Buys site, and we asked them to navigate it freely. In reality, the user would be navigating a prototype of the site. Afterwards, we would ask them questions about their experience.
What would this test?: We wanted to capture users' natural impressions of the website. If they had to explore, where would they navigate? How far down would they scroll? Which areas of the page did they interact with?
Afterwards, we asked questions focusing on whether they could remember what the site was called, what they expected to find, in what situations they would use Best Buys, and whether they trusted it and why or why not.
The prototypes were created in Figma, and with Useberry, we could link each prototype to the testing plan.
Task two was centred around finding content. Users were asked to find articles that focused on ear protectors for kids.
What did we want to find out?: There's more than one way to find content on Best Buys. We wanted to see if users would find the related content via the homepage, navigational menus or search. Additionally, we wanted to see if the user could find content regardless of the method.
Again, this was prototyped in Figma and linked to Useberry.
In task three, we presented screenshots of specific features (editorial picks and special deals).
What did we want to find out?: We wanted to discover whether users knew what these features did and how they affected trust levels.
We presented each screengrab and asked three open-ended questions. The questions were the same for each screengrab.
Task four asked the user about the experience as a whole. This was asked at the end so that users could interact with the prototypes as much as possible first. We asked seven open-ended questions and two ranking-based questions.
The questions were as follows:
The test was conducted with 84 participants, split between mobile and desktop users. As you can imagine, a lot of data was collected. Where possible, Useberry presents data in a numeric format; an example is a chart breaking down the percentage of users who completed or failed a task.
However, because many tasks involved open-ended questions, I had to sort through a large amount of data and pull out key quotes from users.
To analyse this, I exported the majority of the data from Useberry and uploaded it to Google Sheets. Where appropriate, I colour-coded each response according to whether the user supplied a negative, positive or neutral comment. This gave me a high-level summary of the sentiment for each question.
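The same roll-up can be sketched in code. This is a minimal illustration, not the actual study workflow (the real coding was done by hand in a spreadsheet), and the questions and sentiment labels below are hypothetical placeholders:

```python
from collections import Counter

# Hypothetical coded responses: (question, sentiment) pairs.
# In the study, each response was colour-coded by hand in Google Sheets;
# this sketch just shows the equivalent per-question tally.
coded_responses = [
    ("Do you trust this site?", "positive"),
    ("Do you trust this site?", "neutral"),
    ("Do you trust this site?", "negative"),
    ("What did you expect to find?", "positive"),
    ("What did you expect to find?", "positive"),
]

def sentiment_summary(responses):
    """Count positive/neutral/negative codes for each question."""
    summary = {}
    for question, sentiment in responses:
        summary.setdefault(question, Counter())[sentiment] += 1
    return summary

for question, counts in sentiment_summary(coded_responses).items():
    total = sum(counts.values())
    dominant, n = counts.most_common(1)[0]
    print(f"{question}: {dict(counts)} (dominant: {dominant}, {n}/{total})")
```

Even a simple tally like this makes it easy to spot which questions drew mostly negative comments and deserve a closer read of the individual quotes.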
Then, when building my report and presentation for stakeholders, I looked at the questions we wanted to answer and built a story of how our new homepage was performing with users.
Key Insights:
To read the full presentation, click the link here.