Homepage Validation

Client
Global Savings Group
Skills
UX Design

Project Overview

At Best Buys, we knew our homepage had room for improvement. Earlier in the year, we evaluated how Best Buys was performing with our users. During this study, we found that users struggled to understand who we are, who writes our content, and the breadth of topics our editors cover. These issues created friction in building a user base that would return to Best Buys. My colleague Raysa Marcelino was tasked with designing a new homepage to address these issues, while my role was to validate that the new design had resolved them before development.

The Setup

To help validate the homepage, I used Useberry, a platform that recruits participants and lets you build testing plans for users to complete unmoderated.

The testing plan was set up in four parts. The first two centred on tasks followed by questions, and the third and fourth were questionnaires. All tasks were tested on both desktop and mobile.

Task 01: Exploration Task

The first task was an open exploration. The premise was that the user had landed on the Best Buys site, and we asked them to navigate it freely. In reality, they were navigating a prototype of the site. Afterwards, we asked them questions about their experience.

What would this test? We wanted to capture users' natural impressions of the website. If left to explore, where would they navigate? How far down would they scroll? Which areas of the page would they interact with?

Afterwards, we asked questions focusing on whether they could remember the site's name, what they expected to find, in what situations they would use Best Buys, and whether they trusted it and why or why not.

Task 01: Test Builder on Useberry

The prototypes were created in Figma, and with Useberry we could link each prototype to the testing plan.

Prototype Screengrab

Task 02: Finding Content

Task two was centred around finding content. Users were asked to find articles that focused on ear protectors for kids.

What did we want to find out? There's more than one way to find content on Best Buys. We wanted to see whether users would find the related content via the homepage, the navigation menus or search. Additionally, we wanted to see whether users could find the content regardless of the method.

Task 2: Test Builder on Useberry

Again, this was prototyped in Figma and linked to Useberry.

Prototype on Figma

Task 03: Feature Specific Questions

In task 3, we presented screenshots of specific features (editorial picks and special deals).

What did we want to find out? We wanted to discover whether users understood what these features do and how they affected trust.

We presented each screengrab and asked three open-ended questions. The questions were the same for each screengrab.

  1. Where do you think this section/link will lead you to?
  2. How does this section affect your trust factor on Best Buys?
  3. Does this section increase or decrease your trust in the site content?
Task 03: Task Builder on Useberry

Task 04: Generic Questions  

Task 4 asked the user about the experience as a whole. This was asked at the end to allow the user to interact with the prototypes as much as possible. We asked seven open-ended questions and two ranking-based questions.

The questions were as follows:

  1. What are your thoughts on the page?
  2. What drew your eye at first on the page?
  3. What did you like the most?
  4. What did you dislike the most?
  5. Do you think there was something missing on the page? If so, what and how could it be improved?
  6. Would you use this publication's website based on the homepage?
  7. Do you think the page provides an overview of what can be found on the site?
  8. From 1 to 5, with 1 meaning you have no trust and 5 meaning you have high trust, how much do you trust Best Buys?
  9. From 1 to 5, with 1 meaning never and 5 meaning frequently, how likely are you to use Best Buys?
Task 4: Task Builder on Useberry

Analysing our Results

The test was conducted with 84 participants, split between mobile and desktop users. As you can imagine, a lot of data was collected. Where possible, Useberry presents data in a numeric format, for example a chart breaking down the percentage of users who completed or failed a task.

Chart Provided by Useberry

However, because many tasks involved open-ended questions, I had to sort through a large amount of data and pull out key quotes from users.

To analyse this, I exported the majority of the data from Useberry and uploaded it to Google Sheets. Where appropriate, I colour-coded each response according to whether the user supplied a negative, positive or neutral comment. This gave me a high-level summary of the sentiment for each question.
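The same tally the colour-coding produces in Google Sheets can be sketched in a few lines of Python. This is a minimal illustration with made-up responses and labels, not the actual export from Useberry:

```python
from collections import Counter

# Hypothetical (question, sentiment) pairs; in practice the sentiment
# label is the manual colour-code applied to each exported response.
responses = [
    ("What are your thoughts on the page?", "positive"),
    ("What are your thoughts on the page?", "neutral"),
    ("What are your thoughts on the page?", "negative"),
    ("What did you dislike the most?", "negative"),
    ("What did you dislike the most?", "negative"),
]

# Tally sentiment per question for a high-level summary.
summary = {}
for question, sentiment in responses:
    summary.setdefault(question, Counter())[sentiment] += 1

for question, counts in summary.items():
    total = sum(counts.values())
    breakdown = ", ".join(
        f"{label}: {count / total:.0%}" for label, count in counts.most_common()
    )
    print(f"{question} -> {breakdown}")
```

With real data, the per-question percentages are what feed the high-level sentiment summary described above.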

Colour Coding Responses by Sentiment

Then, when building my report and presentation for stakeholders, I revisited the questions we wanted to answer and built a story of how our new homepage was performing with users.

Results

Key Insights:

  1. 71.59% of users correctly identified Best Buys and the content we deliver.
  2. 63.34% of users were able to identify our content by looking at the homepage alone.
  3. On a trust scale of 1 to 5, 10.1% of users rated us a 5, 31.2% a 4, 33.9% a 3, 22% a 2 and 2.8% a 1.
  4. 83.3% of users successfully found content relating to ear defenders for kids on desktop, and 100% found the same content on mobile.
  5. Users liked the sections dedicated to the editors, as they showed the content is written by real people, and they found the layout easy to use.
  6. Users found some content to be biased and would like fairer articles.
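The trust-rating distribution in the insights above can be collapsed into a single mean score as a quick summary. A minimal Python sketch using the reported percentages:

```python
# Trust ratings from the key insights: percentage of users per score (1-5).
distribution = {5: 10.1, 4: 31.2, 3: 33.9, 2: 22.0, 1: 2.8}

total_pct = sum(distribution.values())  # sanity check: should be ~100%
mean = sum(score * pct for score, pct in distribution.items()) / total_pct
print(f"Mean trust rating: {mean:.2f} / 5")
```

This works out to a mean of roughly 3.24 out of 5, which matches the overall picture of moderate trust the individual percentages suggest.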

To read the full presentation, click the link here.
