As a Fjord consultant, I signed a confidentiality agreement.
Thus, in this case study I will only describe the process.

Context

During my summer internship at Fjord, Stockholm (a design agency that is part of Accenture Interactive), I conducted a usability study of the booking flow for an airline client. I arrived towards the end of the project: a complete overhaul of the booking flow. It was therefore the perfect time to validate design alternatives and discover usability issues.

Senior interaction designer Caitlin Sullivan was my supervisor, and the study was my main responsibility. Caitlin provided invaluable feedback and guidance throughout the process.

Test Plan

I started by defining a test plan which included:

  • goals and objectives,
  • research questions,
  • participant characteristics,
  • test design method,
  • task list,
  • environment, equipment, logistics, and moderator role,
  • data to be collected and evaluation measures

Due to time constraints, we decided on 20 participants in an A/B test. The airline compensated each participant for their time with a flight discount. We used Silverback, a light and easy-to-use usability testing tool, to record the sessions.
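
To illustrate how a 20-person A/B split can stay balanced across traveller types, here is a minimal Python sketch; the participant IDs and segment labels are hypothetical, not the actual recruitment data:

```python
import random

# Hypothetical participant records: an ID plus the behavioural segment
# each person was recruited for (the real data is confidential).
segments = ["business", "leisure", "frequent", "occasional", "family"]
participants = [
    {"id": f"P{i:02d}", "segment": segments[i % len(segments)]}
    for i in range(1, 21)
]

random.seed(7)  # reproducible assignment

# Group participants by segment, shuffle within each segment,
# then alternate A/B so both versions see a similar mix of traveller types.
by_segment = {}
for p in participants:
    by_segment.setdefault(p["segment"], []).append(p)

groups = {"A": [], "B": []}
for members in by_segment.values():
    random.shuffle(members)
    for i, p in enumerate(members):
        groups["A" if i % 2 == 0 else "B"].append(p["id"])

print(groups)
```

Balancing within each segment keeps such a small sample from accidentally putting, say, all the business travellers in one version.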

The study consisted of:

  1. a pre-test background interview about the user’s habits
  2. a task-based usability test with three booking tasks
  3. a post-test questionnaire on user satisfaction (a personalized version of the System Usability Scale)

Screening Questionnaire

We divided participants by demographic variables such as age group and gender, but also by behavioural variables such as:

  • business vs leisure travellers
  • frequent vs occasional travellers
  • family vs single travellers

To find the right participants we built a screening questionnaire. The airline offered to distribute the survey on their official Facebook page to give it a wide reach. Our initial worry was that this would bias the sample: all participants would be devoted brand followers. Yet it was the fastest and most convenient way to get answers, so we compromised.

When I complete surveys from companies or peers, it often feels like the creator simply doesn't understand the realities of their target audience. To avoid this, we pilot-tested the questionnaire multiple times with frequently travelling colleagues at Fjord before launch. We aimed for a questionnaire that was clear and exhaustive, yet brief.

Great Responsibility

The surprise came shortly after we sent out the survey: over 400 responses in a matter of days. That’s when I felt I was working on something that truly mattered and affected a lot of people. It was an exhilarating feeling that came with a great responsibility: to do it right.

Choosing the Right Participants

With so many survey answers, we had plenty to select from. The issue now was choosing the right participants. I particularly wanted to avoid over-zealous brand fans and professional usability testers. We wanted to test with users who were representative of the company's user base, not outliers who would be inclined to be 'nice' to the brand, or people who had learned how to act during usability studies.

We first screened out:

  • incomplete answers,
  • those who didn't live in Stockholm,
  • those who didn't meet our behavioural screening criteria

Nonetheless, some of the users we tried to avoid ended up as participants: three brand fans, active on a web community dedicated to the brand (but not owned by it), and one professional usability tester.
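
As an illustration of what that first screening pass looked like, here is a minimal Python sketch. The file name, column names, and criteria values are hypothetical, since the real survey data is confidential:

```python
import pandas as pd

# Hypothetical export of the screening questionnaire (column names are made up).
responses = pd.read_csv("survey_responses.csv")

# 1. Drop incomplete answers.
complete = responses.dropna(
    subset=["age_group", "gender", "city", "travel_purpose", "travel_frequency"]
)

# 2. Keep only respondents based in Stockholm.
local = complete[complete["city"].str.strip().str.lower() == "stockholm"]

# 3. Apply the behavioural screening criteria, e.g. people who actually
#    fly at least a few times a year.
eligible = local[local["travel_frequency"].isin(["monthly", "a_few_times_a_year"])]

print(f"{len(eligible)} of {len(responses)} respondents passed the first screening")
```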

Tasks

We formulated three tasks that tested:

  • ease of use
  • visibility of relevant data
  • understanding of the content
  • different versions of the prototype

Building Working Prototypes

Different parts of the project were at different stages of development. Half of the booking flow was already implemented as a web app, but the other half was only at the hi-fi prototype stage.

So, to properly test the entire booking flow, I built a dynamic prototype in Axure, a tool I learned during this project and found easy to use and very malleable. The study was thus conducted partly on the actual web app and partly on the Axure prototype. Since we used an A/B test design, I also built different versions of the interaction.

Data Analysis

Once we gathered all the data, we had to measure quantitative and qualitative variables, such as:

  • time
  • number of errors
  • breakdowns
  • satisfaction (measured with the SUS questionnaire; see the sketch after this list)
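
For the satisfaction measure, the standard System Usability Scale converts ten 1-5 Likert answers into a 0-100 score. Here is a minimal sketch of that standard scoring; our personalized version may have worded items differently, but the arithmetic is the same:

```python
def sus_score(answers):
    """Standard System Usability Scale score (0-100) from ten 1-5 Likert answers.

    Odd-numbered items are positively worded, even-numbered items negatively worded.
    """
    if len(answers) != 10:
        raise ValueError("SUS expects exactly 10 item answers")
    total = 0
    for item, answer in enumerate(answers, start=1):
        total += (answer - 1) if item % 2 == 1 else (5 - answer)
    return total * 2.5

# Example with made-up answers from one participant:
print(sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 1]))  # -> 85.0
```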

To analyse the qualitative data, we printed all the interface elements we tested and glued them onto cardboard boards. While analysing the video footage, we visually pinpointed every usability issue and breakdown with post-it notes and stickers placed directly on the interface elements. This helped us keep track of all the problems and translate them into visual feedback for the designers.

Presentation

Once the analysis was done, we had to present the results to the client, so we synthesised the quantitative data into graphs for better insight. Concrete changes to the interface arose from the breakdowns users encountered. We also presented insights on how to improve the user experience, such as providing feedback and making the user feel safe at every point in the flow.
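
As a rough idea of the kind of graph this produced, here is a minimal matplotlib sketch comparing mean task completion times between the two prototype versions; the numbers are invented for illustration, since the real results are confidential:

```python
import matplotlib.pyplot as plt

# Hypothetical summary numbers for illustration only.
tasks = ["Task 1", "Task 2", "Task 3"]
mean_time_a = [95, 140, 210]   # seconds, version A
mean_time_b = [80, 155, 180]   # seconds, version B

x = range(len(tasks))
width = 0.35
plt.bar([i - width / 2 for i in x], mean_time_a, width, label="Version A")
plt.bar([i + width / 2 for i in x], mean_time_b, width, label="Version B")
plt.xticks(list(x), tasks)
plt.ylabel("Mean completion time (s)")
plt.title("Task completion time by prototype version")
plt.legend()
plt.show()
```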

Lessons Learned

Even though I had conducted usability studies before, this one was particularly enlightening. Working on a large-scale project that influences the lives of many made it a great learning experience. Unlike my previous, university-related experiences with usability tests, this was a real-world project, in which not everything goes by the book and things can get messy. I could not let that stop me; I had to find creative ways around it.

Yet my greatest takeaway from this experience was working with amazing, creative professionals and learning how to deal with big clients. I learned the importance of being an advocate for user-centred design in the face of any adversity, and of pushing for user validation at every stage of the design process.