It's really important to us to understand the people who will be using anything we build for a client. Our Discovery process is all about uncovering as much information and data as possible to inform our decisions in planning and designing a website. This process includes any number of research activities, including audience and stakeholder interviews, surveys, analytics review, content and UX audits, and more.
Something we sometimes hear is, “How statistically significant is your data?” Great question! Let’s talk about what statistical significance is and why it is not typically the end goal.
When working with quantitative data, statistical significance, in the simplest terms, means that the results are unlikely to be due to chance.
To provide some context without getting too technical: if you wanted to survey an audience of 2,000 people at a 95% confidence level (a fairly common goal) with a 5% margin of error, you'd need 323 survey responses to reach “significance.”
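That 323 figure can be reproduced with the standard sample-size formula (Cochran's formula) plus a finite population correction. Here is a minimal sketch in Python; the function name and defaults are ours, and p = 0.5 is the conventional worst-case assumption about response proportions:

```python
import math

def sample_size(population, z=1.96, margin=0.05, p=0.5):
    """Required survey sample size for a finite population.

    z=1.96 corresponds to a 95% confidence level; p=0.5 is the most
    conservative (largest) assumption about the response proportion.
    """
    # Cochran's formula for an effectively infinite population.
    n0 = (z ** 2) * p * (1 - p) / margin ** 2
    # Finite population correction: smaller audiences need fewer responses.
    n = n0 / (1 + (n0 - 1) / population)
    return math.ceil(n)

print(sample_size(2000))  # 323 responses for 95% confidence, 5% margin
```

Note how quickly the requirement grows with the margin you want: tightening the margin of error from 5% to 3% for the same 2,000-person audience roughly doubles the responses needed, which is one reason significance gets expensive.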
1. Usability is not the same as a clinical trial.
Statistical significance matters far more when the stakes are high, such as when validating a new drug, where the outcome might be a matter of life and death. Testing designs and understanding how audiences engage with a business or organization is less extreme and less black and white; the goal is to learn more than we knew before and to reduce uncertainty.
2. Statistical significance usually costs more money and takes more time.
The likelihood of statistical significance increases as our sample size increases. For surveys, that usually (but not always) means hundreds or thousands of responses. Often, we are working within budgets and timelines that don’t allow for the sample sizes required for statistical significance. Therefore, we abide by point number three:
3. We can make informed decisions by conducting just enough quantitative AND qualitative research.
Again, because we are making decisions about things like website designs and audience messaging, we rely on just enough research to inform those decisions. We take the sample size we can get with the budget and timeline we have, and use it to spot the big differences. Usually, it’s those big differences that provide the biggest bang for our buck in improving the user experience of a website — such as upgrading site architecture or organizing pages more meaningfully.
We also seek to corroborate our research: insights we see in survey data are backed up and strengthened with information we’ve gleaned through an analytics review and qualitative audience interviews. Our goal is to reduce as much uncertainty as possible, get to the prototype phase, and test and iterate. So while we may not reach statistical significance, we still reach meaningful and useful outcomes.
For example, as part of the Discovery work we did with Herff Jones, we looked at the analytics from their current website and also conducted a survey of their website audiences. The survey gathered 394 responses, which still wasn’t large enough to be statistically significant for their audience size. So, we used analytics to double-check some of the survey data, particularly around e-commerce usage. With these two facets of research (analytics and the survey), we were able to gain reasonable certainty that the main reason people visited the site was to make a purchase.
At the time, that was a significant insight because the organization was still sorting out for which audience the website was intended. This piece of information led to a number of discussions and deeper dives into addressing audience needs and presenting products in a useful and easy-to-navigate way. And now, the current website sees visitors successfully finding their way to products and purchases!
If you want to learn more about statistical significance, especially as it pertains to user experience research, here is some further reading:
What Does Statistically Significant Mean? from Measuring U
Quantifying the User Experience by Jeff Sauro
Risk of Quantitative Studies by Jakob Nielsen