Much of the attention given to Daniel Kahneman’s book Thinking, Fast and Slow has been about how people make decisions and the implications for models of consumer behaviour. However, the book also points out that researchers have a bias of their own: the law of small numbers. Is this bias to blame for many modern-day myths?
What is it?
It’s a general bias that makes people favour certainty over doubt. Most people, including many experts, don’t appreciate how research based on small samples or small populations can often generate extreme observations. As a result, people tend to believe that a relatively small number of observations will closely reflect the general population. This is reinforced by a common misconception that random numbers don’t generate patterns or form clusters. In reality they often do. Kahneman makes this observation:
“We are far too willing to reject the belief that much of what we see in life is random.” Daniel Kahneman, Thinking, fast and slow.
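To see how readily randomness produces apparent patterns, here is a quick sketch (my own illustration, not from the book): simulate 100 fair coin flips and look at the longest streak of identical outcomes. Streaks of six or seven in a row are routine, yet most people would read such a run as meaningful.

```python
import random

random.seed(42)

# Simulate 100 fair coin flips and find the longest run of identical
# outcomes -- purely random sequences routinely contain long "streaks".
flips = [random.choice("HT") for _ in range(100)]

longest, current = 1, 1
for prev, cur in zip(flips, flips[1:]):
    current = current + 1 if cur == prev else 1
    longest = max(longest, current)

print("longest run of identical flips:", longest)
```

Run this a few times with different seeds: a run of at least four or five identical flips appears almost every time, even though the coin is perfectly fair.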
Why are researchers prone to the law?
Kahneman acknowledges that researchers (social and behavioural scientists in his case) have too much faith in what they learn from a few observations.
A well-known example of this is the supposed ‘Mozart effect’. A study suggested that playing classical music to babies and young children might make them smarter. The findings spawned a whole cottage industry of books, CDs and videos.
The study, by psychologist Frances Rauscher, was based on observations of just 36 college students. In a single test, students who had listened to Mozart “seemed” to show a significant improvement in their performance in an IQ test. This was picked up by the media and by various organisations involved in promoting music. However, in 2007 a review of the relevant studies by Germany’s Ministry of Education and Research concluded that the phenomenon was “nonexistent”.
What is to blame for the bias?
Kahneman puts much of the blame for people being subject to the bias of small numbers on System 1. This is because System 1 jumps to conclusions, suppresses ambiguity and doubt, and constructs coherent stories from limited evidence.
Overall, Kahneman believes people are prone to exaggerating the consistency and meaning of what they see. A tendency towards causal thinking also leads people to sometimes see a relationship where there isn’t one.
Questions for researchers
Kahneman’s work raises some important questions for researchers and customer insight specialists.
As with all forms of bias, reality is characterised by a spectrum of behaviours, from the rigorous to the lax. From my experience on the client side of research, there are a number of reasons why research sometimes falls foul of the bias.
Observations from a client-side researcher!
I recently read a blog about website usability testing which claimed that small samples are fine because sampling error doesn’t really apply to usability research. This is a myth. The real reason for undertaking only a small number of tests is diminishing returns: after 5 to 10 tests, few new usability risks tend to be uncovered. The law of small numbers still applies even when it involves human behaviour.
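The diminishing-returns argument is often illustrated with the problem-discovery model popularised by Nielsen and Landauer: the share of usability problems found by n test users is roughly 1 − (1 − p)^n, where p is the probability that a single user encounters a given problem. The sketch below uses the commonly quoted average of p = 0.31, which should be treated as an assumption rather than a universal constant.

```python
# Problem-discovery model (after Nielsen & Landauer): the proportion of
# usability problems found by n users is 1 - (1 - p)^n, where p is the
# chance a single user hits a given problem. p = 0.31 is the often-quoted
# average discovery rate -- an assumption, not a law.
p = 0.31

for n in (1, 3, 5, 10, 15):
    found = 1 - (1 - p) ** n
    print(f"{n:2d} users -> ~{found:.0%} of problems found")
```

Under this assumption, 5 users surface roughly 84% of problems and 10 users about 98%, which is why additional tests yield so little — not because small samples are statistically reliable.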
Like any form of qualitative research, usability testing is a valuable way of uncovering potential risks and perceptions of a new design. However, just like traditional qualitative research, usability testing benefits from being validated using quantitative techniques (e.g. A/B or multivariate testing).
When I worked for a life insurance company I was constantly challenged about the reliability of findings from small samples. The reason was simple: almost all the senior management were actuaries, which meant they had an excellent grasp of the potential bias caused by sampling. This had the benefit that other departments were unlikely to misuse research based on small samples, because they would meet the same challenges as I did.
DIY tools have given non-researchers easy access to the means of conducting and analysing their own surveys. I am not against the use of these tools. Unfortunately, though, many non-researchers who use them may not have sufficient knowledge of sampling and statistics to correctly design surveys or analyse the resulting data. If so, non-researchers may be particularly prone to bias resulting from the law of small numbers.
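One concrete consequence of small DIY samples is a wide margin of error. A rough sketch using the standard normal approximation (worst case p = 0.5, 95% confidence) shows how imprecise a small survey really is:

```python
import math

# Rough 95% margin of error for a survey proportion, using the normal
# approximation: moe = z * sqrt(p * (1 - p) / n).
# p = 0.5 is the worst case (widest interval); z = 1.96 for 95% confidence.
def margin_of_error(n, p=0.5, z=1.96):
    return z * math.sqrt(p * (1 - p) / n)

for n in (30, 100, 400, 1000):
    print(f"n={n:4d}: +/- {margin_of_error(n):.1%}")
```

With 30 respondents the margin is around ±18 percentage points, and even 100 respondents leaves nearly ±10 points — far too wide to support many of the confident claims made from small DIY surveys.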
Key driver (multiple regression) analysis is often used to model the influence of independent variables on a single dependent variable. However, such models can only suggest, not establish, a causal relationship; further experimentation and analysis are needed to support one.
The nature of survey data (independent variables are often correlated, for example) and typical sample sizes do not always justify the use of such statistical techniques. Big data can play a key role here in providing more robust evidence for causal relationships. But without evidence to suggest a reason for a causal relationship, a correlation between two variables should be treated with the utmost caution. In a similar vein, always be sceptical about a trend line that fits too well, as this could be a Procrustean solution: one where only data that fits the trend has been selected, and data that doesn’t has been discarded.
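How often does pure noise produce a “strong” correlation in a small sample? A quick simulation (my own sketch, using completely unrelated random variables) makes the point:

```python
import math
import random

random.seed(1)

def pearson_r(xs, ys):
    """Pearson correlation coefficient, computed from first principles."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Draw many pairs of genuinely unrelated variables (n = 10 observations
# each) and count how often chance alone yields |r| > 0.5.
trials = 2000
strong = sum(
    abs(pearson_r([random.gauss(0, 1) for _ in range(10)],
                  [random.gauss(0, 1) for _ in range(10)])) > 0.5
    for _ in range(trials)
)

print(f"|r| > 0.5 in {strong / trials:.0%} of random trials")
```

With only 10 observations, something in the region of one trial in seven shows a correlation that would look impressive on a scatter plot — a reminder of why correlations from small samples deserve the caution urged above.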
This is a training issue, but I frequently see PowerPoint slides that highlight differences between sub-samples that are not statistically significant. In most research agencies the modelling and analytics are carried out by a department separate from the account executives. This is not a problem provided the account executives who present the data have sufficient understanding of the nature and limitations of the analysis. In my experience this is not always the case.
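A hypothetical example (the figures are mine, chosen for illustration) shows how a headline-friendly gap can be pure noise. Suppose one sub-sample scores 50% agreement and another 60%, with 30 respondents in each — a simple two-proportion z-test tells the real story:

```python
import math

# Two-proportion z-test: is the gap between two sample proportions
# larger than sampling noise would predict?
def two_proportion_z(p1, n1, p2, n2):
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p2 - p1) / se

# Hypothetical sub-samples: 50% vs 60% agreement, 30 respondents each.
z = two_proportion_z(0.50, 30, 0.60, 30)
print(f"z = {z:.2f}  (|z| < 1.96, so not significant at the 95% level)")
```

A 10-point difference sounds like a finding, but here z ≈ 0.78, nowhere near the 1.96 threshold for 95% confidence — exactly the sort of non-significant gap that ends up highlighted on a slide.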
When companies treat research like a commodity and constantly expect to make cost savings, there is a danger that sample sizes will be cut to the bone. As a result, studies don’t deliver the required level of reliability. I briefly worked on a multi-country brand and advertising tracking study that only had sufficient sample to analyse on a quarterly basis. This proved very frustrating, as it wasn’t sensitive enough to measure the short-term (i.e. monthly) impact of bursts of advertising activity.
Researchers are by nature pattern seekers, and this can make them susceptible to seeing phenomena that are generated by a purely random process. There is nothing wrong with this provided we treat such patterns with caution and seek further data or more robust research to test our hypotheses. This is why researchers need to be trained to present results in a balanced and critical way, so that management don’t jump to conclusions.
There is a growing expectation of having data on tap; it is a characteristic of the digital age. But sometimes this leads to pressure to analyse and communicate continuous survey data too frequently. I came across a continuous customer satisfaction survey a few years ago where high-level Key Performance Indicators were communicated to each business area on a monthly basis. However, despite most of the base sizes being far too small to identify any significant differences, the Customer Insight Manager was expected to comment on changes from the previous month’s score. This encouraged inventing reasons for changes that were not even statistically significant.
The law of small numbers gives researchers an interesting insight into our own potential fallibility. It warns us against listening to our intuition and relying on rules of thumb when determining sample size. Kahneman also provides a useful reminder to be careful about how findings are communicated when dealing with data from small samples. Research and experimentation is, after all, an iterative process, so we should always be looking to validate results, whether from large or small-scale studies. It is only through trial and error that we are ultimately able to separate insights from myths.
Thank you for reading my post and I hope it provided some useful insights.
Source: Thinking, fast and slow by Daniel Kahneman.
This post has also been published on the Green Book Blog!
The author: Neal Cole has over 20 years experience of working in market research and website optimization for some of the UK’s largest financial services providers and online retailers. Neal is currently a conversion specialist for a major online gaming company in Gibraltar. He is a full member of the Market Research Society and an Associate of the Chartered Institute of Marketing. You can follow Neal on Twitter @northresearch and view his LinkedIn profile.