Often in business, we are restricted to relatively small samples. In fact, a recent publication in the Journal of Organizational Behavior suggests that the most common type of business is a microbusiness – often defined as a business with fewer than 10 employees (Brawley & Pury, 2017). As many readers already know, almost all statistical analyses require far more participants than that. For instance, the most common recommendation for a correlation analysis is a minimum of 30 participants, and more advanced statistics often require even more – frequently in the hundreds.
But what is really the harm in having a small sample size? Can the results really be that misleading? The answer is yes.
This post discusses two concerns of small samples: power and representativeness.
Power is the probability that a statistical analysis will detect a significant result when the effect actually exists in the population…But what does that mean? Well, I’ll discuss this much more in-depth in a later post, but sample size is an important component of calculating statistical significance. Even if an effect is extremely strong in the population, a statistical test using a small sample will often fail to identify that effect as statistically significant. Weird, right?
Let’s use this example: Imagine that we are studying a pretty strong effect with a population correlation of .40, such as the relationship between self-efficacy and job performance. To study this relationship, let’s say that we use a microbusiness – one with eight employees – and we measure the self-efficacy and job performance of each employee. What is the likelihood that the resultant correlation between the two variables will be statistically significant, given that we know the population correlation is .40? Only about 15%! We would fail to reject the null more than four out of every five times!
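If you’d like to check that 15% figure yourself, here is a quick sketch in Python. It uses the common Fisher z approximation for the power of a correlation test; the function name and structure are my own for illustration, not from any particular package.

```python
import math
from statistics import NormalDist

def correlation_power(rho, n, alpha=0.05):
    """Approximate two-tailed power for testing whether a sample
    correlation differs from zero, via the Fisher z transformation."""
    nd = NormalDist()
    z_rho = math.atanh(rho)              # Fisher z of the population correlation
    se = 1.0 / math.sqrt(n - 3)          # standard error of z
    z_crit = nd.inv_cdf(1 - alpha / 2)   # two-tailed critical value
    delta = z_rho / se                   # how far the true effect sits from zero
    # Power = chance the observed z lands beyond either critical bound
    return (1 - nd.cdf(z_crit - delta)) + nd.cdf(-z_crit - delta)

print(round(correlation_power(0.40, 8), 2))  # about 0.16 -- close to the 15% above
```

The approximation lands right around the figure in the example: with only eight employees, even a population correlation of .40 is detected less than one time in five.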
Crazy! This example demonstrates one important reason to have a large sample size – with a small one, you may fail to identify significant results even when a real effect exists. To learn more about this phenomenon, I suggest reading more about statistical power (Cohen, 1992a, 1992b; Murphy et al., 2014) and playing with a sample size/power calculator (http://www.sample-size.net/correlation-sample-size/).
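And if you would rather compute a required sample size than click through a calculator, the same Fisher z approximation can be run in reverse. This is a rough sketch – exact answers differ slightly across calculators and methods:

```python
import math
from statistics import NormalDist

def n_for_correlation(rho, power=0.80, alpha=0.05):
    """Approximate sample size needed to detect a population correlation
    rho with the given power, via the Fisher z transformation."""
    nd = NormalDist()
    z_alpha = nd.inv_cdf(1 - alpha / 2)   # two-tailed critical value
    z_power = nd.inv_cdf(power)           # z for the desired power
    return math.ceil(((z_alpha + z_power) / math.atanh(rho)) ** 2 + 3)

print(n_for_correlation(0.40))  # roughly 46-47 people for 80% power at r = .40
```

So even for a fairly strong correlation of .40, you need several dozen participants just to have a good chance of detecting it – far more than a microbusiness can supply.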
Next, let’s discuss having a representative sample. Even if we have more employees, let’s say 150, there is a chance that our sample is not representative of the population. If a sample is representative, it accurately reflects the members of the population. Often, we assume that a randomly selected sample is representative, but this is not always the case. Certain people may not volunteer to take your survey, and that may skew your results…But how bad can it be?
Well, let’s look at the self-efficacy and job performance example again with a correlation of .40. If we had a representative sample of 300 people, the result might look something like this:
Not too bad – the regression line shows a clear, increasing relationship. Now, let’s take 150 of these people and graph the results again:
Woah! Big difference! Now the correlation between the two is literally .00, and we removed only half of the participants. What happened?
As you guessed, I did not take a random subset of the 300 people. Instead, I selected only those that scored five or above on the self-efficacy measure, as you can see from the differing axis labels in the two charts. This made the sample non-representative (because everyone with a self-efficacy score under five was missing), and the result was greatly different from that of the entire set of 300 people.
But could this ever happen in business? Yes! Imagine that you are feeling down about your work performance and unable to do the most basic tasks. Then, you see an email about a job survey to measure self-efficacy and performance. Would you take it? Maybe, but a lot of people would just delete the email in order to avoid facing their lackluster self-perceptions, abilities, and performance.
Also, who would typically take those surveys anyways? The grumpy employees that just want to do their work and go home? Or the goodie-goodies that do whatever their boss asks? I’d guess the latter, and the sample may not be representative of all employees.
And think about those satisfaction surveys at restaurants. Yes, people that really hated the service or really loved the service will complete them…but what about all the people in the middle? Have you ever completed a satisfaction survey when the service was just okay? I’m guessing not, which makes those results non-representative.
So, whenever you need to collect data, be sure to carefully consider your sample size – not only for statistical power, but also for representativeness. If you ignore these two aspects, then you could obtain results that are entirely misleading, and thereby implement policies that do nothing for your company – or worse!
Until next time, watch out for Statistical Bullshit! And email me at MHoward@SouthAlabama.edu if you have any questions, comments, or anything else!
Brawley, A. M., & Pury, C. L. (2017). Little things that count: A call for organizational research on microbusinesses. Journal of Organizational Behavior, 38, 917-920.
Cohen, J. (1992a). Statistical power analysis. Current Directions in Psychological Science, 1(3), 98-101.
Cohen, J. (1992b). A power primer. Psychological Bulletin, 112(1), 155.
Murphy, K. R., Myors, B., & Wolach, A. (2014). Statistical power analysis: A simple and general model for traditional and modern hypothesis tests. Routledge.