
But then again … sometimes bigger IS better!

Yes, I know. Last week I argued that small is beautiful, and now here I am contradicting myself. But then I was talking about organizational size; now I’m talking about data and the size of one’s sample.

Let’s have some context. A couple of weeks ago I got tagged on social media about a report that had just been released. Without giving away too many details, I’ll just say that it was a study with implications for criminal justice reform. In fact, ‘implications’ probably isn’t the correct word. Recommendations. It included nine recommendations for changing how the ‘system’ treats a certain category of criminal offense and the resulting crime victims. Victim-centred research, I think we can all agree, is the type of research of which we would all like to see more. Small problem, though: it was based on a sample size of thirteen (n=13). Yes, that’s correct: 13 people were interviewed to produce a set of nine recommendations for changing different aspects of the operation of the criminal justice system.

While it is the case that the exact number of victims of this type is unknown, given the prevalence of this crime we can assume that there are far more than 13 victims across the entire country of Canada, or even within the province in which the study* was conducted. Further, thanks to our friends at Juristat, we can construct a sampling frame (an estimate of the maximum number of victims) by looking at the incidence of this crime each year, and use this estimate to put together a reasonable sample size.
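To make that back-of-the-envelope math concrete, here’s a minimal sketch of the calculation, using the figures from the footnote below (57 incidents in 2016, roughly 2 surviving family members per incident). The variable names, the 10-year window, and the treatment of 2016 as a typical year are my assumptions for illustration, not the researchers’ method:

```python
# Rough sampling-frame arithmetic (illustrative assumptions; see lead-in).
ANNUAL_INCIDENTS = 57      # incidents reported in the province in 2016 (Juristat)
YEARS = 10                 # assumed look-back window
VICTIMS_PER_INCIDENT = 2   # surviving family members per incident; very low-end

midpoint = ANNUAL_INCIDENTS * YEARS        # 570 incidents if 2016 is typical
low_incidents, high_incidents = 400, 650   # range allowing year-to-year variation

low_victims = low_incidents * VICTIMS_PER_INCIDENT    # 800
high_victims = high_incidents * VICTIMS_PER_INCIDENT  # 1300

sample = 13
print(f"~{midpoint} incidents over {YEARS} years ({low_incidents}-{high_incidents} with variation)")
print(f"Sampling frame: roughly {low_victims}-{high_victims} potential participants")
print(f"Study sample: {sample}, i.e. {sample / low_victims:.1%} of even the low-end frame")
```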

There is no way to generalize from a sample of 13 (and some of those 13 people were from the same families, further limiting the utility of the interviews). To be fair: the researchers acknowledged this (somewhat) when they referred to their work as “exploratory.” Exploratory means a researcher has no idea what they’re going to find, so they collect some initial data to guide future research. What ‘exploratory’ does NOT mean is: ‘we collected 13 interviews and now are going to make recommendations for reforming bits of the criminal justice system.’ In fact, it’s a bit frightening that anyone thought that was okay. It’s even more frightening that they justified it with a statement along the lines of, ‘well, our findings fall in line with the (non-Canadian) literature, so we’ll go ahead and make some recommendations.’

Also missing are research questions. Even the most basic exploratory qualitative study is designed to answer research questions (hypotheses are found in quantitative research). Identifying these research questions is important for many reasons, not the least of which is the question of whether the study draws on an appropriate sample size. One of the things researchers must be concerned about is validity and reliability. For example, knowing what the research questions are would help us to assess whether there are sufficient numbers of participants to answer those questions (i.e., whether the researchers successfully did what they set out to do). In qualitative research, the trend is to discuss this issue as one of ‘saturation.’ Did the study collect sufficient data (points of view, experiences, beliefs, thoughts, feelings) that the inclusion of additional interviewees would not change the study’s findings? Since we do not know what the original research questions are, we have no way of knowing whether the results are valid. Again, if this were some piece of exploratory research intended to do nothing more than inform the development of another study, it might not matter. However, this study makes actual policy recommendations.
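For readers who want ‘saturation’ made concrete: one generic way to operationalize it (an illustration, not what this study did) is to track how many new themes each additional interview contributes, and to treat the data as saturated once several consecutive interviews add nothing new. A toy sketch, with invented themes and an invented stopping rule:

```python
# Toy illustration of thematic saturation: code each interview for themes,
# then check whether recent interviews still contribute unseen themes.
# The interview data and the 3-interviews-with-nothing-new stopping rule
# are made up for illustration.
interviews = [
    {"fear of reporting", "distrust of police"},
    {"distrust of police", "financial hardship"},
    {"fear of reporting", "family pressure"},
    {"family pressure"},
    {"distrust of police"},
    {"fear of reporting", "financial hardship"},
]

seen = set()
no_new_streak = 0

for i, themes in enumerate(interviews, start=1):
    new = themes - seen
    seen |= themes
    no_new_streak = 0 if new else no_new_streak + 1
    print(f"Interview {i}: {len(new)} new theme(s); {len(seen)} total")
    if no_new_streak >= 3:
        print(f"Three straight interviews with nothing new: saturation at n={i}")
        break
else:
    print("Saturation not reached; more interviews are needed.")
```

With only 13 interviews, some of them from the same families, it is hard to see how any check along these lines could come out in the study’s favour.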

The sampling process used in this study was also interesting. The technical terms for what the researchers did are ‘purposive’ and ‘snowball’ sampling. The former means they asked an agency to help pass information about the study to potential participants. Although this is widely done, it needs to be done with some caution, as it can easily lead to biased samples: agencies can (inadvertently or otherwise) cherry-pick people who best represent what the agency would like researchers to know and to report. As for the second method, snowball sampling, many university research ethics boards frown on it because it’s another form of cherry-picking research participants, and one in which other interviewees (because they made the referral) can guess who said what. Why does this matter? Because it means participant anonymity is easily breached. Your research participants shouldn’t know each other’s identities.

The TL;DR version: the researchers ended up with a tiny sample that could well be biased.

I realize that the standard response to critiques of small sample sizes in qualitative research is to say, ‘but that’s normal for qualitative!’ My response? It’s normal for shoddy qualitative research. Qualitative research suffers from a lack of credibility in the social sciences due to things such as small sample sizes: never mind generalizability, half the time it’s not clear the findings are valid or reliable. There seems to be some scholarly consensus that around 25-30 interviews is appropriate in many instances. In this case, I would’ve gone for somewhere at or above 50 interviews. Indeed, one of my own studies, using a sampling frame, had a sample size of over 200 interviewees, and we were able to begin analyzing data immediately because we used a checklist-based interview guide.

Why go to all the bother of taking apart this study? Why not just ignore it and others like it? There’s a simple answer to that: if researchers don’t know they are putting out problematic work or, worse yet, are indifferent, how is the non-researcher supposed to know what counts as good research and, more importantly, how to tell what makes some research better than the rest? This is one of the benefits of the academic peer-review system: some of the dross gets weeded out. When it comes to unpublished, non-peer-reviewed studies, you’re on your own. So, it’s important to know that bigger can be better, and why.

* The researchers acknowledge that in 2016 alone there were 57 incidents of this offense in that one province. Assuming this represents an average annual figure, we could be looking at something like 400-650 incidents over 10 years, or a possible 800 to 1,300 victims (assuming roughly 2 surviving family members per incident, an admittedly very low-end estimate). So, we’ve got 13 out of a possible 800-1,300 victims.
