A while back I read a report that was released with some fanfare. It was a study offering public policy recommendations on the criminal justice system based on 13 interviews with a highly biased sample. There was plenty of discussion of research methods – from sampling to coding – and even some discussion of limitations. For those of you who don’t see the problem here, I’ll spell out a few of the issues:
13 interviews drawn from a population of, conservatively, 50,000 or more is a very small sample
The selection process for potential interviewees was guided by an organization that was involved in the study
The sample included members of the same family
The sample included only individuals from one province, even though the recommendations were national in scope
What really needed to be acknowledged, however, was the giant elephant on the page: you don’t make public policy recommendations from work that was, at best, exploratory.
Unfortunately, I see a fair amount of this type of work in both quantitative and qualitative research. Given my own background, my interest lies more in the latter: work that draws on interviews, focus groups, documents, media, field observations, and so on. And what I’ve been seeing has been depressing.
There are several reasons why junky qualitative research abounds. One is that there has been deep resistance among qualitative researchers to the idea of setting standards for their work. Not unlike those within policing who argue that policing is a “craft” and not a science, many qualitative researchers see their work as an art form beyond standards, evaluation and/or judgement. I’m going to guess that most people who think this know very little about the market for classical and contemporary art and how it operates.
A second reason is the use of these methods by public policy groups crafting reports aimed at advancing their desired changes (see the example above). In relation to community safety, this trend has been exacerbated by the boom in evidence-based policing. Thanks to the work of the EBP Societies, Cambridge, George Mason, the College of Policing, and a host of individuals, EBP has become the latest buzz term, and everyone wants in on the game. Don’t have thorough training, skills and years of experience as a researcher? No problem. Anyone can do a few interviews, right?! And that’s what we’re seeing: a load of junk from pseudo-researchers riding the newest opportunity wave.
A lack of serious training in qualitative research methods for graduate students is another culprit. Many departments do not offer high-level applied training in qualitative research skills. It’s telling that you can take advanced statistics almost anywhere, but how many programs offer advanced qualitative methods?
So, what to do about this? One answer: start much-needed conversations about advancing the quality part of qualitative research methods, which I’ve been doing over the past couple of weeks ad nauseam. Another? Tackle the issue of what should and should not be considered “best evidence” for informing public policy and police practice. To that end, I’mma put my neck on the line. Below is my qualitative version of a hierarchy of evidence for informing public policy**.
This model, which is based on work by Jerry Ratcliffe (https://www.jratcliffe.net), makes one big ASSUMPTION: that the study being used was well-designed and well-executed. Even so, you can see I’ve added a new category, 0, for studies that are manifestly poorly designed and executed.
As always, these things are the result of others’ kindness. A huge thank you to Jerry, who generously shared his template with me. Also thanks to Aili Malm, Andrew Wheeler and Janne Graub for ideas and insights.