
Research Killers: Fallacies About What Cannot Be Measured

Recently, I posted a blog about the state of the evidence in support of situation tables and hubs. In response, one correspondent argued that the reason for this situation is that “it’s difficult to study behaviour that [doesn't] exist”. I take my correspondent to be invoking the belief – commonly cited in terrorism studies – that it is difficult to measure the effects of programs where a successful intervention means that crimes or other behaviours were prevented. In response, I pointed out that, were that completely the case, the field of behavioural change (and much of the literature in psychology and social psychology) would be decimated, because that is precisely what a lot of research in these domains addresses: whether an intervention successfully changed someone’s behaviour.

In this blog I want to focus more narrowly on this fallacy – that it’s hard to measure what was prevented – in the context of deterring individual criminal and other offending. As a reminder: advocates for situation tables and hubs argue that participants identify high-risk, vulnerable individuals and, through collaborative efforts, direct these people to the resources most helpful for addressing their offending and other risk factors. (As a side note: what I find interesting about the belief inherent in this model, at least as represented to me by advocates, is that these individuals have somehow fallen through cracks in the social safety net and simply need to be re-directed, as opposed to representing already existing failures of those same services.) Returning to my primary point, we are fortunate that most criminology researchers do not subscribe to this fallacy. In fact, there is a wealth of widely available criminological and other research that does exactly what my correspondent claims is too difficult: it directly measures the deterrent effects of a program or other intervention. Here’s a sample of meta-analyses* conducted on deterrence programs:

  • The effects of parental training programs on offspring delinquency (Piquero and Jennings)

  • Sexual offender treatment (Hanson et al. 2009; Schmucker and Loesel 2017)

  • Interventions with juvenile delinquents (Mackenzie and Farrington 2015)

  • Family-based programs to prevent delinquency and later offending (Welsh and Farrington 2006)

  • Mentoring programs to affect juvenile delinquency (Tolan et al. 2013)

  • Scared Straight programs on juveniles (Petrosino et al. 2013)

  • Effects of drug courts on individual offending (Mitchell et al. 2012)

How do researchers measure deterrent effects without relying on inferences from statistics or on program administrators’ potentially biased views of success? There are several possible methods beyond drawing inferences from crime or other data. Each of those listed below requires access to the population studied and thus provides the opportunity to measure outcomes (did you actually deter this person’s offending?). Once you have that access, and the consent of participants, you can (off the top of my head):

- Conduct a randomized controlled trial (see the sketch after this list)

- Conduct pre-/post-test analyses using quantitative (surveys/questionnaires), qualitative, or mixed methods

- Conduct in-depth interviews with participants about their views and experiences

- Create a longitudinal study that tracks individual outcomes over time
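To make the first option concrete, here is a minimal sketch, in Python, of the kind of analysis an evaluator might run on a randomized controlled trial of a deterrence program. Everything in it is hypothetical: the sample sizes, the re-offence rates, and the one-year follow-up are assumptions made purely for illustration, not results from any real program.

```python
# Toy sketch: analyzing a (simulated) randomized controlled trial of a
# deterrence program. All data here are fabricated for illustration;
# a real evaluation would use actual follow-up records, with consent.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical setup: 400 participants randomly assigned to the program
# (treatment) or services-as-usual (control), followed for one year.
n = 200
treated_reoffend = rng.binomial(1, 0.20, n)   # assumed 20% re-offence rate
control_reoffend = rng.binomial(1, 0.30, n)   # assumed 30% re-offence rate

# Compare re-offence rates between the two arms with a chi-square test
# on the 2x2 contingency table (assigned arm x re-offended or not).
table = np.array([
    [treated_reoffend.sum(), n - treated_reoffend.sum()],
    [control_reoffend.sum(), n - control_reoffend.sum()],
])
chi2, p, dof, _ = stats.chi2_contingency(table)

risk_diff = treated_reoffend.mean() - control_reoffend.mean()
print(f"Re-offence rate (treatment): {treated_reoffend.mean():.2%}")
print(f"Re-offence rate (control):   {control_reoffend.mean():.2%}")
print(f"Risk difference: {risk_diff:+.2%}, chi2={chi2:.2f}, p={p:.4f}")
```

The point of the random assignment is that it gives you a credible answer to “what would have happened without the program?” – the control group supplies the counterfactual that the “you can’t measure what was prevented” objection claims is unavailable.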

As always, you do not need to take my word for any of this. There are resources available online that can be easily consulted. A great source of information on the deterrent effects of crime programs is the Campbell Collaboration, which posts the results of detailed meta-analyses conducted by some of the top researchers in their respective fields, analyses that draw on quality research in the area. See: https://campbellcollaboration.org/library.html

*Meta-analysis is a technique in which data and results from multiple studies are combined to measure overall effects in an area (like sex offender treatment). Most meta-analyses look at randomized controlled trials but can also include other types of experimental designs. Regardless, researchers must assess the rigor of the studies included and exclude those that do not meet a high standard.
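For readers curious about what “combining results” means mechanically, here is a minimal Python sketch of the simplest pooling step (a fixed-effect, inverse-variance model). The per-study effect sizes and variances below are invented for illustration; real meta-analyses involve much more, including systematic study screening, random-effects models, and heterogeneity tests.

```python
# Toy sketch of the core meta-analytic calculation: combining effect
# sizes from several (hypothetical) studies with inverse-variance
# weighting. The study values below are invented for illustration.
import numpy as np

# Hypothetical per-study effect sizes (e.g., log odds ratios of
# re-offending, treatment vs. control) and their variances.
effects = np.array([-0.40, -0.15, -0.55, -0.30])
variances = np.array([0.04, 0.09, 0.06, 0.05])

# Fixed-effect model: weight each study by the inverse of its variance,
# so larger, more precise studies count for more.
weights = 1.0 / variances
pooled = np.sum(weights * effects) / np.sum(weights)
pooled_se = np.sqrt(1.0 / np.sum(weights))

# 95% confidence interval for the pooled effect.
lo, hi = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
print(f"Pooled log odds ratio: {pooled:.3f} (95% CI {lo:.3f} to {hi:.3f})")
print(f"Pooled odds ratio:     {np.exp(pooled):.3f}")
```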
