
Evaluation versus Experimentation: Where are We At?

I get asked a lot about evaluation. Usually this occurs in the context of discussing some program that has been in operation for several years and needs to demonstrate to a funding agency that it is meeting its intended goals. What makes these program evaluations even more challenging is that program objectives are often ill-defined and/or not easily measurable, largely because they were not well thought through from the beginning. How do you measure, for example, ‘building connections for positive community impacts to benefit young people’? What is the measurable outcome here? The connections? The positive community impacts? The benefits to young people? All too often what gets picked is the low-hanging fruit: analyzing connections between service agencies (outputs) rather than studying whether programs are actually producing beneficial impacts for anyone other than service providers (outcomes).

What makes all of this particularly frustrating is that such programs are often very loosely termed ‘social innovations’, a label that has been weaponized by some in the ‘social innovation community’ to mean exempt from critical analysis (because we’re doing good here!). As we have seen with all sorts of policing and community safety innovations, this means investments in unproven ‘solutions’ that most likely waste money and other resources, but ‘feel good’ (until the next innovation comes around).

What’s the solution to this problem? Well, in Canada we need to be doing something that seems unpopular within the policing and research communities: experimentation. I can almost hear some of you asking, ‘what’s the difference?’, which is a great question.

With evaluations, what typically happens is that advocates implement a program, policy or practice, often with unclear, unrealistic and/or non-existent objectives and no clear-cut plan for measuring program effects. After the program has been running for a while, they then collect rough data (frequently pre-test/post-test quantitative data, or qualitative interviews asking people for their subjective opinions on whether something is working).

In experimentation, a program is specifically developed to target a problem, and then rigorously tested and tracked over time in order to ensure that it meets defined objectives (Sherman 2006; 2013). As Sherman (2006: 394) puts it, “Experimental criminology is not just the testing of other people’s programs.” It is, instead, carefully developed and articulated innovation that is designed to show whether something ‘works’ from a scientific point of view, rather than as an article of faith.

Because Canadian policing and policing research lack a culture of experimentation, we are often forced into adopting and adapting program models from other sources. As we continue to do this, not only are we killing the possibility of homegrown innovation and stunting the skill sets of our research communities, we are also locked into the cycle we presently see: borrow a program and then, maybe, evaluate it. Then we all attend conferences at which we discuss how these programs represent our new ‘best practices’, and the needle never moves.

This is where we are at and, frankly, it needs to change.
