Policing is full of programs and policing research is often full of program evaluations. I am not alone in noting that most of these evaluations are of poor quality, not conducted in an independent fashion, never designed into the program itself and/or produced only after a program has long been in operation and often with ad hoc or low-quality data. I think most of us have stories about poor evaluations. Mine is that I was once asked to provide methodological input into a program evaluation and quickly realized that the crime and disorder data – a key component of an evaluation focused on determining re-offending outcomes – was completely flawed or missing and would therefore not tell us anything of value about whether the program was meeting its objectives. Some time later I was sent a copy of the evaluation, which stated that crime data was included in the evaluation as an input. I searched in vain for any subsequent reference to how that data was used. It wasn’t. Rather than dealing honestly with the fact that the crime data could tell us nothing about re-offending, the evaluators just skipped over the issue and delivered a nicely complimentary piece on the program and its operation.
Reflecting on the poor, or non-existent, quality of program evaluations in Canadian policing, I was recently reminded of ‘Rossi’s Laws’. For those who have yet to meet this work, Peter Rossi was a sociologist and expert on evaluation. In 1987, Rossi wrote a paper in which he – somewhat tongue in cheek – delivered his four Laws of program evaluation. These are:

- The Iron Law of Evaluation: “The expected value of any net impact assessment of any large scale social program is zero.”
- The Stainless Steel Law of Evaluation: “The better designed the impact assessment of a social program, the more likely is the resulting estimate of net impact to be zero.”
- The Brass Law of Evaluation: “The more social programs are designed to change individuals, the more likely the net impact of the program will be zero.”
- The Zinc Law of Evaluation: “Only those programs that are likely to fail are evaluated.”
Rossi then went on to explain that the reason why social programs are often ineffective is that public welfare programs are notoriously difficult to design, and are often created and implemented by people without the skills and domain knowledge necessary to understand what is needed. He also added that “Basic social science furthermore is not advanced enough to provide strong guides to designing effective programs. The consequence is that the designing of social programs has been a kind of trial and error strategy of try-this-and-try-that with little accumulation of knowledge that might be the basis of social engineering” (Rossi 2003). Couple this lack of deep understanding of the social problem with faulty implementation and inappropriate interventions, and you get well-intentioned program failure.
In 2003, with the growth of increasingly better designed evaluations, Rossi backtracked slightly on his original assessment, stating that it was possible to produce a well-designed evaluation. However, such studies appear to be the minority and not the rule, as he himself concluded: “Given that the majority of impact assessments are conducted by the least competent and least well-funded sector, I believe that we can make the following generalization: The findings of the majority of evaluations purporting to be impact assessments are not credible” (Rossi 2003; original emphasis).
What does this mean? I think that Andrew Leigh (2008) has put this best: “Rossi’s Law does not mean we should give up hope for changing the world for the better. But we ought to be skeptical of anyone peddling panaceas.”