
Return of the D.A.R.E. Zombie

Recently I was asked for my opinion on a program known as ‘Keepin It Real’ (KiR). For those of you who haven’t heard of it, it could also be termed ‘D.A.R.E. 2.0’. Like the original Drug Abuse Resistance Education program, KiR is intended to be an educational tool aimed at preventing drug and alcohol use and abuse among school-age children. While the original version of D.A.R.E. was strong on ideology (‘drugs are bad, kids!’), this new version draws on many of the current buzzwords in the Health Sciences, including ‘evidence based’, and its developers argue that it will teach youth core values and competencies that will increase their resistance to enticements to use drugs and alcohol.


But what exactly is the ‘evidence base’ for KiR? In support of their position that KiR is ‘evidence based’, the program developers, Michael Hecht and Michelle Miller-Day, offer two versions of ‘evidence based’. The first is the one commonly used: they have based their program on current research in youth development. The second is less frequently employed and often misunderstood: that there is a substantial volume of independent, rigorous evaluative research on a program, policy or practice, in this case on KiR. Of these two, I’m much more interested in the second. Why? It’s easy to base a social program on some body of research, but that fact alone doesn’t tell me that it ‘works’. The only way you and I can be confident that something ‘works’ is when we see 20-30 independent, methodologically solid trials or evaluations of a program. Until then it might be ‘promising’ or, worse yet, ‘unproven.’


Where does KiR fit in terms of ‘what works’? For me, it’s in the unproven category. Simply put: there are insufficient numbers of quality studies produced by anyone other than the program developers, Hecht or Miller-Day. Yes, you read that correctly: the overwhelming bulk of studies on KiR (which are all positive, I note) were produced by the program developers and/or their collaborators. This is not an isolated observation: a systematic review by Caputi and McLellan (2017) reached the same conclusion. In fact, based on their analysis of 11 papers that were deemed of sufficient quality to be synthesized, Caputi and McLellan noted:


Concerns remain regarding the appropriateness of the KiR D.A.R.E. programme: (1) KiR has only been tested on a narrow audience and may not be appropriate for D.A.R.E.’s larger audience, (2) KiR may not be effective in reducing substance use among elementary school students and (3) the specific versions of KiR implemented by D.A.R.E. (KiR D.A.R.E. and KiR D.A.R.E. Elementary) have yet to be tested for efficacy. The authors recommend independent, randomised trials for the KiR D.A.R.E. curriculum and the development of a standardised measure and evaluation system for in-school substance use prevention programmes (ibid.: 49).


So, if the evidence base for KiR is fairly weak, and the horrendous failings of D.A.R.E. are well established, why am I still being asked about it by police agencies that are either using it or looking to use D.A.R.E. 2.0? Three reasons come to mind.


1. The desire for easy-to-use, ‘out-of-the-box’ solutions


A few years ago, I let myself be convinced to work on a study of youth policing in rural and remote parts of the country. One sentiment I heard over and over from tired, stressed-out police officers was that they lacked the time to engage proactively with their communities’ youth. Most got to their local schools maybe once a year and, because they had little time, training or expertise with which to prepare a presentation on a topic of mutual interest, favoured D.A.R.E. or D.A.R.E. 2.0 as the easy option. I have found the same desire for an easy, quick fix to a messy social issue to be just as true in other jurisdictions across Canada. The reality is that police officers are not child development experts. Some may not even have children or, dare I say it, particularly care for children. Yet they are deployed to local schools to deliver messages on the perils of drugs and alcohol despite often being wholly unprepared for this complicated task, lacking the training, guidance and clear objectives it requires.


2. A willingness to trust ‘experts’ with plausible claims


When I was most recently contacted about D.A.R.E. 2.0, the email I received was littered with the academic titles of its proponents (Ph.D.s). My first piece of advice was to stop using their titles and then re-evaluate their claims. There are lots of people using academic titles to market their ideas. Indeed, the Internet is littered with ‘Doctors’ citing ‘cutting edge research’ and marketing their expertise as a product to fix your life, your family, your job and the whole world for a small to substantial fee. A generous estimate is that most of it lacks the substance to fix what ails you, and that includes D.A.R.E. One of the advantages of scientific training is a healthy skepticism for pretty much everything. I don’t care about your academic credentials; I want to know what your training is, what your research background is, what your academic affiliations are, and what tests have been done to validate the strength of your claims.


Police have a healthy skepticism of their own. After all, how many scams have most cops seen over the years? But a police officer may be an expert in many things and not in others, and may not know which questions to ask. So what sometimes happens is that ‘Doctors’ come in and talk a good game (‘we ran multiple regressions’, ‘large N’, ‘we used both case control and cohort studies’) and, lacking an understanding of advanced methodology or stats, police can get overwhelmed and end up deferring to those who apparently have the requisite academic and work experience. It seems reasonable enough! I’ve personally witnessed this phenomenon and then felt compelled to pull people aside and say, flat out, ‘you’re being scammed!’


I totally get why there would be some awkwardness about appearing to challenge people and how they support their claims. No one, myself included, wants to feel like an idiot for not knowing something. The reality is that I could feel like an idiot almost all of the time in policing circles, because I could not possibly know every policy, program or practice within the over 200 police agencies across Canada. I own what I don’t know. The same should be true for police officers: if you know what multivariate analysis is, that’s awesome (you can explain it to me, I don’t do stats). If you don’t, start with what you do know and work from there. Just be open to asking questions. It’s okay, you’re not expected to know absolutely everything. We applaud questioning minds. A good researcher should be able to explain anything to anyone and do it without being condescending or offensive. And, going back to the ‘Dr. So-and-So’ bit, make them earn your esteem; don’t hand it out just because they have a title. Be confident in questioning everything and know where to go for support and resources to do this well.


3. Credible sounding ‘seals of approval’


When asked about KiR, I was told that it was ‘evidence based’ because it met NREPP* minimum standards for being an evidence-based intervention. NREPP stands for the National Registry of Evidence-based Programs and Practices of the U.S. Substance Abuse and Mental Health Services Administration. To get an NREPP seal of approval, a program or intervention receives two independent ratings from selected experts. Assessments focus on both the quality of the available research and on the training materials and the ability to implement. Sounds pretty good, right? Not exactly. A recent paper by Gorman (2017) highlighted several significant issues with the NREPP and its assessment process. In an analysis of 113 interventions admitted to NREPP, Gorman found that: the overwhelming majority were supported by fewer than six research papers (in fact, approximately 50% had only one or two); the methodology of the studies cited was often weak; weak studies were treated as being of equal merit to strong ones; pilot or preliminary studies were included rather than full evaluations; and some included studies involved practices shown to be harmful. Based on this analysis, Gorman concludes: “NREPP can be added to the list of evidence-based initiatives that no longer serve the purpose for which they were created and contribute to a waste of valuable societal resources and a degradation of science” (ibid.: 41).


So, what’s the take-away message? Unless something has a credible evidence base that will withstand any reasonable challenges, it’s only ‘potentially interesting’ or ‘unproven.’ And, if we’re interested in ‘keepin it real’, then the D.A.R.E. zombie remains ‘unproven’ and might even need a giant wooden stake through it.

*NREPP was cancelled in 2018.

References:

Caputi, T., and McLellan, A.T. 2017. “Truth and D.A.R.E.: Is D.A.R.E.’s new Keepin’ it REAL curriculum suitable for American nationwide implementation?” Drugs: Education, Prevention and Policy, 24(1): 49-57.

Gorman, D. 2017. “Has the National Registry of Evidence-based Programs and Practices (NREPP) Lost its Way?” International Journal of Drug Policy, 45: 40-41.
