
About Mistakes ...


Usually, when the topic of research ethics is taught in university classes, we focus on extreme cases like Zimbardo's prison experiment* and Milgram's Obedience to Authority** study - that is, stuff that wouldn't be too likely to happen in this day and age. Occasionally we might talk about, as my class did this week, the larger scams - ranging from the researcher who spent $96,000 of his research funds at strip clubs to the one who ripped off about $200K of funding for cancer research.


Now some of you might think: what does this have to do with EBP? The answer is: everything.


Anyone who does applied research undertakes this work with the hope of discovering something of use to the community: the police officer trying to show how a new patrol style might work (and thereby impress the Chief), the junior researcher hoping to make tenure (with lots of publications), the mid-career or senior scholar wanting to enter or maintain a place in rarefied academic circles (like, say, the Editorial Board at the journal Criminology). Then, of course, there is a whole host of "researchers" simultaneously promoting their wares while conducting studies to show how their products and/or services will fix all manner of social ills. In other words, there's a lot of temptation out there. Much of it, I would suggest, has less to do with venality than with something a bit more insidious: our own errors and our unwillingness to own them and to be seen as having made a mistake.


Departing from the standard rhetoric taught in research ethics - that is, the stuff about not torturing research participants - I decided to bare my own throat and bring up one of the many mistakes of my career to show how easily temptation can arise. Here's the story:


Last year I coded, analyzed and wrote up the results of a co-authored study. The paper went out for review and generated a revise and resubmit. I passed the revisions on to a co-author who needed to take a look at the data. When he did, he discovered that I had made a fatal error: I had accidentally left in cases that should have been excluded. Compounding my embarrassment was the fact that I had to admit I had been sick while working on the paper and too pigheaded to realize that it's not smart to do data analysis when one is suffering from the brain fog and exhaustion that come with a battle against rheumatoid arthritis.


As I explained to my grad class, we had several options:


1. tell the Editor what happened and completely re-write the paper, omitting the cases that should have been excluded;

2. pull in other data sets to increase the overall number of cases, take out the cases that should have been excluded, and hope nobody would notice the sleight of hand;

3. say nothing, make a few changes, and pray.


After some agonizing over losing the work, I wrote the Editor and pulled the paper.


Mistakes in research are common, although no one ever wants to talk about them. And, because no one talks about them, we are not honest and open about how to deal with the garden-variety errors that befall us, or about why some choices, aside from being more ethical, are simply better choices. This lack of transparency creates a giant void for those who are learning the ropes of research and making their own mistakes.


After my mistake came to light, I had some informal postmortems with my co-authors in which we talked about what happened, why it happened, and how to prevent similar mini-catastrophes in the future.


Here are some lessons from my mistake:


1. they will happen ... mistakes are inevitable, and you will not die or be shunned as a result.

2. it's not the mistake itself that ultimately counts; it's how you respond to it.

3. no one can be perfect in all things, so ask your co-authors to check even your basic math.

4. it's okay to be sick, tired, and/or otherwise indisposed; it's okay to ask for help or, even better, to not work at all when that's the case, rather than going full-blown Joan of Arc and tanking your study.


I don't tell this story to look like a heroine. I'm not. I'm the person who screwed up an analysis, wasting other people's time and costing two junior scholars a much-needed publication. And therein undoubtedly lies another temptation: to try to "fudge" one's way out of a mistake and to see that "fudging" as only a harmless little lie - the so-called altruistic motive. The human ability to self-justify is truly an incredible thing.


One of the reasons I advocate for science above other ways of trying to know or see some aspect of the 'truth' is my belief that science has something other forms of 'knowing' lack: a self-correcting mechanism. Yes, fraudulent and other unethical behaviours happen in science. However, the insistence on transparency of methods and the independent scrutiny of research do allow for a degree of oversight, and thus the ability to identify some of this activity and correct the record. That said, much more self-correction could and should occur at the level of the individual researcher, and this can only happen if we begin to be more honest and open about the mistakes we make, seeing them not as something shameful but as a fundamental part of the scientific enterprise.


*Think sadistic fake guards torturing student 'prisoners' in a mock prison.

** Where an 'authority' figure stood over a research participant who believed they were delivering electric shocks to an unseen victim.


