A couple of weeks ago, a police service issued a news release stating they had used local crime data to measure whether crime and disorder had increased within a 250-metre zone around a supervised injection site. Their stated finding? Crime in the area had increased after the SIS was implemented. When this study was brought to my attention, I had one basic response: "no, thank you." You see, I’m only interested in studies in which the author(s) have published a detailed methodology, as well as comparably detailed analysis and findings sections. Call me demanding, but transparency - that is, documenting and releasing full details of how you conducted your study - is a basic tenet of science and, to the best of my knowledge, that police agency has never actually released the study. I've looked. And looked. What they apparently released was a news release about a study that can't be independently validated. It might be the world's greatest study, but I can't verify that, and that matters.
"Transparency and detail are everything in science." - Ben Goldacre "Science has authority not because of white coats or titles but because of precision and transparency: you
explain your theory, set out your evidence, and reference the studies that support your case." - Ben
What happens all too often these days is the practice of researchers or other groups presenting study conclusions as facts when they have yet to publish the full study, thereby failing to open their work up to scrutiny. In other words, they’re asserting a claim to knowledge and/or authority that we can’t test, because they’re not being transparent about how they came to their results. I’m not the only one to decry this practice. I first became sensitized to the issue through Ben Goldacre’s work in picking apart dodgy claims in scientific and other news reporting*.
Before anyone accuses me of picking on "Unnamed Police Agency", I should point out that many within the academic community are similarly willing to ride roughshod over this hallmark of science. Indeed, something I find equally insidious is the practice of researchers publicizing preliminary findings from studies that are still underway and thus possibly months or years away from publication (and thus from methodological scrutiny), or that might never be published at all. I see this too often in both print journalism and on social media: researchers acknowledging they are still in the data-gathering phase of their research, yet happily reporting away on observations that have yet to be empirically analyzed - observations we should nonetheless take very, very seriously because of the importance of the claims made. Never mind that none of this work has been subjected to any type of peer review or other independent scrutiny; individuals gleefully self-cite their ongoing work as scientifically established fact to journalists who lack the interest, knowledge or willingness to apply a critical lens to what they're told, perhaps because the story needs a 'jazzy quote' or is missing that hallmark of modern journalism: the 'counterpoint'.
What is the bottom line on all of this? If I can’t see your work, understand how you collected and analyzed your data, validate some or all of your results, and have confidence that your methods support your conclusions, then ... what you’re offering is called “an opinion”** and it may not be a particularly valid one. It certainly ought not to be the stuff informing public discourse, policy or practice.
*Two books I highly recommend are Goldacre's I Think You'll Find It's a Bit More Complicated Than That and Bad Science.
**This is not to say that research-related opinions are never worthwhile; if you are speaking on the general research literature as an internationally recognized expert in a specific field, yours might certainly be an opinion worth considering. However, to be accorded that status, you should probably be recognized as an expert by someone other than yourself, your Mom and/or a journalist desperate for a quote.