It’s hard to know what’s true anymore. We have access to gobs of information thanks to the glories of the Internet, but all too often, it can be extremely difficult to parse fact from fiction. Ideological think tanks, bloggers, and hyper-partisan media consistently spin the story in completely opposite directions.
This is where scientists and journalists from neutral media and academia are supposed to come in. Granted, they make mistakes from time to time, but generally, they do a pretty good job of slashing through some of the muck.
But what about when they intentionally make stuff up?
Last month, the organization Stop Abusive and Violent Environments filed a complaint regarding one particularly egregious instance of research misconduct with the federal Office of Research Integrity in the Department of Health and Human Services.
In 2003, Professor Jacquelyn Campbell and her colleagues at Johns Hopkins University wrote in the American Journal of Public Health and the National Institute of Justice Journal that domestic violence is the leading cause of death among black women ages 15 to 45. Supposedly, this was backed up by a Bureau of Justice Statistics report from 1998.
This sounds terribly alarming. But it’s complete crap. The Bureau of Justice Statistics never, ever reported this, though Eric Holder did cite the false findings in a 2009 speech. As of 2008, domestic violence ranked no higher than eighth among causes of death for young black women, the American Enterprise Institute reported. According to Google Scholar, the two erroneous studies have been cited a combined 672 times by other academics.
It’s unclear whether the researchers knowingly falsified their information or if it came down to lazy fact-checking, but this incident highlights a huge problem for anyone concerned about knowing anything.
The problem is that science and journalism are based heavily on other people not screwing up. There is a lot of trust placed in the experiences and research of others in both fields. Granted, competing news organizations often avoid citing one another like the plague, but they almost always get information from someone else. Firsthand accounts are the exception rather than the norm. Likewise, scientists usually cite one another for an empirical foundation on which they then test theories and hypotheses.
It’s easy to see why making these mistakes can have serious consequences. If policymakers use bunk findings and reports, or scientists and journalists cite bad information, they can make misinformed decisions that actually harm the general public. When it becomes apparent that media or scientists have spread misinformation, it can breed mistrust and spite from the community at large.
Now, I’m all for skepticism. We need more of it. But blatant breaches of trust are likely to alienate people from journalism and science and lead to overall disengagement. Methodological mistakes and exaggerated conclusions are one thing, but saying a government report said something that it clearly did not say is an unnecessary, avoidable, and dumb mistake.
Journalists and scientists each have their reviled characters who simply made stuff up, and anyone who commits a serious transgression is usually severely punished. Such punishment reminds scientists and journalists what they stand for, but for the sake of their credibility, they must always remember that they can slip up too. Otherwise, they risk spreading the next big misconception or outright lie. Intentional or not, you can rarely come back from that.