by Carl V Phillips
As regular readers know, I have written a fair bit about the nature of lies. I make a serious study of it as part of the mission of this blog and my larger approach to the politics of harm reduction and real public health. I do this with as much scientific rigor as is possible for such a question. Recently a confluence of events — the ongoing attempts of the press to deal with Trump’s claims, dealing with my ex’s lawyer, and most importantly the “vaping causes seizures” controversy — reminded me that I have not updated my thinking on this for a while. So here goes.
First, any communicative act that is intended to cause people to believe something the actor knows to be false is a lie. There is no requirement that this include writing/speaking any technically false statement (or making any statement at all). Indeed, lies that include no technically false statements are often the most obvious and blatant. Someone who anchors their lie to a false statement might actually believe they are telling the truth (however, see below). By contrast, someone who carefully avoids any technically false statements while communicating something that is false might as well be confessing they are engaged in a carefully calculated act of deception.
Consider the anti-vaping propaganda of the week, in which FDA joined the usual suspects in blasting out the observation that the agency has received a handful of adverse event reports (AERs) in which someone vaped and then had a seizure. Needless to say, the press took the clickbait and ran with the story, often referring to a nonexistent “study” that showed that vaping causes seizures. The reality (as those who have looked into this know) is that the number of AERs in question is trivial compared to the exposed person-time. Given the rate of seizures in the population (not to mention other events someone writing an AER might call a seizure) and the prevalence of vaping, there were inevitably orders of magnitude more unreported seizures coincidentally associated with vaping than there were AERs. There has been no assessment that would suggest any AER case was causally related. (Sometimes it is possible to be pretty sure an AER was causal, but it requires knowing more than that the two things happened on the same day. I will probably take up that topic in my next Patreon science lesson tutorial.)
It is technically a truthful statement when FDA says “there have been reports of…” or “FDA is investigating the possible link….” But everyone knows that is going to be widely interpreted as the indefensible statement “vaping is causing seizures”, so the acts of pushing out this information are clearly lies.
FDA knows this better than most, so they cannot even pretend to be innocent. One of the things that FDA and their sister consumer product regulatory agencies do as a standard practice is prohibit technically truthful lies. A manufacturer cannot say “our cornflakes have no added arsenic”, which is true but falsely implies their competitors do add arsenic. Nor can they run lab tests, determine that the arsenic content of their cornflakes is below the detection limit (BDL) but a competitor’s has a detectable but unimportant quantity, and use that “fact” in their marketing. The regulators would (correctly) object on the grounds that the average person would interpret this as implying that the competitor’s product is poisonous.
The fact that regulators have allowed (or, really, been forced to allow) a cottage industry to grow up around “non-GMO” claims is the exception that illustrates the importance of the rule. These claims trick some consumers into believing GMO ingredients in competing products pose a health threat, a false belief that the unlabeled products are the equivalent of arsenic-containing cornflakes. It also lets the certifying organization blackmail manufacturers into getting their stamp of approval lest the consumers who believe the lie avoid the products.
Thus, not only do we know that FDA’s touting of the AER data was a lie, but we know that they would prohibit exactly that type of lie if it were coming from someone else. It matters not at all whether any of FDA’s statements (or those of any other propagandist talking about this) carefully worded everything so that each sentence was technically true. The communicative act was a lie.
Of course, it is possible that vaping really did cause some of the seizures in the AERs, or other unreported seizures. Does this make the communication a non-lie? No.
Causing people to believe that X is true (as they did) when what we know is that X is merely possible (which is approximately always the case) is clearly causing the audience to believe a falsehood. Indeed, actively announcing “it is possible that vaping is causing seizures” would in itself be a lie because it would inevitably not be interpreted as “it is never possible to rule out a causal relationship exists at some low rate, though we have no reason to believe this one is true.”
What if it were later discovered that, with some decent degree of confidence, vaping does seem to cause seizures? The recent statements would still have been lies. A very careful statement to the effect of “we do not currently believe there is causation, but we recognize that there is a possibility so we are investigating it” would be true and honest, and allow for the hypothetical discovery. But that is not what they have been saying.
Even that statement, no matter how carefully delivered, would inevitably create fear about the causation in some people who heard it, and would lead to some clickbait headlines. To some extent, this cannot really be blamed on someone who made a properly careful statement. But the medium matters. Someone making that careful statement when pointedly asked, or burying it in a technical report, is one thing. But pushing it out as a public announcement is lying: Knowing how it will be interpreted by some, only an actor who wanted to create unjustified fear would do that.
Circling back, recall that at the start I: (1) made reference to something that the actor knows to be false but (2) did not say that was the only condition under which something was a lie. This brings us to the silly hand-wringing about what to call Trump’s false claims. We cannot get into someone’s mind to really know they know that a communicated message is false. So some journalists take the silly position that therefore they cannot call anything a lie, no matter how blatantly false. That position, of course, renders the entire concept of a lie — as well as the concepts of love, premeditation, altruism, political bias, racism, beneficence, etc. — impossible to invoke as anything other than abstract metaphysics. It is a stupid position to take.
We never actually know anything about the material world, but it is disingenuous to interpret this as casting doubt in practical discussions or in any context other than narrowly-drawn philosophical debates. Imagine the fantasy scenario (sigh!) of an official getting dragged before a committee or court someday, and being challenged on his history of claiming that we do not know that vaping is safer than smoking. Then imagine him replying “we do not actually know that our minds are not just part of a very advanced computer simulation, and so perhaps there is actually no such thing as vaping or smoking.”
If we allow that we know anything, whatever exactly that means, then we know that FDA officials knew exactly what false beliefs their statements were going to create.
That is a clear case. But there are some cases where we cannot be so sure. When the typical non-expert anti-THR activist makes some absurd claim, it is quite plausible that they really are that clueless. I am sure you can think of examples.
For years, my assessment of such cases was that it was still a lie for the following reason: The writer/actor is not only making the false scientific claim (which they might believe to be true) but is implicitly asserting that they are sufficiently knowledgeable about the matter to be able to represent to the audience that the claim is true. The latter is false, and the actor knows it to be false (or would know it if they bothered to consider whether it was true). So either they were intentionally deceiving with the main message or intentionally deceiving about being expert enough to judge the main message.
I still find that conceptualization useful, but the major new refinement to my thinking involves a variation: the “reckless disregard for the truth” test. Those familiar with US libel law will recognize the phrase as being a test for whether a false statement about a public figure constitutes libel. Roughly speaking, if you publish a false claim that a random private citizen cheated on his taxes, according to a tip you heard, that is probably libel (though it may not be actionable if the publication had no material impact). But if you publish that about a public figure, it is only libel if it is false, you had a good reason to doubt the veracity of the tip, and so you recklessly disregarded the truth. I think something analogous (though reversing private and public) helps delineate what constitutes a lie in scientific realms.
If a random soccer mom insists to her friends that all the kids are vaping and that it causes brain cell death and cancer of the third metatarsal, she is being gullible and unforgivably naive about her sources of “knowledge”, but she is not really lying if she believes it. It is a private act of stupidity. On the other hand, if she declares herself to be “MomsAgainstVaping” and starts publishing those claims, then she is engaged in a public act and is subject to a “reckless disregard” test. She has intentionally adopted a persona that obligates her to not recklessly disregard the truth like she did at her book club, and so disseminating the same information is now a lie.
I realize there is little material distinction between this and my old view of why people can still be lying when they genuinely believe what they are saying. It is still a reason why someone presuming to make public pronouncements about something is a liar — whatever they believe about the statement — if they are unqualified to do so. But the new version has the advantage that we do not have a “turtles all the way down” problem in terms of knowledge. (What if she genuinely believes the pseudo-science, and does not know she lacks the expertise to judge it, and does not have enough understanding of what it takes to know science to realize that, and does not even know she does not understand that, and…?) The “reckless disregard” test replaces all that subjectivity with an objective — though obviously not precisely-defined — standard that someone making public claims must adhere to in order to avoid being a liar.
Finally, recall that the first statement about what constitutes a lie invoked the intention to cause people to believe something false. What happens if the communicative act causes false beliefs, but the actor did not want it to? Consider that example of an honest and responsible FDA official (ha!) being asked if the agency is investigating those AERs and replying “we are….” That is an accurate answer that he knows will create an inaccurate perception in some quarters. Even if the “…” included a protest that this is totally speculative and most such investigations come to nothing, the damage would be done.
Intentions do matter. If that hypothetical official really wanted to avoid creating needless alarm, but simply lacked the communications skill to give a technically accurate answer that was not misleading, he would not have been lying. But how can we figure out intentions?
Once again we have to dismiss the nihilistic view that we can never know what someone intended (or know anything!) and make inferences from the data we have: Does the actor have a particular activist agenda? Has the actor elsewhere overtly stated the false claim that was merely implied in this case? Is it conceivable that the actor did not realize what message was being communicated? Was the communication crafted to exaggerate or call attention to the misleading claim? Or, by contrast, did the actor make an affirmative effort to discourage the likely misinterpretation?
I would say that in the realms in which I work, there is not one possible lie in a thousand where intention is at all ambiguous. It is simply too easy to observe what political message someone is endorsing or (for the rare cases) to recognize the extraordinarily tortured language that is necessary to minimize the risk of politically-motivated misinterpretation. That does not work in all areas (just try having a public conversation about what science shows about racial or gender differences without someone sincerely believing you are advocating a racist or sexist position, whatever your intentions). But it works here.
To finish by reiterating the most important point: A lie need not include any statement that is technically false. Entire areas of regulation are based on the knowledge that it is easy to lie — to intend to create false beliefs — without any such statements.