To address concerns about the intentional spread of false information, we can draw an analogy between disinformation and fraud and conclude that disinformation requires accountability irrespective of the benefits accrued by the offenders. This comparison forms the core of my argument that social media platforms should treat disinformation with the seriousness it deserves. After I provide an analogical argument, I will address three objections.
But I do not aim to provide a solution to deliberate disinformation (or to distinguish malicious cases from benign ones). Nor do I suggest to what extent we should take action against social media disinformation. Still, I urge that we take the problem seriously so we can put our minds to tackling it. In other words, I aim to convince you that deliberate disinformation is a serious problem that demands our collective attention and ingenuity.
Before proceeding, I first distinguish two terms: misinformation and disinformation. The difference lies in intent. While both involve the spreading of false information, the former is unintentional, while the latter is deliberate. In this article, I am interested in disinformation.
Please keep in mind that this is not a legal analysis, nor do I claim legal expertise in any field. My comparison to fraud is conceptual rather than a claim of strict legal equivalence.
This is merely a blog post.
Argument from Analogy
The following argument seeks to establish an analogy between fraud and the intentional spread of misinformation: both involve harmful deception for personal gain. (To be clear, I am not claiming that all misinformation is fraud, only that a subset is.) Through a series of premises, I will demonstrate that just as fraud is rightfully condemned and addressed on social media platforms, so too should deliberate misinformation be. If deceptive practices like fraud warrant punitive measures, then deliberate misinformation, which similarly exploits trust and causes harm, deserves equivalent scrutiny and response from those platforms.
The following is the initial argument:
Premise 1: Fraud is wrong because it involves intentionally deceiving someone to gain personal benefits like money or other advantages at the victim’s expense.
Premise 2: Intentional misinformation, when spread to gain a personal benefit (e.g., financial gain, political power, or social influence), also involves deliberately deceiving others.
Premise 3: In both cases, the wrongness lies in the deliberate intention to deceive for personal gain, causing harm to others who rely on false information.
Premise 4: Free speech does not protect clearly harmful activities like fraud, which is legally prohibited despite the freedom of expression.
Premise 5: Just as social media platforms have a responsibility not to ignore users who engage in fraud despite freedom of speech, they also have a responsibility to address users who spread intentional misinformation, as both actions involve harmful deception that can cause significant damage to individuals and society.
Conclusion: Since fraud is considered wrong and punishable due to intentional deception for personal benefit at the expense of others and is not protected by free speech, deliberate misinformation that benefits the spreader should similarly be regarded as wrong and subject to consequences. Moreover, just as platforms are expected to take action against fraudulent users, they should also not ignore misinformation spreaders, given the parallel in the harm caused by both types of deception. This approach does not infringe on free speech but prioritizes the protection of the public from harmful, deceptive practices.
The above argument is an example of an analogical argument, where the same logical structure is applied consistently across two different but similar situations. If one accepts that fraud is wrong due to intentional deception and harmful consequences, the same reasoning should apply to intentional misinformation that harms others and benefits the spreader.
However, this does not mean that fraud and misinformation are the same in all respects. For instance, fraud is more straightforward to prove than deliberate misinformation. Still, this difference does not affect the argument where it matters: it only shows that catching a spreader of deliberate misinformation is more challenging, and that the steps we would take to combat fraud and deliberate misinformation would differ.
Thus, if one accepts that fraud is wrong based on these principles, it logically follows that intentional misinformation should also be treated as wrong. Furthermore, if social media platforms have a duty to act against fraudulent users due to the harm they cause, they should similarly address users who spread harmful misinformation, given the similar nature of the deception and its consequences.
Potential Objection 1
One may argue that the motivation behind spreading misinformation can differ fundamentally from committing fraud. Fraudsters seek tangible benefits like financial gain, political power, or social influence. In contrast, spreaders of misinformation might not be driven by a desire for tangible benefits. Instead, they may engage in misinformation for entertainment purposes, to provoke reactions, or simply out of a desire to disrupt or create chaos. The objection here is that because the motivations behind spreading misinformation can be less about gaining something tangible and more about deriving personal amusement or satisfaction, it complicates the analogy between fraud and misinformation.
This objection raises an important point: not all harmful behavior is motivated by direct, tangible benefits. In the case of misinformation, the spreader might be more interested in the thrill of causing confusion, the satisfaction of seeing their content go viral, or the enjoyment of manipulating others’ beliefs. These motivations can make spreading misinformation seem less severe despite being worthy of condemnation. Moreover, since the spreader might not gain anything tangible, like money or status, it could be argued that their actions are less deserving of punishment or restriction than fraud, where the harm is more direct.
Let’s now turn to a rebuttal.
Addressing the Objection: A Scenario
Imagine someone setting up a fake charity website to solicit donations for disaster relief (donate!). The donations are directed to an encrypted Bitcoin wallet that even the fraudster cannot access and never intends to access. Their sole purpose is to deceive others, perhaps for amusement or to see how many people they can trick. The troll finds it amusing when people’s hard-earned money gets lost. The funds accumulate in this inaccessible wallet, yet the fraudster gains no tangible benefit (no money, political influence, or social status), only the satisfaction of successfully deceiving others.
Over time, however, the fraudster loses interest in the scheme. The initial thrill of deception fades, and they become desensitized to the act. Eventually, they abandon the project entirely, no longer finding any satisfaction in the ongoing fraud. Yet the fraudulent operation continues autonomously: an AI system scrapes news content and generates new blog posts weekly, and the fake charity website still collects donations and sends automatic emails to unsuspecting donors. Although the fraudster is no longer actively involved or deriving any benefit, the deception persists, continuing to cause harm.
In the scenario above, the fraudster doesn’t gain tangible benefits like money or power, yet the act is still fraudulent and morally wrong. This is because the essence of fraud lies in the intentional deception that causes harm, not necessarily in what the fraudster gains. The benefits motivate the offender to orchestrate the scheme, but they are not necessary conditions or the essence of fraud. Remember, after the fraudster abandons the project, they no longer receive emotional satisfaction; the harm continues, and they are responsible for it.
Similarly, those who spread disinformation may do so without expecting tangible benefits like money or power, but their actions can still be deeply harmful. Disinformation can mislead, confuse, and damage trust, potentially leading to real-world consequences like public panic, damage to individuals’ reputations, or even health crises. The impact on others can be significant even if the spreader gains nothing, not even emotional satisfaction.
The key point is that fraud and disinformation both involve intentional deception with harmful consequences. The absence of a tangible benefit to the deceiver does not mitigate the wrongness of their actions. The harm caused by their deception remains, which is why both should be taken seriously, regardless of whether the perpetrator gains anything concrete.
Potential Objection 2
One could argue that while fraud can often be identified and prosecuted with relative clarity, catching and holding deliberate misinformation spreaders accountable is far more difficult. Fraud typically involves clear, tangible actions like financial transactions, false advertising, or forged documents that can be traced and proven in court. Law enforcement agencies have established procedures and tools for investigating and prosecuting fraud, making it easier to catch perpetrators.
In contrast, misinformation spreaders often operate in more ambiguous territory. Misinformation can be subtle or cloaked in the guise of opinion, which makes it harder to distinguish deliberate misinformation from a genuine mistake. Additionally, misinformation can spread quickly across various platforms and be shared by many users, which complicates the task of identifying the original source or intent. The sheer volume of content on social media and the global nature of these platforms further exacerbate the challenge of catching and holding misinformation spreaders accountable.
How might we rebut this objection?
Response to the Objection
While it is true that identifying and prosecuting deliberate misinformation spreaders can be more challenging than catching fraudsters, this does not negate the need for accountability. The difficulty of detection and enforcement does not justify leaving harmful behavior unchecked.
It is important to acknowledge that the penalties or restrictions for fraud and deliberate misinformation do not necessarily need to be the same. For instance, while fraud may often result in criminal prosecution, penalties for consistent and deliberate misinformation could range from restricting social media use to temporary or permanent bans from specific platforms. Criminal prosecution might also be appropriate in more severe cases where the misinformation leads to significant harm, given sufficient evidence of malice.
For instance, in the Sandy Hook case, Alex Jones, a conspiracy theorist and media personality, spread false claims that the 2012 Sandy Hook Elementary School shooting was a hoax. These baseless allegations caused severe emotional distress for the victims’ families, who were harassed and threatened by Jones’ followers. The harm caused by this deliberate misinformation was profound, and Jones was held liable in several defamation lawsuits and ordered to pay damages that drove his media company into bankruptcy. This case exemplifies how, in instances where misinformation causes substantial harm, particularly when spread with malice or reckless disregard for the truth, the consequences can and should extend beyond social media restrictions to include severe legal repercussions. It also shows that while the nature of penalties for fraud and deliberate misinformation might differ, both should be treated with the seriousness they deserve when they result in harm.
Therefore, the core principle remains consistent: the need to restrict harmful behavior does not disappear simply because it is harder to catch a malicious misinformation spreader. Even if the enforcement mechanisms and penalties vary, the goal is to protect the public from the damaging effects of both fraud and misinformation. Just as society imposes restrictions on those who commit fraud to prevent further harm, it should also impose appropriate measures on those who spread deliberate misinformation. The argument stands: if we recognize the necessity of restricting fraudulent behavior, we should also acknowledge the necessity of limiting harmful misinformation, even if the methods and penalties are not identical.
Potential Objection 3
A major objection could center on the difficulty of proving intent and on the sheer scale at which false information spreads. The danger here is penalizing a user who is genuinely expressing their beliefs. Unlike fraud, which typically involves clear and intentional deceit for personal gain, misinformation can arise from various sources, including individuals who genuinely believe the false information they share, or algorithms that amplify content without intent. This blurs the lines of culpability and challenges the direct application of punitive measures commonly associated with fraud. The variability in the origins and intentions behind misinformation, from malicious intent to naive sharing, complicates the enforcement of laws or policies designed to combat it. Addressing this objection requires distinguishing between harmful misinformation spread with fraudulent intent and misinformation resulting from less culpable circumstances. This distinction is crucial to ensure that responses are proportionate and do not inadvertently punish benign or unintentional misinformation, thus maintaining a balance between preventing harm and protecting freedom of expression.
Let’s now respond to this objection.
Response to the Objection
One effective approach is to prioritize enforcement against the originators (or publishers) of disinformation: the individuals who first create and distribute false information with the intent to deceive or harm. By concentrating on those who originate and repeatedly spread disinformation, enforcement stays targeted at those who actively sustain and amplify harmful falsehoods. Penalties could include restrictions on social media usage, fines, or even criminal prosecution in severe cases. Focusing only on entities that knowingly and persistently distribute harmful disinformation minimizes the impact on those who share it unknowingly.
Repeated offenses, in particular, signal a willful disregard for truth and for the harm caused, demonstrating a pattern of intentional deception. For example, if an originator continues to propagate disinformation after receiving credible counter-evidence and a warning, the repeated offense establishes clear intent and culpability. Platforms can assist by tracking and labeling repeat disinformation offenders, as sketched below, giving users insight into the credibility of sources without censoring those who might have shared the information in good faith.
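To make the tracking-and-labeling idea concrete, here is a minimal sketch of how a platform might record confirmed, post-warning violations and surface a credibility label to users. Everything in it (the OffenseRecord class, the thresholds, the label wording) is a hypothetical illustration of the idea, not an existing platform API or a policy recommendation.

```python
# Hypothetical sketch: tracking repeat disinformation offenders and labeling sources.
# Class name, thresholds, and labels are illustrative assumptions, not a real API.
from dataclasses import dataclass, field
from datetime import datetime


@dataclass
class OffenseRecord:
    user_id: str
    # Timestamps of violations confirmed *after* the user received
    # credible counter-evidence and a warning (per the argument above).
    confirmed_offenses: list[datetime] = field(default_factory=list)

    def record_offense(self, when: datetime) -> None:
        """Log a confirmed, post-warning violation."""
        self.confirmed_offenses.append(when)

    def credibility_label(self) -> str:
        """Map the offense count to a label shown next to the source."""
        n = len(self.confirmed_offenses)
        if n == 0:
            return "No confirmed violations"
        if n == 1:
            return "Warned once for disinformation"
        return f"Repeat disinformation offender ({n} confirmed violations)"


# Usage: good-faith sharers accumulate no record, while a persistent
# originator's label escalates with each confirmed violation.
record = OffenseRecord(user_id="originator-42")
record.record_offense(datetime(2024, 3, 1))
record.record_offense(datetime(2024, 5, 9))
print(record.credibility_label())  # Repeat disinformation offender (2 confirmed violations)
```

The point of the sketch is only that labels can be tied to confirmed, post-warning offenses rather than to any single disputed post, which keeps good-faith sharers out of scope.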
Thus, while the concern is real, it does not undermine the necessity of addressing intentional and harmful misinformation at its source. By focusing on originators and repeated offenders, and requiring clear evidence of intent, it’s possible to protect freedom of expression while effectively curbing the persistent spread of harmful disinformation.
Updated Argument: Premise-to-Conclusion Form
Now, having addressed these objections, I think we have a stronger argument.
Premise 1: Fraud is wrong because it involves intentionally deceiving someone, which results in harm to the victim.
Premise 2: Intentional misinformation, even when not spread for tangible gain (e.g., financial gain, political power, or social influence), involves deliberately deceiving others and causes harm, whether it’s financial loss, damage to public health, or other negative impacts.
Premise 3: In both cases, the wrongness lies in the deliberate intention to deceive, which results in harm to others who rely on the false information.
Premise 4: Just as social media platforms have a responsibility not to ignore users who engage in fraud, they also have a responsibility to address users who spread intentional misinformation, as both actions involve harmful deception that can cause significant damage to individuals and society.
Premise 5: Free speech does not protect clearly harmful activities like fraud, which is legally prohibited despite the freedom of expression. Similarly, free speech should not protect deliberate misinformation crafted to deceive and cause harm.
Conclusion: Since fraud, which intentionally deceives and harms others, is neither considered acceptable nor protected by free speech, deliberate misinformation should be regarded as wrong and subject to consequences. This aligns with the principle that free speech does not shield clearly harmful activities. Thus, social media platforms have a duty not only to act against fraudulent users but also against those spreading misinformation, given the significant harm caused by such deception, regardless of any tangible benefit to the spreader.
Rights Are Not Absolute
What about rights?
Rights, including the right to free speech, are not absolute. When society restricts or takes away certain fundamental rights, it often does so because the individual has demonstrated that they cannot be trusted to use those rights responsibly. For instance, when someone commits a crime, they may lose their right to freedom of movement through imprisonment. This loss of rights occurs because their actions have shown that they are dangerous to others or society.
Similarly, when an individual uses their right to free speech to intentionally spread harmful disinformation, it may be necessary to impose restrictions. This is not about punishing speech itself but about recognizing that the person has abused their right in a way that causes harm. Just as we do not trust a convicted criminal to roam freely without supervision, we may not trust a person who has spread harmful disinformation to continue exercising their speech without oversight. The need to protect others from harm can justify these restrictions, highlighting that rights come with responsibilities, and when those responsibilities are grossly violated, society may limit the corresponding rights.
So, taking away freedom of speech (in limited contexts) is possible. But when we punish disinformation, I think we should not take away the offender’s right to speech per se but respond to the harm they have caused and may imminently cause through speech. The goal here is to deter harm, not to restrict genuine expression. Monetary penalties may discourage certain speech, but they do not remove the offender’s right to express themselves. There is, therefore, a difference between banning someone from a platform and punishing them for their acts on the platform.
I do not hold that we should ban someone from a platform; rather, we should penalize them for how they use it. On this view, the offender can remain on the platform after a penalty, but further misuse of speech brings a further penalty. In my view, the penalty should be proportional to the offender’s wealth, preventing wealthier offenders from simply “buying their way” out of accountability.
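As a rough illustration of what a wealth-proportional penalty could look like, here is a minimal sketch in the spirit of the “day-fine” model used in some jurisdictions, where a fine scales with the offender’s income. The function name, rate, and figures are hypothetical assumptions, offered only to show that the same offense can cost offenders a comparable share of their means; this is not a policy proposal.

```python
# Toy illustration of a wealth-proportional ("day-fine" style) penalty.
# The function, rate, and numbers are hypothetical, not a policy proposal.
def proportional_penalty(daily_income: float, severity_days: int, floor: float = 100.0) -> float:
    """Scale the fine with the offender's daily income, with a minimum floor."""
    return max(floor, daily_income * severity_days)


# The same offense costs each offender roughly the same share of their income:
print(proportional_penalty(daily_income=80.0, severity_days=30))    # 2400.0
print(proportional_penalty(daily_income=4000.0, severity_days=30))  # 120000.0
```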
Read more: Why is Politics so Toxic?