Comments & Discussion

Boston University moderates comments to facilitate an informed, substantive, civil conversation. Abusive, profane, self-promotional, misleading, incoherent or off-topic comments will be rejected. Moderators are staffed during regular business hours (EST) and can only accept comments written in English. Statistics or facts must include a citation or a link to the citation.

There are 11 comments on Could Taxes Deter the Spread of Harmful Fake News?

  1. Interesting idea. Who decides if a story is fake news? If a story comes out saying something in an intentionally inflammatory way, but is technically true, what happens? For example: to stop shark attacks on Cape Cod, kill all the baby seals. While true, because it would eventually remove the sharks’ food supply and they would go elsewhere to eat, it is a very distasteful and unacceptable solution. And it would not be subject to the tax.

    1. Are you arguing that the tax should be extended to ideas and speech that are simply distasteful, regardless of truth? Or are you posing another hypothetical to illustrate the dangers of this line of thinking?

      The first premise is abhorrent.

  2. “It’s not censoring free speech—rather it’s punishing them for the harm that their free speech causes.”

    This is literally the justification given by every tyrant in history. The fact that this view is shared by a BU Prof — one who went to Yale and MIT, no less — is scary.

    This should go without saying, but here are some questions for the author and Prof. Van Alstyne: who decides what is fake news? Who measures the supposed harm that it’s done? How is it measured? How are punishments to be doled out? Will people be arrested first? Is there a trial by peers? Is it really just a tax? What if I refuse to pay the tax for the alleged harm my words have caused? Do I go to jail? What happens if the definition of what is harmful changes as those in power change? What if I don’t agree with the current government?

    Have you, for a second, considered any of these things?

    Fake news is a problem. But I’d gladly choose to combat it in the marketplace of ideas, through free exchange and expression and speech, rather than grant this authoritative power of trying to litigate the “harm” of speech to a government I may not trust or agree with.

  3. Thank you for posting this! This is an interesting way to visualize the spread of misinformation.
    There are, and should be, limits to free speech. In social media, we see the rapid spread of speech that unduly endangers people. An example of speech that is not allowed is yelling “fire” in a crowded theater. People and institutions can also be sued for defamation. However, we see impunity in the defamation of marginalized groups, which harms innocent people.

  4. *Knock knock knock*
    Good evening sir, do you have a minute to talk? Don’t worry, we’re not with the government, we’re just with the Disney-Facebook-Haliburton nonpartisan fact-checking center. Last week we ran your social media presence through a routine audit and found some incorrect material that we believe might harm the public good. Now, we’re not here to tell you what to think, but for the sake of others we strongly encourage a more prosocial online persona — if you like we can send you some educational materials on what sort of discourse would be more acceptable. Most users really do find that adopting a more constructive style of online interaction makes them feel happier and more connected to their communities. Now, this is just a minor warning and you won’t have any communication privileges revoked, but we will pencil in a follow-up audit in the near future to see how you’re improving. Thank you, and have a nice evening.

  5. BU Staffer, you are right – there are limits placed on speech, specifically when it endangers the physical safety of others. Yelling “fire” in a theater or bomb in an airport, as the article mentions, are examples of this. So too are specific threats of violence against other people. Libel laws — although sometimes vague and difficult to prove — can be used to seek damages from people that use their speech to intentionally damage someone’s reputation or ability to earn a living.

    But — this is nowhere near the same as what this article is proposing.

    This article explores the concept of curbing the “harm” done by misinformation or “fake news” through taxation. This isn’t about addressing acts of violence or even threats of violence, it is about creating a regulatory body that would decide what constitutes misinformation, calculates the “harm” done (I’d love to see the plan to quantify that), assigns blame (to individuals or groups, I suppose, but it isn’t clear), and then levies fines.

    I’ve asked some questions above that should make clear the absurdity and naivety of this line of thinking, but I’ll pose another rhetorical here: what if these policies/laws are put in place? And what if the current administration (it needn’t be the Trump administration, just whatever administration happens to be in power at a certain point in time), begins the process of levying taxes on groups or individuals based on the advice they’ve received from supposed apolitical judicial appointees and technocrats? Perhaps half the country is happy with the current policy because the alleged “offenders” are of the opposite political persuasion and maybe some of them truly are despicable people. But what if a different political party comes to power with the next election, and they have a different idea about misinformation and “fake news” and “harmful speech.” Perhaps they claim that a particular cable network is intentionally perpetuating “fake news.” Maybe they even begin considering certain forms of satire and comedy to be in the realm of “misinformation.”

    The central point here is this: Be careful what powers you are seeking to grant government, because those powers will not go away and although created out of noble intentions — i.e. fake news is bad — they could be turned against you faster than you think. Any examination of 20th century history provides countless examples of similar slides into tyranny. To be sure, misinformation campaigns were used in an attempt to subvert our democracy, cause social fractures, and create discord. However, I can think of no faster way to help those bad actors achieve their aims than by adopting the policies prescribed here.

    Words themselves, however offensive or hateful or misleading, are not acts of violence. Followed to its logical conclusion, the concept of penalizing people based on amorphous and changing definitions of “harmful speech” — whether initiated by the left or the right — leads to the end of free speech as we know it.

    1. A note from Professor Marshall Van Alstyne:

      A few readers have reached out to object that the tax proposal sounds totalitarian! I’m delighted they’re willing to engage and they deserve a thoughtful response. Fake news is a hard problem so here are design elements that can help.

      1) The first point is to recognize disinformation as a form of pollution in your news feed just like carbon monoxide in your air supply or dioxin in your water stream. And, because fake news generates engagement, social platforms aren’t sufficiently motivated to clean the contaminants. Polluters need incentives to stop passing their poison.

      2) A good solution should scale. Facebook generates 4 petabytes of data each day. Like testing for pollutants in air or water, you don’t need to fact check everything. Just take a statistical sample to check the levels of contaminants. Want 90%, 95% or 99% accuracy? Simply take a bigger sample.

      3) Bias *is* a critical issue. One of the best ways to reduce bias is to separate the rules that define fake news from the adjudication of fake news and from the enforcement of penalties for fake news, the same way we separate the legislative, judicial, and executive branches of government. Critically, government should *not* be the certification authority, but neither should Facebook. Both have too much potential for self-interest. We might need new organizations, more like Snopes, that are as independent as possible, for certification.

      4) A good solution adapts easily to tailored goals. Suppose you want to police (some) disinformation but not encumber free speech at all. There’s a solution for that too. In that case, apply the penalty narrowly to disinformation spread by foreign governments. They don’t have a citizen’s right to speak and shouldn’t be meddling in our elections anyway. Such a narrow intervention would reduce disinformation from foreign adversaries but have no effect on citizens themselves.

      For reference, the tax on foreign fake news pollution is only one of four separate ideas for fighting fake news. It happens to be the most controversial and so the most fake newsworthy :-). A full paper, “The Problem of Fake News,” will be available in July 2019.

      I don’t claim these proposals are perfect. Most existing solutions have serious drawbacks. I’d be interested in any better solutions you or others care to share.
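The sampling claim in point (2) of the note can be made concrete with a little arithmetic. Below is a minimal sketch using the standard normal-approximation formula for estimating a proportion, n = z² · p(1 − p) / margin²; the confidence levels, the ±3% margin, and the function name are illustrative assumptions, not anything proposed in the article.

```python
import math
from statistics import NormalDist

def sample_size(confidence: float, margin: float, p: float = 0.5) -> int:
    """Fact-checks needed to estimate a 'contamination' rate to within ±margin.

    Normal-approximation sizing for a proportion: n = z^2 * p(1-p) / margin^2.
    p = 0.5 is the worst case (largest n). Illustrative sketch only, not the
    author's actual method.
    """
    z = NormalDist().inv_cdf(0.5 + confidence / 2)  # two-sided critical value
    return math.ceil(z * z * p * (1 - p) / (margin * margin))

# Tighter confidence at a fixed +/-3% margin costs only a modestly larger
# sample, no matter how many posts the platform hosts in total:
for conf in (0.90, 0.95, 0.99):
    print(f"{conf:.0%} confidence -> {sample_size(conf, 0.03)} sampled items")
```

The point this illustrates is why the note says the approach scales: the required audit sample depends on the desired accuracy, not on the 4 petabytes of daily data, so a few thousand checked items bound the error regardless of platform size.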



  6. It appears this article is really about vaccinations, since you’ve clearly stated your position regarding the measles outbreak. Are there not questions regarding the MMR vaccine? Merck, the only maker of the MMR vaccine, is currently battling an anti-trust case for lying to the government, lying to doctors, and lying to the public.


    Would this be considered fake news? I bet Merck would think so, and because they are the money behind anyone that promotes vaccine safety and effectiveness, I am sure someone would be muzzled for sharing the truth that calls vaccines as a whole into question.

  7. Prof. Van Alstyne – Thank you for your willingness to engage openly with critique. I see a small bit of common ground, but unfortunately, there may only be enough real estate for us to build a Tiny House. I remain unconvinced on several key points, which I’ll try to distill down to several bullets.

    1) You acknowledge that bias would be an issue in any attempt to establish a system aimed at detecting Fake News, monetizing the harm, and enforcing the rules, which in this case means taxation. Here also you say that neither Government nor Silicon Valley should be involved in the “certification phase,” which I take to mean the detecting of Fake News, or the separation of what is Fake from what is Real. So, we need an outside organization for that, perhaps Snopes or a newly created non-Governmental, apolitical, independent body that can weed out fact from fiction. But in practice, any organization that is assigned the role of adjudicating what constitutes Misinformation, Disinformation, or Fake News — these terms seem to be used interchangeably — will inherently be politicized. The stakes are simply too high and the impact of their decisions too great for it not to be political. And the people that make up that organization will be influenced by the political process, bias, and lobbying efforts that appeal to self-interest.

    2) Putting the above aside, let’s suppose for an instant that it is possible to create an entirely objective, independent, all-knowing body that can identify Fake News with 100% accuracy. Even in that scenario, Government would need to be involved in the next phase (the enforcement phase), because only Government could impose taxation and coerce people to pay. And it would indeed be coercion, because behind every tax lies the implied threat of force; refuse to comply and your property will be taken or conceivably, you will lose your rights and go to jail. If Government is involved in the enforcement phase, it will inevitably be involved in all phases.

    3) No regulatory body – Government or otherwise – would be able to detect Fake News with 100% certainty, for many reasons. You seem to acknowledge this point, but the next logical question is: “what is the acceptable level of false positives?” Are we willing to live with a certain percentage of companies or people that will be wrongly taxed, or taxed too high? Is there recourse for mistakes?

    4) How do you detect intent? And does that matter? I ask because there is a difference between Company A that knowingly spreads false info in order to influence Outcome B, and a separate company that is making a good-faith argument which happens to be wrong. Perhaps a good example of the nuances involved is the field of nutrition science and public health policy. For decades, the American Heart Association and the USDA advised that dietary fat increased cholesterol, which increased your risk of heart disease. It appears that many within the health establishment truly believed this research and urged people to eat diets low in fat and high in carbs. These conclusions gave us the Food Pyramid, impacted school lunches, and changed the way food was marketed to Americans. Some of the science may still be debatable, but it is now abundantly clear that this advice was incredibly wrong. In fact, not just wrong, but harmful. You made the comparison to negative externalities, such as air pollution — well, what if there were lobbying companies, lawmakers, and agencies that perpetuated bad science and harmful health policies, even after they came to better understand the causes of heart disease? How do you distinguish an incorrect, harmful good-faith argument from an incorrect, harmful piece of intentionally placed Fake News? If you penalize all, regardless of intent, you are now wading into the waters of policing thought.

    5) There seems to be a focus on social media companies, because these are the vehicles through which so much information spreads. Many of them are U.S.-based companies, but what happens when, after being subjected to penalties, they move operations overseas? Do we enact legislation to prevent those companies from operating within the U.S.? Do we restrict Americans’ ability to view their content online? This is the Chinese and Russian approach.

    6) Our common ground. In your 4th bullet, you say that one possibility is that in order to assuage concerns about the 1st Amendment, we simply focus on the disinformation campaigns being wrought by foreign governments. Absolutely, and I think the findings about Russia’s efforts to influence our last election are a case in point. We should try to use all available tools to prevent such an effort from happening again and to the extent we can hold individuals or companies accountable for having supported or aided that campaign, we should.

    So, what then is the answer, you ask? Well, your question rests on the premise that there is a problem so big that it requires a wide-scale Government reaction, and I’m not convinced that’s true. I do know all attempts to combat “Fake News” should involve more free speech, not less, and that society cannot be made perfect through coercive measures. Any involvement from Government should be to ensure that no ideas (including bad ones) are being suppressed or silenced. Harmful ideas, misinformation, and Fake News need to be confronted with the even louder and more compelling voices of reason and logic.

    Respectfully, I hope any ideas involving the taxation of speech are shouted down loudly, as they are incompatible with a Liberal society. I appreciate the dialogue, and I would look forward to reading your forthcoming article or hearing a lecture of yours on the topic.

  8. Also, regarding the below statement: free speech did not lead to Russian interference in the elections. The influence campaign was state-sponsored and executed. There is no free speech in Russia. Perhaps if they had a 1st Amendment and Bill of Rights, the citizenry would be able to hold their Government accountable or restrain their foreign policy behavior.

    “Van Alstyne has also considered this difficulty in balancing free speech and harm. “On the one hand, free speech can lead to…increased justice, it can lead to whistle-blowing in cases of criminal activity. [On] the other hand, it can lead to Russian interference in elections.””

    Let’s Go Blues!
