
Who Is Responsible for Misinformation Online? We All Are.

Social Media Users

“A lie can travel halfway around the world before the truth puts on its shoes.”

–Mark Twain


It might be misinformation to claim Mark Twain is the author of this quote, because the truth is, no one knows who originally said it. But because it has been so often attributed to Mark Twain, it must be true. And my writing it here just made it seem even more true.


A phenomenon dubbed the “illusory truth effect” finds that repeated information is perceived as more truthful than new information. This extends to misinformation that seems unbelievable and to stories that have been contested by fact-checkers. Adults and children alike are susceptible: in one study, participants in three different age groups judged repeated statements to be true more often, and prior knowledge did not protect them from believing repeated falsehoods.


In 2018, three MIT researchers found that false news spreads on social media substantially faster than real news because internet users like and share it. They found it “takes true stories about six times as long to reach 1,500 people as it does for false stories to reach the same number of people.”


This is bad news.


A Current Example


Original Facebook Post

One needn’t look further than the current news cycle for a textbook example of the “illusory truth effect.” In early September, a post in a private Facebook group called “Springfield Ohio Crime and Information” claimed that Haitian immigrants were eating people’s pets.



NewsGuard, an organization that counters misinformation, investigated this claim and tracked down the author of the original post, Springfield resident Erika Lee. Lee wrote the post upon being told by a neighbor “that her daughters [sic] friend lost her cat…” The neighbor told Lee that the friend saw it hanging from a tree at a house where Haitians lived. Lee related this fourth-hand information in her post. The neighbor, Kimberly Newton, told NewsGuard:


“I’m not sure I’m the most credible source because I don’t actually know the person who lost the cat,” Newton said about the rumor she had passed on to her neighbor, Lee, the Facebook poster. Newton explained to NewsGuard that the cat owner was “an acquaintance of a friend” and that she heard about the supposed incident from that friend, who, in turn, learned about it from “a source that she had.” Newton added: “I don’t have any proof.”


The unsubstantiated claim traveled to the social media network X, where user @BuckeyeGirrl posted a screenshot of Lee’s Facebook post. In a reply to her X post, @BuckeyeGirrl tagged @EndWokeness (a user with 2.9 million followers). That post was viewed 4.9 million times.


X Re-Post


From there, the false story spread to other networks: TikTok, YouTube, Instagram, and more. Humorous and ridiculous “cat memes” began flooding the internet, and, in a matter of days, the rumor was repeated in front of a national television audience during the presidential debate. The story, which as of this writing still lacks credible evidence, continues to spread.


By the way... the cat in the story? Well, Miss Sassy was actually in the basement the whole time. Her owner later apologized to the Haitian neighbors.


What Can Be Done?


Social media platforms have no incentive to curb misinformation when it drives engagement on their sites. Lawmakers are stuck trying to define the fine line between free speech and censorship, while news organizations are scrambling to keep up with online information that moves at the speed of light. Meanwhile, public figures with massive online audiences are adding fuel to the fire by further elevating information that may or may not be true. Given all of this, there’s really only one way to stop the spread of harmful misinformation, and that’s for citizens to take matters into their own hands. This means arming ourselves with the skills to spot and stop falsehoods. These skills—collectively known as “media literacy”—can and must be practiced and modeled by adults and taught to our youngest citizens. It is not hyperbole to suggest our very democracy depends on this.


What a Media Literate Citizen Should Do


  • Consider the source. Anyone can post anything online. This is one of the wonderful, yet perilous, affordances of digital technologies. Therefore, it is incumbent upon anyone who uses the internet to take responsibility for the information they get by checking its source. Ask: Who is the creator of this information? Is this person or organization knowledgeable and credible, and do they provide first-hand information? Do they cite sources? Do they provide evidence? The nice thing about the internet is that the ability to do this research is literally at our fingertips.


  • Check your emotions. False information is often designed to elicit extreme reactions that prompt internet users to like and share the content. We should resist the urge to react immediately. Ask: Is this information making me feel angry? Is it making me feel fearful? If so, that may be a deliberate tactic to get you to react to and share the information. Remember to check your emotions and then check the source before trusting what you see.


  • Look at the quality of the writing. Words in all caps, headlines with glaring grammatical errors, evidence that the content has been created by AI, bold claims with no sources, sensationalist images, and ridiculous memes are all clues that the information may not be reliable.


  • Resist clickbait. Take time to actually read an entire article. Don’t let “clickbait” (a sensationalized or misleading headline designed to attract clicks on a piece of content) trick you into sharing it before you know what it is about. Understand that clickbait is just another tactic designed to get you to share before you read.


  • Be ready for AI. Free and easy-to-use artificial intelligence (AI) tools make it dangerously simple for anyone to create convincing websites, articles, images, photos, and audio in a matter of moments. It is getting harder and harder to detect content created by AI, too. It’s not just bad actors using these tools, either. Well-meaning internet users, excited to try these new tools, use them to create and share funny memes and more. (Just search the internet for “cat memes” for some current examples.)


  • Let others do the work for you. There are some excellent fact-checking organizations you can turn to when you need to verify stories that seem dubious. Examples include NewsGuard, PolitiFact, Snopes, and the MediaWise Teen Fact-Checking Network.


It’s Up To Us


The ability to share information with the world from the palm of your hand is an astonishing thing, but it comes with great responsibility. Stories originating from obscure corners of the internet can have an impact and potential consequences we simply couldn’t have imagined just a short time ago. If we still value the truth, then it's up to us—all of us—to take responsibility for what we read, like, share, and post online.


References


Pennycook, G., Cannon, T. D., & Rand, D. G. (2018). Prior exposure increases perceived accuracy of fake news. Journal of Experimental Psychology: General, 147. https://doi.org/10.1037/xge0000465


Fazio, L., & Sherry, C. (2019). The effect of repetition on truth judgments across development. PsyArXiv. https://doi.org/10.31234/osf.io/36mqc


Vosoughi, S., Roy, D., & Aral, S. (2018). The spread of true and false news online. Science, 359(6380), 1146–1151. https://doi.org/10.1126/science.aap9559


Diana Graber

Diana Graber is the author of Raising Humans in a Digital World: Helping Kids Build a Healthy Relationship with Technology and is the founder of Cyber Civics and Cyberwise.


