Why “Misinformation” Was Dictionary.com’s 2018 Word Of The Year

Our 2018 Word of the Year Is … Misinformation

The rampant spread of misinformation poses new challenges for navigating life in 2018. As a dictionary, we believe understanding the concept is vital to identifying misinformation in the wild, and ultimately to curbing its impact.

But what does misinformation mean? Dictionary.com defines it as “false information that is spread, regardless of whether there is intent to mislead.”

The recent explosion of misinformation and the growing vocabulary we use to understand it have come up again and again in the work of our lexicographers. Over the last couple of years, Dictionary.com has been defining new words and updating terms related to the evolving understanding of misinformation, including disinformation, echo chamber, confirmation bias, filter bubble, conspiracy theory, fake news, post-fact, post-truth, homophily, influencer, and gatekeeper.

Misinformation vs. Disinformation

The meaning of misinformation is often conflated with that of disinformation. However, the two are not interchangeable. Disinformation means “deliberately misleading or biased information; manipulated narrative or facts; propaganda.” So, the difference between misinformation and disinformation comes down to intent. When people spread misinformation, they often believe the information they are sharing. In contrast, disinformation is crafted and disseminated with the intent to mislead others.

Further confusing the issue is the fact that a piece of disinformation can ultimately become misinformation. It all depends on who’s sharing it and why. For example, if a politician strategically spreads information that they know to be false in the form of articles, photos, memes, etc., that’s disinformation. When an individual sees this disinformation, believes it, and then shares it, that’s misinformation.
Misinformation and Social Media

While the word misinformation has been around since the late 1500s, the nature of how information spreads has gone through drastic transformations over the last decade with the rise of social media. For most individuals on social media, fact-checking is an afterthought, if it is a thought at all, and misinformation thrives.

This year, we saw technology platforms grapple with the role they play in the spread of misinformation. Critics blamed Facebook in particular, pointing to the following:

- the revelation that Cambridge Analytica had harvested personal data on Facebook to create in-depth psychological profiles of individuals, which were used to influence the Brexit vote and the US election
- the abundance of fake political ads across the platform; even after Facebook required political ads to include “Paid for by” messaging, Vice News found this feature easily exploitable by simply lying
- CEO Mark Zuckerberg’s stance that Holocaust-denial posts do not breach Facebook’s code of conduct because they are merely wrong, as opposed to intentionally misleading
- the lack of content moderation across languages on Facebook and WhatsApp, which contributed to the ethnic cleansing and genocide of the Rohingya people in Myanmar

Other tech platforms have made some very high-profile decisions about how to deal with individuals and communities who spread misinformation. This year, Twitter cracked down on millions of accounts that spread misinformation but did not represent real human users. Several tech platforms, including Apple, Twitter, YouTube, and Facebook, banned the conspiracy theorist Alex Jones, who is especially known for spreading disinformation about school shootings. Another noteworthy banning happened in September, when Reddit shut down the main subreddit dedicated to discussing the QAnon conspiracy theory; it had over 70,000 subscribers at that point.
Memes might seem trite to those unfamiliar with them, but they can be an efficient, insidious way to spread disinformation and conspiracy theories virally. The subsequent spread of misinformation contained in memes can have serious, even violent, consequences. Cesar A. Sayoc Jr., who was charged with sending 13 bombs in the mail to outspoken opponents of President Trump this October, drove a white van with memes plastered on the windows. His actions were stoked by the messaging common in political memes, which often spread misinformation.

Politics, Health, and Etymology

Regardless of how it spreads, misinformation is particularly rife in certain areas. In early November, fact-checkers from the Washington Post shared their record of all the false or misleading claims President Trump has made since becoming president. As of that report, the count stood at 6,420, an average of about 10 false or misleading claims a day. These claims are heard around the world and believed by many.

This year’s presidential election in Brazil is a case study in the role misinformation plays in elections. WhatsApp is a popular forum for political discussion in Brazil, but the platform’s end-to-end encryption makes it extremely difficult to control the spread of false news stories and misinformation in general. The New York Times partnered with the Federal University of Minas Gerais, the University of São Paulo, and the fact-checking platform Agência Lupa on a project that examined a sample of 50 viral political images circulating on WhatsApp in the lead-up to the Brazilian presidential election. They found that 56 percent of these images were misleading: they were completely false, contained images or data used out of context, made unsubstantiated claims, or came from untrustworthy sources.
This case shows how disinformation strategically spread by political campaigns can become misinformation when it is picked up and shared by individual supporters.

Not all misinformation is tied so directly to politics. Over the last several years, we’ve seen the rise of health-related misinformation. This September, Gwyneth Paltrow’s lifestyle empire GOOP paid $145,000 in civil penalties to settle a suit over misleading medical claims about the powers of jade and rose quartz vaginal eggs. Also related to health, a study published earlier this year in the American Journal of Public Health found that the same troll and bot accounts that attempted to influence the US election had also been sharing false information about vaccines on Twitter, with the goal of eroding public trust in vaccines.

Even misinformation about etymologies made the rounds this year, which is unsurprising to anyone trained in lexicography. In July, the word tag trended in Dictionary.com lookups after a viral social media post claimed to explain the word’s origin as an acronym. Though this memed etymology was untrue, it gained traction because it was fun and interesting. Its journey to virality mirrors the spread of so many memes containing untrue claims.

The Fight Against Misinformation

The quest to quell misinformation is deeply important work, but those closely involved in this pursuit expose themselves to online attacks. First Draft, a project based out of the Shorenstein Center on Media, Politics and Public Policy at Harvard, fights mis- and disinformation. In October, its co-founder and director Claire Wardle told the Columbia Journalism Review that the journalists who debunk misinformation for First Draft do not get bylines because of the threat of online harassment: “Any time you try and say that, This is not true, you have a lot of haters.”

In a world filled with misinformation, it’s easy to balk at actual facts that don’t confirm our own worldviews.
In April 2018, Mark Zuckerberg testified before Congress about the spread of misinformation on the platforms he runs, saying: “It’s not enough to just give people a voice, we need to make sure that people aren’t using it to harm other people or to spread misinformation … Across the board, we have a responsibility to not just build tools, but to make sure that they are used for good.”

This is a noble goal, but a lot of work must be done to execute on this vision. Tech platforms must actively invest in this cause. The problem will not go away on its own, and those who want to spread disinformation will continue evolving their strategies to target weaknesses in the systems. Fighting disinformation and its spread as misinformation is an iterative process, not a quick fix.

There are actions we can take to fight misinformation, even as individuals:

- we can improve our own media literacy by carefully considering our sources of information
- we can fact-check the stories we encounter on social media before believing them
- we can commit to reading entire articles, and not just headlines, before sharing them
- we can point others to fact-checking resources when we see misinformation spreading

Armed with awareness, we can all do our best to recognize misinformation when we encounter it and work toward stopping its spread.

This Year’s Runners-up Include …

representation: Representation jumped out to us thanks to the box-office success of films like Black Panther and Crazy Rich Asians. Additionally, this word resonated with the historic midterm election wins for Muslim women, Native Americans, and LGBTQ candidates.

self-made: The word self-made surged in lookups after the publication of a Forbes cover story calling Kylie Jenner a “self-made” billionaire. This word was also top of mind when the New York Times published an exposé about the true source of President Trump’s wealth.
backlash: In 2018, we saw a backlash to the Me Too movement in certain circles, a backlash to Judge Kavanaugh’s confirmation to the Supreme Court, and a backlash against harsh voter suppression tactics.

Glossary of Newly Defined or Updated Terms Related to Misinformation

misinformation: false information that is spread, regardless of whether there is intent to mislead.

disinformation: deliberately misleading or biased information; manipulated narrative or facts; propaganda.

post-truth: relating to or existing in an environment in which facts are viewed as irrelevant, or less important than personal beliefs and opinions, and emotional appeals are used to influence public opinion.

fake news: false news stories, often of a sensational nature, created to be widely shared or distributed for the purpose of generating revenue, or promoting or discrediting a public figure, political movement, company, etc. Sometimes facetious. (used as a conversational tactic to dispute or discredit information that is perceived as hostile or unflattering)

bubble: a zone of cognitive or psychological isolation, in which one’s preexisting ideas are reinforced through interactions with like-minded people or those with similar social identities.

filter bubble: a phenomenon that limits an individual’s exposure to a full spectrum of news and other information on the internet by algorithmically prioritizing content that matches a user’s demographic profile and online history, or excluding content that does not.

echo chamber: an environment in which the same opinions are repeatedly voiced and promoted, so that people are not exposed to opposing views.

confirmation bias: Psychology. bias that results from the tendency to process and analyze information in such a way that it supports one’s preexisting ideas and convictions.

implicit bias: Psychology. bias that results from the tendency to process information based on unconscious associations and feelings, even when these are contrary to one’s conscious or declared beliefs.

influencer: a person who has the power to influence many people, as through social media or traditional media.

gatekeeper: a person or thing that controls access, as to information, often acting as an arbiter of quality or legitimacy.

homophily: the tendency to form strong social connections with people who share one’s defining characteristics, as age, gender, ethnicity, socioeconomic status, personal beliefs, etc.

And, if you’re curious, here is our article and explanation for last year’s Word of the Year: complicit. Or, you can view all of our past Word of the Year selections here.