Misinformation vs Disinformation
With the rise of social media and digital platforms, malicious actors can quickly spread false information to large audiences, often with the intent to deceive or manipulate public opinion. There are differences, though, between misinformation, disinformation, and malinformation.
One of the most pervasive issues of our time is information disorder, and with it misinformation, disinformation, and malinformation. This proliferation of false content has become a major issue for organizations, governments, and individuals alike.
Perhaps the most common term for this content is "fake news", but with its increased politicization and widespread general use, the phrase has become almost meaningless. It simply doesn't capture the intricacies of this type of content, which is why many researchers and industry professionals have adopted more nuanced terms.
According to FirstDraft, a non-profit coalition that eventually joined the Shorenstein Center for Media, Politics and Public Policy at Harvard’s Kennedy School, three terms used to define this type of content are:
- Disinformation: Content that is intentionally false and designed to cause harm. It is motivated by three factors: to make money; to gain political influence, either foreign or domestic; or to cause trouble for the sake of it.
- Misinformation: This also describes false content, but the person sharing doesn’t realize that it is false or misleading. Often a piece of disinformation is picked up by someone who doesn’t realize it’s false and that person shares it with their networks, believing that they are helping.
- Malinformation: Factual or genuine information that is shared with the intent to cause harm or sow fear.
What are some examples of disinformation campaigns?
2020 Election Interference by Russia and Iran
In 2019, Facebook announced that it had discovered coordinated attempts by Russia and Iran to sow discord around the 2020 U.S. election.
Facebook and Twitter suspended accounts associated with a Russian influence operation that posed as an independent news outlet to target left-wing voters in the U.S. and Britain.
The operation, centered around a website called Peace Data, focused on U.S. politics and racial tensions in the run-up to the Nov. 3 election.
Investigators found that Peace Data recruited freelance journalists to write about domestic politics and pushed deliberate propaganda critical of right-wing voices and the center left.
The website published over 700 articles in English and Arabic between February and August of that year and co-opted real people to make it easier for the influence operation to remain undetected.
Pro-PRC “HaiEnergy” Information Operations Campaign Leverages Infrastructure from Public Relations Firm to Disseminate Content on Inauthentic News Sites
Mandiant, a security firm owned by Google, discovered an ongoing information operations (IO) campaign by the People's Republic of China (PRC) using 72 suspected fake news sites and social media accounts. These sites posed as independent news outlets and published content in 11 languages that supported the political interests of the PRC.
The campaign criticized the U.S. and its allies, sought to improve the international image of Xinjiang, and supported the reform of Hong Kong's electoral system. It also published content discrediting critics of the Chinese government, including Chinese businessman Guo Wengui and German anthropologist Adrian Zenz.
In addition to public sector concerns, disinformation affects the corporate sector as well. As PwC writes, "Now, disinformation is moving into the corporate sector. Organized crime and sophisticated actors borrow disinformation techniques and methods and use the same creation and distribution mechanisms used in politically motivated disinformation campaigns."
The Use of a Deepfake to Steal US$243,000
According to the Wall Street Journal, voice-generating AI software was used to mimic the voice of the CEO of a U.K. company's Germany-based parent company to facilitate an illegal fund transfer.
The fraudsters, posing as the parent company's CEO, demanded an urgent wire transfer to a Hungary-based supplier. They made two more attempts at a transfer but were met with suspicion.
Deepfake audio fraud is a new form of AI-enabled cyber attack that extends traditional methods like phishing. Companies should be aware of these attack vectors to stay safe from social engineering scams.
How do disinformation and misinformation affect public discourse?
When false or inaccurate information is spread, it can lead to confusion and misunderstanding, resulting in opinions formed based on inaccurate or misleading information. For those deploying disinformation campaigns, one of the primary goals is to create an atmosphere of uncertainty and cause people to doubt the accuracy of all information, even accurate information.
This can lead to the spread of harmful conspiracy theories and the erosion of the public’s trust in public institutions, companies, individuals, and media. As a result, it is essential that the public is aware of the dangers posed by disinformation and misinformation, and that reliable sources of information are consulted when forming opinions or making decisions.
It's also important to acknowledge that these challenges exist across platforms, including emerging ones such as Parler. According to a report by PeakMetrics and NewsGuard, 87% of links shared on Parler around the January 6 insurrection contained misinformation.
Disinformation campaigns can be a major threat to organizations across many sectors, both public and private. Companies must take proactive measures to protect themselves.
Those concerned about disinformation targeting their organization should assess risks, develop a strategy for monitoring and responding to social media, fortify the brand against disinformation, and develop a recovery plan. Each of these actions requires collaboration across departments and leaders, including the chief risk officer, chief communications officer, and crisis management executives. By taking these steps, organizations can better prepare for a potential disinformation attack and minimize the harm it may cause to the company and its stakeholders.
This field is constantly changing as disinformation actors update their tactics and as the potential impact on companies grows. By maintaining a well-prepared disinformation recovery plan, organizations can respond quickly and effectively to an attack and mitigate its impact.