Disinformation is the dissemination of distorted or deliberately false information to create propaganda or achieve military goals such as misleading an enemy. It works by manipulating information: providing incomplete facts, stripping away context, and spreading false or out-of-context claims with the intent to deceive, usually by actors seeking to distort public opinion or promote false agendas. Disinformation can be spread online by a variety of actors, including governments, state-sponsored organisations, extremist groups, and individuals.
The Oxford Living Dictionary frames the concept somewhat differently, defining disinformation as "false information or misleading propaganda created by state organisations against the opposition or the media."
The purpose of such influence is always the same: to make the target act as the manipulator wants.
The creation and dissemination of disinformation is therefore a deliberate human activity: it creates a false impression in order to induce the target to take a desired action, or to refrain from acting. A characteristic feature of disinformation is that it is actively used in wartime, whether the war is "hot" or "cold".
The role of the USSR in developing the practice of creating and disseminating disinformation is captured in the title of a Washington Post article: "Before fake news, there was Soviet disinformation."
Such disinformation campaigns are not new: recall the wartime propaganda used to turn public opinion against the enemy. What is new is the use of the Internet and social media to spread them. Disinformation spread through social media can sway election results, fuel conspiracy theories, and sow discord.
The influence of social media is undeniable. Around 300 million new photos are uploaded daily on Facebook, six thousand tweets are posted every second on Twitter, the most popular YouTube channels receive more than 14 billion views per week, and the messaging app Telegram has over 500 million users.
Social media platforms bring people from different societies together, facilitating information sharing that was unimaginable even two decades ago. Disinformation on social media is widespread. Such platforms are used to promote instability, spread political conflicts, and call for violence.
According to a study by the University of Oxford, organised social media disinformation campaigns, run by both public and private organisations, have taken place in at least 81 countries, and the number grows every year.
Disinformation spread on social media can increase the risk of unrest in a variety of political environments, from repressive/authoritarian (China, Myanmar, Venezuela, Russia, etc.) to semi-democratic (Philippines, India, Indonesia, etc.).
Disinformation has also circulated on social media in countries that have historically had strong democratic institutions, including the United States and Britain, due in part to a lack of trust in institutions and domestic political influence.
During the 2016 US presidential election, Twitter identified more than 50,000 Russia-linked accounts that were distributing divisive election-related material. Climate change denial, Russia's invasion of Ukraine, and the war in Syria have likewise spawned floods of disinformation.
Fake news travels faster and more widely than true information, according to a 2018 study published in the journal Science by MIT Sloan professor Sinan Aral together with Deb Roy and Soroush Vosoughi of the MIT Media Lab. They found that false news was 70% more likely to be shared on Twitter than the truth.
This effect is more pronounced in the case of political news. The researchers found that bots spread accurate and false information at the same rate, but humans are more likely to spread false information. The main reason is that people are attracted to new and unusual information.
Disinformation can spread incredibly quickly thanks to the technology, speed, and accessibility of the Internet.
According to MIT Sloan professors David Rand and Gordon Pennycook, people who share disinformation are lazy rather than biased. Their 2018 study, which asked people to rate the accuracy of Facebook news headlines, found that analytical thinkers were better able to distinguish truth from falsehood, regardless of their political leanings.
The biggest flow of disinformation has been related to the COVID-19 pandemic. The problem was so acute that pandemic-related disinformation was given its own name: the infodemic. Guy Berger, a senior UNESCO official and one of the pioneers of the fight against disinformation at the UN, said: "There is hardly any industry that has not been affected by disinformation due to the COVID-19 crisis."
How best to deal with disinformation remains a matter of debate. However, experts generally agree that collaboration between the public and social media platforms is important and that curbing the spread of disinformation is essential.
Steven Smith of the AI Algorithms Group at MIT's Lincoln Laboratory set out to better understand these campaigns by launching the Reconnaissance of Influence Operations (RIO) program. His goal is a system that automatically identifies material containing disinformation, as well as the accounts that distribute it on social networks.
What makes the RIO system unique is that it combines several analytical methods to determine where and how disinformation is spreading.
During testing, the team collected 28 million tweets from 1 million accounts to determine which messages were misleading and who was responsible for spreading disinformation. The system analysed user accounts and identified those spreading disinformation with 96% accuracy. Flagged accounts are then analysed further, with the system focusing on their language and their interactions with foreign media. RIO can detect both automated accounts and human users acting as sources of disinformation, and it can be used by governments and other sectors to stop the spread of disinformation.
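The idea of combining several analytical signals to flag suspicious accounts can be illustrated with a toy sketch. This is not the RIO system itself; the signals, weights, and threshold below are purely illustrative assumptions, and a real system would use trained models over far richer features.

```python
# Toy illustration (NOT the actual RIO system) of fusing several
# behavioural signals into one suspicion score for a social media account.
# All weights and thresholds are arbitrary, for demonstration only.

from dataclasses import dataclass

@dataclass
class Account:
    posts_per_day: float      # posting frequency
    duplicate_ratio: float    # share of near-identical posts, 0..1
    link_ratio: float         # share of posts that are only links, 0..1

def suspicion_score(acc: Account) -> float:
    """Combine the signals into a 0..1 score (weights are assumptions)."""
    rate_signal = min(acc.posts_per_day / 100.0, 1.0)  # very high volume is bot-like
    return 0.4 * rate_signal + 0.35 * acc.duplicate_ratio + 0.25 * acc.link_ratio

def flag(acc: Account, threshold: float = 0.5) -> bool:
    """Flag an account for further analysis if its score crosses the threshold."""
    return suspicion_score(acc) >= threshold

bot_like = Account(posts_per_day=400, duplicate_ratio=0.9, link_ratio=0.8)
typical  = Account(posts_per_day=3, duplicate_ratio=0.05, link_ratio=0.2)

print(flag(bot_like))  # True
print(flag(typical))   # False
```

The design point the sketch captures is that no single signal is decisive: a flagged account is only a candidate for the deeper language and network analysis the article describes.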
There are several approaches to combating disinformation that can be implemented by a variety of institutions.
Governments can encourage independent, professional journalism. Society needs journalists to understand complex events and to cope with the ever-changing nature of social, economic and political events. Having an impartial and professional Fourth Estate is essential.
Governments should avoid censoring content and holding online platforms accountable for disinformation. This can limit freedom of speech, making people hesitant to share their political views for fear they might be censored. Such overly restrictive regulation could set a dangerous precedent and inadvertently encourage authoritarian regimes to restrict freedom of expression.
The media must continue to focus on high-quality journalism that inspires confidence and attracts large audiences. Over the past few years, many media organisations have seen a significant increase in readership and viewership, yet public trust in the media has declined sharply over the same period. In times of natural disasters, chaos and unrest, the world needs strong and resilient media to keep citizens informed about current events.
Tech companies need to invest in technology that detects fake news, using algorithms and crowdsourcing, and flags it for users. Innovations in detecting fake news and disinformation already exist and are useful for media platforms: detection can be automated, and social media companies should invest in integrating such programs into their systems.
Funding media literacy programs should be a priority for governments. Such programs should help people become better consumers of online information. Investments are needed to facilitate collaboration between journalists, businesses, educational institutions and non-profit organisations to promote media literacy.
Social media companies such as Facebook are very active in combating disinformation. Between March and October 2020, Facebook removed more than 12 million posts containing disinformation about Covid-19.
But is censorship the best way to deal with fake news and conspiracy theories?
Consider the case of Carrie Madej, an osteopath who uploaded a video to YouTube in June 2020 claiming that Covid-19 vaccines would change recipients' DNA. The video went viral, gathering over 300,000 views on YouTube and spreading to other platforms such as Facebook, Instagram, Twitter and WhatsApp. After it went viral, YouTube and Facebook removed it from their platforms, preventing it from being shared (although it can still be found by searching). And while such censorship can effectively stop the spread of this information, it also raises questions about freedom of speech.
Freedom of speech is vital, but it cannot by itself guarantee reliable information. Freedom of speech protects the expression of all kinds of information and ideas, whether facts or opinions, true or false, sincere or satirical. It allows everyone to say what they want, apart from narrow exceptions defined in criminal law.
But freedom of speech does not protect methods of manipulation.
Measures can be taken against manipulative campaigns if they undermine national security or public order. Freedom of speech may be restricted for certain purposes, including the need to protect national security or public order, provided the restrictions are lawful, necessary and proportionate.
Freedom of speech is the cornerstone of human and civil rights, but does anyone have the right to speak freely while spreading disinformation?
Free speech does not mean that governments are free to engage in deliberate campaigns of disinformation and manipulation against both their own populations and the international community. It also does not mean that governments are powerless to protect their populations from deliberate disinformation campaigns by foreign regimes.
*This article was created as part of the project Building Digital Resilience in Armenian Society, a certified training course on how to fight disinformation in real time, identify and prevent its spread, and inform and warn others. The sessions have been followed by a series of articles exploring the topic. The goal of the project is to strengthen Armenian media literacy and civil society, build resilience to disinformation, and strengthen the strategic communication capabilities of Armenian state institutions.