Content manipulation is as dangerous, and spreads as quickly, as a real virus
The risk of digital disinformation
Disinformation, misinformation, deception, and traditional propaganda all use digital channels and social networks to spread their messages to the public. The challenge and the risk are complex, but can technology help? Can it reduce the risk of disinformation?
The collision of digital disinformation with digital communication tools and platforms creates a perfect storm. On one side, we have attackers looking for valuable targets: a voter choosing a candidate, an investor evaluating a stock, or a parent trying to understand a new medical treatment for their child. On the other side, social media platforms offer the data, analytics, and targeting tools needed to deliver these messages effectively.
It is important to clarify that when I refer to “attackers”, I am not talking about you, me, or anyone who simply holds a different opinion. Instead, I mean coordinated networks that use deceptive tools such as bots, avatars, spoofed websites, algorithmic exploitation techniques, and various methods of psychological manipulation to distort perceptions, narratives, and information, eroding trust, deepening polarization, or driving targets toward harmful or erroneous actions.
It is easy to blame Facebook for disinformation, but the reality is more complex. Manipulated content follows users, and wherever users go, attackers follow. Although we most often encounter narrative manipulation on platforms like Facebook, X, and TikTok, it is becoming increasingly widespread on smaller group platforms such as WhatsApp, where smaller, more intimate groups can be targeted. This manipulation also occurs in product review flows, on Discord, Instagram, and many other communication channels. In essence, any platform where people discuss and communicate becomes a point of interest for attackers.
Can we fight digital with digital to reduce risks?
Since these attacks occur in the digital realm, it is logical to counter them with digital tools. For example, we could monitor conversations and interactions across platforms and channels to determine whether a discussion undermining a company's stock is an isolated incident or part of a coordinated attack. We could likewise identify chatter suggesting a planned violent attack against a minority group following an election. Once we have detected these patterns, we can report our findings to the social networks where the content is shared, prompting its removal, or we could notify law enforcement. In theory, this approach seems viable; in practice, the situation is often far more complex.
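To make the pattern-detection idea concrete, here is a minimal, illustrative sketch in Python. It flags one simple signal of coordination: near-identical messages posted by many distinct accounts within a short time window. Real detection platforms combine far richer signals (account age, network graphs, language models); the function name, thresholds, and sample data below are all illustrative assumptions, not any vendor's actual method.

```python
from collections import defaultdict
from datetime import datetime, timedelta

def normalize(text: str) -> str:
    """Crude normalization so near-identical posts collide on the same key."""
    return " ".join(text.lower().split())

def find_coordinated_bursts(posts, min_accounts=3, window=timedelta(hours=1)):
    """Return messages posted near-verbatim by at least `min_accounts`
    distinct accounts inside `window` - one weak signal of coordination.

    `posts` is an iterable of (account, text, timestamp) tuples.
    """
    by_text = defaultdict(list)
    for account, text, timestamp in posts:
        by_text[normalize(text)].append((account, timestamp))

    flagged = []
    for text, hits in by_text.items():
        hits.sort(key=lambda h: h[1])
        distinct_accounts = {account for account, _ in hits}
        time_span = hits[-1][1] - hits[0][1]
        if len(distinct_accounts) >= min_accounts and time_span <= window:
            flagged.append(text)
    return flagged

# Hypothetical sample feed: three accounts push the same line within 30 minutes.
posts = [
    ("acct_a", "Company X is collapsing!", datetime(2024, 5, 1, 9, 0)),
    ("acct_b", "company x is  collapsing!", datetime(2024, 5, 1, 9, 10)),
    ("acct_c", "Company X is collapsing!", datetime(2024, 5, 1, 9, 30)),
    ("alice", "Interesting earnings report today", datetime(2024, 5, 1, 12, 0)),
]
print(find_coordinated_bursts(posts))
```

Even this toy version shows why analysts remain essential: a burst of identical posts might be a coordinated attack, or simply a viral joke, and only context can tell the two apart.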
A growing number of companies are developing tools designed to detect content, monitor conversations, understand their context, and identify the sources behind them. These companies vary in their capabilities, including the breadth, depth, and range of their detection, their ability to navigate different languages and subject-specific vocabulary, and the range of topics they can monitor for potential threats.
There are notable differences between solutions, which require expertise to match the right platform to specific challenges, such as criminal activity, national security, or commercial threats. Most solutions on the market are not fully automated; analysts with domain expertise are involved in interpreting the collected data.
This means these countermeasures are not simple mobile apps we can install on our phones to “solve the problem”. Rather, they are sophisticated professional tools intended for use by governments, large companies, or organizations.
Technology's challenges in the fight against disinformation
Cost is an important factor to consider. Advanced solutions and dedicated experts can be quite expensive. Organizations must be strategic about the content they search for, focusing on the most vulnerable topics that could serve as an opening for an attack. Naturally, no one can afford to scan the entire internet at all times, and budget constraints can limit the technology's effectiveness. The key lies with expert analysts who have domain expertise. They can help guide the search and surface disinformation amid the vast stream of posts, videos, shares, likes, audio clips, memes, and the overall richness of the online social experience.
Detecting offensive content is only the first step; removal poses a significant challenge of its own. Although some companies have better access to and relationships with social networks, there are cases where these networks are uncooperative or uninterested in removing identified content. While some content clearly involves criminal activity or national security concerns, other cases are more nuanced and may be political in nature or represent what some consider legitimate opinion, even if those opinions are toxic and harmful. Moreover, even when there is evidence that content is part of a coordinated, inauthentic campaign, it still may not be taken down.
Scale is also a challenge. A recent post claimed that Politico, the online news website, had received $8 million from USAID. The claim is false. Despite being inaccurate, the post collected 15,000 shares and reached 80 million views in two days. Unfortunately, this kind of deceptive content is likely to remain online indefinitely.
One of the biggest obstacles to advancing powerful, effective technologies for mitigating content manipulation is the lack of industry recognition. There is a significant gap between the scale of the problem, validated by numerous case studies, and the recognition it receives from decision-makers. For example, one study suggests that nearly 50% of all S&P 500 companies are targeted by fake news attacks. Other research indicates that AI-generated content can effectively deceive investors, and there is considerable evidence of the impact of these attack campaigns on companies' operations, as well as their contribution to polarization within communities. Since 2022, the World Economic Forum has identified disinformation as a top risk, consistently ranking it among the top five risks each year.
Which raises the question: will technology save us from disinformation? It may not save us entirely, but it could certainly help a great deal.
Reducing the risk of disinformation
Let’s start with the most important point: states and businesses must recognize the risks they face. These risks are real, and there are many examples of how disinformation harms companies and organizations. To counter these threats, organizations need a solid strategy that strengthens their resilience and defenses against narrative attacks.
Education and awareness can help fill the gaps where technology falls short. Just as we have learned to spot high-sugar foods in the supermarket or to be skeptical of emails asking us to reset our passwords, we can also learn to recognize the signs of deceptive content and distinguish it from trustworthy online information.
In addition, the right technological solutions must be put in place. Operating a company without firewalls or two-factor authentication to protect digital assets is simply reckless. The same logic applies to safeguarding brands and organizations against content manipulation. It is time to establish the firewalls, procedures, and mitigations that will protect us from these sophisticated attacks and their harmful impact.