1. Introduction
Modern forms of information warfare are complex, multilevel and dynamically developing phenomena. Traditional methods of influence are complemented and enhanced by digital technologies. The development of information and communication systems, social networks, and automated platforms enables the dissemination of information at scale and in targeted ways, influencing public opinion, political stability, and social processes.
In the context of high-speed data exchange and global interconnectedness, information attacks are becoming increasingly covert and multifaceted. They can include targeted distortion of facts, manipulation of context, dissemination of disinformation, creation of artificial agendas, and use of automated bots to amplify the effect. At the same time, traditional approaches to counteraction are often insufficient, since they are unable to ensure timely detection and analysis of hidden patterns in large, noisy data arrays.
The problem under consideration is distinctly interdisciplinary in nature. Its solution, at a minimum, requires close integration of the humanities and the technical sciences. The development of information technologies leads to transformations of society [
1,
2], which cannot always be explained using classical sociology and/or psychology. From our point of view, which underlies the methodology of this review, any attempt to create an effective method of counteracting modern forms of information warfare will encounter difficulties of a fundamental nature, as discussed in [
3]. The cited work shows that the highly fragmented disciplinary structure of science that had developed by the early 21st century is often an obstacle to understanding modern societal processes as a whole. This conclusion applies not only to the problems associated with information warfare. Accordingly, in this review this particular problem is considered, among other things, to substantiate the concept discussed in [
3]. Specifically, the issue of information warfare can be considered as the most illustrative example proving the adequacy of the thesis on the convergence of natural scientific, technical, and humanitarian knowledge. It is this thesis that underlies the methodology of this review.
An illustration of the importance of this thesis is the increasingly widespread use of generative artificial intelligence (GenAI [
4]) in science and education, the nature of which is the subject of numerous discussions [
5,
6,
7]. GenAI is increasingly used to address a wide range of questions, including those of a humanitarian nature. The most general ideas about the essence of information warfare allow us to assert that GenAI can, depending on the algorithms embedded in it, prioritize the interests of certain political or other groups, etc. We emphasize that this does not imply outright falsehoods. Given the widespread use of such technologies, a relatively small (and often unnoticeable to the average user) shift in emphasis is sufficient.
This example clearly shows that the problem of information warfare is reaching a qualitatively new level. There is a famous expression attributed to Bismarck: “The Battle of Sadowa was won by a Prussian school teacher.” This statement illustrates an obvious thesis—the worldview is determined by the education system. Consequently, if this system is transformed, then society is transformed as well. At a minimum, these processes are interconnected.
Consequently, the nature of the future development of computing systems that form the basis of AI for various purposes, particularly GenAI, cannot but influence society (at least in the long term). There is every reason to believe that the “information technology—society” system is currently at a bifurcation point [
8], namely, there is a wide range of scenarios for the further development of this system.
Furthermore, evidence suggests that classical computing technology built on the von Neumann architecture has exhausted its development potential. This underscores the need to develop multiple types of neuromorphic systems [
9,
10]. Reviews on this topic repeatedly emphasize that the von Neumann architecture has several significant shortcomings (at least relative to current demands). One of them is the significant energy cost of data exchange between the computing nodes proper and the memory blocks [
11,
12].
Developments in the field of neuromorphic materials, in turn, raise questions about the algorithmic principles underlying computing systems, including those implementing GenAI. As [
13] shows, computing systems based on neuromorphic materials will be complementary to logics that differ from binary logic. To an even greater extent, this conclusion is true for materials that represent a further development of neuromorphic materials [
14]. In particular, attempts to create sociomorphic materials have already been reported in the literature [
15].
In general, the analysis of problems related in one way or another to information warfare requires the integrated use of digital signal processing (DSP) methods, new-generation computing architectures, and mathematical models capable of effectively identifying significant features and anomalies in information flows. Of particular importance are algorithms operating under modular arithmetic, as well as approaches that provide parallel data processing in real time. These technologies are used across military and government domains, and to protect critical infrastructure, financial systems, and media platforms.
This review is structured as follows.
Section 2 details the review methodology (databases, search keywords and year coverage, inclusion/exclusion criteria, deduplication and screening) and reports the PRISMA flow.
Section 3 develops the theoretical foundations of information influence, clarifying core concepts, mechanisms and levels of impact.
Section 4 surveys modern forms of information warfare, including coordinated online operations, platform-mediated manipulation, deepfakes and memetic tactics.
Section 5 analyzes the impact on the sociocultural code—language, value-normative, symbolic, historical-narrative and social-institutional layers—with illustrative cases.
Section 6 synthesizes methods of countering information attacks, spanning education and prebunking, platform-level interventions and detection pipelines (network, temporal, content and multimodal), as well as governance tools.
Section 7 focuses on IoT/IIoT threat and mitigation patterns in the context of information warfare (e.g., segmentation/zero-trust, SBOM, MUD and industrial control security).
Section 8 discusses the evolution of information-processing systems relevant to counter-IO—linking digital signal processing and emerging computing architectures to scalable detection and filtering.
Section 9 considers ethical and legal aspects (e.g., transparency, researcher access and accountability requirements).
Section 10 examines higher education in the context of information warfare and the need for a paradigm shift in curricula and training.
Section 11 outlines future directions and a research agenda.
Section 12 concludes. A map of the review is presented in
Table 1.
The purpose of the review is to combine modern scientific and technological approaches in the field of countering information threats, providing a systemic understanding and a comprehensive view of possible methods of protection in the context of the rapid development of the digital environment.
2. Review Methodology
As noted above, the methodology of this review is based on the thesis of the convergence among natural science, technology, and humanities, whose significance is most evident in the problems of information warfare. Key evidence of the importance of interdisciplinary cooperation in this area is presented in
Section 3, where the theoretical foundations of information influence are discussed.
The PRISMA 2020 methodology was used to prepare this review, ensuring transparency and reproducibility of the literature selection process. During the initial search, a total of 1059 publications were identified. After removing duplicates and sources that did not meet the formal criteria, 838 papers remained for screening. A step-by-step assessment was then conducted: based on titles and abstracts, 168 publications were excluded as unrelated to the research focus or not meeting academic standards. The remaining articles underwent a full-text analysis, during which 78 more sources were excluded due to insufficient methodological rigor or limited relevance. As a result, the final sample included 592 publications, which formed the basis of the analysis. The chronological scope of coverage is broad: although the main focus was on studies from recent years (2015–2025), the review also included earlier works that are fundamental to forming a theoretical basis and explaining the evolution of information impact methods. Thus, the final sample combines both contemporary studies reflecting current trends and challenges and classical works that allow us to trace the development of scientific thought in retrospect.
The literature search was conducted in leading international databases—Scopus, Web of Science, and Google Scholar, as well as in specialized academic publications. In addition, cross-references from review articles, monographs, and key publications were considered. When including sources, priority was given to publications in peer-reviewed journals and academic presses. This approach minimized the risk of bias and increased the reliability of the summarized data. To increase the completeness of the sample, a combination of keywords was used, including the terms “information warfare”, “information operations”, “propaganda”, “disinformation”, “hybrid warfare”, “sociocultural code”, “cognitive security”, “digital platforms”, “cybersecurity”, as well as other equivalents.
Table 2 provides information on source coverage, and
Figure 1 shows a PRISMA-style diagram.
3. Theoretical Foundations of Information Influence
The study of information impact requires a comprehensive approach that combines knowledge from cognitive psychology, sociology, communication theory, cybernetics, and political science. In the context of digitalization and the global interconnectedness of the media, the information environment has become not only a channel for transmitting information but also an arena for targeted manipulation of public opinion and behavioral patterns. Modern research emphasizes that the effectiveness of such impacts is determined by both individual cognitive mechanisms of perception and the structure of the information networks through which information is disseminated. Fundamental theoretical models—from the concepts of cognitive biases and social epistemology to network and epidemiological theories—allow us to identify patterns in the formation, amplification, and replication of meanings in the collective consciousness. These same models underlie the development of information operations strategies, including military doctrines of cognitive operations, platform algorithms, and the attention economy. This section presents a systematic review of the key theoretical approaches necessary to understand the nature and mechanisms of modern information impact.
3.1. Cognitive Biases and the Psychology of Perception
Cognitive biases are systematic deviations in the process of perceiving, processing, and memorizing information that lead to persistent errors in judgment and decision making [
16]. These phenomena are not random—they are associated with the characteristics of human memory, attention, and emotions, as well as evolved mechanisms for conserving cognitive resources [
17]. In the context of information influence, cognitive biases are exploited to guide the interpretation of facts, evoke emotional reactions, and reinforce the desired attitudes in the collective consciousness [
18]. One of the most studied is the confirmation bias, in which an individual tends to search for and interpret information so that it aligns with their existing beliefs [
19]. In the context of digital platforms, this effect is amplified by content personalization algorithms that form “information bubbles” (filter bubbles), in which users mainly see information that confirms their worldview [
20,
21]. This not only reduces the likelihood of critical analysis but also makes the audience more receptive to targeted narratives.
The illusory truth effect is an increase in the perceived credibility of information through repetition [
22]. Experiments show that even the refutation of an initially false message does not always reduce its subjective credibility, especially if repetitions occur in different contexts [
23,
24]. This phenomenon is widely exploited in propaganda campaigns, where key messages are regularly reproduced across multiple channels.
The anchoring effect demonstrates that initial numerical or verbal information exerts a disproportionate influence on subsequent evaluations [
25]. For example, inflated or deflated statistics in news stories can change the perception of an issue, even when accurate information is subsequently presented [
26].
Other significant cognitive biases used in information influence include:
- -
Halo effect—tendency to transfer positive or negative characteristics of an object to its other properties [
27].
- -
Primacy and recency effects—better memorization of the first and last presented information [
28].
- -
Selective perception—tendency to ignore data that does not meet expectations [
29].
- -
Framing effect—change in the interpretation of information depending on the wording or context [
30].
Psychological research shows that such biases are particularly pronounced under heightened emotional arousal [
31]. High emotional load—fear, anger, a sense of threat—reduces cognitive control, shifts information processing to a heuristic level, and makes a person more susceptible to simplified, binary interpretations of events [
32]. This explains why emotionally charged messages spread faster and more widely on social media [
33]. In addition to individual cognitive biases, information campaigns rely on social cognitive effects. For example, the false consensus effect creates the impression that a certain position is widespread and supported by the majority [
34]. The groupthink effect suppresses critical discussion within a group if it is assumed that there is general agreement [
35].
Modern neurocognitive studies using functional magnetic resonance imaging (fMRI) show that cognitive biases are associated with activity in brain structures responsible for processing reward, social evaluation, and emotions, such as the prefrontal cortex and amygdala [
36,
37]. This opens up prospects for a deeper understanding of the mechanisms of manipulation and the development of methods of cognitive immunoprophylaxis.
Taken together, cognitive biases underpin manipulative influence, since they allow manipulators to bypass rational filters of perception and appeal directly to automatic, emotionally charged mechanisms of information processing. Understanding these distortions is key to developing effective strategies to counter modern forms of information warfare.
3.2. Communication Theories and Social Epistemology
Communication theories play a fundamental role in understanding the mechanisms of information influence, because they allow us to conceptualize both the structure and dynamics of message dissemination across society. The evolution of communication models—from linear schemes of information transmission to multilevel network concepts—has reflected changes in the technological and media environments, as well as a shift in emphasis from data transmission to the construction of meanings and collective interpretations [
38,
39]. One of the early approaches that laid the foundation for the analysis of information processes was Lasswell’s linear communication model, which reduces communication to the chain “who says—what—through which channel—to whom—with what effect” [
40]. Despite its schematic nature, this model is still used in structuring information influence strategies and assessing the effectiveness of media campaigns [
41].
In the context of information warfare, the linear model is important for mapping sources and target audiences, but insufficient for analyzing complex networks and feedback loops. A significant development of linear approaches was the two-stage communication model by Lazarsfeld and Katz, which identifies the role of “opinion leaders” in the transmission and interpretation of messages [
42]. Opinion leaders, as highly trusted nodes in social networks, can amplify or weaken information signals, including disinformation [
43]. Recent empirical research shows that in the online environment, the role of opinion leaders is often played by micro-influencers or thematic communities, rather than large media figures. These “middle nodes” become key to the sustained dissemination of narratives [
44]. With the transition to the digital age, interactive and transactional communication models have become widespread, emphasizing the bidirectional nature of information exchange [
45]. Here, communication is viewed as a process of constant mutual influence, where the audience not only receives but also actively forms and redistributes content. This corresponds to the reality of social networks, where disinformation can be amplified through reposts, comments, and memetic transformations of original messages [
46].
Social epistemology complements these models by focusing on how knowledge and beliefs are formed, disseminated, and legitimized in a collective context [
47]. According to Goldman and Olson, social epistemology examines the institutions, technologies, and practices through which the “epistemic ecology” of society is shaped [
48]. In the context of information warfare, epistemic ecology is under attack: alternative sources of legitimacy are created, standards of evidence are eroded, and trust in traditional knowledge institutions is undermined [
49].
One of the key concepts of social epistemology is epistemic trust: the willingness to accept information from certain sources as true [
50]. Manipulative information campaigns often aim to erode epistemic trust in independent media, scientific institutions, and government agencies, thereby opening space for the promotion of alternative narratives [
51]. Recent research shows that loss of epistemic trust correlates with increased susceptibility to conspiracy theories and quasi-scientific claims [
52]. Communication theories that consider the influence of network structures are particularly relevant for the analysis of disinformation. The network approach allows viewing the dissemination of messages as a process that depends on the topology of connections between participants [
53]. For example, studies based on graph theory show that nodes with high betweenness centrality play a critical role in the transmission of information between clusters and can be targets for information intervention [
54].
An important area is the integration of communication theories with epidemiological models, where information is viewed as an “infectious agent” [
55]. These hybrid models make it possible to predict the dynamics of fake news spread and to assess the effectiveness of “information quarantine” or prebunking strategies [
56]. At the same time, social epistemology suggests that the speed of disinformation spread depends not only on network structure, but also on cultural norms, the level of media literacy, and trust in sources [
57]. Framing and agenda-setting play a significant role in information impact. According to McCombs and Shaw, media do not so much tell people what to think as they determine what to think about [
58].
Framing, in turn, determines the interpretive framework of events, influencing which aspects are perceived as significant [
59]. In the modern digital environment, framing is often implemented through visual and memetic forms, which enhance its emotional and cognitive effectiveness [
60]. Together, communication theories and social epistemology provide powerful tools for analyzing and countering information attacks. They allow identification of nodes and channels of dissemination, mechanisms for legitimizing knowledge, and weak points in the epistemic structure of society. For developers of information-security strategies, the key conclusion is the need to combine technical network analysis with a cultural and cognitive understanding of the processes that shape information perception.
3.3. Network and Epidemiological Models of Spread
Understanding the mechanisms of information dissemination in the modern media space is impossible without network and epidemiological models. These approaches help to formalize complex message circulation processes, consider the structural features of communication systems, and predict the dynamics of information attacks. Network models are based on graph theory, in which the information ecosystem is represented as nodes (individual users, organizations, media accounts) and edges (connections between them, such as subscriptions, reposts, or mentions) [
61]. Classic network analysis indicators—degree, betweenness centrality, closeness centrality, and eigenvector centrality—help identify key agents of message dissemination [
53]. For example, nodes with high betweenness centrality are often “bridges” between clusters and can ensure the rapid transition of disinformation to new network segments [
54]. One of the fundamental features of information networks is the small-world property, identified by Watts and Strogatz [
62]. Even in large and sparse networks, there are relatively few steps needed to connect any two nodes. This property explains why disinformation can reach a global audience in a matter of hours, especially in the presence of viral content and memetic forms.
Another key characteristic is the scale-free (scale-invariant) degree distribution of networks, in which node degrees follow a power law [
63]. Such networks are resilient to random failures but are extremely vulnerable to targeted interventions on hub nodes. Research shows that on Twitter and Facebook, about 1–5% of users can be responsible for more than 80% of the spread of certain narratives [
64].
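The centrality measures mentioned above can be computed directly with standard graph libraries. The following minimal sketch (Python with networkx; the synthetic scale-free graph stands in for a real retweet or mention network, which would have to be collected separately) ranks nodes by degree, betweenness, and eigenvector centrality to flag candidate hub and bridge accounts:
```python
# Minimal sketch: centrality-based screening of a synthetic interaction graph.
# The Barabasi-Albert graph is only a stand-in for real platform data.
import networkx as nx

G = nx.barabasi_albert_graph(n=2000, m=3, seed=42)   # scale-free "retweet" graph

degree = nx.degree_centrality(G)                      # local reach of each account
betweenness = nx.betweenness_centrality(G, k=200, seed=42)  # "bridge" role (sampled)
eigenvector = nx.eigenvector_centrality(G, max_iter=500)    # influence via influential neighbors

def top(scores, k=5):
    # Return the k node ids with the highest score
    return sorted(scores, key=scores.get, reverse=True)[:k]

print("Top hubs by degree:        ", top(degree))
print("Top bridges by betweenness:", top(betweenness))
print("Top by eigenvector:        ", top(eigenvector))
```
In practice, the same ranking applied to observed interaction data helps prioritize which accounts to audit for coordinated behavior.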
Based on network representations, diffusion models have been developed that integrate social and behavioral factors. The most well-known epidemiological models are the SIR (Susceptible-Infected-Recovered) type and its modifications—SIS, SEIR, SEIZ (where Z is the “zombie” state of a misinformed agent) [
65]. In the context of information warfare, “infection” corresponds to the state of an individual who adopts and spreads disinformation, and “recovery” can mean either debunking or refusal to further spread the message [
66].
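As a minimal illustration of this correspondence (illustrative parameters only, with no empirical calibration), the classic SIR equations can be integrated with a simple Euler scheme, treating "infection" as adoption of a narrative and "recovery" as debunking or ceasing to share:
```python
# Toy SIR-type model of information spread; beta and gamma are assumed values.
import numpy as np

N = 10_000                  # audience size
beta, gamma = 0.35, 0.10    # assumed exposure rate and "debunking" rate
S, I, R = N - 10, 10, 0     # start with 10 active spreaders
dt, T = 0.1, 120.0

history = []
for step in range(int(T / dt)):
    new_infections = beta * S * I / N * dt   # susceptible users adopting the narrative
    new_recoveries = gamma * I * dt          # spreaders who stop sharing / are debunked
    S -= new_infections
    I += new_infections - new_recoveries
    R += new_recoveries
    history.append((step * dt, S, I, R))

peak_t, _, peak_I, _ = max(history, key=lambda row: row[2])
print(f"Peak spread: ~{peak_I:.0f} active spreaders at t = {peak_t:.1f}")
```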
The SIR model in the information context has been complemented by threshold models, in which an individual accepts information only when a certain number of signals from the environment are reached [
67]. This allows us to describe the phenomenon of “viral breakout”, when a narrative that has long remained on the periphery suddenly enters the mainstream.
Modern works demonstrate that classical epidemiological approaches need to be adapted to the specific features of digital platforms. For example, Borge-Holthoefer et al. [
68] propose a SEIZ model in which state Z describes users who are resilient to refutation and continue to spread falsehoods despite access to corrective information. Such “infection resilience” is particularly characteristic of politically motivated disinformation campaigns.
An important area is the combination of network analysis and epidemiological models with agent-based modeling (ABM) methods [
69]. ABM accounts for individual differences in agents—media literacy level, political orientation, emotional perception—and makes it possible to observe how these factors affect the macrodynamics of dissemination. For example, studies on COVID-19 show that misinformation about vaccines spread faster and deeper online than science-based messages due to emotional richness and algorithmic amplification [
70]. Particular attention is paid to role structures in the network. The work of Lerman et al. [
71] shows that “super-spreaders” and “super-receivers” are different categories of users, and counteraction strategies should take both into account. Eliminating super-spreaders slows the growth of misinformation, but long-term resilience is achieved by increasing the media literacy of super-receivers [
72]. Hybrid approaches also use percolation models to describe cascade processes [
73]. Here, dissemination is viewed as a process of penetration through a network, in which the “permeability” of nodes depends on trust, cultural factors, and personal experience. When a critical connection density is reached, a “percolation threshold” emerges, beyond which dissemination becomes uncontrollable.
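The threshold and percolation-style dynamics described above can be illustrated with a toy agent-based simulation (assumed setup: a Watts–Strogatz graph as a stand-in for a real social network, and uniformly random personal thresholds as a crude proxy for media literacy). Each agent adopts the narrative once the fraction of adopting neighbors exceeds its threshold; depending on the seed set and the threshold distribution, the cascade either dies out or crosses a tipping point and sweeps most of the network:
```python
# Toy linear-threshold cascade on a small-world graph (illustrative only).
import random
import networkx as nx

random.seed(1)
G = nx.watts_strogatz_graph(n=1000, k=8, p=0.1, seed=1)
threshold = {v: random.uniform(0.1, 0.5) for v in G}   # per-agent adoption threshold
adopted = set(random.sample(list(G.nodes), 10))        # initial seed accounts

changed = True
while changed:
    changed = False
    for v in G:
        if v in adopted:
            continue
        nbrs = list(G.neighbors(v))
        frac = sum(u in adopted for u in nbrs) / len(nbrs)
        if frac >= threshold[v]:       # enough neighbors already adopted
            adopted.add(v)
            changed = True

print(f"Final cascade size: {len(adopted)} of {G.number_of_nodes()} agents")
```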
In terms of countermeasures, network and epidemiological models allow testing the effectiveness of measures—from removing nodes to changing content delivery algorithms [
74]. However, several studies [
75] emphasize that without considering social context and cross-platform dynamics, such measures are short-lived.
Recent developments include the use of dynamic multilayer networks (multiplex networks), in which each layer represents a separate platform or type of interaction (e.g., public posts, private messages, offline meetings) [
76]. Modeling on such structures reveals that disinformation often “migrates” between layers: blocking on one network causes an increase in activity in another. Network and epidemiological models of dissemination provide both an analytical and a predictive basis for understanding the dynamics of information attacks. Their strength lies in their ability to identify critical points, predict consequences, and evaluate the effectiveness of countermeasures. A key challenge for current research is integrating these models with cognitive and cultural factors to create more accurate and robust analytical tools.
3.4. Attention Economy and Platform Architecture
The concept of the attention economy dates back to the 1970s, when Herbert Simon pointed out that under information abundance, the scarce resource is not information but human attention [
77]. In the modern digital space, this idea is of central importance, as user attention has become a key asset for which social networks, news aggregators, streaming services, and online platforms compete. The digital environment has transformed attention into a commodity subject to monetization through advertising models, targeting, and the sale of user-behavior data [
78]. Platforms, relying on behavioral data, build algorithms that rank and personalize content to maximize user time on the platform. These algorithms work like digital “magnets”, continuously optimizing the delivery of information to retain attention and stimulate repeat visits [
79]. Modern platforms use a combination of recommender systems, ranking algorithms, and content-filtering systems based on machine learning and deep neural networks [
80]. These mechanisms are trained on huge amounts of user data and, as recent studies show, tend to reinforce existing preferences, forming so-called “filter bubbles” and “echo chambers” [
20].
The effect of algorithmic personalization has been documented in studies on Facebook and YouTube, where recommendation systems show a tendency to direct users to more polarized and emotionally charged content [
81]. In the case of YouTube, algorithms increase the likelihood of being steered to conspiracy videos if users had previously interacted with similar content, even in the absence of an explicit search [
82].
In the context of information warfare, platform architecture and the attention economy play a dual role. On the one hand, the highly competitive environment forces platforms to optimize algorithms for engagement metrics—clicks, likes, and reposts—without directly considering the credibility of the content [
83]. On the other hand, this same mechanism creates conditions for increased amplification of disinformation, since false or sensational messages tend to evoke a stronger emotional response and, accordingly, generate more interactions [
84]. An empirical study shows that false news spreads faster on Twitter and reaches more users than true news, and this effect is not explained by bots—it is associated with the behavior of real users [
64]. This indicates that the platform architecture focused on engagement metrics objectively amplifies narratives that correspond to attention patterns, rather than the quality of information.
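A toy sketch of this mechanism (hypothetical item names, feature names, and weights; not any platform's actual ranking formula) shows how a purely engagement-driven objective orders a feed without ever consulting a credibility signal:
```python
# Toy engagement-optimized ranker: credibility exists but is never used.
from dataclasses import dataclass

@dataclass
class Item:
    title: str
    predicted_ctr: float      # modeled probability of a click
    predicted_shares: float   # modeled expected reshares
    credibility: float        # 0..1; ignored by the engagement objective

def engagement_score(item: Item) -> float:
    # Typical engagement objective: weighted clicks + shares, no quality term
    return 0.6 * item.predicted_ctr + 0.4 * item.predicted_shares

feed = [
    Item("Measured policy analysis", 0.03, 0.01, 0.9),
    Item("Outrage-bait rumor",        0.12, 0.30, 0.2),
    Item("Routine local news",        0.05, 0.02, 0.8),
]

for item in sorted(feed, key=engagement_score, reverse=True):
    print(f"{engagement_score(item):.3f}  {item.title}")
```
The low-credibility, high-arousal item lands at the top simply because the objective rewards predicted interaction, which is the structural point made above.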
The attention economy is closely related to cognitive psychology. People tend to pay more attention to information that evokes strong emotions (fear, outrage, surprise), which, in turn, encourages platforms to offer such content [
33]. Algorithmic systems trained on behavioral data unintentionally exploit cognitive biases, including confirmation bias and negativity bias [
84].
From the perspective of military and strategic information operations doctrines, the attention economy and platform architecture create an infrastructure that enables large-scale and targeted manipulation of public opinion [
85]. State and non-state actors, including online troll farms and coordinated botnets, use knowledge of algorithmic mechanisms to advance favorable narratives [
86]. Modern NATO StratCom [
87] and European External Action Service [
88] reports indicate that the architecture of social platforms does not simply passively transmit information but actively shapes the context and trajectories of its dissemination, influencing the likelihood of users encountering a particular narrative.
One of the key challenges is the cross-platform migration of content. Disinformation blocked in one network can be instantly reproduced in another, and the architecture of interconnections (cross-posting, messengers, forums) helps bypass barriers [
89]. Research shows that Telegram has become one of the main platforms—“recipients” of removed content from Facebook and Twitter, while maintaining a high speed of dissemination [
90].
These features indicate the need to develop architectural solutions that can work not within a single platform but at the ecosystem level.
Thus, the attention economy and platform architecture are fundamental elements of the information impact infrastructure. Their key feature is algorithmic optimization for engagement metrics, which, although not initially aimed at disinformation, creates a favorable environment for its exponential spread. Understanding these mechanisms is a prerequisite for developing effective countermeasures to modern forms of information warfare.
3.5. Strategic and Military Doctrines of Cognitive Operations
Cognitive operations (CO) within a military context are the deliberate manipulation of the perceptions, thinking, emotions, and decision-making processes of target audiences to achieve strategic or operational objectives. The approach emerged at the intersection of psychological operations (PSYOPS), strategic communications (STRATCOM), information operations (IO), and cyber operations, but over the past two decades it has emerged in its own right as a key dimension of modern conflict [
91].
The origins of CO can be traced back to Cold War military doctrines, when information and psychological warfare were viewed as an integral part of political struggle [
92]. However, with the advent of digital communications and social media, the cognitive space has become an integral part of the “battlefield”. A 2018 RAND report documents a shift from traditional “information superiority” to the concept of “cognitive superiority,” in which the object of influence is no longer the communications infrastructure but the processes of perception and interpretation of information [
93]. NATO doctrinal documents such as the Allied Joint Doctrine for Strategic Communications (AJP-10) define the cognitive domain as “the domain in which beliefs, values, and perceptions are formed and changed” [
94]. Unlike the information domain, which describes the channels and technical means of data transmission, the cognitive domain focuses on subjective interpretations and their strategic management.
The United States developed the CO concept as part of a combination of PSYOPS, military information operations, and cyber operations. US Special Operations Command reports emphasize the need to integrate data on the cultural characteristics of target audiences into content impact algorithms [
95]. The Pentagon views cognitive operations as a component of “Multi-Domain Operations” (MDO), in which psychological impact is synchronized with cyber and information attacks [
96].
Russia officially uses the term “information-psychological impact” (IPI), which essentially overlaps with the concept of CO. The Russian Federation Information Security Doctrine (2016) states that “information impact on collective consciousness” may aim at undermining cultural and spiritual values, as well as destabilizing the domestic situation [
97]. Analysis by the NATO StratCom COE indicates that Russia systematically uses cognitive methods as part of hybrid strategies [
98].
China, within the framework of the Three Warfares concept—psychological, legal, and media—emphasizes the cognitive effects of information campaigns [
99]. The White Paper on National Defense of the PRC (2019) states that “victory in future wars will be determined by the ability to shape the perception and interpretation of events” [
100].
Cognitive operations are embedded in a broader paradigm of hybrid warfare, in which military, diplomatic, economic, and information means are combined to achieve goals without direct armed conflict [
101]. In this scheme, the cognitive component acts as a “force multiplier”, since successfully changing the perception of the target audience can reduce the need to use kinetic means [
102].
Cognitive operations use a wide range of methods, including [
103]:
- -
targeted distribution of content based on behavioral data;
- -
creation and management of “frames” (narratives) with high emotional load;
- -
exploitation of cognitive biases (e.g., the anchoring effect and the confirmation effect);
- -
use of synthetic media (deepfakes) to enhance credibility.
A key feature of modern CO implementation is integration with platform architecture (see
Section 3.4) and the use of algorithmic affordances of social media [
104]. Research shows that the combination of algorithmic personalization and psychologically validated narratives can significantly increase the effectiveness of influence [
105].
The doctrine of CO is being developed at the level of national strategies and international alliances. The 2021 NATO report “Cognitive Warfare” [
106] explicitly states that cognitive space is a new domain of conflict along with land, sea, air, space, and cyberspace. The document warns that cognitive warfare is aimed not only at soldiers but also at civilians, including shaping the perception of threats and allies.
In 2020–2022, the United States conducted a series of exercises simulating operations in the cognitive domain, in which social media simulations were used to test the resilience of military units to information attacks [
107]. Such training demonstrates an institutional recognition that winning future conflicts requires dominance in the cognitive domain.
Despite the growing attention to CO, there is a danger of an escalating “cognitive arms race” in which states invest in increasingly sophisticated methods of manipulating perceptions [
108]. This raises serious ethical questions, including the balance between national security and the human right to freedom of thought and belief [
109].
With the cognitive domain becoming an arena for strategic confrontation, developing countermeasures requires an interdisciplinary approach that combines political science, cognitive psychology, computer science, and international law.
4. Modern Forms of Information Warfare
The evolution of communication technologies and digital platforms has profoundly changed the nature of information conflicts. While in the past information warfare relied primarily on traditional media and propaganda channels, today the key arenas of confrontation are social networks, instant messaging apps, video-hosting services, and interactive platforms. The development of personalization algorithms, tools for automated content distribution, and generative technologies has led to the emergence of new, more sophisticated forms of attack—from botnets and coordinated campaigns to synthetic media and memetic weapons. These forms often operate in combination, reinforcing one another and creating difficult-to-distinguish hybrid operations aimed at manipulating public consciousness, undermining trust in institutions, and fragmenting the sociocultural space. This section examines the key manifestations of information warfare in their current technological and strategic context.
4.1. Disinformation, Fake News and False Narratives
In the modern literature, it is customary to distinguish between mis-, dis-, and malinformation: errors and inaccuracies without the intention to mislead; deliberate lies; and accurate information disseminated with malicious intent (e.g., doxxing). This framework is outlined in the Council of Europe report [
110], which also shows that the boundaries between categories are dynamic and context dependent (e.g., motives, production processes, and distribution routes). An important conclusion is that the source and process are at least as significant as the “factual result” itself, which is critical to developing countermeasures.
A philosophically precise definition of “disinformation” was proposed by Don Fallis: false (or misleading) content disseminated with the intent to mislead [
111]. This definition emphasizes intent, distinguishing disinformation from unintentional error, and aligns well with law-enforcement practice and communications ethics. In parallel, there is the term “fake news”, which in academic usage has been typologized as satire, parody, fabrication, manipulation (including deepfake editorials), advertising mimicry, and propaganda. This typology [
112] notes that under one “umbrella” label are hidden different genres with different harmfulness and mechanisms of distribution—and therefore different response tools.
The review by Lazer et al. [
113] in Science systematizes the empirical evidence: the role of platform algorithms, behavioral targeting, social context, and psychology. The authors emphasize that sustainable solutions go beyond simple “delete/tag” and include interface design, researchers’ access to data, and prebunking (inoculation). The scale of impact in electoral contexts has been documented by a series of studies: the Oxford Internet Institute’s report on the IRA account pool for the US Senate [
114], work on France’s 2017 election [
115], and an analysis of the “Brexit botnet” [
116]. All of them document coordinated inauthentic behavior (CIB), the use of botnets, memetic formats, and cross-platform integration. This does not necessarily mean that each message is highly “persuasive,” but it does demonstrate industrial-scale reach and repetition—two factors critical to the perpetuation of false narratives. At the same time, individual exposure to “untrusted domains” is heterogeneous: panel data from Guess, Nyhan, and Reifler [
117] showed that a significant portion of the audience does not encounter such sources regularly, while small cohorts are overexposed and become “superconsumers” of inauthentic sites. For counter-policy, this means that targeted measures are needed, not just “platform-wide” regulation.
The cognitive side of the phenomenon explains the persistence of false narratives. The confirmation bias [
118] inclines people to seek out and interpret information in support of their own beliefs. The “illusory truth effect” [
119] shows that simple repetition increases subjective credibility even when a person knows the correct answer. Reviews of “post-truth” [
120] highlight the limited effectiveness of post-factum refutations without changing frames and context. This explains the success of the “firehose of falsehood” [
121], which relies on speed, volume, and variability rather than internal coherence. Finally, in times of crisis, an “infodemic” takes hold: the information field is saturated with contradictory messages, and the lack of trusted reference points (e.g., institutions and experts) makes the audience receptive to simple, emotionally charged explanations. Observations by the WHO and The Lancet [
122] during COVID-19 confirm that combating the infodemic requires a combination of technical filters, platform partnerships with fact-checking organizations, and media literacy measures [
123,
124,
125].
4.2. Botnets, Trolling, and Coordinated Inauthentic Behavior
Among the most studied technological forms of modern information influence are botnets—automated or semi-automated accounts on social media that imitate the activity of real users. Their key function is to scale content distribution, manipulate engagement metrics (likes, retweets, comments), and create the illusion of consensus on certain topics [
126].
Botnet mechanisms are varied. In technical terms, they can rely on scripted access to platform APIs (for example, the Twitter API before the introduction of restrictions in 2023), on the infrastructure of “bot farms” with centralized control, or on distributed architectures using infected user devices (i.e., botnet models known from cybersecurity) [
127]. In parallel, “human bots” are actively used—low-paid operators acting according to preset scenarios (troll farms), a pattern especially typical of political trolling [
128]. Coordinated Inauthentic Behavior (CIB) as a concept is codified in the policies of Meta and other platforms and is studied in the academic community from the perspective of network analysis, anomaly detection, and digital attribution [
129]. The key criterion is the presence of hidden centralized coordination aimed at misleading the audience about the true nature of the interaction.
Research by E. Ferrara and colleagues [
130] shows that botnets can amplify both disinformation and legitimate political messages, which complicates the binary classification of “harm/benefit”. At the same time, the pace of publications, the rhythm of daily activity, and the simultaneous distribution of the same URLs are reliable indicators of automation [
131].
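These indicators translate directly into simple features. The sketch below (Python with pandas; the column names 'account', 'timestamp', and 'url' are assumptions about the input format) computes posting-interval regularity, hourly-activity entropy, and URL co-sharing across accounts as first-pass automation signals:
```python
# Minimal sketch: per-account automation indicators from post metadata.
# Assumes 'timestamp' is already a pandas datetime column.
import numpy as np
import pandas as pd

def automation_features(posts: pd.DataFrame) -> pd.DataFrame:
    feats = []
    for account, grp in posts.groupby("account"):
        ts = grp["timestamp"].sort_values()
        gaps = ts.diff().dt.total_seconds().dropna()
        # Low coefficient of variation of gaps => suspiciously regular posting pace
        cv_gaps = gaps.std() / gaps.mean() if len(gaps) > 1 and gaps.mean() > 0 else np.nan
        # Low entropy over hours of day => machine-like daily rhythm
        hour_p = ts.dt.hour.value_counts(normalize=True)
        hour_entropy = -(hour_p * np.log2(hour_p)).sum()
        feats.append({"account": account, "cv_gaps": cv_gaps,
                      "hour_entropy": hour_entropy, "n_posts": len(grp)})
    return pd.DataFrame(feats)

def url_cosharing(posts: pd.DataFrame, min_accounts: int = 5) -> pd.Series:
    # URLs pushed by many distinct accounts are candidates for coordinated amplification
    by_url = posts.dropna(subset=["url"]).groupby("url")["account"].nunique()
    return by_url[by_url >= min_accounts].sort_values(ascending=False)
```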
A classic example of a large-scale CIB operation is the activity of the Russian “troll factory” (Internet Research Agency, IRA), identified in the reports of the US Senate and analyzed in research [
132]. Hundreds of pages and thousands of accounts aimed at polarizing society and interfering in the 2016 elections were identified on Facebook and Instagram.
Similar patterns are documented in other contexts: in the 2017 French elections [
133], in the Catalan crisis [
134], and in the media coverage of the Hong Kong protests [
135]. In each case, campaigns are multilingual, integrated across cross-platform assets (Twitter, Facebook, YouTube, messaging apps), and use memetic content for organic reach.
The CIB problem is compounded by “digital astroturfing”—the imitation of grassroots support using synthetic accounts [
136]. As experiments by Vosoughi, Roy, and Aral [
137] show, false information spreads faster and more deeply on Twitter than truthful information, even without the use of bots; however, bots significantly accelerate the early stages of diffusion, increasing the chances of a narrative taking hold.
In response, platforms are implementing automatic detection methods: behavioral metrics, graph analysis, machine learning based on content features, and metadata [
138]. However, as Keller and Klinger [
139] note, each new detector quickly causes attackers to adapt—publication patterns, message frequency, and distribution by time zones change.
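On top of such features, a supervised detector can be trained; the following sketch (scikit-learn with placeholder synthetic data and labels, not a validated model) illustrates the general pipeline and, implicitly, why it must be retrained as attacker behavior drifts:
```python
# Minimal sketch: supervised bot/CIB classifier on placeholder features.
# Real labels would come from takedown datasets or manual annotation.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 1000
X = np.column_stack([
    rng.gamma(2.0, 1.0, n),    # cv_gaps (posting-interval regularity)
    rng.uniform(0, 4.6, n),    # hour_entropy (max is about log2(24))
    rng.poisson(50, n),        # n_posts
])
y = rng.integers(0, 2, n)       # placeholder labels: 1 = coordinated/automated

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print("CV accuracy (placeholder data):",
      cross_val_score(clf, X, y, cv=5).mean().round(3))
```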
In the academic and applied context, interest in international regulation of CIB is growing. NATO StratCom COE [
140] and the European External Action Service (EEAS) [
141] reports point to the need to synchronize cybersecurity and information standards, since the boundaries between “purely technical” and “purely information” threats in the bot environment are blurred.
4.3. Deepfakes and Synthetic Media
Deepfakes are synthetic media files—images, audio, or video recordings, created or modified using deep learning algorithms, most often based on generative adversarial networks (GANs) [
142]. Their key feature is a high degree of realism in the absence of an authentic source of an event or statement. Having emerged as an experimental direction in computer vision in the mid-2010s, deepfakes quickly became a tool for both entertainment and information attacks. The technological basis of deepfakes lies in the ability of GANs and their variants (StyleGAN, CycleGAN, and related models) to model data distributions and synthesize new samples that are visually indistinguishable from real ones [
143]. For audio fakes, architectures based on voice conversion models and text-to-speech systems with deep learning are used [
144]. With the advent of multimodal generators (e.g., DALL·E 2, Imagen, and Sora), it became possible to create falsified content in several media channels simultaneously [
145].
In the context of information warfare, deepfakes have two key application scenarios [
146]:
Discrediting political figures and public leaders—creating false videos with statements or actions that did not occur.
Substitution of evidence: falsification of photo and video evidence used in the media and court proceedings.
Examples of this kind have been recorded in a number of countries. In 2019, a deepfake depicting Gabonese President Ali Bongo was distributed on Facebook and Twitter, which caused a political crisis and an attempted military coup [
147]. In 2020, during the US elections, there were cases of using altered videos to discredit candidates [
148]. Westerlund’s research [
149] points out that as generation algorithms develop, the risk of “invisible interference” in political processes through the mass production of synthetic materials increases.
The problem is exacerbated by the fact that the availability of generation tools has increased dramatically: the DeepFaceLab and FaceSwap libraries, as well as cloud-based AI platforms, enable deepfake creation without deep technical expertise [
150]. At the same time, anti-detection methods are developing—including adding noise-like artifacts or using adaptive frame generation to bypass detection algorithms [
151].
In terms of countermeasures, academic and corporate labs are developing methods for automatic detection of deepfakes based on analysis of facial microexpressions, blink rate discrepancies, lip movement patterns, and compression artifacts [
152]. However, as Verdoliva [
153] and Korshunov and Marcel [
154] note, in the context of an “arms race”, the accuracy of such methods decreases as new generative models appear. Organizations such as Europol and the UN view deepfakes as a threat to information security and public trust [
155]. Europol’s Facing Reality? (2022) report proposes including synthetic media in the list of priority threats to cyberspace. In 2023, the European Parliament began discussing legislation on labeling AI-generated content [
156]. Deepfakes and synthetic media are therefore becoming an increasingly important element of the information warfare arsenal. Their danger lies in their ability to rapidly erode trust in visual and audio evidence, which undermines the basis of public consensus and makes rapid verification during crises nearly impossible.
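For illustration, one of the simplest frame-level cues related to the artifacts mentioned above can be sketched as follows (OpenCV; the video path is hypothetical, and this single statistic is far from a real deepfake detector, which would combine many learned cues): sampling frames and tracking the variance of the Laplacian as a crude proxy for the stability of high-frequency detail, which blending and re-compression can disturb.
```python
# Toy sketch: frame sharpness profile as one crude artifact proxy, not a detector.
import cv2
import numpy as np

def laplacian_profile(video_path: str, every_n: int = 15) -> np.ndarray:
    cap = cv2.VideoCapture(video_path)
    scores, i = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if i % every_n == 0:
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            scores.append(cv2.Laplacian(gray, cv2.CV_64F).var())  # high-frequency energy
        i += 1
    cap.release()
    return np.array(scores)

# profile = laplacian_profile("clip.mp4")   # hypothetical file name
# print(profile.mean(), profile.std())      # inspect sharpness level and stability
```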
4.4. Memetic Weapons and Infooperations in Social Networks
The term “memetic weapon” traces back to the concept of the meme proposed by R. Dawkins in the book “The Selfish Gene” (1976), where the meme is described as a unit of cultural information transmitted through imitation and communication [
157]. In contemporary contexts, memes have become not only an object of study for cultural scientists, but also a tool for targeted information influence. In the context of information warfare, memetic weapons are understood as visual, textual, or audiovisual cultural artifacts created and distributed with the aim of changing the perceptions, beliefs or behaviors of target audiences [
158].
Social networks have radically increased the potential of memetic communication due to the high speed of replication, algorithmic adjustment of news feeds and a low threshold for user participation [
159]. Memes in the digital environment have several strategic properties [
160]:
- -
Virality—the ability to spread exponentially in network structures.
- -
Brevity and density—the transmission of complex meanings in a minimal form.
- -
Emotional intensity—using humor, sarcasm, or outrage to enhance the response.
- -
Mimicry of “folk art”—which makes them difficult to attribute.
Within the framework of information operations, memes perform the following functions [
161]:
- -
Cognitive contagion—introducing simplified and emotionally charged narratives that replace complex discussions.
- -
Discrediting the enemy—forming a negative image through an ironic or grotesque image.
- -
Signaling belonging—marking “us” and “them” in online communities.
- -
Triggering—activating certain reactions upon repeated contact with the meme.
Empirical studies show that memetic campaigns are often part of broader coordinated information operations. For example, Johnson et al. [
162] analyzed Twitter activity during the 2016 US presidential election and identified network clusters that distributed political memes in conjunction with botnets. In [
163], it was demonstrated that memes were used to increase polarization in Europe, with platform algorithms facilitating “echo chambers.”
Memes play a special role in crisis and conflict situations. For example, during the conflict in eastern Ukraine, visual jokes and caricatures became a tool for both mobilizing supporters and demoralizing opponents [
164]. A study by Ylä-Anttila [
165] shows that political memes on Facebook during campaigns can form stable emotional frames, shaping long-term perceptions of events.
The effectiveness of memetic weapons is due to cognitive mechanisms: ease of memorization, emotional coloring, conformity, and the effect of repetition [
166]. These same mechanisms make counteraction difficult—memes quickly adapt, reformat, and integrate into new contexts.
Countermeasures include:
- -
automated meme detection using computer vision and contextual analysis [
167] (see the sketch after this list);
- -
proactive creation of countermemes and positive narratives [
168];
- -
media literacy and critical thinking as long-term protection [
169].
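As a minimal sketch of the first countermeasure above (an average-hash is used here as a simple stand-in for the perceptual hashing and computer-vision models applied in practice, and the file names are hypothetical), near-duplicate meme variants can be grouped by the Hamming distance between compact image signatures before heavier contextual analysis:
```python
# Toy near-duplicate grouping of meme images via average hashing.
import numpy as np
from PIL import Image

def average_hash(path: str, size: int = 8) -> np.ndarray:
    img = Image.open(path).convert("L").resize((size, size))   # small grayscale thumbnail
    pixels = np.asarray(img, dtype=np.float32)
    return (pixels > pixels.mean()).flatten()                   # 64-bit binary signature

def hamming(a: np.ndarray, b: np.ndarray) -> int:
    return int(np.count_nonzero(a != b))

# Hypothetical file names; a distance of roughly <= 10 out of 64 bits usually
# indicates the same meme template with minor edits (captions, crops, filters).
# h1 = average_hash("meme_variant_1.png")
# h2 = average_hash("meme_variant_2.png")
# print("near-duplicate" if hamming(h1, h2) <= 10 else "different templates")
```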
However, as Phillips notes [
170], excessive censorship of memes can be perceived as a restriction of freedom of expression, which in itself can become an object of information manipulation.
Thus, memetic weapons are an effective but difficult to control tool of information warfare, highly adaptable and able to penetrate the collective consciousness through everyday communication on social networks.
4.5. Geopolitical Cases
Analyses of the 2016 U.S. presidential election document systemic disinformation and coordinated operations using bots, targeted advertising, and memetic formats. Howard, Woolley, and Calo (JITP, 2018) [
171] describe computational propaganda and challenges to voting rights; the DiResta (Tactics & Tropes) [
172] team report and SSCI materials document IRA activities aimed at polarization and targeting vulnerable groups. The resulting estimates show uneven exposure and the contribution of small “superconsuming” cohorts, while simply increasing advertising transparency without cognitive prevention has limited impact [
173,
174,
175]. In the European Union, information attacks are considered part of hybrid threats: the East StratCom Task Force maintains a case database (EUvsDisinfo), and the Joint Communication JOIN/2020/8 sets the framework for countering COVID-19 disinformation (rapid fact-checking, cooperation with platforms, and transparency). In parallel, “digital diplomacy against propaganda” is developing (Bjola and Pamment), but the constant dilemma is the balance with human rights protections [
176,
177,
178]. Since 2014, information operations in Ukraine have been integrated with military actions; after 24 February 2022, there has been an increase in multilingual campaigns, deepfakes, operations on Telegram, and synchronization with cyberattacks. International studies document both state and civilian practices of digital resilience, OSINT, and counter-memetic campaigns. Current analytical reviews (Chatham House, Atlantic Council/DFRLab) and academic literature (International Affairs) converge in their assessment: the key factors are speed, cross-platform reach, and narrative localization [
179,
180,
181,
182].
In East Asia, China’s ‘cognitive war’ against Taiwan remains the most systemic case. It combines psychological, legal, and “public opinion warfare” (the so-called “Three Warfares”) and manifests itself in electoral cycles through the fabrication of narratives, the exploitation of vulnerabilities in the media environment, and the long-term “wear and tear” of trust in institutions [
183,
184,
185].
In parallel, Hong Kong in 2019–2020 became a field of intense two-way struggle over meaning: studies document how “fakes” and rumors simultaneously served to delegitimize the protest and to mobilize it, increasing polarization and networked forms of leadership [
186].
In Southeast Asia, the most tragic case is Myanmar, where algorithmically amplified extremist speech and disinformation on Facebook were woven into a violent campaign against the Rohingya; academic works suggest moving away from the reduction to “hate speech” to the analysis of institutional and platform mechanisms of escalation [
187,
188]. The Philippines demonstrates a “production” model of online disinformation. In 2016, digital fan communities and informal “influencer” brokerage networks amplified the effect of offline mobilization of Duterte supporters, and in 2022, persistent narratives of “authoritarian nostalgia” and conspiracy theories contributed to electoral dynamics, with the audience participating in the “co-production” of disinformation [
189,
190].
In South Asia, India has become a laboratory for “encrypted” information warfare: mass political communication via WhatsApp in the 2019 election campaign created opaque channels for the circulation of statements that are poorly amenable to platform moderation control. Empirical studies show that “tiplines” (crowdsourced lines for receiving suspicious content) allow for early detection of viral messages and reconstruction of cross-platform flows, while open social networks only partially reflect what is happening in end-to-end (E2E) environments [
191,
192].
In the multicultural and multilingual context of Kazakhstan, resilience relies on institutional measures of cybersecurity and information policy. The basic framework is set by the Cybersecurity Concept “Cyber Shield of Kazakhstan” (Government Resolution No. 407 of 30 June 2017) and subsequent initiatives; ITU confirms the launch and implementation of the concept. Academic publications analyze vulnerabilities and lessons from the pandemic period for information security. Practice shows that during crises, the share of disinformation through social networks and instant messengers increases—both technical monitoring measures and media literacy programs are required [
193,
194,
195]. Taiwan demonstrates a model of “civic digital resilience”: a collaborative effort among the state, the civic tech community, and fact-checking organizations. Research documents persistent attacks from the Chinese side, especially during electoral periods; at the same time, academic and applied works describe institutional and civic responses, ranging from co-fact-checking to preventive communication and educational programs [
185,
196,
197].
Thus, the forms of information warfare are already very diverse. However, there is every reason to believe that they will evolve further. As shown in the next section, there are tools that make it possible to influence the foundation of the mentality of the population of any state—its sociocultural code.
5. Impact on the Sociocultural Code: Theory and Practice
The sociocultural code (in the literature, similar concepts such as “mentality”, “civilizational code”, etc., were also often used earlier) is a set of values, norms, symbols, historical narratives, and collective ideas that shape the identity of a society. In the context of an information war, attacks on this code become a strategic tool that can weaken internal cohesion, undermine trust in historical heritage and transform the system of value orientations. Such influences can be direct—through the distortion of historical facts and the imposition of alternative interpretations of events—or indirect, including the gradual erosion of cultural constants through mass culture, media, and educational practices. Modern information campaigns, based on the achievements of cognitive psychology, linguistics, and digital technologies, are capable of deliberately modifying the symbolic space of a nation, altering both external image perception and internal self-identification processes. This section is devoted to the theoretical understanding of the phenomenon of the sociocultural code, the analysis of the mechanisms of its undermining and the consideration of practical examples of both successful attacks and effective defense strategies.
In studies of information security and cultural studies, the term “sociocultural code” is used to denote a set of stable symbols, values, norms, and narratives that structure the perception of the world by a particular community [
198]. These elements act as a kind of “matrix” of collective identity, ensuring the continuity of historical experience and certain behavioral patterns. The structure of the sociocultural code includes linguistic forms, myths and historical narratives, cultural rituals, symbols, and value orientations, which together create the basis for social interaction and political mobilization [
199]. Information warfare, considered as a set of targeted impacts on the cognitive, emotional and behavioral spheres of society, is increasingly focused on attacks on the sociocultural code, and not just on operational disinformation. This approach is associated with the understanding that the destruction or modification of cultural meanings has more long-term consequences than short-term manipulation of facts [
200]. The impact on the cultural code can be direct, for example, by substituting historical interpretations or the discrediting of national symbols, or indirect—through the gradual introduction of alternative value models and changes in the usual linguistic field [
201].
The mechanisms of identity fragmentation used in information operations are often built on opposing groups within a single society. Segmentation of the cultural field along ethnic, religious, regional, or political lines leads to a weakening of the common identity and an increase in the potential for conflict [
202]. This process actively uses emotional triggers associated with traumatic historical events or social injustices, which allows operators of information attacks to activate “latent” lines of cleavage [
203].
One of the key areas of cultural demobilization is the undermining of trust in national history and memory. Through the systematic dissemination of revisionist interpretations and selective coverage of historical facts, a feeling is formed that the past is subject to manipulation and cannot serve as a source of collective meaning [
204]. Similar strategies are observed, for example, in a number of post-Soviet countries, where media content deliberately downplayed the role of national figures or, by contrast, focused on collaborationist episodes, creating a sense of “historical guilt” [
205].
The cognitive and symbolic aspects of the transformation of the cultural code are manifested in changes to the meanings of established symbols and concepts. This process is often accompanied by a semiotic “reversal” of symbols, in which positive cultural markers are redefined as negative, and vice versa. Visual images traditionally associated with national pride or spiritual values can be reinterpreted in a satirical or derogatory way, which reduces their mobilization potential [
206]. Similar processes occur in language: keywords and phrases acquire new connotations that can transform public discourse without an obvious change in the facts [
207]. The protection of cultural constants in the context of information warfare presupposes a set of measures that include both institutional and public initiatives. Key elements include the support and development of the national language, the preservation and popularization of historical memory, and the formation of positive narratives reflecting the values and traditions of society [
208]. Educational programs that promote critical perception of information and strengthen cultural identity among young people play an important role [
209]. At the same time, modern approaches require a synthesis of humanistic and technological solutions: from digital archives and interactive museum exhibits to automated systems to monitor and analyze information attacks on the cultural space [
210]. Practice shows that an attack on the sociocultural code is as dangerous as physical intervention or economic pressure. Changing basic cultural guidelines can radically transform the political behavior of the population, change the perception of the legitimacy of power, and even influence international alliances. Therefore, the protection of cultural constants should be treated as a national-security priority in the context of a globalized information space [
211].
5.1. The Concept and Structure of the Sociocultural Code
In the humanities and social sciences, the concept of a “sociocultural code” is used to denote a set of symbolic systems, norms, values, customs, and semiotic structures that ensure the reproduction of the cultural identity of a society and the integration of its members into a common system of meanings [
212]. This code acts as a kind of “matrix” of collective thinking and behavior, determining how individuals interpret the surrounding reality, react to external events, and interact with each other. In cultural studies and semiotics, a sociocultural code is interpreted as a multilevel semiotic system that includes linguistic, visual, behavioral, and institutional elements [
213]. Yu. M. Lotman, in his theory of the semiosphere, described culture as a “mechanism that creates and transmits texts,” in which codes are the rules for generating and interpreting these texts [
214]. In the anthropological perspective (Clifford Geertz, Pierre Bourdieu), the cultural code is considered a set of “perceptual patterns” and “fields of practice” that structure social interaction [
215,
216].
Modern studies (Hofstede; Schwartz) clarify that the sociocultural code includes both explicit components (language, official symbols, laws) and latent ones (deep values, myths, collective memory) [
217,
218]. This makes it a complex and stable object, yet vulnerable to systemic influences.
According to Hofstede and Schwartz, the sociocultural code can be considered an integration of several interconnected subsystems:
Language code—a system of natural language, including vocabulary, syntax, idioms, speech practices. Language not only conveys information but also structures thinking (Sapir-Whorf hypothesis) [
219].
Value-normative code—a set of moral and ethical principles, norms of behavior and social expectations that define what is acceptable and unacceptable [
220].
Symbolic code—national and cultural symbols (flag, coat of arms, rituals, architecture) that serve as markers of identity [
221].
Historical-narrative code—collective ideas about the past, historical myths, heroic figures, traumatic events that shape national memory [
222].
Social-institutional code—the structure of social institutions (family, education, religion, state), which consolidates and transmits cultural norms [
223].
Each of these elements has its own mechanisms of resistance to change but can nevertheless be modified by targeted informational influence.
In the context of information warfare, the sociocultural code is considered both a strategic resource and a vulnerability. Changes in its individual elements can lead to a transformation of collective identity, political loyalty, and even to readiness for mobilization [
224]. For example, the substitution of a historical narrative or redefinition of symbolic meanings can provoke a split in society or a decrease in resistance to external pressure [
225].
An analysis of conflicts of the last decade (Ukraine, Syria, Hong Kong) shows that the impact on the sociocultural code is often carried out through media campaigns, educational programs, cultural products, and social networks [
226,
227]. These channels enable the combination of soft forms of influence (soft power) with elements of cognitive operations (see
Section 3.5), creating the effect of “long-term penetration” into the cultural fabric of society.
One of the key characteristics of the sociocultural code is its dynamic stability. Research shows that codes can adapt to external challenges while maintaining their core values [
228]. However, under a massive and multichannel attack—especially using digital technologies and algorithmic targeting—this resilience declines [
229]. The platform architecture of social media, operating on the basis of the “attention economy” (see
Section 3.4), facilitates the consolidation of new meanings through repetition and emotional reinforcement [
230]. Thus, understanding the structure and functions of the sociocultural code is a prerequisite for developing strategies to protect it in the context of modern information warfare.
Note that the above interpretations of the sociocultural code are primarily descriptive. In [
231,
232], a neural network theory of the noosphere is proposed, which reveals the mechanisms by which the sociocultural code is formed and makes it possible to interpret its evolution. Note that, according to Vernadsky, the noosphere is understood as the Earth’s shell arising from the collective activity of Homo sapiens [
233].
This interpretation of the sociocultural code is based on the following considerations. It is generally accepted that, as a result of communicating with each other, people exchange information. This, however, is a rather rough approximation. In reality, any communication between people de facto comes down to the exchange of signals between the neurons that make up their brains. Thus, a collective neural network arises, which, at the global level, can be identified with the noosphere as understood by V.I. Vernadsky. Qualitative differences between such a global network and the totality of its individual fragments have been demonstrated in rigorous mathematical models.
It is appropriate to emphasize that recent theories, based on arguments from physics, have been proposed that similarly treat the Universe as analogous to a neural network [
234]. The conclusions made in the cited report are based on the results obtained in the field of the general theory of evolution [
235,
236]. Vanchurin’s concept [
234] is clearly consistent with the conclusions obtained in other fields of knowledge, where it was shown that complex systems of the most diverse nature can be modeled as neural networks, and it is this factor that determines their behavior [
237,
238,
239].
The neural network interpretation of the noosphere allows us to assert that, along with the personal level, there is also a suprapersonal level of information processing [
240,
241]. Indeed, as shown, in particular, in [
242] using a rigorous mathematical model, the ability of a neural network to store and process information depends nonlinearly on the number of its elements. This conclusion is confirmed by current practice—otherwise, it would not make sense to build ever-larger artificial neural networks [
243,
244]. Consequently, if interpersonal communications generate a global (albeit fragmented) neural network, its properties cannot be reduced to the properties of individual elements, i.e., relatively independent fragments of such a network, localized within individual brains. In philosophical literature, a similar conclusion has long been formulated: “social consciousness cannot be reduced to the consciousness of individuals”. The conclusion about the existence of the suprapersonal level also allows us to reveal the essence of the collective unconscious, the existence of which was previously confirmed only on an empirical basis [
245,
246].
From the perspective of the neural network interpretation of the noosphere, the collective unconscious is formed by objects developing precisely at the suprapersonal level of information processing. Such information objects can be of very different natures, and many of them are evidently associated with the category of fashion. This can be most clearly traced through the conclusions of the well-known monograph by Baudrillard [
In the cited monograph, it was convincingly shown that the value of many goods on the market actually consists of two components. One is associated with the satisfaction of current human needs (including physiological ones). The other is symbolic and therefore informational in nature. An obvious example: clothes should protect a person from the cold, but there are also branded clothes whose main purpose is to demonstrate the social status of the owner. The same can be said of goods in many other categories (branded watches, cars, etc.).
As shown in [
248], mature scientific theories, natural languages, and other such systems are also suprapersonal information objects. The sociocultural code, understood as a set of suprapersonal information objects that determine the characteristics of the collective behavior of the population of a particular country or ethnic group, is formed by a similar mechanism.
5.2. Information Warfare as an Attack on Cultural and Historical Meanings
Cultural and historical meanings form the foundation of collective identity, ensuring continuity between the past, present, and future of society. In the humanities, cultural meanings are understood as a set of symbolic contents encoded in language, art, religious, and social rituals, as well as in collective narratives that form images of the desirable and the unacceptable [
249]. Historical meanings, in turn, represent interpretations of key events of the past, anchored in the memory of society through official chronicles, educational programs, commemorative dates, and material artifacts [
250]. Information warfare, in its modern understanding, treats these meanings not merely as the context in which the struggle is waged, but as targets of influence, since a change in the interpretation of the past or a redefinition of cultural markers can transform the value orientations of the population and thereby change its political and social behavior [
251]. In political communication and conflict studies, scholars argue that the struggle for meaning is a struggle for power over the perception of reality [
252]. M. Foucault pointed out that power operates through control over discourse, which determines what is considered truth and what is subject to oblivion [
253]. In the context of information warfare, this means that strategic influence is aimed not so much at facts as at their interpretation and symbolic framing.
Modern doctrines of cognitive operations (see
Section 3.5) consider cultural and historical meanings as key points of application of efforts, since they underlie “frames”—cognitive structures through which people comprehend information [
254]. Altering frames can radically change social attitudes without direct coercion, which makes this approach extremely attractive to strategic actors.
One of the most common methods of attack is rewriting history, in which an alternative version of key events is promoted through textbooks, documentaries, popular media, and Internet resources [
255]. This may involve both emphasizing certain aspects and completely excluding inconvenient facts from public discourse.
Another method is the redefinition of symbols. Flags, monuments, and national heroes can be reinterpreted—from the glorification of previously marginalized figures to the demonization of historical leaders [
256]. This strategy is often accompanied by visual campaigns on social media, memetic content, and hashtag actions aimed at mass user engagement [
257].
Discursive inversion is also used, in which key concepts of national identity (e.g., “freedom”, “independence”, “tradition”) are filled with opposite or distorted content [
258].
A study by Ross shows how a new historical narrative was formed in post-revolutionary France through educational reforms and popular culture [
259]. In more modern examples, the analysis of the Ukrainian crisis of 2014–2022 demonstrates that both sides in the conflict actively use the media to reinforce their own interpretations of events and discredit the opposing ones [
260]. In the Baltic countries, over the past three decades, programs have been implemented to reinterpret the Soviet period, which have been accompanied by the dismantling of monuments and the changing of street names [
261]. In Syria and Iraq, the actions of terrorist organizations such as ISIS have included the targeted destruction of cultural artifacts, which was aimed at breaking cultural continuity and undermining the identity of local communities [
262].
The digitalization of the information space has radically increased the scale and speed of attacks on cultural and historical meanings. Social media and video-hosting sites allow instant distribution of content in which cultural symbols are reinterpreted through visual and auditory forms [
263]. Algorithmic recommendation mechanisms amplify the impact by creating a selective reality for users, in which certain versions of the past become dominant [
264].
In addition, an analysis of algorithmic campaigns on Twitter and Facebook (2016–2020) shows that a significant portion of the coordinated operations include culturally loaded messages—from nationalist slogans to posts about controversial historical dates [
265].
Effective protection of cultural and historical meanings requires the integration of measures into educational, cultural, and information policies. This includes transparency of historical sources, multidisciplinary examination of educational materials, the development of digital media literacy, and the involvement of society in the discussion of cultural heritage [
266]. Of particular importance is the creation of positive narratives that strengthen identity and not just respond to external attacks [
267].
Thus, information warfare in the 21st century is increasingly taking place on the battlefield of cultural and historical meanings. Success in this struggle depends not only on control over information channels, but also on the ability to protect and develop the system of meanings that underlies the collective “we”.
5.3. Mechanisms of Identity Fragmentation and Cultural Demobilization
Identity fragmentation in the context of information warfare is a process of blurring, splitting, and redefining collective ideas about belonging to a particular social, cultural or national group. This process is not limited to the diversity of cultural forms that naturally arises in a globalized world. It involves targeted information- and communication-based efforts aimed at undermining shared value orientations, symbols, and narratives that ensure the unity of society [
268].
Cultural demobilization is a closely related phenomenon in which a community loses its willingness to protect and reproduce its cultural codes and becomes passive in maintaining and transmitting them. As Castells notes, the modern communications network creates opportunities for the atomization of groups and individuals, which in a political context reduces the ability of society to consolidate around common goals [
269].
One of the key psychological mechanisms of identity fragmentation is cognitive dissonance, which occurs as a result of exposure to contradictory and mutually exclusive narratives [
270]. When an individual is confronted with competing interpretations of events, he or she may lose confidence in the validity of any of them, which undermines the sense of belonging to a group with a clearly articulated position.
Another factor is social categorization [
271]. In the context of information warfare, the boundaries between “us” and “them” are deliberately blurred or redrawn, which leads to the formation of new subgroups with competing identities within the formerly single community.
Framing and priming effects have an additional impact: the constant repetition of certain associations and interpretations of cultural symbols changes how the audience perceives their cultural and historical belonging [
272].
From a sociological perspective, identity fragmentation is based on network segmentation—a process in which digital platforms form closed communication clusters (“echo chambers”) where messages of homogeneous content circulate [
273]. This leads to a decrease in mutual understanding between groups and an increase in intergroup hostility [
274].
Platform algorithms optimized to maximize engagement ensure that users see predominantly content that matches their already established beliefs, which accelerates the polarization and fragmentation of the cultural space [
275].
Media also play a role in the creation of competing narrative centers. In conditions of media pluralism, these centers can exist naturally, but in a situation of information confrontation, their emergence is often the result of targeted work by external actors seeking to weaken the dominant identity [
276].
Modern technologies enable personalized propaganda in which messages are individually selected based on analysis of a user’s digital footprint [
277]. This enhances the fragmentation effect, as different groups receive fundamentally different versions of reality.
Additionally, memetic campaigns become a tool for “infiltrating” alternative meanings into popular culture. Memes, images, and slogans, often devoid of obvious political overtones, can eventually become markers of subcultural identities opposed to the official one [
278,
279,
280,
281].
Cultural demobilization often becomes the target result of information operations. This state is characterized by a decrease in participation in cultural practices, a refusal to protect cultural symbols, and a passive attitude towards their transformation. As the analysis of Qureshi (2021) shows, such effects arise not only due to direct pressure, but also due to “fatigue” from constant conflicts in the media space [
282].
Information campaigns that create a constant feeling of uncertainty and mistrust contribute to apathy and loss of interest in collective action, which weakens society’s resilience to external challenges [
283].
Modern research suggests a comprehensive approach to monitoring identity fragmentation, including analysis of discursive fields in social media, tracking changes in the popularity of cultural symbols, and assessing the level of trust in national institutions [
284].
Psychometric methods make it possible to identify a decrease in the sense of belonging to a national community, while sociotechnical indicators record an increase in the number of information clusters that are inconsistent with each other [
285].
Taken together, these methods make it possible not only to diagnose identity fragmentation, but also to assess the effectiveness of counteraction strategies.
6. Methods of Countering Information Attacks
The growing scale and complexity of information threats require the development of comprehensive and multilevel countermeasure strategies. Unlike reactive measures of the past, modern approaches rely on preventive preparation of society, algorithmic tools to detect and block malicious content, and the formation of sustainable cognitive and cultural barriers to manipulation. Effective protection is based on a combination of media literacy, independent fact-checking, the implementation of architectural and algorithmic solutions at the level of digital platforms, international regulation, and the use of specialized technical tools for authentication and tracking the origin of information. Of particular importance are interdisciplinary approaches that combine the capabilities of computer science, psychology, linguistics, law, and international relations. This section systematizes key practices and technological approaches that prove effective in the context of modern information warfare and considers prospects for their further improvement.
6.1. Fact-Checking and Debunking
Fact-checking is a systematic activity aimed at identifying, verifying, and refuting false information in public discourse. It was initially developed in journalistic practice as a tool for improving the quality of news content and ensuring audience trust in the media [
286]. However, in the context of modern information warfare, fact-checking has acquired strategic importance, becoming one of the key forms of information defense. Along with it, debunking is actively used—the process of exposing false statements by providing contextual data and evidence of inconsistency [
287]. Historically, fact-checking practices are rooted in the editorial standards of print journalism of the 20th century. However, their institutionalization as a separate direction began in the early 21st century against the backdrop of the growth of Internet journalism and social networks, which accelerated the circulation of unverified information [
288]. Some of the pioneers in this area were the organizations FactCheck.org (founded in 2003) and PolitiFact (2007), which set standards for the transparency of sources, verification procedures, and public presentation of results [
289].
In the context of information warfare, fact-checking performs a dual function: on the one hand, it is a tool for protecting public discourse from disinformation, and on the other, it serves as a mechanism for strengthening critical thinking and media literacy of the population [
290]. Research shows that systematic fact-checking helps reduce trust in false narratives and curb their dissemination on social media [
291].
In the modern media space, there are two dominant models of organizing fact-checking: institutional and independent. The first is integrated into large media outlets, government agencies, or international organizations that have the resources to conduct large-scale investigations [
292]. Some examples include BBC Reality Check and AFP Fact Check, which operate globally. The independent model is represented by nonprofit organizations and civic initiatives financed by grants, donations, and crowdfunding [
293]. These include Bellingcat, StopFake, The Poynter Institute, which specialize in specific topics—from military conflicts to environmental information. Such organizations often have greater flexibility and can respond quickly to local information attacks. Both models face the problem of trust: critics argue that institutional projects can be subject to political or corporate interests, while independent initiatives run the risk of limited resources and a possible lack of formalized methodological standards [
294].
The development of digital technologies has led to the emergence of automated fact-checking systems that use natural language processing (NLP), machine learning, and computer vision [
295]. Platforms such as ClaimReview and Full Fact integrate algorithms for verifying claims in real time, which help reduce the time lag between the emergence of false information and its refutation [
296].
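To make the idea of automated claim verification more concrete, the sketch below matches an incoming statement against a small database of previously fact-checked claims using TF-IDF cosine similarity; the claims, verdicts, and threshold are invented for illustration and do not reproduce the pipeline of any particular platform.

```python
# Minimal sketch of automated claim matching against a fact-check database.
# The claims, verdicts, and similarity threshold are invented for illustration
# and do not reproduce the pipeline of any real fact-checking platform.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical database of claims that have already been fact-checked.
checked_claims = [
    "5G towers spread the coronavirus",
    "Drinking bleach cures viral infections",
    "The election results were altered by voting machines",
]
verdicts = ["false", "false", "false"]

incoming = "New study says 5G networks are responsible for COVID-19 infections"

# Represent all claims as TF-IDF vectors and compare the incoming statement
# with every previously checked claim.
vectorizer = TfidfVectorizer(stop_words="english")
db_matrix = vectorizer.fit_transform(checked_claims)
query_vec = vectorizer.transform([incoming])
similarities = cosine_similarity(query_vec, db_matrix).ravel()

best = int(similarities.argmax())
THRESHOLD = 0.2  # illustrative cut-off; real systems tune this on labeled data
if similarities[best] >= THRESHOLD:
    print(f"Possible match (score {similarities[best]:.2f}): "
          f"'{checked_claims[best]}' was previously rated {verdicts[best]}")
else:
    print("No sufficiently similar fact-checked claim found")
```

Production systems typically replace TF-IDF with multilingual sentence embeddings and add human review, but the matching logic is analogous.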
In the context of information warfare, OSINT (Open Source Intelligence) tools play an important role: they allow analysts to examine open sources, verify photos and videos, identify image manipulation, and determine the geolocation of events [
297]. Such methods are actively used in investigations of war crimes and information operations, as shown by cases in Syria and Ukraine [
298].
Debunking differs from simple fact-checking in that it involves not only identifying the unreliability of a claim but also explaining the mechanisms of manipulation underlying it [
299]. Psychological research shows that simple refutation often does not lead to a change in beliefs, especially if the false information corresponds to the value orientations of the audience [
300]. Therefore, effective debunking includes an emotionally neutral presentation, the use of authoritative sources, and the demonstration of logical contradictions [
301].
One promising approach is prebunking—a preventive refutation in which the audience is familiarized with typical disinformation techniques in advance, which forms cognitive “immunity” [
302]. Experiments conducted within the framework of the “Inoculation Theory” project (van der Linden et al.) show that prior information about the manipulative techniques used reduces their effectiveness [
303].
Despite its effectiveness, fact-checking faces a number of limitations. First, a significant portion of false messages are distributed in closed digital ecosystems that are inaccessible to public monitoring [
304]. Second, debunking often spreads more slowly than misinformation and has a smaller viral effect [
305]. Third, there is a backfire effect, whereby audiences exposed to debunking become more entrenched in their prior beliefs [
306].
Furthermore, in polarized societies, fact-checking can be interpreted as political censorship, undermining trust in the fact-checking organizations themselves [
307]. This necessitates high transparency in methodology and sources, as well as independent auditing of the work of fact-checking organizations [
308].
Research on the effectiveness of fact-checking shows mixed results. A meta-analysis by Nyhan and Reifler (2020) shows that short-term reductions in belief in falsehoods are relatively easy to achieve, but long-term effects require repeated and systematic interventions [
309]. The “CrossCheck” initiative, launched during the 2017 French elections, shows that coordinated work between media and technology companies can significantly reduce the level of disinformation in the public space [
310]. Thus, fact-checking and debunking are important elements of the system for countering information warfare, but require constant improvement of methodology, implementation of technological innovations and strengthening of public trust.
6.2. Prebunking (Inoculation) and Cognitive Defense
Prebunking, also called cognitive inoculation, is a strategy for countering disinformation in which audiences are provided with advance information about potential manipulative techniques and false narratives before they encounter them in real-world contexts [
311]. This approach borrows metaphorically from the principles of biological vaccination: just as a vaccine introduces a weakened pathogen to build immunity, cognitive inoculation “inoculates” a person with knowledge of manipulation, building resistance to it [
312]. Inoculation theory was proposed by William McGuire in the early 1960s [
313] and was originally used in social psychology to study resistance to persuasion. It suggests that pre-exposure to weak versions of an argument, together with their refutation, stimulates the development of cognitive “defense” mechanisms. Later studies adapt this approach to the contemporary media environment and show its high effectiveness in combating online disinformation [
314].
In the context of information warfare, prebunking plays a special role, as it shifts the focus from reactive measures (debunking) to proactive ones, reducing the vulnerability of the target audience even before an information attack begins [
315].
Modern prebunking programs are implemented in several formats:
Educational campaigns—short-term courses, videos, and infographics that introduce users to manipulation techniques such as emotional triggers, false dichotomy, substitution of sources, fabrication of evidence [
316].
Game simulations—interactive games such as “Bad News Game” or “Go Viral!”, developed at the University of Cambridge, which allow users to take on the role of a disinformation creator, thereby learning both how disinformation is produced and how to defend against it [
317].
Integration into platforms—social networks and search engines now integrate prebunking elements into their interfaces, e.g., through notifications about checking sources or warnings about manipulative content [
318].
Empirical studies demonstrate that even short-term interaction with prebunking materials increases users’ ability to recognize manipulative techniques by 20–30%, and the effect can persist for several weeks [
319].
The effectiveness of prebunking relies on several key cognitive mechanisms:
- Metacognition—increased awareness of one’s own processes of information perception and critical analysis [320].
- Epistemic vigilance—the ability to assess the reliability of an information source and the internal consistency of argumentation [321].
- Cognitive load—the ability to distribute attention between several information streams, reducing susceptibility to manipulative messages [322].
It is important that the “information vaccine” be dosed: an excessive number of examples of false information can cause the opposite effect—an increase in cynicism and distrust toward all sources [
323].
Despite the obvious advantages, prebunking has several limitations. First, it requires an accurate prediction of which narratives will be used by the enemy, which is not always possible [
324]. Second, its effectiveness decreases if the audience has already been subjected to deep cognitive polarization [
325]. Third, in multicultural societies, it is necessary to adapt prebunking materials to cultural codes and local contexts to avoid misunderstanding or rejection [
326].
In strategic terms, prebunking can be considered an element of a broader concept of cognitive resilience, which includes media literacy, critical thinking, and social trust [
327]. Its integration into national educational systems, as well as into mechanisms of international cooperation through NATO, the EU, and the UN, is seen as a promising avenue for protection against transnational information threats [
328].
6.3. Algorithmic and Architectural Measures of Platforms
Algorithmic and architectural measures are a key layer of defense for digital platforms against the spread of disinformation and other forms of information attacks. These measures cover both the design of the platform itself—its interfaces, data structures, content moderation, and ranking mechanisms—and the use of algorithmic systems, including machine learning and artificial intelligence, to automatically detect and neutralize malicious information flows [
329].
The architecture of a digital platform largely determines the speed and scale at which disinformation spreads. Research shows that the presence of instant repost functions, personalized recommendation algorithms, and weak integration of source verification mechanisms amplifies the effect of “information cascades” [
330]. Platforms whose architecture focuses on maximizing engagement (engagement-driven design) often inadvertently create an environment favorable for the viral spread of false narratives [
331]. In response, major social media platforms implement architectural restrictions to slow the spread of unverified content. For example, Twitter (now X) tested a limit on retweets without comments, and WhatsApp tested a limit on forwarding messages to groups [
332]. These measures demonstrate that architectural design can be used as a tool to deter information attacks.
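The effect of such architectural friction can be illustrated with a toy branching-process simulation; this is a deliberately simplified sketch rather than a model of any real platform, and all parameters (forwarding probability, contact counts, caps, depth) are illustrative assumptions.

```python
# Toy branching-process sketch: how a per-user forwarding cap limits cascade size.
# All parameters (forwarding probability, contact counts, cap, depth) are
# illustrative assumptions, not measurements from any real platform.
import random

def cascade_size(forward_prob=0.6, max_contacts=10, cap=None, max_steps=8, seed=42):
    rng = random.Random(seed)
    current, total = 1, 1              # start from a single "seed" post
    for _ in range(max_steps):
        new = 0
        for _ in range(current):
            if rng.random() < forward_prob:            # recipient decides to forward
                fanout = rng.randint(1, max_contacts)  # contacts who would receive it
                if cap is not None:
                    fanout = min(fanout, cap)          # architectural forwarding limit
                new += fanout
        total += new
        current = new
        if current == 0:
            break
    return total

print("Reach without a cap:", cascade_size(cap=None))
print("Reach with cap = 5: ", cascade_size(cap=5))
print("Reach with cap = 2: ", cascade_size(cap=2))
```

Even this crude sketch shows the qualitative point: lowering the per-user fanout pushes the cascade toward subcritical growth without removing any content.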
Machine learning has become a central element in the fight against disinformation. Modern algorithms can analyze millions of pieces of content in real time, classifying them by their credibility, sentiment, source, and style [
333]. The most common approaches include:
- Natural language processing (NLP) models trained on labeled corpora of fake and credible news [334].
- Graph models to detect coordinated networks and anomalous account activity [335].
- Computer vision to detect deepfakes and manipulations in images and videos [336].
The effectiveness of algorithms largely depends on the quality of the training data. Multicultural and multilingual datasets are critical for global platforms, as localized disinformation often uses unique cultural and linguistic codes [
337].
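As a schematic illustration of the NLP-based approach listed above, the following trains a simple text classifier on a tiny synthetic set of labeled headlines; real systems rely on large annotated corpora and transformer models, so this is only a sketch of the general idea.

```python
# Schematic "fake vs. credible" text classifier: TF-IDF features + logistic regression.
# The labeled headlines are synthetic placeholders, not a real training corpus;
# production systems use large annotated datasets and transformer models.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

headlines = [
    "SHOCKING: miracle cure hidden by doctors, share before it is deleted",
    "You will not believe what this politician secretly did at night",
    "Anonymous insiders reveal plot to fake the census results",
    "Central bank raises interest rate by 25 basis points",
    "City council approves budget for new water treatment plant",
    "Researchers publish peer-reviewed study on crop yields",
]
labels = [1, 1, 1, 0, 0, 0]  # 1 = likely disinformation, 0 = likely credible

model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
model.fit(headlines, labels)

test = "Insiders say the miracle cure was hidden from the public"
probability = model.predict_proba([test])[0][1]
print(f"Estimated probability of disinformation: {probability:.2f}")
```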
A recent trend is to integrate algorithmic solutions directly into the architecture of platforms. For example, automatic fact-checking algorithms can be built into the posting interface, warning the author about potentially inaccurate information before publication [
338]. YouTube has integrated algorithms for recognizing and removing malicious videos into the content upload chain, which allows them to be blocked before they become publicly available [
339].
Despite their obvious advantages, algorithmic and architectural measures have limitations. First, they are prone to errors—both false positives and false negatives [
340]. Second, the closed nature of algorithms and the lack of transparency in platform architectures have drawn criticism from researchers and human rights organizations, pointing to the risks of censorship and manipulation of algorithms for political purposes [
341]. Third, attackers adapt to algorithms using content obfuscation, memetic forms, and coded messages [
342].
Empirical evidence suggests that a combination of architectural constraints and algorithmic moderation can significantly reduce the rate at which disinformation spreads. For example, a study by Mosseri et al. (2022) finds that algorithmic deactivation of the top 0.1% of disinformation nodes in a network reduces the overall volume of false content by 25% within one week [
343]. However, long-term sustainability of such measures requires continuous algorithmic updates and adaptation of architectural solutions [
344].
The future of algorithmic and architectural measures lies in the development of explainable AI (XAI), which will increase the transparency of decisions to block or flag content [
345], and the introduction of decentralized architectures that provide information verification based on blockchain technologies [
346]. In addition, platforms are increasingly considering the integration of “information encryption layers”—systems that allow the verification of the origin of content using metadata and digital signatures [
347].
6.4. Identifying and Neutralizing Coordinated Networks
Coordinated Inauthentic Behavior (CIB) is the concerted action of a group of interconnected accounts or digital entities aimed at manipulating public opinion, undermining trust in institutions, and spreading particular narratives [
348]. Such networks can include fully automated bots, “semi-automated” accounts operated by humans (cyborg accounts), and real user profiles engaged in a campaign through covert coordination [
349]. The nature of CIB is that each individual node in the network may appear legitimate, but the overall pattern of interactions indicates artificial and directed behavior. The network approach to studying CIB relies on social graph models, in which nodes represent accounts and edges represent interactions between them (likes, reposts, mentions, and replies) [
350]. In the theory of complex networks, such structures often have characteristic topological features: increased clustering, abnormally high density of connections within subgroups, synchronicity of activity, and similarity of content patterns [
351]. Research on the dynamics of information flows shows that coordinated networks form “information resonances”—periods of synchronous publications timed to certain events or news items [
352].
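The topological signatures just mentioned (clustering and connection density within subgroups) can be computed with standard graph tooling; the toy interaction graph below is entirely invented for illustration and is not data from any real platform.

```python
# Toy interaction graph: nodes are accounts, an edge means "these two accounts
# frequently repost each other". All names and edges are invented for illustration;
# the point is the contrast in topological metrics between the dense cluster
# and ordinary users.
import networkx as nx

G = nx.Graph()
# A suspiciously tight cluster of five accounts that all interact with one another.
cluster = ["acc_a", "acc_b", "acc_c", "acc_d", "acc_e"]
G.add_edges_from((u, v) for i, u in enumerate(cluster) for v in cluster[i + 1:])
# A sparser set of ordinary users loosely connected to the rest of the graph.
G.add_edges_from([
    ("user_1", "user_2"), ("user_2", "user_3"),
    ("user_3", "acc_a"), ("user_4", "user_2"),
])

print("Overall graph density:", round(nx.density(G), 3))
centrality = nx.degree_centrality(G)
for node in ["acc_a", "user_2"]:
    print(node,
          "| local clustering:", round(nx.clustering(G, node), 2),
          "| degree centrality:", round(centrality[node], 2))
```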
Modern methods for detecting CIB can be divided into several categories:
Social network analysis. This approach uses centrality metrics (degree centrality, betweenness centrality), community detection and graph density analysis to identify abnormally connected clusters [
353].
Temporal activity analysis. Coordinated campaigns are often characterized by synchronous publications, so detection algorithms use correlation of time series of account activity [
354].
Content analysis. Natural language processing (NLP) methods can detect high lexical and syntactic similarities between posts, which may indicate the use of templates or a centralized script [
355].
Multimodal analysis. Combining behavioral, network, and content features yields the highest accuracy in detecting CIB, especially when using deep neural network architectures [
356].
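As a minimal sketch of the temporal activity analysis described above (not a production detector), the following flags account pairs whose posting time series are nearly perfectly correlated; the activity matrix is synthetic, with two accounts deliberately generated as near-copies, and the correlation threshold is an illustrative assumption.

```python
# Sketch of temporal-synchrony screening: flag account pairs whose hourly posting
# counts are almost perfectly correlated. The activity matrix is synthetic;
# accounts 0 and 1 are deliberately generated as near-copies of each other,
# and the correlation threshold is an illustrative assumption.
import numpy as np

rng = np.random.default_rng(0)
hours = 72
coordinated = rng.poisson(lam=2.0, size=hours)          # shared posting schedule
organic = rng.poisson(lam=2.0, size=(4, hours))         # four independent accounts
activity = np.vstack([
    coordinated,                                         # account 0
    coordinated + rng.poisson(lam=0.1, size=hours),      # account 1, a near-copy
    organic,                                             # accounts 2..5
])

corr = np.corrcoef(activity)                             # pairwise Pearson correlations
THRESHOLD = 0.9
n_accounts = activity.shape[0]
for i in range(n_accounts):
    for j in range(i + 1, n_accounts):
        if corr[i, j] > THRESHOLD:
            print(f"accounts {i} and {j} look synchronized (r = {corr[i, j]:.2f})")
```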
Many platforms and research groups are implementing machine learning algorithms to automatically detect instances of CIB. Popular architectures include Graph Neural Networks (GNNs), which can model complex dependencies between nodes and reveal hidden coordination structures [
357]. Additionally, temporal clustering methods and anomaly detectors based on autoencoder models are used [
358]. An interesting example is the “Hamilton 68” project (Alliance for Securing Democracy), which tracked Russian-language bot farms on Twitter, identifying synchronous bursts of activity and characteristic content markers [
359].
In 2018, Facebook publicly disclosed for the first time a large-scale CIB network linked to the Internet Research Agency in Russia [
360]. Analysis showed that this network of hundreds of pages and accounts operated in multiple language segments and was coordinated through closed admin groups.
In 2020, Twitter and Facebook jointly blocked a network allegedly linked to Iran that used fake media brands to advance political narratives in the US and Europe [
361].
During the COVID-19 pandemic, coordinated anti-vaccine campaigns were recorded that used bots and real accounts to amplify narratives about the “dangers of vaccination” [
362].
Neutralization of CIB includes both technical and organizational measures [
363]:
- removal or blocking of identified network nodes;
- deactivation of the management infrastructure (e.g., closed admin groups);
- notification of users who interacted with malicious content;
- cooperation with government and international structures to exchange threat data.
An important area is preventive detection, when algorithms record early signs of coordination, allowing intervention before the campaign reaches its maximum reach [
364].
Despite the successes in detecting CIB, serious problems remain: the high adaptability of networks that can change activity patterns to bypass algorithms; the difficulty of attributing campaigns to specific state or non-state actors; legal and ethical restrictions on mass monitoring of online activity [
365].
In addition, there is a risk of false positives, in which active but legitimate communities are mistakenly classified as CIB, which can lead to a violation of freedom of expression [
366].
The development of CIB detection methods is moving toward complex multilevel systems that combine behavioral analysis, NLP, computer vision, and network algorithms into a single architecture. A promising direction is the introduction of interactive platforms for OSINT analysis, where researchers and platforms can jointly track suspicious campaigns in real time [
367].
6.5. Supporting Media Literacy and Digital Critical Thinking
Media literacy and digital critical thinking are increasingly considered in the scientific and political agenda as key elements of society’s resilience to modern forms of information warfare. In conditions where the information environment is saturated with both reliable and manipulative messages, it is the individual’s ability to critically evaluate sources, check facts, and understand the mechanisms of disinformation and manipulation that becomes a barrier against information attacks [
368]. In academic terms, media literacy includes not only the technical skills needed to search for and process information, but also the cognitive skills required to analyze, interpret, and critically evaluate media messages [
369]. According to UNESCO (2018), media literacy is “a set of competencies that enable citizens to receive, evaluate, use, create, and disseminate information and media content in various forms” [
370]. Digital critical thinking, in turn, focuses on the ability to identify logical errors, cognitive distortions, and hidden narratives in digital communications. It is related to the concept of cognitive resilience, which implies the ability to resist information manipulation even in conditions of high emotional or social engagement [
371].
Empirical studies in recent years have demonstrated a direct link between the level of media literacy and the likelihood of spreading fake news [
372]. For example, Gaillard et al. (2020) show that participants with strong source verification skills and knowledge of the basic principles of journalism are less likely to share disinformation materials on social media [
373]. Similarly, a study by Roozenbeek and van der Linden (2019) shows that short-term online games simulating strategies for spreading disinformation can increase users’ resilience to fakes by activating the so-called “inoculation effect” [
374].
Media literacy support can be implemented in several forms:
Formal education. Incorporating media literacy into school and university curricula is one of the most effective ways to develop critical thinking in the long term [
375]. Finland and Estonia are often cited as examples of successful integration of media literacy into the national education system, which correlates with low levels of trust in disinformation sources [
376].
Informal learning. Massive open online courses (MOOCs), seminars, and training sessions run by NGOs, universities, and media organizations make it possible to reach adult audiences who are not always involved in formal education [
377].
Gaming and simulation methods. Projects such as the Bad News Game [
374] and Harmony Square [
378] show that interactive formats increase engagement and retention of material and also help participants better recognize manipulative techniques.
At the global level, an important coordinator of efforts to develop media literacy is UNESCO, which promotes the concept of “Media and Information Literacy” (MIL) [
370]. Since 2018, the European Commission has implemented the “Digital Education Action Plan”, which includes the development of media literacy and digital critical thinking as a priority for EU member states [
379]. In the United States, Stanford History Education Group (SHEG) projects are underway aimed at developing critical assessment skills for digital content in schoolchildren and students [
380].
Since 2020, pilot projects to introduce media literacy into the school curriculum have been implemented in Kazakhstan, with the support of the OSCE and local NGOs [
381]. The main focus is on the ability to verify sources, recognize fake images, and analyze the context of publications.
A meta-analysis by Burkhardt [
382] shows that most media literacy programs lead to a statistically significant improvement in information assessment skills, but the effect may be short-lived if knowledge is not constantly reinforced. Roozenbeek et al. [
383] note that regular, recurring training provides higher resistance to manipulation than one-time educational events.
Key barriers to media literacy development include:
- Information overload—users may experience “fact-checking fatigue” in the face of a large information flow [384].
- Low level of digital infrastructure in some countries, which limits access to online learning resources [385].
- Cultural and language barriers that affect the adaptation of educational materials [386].
- Polarization of society, in which even factually accurate information may be rejected due to political identity [387].
In the future, approaches to media literacy will increasingly integrate with AI technologies. A possible direction is the creation of personalized training systems that adapt materials to a user’s cognitive profile, thereby increasing the effectiveness of assimilation [
388]. Another promising direction is the introduction of media literacy skills into corporate cybersecurity programs, which protect both personal and organizational information assets [
389].
6.6. Technical Measures (Authentication, Traceability)
Technical measures for authentication and tracking the origin of digital content are one of the key areas in countering modern forms of information warfare, especially in the context of the rapid growth of synthetic media and deepfakes. Unlike cognitive or educational approaches, these methods rely on cryptographic, network, and infrastructure solutions that provide the ability to verify the authenticity of the source and the immutability of the content [
390].
The classic mechanism for ensuring authenticity is the use of public key infrastructure (PKI) and digital signatures. In the PKI model, each entity (user, server, or content) receives a unique pair of keys—private and public. The private key is used to create a digital signature, and the public key is used to verify it [
391]. These technologies are widely used in protecting email (S/MIME, PGP), web communications (TLS/SSL), and can be adapted to verify multimedia content.
In the context of the fight against disinformation, digital signatures make it possible to:
- confirm authorship;
- guarantee post-creation integrity;
- integrate verification into browsers, social networks, and news platforms.
For example, the Content Authenticity Initiative (CAI) project, initiated by Adobe, Twitter and The New York Times, develops standards for embedding metadata and cryptographic signatures directly into multimedia files [
392].
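A minimal sketch of how a digital signature binds content to a publisher’s key is given below; it uses a generic Ed25519 scheme from the Python cryptography package and is not the specific metadata format defined by CAI or C2PA, and the content string is an invented placeholder.

```python
# Minimal signing/verification sketch using Ed25519 (Python 'cryptography' package).
# A generic illustration of the principle, not the metadata format defined by
# CAI or C2PA; the content string is an invented placeholder.
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

# The publisher holds the private key; the public key is distributed for verification.
private_key = ed25519.Ed25519PrivateKey.generate()
public_key = private_key.public_key()

content = b"Video report, 2025-03-01, camera serial 12345"   # illustrative payload
digest = hashlib.sha256(content).digest()                    # sign a hash of the file
signature = private_key.sign(digest)

# A platform or reader verifies the signature before trusting the attribution.
try:
    public_key.verify(signature, digest)
    print("Signature valid: content is intact and attributed to the key holder")
except InvalidSignature:
    print("Signature check failed")

# Any modification of the content breaks verification.
tampered_digest = hashlib.sha256(content + b" (edited)").digest()
try:
    public_key.verify(signature, tampered_digest)
except InvalidSignature:
    print("Tampered content detected: signature no longer matches")
```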
Blockchain, as an immutable and decentralized ledger, provides the ability to record the moment of content creation, the chain of its changes and owners. This is especially relevant for photo and video materials that can be modified during distribution. The Truepic platform uses blockchain to verify images, storing their hashes and metadata in a public blockchain [
393].
The advantages of blockchain include:
- decentralization, excluding the control of a single organization;
- immutability of records;
- transparency and verifiability.
The limitations include the high cost of transactions in some networks and the difficulty of scaling when working with a large volume of media files [
394].
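The ledger idea itself can be conveyed with a toy hash chain, shown below; this is a didactic sketch only (real systems such as Truepic anchor hashes in production blockchains), but it illustrates why a chained record of content events is tamper-evident.

```python
# Toy hash chain showing why a chained ledger of content records is tamper-evident.
# A didactic sketch only: event names are invented, and production systems such as
# Truepic anchor these hashes in real blockchains rather than a Python list.
import hashlib
import json

def record_hash(event, prev):
    payload = json.dumps({"event": event, "prev": prev}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

chain = []
for event in ["photo captured", "photo cropped", "photo published"]:
    prev = chain[-1]["hash"] if chain else None          # link to the previous record
    chain.append({"event": event, "prev": prev, "hash": record_hash(event, prev)})

def chain_is_valid(chain):
    for i, rec in enumerate(chain):
        expected_prev = chain[i - 1]["hash"] if i else None
        if rec["prev"] != expected_prev:
            return False
        if rec["hash"] != record_hash(rec["event"], rec["prev"]):
            return False
    return True

print("Chain valid:", chain_is_valid(chain))
chain[1]["event"] = "photo heavily edited"               # retroactive tampering
print("Chain valid after tampering:", chain_is_valid(chain))
```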
Provenance tracking involves recording the entire “lifecycle” of a digital object, from its creation to its current state. Modern provenance tracking systems include:
- metadata embedding (EXIF, IPTC);
- digital watermarks—hidden markers that are resistant to transformations [395];
- trusted chain standards (e.g., C2PA—“Coalition for Content Provenance and Authenticity”) that combine cryptographic signatures and secure metadata [396].
C2PA, developed with the participation of Microsoft, Adobe, and the BBC, provides for the creation of a “provenance manifest” that can be automatically verified before displaying content on the platform.
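At the most basic level, provenance checks start from embedded metadata. The sketch below reads a few EXIF fields with the Pillow library; the file name is a placeholder, and since EXIF can be stripped or forged, such metadata is only a weak signal that standards like C2PA reinforce with cryptographic signatures.

```python
# Reading basic EXIF fields with Pillow as a first, weak provenance signal.
# "photo.jpg" is a placeholder path; EXIF can be stripped or forged, which is
# exactly why standards such as C2PA add cryptographically signed manifests on top.
from PIL import Image, ExifTags

def read_exif(path):
    with Image.open(path) as img:
        raw = img.getexif()
    # Map numeric EXIF tag IDs to human-readable names where known.
    return {ExifTags.TAGS.get(tag_id, tag_id): value for tag_id, value in raw.items()}

if __name__ == "__main__":
    metadata = read_exif("photo.jpg")
    for field in ("Make", "Model", "DateTime", "Software"):
        print(field, "->", metadata.get(field, "missing"))
```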
With the development of generative models capable of creating photorealistic images and audio recordings, the need for automated counterfeit detection tools has increased. Deep learning algorithms are now used not only to recognize deepfakes [
397], but also to automatically validate cryptographic attributes of content.
The combination of artificial intelligence with authentication systems makes it possible to:
- identify metadata inconsistencies;
- analyze pixel and spectral characteristics;
- check consistency between the declared source and the content’s style.
Meanwhile, there are already examples of implementation:
- BBC Project Origin—an initiative to mark original news materials using C2PA and verify chains of trust before publication [398].
- Starling Lab (Stanford University)—a project using blockchain and cryptographic methods to protect digital evidence, including photo and video materials from conflict zones [399].
- Microsoft Video Authenticator—a tool that analyzes photo and video content to identify possible manipulations [400].
Despite technological advances, the implementation of technical measures faces a number of problems [
401]:
- the lack of a global content verification standard;
- potential vulnerability of metadata to deletion or substitution;
- the need for a scalable infrastructure;
- ethical issues related to privacy and anonymity.
An important aspect remains the balance between ensuring the authenticity of information and preserving the right to anonymous expression, especially in authoritarian regimes where anonymity protects activists [
402].
6.7. Regulatory Initiatives and International Agreements
In recent years, the European Union has built a multilayered regulatory framework to combat disinformation and related risks—from obligations for platforms and advertising intermediaries to requirements for AI transparency and the protection of media pluralism. This architecture is important not only as a “legal background” for technical and organizational measures (see
Section 6.1,
Section 6.2,
Section 6.3,
Section 6.4,
Section 6.5 and
Section 6.6), but also as a mechanism for aligning incentives: the regulator forces participants in the digital ecosystem to take into account the social costs of information attacks, which are underestimated in purely market logic.
The basic “framework” is set by the Digital Services Act (DSA). It extends to intermediaries and platforms and introduces a new set of responsibilities for “very large online platforms” (VLOPs) and “very large online search engines” (VLOSEs), including systemic risk assessments (disinformation, manipulation of the information environment), mitigation measures (adjustment of recommendations, design interventions, and increased moderation), data access obligations for researchers, and expanded transparency of advertising and recommendation systems [
403,
404]. These mechanisms institutionalize cooperation between academia and platforms and make possible the verification of “anti-fake news” claims (see
Section 6.3 and
Section 6.4). The Commission explicitly links the DSA to the need to “prevent illegal and harmful online activity and the spread of disinformation” and strengthens co-regulatory tools. At the “soft” (but de facto mandatory for VLOPs under the DSA) level, the Strengthened Code of Practice on Disinformation (2022) is in effect: 44 commitments, ranging from the demonetization of disinformation to libraries of political advertising, user-facing tools, and increased transparency, with the Code conceived as a recognized “code of conduct” in the logic of the DSA (co-regulation) [
405,
406]. The practice of 2023–2025 has shown that the withdrawal of individual companies from the Code does not exempt them from the strict requirements of the DSA: supervision and sanctions shift to the domain of “hard” law (fines, investigations into systemic risks)—a line that the European Commission has consistently pursued in its public communications and reports [
407]. From October 2025, Regulation (EU) 2024/900 on the transparency and targeting of political advertising applies. It introduces mandatory labeling of political ads, public ad libraries (archives), disclosure of the source of funding, and strict restrictions on microtargeting, especially targeting based on sensitive data and targeting of minors. The regulation is directly linked to the objectives of countering information manipulation and foreign interference in electoral processes and complements the DSA requirements for advertising transparency [
408].
Practical consequence: large platforms are restructuring their product and advertising policies in the EU in advance, and in some cases are announcing that they will stop carrying political advertising, citing the difficulty of complying with the new rules—an indicator of the stringency of the regime and its ability to change market behavior [
409,
410,
411]. The European Media Freedom Act (EMFA) entered into force on 7 May 2024, with key provisions applying from 8 August 2025. It strengthens guarantees of editorial independence and of transparency in media ownership and in the allocation of public advertising budgets, and it introduces procedures that reduce the risk of platforms arbitrarily “taking down” media content (including requirements for notification, justification, and appeals). In the context of IW, this is the “second leg” of the balance: while increasing the responsibility of platforms under the DSA, the EU protects media pluralism and the procedural rights of media players, so that the fight against disinformation does not turn into an unjustified suppression of legitimate journalism [
412,
413].
At the sectoral level, the EMFA is coupled with the updated Audiovisual Media Services Directive (AVMSD, 2018/1808), which extended duties on protection against incitement to hatred, violence and terrorism, as well as protection measures for minors, to video sharing platforms (VSPs). For IW, this is an important “bridge” from classic broadcast regulation to platform regulation: VSPs are required to have report-and-takedown mechanisms, measures against inflammatory/terrorist content, and media literacy efforts [
414]. The AI Act enshrines transparency obligations for systems that generate or manipulate content: users must be informed that they are interacting with AI; synthetic content (including deepfakes) must be labeled; and providers of general-purpose AI systems are required to implement detection/labeling technologies and provide a set of accompanying documentation.
The “unacceptable risk” prohibitions will apply from 2 February 2025, and the transparency obligations will be phased in over 2025–2026. These provisions are directly related to
Section 6.6 on content authentication/origin, creating a legal “base” for technical initiatives (C2PA, CAI, etc.) [
415,
416]. Of thematic importance is Regulation (EU) 2021/784 on countering the dissemination of terrorist content online (TERREG): it requires platforms to remove material identified by competent authorities within one hour and introduces mechanisms for cross-border removal orders together with procedural guarantees. For the architecture of countering IW, this is a precedent of the “one-hour norm”, which in crisis scenarios (information attacks accompanied by real threats) sets the bar for “default responsiveness” and pushes towards pre-prepared procedures and interfaces for interaction with authorities and researchers [
417,
418].
Although NIS2 is not about “content”, it strengthens the cybersecurity of digital service providers and critical infrastructure, reducing the risk that IW is accompanied by sabotage and attacks on the availability of platforms and media. For communication networks, NIS2 raises the requirements for risk management, incident reporting and supply chains—the “cyber foundation” for the resilience of the information environment, especially in phases of escalation and hybrid operations [
419].
The updated eIDAS 2.0 (Regulation (EU) 2024/1183) establishes the European Digital Identity Wallet—the basis for trusted attributes and electronic signatures on the user side. For countering IW, this opens the way to the mass availability of verifiable attributes (e.g., verified “author” or “organization” labels designed with privacy safeguards), which can be used in conjunction with content origin standards (see
Section 6.6) and “signed media content” policies on platforms [
420,
421]. The Data Act (Regulation (EU) 2023/2854) is not about disinformation per se but is important for the data ecosystem: it introduces rules on fair access to and reuse of data and applies from 12 September 2025. For research and regulatory reporting, it strengthens the “right to know”, especially where platforms and intermediaries act as “data holders”, which can facilitate risk audits and assessments of DSA measures, as well as independent research into the spread of malicious narratives (in combination with the DSA researcher access regime) [
422,
423,
424].
What does this all mean in practice:
The DSA-AI Act-EMFA legal “triangle” simultaneously sets obligations to reduce systemic risks (including disinformation), to ensure transparency of AI-generated and synthetic content, and to provide procedural guarantees for the media. For the operational policy of platforms, this means built-in risk interventions (ranking adjustments, cascade slowdowns, ad repositories), default labeling of synthetic content, and procedural transparency in the moderation of editorial content.
Political advertising is separated into its own “high transparency and responsible targeting mode,” which closes the main “force multiplier” of AI in electoral cycles—microtargeting based on sensitive data and opaque sponsorship.
Crisis and priority content categories (terrorism, violence, national security incidents) receive a procedural “fast track” in the TERREG style; combined with DSA crisis response protocols, this creates operational readiness for surges in coordinated attacks.
Researcher access and reporting are no longer a “goodwill” option: DSA/EMFA/Data Act and code regimes make “evidence-based” control a permanent norm—a critical condition for external validation of the effectiveness of measures.
7. IoT/IIoT Threats and Mitigation Patterns in the Context of Information Warfare
The Internet of Things (IoT) and the Industrial Internet of Things (IIoT) significantly expand the attack surface: billions of low-cost, heterogeneous devices form a distributed fabric of sensors and actuators that serves both information functions (data collection and routing) and cyber-physical processes (production, transportation, and energy management). Recent reviews identify a triad of risks: (i) limited computing and energy resources, leading to simplified cryptographic stacks; (ii) a fragmented ecosystem of standards and unstable supply chains; (iii) long device lifecycles with rare updates [
425,
426,
427,
428]. In the case of IIoT, these vulnerabilities are exacerbated by requirements for high fault tolerance and deterministic latency, which make traditional information security tools (scanning, patch management, service restarts) difficult to apply [
429,
430].
IoT acts as both a tool and an object of information warfare. First, IoT botnets (Mirai is a typical example) allow DDoS attacks to be scaled up to the level of strategic pressure on communications platforms and content delivery infrastructure, disrupting access to media resources [
431,
432]. Second, compromising sensors and telemetry channels opens up the possibility of data substitution and undermining trust—especially critical in the energy and transportation sectors [
428,
433]. European reports record an increase in “hacktivist” campaigns using DDoS, where IoT devices are used as tools, and attacks serve as a diversionary maneuver before data theft [
434]. Incidents in the energy sector of Ukraine show that even if the targets of attacks are physical infrastructure, the information effects (disorientation of operators, overload of support lines, loss of trust in monitoring) are achieved faster than material damage [
435]. Threats, information effects and defense patterns are given in
Table 3.
Typical threat profile:
- mass botnet campaigns (scanning open services, dictionary attacks on weak passwords, exploitation of firmware update vulnerabilities);
- privacy attacks (analysis of encrypted traffic and metadata to reconstruct behavioral scenarios in “smart homes”);
- CPS/IIoT integrity attacks (injection of false data, substitution of time and synchronization, compromise of protected subsystems).
Counteraction patterns:
Security-by-design. The NISTIR 8259/8259A baselines recommend, at a minimum: unique device identification, secure update mechanisms, configuration protection, logging, component transparency (SBOM), managed passwords, and cryptographic protection of data. ETSI EN 303 645 sets out similar minimum requirements for consumer devices [
436,
437,
438].
Network whitelisting and profiling. RFC 8520 (MUD) translates the manufacturer’s intent into network policy: the device advertises a manufacturer-published profile of its allowed interactions, and the network controller enforces a “default deny” ACL derived from it. This dramatically reduces the attack surface; the method is complemented by behavioral analytics (e.g., N-BaIoT) [
439].
Lightweight cryptography. For constrained protocols (CoAP/UDP, 6LoWPAN), OSCORE (RFC 8613) and EDHOC (RFC 9528) are used, providing authentication and integrity with minimal overhead [
440,
441].
Segmentation and Zero Trust. NIST SP 800-82 Rev.2 recommends tight isolation of OT segments, DMZs, one-way gateways, and restricted protocols [
442]. This reduces the likelihood of lateral movement by attackers and minimizes IW effects.
Vulnerability and Supply Chain Management. Component transparency (SBOM), firmware signing, secure OTA updates and periodic key rotation are mandatory practices [
443].
DDoS counteraction. Ingress filtering (BCP-38), scrubbing centers, quarantine VLANs and MUD profiles are used. This increases the cost of mobilizing IoT botnets for IW campaigns [
444].
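To make the “default deny” idea behind the MUD pattern above concrete, the following minimal Python sketch checks outbound flows against an allowlist derived from a simplified, hypothetical device profile; it does not reproduce the actual RFC 8520 YANG/JSON model.

```python
# Schematic "default deny" enforcement in the spirit of RFC 8520 (MUD).
# The profile format is a simplified stand-in, not the real MUD YANG/JSON model.

ALLOWED_FLOWS = {
    # (protocol, destination host, destination port) the device may originate
    ("udp", "ntp.vendor.example", 123),          # time synchronization
    ("tcp", "telemetry.vendor.example", 8883),   # MQTT over TLS
}

def permit(protocol: str, dst_host: str, dst_port: int) -> bool:
    """Default-deny policy: permit only flows explicitly named in the profile."""
    return (protocol.lower(), dst_host, dst_port) in ALLOWED_FLOWS

# The profiled flows pass; an attempt to join a Telnet scanning campaign does not.
print(permit("tcp", "telemetry.vendor.example", 8883))  # True
print(permit("tcp", "203.0.113.50", 23))                # False
```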
8. Evolution of Information Processing Systems from the Point of View of Information Warfare Issues
The development of digital signal processing technologies and computing architectures opens up fundamentally new opportunities for identifying, filtering, and neutralizing information threats. In the context of rapidly growing data volumes and increasingly complex manipulation patterns, traditional methods of information analysis are becoming insufficient. They are replaced or supplemented by high-performance algorithms capable of operating in real time, analyzing both obvious and hidden signs of information attacks.
Furthermore, the impact of computing technology on society is increasingly evident. As noted above, the “society—information technology” system is currently at a bifurcation point. The variability of the development vectors of computing systems (and, consequently, AI) implies several plausible scenarios for this transformation in the foreseeable future. Due to the growing geopolitical turbulence, these transformations are likely to be associated with one or another form of information warfare.
As emphasized in [
8], the transformation of computing technology is due to objective reasons. In particular, the possibilities for further improvement of computing technology built on semiconductors, the von Neumann architecture and binary logic have been largely exhausted. In the last decade, there has been an increasingly active search for alternatives. In addition to the above-mentioned works devoted to neuromorphic materials [
10,
11], one can mention quantum computing [
445,
446], attempts to implement optical computers [
447,
448], computing systems based on DNA [
449,
450], etc. It is permissible to assert that the issue of creating computing systems on a quasi-biological basis [
451,
452] is already on the agenda, an obvious prerequisite for which is work in the field of electronics based on polymer hydrogels [
453,
454], biocompatible electronics [
455,
456], and related areas.
Research in the field of creating neuromorphic materials is closely related to improving the algorithms governing the functioning of neural networks, specifically to work on explainable neural networks [
457], which are becoming increasingly relevant. Indeed, neural network training algorithms are currently well developed [
458], but the result of such training cannot always be predicted, nor can the specific algorithms used by the neural network be established. In conditions where neural networks (and AI built on their basis) are increasingly used in critically important areas of activity, it is necessary to ensure their predictability, which leads to the need to create explainable neural networks.
This raises the question of deciphering the network’s internal code, which can be addressed using methods proposed in [
459,
460]. The algorithmic/operational transparency of neural-network operation, as emphasized in [
461], in turn, is of great importance for the synthesis of neuromorphic materials, since from a chemical point of view it is convenient to implement their tuning/training directly in the synthesis process.
Different types of advanced computing technology correspond to different vectors of development of the system “society—information technology” (global information and communication environment). Thus, quasi-biological computing, which may interface with the human brain, aligns most closely with transhumanist concepts [
462,
463] tracing back to Huxley [
464].
Ideas connected to transhumanism are diverse. In particular, the literature presents works that defend the concept of ‘digital immortality,’ i.e., transferring human consciousness to a non-biological information carrier [
465,
466]. From a formal point of view, such an approach obviously has a right to exist: the operational capabilities of existing computing systems are approaching those possessed by the human brain.
However, there is an important nuance. As emphasized in [
467], the existing definitions of intelligence are purely descriptive. The essence of intelligence—and especially human consciousness—remains unresolved. Consequently, at the level of research that can be achieved in the foreseeable future, it will likely remain unclear what information should be “written into a computer” in order to obtain a “copy” of a real person.
This example clearly highlights one of the main problems of a methodological (if not philosophical) nature that arises when trying to implement “strong” AI. The question of its algorithmic basis is of fundamental importance. This returns us to the above-mentioned range of problems related to the creation of neuromorphic materials. The brief overview of this area presented above, as well as the overview presented in [14], allows us to state that such an algorithmic basis can vary. Existing AI systems and the computing tools that implement them are built on binary logic, which, however, is not mandatory. Thus, in the 1970s, reasonably successful attempts were made to implement computers operating on ternary logic [
468,
469]. Moreover, there is no reason to assume that binary logic is organically inherent in human thinking. As emphasized in [
3], Aristotle’s logic covers the simplest—from the point of view of formalization—type of reasoning. Human thinking cannot be reduced to it.
Consequently, it is permissible to speak not only about the variability of scenarios for the development of computer technology, but also about the variability of the algorithms that underlie it. This returns us to the question of the neural network theory of society (more broadly, the noosphere) and its role in information warfare. If society is an analog of a neural network, then from general methodological considerations it follows that there must be algorithms for processing information that are complementary to its structure. Note that AI is increasingly integrated with telecommunication networks. This means that the emergence of algorithms of the above type will turn AI into a kind of “mediator” between the ordinary (personal) and suprapersonal levels of information processing. Thus, a methodological basis arises for creating nontrivial tools for influencing various components of the sociocultural code, in particular, the collective unconscious.
This section first examines the most obvious problems associated with the development of new information processing algorithms as they relate to information warfare (the task of filtering information noise). Then, from the same perspective, we consider the practical use of Fourier-like transforms built on finite algebraic structures (Galois fields, etc.). Next, we examine new approaches to the construction of convolutional neural networks that are closely related to Fourier–Galois and similar transforms and are of significant interest for explainable AI. It is further argued that finite algebraic structures (a special case of which underlies modular arithmetic) are indeed of significant interest for creating nontrivial computing systems. The section concludes with a consideration of some humanitarian aspects of the issues raised.
8.1. Digital Signal Processing in Problems of Filtering Information Noise
Digital signal processing (DSP) is one of the key tools in modern systems for countering information attacks, especially in cases where the information flow is multimodal—including text, audiovisual, and metadata. Initially, DSP developed as a field focused on processing physical signals (speech, radio-frequency, images), but in the last two decades its methods have been adapted to work with “information noise”—chaotic or specially generated data that complicates the extraction of relevant information [
470,
471]. In the context of information warfare, information noise is defined as a set of messages that do not carry a useful semantic load or intentionally distort the signals necessary for decision-making. A classic example is the “pollution” of the information space by coordinated botnets, creating the illusion of public consensus or, conversely, mass discontent [
472]. DSP provides tools for detecting such anomalies through spectral analysis, filtering, correlation methods, and signal decomposition into basis functions.
One of the fundamental approaches is the use of linear and nonlinear filters to extract the useful signal from the stream. Linear filtering (e.g., using low-pass or high-pass filters) suppresses high-frequency bursts of activity characteristic of coordinated injections. Nonlinear methods, including median filtering and morphological operations, are effective in suppressing impulse noise and identifying stable patterns [
473,
474].
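As an illustration of this idea, the following minimal sketch (synthetic data, hypothetical parameters) compares a linear moving-average filter with a nonlinear median filter on a per-minute message-count series containing a short injected burst.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic per-minute message counts: smooth diurnal background plus a short
# injected burst, a toy stand-in for a coordinated posting spike.
t = np.arange(240)
signal = 50 + 10 * np.sin(2 * np.pi * t / 120) + rng.normal(0, 3, t.size)
signal[100:103] += 120  # three-minute coordinated burst

def moving_average(x, k=9):
    """Linear low-pass (boxcar) filter."""
    return np.convolve(x, np.ones(k) / k, mode="same")

def median_filter(x, k=9):
    """Nonlinear median filter; robust to short impulse-like bursts."""
    pad = k // 2
    xp = np.pad(x, pad, mode="edge")
    return np.array([np.median(xp[i:i + k]) for i in range(x.size)])

# Residuals (signal minus filtered background) highlight the burst; the median
# filter suppresses the impulse more cleanly than the moving average.
residual_lp = signal - moving_average(signal)
residual_med = signal - median_filter(signal)
print(int(np.argmax(np.abs(residual_lp))), int(np.argmax(np.abs(residual_med))))
```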
The use of frequency analysis, in particular, the fast Fourier transform (FFT) and wavelet transforms, enables translation of the information stream into the spectral domain, where it is easier to detect hidden periodicities or characteristic attack signatures. Studies show that even in text data (e.g., in tweet sequences), spectral characteristics can be identified that indicate the artificial origin of messages [
475].
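A minimal sketch of this kind of spectral check is shown below (synthetic hourly counts with an assumed 8-hour periodic component; all parameters are illustrative).

```python
import numpy as np

rng = np.random.default_rng(1)

# Hourly message counts over 512 hours: Poisson background plus activity
# modulated with an 8-hour period (a toy automated-posting signature).
n = 512
t = np.arange(n)
counts = rng.poisson(30, n).astype(float)
counts += 15 * np.sin(2 * np.pi * t / 8)

spectrum = np.abs(np.fft.rfft(counts - counts.mean()))
freqs = np.fft.rfftfreq(n, d=1.0)          # cycles per hour

peak = freqs[np.argmax(spectrum)]
print(f"dominant period = {1 / peak:.1f} hours")   # close to 8 hours for this series
```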
For complex multimodal signals, singular value decomposition (SVD) and principal component analysis (PCA) methods are relevant, which allow one to identify basic data structures and discard random or artificially created variations [
476]. This is especially effective in detecting fake images and videos, where signs of manipulation can be masked by noise.
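The following toy sketch (a synthetic account-by-hour activity matrix with an illustrative threshold) shows how the leading singular direction separates accounts that copy a common injection template from organic accounts.

```python
import numpy as np

rng = np.random.default_rng(2)

# Rows: accounts; columns: hourly activity over one week (toy data).
organic = rng.normal(0, 1, (80, 168))
template = rng.normal(0, 1, 168)                     # shared injection schedule
coordinated = template + 0.3 * rng.normal(0, 1, (20, 168))

X = np.vstack([organic, coordinated])
X -= X.mean(axis=0)                                  # center before SVD/PCA

# The leading right-singular vector captures the shared template; the projection
# of each account onto it scores how strongly the account follows that template.
U, S, Vt = np.linalg.svd(X, full_matrices=False)
scores = np.abs(U[:, 0] * S[0])

flagged = np.where(scores > np.percentile(scores, 80))[0]
print("flagged account indices (coordinated block starts at 80):", flagged)
```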
An important area is the use of adaptive filters that change their parameters depending on the statistics of the input signal. Such filters, based on the LMS (Least Mean Squares) and RLS (Recursive Least Squares) algorithms, are used in real-time systems that analyze social media dynamics and network traffic [
477].
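A compact LMS sketch is given below (synthetic series, illustrative step size): the filter predicts the next sample from recent ones, so the prediction error spikes at the boundaries of an out-of-pattern injection.

```python
import numpy as np

rng = np.random.default_rng(3)

# Slowly varying activity series with an abrupt injected segment.
n = 600
x = np.sin(2 * np.pi * np.arange(n) / 50) + 0.1 * rng.normal(size=n)
x[400:420] += 2.0   # injected anomaly

def lms_prediction_error(x, taps=8, mu=0.05):
    """One-step-ahead LMS predictor; returns the instantaneous prediction error."""
    w = np.zeros(taps)
    err = np.zeros(x.size)
    for k in range(taps, x.size):
        u = x[k - taps:k][::-1]     # most recent samples first
        e = x[k] - w @ u            # prediction error for sample k
        w += mu * e * u             # LMS weight update
        err[k] = e
    return err

e = lms_prediction_error(x)
# The largest error falls at the onset or offset of the injected segment.
print("largest prediction error at sample", int(np.argmax(np.abs(e))))
```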
Modern research in this area increasingly turns to hybrid methods that combine DSP with machine learning. For example, spectral features identified at the digital processing stage are used as inputs to neural networks to classify information sources or identify anomalies [
478,
479]. Thus, DSP serves as a “first-stage” filter, preparing data for more complex analytical models.
Many of these methods have already been adapted to the tasks of countering information warfare. Thus, publications on digital filtering of news streams propose algorithms that reduce the level of information noise by 40–60% without losing relevant messages [
480]. In combination with computing architectures optimized for parallel processing, this opens up opportunities for large-scale monitoring of the information space in real time.
Thus, DSP provides the basic technological foundation for combating information noise, acting not only as a data “cleaning” tool but also as a means of early detection of attacks disguised as natural information flow. Its further development in conjunction with the new computing architectures described in the following sections is key to increasing the resilience of the information infrastructure to such impacts.
The examples above support this assessment. Even in such a field as filtering information noise, the use of finite algebraic structures, in particular, Galois fields, shows clear promise.
8.2. Using Fourier–Galois Transforms to Reveal Hidden Patterns
The Fourier–Galois Transform (FGT) is an extension of the classical Discrete Fourier Transform (DFT) over finite fields and rings, allowing discrete data to be processed under modular arithmetic [
481,
482]. Unlike the traditional DFT, which operates in the field of complex numbers (i.e., continuously changing quantities), the FGT is defined for elements of finite algebraic structures, making it particularly suitable for digital computers implementing operations in residue number systems (RNS) or modular arithmetic [
483]. In the context of information warfare, the FGT opens up unique opportunities for analyzing large and noisy data sets. The fact is that many types of manipulative information attacks manifest themselves as weak but persistent patterns in time series or network activity that can be masked by noise. When working in complex arithmetic, extracting such patterns is difficult due to round-off errors and high computational costs, while using FGT in finite fields allows one to preserve accuracy and reduce the load on computational resources [
484].
The study by Kadyrzhan et al. (2024) on partial digital convolutions [
485] and the work by Suleimenov & Bakirov (2025) on finite rings [
486] show that integrating FGT into logical-algebraic models allows one to effectively decompose an information flow into orthogonal components even under conditions of incomplete data. This property is critically important, for example, when analyzing botnet activity, where observations capture only a fragment of the interaction graph.
From a theoretical point of view, the FGT is based on the properties of multiplicative subgroups of finite fields GF(p^n) or rings Z_m. Let p be a prime number; then the field GF(p^n) contains a multiplicative group of order p^n − 1, on which discrete harmonics can be constructed, analogous to the complex exponentials in the DFT [487]. This allows one to determine the frequency components of data encoded in modular form, which is especially useful in architectures operating on the principle of residue arithmetic.
Practical applications of FGT in countering information attacks include:
- Detection of coordinated activity: modular analysis of time sequences of publications in social networks reveals hidden cycles and synchronization of actions between anonymous accounts.
- Detection of hidden periodicity: FGT can find repeating structures even in highly fragmented data, which is difficult in classical spectral analysis.
- Data compression and filtering: due to operation in modular arithmetic systems, FGT allows one to implement efficient algorithms for compression and rejection of noise components without losing critical signals [488,489].
In addition, the computational efficiency of FGT is enhanced by using RNS, in which calculations for the different moduli are performed in parallel and the result is recombined using the Chinese Remainder Theorem [
490]. This makes it possible to process large amounts of information in real time, which is essential for monitoring information flows in the face of rapidly evolving attacks.
The combined use of FGT with partial convolution methods (see
Section 8.3) enables the construction of hybrid analysis systems in which selected data segments are subjected to high-precision spectral decomposition and the results are used to detect and localize attack sources.
Additional prospects for using FGT in spatial analysis problems are demonstrated in the work by Suleimenov et al. [
491] on mosaic structures in discrete coordinate systems. That study proposes a method for forming two-dimensional and three-dimensional discrete spaces based on finite algebraic rings using mosaic (tiling) partitions. Such structures allow large data arrays to be represented as regular fragments with orthogonal properties, which simplifies their processing and the analysis of frequency characteristics. In the context of countering information threats, this makes it possible to effectively identify spatio-temporal correlations between different segments of the information flow, for example, when analyzing the geographic distribution of disinformation sources or visualizing clusters of synchronous activity in social media. The use of mosaic coordinate systems together with FGT provides an additional level of detail for spectral analysis, allowing hidden patterns not only to be detected but also to be localized in the data space, which is especially important for complex monitoring systems. Thus, FGT is not just a mathematical tool but a fundamental element of computing architectures aimed at ensuring resistance to information threats. Its development within the framework of finite algebraic structures opens up prospects for building high-performance and energy-efficient systems integrated into both specialized processors and data analysis software packages.
This example is primarily illustrative, but it demonstrates that relatively simple patterns can underlie very complex structures. An important task is to develop tools that establish such patterns in a verifiable manner.
This brings us back to the question of methods for constructing explainable neural networks, since such networks are among the main tools for image analysis, including from the point of view of establishing their quantitative parameters [
492,
493].
8.3. Methodology of Partial Convolutions and Discrete Logical-Algebraic Models
Modern electronics (which, among other things, is the basis of computing technology) is largely based on the theory of linear electrical circuits [
494]. The main advantage of this theory is that it allows us to reduce the analysis of any processes occurring in such networks to an examination of their amplitude-frequency characteristics. The mathematical basis for this is the convolution theorem, which states that the Fourier transform of the convolution of two functions is the product of their Fourier transforms [
495].
In the classical theory of linear electric circuits, signal models are functions that take real or complex values. When moving to digital signals, i.e., signals that correspond to a certain set of discrete levels, it is natural to use functions that take values in any suitable finite algebraic structure (Galois field, finite algebraic ring, etc. [
496]). We emphasize that a signal is a physical process, and a function that serves as its model is a mathematical object. The choice of the latter is therefore no more than a matter of convenience and convention [
497].
This conclusion is applicable, in particular, to problems of digital image processing [
498]. A digitized image can be represented as a function that takes discrete values in a finite range of amplitudes. Consequently, its model can be a function that takes values, for example, in a Galois field.
For such fields, an analog of the convolution theorem has been formulated, a visual proof of which is given, for example, in the work [
499]. However, an important caveat applies. Direct application of the analog of the convolution theorem, formulated in terms of Galois fields, does not correspond in physical meaning to convolution computed over continuous (real or complex) values. This difficulty is overcome in [
500], in which the concept of partial digital convolution was introduced.
Partial Digital Convolution (PDC) is a method for processing discrete signals in which convolution operations are performed not over the entire domain of a function, but over selected subsets of it, which allows for a significant reduction in computational complexity and adaptation of algorithms to specific features of the input data [
501]. This approach aligns the ranges of the input and output variables, makes it possible to construct orthogonal bases for convolution over various finite fields, and eliminates the inconsistencies between input and output ranges that arise in direct calculations of digital convolution. As a result, the algebraic properties of convolution are preserved while computational complexity is reduced.
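As a purely illustrative sketch of the general idea (restricting a modular convolution to selected output indices), the following code evaluates a circular convolution mod a prime only on a flagged window; it is a simplified reading and does not reproduce the orthogonal-basis construction of [485,500].

```python
import numpy as np

P = 257  # prime modulus; all arithmetic stays within GF(257)

def modular_convolution(x, h, p=P):
    """Full circular convolution with all arithmetic performed mod p."""
    n = len(x)
    return [sum(x[m] * h[(k - m) % n] for m in range(n)) % p for k in range(n)]

def partial_modular_convolution(x, h, window, p=P):
    """Evaluate the modular convolution only at the output indices in `window`,
    e.g., a stream segment flagged as suspicious by a cheaper first-stage filter."""
    n = len(x)
    return {k: sum(x[m] * h[(k - m) % n] for m in range(n)) % p for k in window}

x = [int(v) for v in np.random.default_rng(4).integers(0, P, 64)]
h = [1, 2, 3, 4] + [0] * 60                 # short kernel, zero-padded to block length

full = modular_convolution(x, h)
part = partial_modular_convolution(x, h, window=range(20, 28))
print(all(full[k] == part[k] for k in range(20, 28)))   # True: identical on the window
```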
Kadyrzhan et al. (2024) reported [
485] a formalization of the partial digital convolution method based on algebraic extensions and construction of orthogonal bases. The authors show that the use of algebraic ring theory and modular arithmetic enables PDC implementation in computing systems with a high degree of parallelism, while ensuring minimal accuracy loss. This approach is especially promising when integrated into specialized processor architectures focused on processing streams in real time. One of the key advantages of PDC is its compatibility with discrete logical-algebraic models—formal structures that describe data not as real numbers but as elements of finite sets (rings, fields, or lattices). Such models allow processing information signals in a more compact and error-resistant form, which is critical for problems where data can be distorted or fragmented [
502,
503]. The logical-algebraic approach to convolutions is based on operations in finite algebraic structures, which opens up opportunities for optimization for specific hardware platforms. For example, the use of operations modulo a prime or composite number makes it possible to use highly efficient computation methods in RNS, which in turn enable computation without intermediate carries [
486]. This is directly related to research in the field of finite ring computing architectures and RNS [
504,
505].
The practical value of this approach is evident in the problems of filtering information noise and detecting hidden correlations. For example, in the analysis of social networks, partial convolutions can be applied to time-series “windows” reflecting the activity of individual user groups, and logical-algebraic models enable robust comparison of such patterns even with a high degree of data noise. In combination with the DSP methods described in
Section 8.1, PDC and logical-algebraic models enable the construction of a multilevel filtering architecture: at the first level, priority flow segments are selected; at the second, the selected segments undergo high-precision processing using error-resistant algebraic methods.
In the context of information warfare, the methodology in question is of particular importance when analyzing massive data streams, such as social media or network logs, where full processing of all messages in real time is not always technically feasible. The approach enables computational resources to be focused on segments flagged as potentially anomalous by preliminary (e.g., spectral or statistical) analysis, thus accelerating the response to threats [
506].
Thus, the methodology of partial convolutions and its integration with discrete logical-algebraic models form the basis for creating new-generation computing systems capable of effectively combating information attacks in the context of big data and limited resources. Their further development, including in the direction of optimization for FPGAs and specialized processors, enables scalable real-time monitoring and filtering systems.
However, the prospects for its use are not limited to this. Unlike the analog of the convolution theorem, as discussed in [
498], the method of partial convolutions allows us to consider digital convolutions with full preservation of the physical meaning of the transformations performed. Consequently, this method can be considered as a certain step forward in relation to the methods of creating explainable neural networks. Indeed, in accordance with the classical theory of linear electrical circuits, it is possible to obtain a circuit with specified characteristics, starting from its transfer function (in other terminology—amplitude-frequency characteristic). Similarly, a convolutional neural network can be built starting from an analog of the transfer function (more precisely, a set of such functions, each defined over its own Galois field), constructed using the method in [
485].
Methodologically, this corresponds to the problem considered in
Section 8.2. A complex image represented in digital form must be reduced to simpler characteristics. Establishing the transfer function of a convolutional neural network allows us to solve this problem for many important applications that use neural networks of this type, e.g., [
507]. A similar approach can be used for relatively small images, and in this case the analog of the transfer function is not only found explicitly but also serves as a basis for constructing electrical circuits that solve a specific problem [
508].
This approach, among other things, creates the prerequisites for deciphering the true algorithms of the functioning of neural networks and their naturally arising analogs. In particular, in the future, it is possible to set the task of deciphering the sociocultural code. This, in turn, creates prerequisites for optimizing the impact on such a code, which is directly related to the problems of information warfare. Consequently, the analysis of the prospects for the development of computing tools using finite algebraic structures is of interest from this point of view as well.
8.4. Finite Ring and Residue Number System Computer Architectures
Finite-ring and RNS-based architectures are a special class of digital computing systems in which operations are performed modulo one or more integers. Their key feature is the ability to perform arithmetic operations in parallel without the need for carry propagation, which leads to a significant increase in computational speed [
490,
509].
In the context of countering information warfare, such architectures have a number of advantages:
- high throughput—the ability to process large amounts of data in real time;
- energy efficiency—due to the elimination of complex carry operations;
- fault tolerance—due to modular redundancy and the ability to detect/correct errors;
- built-in data protection—modular arithmetic complicates direct recovery of the original data when intercepting intermediate results [510].
Suleimenov et al. (2023) [
504] show that the use of multivalued logic based on algebraic rings significantly expands the capabilities of such architectures in signal processing. The use of finite rings (for example, Z_m, where m is a composite number) allows one to implement computing units capable of working not only with binary but also with multivalued logical variables, which is especially useful when modeling complex systems with a large number of states, including social networks and communication platforms.
Suleimenov & Bakirov (2025) [
486] propose a method for constructing discrete coordinate systems based on finite algebraic rings. This solution has direct practical significance for spatiotemporal data analysis systems in problems of monitoring information flows.
From the point of view of computational theory, RNS is based on the decomposition of integers into a set of residues with respect to pairwise coprime moduli {m_1, m_2, …, m_k}. Arithmetic operations are performed in each modulus independently, which enables full parallelism. The result is reconstructed using the Chinese Remainder Theorem (CRT) [
511]. This approach is ideally suited for FPGA and ASIC architectures, where each modular branch can be implemented as a separate computing unit.
A notable case is a multiplication-modulo-7 method developed and patented in Kazakhstan [
512]. This method optimizes one of the key operations in RNS and ring arithmetic systems, reducing cycle count and increasing performance of specialized processors. For information security and information noise filtering tasks, such optimizations are critical, since multiplication is a basic operation in most spectral analysis methods (including Fourier–Galois transforms, see
Section 8.2).
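The details of the patented scheme are not reproduced here; the naive sketch below only illustrates why multiplication in such a tiny modulus is hardware-friendly: the entire operation fits into a 7 × 7 lookup table (a small ROM/LUT instead of a general multiplier).

```python
# Naive illustration only (not the patented method of [512]): multiplication
# modulo 7 can be fully tabulated, since there are just 7 x 7 = 49 combinations.
MUL7 = [[(a * b) % 7 for b in range(7)] for a in range(7)]

def mul_mod7(a: int, b: int) -> int:
    return MUL7[a % 7][b % 7]

print(mul_mod7(5, 6))   # 30 mod 7 = 2
print(mul_mod7(4, 4))   # 16 mod 7 = 2
```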
Modern research [
513,
514] shows that architectures based on finite rings and RNS are successfully used in cryptography, digital filtering systems, and even in machine learning. For example, in botnet detection and filtering tasks, RNS architectures allow fast and error-resistant correlation analysis algorithms to be implemented without loss of performance as the data volume grows. A significant contribution to the development of computing architectures based on modular arithmetic was made by the work of Kadyrzhan et al. (2025) [
515], dedicated to the prospects of using quasi-Mersenne numbers in the design of parallel-serial processors. The work shows that such moduli make it possible to simplify the hardware implementation of multiplication and addition owing to a representation close to that of Mersenne numbers, but with a more flexible choice of parameters. This provides a compromise between performance, hardware costs, and the versatility of the architecture, which is especially important for systems that perform digital filtering and identify manipulative patterns in data streams in real time.
Research conducted in this direction also made it possible to create fairly simple means for calculating discrete logarithms [
516]. Calculating such logarithms was considered a very difficult task for a long time. As noted in [
517], the first practical public-key cryptosystem, the Diffie-Hellman key exchange algorithm, relies on the assumption that discrete logarithms are computationally hard. This assumption, characterized as a hypothesis, underpins the presumed security of various other public-key schemes [
517]. At the time of the cited publication, this hypothesis was the subject of extensive debate, and this situation persisted until recently [
518].
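To make the notion concrete: for a prime p and base g, the discrete logarithm of h is the exponent x with g^x ≡ h (mod p). The standard baby-step giant-step sketch below (small toy parameters; not the method of [516]) finds such an x in roughly O(√p) operations, which is practical only for small fields.

```python
import math

def discrete_log_bsgs(g, h, p):
    """Baby-step giant-step: return x with g**x = h (mod p), or None if none exists."""
    m = math.isqrt(p - 1) + 1
    baby = {pow(g, j, p): j for j in range(m)}   # baby steps: g^j for j < m
    step = pow(g, -m, p)                         # g^(-m) mod p (modular inverse)
    gamma = h % p
    for i in range(m):                           # giant steps: h * g^(-i*m)
        if gamma in baby:
            return i * m + baby[gamma]
        gamma = (gamma * step) % p
    return None

p, g = 1019, 2                                   # small prime; 2 is a primitive root mod 1019
print(discrete_log_bsgs(g, pow(g, 777, p), p))   # 777
```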
Furthermore, even problems that were previously considered and solved purely in terms of continuous functions admit a transition to a discrete description. In particular, in [
519] it is shown that the description of the propagation of electromagnetic waves in space can be reduced to a discrete form. On this basis, in [
520] it is shown that arbitrary converters of electromagnetic radiation can also be reduced to an equivalent circuit that contains only discrete elements. This potentially creates the prerequisites for the use of algebraic methods, including in problems of this type.
For clarity, we will also provide a small numerical illustration based on the materials of the works [
500,
505].
As a simple illustration of modular processing in a Residue Number System, consider the integer N = 73 represented under the modulus set {5, 7, 11}. The residues are (73 mod 5, 73 mod 7, 73 mod 11) = (3, 3, 7). An arithmetic operation can then be performed independently in each modulus. For instance, let us add M = 52, which is represented as (2, 3, 8). Their sum in RNS is (3 + 2 mod 5, 3 + 3 mod 7, 7 + 8 mod 11) = (0, 6, 4). Using the Chinese Remainder Theorem, this corresponds to the integer 125 in the original domain. The modular decomposition allows the operation to be carried out fully in parallel, without carry propagation, and then efficiently reconstructed.
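The same worked example can be expressed as a short sketch (standard CRT reconstruction; the helper names are ours, for illustration only):

```python
from math import prod

MODULI = (5, 7, 11)

def to_rns(x, moduli=MODULI):
    return tuple(x % m for m in moduli)

def rns_add(a, b, moduli=MODULI):
    # Channel-wise addition: no carries propagate between moduli.
    return tuple((ai + bi) % m for ai, bi, m in zip(a, b, moduli))

def from_rns(residues, moduli=MODULI):
    """Standard Chinese Remainder Theorem reconstruction."""
    M = prod(moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)   # pow(Mi, -1, m) is the inverse of Mi mod m
    return x % M

a, b = to_rns(73), to_rns(52)          # (3, 3, 7) and (2, 3, 8)
s = rns_add(a, b)                      # (0, 6, 4)
print(s, from_rns(s))                  # (0, 6, 4) 125
```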
A toy example of the Fourier–Galois Transform can be given over the finite field GF(7). Consider the input vector f = [1, 2, 3, 4]. Let ω be a primitive element of GF(7), for instance ω = 3, which has order 6. The transform is computed as F(k) = Σ f(n) ω^(kn) mod 7. For k = 0, F(0) = 1 + 2 + 3 + 4 = 10 ≡ 3 (mod 7). For k = 1, F(1) = 1·3^0 + 2·3^1 + 3·3^2 + 4·3^3 ≡ 1 + 6 + 6 + 3 ≡ 16 ≡ 2 (mod 7). Proceeding analogously for other k yields the transformed sequence. This simple example demonstrates how algebraic periodicities in finite fields can be exploited to perform convolution-like operations with full determinism and reduced complexity.
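The same toy transform in code (a direct evaluation of the definition above; with f = [1, 2, 3, 4] and ω = 3 it returns F = [3, 2, 0, 5]):

```python
P = 7          # the field GF(7)
OMEGA = 3      # a primitive element of GF(7), of order 6
f = [1, 2, 3, 4]

def fourier_galois(f, omega=OMEGA, p=P):
    """F(k) = sum_n f(n) * omega^(k*n) mod p, exactly as in the worked example."""
    n = len(f)
    return [sum(f[m] * pow(omega, k * m, p) for m in range(n)) % p for k in range(n)]

print(fourier_galois(f))   # [3, 2, 0, 5]
```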
To illustrate the practical benefits of modular architectures, consider the example of a residue number system (RNS) processor configured with the moduli set {2, 3, 5, 17, 257}. The dynamic range of this configuration is approximately 2^17, which is equivalent to a 16-bit integer processor. Importantly, the critical path is determined solely by the largest modulus (257), corresponding to the complexity of an 8-bit adder or multiplier, while all smaller channels operate in parallel. Assuming a conservative clock frequency of 200 MHz for such an 8-bit modular unit, the throughput of vector additions in RNS reaches around 200 million “16-bit equivalent” additions per second. By comparison, a conventional 16-bit binary adder under the same technology assumptions typically operates at about 150 MHz, yielding approximately 150 million additions per second; thus, the RNS scheme achieves a ≈1.3× gain for addition. The advantage becomes even clearer for multiplication: while classical 16-bit binary multipliers often require multiple cycles or reduced frequency (50–150 million multiplications per second effective throughput), RNS multipliers remain one-cycle operations per modulus, sustaining ≈200 million per second, i.e., up to 2–3× faster. Moreover, scaling to higher dynamic ranges is achieved by simply adding new moduli without extending the critical path, thereby maintaining frequency while increasing representable precision. This carry-free parallelism highlights why RNS-based processors are particularly attractive for real-time and high-throughput applications. In a complementary manner, Fourier–Galois Transforms (FGT) preserve consistency of value domains by ensuring that both inputs and outputs remain strictly within GF(p^n), thereby avoiding fractional results inherent in classical convolution. This modular consistency across both RNS and FGT reinforces their practical value as efficient and deterministic alternatives to conventional arithmetic.
Thus, the combination of finite rings, RNS and specialized modular algorithms, including optimized multiplication schemes, forms the basis of high-performance computing systems capable of effectively countering modern threats in the information environment. Their use in complex monitoring and filtering systems allows for prompt analysis of information flows and rapid detection of anomalies even in conditions of massive attacks.
8.5. Intelligent Filters for Resistance to Manipulative Patterns
Intelligent filters for resilience against manipulative patterns are a class of digital information processing systems that use machine learning algorithms, signal theory, and mathematical models based on finite algebraic structures to identify and suppress malicious or manipulative information influences. Their key task is to automatically distinguish relevant information from distorted or intentionally modified information in the conditions of intense information noise [
74,
521].
In the context of information warfare, manipulative patterns can take various forms—from the mass publication of similar messages (astroturfing) and the targeted dissemination of false narratives to the synchronous activity of botnets and the use of deepfakes [
522]. Classical filtering methods based on static rules or heuristics are not flexible enough, since attackers quickly adapt their tactics. In contrast, intelligent filters are capable of online learning and self-adaptation, enabling long-term resilience [
523]. The mathematical basis of such filters often combines:
- spectral analysis methods (including Fourier–Galois transforms, see Section 8.2) to identify hidden periodicities;
- partial digital convolutions to isolate characteristic spatio-temporal structures;
- residue number systems and finite ring arithmetic for high-speed parallel data processing;
- multivalued logic to model uncertainty and make decisions under incomplete information [524].
From an architectural point of view, an intelligent filter usually includes several processing levels:
Data preprocessing and normalization—removing noise components, restoring gaps, modular data decomposition.
Feature extraction—spectral, statistical, and topological features of the information flow.
Pattern classification—using neural networks (including graph networks), Bayesian models, or hybrid logical-algebraic schemes.
Dynamic adaptation—adjusting filter thresholds and a set of features based on the attacker’s current tactics.
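A minimal sketch of this layered architecture is given below (the feature, threshold, and the trivial decision rule are illustrative placeholders, not a trained model):

```python
import numpy as np

class ManipulationFilter:
    """Toy sketch of the four-level architecture described above."""

    def __init__(self, threshold=4.0, adapt_rate=0.05):
        self.threshold = threshold          # decision threshold (illustrative)
        self.adapt_rate = adapt_rate        # adaptation step for level 4

    def preprocess(self, window):
        # Level 1: normalization (zero mean, unit variance).
        w = np.asarray(window, dtype=float)
        return (w - w.mean()) / (w.std() + 1e-9)

    def features(self, w):
        # Level 2: a cheap spectral feature (dominant peak vs. median spectrum).
        spectrum = np.abs(np.fft.rfft(w))
        return {"peak_to_median": spectrum.max() / (np.median(spectrum) + 1e-9)}

    def classify(self, feats):
        # Level 3: trivial rule standing in for a trained classifier.
        return feats["peak_to_median"] > self.threshold

    def update(self, was_false_positive):
        # Level 4: dynamic adaptation of the threshold to the attacker's tactics.
        self.threshold += self.adapt_rate if was_false_positive else -self.adapt_rate

flt = ManipulationFilter()
rng = np.random.default_rng(5)
window = rng.poisson(20, 128).astype(float)
window += 10 * np.sin(2 * np.pi * np.arange(128) / 8)      # injected periodic pattern
print(flt.classify(flt.features(flt.preprocess(window))))  # True for this window
```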
Real-world studies applying intelligent filters to countering IW demonstrate high effectiveness. For example, the work of Song et al. (2019) [489] shows that integrating spectral methods with trainable models can achieve up to 95% accuracy in detecting coordinated attacks on Twitter. Other studies [
525] describe the use of algorithms based on residue arithmetic for video-stream processing and the detection of anomalous image manipulations, which is relevant for detecting deepfakes.
An important area is resistance to adaptive attacks. Here, a special role is played by algorithms that can generate “counter-patterns”—information noise that distorts the attacker’s data and reduces the effectiveness of their learning algorithms. This approach is close to the concept of “adversarial machine learning” but is applied in the opposite direction—for defense rather than attack [
526].
The practical implementation of intelligent filters is often based on hybrid architectures that combine FPGA/ASIC modules for high-speed signal processing with CPU/GPU blocks for high-level analysis. Such integration provides high throughput and complex analytical functionality at the same time, which is critical for real-time monitoring of social networks during massive information attacks. Thus, intelligent filters for resistance to manipulative patterns are one of the key technological components of the modern arsenal of protection against information threats. Their development is directly related to progress in digital signal processing, algebraic computing, and machine learning, and their implementation in content monitoring and moderation systems makes it possible not only to identify but also to neutralize malicious influences before they reach a critical mass.
Thus, traditional ML/NLP detectors demonstrate significant achievements in recognizing manipulated content, but have limitations in the context of noisy data, high load, and the need for predictable behavior. Alternative computational approaches, in particular, methods based on the RNS, Fourier–Galois transform (FGT), and partial digital convolutions, open up opportunities for deterministic processing and efficient parallelization, while maintaining robustness to noise and reducing resource intensity. A comparative analysis of the presented directions is summarized in
Table 4. An extended version of the analysis, including additional criteria and more detailed characteristics, is given in
Appendix A (
Table A1).
9. Ethical and Legal Aspects of Combating Disinformation
Countering disinformation inevitably involves a complex balance between protecting information security and preserving fundamental rights and freedoms, primarily freedom of speech. Strengthening controls and content filtering can help reduce the spread of false information but carries the risk of censorship, restrictions on political pluralism, and abuse by government or corporate actors.
In international practice, approaches to regulating the information space range from soft forms of self-regulation and codes of conduct to tough legislative initiatives with administrative and criminal sanctions. At the same time, the key challenge remains the development of universal norms that account for both the cultural and legal characteristics of different countries and the cross-border nature of digital communications.
This section analyzes the main ethical dilemmas and legal mechanisms in the field of combating disinformation and assesses how different regulatory models affect democratic resilience and human rights.
9.1. Freedom of Speech vs. Information Security
The issue of the relationship between freedom of expression and information security is one of the central ethical and legal challenges in countering disinformation. Freedom of speech is enshrined in key international legal acts, including Article 19 of the Universal Declaration of Human Rights (1948) [
527] and Article 19 of the International Covenant on Civil and Political Rights (1966) [
528]. Both provisions emphasize the right of every person to freely express their opinion, including “freedom to seek, receive and impart information and ideas of all kinds.” However, the same documents stipulate that this right may be limited “in the interests of national security, public order, public health or morals.” Thus, at the normative level, there is recognition of the need for a balance between individual freedom of expression and the collective security of the information space. In the context of information warfare, this balance is especially fragile. Mass disinformation campaigns aimed at undermining democratic institutions can threaten not only political stability but also human rights, including the right to reliable information [
529,
530].
Academic literature identifies two key approaches to resolving this dilemma. The first is liberal-absolutist, according to which any form of restriction of freedom of speech is unacceptable, and disinformation threats should be neutralized exclusively through counter-narratives and increased media literacy [
531]. The second is pragmatic-regulatory, allowing for restrictions on certain types of content in the presence of a proven threat to national or public security [
532].
In practice, most states adhere to a mixed approach. For example, the European Union in its “Code of Practice on Disinformation” (2022) [
533] sets out voluntary commitments of online platforms to remove harmful content, while introducing transparency and accountability requirements to avoid unjustified censorship. At the same time, some countries (for example, Singapore with its POFMA law—“Protection from Online Falsehoods and Manipulation Act”) [
534] apply strict centralized control over information, which has drawn criticism from human rights organizations [
535].
Research shows that excessive restrictions on freedom of speech under the pretext of information security can have the opposite effect—a decrease in trust in government institutions and an increase in activity in shadow communication channels [
536]. In turn, the complete lack of regulation in the context of digital platforms with algorithmic personalization creates conditions for the rapid spread of false narratives and manipulative patterns [
64].
The issue of proportionality of interference deserves special attention. The European Court of Human Rights (case “Handyside v. United Kingdom”, 1976) established that freedom of expression extends not only to information that is perceived positively or neutrally, but also to that which “offends, shocks or disturbs” [
537]. This means that measures aimed at combating disinformation must be strictly proportionate to the threat and avoid creating an excessive chilling effect on legitimate public discourse [
538].
In modern conditions, the principle of minimum necessary restriction has become a key ethical guideline, suggesting that any restrictions on freedom of speech must be [
539]:
- enshrined in law;
- necessary to achieve a legitimate goal;
- proportionate to the potential damage;
- subject to independent judicial or public control.
Thus, finding a balance between freedom of speech and information security requires fine-tuning of the law, an interdisciplinary approach, and transparent accountability mechanisms. Only under these conditions can the risk of abuse be minimized and, at the same time, the threats of information warfare be effectively countered.
9.2. Platform Responsibility and Moderation
The debate over platform responsibility for content has evolved from “soft” self-regulation to hybrid models with strict legal obligations. In the 2010s, voluntary norms formed the basis—internal community rules, public reports on compliance with standards, industry “principles”. However, by the early 2020s, the center of gravity had shifted to co-regulation: states set the frameworks and procedures, and private platforms ensure their operational implementation.
In the United States, Section 230 of the Communications Decency Act (47 U.S.C. §230) preserves intermediaries’ immunity for editorial decisions, which fostered the early stage of moderation under platforms’ own rules and mass automation. In the EU, the logic of positive duties is enshrined: the Digital Services Act (DSA) established a detailed system of procedures and accountability, especially for “very large online platforms” (VLOPs): regular assessments of systemic risks, mitigation measures, independent audits, access to data for researchers and supervisory powers of regulators [
540].
Thus, moderation is no longer solely a matter of private discretion and is integrated into a risk-management legal regime. This applies not only to obviously illegal content (terrorism, exploitation of children, calls for violence), but also to the “gray zone”—borderline legal materials that may pose a public danger: disinformation campaigns, intimidation, coordinated inauthentic behavior, manipulative recommendations.
The DSA has institutionalized a risk-based approach: platforms are required to regularly assess risks (election interference, threats to security and human rights, the impact of recommendation systems), publish reports, take proportionate measures (adjusting algorithms, strengthening advertising verification, setting up moderation rules), undergo independent audits, and provide researchers with standardized access to data [
540].
At the national level, Germany implements the NetzDG with a notice-and-action procedure and complaint reporting [
541], and the UK implements the Online Safety Act with a duty of care framework, which obliges platforms to systematically prevent the risks of harmful content through risk assessment, safety-by-design, and Ofcom supervision [
542]. Similar processes are underway across Asia, where states are creating their own regulatory frameworks to counter disinformation. In Singapore, the key instrument is the Protection from Online Falsehoods and Manipulation Act (POFMA, 2019), which empowers authorities to issue mandatory directions (correction directions) to correct false claims. The law focuses on factual distortions, and failure to comply with the directions entails administrative and criminal liability [
543].
In China, the regulatory complex includes the Cybersecurity Law (2017), the Data Security Law (2021), and the Personal Information Protection Law (2021), as well as specialized acts such as the Deep Synthesis Regulations (2022), which regulate the use of algorithms and synthetic content, introducing deepfake labeling requirements and imposing obligations on providers to prevent the manipulative use of these technologies [
544]. Taiwan passed the Anti-Infiltration Act in 2020, which limits external funding and coordination of information campaigns, complementing it with measures by the National Communications Commission (NCC) aimed at countering disinformation in the media environment [
545].
In South Asia, India has implemented the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, which strengthened intermediaries’ content-moderation responsibilities, introduced grievance officers, and increased the transparency of algorithms. An attempt to supplement the regulation with the creation of a state Fact-Check Unit was stayed by the Supreme Court in March 2024 as raising issues of proportionality and freedom of expression [
546]. In addition, the Digital Personal Data Protection Act, 2023 is in force, forming rules for the processing and targeting of personal data in political communication [
547].
In Southeast Asia, Vietnam, through the Cybersecurity Law (2019) and Decree No. 53/2022, has established mechanisms for the removal of false information and data localization requirements for global platforms [
548]. In 2021, Hong Kong amended the Personal Data (Privacy) Ordinance, criminalizing doxxing and empowering the PCPD to issue takedown orders [
549]. In 2022, the Philippines adopted the SIM Registration Act, aimed at curbing anonymous dissemination of disinformation on instant messaging apps [
550]. In Indonesia, amendments to the ITE Law clarified the concept of defamation and simultaneously criminalized the “dissemination of false statements causing public concern” [
551].
At the “soft law” level, guidance is provided by the Recommendation of the Committee of Ministers of the Council of Europe on the roles and responsibilities of Internet intermediaries (CM/Rec(2018)2) and the updated Santa Clara Principles on transparency and procedural guarantees (user notifications, clear reasons for blocking, the possibility of appeal, and the publication of metrics) [
552,
553].
The main operational problem is scale and reliance on automation. The moderation system of large platforms combines rules, tools, and people: filters and classifiers (hash matching, machine-learning (ML) models), manual review queues, appeals, escalations to policy teams, and interaction with “trusted flaggers” and law enforcement agencies. Even with high accuracy metrics, algorithmic decisions make systematic mistakes: false positives affecting minority content, cross-cultural contexts, irony, and memes. Therefore, the institution of procedural guarantees—notifications, accessible appeals, independent reviews (including Meta’s Oversight Board)—has become a central element of “moderation ethics” [
554,
555,
556].
Research shows that the “right to explanation” and traceability of decisions reduce the “chilling effect” and strengthen user trust, even when content is not reinstated [
557]. The accountability system faces dilemmas. First, “over-removal vs. under-enforcement”: strict NetzDG deadlines and threats of fines push platforms to premature takedowns, which is criticized by NGOs and the Council of Europe [
541,
552]. Second, transparency and researcher access to data (DSA, Art. 40) conflict with privacy, commercial confidentiality, and security; the solution lies in standardized access interfaces, independent vetting of projects, and auditing of datasets [
540]. Third, there is the problem of interoperability and encryption: extending moderation to closed channels undermines cryptographic protection. Regulators therefore increasingly limit themselves to metadata, reporting, and “safety by design” instead of mass content scanning (a toy illustration of the metadata-only approach is given after this paragraph) [
542,
543].
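The metadata-only approach can be conveyed by the toy Python sketch below; the forwarding counters, sample size, and z-score threshold are illustrative assumptions and not parameters of any real messaging service. The service never inspects message content: it looks only at per-message forwarding counts and flags statistical outliers that may indicate coordinated virality.

# Toy illustration of metadata-only monitoring for encrypted channels: only
# per-message forwarding counters are examined, never the message content.
from statistics import mean, stdev

def flag_viral_forwards(forward_counts, z_threshold=2.5):
    """Return indices of messages whose forward count is an extreme outlier."""
    if len(forward_counts) < 10:
        return []                      # not enough data for a meaningful baseline
    mu, sigma = mean(forward_counts), stdev(forward_counts)
    if sigma == 0:
        return []
    return [i for i, count in enumerate(forward_counts)
            if (count - mu) / sigma > z_threshold]

# Example: one message forwarded far more often than the rest of the sample.
counts = [3, 5, 2, 4, 6, 3, 2, 5, 4, 3, 250]
print(flag_viral_forwards(counts))     # -> [10]

Such signals can trigger rate limits or reporting obligations without weakening end-to-end encryption, which is precisely the “safety by design” logic mentioned above.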
At the governance level, the institutionalization of the “procedural constitutionalism” of platforms is taking shape: codes, registries and libraries of political advertising, labeling of state media; transparency of recommendation systems and user choice “switches” (DSA); external experts, auditors, academic partnerships, public councils and independent dispute resolution bodies (Oversight Board). In the literature, this is described as a “new administrative system of online speech,” where private rules and state requirements form a regime closer to administrative law than to editorial discretion [
554,
558,
559]. Finally, the platform becomes not only a “moderator,” but also an “architect of the environment.” Responsibility shifts “up the stack”: from deleting posts to managing systemic risks through feed design, notification cadence, labeling and virality controls, advertiser vetting, and researcher access. Empirical evidence shows that resilience to disinformation depends more on the properties of recommendation systems and advertising infrastructure than on targeted removals. Therefore, the DSA, the Santa Clara Principles, and the Council of Europe recommendations agree on one thing: platform accountability is measured not by individual “removed/not removed” decisions, but by processes such as risk assessments, explainability of algorithms, access to data, the possibility of appeal, and independent oversight [
543,
552,
553]. A summary table of regulatory acts is provided in
Appendix B,
Table A2.
Despite these limitations, several jurisdictions have already accumulated evidence of successful countermeasures against information operations. Initiatives such as the DSA, POFMA, the Anti-Infiltration Act, the SIM Registration Act, as well as NATO initiatives and Ukraine case studies, demonstrate measurable outcomes ranging from reductions in cybercrime to increased transparency of political advertising. A summary of these successful practices and their outcomes is provided in
Appendix B, Table A3.
9.3. Risks of Censorship and Abuse by Government Agencies
Countering disinformation inevitably creates the temptation to interpret state powers broadly: the more “elastic” the legal basis and the broader the discretion of the authorities, the higher the likelihood that measures intended to protect the public interest will turn into a tool for suppressing criticism and controlling the public sphere. This is manifested in three interrelated practices: overly vague offenses, accelerated “emergency” procedures without due guarantees, and opaque administrative moderation delegated to platforms. International standards proceed from the opposite: any restrictions on freedom of expression are permissible only if they meet the test of legality, legitimacy of the goal, and necessity/proportionality of the intervention; otherwise, a “chilling effect” occurs, when citizens refrain from legitimate speech for fear of sanctions [
560,
561]. The risks of vagueness are most often associated with laws aimed at “false information”, “fakes”, or “undermining public order”.
The joint declaration of the UN, OSCE, OAS and African Commission special rapporteurs (2017) explicitly recommended abandoning criminal law provisions based on vague categories of “falsehood” and instead strengthening government transparency and media literacy [
562]. Similar conclusions are contained in the reports of the UN Special Rapporteurs on freedom of opinion and expression (D. Kaye; I. Khan): regulation of disinformation should be “a scalpel, not a sledgehammer,” should exclude criminal prosecution for honest mistakes, and should protect investigative journalism and scientific debate [
563,
564]. When vague norms are backed by high fines and accelerated blocking procedures, in practice it is critical and opposition speech that suffers most, as recorded in the annual “Freedom on the Net” reviews (Freedom House), which note an increase in the arbitrary blocking of media outlets, NGO pages, and individual journalists under the pretext of combating “fakes” and “extremism” [
565]. Emergency regimes and “fast” content-removal mechanisms pose a separate threat to procedural guarantees.
The European Court of Human Rights has consistently emphasized that the protection of freedom of expression also extends to information that “offends, shocks or disturbs” (Handyside v. the United Kingdom (1976)) and that interference must be justified by a “pressing social need,” with sanctions and procedures proportionate to the aim [
538]. Practices such as instant blocking without judicial review, mass seizures of editorial equipment, and long-term “temporary” bans on the dissemination of information all inflict disproportionate damage on public discourse. The case of Delfi AS v. Estonia (2015) is also indicative: the Court simultaneously recognized the admissibility of a degree of intermediary liability and pointed out the need for a delicate balance between effective law enforcement and the prevention of excessive deletion of lawful speech [
566]. If such regimes lack notifications to the addressee, reasoned decisions, prompt and effective means of appeal, independent review and statistical reporting, the risk of abuse increases sharply—this is indicated by both the Council of Europe recommendations on the roles of internet intermediaries (CM/Rec(2018)2) and the updated Santa Clara Principles on transparency and due process [
552,
553].
Another layer of abuse is structural measures affecting the communications infrastructure: targeted Internet shutdowns, traffic slowdowns, and the blocking of messaging apps and platforms during protests and elections. From the perspective of international law, such shutdowns are considered a “last resort” and, as a general rule, disproportionate, since they affect the rights of millions and paralyze access to vital information. This is how they are qualified in the reports of Access Now’s “KeepItOn” campaign and in the resolutions of the UN Human Rights Council calling on states to refrain from “deliberately disrupting access to information online” [
567,
568]. In practice, blocks are often imposed preventively, outside of a state of emergency and without clear legal argumentation, which makes them more consistent with a policy of control than with legitimate security protection. Finally, delegating moderation to platforms without a clear legal framework (“soft censorship”) allows governments to influence the dissemination of content through informal channels: circulating “recommendations” and working behind closed doors with “trusted informants” in ways that escape judicial review.
ARTICLE 19 and the OECD emphasize that such practices undermine accountability and turn platforms into quasi-governmental regulators without democratic constraints; a minimum set of guarantees includes public registers of government requests, user notifications, statistics on removed content, independent audits, and researchers’ access to data [
557,
569]. Thus, the ethical and legal risks of combating disinformation stem not from the idea of regulation itself, but from the lack of legal precision, procedural guarantees, and independent control. These risks are mitigated by narrow and clearly defined legal frameworks (with priority given to civil and administrative remedies over criminal ones), judicial and quasi-judicial oversight, transparent notice-reason-appeal procedures, auditing of algorithms and risk assessments, and international principles that enshrine the priority of proportionality and necessity. Without these conditions, the fight against disinformation too easily turns into a fight against dissent—and thereby undermines the very resilience of societies that it is intended to protect.
10. Higher Education in the Context of Information Warfare: Crisis and the Need for a Paradigm Shift
The higher education system has apparently been, and remains, a principal target of information warfare tools. This is natural: it is here that highly qualified personnel are trained and the worldview of the political, economic, and scientific elite is formed, which subsequently influences the behavior of society as a whole. The most politically and economically active social groups thus become conduits of external influence, including influence on the sociocultural code of a particular country or ethnic group.
Consequently, the analysis of the influence of information warfare on the university sector requires an assessment of the current state of higher education. This is also important from the point of view of the basic thesis of this review: modern civilization is at a bifurcation point, and this logic fully applies to higher education. The initial question, therefore, is the nature of the modern interaction of higher education with society.
Without delving too deeply into history, let us begin in the early 19th century, when Wilhelm von Humboldt formulated the principles underlying the classical university, which flourished at the turn of the 19th and 20th centuries. These principles assumed that genuine higher education was possible only with the active involvement of students in scientific activities. This approach became one of the cornerstones of the second industrial revolution, which led to a radical transformation of lifestyle: electricity became a mass industrial commodity, the metro and tram networks of European cities developed, and industry grew rapidly.
However, at that time, higher education remained predominantly elite. Thus, in the Russian Empire, access was limited to certain social strata, as evidenced by the high cost of book collections: a private library could cost as much as a noble estate.
In the 20th century, the situation changed: higher education became a mass phenomenon. Today, universities everywhere face the “mass challenge”: the need to train large numbers of teachers and the significant teaching and infrastructure costs borne by families and the state. The impact of the higher education system on collective consciousness is growing, while the tools that were effective at the turn of the 19th and 20th centuries are gradually losing their effectiveness. Professors are no longer the undisputed leaders of public opinion (a trend especially noticeable in post-Soviet states); students increasingly draw their ideological guidelines from the Internet and social networks. Nevertheless, under certain conditions, higher education is capable of regaining its original positions.
The crises in higher education are caused, among other things, by objective factors previously considered in our works [
570,
571,
572]. In particular, the curricula of most universities largely reproduce the disciplinary structure of science. By the beginning of the 21st century, this structure had become excessively differentiated, and this differentiation now acts as a brake on scientific and technological progress [
3]. In higher education, the negative effect is even more pronounced, as is clearly seen in the example of Kazakhstan. A simple count of specialties and specializations reveals staffing shortfalls in many areas, which prevents the development of a full-fledged competitive environment. With a population of about 20 million, it is objectively impossible to implement educational programs corresponding to such a ramified disciplinary grid.
This brings us back to the thesis stated at the beginning of the review—about the need for convergence of natural science, technical and humanitarian knowledge. If in traditional engineering fields (urban planning, civil engineering, etc.) the system oriented towards the paradigms of the early 20th century still functions largely by inertia, then in relation to the problems of information warfare, such an approach is clearly untenable.
This review shows that a specialist in the field of information warfare must master significant amounts of knowledge in both the humanities and technical fields. Moreover, when training strategists in this field, it is necessary to consider the development trajectories of computer technology and possible “black swans” caused by its non-trivial evolution. Today, most countries lack dedicated degree programs in this field; within the Humboldt paradigm, their creation is conceptually difficult.
The classical model assumed the sequence “first knowledge—then practice”: a student must master a significant amount of theory, after which he or she can move on to practical activities. When prerequisite knowledge becomes excessive, this linear logic fails. The training of specialists must be based on other principles.
The problems of information warfare therefore require a transition to a new paradigm of higher education, one better suited to its massification. Here, various issues are closely intertwined, from the influence of generative artificial intelligence, which is already transforming higher education, to the training of personnel for the education system itself. Both areas, as well as intermediate tasks, are at the forefront of information confrontation. Generative models influence collective consciousness, including via the university sector; the teaching corps, in turn, broadcasts to society the narratives it has absorbed in the process of professionalization. A fundamental question arises: can these narratives be overcome, given the historically established “capture” of the teaching corps by particular sets of narratives? A similar question arises regarding the use of GenAI.
Historical experience shows that overcoming such limitations is possible. The new most often matures within the old, overcoming it. However, in the context of information warfare, waiting for the “natural” course of change, as was the case in previous eras, seems an unacceptable luxury. Given the tasks of higher education, these processes require targeted acceleration. The post-industrial paradigm of continuous education, based on step-by-step training, where each subsequent step is closely related to practice, is increasingly relevant. A simple example is the training of FPV drone operators, for both civilian and military tasks. Having mastered the basic skills and demonstrated the results, the student moves on to the next level. In such a model, the key element is not examinations but motivation.
Today, access to high-quality educational resources is practically unlimited: there are high-level open courses, including courses from teachers at the Massachusetts Institute of Technology (MIT) and other universities. It is critical that the student sees the personal usefulness of knowledge at each stage. In practical terms, this means the possibility of starting at an early stage (for example, at the school level), developing basic skills in using technologies, and then selecting motivated participants for more advanced training. At the same time, there is no need to strictly reproduce traditional classroom practices; the educational process can be designed as a trajectory in which the content and level of requirements increase as practical effectiveness is proven.
In this regard, the role of a broad humanities education for technical specialists increases. It is impossible to work effectively in the field of information warfare without mastering basic socio-political doctrines and the corresponding cultural and philosophical foundations. Modern society and the higher education system derived from it (especially in technology and computing fields) often marginalize the general humanities component. However, the problems of information warfare demonstrate that this position is untenable: a professional in this field must combine the competencies of a technical specialist and a humanist.
The solution to this problem should be sought in integrative, step-by-step models of training: first, a demonstration of the practical necessity of training (not for the sake of a diploma, but for solving specific problems), then an awareness of the inevitability of the humanitarian component as a condition of professional competence. Step by step, a specialist is formed for whom training becomes a continuous process of a lifetime, and life experience is a source of strategic thinking. Such a trajectory should be organically woven into the system of higher education: there is no point in long-term accumulation of abstract information outside of practice. It is important that the practical significance of knowledge is manifested and realized at each stage of professional development.
11. Future Directions and Research Agenda
The rapid development of technologies, including artificial intelligence, neural interfaces, and automatic content-generation systems, is creating fundamentally new horizons for information warfare. If today the main focus is on social networks, recommendation algorithms, and the manipulation of collective consciousness through digital media, then in the near future we can expect a shift toward deeper integration of influence at the cognitive and neurophysiological levels.
The scientific community increasingly explores the synthesis of approaches from various disciplines—from sociology and political science to neuroscience and computer modeling—which opens up opportunities for developing more accurate and personalized methods of protection. At the same time, there is a growing need for international collaborations to develop global standards for countering information threats and ensure adaptation to local conditions.
This section outlines promising areas of research, analyzes potential scenarios for the evolution of information warfare and identifies key challenges facing the academic and expert community in the coming years.
11.1. Interdisciplinary Syntheses: AI + Sociology + Law
The development of methods for countering modern forms of information warfare requires the integration of approaches from various scientific disciplines. New methodological frameworks are emerging at the intersection of artificial intelligence, sociology and law, allowing for the simultaneous consideration of technical, social, and regulatory aspects of information security.
Since the late 2010s, artificial intelligence (AI) has become the main technical engine in detecting and neutralizing disinformation. Deep learning algorithms are used to automatically identify fake content, recognize deepfakes, and analyze network structures for the dissemination of false narratives [
573].
For example, studies by Ghorbanpour et al. [
574] demonstrate the successful application of transformer architectures (BERT, RoBERTa) to the automatic detection of disinformation on polysemous political topics. Yet the researchers emphasize that purely algorithmic filtering does not capture sociocultural context and therefore must be supplemented with social-scientific approaches.
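As a purely illustrative sketch of how such transformer classifiers are typically wired up (not a reproduction of the cited study’s code), the Python fragment below loads a generic RoBERTa encoder with a sequence-classification head; the head is randomly initialized here and would have to be fine-tuned on labeled claims before its scores carried any meaning.

# Illustrative plumbing for transformer-based claim scoring. "roberta-base" is loaded
# only to show the mechanics: its classification head is untrained and would need
# fine-tuning on labeled disinformation data before the scores become meaningful.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForSequenceClassification.from_pretrained("roberta-base", num_labels=2)
model.eval()

def score_claims(claims):
    """Return the probability assigned to label 1 (e.g., 'misleading') for each claim."""
    batch = tokenizer(claims, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        logits = model(**batch).logits
    return torch.softmax(logits, dim=-1)[:, 1].tolist()

print(score_claims(["An example claim circulating on social media."]))

In practice, such a classifier is only one stage of a larger system: its outputs feed review queues and network-level analysis rather than triggering removals on their own.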
The sociological component helps explain how people perceive and disseminate information in the context of digital platforms. The work of Cinelli et al. (2021) [
575] shows that even with the same content presentation, the perception of messages varies significantly depending on the structure of social connections and the algorithmic recommendations of platforms.
In this context, AI can be used to model information flows and filter bubbles, which makes it possible to predict which segments of society are most vulnerable to certain types of cognitive operations. However, correct interpretation requires sociological expertise to explain the cultural and political factors shaping media behavior.
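A minimal sketch of this kind of modeling is given below (Python): an independent-cascade simulation over a synthetic scale-free graph estimates how often each node is reached by a narrative seeded at a few accounts. The graph model, seed set, and propagation probability are illustrative assumptions; in real studies they would be estimated from platform data.

# Minimal independent-cascade sketch: estimate how often each user in a synthetic
# social graph is reached by a narrative seeded at a few accounts.
import random
import networkx as nx

def independent_cascade(graph, seeds, p=0.05, runs=500, rng=random.Random(42)):
    reach_counts = {node: 0 for node in graph}
    for _ in range(runs):
        active, frontier = set(seeds), set(seeds)
        while frontier:
            newly_activated = set()
            for u in frontier:
                for v in graph.neighbors(u):
                    if v not in active and rng.random() < p:
                        newly_activated.add(v)
            active |= newly_activated
            frontier = newly_activated
        for node in active:
            reach_counts[node] += 1
    return {node: count / runs for node, count in reach_counts.items()}

g = nx.barabasi_albert_graph(n=1000, m=3, seed=1)   # synthetic scale-free network
exposure = independent_cascade(g, seeds=[0, 1, 2])
most_exposed = sorted(exposure, key=exposure.get, reverse=True)[:5]
print({node: round(exposure[node], 2) for node in most_exposed})

Nodes with the highest estimated exposure indicate structurally vulnerable segments, which is exactly where sociological interpretation of who these users are, and why they are reachable, becomes indispensable.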
The legal aspect forms the regulatory framework that defines the boundaries of the permissible use of AI in the fight against disinformation. The European “Digital Services Act” (2022) [
576] and the EU “AI Act” (adopted in 2024, with phased application from 2025) [
577] introduce mandatory requirements for the transparency of algorithms and the risk assessment of automated systems. The interaction of AI with law here is two-way: on the one hand, legislation limits the use of technologies to avoid censorship and discrimination; on the other hand, it stimulates the development of more transparent and accountable algorithms. An example is the research by Gorwa et al. [
578], which analyzes the legal liability of platforms for automatic content moderation.
Effective integration of AI, sociology and law requires the creation of interdisciplinary research platforms where algorithm developers work closely with social researchers and lawyers. Such projects are already being implemented in the EU within the framework of the Horizon Europe program, for example, the MediaResist project (2022–2025) [
579], which develops tools to assess the information resilience of society, taking into account technological as well as socio-cultural and legal factors. Thus, interdisciplinary synthesis makes possible more precise and ethically justified technologies for countering information warfare, minimizing the risks of excessive interference while increasing the effectiveness of protection.
11.2. Scenarios of the Information War of the Future: From Cyber Operations to Neural Interfaces
The future of information warfare is taking shape at the intersection of cyber operations, cognitive technologies, and neural interfaces. If in the 2010s the key tools were social media, botnets, and cyberattacks on infrastructure, then in the 2030s and 2040s a transition is predicted to complex hybrid operations that combine the management of information flows, direct influence on cognitive processes, and integration with brain–computer interface (BCI) technology [
580,
581]. Cyber operations of the future will include not only hacking and data leaks, but also their near-instant transformation into disinformation campaigns using generative AI. Automated creation of falsified content (“deepfake 2.0”) allows messages to be adapted in real time to the psychological profiles of the target audience [
582]. Such capabilities are already being demonstrated in experiments on “personalized propaganda,” where algorithms select arguments based on behavioral data [
81].
The most radical direction will be the development of BCI and neurostimulation technologies that allow direct recording and, in the future, modification of brain activity patterns. Research under DARPA’s N3 (Next-Generation Nonsurgical Neurotechnology) program [
583] and Chinese military projects on “intellectual warfare” (Intelligentized Warfare) [
584] already indicate the possibility of integrating neural interfaces into military and intelligence systems. Such technologies could potentially be used to transmit information, to train operators, or to manipulate their emotional states in combat or political missions [
585].
Next-generation AI systems will be able to act as autonomous “information agents” that maintain prolonged communication with target subjects, simulating social connections and building long-term influence. Research in the field of social bots and conversational AI shows that people can form trust in autonomous digital agents [
586], which in the context of future IW could become critically dangerous. In parallel, the field of neuromarketing and applied cognitive technologies is developing, where incentives are selected based on biometric and psychophysiological indicators [
587]. The use of these methods in a military context will allow for “targeted” cognitive attacks that affect decision-making in crisis situations [
588].
Future IW scenarios suggest the emergence of “personalized information bubbles” created on the basis of continuous analysis of user data combined with access to the user’s neuroprofile. This will necessitate fundamentally new security measures, including “cognitive firewalls”: systems that filter incoming information at the level of sensory perception (a purely conceptual toy illustration is given after this paragraph) [
589].
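The “cognitive firewall” idea can be conveyed by a purely conceptual toy in Python: an incoming message is scored on crude manipulation cues (urgency vocabulary, emotional punctuation, unverified source) and is then shown, labeled, or held back. The cue list, weights, and thresholds are illustrative assumptions, not validated psychological indicators.

# Conceptual toy of a "cognitive firewall": score an incoming message on crude
# manipulation cues before it reaches the user, and decide how to present it.
URGENCY_PHRASES = {"urgent", "immediately", "share now", "before it is deleted"}

def firewall_decision(text, source_verified, label_threshold=0.4, block_threshold=0.8):
    t = text.lower()
    score = 0.0
    score += 0.4 * any(phrase in t for phrase in URGENCY_PHRASES)   # urgency cues
    score += 0.3 * min(t.count("!") / 3.0, 1.0)                     # emotional punctuation
    score += 0.3 * (not source_verified)                            # unverified source
    if score >= block_threshold:
        return "hold for review", round(score, 2)
    if score >= label_threshold:
        return "show with caution label", round(score, 2)
    return "show", round(score, 2)

print(firewall_decision("URGENT!!! Share now before it is deleted", source_verified=False))

Any real system of this kind would have to operate under the proportionality and transparency constraints discussed in Section 9.3; otherwise the “firewall” itself becomes an instrument of censorship.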
The shift in emphasis from mass information campaigns to individualized neurocommunications creates a legal vacuum. International humanitarian law and cybersecurity norms do not yet include provisions directly regulating the use of BCI in military or propaganda operations [
590]. In 2021, the European Parliament adopted a resolution on human rights in the era of neurotechnology [
591], but it is advisory in nature. By the middle of the 21st century, information warfare may transform into an integrated system of “info-neuro-cyber operations,” where the key target will be not just the information field, but the human cognitive architecture. This requires the accelerated development of interdisciplinary defense strategies that include technical, legal, cultural, and neuropsychological measures.
12. Conclusions
In conclusion, modern forms of information warfare are a multilayered phenomenon, where traditional manipulation methods are enhanced by digital technologies, from personalization algorithms to generative AI. The review shows that effective counteraction requires a comprehensive approach: from understanding cognitive and social mechanisms to applying advanced computational methods (FGT, RNS) and to implementing legal frameworks. Despite progress in identifying threats, challenges remain—ethical dilemmas, censorship risks, and the evolution of attacks toward the neurocognitive level. Further research should focus on interdisciplinary syntheses, international standards, and adaptive defense systems to ensure the resilience of society in the era of information dominance.
The materials presented in this review confirm once again that the problems of information warfare are multifaceted, covering both humanitarian and technical aspects. In the technical dimension, the development of new computing systems aimed at gradually approaching the biological prototype, the human brain, is of particular importance. The study of such problems can itself serve as a tool of information warfare, since attitudes towards the various versions of the concept of transhumanism remain extremely ambiguous. At the same time, the identification of certain prospects for the further development of humanity inevitably affects public consciousness, shaping corresponding research directions and setting new vectors of scientific inquiry.
One of the main conclusions that follows from the analysis is the need to move to a different paradigm of higher education based on the principle of convergence of natural science, technical, and humanitarian knowledge. This principle is of universal importance for the education system as a whole, but the inertia of social structures, especially such a conservative system as the university, makes the transformation of paradigms an extremely slow and difficult process. The problems of information warfare in this context play a special role, since they clearly demonstrate that the existing disciplinary structure of science creates critically important obstacles, including those affecting the development of higher education. Consequently, this area can become the driver of the practical implementation of new educational models.
At the same time, the problems of information warfare are not limited to the sphere of higher education, although the latter occupies a key place in the formation of the sociocultural code. Also of significant importance is the development of new computing systems capable, to a certain extent, of being complementary to the structure of society. No less important is the provision of forms of interaction with the suprapersonal level of information processing. It can be assumed that many key contradictions associated with information warfare will be determined precisely by the struggle for influence over this level, since artificial intelligence, as shown in the review, already performs the function of an intermediary between the individual and suprapersonal levels of information processing.