Article

The Viable System Model and the Taxonomy of Organizational Pathologies in the Age of Artificial Intelligence (AI)

by Jose Perez Rios 1,2
1 E.T.S.I. Informática, Campus Miguel Delibes s/n, University of Valladolid, 47014 Valladolid, Spain
2 WOSC (World Organization of Systems and Cybernetics), Lincoln LN2 1RE, UK
Systems 2025, 13(9), 749; https://doi.org/10.3390/systems13090749
Submission received: 29 June 2025 / Revised: 23 August 2025 / Accepted: 25 August 2025 / Published: 29 August 2025
(This article belongs to the Special Issue CyberSystemic Transformations for Social Good)

Abstract

How can we address the rapid advancement and widespread adoption of Artificial Intelligence (AI) across various sectors of society? This paper aims to provide insights through a cybernetic-systemic approach, which is well-suited to tackle the complexities of this new landscape. We will utilize the framework of Organizational Cybernetics (OC) and the Viable System Model (VSM), along with Perez Rios’s Taxonomy of Organizational Pathologies (TOP). By examining the key risks and potential challenges posed by AI through the lens of OC, this paper seeks to contribute to the development of more effective strategies for responsible innovation and governance in AI.

1. Introduction

The rapid advancement of Artificial Intelligence (AI) presents challenges of unprecedented dimensions for humanity. Its ability to make decisions independently, which can impact individuals and organizations, makes it essential to understand the implications of its expansion, integration, and application. Historically, putting any innovation to use, nuclear energy for example, has required a human decision to activate it. New developments in AI now call this exclusively human capacity to make choices into question.
In the field of cyber-systemics, a pertinent question arises: Can insights from this field be applied to address these new challenges? Concepts such as “variety,” “variety engineering,” the design and “governance” of recursive structures, and approaches such as Beer’s Organizational Cybernetics (OC) and the Viable System Model (VSM), along with Perez Rios’ Taxonomy of Organizational Pathologies (TOP) and other elements of cyber-systemics, can help us navigate this new complexity.
The rapid advancement and widespread adoption of AI in various sectors of society have sparked both excitement about its potential benefits and concerns regarding its possible negative effects on individuals and institutions, as well as democracy. On one hand, an optimistic perspective views AI as a tool that empowers people and enhances their decision-making capabilities. On the other hand, some people perceive it as a threat, fearing that it could exacerbate societal inequalities, manipulate public opinion, or undermine democratic values.
This paper will explore the potential effects of the widespread dissemination of AI and how Organizational Cybernetics (OC) can help identify key dangers it may present at various levels of society. These levels include individuals, local communities, regions, countries, transnational organizations, and even the global community and the planet. I will outline several facets of this complex issue and propose elements that could help address them. Specifically, I will point out how Organizational Cybernetics can be employed to design and diagnose recursive structures. Additionally, I will explain how utilizing a Taxonomy of Organizational Pathologies can facilitate and enhance communication among decision-makers on these complex issues.
The research question this paper addresses is whether human beings have the methodological means to not only benefit from the extensive implementation of artificial intelligence but also to confront the potential negative effects arising from its use. The focus of the article is on the practical application of existing methodologies to tackle this question. The availability of such means could benefit humanity on multiple levels, including individuals, local communities, autonomous regions, countries, and transnational institutions. Consequently, the audience for this topic can be broad.
To make the analytical thread that connects the main components of the paper explicit: the rapid diffusion of AI introduces novel sources of complexity (variety) that can threaten the viability of individuals and institutions. Organizational Cybernetics is the approach proposed to address this challenge. Within this approach, the Viable System Model (VSM) outlines the essential elements (functions) that a viable organization must sustain. The Taxonomy of Organizational Pathologies then offers a controlled vocabulary to detect recurrent failure modes on those elements. In what follows, I apply this logic across recursive levels (from individual to transnational and planetary scales).
The paper is organized as follows. After this introduction to the topic, the second section will summarize the key elements of the methodological approach used to examine the impact of extensive artificial intelligence (AI) use on humans and society, specifically through the lens of organizational cybernetics. I will outline the main components of the framework I propose for applying this methodology to explore complex issues such as AI utilization, as well as the Taxonomy of Organizational Pathologies (TOP) that may arise within organizations. The reason for employing the TOP in this context is that it serves as a tool to enhance the understanding of complex organizational challenges and facilitate communication among decision-makers, aiding them in diagnosing or designing the organization under consideration. The third section will discuss relevant aspects of the extensive use of AI and its potential effects on individuals and society. I will examine these impacts at various levels, including individuals, local communities, regions, countries, the world, and the entire planet. Both positive and negative impacts will be addressed. Additionally, I will explore how the TOP can assist decision-makers in identifying key flaws at different organizational levels resulting from AI implementation. Finally, the paper will present conclusions drawn from this exploration. This analysis aims to be helpful in developing strategies for the responsible and beneficial integration of AI into society. By employing a multilevel approach, the study examines the effects of AI across different scales and integrates concepts from organizational cybernetics and systems thinking.

2. Organizational Cybernetics

The aim of this section is to present the basic elements of S. Beer’s Organizational Cybernetics (OC) and the Viable System Model (VSM), together with a methodological framework that facilitates their application. This conceptual framework is complemented by the description of a Taxonomy of the most frequent Organizational Pathologies. This taxonomy can be useful for handling the multiple impacts that the extensive use of AI can have on individuals, on the various levels of society, and on democracy.
The VSM is an approach that possesses characteristics that make it one of the most powerful tools available today to help managers understand the complexity of an organization. Compared to other management approaches, it offers a conceptual framework encompassing the entire organization and its possible multiple levels. It makes it possible to quickly identify the components (functions) of its organizational and information structure. In addition, once the conceptual elements of the VSM are understood, it provides a language that allows very rapid communication between decision-makers about the components necessary for the diagnosis or design of an organization.

2.1. Organizational Cybernetics and the Viable System Model

Organizational Cybernetics (OC) is a systemic approach developed by Stafford Beer (Beer, 1979, 1981, 1985, 1989) [1,2,3,4]. In this work, I will only highlight some of the key elements of OC that are essential for understanding the methodological framework I propose to apply (Perez Rios, 2008a, 2008b, 2010, 2012) [5,6,7,8]. Some of the main elements of OC are:
(a) Viability: the capacity to maintain the existence of the organization through time, regardless of changes in the environment.
(b) Variety: an indication of the level of complexity of the situation or issue under consideration.
(c) Ashby’s Law: “only variety can destroy variety” (Ashby, 1956) [9].
(d) Conant-Ashby theorem: “every good regulator of a system must be a model of that system” (Conant and Ashby, 1970) [10].
(e) Viable System Model (VSM). This creation of S. Beer (1979, 1981, 1985, 1989) [1,2,3,4] provides a comprehensive model of the essential components (subsystems, communication channels, relations among subsystems, etc.) that any organization must have to be viable (Figure 1). The names he gives to those subsystems (functions) are System 1, System 2, System 3, System 3*, System 4, and System 5.
System 1 is responsible for producing and delivering the goods or services that the organization is designed to produce and deliver. In the example shown in Figure 1, System 1 comprises three elemental operational units (Op. Units 1, 2, and 3), which can be company divisions, regions in a country, countries in a transnational organization, etc.
System 2’s primary role is to guarantee the harmonious functioning of System 1’s organizational units.
System 3 is responsible for optimizing the functioning of the whole set of System 1, made up of the different operational units. We can say that it is responsible for the “here and now” of the organization.
System 4’s chief responsibility is monitoring the organization’s overall environment. It takes care of the organization’s “outside and then” to maintain it in constant readiness for change.
System 5 takes care of normative decisions and is responsible for defining the organization’s purpose, ethos, vision, and identity.
(f) Recursive character of the VSM. Another fundamental aspect of the VSM is the recursive character of viable systems. All viable systems contain viable systems and are themselves contained in viable systems. In Figure 2, we can see how, inside the ellipses and rectangles, which represent the elemental operational units, an exact replica of the system in focus is contained (turned 90 degrees). The most important aspect of the recursive conception of viable systems is that no matter which place they occupy within the chain of systems, they must always, to be viable, contain the five systems or functions that determine viability.
(g) Communication channels (Figure 3) are responsible for connecting all those systems or functions and linking the organization with its environment.
A deep understanding of an organization’s different functions (systems) and the communication channels connecting them provides a comprehensive framework for designing information systems and diagnosing the quality and adequacy of existing ones.
The set of communication channels and information systems can be seen as the organization’s nervous system. It provides the essential element (information) that allows all functions to perform their tasks.
Particularly relevant are the algedonic channels, whose role is to collect and transmit to System 5 information critical for the organization’s viability.
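To make these structural ideas concrete, the following minimal sketch (in Python; the class, field, and level names are hypothetical and chosen purely for illustration) models a viable system as a recursive data structure: each viable system carries its five functions, its System 1 is a collection of operational units, each of those units is itself a viable system, and algedonic signals are routed directly to System 5.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ViableSystem:
    """Illustrative, simplified model of Beer's VSM; not a formal implementation."""
    identity: str                                  # System 5: purpose, ethos, identity
    system4: str = "environment scanning"          # the "outside and then"
    system3: str = "internal optimization"         # the "here and now"
    system3_star: str = "sporadic audit"           # audit channel (System 3*)
    system2: str = "coordination"                  # harmonizing System 1's units
    # System 1: operational units, each of which is itself a viable system (recursion)
    operational_units: List["ViableSystem"] = field(default_factory=list)

    def algedonic(self, message: str) -> None:
        # Algedonic channel: critical signals bypass the normal hierarchy to System 5.
        print(f"[ALERT to System 5 of '{self.identity}'] {message}")

    def recursion_depth(self) -> int:
        # Every operational unit replicates the whole model one level down.
        if not self.operational_units:
            return 1
        return 1 + max(unit.recursion_depth() for unit in self.operational_units)

# Hypothetical example: a country whose System 1 units are regions,
# whose own System 1 units are local communities.
community = ViableSystem(identity="Local community")
region = ViableSystem(identity="Region", operational_units=[community])
country = ViableSystem(identity="Country", operational_units=[region])

print(country.recursion_depth())   # 3 recursion levels
community.algedonic("Incident with potential impact on viability detected")
```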

2.2. Methodological Framework

Multiple publications describe the elements of OC and the VSM, starting evidently with the publications of its creator, S. Beer (1979, 1981, 1985) [1,2,3]. Other researchers have continued with this line of work, each emphasizing different aspects of its use. I can mention, among others, Espejo (1999, 2008) [11,12], Espejo and Reyes (2011) [13], Schwaninger (2006) [14], Hoverstadt (2008) [15], Pérez Ríos (2012) [8], Espinosa (2023) [16], Lassl (2019) [17], and Pfiffner (2022) [18].
Each of them puts emphasis on different aspects when applying the VSM. Pfiffner, for example, differentiates three dimensions to consider in the VSM: Anatomy, Physiology, and Neurology.
I added a fourth dimension to the three indicated by Pfiffner: identity. One reason for this is that only after we answer identity-related questions such as “Who am I?” and “What are we here for?” can we study all other dimensions. The methodological framework I present will consider these four dimensions.
Like the differentiation that is often made between the “mind” and the “brain” in the human being (Eccles, 1994) [19], I believe that identity (the fourth dimension), in its broad sense, could in a way be assimilated to the “mind,” and the whole organization, with its five systems and its three dimensions (anatomical, physiological, and neurological), to the bodily part of the human being, including the brain.
Concerning the approach used when conducting an organization’s VSM study, I generally use a top-down rather than a bottom-up approach. That means that I first clarify the organization’s identity, purpose (System 5), and boundaries, and once this is done, I proceed with the analysis of all other elements. In summary, I start first with synthesis and then proceed with analysis. Designing or diagnosing a system without having previously clarified its identity and purpose makes no sense.
This section describes some of the main components of the methodological “framework” I proposed for applying OC and the VSM (Perez Rios, 2008b, 2010, 2012) [6,7,8].
In essence, the framework is structured in four stages. First, clarify the identity, purpose, and boundaries of the organization. Second, create the vertical structure of the whole organization to cope with the relevant complexity of the environment. Third, design or diagnose each organization that makes up the whole, following the indications set out in the VSM. Fourth, check the coherence of all vertical levels of the organization (levels of recursion) to ensure that the identity and purposes of each level are consistent and coherent. Let us look at them.
In the first stage, the organization’s identity and purpose are highlighted. This clarification allows one to get a better idea of what the organization is, and what it is not, and what its goal or purpose should (or should not) be, bearing in mind that different observers may assign different purposes to the same organization. The answer to these questions (identity and purpose) will help us delimit the organization’s boundaries (what belongs to the organization and what to the environment).
  • Identity recognition
One of the first things to do in this first stage, and before any other, is to identify the organization we intend to study/create, which means making its identity and purpose explicit. For example, this stage is crucial when considering the development of artificial intelligence platforms and tools, whose impact I will comment on in the next section.
The answer to the question of what the organization or system under study is may not be trivial. Giving a clear answer to this question implies knowing also what it is not (Schwaninger, 2009) [14]. The answer to these two questions will help us to delimit what is part of the organization and what is part of its environment, thus clarifying its boundaries. In today’s business world, where a company’s multiple activities (research, design, production, distribution, etc.) are in many cases decentralized and spread across the world, establishing the limits of what the company is, that is, delimiting where the company or the organization ends and where the environment begins, may not be easy.
The same can be said of the characterization of what the company is in the sense of clearly identifying its purpose. Beer’s well-known statement that “the purpose of a system is what it does” alerts us to the diversity of assessments of what it does, depending on the observer. Observers may attribute different “purposes” to the same company or organization (Espejo & Reyes, 2011) [13].
In the second stage, we identify the vertical structure of the organization. This stage comes about through “complexity unfolding,” using this term in the sense Espejo gave it (2008) [12]. To help the organization cope with the relevant complexity (variety) of the environment, the environment is broken down into sub-environments, these into sub-sub-environments, and so forth. The same is done with the organization, so each sub-organization, sub-sub-organization, and so on will have to cope only with its respective limited environment. The result of this process will be a set of recursion levels.
  • Recursion Levels-Key Factors Matrix
Once we have vertically unfolded complexity, the next step is identifying the main elements to consider at each recursion level. This identification allows us: (a) To clarify the specific purpose at each level, helping to guarantee that each of those particular purposes is recursively coherent with those of the previous level and so on, up to the broad general purpose of the whole organization; and (b) To identify the particular aspects to be taken into consideration at each recursion level (specific environment, stakeholders, legal or normative requirements, external agents, particular actions, etc.).
Keeping the whole structure visible should facilitate coherence between the different actions at all recursion levels. This visibility of the entire structure can be facilitated with the help of the Recursion Levels-Key Factors Matrix (Pérez Rios, 2008b, 2010, 2012) [6,7,8]. In Figure 4, we have an example of a generic matrix in which one can see the recursion levels (in the rows) and the key factors considered pertinent for the case under study (in the columns; ten in this example). If we use various recursion criteria, we could have multiple matrices (Figure 5). Using this matrix allows the decision-makers to have a global view of the whole organization and to see the relevant and significant factors at each recursion level. It also allows us to check whether the purposes at each level, and other elements such as actions, are systemically coherent.
The matrix serves as a unifying reference point, facilitating communication among decision-makers. Whether they are political authorities, public technicians, or citizens, the matrix provides a shared understanding of the entire system they are managing. That shared understanding fosters collaboration and paves the way for effective decision-making. This consideration of the various recursion levels and their relations may play an essential role in analyzing the development and use of artificial intelligence, as we will see in the next section dedicated to this issue.
The matrix can become more complex if we have many recursion levels, different recursion criteria, and many interest factors. Each recursion criterion can represent a specific dimension through which we want to analyze the organization. For instance, in a company, we might be interested in examining the different recursion levels based on geographical regions or product lines. In an academic setting, we could focus on recursion levels from the perspective of academic disciplines or teaching modes, such as online, in-person, or hybrid formats.
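As a simple illustration of how such a matrix can be kept in front of decision-makers, the sketch below (Python; the level names, key factors, and cell entries are hypothetical examples, not taken from any real case) builds a small Recursion Levels-Key Factors matrix and prints it so that all recursion levels and factors remain visible at once.

```python
# Illustrative Recursion Levels-Key Factors matrix:
# rows are recursion levels, columns are the key factors chosen for the study.
key_factors = ["Purpose", "Environment", "Stakeholders", "Legal requirements"]

matrix = {
    "Country": {"Purpose": "National AI strategy",
                "Environment": "Global technology market",
                "Stakeholders": "Citizens, companies",
                "Legal requirements": "EU AI Act"},
    "Region": {"Purpose": "Regional AI adoption plan",
               "Environment": "Regional economy",
               "Stakeholders": "Local governments",
               "Legal requirements": "Regional regulation"},
    "Local community": {"Purpose": "Equitable access to AI services",
                        "Environment": "Municipal services",
                        "Stakeholders": "Residents",
                        "Legal requirements": "Municipal by-laws"},
}

# Print the matrix so that all recursion levels and key factors stay visible at once,
# allowing a quick check that the purpose at each level is coherent with the level above.
header = "Recursion level".ljust(18) + " | " + " | ".join(f.ljust(26) for f in key_factors)
print(header)
for level, factors in matrix.items():
    print(level.ljust(18) + " | " + " | ".join(factors[f].ljust(26) for f in key_factors))
```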
The third stage involves examining the diverse vertical levels created in the previous stage. At each level, we analyze the components that comprise it, namely the specific “environment” of the level chosen, the “organization” whose activities will be related to this environment, and the “management” corresponding to this organization. In the next section, in which I explore the impact of AI at different levels, this clarification is very relevant.
Next, we embark on a detailed evaluation of the elements (named System 1, System 2, System 3, System 3*, System 4, System 5, communication channels, information systems, etc.) that the Viable System Model identifies as being necessary (and sufficient) for ensuring the organization’s viability (Figure 1). This evaluation serves the crucial purpose of checking whether the organization has all those elements, as well as verifying that all of them have what they need to perform their function. Lastly, we must ensure that all of them are indeed performing their function.
In the fourth stage, we delve into the extent to which the different organizations (and sub-organizations) at the various recursion levels are linked, assessing the coherence among all the elements while being mindful of the identity and purpose of the organization as a whole. In Figure 6, the connections between the Systems 5 at the various recursion levels are highlighted to signal the needed links between them.
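Read procedurally, the four stages can be outlined as in the following sketch (Python; the function names, the organization, and the level data are hypothetical stand-ins for the analytical work described above, not an automatable procedure):

```python
def clarify_identity_and_purpose(org: dict) -> str:
    # Stage 1: answer the System 5 questions first ("Who am I?", "What are we here for?").
    return org["identity"]

def unfold_complexity(org: dict) -> list:
    # Stage 2: break the environment and the organization down into recursion levels.
    return org.get("levels", [])

def diagnose_against_vsm(level: dict) -> list:
    # Stage 3: check that every function the VSM requires is present at this level.
    required = {"S1", "S2", "S3", "S3*", "S4", "S5"}
    return sorted(required - set(level.get("systems", [])))

def check_vertical_coherence(levels: list, identity: str) -> bool:
    # Stage 4: the purpose declared at each level must stay coherent with the whole.
    return all(identity in level.get("purpose", "") for level in levels)

# Hypothetical example: a university with one recursion level sketched below it.
org = {"identity": "University",
       "levels": [{"name": "Faculty",
                   "purpose": "Teaching and research for the University",
                   "systems": ["S1", "S2", "S3", "S5"]}]}

identity = clarify_identity_and_purpose(org)
for level in unfold_complexity(org):
    print(level["name"], "missing functions:", diagnose_against_vsm(level))  # ['S3*', 'S4']
print("vertically coherent:", check_vertical_coherence(org["levels"], identity))
```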
An expansion of this framework could consider approaches that include not only the issue (or system) being observed and studied but also the observer. This cybernetic approach is referred to as second-order cybernetics (Von Foerster, 1995, 2003) [20,21]. Additionally, we could further broaden the scope of our study to explore the interrelations between the environment, human organizations, and non-human entities, especially if autonomous artificial intelligent agents become a reality. The study of the mutual impacts of all these actors and the environment presents a new level of complexity and a significant challenge for organizational cybernetics to address [22,23].
After describing the main components of the proposed methodological approach, in the next section I will present what I consider some of the more frequent “flaws,” called “pathologies,” that may appear in organizations and can alert us to their vulnerabilities. This knowledge will also be very beneficial when designing a new organization, so that it is not born with those deficiencies.
Knowledge of these pathologies can also give decision-makers a quick tool to diagnose what may not be right in the organizations they intend to manage. It also provides a language (much as medical doctors can quickly communicate a patient’s condition to other doctors by naming the syndrome or pathology the patient is suffering from) that speeds up conversations among managers or decision-makers in general.

2.3. Organizational Pathologies

In this section, I will show some of the most frequent pathologies we may find in organizations viewed through the lenses of OC and the VSM. Identifying a pathology is a prerequisite to prescribing any treatment for the diagnosed deficiency. This knowledge is useful either for designing new organizations so that they are created free of them, or for diagnosing existing ones so that, once identified, we can try to eliminate the pathology. Some authors, such as Espejo, Hetzler, Hoverstadt, and Schwaninger, in addition to Beer, have worked on similar issues [12,15,24,25]. Others have addressed the pathologies issue by exploring different paths, such as Katina, P.F., and Keating, C. [26,27,28,29,30], who worked on systems-theory-based pathologies for Complex Systems Governance, elaborating constructs across systems-of-systems contexts. Other authors, such as Morales Allende, M.M., Ruiz-Martin, C., et al. [31,32], worked on the connection between VSM-grounded pathologies and resilience indicators and applications, or Yolles and Flink, who explore the relation of personality, pathology, and mindsets to organizational dysfunctions [33].
In 2008, I developed a Taxonomy of Organizational Pathologies (Perez Rios, 2008a, 2008b) [5,6] to help identify them. I categorized them into three distinct groups (Perez Rios, 2010, 2012) [7,8]: Structural Pathologies, Functional Pathologies, and Information and Communication Channel Pathologies. Each group focuses on a specific aspect of methodology application. Structural Pathologies refer to issues that may arise from inadequate handling of vertical complexity. Functional Pathologies relate to the essential functions that need to be present in any viable organization, as indicated by the Viable System Model (VSM). Information and Communication Channel Pathologies involve flaws in the design of the information systems or communication channels necessary for the effective functioning of the functions mentioned above. In Appendix A, I show a summarized description, with images, of all the pathologies.
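The taxonomy’s value as a compact shared vocabulary can be suggested with the minimal sketch below (Python), which simply encodes the three groups and the pathology codes detailed in Sections 2.3.1-2.3.3; the encoding is illustrative and not part of the original formulation.

```python
# The three groups of the Taxonomy of Organizational Pathologies (TOP),
# with the pathology codes described in Sections 2.3.1-2.3.3.
TAXONOMY = {
    "Structural": ["PI1", "PI2", "PI3", "PI4"],
    "Functional": [f"PII{i}" for i in range(1, 18)],   # PII1 ... PII17
    "Information and communication channels": ["PIII1", "PIII2", "PIII3", "PIII4", "PIII5"],
}

def group_of(code: str) -> str:
    """Return the taxonomy group a given pathology code belongs to."""
    for group, codes in TAXONOMY.items():
        if code in codes:
            return group
    raise ValueError(f"Unknown pathology code: {code}")

print(group_of("PII5"))    # Functional ("headless chicken": System 4 missing or failing)
print(group_of("PIII4"))   # Information and communication channels (algedonic channels)
```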
Let us see those pathologies in some more detail.

2.3.1. Structural Pathologies

When designing or diagnosing the vertical structure of an organization, some of the fundamental questions are the following:
Is the design of the organization’s vertical structure adequate to face the complexity of the organization’s environment?
The pathologies in this group arise from insufficient complexity unfolding, from the lack of organizations capable of managing intermediate environmental levels, or from unclear relationships among organizations. The main ones are the following:
  • PI1. Non-existence of vertical unfolding.
When an adequate vertical unfolding is lacking, it becomes challenging, or even impossible, for a single large organization to manage the full range of variety it faces.
  • PI2. Lack of recursion levels (first level).
While vertical unfolding is established, the first level of recursion is left unaddressed, resulting in part of the environmental variety being neglected.
  • PI3. Lack of recursion levels (middle levels).
Although vertical unfolding has taken place, intermediate recursion levels are left unoccupied. This results in the environmental variety needing to be managed at either the next or previous recursion level, which can be difficult or unfeasible, or, even more critically, it may not be managed at all.
  • PI4. Entangled vertical unfolding.
There are multiple interrelated memberships across various levels. Inadequate integration and communication between recursion levels hinder effective management when multiple memberships are involved.
In Figure 7, I present the map of the pathologies included in this group.

2.3.2. Functional Pathologies

In this group, I analyze the pathologies relating to each of the organization’s components. The aim is to see whether the essential functions (systems) necessary for the organization’s viability exist and work adequately.
Pathologies Related to System 5
When designing or diagnosing an organization, some of the fundamental questions are the following:
Do I have a clear idea of who (the organization) I am, what my purpose is, and what the boundaries of my organization are?
If we do not know well who we are, what our purpose is, and what we want to achieve, how can we design or diagnose anything related to the organization (or issue) under consideration?
That is why, when applying the VSM, I consider it mandatory to start (in general) by answering those questions (System 5), which relate to clarifying the organization’s identity. Those are probably the most difficult questions to answer.
Some frequent pathologies associated with an inadequate functioning or design of System 5 are the following:
  • PII1. Ill-defined identity.
Identity has not been sufficiently clarified or defined (“I do not know who I am”).
  • PII2. Institutional schizophrenia.
Two or more different identity conceptions produce conflict within an organization.
  • PII3. System 5 collapses into System 3 (Non-existing metasystem).
System 5 intervenes undesirably in the affairs of System 3.
  • PII4. Inadequate representation vis-a-vis higher levels.
There is a poor connection between Systems 5 of organizations belonging to different recursion levels within the same global organization.
Pathologies Related to System 4
Crucial questions related to this system are the following:
Do I know what is happening outside my organization and what the future will look like?
Does the organization have a continuous activity concerned with exploring in real time what is going on outside the organization (the environment: technologies, legislation, etc.) and also exploring what changes related to the organization’s identity, purpose, and viability may become possible in the future?
The organization must have active communication and interaction between the activity of exploring the “outside and future” (System 4) and the activity of handling the “inside and present” (System 3). If an organization does not have it, it will not adapt. This organ (the System 4-System 3 homeostat) is the “adaptation organ” of the organization.
Some frequent pathologies associated with an inadequate functioning or design of System 4 are the following:
  • PII5. “Headless chicken”.
System 4 is missing or, if it does exist, does not work properly.
  • PII6. Dissociation of System 4 and System 3.
The homeostat System 4—System 3 does not work properly. Each component system carries out its function separately but does not communicate and interact as it should with the other system.
Pathologies Related to System 3
Some fundamental questions to answer related to System 3 are:
Management style: Is the management of Operations designed and working adequately? Is the balance between autonomy and cohesion properly configured?
One of the crucial aspects to consider when applying the VSM is ensuring that systems (organizations) are self-regulating. Fundamental for that is to place the decision point as close as possible to where the corresponding decision need arises. Consequently, the operational units must have sufficient capacity to decide and act; in other words, they must be allowed the necessary degree of autonomy, limited only by the cohesion requirements of the organization as a whole.
I mentioned previously the kinds of problems engendered by a lack of communication between System 3 and System 4 and its effect on the activity of the System 4-System 3 homeostat, ranging from dysfunction to its failure to work altogether, so now I will focus mainly on the pathologies caused by a dysfunction of System 3 with regard to its task of integrating the elements of System 1.
  • PII7. Inadequate management style.
System 3 either intervenes excessively or fails to engage adequately in the management affairs of System 1. An authoritarian management approach can stifle the autonomy and creativity of System 1, hindering its potential.
  • PII8. Schizophrenic System 3.
Conflict arises within System 3 due to its dual roles in both the operational system and the management meta-system of the organization.
  • PII9. Weak connection between System 3 and System 1.
The operational units that make up System 1 operate independently, lacking adequate integration and support from System 3.
  • PII10. Hypertrophy of System 3.
System 3 takes on too many responsibilities, some of which should be managed directly by System 3*, System 2 and System 1.
Pathologies Related to System 3*
Some fundamental questions related to the design or diagnosis of an organization, concerning this system are:
Are things being done correctly? Are unethical behaviors going on? Are corrupt practices detected early?
The pathology most frequently associated with this system results from its absence or failure to function adequately. System 3* is vital in detecting and preventing inappropriate organizational activities or behaviors.
In the case of the deployment of AI, a good design and functioning of this system are vital for human beings and organizations, society, and democracy. It will capture inadequate uses of AI that may harm persons or society.
The pathology associated with an inadequate functioning or design of System 3* is:
  • PII11. Lack or insufficient development of System 3*.
The lack or insufficient development of System 3* allows undesirable behavior and/or activities to continue in System 1.
Pathologies Related to System 2
Some fundamental questions to answer related to this system are:
Are my operating units governed by “Every man for himself!”? Is chaos proliferating and reigning in our organization? Are we being overwhelmed by bureaucracy?
System 2 is intended to make the organizational units that comprise System 1 function harmoniously. These units may be persons, groups, local communities, regions, countries, etc., and they may compete for the organization’s shared resources, which might lead to conflict as each one attempts to achieve its own goals. System 2 deals with such issues.
Considering that System 2 contributes to the harmonious behavior of the operational units in System 1, let’s examine some pathologies typical of its poor design or functioning.
  • PII12. Disjointed behavior within System 1.
A lack of adequate interrelations between the elemental operating units that make up System 1 leads to their fragmentary behavior.
  • PII13. Authoritarian System 2.
System 2 shifts from a service orientation towards authoritarian behavior.
Pathologies Related to System 1
Some relevant questions related to this system are the following:
Are we producing and delivering what we should? Do the operational units of the organization work in harmony among themselves, or are some of them absorbing more resources than they should from the whole? Do the operational units have excessive power in the organization?
System 1 can be made up of several elemental operational units: in a country, for example, the distinct autonomous communities; in a firm, the different product lines; in a university, the various faculties; in a health system, the various health areas; and so on. From the structural point of view of the VSM, the components of each unit are always the same (environment, operations, management, and System 2).
System 1 aims to ensure that it provides the environment with the goods or services that constitute its reason for existence.
Among the pathologies that could affect System 1, I will mention the following.
  • PII14. Autopoietic “Beasts”.
The individual operating units that make up System 1 behave as if their personal goals are the only reasons for their existence. They disregard any considerations that might transcend their individual interests, ignoring the necessity to align their goals within an integrated System 1.
  • PII15. Dominance of System 1: Weak Metasystem.
The power of System 1 operates without the constraints of a properly defined metasystem (comprising System 3, System 4, and System 5).
Pathologies Related to the Complete System
  • PII16. Organizational Autopoietic “Beasts”.
The uncontrolled growth and activity of certain individual parts of the organization jeopardize the overall viability of the entire organization.
  • PII17. Lack of Metasystem.
A clear definition of identity and purpose remains either inadequate or entirely absent. This fragility in the metasystem disturbs the equilibrium between immediate, management-focused activities (“here and now”) and longer-term, adaptive initiatives. Consequently, the connections between organizations at various levels of recursion become insufficiently established, impairing collective effectiveness.
In Figure 8, I present the map of the pathologies included in this group.

2.3.3. Pathologies Related to Information Systems and Communication Channels

The role of the communication channels within the VSM is to connect all functions/sub-systems and, within them, the people and the organization’s operational units with the different environments to which they relate.
In this section, I point out some of the pathologies related to the existence and constitution of communication channels and, in broader terms, of information systems.
  • PIII1. Lack of information systems.
Some necessary information systems are missing, insufficiently developed, or ineffective.
  • PIII2. Fragmentation of information systems.
It refers to the case where information systems exist in the organization but work fragmentarily, with poor or non-existent connections between them.
The consequences will be a lack of coordination, inconsistencies, a lack of knowledge in some functions of what is happening in others, a general increase in costs, etc.
  • PIII3. Lack of key communication channels.
Specific required communication channels that should connect the different functions do not exist, or, if they do, are either inadequately designed or work improperly.
  • PIII4. Lack of or insufficient algedonic channels.
Particularly serious is the non-existence (or insufficient presence) of algedonic channels. These channels have the essential function of transmitting information on any incident occurring in System 1 (or also originating in the environment and captured by System 4) that may have a significant (or even vital) impact on the organization’s viability.
  • PIII5. Communication channels incomplete or with inadequate capacity.
Communication channels do not have all the necessary elements for transmitting the required information (transducers, channel capacity, and sender-receiver in both directions). Examples include the absence or inadequacy of transducers, or channels with too low a capacity to carry the amount of information required per unit of time.
The same will happen if the design and choice of the “sensors” at the emission points, or how the information is displayed to the receivers, are inadequate.
In Figure 9, I present the map of the pathologies included in this group.
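To suggest how the taxonomy can speed up such diagnostic conversations, the following sketch (Python) turns a few of the key questions listed in this section into a simple screening function that returns candidate pathology codes; the question wording, the mapping, and the example answers are simplified illustrations, not a validated diagnostic instrument.

```python
def quick_screen(answers: dict) -> list:
    """Illustrative screening: map yes/no answers to candidate pathology codes.
    Absent answers default to True (i.e., no finding)."""
    findings = []
    if not answers.get("identity_clearly_defined", True):
        findings.append("PII1 Ill-defined identity")
    if not answers.get("system4_scans_environment", True):
        findings.append("PII5 'Headless chicken' (System 4 missing or failing)")
    if not answers.get("s3_s4_homeostat_active", True):
        findings.append("PII6 Dissociation of System 4 and System 3")
    if not answers.get("audit_function_present", True):
        findings.append("PII11 Lack or insufficient development of System 3*")
    if not answers.get("algedonic_channels_present", True):
        findings.append("PIII4 Lack of or insufficient algedonic channels")
    return findings

# Hypothetical example: an organization that neither scans its environment
# nor has algedonic channels in place.
print(quick_screen({"system4_scans_environment": False,
                    "algedonic_channels_present": False}))
```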
The next section is dedicated to discussing various aspects related to the extensive use of AI and its potential impact on humans and society. I will examine this impact at multiple levels, including individuals, local communities, regions, countries, and the world at large. Both positive and negative effects will be addressed. Additionally, I will explore how the TOP framework can assist decision-makers in identifying key potential flaws across different organizational levels.

3. Impact of AI on Human Society

After a short comment on what is understood by AI, I will examine the positive and adverse effects that its introduction and use can have on human society.

3.1. Definition of AI

There are several ways to define AI. For example, the EU defines an AI system by its ability to infer and use models or algorithms from data and produce results such as predictions, content, recommendations, or decisions (EU Law 1689, 2024) [34]. Other entities, such as the UK Parliament [35], offer the following definitions:
“Narrow AI (designed to perform a specific task, using information from specific datasets, and cannot adapt to perform another task). Artificial General Intelligence (AGI) or Strong AI (as an AI system that can undertake any intellectual task/problem that a human can). AGI is a system that can reason, analyze, and achieve a level of understanding that is on par with humans, something that has yet to be achieved. Machine learning is a method that can achieve narrow AI; it allows a system to learn and improve from examples without all its instructions being explicitly programmed. Deep learning is a type of machine learning whose design has been informed by the structure and function of the human brain and the way it transmits information.”
Let’s consider some of the implications (positive and negative) of extensive use of AI for humans and society.

3.2. Impact on Humans and Society

The extensive integration of Artificial Intelligence (AI) into various facets of society presents a complex interplay of benefits and challenges. While AI has the potential to enhance decision-making, increase efficiency, improve public services, etc., it also poses significant risks to privacy, equality, and society (e.g., the integrity of democratic processes).
Addressing these challenges requires a multifaceted approach, including the development of ethical frameworks, enhancement of transparency, public education, and the implementation of robust regulatory measures. By proactively engaging with these issues, society can harness AI’s advantages while mitigating its adverse effects, ensuring that technological advancement aligns with human rights.

3.2.1. Positive Effects

Among the various positive effects on human beings and society, we can mention the following, among many others (Manyika, J., 2025) [36]:
  • Assist people in everyday tasks, help them access information and knowledge, and help them pursue their creative endeavors.
  • Increase Efficiency and Productivity: AI can automate tasks, analyze data quickly, and enhance decision-making processes, leading to increased efficiency in multiple sectors. It can provide personalized learning and potentially make tasks easier and safer. AI can also drive technological innovation and, in general, contribute to economic progress.
  • Informed Decision-Making: AI may aid in analyzing complex datasets, providing insights that support policymakers in crafting evidence-based policies
  • Accelerate scientific advances in many fields, such as medicine, physics, climate sciences, etc.
  • Improving Public Services: AI can benefit public services such as traffic management and resource allocation.
  • Improving Healthcare: AI can support medical diagnosis, treatment, and patient care.
  • Augmenting Human Capabilities: AI has the potential to revolutionize various aspects of life, serving as a powerful tool for human advancement and helping reduce inequalities in society.
In general, it can help humanity progress in its most significant challenges and opportunities (e.g., food security, health, well-being, etc.).
But, together with many benefits, AI can also bring many dangers. Let us now consider some possible negative impacts that an extensive diffusion and use of AI can produce. I will initially mention some effects affecting society and individuals in general, and later, I will disaggregate those effects on the various pertinent levels (individuals, local communities, regions, countries, etc.).
In section one of this paper, when describing the methodological framework to apply the VSM, I mentioned the convenience of vertically unfolding the complexity of the issue at hand to handle each level (called recursion levels) separately. So, this is what I will do to explore the specific harms produced by AI use in each of those levels.

3.2.2. Negative Effects and Risks of AI

From the abundant literature related to AI’s possible negative effects on human beings and society, I will mention only some of them. Among its negative impacts, the following can be mentioned:
  • Privacy Erosion: The proliferation of AI-driven surveillance systems poses significant threats to individual privacy, infringing on individual privacy rights, as these technologies can monitor and analyze personal behaviors extensively (Manheim and Kaplan, 2019) [37].
  • Bias, discrimination and amplification of Inequality: AI can exacerbate social and economic disparities if not implemented thoughtfully, potentially leading to job displacement and unequal access to technological benefits. AI systems trained on biased data can perpetuate and even exacerbate existing inequalities, affecting marginalized groups disproportionately. (The Guardian, 2025) [38].
  • Misinformation and Manipulation: AI-generated content, such as deepfakes, can spread misinformation, undermining public trust in democratic institutions and democratic processes (Manheim and Kaplan, 2019) [37].
  • Erosion of Accountability: The opacity of AI decision-making processes can lead to challenges in holding entities accountable for actions influenced by AI (Carnegie Endowment, 2024) [39].
  • Potentially complex impacts on society with the possibility of unintended or unforeseen consequences (Manyika, 2025) [36].
Important considerations related to this issue have been made by the European Commission. Its President, Ursula von der Leyen, for example, while pointing to the multiple benefits of AI, warns in the “WHITE PAPER On Artificial Intelligence—A European approach to Excellence and Trust” (19 February 2020; COM(2020) 65 final) [40] about some of its risks and the need to be aware of its impact, not only from an individual perspective but also for society as a whole:
“Artificial Intelligence is developing fast. It will change our lives by improving healthcare (e.g., making diagnosis more precise, enabling better prevention of diseases), increasing the efficiency of farming, contributing to climate change mitigation and adaptation, improving the efficiency of production systems through predictive maintenance, increasing the security of Europeans, and in many other ways that we can only begin to imagine. At the same time, Artificial Intelligence (AI) entails a number of potential risks, such as opaque decision-making, gender-based or other kinds of discrimination, intrusion in our private lives or being used for criminal purposes.”
“Europe… it can develop an AI ecosystem that brings the benefits of the technology to the whole of European society and economy:
  • for citizens to reap new benefits for example improved health care, fewer breakdowns of household machinery, safer and cleaner transport systems, better public services;
  • for business development, for example a new generation of products and services in areas where Europe is particularly strong (machinery, transport, cybersecurity, farming, the green and circular economy, healthcare and high-value added sectors like fashion and tourism); and
  • for services of public interest, for example by reducing the costs of providing services (transport, education, energy and waste management), by improving the sustainability of products and by equipping law enforcement authorities with appropriate tools to ensure the security of citizens, with proper safeguards to respect their rights and freedoms.
Given the major impact that AI can have on our society and the need to build trust, it is vital that European AI is grounded in our values and fundamental rights such as human dignity and privacy protection. Furthermore, the impact of AI systems should be considered not only from an individual perspective, but also from the perspective of society as a whole” [40].
Other authors focus on the challenge AI presents to bioethics, which focuses on the relationships among living beings (Cheng-Tek Tai, 2020) [41].
In response to the growing awareness of the benefits, but also the dangers, implied in the development and use of AI, a number of initiatives have been launched to address them. Some examples are the following:
  • The Bletchley Declaration by Countries attending the AI Safety Summit (1–2 November 2023) [42], which focused on:
“…our agenda for addressing frontier AI risk will focus on:
- Identifying AI safety risks of shared concern, building a shared scientific and evidence-based understanding of these risks, and sustaining that understanding as capabilities continue to increase in the context of a wider global approach to understanding the impact of AI in our societies.
- Building respective risk-based policies across our countries to ensure safety in light of such risks, collaborating as appropriate while recognizing our approaches may differ based on national circumstances and applicable legal frameworks. This includes alongside increased transparency by private actors developing frontier AI capabilities, appropriate evaluation metrics, tools for safety testing, and developing relevant public sector capability and scientific research.”
  • UN General Assembly (11 March 2024) “Seizing the opportunities of safe, secure and trustworthy artificial intelligence systems for sustainable development” [43].
“…Acknowledges that the United Nations system, consistent with its mandate, uniquely contributes to reaching global consensus on safe, secure and trustworthy artificial intelligence systems, that is consistent with international law, in particular, the Charter of the United Nations; the Universal Declaration of Human Rights; and the 2030 Agenda for Sustainable Development, including by promoting inclusive international cooperation and facilitating the inclusion, participation and representation of developing countries in deliberations.”
  • Statement on Inclusive and Sustainable Artificial Intelligence for People and the Planet (AI Action Summit, posted on 11 February 2025) [44].
In February 2025, a Summit was held in Paris related to the AI diffusion issue. In it, they agreed on some priorities, as settled in its announcement:
“Participants from over 100 countries, including government leaders, international organizations, representatives of civil society, the private sector, and the academic and research communities gathered in Paris on 10 and 11 February 2025 to hold the AI Action Summit.”
“…we have affirmed the following main priorities:
- Promoting AI accessibility to reduce digital divides.
- Ensuring AI is open, inclusive, transparent, ethical, safe, secure and trustworthy, taking into account international frameworks for all.
- Making innovation in AI thrive by enabling conditions for its development and avoiding market concentration, driving industrial recovery and development.
- Encouraging AI deployment that positively shapes the future of work and labor markets and delivers opportunity for sustainable growth.
- Making AI sustainable for people and the planet.
- Reinforcing international cooperation to promote coordination in international governance”.

3.2.3. Legislation to Regulate the AI Use

Concerning the need to regulate the use of AI in order to avoid some of its potentially harmful effects, there is an intense debate between two different approaches: one, represented mainly by the current approach in the USA, proposes lighter, sectoral regulatory guidance and no restriction on AI development and use (at least in its initial phases of development); the other, promoted initially by Europe (the EU), emphasizes creating and applying regulations to avoid the variety of harms AI may produce, its most recent communication being the statement made on 11 February 2025 at the AI Action Summit mentioned above.
Concerning this second approach, one of the most developed and detailed regulations is the one issued by the EU (EU AI Act: Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024) [34]. In it, particular mention is made of the potential risks, which are classified depending on the severity of the harm they could produce, together with an indication of how to proceed in each case. Let us see some of its elements.
“The purpose of this Regulation is to improve the functioning of the internal market by laying down a uniform legal framework in particular for the development, the placing on the market, the putting into service and the use of artificial intelligence systems (AI systems) in the Union, in accordance with Union values, to promote the uptake of human centric and trustworthy artificial intelligence (AI) while ensuring a high level of protection of health, safety, fundamental rights as enshrined in the Charter of Fundamental Rights of the European Union (the ‘Charter’), including democracy, the rule of law and environmental protection, to protect against the harmful effects of AI systems in the Union, and to support innovation. This Regulation ensures the free movement, cross-border, of AI-based goods and services, thus preventing Member States from imposing restrictions on the development, marketing and use of AI systems, unless explicitly authorised by this Regulation.”
Particularly relevant is the classification of risks made by the European Commission, which is described in the Act mentioned above.
Classification of risks (Act: 2.3 EU AI Act risk levels):
- Unacceptable risk. Prohibited. Example: systems considered a threat to people’s security, livelihoods and rights (e.g., social scoring, mass surveillance). Chapter II (Prohibited practices), Article 5.
- High risk. Requires conformity assessment. Systems whose function, purpose and modalities involve high risk (e.g., education, critical infrastructures). Chapter III (High-Risk AI Systems), Section 1, Article 6.
- Limited risk. Requires transparency. Systems used for interacting with people (e.g., chatbots). Chapter IV (Transparency Obligations for Providers and Deployers of Certain AI Systems), Article 50.
- Minimal risk. Voluntary codes of conduct. The rest of the systems; they imply no obligations, but voluntary codes of conduct are recommended. Section 4 (Codes of practice), Articles 56–63.
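A compact way to keep these tiers side by side is sketched below (Python; the tier keys and paraphrased treatments merely restate the summary above, and the Act itself remains the authoritative source).

```python
# Simplified view of the four EU AI Act risk tiers summarized above
# (paraphrased for illustration only).
RISK_TIERS = {
    "unacceptable": {"treatment": "Prohibited",
                     "reference": "Chapter II, Article 5",
                     "example": "social scoring, mass surveillance"},
    "high":         {"treatment": "Conformity assessment required",
                     "reference": "Chapter III, Section 1, Article 6",
                     "example": "education, critical infrastructures"},
    "limited":      {"treatment": "Transparency obligations",
                     "reference": "Chapter IV, Article 50",
                     "example": "chatbots interacting with people"},
    "minimal":      {"treatment": "No obligations; voluntary codes of conduct",
                     "reference": "Articles 56-63 (codes of practice)",
                     "example": "all remaining systems"},
}

for tier, info in RISK_TIERS.items():
    print(f"{tier:>13}: {info['treatment']} ({info['reference']})")
```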
Besides the issue of risk handling, the EU also dedicates attention to “Measures in support of Innovation.” Chapter VI, Article 57.
Other authors also list potential AI risks, as is the case, for example, of Urwin (Built In, 25 July 2024) [45], who describes 14 risks of AI:
- Lack of AI transparency and explainability
- Job losses due to AI automation
- Social manipulation through algorithms
- Social surveillance with AI technology
- Lack of data privacy using AI tools
- Biases due to AI
- Socioeconomic inequality because of AI
- Weakening ethics and goodwill because of AI
- Autonomous weapons powered by AI
- Financial crises brought about by AI algorithms
- Loss of human influence
- Uncontrollable self-aware AI
- Increased criminal activity
- Broader economic and political instability
Having considered some of the beneficial effects and potential risks that the use of AI can have, let us look at some of the more worrying aspects of AI diffusion at different levels of society, ranging from the individual to organizations at the broadest level.
In Section 2.2, I showed the usefulness of considering several levels when studying a complex organization or complex issue, to facilitate the treatment of the complexity (variety) involved. Following that recommendation, I will explore in the next subsection the impact of AI on various levels of society: individual, local community, region, country, Europe (as an example of a transnational organization), the whole world, and planet Earth. It should be noted that some of those impacts may be present at different levels, but they will be mentioned only once.

3.2.4. Impacts of AI Deployment at Various Levels

  • 1. Individual level.
There is a danger that individuals may be overly influenced by the information that reaches them through social media and may rely too heavily on AI in their decision-making, neglecting critical thinking and weakening their sense of identity, their values and goals, their vision of the world, etc. The capacity to learn and adapt may also be affected, as well as self-awareness.
Erosion of Autonomy and Skill Degradation. AI systems may lead individuals to over-rely on AI, diminishing their problem-solving, critical thinking, and emotional intelligence. This can manifest as a loss of personal agency and an inability to adapt to changing circumstances. It may also pose a danger of loss of privacy (Rainie, L., Anderson, J. et al., 2024 [46]; Schertel, L., Stray, J. et al., 2024 [47]; Kreps, S. and Kriner, D., 2023 [48]; Csernatoni, R., 2024 [39]).
Increased Dependency and Laziness. The constant use of AI for daily tasks can cause a reduction in mental effort and a greater reliance on AI for problem-solving, leading to a decline in human agency (Ahmad, S.F., Han, H., Alam, M.M. et al., 2023 [49]).
Psychological Manipulation. Individuals become susceptible to manipulation by AI-driven psychological techniques, which can exploit human vulnerabilities and alter behaviors or beliefs. (Csernatoni, R., 2024) [39].
  • 2. Local level.
Local community organizations may not be adequately prepared to manage AI implications at the local level, resulting in the absence of clear boundaries or proper organizations for the intermediate environment between individuals and the region. This may be a cause of unequal distribution of resources and opportunities.
Social Disconnection and Fragmentation. Increased reliance on AI for communication and interaction can lead to reduced face-to-face interactions, potentially eroding community bonds and social cohesion. (Rainie, L. Anderson, J. et al., 2024 [46]; Innerarity, D., 2024) [50]
Unequal Access and Digital Divide. Not all community members may have equal access to AI technologies and the skills to use them, leading to social and economic inequalities. This can marginalize certain groups. (Innerarity, D., 2024) [50]; Rainie, L. Anderson, J. et al., 2024 [46]).
Erosion of Local Governance. Using AI in local decision-making may bypass or undermine local democratic processes, reducing citizen participation and accountability. (Innerarity, D., UNESCO 2024) [50].
Emergence of “Smart City” Dystopias. Over-reliance on AI to manage community resources and services may lead to a loss of individual control and increased surveillance, creating environments prioritizing efficiency over human well-being. (Anderson, J. Rainie, L. and Luchsinger, A. (2018) [51]; Rainie, L. Anderson, J. et al., 2024 [46]).
  • 3. Regional level (various regions within a country).
Regional entities (e.g., German Länder, regions, Autonomous Communities, etc.) may excessively control AI implementation without adequately considering local communities’ specific needs and future requirements, resulting in a poor adaptation of AI to local communities and a lack of planning for the future.
Regional Economic Disparities. Adopting AI-driven automation may lead to economic concentration in certain regions while others face job losses and economic stagnation. This can exacerbate regional inequalities. (Rainie, L. Anderson, J. et al., 2024 [46]).
Fragmented Governance Structures. The lack of coordination and interoperability between AI systems across different jurisdictions may hinder regional development and create inefficiencies. (Rainie, L.; Anderson, J. et al., 2024 [46]; Schertel, L.; Stray, J. et al., 2024 [47]).
Loss of Regional Identity. The homogenization of cultural and economic practices due to AI-driven globalization may result in the loss of regional distinctiveness. (Rainie, L. Anderson, J. et al., 2024 [46]).
  • 4. Country level.
National governments may prioritize AI development for economic reasons and potentially neglect its social and ethical implications, thereby disregarding the need to represent their citizens’ interests effectively or to adapt to the needs of local communities.
There may be an inadequate representation of lower levels in national decision-making processes and a lack of mid- and long-term strategic vision at the national level.
Erosion of Democratic Institutions. AI-generated misinformation and deepfakes may undermine public trust in electoral processes and create political polarization. (Read, A. 2023 [52]).
Increased Surveillance and Control. Governments may use AI for mass surveillance, undermining civil liberties and democratic freedoms. (Rainie, L. Anderson, J. et al., 2024 [46]).
Algorithmic Bias and Discrimination. AI systems may reflect and perpetuate societal biases in the justice system, employment, and other sectors, leading to discriminatory outcomes. (Rainie, L. Anderson, J. et al., 2024 [46]).
Centralization of Power. AI technologies may lead to a concentration of power in the hands of a few political leaders or corporations, weakening democratic institutions and reducing citizen participation. (Read, A. 2023 [52]).
Increased Political Polarization. AI may create filter bubbles and echo chambers that reinforce existing biases and prejudices, leading to a decline in the quality of political discourse. (Rainie, L. Anderson, J. et al., 2024 [46]; Summerfield et al. 2024 [53]).
Undermined Accountability. AI could undermine the principle that representatives and public officials must be accountable for their actions: when policies fail, lines of accountability become blurred because it is unclear whether a human or a machine made the final decision. (Innerarity, D., 2024 [50]).
  • 5. Europe level (as an example of a transnational organization).
The EU struggles to implement a unified and coherent AI strategy due to differing national priorities and the complexity of the European governance system. There is a potential for insufficient representation of the diverse needs of member states, as well as fragmentation and a lack of coordination that may lead to slow and ineffective decision-making.
Lack of Coordinated AI Policy. The absence of a unified approach to AI governance across the European Union may lead to regulatory fragmentation and hinder the responsible development of AI technologies. (Read, A. 2023 [52]).
Exacerbation of Regional Disparities. The uneven implementation of AI technologies may exacerbate existing economic and social disparities between different European regions and countries. (Read, A. 2023 [52]).
  • 6. World Level.
There is a lack of global coordination and governance of AI, leading to fragmented approaches and an unequal distribution of benefits and risks. The absence of a global identity or a system of shared values and goals may lead to conflict and a risk of fragmentation. There is also the risk of a race for AI dominance that may cause international tension and instability. The absence of a proper world organization or government with clear rules and shared values presents the danger of each player seeking its own benefit, with no accountability, thus creating a chaotic global environment in which the negative consequences of AI may prevail over its positive uses.
Global Power Imbalances. The development and control of AI technologies may create new forms of global power imbalance (geopolitical tensions and ethical disparities), further marginalizing developing nations and exacerbating global inequalities through unequal access to AI technology. (Rainie, L.; Anderson, J. et al., 2024 [46]; Read, A., 2023 [52]).
Lack of Global Governance Frameworks. The absence of international agreements on the ethical development and deployment of AI may lead to a situation where AI is used irresponsibly without consideration of global implications. (Rainie, L. Anderson, J. et al., 2024 [46]; Read, A. 2023 [52]).
Escalation of Global Conflicts. AI-driven weapon systems may increase the risk of conflict and instability, particularly in the absence of international norms and regulations. (Rainie, L. Anderson, J. et al., 2024 [46]; Read, A. 2023 [52]).
Digital Colonialism. Powerful countries or corporations may exploit less developed nations through AI-driven systems that do not benefit the local population and that further perpetuate existing inequalities. (Rainie, L.; Anderson, J. et al., 2024 [46]).
  • 7. Planet level.
Decisions about AI are made without consideration for the long-term environmental and planetary consequences. The lack of a planetary consciousness and a disconnection from the biosphere may result in decisions that compromise the planet’s long-term viability.
Environmental Degradation. The massive energy consumption required for AI development and deployment may exacerbate climate change and generate other environmental risks. (Rainie, L. Anderson, J. et al., 2024 [46]).
Unforeseen Systemic Risks. The deployment of increasingly complex and autonomous AI systems may lead to unforeseen consequences and systemic risks that are difficult to anticipate and control. (Rainie, L. Anderson, J. et al., 2024 [46]).
Lack of Global Cooperation. The absence of global institutions capable of addressing planet-level challenges associated with AI may lead to the misuse of AI technologies and potentially catastrophic consequences. (Rainie, L. Anderson, J. et al., 2024 [46]).
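To make this recursive framing explicit, the following minimal sketch (in Python, purely illustrative; the class, field, and level names are assumptions introduced here, not part of the original framework or of any published tooling) records the seven levels just enumerated as nested recursion levels, each carrying its own register of AI-related risks drawn from the lists above.

```python
# Illustrative sketch only: the seven recursion levels discussed above, each
# nested in the next-higher level and carrying a register of AI-related risks.
# All names and fields are hypothetical; they serve only to make the structure explicit.
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class RecursionLevel:
    name: str
    ai_risks: List[str] = field(default_factory=list)
    contained_in: Optional["RecursionLevel"] = None  # next-higher recursion level


def build_levels() -> List[RecursionLevel]:
    """Chain the levels from planet Earth down to the individual."""
    names_and_risks = [
        ("Planet", ["environmental degradation", "unforeseen systemic risks"]),
        ("World", ["global power imbalances", "digital colonialism"]),
        ("Europe (transnational)", ["regulatory fragmentation", "regional disparities"]),
        ("Country", ["erosion of democratic institutions", "mass surveillance"]),
        ("Region", ["regional economic disparities", "loss of regional identity"]),
        ("Local community", ["social fragmentation", "digital divide"]),
        ("Individual", ["erosion of autonomy", "psychological manipulation"]),
    ]
    levels: List[RecursionLevel] = []
    parent: Optional[RecursionLevel] = None
    for name, risks in names_and_risks:
        level = RecursionLevel(name=name, ai_risks=risks, contained_in=parent)
        levels.append(level)
        parent = level
    return levels


if __name__ == "__main__":
    for level in build_levels():
        container = level.contained_in.name if level.contained_in else "(none)"
        print(f"{level.name:25s} within: {container:25s} risks: {level.ai_risks}")
```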
In the following section, I will briefly describe how the pathologies included in the TOP presented in Section 2.3, categorized into Structural (Group I), Functional (Group II), and Information System and Communication Channel (Group III) pathologies, may arise as a consequence of the extensive application of AI.

4. Organizational Pathologies and AI Diffusion

I will follow the group categorization used in the TOP and, for each group, add some examples related to the impact of AI’s implementation.
  • Group I: Structural Pathologies
These pathologies concern the organization’s basic design and how it handles the complexity of its environment through the creation of sub-organizations and levels of recursion [10].
  • P.I.1—Non-existence of vertical unfolding: When a necessary division of the environment and organization into sub-components is absent.
  • Example: A national government might fail to recognize the need for specific bodies to address AI-related issues at different levels (local, regional, national), leading to an overburdened central authority. This results in an inability to address issues effectively due to the lack of sub-organizations focused on particular aspects of the problem.
  • P.I.2—Absence of first-level recursion: When the organization starts its division at a second level, leaving the first level without an organization to manage the complexity of the whole environment.
  • Example: A democratic system might create specific bodies to handle AI ethics and policy but lack a general entity that oversees the entire AI landscape and its global implications, leaving the system vulnerable to broader trends it is not tracking.
  • P.I.3—Lack of recursion levels (middle levels): Vertical unfolding is accomplished, but intermediate recursion levels are left empty. This leaves the corresponding environmental variety to be dealt with at either the next or the previous recursion level or, even worse, to be handled by no one at all.
  • Example: A system might have bodies dealing with AI at the national and local levels, but lack organizations focused on AI’s regional impacts. These unaddressed areas can lead to inconsistent policies and unequal distribution of resources and benefits related to AI. This lack of intermediate organizations can result in relevant issues not being addressed or being treated insufficiently.
  • P.I.4—Entangled vertical unfolding: When organizations have confused lines of responsibility and belonging, especially when multiple relationships and different criteria for complexity unfolding exist.
  • Example: A situation in which AI policy is influenced by multiple organizations (e.g., tech companies, government agencies, international bodies) without a clear structure of responsibility and communication, leading to conflicts of interest, disjointed implementation, and a lack of accountability. This can erode public trust in the overall governance of AI. The pathology manifests when organizations lack proper communication channels and clear relations of belonging, and when those organizations are not represented in the bodies where they should be.
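A minimal sketch of how the first three structural checks just described could be recorded is given below; the expected hierarchy of levels, the function name, and the mapping of findings to pathology codes are assumptions introduced here for illustration only, not a prescribed diagnostic procedure.

```python
# Hypothetical sketch: flag structural pathologies P.I.1-P.I.3 for a simple
# top-down list of expected recursion levels. Level names are illustrative.
from typing import List

EXPECTED_HIERARCHY = ["Global", "National", "Regional", "Local"]  # top to bottom


def diagnose_structure(existing_levels: List[str]) -> List[str]:
    """Return the structural pathologies suggested by missing recursion levels."""
    findings: List[str] = []
    present = [lvl in existing_levels for lvl in EXPECTED_HIERARCHY]

    if not any(present):
        findings.append("P.I.1 non-existence of vertical unfolding")
        return findings
    if not present[0]:
        findings.append("P.I.2 absence of first-level recursion")
    # A gap between two occupied levels indicates missing middle levels (P.I.3).
    occupied = [i for i, p in enumerate(present) if p]
    if any(b - a > 1 for a, b in zip(occupied, occupied[1:])):
        findings.append("P.I.3 lack of middle recursion levels")
    return findings


if __name__ == "__main__":
    # Example: AI bodies exist globally and locally, but not nationally or regionally.
    print(diagnose_structure(["Global", "Local"]))
    # -> ['P.I.3 lack of middle recursion levels']
```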
  • Group II: Functional Pathologies
These pathologies refer to an organization’s internal workings, specifically whether the five functions or subsystems of the VSM are adequately represented and functioning [10].
  • P.II.1—Undefined or poorly defined identity: When an organization lacks clarity about its purpose, boundaries, and values.
  • Example: A democratic nation might struggle to define its stance on AI ethics, leading to inconsistent policies, a lack of trust from citizens, and a susceptibility to external manipulation. This also translates to a lack of agreement on the values and goals that AI should serve within the democratic system. The system suffers from a lack of knowledge of its own identity and purpose.
  • P.II.2—Collapse of System 5 into System 3: When System 5 (policy and identity) inappropriately intervenes in the operations of System 3 (integration and resource allocation), leading to a weakening of both systems.
  • Example: A situation where the highest political authorities are overly involved in the day-to-day management of AI-related initiatives, hindering the operational independence of System 3. This would prevent the system from performing its main functions. System 5’s excessive intervention weakens its functions as well as those of System 4.
  • P.II.3—Inadequate representation to higher levels: When an organization is unable to represent its interests and values to the systems that contain it, causing disconnection and lack of coherence.
  • Example: A governmental body tasked with AI regulation may fail to communicate its needs and concerns to international regulatory bodies, leading to policies that are misaligned with the national context and the values of its society, also interrupting the transmission of values.
  • P.II.4—“Headless Chicken”: System 4 malfunctions due to a lack of proper monitoring of the external and future landscape, leading to a failure to adapt to new changes and trends in AI.
  • Example: A country that is slow to adopt AI literacy programs or to anticipate emerging risks associated with AI, lagging in innovation, and failing to adapt to the changing nature of social interactions and public discourse impacted by AI.
  • P.II.5—Dissociation between System 4 and System 3: When Systems 4 (intelligence and planning) and 3 (integration) fail to work together harmoniously, leading to a lack of coordination and the inability to translate future plans into present actions.
  • Example: A democratic nation develops strategic plans for AI development but fails to implement them effectively because of conflicts or misunderstandings between government planning and operation bodies or because the needs and limitations of the operational level (System 3) are not taken into account at the planning level (System 4). In this case, System 4 perceives System 3 as short-sighted, while System 3 perceives System 4 as unrealistic.
  • P.II.6—Inadequate management style: When System 3 over-intervenes in the operational units of System 1, restricting the necessary autonomy of the operational units.
  • Example: A government or agency that attempts to micromanage the development of AI tools at the local level, reducing the autonomy of the local agencies and hindering their ability to respond to specific needs.
  • P.II.7—Weak connection between System 3 and System 1: When the relationship between System 3 and System 1 is not well established, causing a lack of communication and coordination.
  • Example: If government guidelines on AI are not communicated or applied effectively at the local level, leading to disparities and implementation gaps, a weak link between System 3 and System 1 appears.
  • P.II.8—Hypertrophy of System 3: When System 3 is excessively developed while Systems 2 and 3* are insufficient, causing System 3 to be overwhelmed and reducing its capacity to coordinate the whole system.
  • Example: An agency overcentralizes control of all AI implementations without creating adequate coordination mechanisms (System 2) or audit structures (System 3*), leading to inefficiency and a lack of adaptability. The excessive interventions of System 3 discourage the managers of the operational units.
  • P.II.9—Absence or insufficient development of System 3*: When System 3* (audit and data collection) is not well developed, leading to a lack of information about the system’s performance and potential problems.
  • Example: There are no independent audits and assessments of AI’s impact on vulnerable groups, which creates a lack of accountability. Without proper data collection and monitoring of AI’s effects, the behaviors of the operational units cannot be kept aligned.
  • P.II.10—Fragmented behavior in System 1: A lack of coordination and collaboration among the operational units of System 1, leading to competition for resources, a lack of continuous flow between the units, and a general failure to work together.
  • Example: Individual communities or organizations pursuing AI initiatives without proper coordination or a sense of overall objectives or strategies. This leads to duplications, inefficiencies, and unequal access to resources and opportunities.
  • P.II.11—Autopoietic organizational beasts: When a subsystem focuses on its own goals and growth at the expense of the overall system’s purpose.
  • Example: An AI ethics board that becomes more focused on its own institutional expansion and influence rather than on ensuring that ethical AI policies are integrated effectively, thereby becoming an “autopoietic beast.”
  • P.II.12—Lack of a meta-system: When the functions proper to System 3, System 4, and System 5 are not clearly defined. Their performance is spread across different managers, without a clear identification of the function to which each belongs.
  • Example: A situation where the different parts of the government that are tasked with the management of AI fail to define which area is in charge of oversight, future planning, or policy development. They do not interact properly as different elements of a meta-system.
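The functional side of the diagnosis can be recorded in a similar spirit. The sketch below assumes a simple yes/no record of which VSM functions are actually staffed at a given recursion level and flags a few of the presence/absence pathologies described above; the data format and the particular checks are illustrative assumptions, not an exhaustive or validated test.

```python
# Hypothetical sketch: flag a few functional pathologies from a yes/no record of
# which VSM functions (S1-S5, S2, S3*) are actually staffed at one recursion level.
from typing import Dict, List


def diagnose_functions(systems_present: Dict[str, bool]) -> List[str]:
    findings: List[str] = []
    if not systems_present.get("S4", False):
        findings.append('P.II.4 "headless chicken": System 4 missing or inactive')
    if not systems_present.get("S3*", False):
        findings.append("P.II.9 absent or insufficient System 3* (audit)")
    if not systems_present.get("S2", False):
        findings.append("P.II.10 risk of fragmented behaviour in System 1 (no System 2)")
    if not any(systems_present.get(s, False) for s in ("S3", "S4", "S5")):
        findings.append("P.II.12 lack of a meta-system (S3, S4, S5 undefined)")
    return findings


if __name__ == "__main__":
    # Example: a national AI agency with operations, coordination and policy in
    # place, but no foresight function (S4) and no independent audit (S3*).
    print(diagnose_functions({"S1": True, "S2": True, "S3": True,
                              "S4": False, "S3*": False, "S5": True}))
```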
  • Group III: Information System and Communication Channel Pathologies
These pathologies relate to the flow of information within the organization and between the organization and its environment.
  • P.III.1—Absence of Information Systems: When there is no adequate infrastructure to provide the information necessary for decision-making throughout the organization.
  • Example: A democratic system lacking a central platform to share information on AI policy will suffer disjointed implementation and difficulties in accessing the available information. The lack of information systems produces a lack of connection between the functions, and decisions made with incomplete, inappropriate, or delayed information will be poorly founded.
  • P.III.2—Fragmentation of information systems: When useful information systems exist in isolation from one another, creating silos of information that do not communicate with each other.
  • Example: A system in which different government agencies use incompatible data systems for tracking the impact of AI, leading to inconsistencies, difficulties in data integration, and duplication of efforts.
  • P.III.3—Absence of essential communication channels: When the connections necessary to provide information between different parts of the system are missing.
  • Example: There is a lack of communication between AI researchers, policymakers, and the public. This absence leads to gaps in understanding, creates distrust, and hinders a collaborative approach to AI governance. It also leaves an incomplete network in which the functions cannot perform properly because information is missing, partial, unintelligible in format, or delayed.
  • P.III.4—Absence or Insufficiency of algedonic channels: The absence of alarm signals that can alert to critical problems in the system, which compromises the viability of the organization.
  • Example: A lack of monitoring mechanisms or public feedback channels to identify and address unexpected consequences of AI deployment, such as algorithmic bias or discriminatory outcomes. The absence of these channels prevents the system from reacting in time to mitigate risks.
  • P.III.5—Communication channels incomplete or with inadequate capacity: When there are issues with how messages are sent, received, and understood, such as unclear language or delayed communication. This can result in misinterpretations, conflicts, and inefficient processes.
  • Example: If the information regarding AI regulations or ethical guidelines is unclear or difficult to understand for local governments or public organizations, it leads to misinterpretations, lack of adherence, and inconsistent application of the guidelines, distorting the messages received by those who are intended to apply them.
These are some examples to illustrate the various pathologies that can arise in organizations, especially when facing the complexities and challenges related to AI. Using the Viable System Model and the Taxonomy of Organizational Pathologies can help create a comprehensive framework for diagnosing these issues and formulating more effective interventions.
Having identified some of the potential impacts that the extensive use of AI may have on humans and society, we can explore possible actions to avoid them. A detailed review of all of them is outside the scope of this paper, so I will limit the discussion to noting that the identification of such actions can be supported by applying the VSM at each of the various organizational levels. Knowledge of the TOP can help us identify actions and design relevant organizational structures, while always considering the vision, purposes, and aims defined by the corresponding System 5s, as the brief sketch below illustrates.
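As a purely hypothetical illustration of such a TOP-guided review, the sketch below expresses a handful of pathologies from the three groups as a checklist that an analyst answers for a given AI-governance arrangement; the selected items, the question wording, and the output format are assumptions made for this example, not a validated diagnostic instrument.

```python
# Hypothetical sketch: a small TOP-inspired checklist across the three groups.
# Questions answered "no" indicate a pathology worth examining further.
from typing import Dict, List, Tuple

# (pathology code, question the analyst answers, group)
CHECKLIST: List[Tuple[str, str, str]] = [
    ("P.I.3",   "Are intermediate recursion levels (e.g., regional AI bodies) in place?",  "Structural"),
    ("P.II.4",  "Is a System 4 scanning the AI landscape and its future trends?",          "Functional"),
    ("P.III.3", "Do channels link AI researchers, policymakers, and the public?",          "Communication"),
    ("P.III.4", "Do algedonic channels exist to raise alarms about harmful AI outcomes?",  "Communication"),
]


def diagnose(answers: Dict[str, bool]) -> List[str]:
    """Report the pathologies for every checklist question answered 'no'."""
    return [f"{group}: {code}" for code, _question, group in CHECKLIST
            if not answers.get(code, False)]


if __name__ == "__main__":
    # Example assessment of a hypothetical national AI strategy.
    print(diagnose({"P.I.3": False, "P.II.4": True, "P.III.3": True, "P.III.4": False}))
    # -> ['Structural: P.I.3', 'Communication: P.III.4']
```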

5. Conclusions

This paper has explored the potential of Organizational Cybernetics and its Viable System Model (VSM), together with Pérez Ríos’s methodological framework and Taxonomy of Organizational Pathologies (TOP), as powerful analytical tools for understanding the multifaceted impacts of Artificial Intelligence (AI) on society. The rapid advancement and widespread adoption of AI present both remarkable opportunities and significant risks across various levels of organization, from individuals to the planet.
Our analysis reveals that AI’s effects vary considerably depending on the organizational level under consideration. By applying the VSM framework, this study has examined the potential positive and negative consequences of AI diffusion at individual, local community, regional, national, transnational (exemplified by Europe), global, and planetary levels. This multi-level approach allows for a more nuanced understanding of the complex interplay between AI technologies and societal structures.
The paper highlights how Pérez Ríos’s proposed methodological framework, which emphasizes clarifying an organization’s identity, purpose, and boundaries, creating a vertical structure, designing or diagnosing each organizational level according to the VSM, and checking coherence across recursion levels, is crucial for analyzing the impact of AI. The Recursion Levels-Key Factors Matrix is presented as a valuable tool for clarifying the vertical structure and ensuring coherence between different actions at all recursion levels.
Furthermore, the paper describes Pérez Ríos’s Taxonomy of Organizational Pathologies (TOP), categorizing 26 common flaws into Structural, Functional, and Information System & Communication Channel pathologies. By connecting these pathologies to the potential adverse effects of AI, the paper shows how the TOP can help identify vulnerabilities and facilitate communication among decision-makers regarding complex AI-related issues. Examples provided throughout the paper illustrate how the diffusion of AI can exacerbate existing organizational weaknesses or create new pathologies at various levels. These pathologies often manifest as governance failures, communication breakdowns, and structural deficiencies.
The study underscores that while AI holds immense potential for enhancing decision-making, increasing efficiency, and improving public services, it also presents significant risks to privacy, equality, democratic processes, and the environment. Addressing these challenges requires a multifaceted approach that includes the development of ethical frameworks, the enhancement of transparency, public education, and the implementation of robust regulatory measures, as exemplified by the EU AI Act [24]. International collaborations like the Bletchley Declaration and initiatives by the UN further emphasize the global recognition of these challenges.
In conclusion, the integration of organizational cybernetics, the VSM, and the TOP provides a comprehensive and holistic lens through which to analyze the intricate impacts of AI across different organizational scales. By understanding the potential benefits and associated pathologies, decision-makers can be better equipped to develop effective strategies for mitigating adverse consequences and promoting responsible innovation in the age of AI. The insights presented in this paper underscore the critical importance of ethical considerations, transparency, interdisciplinary collaboration, and proactive measures to harness the transformative power of AI for the benefit of society while safeguarding human rights and democratic values. The application of these cybernetic principles can guide the design of resilient and viable systems capable of navigating the complexities introduced by the pervasive diffusion of artificial intelligence.

Funding

This research received no external funding.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Conflicts of Interest

The author declares no conflicts of interest.

Appendix A

Table A1. Organizational Pathologies.
I. STRUCTURAL PATHOLOGIES
I1. Non-existence of vertical unfolding (PI1): The lack of an adequate vertical unfolding, when needed, renders it difficult or impossible for a single large organization to deal with the total variety it faces.
I2. Lack of recursion levels (first level) (PI2): Vertical unfolding is accomplished, but the first recursion level is left empty, leaving part of the total environmental variety unattended.
I3. Lack of recursion levels (middle levels) (PI3): Vertical unfolding is accomplished, but intermediate recursion levels are left empty. That leaves the corresponding environmental variety to be dealt with at either the next or the previous recursion level (which is difficult or impossible) or, even worse, to be handled by no one.
I4. Entangled vertical unfolding (PI4): Various interrelated level memberships. Inadequate integration/communication between recursion levels when multiple memberships are present.
II. FUNCTIONAL PATHOLOGIES
PATHOLOGIES RELATED TO SYSTEM 5
II1. Ill-defined identity (PII1): Identity has not been sufficiently clarified or defined (“I do not know who I am”).
II2. Institutional schizophrenia (PII2): Two or more different identity conceptions produce conflict within an organization.
II3. System 5 collapses into System 3 (non-existing metasystem) (PII3): System 5 intervenes undesirably in the affairs of System 3.
II4. Inadequate representation vis-à-vis higher levels (PII4): Poor connection between the System 5s of organizations pertaining to different recursion levels within the same global organization.
PATHOLOGIES RELATED TO SYSTEM 4
II5. “Headless chicken” (PII5): System 4 is missing or, if it does exist, does not work properly.
II6. Dissociation of System 4 and System 3 (PII6): The System 4–System 3 homeostat does not work properly. Each component system carries out its function separately but does not communicate and interact as it should with the other system.
PATHOLOGIES RELATED TO SYSTEM 3
II7. Inadequate management style (PII7): System 3 intervenes excessively or inadequately in the management affairs of System 1. For example, an authoritarian management style constrains System 1’s autonomy.
II8. Schizophrenic System 3 (PII8): Conflict arises between the roles of System 3 due to its simultaneous inclusion both in the system (operations) and the metasystem (management).
II9. Weak connection between System 3 and System 1 (PII9): The operational units that make up System 1 operate independently, lacking adequate integration and support from System 3.
II10. Hypertrophy of System 3 (PII10): System 3 arrogates to itself too much activity, some of which should be carried out by System 3*, System 2, and System 1 directly.
PATHOLOGIES RELATED TO SYSTEM 3*
II11. Lack or insufficient development of System 3* (PII11): The lack or insufficient development of a System 3* allows undesirable behaviour and/or activities to go on in System 1.
PATHOLOGIES RELATED TO SYSTEM 2
II12. Disjointed behaviour within System 1 (PII12): A lack of adequate interrelations between the elemental operating units that make up System 1 leads to their fragmentary behaviour.
II13. Authoritarian System 2 (PII13): System 2 shifts from a service orientation towards authoritarian behaviour.
PATHOLOGIES RELATED TO SYSTEM 1
II14. Autopoietic “beasts” (PII14): The elemental operating units constituting System 1 behave as if their individual goals were their only reason for being. Regardless of any considerations transcending their interests, they ignore the need to harmonize their individual goals within an integrated System 1.
II15. Dominance of System 1; weak metasystem (PII15): The power of System 1 is not handled within the limits set by the metasystem (System 3, System 4, and System 5).
II16. Organizational autopoietic “beasts” (PII16): The uncontrolled growth and activity of some individual parts of the organization put the viability of the whole organization at risk.
II17. Lack of metasystem (PII17): Insufficient or missing definitions of identity and purpose. A weak or incomplete metasystem shifts the balance between the “outside and future” and the “here and now” management-oriented activities towards the “here and now”, leaving adaptation-oriented activities unattended. Inadequate connections exist between organizations at different recursion levels.
III. PATHOLOGIES RELATED TO INFORMATION SYSTEMS AND COMMUNICATION CHANNELS
III1. Lack of information systems (PIII1): Some of the necessary information systems are missing, insufficiently developed, or not working correctly.
III2. Fragmentation of information systems (PIII2): Information systems exist in the organization, but they work in a fragmentary way, with poor or non-existent connections between them.
III3. Lack of key communication channels (PIII3): Certain required communication channels that should connect the different functions do not exist or, if they do, are either inadequately designed or work improperly.
III4. Lack of or insufficient algedonic channels (PIII4): Necessary algedonic channels are missing or, if they do exist, are poorly designed for their function or do not work correctly.
III5. Communication channels incomplete or with inadequate capacity (PIII5): Necessary communication channels do not have all the elements needed to transmit the required information (transducers, channel capacity, and a sender-receiver in both directions).
(Pérez Ríos, 2008b) [6].

References

  1. Beer, S. The Heart of Enterprise; John Wiley & Sons: Chichester, UK, 1979. [Google Scholar]
  2. Beer, S. Brain of the Firm, 2nd ed.; John Wiley & Sons: Chichester, UK, 1981. [Google Scholar]
  3. Beer, S. Diagnosing the System for Organizations; John Wiley & Sons: Chichester, UK, 1985. [Google Scholar]
  4. Beer, S. The viable system model: Its provenance, development, methodology and pathology. In The Viable System Model, Interpretations and Applications of Stafford Beer’s VSM; Espejo, R., Harnden, R., Eds.; Wiley: Chichester, UK, 1989. [Google Scholar]
  5. Pérez Ríos, J. Aplicación de la Cibernética Organizacional al estudio de la viabilidad de las organizaciones. In Patologías Organizativas Frecuentes (Parte II); DYNA: Bilbao, Spain, 2008; Volume 83. [Google Scholar]
  6. Pérez Ríos, J. Diseño y Diagnóstico de Organizaciones Viables. Un Enfoque Sistémico; Iberfora 2000: Valladolid, Spain, 2008; ISBN 978-84-612-5845-1. [Google Scholar]
  7. Pérez Ríos, J. Models of Organizational Cybernetics for Diagnosis and Design. Kybernetes Int. J. Syst. Cybern. 2010, 39, 1529–1550. [Google Scholar] [CrossRef]
  8. Pérez Ríos, J. Design and Diagnosis for Sustainable Organizations: The Viable System Method; Springer: Berlin/Heidelberg, Germany; New York, NY, USA, 2012. [Google Scholar]
  9. Ashby, W.R. An Introduction to Cybernetics; Chapman Hall: London, UK, 1956. [Google Scholar]
  10. Conant, R.C.; Ashby, W.R. Every good regulator of a system must be a model of that system. Int. J. Syst. Sci. 1970, 1, 89–97. [Google Scholar] [CrossRef]
  11. Espejo, R.; Bowling, D.; Hoverstadt, P. The viable system model and the VIPLAN software. Kybernetes Int. J. Syst. Cybern. 1999, 28, 661–678. [Google Scholar] [CrossRef]
  12. Espejo, R. Observing organisations: The use of identity and structural archetypes. Int. J. Appl. Syst. Stud. 2008, 2, 6–24. [Google Scholar] [CrossRef]
  13. Espejo, R.; Reyes, A. Organisational Systems: Managing Complexity with the Viable System Model; Springer: Berlin/Heidelberg, Germany, 2011. [Google Scholar]
  14. Schwaninger, M. Intelligent Organizations—Powerful Models for Systemic Management; Springer: Berlin/Heidelberg, Germany, 2009. [Google Scholar]
  15. Hoverstadt, P. The Fractal Organization: Creating Sustainable Organizations with the Viable System Model; Wiley: Chichester, UK, 2008. [Google Scholar]
  16. Espinosa, A. Sustainable Self-Governance in Businesses and Society: The Viable System Model in Action; Francis & Taylor: London, UK; Routledge: London, UK, 2023; ISBN 9781032354972. [Google Scholar]
  17. Lassl, W. The Viability of Organizations; Springer: Berlin/Heidelberg, Germany, 2019; Volume 1–3. [Google Scholar]
  18. Pfiffner, M. The Neurology of Business: Implementing the Viable System Model; Springer: Berlin/Heidelberg, Germany, 2022. [Google Scholar]
  19. Eccles, J.C. How the Self Controls Its Brain; Springer: Berlin/Heidelberg, Germany, 1994. [Google Scholar]
  20. von Foerster, H. Cybernetics of Cybernetics, 2nd ed.; Future Systems Inc.: Minneapolis, MN, USA, 1995. [Google Scholar]
  21. von Foerster, H. Ethics and Second-Order Cybernetics. In Understanding Understanding. Essays on Cybernetics and Cognition; Springer: New York, NY, USA, 2003; pp. 287–304. ISBN 0-387-95392-2. [Google Scholar]
  22. Lepskiy, V. Evolution of cybernetics: Philosophical and methodological analysis. Kybernetes 2018, 47, 249–261. [Google Scholar] [CrossRef]
  23. Espejo, R.; Lepskiy, V. An agenda for ontological cybernetics and social responsibility. Kybernetes 2021, 50, 694–710. [Google Scholar] [CrossRef]
  24. Hetzler, S. Pathological systems. Int. J. Appl. Syst. Stud. 2008, 2, 25–39. [Google Scholar] [CrossRef]
  25. Schwaninger, M. Modeling with Archetypes: An Effective Approach to Dealing with Complexity. Computer Aided Systems Theory-EUROCAST 2003; Springer: Berlin/Heidelberg, Germany, 2003; LNCS Volume 2809, pp. 127–138. [Google Scholar]
  26. Katina, P.F. Systems Theory-Based Construct for Identifying Metasystem Pathologies for Complex System Governance. Ph.D. Thesis, Engineering Management & Systems Engineering. Old Dominion University, Norfolk, VA, USA, 2015. [Google Scholar] [CrossRef]
  27. Katina, P.F. Emerging systems theory-based pathologies for governance of complex systems. Int. J. Syst. Syst. Eng. 2015, 6, 144–159. [Google Scholar] [CrossRef]
  28. Katina, P.F. Systems Theory as a Foundation for Discovery of Pathologies for Complex System Problem Formulation. In Applications of Systems Thinking and Soft Operations Research in Managing Complexity; Masys, A., Ed.; Advanced Sciences and Technologies for Security Applications; Springer: Cham, Switzerland, 2016. [Google Scholar] [CrossRef]
  29. Keating, C.B.; Katina, P.F. Prevalence of pathologies in systems of systems. Int. J. Syst. Syst. Eng. 2012, 3, 243–267. [Google Scholar] [CrossRef]
  30. Keating, C.B.; Katina, P.F.; Chesterman, C.W., Jr.; Pyne, J.C. (Eds.) Complex System Governance: Theory and Practice. In Engineering Management & Systems Engineering Faculty Books; Springer: Berlin/Heidelberg, Germany, 2022. [Google Scholar]
  31. Morales Allende, M.M.; Ruiz-Martin, C.; Lopez-Paredes, A.; Perez Ríos, J. Aligning Organizational Pathologies and Organizational Resilience Indicators. Int. J. Prod. Manag. Eng. 2017, 5, 107–116. [Google Scholar] [CrossRef]
  32. Ruiz-Martin, C.; Pérez Ríos, J.; Wainer, G.; Pajares, J.; Hernández, C.; López Paredes, A. The Application of the Viable System Model to Enhance Organizational Resilience. In Advances in Management Engineering; Hernández, C., Ed.; Lecture Notes in Management and Industrial Engineering; Springer International Publishing: Berlin/Heidelberg, Germany, 2017. [Google Scholar]
  33. Yolles, M.; Flink, G. Personality, pathology and mindsets: Part 3 (of 3)—Pathologies and corruption. Kybernetes 2014, 43, 135–143. [Google Scholar] [CrossRef]
  34. European Union EUR-Lex, Document 32024R1689. (EU AI Act: Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024). 2024. Available online: https://eur-lex.europa.eu/eli/reg/2024/1689/oj/eng (accessed on 30 January 2025).
  35. Rough, E.; Sutherland, N. Debate on Artificial Intelligence. Debate Pack 28 June 2023 Number CDP 2023/0152. UK Parliament. House of Commons Library. 2023. Available online: https://commonslibrary.parliament.uk/research-briefings/CDP-2023-0152/ (accessed on 1 April 2025).
  36. Manyika, J. Getting AI Right: A 2050 Thought Experiment. 2025. Available online: https://www.digitalistpapers.com/essays (accessed on 23 February 2025).
  37. Manheim, K.; Kaplan, L. Artificial Intelligence: Risks to Privacy and Democracy. Yale J. Law Tech. 2019, 21, 106. Available online: https://yjolt.org/artificial-intelligence-risks-privacy-and-democracy?utm_source=chatgpt.com (accessed on 14 February 2025).
  38. The Guardian ‘Engine of Inequality’: Delegates Discuss AI’s Global Impact at Paris Summit. 2025. Available online: https://www.theguardian.com/technology/2025/feb/10/ai-artificial-intelligence-widen-global-inequality-climate-crisis-lead-paris-summit?CMP=share_btn_url (accessed on 23 February 2025).
  39. Csernatoni, R. Carnegie Europe. Carnegie Endowment for International Peace (2024). Can Democracy Survive the Disruptive Power of AI? (18 Dec 2024). 2024. Available online: https://carnegieendowment.org/research/2024/12/can-democracy-survive-the-disruptive-power-of-ai?lang=enn (accessed on 4 March 2025).
  40. European Commission White Paper on Artificial Intelligence—A European Approach to Excellence and Trust. Brussels, 19.2.2020. 2020. Available online: https://commission.europa.eu/system/files/2020-02/commission-white-paper-artificial-intelligence-feb2020_en.pdf (accessed on 15 February 2025).
  41. Tai, M.C.-T. The impact of artificial intelligence on human society and bioethics. Tzu Chi Med. J. 2020, 32, 339–343. [Google Scholar] [CrossRef] [PubMed] [PubMed Central]
  42. Gov.uk (updated 13 February 2025). The Bletchley Declaration by Countries Attending the AI Safety Summit, 1–2 November 2023. Available online: https://www.gov.uk/government/publications/ai-safety-summit-2023-the-bletchley-declaration/the-bletchley-declaration-by-countries-attending-the-ai-safety-summit-1-2-november-2023 (accessed on 24 January 2025).
  43. United Nations, General Assembly. Seventy-Eighth Session, Agenda Item 13. 11 March 2024. Available online: https://docs.un.org/en/A/78/L.49. (accessed on 15 February 2025).
  44. Élysée (Official website of the President of France). Artificial Intelligence Action Summit (10–11 February 2025). Available online: https://www.elysee.fr/en/emmanuel-macron/2025/02/11/statement-on-inclusive-and-sustainable-artificial-intelligence-for-people-and-the-planet (accessed on 4 March 2025).
  45. Thomas, M. 14 Risks and Dangers of Artificial Intelligence (AI). BuiltIn. 2024. Available online: https://builtin.com/artificial-intelligence/risks-of-artificial-intelligence (accessed on 26 January 2025).
  46. Rainie, L.; Anderson, J. A New Age of Enlightenment? A New Threat to Humanity?: The Impact of Artificial Intelligence by 2040. ELON University. 2024. Available online: https://imaginingthedigitalfuture.org/reports-and-publications/the-impact-of-artificial-intelligence-by-2040/ (accessed on 24 January 2025).
  47. Schertel, L.; Stray, J. AI as a Public Good: Ensuring Democratic Control of AI in the Information Space. Forum on Information & Democracy. February 2024. Available online: https://informationdemocracy.org/wp-content/uploads/2024/03/ID-AI-as-a-Public-Good-Feb-2024.pdf (accessed on 24 January 2025).
  48. Kreps, S.; Kriner, D. How AI Threatens Democracy. J. Democr. 2023, 34, 122–131. [Google Scholar] [CrossRef]
  49. Ahmad, S.F.; Han, H.; Alam, M.M.; Rehmat, M.; Irshad, M.; Arraño-Muñoz, M.; Ariza-Montes, A. Impact of artificial intelligence on human loss in decision making, laziness and safety in education. Humanit. Soc. Sci. Commun. 2023, 10, 311. [Google Scholar] [CrossRef] [PubMed]
  50. Innerarity, D. Artificial Intelligence and Democracy. UNESCO 2024. Available online: https://www.unesco.org/en/articles/artificial-intelligence-and-democracy (accessed on 29 January 2025).
  51. Anderson, J.; Rainie, L.; Luchsinger, A. Artificial Intelligence and the Future of Humans. Pew Research Center (10 December 2018). 2018. Available online: https://www.pewresearch.org/internet/2018/12/10/artificial-intelligence-and-the-future-of-humans/ (accessed on 26 January 2025).
  52. Read, A. A Democratic Approach to Global Artificial Intelligence (AI) Safety. Policy Brief. November 2023. WFD. Available online: https://www.wfd.org/sites/default/files/2023-11/A%20democratic%20approach%20to%20global%20artificial%20intelligence%20%28AI%29%20safety%20v2_0.pdf (accessed on 24 January 2025).
  53. Summerfield, C.; Argyle, L.; Bakker, M.; Collins, T.; Durmus, E.; Eloundou, T.; Gabriel, I.; Ganguli, D.; Hackenburg, K.; Hadfield, G.; et al. How Will Advanced AI Systems Impact Democracy. arXiv 2024, arXiv:2409.06729. [Google Scholar]
Figure 1. The Viable System Model (VSM) showing some of its main components. Sources: Adapted from Beer (1985) [3]; Pérez Ríos (2008b, p. 55) [6].
Figure 2. Viable system model, showing a second level of recursion. Sources: Adapted from Beer (1985) [3]; Pérez Ríos (2008b, p. 56) [6].
Figure 3. VSM showing communication channels in first and second recursion levels (Pérez Ríos, 2008b, p. 67) [6].
Figure 4. Recursion Levels-Key Factors Matrix (Pérez Ríos, 2008b, p. 90) [6].
Figure 5. Recursion Levels-Key Factors Matrix. Multiple recursion criteria (Pérez Ríos, 2008b, p. 90) [6].
Figure 6. Coherence between Systems 5 of different levels of recursion. (a) Pérez Ríos, 2008b (p. 67) and (b) Pérez Ríos, 2008b (p. 115) [6].
Figure 7. Structural Pathologies (Pérez Ríos, 2008b) [6].
Figure 8. Functional Pathologies (Pérez Ríos, 2008b) [6].
Figure 9. Information System and Communication Channel Pathologies (Pérez Ríos, 2008b) [6].
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
