The Viable System Model and the Taxonomy of Organizational Pathologies in the Age of Artificial Intelligence (AI)
Abstract
1. Introduction
2. Organizational Cybernetics
2.1. Organizational Cybernetics and the Viable System Model
- (a) Viability: the capacity to maintain the existence of the organization through time, regardless of changes in its environment.
- (b) Variety: an indication of the level of complexity of the situation or issue under consideration.
- (c) Ashby's Law: according to Ashby (1956) [9], “only variety can destroy variety”.
- (d) Conant-Ashby theorem: “every good regulator of a system must be a model of that system” (Conant and Ashby, 1970) [10].
- (e) Viable System Model (VSM): this creation of S. Beer (1979, 1981, 1985, 1989) [1,2,3,4] provides a comprehensive model of the essential components (subsystems, communication channels, relations among subsystems, etc.) that any organization must have to be viable (Figure 1). The names he gives to those subsystems (functions) are System 1, System 2, System 3, System 3*, System 4, and System 5.
- (f) Recursive character of the VSM: another fundamental aspect of the VSM is the recursive character of viable systems. All viable systems contain viable systems and are themselves contained in viable systems. In Figure 2, we can see how an exact replica of the system in focus is contained (turned 90 degrees) inside the ellipses and rectangles that represent the elemental operational units. The most important aspect of the recursive conception of viable systems is that, no matter which place they occupy within the chain of systems, they must always contain the five systems or functions that determine viability in order to be viable (see the sketch after this list).
- (g) Communication channels (Figure 3): these are responsible for connecting all those systems or functions and linking the organization with its environment.
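Since later sections repeatedly refer to Systems 1–5 and to recursion levels, a minimal code sketch may help fix the vocabulary. The Python fragment below is purely illustrative and is not part of Beer's formal specification: the class name ViableSystem, its field names, and the recursion_depth helper are assumptions introduced here only to show how the five functions and the recursive containment of viable systems fit together.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class ViableSystem:
    """Illustrative sketch of a VSM 'system in focus' with its five functions."""
    name: str
    # Metasystem functions (labels only, for orientation):
    system_5_policy: str = "identity and purpose"                  # System 5
    system_4_intelligence: str = "outside and future"              # System 4
    system_3_integration: str = "inside and now"                   # System 3
    system_3_star_audit: str = "sporadic audit"                    # System 3*
    system_2_coordination: str = "anti-oscillatory coordination"   # System 2
    # System 1: the elemental operational units, each itself a viable system.
    # This field is what makes the model recursive.
    operational_units: List["ViableSystem"] = field(default_factory=list)

    def recursion_depth(self) -> int:
        """Number of recursion levels contained below the system in focus."""
        if not self.operational_units:
            return 0
        return 1 + max(unit.recursion_depth() for unit in self.operational_units)


# Usage: a firm with two divisions, one of which contains a plant.
firm = ViableSystem("firm", operational_units=[
    ViableSystem("division A", operational_units=[ViableSystem("plant A1")]),
    ViableSystem("division B"),
])
print(firm.recursion_depth())  # -> 2
```

The only structurally important field is the last one: every operational unit is itself a ViableSystem, which is exactly the recursive character described in item (f).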
2.2. Methodological Framework
- Identity recognition
- Recursion Levels-Key Factors Matrix
2.3. Organizational Pathologies
2.3.1. Structural Pathologies
Is the design of the organization’s vertical structure adequate to face the complexity of the organization’s environment?
- PI1. Non-existence of vertical unfolding.
- PI2. Lack of recursion levels (first level).
- PI3. Lack of recursion levels (middle levels).
- PI4. Entangled vertical unfolding.
2.3.2. Functional Pathologies
Pathologies Related to System 5
Do I have a clear idea of who (the organization) I am, what my purpose is, and what the boundaries of my organization are?
- PII1. Ill-defined identity.
- PII2. Institutional schizophrenia.
- PII3. System 5 collapses into System 3 (Non-existing metasystem).
- PII4. Inadequate representation vis-a-vis higher levels.
Pathologies Related to System 4
Do I know what is happening outside my organization and what the future will look like?
- PII5. “Headless chicken”.
- PII6. Dissociation of System 4 and System 3.
Pathologies Related to System 3
Management style: Is the management of Operations designed and working adequately? Is the balance between autonomy and cohesion properly configured?
- PII7. Inadequate management style.
- PII8. Schizophrenic System 3.
- PII9. Weak connection between System 3 and System 1.
- PII10. Hypertrophy of System 3.
Pathologies Related to System 3*
Are things done correctly? Are unethical behaviors occurring, and are corrupt practices detected early?
- PII11. Lack or insufficient development of System 3*.
Pathologies Related to System 2
Are my operating units governed by “Every man for himself!”? Is chaos proliferating and reigning in our organization? Are we being overwhelmed by bureaucracy?
- PII12. Disjointed behavior within System 1.
- PII13. Authoritarian System 2.
Pathologies Related to System 1
Are we producing and delivering what we should? Do the operational units of the organization work in harmony among themselves, or are some of them absorbing more resources than they should from the whole? Do the operational units have excessive power in the organization?
- PII14. Autopoietic “Beasts”.
- PII15. Dominance of System 1: Weak Metasystem.
Pathologies Related to the Complete System
- PII16. Organizational Autopoietic “Beasts”.
- PII17. Lack of Metasystem.
2.3.3. Pathologies Related to Information Systems and Communication Channels
- PIII1. Lack of information systems.
- PIII2. Fragmentation of information systems.
- PIII3. Lack of key communication channels.
- PIII4. Lack of or insufficient algedonic channels.
- PIII5. Communication channels incomplete or with inadequate capacity.
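Because Section 4 applies these codes one by one to AI governance, it can be convenient to have the taxonomy in machine-readable form as a checklist. The sketch below is only an illustrative encoding assumed by this article (the dictionary names STRUCTURAL, FUNCTIONAL, INFORMATIONAL, and TAXONOMY are not part of the original methodology); it restates the codes and short names listed above.

```python
# Illustrative encoding of the pathology taxonomy as plain dictionaries
# (codes and short names follow the listing in Section 2.3).
STRUCTURAL = {
    "PI1": "Non-existence of vertical unfolding",
    "PI2": "Lack of recursion levels (first level)",
    "PI3": "Lack of recursion levels (middle levels)",
    "PI4": "Entangled vertical unfolding",
}

FUNCTIONAL = {
    "PII1": "Ill-defined identity",
    "PII2": "Institutional schizophrenia",
    "PII3": "System 5 collapses into System 3 (non-existing metasystem)",
    "PII4": "Inadequate representation vis-a-vis higher levels",
    "PII5": "'Headless chicken' (System 4 missing or failing)",
    "PII6": "Dissociation of System 4 and System 3",
    "PII7": "Inadequate management style",
    "PII8": "Schizophrenic System 3",
    "PII9": "Weak connection between System 3 and System 1",
    "PII10": "Hypertrophy of System 3",
    "PII11": "Lack or insufficient development of System 3*",
    "PII12": "Disjointed behavior within System 1",
    "PII13": "Authoritarian System 2",
    "PII14": "Autopoietic 'beasts'",
    "PII15": "Dominance of System 1: weak metasystem",
    "PII16": "Organizational autopoietic 'beasts'",
    "PII17": "Lack of metasystem",
}

INFORMATIONAL = {
    "PIII1": "Lack of information systems",
    "PIII2": "Fragmentation of information systems",
    "PIII3": "Lack of key communication channels",
    "PIII4": "Lack of or insufficient algedonic channels",
    "PIII5": "Communication channels incomplete or with inadequate capacity",
}

# All 26 pathologies keyed by code.
TAXONOMY = {**STRUCTURAL, **FUNCTIONAL, **INFORMATIONAL}
print(len(TAXONOMY))  # -> 26
```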
3. Impact of AI on Human Society
3.1. Definition of AI
“Narrow AI (designed to perform a specific task, using information from specific datasets, and cannot adapt to perform another task). Artificial General Intelligence (AGI) or Strong AI (as an AI system that can undertake any intellectual task/problem that a human can). AGI is a system that can reason, analyze, and achieve a level of understanding that is on par with humans, something that has yet to be achieved. Machine learning is a method that can achieve narrow AI; it allows a system to learn and improve from examples without all its instructions being explicitly programmed. Deep learning is a type of machine learning whose design has been informed by the structure and function of the human brain and the way it transmits information.”
3.2. Impact on Humans and Society
3.2.1. Positive Effects
- Assist people in everyday tasks, help them access information and knowledge, and help them pursue their creative endeavors.
- Increase Efficiency and Productivity: AI can automate tasks, analyze data quickly, and enhance decision-making processes, leading to increased efficiency in multiple sectors. It can provide personalized learning and potentially make tasks easier and safer. AI can also drive technological innovation and, in general, contribute to economic progress.
- Informed Decision-Making: AI may aid in analyzing complex datasets, providing insights that support policymakers in crafting evidence-based policies.
- Accelerate scientific advances in many fields, such as medicine, physics, climate sciences, etc.
- Improving Public Services: AI can benefit public services such as traffic management and resource allocation.
- Improving Healthcare: AI can support medical diagnosis, treatment, and patient care.
- Augmenting Human Capabilities: AI has the potential to revolutionize various aspects of life, serving as a powerful tool for human advancement and helping reduce inequalities in society.
3.2.2. Negative Effects and Risks of AI
- Privacy Erosion: The proliferation of AI-driven surveillance systems poses significant threats to individual privacy, infringing on individual privacy rights, as these technologies can monitor and analyze personal behaviors extensively (Manheim and Kaplan, 2019) [37].
- Bias, Discrimination, and Amplification of Inequality: AI can exacerbate social and economic disparities if not implemented thoughtfully, potentially leading to job displacement and unequal access to technological benefits. AI systems trained on biased data can perpetuate and even exacerbate existing inequalities, affecting marginalized groups disproportionately (The Guardian, 2025) [38].
- Misinformation and Manipulation: AI-generated content, such as deepfakes, can spread misinformation, undermining public trust in democratic institutions and democratic processes (Manheim and Kaplan, 2019) [37].
- Erosion of Accountability: The opacity of AI decision-making processes can lead to challenges in holding entities accountable for actions influenced by AI (Carnegie Endowment, 2024) [39].
- Potentially complex impacts on society with the possibility of unintended or unforeseen consequences (Manyika, 2025) [36].
“Artificial Intelligence is developing fast. It will change our lives by improving healthcare (e.g., making diagnosis more precise, enabling better prevention of diseases), increasing the efficiency of farming, contributing to climate change mitigation and adaptation, improving the efficiency of production systems through predictive maintenance, increasing the security of Europeans, and in many other ways that we can only begin to imagine. At the same time, Artificial Intelligence (AI) entails a number of potential risks, such as opaque decision-making, gender-based or other kinds of discrimination, intrusion in our private lives or being used for criminal purposes.”
“Europe… it can develop an AI ecosystem that brings the benefits of the technology to the whole of European society and economy:
for citizens to reap new benefits for example improved health care, fewer breakdowns of household machinery, safer and cleaner transport systems, better public services; for business development, for example a new generation of products and services in areas where Europe is particularly strong (machinery, transport, cybersecurity, farming, the green and circular economy, healthcare and high-value added sectors like fashion and tourism); and for services of public interest, for example by reducing the costs of providing services (transport, education, energy and waste management), by improving the sustainability of products and by equipping law enforcement authorities with appropriate tools to ensure the security of citizens, with proper safeguards to respect their rights and freedoms.
Given the major impact that AI can have on our society and the need to build trust, it is vital that European AI is grounded in our values and fundamental rights such as human dignity and privacy protection. Furthermore, the impact of AI systems should be considered not only from an individual perspective, but also from the perspective of society as a whole” [40].
- The Bletchley Declaration by Countries attending the AI Safety Summit (1–2 November 2023) [42], which focused on the following: (“…our agenda for addressing frontier AI risk will focus on:
  - Identifying AI safety risks of shared concern, building a shared scientific and evidence-based understanding of these risks, and sustaining that understanding as capabilities continue to increase in the context of a wider global approach to understanding the impact of AI in our societies.
  - Building respective risk-based policies across our countries to ensure safety in light of such risks, collaborating as appropriate while recognizing our approaches may differ based on national circumstances and applicable legal frameworks. This includes, alongside increased transparency by private actors developing frontier AI capabilities, appropriate evaluation metrics, tools for safety testing, and developing relevant public sector capability and scientific research.”)
- UN General Assembly (11 March 2024) “Seizing the opportunities of safe, secure and trustworthy artificial intelligence systems for sustainable development” [43].
“…Acknowledges that the United Nations system, consistent with its mandate, uniquely contributes to reaching global consensus on safe, secure and trustworthy artificial intelligence systems, that is consistent with international law, in particular, the Charter of the United Nations; the Universal Declaration of Human Rights; and the 2030 Agenda for Sustainable Development, including by promoting inclusive international cooperation and facilitating the inclusion, participation and representation of developing countries in deliberations.”
- Statement on Inclusive and Sustainable Artificial Intelligence for People and the Planet (AI Action Summit, posted on 11 February 2025) [44].
“Participants from over 100 countries, including government leaders, international organizations, representatives of civil society, the private sector, and the academic and research communities gathered in Paris on 10 and 11 February 2025 to hold the AI Action Summit.”
- Promoting AI accessibility to reduce digital divides.
- Ensuring AI is open, inclusive, transparent, ethical, safe, secure and trustworthy, taking into account international frameworks for all.
- Making innovation in AI thrive by enabling conditions for its development and avoiding market concentration, driving industrial recovery and development.
- Encouraging AI deployment that positively shapes the future of work and labor markets and delivers opportunity for sustainable growth.
- Making AI sustainable for people and the planet.
- Reinforcing international cooperation to promote coordination in international governance”.
3.2.3. Legislation to Regulate the AI Use
- Unacceptable risk. Prohibited. Example: systems considered a threat to people’s security, livelihoods and rights (e.g., social scoring, mass surveillance). Chapter II (Prohibited practices), Article 5.
- High risk. Requires conformity assessment. Systems whose function, purpose and modalities involve high risk (e.g., education, critical infrastructures). Chapter III (High-Risk AI Systems), Section 1, Article 6.
- Limited risk. Requires transparency. Systems used for interacting with people (e.g., chatbots). Chapter IV (Transparency Obligations for Providers and Deployers of Certain AI Systems), Article 50.
- Minimal risk. No mandatory requirements. The rest of the systems: they imply no obligations, but voluntary codes of conduct are recommended. Section 4 (Codes of practice), Articles 56–63. (A condensed sketch of this tier-to-obligation mapping follows this list.)
- Lack of AI transparency and explainability
- Job losses due to AI automation
- Social manipulation through algorithms
- Social surveillance with AI technology
- Lack of data privacy when using AI tools
- Biases due to AI
- Socioeconomic inequality because of AI
- Weakening ethics and goodwill because of AI
- Autonomous weapons powered by AI
- Financial crises brought about by AI algorithms
- Loss of human influence
- Uncontrollable self-aware AI
- Increased criminal activity
- Broader economic and political instability
3.2.4. Impacts of AI Deployment at Various Levels
1. Individual level.
2. Local level.
3. Regional level (various regions within a country).
4. Country level.
5. European level (as an example of transnational organizations).
6. World level.
7. Planet level (the recursive nesting of these levels is sketched below).
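Read through the VSM lens, these seven levels form a chain of recursion in which each level contains the one below it as an operational unit. Assuming the illustrative ViableSystem class sketched in Section 2.1 is in scope, the nesting can be written down directly; this is, again, only a sketch and not a claim about how such governance structures are actually organized.

```python
# Requires the illustrative ViableSystem dataclass sketched in Section 2.1.
LEVELS = [
    "individual", "local community", "region", "country",
    "Europe (transnational)", "world", "planet",
]

# Nest the levels from the innermost (individual) outwards, so that each level
# contains the previous one as an operational unit, mirroring the recursive
# containment of viable systems.
system_in_focus = None
for name in LEVELS:
    units = [system_in_focus] if system_in_focus is not None else []
    system_in_focus = ViableSystem(name, operational_units=units)

print(system_in_focus.name)               # -> planet
print(system_in_focus.recursion_depth())  # -> 6
```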
4. Organizational Pathologies and AI Diffusion
- Group I: Structural Pathologies
- P.I.1—Non-existence of vertical unfolding: When a necessary division of the environment and organization into sub-components is absent.
- Example: A national government might fail to recognize the need for specific bodies to address AI-related issues at different levels (local, regional, national), leading to an overburdened central authority. This results in an inability to address issues effectively due to the lack of sub-organizations focused on particular aspects of the problem.
- P.I.2—Absence of first-level recursion: When the organization starts its division at a second level, leaving the first level without an organization to manage the complexity of the whole environment.
- Example: A democratic system might create specific bodies to handle AI ethics and policy but lack a general entity that oversees the entire AI landscape and its global implications, leaving the system vulnerable to broader trends it is not tracking.
- P.I.3—Lack of recursion levels (middle levels): Vertical unfolding is accomplished, but intermediate recursion levels are left empty. This leaves the corresponding environmental variety to be dealt with at either the next or the previous recursion level or, even worse, to be handled by no one at all.
- Example: A system might have bodies dealing with AI at the national and local levels, but lack organizations focused on AI’s regional impacts. These unaddressed areas can lead to inconsistent policies and unequal distribution of resources and benefits related to AI. This lack of intermediate organizations can result in relevant issues not being addressed or being treated insufficiently.
- P.I.4—Entangled vertical unfolding: When organizations have confused lines of responsibility and belonging, especially when multiple relationships and different criteria for complexity unfolding exist.
- Example: A situation in which AI policy is influenced by multiple organizations (e.g., tech companies, government agencies, international bodies) without a clear structure of responsibility and communication, leading to conflicts of interest, disjointed implementation, and lack of accountability. This can lead to an erosion of public trust in the overall governance of AI. This pathology manifests when organizations lack proper communication channels reflecting their common relations of belonging and when those organizations are not represented in the bodies where they should be.
- Group II: Functional Pathologies
- P.II.1—Undefined or poorly defined identity: When an organization lacks clarity about its purpose, boundaries, and values.
- Example: A democratic nation might struggle to define its stance on AI ethics, leading to inconsistent policies, a lack of trust from citizens, and a susceptibility to external manipulation. This also translates to a lack of agreement on the values and goals that AI should serve within the democratic system. The system suffers from a lack of knowledge of its own identity and purpose.
- P.II.2—Collapse of System 5 into System 3: When System 5 (policy and identity) inappropriately intervenes in the operations of System 3 (integration and resource allocation), leading to a weakening of both systems.
- Example: A situation where the highest political authorities are overly involved in the day-to-day management of AI-related initiatives, hindering the operational independence of System 3. This would prevent the system from performing its main functions. System 5’s excessive intervention weakens its functions as well as those of System 4.
- P.II.3—Inadequate representation to higher levels: When an organization is unable to represent its interests and values to the systems that contain it, causing disconnection and lack of coherence.
- Example: A governmental body tasked with AI regulation may fail to communicate its needs and concerns to international regulatory bodies, leading to policies that are misaligned with the national context and the values of its society, also interrupting the transmission of values.
- P.II.4—“Headless Chicken”: System 4 malfunctions due to a lack of proper monitoring of the external and future landscape, leading to a failure to adapt to new changes and trends in AI.
- Example: A country that is slow to adopt AI literacy programs or to anticipate emerging risks associated with AI, lagging in innovation, and failing to adapt to the changing nature of social interactions and public discourse impacted by AI.
- P.II.5—Dissociation between System 4 and System 3: When Systems 4 (intelligence and planning) and 3 (integration) fail to work together harmoniously, leading to a lack of coordination and the inability to translate future plans into present actions.
- Example: A democratic nation develops strategic plans for AI development but fails to implement them effectively because of conflicts or misunderstandings between government planning and operation bodies or because the needs and limitations of the operational level (System 3) are not taken into account at the planning level (System 4). In this case, System 4 perceives System 3 as short-sighted, while System 3 perceives System 4 as unrealistic.
- P.II.6—Inadequate management style: When System 3 over-intervenes in the operational units of System 1, restricting the necessary autonomy of the operational units.
- Example: A government or agency that attempts to micromanage the development of AI tools at the local level, reducing the autonomy of the local agencies and hindering their ability to respond to specific needs.
- P.II.7—Weak connection between System 3 and System 1: When the relationship between System 3 and System 1 is not well established, causing a lack of communication and coordination.
- Example: A weak link between System 3 and System 1 appears when government guidelines on AI are not communicated or applied effectively at the local level, leading to disparities and implementation gaps.
- P.II.8—Hypertrophy of System 3: When System 3 is excessively developed while Systems 2 and 3* are insufficient, causing System 3 to be overwhelmed and reducing its capacity to coordinate the whole system.
- Example: An agency overcentralizes control of all AI implementations without creating adequate coordination mechanisms (System 2) or audit structures (System 3*), leading to inefficiency and a lack of adaptability. The interventions of System 3 lead to discouragement in the management of the operational units.
- P.II.9—Absence or insufficient development of System 3*: When System 3* (audit and data collection) is not well developed, leading to a lack of information about the system’s performance and potential problems.
- Example: There are no independent audits and assessments on AI’s impact on vulnerable groups, which creates a lack of accountability. A lack of proper data collection and monitoring about AI’s effects results in a lack of alignment of behaviors in the operational units.
- P.II.10—Fragmented behavior in System 1: A lack of coordination and collaboration among the operational units of System 1, leading to competition for resources, lack of a continuous flux between the units, and a general failure to work together.
- Example: Individual communities or organizations pursuing AI initiatives without proper coordination or a sense of overall objectives or strategies. This leads to duplications, inefficiencies, and unequal access to resources and opportunities.
- P.II.11—Autopoietic organizational beasts: When a subsystem focuses on its own goals and growth at the expense of the overall system’s purpose.
- Example: An AI ethics board that becomes more focused on its own institutional expansion and influence rather than on ensuring that ethical AI policies are integrated effectively, thereby becoming an “autopoietic beast.”
- P.II.12—Lack of a metasystem: When the functions proper to System 3, System 4, and System 5 are not clearly defined, and their exercise is diffused among different directors without a clear identification of the function to which each belongs.
- Example: A situation where the different parts of the government that are tasked with the management of AI fail to define which area is in charge of oversight, future planning, or policy development. They do not interact properly as different elements of a meta-system.
- Group III: Information System and Communication Channel Pathologies
- P.III.1—Absence of Information Systems: When there is no adequate infrastructure to provide the information necessary for decision-making throughout the organization.
- Example: A democratic system lacking a central platform to share information on AI policy will cause disjointed implementation and difficulties accessing the available information. The lack of information systems will produce a lack of connection between the functions, and the decisions made with incomplete, inappropriate, or delayed information will be poorly founded.
- P.III.2—Fragmentation of information systems: When useful information systems exist in isolation from one another, creating silos of information that do not communicate with each other.
- Example: A system in which different government agencies use incompatible data systems for tracking the impact of AI, leading to inconsistencies, difficulties in data integration, and duplication of efforts.
- P.III.3—Absence of essential communication channels: When the connections necessary to provide information between different parts of the system are missing.
- Example: There is a lack of communication between AI researchers, policymakers, and the public. This absence leads to a gap in understanding, creates distrust, and hinders a collaborative approach to AI governance. It also leads to an incomplete network in which the functions cannot perform due to the lack of information, the partial nature of the information, its unintelligible format, or its delayed arrival.
- P.III.4—Absence or Insufficiency of algedonic channels: The absence of alarm signals that can alert to critical problems in the system, which compromises the viability of the organization.
- Example: A lack of monitoring mechanisms or public feedback channels to identify and address unexpected consequences of AI deployment, such as algorithmic bias, or discriminatory outcomes. The absence of these channels prevents the timely activation of the system to react and mitigate risks.
- P.III.5—Communication channels incomplete or with inadequate capacity: When there are issues with how messages are sent, received, and understood, such as unclear language or delayed communication. This can result in misinterpretations, conflicts, and inefficient processes.
- Example: If the information regarding AI regulations or ethical guidelines is unclear or difficult to understand for local governments or public organizations, it leads to misinterpretations, lack of adherence, and inconsistent application of the guidelines, distorting the messages received by those who are intended to apply them.
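The mapping above can be turned, very roughly, into a diagnostic checklist that records which pathologies an observer believes are present in a given AI governance structure. The sketch below is hypothetical: the diagnose function and the example observations are inventions of this text, and it assumes the illustrative TAXONOMY dictionary sketched at the end of Section 2.3.

```python
def diagnose(observations: dict, taxonomy: dict) -> list:
    """Return a human-readable finding for each pathology code marked True.

    `observations` maps pathology codes (e.g. "PI3", "PII5") to a boolean
    recording whether the observer considers that pathology present.
    """
    return [
        f"{code}: {taxonomy[code]}"
        for code, present in observations.items()
        if present and code in taxonomy
    ]


# Hypothetical assessment of a national AI governance structure.
observations = {
    "PI3": True,     # no bodies addressing AI at the regional level
    "PII5": True,    # no systematic scanning of emerging AI trends (System 4)
    "PIII4": False,  # algedonic (alarm) channels do exist
}

for finding in diagnose(observations, TAXONOMY):
    print(finding)
```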
5. Conclusions
Funding
Data Availability Statement
Conflicts of Interest
Appendix A
| Pathology | Code | Description |
|---|---|---|
| I. STRUCTURAL PATHOLOGIES | | |
| I1. Non-existence of vertical unfolding | PI1 | The lack of an adequate vertical unfolding, when needed, renders it difficult or impossible for a single large organization to deal with the total variety it faces. |
| I2. Lack of recursion levels (first level) | PI2 | Vertical unfolding is accomplished, but the first recursion level is left empty, leaving part of the total environmental variety unattended. |
| I3. Lack of recursion levels (middle levels) | PI3 | Vertical unfolding is accomplished, but intermediate recursion levels are left empty. That leaves the corresponding environmental variety to be dealt with at either the next or the previous recursion level (which is difficult or impossible) or, even worse, to be handled by no one. |
| I4. Entangled vertical unfolding | PI4 | Various interrelated level memberships. Inadequate integration/communication between recursion levels when multiple memberships are present. |
| II. FUNCTIONAL PATHOLOGIES | | |
| PATHOLOGIES RELATED TO SYSTEM 5 | | |
| II1. Ill-defined identity | PII1 | Identity has not been sufficiently clarified or defined (“I do not know who I am”). |
| II2. Institutional schizophrenia | PII2 | Two or more different identity conceptions produce conflict within an organization. |
| II3. System 5 collapses into System 3 (non-existing metasystem) | PII3 | System 5 intervenes undesirably in the affairs of System 3. |
| II4. Inadequate representation vis-a-vis higher levels | PII4 | Poor connection between the System 5s of organizations pertaining to different recursion levels within the same global organization. |
| PATHOLOGIES RELATED TO SYSTEM 4 | | |
| II5. “Headless chicken” | PII5 | System 4 is missing or, if it does exist, does not work properly. |
| II6. Dissociation of System 4 and System 3 | PII6 | The System 4–System 3 homeostat does not work properly. Each component system carries out its function separately but does not communicate and interact as it should with the other system. |
| PATHOLOGIES RELATED TO SYSTEM 3 | | |
| II7. Inadequate management style | PII7 | System 3 intervenes excessively or inadequately in the management affairs of System 1. For example, an authoritarian management style constrains System 1’s autonomy. |
| II8. Schizophrenic System 3 | PII8 | Conflict arises between the roles of System 3 due to its simultaneous inclusion in both the system (operations) and the metasystem (management). |
| II9. Weak connection between System 3 and System 1 | PII9 | The operational units that make up System 1 operate independently, lacking adequate integration and support from System 3. |
| II10. Hypertrophy of System 3 | PII10 | System 3 arrogates to itself too much activity, some of which should be carried out by System 3*, System 2, or System 1 directly. |
| PATHOLOGIES RELATED TO SYSTEM 3* | | |
| II11. Lack or insufficient development of System 3* | PII11 | The lack or insufficient development of a System 3* allows undesirable behavior and/or activities to go on in System 1. |
| PATHOLOGIES RELATED TO SYSTEM 2 | | |
| II12. Disjointed behavior within System 1 | PII12 | A lack of adequate interrelations between the elemental operating units that make up System 1 leads to their fragmentary behavior. |
| II13. Authoritarian System 2 | PII13 | System 2 shifts from a service orientation towards authoritarian behavior. |
| PATHOLOGIES RELATED TO SYSTEM 1 | | |
| II14. Autopoietic “beasts” | PII14 | Elemental operating units constituting System 1 behave as if their individual goals were their only reason for being. Disregarding any considerations transcending their interests, they ignore the need to harmonize their individual goals within an integrated System 1. |
| II15. Dominance of System 1: weak metasystem | PII15 | The power of System 1 is not handled within the limits set by the metasystem (System 3, System 4, and System 5). |
| PATHOLOGIES RELATED TO THE COMPLETE SYSTEM | | |
| II16. Organizational autopoietic “beasts” | PII16 | The uncontrolled growth and activity of some individual parts of the organization put the viability of the whole organization at risk. |
| II17. Lack of metasystem | PII17 | Insufficient or missing definitions of identity and purpose. A weak or incomplete metasystem shifts the balance between the “outside and future” and the “here and now” management-oriented activities towards the “here and now”, leaving adaptation-oriented activities unattended. Inadequate connections exist between organizations at different recursion levels. |
| III. PATHOLOGIES RELATED TO INFORMATION SYSTEMS AND COMMUNICATION CHANNELS | | |
| III1. Lack of information systems | PIII1 | Some of the necessary information systems are missing, insufficiently developed, or not working correctly. |
| III2. Fragmentation of information systems | PIII2 | Information systems exist in the organization, but they work in a fragmentary way, with poor or non-existent connections between them. |
| III3. Lack of key communication channels | PIII3 | Certain required communication channels that should connect the different functions do not exist or, if they do, are either inadequately designed or work improperly. |
| III4. Lack of or insufficient algedonic channels | PIII4 | Necessary algedonic channels are missing or, if they do exist, are poorly designed for their function or do not work correctly. |
| III5. Communication channels incomplete or with inadequate capacity | PIII5 | Necessary communication channels do not have all the elements required for transmitting the needed information (transducers, channel capacity, and a sender-receiver in both directions). |
References
- Beer, S. The Heart of Enterprise; John Wiley & Sons: Chichester, UK, 1979. [Google Scholar]
- Beer, S. Brain of the Firm, 2nd ed.; John Wiley & Sons: Chichester, UK, 1981. [Google Scholar]
- Beer, S. Diagnosing the System for Organizations; John Wiley & Sons: Chichester, UK, 1985. [Google Scholar]
- Beer, S. The viable system model: Its provenance, development, methodology and pathology. In The Viable System Model, Interpretations and Applications of Stafford Beer’s VSM; Espejo, R., Harnden, R., Eds.; Wiley: Chichester, UK, 1989. [Google Scholar]
- Pérez Ríos, J. Aplicación de la Cibernética Organizacional al estudio de la viabilidad de las organizaciones. In Patologías Organizativas Frecuentes (Parte II); DYNA: Bilbao, Spain, 2008; Volume 83. [Google Scholar]
- Pérez Ríos, J. Diseño y Diagnóstico de Organizaciones Viables. Un Enfoque Sistémico; Iberfora 2000: Valladolid, Spain, 2008; ISBN 978-84-612-5845-1. [Google Scholar]
- Pérez Ríos, J. Models of Organizational Cybernetics for Diagnosis and Design. Kybernetes Int. J. Syst. Cybern. 2010, 39, 1529–1550. [Google Scholar] [CrossRef]
- Pérez Ríos, J. Design and Diagnosis for Sustainable Organizations: The Viable System Method; Springer: Berlin/Heidelberg, Germany; New York, NY, USA, 2012. [Google Scholar]
- Ashby, W.R. An Introduction to Cybernetics; Chapman Hall: London, UK, 1956. [Google Scholar]
- Conant, R.C.; Ashby, W.R. Every good regulator of a system must be a model of that system. Int. J. Syst. Sci. 1970, 1, 89–97. [Google Scholar] [CrossRef]
- Espejo, R.; Bowling, D.; Hoverstadt, P. The viable system model and the VIPLAN software. Kybernetes Int. J. Syst. Cybern. 1999, 28, 661–678. [Google Scholar] [CrossRef]
- Espejo, R. Observing organisations: The use of identity and structural archetypes. Int. J. Appl. Syst. Stud. 2008, 2, 6–24. [Google Scholar] [CrossRef]
- Espejo, R.; Reyes, A. Organisational Systems: Managing Complexity with the Viable System Model; Springer: Berlin/Heidelberg, Germany, 2011. [Google Scholar]
- Schwaninger, M. Intelligent Organizations—Powerful Models for Systemic Management; Springer: Berlin/Heidelberg, Germany, 2009. [Google Scholar]
- Hoverstadt, P. The Fractal Organization: Creating Sustainable Organizations with the Viable System Model; Wiley: Chichester, UK, 2008. [Google Scholar]
- Espinosa, A. Sustainable Self-Governance in Businesses and Society: The Viable System Model in Action; Francis & Taylor: London, UK; Routledge: London, UK, 2023; ISBN 9781032354972. [Google Scholar]
- Lassl, W. The Viability of Organizations; Springer: Berlin/Heidelberg, Germany, 2019; Volume 1–3. [Google Scholar]
- Pfiffner, M. The Neurology of Business: Implementing the Viable System Model; Springer: Berlin/Heidelberg, Germany, 2022. [Google Scholar]
- Eccles, J.C. How the Self Controls Its Brain; Springer: Berlin/Heidelberg, Germany, 1994. [Google Scholar]
- von Foerster, H. Cybernetics of Cybernetics, 2nd ed.; Future Systems Inc.: Minneapolis, MN, USA, 1995. [Google Scholar]
- von Foerster, H. Ethics and Second-Order Cybernetics. In Understanding Understanding. Essays on Cybernetics and Cognition; Springer: New York, NY, USA, 2003; pp. 287–304. ISBN 0-387-95392-2. [Google Scholar]
- Lepskiy, V. Evolution of cybernetics: Philosophical and methodological analysis. Kybernetes 2018, 47, 249–261. [Google Scholar] [CrossRef]
- Espejo, R.; Lepskiy, V. An agenda for ontological cybernetics and social responsibility. Kybernetes 2021, 50, 694–710. [Google Scholar] [CrossRef]
- Hetzler, S. Pathological systems. Int. J. Appl. Syst. Stud. 2008, 2, 25–39. [Google Scholar] [CrossRef]
- Schwaninger, M. Modeling with Archetypes: An Effective Approach to Dealing with Complexity. Computer Aided Systems Theory-EUROCAST 2003; Springer: Berlin/Heidelberg, Germany, 2003; LNCS Volume 2809, pp. 127–138. [Google Scholar]
- Katina, P.F. Systems Theory-Based Construct for Identifying Metasystem Pathologies for Complex System Governance. Ph.D. Thesis, Engineering Management & Systems Engineering. Old Dominion University, Norfolk, VA, USA, 2015. [Google Scholar] [CrossRef]
- Katina, P.F. Emerging systems theory-based pathologies for governance of complex systems. Int. J. Syst. Syst. Eng. 2015, 6, 144–159. [Google Scholar] [CrossRef]
- Katina, P.F. Systems Theory as a Foundation for Discovery of Pathologies for Complex System Problem Formulation. In Applications of Systems Thinking and Soft Operations Research in Managing Complexity; Masys, A., Ed.; Advanced Sciences and Technologies for Security Applications; Springer: Cham, Switzerland, 2016. [Google Scholar] [CrossRef]
- Keating, C.B.; Katina, P.F. Prevalence of pathologies in systems of systems. Int. J. Syst. Syst. Eng. 2012, 3, 243–267. [Google Scholar] [CrossRef]
- Keating, C.B.; Katina, P.F.; Chesterman, C.W., Jr.; Pyne, J.C. (Eds.) Complex System Governance: Theory and Practice. In Engineering Management & Systems Engineering Faculty Books; Springer: Berlin/Heidelberg, Germany, 2022. [Google Scholar]
- Morales Allende, M.M.; Ruiz-Martin, C.; Lopez-Paredes, A.; Perez Ríos, J. Aligning Organizational Pathologies and Organizational Resilience Indicators. Int. J. Prod. Manag. Eng. 2017, 5, 107–116. [Google Scholar] [CrossRef]
- Ruiz-Martin, C.; Pérez Ríos, J.; Wainer, G.; Pajares, J.; Hernández, C.; López Paredes, A. The Application of the Viable System Model to Enhance Organizational Resilience. In Advances in Management Engineering; Hernández, C., Ed.; Lecture Notes in Management and Industrial Engineering; Springer International Publishing: Berlin/Heidelberg, Germany, 2017. [Google Scholar]
- Yolles, M.; Flink, G. Personality, pathology and mindsets: Part 3 (of 3)—Pathologies and corruption. Kybernetes 2014, 43, 135–143. [Google Scholar] [CrossRef]
- European Union EUR-Lex, Document 32024R1689. (EU AI Act: Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024). 2024. Available online: https://eur-lex.europa.eu/eli/reg/2024/1689/oj/eng (accessed on 30 January 2025).
- Rough, E.; Sutherland, N. Debate on Artificial Intelligence. Debate Pack 28 June 2023 Number CDP 2023/0152. UK Parliament. House of Commons Library. 2023. Available online: https://commonslibrary.parliament.uk/research-briefings/CDP-2023-0152/ (accessed on 1 April 2025).
- Manyika, J. Getting AI Right: A 2050 Thought Experiment. 2025. Available online: https://www.digitalistpapers.com/essays (accessed on 23 February 2025).
- Manheim, K.; Kaplan, L. Artificial Intelligence: Risks to Privacy and Democracy. Yale J. Law Tech. 2019, 21, 106. Available online: https://yjolt.org/artificial-intelligence-risks-privacy-and-democracy?utm_source=chatgpt.com (accessed on 14 February 2025).
- The Guardian ‘Engine of Inequality’: Delegates Discuss AI’s Global Impact at Paris Summit. 2025. Available online: https://www.theguardian.com/technology/2025/feb/10/ai-artificial-intelligence-widen-global-inequality-climate-crisis-lead-paris-summit?CMP=share_btn_url (accessed on 23 February 2025).
- Csernatoni, R. Carnegie Europe. Carnegie Endowment for International Peace (2024). Can Democracy Survive the Disruptive Power of AI? (18 Dec 2024). 2024. Available online: https://carnegieendowment.org/research/2024/12/can-democracy-survive-the-disruptive-power-of-ai?lang=enn (accessed on 4 March 2025).
- European Commission White Paper on Artificial Intelligence—A European Approach to Excellence and Trust. Brussels, 19.2.2020. 2020. Available online: https://commission.europa.eu/system/files/2020-02/commission-white-paper-artificial-intelligence-feb2020_en.pdf (accessed on 15 February 2025).
- Tai, M.C.-T. The impact of artificial intelligence on human society and bioethics. Tzu Chi Med. J. 2020, 32, 339–343. [Google Scholar] [CrossRef] [PubMed] [PubMed Central]
- Gov.uk (updated 13 February 2025). The Bletchley Declaration by Countries Attending the AI Safety Summit, 1–2 November 2023. Available online: https://www.gov.uk/government/publications/ai-safety-summit-2023-the-bletchley-declaration/the-bletchley-declaration-by-countries-attending-the-ai-safety-summit-1-2-november-2023 (accessed on 24 January 2025).
- United Nations, General Assembly. Seventy-Eighth Session, Agenda Item 13. 11 March 2024. Available online: https://docs.un.org/en/A/78/L.49. (accessed on 15 February 2025).
- Élysée (Official website of the President of France). Artificial Intelligence Action Summit (10–11 February 2025). Available online: https://www.elysee.fr/en/emmanuel-macron/2025/02/11/statement-on-inclusive-and-sustainable-artificial-intelligence-for-people-and-the-planet (accessed on 4 March 2025).
- Thomas, M. 14 Risks and Dangers of Artificial Intelligence (AI). BuiltIn. 2024. Available online: https://builtin.com/artificial-intelligence/risks-of-artificial-intelligence (accessed on 26 January 2025).
- Rainie, L.; Anderson, J. A New Age of Enlightenment? A New Threat to Humanity?: The Impact of Artificial Intelligence by 2040. ELON University. 2024. Available online: https://imaginingthedigitalfuture.org/reports-and-publications/the-impact-of-artificial-intelligence-by-2040/ (accessed on 24 January 2025).
- Schertel, L.; Stray, J. AI as a Public Good: Ensuring Democratic Control of AI in the Information Space. Forum on Information & Democracy. February 2024. Available online: https://informationdemocracy.org/wp-content/uploads/2024/03/ID-AI-as-a-Public-Good-Feb-2024.pdf (accessed on 24 January 2025).
- Kreps, S.; Kriner, D. How AI Threatens Democracy. J. Democr. 2023, 34, 122–131. [Google Scholar] [CrossRef]
- Ahmad, S.F.; Han, H.; Alam, M.M.; Rehmat, M.; Irshad, M.; Arraño-Muñoz, M.; Ariza-Montes, A. Impact of artificial intelligence on human loss in decision making, laziness and safety in education. Humanit. Soc. Sci. Commun. 2023, 10, 311. [Google Scholar] [CrossRef] [PubMed]
- Innerarity, D. Artificial Intelligence and Democracy. UNESCO 2024. Available online: https://www.unesco.org/en/articles/artificial-intelligence-and-democracy (accessed on 29 January 2025).
- Anderson, J.; Rainie, L.; Luchsinger, A. Artificial Intelligence and the Future of Humans. Pew Research Center (10 December 2018). 2018. Available online: https://www.pewresearch.org/internet/2018/12/10/artificial-intelligence-and-the-future-of-humans/ (accessed on 26 January 2025).
- Read, A. A Democratic Approach to Global Artificial Intelligence (AI) Safety. Policy Brief. November 2023. WFD. Available online: https://www.wfd.org/sites/default/files/2023-11/A%20democratic%20approach%20to%20global%20artificial%20intelligence%20%28AI%29%20safety%20v2_0.pdf (accessed on 24 January 2025).
- Summerfield, C.; Argyle, L.; Bakker, M.; Collins, T.; Durmus, E.; Eloundou, T.; Gabriel, I.; Ganguli, D.; Hackenburg, K.; Hadfield, G.; et al. How Will Advanced AI Systems Impact Democracy. arXiv 2024, arXiv:2409.06729. [Google Scholar]