Article

Defining Complex Adaptive Systems: An Algorithmic Approach

Department of Computer Science, School of Computing and Engineering, University of Huddersfield, Queensgate, Huddersfield HD1 3DH, UK
*
Author to whom correspondence should be addressed.
Systems 2024, 12(2), 45; https://doi.org/10.3390/systems12020045
Submission received: 25 October 2023 / Revised: 25 January 2024 / Accepted: 25 January 2024 / Published: 30 January 2024

Abstract
Despite a profusion of literature on complex adaptive system (CAS) definitions, it is still challenging to definitively answer whether a given system is or is not a CAS. The challenge generally lies in deciding where the boundary between a complex system (CS) and a CAS falls. In this work, we propose a novel definition for CASs in the form of a concise, robust, and scientific algorithmic framework. The definition allows a two-stage evaluation of a system to first determine whether it meets complexity-related attributes before exploring a series of attributes related to adaptivity, including autonomy, memory, self-organisation, and emergence. We demonstrate the appropriateness of the definition by applying it to two case studies in the medical and supply chain domains. We envision that the proposed algorithmic approach can provide an efficient auditing tool to determine whether a system is a CAS, while also providing insights for the relevant communities to optimise their processes and organisational structures.

1. Introduction

Complex adaptive systems (CASs) exist in almost every aspect of life as well as in every realm of research. Examples of CASs include human systems, human society, ecosystems, stock markets, immune systems, health care systems, economic systems, supply chain management systems, language, social systems, and business models. Researchers distil a CAS definition by giving nature-inspired examples, such as the human immune system, or by listing the characteristics of a CAS found in systems developed by humans, such as healthcare systems, supply chain management systems, or societal systems [1,2,3]. Despite their ubiquitous nature, there is no universally accepted definition of what CASs are (or are not), though there have been several attempts at a definition [4].
The difficulty in arriving at a consensus around a CAS definition lies in deciding on inclusion and exclusion criteria. In most cases, the characteristics of a CAS are chosen based on what is suitable to a given system and what is relevant to a particular context. Moreover, complications arise from the juxtaposition of the properties of CASs, complex systems (CSs) in general, and agent-based approaches that are commonly used for the modelling and simulation of a CAS. Overlapping properties and vague boundaries between these make it more difficult to decide what should and should not be defined as a CAS. The pluralistic nature of the relevant literature hinders the development of a rigorous definition of a CAS.
The absence of an agreed CAS definition is problematic, as it can prevent researchers from applying CAS theory to decide whether a system is a CAS and from selecting a system that meets the relevant requirements. Several examples can be found in the literature where authors face a dilemma in defining a CAS and its applications.
For example, Hodiamont et al. [5] attempted to apply CAS theory concepts to palliative care while noting that complexity concepts, in particular, lack clarity. They argue that future research can benefit from drawing on a theoretical model of a CAS, especially with regard to developing interventions and understanding complex interactions, which are crucial for bringing innovation and change into healthcare. Similarly, Hughes et al. [6] recommended the use of complexity theory to rethink the English healthcare system. Their systematic review points towards the utilisation of CS theory to address the intricacies and challenges exhibited by English healthcare systems, while the lack of an agreed CAS definition hampers their ability to apply CAS theory to the English healthcare system. Jagustović et al. [7] utilised a mixture of attributes from systems thinking, CSs, and CASs. The authors argue that a lack of agreed definitions in the wider systems thinking area, including CSs and CASs, may hinder the ability to develop and apply systems thinking skills, especially in the context of dealing with complex issues, such as food security.
In this paper, we propose a positional synthesis of CAS definitions following a two-stage algorithmic approach. The first stage focuses on complexity-related properties, and the second stage deals with adaptive aspects, including self-organisation and emergence. At the start of the second stage, the agents have to be autonomous, pro-active, sociable, and reactive; these properties are considered the boundary line between a CS and a CAS. From there, memory, learning and adaptation, aggregate behaviour, and an evolutionary process represent a step-by-step approach towards self-organisation and emergence and a complete CAS definition.
In summary, the contributions of this article are the following:
  • A clear delineation between a CS and a CAS, providing minimal agent properties to meet a CS definition and an ordered set of properties to meet a full CAS definition;
  • A robust algorithmic definition of a CAS that can act as a basis for an auditing tool that can determine whether a system is a CAS;
  • An exploration of the proposed definition through two case studies.
The remainder of this article is organised as follows. Section 2 summarises the results of a systematic review of CAS definitions, offering an analysis of key debates that inform the proposed algorithmic CAS definition in Section 3.
The application of this definition in a case study in the healthcare domain is provided in Section 4, followed by a second case study on supply chain management in Section 5. Finally, Section 6 discusses the potential impact of the proposed definition and concludes.

2. Review of CAS Definitions

2.1. Systematic Review Methodology

We conducted a systematic literature review following the guidelines of Kitchenham and Charters [8], relying primarily on keyword-based search. We used a conjunction of two levels of keywords. The first level includes a disjunction between the terms “Complex System” and “Complex Adaptive System”. The second level includes qualifiers to point towards an exploration of definitions rather than particular systems and includes a disjunction between the terms “definition”, “principles”, and “characteristics”.
In order to maximise coverage, we opted to use Google Scholar as the primary resource for automated search. Elsevier’s Scopus and Clarivate’s Web of Science were also considered, but the decision to use Google Scholar was made primarily to benefit from its comprehensive coverage, combined with the fact that transparency in the search and ranking heuristics employed [9] was not considered a requirement for our review. We also did not limit our search to a particular time period, acknowledging that many seminal papers on the definition of complexity were published many decades ago.
In order to ensure the reviewed studies remain within the scope of defining a CS and CAS, we employed several inclusion and exclusion criteria. We first included only studies that are peer-reviewed and written in the English language. Additionally, we focused only on studies that were published in books or peer-reviewed journals and also excluded any studies that had not been cited at least once. Finally, while studies may include case studies and applications that involve the use of CS or CAS definitions, they must contribute to these definitions rather than using previously published definitions.
A Google Scholar search (as detailed above) yielded more than 18,000 results. Through filtering and analysis according to the aforementioned criteria, 12 studies were selected to be presented in detail in the remainder of this paper.

2.2. Reviewed Definitions

According to Cilliers [10], complexity is the outcome of high-level interactions among elements that operate locally with limited information, with a CS including the following characteristics:
  • A CS comprises several interconnected elements interacting dynamically;
  • These interactions are nonlinear, exhibiting rich behavioural patterns and competition;
  • Because CSs constantly change and evolve, they do not operate under equilibrium conditions, as equilibrium would render the system static;
  • Individual elements in CSs do not have knowledge of the behaviour of the whole system.
Ladyman et al. [11] define a CS as a systematic placement of interacting elements in a disordered manner exhibiting organisation and memory. In contrast to Cilliers [10], the interactions do not need to be dynamic and nonlinear, but they need to take place without the need for any sources outside the system. For Ladyman et al. [11], the defining characteristic of a CS is the emergence of a robust order at the system level out of disordered interactions at the microscopic level.
Simon [12] defines complexity in the form of decomposable hierarchies. A CS consists of interconnected subsystems, and those subsystems have further interconnected subsystems and so on. The interactions among and within subsystems are different depending on the level of focus. That difference may be, for instance, in the order of magnitude of the interactions in a decomposable system or in the behavioural patterns of individual elements. These hierarchies representing CSs take time to evolve, and this evolution does not necessarily yield the ability of a CS to reproduce itself.
Rudall and Wallis [13] derived insights into a CAS definition by collecting 20 definitions and analysing the diverse variations of these definitions from different perspectives. The authors categorised the definitions in terms of their use by professionals and attempted to elicit a single CAS definition from the amalgamation of the collected ones. Due to the diverse variations between the definitions, it is not clear which CAS definition is suitable for which system. Common CAS characteristics across definitions were identified, including goal-oriented elements and nonlinearity, but terms such as self-organisation, emergence, and evolutionary processes enjoy a lesser degree of acceptance in non-academic communities. In contrast, the latter properties are quite commonly referenced by academics.
Chan [14] describes a CAS in terms of attributes such as distributed control, connectivity, co-evolution, sensitive dependence on initial conditions, emergent order, distance from equilibrium, and a state of paradox. This work highlights the importance of defining complexity first before delving into other properties. The presented analysis of properties is deeply rooted in chaos and complexity theory, with a flavour of emergence.
According to Holland [15], a CAS supports evolution, aggregate behaviour, and anticipation. Its elements have distributed positions without central control and authority. These elements adapt and change according to changes in the circumstances. A system’s ability to change its rules leads towards adaptations. The evolution of a CAS is a continuous process generating new forms of emergent behaviour, history, and context, making it challenging to develop a theory or perform an experiment. Aggregate behaviour represents the amalgamation of the interactions among elements. The design of elements enables them to anticipate the effects of a number of reactions. Anticipation is the only property that differentiates between a CAS and a CS. Emergence in a CAS becomes complicated and difficult to assimilate because of anticipation.
Plsek and Greenhalgh [16] defined a CAS as a set of agents interconnected with each other in a way that an individual agent can affect the actions of other agents, and these agents may exhibit unpredictable behaviour as a result. The core properties of a CAS include emergent behaviour, the agent’s internal rules, the adaptation of the systems and its agents, the co-evolution of the subsystems, non-linearity, and unpredictability.
Zimmerman et al. [1] refer to a CAS as a collection of closely interconnected and interacting agents that carry out tasks from their structural rules or schema [17]. The word “complex” in a CAS refers to a significant number of connections among several diverse agents. The word “adaptive”, in turn, represents the agent’s learning and changes as a result. Finally, the word “system” is a collection of interconnected and interdependent entities. A list of properties of a CAS includes causal factors, the diverse connections among agents, distributed control that leads to self-organization, co-evolution, nonlinear behaviour, and the dependence of a CAS on its history.
Agent-based modelling (ABM) is one of the most widely used approaches to defining a CAS [4,18]. A universally accepted definition of an agent does not exist. Wooldridge and Jennings [19] describe the following agent properties: autonomy, reactivity, pro-activeness, and social ability. Franklin and Graesser [20] expand the above properties to include the following: flexibility, mobility, character, temporal continuity, learning, and adaptation. It is also noted that the reactivity of an agent is an alias for sensing and acting, being "goal-oriented" corresponds to pro-activeness, the communication of agents is referred to as social ability, and adaptivity is closely associated with learning. Castle and Crooks [21] list similar properties from the literature. This list includes autonomy, heterogeneity, activity, pro-activeness, goal-directed agents, reactive or perceptive agents, bounded rationality, interacting or communicative agents, mobility, and learning and adaptive agents.

2.3. Analysis

From the above literature, it is evident that there is no consensus in academic research on the specific properties that define a CAS, and one of the main reasons appears to be a similar lack of consensus in understanding the properties of agents in an agent-based model (ABM) of a CAS. The following research questions arise in this context:
  • What are the minimal agent properties required to define a CS or a CAS?
  • What are the minimal properties for a system to be considered a CS?
  • What are the minimal properties for a system to be considered a CAS?
  • How can a CS and a CAS be differentiated?
In the remainder of this section, we highlight some of the challenges and debates from the reviewed literature that inform the development of the proposed CAS definition in Section 3.

2.3.1. Complexity and Hierarchy

According to Zimmerman et al. [1], a CS contains multiple interactions among different elements. The complexity of a system involves the formation of the inter-relationships among its elements. Previous experience and the feedback of information act as sources of selectivity to achieve the evolution of a CS. History helps validate and justify a particular evolution step.
In terms of hierarchy, researchers disagree on its relevance to a CS. Simon [12] refers to hierarchy as one of the pivotal building blocks of a CS that represents the relationships among subsystems. Such a hierarchy exhibits the aforementioned evolutionary process instead of a traditional hierarchy that exercises authority among subsystems. These evolutionary hierarchies in CSs are decomposable. On the other hand, Rouse [2] argues that hierarchical decomposition does not suit a CAS primarily because of the potential for information loss during the interactions among the subsystems.

2.3.2. Self-Organisation and Emergence

Cilliers ([10], p. 90) defines self-organisation as the ability of CSs to “develop or change internal structure spontaneously and adaptively in order to cope with, or manipulate, their environment”. Zimmerman et al. [1] also highlight distributed control, instead of centralised control, as the characteristic that enables a system to self-organise. Viewed more broadly, self-organisation for Camazine et al. ([22], p. 7) refers to any pattern-formation process.
There is a debate in the literature on whether self-organisation and emergence should be included in the definition of a CS or should be reserved for a CAS. Zimmerman et al. [1] describe emergence as the outcome of relationships among agents in a CAS. Their work emphasises the fact that emergence portrays an unpredictable situation in which the evolution of the system cannot be described, promoting the idea that the whole of a system is more than the sum of its parts. Ladyman et al. [11] posit that the inclusion of emergence in a CS may be problematic, as emergence represents an increase in complexity, and these are linked concepts.
Amaral and Ottino [23] differentiate between complicated and complex systems, with the former lacking the ability to self-organise and adapt that the latter possess. As such, their definitions treat self-organisation and adaptability as aspects of complexity, while a system that merely has multiple interconnected, interacting agents with limited capacity to respond to changes in the environment is considered complicated rather than complex.

2.3.3. Minimal CS and CAS Properties

The reviewed research reveals a common pattern in defining a CS and a CAS. The first step of this pattern is to establish that a system contains several interacting elements with nonlinear behaviour. The second step is to define self-organisation and emergence as representing an evolutionary process. These steps are common across the literature [10,11,12,14,15,24,25,26]. However, the boundaries between a CS and a CAS remain unclear, and some definitions in the literature allow the erroneous inference that every CS is a CAS, while others use CS to refer to what is termed a CAS in other sources.
CAS definitions, in terms of their characteristics, share some properties that are described by almost every researcher. There are variations in the interpretation of the characteristics of a CAS, but core concepts such as the complexity of a system and self-organisation exist. Simon [12], Holland [15], and Zimmerman et al. [1] refer to the history and feedback of information of a CAS as pivotal elements.
Table 1 summarises the definitions as presented in the literature reviewed in this section. We will further expand on our critical analysis of these and other concepts related to CS and CAS in the next section as we explain our proposed CAS definition.

3. A Novel CAS Definition

Drawing from the analysis in the previous section, here, we propose a definition of a CAS that follows a two-stage structure with preconditions. The first stage represents the complex systems part of the definition, and the second stage defines a mechanism under which self-organisation and emergence can be achieved to fulfil the adaptive facet. Preconditions are attached to each stage, and the second stage cannot be considered unless all first-stage preconditions are met. The proposed definition can also act as an algorithmic framework explaining the steps that are necessary and sufficient to establish that a system is a CAS. Note that the sequencing of conditions implies a prerequisite relationship: for instance, it does not make sense to check whether agents learn and adapt to changes unless it has been confirmed that states and behavioural patterns are stored. The intention of the proposed algorithmic approach is to provide a stepwise process of establishing conditions that then allow other more complex conditions to be evaluated, which would not be possible directly. Additionally, when a condition is not met, the algorithm returns to the previous condition that was satisfied and remains there until the available information changes, reflecting the maximum subset of CS or CAS conditions that are satisfied rather than ending with an outcome of not being a CS or not being a CAS.
The framework follows from the work of Cilliers [10] and Holland [15] and adopts the properties of the weak agent definition introduced by Wooldridge and Jennings [19]. We propose to deal with the contradiction between hierarchy and self-organisation by using linked levels as an alternative to hierarchy. These levels do not represent sub-systems or hierarchies and do not influence each other or the environment. We also propose representing history as distributed memory, asserting that self-organisation and emergence are more visible with the use of memory.

3.1. First Stage: Complex System

To begin with, as a system, a CS is expected to contain more than one element. Hence, the root precondition that is the foundation of both the first stage and the framework as a whole is the existence of multiple, interconnected elements. We will refer to these elements as agents, following the established agent-based modelling paradigm. If this condition is not met, then there is no reason to explore any other properties, and we can safely deduce that a system is neither a CS nor a CAS.
Having several such agents, however, is not enough to provide an assurance of the complexity of a system. Inter-relationships, interdependencies, and interactions among these elements guarantee complex behaviour in a system. So, a CS must have several interconnected agents with high-level interdependencies among them.
As mentioned in Section 2.3.1, complexity science scholars do not agree on whether the aforementioned inter-relationships between agents need to correspond to a hierarchy. Our proposal is more aligned with the viewpoint of Rouse [2], considering that the hierarchy concept is not aligned with the core principles of a CAS. If a hierarchy in a system exhibits command and control attributes in subsystems or if the higher-level systems influence the sublevel systems and vice-versa, then the hierarchy violates one of the basic principles of self-organisation. This is summarised in Proposition 1.
Proposition 1
(Complex systems and hierarchy). A hierarchical organisation of agents is not a prerequisite for a complex system. It suffices that agents exhibit the characteristics in Definition 1 without needing to be organised in a hierarchy.
The second and final precondition for a system to be considered a CS is nonlinearity. More specifically, the interactions between the elements of the system need to be rich and dynamic in nature. This can involve competition, a high level of interdependencies, and co-operation among agents that gives rise to nonlinear behaviour.
Based on these, Definition 1 captures the conditions for completing the first stage of our framework.
Definition 1
(Complex system). A system is a Complex System if and only if it contains multiple interconnected and interdependent interacting agents, and these agents exhibit nonlinear behaviour.
It should be noted that Definition 1 aligns best with what Amaral and Ottino [23] call complicated systems, as discussed in Section 2.3.2. This is because the authors reserve the term complex system to refer to what we and others in the literature [4,14,15,27,28,29] refer to as a CAS to further highlight their ability to adapt. Moreover, Amaral and Ottino [23] do not explicitly name nonlinearity as a characteristic of complicated systems, and they mention nonlinear dynamics as a requirement to study complex systems.
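To make the first-stage check concrete, the following minimal Python sketch encodes Definition 1 as a predicate over a simple system description. It is an illustration of the definition only; the class, field, and function names are our own assumptions and not part of any published tooling.

```python
from dataclasses import dataclass, field


@dataclass
class SystemDescription:
    """Minimal, illustrative description of a candidate system."""
    agents: list[str]                                                 # identifiers of the system's elements
    interactions: set[tuple[str, str]] = field(default_factory=set)   # pairs of interacting agents
    nonlinear_interactions: bool = False                              # rich, dynamic interactions (competition, co-operation)


def is_complex_system(system: SystemDescription) -> bool:
    """Definition 1: multiple interconnected, interdependent agents exhibiting nonlinear behaviour."""
    has_multiple_agents = len(system.agents) > 1
    # Every agent takes part in at least one interaction (interconnection and interdependence).
    involved = {a for pair in system.interactions for a in pair}
    all_interconnected = has_multiple_agents and all(a in involved for a in system.agents)
    return all_interconnected and system.nonlinear_interactions
```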

3.2. Second Stage: Complex Adaptive System

The second stage of our definition is focused on adaptive behaviour (active rather than passive, in the sense of adaptation being driven by agent behaviour), which we consider the core of a CAS. Based on our interpretation of the literature, such a system is not designed by a person or a professional system designer. Instead, it is the result of an amalgamation of the interactions among the autonomous agents in a CAS. This is referred to as an emergent phenomenon. Emergence is an evolutionary process, and an evolutionary process characterises a self-organised system. The journey from a CS that does not exhibit adaptive behaviour to a system supporting self-organisation and emergence defines a CAS. The kernel of this journey is the design of autonomous agents and the mechanics under which these autonomous agents interact, act, react, and make decisions to achieve their intended aims and objectives. This constitutes the workhorse of a CAS and can be referred to as its internal structure. Diving deeper into this internal structure, the pivotal elements are the assimilation of information by the autonomous agents, their learning and adaptation, and their understanding of the consequences of their individual actions and decisions on other agents or the system. This thinking is distilled in Proposition 2.
Proposition 2
(From complex to complex adaptive systems). The complexity of a system is a necessary but insufficient condition for a CAS. The complexity of a system on its own does not provide a robust foundation to achieve adaptive behaviour. Instead, a system must go beyond achieving complexity into supporting memory, learning, adaptation, self-organisation, and emergence to achieve adaptive behaviour.
In order to come up with an algorithmic, step-by-step manner of charting this transition from CS to CAS, which is captured by Proposition 2, we sought to arrange the prerequisites listed across different sources in the literature in order of increasing importance and difficulty. Naturally, this exercise has placed prerequisites that are related to agent-level features before those that move to a system level.
The root precondition for achieving adaptive behaviour is for agents to be autonomous, pro-active, reactive, and have social ability, following the weak definition of agency by Wooldridge and Jennings [19]. Similar to the first stage, if this condition is not met, there is no reason to explore any other properties, and we can safely deduce that a system is not a CAS. If the condition is met, further conditions need to be explored, focusing on memory, learning, adaptation, aggregate behaviour, evolutionary processes, self-organisation, and emergence.

3.2.1. Autonomous, Proactive, Reactive Agents with Social Ability

The agents need to have the following properties as a minimum, as defined by Wooldridge and Jennings [19]:
  • Autonomous: Agents are defined to be autonomous if they perform their operations without any internal or external intervention. They are fully independent, and there is no central authority;
  • Reactive: Agents perceive the environment in which they reside and respond to the changes in a timely manner;
  • Proactive: Agents intend to act and have goals to be achieved, exhibiting goal-oriented behaviour;
  • Social ability: Agents have means of communicating with other agents in order to act, react, take responsibility for their own actions, make decisions, and achieve their goals.
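As an illustration of how the root precondition of the second stage could be recorded and checked, the sketch below captures the four weak-agency properties per agent. The class and function names are assumptions chosen for readability and do not come from Wooldridge and Jennings [19].

```python
from dataclasses import dataclass


@dataclass
class AgentProfile:
    """Illustrative record of the weak-agency properties of an agent."""
    autonomous: bool   # operates without central authority or external intervention
    reactive: bool     # perceives and responds to environmental changes in a timely manner
    proactive: bool    # exhibits goal-oriented behaviour
    social: bool       # communicates with other agents


def meets_weak_agency(agents: list[AgentProfile]) -> bool:
    """Root precondition of the second stage: every agent satisfies all four properties."""
    return all(a.autonomous and a.reactive and a.proactive and a.social for a in agents)
```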

3.2.2. Memory

Having confirmed the aforementioned agent properties, we move on to the next condition, which is related to memory. According to Cilliers [10], memory is a pivotal part of an adaptive system. Without memory, self-organisation cannot be achieved. It refers to the ability to store patterns of behaviour. The current behavioural patterns can be based on the previously stored patterns or conditions of the system. The information stored in the memory is organised in a distributed way. The process of gauging the importance of stored patterns involves the integration of information rather than stacking information. On the one hand, unused information is gradually eliminated to provide more space to store new patterns. On the other hand, the repetition of information strengthens the need for representation and leads to retaining such patterns in the memory.
Definition 2
(Memory). An agent has memory if it is able to store patterns of behaviour in a dynamic manner, retaining or discarding patterns based on their frequency of occurrence.
Note that in our definition, we do not specify whether memory is stored within each individual agent or across the system as a whole in terms of emergent properties over time. As long as some ability to store and retrieve patterns of behaviour is supported, we consider the condition of having memory to have been met.
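The sketch below illustrates one possible reading of Definition 2: patterns of behaviour are stored together with how often they recur, repetition strengthens a pattern's representation, and rarely used patterns are eventually discarded. The capacity limit and eviction policy are illustrative assumptions rather than part of the definition.

```python
from collections import Counter


class PatternMemory:
    """Illustrative memory that retains patterns of behaviour by frequency of occurrence."""

    def __init__(self, capacity: int = 100):
        self.capacity = capacity
        self.patterns = Counter()  # maps a pattern to how often it has been observed

    def store(self, pattern: str) -> None:
        """Integrate a new observation; repetition strengthens the pattern's representation."""
        self.patterns[pattern] += 1
        if len(self.patterns) > self.capacity:
            # Discard the least frequently used pattern to make room for new ones.
            least_used, _ = min(self.patterns.items(), key=lambda item: item[1])
            del self.patterns[least_used]

    def recall(self, pattern: str) -> int:
        """Return how often a pattern has been observed (0 if never seen or already discarded)."""
        return self.patterns.get(pattern, 0)
```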

3.2.3. Learning and Adaptation

At this point, we assume that information regarding behavioural patterns, changes in the environment, and system states is stored in a memory. When a new pattern, a new change in the environment, or a new state of the system emerges, the system learns and adapts according to the new situation or conditions.
Learning and adaptation have been used interchangeably by Holland [15]. Definitions of learning and adaptation can be derived from cognitive psychology; however, in our context, we follow on from Holland and identify learning as a mechanism of adaptation, as explained in Proposition 3.
Proposition 3
(Learning and adaptation). An agent learns as a result of its own actions, the actions of other agents, feedback received from other agents or the agent’s own actions, and changes to the environment or agents’ structures. Adaptation is observed when an agent changes its behaviour or strategies as a result of learning.
The learning and adaptation of a system, as expressed in Proposition 3, can be achieved when a system compares the new information with the stored information, as described above.
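A minimal sketch of Proposition 3 follows, under the assumption that stored experience is represented as a simple mapping from patterns to counts: the agent learns by integrating each new observation into memory and adapts by changing its strategy when the observation is unfamiliar. The novelty criterion and the strategy labels are illustrative choices, not part of the proposition.

```python
def learn_and_adapt(memory: dict[str, int], observation: str, current_strategy: str) -> str:
    """Illustrative reading of Proposition 3: learning updates memory; adaptation changes strategy.

    An unfamiliar observation (never seen before) triggers a change of strategy, while a
    familiar one reinforces the stored pattern and leaves the strategy unchanged.
    """
    seen_before = memory.get(observation, 0) > 0
    memory[observation] = memory.get(observation, 0) + 1  # learning: integrate the new information
    if not seen_before:
        return f"explore:{observation}"  # adaptation: behaviour changes in response to novelty
    return current_strategy


# Example usage with hypothetical observations
memory: dict[str, int] = {}
strategy = "default"
for obs in ["demand_spike", "demand_spike", "supplier_delay"]:
    strategy = learn_and_adapt(memory, obs, strategy)
print(strategy)  # -> "explore:supplier_delay"
```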

3.2.4. Aggregate Behaviour and Evolutionary Process

The previous condition of learning and adaptation refers to an individual agent that performs its tasks, interacts with other agents, and responds to changes in the memory and the environment. This behaviour is at a local level, referred to by Cilliers [10] as the microscopic level. Our analysis of a CAS up to this point has focused exclusively on the microscopic level. An aggregate behaviour, then, according to Cilliers, is the amalgamation of individual agents’ interactions and takes place at a system level, which is referred to as the macroscopic level. In essence, this condition generalises and aggregates learning and adaptation at a system level.
According to Cilliers [10], learning and adaptation at a system level increases complexity and is the driving force behind consistent and continuous evolutionary processes that lead a CAS to develop over time. Hence, for a system to be considered a CAS following our definition, it needs to exhibit learning and adaptation at a macroscopic level and use this aggregate behaviour to drive an evolutionary process that allows it to develop and adapt.
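As a simple illustration, not drawn from the sources above, aggregate behaviour can be summarised as a system-level measure of how many agents adapted in a given step, and the evolutionary process as the repeated use of that measure to update the system state over time. The specific adaptation-rate measure and update rule below are assumptions made for the sake of the sketch.

```python
def aggregate_adaptation_rate(agent_adaptations: list[bool]) -> float:
    """Macroscopic view: fraction of agents that adapted in the current step."""
    if not agent_adaptations:
        return 0.0
    return sum(agent_adaptations) / len(agent_adaptations)


def evolve(system_state: float, agent_adaptations: list[bool], step_size: float = 0.1) -> float:
    """One illustrative evolutionary step: aggregate behaviour drives system-level change."""
    return system_state + step_size * aggregate_adaptation_rate(agent_adaptations)


# Hypothetical example: the system develops over three steps as agents adapt.
state = 0.0
for step in [[True, False, True], [True, True, True], [False, False, True]]:
    state = evolve(state, step)
print(round(state, 3))  # -> 0.2 (that is, 0.1*(2/3) + 0.1*1 + 0.1*(1/3))
```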

3.2.5. Self-Organisation and Emergence

According to Cilliers [10], the trail of an individual agent’s actions, interactions, and reactions affects the overall performance towards achieving self-organisation and emergence. Self-organisation follows a bottom-up approach. The key point here is that an agent’s assimilation of the information and understanding of the importance of its own actions, reactions, and interactions is what contributes towards the overall behaviour of a system at the macroscopic level. If an individual entity’s work produces intended aims and objectives, then the aggregated behaviour leads to achieving self-organisation. The actions, interactions, changes in the environment and the agent’s response to these changes can be stored in a memory for comparison purposes when a new change or state of a system emerges. Memory is the kernel of a self-organised system, and aggregate behaviour cannot be achieved without memory.
Learning, adaptation, and aggregate behaviour, as described above, play a paramount role in achieving emergence. Emergence takes place at the macroscopic level and is the result of the collective behavioural response and activities of all agents. Outcomes of such collective behaviour and its management can provide the basis to achieve reliable processes in a large system or an organisation.
The links between self-organisation and emergence and previous conditions are captured in Proposition 4.
Proposition 4
(Prerequisites to self-organisation and emergence). Self-organisation and emergence are not possible without memory, learning, and adaptation. Once agents use memory, learn, and adapt, they are self-organised. Self-organisation can be observed at the agent or system level. Emergence can be observed at the system level only.
Definition 3 captures the conditions for completing the second and final stage of our framework.
Definition 3
(Complex adaptive system). A system is a complex adaptive system if and only if it is a complex system according to Definition 1 and satisfies the following additional properties according to Proposition 2: the agents are autonomous, pro-active, and reactive; have social ability; have memory; can learn and adapt; show aggregate behaviour; show evolutionary processes; and show self-organisation and emergence.
Definitions 1 and 3 need to be considered in an algorithmic manner, following the flowchart provided in Figure 1, which is split into two levels that deal with properties at the agent level and the system level. This ensures that the smallest possible number of properties is examined before deciding whether a system is a CS or a CAS. Figure 2 illustrates the relationships between the requirements included in the proposed definitions in the form of a network. Note that the algorithm of our approach follows a linear structure without parallelisation to facilitate a step-by-step evaluation and to reduce the number of requirements that need to be evaluated before determining that a system does not satisfy the definitions. Table 2 summarises all the definitions and propositions that underpin our approach, as presented in this section.
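Assuming each requirement has already been assessed as a Boolean answer, the linear, step-by-step evaluation described above (and depicted in Figure 1) can be sketched as follows: conditions are checked in order, the walk stops at the first unmet condition, and the outcome records the last condition satisfied, classifying the system as a CAS only if the whole chain holds, as a CS if at least the first stage holds, and as neither otherwise. The condition names and return format are our own illustrative choices.

```python
from typing import Dict, Optional, Tuple

# Ordered conditions of the two-stage framework (first stage: CS; second stage: CAS).
FIRST_STAGE = [
    "multiple interconnected, interdependent agents",
    "nonlinear behaviour",
]
SECOND_STAGE = [
    "autonomous, pro-active, reactive agents with social ability",
    "memory",
    "learning and adaptation",
    "aggregate behaviour and evolutionary process",
    "self-organisation and emergence",
]


def evaluate(conditions: Dict[str, bool]) -> Tuple[str, Optional[str]]:
    """Walk the condition chain in order, stopping at the first unmet condition.

    Returns the classification ("CAS", "CS", or "neither") together with the last
    condition that was satisfied (None if even the root precondition fails).
    """
    last_satisfied: Optional[str] = None
    for name in FIRST_STAGE + SECOND_STAGE:
        if not conditions.get(name, False):
            # The first stage is complete only if the nonlinearity condition was reached.
            first_stage_met = last_satisfied is not None and last_satisfied != FIRST_STAGE[0]
            return ("CS" if first_stage_met else "neither", last_satisfied)
        last_satisfied = name
    return ("CAS", last_satisfied)


# Example: a system meeting all conditions up to (but not including) memory is a CS.
verdict, last = evaluate({
    "multiple interconnected, interdependent agents": True,
    "nonlinear behaviour": True,
    "autonomous, pro-active, reactive agents with social ability": True,
})
print(verdict)  # -> CS
```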

4. Case Study 1: Clinical Commissioning Groups (CCGs)

In this and the following section, we present two case studies to exemplify the proposed definition and explain how it can be applied to determine whether a system is a CAS.
The first case study focuses on establishing whether CCGs in the English National Health Service (NHS) are CASs. According to the NHS Commissioning Board [30], the roles and responsibilities in a CCG include a generic set and a specific set. The generic set of roles and responsibilities applies to every person who is working for a CCG, whereas the specific set applies to specific people in addition to the generic set.

4.1. Complex System

4.1.1. Inter-Connected and Inter-Dependent Agents

A governing body of a CCG consists of different representatives from different practices (surgeries). These representatives act on behalf of the practices, which are the pillars of a CCG. All members of the governing body and the governing body itself are interconnected and interdependent agents. For example, these members perform duties for a governing body, and a governing body has responsibilities for these members. It is the governing body’s responsibility to make sure that its members understand the contents of its policies and laws. The members of the governing body have the responsibility to comply with the governing body’s policies. The actions and decisions of a lay member, who monitors and manages the economic health of a CCG, affect other members of the governing body and vice versa. A lay member who is responsible for patient and public engagement connects the public to a CCG.

4.1.2. Nonlinearity

Nonlinearity is characterised by the highest degree of inter-relationships among the agents. These inter-relationships represent the interconnections and interdependencies among the agents in a CCG and are dynamic in nature.
Patients receive health- and care-related services from a CCG’s staff. The chair of the governing body is a leading role tasked with making sure that governing body members fulfil their duties and responsibilities smoothly and swiftly, as described in the CCG constitution. Furthermore, the chair is the most important role in a CCG, monitoring the governing body and providing assurance of the effective working of the CCG. The services provided by CCG staff not only depend on them but also on patient behaviour. The management of patients and CCG staff involvement depends on all stakeholders engaging in health- and care-related activities. These complex interdependencies fulfil the criteria for nonlinearity.

4.2. Complex Adaptive System

4.2.1. Autonomy of Agents

We now focus on the autonomy of agents, as all other properties (pro-activeness, reactivity, and social ability) can be considered as being met by the CCG elements. The structure of a CCG reveals that the chair of the governing body, who has a leading role, represents a partial central authority able to influence the activities of the other members and the decisions they take. Considering that the chair of the governing body is responsible for other members, this may impinge on the autonomy of those agents.
According to our framework, the chair of the governing body is an autonomous agent who can passively act as a facilitator. The chair of the governing body can only remind the governing body members of their duties. All other agents should be autonomous, and they should be responsible for their own actions, reactions, and decisions. If this is not the case, then a CCG cannot be considered a CAS.

4.2.2. Memory

Memory is one of the pivotal building blocks of a CAS. The structure of a CCG does not clearly state that an individual’s actions, interactions, and decisions are recorded in any form of memory. However, all agents in a CCG can have their own memory in which they can store their actions, reactions, and decisions. Each agent can use its memory to observe a previous experience and behavioural patterns and to receive feedback based on the stored history. This will augment an agent’s performance and lead its activities towards achieving self-organisation and emergence.

4.2.3. From Learning and Adaptation to Self-Organisation and Emergence

Agents in a CCG can use information stored in the memory to learn from previous experience and adapt if a new situation arises. Learning and adaptation in a CCG in this way can enable an individual agent to play a role on an individual basis as a step towards achieving the self-organisation phenomenon.
In a CCG scenario, the autonomy, memory, learning, and adaptation of every member, including the governing body, at an individual level play a paramount role in achieving aggregate behaviour. The achievement of the aims and objectives of a CCG will be the amalgamation of the actions, reactions, and decisions of all members at a microscopic level. The overall performance of a CCG can be observed at the macroscopic level. In this way, a CCG can use a bottom-up strategy instead of the top-down approaches that are used in traditional systems. The most important part of this process is the assimilation of an individual member regarding the consequences of the actions, reactions, and decisions at the microscopic and macroscopic levels.
The achievement of the aims and objectives of a CCG using autonomous agents, memory, learning, adaptation, and aggregate behaviour represents an evolutionary phenomenon at the macroscopic level. An evolutionary phenomenon representing emergence takes place by virtue of the bottom-up strategies, as described above.

4.2.4. Results

By following our definition, we have determined that CCGs fulfil the conditions for being a CS; however, we have uncovered two crucial aspects of a CCG that influence its representation as a CAS. First, there is doubt as to whether the agents in a CCG exhibit autonomy, depending on whether the governing body and its chair can be viewed as having authority over the rest or not. Second, while the agents in a CCG are implicitly assumed to meet the memory condition, this is not explicitly stated in a CCG structure.
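Under the assumptions stated in this section (memory taken to be available, autonomy left unresolved because of the chair's role), the CCG assessment can be replayed with the stepwise evaluation of Section 3. The Boolean assignments in the sketch below simply restate the results of this case study and are not additional findings.

```python
# CCG assessment as discussed in Section 4 (illustrative Boolean assignments).
ccg_conditions = [
    ("multiple interconnected, interdependent agents", True),   # Section 4.1.1
    ("nonlinear behaviour", True),                               # Section 4.1.2
    ("weak agency, including autonomy", False),  # unresolved: depends on the chair's authority (Section 4.2.1)
    ("memory", True),                            # assumed, not explicit in the CCG structure (Section 4.2.2)
    ("learning and adaptation", True),
    ("aggregate behaviour and evolutionary process", True),
    ("self-organisation and emergence", True),
]

last_satisfied = None
for name, met in ccg_conditions:
    if not met:
        print(f"Stopped at: {name}; last condition satisfied: {last_satisfied}")
        break
    last_satisfied = name
else:
    print("All conditions met: CAS")
# With autonomy treated as unresolved, the walk stops after the first (CS) stage,
# so the CCG qualifies as a CS but cannot yet be confirmed as a CAS.
```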

5. Case Study 2: Supply Chain Management

The second case study is based on the description of an integrated supply chain management system by Fox et al. [31], as illustrated in Figure 3, focusing on a variety of interlinked processes, such as logistics and supplier selection, transportation, resource and warehouse management, and scheduling and dispatching.

5.1. Complex System

5.1.1. Interconnected and Interdependent Agents

A supply chain management system consists of many interconnected and interdependent interacting agents. For example, the functional agents include order acquisition agents, logistic agents, transportation agents, and scheduling agents. Furthermore, the interconnectedness and interdependencies are clearly evident in Figure 3.

5.1.2. Nonlinearity

The agents in a supply chain management system are engaged in a pursuit of competitive and co-operative activities with a high degree of interdependencies. For example, a logistic agent depends on order acquisition, transportation, and resource agents and vice versa. These higher-level, complex interdependencies fulfil the criteria for nonlinearity.

5.2. Complex Adaptive System

5.2.1. Autonomy of Agents

As in case study 1, pro-activeness, reactivity, and social ability can be considered as being met by the supply chain agents due to their goal-oriented nature, their ability to perceive the environment, and their established communication mechanisms. In terms of autonomy, the agents can make their own decisions according to a given situation or changes in the environment without any central authority. For example, an order acquisition agent is an independent agent who can decide when to communicate with logistic agents.

5.2.2. Memory

The use of memory is not described explicitly in a supply chain management system. However, plenty of interactions are actually recorded in various forms of memory, such as the movement of products manipulated by the logistic agents, the handling of customer data and orders by the order acquisition agents, or scheduling by dispatching agents. The behavioural patterns of all these agents can also be stored in a memory for comparison purposes, which is becoming more common with the increasing application of intelligent and data-driven approaches in supply chain management.

5.2.3. From Learning and Adaptation to Self-Organisation and Emergence

When conditions or changes in a supply chain management system emerge, then the system can learn and adapt according to the new changes or conditions. A comparison between new behavioural patterns and stored behavioural patterns can increase the complexity of the system.
An individual agent of a supply chain carries out its duties, interacts with other agents, and responds to changes in the conditions or in an environment or when a new situation emerges. The overall performance of a supply chain management system depends on the actions, reactions, and interactions of every individual agent. Similar to case study 1, the aggregate behaviour of a supply chain management system is the amalgamation of individual agents’ activities.
A supply chain management system can exhibit an evolutionary process if and only if autonomous, pro-active, and reactive agents utilise memory to learn, adapt, self-organise, and produce aggregate behaviour, as described above. Agents in a supply chain management system can anticipate the outcomes of responses based on their previous experience and stored behavioural patterns. As a result, self-organisation and emergence can be achieved.

5.2.4. Results

By following our definition, we have determined that a supply chain management system can be considered to be both a CS and a CAS. However, this is dependent on the assumption that memory is indeed a capability of the agents involved, even if it is not explicitly included in the description of the roles and responsibilities of the relevant stakeholders.

5.3. Further Examples

The proposed definitions for a CS and CAS can be applied to a wide range of systems that have been described in the literature to decide whether they meet the full range of requirements for a CAS or whether they only meet the subset that defines a CS. For instance, Ladyman and Wiesner [32] describe the following as being CSs: the universe, climate systems, the human brain, matter and radiation in physics, ant colonies, and bee hives. Based on our framework, these would also be considered CSs, as they meet the requirements of having multiple interacting and interdependent agents but do not meet one or more of the requirements related to a CAS.
Looking at the ant colony example through the lens of our framework results in the following analysis. In a large colony of ants, the ability for high growth is due to the ants pushing other ants forward so that these ants continue their tasks. Ants interact and become interconnected and interdependent by using communication channels of physical touch or by sensing other ants’ pheromones. This communication is nonlinear in nature as, for instance, the ant reactions are not proportionate to the amount of pheromone deposited [33]. Given this analysis, the ant colony system is considered a CS based on our approach. Moving forward to the CAS-based part of our approach, one can consider characteristics such as feedback mechanisms, memory, and decision-making to form part of what defines an ant colony. However, it is indisputable that ants cannot be considered autonomous, especially considering the particular role that queen ants play [34]. Hence, the ant colony cannot be considered a CAS according to our definition.
There are plenty of examples of systems that are defined as CASs in the literature and can potentially be considered to be CASs when following our approach. These include computer games [35], competition in healthcare markets [36], circular economy supply networks [37], schools [38], innovation networks [39], and smart cities 5.0 [40]. Looking at the smart cities example further, as described by Svítek et al. [40], smart services represent agents that communicate their decisions, adjust their action plans, and enact these plans in real-time situations. Such services interact with other services and are interdependent (e.g., the output of one may be the input to another). As such, a smart city system is considered a CS according to our framework. These services communicate through the exchange of data, react to input, perceive the environment through sensors, and have specific goals. Hence, when viewed as agents, they are autonomous, pro-active, sociable, and reactive. In most cases, they are also capable of storing states and behaviour patterns, with the exception of those that may be using an ephemeral protocol. In terms of learning and adaptation, the information provided by Svítek et al. [40] is not enough to make a decision; however, it is not uncommon for smart services to use, for instance, some form of machine learning or reinforcement learning approach. As far as aggregate behaviour, evolutionary processes, self-organisation, and emergence are concerned, while the authors do not provide a detailed explanation, they do claim that adaptability, resilience, and sustainability are properties that emerge through the collective behaviour of interacting agents. When taking this into account, we can consider smart cities 5.0, as defined by Svítek et al. [40], as fulfilling the requirements of our CAS definition.

6. Discussion and Conclusions

Rittel and Webber [41] distilled the meaning of a wicked problem by comparing the problems faced by scientists and engineers with those encountered by planners, identifying 10 characteristics of wicked problems, including their uniqueness, the absence of a single solution, and the difficulty of testing solutions. Francalanza et al. [42] refer to designing a system to represent a large organisation as a wicked problem. Traditional methods for designing such systems have proven inadequate in dealing with the challenges arising from such a wicked problem, primarily because they are based on top-down approaches and depict a hierarchy that involves authoritative elements. CAS theory provides an impetus to address those challenges; however, attempts to apply it stumble on the fact that a complete and accurate definition of what a CAS is (and what it is not) cannot be agreed on.
The main benefit of CAS theory, in this context, is that it does not follow a top-down approach; instead, it is based on bottom-up methodologies with no strict hierarchy. While Rouse [2] highlighted the fact that hierarchical decomposition is not consistent with the nature of a CAS, the author stopped short of proposing an alternative to it. During the development of the proposed CAS definition, linked levels emerged as an alternative.
The proposed algorithmic framework represents a synthesis of CAS theories described in the literature. The main intention is to facilitate the process of determining whether a system is a CS, a CAS, or neither. Based on our definition, a system is a CS if and only if it involves many agents who exhibit nonlinear behaviour. The complexity of a system, with autonomous, pro-active, and reactive agents, along with their learning and adaptation leading to an evolutionary phenomenon, form the preconditions to achieve self-organisation and emergence and fulfil all the requirements for a CAS.
From the two case studies presented, we have shown how the proposed framework can be applied to academic and non-academic realms. Exploring which conditions are met and how insights can be drawn in relation to the health and supply chain sectors can lead to potential improvements in the performance of these systems. For instance, in their analysis of integrated care systems, Hughes et al. [6] pointed out the need to adopt complexity concepts for the betterment of the English NHS. We argue that embracing complexity theory alone will not be enough to achieve this goal; in order to foster the efficacy of the English NHS, it will be worthwhile to view integrated care and CCGs through the lens of our framework.
The limitations of the proposed framework include its reliance on the availability of detailed information regarding agent characteristics and their inter-relationships. If such information is unavailable, then any outcome can only be based on assumptions, as evidenced in the two presented case studies, where the satisfaction of the memory condition could not be determined conclusively based on the available information. Additionally, the framework may not be fit for purpose for relatively small systems and organisations, where deciding whether they are a CS or CAS may not require such a detailed and thoroughly documented process.
In the future, we intend to conduct an extensive validation and evaluation of the proposed definition across different case studies to determine the correctness and completeness of the criteria included in the definition, in particular, the ones that refer to the adaptive facet of a system. This will allow an understanding of whether any criteria may be uncommon in real-world systems or whether any criteria are missing. Case studies may move beyond the two domains explored initially, such as traffic management and geographic information systems [43].
Moreover, it will be interesting to explore the implementation of the proposed definition, given its algorithmic nature, using different models based on ABM, allowing us to take advantage of the software, systems, and simulations in the wider ABM literature. This will enable us to develop an automated audit tool [44] that will leverage the algorithmic nature of the proposed definition to implement a step-by-step evaluation of available system descriptions to determine whether a system is a CS or a CAS, and explain why (or why not) this is the case. We envision that such a tool will enable a deeper understanding of real-world systems, allowing relevant stakeholders to propose improvements regarding their structure, organisation, and operation.

Author Contributions

Conceptualization, M.A.A., G.B. and R.H.; methodology, M.A.A. and G.B.; software, M.A.A.; validation, M.A.A. and G.B.; formal analysis, M.A.A. and G.B.; investigation, M.A.A. and G.B.; resources, M.A.A. and G.B.; data curation, M.A.A.; writing—original draft preparation, M.A.A.; writing—review and editing, M.A.A., G.B. and R.H.; visualization, M.A.A. and G.B.; supervision, G.B. and R.H.; project administration, M.A.A.; funding acquisition, M.A.A., G.B. and R.H. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Zimmerman, B.; Lindberg, C.; Plsek, P. A complexity science primer: What is complexity science and why should I learn about it. In Adapted From: Edgeware: Lessons From Complexity Science for Health Care Leaders; VHA Inc.: Dallas, TX, USA, 1998.
  2. Rouse, W.B. Health care as a complex adaptive system: Implications for design and management. Bridge-Wash.-Natl. Acad. Eng. 2008, 38, 17.
  3. Mitchell, M. Complex systems: Network thinking. Artif. Intell. 2006, 170, 1194–1212.
  4. Abbott, R.; Hadžikadić, M. Complex adaptive systems, systems thinking, and agent-based modeling. In Advanced Technologies, Systems, and Applications; Springer: Berlin/Heidelberg, Germany, 2017; pp. 1–8.
  5. Hodiamont, F.; Jünger, S.; Leidl, R.; Maier, B.O.; Schildmann, E.; Bausewein, C. Understanding complexity—The palliative care situation as a complex adaptive system. BMC Health Serv. Res. 2019, 19, 1–14.
  6. Hughes, G.; Shaw, S.E.; Greenhalgh, T. Rethinking integrated care: A systematic hermeneutic review of the literature on integrated care strategies and concepts. Milbank Q. 2020, 98, 446–492.
  7. Jagustović, R.; Zougmoré, R.B.; Kessler, A.; Ritsema, C.J.; Keesstra, S.; Reynolds, M. Contribution of systems thinking and complex adaptive system attributes to sustainable food production: Example from a climate-smart village. Agric. Syst. 2019, 171, 65–75.
  8. Kitchenham, B.; Charters, S. Guidelines for Performing Systematic Literature Reviews in Software Engineering; Technical Report EBSE 2007-001, Keele University and Durham University Joint Report; Scientific Research: Atlanta, GA, USA, 2007.
  9. Martín-Martín, A.; Thelwall, M.; Orduna-Malea, E.; Delgado López-Cózar, E. Google Scholar, Microsoft Academic, Scopus, Dimensions, Web of Science, and OpenCitations’ COCI: A multidisciplinary comparison of coverage via citations. Scientometrics 2021, 126, 871–906.
  10. Cilliers, P. Complexity and Postmodernism: Understanding Complex Systems; Routledge: London, UK, 2002.
  11. Ladyman, J.; Lambert, J.; Wiesner, K. What is a complex system? Eur. J. Philos. Sci. 2013, 3, 33–67.
  12. Simon, H.A. The architecture of complexity. In Facets of Systems Science; Springer: Berlin/Heidelberg, Germany, 1991; pp. 457–476.
  13. Rudall, B.H.; Wallis, S.E. Emerging order in CAS theory: Mapping some perspectives. Kybernetes 2008, 37, 1016–1029.
  14. Chan, S. Complex adaptive systems. In ESD.83 Research Seminar in Engineering Systems; MIT: Cambridge, MA, USA, 2001; Volume 31, pp. 1–19.
  15. Holland, J.H. Complex adaptive systems. Daedalus 1992, 121, 17–30.
  16. Plsek, P.E.; Greenhalgh, T. The challenge of complexity in health care. BMJ 2001, 323, 625–628.
  17. Baryannis, G.; Woznowski, P.; Antoniou, G. Rule-Based Real-Time ADL Recognition in a Smart Home Environment. In Proceedings of the Rule Technologies. Research, Tools, and Applications: 10th International Symposium on Rules and Rule Markup Languages for the Semantic Web (RuleML 2016), Stony Brook, NY, USA, 6–9 July 2016; Alferes, J.J., Bertossi, L., Governatori, G., Fodor, P., Roman, D., Eds.; Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2016; Volume 9718, pp. 325–340.
  18. Dorri, A.; Kanhere, S.S.; Jurdak, R. Multi-agent systems: A survey. IEEE Access 2018, 6, 28573–28593.
  19. Wooldridge, M.; Jennings, N.R. Intelligent agents: Theory and practice. Knowl. Eng. Rev. 1995, 10, 115–152.
  20. Franklin, S.; Graesser, A. Is it an Agent, or just a Program?: A Taxonomy for Autonomous Agents. In International Workshop on Agent Theories, Architectures, and Languages; Springer: Berlin/Heidelberg, Germany, 1996; pp. 21–35.
  21. Castle, C.J.; Crooks, A.T. Principles and Concepts of Agent-Based Modelling for Developing Geospatial Simulations; Centre for Advanced Spatial Analysis (UCL): London, UK, 2006.
  22. Camazine, S.; Deneubourg, J.; Franks, N.; Sneyd, J.; Theraulaz, G.; Bonabeau, E. Self-Organization in Biological Systems; Princeton University Press: Princeton, NJ, USA, 2001.
  23. Amaral, L.A.; Ottino, J.M. Complex networks: Augmenting the framework for the study of complex systems. Eur. Phys. J. B 2004, 38, 147–162.
  24. Funtowicz, S.; Ravetz, J.R. Emergent complex systems. Futures 1994, 26, 568–582.
  25. Holden, L.M. Complex adaptive systems: Concept analysis. J. Adv. Nurs. 2005, 52, 651–657.
  26. Siegenfeld, A.F.; Bar-Yam, Y. An introduction to complex systems science and its applications. Complexity 2020, 2020, 6105872.
  27. Dooley, K. Complex adaptive systems: A nominal definition. Chaos Netw. 1996, 8, 2–3.
  28. Tarvid, A. Complex Adaptive Systems and Agent-Based Modelling. In Agent-Based Modelling of Social Networks in Labour–Education Market System; Springer: Berlin/Heidelberg, Germany, 2016; pp. 23–38.
  29. Carmichael, T.; Hadžikadić, M. The fundamentals of complex adaptive systems. In Complex Adaptive Systems; Springer: Berlin/Heidelberg, Germany, 2019; pp. 1–16.
  30. NHS Commissioning Board. Clinical Commissioning Group Governing Body Members: Role Outlines, Attributes and Skills; NHS Commissioning: Leeds, UK, 2012.
  31. Fox, M.S.; Chionglo, J.F.; Barbuceanu, M. The Integrated Supply Chain Management System; Department of Industrial Engineering, University of Toronto: Toronto, ON, Canada, 1993.
  32. Ladyman, J.; Wiesner, K. What Is a Complex System? Yale University Press: New Haven, CT, USA, 2020.
  33. Sumpter, D.J.; Beekman, M. From nonlinearity to optimality: Pheromone trail foraging by ants. Anim. Behav. 2003, 66, 273–280.
  34. Carpendale, J.I.M.; Frayn, M. Autonomy in ants and humans. Behav. Brain Sci. 2016, 39, e95.
  35. Van Bilsen, A.; Bekebrede, G.; Mayer, I. Understanding Complex Adaptive Systems by Playing Games. Inform. Educ. 2010, 9, 1–18.
  36. Alibrahim, A.; Wu, S. Modelling competition in health care markets as a complex adaptive system: An agent-based framework. Health Syst. 2020, 9, 212–225.
  37. Braz, A.C.; de Mello, A.M. Circular economy supply network management: A complex adaptive system. Int. J. Prod. Econ. 2022, 243, 108317.
  38. Keshavarz, N.; Nutbeam, D.; Rowling, L.; Khavarpour, F. Schools as social complex adaptive systems: A new way to understand the challenges of introducing the health promoting schools concept. Soc. Sci. Med. 2010, 70, 1467–1474.
  39. Long, Q.; Li, S. The Innovation Network as a Complex Adaptive System: Flexible Multi-agent Based Modeling, Simulation, and Evolutionary Decision Making. In Proceedings of the 2014 Fifth International Conference on Intelligent Systems Design and Engineering Applications, Hunan, China, 15–16 June 2014; pp. 1060–1064.
  40. Svítek, M.; Skobelev, P.; Kozhevnikov, S. Smart City 5.0 as an urban ecosystem of Smart services. In Service Oriented, Holonic and Multi-Agent Manufacturing Systems for Industry of the Future: Proceedings of SOHOMA 2019; Springer: Berlin/Heidelberg, Germany, 2020; pp. 426–438.
  41. Rittel, H.W.J.; Webber, M.M. Dilemmas in a general theory of planning. Policy Sci. 1973, 4, 155–169.
  42. Francalanza, E.; Borg, J.; Constantinescu, C. Approaches for handling wicked manufacturing system design problems. Procedia CIRP 2018, 67, 134–139.
  43. Papadakis, E.; Baryannis, G.; Petutschnig, A.; Blaschke, T. Function-Based Search of Place Using Theoretical, Empirical and Probabilistic Patterns. ISPRS Int. J. Geo-Inf. 2019, 8, 92.
  44. Omar, M.; Baryannis, G. Semi-automated development of conceptual models from natural language text. Data Knowl. Eng. 2020, 127, 101796.
Figure 1. Flowchart for the proposed algorithmic approach to define CAS.
Figure 2. Network of relationships between the requirements.
Figure 3. Integrated supply chain management based on Fox et al. [31].
Table 1. Summary of the main definitions of complexity, hierarchy, self-organisation and emergence in the reviewed literature.

Complexity: Involves the formation of inter-relationships among multiple interacting elements [1].
Hierarchy: One of the pivotal building blocks of a CS that represents the relationships among subsystems [12]. Decomposition of a CAS into a hierarchy of subsystems is not suitable because of the potential for information loss during the interactions among subsystems [2].
Self-organisation: The ability of CSs to “develop or change internal structure spontaneously and adaptively in order to cope with, or manipulate, their environment” [10].
Emergence: An unpredictable situation arising from the relationships among agents in a CAS, in which the evolution of the system cannot be described [1]. A characteristic of systems exhibiting “downward causation”, representing an increase in complexity [11].
Complicated: A system containing large numbers of components that have limited ability to respond to changes in the environment [23].
Complex: A system containing large numbers of components that are able to adapt, self-govern, and emerge [23].
Table 2. Summary of the main definitions and propositions underpinning our approach to CSs and CASs.

Hierarchy and Complex Systems: A hierarchical organisation of agents is not a prerequisite for a Complex System.
Complex System: A system that contains multiple interconnected and interdependent interacting agents that exhibit nonlinear behaviour.
From Complex to Complex Adaptive Systems: Complexity is a necessary but insufficient condition for a Complex Adaptive System, as memory, learning and adaptation, and self-organisation and emergence are also required to achieve adaptive behaviour.
Memory: The ability of an agent to store patterns of behaviour in a dynamic manner, retaining or discarding patterns based on their frequency of occurrence.
Learning and Adaptation: An agent learns as a result of its own actions, the actions of other agents, and changes to the environment or agents’ structures. Adaptation is observed when an agent changes its behaviour or strategies as a result of learning.
Aggregate Behaviour: The amalgamation of individual agents’ interactions; this takes place at the system level, which is referred to as the macroscopic level [10].
Evolutionary Process: Leads a CAS to develop over time based on learning and adaptation taking place at the system level [10].
Self-organisation and Emergence: The ability of agents to organise themselves through memory, learning, and adaptation at both the agent and system levels, which leads them to exhibit properties and behaviours at the system level that are not apparent at the agent level.
Complex Adaptive System: A Complex System in which agents are autonomous, pro-active, and reactive; have social ability; have memory; can learn and adapt; show aggregate behaviour; show evolutionary processes; and show self-organisation and emergence.
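The two-stage logic summarised in Table 2 (and depicted in the flowchart of Figure 1) can be read as a simple checklist: first test the complexity-related requirements, and only then test the adaptivity-related ones. The minimal Python sketch below is one possible way to encode that checklist as boolean attribute checks; the SystemProfile fields and the classify function are hypothetical names introduced here for illustration only and do not correspond to an implementation provided by the authors.

```python
from dataclasses import dataclass


@dataclass
class SystemProfile:
    """Observed attributes of a candidate system (hypothetical field names)."""
    # Stage 1: complexity-related requirements (Table 2, "Complex System")
    multiple_agents: bool
    interconnected_and_interdependent: bool
    nonlinear_behaviour: bool
    # Stage 2: adaptivity-related requirements (Table 2, "Complex Adaptive System")
    autonomous: bool
    proactive_and_reactive: bool
    social_ability: bool
    memory: bool
    learning_and_adaptation: bool
    aggregate_behaviour: bool
    evolutionary_process: bool
    self_organisation_and_emergence: bool


def classify(system: SystemProfile) -> str:
    """Two-stage evaluation: complexity first, then adaptivity."""
    is_complex = (
        system.multiple_agents
        and system.interconnected_and_interdependent
        and system.nonlinear_behaviour
    )
    if not is_complex:
        return "not a complex system"

    is_adaptive = all([
        system.autonomous,
        system.proactive_and_reactive,
        system.social_ability,
        system.memory,
        system.learning_and_adaptation,
        system.aggregate_behaviour,
        system.evolutionary_process,
        system.self_organisation_and_emergence,
    ])
    return "complex adaptive system (CAS)" if is_adaptive else "complex system (CS)"
```

Under this sketch, a system that satisfies the first stage but fails any second-stage check is reported as a CS rather than a CAS, mirroring the proposition in Table 2 that complexity is necessary but insufficient for adaptivity.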