Open Access Article

How to Deal with the Complexity of Future Cyber-Physical Systems?

by Martin Törngren 1,*,† and Paul T. Grogan 2,†
1 Department of Machine Design, KTH Royal Institute of Technology, 100 44 Stockholm, Sweden
2 School of Systems and Enterprises, Stevens Institute of Technology, Hoboken, NJ 07030, USA
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Designs 2018, 2(4), 40; https://doi.org/10.3390/designs2040040
Received: 30 September 2018 / Revised: 19 October 2018 / Accepted: 19 October 2018 / Published: 22 October 2018

Abstract

Cyber-Physical Systems (CPS) integrate computation, networking and physical processes to produce products that are autonomous, intelligent, connected and collaborative. The resulting Cyber-Physical Systems of Systems (CPSoS) have unprecedented capabilities but also correspondingly unprecedented technological complexity. This paper aims to improve understanding, awareness and methods to deal with the increasing complexity by calling for the establishment of new foundations, knowledge and methodologies. We describe causes and effects of complexity, both in general and specific to CPS, consider the evolution of complexity, and identify limitations of current methodologies and organizations for dealing with future CPS. The lack of a systematic treatment of uncertain complex environments and of “composability”, i.e., the ability to integrate components of a CPS without negative side effects, represents an overarching limitation of existing methodologies. Dealing with future CPSoS requires: (i) increased awareness of complexity, its impact and best practices for how to deal with it, (ii) research to establish new knowledge, methods and tools for CPS engineering, and (iii) research into organizational approaches and processes to adopt new methodologies and permit efficient collaboration within and across large teams of humans supported by increasingly automated computer aided engineering systems.
Keywords: complexity; cyber-physical systems; systems engineering; uncertainty

1. Introduction

Modern technological development is fueled by simultaneous advances in software, data science and artificial intelligence (AI), communications, computation, sensors, actuators, materials, and their combinations such as 3D printing, batteries and augmented reality. Different perspectives on these advances have led to the creation of many terms—Cyber-Physical Systems (CPS), the Internet of Things (IoT), Industry 4.0, and the Swarm—to represent new classes of technologically enabled systems.
We focus on CPS as a more general notion (e.g., see [1,2]). CPS was introduced in 2006 in the U.S. to characterize “the integration of physical systems and processes with networked computing” for systems that “use computations and communication deeply embedded in and interacting with physical processes to add new capabilities to physical systems” [3]. In this context, the word cyber alternatively refers to the dictionary definition of “relating to, or involving computers or computer networks” [4] or more general feedback systems as in the field of cybernetics pioneered by Wiener [5]. We consider both interpretations of CPS to be valid and, unless otherwise noted, use the term cyber to refer to computing or software parts of a CPS. For further definitions and viewpoints of CPS, see [6] for an overview of existing agendas and roadmaps.
A CPS is thus characterized by an integration of computing and physical elements, typically including feedback loops, into various networks with both physical and cyber interfaces. We consider human-CPS to be a special (but common) category of CPS where humans are integral elements of the system together with technical parts. Figure 1 illustrates interactions between cyber (C), physical (P) and human (H) elements as well as information and physical interactions between the corresponding components. CPS are developed, produced, used and maintained by teams of people (H) and supporting tools, including computer aided engineering software (C) and hardware-in-the-loop simulation ((C) and (P)). We refer to these (people and tools) as Collaborative Information Processing Systems (CIPS). Finally, both CPS and CIPS act in specific environments. For example, the CPS environment may be a factory in which an automated vehicle operates and the CIPS environment includes related stakeholders such as component suppliers as well as applicable legislation and engineering guidelines/standards.
Cyber-physical systems play a key role in a large number of application domains including transportation, energy, health and well-being, and manufacturing, and indeed form part of smart infrastructures and cities—thus not only relating to industrial domains, see e.g., [7,8].
The combined advances of multiple technologies indicate an ongoing technological shift where new types of systems and systems-of-systems are emerging within and across domains such as those just mentioned. CPS are becoming autonomous, intelligent, connected, and collaborative, resulting in the creation of Cyber-Physical Systems of Systems (CPSoS) with unprecedented capabilities and opportunities. CPSoS are characterized by independent system evolution and the lack of a single responsible system integrator (consider for instance an intelligent transportation system), see e.g., [9].
The implications are that CPS will be widespread and a multitude of key societal functions (such as water, energy, transportation, and health-care) will rely on the proper operation of such CPSoS (henceforth we will use the term CPS to refer to both CPS and CPSoS, unless a detailed distinction is necessary).
This paper focuses on the complexity of future CPS, intuitively interpreted as a system characteristic making it difficult, and sometimes even impossible, to accurately predict behavior over time, especially in terms of understanding all relevant interactions among CPS elements and with the environment. We believe unprecedented capabilities and opportunities will be accompanied by a correspondingly unprecedented technological complexity, where unknown or poorly understood interactions may lead to unexpected and undesired effects including effort overruns, poor operational performance, or even system failures.
Our work aims to improve understanding and awareness of complexity in technical artifacts and contribute to establishing new foundations, knowledge and methodologies for engineering design. Evolving approaches to manage complexity attempt to keep pace with technology and systems development and avoid unreasonable monetary, personal and societal risks. The full potential can only be obtained when new engineering methodologies are in place to ensure future CPS are sufficiently safe, secure, available and cost-efficient. In addressing these questions, our paper draws upon recent investigations of CPS complexity [6,10,11].
The structure of the paper follows the graphical outline in Figure 2. Section 2 establishes general perspectives on complexity to describe what general notions constitute complexity (Section 2.1), its effects (Section 2.2), and how to manage its evolution (Section 2.3). Section 3 focuses on CPS to elaborate on their specific facets of complexity (Section 3.1), the implications of future CPS on complexity (Section 3.2), and limitations of existing design or management methodologies (Section 3.3). Section 4 discusses how to improve our ability to deal with complex CPS including avenues to address the previously identified facets and limitations. Finally, Section 5 concludes with a summary including our call for action.

2. Perspectives on Complexity

2.1. Sources of Complexity

It is natural to first elaborate on what we mean by complexity because there are many interpretations and definitions. This discussion views systems design and management as a series of decisions [12] where complexity is a barrier to effective decision-making. Features such as emergence, non-linearity, temporal dynamics, memory, and interaction effects contribute challenges commonly attributed to complexity. Several frameworks argue that complex systems require a distinct set of decision-making and design methodologies.
Snowden’s Cynefin framework differentiates sense-making strategies for simple, complicated, complex, and chaotic systems [13]. While simple systems present obvious fact-based solutions, complicated systems demand additional effort but still benefit from established best practices covering known issues and phenomena, including side effects. Complex systems embody unpredictability and a lack of well-established practices, so more knowledge needs to be developed. Chaotic systems lack discernible patterns altogether to support decision-making. This model presents complexity as a barrier to decision-making which requires discovery of new knowledge or information to overcome.
From another perspective, Jackson and Keys classify system methodologies along two dimensions [14]. A technical dimension varies between mechanical (reducible or decomposable) and systemic (irreducible) problem types. A social dimension varies between unitary and pluralistic decision-makers. The most complex or ‘wicked’ problems span systemic-pluralistic contexts where technical and social sources complicate decision-making due to an inability to separate decisions and link cause-and-effect combined with a general lack of local control over the decision-making process. This model presents complexity as a structural challenge to make decisions in alternative contexts.
Drawing from these frameworks, general sources of complexity in decision-making include large scale or scope from irreducibility of decisions, plurality of opinions or objectives, unpredictability of outcomes, and lack of causal theory or knowledge. More specific to design as a decision-making process, alternative views frame complexity as a source of increased design effort [15] or uncertainty in meeting desired goals of the designed artifact [16]. While at first these two perspectives appear similar, they highlight a fundamental difference between objective (descriptive) and subjective (perceived) sources of complexity [17].
Objective sources of complexity represent an inherent amount of work or information in a system independent of the people involved. They represent sources of effort that remain for a hypothetical omniscient designer to achieve desired system functions with perfect knowledge. For example, the number of components, types of interactions, number of paths through a logical program, and responses to external stimuli are sources of objective complexity which require explicit design effort to define, integrate, and operate. Objective complexity contributes to Cynefin’s concept of complicated systems or Jackson and Keys’ systemic/pluralistic contexts.
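As a back-of-the-envelope illustration of how objective complexity scales (our sketch, not from the source; the component counts are hypothetical), the number of potential pairwise interactions among components grows quadratically with the number of components, so each added component can demand disproportionate integration effort:

```python
def pairwise_interactions(n_components: int) -> int:
    """Potential pairwise interfaces among n components: n * (n - 1) / 2."""
    return n_components * (n_components - 1) // 2

# Growing the component count tenfold grows the potential
# interaction count roughly a hundredfold.
print(pairwise_interactions(10))   # 45
print(pairwise_interactions(100))  # 4950
```

Actual systems constrain which interactions are permitted, but every potential interaction is a candidate for an unintended dependency that a designer must rule out.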
Subjective sources of complexity consider the challenges to interpreting, understanding, and anticipating design as a human activity rather than an omniscient one and can be viewed as accidental or additional effort [18]. For example, limits to accumulated knowledge, working memory, and language or communication are sources of subjective complexity which contribute to uncertainty in achieving desired outcomes. Subjective sources of complexity are, by definition, always relative as advances in technology and knowledge influence human activity and relate to Cynefin’s concept of complex systems which exhibit unknown unknowns (knowable but unknown to the human observers).

2.2. Effects of Complexity

Complexity contributes two competing effects on system design, illustrated in Figure 3. Most readers first think of the distinctly negative effects: uncertainty in achieving functional goals [16], difficulty in separating, analyzing, or solving sets of components [19], or more numerous tasks to achieve a function [15], resulting in increased cost, schedule, or effort [20,21]. These effects are caused both by objective sources (e.g., number of tasks, total information content, responding to high environmental variability) and by subjective sources related to limitations of individual designers. Prior development of complexity metrics attempts to establish scaling laws to predict cost or effort as a function of observable quantities such as lines of code [22], number of logical paths through a program (similar to behavioral state-space) [23], number of components and degree of coupling [19], and architectural topology [24].
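For instance, the path-based metric of [23] (McCabe's cyclomatic complexity) can be computed from a control-flow graph as M = E − N + 2P, where E is the edge count, N the node count, and P the number of connected components. The sketch below uses a hypothetical control-flow graph for a program with a single if/else branch:

```python
def cyclomatic_complexity(edges, nodes, components=1):
    """McCabe's metric M = E - N + 2P for a control-flow graph."""
    return len(edges) - len(nodes) + 2 * components

# Hypothetical control-flow graph: entry -> condition -> (then | else) -> exit.
nodes = ["entry", "cond", "then", "else", "exit"]
edges = [("entry", "cond"), ("cond", "then"), ("cond", "else"),
         ("then", "exit"), ("else", "exit")]
print(cyclomatic_complexity(edges, nodes))  # 2 independent paths
```

Each additional branch point adds an independent path, which is one concrete way the behavioral state-space of cyber components grows with program structure.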
However, complexity also contributes positive effects as a key enabler for improved system performance. Well-designed objectively complex systems can afford larger capacity through parallelism, greater response to external stimuli through feedback loops, higher efficiency by closely coupling components, and increased intelligence from adaptability and learning. Under optimal control, complexity allows a system to reach greater levels of performance by more closely matching behavior to needs [25]. All of these benefits, however, critically rely on the design activity being executed well, subject to the limited abilities of participating designers.
To summarize, objective complexity contributes both to expected, desired effects through essential components and to undesired effects (namely, extra effort) through accidental sources. Subjective complexity contributes only to unexpected, undesired effects due to limited human knowledge, perception, and cognitive abilities, but can be directly influenced by mitigating or overcoming human limits. Thus, humans are both the source of subjective complexity and the means to tip its balance toward the positive, by mitigating the downsides with new knowledge and tools. Knowledge establishes explanatory causal links between phenomena and helps improve anticipation and perception of negative effects. Tools help humans to focus efforts at higher levels of abstraction, relying on automation to optimally solve well-characterized problems.

2.3. Evolution of Complexity

Technological progress is measured by the performance of an engineered system to achieve desired goals efficiently (minimize costs) and effectively (maximize value). Firms expend significant research and development effort to accumulate knowledge and refine designs for increasingly sophisticated systems. This dynamic is perceptible: artifacts transition from complex to complicated domains as a knowledge base is established and matured. Meanwhile, novel compositions may still exhibit complex features due to unknown emergent behavior. This section argues there are two paths to improve performance: knowledge-driven activities to reduce subjective sources of complexity and architecture-based activities to reduce objective sources of complexity.
Although technological revolutions are unpredictable [26], there remain consistent patterns of performance growth over long time frames. For example, Koh and Magee illustrate technological progress as long-term exponential growth of key performance measures in diverse fields such as information and energy technology [27,28]. While patterns such as Moore’s law emphasize component features such as the number of transistors per square inch, Christensen argues innovation at the architectural level, not the component level, drives strategic advantage [29,30]. Figure 4 illustrates the effects of technological progress as a series of S-curves which model the initial growth and saturation of alternative product architectures.
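The S-curves of Figure 4 can be sketched with a simple logistic model (an illustrative assumption on our part; all parameters below are hypothetical): each architecture saturates at its own performance ceiling, and a successor with a higher ceiling and later midpoint eventually overtakes it, producing long-term growth across architectures.

```python
import math

def s_curve(t, ceiling=1.0, rate=1.0, midpoint=0.0):
    """Logistic growth of a performance measure toward an architectural ceiling."""
    return ceiling / (1.0 + math.exp(-rate * (t - midpoint)))

def overall_performance(t):
    """Best available performance across two successive (hypothetical) architectures."""
    arch1 = s_curve(t, ceiling=1.0, rate=1.5, midpoint=2.0)
    arch2 = s_curve(t, ceiling=10.0, rate=1.5, midpoint=8.0)
    return max(arch1, arch2)

# Early on, architecture 1 dominates; later, architecture 2 takes over
# and carries overall performance toward its higher ceiling.
```

The model captures the qualitative point only: incremental refinement saturates within an architecture, while sustained growth requires the jump to a new one.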
Incremental performance improvement within product architectures benefits from knowledge-driven advances, akin to mitigating the effects of subjective complexity and reducing complex systems to complicated systems. Sustained improvement over longer time frames requires architectural innovation, which resembles a paradigm change where new technologies or the integration of new technologies provide capabilities exceeding the state-of-the-art. Elegant architectures, once matured, limit negative effects of complexity such as unintended side effects without constraining desired performance (see e.g., Section 2.2.1 of [31] for a definition of composability and Section 7.4.1 of [31] for examples of such side effects for distributed real-time computer systems). This pathway deals with objective complexity and leverages design strategies such as modularity [32], layered architectures [33], and abstractions through the use of models [34]. However, some requisite level of knowledge is required to conceive of new product architectures, preventing the two modes of evolution from being separate.

3. Complexity in CPS

In Section 3.1 we elaborate complexity facets of current CPS, and then turn to discuss the evolution of such facets for future CPS (Section 3.2) and how this complexity relates to methodological limitations (Section 3.3).

3.1. Facets of Complexity

Complexity for CPS is highly multifaceted, arising from the CPS itself, its environment, its design process, and its design organization as a CIPS, recall Figure 1. These aspects or viewpoints are interrelated [35] as we will discuss further in this section.
Fundamentally, CPS complexity facets stem from:
  • The environment in which a CPS acts and undertakes tasks corresponding to its functional and extra-functional requirements. Environments and tasks are intimately related to the capabilities of CPS [36].
  • The cyber components, where software-defined behaviors and sophisticated electronics platforms lead to very large state-spaces with implications for understanding, maintaining and predicting system behaviors. Unintended effects may arise from behaviors and assumptions referring to other software components, resource sharing and the CPS environment. Interactions between software and the electronics platforms and reuse of legacy components (sometimes black-box or poorly documented/understood), further contribute to complexity. Interactions between components in computer systems include both local and global interactions and exhibit much faster characteristic timescales compared to physical interactions [10].
  • The physical components, where a key source of complexity arises from side effects which can be the same order of magnitude as the intended behavior (e.g., friction-induced thermal effects between surfaces in contact) [37]. Component interactions and side effects are further multifold (e.g., motion, heat, electromagnetism) and are characterized by strong local effects.
  • Interactions between the cyber and physical components. Combining cyber and physical components enables feedback and adaptive systems, providing cost-efficient capabilities otherwise impossible, but also more complex behaviors (e.g., hybrid real-time systems) including a multitude of possible faults and failure modes. In particular, a CPS exhibits a multiplicity of interfaces and interrelations that encompass both explicit and implicit dependencies among CPS components, and between the CPS and its environment [10]. Correspondingly, a change in one part of a system may affect many others, producing unexpected or undesired side effects from the close coupling and tight integration. The integration of cyber and physical components moreover requires reconciling different worldviews and traditions (see e.g., [38]). The lack of timing (real-time) abstractions for software systems poses a key limitation for predictability [39].
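The state-space growth noted above for cyber components can be made concrete with a small sketch (the component sizes are hypothetical): the joint state space of a composition grows multiplicatively with its components, so even a few modest software components yield a space far beyond what exhaustive analysis can cover.

```python
import math

# Hypothetical local state counts for four interacting software components.
component_states = [4, 8, 16, 32]

# The composed system's reachable state space is, at worst, their product.
composed = math.prod(component_states)
print(composed)  # 16384 joint states from only four small components
```

Realistic CPS software has component state counts orders of magnitude larger, which is why understanding, maintaining and predicting composed behavior is so difficult.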
The CPS design process reflects these complexity facets. Establishing a number of desired properties such as performance, cost-efficiency, sustainability, upgradability and safety becomes challenging because of the strong interdependencies expressed in shared design variables. Many of these properties are of a systems nature and depend on the successful development and integration of cyber and physical components as part of end-to-end feedback interactions, subject to the various side effects and dependencies mentioned. This results in a situation where the properties are generally difficult to trade off explicitly and predict during design. It is striking that several of the U.S. definitions of CPS emphasize co-design of cyber and physical parts and the importance of multi-disciplinarity because CPS requires “integration” and composition across a number of aspects and layers (see e.g., [3,10,40]).
Interestingly, integration of cyber and physical components also has a partially limiting effect on complexity. Integration requires each side (cyber and physical) to deal with the limitations and constraints of the other and, specifically, physical constraints limit cyber actions. For example, inertia and energy constraints or cost and safety considerations limit potential actions of cyber components, often forcing designs to reduce various facets of complexity in comparison to general purpose software systems with fewer such constraints.
The complexity of CPS has evolved over time through the introduction of computer control systems and networked control systems that are integrated with, and control, physical systems to provide novel capabilities. This trend has created needs for a corresponding evolution of organizational responsibilities and processes. For example, introducing local control systems in cars required new responsibilities. Connecting these control systems over Controller Area Networks (CAN) (adding the networking dimension to cyber) introduced distributed functions for coordinating subsystems (various physical actuators and sensory subsystems). An example category of such functions are those that provide active safety, for example vehicle stability control, further illustrating the growing scope of responsibilities (e.g., for the network, for the coordination among local controllers and sensors), see e.g., [41].
Facets of complexity for a CPS are intricately tied to its CIPS and vice-versa, linking the CPS, CIPS, and their respective environments in Figure 1. Theories alternately known as the mirroring hypothesis or Conway’s law posit that a product architecture parallels its parent organization’s structure [35,42]. Facets of organizational complexity include overcoming communication barriers across organizational units to resolve technical dependencies and aligning multiple viewpoints with the overall system objectives.
Today, thousands of engineers develop what we consider to be advanced machines such as vehicles and aircraft. No single person is able to grasp the complete design and there is no complete model capturing all dependencies in the system. As a remedy, CPS designers have long introduced computer support tools (Computer Aided Engineering, CAE) to help manage complexity by extending the memory and reasoning capabilities of individual designers and to support collaboration among large teams of engineers. It is only natural that a minimum amount of (objective) CIPS complexity will be required to deal with complex CPS acting in complex environments. Research has confirmed the important role of integration mechanisms, such as organizational structure, work procedures and methods, training, social systems, and computer-aided engineering, to promote multidisciplinary collaboration. In addition, strategic management in relation to the organization is essential because the organizational power base is likely to evolve (e.g., from physical/electronics to software and data) and because future CPS will require new (complementary) competences, roles and responsibilities [43]. Related to the mirroring hypothesis, the CIPS has to evolve along with the changing content of the CPS product/services.
In summary, CPS are characterized by strong coupling and interdependency between cyber and physical components as illustrated in Figure 1. They exhibit facets of complexity such as heterogeneity, size, computability, uncertainty, dynamic behavior and structural dependencies [10,11,44]. We believe CPS represents a fundamental shift from other existing engineering products similar to architecture-enabled innovation in Figure 4 and, thus, carries potential benefits of higher objective complexity with related challenges of overcoming associated subjective complexity.

3.2. Evolution of Future CPS

Overall, CPS are increasingly being fitted with new functions, deployed in new domains, and deployed as larger systems. The implication is that future CPS face an increasing envelope of functional and extra-functional requirements. Examples of functional requirements include typical use-cases such as predictive maintenance and increasing levels of CPS automation. Examples of stricter (and newer) extra-functional requirements include cyber-security (due to increasing connectivity and collaboration), safety (due to extended types of tasks in more complex environments) and environmental sustainability (e.g., due to stricter legislation). The cross-domain nature of many new CPS applications, where for example robotics technologies are adopted by automated cars, and where telecom networks may find use in smart machines, poses both large opportunities and challenges.
In the following, we elaborate three aspects of evolving CPS: (1) systems integration and scope, (2) intelligence and level of automation, and (3) advanced tasks and complex environments—and then provide conclusions based on the trends in these areas.
As introduced in Section 1, technological advances enable entirely new types of integration and connectivity in CPS across:
  • Technological areas such as physical, embedded, networked and information (e.g., cloud and edge computing) systems,
  • Standalone systems, for example integrating vehicles and infrastructure to form intelligent transportation systems, and
  • Life-cycle stages, in particular making data available throughout the life-cycle and enabling software upgrades. These concepts are closely related to so-called DevOps (Development-Operations integration) associated with continuous software development, integration, and deployment with feedback from operational systems.
At the same time, increasing levels of automation and intelligence, including data analytics, are being introduced at a rapid pace in CPS. These capabilities rely on a number of AI techniques including machine learning. The prospects of AI and data analytics are seen as game-changers, as highlighted by a report by the National Science and Technology Council in the U.S. [45] and Section 0.2 of the European strategic research agenda on electronic components and systems [8]. Context awareness is essential for new types of AI-based CPS, including among other things the ability to understand what entities are currently part of the near environment and to infer what their intentions are. Further maturation of AI technologies is likely to drive increasing levels of automation in a number of application domains.
Because of their potential to solve societal challenges and generate revenue, future CPS will also be given increasingly difficult tasks in open environments. Automated driving on public roads provides a useful example where highly advanced technologies will be deployed and spread across society, as opposed to traditional manufacturing applications where advanced technologies are enclosed and used by few. Similar trends are seen in manufacturing with robot-human collaboration systems and in many other application domains, see e.g., [8].
Such smart CPS must deal with tasks in dynamic and changing environments (e.g., highly varying traffic environments which also evolve over time with changing infrastructures and behaviors of humans and CPS). In general, this implies that not all operational conditions will be known a priori (at the time of system development) and, correspondingly, system adaptation and evolution after deployment, including learning from field data, will be essential, see e.g., [10]. The broader implication is that open CPS will face a variety of existing and new types of uncertainties from partially unknown environments, security vulnerabilities and attacks (e.g., with the difficulty of anticipating attacks by providing appropriate “attacker models”), and changing properties of the CPS itself (e.g., due to partial failures, upgrades or learning). Uncertainty may relate to aspects of all life-cycle stages and parts of a CPS (e.g., in terms of environment perception, embedded knowledge and actual physical capabilities), as well as to the CPS environment. A systematic treatment of uncertainty thus becomes important, taking various sources of uncertainty into account, see e.g., [46].
With this evolution, new types of CPS and CPSoS will be characterized by intelligence, automation, flexibility, connectivity, and data- and service-related business models, see e.g., [1,9,47,48]. Integration with information technologies, most notably cloud and edge computing, will provide platforms to enable new functionalities, on top of which new services can be built, see e.g., [49]. These developments will further stimulate technological development and inventions, continuing the drive towards more sophisticated, large-scale, intelligent and adaptive CPS likely to continue on a path of complexity growth. In Cynefin terminology, we consider current advanced CPS to represent a mixture of complicated and complex systems, while future CPS will fall into the class of complex systems until requisite knowledge can be obtained.
In summary, the continued evolution of CPS will have a drastic impact on complexity, clearly increasing the objective complexity by orders of magnitude, and also introducing new phenomena in terms of smart, evolving, automated CPS, raising the subjective complexity. The new types of CPS will face more unknowns, dependencies, hidden assumptions and accidental complexity (due to systems integration, legacy, etc.). This in turn raises the concern of whether we are really prepared to embrace such new levels of complexity.

3.3. Limitations of Existing Methodologies

This section reviews and discusses the key limitations of existing methodologies to address complexity of future CPS.
As discussed by [10], current methodologies use a number of techniques to deal with CPS complexity, including: (1) process-based approaches (e.g., checklists for all aspects that need to be considered, and for aligning the different life-cycles of software and hardware); (2) model-based and computer aided engineering, which support complexity management through multiple abstractions and views (the challenge then becomes the integration/reconciliation among these views); (3) design and architecting measures, including layered architectures and principles to promote composability; and finally, (4) people/organizational approaches that enhance skills, motivation and collaboration.
While these measures work for current systems, it is generally considered that they do not scale to the next generation CPS. Consequently, there are also multiple calls for new methodologies, see e.g., [8,10,50,51,52]. As an example, the Electronic Components and Systems for European Leadership (ECSEL) roadmap, which was developed in collaboration between three industrial-academic associations representing hundreds of organizations active in electronics, embedded systems and CPS [8], highlights five essential areas to manage future CPS across application domains:
  • Systems and components: architecture, design and integration,
  • Connectivity and interoperability,
  • Safety, security and reliability,
  • Computing and storage,
  • Electronics process technology, equipment, materials and manufacturing.
Many of these topics relate to CPS complexity facets and also consider increased levels of automation, intelligence, and evolvability (see e.g., Chapters 6 and 8 in [8]). Roadmaps call for technical measures and improvements on a broad scale, including engineering education, processes and organizations for CPS, and legislation; see e.g., [7,10,51].
Dealing with increasingly complex environments, uncertainty, and the inherent dependencies and side effects in CPS represents a core aspect that must be addressed to improve the state of methodologies. As described in Section 3.2, future CPS are likely to be assigned increasingly challenging tasks in open, dynamic and changing environments. Describing these varying environments and systematically dealing with uncertainty represents a key challenge.
Dependencies related to a CPS, as described in Section 3, are closely related to the concept of composability, discussed in the following.
The success of Very Large Scale Integration (VLSI) for integrated circuits and of design methods for software can be traced to high-level design abstractions and synthesis methods and, in particular for VLSI, synthesis methods yielding correctness by design. Similar goals have been targeted for CPS; however, while we agree that correctness by design is a highly worthwhile goal and that unnecessary complexity should be eliminated wherever possible (see e.g., [31]), correctness by design represents a challenge for CPS. Abstraction and decomposition, as traditional means to deal with complexity, assume side effects can be eliminated or managed. As investigated by [37], the success of digital systems design in VLSI arises because its characteristics enable abstractions and eliminate, or robustly deal with, side effects. Similar success has not been achieved for mechanical engineering because the same characteristics do not hold; for example, platform-based design is much more complicated, with multi-fold side effects [10]. It follows that the endeavor of correctness by design is even more challenging for CPS.
The essence of CPS design is to combine and integrate elements (e.g., C, P and H) and components (e.g., C-P components) to achieve the desired functionality and extra-functional properties (e.g., performance, reliability and flexibility), within, and in interaction with, varying and partly unknown dynamic environments. This integration may take place during design (off-line) or at run-time for adaptive systems. We see the lack of a systematic treatment of “composability” (integration) as one overarching limitation of existing methodologies. Composability has been defined in Section 2.2.1 of [31] as follows: “An architecture is said to be composable with respect to a specified property if the system integration will not invalidate this property, once the property has been established at the subsystem level. ... Examples of such properties are timeliness or testability. In a composable architecture, the system properties follow from the subsystem properties”.
The likely characteristics of future CPS (recall Section 3.2) will clearly require novel ways of reasoning about system-level properties and of achieving composability. In the following, we elaborate the multidimensional topic of CPS composability and identify what we see as specialized composability issues, including dealing with human-CPS integration, CPS incorporating AI, trustworthiness and CPSoS.
  • Composability across components, disciplines and aspects. When composing CPS components, multiple technologies need to be integrated (electrical, mechanical, hydraulic, software, etc.) and multiple aspects need to be reconciled, referring for example to properties such as cost, safety and availability. When reasoning about such compositions, multiple theories will apply, each focusing on one or more aspects of integration (e.g., logic-interface automata, composing transfer functions, scheduling, overall reliability, etc.). Real and successful composition must consider all these aspects, including the characteristic side effects and dependencies of CPS. For this, there is no existing comprehensive theory or methodology. Current best practices can manage systems of today’s complexity, albeit using heuristics and with questionable cost-efficiency. Beyond composition at design or production time, CPS will be increasingly adaptive and able to reconfigure during run-time. There is thus a need to improve interoperability and to reason about composability over multiple aspects also at run-time. In a similar vein, with the use of DevOps and learning systems within CPS, the software behaviors of parts of the CPS will change during operation, requiring the ability to monitor and ensure proper operation, see e.g., [1,8].
  • Human-CPS integration. Regardless of the system type and level of automation, humans acting as developers, operators, users, and maintainers will increasingly have to deal with and interact with CPS. Increasing levels of automation pose challenges for human-machine interaction, e.g., for cases where humans (e.g., pilots in an aircraft) may still be responsible for acting in emergency situations. We are currently transitioning to increasing levels of automation and smarter systems, and the lessons learned in the history of automation remain highly relevant for improving support for humans interacting with highly capable CPS [53]. As an example of a related challenge in automated driving, leading companies have abandoned the so-called “level 3” of automated driving, moving directly to “level 4”, where the automation system is also responsible for fall-back maneuvers, see e.g., [54]. A key notion for human-CPS integration is that of intent, i.e., the understanding of what drives the action or inaction of an agent. There is also a need to identify deviations from the normal behavior of such (human) agents, in particular behavior that leads to decisions/actions of significance for the functioning of the CPS [8] (e.g., Chapter 5.3.3).
  • AI within CPS. While AI and machine learning technologies enable entirely new types of applications, the use of machine learning, in particular in the form of neural networks, raises concerns about how to deal with transparency (black-box behavior), robustness and predictability (e.g., when data fall outside of the training set), and how to cost-efficiently verify, validate and assure such systems, see for example [55,56].
  • Trustworthiness. Trust involves properties such as privacy, cybersecurity, safety and availability, which affect each other, requiring new holistic methodologies. Security risks already exist for current CPS and will increase as CPS become even more widely adopted and connected, with an increasing use of open source software. Absolute safety and security will not be possible, so on-line measures are necessary to deal with security breaches and safety-related anomalies. Moreover, the increasing deployment and use of CPS increases the importance of their availability, implying that traditional fail-safe solutions are not an option. Future systems must be fault-tolerant while balancing the complexity increase due to the introduction of redundancy, adaptability and fall-back measures. Finally, issues related to liability, ethics and assurance are recognized as essential. Who, if anyone, should take responsibility when a complex CPS fails, what are the ethical considerations of decisions made by highly automated systems, and what is required of an assurance case for a future complex CPS? [8]
  • CPSoS. Future CPS are likely to form part of CPSoS. Such systems, also because of their novelty and scale, may relate to multiple domains and require consideration of a larger set of stakeholders, jurisdictions, regulations and standards [9].
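As a minimal illustration (not from the referenced works) of why a property established at the subsystem level must be re-verified under composition, consider timeliness: the classical rate-monotonic utilization bound of Liu and Layland gives a sufficient (but not necessary) schedulability test for periodic tasks on one processor. Each software component below passes the test in isolation, yet their composition does not; the task parameters are hypothetical.

```python
def rm_schedulable(tasks):
    """Sufficient rate-monotonic schedulability test (Liu & Layland).

    tasks: list of (wcet, period) pairs with deadline == period.
    Returns True if total utilization is below the bound n * (2^(1/n) - 1),
    guaranteeing all deadlines are met under rate-monotonic scheduling.
    """
    n = len(tasks)
    utilization = sum(wcet / period for wcet, period in tasks)
    return utilization <= n * (2 ** (1 / n) - 1)

# Hypothetical components, each verified "timely" in isolation...
brakes = [(4.0, 10.0)]      # utilization 0.40
steering = [(3.0, 8.0)]     # utilization 0.375
logging_task = [(1.0, 12.0)]  # utilization ~0.083

print(rm_schedulable(brakes))     # passes alone
print(rm_schedulable(steering))   # passes alone
# ...but their composition exceeds the 3-task bound (~0.78),
# so timeliness is not automatically preserved by integration:
print(rm_schedulable(brakes + steering + logging_task))
```

In Kopetz’s terms, an architecture in which such re-verification is unnecessary (e.g., via reserved time slots) would be composable with respect to timeliness; a shared-processor design, as sketched here, is not.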
Existing development methodologies are usually centered on either physical systems, software, or data management. Methodologies are also not integrated when it comes to providing a holistic view of complex CPS, CIPS and environments. Each entity is complex in itself, but they are also strongly interrelated. For example, the corresponding CIPS encompasses an increasing quantity of tools, data, information and knowledge. Structuring, managing and ensuring interoperability among such assets poses a great challenge on its own. Information capture and formalization require extra effort, including maintaining the relations and validity of this information. Methodologies are also not aligned across the life-cycle stages. This is, for example, visible in the increasing difficulty of CPS maintenance, which requires people and tools with additional skills/capabilities. Future methodologies will need to embrace and integrate all these perspectives.

4. Addressing Limitations to Complexity

Given our analysis and review of CPS complexity and the state of the art, we conclude that progress is required to deal with future CPS. This section addresses how we should act to overcome the identified limitations, and the means and strategies required. Research agendas discussed in previous sections propose a variety of measures to deal with future CPS, including the strategic research areas identified by ECSEL [8] and a mix of socio-technical priorities. We contrast these perspectives by relating them to our discussion of the evolution of complexity in general (Section 2.3) and for CPS (Section 3.2).
With human creativity and ingenuity, we envision a future better prepared to deal with unprecedented complexity, corresponding to our ability to make a paradigm-changing architectural transition as indicated in Figure 4. This future development scenario implies that organizations will have access to existing and new knowledge and methods that explicitly address and manage uncertainty, dependencies and composability. Future tools and automation will assist humans in CPS design to: (i) improve the efficiency and effectiveness of inter-human communication, (ii) better accommodate (design) tasks for which our memories and processing capabilities are insufficient, i.e., automating complex tasks such as design space exploration, change management and verification, and (iii) enable adequate collaboration between humans and supporting AI systems. Organizations and management will be strategically prepared by establishing greater awareness of complexity and adopting life-long learning. Awareness relates to creating an understanding of both the benefits and risks associated with complexity.
To reach such a scenario, and to be able to manage the transition towards such a mature state, there is a need to promote and drive efforts that address:
  • Complexity awareness by people and organizations, as well as improving training, leveraging existing knowledge and best practices,
  • Research towards establishing CPS foundations that address the identified limitations of current CPS methodologies,
  • Multi-disciplinary collaboration for the above mentioned educational and research efforts, and also in terms of exchange of best practices across industrial domains and between academia and industry.
In order to address complexity, there is a need to create and raise awareness among various CPS stakeholders. This represents an important first step, implying that organizations would be better prepared to assess costs and risks for both projects and products/services. Training and the spreading of knowledge and best practices play important roles and extend beyond engineers. There is a need to consider and include a broader set of stakeholders, including policy-makers and the general public, to raise awareness of CPS implications (opportunities, risks and challenges) [7].
Complexity metrics have the potential to bring awareness to developing organizations and decision-makers. Relevant metrics developed with respect to objective and subjective complexity could give insights and guide decisions. Developing representative metrics has, however, proven to be very difficult. While many have been proposed, they tend to emphasize objective complexity and focus on a few aspects [57]. Few have been adopted in practice, and those that have (e.g., lines of code) are very coarse-grained. Thus, while it would be valuable to have operational complexity metrics, other measures will be needed for raising awareness, including training and guidelines that attend to the multiple aspects of CPS complexity. In addition, it is necessary to develop new ways of organizing life-long learning as a way to systematically increase organizational capabilities through the use of new knowledge and tools.
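To make the notion of an objective complexity metric concrete, one proposal, the structural complexity metric empirically validated in [24], can be sketched as follows: it combines summed component complexities, summed interface complexities, and the graph energy (sum of singular values of the binary adjacency matrix) per component. The component and interface weights below are hypothetical placeholders; in practice, estimating them is itself part of the difficulty noted above.

```python
# Sketch of the structural complexity metric of Sinha and de Weck [24]:
#   C = C1 + C2 * C3
# where C1 sums component complexities alpha_i, C2 sums interface
# complexities beta_ij over connected pairs (A_ij = 1), and
# C3 = E(A) / n is the graph energy of the adjacency matrix A
# (sum of its singular values) per component.
import numpy as np

def structural_complexity(alpha, beta, A):
    """alpha: (n,) component complexities; beta: (n, n) interface
    complexities; A: (n, n) binary adjacency matrix."""
    n = len(alpha)
    c1 = float(np.sum(alpha))                                # components
    c2 = float(np.sum(beta * A))                             # interfaces
    c3 = float(np.sum(np.linalg.svd(A, compute_uv=False))) / n  # topology
    return c1 + c2 * c3

# Hypothetical three-component system connected in a chain 0-1-2,
# with unit component and interface weights.
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
alpha = np.ones(3)
beta = np.ones((3, 3))
print(structural_complexity(alpha, beta, A))
```

Note that the metric quantifies only objective, structural complexity; it says nothing about the subjective complexity arising from a limited knowledge base.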
There are several challenges for education and training in relation to future CPS, see e.g., [51,58]. Fundamentally, these refer to: the trade-off between breadth and depth and identifying appropriate knowledge profiles; preparing students for continued learning (self-learning); the fact that teachers as well as universities have disciplinary backgrounds/profiles, with corresponding difficulties in teaching topics that cut across the aspects of a CPS; and, finally, the fact that teaching, in many cases, has a low status compared to research. In the short term, curricula need renewal to address the “cross-cutting” aspects of CPS, for example by developing new courses that address safety and security, or safety and AI, for CPS. There is clearly room for new schemes for continued education, and strong needs to raise the importance and status of teaching and training in this area.
As emphasized by several research agendas, see e.g., [1,7], there is a need for new foundations of CPS engineering to address the issues elaborated in Section 3.3. We envision that such new foundations would encompass new abstractions, (uncertainty) modeling frameworks, composable multi-view models and composable architectures. Based on new knowledge in these areas, novel methods and tools can be developed, eventually yielding new methodologies.
In parallel, there is a need to drive research into organizational approaches and processes that can adopt such methodologies while catering for efficient collaboration within and across large teams of humans supported by increasingly automated computer-aided engineering (CAE) systems. There is a need for alignment and tailorability among the product, organization and process architectures, as discussed earlier. This structuring should align with the corresponding structure of teams and their interactions and processes, with specific consideration for the synchronization and alignment of software and hardware processes. Automation and assistance by CAE systems will be needed to support the above, and in particular effective communication between people and teams (which currently requires huge efforts as part of development). Strategic management efforts will be essential for all these endeavours.
Many of the above efforts will require multidisciplinary collaboration because of the nature of CPS. Collaboration between industry and academia will also be essential. While we hope to make much progress in dealing with future CPS, they will continue to evolve, demanding longer-term efforts that continuously work towards spreading existing knowledge and gaining new knowledge. There will correspondingly be a need to advance formal methods and techniques (for dealing with complicated CPS) as well as engineering heuristics (for dealing with truly complex CPS). It will likewise be important to leverage cross-domain collaboration, encompassing the exchange of best practices across industrial domains and between academia and industry.

5. Discussion and Conclusions

This paper argues that CPS, as close integrations of physical, cyber, and human elements, represent an architectural innovation over existing engineered systems, carrying both unprecedented potential to improve performance and unprecedented challenges. From an objective perspective, CPS complexity arises from technical sources such as the number and heterogeneity of components, the degree of interdependency, and the response to stimuli in an open environment. While some aspects of complexity are desired for performance, from a subjective perspective the existing knowledge base on CPS is limited, making it difficult for engineers to anticipate and predict CPS behavior. We argue that new design methods, tools and, ultimately, knowledge will mature CPS.
Limitations to dealing with future CPS arise from the CIPS responsible for designing and operating CPS. Design of future CPS must overcome current limitations in describing highly varying environments and in dealing with uncertainty and composability. Composability covers both the integration of cyber and physical components and the integration of technical systems with human counterparts. Issues such as effective collaboration between humans and intelligent systems, establishing trustworthiness, and considering a broader system of systems perspective must be addressed.
Our call to action can be summarized in three points. First, for people and organizations involved with CPS, efforts are needed to increase the awareness of the sources of complexity and approaches to mitigate the unexpected and undesired side effects arising from an insufficient or immature knowledge base. Second, new research should specifically address the limitations to current CPS methodologies including identifying approaches to better understand uncertainty and composability. Finally, improved collaboration across disciplines and between industry and academia is necessary to implement the educational and research efforts described above.
Responding to this call will be important to reach the projected societal-scale advantages of CPS while managing the new risks introduced by such CPS.

Author Contributions

Both authors contributed equally to this article. M.T. led efforts on Section 1, Section 3, and Section 4. P.T.G. led efforts on Section 2 and Section 5 and implemented the four figures.

Funding

This research was partially funded for Martin Törngren by the Platforms4CPS project, supported by the European Commission H2020 program, Grant Agreement No. 731599, and a KTH sabbatical grant.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Damm, W.; Sztipanovits, J.; Baras, J.S.; Beetz, K.; Bensalem, S.; Broy, M.; Grosu, R.; Krogh, B.H.; Lee, I.; Ruess, H.; et al. Towards a Cross-Cutting Science of Cyber-Physical Systems for Mastering all-Important Engineering Challenges. 2016. Available online: https://cps-vo.org/node/27006 (accessed on 4 August 2018).
  2. Lee, E.A.; Seshia, S.A. Introduction to Embedded Systems: A Cyber-Physical Systems Approach, 2nd ed.; MIT Press: Cambridge, MA, USA, 2017. [Google Scholar]
  3. Cengarle, M.V.; Bensalem, S.; McDermid, J.; Passerone, R.; Sangiovanni-Vincentelli, A.; Törngren, M. Characteristics, Capabilities, Potential Applications of Cyber-Physical Systems: A Preliminary Analysis. 2013. Available online: http://www.cyphers.eu/sites/default/files/D2.1.pdf (accessed on 11 July 2018).
  4. Cyber. Merriam-Webster Dictionary; Merriam-Webster: Springfield, MA, USA, 2018. [Google Scholar]
  5. Wiener, N. Cybernetics. Sci. Am. 1948, 179, 14–19. [Google Scholar] [CrossRef] [PubMed]
  6. Platforms4CPS. Foundations of CPS—Related Work. 2017. Available online: https://platforum.proj.kth.se/tiki-index.php?page=Foundations+of+CPS+-+Related+Work (accessed on 11 July 2018).
  7. Schätz, B.; Törngren, M.; Bensalem, S.; Cengarle, M.V.; Pfeifer, H.; McDermid, J.; Passerone, R.; Sangiovanni-Vincentelli, A. Research Agenda and Recommendations for Action. 2015. Available online: http://cyphers.eu/sites/default/files/d6.1+2-report.pdf (accessed on 20 October 2018).
  8. AENEAS; ARTEMIS Industry Association; EPoSS. Strategic Research Agenda for Electronic Components and Systems. 2018. Available online: https://efecs.eu/publication/download/ecs-sra-2018.pdf (accessed on 20 October 2018).
  9. Engell, S. European Research Agenda for Cyber-Physical Systems of Systems and Their Engineering Needs. 2015. Available online: http://www.cpsos.eu/wp-content/uploads/2016/06/CPSoS-D3.2-Policy-Proposal-European-Research-Agenda-for-CPSoS-and-their-engineering-needs.pdf (accessed on 10 August 2018).
  10. Törngren, M.; Sellgren, U. Complexity Challenges in Development of Cyber-Physical Systems. In Principles of Modeling; Lohstroh, M., Derler, P., Sirjani, M., Eds.; Lecture Notes in Computer Science; Springer: Cham, Switzerland, 2018; Volume 10760. [Google Scholar] [CrossRef]
  11. Grogan, P.T.; de Weck, O.L. Collaboration and Complexity: An experiment on the effect of multi-actor coupled design. Res. Eng. Des. 2016, 27, 221–235. [Google Scholar] [CrossRef]
  12. Ullman, D.G. Robust Decision-making for Engineering Design. J. Eng. Des. 2001, 12, 3–13. [Google Scholar] [CrossRef]
  13. Snowden, D.J.; Boone, M.E. A Leader’s Framework for Decision Making. Harv. Bus. Rev. 2007, 85, 68–76. [Google Scholar] [PubMed]
  14. Jackson, M.; Keys, P. Towards a System of Systems Methodologies. J. Oper. Res. Soc. 1984, 35, 473–486. [Google Scholar] [CrossRef]
  15. Bashir, H.A.; Thomson, V. Models for estimating design effort and time. Des. Stud. 2001, 22, 141–155. [Google Scholar] [CrossRef]
  16. Suh, N.P. A theory of complexity, periodicity and the design axioms. Res. Eng. Des. 1999, 11, 116–132. [Google Scholar] [CrossRef]
  17. Schlindwein, S.L.; Ison, R. Human knowing and perceived complexity: Implications for systems practice. Emerg. Complex. Organ. 2004, 6, 27–32. [Google Scholar]
  18. Brooks, F.P. No Silver Bullet: Essence and Accidents of Software Engineering. IEEE Comput. 1987, 20, 10–19. [Google Scholar] [CrossRef]
  19. Summers, J.D.; Shah, J.J. Mechanical engineering design complexity metrics: Size, coupling, and solvability. J. Mech. Des. 2010, 132, 021004. [Google Scholar] [CrossRef]
  20. Braha, D.; Maimon, O. A Mathematical Theory of Design: Foundations, Algorithms and Applications; Springer Science & Business Media: Dordrecht, The Netherlands, 1998. [Google Scholar] [CrossRef]
  21. Arena, M.V.; Younossi, O.; Brancato, K.; Blickstein, I.; Grammich, C.A. Why Has the Cost of Fixed-Wing Aircraft Risen? A Macroscopic Examination of the Trends in U.S. Military Aircraft Costs over the Past Several Decades; Monograph MG-696-NAVY/AF; RAND Corporation: Santa Monica, CA, USA, 2008. [Google Scholar]
  22. Albrecht, A.J.; Gaffney, J.E. Software Function, Source Lines of Code, and Development Effort Prediction: A Software Science Validation. IEEE Trans. Softw. Eng. 1983, SE-9, 639–648. [Google Scholar] [CrossRef]
  23. McCabe, T.J. A Complexity Measure. IEEE Trans. Softw. Eng. 1976, 4, 308–320. [Google Scholar] [CrossRef]
  24. Sinha, K.; de Weck, O.L. Empirical Validation of Structural Complexity Metric and Complexity Management for Engineering Systems. Syst. Eng. 2016, 19, 193–206. [Google Scholar] [CrossRef]
  25. Deshmukh, A.V.; Talavage, J.J.; Barash, M.M. Complexity in Manufacturing Systems. Part 1: Analysis of Static Complexity. IIE Trans. 1998, 30, 645–655. [Google Scholar] [CrossRef]
  26. Simon, H.A. The Steam Engine and the Computer: What Makes Technology Revolutionary. Comput. People 1987, 36, 7–11. [Google Scholar]
  27. Koh, H.; Magee, C.L. A functional approach for studying technological progress: Application to information technology. Technol. Forecast. Soc. Chang. 2006, 73, 1061–1083. [Google Scholar] [CrossRef]
  28. Koh, H.; Magee, C.L. A functional approach for studying technological progress: Extension to energy technology. Technol. Forecast. Soc. Chang. 2008, 75, 735–758. [Google Scholar] [CrossRef]
  29. Christensen, C.M. Exploring the Limits of the Technology S-Curve. Part I: Component Technologies. Prod. Oper. Manag. 1992, 1, 334–357. [Google Scholar] [CrossRef]
  30. Christensen, C.M. Exploring the Limits of the Technology S-Curve. Part II: Architectural Technologies. Prod. Oper. Manag. 1992, 1, 358–366. [Google Scholar] [CrossRef]
  31. Kopetz, H. Real-Time Systems: Design Principles for Distributed Embedded Applications, 1st ed.; Kluwer Academic Press: Dordrecht, The Netherlands, 1997. [Google Scholar] [CrossRef]
  32. Baldwin, C.Y.; Clark, K.B. Design Rules: The Power of Modularity; MIT Press: Cambridge, MA, USA, 2000. [Google Scholar]
  33. Doyle, J.C.; Csete, M. Architecture, constraints, and behavior. Proc. Natl. Acad. Sci. USA 2011, 108, 15624–15630. [Google Scholar] [CrossRef] [PubMed][Green Version]
  34. Maier, M.W. The Art of Systems Architecting, 3rd ed.; CRC Press: Boca Raton, FL, USA, 2009. [Google Scholar] [CrossRef]
  35. Eppinger, S.D.; Salminen, V. Patterns of Product Development Interactions. In Proceedings of the International Conference on Engineering Design, Glasgow, UK, 21–23 August 2001. [Google Scholar]
  36. Simon, H. The Sciences of the Artificial, 3rd ed.; MIT Press: Cambridge, MA, USA, 1996. [Google Scholar]
  37. Whitney, D.E. Why mechanical design cannot be like VLSI design. Res. Eng. Des. 1996, 8, 125–138. [Google Scholar] [CrossRef][Green Version]
  38. Derler, P.; Lee, E.A.; Torngren, M.; Tripakis, S. Cyber-Physical System Design Contracts. In Proceedings of the ICCPS ’13: ACM/IEEE 4th International Conference on Cyber-Physical Systems, Philadelphia, PA, USA, 8–11 April 2013. [Google Scholar]
  39. Lee, E.A. Computing Needs Time. Commun. ACM 2009, 52, 70–79. [Google Scholar] [CrossRef]
  40. National Institute of Standards and Technology, Cyber Physical Systems Public Working Group. Framework for Cyber-Physical Systems—Release 1.0. 2016. Available online: https://pages.nist.gov/cpspwg/ (accessed on 4 August 2018).
  41. Johansson, K.H.; Törngren, M.; Nielsen, L. Vehicle Applications of Controller Area Network. In Handbook of Networked and Embedded Control Systems; Birkhäuser Boston: Boston, MA, USA, 2005; pp. 741–765. [Google Scholar] [CrossRef][Green Version]
  42. MacCormack, A.; Baldwin, C.; Rusnak, J. Exploring the duality between product and organizational architecture: A test of the “mirroring” hypothesis. Res. Policy 2012, 41, 1309–1324. [Google Scholar] [CrossRef]
  43. Adamsson, N. Interdisciplinary Integration in Complex Product Development: Managerial Implications of Embedding Software in Manufactured Goods. Ph.D. Thesis, Department of Machine Design, KTH Royal Institute of Technology, Stockholm, Sweden, 2007. [Google Scholar]
  44. Sillitto, H. Architecting Systems: Concepts, Principles and Practice. Volume 6: Systems; College Publications: London, UK, 2014. [Google Scholar]
  45. Networking and Information Technology Research and Development Subcommittee. The National Artificial Intelligence Research and Development Strategic Plan; Executive Office of the President or the United States: Washington, DC, USA, 2016. [Google Scholar]
  46. Zhang, M.; Selic, B.; Ali, S.; Yue, T.; Okariz, O.; Norgren, R. Understanding Uncertainty in Cyber-Physical Systems: A Conceptual Model. In Proceedings of the 12th European Conference on Modelling Foundations and Applications, Vienna, Austria, 4–8 July 2016; Springer-Verlag: Berlin/Heidelberg, Germany, 2016; Volume 9764, pp. 247–264. [Google Scholar] [CrossRef]
  47. Rajkumar, R.; Lee, I.; Sha, L.; Stankovic, J. Cyber-physical systems: The next computing revolution. In Proceedings of the 47th Design Automation Conference, Anaheim, CA, USA, 13–18 June 2010; IEEE: Anaheim, CA, USA, 2010. [Google Scholar] [CrossRef]
  48. Horváth, I.; Rusák, Z.; Li, Y. Order beyond chaos: Introducing the notion of generation to characterize the continuously evolving implementations of cyber-physical systems. In Proceedings of the ASME 2017 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference, Cleveland, OH, USA, 6–9 August 2017; ASME: Cleveland, OH, USA, 2017. [Google Scholar] [CrossRef]
  49. Satyanarayanan, M. The Emergence of Edge Computing. IEEE Comput. 2017, 50, 30–39. [Google Scholar] [CrossRef]
  50. Jacobson, I.; Lawson, H. Software and systems. In Software Engineering in the Systems Context: Addressing Frontiers, Practice and Education; College Publications, 2015; Volume 6, Chapter 1. [Google Scholar]
  51. National Academies of Sciences, Engineering, and Medicine. A 21st Century Cyber-Physical Systems Education; The National Academies Press: Washington, DC, USA, 2016. [Google Scholar] [CrossRef]
  52. Acatech National Academy of Science and Engineering. Living in a Networked World. Integrated Research Agenda Cyber-Physical Systems (agendaCPS). 2015. Available online: http://www.cyphers.eu/sites/default/files/acatech_STUDIE_agendaCPS_eng_ANSICHT.pdf (accessed on 4 August 2018).
  53. Bainbridge, L. Ironies of automation. Automatica 1983, 19, 775–779. [Google Scholar] [CrossRef]
  54. Waymo. Waymo Safety Report: On the road to Fully Self-Driving. 2018. Available online: https://storage.googleapis.com/sdc-prod/v1/safety-report/Safety%20Report%202018.pdf (accessed on 30 August 2018).
  55. Wagner, M.; Koopman, P. A Philosophy for Developing Trust in Self-driving cars. In Road Vehicle Automation 2; Meyer, G., Beiker, S., Eds.; Lecture Notes in Mobility; Springer: Cham, Switzerland, 2015; pp. 163–171. [Google Scholar] [CrossRef]
  56. Amodei, D.; Olah, C.; Steinhardt, J.; Christiano, P.; Schulman, J.; Mané, D. Concrete Problems in AI Safety. arXiv, 2016; arXiv:1606.06565. [Google Scholar]
  57. Sheard, S.A.; Mostashari, A. A Complexity Typology for Systems Engineering. In Proceedings of the INCOSE International Symposium, Chicago, IL, USA, 12–15 July 2010. [Google Scholar] [CrossRef]
  58. Törngren, M.; Grimheden, M.E.; Gustafsson, J.; Birk, W. Strategies and considerations in shaping cyber-physical systems education. ACM SIGBED Rev. 2016, 14, 53–60. [Google Scholar] [CrossRef]
Figure 1. Conceptual view of a CPS with cyber (C), physical (P), and human (H) components arranged in multiple systems ( S i ) and a CIPS responsible for design, production, and maintenance.
Figure 2. Graphical outline of this paper.
Figure 3. The balance between unexpected, undesired effects and expected, desired effects is influenced both by complexity and by knowledge and tools.
Figure 4. Notional representation of linked technology S-curves highlighting performance gains from knowledge-driven and architecture-enabling activities.