Communication

Talking Resilience: Embedded Natural Language Cyber-Organizations by Design

Engineering Faculty, Uninettuno International Telematic University, Corso Vittorio Emanuele II, 39, 00186 Rome, Italy
* Author to whom correspondence should be addressed.
Systems 2025, 13(4), 247; https://doi.org/10.3390/systems13040247
Submission received: 28 February 2025 / Revised: 18 March 2025 / Accepted: 31 March 2025 / Published: 2 April 2025

Abstract

This communication examines the interplay between linguistic mediation and knowledge conversion in cyber-sociotechnical systems (CSTSs) via the WAx framework, which outlines various work representations and eight key conversion activities. Grounded in enactivist principles, we argue that language is a dynamic mechanism that shapes, and is shaped by, human–machine interactions, enhancing system resilience and adaptability. By integrating the concepts of simplexity, complixity, and complexity compression, we illustrate how complex cognitive and operational processes can be selectively condensed into efficient outcomes. A case study of a chatbot-based customer support system demonstrates how the phases of socialization, introspection, externalization, combination, internalization, conceptualization, reification, and influence collaboratively drive the evolution of resilient CSTS designs. Our findings indicate that natural language serves as a bridging tool for effective sense-making, adaptive coordination, and continuous learning, offering novel insights into designing technologically advanced, socially grounded, and evolving sociotechnical systems.

1. Introduction

Cognition, even in its most basic forms, cannot exist in isolation. It requires a context with which to establish a relationship and around which to structure itself. Unquestionably, the context in which the human species operates is a technological one. We are a species characterized by the generation of artifacts (norms, practices, and tools) to interact with nature, with the artifacts themselves, and with each other. The most powerful and fundamental artifact to have shaped human cognition is language, a tool deeply rooted in our nature that can both enable and inhibit our cognitive processes. From a perspective grounded in enactivism, linguistic meaning does not preexist in abstract symbols but emerges dynamically through interaction with the environment [1]. Now more than ever, the integration of novel technologies into organizational structures is profoundly transforming sociotechnical systems (STSs) and, with them, such interactions. The boundaries between human and machine agency are becoming increasingly porous, with natural language serving as a key mediating element in this transition. Far from being a simple communication tool, language actively structures the way in which human and non-human agents coordinate, adapt, and generate emergent forms of organization. Understanding these dynamics requires a conceptual framework that accounts for the role of linguistic mediation in the interplay between structured simplicity and emergent complexity within these systems [2]. In this context, the WAx framework—which highlights the interplay between multiple representations of work in cyber-sociotechnical systems (CSTSs)—offers an additional lens through which to examine how knowledge exchange and system adaptation are shaped by language-driven transitions between tacit and explicit knowledge [3].
A recent study by a multidisciplinary team applied Grounded Theory to an axiological dimension, drawing on 89 documents, to define concepts relating to the simplicity that emerges in complex systems such as STSs [4]. These concepts—namely simplexity and complixity—provide a productive lens for analyzing complex systems, one that may be particularly useful in settings where automation technologies such as collaborative robots (cobots), personal assistants, chatbots, and expert systems reshape work practices. Simplexity refers to the paradoxical process by which complex underlying interactions yield intuitively simple and efficient outcomes at the user level. Complixity, by contrast, captures the structured interactions that emerge when previously discrete systems or agents become entangled, giving rise to new functional regularities. Together, these concepts enable a multifaceted understanding of how STSs balance operational complexity with accessible, structured functionality [5]. In particular, language exemplifies this principle by allowing individuals to compress intricate ideas into manageable forms, facilitating communication while retaining conceptual depth. Moreover, linguistic products such as narratives, metaphors, and specialized terminologies serve as cognitive tools that structure human understanding beyond the veil of complexity. They can be considered conceptual artifacts in every respect: they shape interpretation by both constraining and enabling flexibility in meaning-making.
For instance, scientific and technical discourses distill highly complex concepts into structured terminologies that enable efficient knowledge transfer. Through the lens of simplexity, language can be seen as an evolutionary adaptation that optimizes the trade-off between expressive power and cognitive efficiency. In this sense, it pours new wine into the old ETTO bottle [6]. Similarly, legal language, despite its apparent rigidity, ensures interpretive stability while accommodating contextual adaptability. From this perspective, language does not merely transmit information but actively participates in shaping cognitive and social dynamics. By externalizing cognition into shared linguistic artifacts, humans reduce their cognitive load while enhancing collective intelligence. This perspective resonates with the broader framework of complex adaptive systems (CAS), which highlights the interdependence between structures and emergent properties [7]. Linguistic systems, as dynamic and self-organizing networks, embody the principles of CAS by evolving through iterative processes of selection, refinement, and innovation [8,9].
The aim of this brief communication is to explore the application of simplexity and complixity in STSs, focusing on their impact on organizational resilience and adaptability in settings where automation and AI facilitate human–machine interactions. Instead of analyzing a single empirical case, our discussion employs a general model of modern organizations to illustrate how companies integrate cobots, personal assistants, chatbots, and expert systems. The goal is to examine linguistic mediation, resilience, and emergent complexity in human–machine collaboration rather than presenting a conventional case study [10]. The following sections will first explore how natural language communication, including interactions between humans and non-human agents, contributes to systemic resilience. We will then reinterpret language through the lens of simplexity, complixity, and complexity compression, as described in recent studies on managing simplicities in complex adaptive systems [4,11]. Finally, we will conclude by synthesizing these insights and discussing their broader implications for STS design.

2. Resilience: Language as a Stabilizing Mechanism

Nowadays, in workplaces, humans and machines must work together seamlessly to keep the systems running under both expected and unexpected conditions [12,13].
Natural language is increasingly being characterized as a preferential stabilization mechanism in this human–technology interplay [14]. For example, an artificial intelligence agent capable of asking for clarification or providing status updates in natural language can interface with other expert systems (human and non-human), consult ontologies, or connect to the internet, and can thus handle scenarios beyond its original programming rather than failing rigidly [15]. Communication with human operators is key to deploying robots in a wider range of scenarios, extending beyond those initially anticipated during the programming phase [16,17]. Rich communication between different subsystems is a prerequisite for effective and efficient coordination. A high signal-to-noise ratio improves feedback loops between parts and the learning process. Clear communication channels help to convey timely warnings, explain changes, and prevent small errors from becoming serious, thereby facilitating adaptation. In a modern CSTS, one-way commands must give way to interactive discussion, and only the natural language format can guarantee a negotiated alignment of understanding among all agents, be they human, organizational, or technological [18].
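As a purely illustrative sketch (not drawn from the systems cited above), the following Python fragment shows how a language-enabled agent might degrade gracefully: when its confidence in an interpretation is low, it asks a clarifying question instead of failing rigidly. The intent names, the keyword matcher, and the confidence threshold are hypothetical placeholders for a real natural language understanding component.

```python
# Minimal sketch, assuming hypothetical intents and a toy matcher: a language-enabled
# agent that asks for clarification instead of failing rigidly on unfamiliar requests.
from dataclasses import dataclass

@dataclass
class IntentMatch:
    intent: str
    confidence: float

KNOWN_INTENTS = {
    "reset_password": ["reset", "password", "forgot"],
    "order_status":   ["order", "status", "delivery", "tracking"],
}

def classify(utterance: str) -> IntentMatch:
    """Toy keyword matcher standing in for a real NLU component."""
    tokens = utterance.lower().split()
    best_intent, best_score = "unknown", 0.0
    for intent, keywords in KNOWN_INTENTS.items():
        score = sum(tok in keywords for tok in tokens) / max(len(keywords), 1)
        if score > best_score:
            best_intent, best_score = intent, score
    return IntentMatch(best_intent, best_score)

def respond(utterance: str, threshold: float = 0.4) -> str:
    match = classify(utterance)
    if match.confidence >= threshold:
        return f"Handling request '{match.intent}' (confidence {match.confidence:.2f})."
    # Graceful degradation: negotiate meaning instead of raising an error.
    return ("I am not sure I understood. Are you asking about your password "
            "or about an existing order?")

if __name__ == "__main__":
    print(respond("I forgot my password"))
    print(respond("the thing is not working"))
```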
Whether interpreted as a primary communicative artifact or as a complex adaptive subsystem, it is undeniable that natural language is at the root of the four pillar abilities of resilience: responding, monitoring, anticipating, and learning [19,20,21]. Effective sense-making, coordination, and knowledge transfer in CSTSs underlie each ability and are therefore inherently dependent on language and communication [22,23,24]. The AI assistant that explains a sudden change in data (monitoring variability in data), the customer service bot that calmly handles a flood of queries (responding promptly at a scale impossible for humans), the digital assistant that asks for help when it encounters an ambiguity (anticipating a potential problem), the expert system that collects data and updates its ontology (learning)—all leverage language to absorb shocks and maintain core functions. Thus, neglecting the linguistic and communicative underpinnings of these abilities has limited resilience engineering by overlooking how information flow and semantic alignment contribute to resilient performance.
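The following toy example, offered only as an assumption-laden illustration, suggests how the four resilience potentials might each surface as natural-language messages emitted by a monitoring component; the latency metric, thresholds, and message wording are not taken from any system discussed in this paper.

```python
# Illustrative sketch only: the four resilience potentials (respond, monitor,
# anticipate, learn) expressed as plain-language messages. Metric and thresholds
# are assumptions invented for the example.
from statistics import mean

class LanguageEnabledMonitor:
    def __init__(self, window: int = 5):
        self.window = window
        self.history = []

    def observe(self, latency_ms: float):
        messages = []
        self.history.append(latency_ms)
        recent = self.history[-self.window:]
        baseline = mean(self.history)

        # Monitoring: explain variability in plain language.
        if latency_ms > 1.5 * baseline:
            messages.append(f"Latency rose to {latency_ms:.0f} ms, "
                            f"about {latency_ms / baseline:.1f}x the running average.")
        # Anticipating: flag a trend before it becomes a failure.
        if len(recent) == self.window and all(a < b for a, b in zip(recent, recent[1:])):
            messages.append("Latency has increased for five consecutive checks; "
                            "consider shedding load before queues saturate.")
        # Responding: propose an action, phrased for a human operator.
        if latency_ms > 2000:
            messages.append("Switching replies to the short-form template to keep "
                            "response times acceptable.")
        # Learning: record the episode so thresholds can be revised later.
        if messages:
            messages.append("Episode logged for post-incident review.")
        return messages
```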
Recognizing language as a primary enabling factor heals the rift that has developed between traditional resilience frameworks and the systemic perspective of control theory [25]. Effective control in complex systems implies the need for communication between components. In other words, treating language and communication as fundamental integrative mechanisms links Hollnagel’s adaptive potential to Leveson’s emphasis on communication and control, providing a more solid foundation for engineering resilience.
“Resilience can be defined as the intrinsic ability of a system to adjust its functioning prior to, during, or following events (changes, disturbances, and opportunities), and thereby sustain required operations under both expected and unexpected conditions. This definition requires possessing the potentials to respond, monitor, anticipate, and learn”.
[26]
“Intersubjectivity is the defining property of communication… Shared (social) sensemaking creates and nourishes common awareness and understanding of the ‘operating point’, and in so doing facilitates coordination and safer performance. This is an essential condition for the emergence of safety and resilience… In this way, dialogic sensemaking provides a resource for resilience, by enabling a shared awareness of “the sense of the event” (phronesis) and a collective response to the actual and potential”.
[27]
“Thus, approaches to safety, like resilience engineering, must be based on accounts of work-as-done to afford a dialogue for learning. […] Control in open systems (those that have inputs and outputs from their environment) implies the need for communication […]”.
[28]
A common thread emerges: language acts as a stabilizing force in CSTSs, allowing them to bend without breaking. Whether it is an office worker chatting with an AI assistant or an engineer giving instructions to a robot, natural language communication creates a feedback loop that keeps the system on track. When unforeseen situations arise, linguistic interaction allows for rapid sense-making and response. In essence, humans and machines can talk through the problem and make adjustments in real time. This dynamic yields resilience.
It is important to emphasize that, despite current limitations, the use of language as an interface makes technology more accessible and transparent. You do not need to be an engineer to intervene; you can use conversation, the most natural tool we have, to control and understand complex systems. This lowers barriers and builds trust, which in turn improves the resilience of the sociotechnical whole. Operators are more likely to stay engaged and take the initiative when they can easily communicate with the system. In effect, language transforms a collection of humans and machines into a more cohesive community of agents, each aware (to some degree) of the states and goals of the others. As one study noted, even the way we describe a system influences how well we can manage it—“Complexity depends not only on the object but also on the language used to describe that object”, and having the right vocabulary leads to a better understanding of systems as self-adaptive [4]. By developing richer, more intuitive languages through which human and non-human agents can interact, we effectively enhance the system’s ability to understand itself and adjust. This shared awareness and adaptability are the essence of resilience. Looking ahead, these trends are likely to deepen. We are seeing rapid advances in natural language processing and AI—large language models, for example, are being integrated with IoT networks to make them more “intelligent and responsive” via conversational interfaces [29]. Language is a control mechanism: a way to steer STSs through complexity by continually aligning perspectives and actions. Embracing this insight, and extending it with future innovations, means our increasingly digital societies can remain stable and adaptable—anchored by the simple, profound power of communication. Moreover, studies in high-stakes domains find that AI systems must communicate information naturally, effectively, and efficiently to avoid overloading human partners [30]. By presenting information in an intelligible way (e.g., an alert message or spoken explanation), a language-enabled system can reduce cognitive strain on humans and maintain trust, ensuring that people remain engaged and responsive.
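To make the idea of conversational steering concrete, the sketch below imitates, in a deliberately toy form, an interface that maps natural-language requests onto device actions and reports back in plain language; the device registry, keyword matching, and confirmation phrases are hypothetical stand-ins for a language model coupled to a real IoT API.

```python
# Toy sketch, not a real LLM or IoT integration: a conversational layer that turns
# plain-language requests into device actions and answers in plain language. The
# device names, phrases, and matching rules are assumptions for illustration only.
DEVICES = {"thermostat": 21.0, "hallway light": False}

def handle(utterance: str) -> str:
    text = utterance.lower()
    if "temperature" in text or "thermostat" in text or "warmer" in text:
        if "raise" in text or "warmer" in text:
            DEVICES["thermostat"] += 1.0
            return f"Okay, the thermostat is now set to {DEVICES['thermostat']:.0f} °C."
        return f"The thermostat is currently set to {DEVICES['thermostat']:.0f} °C."
    if "light" in text:
        DEVICES["hallway light"] = "on" in text
        state = "on" if DEVICES["hallway light"] else "off"
        return f"The hallway light is now {state}."
    # Negotiated alignment instead of silent failure: report what was not understood.
    return "I did not recognise a device in that request; could you rephrase it?"

if __name__ == "__main__":
    print(handle("Could you make it a bit warmer in here?"))
    print(handle("Turn on the hallway light, please"))
    print(handle("Open the garage"))
```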
Language might function as a coordinating feedback loop that holds adaptive, coordinated, and robust behavior together—a feedback loop of shared understanding that stabilizes the overall system.

3. Language: Compressed Complexity

The previous section highlighted how natural language underpins system resilience by enabling robust communication, prompt adaptive responses, and coordinated actions across diverse agents. In the following section, we shift our focus to another pivotal role of natural language: its capacity to condense complex cognitive and operational processes. By streamlining multifaceted information, language reduces cognitive load and fosters clarity, thereby enhancing the efficiency of interactions within CSTSs. This transition underscores that, in addition to stabilizing system functions, natural language is integral to managing and simplifying inherent complexities.
Through the use of specialized terminology, metaphorical expressions, and structured narratives, large amounts of information are encapsulated in concise symbolic forms [31,32]. In fields such as medicine or law, for example, a few well-chosen words can represent entire conceptual frameworks, allowing experts to communicate efficiently without resorting to exhaustive explanations. This phenomenon is not just a matter of linguistic brevity; it reflects an underlying cognitive economy. Because our interactions with the environment require rapid and accurate decision-making, language compresses sensory, experiential, and contextual data into manageable units, thereby reducing cognitive load [32,33,34,35]. This compression is consistent with enactivist principles, which posit that cognition emerges through continuous interaction with the environment; in this context, language both shapes and is shaped by these interactions [36]. It acts as a vital channel for the extraction and transmission of relevant information.
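A toy illustration of this compression, under assumptions of our own (the glossary entries and the word-count metric are invented for the example), is sketched below: a specialized term replaces an entire expanded description, shrinking the message while preserving its reference.

```python
# Illustrative sketch of complexity compression in language: a domain term stands in
# for an expanded description. Glossary and metric are hypothetical.
GLOSSARY = {
    "myocardial infarction": (
        "death of heart muscle tissue caused by a prolonged interruption of its blood supply"
    ),
    "force majeure": (
        "an unforeseeable external event that prevents a party from fulfilling contractual obligations"
    ),
}

def compress(message: str) -> str:
    """Replace expanded descriptions with the compact domain term."""
    for term, expansion in GLOSSARY.items():
        message = message.replace(expansion, term)
    return message

def token_count(text: str) -> int:
    return len(text.split())

if __name__ == "__main__":
    expanded = ("The patient suffered death of heart muscle tissue caused by a prolonged "
                "interruption of its blood supply during the night shift.")
    compact = compress(expanded)
    ratio = token_count(compact) / token_count(expanded)
    print(compact)
    print(f"Compression ratio: {ratio:.2f} (tokens retained)")
```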
Completing this process is the ability of language to generate seemingly simple and solid patterns from intrinsically complex systems. The concept of simplexity emphasizes that simplicity in language is not a reduction in information per se but rather an adaptive condensation that preserves essential meaning while making it accessible and operationally effective [37]. It is this emergent simplicity that allows individuals to quickly comprehend context-sensitive meaning, facilitating coordination and collective action in dynamic environments. At the same time, complixity captures the way in which new organized structures emerge when disparate linguistic and cognitive elements interact [11,38]. This concept emphasizes that when previously isolated systems—such as different dialects, cultural linguistic practices, or even human–machine communication protocols—are intertwined, new forms of order can emerge. The development of creoles in multilingual contact zones, for example, illustrates complixity: different linguistic inputs merge and simplify into a cohesive yet innovative communication system. Similarly, in the field of digital communication, the integration of natural language processing with human–machine interfaces has led to the emergence of conversational norms that differ from traditional language use. These hybrid forms, which incorporate elements of formal language, colloquial expressions, and even visual symbols such as emojis, represent a new type of linguistic order. The convergence of heterogeneous systems can foster unexpected simplicity that improves usability and adaptability, bridging the gap between rigid structures and flexible, emergent behaviors.
Collectively, these three interrelated phenomena—complexity compression, simplexity, and complixity—reveal the adaptive power of language. By compressing complex information, language provides the cognitive scaffolding necessary for effective communication [39,40]. Through simplexity, it produces stable and accessible models that support everyday interactions and the collective creation of meaning. Finally, through complixity, it enables the synthesis of diverse inputs into new and coherent structures that drive the evolution of STSs. In this way, language emerges as a fundamental and adaptive mechanism at the basis of our ability to navigate and thrive in a world characterized by both uncertainty and rapid change.

4. Discussion: Rethinking CSTS Design

The WAx (Work-As-x) framework offers a structured lens through which to examine how knowledge is shared, transformed, and applied in CSTSs [3]. This framework identifies multiple “varieties of work” and clarifies the interplay between tacit and explicit knowledge, while providing a guide to eight foundational knowledge conversion activities that might foster adaptability and meaningful innovation. In parallel, the previous sections of this article invite a reframing of how system architects can capitalize on emergent simplicities arising from complex processes. Connecting these two strands—WAx’s knowledge-centric perspective and the insight that complexity can be selectively “compressed” into manageable forms—can illuminate how CSTS solutions might be conceived more intelligently.
In the WAx framework, the eight foundational knowledge conversion activities—socialization, introspection, externalization, combination, internalization, conceptualization, reification, and influence—contribute to the cyclical process of knowledge creation and transformation (Table 1).
At first glance, these conversions appear to be separate phases in a linear sequence, yet their essence is iterative and recursive. A group of blunt-end operators typically deals with high-level strategy, resources, and organizational priorities, while sharp-end operators apply knowledge directly in the field. Their interplay is often intricate, mediated by multiple knowledge artifacts and shaped by ongoing negotiations.
The development of a chatbot-based customer support system offers a concrete illustration of how WAx knowledge conversions unfold in tandem with the harnessing of simplexity. The system’s design commences with socialization, in which domain specialists, software engineers, and representatives from the user community share tacit knowledge. The domain specialists contribute to strategic goals such as improving user satisfaction and compliance with industry regulations. In parallel, the chatbot developers bring anecdotal insights about user preferences or the limitations of current technical architecture. Informal discussions reveal potential tensions: domain experts emphasize the need for thoroughness, while developers advocate rapid iteration. Simplexity emerges naturally during these conversations when the team collectively zeroes in on just a few key user intents, compressing the complexity of infinite linguistic possibilities into a curated set that lays the groundwork for subsequent refinement.
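Purely for illustration, the socialization outcome described above can be imagined as a small, curated intent set; the intent names and utterances below are hypothetical.

```python
# Illustrative sketch only: the socialization phase imagined as a curated intent set.
# An open-ended space of possible requests is compressed into a small, agreed-upon
# starting vocabulary for the chatbot; everything else is routed to a human agent.
CURATED_INTENTS = {
    "check_order_status": [
        "where is my order",
        "has my parcel shipped yet",
    ],
    "reset_password": [
        "I can't log in",
        "forgot my password",
    ],
    "billing_question": [
        "why was I charged twice",
        "I need a copy of my invoice",
    ],
}

# A deliberate simplexity choice: trade coverage for clarity in the first release.
FALLBACK_INTENT = "handover_to_human"
```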
Introspection follows as each participant privately reflects on the ideas gained during socialization. A software engineer might reconsider perceived obstacles in natural language processing, mentally refining the previously shared ideas into a personal schema for interface design. The resulting clarity exemplifies a form of complexity compression as scattered concepts coalesce into an individual’s simpler mental template, simultaneously retaining the complexity required for high-level decision-making. Here, the principle of simplexity manifests as a self-imposed filtering process in which non-essential details are set aside in favor of capturing the patterns that significantly affect user experience.
Externalization reintroduces these distilled insights into the collective environment. The engineer translates a personal schema into design diagrams and formal specifications, making tacit ideas explicit and accessible. This step not only propels the larger team forward but also reveals how the original complexity has been partially tamed. Written documentation, diagrams, and other explicit artifacts illustrate how certain complexities—such as variations in user vocabulary—can be “flattened” into a manageable rule set. Adding synergy from complementary disciplines exemplifies complixity since the explicit knowledge merges prior user research, corporate brand guidelines, and universal usability standards. The result is a set of design documents that condense multiple contexts into a consistent artifact.
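As an illustrative assumption (the field names are invented), the externalized artifact might resemble the following explicit specification, in which vocabulary variation is flattened into synonym rules and constraints from different sources sit side by side.

```python
# Minimal sketch, assuming hypothetical field names: the engineer's tacit schema
# written out as an explicit, shareable design specification.
DESIGN_SPEC = {
    "intent": "check_order_status",
    "synonym_rules": {                     # vocabulary variation flattened into a rule set
        "order": ["order", "purchase", "parcel", "package"],
        "status": ["status", "update", "tracking", "where is"],
    },
    "tone": "friendly, concise",           # from corporate brand guidelines
    "max_response_sentences": 3,           # from usability standards
    "escalation_rule": "offer human handover after 2 failed clarifications",
}
```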
Combination proceeds by uniting these newly externalized documents with other explicit assets, including codes of practice, normative guidelines, and previous design handbooks. The emerging single repository retains the necessary depth to reflect actual user challenges but presents it in a streamlined format. While the knowledge might appear simpler, it is the product of many underlying complexities selectively incorporated or sidelined. These bridging efforts can further give rise to complixity, particularly when standard operating procedures, user experience heuristics, and domain-specific regulations intersect. This synergy can be observed when the combined artifact, for example, references accessibility best practices that were originally developed in another domain. Through an adept integration of external material, the system’s design acquires new capabilities, reflecting how crossing contexts consolidates knowledge into novel forms of simplification.
Internalization occurs as the project team, especially the software developers, absorbs the official guidelines. At this point, they treat the single repository not as a voluminous document replete with disclaimers but as an implicit set of reference points embedded in their daily workflow. Awareness of user demands regarding specialized technical jargon, along with constraints imposed by corporate legal guidelines, shifts from explicit references to ingrained practices. This transition maps exactly onto the principle of complexity compression: initial high-level detail is condensed into cognitively manageable heuristics, thus allowing developers to implement design features without continuous recourse to documentation. Just as the WAx model posits that internalization results in newly formed tacit knowledge, the practical system design implicitly reflects and sustains that knowledge.
Subsequently, conceptualization arises from observing real interactions with the chatbot in a semi-public testing environment. Log data reveal that users pose ambiguous queries when uncertain about technical terms, prompting system confusion or digressions. Developers and domain experts interpret this phenomenon, forming new tacit insights about the obscurity of user language. The cyclical nature of WAx surfaces again as the domain knowledge reinvigorates individual mental models. Complexity that was initially collapsed into a handful of user intents may need partial re-expansion but doing so fosters new selective filters. At this stage, the synergy of multiple contexts—exploring, for instance, how advanced users differ from novices—underscores the power of complixity. Solutions that guide novice users toward simpler dialogues, while providing advanced individuals the option for more in-depth inquiries, illustrate the combined effect of emergent simplifications across distinct user populations.
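The kind of log analysis that could feed such conceptualization is sketched below in a deliberately simplified form; the log fields, user segments, and fallback marker are assumptions rather than elements of the described system.

```python
# Illustrative sketch only: conceptualization imagined as a simple chat-log analysis.
# A gap in fallback rates between segments suggests guiding novices toward simpler
# dialogues while leaving an expert path open.
from collections import Counter

logs = [
    {"user_segment": "novice",   "fallback_triggered": True},
    {"user_segment": "novice",   "fallback_triggered": True},
    {"user_segment": "novice",   "fallback_triggered": False},
    {"user_segment": "advanced", "fallback_triggered": False},
    {"user_segment": "advanced", "fallback_triggered": True},
]

fallbacks = Counter(entry["user_segment"] for entry in logs if entry["fallback_triggered"])
totals = Counter(entry["user_segment"] for entry in logs)

for segment in totals:
    rate = fallbacks[segment] / totals[segment]
    print(f"{segment}: fallback rate {rate:.0%}")
```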
Reification then translates these refined insights back into explicit changes in the chatbot’s architecture and conversation flows. The inclusion of fallback mechanisms or a specialized query rephrasing process exemplifies the detour principle from simplexity theory. What appears at first to add layers—extra code, new user interface components—ultimately simplifies the user’s experience, shifting hidden complexities to an optimized operational backend. Influence, finally, operates more subtly. The presence of domain preconceptions and developer biases consistently shape how knowledge is elicited, disclosed, and integrated. Knowledge externalized under the impetus of one participant’s mental model can be selectively reframed or reorganized by another participant, fostering additional cycles of simplification and synergy.
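A minimal sketch of the detour principle mentioned above follows; the synonym table, the FAQ lookup, and the handover message are hypothetical components introduced only to show how extra backstage machinery can simplify the front-stage experience.

```python
# Minimal sketch of the "detour" described above: when the first interpretation
# attempt fails, the query is rephrased against a hypothetical synonym table and
# retried before falling back to a human handover.
SYNONYMS = {"parcel": "order", "package": "order", "charged": "billing"}

FAQ = {
    "order": "You can track your order from the 'My orders' page.",
    "billing": "Billing questions are answered within one business day.",
}

def lookup_answer(query: str):
    """Return a canned answer if the query mentions a known topic keyword."""
    return next((answer for topic, answer in FAQ.items() if topic in query), None)

def rephrase(query: str) -> str:
    """Map colloquial wording onto the vocabulary the backend understands."""
    tokens = (tok.strip("?.,!") for tok in query.lower().split())
    return " ".join(SYNONYMS.get(tok, tok) for tok in tokens)

def respond(query: str) -> str:
    direct = lookup_answer(query.lower())
    if direct:
        return direct
    retried = lookup_answer(rephrase(query))  # the detour: extra machinery backstage...
    if retried:
        return retried                        # ...yields a simpler experience up front
    return "Let me connect you with a colleague who can help."

if __name__ == "__main__":
    print(respond("Why was I charged twice?"))
    print(respond("Where is my parcel?"))
```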
By weaving WAx’s knowledge conversions with simplexity, complixity, and complexity compression, and harnessing natural language as a bridging tool, CSTS design becomes adaptive and synergy-rich. Problem-solving is streamlined, bridging distinct contexts fosters emergent solutions, and teams build agile, socially grounded systems that continuously evolve without crippling complexity, thereby remaining resilient.
This brief communication provides a focused exploration of natural language’s transformative role in contemporary CSTSs. Key messages include the following:
  • Natural language as the preferred medium: In today’s CSTSs, natural language is emerging as the primary medium for facilitating effective communication and coordination among diverse agents;
  • Complexity’s simplicities: The concepts of simplexity and complixity offer a novel lens to understand how complex interactions can be distilled into simpler, operationally efficient outcomes, providing a critical theoretical basis for future studies. Moreover, natural language plays a key role in compressing multifaceted cognitive and operational processes, reducing cognitive load and enabling rapid sense-making;
  • Resilience engineering and natural language: Natural language, as a CAS itself, represents a functional substructure of CSTSs that is integral to resilience engineering, since it supports the pillar abilities of resilience: responding, monitoring, anticipating, and learning;
  • Reconnecting resilience engineering and control theory perspectives: This brief communication helps reconcile the approaches of Hollnagel and Leveson—currently seen as divergent—by emphasizing the role of natural language as a control mechanism in every system’s activities, fostering resilient performance.
Given these insights, companies should integrate adaptive linguistic systems within their broader infrastructures to enhance resilience, while policymakers are urged to establish robust regulatory frameworks and ethical guidelines that ensure transparency and build user trust in AI-mediated communication. Chatbot developers, in turn, are encouraged to implement real-time linguistic feedback mechanisms that allow for dynamic adjustments across diverse conversational contexts.

5. Limitations and Future Developments

This study offers a conceptual exploration of how natural language contributes to system resilience and adaptive coordination in CSTSs. However, this paper does not include a comprehensive theoretical framework or formal model as it is intended as a brief communication rather than a full research article. The discussion draws on evidence from neurolinguistics, primatology, and STSs. However, given the concise format of a brief communication, the selected examples and qualitative insights do not cover the entire spectrum of interactions and complexities found in real-world systems. While the analysis benefits from an extensive interdisciplinary literature—reflected in the 40 references—it does not aim to provide comprehensive quantitative data or exhaustive empirical validation. Future research should aim to operationalize the key concepts introduced here and test them through systematic empirical studies across diverse contexts.
When applying these ideas to chatbots, several challenges emerge that require further investigation. One major obstacle is the current limitation of natural language processing capabilities. Many advanced chatbot systems are proficient at handling scripted interactions, yet they struggle with nuanced language and complex conversational contexts. This often results in difficulties when a chatbot is required to perform deep, context-sensitive knowledge conversion—an essential function for supporting resilience within CSTSs. Such limitations may reduce a chatbot’s ability to bridge tacit and explicit knowledge effectively. Moreover, the role of linguistic mediation in enhancing chatbot adaptability is still underexplored. For example, varying the complexity of language in chatbot responses might influence user trust and cognitive load, yet the optimal balance for maintaining clarity while delivering detailed information is not well understood. In addition, the ethical and cognitive implications of deploying chatbots in knowledge-intensive settings demand further study. Concerns include the risk of over-reliance on automated systems, potential biases in information delivery, and the long-term effects on user cognition and decision-making. Future research should focus on developing real-time adaptive learning mechanisms that enable chatbots to adjust to novel situations, optimizing the language complexity of interactions to improve trust and efficiency, and establishing robust evaluation metrics to measure their impact on system resilience and knowledge transfer. Addressing these challenges and questions will pave the way for more effective, resilient, and ethically sound integration of chatbots within cyber-sociotechnical systems.
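One possible, entirely hypothetical prototype of such an adaptive mechanism is sketched below: a crude readability proxy (average words per sentence) flags replies that may be too dense for a user who prefers simpler language. The threshold and the metric are assumptions, not validated measures.

```python
# Illustrative sketch only: adapting response complexity with a crude readability
# proxy. In a real system the flagged reply would be rewritten; here it is annotated.
import re

def avg_sentence_length(text: str) -> float:
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    if not sentences:
        return 0.0
    return sum(len(s.split()) for s in sentences) / len(sentences)

def adapt_reply(reply: str, user_prefers_simple: bool, threshold: float = 18.0) -> str:
    """Return the reply as-is, or flag it for simplification if it is likely too dense."""
    if user_prefers_simple and avg_sentence_length(reply) > threshold:
        return "[simplify] " + reply
    return reply

if __name__ == "__main__":
    dense = ("Your request has been escalated to the second-level support queue because the "
             "diagnostic routine detected an inconsistency between the invoice identifier and "
             "the order record stored in the fulfilment database.")
    print(adapt_reply(dense, user_prefers_simple=True))
```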

Author Contributions

Conceptualization, A.T., A.F. and E.R.; methodology, A.T. and A.F.; software, A.T. and A.F.; validation, A.T., A.F. and E.R.; formal analysis, A.T. and A.F.; investigation, A.T. and A.F.; resources, A.T. and A.F.; data curation, A.T. and A.F.; writing—original draft preparation, A.T. and A.F.; writing—review and editing, A.T. and A.F.; visualization, A.T. and A.F.; supervision, A.T. and A.F.; project administration, E.R., A.T. and A.F. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data are contained within the article.

Acknowledgments

Warm thanks are extended to Maria Amata Garito, the Uninettuno coordinator of this research project. Special thanks to the editor and the reviewers for their suggestions, which significantly enhanced the work.

Conflicts of Interest

The authors declare no conflicts of interest. The funders had no role in the design of this study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

Abbreviations

The following abbreviations are used in this manuscript:
AI: Artificial Intelligence
CAS: Complex adaptive system
CSTS: Cyber-sociotechnical system
ETTO: Efficiency–Thoroughness Trade-Off
IoT: Internet of Things
STS: Sociotechnical system
WAx: Work-As-x

References

  1. Varela, F.J.; Thompson, E.; Rosch, E.; Kabat-Zinn, J. The Embodied Mind; MIT Press: Cambridge, MA, USA, 2017.
  2. Kretzschmar, W.A. Language and Complex Systems; Cambridge University Press: Cambridge, UK, 2015; 230p.
  3. Patriarca, R.; Falegnami, A.; Costantino, F.; Di Gravio, G.; De Nicola, A.; Villani, M.L. WAx: An Integrated Conceptual Framework for the Analysis of Cyber-Socio-Technical Systems. Saf. Sci. 2021, 136, 105142.
  4. Falegnami, A.; Tomassi, A.; Gunella, C.; Amalfitano, S.; Corbelli, G.; Armonaite, K.; Fornaro, C.; Giorgi, L.; Pollini, A.; Caforio, A.; et al. Defining Conceptual Artefacts to Manage and Design Simplicities in Complex Adaptive Systems. Heliyon 2024, 10, e41033.
  5. Lindemann, U.; Maurer, M.; Braun, T. The Challenge of Complexity. In Structural Complexity Management; Springer: Berlin/Heidelberg, Germany, 2009; pp. 1–20. ISBN 978-3-540-87888-9.
  6. Hollnagel, E. The ETTO Principle: Efficiency-Thoroughness Trade-Off: Why Things That Go Right Sometimes Go Wrong, 1st ed.; CRC Press: Boca Raton, FL, USA, 2017; ISBN 978-1-315-61624-7.
  7. Chernyshova, E.; Piccoli, V.; Ursi, B. Multimodal Conversational Routines: Talk-in-Interaction through the Prism of Complexity. In Language Is a Complex Adaptive System: Explorations and Evidence; Language Science Press: Berlin, Germany, 2022; ISBN 978-3-96110-345-4.
  8. The “Five Graces Group”; Beckner, C.; Blythe, R.; Bybee, J.; Christiansen, M.H.; Croft, W.; Ellis, N.C.; Holland, J.; Ke, J.; Larsen-Freeman, D.; et al. Language Is a Complex Adaptive System: Position Paper. Lang. Learn. 2009, 59, 1–26.
  9. Lund, K.; Basso Fossali, P.; Mazur, A.; Ollagnier-Beldame, M. (Eds.) Language Is a Complex Adaptive System: Explorations and Evidence; Language Science Press: Berlin, Germany, 2022; ISBN 978-3-96110-345-4.
  10. Pollini, A.; Giacobone, G.A.; Zannoni, M.; Pucci, D.; Vignali, V.; Falegnami, A.; Tomassi, A.; Romano, E. Human-Machine Interaction Design in Adaptive Automation. Procedia Comput. Sci. 2025, 253, 1034–1044.
  11. Falegnami, A.; Tomassi, A.; Corbelli, G.; Romano, E. Managing Complexity in Socio-Technical Systems by Mimicking Emergent Simplicities in Nature: A Brief Communication. Biomimetics 2024, 9, 322.
  12. Dhungana, D.; Haselböck, A.; Schmidbauer, C.; Taupe, R.; Wallner, S. Enabling Resilient Production Through Adaptive Human-Machine Task Sharing. In Towards Sustainable Customization: Bridging Smart Products and Manufacturing Systems; Andersen, A.-L., Andersen, R., Brunoe, T.D., Larsen, M.S.S., Nielsen, K., Napoleone, A., Kjeldgaard, S., Eds.; Lecture Notes in Mechanical Engineering; Springer International Publishing: Cham, Switzerland, 2022; pp. 198–206. ISBN 978-3-030-90699-3.
  13. Romero, D.; Stahre, J. Towards the Resilient Operator 5.0: The Future of Work in Smart Resilient Manufacturing Systems. Procedia CIRP 2021, 104, 1089–1094.
  14. Galinier, F.; Bruel, J.-M.; Ebersold, S.; Meyer, B. Seamless Integration of Multirequirements in Complex Systems. In Proceedings of the 2017 IEEE 25th International Requirements Engineering Conference Workshops (REW), Lisbon, Portugal, 4–8 September 2017; pp. 21–25.
  15. Hashimoto, S. KANSEI Robotics to Open a New Epoch of Human-Machine Relationship—Machine with a Heart. In Proceedings of the ROMAN 2006—The 15th IEEE International Symposium on Robot and Human Interactive Communication, Hatfield, UK, 6–8 September 2006; p. 1.
  16. Matsumoto, N.; Fujii, H.; Okada, M. Minimal Design for Human–Agent Communication. Artif. Life Robot. 2006, 10, 49–54.
  17. Ferrari, D.; Benzi, F.; Secchi, C. Bidirectional Communication Control for Human-Robot Collaboration. In Proceedings of the 2022 International Conference on Robotics and Automation (ICRA), Philadelphia, PA, USA, 23–27 May 2022; pp. 7430–7436.
  18. Souza, V.E.S. A Requirements-Based Approach for the Design of Adaptive Systems. In Proceedings of the 2012 34th International Conference on Software Engineering (ICSE), Zurich, Switzerland, 2–9 June 2012; pp. 1635–1637.
  19. Hollnagel, E. Systemic Potentials for Resilient Performance. In Resilience in a Digital Age: Global Challenges in Organisations and Society; Springer: Cham, Switzerland, 2022.
  20. Hollnagel, E. Safety-II in Practice: Developing the Resilience Potentials; Routledge: Abingdon, UK, 2017; ISBN 978-1-351-78076-6.
  21. Hollnagel, E. The Four Cornerstones of Resilience Engineering. In Resilience Engineering Perspectives; CRC Press: Boca Raton, FL, USA, 2009; Volume 2, ISBN 978-1-315-24438-9.
  22. Cantelmi, R.; Gravio, G.D.; Patriarca, R. Reviewing Qualitative Research Approaches in the Context of Critical Infrastructure Resilience. Environ. Syst. Decis. 2021, 41, 341–376.
  23. Hutchison, D.; Pezaros, D.; Rak, J.; Smith, P. On the Importance of Resilience Engineering for Networked Systems in a Changing World. IEEE Commun. Mag. 2023, 61, 200–206.
  24. Grimm, D.A.P.; Gorman, J.C.; Cooke, N.J.; Demir, M.; McNeese, N.J. Dynamical Measurement of Team Resilience. J. Cogn. Eng. Decis. Mak. 2023, 17, 351–382.
  25. Ham, D.-H. Safety-II and Resilience Engineering in a Nutshell: An Introductory Guide to Their Concepts and Methods. Saf. Health Work 2021, 12, 10–19.
  26. Hollnagel, E.; Nemeth, C.P. From Resilience Engineering to Resilient Performance. In Advancing Resilient Performance; Springer: Cham, Switzerland, 2022; pp. 1–9.
  27. Kilskar, S.S.; Danielsen, B.-E.; Johnsen, S.O. Sensemaking and Resilience in Safety-Critical Situations: A Literature Review. In Safety and Reliability—Safe Societies in a Changing World; CRC Press: Boca Raton, FL, USA, 2018; ISBN 978-1-351-17466-4.
  28. Leveson, N.G. Engineering a Safer World: Systems Thinking Applied to Safety; The MIT Press: Cambridge, MA, USA; London, UK, 2011; ISBN 978-0-262-01662-9.
  29. Zong, M.; Hekmati, A.; Guastalla, M.; Li, Y.; Krishnamachari, B. Integrating Large Language Models with Internet of Things: Applications. Discov. Internet Things 2025, 5, 2.
  30. De Melo, C.M.; Kim, K.; Norouzi, N.; Bruder, G.; Welch, G. Reducing Cognitive Load and Improving Warfighter Problem Solving with Intelligent Virtual Assistants. Front. Psychol. 2020, 11, 554706.
  31. Romanenko, E.; Kutz, O.; Calvanese, D.; Guizzardi, G. Towards Semantics for Abstractions in Ontology-Driven Conceptual Modeling. In Advances in Conceptual Modeling; Sales, T.P., Araújo, J., Borbinha, J., Guizzardi, G., Eds.; Lecture Notes in Computer Science; Springer Nature: Cham, Switzerland, 2023; Volume 14319, pp. 199–209. ISBN 978-3-031-47111-7.
  32. Ben-Oren, Y.; Hovers, E.; Kolodny, O.; Creanza, N. Cultural Innovation Is Not Only a Product of Cognition but Also of Cultural Context. Behav. Brain Sci. 2025, 48, e4.
  33. Hobbs, J.R.; Mulkar-Mehta, R. Toward a Formal Theory of Information Structure. In Evolution of Semantic Systems; Küppers, B.-O., Hahn, U., Artmann, S., Eds.; Springer: Berlin/Heidelberg, Germany, 2013; pp. 101–126. ISBN 978-3-642-34996-6.
  34. Kocatepe, M. Reconceptualising the Notion of Finding Information: How Undergraduate Students Construct Information as They Read-to-Write in an Academic Writing Class. J. Engl. Acad. Purp. 2021, 54, 101042.
  35. Oakes, L.; Cashon, C.; Casasola, M.; Rakison, D. Emerging Competence with Symbolic Artifacts: Implications for the Study of Categorization and Concept Development. In Infant Perception and Cognition; Oxford University Press: Oxford, UK, 2010; ISBN 978-0-19-536670-9.
  36. Cowley, S.J.; Gahrn-Andersen, R. Simplexity, Languages and Human Languaging. Lang. Sci. 2019, 71, 4–7.
  37. Cowley, S.J.; Gahrn-Andersen, R. Simplexifying: Harnessing the Power of Enlanguaged Cognition. Chin. Semiot. Stud. 2022, 18, 97–119.
  38. Gahrn-Andersen, R.; Cowley, S.J. Semiosis and Bio-Mechanism: Towards Consilience. Biosemiotics 2018, 11, 405–425.
  39. Giorgi, F.; Bruni, L.E. Developmental Scaffolding. Biosemiotics 2015, 8, 173–189.
  40. Penn, D.C.; Holyoak, K.J.; Povinelli, D.J. Darwin’s Mistake: Explaining the Discontinuity between Human and Nonhuman Minds. Behav. Brain Sci. 2008, 31, 109–130.
Table 1. Summary of language support mechanisms within the WAx framework, illustrating their roles in knowledge conversion and enhancing resilience in CSTSs.
Knowledge Conversion Activity | Description | Role in Enhancing Resilience | Illustrative Example
Socialization | Informal sharing of tacit knowledge among team members | Establishes initial alignment and mutual understanding | Team discussions between developers and domain experts
Introspection | Individual reflection and reassessment of tacit insights | Refines personal understanding to inform improvements | A developer reviewing and interpreting user feedback
Externalization | Converting internal knowledge into explicit representations | Facilitates effective knowledge transfer and collective learning | Documenting design decisions or process workflows
Combination | Integrating diverse explicit knowledge sources | Consolidates information into coherent, actionable strategies | Merging guidelines, standards, and user feedback
Internalization | Embedding explicit knowledge into daily practices | Enhances operational performance and continuous learning | Applying documented procedures in routine operations
Conceptualization | Forming abstract models from gathered insights | Anticipates challenges and guides future decision-making | Developing predictive models for error-handling
Reification | Transforming abstract models into tangible artifacts | Enables practical system adaptations and improvements | Updating the chatbot’s architecture based on conceptual insights
Influence | Shaping practices through accumulated experience and contextual biases | Drives iterative refinement and continuous evolution | Iterative adjustments based on evolving user needs
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
