Article

A Knowledge-Driven Framework for AI-Augmented Business Process Management Systems: Bridging Explainability and Agile Knowledge Sharing

1 Research and Development Unit, EKA S.r.l., Via Garruba 3, 70122 Bari, Italy
2 Department of “Ingegneria dell’Innovazione”, University of Salento, Piazza Tancredi 7, 73100 Lecce, Italy
* Author to whom correspondence should be addressed.
AI 2025, 6(6), 110; https://doi.org/10.3390/ai6060110
Submission received: 29 April 2025 / Revised: 19 May 2025 / Accepted: 23 May 2025 / Published: 28 May 2025

Abstract:
Background: The integration of Artificial Intelligence (AI) into Business Process Management Systems (BPMSs) has led to the emergence of AI-Augmented Business Process Management Systems (ABPMSs). These systems offer dynamic adaptation, real-time process optimization, and enhanced knowledge management capabilities. However, key challenges remain, particularly regarding explainability, user engagement, and behavioral integration. Methods: This study presents a novel framework that synergistically integrates the Socialization, Externalization, Combination, and Internalization knowledge model (SECI), Agile methods (specifically Scrum), and cutting-edge AI technologies, including explainable AI (XAI), process mining, and Robotic Process Automation (RPA). The framework enables the formalization, verification, and sharing of knowledge via a well-organized, user-friendly software platform and collaborative practices, especially Communities of Practice (CoPs). Results: The framework emphasizes situation-aware explainability, modular adoption, and continuous improvement to ensure effective human–AI collaboration. It provides theoretical and practical mechanisms for aligning AI capabilities with organizational knowledge management. Conclusions: The proposed framework facilitates the transition from traditional BPMSs to more sophisticated ABPMSs by leveraging structured methodologies and technologies. The approach enhances knowledge exchange and process evolution, supported by detailed modeling using BPMN 2.0.

1. Introduction

Over time, knowledge has assumed a central role within organizations and is now universally recognized as one of the primary resources capable of determining a company’s competitive success. Knowledge management (KM) represents a response to the demands imposed by globalization, new technologies, and the cognitive perspectives of businesses in general. The objective of KM is to enhance competitiveness and maintain the cognitive structures of the organization itself.
Often, the tools used within companies provide only a partial solution to the problem, as they focus on the formalization and preservation of knowledge and, in some cases, its manual validation, without implementing automated mechanisms that would not only facilitate management but also enable the extraction and validation of knowledge. Moreover, such context appears to be characterized by an emerging trend: the growing accessibility of business process execution data, together with advancements in AI, has paved the way for a new class of information systems known as ABPMSs. These systems enable execution flows to emerge dynamically, eliminate the need for explicit modifications to software applications for adaptation, and autonomously identify, validate, and implement improvement opportunities in real time.
This trend is confirmed by the interest found in the scientific literature: noteworthy is the case of the study conducted by Dumas et al. [1], which focuses on the integration of AI into BPMSs, acting as a foundational resource; this document seeks to inspire ongoing research and development in AI-driven Business Process Management (BPM). It outlines a strategic path for future advancements while highlighting the transformative role of AI technologies in optimizing business processes.
Specifically, the integration of AI technology into BPMSs opens up new opportunities to leverage automation, enhancing the efficiency and adaptability of business processes while requiring minimal but impactful involvement from human agents during execution. Unlike traditional BPMSs, which rely on predefined flows and rules, an ABPMS can analyze the current state of a process or multiple processes to determine actions that optimize process performance.
Although significant progress has been made in integrating AI into BPMSs, the study by Dumas et al. highlights several critical challenges that hinder widespread adoption and effectiveness. One significant hurdle is achieving situation-aware explainability, meaning that ABPMSs must be equipped to provide clear, context-sensitive explanations for their actions, decisions, and contributions to business process goals. These explanations must be accessible and actionable for users, bridging the gap between advanced automation and human understanding.
Another key challenge lies in integrating behavioral dimensions into these systems. This involves the ability to analyze and interpret patterns of task execution and organizational routines based on multiple observations, enabling a more nuanced understanding of complex processes.
Furthermore, a critical requirement for an ABPMS consists of the ability to support interactive communication with users, offering explanations for their actions, and proactively suggesting improvements or adaptations to business processes.
Finally, transitioning from traditional BPMSs to ABPMSs must be approached incrementally, through a modular adoption strategy, allowing organizations to prioritize and implement features progressively, ensuring alignment with their specific needs and objectives while minimizing disruption.
Although the manifesto outlines various challenges, it does not provide exhaustive solutions or methodologies for addressing these challenges, indicating a gap in practical implementation strategies.
This study proposes a framework that aims to overcome the aforementioned limitations. Building on these considerations, a simple system, such as a forum, was designed to highlight the active contributions of each member of the Agile team involved, while also being clear and easy to use to encourage knowledge sharing. In addition, the knowledge sharing and management solution will include automated steps, thereby optimizing the methodology to make it more dynamic and efficient.
Specifically, the approach proposed by Yakyma [2] will be considered, which effectively describes how, within an Agile approach, particularly the Scrum framework, it is possible to empirically identify the various stages characterizing the SECI methodology.
However, the main limitation of this framework lies in its failure to define at which point in the KM process the lessons learned and best practices are formalized and validated. Furthermore, it does not specify how typical roles in the Scrum framework, such as the Scrum Master and CoP, contribute to the validation and dissemination of these forms of knowledge within the organization. The proposed framework will, therefore, also consider and overcome limitations of the Yakyma work and will consist of a software tool and content management processes that support the KM process. Specifically, the system will have the following features:
  • Be web-based and user-friendly;
  • Enable the quick capture of a lesson at any time, its dissemination to potential users, and its easy and fast retrieval;
  • Include a dedicated section for lessons learned and patterns;
  • Create a backlog of discussions, lessons, and lessons learned to be discussed and validated (or not) during meetings and CoP;
  • Incorporate advanced search techniques, leveraging keyword searches and other specialized hyperlinked relationships.
The primary objective of this study is to develop a knowledge-driven framework that addresses the current limitations of ABPMSs, introducing real-time feedback and structured knowledge sharing, thus promoting explainability. The integration of behavioral insights and Communities of Practice (CoPs) will, moreover, promote and facilitate user engagement. Finally, a core aspect is ensuring a smooth transition from traditional BPMSs through modular and incremental adoption. Specifically, the research is guided by the following key questions: How can AI enhance the adaptability and efficiency of BPMSs while maintaining situation-aware explainability? What role does Agile knowledge sharing play in optimizing AI-augmented business processes? To achieve these objectives, the state of the art is analyzed in Section 2, to gain insight into the evolution of the field and highlight current and future directions. Then, the proposed framework is presented, and a set of requirements to support process modeling is defined, as outlined in Section 3. In Section 4, details on the technical implementation are presented. Finally, in Section 5, the proposed framework is evaluated in terms of its ability to address the identified challenges. Section 6 and Section 7 focus on the limitations of the research and future paths of investigation.

2. State of the Art

In recent years, BPMSs have made great strides through the application of AI techniques, giving rise to ABPMSs. These ABPMSs are designed to increase operational efficiency and provide assistance to human operators in completing complex tasks and making crucial decisions while also solving the issues and challenges typical of traditional BPMSs. In this context, some studies in the literature prove to be fundamental as a reference point; for example, the work of Dumas et al. [1] highlights how an ABPMS can adopt a Hybrid Process Intelligence approach that strikes a balance between automation and human intervention. This approach not only promotes continuous learning but also makes business processes more adaptable to changing needs.
However, there are also practical and theoretical issues yet to be explored. This section has the task of analyzing the state of the art, highlighting specific areas of interest that this study intends to explore.
At the same time, an overview of current challenges and trends will be provided, considering future directions of the application of AI to BPMSs.

2.1. The Impact of AI on KM

AI has proven to be an enabling technology in terms of improving every aspect of the knowledge lifecycle, from creation to storage, transfer, and application. As the study by Kovačić et al. [3] highlights, AI has also found application in complex contexts such as manufacturing, helping to optimize business processes and strategies.
AI, applied for optimizing production processes, makes it possible to identify similar patterns among different data and to create and discover new knowledge that can be exploited by the organization to increase its competitive advantage, also with a view toward continuous improvement. In addition, important evidence emerges from the analysis that AI can be used both to create new knowledge, e.g., through real-time analysis of production data, and to support operators in improving their prior knowledge. Furthermore, as the study by Psarommatis and Kiritsis [4] shows, the knowledge created by AI systems is often not sufficient to solve the problems that can occur during a manufacturing process, which is why experience and company best practices—present within a company knowledge base—are key to solving such problems during operational execution.
AI has promising applications in areas such as Data Management and Analysis, as stated by Enholm et al. [5], in a way that helps manage the increasing amounts of data collected through advanced tools to extract useful information supporting the decision-making process. This is possible by automating complex tasks such as information collection, classification, and analysis, thus significantly reducing the manual workload.
Furthermore, as highlighted by Dumas et al. [1], the introduction of advanced Natural Language Processing (NLP) techniques improves information retrieval, allowing rapid and targeted access to relevant data. With these technologies, structured knowledge bases can be created with minimal human input. Through the use of advanced NLP tools, it is possible not only to improve information retrieval but also to make access to essential data more efficient by automating the collection of knowledge from numerous sources and aggregating it systematically and coherently. This automation makes it possible to build highly detailed and versatile knowledge bases with minimal manual effort, thus leaving more space for higher value-added activities. Furthermore, AI facilitates the personalization of information distribution, adapting it to the specific needs and preferences of users, thus improving the overall KM experience. AI-enabled tools, in turn, serve to strengthen institutional collaboration because they provide insights based on the collective knowledge of an organization and initiate the kind of evidence-based discussions that aim to solve problems. AI is also known to improve learning and collaboration within an organization. On this subject, the study conducted by Taherdoost and Madanchian [6] shows how AI gives humans a better understanding of their surroundings through real-time analytics and prediction tools. Jarrahi et al. [7] also highlight the importance of knowledge sharing, as AI enables people working on similar problems to connect and network with each other, thus fostering collaboration across departments. At the same time, AI, as stated by Alavi et al. [8], offers personalized recommendations and intuitive tools that facilitate access to resources by fostering an organizational culture based on continuous learning. Indeed, AI, especially Generative AI (GenAI), facilitates knowledge transfer, supporting training and fostering a culture of learning, but at the same time, it raises concerns about over-reliance on such systems and the sharing of sensitive information. It also increases productivity and innovation by automating tasks and providing useful insights. However, challenges such as AI bias and the marginalization of junior workers need to be addressed.
On the other hand, from a strategic perspective, AI proves to be essential for innovation and the development of new products and services. As highlighted by Thakuri et al. [9], the impact of AI on KM is profound, improving decision making through data analysis and Machine Learning (ML). It thus emerges that AI improves knowledge acquisition and sharing by automating the collection and dissemination of information, making it more accessible to employees, thus promoting innovation through the analysis of market trends, leading to the development of new products and services.
This enables the optimization of internal operations and the formalization and transfer of best practices, as in the Compliant Knowledge Transfer Model proposed by Linder [10], which can enable the transfer and distribution of knowledge within the organization so that it can be used for future developments of a related product or process. This model, however, is structured around a sequential approach and has proven difficult to adapt to the needs and characteristics of an organization operating in competitive environments, such as today’s environments, characterized by increasingly short delivery times. However, the adoption of AI in KM also presents challenges: in the study conducted by Chen [11], it is highlighted that the automation of processes such as data tagging and categorization can lead to excessive dependence on systems’ technological capability, requiring human oversight to avoid problems. The paper discusses the interaction between AI and KM, highlighting how their convergence can improve organizational performance and innovation. KM has been shown to provide the necessary context and structure for AI algorithms, while AI can accelerate knowledge discovery and streamline content curation. Several studies suggest that effective cooperation between human expertise and AI capabilities is crucial to maximize the benefits of KM systems. The literature shows that the most significant impact of AI on KM is, therefore, the increase in efficiency. This impact is important because it directly influences the way organizations operate and manage their knowledge assets: by automating repetitive tasks such as labeling, indexing, and categorizing data, AI significantly reduces the time and effort required for KM processes. In this way, organizations can focus on strategic initiatives instead of manual work, obtaining faster and more precise access to information, which is crucial in today’s fast-paced business environments.

2.2. Impact of AI on BPMSs

The integration of AI into BPMSs is profoundly revolutionizing the way organizations manage and optimize their workflows, as evidenced by numerous studies in the literature. In this regard, AI plays a multifunctional role, helping to improve operational efficiency, automation, and decision quality, with implications that transform organizational dynamics toward greater adaptability and innovation. Casciani et al. [12] explore the innovative realm of ABPMSs, which leverage AI technology to enhance the execution and adaptability of business processes, arguing that a key feature of these systems is their conversational capability, allowing them to engage proactively with users, making business processes more efficient and user-friendly. However, the integration of these technologies also requires reflection on their ethical and transparency implications, ensuring that human expertise and verification remain at their core.
Additionally, Kokala [13] draws attention to the revolutionary potential of AI-driven workflows and intelligent automation in transforming Business Process Management. Organizations can attain previously unheard-of levels of operational efficiency, cost optimization, and quality improvement utilizing a variety of cutting-edge technology. Zebec [14] also emphasizes AI’s ability to enhance decision making in BPM. The technology directly impacts organizational efficiency, adaptability, and overall performance, thus allowing adaptability to market changes.
Additionally, AI is key in optimizing Supply Chain Management (SCM), in processing huge datasets, and consequently in achieving higher degrees of organization and performance of knowledge management. As highlighted in the study by Helo and Hao [15], the integration of AI technologies into SCM enables organizations to extract valuable information from complex datasets, optimizing operational processes, demand forecasting, and customer service quality with the aim of continuous improvement. This transformation is enabled by continuous ML, which fosters constant adaptability to market and operational changes. Decision support is another powerful dimension of AI. As Aggarwal [16] pointed out, AI supports BPM through predictive analytics, data-driven insights, and recommendation systems that complement and optimize all forms of decision making. Perhaps most important is the ability to assess risks and prospects through active collaboration with internal human input, further highlighting how AI improves managerial judgment so that decision making is more robust and effective. Similarly, Dumas et al. [1] introduce the concept of ABPMSs, systems that combine AI and human knowledge, providing adaptive autonomy to processes without sacrificing the possibility of human intervention, further confirming a balance between automation and control.
Advanced automation, enabled by technologies such as RPA, is a key aspect of AI’s contribution, as discussed by Rosemann and Szelągowski [17,18]: automating repetitive tasks not only reduces errors and operational costs but also frees up human resources for operations with greater strategic and creative value, increasing productivity and encouraging innovation.
Schaschek et al. [19] also explain how AI, together with predictive and prescriptive analytics, enables the identification and elimination of bottlenecks in processes to optimize overall operational effectiveness.
In particular, AI, according to Wang [20], enables the automation of processes, predictive analytics, and personalized customer interactions in specific areas like Product Lifecycle Management (PLM). This enhances the user experience and scalability. Moreover, the use of AI along with other emerging technologies such as IoT and blockchain consolidates safer and more transparent operations while handling the ethical challenges associated with such innovations. Advanced techniques such as Large Process Models (LPMs) and NLP open new frontiers toward process management optimization, as Kampik and De Nicola demonstrate [21,22]. The ability of GenAI models to analyze event logs enables the identification of patterns and possible improvements in operating flows, boosting more effective and collaborative management between humans and machines. According to Fahland [23], XAI plays an essential role in matters of transparency and trust: clearly explaining, where possible in an understandable manner, the reasons behind automated choices can improve user confidence in AI-driven systems and increase assurance about the safety of deploying advanced solutions. This will also facilitate the wide acceptance of emerging technologies while helping achieve knowledgeable and responsible use of AI. Real-time process management using predictive analytics, process mining, or intuitive interactions driven by NLP and chatbots has been in place for quite a while, as discussed by Olatunji [24], Chapela-Campa [25], and Dumas [1]. This enables enterprises to respond swiftly to critical situations and make the customer experience personal, elevating perceived value and satisfaction. In this digital transformation space, according to Gabryelczyk et al. [26], AI acts as a facilitator towards intelligent automation, real-time analytics, and process optimization. Identifying inefficiencies and offering tailored solutions are key pillars for organizational adaptability and customer satisfaction.
Finally, Salvadorinho’s [27] study focuses on the importance of KM in digital manufacturing paradigms, highlighting that the rapid and actionable transition of tacit knowledge from experienced professionals to new hires is a key challenge. The study highlights the usefulness of the BPMN 2.0 standard for formalizing operational instructions that support workforce turnover, preserving the company’s knowledge resources, and promoting more effective use of AI tools. This synergy between KM and advanced technologies further consolidates the transformative impact of AI in business processes.

2.3. Challenges in ABPMSs

Alongside the many transformative advantages promised by the inclusion of AI in BPMSs, such as increased efficiency and smarter decision making, there are also important research challenges that warrant detailed consideration, including the following:
  • Situation-aware Explainability: ABPMSs must provide explanations that are not only context-relevant but also easily understood by the human user. Typical questions would relate to why a specific task is performed, what decisions the system is making, and how its actions contribute to the objectives of the business process. The challenge lies in devising mechanisms that produce such explanations in a form users can understand, as the basis for further system activities as well as for informed decisions grounded in that understanding.
  • Autonomy Framed by Constraints: Although ABPMSs are designed to operate autonomously, this autonomy must be bounded by specific operational assumptions, goals, and environmental constraints. The systems should not only act independently but also engage in a dialogue with human users, explaining their actions and providing advice for process changes or improvements. The challenge lies in conceptualizing a model in which two-way interaction is possible, keeping the system effective without compromising its accountability to human oversight.
  • Continuous Improvement: An ABPMS should be able to identify and enact improvements in the business process. This will require new skills and methods by which a system can assess potential changes against its operational capability and anticipated outcomes. The challenge will be in integrating sophisticated AI capabilities that facilitate this learning process, adjusting system operations through feedback and changing scenarios.
  • Hybrid Process Intelligence: This concept states that an ABPMS needs to work alongside humans as a “learning apprentice”. The AI has to learn the user’s work practices rather than expect the user to modify the work. Hence, the challenge is to design systems that learn through human experiences and modify their behavior accordingly so that they work in synergy with human users in executing the processes.
  • Trust and Reliability: To ensure acceptance and efficiency, an ABPMS has to gain credibility and trustworthiness. Users must have access to information about the transparency of the AI’s decision making, the appropriate use of data, and the handling of unpredicted situations. Making systems perform better is not the only challenge; users must also be convinced to trust the decisions and actions of the system.
These challenges indicate that developing ABPMSs remains multifaceted and requires continuous research to resolve these issues. Each challenge entails its own opportunities for innovation and improvement in the field of BPM.
In this context, the framework proposed in this study represents a relevant contribution to resolving these challenges. First, the framework features a dedicated section for lessons learned and patterns, thus responding to the challenge of situation-aware explainability. The framework provides a structured way to document and explain decisions, actions, and outcomes. The inclusion of automated steps ensures that these explanations are clear, relevant, and easily understandable by users, enabling agents to act on this information effectively. The ability of the framework to capture discussions and lessons learned in a consolidated backlog is perhaps the most significant mechanism by which behavioral data are accumulated over time. It provides access to a historical repository, allowing for the identification of recurring patterns in task performance, organizational routines, and the nature of the work. This facility is further strengthened in identifying and analyzing complex behaviors through advanced searches and hyperlinked relationships between objects and events. Additionally, it connects related elements across different timeframes to deepen the understanding of complex behavioral dynamics in the organization.
The forum-based architecture of the framework is designed to promote active conversational engagement among Agile teams within the context of the challenge of Conversational Engagement. This system creates an interactive environment in which the knowledge shared becomes a driver for continuous improvement and process adaptation. Automated tools within the framework optimize this engagement by dynamically proposing contextually relevant actions, aligning these suggestions with operational constraints while maintaining the focus on team collaboration and productivity.
Moreover, the proposed framework effectively responds to the existing challenge of Broader Contextual Reasoning: through its emphasis on the unrestricted capture and dissemination of lessons, the framework accommodates a wide array of situational factors, including time, location, and group associations. This feature ensures that vital contextual components are taken into consideration in decision-making processes. Advanced search enables the linking of discussions and lessons to the particular scenarios where they are most relevant, thus improving the framework’s overall value. Hence, the system is backed by a more comprehensive and context-aware approach to learning and problem solving within organizations.
The modular design of the framework, as well as its simple web-based interface, enables a gradual and flexible transition from legacy systems to more advanced ABPMSs. Organizations are, hence, able to structure their transformation processes according to individual needs, so that high-value features, such as lessons learned, discussions, and the CoP process, can be introduced early. The framework’s adaptability leaves organizations free to extend its use across varying business contexts according to their operational goals.

2.4. Emerging Trends in ABPMSs

Although much has been accomplished, the field of BPM is undergoing a profound transformation as a result of the advances of AI. Such transformation is proof of the possible future development of BPMSs, accomplished by integrating AI capabilities such as ML, NLP, and predictive analytics, thus enabling systems that promise not only enhanced efficiency and decision making but also more personalized and ethical solutions tailored to modern business needs. To understand the scope of innovation, collaboration, and ethical issues within the domain of AI-augmented BPM, the emerging trends will be elaborated on in the following sections.
The rise of Intelligent Process Automation deserves mention, as it combines AI technologies such as ML, NLP, and RPA. Such advanced systems can learn from information, evolve, and even make decisions on their own. This progress marks a major change from traditional automation to intelligent systems [19].
One of the key trends is the increasing attention to explainability and transparency. In the future, ABPMSs will have more capability to provide better and comprehensible explanations for their actions, which is vital to promote user confidence and cooperation between human coworkers and AI systems. Furthermore, these systems are expected to be more conversational, user-oriented, and data-driven, meaning that they are capable of starting a dialogue with users, advising about changes that might be appropriate, or about how processes can be modified in real time through the use of analyses [28].
It is also emphasized in the literature [1] that human and AI interactions should be improved. ABPMSs would behave as learning apprentices, who would observe human workplace activities and use that information, together with feedback given by users, to modify their actions. In this way, the collaboration between the AI and the human user is enhanced. Additionally, mechanisms of continuous learning and improvement are supposed to enable these systems to modify what they do based on the interaction with users and past events, ensuring that the systems remain relevant and effective.
The ultimate goal of AI-enhanced BPM will be the automation of business processes with the help of Cognitive Process Automation (CPA), where human cognitive processes are mimicked to perform tasks that require complex reasoning and learning. Such capabilities enable businesses to operate in volatile environments with increased sophistication. In addition, AI systems are increasingly built as lifelong learners, incorporating self-improvement features based on the modeling of new data and experiences. This means BPM tools can evolve and improve over time to stay aligned with current market expectations. However, as AI is increasingly incorporated into BPM systems, the need for governance frameworks and ethical considerations arises. The use of AI in critical decisions requires principles of fairness, accountability, and transparency to build trust with relevant parties and ensure legal compliance [1,19].
With Customer Experience Management, AI also creates new avenues to explore. It facilitates service customization by analyzing the existing customer database to help make more contextually appropriate decisions, thus deepening and increasing customers’ satisfaction [19].
Generative models for synthetic data constitute another relevant research area. Organizations nowadays face a major challenge in testing AI and ML models in BPM because of the lack of robust datasets. Further studies are needed to understand the ability of Generative Adversarial Networks (GANs), Transformers, and Autoencoders to generate synthetic datasets. Investigating how these can be combined to produce high-quality synthetic data for training purposes would significantly advance this area of research [28].
Finally, efforts toward interoperability and standardization will enable different systems and technologies to work together seamlessly. This will simplify the integration of ABPMSs into existing IT infrastructures, enhancing their overall utility and effectiveness [1].

3. Materials and Methods

3.1. Framework Description

The following section focuses on the description of the proposed innovative framework. This framework is based on the application of the SECI [29] model in combination with Agile methodologies, in particular the Scrum framework, and the use of advanced technologies such as XAI, RPA, and process mining (PM).
The Scrum framework offers a clear organizational structure based on well-defined roles, such as the Scrum Master and Team Members, and iterative processes. This approach facilitates the creation, sharing, and validation of knowledge, enabling agile and collaborative management within teams [30].
The SECI model, which guides the knowledge transformation cycle, describes how knowledge can evolve through four stages: from informal socialization, where it is shared implicitly among members, to its formalization, combination, and dissemination as explicit knowledge.
The proposed methodological and technological framework is based on the reference architecture shown in Figure 1, consisting of several architectural layers containing methodological components, understood as sets of methods, rules, best practices, and definitions, and technological components, understood as sets of techniques and tools.
Data are collected and then appropriately processed and made suitable for complex computational systems analysis. The information, thus structured, following the logical flow highlighted in the reference architecture, is forwarded to the next layer, where the component dedicated to AI and XAI analysis, namely the AI Explainability Module, is located.
This module consists of an AI algorithm suitable for processing domain data and all those methodologies and techniques related to XAI. This is possible using libraries and open-source tools made available through data analysis environments (e.g., R or techniques implementable with Python), as well as a business layer capable of formalizing the technical data in a way that makes them suitable for the analyses conducted within the mentioned module.
In addition, a dedicated methodology, referred to as the ethical XAI methodology, defines a set of guidelines and approaches useful for evaluating and improving the reliability and ethicality of the developed systems. The Innovative Human–Computer Interaction (HCI) Module provides users with a series of graphical interfaces that allow the different operational figures involved to receive information and reports with the right level of granularity and support for the different decision-making processes.
These interfaces allow all knowledge concerning the decision-making process to be made explicit and thus able to be assimilated by the human operator. Such interfaces are defined through an HCI methodology, which, through a series of best practices and guidelines typical of the manufacturing sector, supports the design of the interfaces belonging to this architectural layer. To ensure usability and alignment with user expectations, the design of these interfaces was conducted following established HCI principles [31], with particular attention to User-Centered Design (UCD) [32] and Nielsen’s usability heuristics [33]. Key design decisions are guided by heuristics such as system state visibility, consistency and standards, recognition rather than recall, and error prevention. These considerations were integrated to ensure that the Innovative HCI Module allows effective access to granular information and also supports cognitive assimilation and user empowerment throughout the decision-making process. The results derived from the XAI methodologies and the knowledge elicited by the Innovative HCI Module are conveyed into a repository called Process Optimization Rules (realized through archives of structured and unstructured data), which constitutes a sort of enterprise knowledge base through which it is possible to elicit and disseminate all the knowledge made explicit through the functionalities described so far.
At the same time, this repository feeds the Explainable Workflow Designer module, through which—thanks to appropriate rules and formalisms typical of the BPM domain—it is possible to model, optimize, and execute the analyzed factory processes using a notation that allows for syntactically and semantically abstracting the formalisms typical of the manufacturing sector and related to the different phases of a product’s lifecycle (e.g., the Design, Manufacturing, Inspection, Delivery, and Maintenance and Service phases). This is made possible using Business Process Engine solutions (such as, for example, Camunda or BonitaSoft), as well as Business Process Modeling solutions. To support this module, the architecture includes an RPA-based methodology that supports the Modeling and Optimization process by uniquely and objectively identifying which process tasks are to be automated. In addition, a repository is provided containing all those rules useful for the definition of an explainable process workflow.
Finally, the processes thus optimized and automated will be useful for subsequent data collection and further optimization and automation of the same, thus determining the cyclical nature of the proposed methodological and technological framework.
Specifically, we will consider the approach proposed by Yakyma [2], who effectively describes how, within an Agile approach, specifically the Scrum framework, it is possible to empirically identify the various stages characterizing the SECI methodology. However, the main limitation of this framework is that it does not define at what point in the KM process the lessons learned and best practices are formalized and validated, or how the typical figures of the Scrum framework, i.e., the Scrum Master and the CoPs, contribute to validating and disseminating these forms of knowledge within the organization. Moreover, although the SECI model is considered highly relevant and valuable in the context of the study, it is important to consider that the model’s applicability in different cultural contexts has highlighted some notable limitations. A theoretical analysis by Easa and Fincham [34] on the introduction of the socialization, externalization, combination, and internalization processes of the model in Arab, Chinese, and Russian organizational contexts identified both similarities and differences compared to the Japanese context, suggesting that the SECI model needs to be adapted to consider a cultural dimension, so that the model can fit well in a heterogeneous context. Therefore, as depicted in Figure 2, adding a “Culturization” dimension to the four SECI processes is significant for use in a multicultural context.
Finally, the framework includes a software system designed to be user-friendly and accessible via a web platform. This system allows lessons learned to be quickly captured and archived in an organized manner, making them easily retrievable and shareable with all members of the organization. It also includes a section dedicated to lessons learned and patterns that offers advanced tools for searching for specific information, using keywords and hyperlinks to facilitate effective and interconnected navigation; the creation of a backlog of discussions, lessons, and lessons learned to be discussed and validated (or not) in meetings and CoPs; and advanced search techniques based on keyword searches and other specialized hyperlink relationships.
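As an illustration of the search capabilities described above, the following minimal sketch models knowledge entries with tags and hyperlinked relationships and performs a keyword search over them. All names are illustrative assumptions and do not come from the system’s specification.

```python
# Minimal sketch of keyword (tag) search over hyperlinked knowledge entries.
# Entry fields and the matching rule are illustrative assumptions.
from dataclasses import dataclass, field


@dataclass
class Entry:
    title: str
    tags: set[str]
    links: list[str] = field(default_factory=list)  # titles of related entries


def search(entries: list[Entry], *keywords: str) -> list[Entry]:
    """Return entries whose title words or tags cover all given keywords."""
    kws = {k.lower() for k in keywords}
    return [
        e for e in entries
        if kws <= ({t.lower() for t in e.tags} | set(e.title.lower().split()))
    ]


kb = [
    Entry("Deployment checklist", {"ci", "release"}, links=["Rollback pattern"]),
    Entry("Rollback pattern", {"release", "incident"}),
]
print([e.title for e in search(kb, "release")])  # both entries match
```

The hyperlinked relationships (the links field) would then let a user navigate from a matching lesson to the related patterns, supporting the interconnected navigation described above.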
By combining these components, the framework creates a dynamic and collaborative environment, optimizing the creation, sharing, and utilization of knowledge within organizations. Within the Scrum framework, the system defines specific roles and responsibilities to ensure smooth and effective operation. The Scrum Master plays the role of KM facilitator, supporting the team in achieving goals and promoting the adoption of best practices. The Scrum Team includes all members involved in the knowledge creation process, working collaboratively to generate content and contribute to its validation. Members of CoP, on the other hand, are responsible for validating and disseminating lessons learned within the organization. They are a key link in ensuring that knowledge is integrated into business processes and shared between all levels of the organization.
A simplified view of the used nomenclature is shown in Table 1. This nomenclature is useful when defining and managing roles in the requirements elicitation phase.
The proposed framework is articulated through a structured and integrated methodological and technological approach, enabling explainable AI-driven optimization of business processes. It consists of three core technological modules, the AI Explainability Module, the Innovative HCI Module, and the Explainable Workflow Designer, supported by three methodological pillars: process modeling and optimization methodology, HCI methodology, and ethical XAI methodology. Figure 3 shows a functional representation through which it is possible to gain a deeper understanding of the framework’s logic.
The methodological process unfolds through the following key steps:
  • Process Modeling and Formalization: The first step involves defining the operational workflow through the Explainable Workflow Designer. This module allows domain experts to model business processes using a custom, user-friendly notation, abstracted from the complexity of BPMN 2.0. Thanks to a drag-and-drop interface and domain-specific semantics, non-technical users can formalize actors, data flows, and decision points. This approach not only supports initial process mapping but also facilitates subsequent automation and orchestration.
  • Data Analysis and Predictive Modeling: Once the process is defined, the AI Explainability Module is applied to perform predictive analytics using AI models. The system ingests structured and unstructured data (e.g., sensor, maintenance, and production data) and applies ML techniques to forecast key operational indicators. For example, it enables the estimation of Remaining Useful Life (RUL) for mechanical components.
  • Explainability and Human-in-the-Loop Interaction: To ensure that the AI’s outputs are transparent and actionable, XAI techniques (e.g., SHAP and LIME) are employed. These methods highlight the most influential variables driving predictions. The Innovative HCI Module then presents these insights via interactive visualizations tailored to the cognitive models of domain operators. This Human-in-the-Loop (HITL) paradigm empowers users to interpret model outputs, validate system recommendations, and retain decision-making authority.
  • Knowledge Codification and Execution: The results, once validated, are formalized into a knowledge repository through the Workflow Designer. Best practices and optimization rules are stored in the Process Optimization Rules database, enabling continuous knowledge sharing and reuse. This integration of formalized process knowledge with live operational data supports real-time monitoring and adaptive decision making.
  • Ethical Evaluation: The ethical XAI methodology guides organizations in assessing the ethical compliance of AI systems based on EU principles, including transparency, robustness, privacy, and social well-being. Using a structured set of indicators and visual tools (e.g., Kiviat diagrams), the framework enables quantifiable evaluation of AI behavior, ensuring responsible adoption.
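To make the ethical evaluation step concrete, the sketch below renders a Kiviat (radar) diagram over the seven European Commission requirements adopted by the methodology (listed in full later in this section). The indicator scores are purely illustrative placeholders, not values measured in this study.

```python
# Minimal sketch: rendering an ethical-XAI assessment as a Kiviat (radar) diagram.
# The seven properties follow the European Commission requirements cited in the
# text; the scores are illustrative placeholders on a normalized [0, 1] scale.
import numpy as np
import matplotlib.pyplot as plt

properties = [
    "Human oversight", "Robustness and safety", "Privacy and data governance",
    "Transparency", "Diversity and non-discrimination",
    "Social and environmental well-being", "Accountability",
]
scores = [0.8, 0.7, 0.9, 0.6, 0.75, 0.65, 0.85]  # hypothetical indicator values

# Close the polygon by repeating the first point.
angles = np.linspace(0, 2 * np.pi, len(properties), endpoint=False).tolist()
angles += angles[:1]
values = scores + scores[:1]

fig, ax = plt.subplots(subplot_kw={"polar": True})
ax.plot(angles, values, linewidth=2)
ax.fill(angles, values, alpha=0.25)
ax.set_xticks(angles[:-1])
ax.set_xticklabels(properties, fontsize=8)
ax.set_ylim(0, 1)
ax.set_title("Ethical XAI assessment (illustrative)")
plt.tight_layout()
plt.show()
```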
A detailed depiction of the framework’s architecture is shown in Figure A1 of Appendix A. The proposed framework adopts an architectural approach based on the design of functional modules, each autonomous and capable of real-time communication with all other modules within the framework. This approach was chosen because such an architecture is commonly used to achieve high performance, reliability, and scalability. Furthermore, the architectural framework, based on event-driven logic, is characterized by the distribution of microservices. This paradigm enables the definition and development of highly manageable, loosely coupled, and independently deployable technological components.
Particular attention has been given to the component responsible for executing the business logic. In this layer, a Business Process Engine (BPE), repositories to support operations, a set of components for AI-based methodologies, and components for data collection and data transformation have been defined. Additionally, a Middleware Service Bus (MiSB) has been integrated to ensure the proper functioning of the execution engine, as well as interoperability and contextualization within a specific domain.
Regarding the application layer, the logical–functional component dedicated to user interaction has been equipped with tools and methods to simplify and streamline operations during the definition and execution phases of a business process. This is further enhanced by real-time messaging, which enables the exchange and dissemination of organizational knowledge derived from optimization logic. Specifically, the technological tools and utilities developed within the project framework are designed with information models and content that are presented to the user in an organized and aggregated format, leveraging advanced UI/UX concepts. This ensures proper visualization and usability through appropriate authentication and authorization policies. The features of the enabling platform will allow users to manage a business process by collaboratively and interactively defining its characteristics, properties, and rules, to enable knowledge improvement throughout the process lifecycle. This facilitates the design, execution, orchestration, and monitoring of the process, supporting the identification of strengths and weaknesses for optimization purposes.
Another key feature of the framework is the design of abstract logic components that allow for the definition and subsequent instantiation of user interface models using low-code/no-code approaches. These are aimed at creating dynamic interfaces through a set of graphical artifacts that incorporate common web elements. Finally, to ensure a comprehensive and holistic design, a series of dynamic connectors was also developed. These support the configuration, within an experience design perspective, of the interfaces, enabling the collection, visualization, and management of technical data in real-time, near real-time, and historical modes. Similarly, considerations were made to define technological components and modules capable of supporting and enabling more effective human–AI interaction.
Complementary to the definition of the logical–functional architecture described, an analysis was conducted of major tools and libraries supporting process modeling and execution, such as Camunda, Power Automate, SAP Workflow Service, and Oracle BPM Suite.
The analysis revealed that Camunda enables event-based modeling using the BPMN 2.0 standard and allows for easier integration with various IT technologies.
Another highly important aspect was the examination of several frameworks, libraries, and technological solutions widely used in the fields of ML and Deep Learning. Their basic properties, features, strengths, and weaknesses were analyzed. The initial focus was on ML frameworks and libraries that do not require special hardware or infrastructure, aiming to simplify the complex process of data analysis and provide integrated environments beyond standard programming languages. Subsequently, the analysis shifted to key libraries that support the processing of large data volumes and require high computational performance.
The proposed knowledge management framework is situated within a broader technological–methodological approach, which includes both methodological and technological components. In particular, the ethical XAI methodology enables the assessment of the ethicality of AI methodologies and algorithms. This is achieved using specific properties, metric dimensions, and KPIs derived from the analysis of the relevant technical literature. More specifically, the ethical XAI methodology has been defined based on requirements and guidelines established at the legislative level, aimed at supporting the development of ethical AI systems. In particular, the core properties considered for this evaluation are the fundamental requirements for ethical AI as defined by the European Commission [35,36]: human oversight, robustness and safety, privacy and data governance, transparency, diversity and non-discrimination, social and environmental well-being, and accountability. For each of these properties, it has been possible to define reference dimensions, as well as the units of measurement necessary for their numerical and objective assessment.
From the perspective of algorithms and training and validation techniques, predictive maintenance approaches in the manufacturing sector have been considered, with data preprocessing that includes scaling and a split into training, validation, and test sets in proportions of 70%–20%–10%. The AI models chosen (and trained to perform the regression task of ‘predicting the numerical value of the remaining life of a machine’) are the following:
  • An MLP deep neural network: It is a simple neural network of the order of 100,000 parameters with “dense” connections. This model was chosen as it belongs to the family of neural networks (the currently most opaque AI algorithms), but we wanted to maintain the simplicity of the architecture to speed up training (Keras Python library).
  • A Random Forest: This algorithm trains 200 decision trees whose outputs converge to a common decision. Unlike the neural network, Random Forest is a medium-opacity algorithm and was chosen to allow XAI techniques to obtain results more simply, so as to compare them with those of the neural network (Python’s Scikit-learn library).
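A minimal training sketch of this setup follows, using synthetic stand-in data since the study’s dataset is not published: feature scaling, a 70%–20%–10% train/validation/test split, a dense Keras MLP on the order of 100,000 parameters, and a 200-tree Scikit-learn Random Forest, both regressing the remaining useful life (RUL) of a machine. Layer sizes and feature counts are illustrative assumptions.

```python
# Minimal sketch of the described setup: scaling, a 70/20/10 split, a dense Keras
# MLP (~100k parameters), and a 200-tree Random Forest, both regressing RUL.
# The synthetic data and layer sizes are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from tensorflow import keras

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 24))        # 24 hypothetical sensor features
y = rng.uniform(0, 500, size=5000)     # hypothetical RUL values in hours

# 70% train, 20% validation, 10% test.
X_train, X_tmp, y_train, y_tmp = train_test_split(X, y, test_size=0.3, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(X_tmp, y_tmp, test_size=1/3, random_state=0)

scaler = StandardScaler().fit(X_train)
X_train, X_val, X_test = map(scaler.transform, (X_train, X_val, X_test))

# Dense MLP on the order of 100,000 parameters (Keras).
mlp = keras.Sequential([
    keras.layers.Input(shape=(X.shape[1],)),
    keras.layers.Dense(256, activation="relu"),
    keras.layers.Dense(256, activation="relu"),
    keras.layers.Dense(128, activation="relu"),
    keras.layers.Dense(1),
])
mlp.compile(optimizer="adam", loss="mse")
mlp.fit(X_train, y_train, validation_data=(X_val, y_val), epochs=20, batch_size=64)

# Random Forest with 200 trees, the medium-opacity baseline (Scikit-learn).
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)
print("MLP test MSE:", mlp.evaluate(X_test, y_test, verbose=0))
print("RF  test R^2:", rf.score(X_test, y_test))
```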
Two XAI techniques were applied to these AI models (as detailed below), both model-agnostic (i.e., not linked to a specific AI model), so that they can be used for both models, drawing on the same explanatory “potential” and ensuring the comparability of the results obtained:
  • Local Interpretable Model-Agnostic Explanations (LIME): An XAI technique that falls into the category of explanations by simplification, where simpler surrogate models are used to simplify the complex model and draw explanations from it. In addition to it being model-agnostic, LIME was chosen due to its simplicity of implementation and common use in the scientific literature.
  • SHapley Additive exPlanations (SHAP): An XAI technique that falls into the category of explanations by feature importance, in which the samples are perturbed to estimate a value (called a Shap value) for each input, indexing the importance that the feature has towards the output (technique used: KernelExplainer). SHAP was selected both because, like LIME, it is model-agnostic (achieving results of comparable potential) and to contrast it with LIME, given its different approach to explanation.
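Continuing the training sketch above, the fragment below applies both techniques to the Random Forest; the MLP can be explained with the same calls by passing a wrapper around its predict function. The feature names and background sample size are implementation conveniences, not values from the study.

```python
# Minimal sketch applying the two model-agnostic XAI techniques to the Random
# Forest from the previous sketch (for the Keras MLP, pass
# lambda x: mlp.predict(x).ravel() as the prediction function instead).
import shap
from lime.lime_tabular import LimeTabularExplainer

feature_names = [f"sensor_{i}" for i in range(X_train.shape[1])]

# LIME: explanation by simplification, fitting a local surrogate around one instance.
lime_explainer = LimeTabularExplainer(
    X_train, feature_names=feature_names, mode="regression"
)
lime_exp = lime_explainer.explain_instance(X_test[0], rf.predict, num_features=5)
print(lime_exp.as_list())  # top local feature contributions

# SHAP: feature-importance explanation via KernelExplainer, as named in the text.
# A small background sample keeps the kernel estimation tractable.
background = shap.sample(X_train, 100)
shap_explainer = shap.KernelExplainer(rf.predict, background)
shap_values = shap_explainer.shap_values(X_test[:10])
shap.summary_plot(shap_values, X_test[:10], feature_names=feature_names)
```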
Moreover, the use of process mining proved to be of fundamental importance in gathering valuable information for the process. Specifically, given an event log—the historical collection of execution logs of process tasks—process mining techniques were applied to model (Process Discovery), monitor the execution flow (Conformance Checking), and identify existing inefficiencies (Performance Analysis). The analysis conducted through these process mining techniques provided a comprehensive overview of the current situation, highlighting the need for an additional step to achieve the process optimization objectives.
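These three steps can be illustrated with the open-source pm4py library; this choice and the event log path are assumptions for illustration, as the paper does not name its process mining tooling.

```python
# Minimal sketch of the three process mining steps named above, assuming the
# open-source pm4py library and an illustrative XES event log path.
import pm4py

log = pm4py.read_xes("factory_event_log.xes")  # hypothetical event log

# Process Discovery: derive a Petri net model from the observed executions.
net, initial_marking, final_marking = pm4py.discover_petri_net_inductive(log)

# Conformance Checking: measure how well the log replays on the discovered model.
fitness = pm4py.fitness_token_based_replay(log, net, initial_marking, final_marking)
print("Average trace fitness:", fitness["average_trace_fitness"])

# Performance Analysis: a directly-follows graph annotated with timing
# information, used to spot bottlenecks between activities.
perf_dfg, start_acts, end_acts = pm4py.discover_performance_dfg(log)
pm4py.view_performance_dfg(perf_dfg, start_acts, end_acts)
```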

3.2. Hardware and Software Tools

To achieve the results presented, selecting proper tools to support the experimentation was crucial. The selection process was strategically focused to ensure scalability, maintainability, and performance optimization. Camunda v.7.19 was selected as the best tool for the formalization, modeling, and execution of processes. For the development of the business logic component, Java v.17 was employed, enabling robust backend processes, seamless integration, and efficient handling of core application functionalities. Angular v.13 was selected for the implementation of the HCI component, delivering an interactive and user-friendly interface. Python v.3.11.1 was employed to develop the analytical component, facilitating complex data analysis. For the storage and versioning of structured and unstructured data, PostgreSQL v.15 and MongoDB v.6 were adopted, ensuring high reliability and contextual data persistence. ActiveMQ v.5.17.0 was selected to enable event-driven messaging management. Finally, process modeling and representation were executed following BPMN 2.0 notation, leveraging the capabilities of the SAP Signavio Process Manager tool (academic version), ensuring a process representation that meets recognized and established standards.
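Two brief sketches show how the Python analytical component can interact with the tools listed above. The first starts a deployed process instance through Camunda 7’s engine REST API; the host, process definition key, and variable names are illustrative assumptions.

```python
# Minimal sketch: starting a deployed process instance via Camunda 7's engine
# REST API (v7.19 as in the text). Host, key, and variables are illustrative.
import requests

CAMUNDA = "http://localhost:8080/engine-rest"
payload = {
    "variables": {
        "machineId": {"value": "press-07", "type": "String"},
        "predictedRulHours": {"value": 118.5, "type": "Double"},
    }
}
resp = requests.post(
    f"{CAMUNDA}/process-definition/key/knowledge_validation/start",  # hypothetical key
    json=payload,
    timeout=10,
)
resp.raise_for_status()
print("Started process instance:", resp.json()["id"])
```

The second publishes an event to ActiveMQ over its STOMP listener using the stomp.py client; the queue name, credentials, and payload schema are likewise illustrative.

```python
# Minimal sketch of event-driven messaging toward ActiveMQ (STOMP, default port).
import json
import stomp

conn = stomp.Connection([("localhost", 61613)])
conn.connect("admin", "admin", wait=True)  # default credentials, adjust as needed

event = {
    "type": "process.task.completed",
    "processInstanceId": "pi-42",  # hypothetical identifier
    "payload": {"predicted_rul_hours": 118.5},
}
conn.send(destination="/queue/abpms.events", body=json.dumps(event))
conn.disconnect()
```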

3.3. Knowledge Management System Requirements

The KM system consists of 18 core requirements, each of which contributes to ensuring the effectiveness, traceability, and valorization of corporate knowledge. These requirements describe the functionalities needed to support the knowledge lifecycle, from collecting and sharing experiences, to formalizing best practices, to managing roles and teams. The system aims at promoting collaboration between users, facilitating knowledge transfer, and ensuring an organized, structured management of information.
An overview of the knowledge management requirements is presented in Table 2.
Each of these requirements will be described in detail in the following subsections.

3.3.1. Requirement ID 1—System Scope

The KM system shall be able to collect the information related to the KM process, store it appropriately and persistently, and allow it to be visualized at each instant of its lifecycle and each step of the project management cycle. In particular, the system shall be able to realize the following:
  • Initiate a discussion, allowing users to present problems and suggestions related to daily operations, aimed at solving the problem and collaboratively discussing the suggestion. The collection and cataloging of discussions can form a backlog of topics managed within the Scrum Team itself (tacit KM during the socialization step).
  • Define a Lesson, allowing users to formalize an experience and discuss it collaboratively. The set of all inserted lessons can form a topic backlog accessible by all Scrum Teams during each operational activity (tacit to explicit KM during the externalization step).
  • Define a lesson learned. This will be made possible by the functionality implemented in the system through which a lesson learned can be formalized and shared. The set of all lessons learned entered can form a backlog of topics to be evaluated in the Scrum of Scrums (SoS), consisting of the Scrum Masters, or team leaders, of the different teams.
  • Formalize a pattern. The system should allow users to formalize and share a best practice. This database of inserted best practices can form a backlog of topics to be discussed in a CoP.
  • Manage roles and groups, considering that a group comprises members holding different roles; in the IT sphere, for example, a team may consist of a developer, a tester, a team leader, a unit leader, etc.
  • Track the history of each discussion, lesson, lesson learned, and best practice.
  • Ensure access to the pattern archive for each user to support the propagation of explicit to tacit corporate knowledge during the internalization step.

3.3.2. Requirement ID 2—Account Management

The system provides for different account types, depending on the user’s role within the group to which they belong.

3.3.3. Requirement ID 3—System Authentication

The user will be able to access the system after authentication through a basic authentication mechanism, carried out in compliance with corporate security policies and involving the entry of a username and password.

3.3.4. Requirement ID 4—Definition of Role in the Scrum Team

The system envisages that the user belonging to the Scrum Team can perform the following functions:
  • Initiate a discussion, aimed at solving a problem and/or discussing a suggestion for improvement in a collaborative manner;
  • Formalize a lesson;
  • Participate in a discussion;
  • Propose improvements to a discussion.

3.3.5. Requirement ID 5—Scrum Master Functionality

The system provides the user with the role of team leader within the Scrum Team, i.e., the Scrum Master; in addition to inheriting all functionalities granted to Scrum Team Members, the user will be able to carry out the following:
  • Approve a discussion;
  • Formalize a lesson and submit it for system validation;
  • Approve a lesson in lessons learned;
  • Reformulate an existing pattern to create a new lesson.

3.3.6. Requirement ID 6—Scrum of Scrums Member Functionality

The system provides for the user holding the role of team leader within the Scrum Team, i.e., the Scrum Master, who takes part in the Scrum of Scrums meeting; this user inherits all the functionalities granted to the members of the Scrum Team and, in particular, to the Scrum Masters.

3.3.7. Requirement ID 7—CoP Member Functionality

The system foresees that the unit leader may be a member of a CoP and will have the opportunity to participate in a CoP discussion aimed at promoting a lesson learned to the rank of pattern and thus company best practice. The CoP member inherits all the functionalities granted to the Scrum Master.

3.3.8. Requirement ID 8—Initiation and Formalization of a Discussion

The system shall allow the user to submit issues and/or suggestions related to the Scrum Team’s operations. Each contribution will be tracked to avoid loss of information. The user responsible for starting the discussion, i.e., the Scrum Master, must enter the following:
  • Title of the discussion;
  • Discussion start date.
Once concluded, the Scrum Master will be able to archive the discussion. Alternatively, if it is assessed that the discussion is relevant to the definition of new knowledge, this will be formalized by entering the following additional information:
  • Title identifying the proposed knowledge;
  • Keywords (tags);
  • Description of the proposed new knowledge.

3.3.9. Requirement ID 9—Lesson Formalization

A lesson can be formalized by any user who has gained experience worth sharing, positive or negative, within the Scrum Team. Alternatively, such formalization may occur when a team leader requests the transformation of a discussion into a lesson. The user responsible for creating the lesson must enter the following:
  • The name of the lesson;
  • Keywords (tags);
  • The areas of use of the lesson;
  • The description of the lesson.
If the lesson was generated from a discussion, the system automatically fills in the information entered in the fields of the discussion to which the lesson refers.

3.3.10. Requirement ID 10—Lesson Management

Once the lesson has been validated and stored in a dedicated repository, the system makes it visible to all users and open to subsequent discussion by those who wish to improve it.

3.3.11. Requirement ID 11—Creation of the Lesson Learned

The lesson learned can be created from a lesson formalized by the Scrum Master or a member of the Scrum Team. Alternatively, it can be created from an existing pattern that has been reformulated in such a way that it can be revalidated by the system.

3.3.12. Requirement ID 12—Approval of Lesson Learned

The lesson learned can be validated by the SoS. The lesson learned starts from an existing lesson or pattern and consequently inherits its attributes. For this reason, the user responsible for the approval procedure must enter a series of information into the system, some of which, as already mentioned, is inherited from the lesson from which it derives:
  • Title of the lesson learned;
  • Keywords (tags);
  • Validation date of the lesson learned (automatically compiled by the system);
  • The areas of use of the lesson;
  • The description of the lesson;
  • The validation of Key Performance Indicators (KPIs).

3.3.13. Requirement ID 13—Managing Lesson Learned

The system allows the lesson learned, once validated and stored in the appropriate repository, to be visible to all users and subject to discussion by users wishing to improve it.

3.3.14. Requirement ID 14—Pattern Approval

A best practice can be validated by the CoP. The best practice starts from a lesson learned. The team leader must enter the same information into the system as required for lessons learned, as well as information regarding any additional constraints to be taken into account.

3.3.15. Requirement ID 15—Pattern Management

The system makes the pattern, once validated, visible to all users and offers them the possibility to suggest improvements as well as its further reformulation and analysis. The system will also keep track of the connections between best practices and the related lessons, lessons learned, corrective actions, etc.

3.3.16. Requirement ID 16—Content Categorization

Content must be categorized by content type (discussion, lesson, lesson learned, and pattern) as well as by topic. The content is identified by a unique ID. The system allows all users to search for content by ID or by keywords (tags), type, field of use, and/or date of creation; the system will display to the user the list of content that meets the selected criteria (content metadata).
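To make this requirement concrete, the following minimal sketch shows how such metadata-based filtering could work; the record fields and function names are illustrative assumptions rather than the system’s actual schema.

```python
from dataclasses import dataclass, field
from datetime import date

# Illustrative content metadata record; field names are assumptions, not the actual schema.
@dataclass
class ContentItem:
    content_id: str
    content_type: str                       # "discussion" | "lesson" | "lesson_learned" | "pattern"
    tags: list = field(default_factory=list)
    field_of_use: str = ""
    created_on: date = None

def search(items, content_type=None, tag=None, field_of_use=None, created_on=None):
    """Return all items matching every criterion that was provided (None = ignore)."""
    results = []
    for item in items:
        if content_type and item.content_type != content_type:
            continue
        if tag and tag not in item.tags:
            continue
        if field_of_use and item.field_of_use != field_of_use:
            continue
        if created_on and item.created_on != created_on:
            continue
        results.append(item)
    return results
```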

3.3.17. Requirement ID 17—Import/Export

The system is required to allow the import and export of the lessons, lessons learned, and knowledge patterns generated. Table 2 gives an overview of the above requirements with a brief description of each.

4. Implementation

In the following section, we will proceed with the definition of a process model aimed at the creation of new knowledge. The development of this model is based on the methodology proposed by Yakima. This methodology enables the definition of how, within an Agile workflow, KM traverses the various steps characterizing the SECI model. The four SECI phases are implemented in the process as follows (a minimal lifecycle sketch is provided after the list):
  • Socialization (Tacit to Tacit): In this initial phase, tacit knowledge is shared informally among Scrum Team Members through direct interaction, collaborative work, and practical experience exchange. This occurs during Agile events such as daily stand-ups, sprint reviews, and retrospectives, where challenges, insights, and emerging solutions are openly discussed. Although no formal validation takes place at this stage, it lays the groundwork for identifying potential lessons learned.
  • Externalization (Tacit to Explicit): Tacit knowledge is transformed into explicit knowledge through documentation and verbalization. This phase is reflected in Discussion Management activities, where the Scrum Team, typically under the guidance of the Scrum Master, captures knowledge in the form of documents, presentations, or textual contributions within shared repositories. While not yet validated, these contributions are systematically recorded and tracked, preparing them for future evaluation. Furthermore, the Scrum Master facilitates the documentation process and ensures that relevant knowledge is systematically discussed during retrospectives and Scrums of Scrums. This active role supports both the validation and dissemination of emerging insights across teams.
  • Combination (Explicit to Explicit): During this phase, explicit knowledge is systematically aggregated, compared, and refined to generate structured insights. This stage constitutes the core of the formalization and validation process for lessons learned and the derivation of reusable patterns. Lessons are initially submitted via a standardized input form designed to ensure the consistency and completeness of the captured information. The subsequent validation process follows two parallel paths, depending on the nature of the evidence supporting the lesson. When lessons are underpinned by quantitative data, such as KPIs, they are assessed through an automated validation engine (DKE). Conversely, when lessons rely on qualitative insights, their relevance and applicability are evaluated during SoS meetings, which provide a collaborative forum for expert judgment and peer review. Once a lesson is recognized as broadly applicable across comparable contexts, it is elevated to the status of a pattern. Patterns are then forwarded to the relevant CoP, where further refinement and formal validation occur. The CoP is responsible for evaluating the generalizability, structural consistency, and overall quality of each pattern. This includes the definition of input and output parameters, identification of applicable constraints, and estimation of the expected impact, particularly concerning performance metrics such as KPIs. In addition to their validation function, CoPs play a central role in disseminating validated patterns and best practices throughout the organization. They promote reuse by organizing thematic workshops, maintaining centralized repositories, and mentoring project teams in applying the most relevant knowledge assets to their specific contexts. Overall, this phase serves as a critical component of organizational learning. It transforms individual and team-level experiences into codified, reusable knowledge assets that can inform future projects and enhance decision making across the enterprise.
  • Internalization (Explicit to Tacit): Finally, explicit knowledge is internalized by individuals through concrete application in their daily activities. Teams apply the validated best practices or patterns in their operational contexts, adapting them as needed. Feedback resulting from real-world application may trigger new insights, which, if relevant, restart the SECI cycle, beginning again with socialization.
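As anticipated above, the following minimal sketch models the lifecycle implied by this SECI mapping as a simple state machine; the state names and transition rule are illustrative assumptions rather than the framework’s actual implementation.

```python
# Hypothetical sketch of the knowledge lifecycle implied by the SECI mapping above;
# states and transitions are illustrative, not the framework's implementation.
ALLOWED_TRANSITIONS = {
    "socialized":   {"externalized"},   # tacit insight captured in a discussion
    "externalized": {"combined"},       # discussion formalized as a lesson
    "combined":     {"internalized"},   # lesson validated into a lesson learned/pattern
    "internalized": {"socialized"},     # applied knowledge triggers a new SECI cycle
}

def advance(state: str, next_state: str) -> str:
    """Move a knowledge item to the next SECI stage, enforcing the cycle order."""
    if next_state not in ALLOWED_TRANSITIONS[state]:
        raise ValueError(f"Illegal transition: {state} -> {next_state}")
    return next_state
```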
However, as previously described, this methodology identifies neither the point at which knowledge is formalized into lessons learned and best practices within the Scrum framework, nor the roles of the various actors within the organization’s KM pipeline.
The proposed process model aims to overcome these limitations, thereby representing an innovative process development. The subsequent pages will describe the overall workflow leading to the definition of lessons learned and patterns considered as corporate best practices.
For the representation of the process, BPMN 2.0 notation was employed, using the Signavio Process Manager modeling tool, which is accessible through an academic license.
Signavio Process Manager is a straightforward platform for BPM, designed to enable users to design, model, and enhance their business processes. This software facilitates the creation of appropriate user manuals for the modeled business processes, providing clear explanations for executing individual tasks and elucidating how specific decisions are made within the organization.
This feature is critically important, given that the primary objective of this work is to define the process model for the creation and sharing of knowledge within complex operational environments.

4.1. Overview of the Process

Given the overall size of the resulting process, we initially chose to focus on a macro-level view of the process itself, shown in Figure 4, followed by a detailed definition of the various subprocesses and their respective tasks.
The logical flow leading to the definition of a pattern and its management as a best practice begins with a discussion within a Scrum Team. If new knowledge is extracted from this discussion, it can be further validated as a lesson learned and subsequently elevated to a pattern with the status of a best practice. In parallel, a data analytics system operates, using appropriate algorithms, to suggest new features of knowledge to those responsible for its validation and formalization.
An aspect not to be overlooked is the limited duration of these subprocesses, which highlights their roots in the Agile methodologies previously studied.

4.2. Discussion Management

In the following section, we will focus on the detailed definition of the Discussion Management process, for which a detailed representation is shown in Figure 5.
To properly describe the process, it is crucial to make an important assumption: the socialization phase defined in the SECI model [2] is not explicitly represented using BPMN 2.0 notation. This phase occurs upstream of the process modeled here. Additionally, the Scrum Team referenced in this model is formed dynamically, meaning that the creation and aggregation of team members occur in response to specific needs arising from the diverse skills of the participants. These considerations allow us to introduce the knowledge taxonomy provided in Table 3:
Once the Scrum Master has called the various participants to the meeting, they initiate a discussion where some members of the Scrum Team, following significant experiences that occurred during their operational activities, propose new knowledge. The proposal of new knowledge can also occur following the sharing of results obtained by individual users and members of the organization as a result of the assessment of new knowledge. The other members participating in the discussion contribute (one or more contributions) to improve what has been presented and better define the proposed knowledge.
After a series of contributions, the Scrum Master evaluates the knowledge derived from the discussion they initiated. Following a positive evaluation of the proposed knowledge, they formalize the new knowledge and the resulting discussion in a format that allows for its transposition (e.g., a PowerPoint or Word document). The output obtained will then be stored within the Best Practice Warehouse (BPWH) and labeled as “Discussion”.
The fields to be populated within the dedicated interface will be the following:
  • Title of the discussion;
  • Description, where a summary of the defined knowledge is provided;
  • Attachment, which can be a doc, ppt, or pdf file, in which the newly defined knowledge is explained, and which was the medium through which such knowledge was appropriately externalized.
The following tasks are described, which are relevant for understanding the process:
  • Configuring the Scrum Team: In this task, the Scrum Master proceeds to configure the Scrum Team by selecting all the key figures necessary to define the working team. Once the various members are selected, the Scrum Master sends an email (or notification) to convene the selected figures.
  • Composing the Scrum Team: Once the figures constituting the potential team are identified, the Scrum Master proceeds to form the actual Scrum Team. In other words, the various team members who will participate in the process of creating and managing new knowledge are grouped.
  • Starting the Discussion: Once the Team is appropriately configured and “composed”, the Scrum Master starts a discussion, during which the various team members can propose the “new knowledge” that emerged during the operational activity related to a specific sprint. In this case, a form is considered, characterized by the following fields:
    Title of the discussion;
    Date the discussion started;
    Brief description of the discussion.
  • Proposing New Knowledge: In this task, a member of the Scrum Team proposes and shares new knowledge with the team members, which can be an approach, a method with which they addressed and solved a problem, or developed a particular feature during their latest operational activities. In this task, one of the Scrum Team Members may also share the results derived from the assessment of knowledge they developed, which was previously validated by an analysis engine called the Data KPI Engine (DKE) and designed for the assessment and evaluation of data—through the selection of appropriate KPIs—to extract new knowledge;
  • Contributing to the Formalization of New Knowledge: The members of the Scrum Team can contribute to the formalization of the newly proposed knowledge. This activity does not end in a single task but occurs multiple times until a shared and valid proposal is reached. The task is characterized by a limited duration within which the proposal must be improved and formalized. In this case, a text area, provided by a chat tool, allows the various Scrum Team Members to add their comments and contributions to the formalization of the new knowledge.
  • Viewing and Evaluating Contributions: In this task, the Scrum Master proceeds to view and evaluate the various contributions and the knowledge proposed during the discussion. This is a very important task because it represents the moment when the knowledge is considered valid and can then proceed to the subsequent tasks that characterize the KM process, or be definitively discarded.
  • Formalizing the Discussion: In this task, the Scrum Master formalizes the discussion in a format that allows it to be archived within the BPWH:
    Title of the Discussion: This field should include the title characterizing the knowledge that emerged during the discussion;
    Date: It is important to insert the date on which the discussion is formalized, providing a temporal reference for the activity performed, which will also facilitate its later retrieval;
    Keywords (Tags): These are the keywords that help identify and potentially facilitate the search for the proposed knowledge;
    Description (Text Area): A brief description is inserted within this field to summarize the main characteristics of the validated knowledge, which will then be stored within the BPWH.
  • Archiving the Discussion: In this task, once the discussion has been formalized by the Scrum Master, it will be stored in a repository, which will constitute a backlog of discussions from which new knowledge proposals can later be validated.

4.3. Lesson Learned Management

Once knowledge has been formalized as a discussion, the next steps are the formalization of this into a lesson and its analysis with the aim of its possible promotion to the rank of lesson learned. These aspects are described in the lesson learned management process, shown in Figure 6.
The Scrum Team (or its designated user) proceeds with the formalization of the knowledge into a lesson; to do this, in the “Lesson Formalization” task, a dedicated form is completed where a series of relevant information related to its description is entered.
Once the entered information is “saved”, in the task called “Lesson Submission”, the user appointed by the Scrum Team proceeds to define and enter a series of additional information, such as the following:
  • Priority assigned to various defined KPIs.
  • Minimum percentage of success related to the result obtained and visualized through a RADAR chart; this value defines the acceptable percentage of result achievable within a specific time frame.
  • Time duration of the analysis to be carried out.
  • Threshold for evaluating the assigned priorities in case of subjective evaluation by the DKE.
Once the analysis is completed, the system sends a notification to the members of the Scrum Team containing the obtained results; in the case of a positive outcome, the obtained result (i.e., the resulting lesson) will be stored within the appropriate tables of the BPWH. Moreover, if the obtained result is “subjective”, meaning that not all previously defined criteria were met, the result is submitted to the Scrum of Scrums meeting. This event constitutes a meeting of the organization’s various Scrum Teams, in which individual members of different teams may also participate, depending on their technical expertise regarding the issues to be addressed during the meeting.
In this case, the SoS analyzes the lesson and, if it is deemed valid, proceeds with its abstraction into a lesson learned and its storage in the BPWH. Conversely, in the case of an “objective” result, the SoS’s task will be to proceed with a “formal” abstraction of the analyzed knowledge into a lesson learned and its storage in the appropriate repository.
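To make the objective/subjective distinction concrete, the following is a minimal sketch of how the DKE’s outcome could be derived from the submission parameters described above; the weighted-scoring rule and all identifiers are illustrative assumptions, not the engine’s documented algorithm.

```python
# Hypothetical sketch of the DKE outcome logic; the weighted-score rule is an assumption.
def dke_outcome(kpi_results, priorities, min_success_pct, threshold):
    """
    kpi_results:     {kpi_name: achieved fraction of its target, 0.0-1.0}
    priorities:      {kpi_name: weight assigned at lesson submission}
    min_success_pct: minimum acceptable weighted success (e.g., 0.8)
    threshold:       band below min_success_pct in which the result is
                     treated as "subjective" and escalated to the SoS
    """
    total_weight = sum(priorities.values())
    score = sum(kpi_results[k] * w for k, w in priorities.items()) / total_weight
    if score >= min_success_pct:
        return "objective"    # stored in the BPWH; the SoS formalizes the abstraction
    if score >= min_success_pct - threshold:
        return "subjective"   # escalated to the Scrum of Scrums for review
    return "rejected"

# Example: weighted score = (0.9*2 + 0.7*1) / 3 ~= 0.83 -> "objective".
outcome = dke_outcome({"error_rate": 0.9, "lead_time": 0.7},
                      {"error_rate": 2, "lead_time": 1},
                      min_success_pct=0.8, threshold=0.1)
```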
Once these steps are completed, the SoS sends a notification to the Communities of Practice within the organization to conduct a deeper analysis of the generated knowledge and further abstract it into a pattern.
As can be seen from the lesson learned management process diagram, it is possible to follow, even partially, an alternative path. This path is referred to as “Pattern Assessment” because it allows the reevaluation of an already existing pattern in a context different from the one initially considered. In this case, the user will consider a dataset useful for this purpose and any associated KPIs. Once these fields are completed, the analysis proceeds by fully adhering to the previously considered tasks.
It should also be noted that in the case of multiple valid competing configurations, an event triggers a daemon to define a ranking among them and suggest it to the developer assessing the pattern.
Other alternative paths relate to modifying the pattern after its use by the developer or applying an entirely new dataset, with possible modifications to the pattern’s architecture following the results derived from the association rules characterizing the data analysis engine.
The following tasks are considered relevant for understanding the process:
  • Lesson Formalization: In this task, the Scrum Master or the designated Scrum Team Member retrieves a previously formalized discussion and proceeds to complete the dedicated form; the fields to be completed are the following:
    Lesson title;
    Tags;
    Scope of use;
    Lesson description;
    Attachments (if any).
If the lesson derives from knowledge already evaluated (objectively or subjectively valid), the user in question, namely the Scrum Master, will still proceed to complete a series of appropriately pre-filled fields. In the case of objective validation, the process will move directly to the task called “Lesson Learned Abstraction”, while in the case of prior subjective validation, it will proceed with the “Lesson Validation” task.
  • Lesson Submission: In this task, through a dedicated interface, the dataset to be used for lesson validation and the KPIs to be achieved within a specific time frame are defined. Specifically, the fields to populate are the following:
    Priority assigned to each KPI;
    Analysis duration;
    Threshold;
    Minimum success percentage.
  • DKE: In this task, the DKE proceeds with the validation of the submitted lesson; in the event of a positive outcome, it is stored in the appropriate repository, and a notification is sent both to the Scrum Team that formalized the lesson and to the SoS meeting so that they can proceed with further validation and its promotion to the rank of lesson learned;
  • Lesson Validation: In this task, various SoS members review the results of the analysis carried out by the DKE on the proposed lesson.
    This task can also be triggered by a system notification following knowledge that has previously been subjectively validated and formalized by the Scrum Team and is now ready for evaluation by the SoS meeting.
  • Lesson Learned Abstraction: Once the actual validity of the results of the received lesson is confirmed, the SoS members approve the lesson and promote it to the rank of lesson learned. The fields populated are the following:
    Lesson Learned Title: a text field to enter the title characterizing the lesson learned;
    Tags: a text field to enter a series of keywords to identify the lesson learned and potentially facilitate its search by other members of the organization;
    Scope of Use: a field indicating the main scopes of use of the lesson learned;
    Lesson Learned Description: a field dedicated to a description of the lesson learned and its main features and functionalities;
    Achieved KPIs: the main KPIs obtained from the lesson learned;
    Used Services: definition of the main technological services used;
    Attachments (if any): PDF, PPT, or DOC files to attach to the form containing additional information related to the structure and features of the lesson learned.
Completing this task entails the following:
  • Sending a notification to the CoP members so that they can proceed with further evaluation and validation of the lesson learned to the rank of best practice pattern;
  • Archiving the newly approved lesson learned within the BPWH.

4.4. Pattern Management

The lesson learned management process described above concludes with sending a notification to the CoP, which characterizes the organization, to inform them about the formalization of new knowledge. At this point, the pattern management process starts. A detailed depiction of this process is shown in Figure 7.
The CoP, once it views the new lesson learned, evaluates whether it has characteristics that allow it to be promoted to the rank of a pattern.
If this evaluation has a positive outcome, the CoP proceeds to abstract and define the pattern by completing a dedicated form, where a set of information is inserted. Some of this information is inherited from the originating lesson learned (although most of it is re-entered), while other details must be added from scratch, such as the following (a structural sketch is provided after the list):
  • Input parameters, including the name, type, and acceptable range;
  • Output parameters, including the name, type, and acceptable range;
  • Functions for calculating output parameters based on the input parameters;
  • Constraints to monitor: defined about the input and output parameters and/or the data provided by the services used;
  • Interfaces exposed to external applications.
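As anticipated above, the following minimal sketch shows how such a pattern record and its monitored constraints might be structured; all names and types are assumptions based on the fields just listed, not the actual BPWH schema.

```python
from dataclasses import dataclass, field

# Illustrative pattern record; names and types are assumptions, not the BPWH schema.
@dataclass
class Parameter:
    name: str
    type: str
    value_range: tuple  # (min, max) acceptable range

@dataclass
class Pattern:
    title: str
    inputs: list = field(default_factory=list)       # list[Parameter]
    outputs: list = field(default_factory=list)      # list[Parameter]
    constraints: list = field(default_factory=list)  # callables over an I/O snapshot

    def check_constraints(self, snapshot: dict) -> bool:
        """Verify the monitored constraints against observed input/output values."""
        return all(constraint(snapshot) for constraint in self.constraints)

# Example constraint: an output KPI must stay within its acceptable range.
p = Pattern(title="Predictive maintenance scheduling",
            constraints=[lambda s: 0.0 <= s.get("error_rate", 0.0) <= 0.05])
assert p.check_constraints({"error_rate": 0.02})
```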
Once this step is completed, the CoP archives the new pattern in the appropriate tables within the BPWH and, at the same time, disseminates it across the organization. This ensures that it can be viewed and internalized by the various Scrum Teams, who may subsequently proceed to assess it for defining further knowledge. Below, the relevant and fundamental tasks for understanding the process are described:
  • Viewing the Lesson Learned. In this task, members of the CoP view the validated lesson learned flagged by the SoS and evaluate whether what has been proposed can be considered a pattern to be utilized across the organization.
  • Abstracting the Pattern. In this task, members of the CoP decide that the lesson learned in question is reusable in various contexts and fields. Therefore, it is regarded as a pattern within the organization. Following this decision, a form is filled out where the following information is entered:
    • Pattern Title: a text field where the title that characterizes the pattern is inserted;
    • Tags: a text field to insert a series of keywords to identify the pattern and facilitate its search by other organization members;
    • Usage Domains: a field indicating the main domains where the pattern can be applied;
    • Pattern Description: a field dedicated to describing the pattern and its main characteristics and functionalities;
    • KPIs: the main KPIs obtained by the pattern;
    • Services Used: definition of the main services used by the pattern;
    • Input Parameters: definition of acceptable parameters (name, type, and acceptable range);
    • Output Parameters: definition of acceptable parameters (name, type, and acceptable range);
    • Constraints: definition of constraints to monitor, based on input and output parameters and/or data provided by the services used;
    • Attachments: PDF, PPT, or DOC files to be attached to the form, containing additional information about the pattern’s structure and characteristics.
    Once the form is completed, the new knowledge is saved within the BPWH and subsequently disseminated throughout the organization.
    During this task, members of the CoP may discuss additional informative contributions, releases, or optimizations derived from the ML algorithms that characterize the data analytics component.
  • Dissemination within the organization. The pattern, newly approved to the rank of best practice, is disseminated throughout the organization to ensure that various members of the Scrum Teams can use it during their coding activities.

4.5. Pattern Usage and Editing

This process describes the various steps that lead a member of the Scrum Team to search for, select, and use a pattern to meet their operational needs. In general, the process, shown in Figure 8, evolves as follows: The user views the list of all patterns present in the BPWH. Once the most suitable one is identified, the user selects the pattern.
Following its use and the feedback obtained, the user can proceed to modify the pattern, thus initiating the continuous improvement process of the knowledge. Modifying the pattern triggers the process called “Lesson Learned Management”, which leads to the first formalization of a lesson, its validation, and the subsequent abstraction steps into a lesson learned and pattern.
At the same time, the user can choose to assess a pattern, evaluating its performance in a context different from the one in which it was defined. The results will contribute to expanding the knowledge base associated with the pattern under review.
Below is a description of the individual tasks characterizing the process:
  • Viewing the pattern list: In this task, the Scrum Team Member views the list of patterns present in the BPWH; this also allows them to view the different properties and characteristics of the various patterns present.
  • Selecting a pattern: In this task, the Scrum Team Member selects the pattern they consider most appropriate for their needs from the list returned by the BPWH.
  • Using the pattern: In this task, the user, a member of the Scrum Team, proceeds to use the pattern by leveraging its available functionalities.
Of particular interest are the tasks “Pattern to Be Submitted for Assessment” and “Pattern to Be Modified”, which are directly connected to the Discussion Management process, as highlighted with the red circle and arrow shown in Figure A2 of Appendix A.
Also relevant is the assessment of the pattern, which connects various processes, as highlighted with a red circle and arrow in Figure A3 and Figure A4 of Appendix A. This link triggers the initiation of an additional branch of the lesson learned management process, characterized by the task aptly named “Assessment Pattern”.
In the task in question, the assessment of the pattern is carried out by considering, for evaluation purposes, a reference dataset and the evaluation of said pattern within a context different from the one for which it was initially designed. If the assessment successfully meets all imposed criteria, this will lead to an expansion of the informational contribution of this type of knowledge; this will impact the task of pattern formalization, where the user can input the additional information obtained from the analysis just performed.

4.6. Knowledge Assessment Management

In the Knowledge Assessment management process, for which a detailed depiction is shown in Figure 9, a single user independently assesses the potential knowledge they have generated, without proceeding to any formalization.
This evaluation is realized by indicating aspects, such as the following:
  • KPIs, i.e., performance indicators and the potential quality level that can be achieved from the knowledge being analyzed.
  • Data to be analyzed to obtain these KPIs.
  • Prioritization of the various defined KPIs.
  • Minimum success percentage related to the result obtained and displayed by the resulting RADAR chart; this value helps define the acceptable result percentage achievable within a specific time frame.
  • Duration of the analysis to be carried out.
  • Thresholds through which the priorities assigned can be evaluated in the case of a subjective assessment by the DKE.
Once the analysis is completed, the system sends a notification with the results obtained. If the outcome is positive, the user shares everything with their Scrum Master and Scrum Team. Finally, the system shares the analyzed knowledge within specific tables in the BPWH, as highlighted by Figure A5 in Appendix A.
The following tasks are described to aid in understanding the process:
  • Knowledge Assessment: In this task, the user, once the data collection and processing have been completed, defines all the information necessary to start the knowledge analysis. Through a specific form, the following information is entered:
    Data;
    KPIs;
    Priority assigned to each KPI;
    Duration of the analysis;
    Threshold;
    Minimum success percentage;
    Brief description of the knowledge to be tested.
  • DKE: In this task, the DKE will proceed with the validation of the submitted knowledge. If the outcome is positive, the knowledge will be stored in the appropriate repository.

4.7. Data Analysis Engine

In this subprocess, the data analytics system can be characterized by three specific processes:
  • Best configuration;
  • Association rule management;
  • Machine Learning.

4.7.1. Best Configuration

As previously mentioned in the lesson learned management process, in the case of a pattern assessment, the user, before submitting it for analysis by the DKE, defines, through a dedicated form, the different possible configurations based on which the validation of the new knowledge will take place. If multiple valid competing configurations occur, a specific algorithm, after analyzing the request, performs an automatic search within the Data Lake, identifying the results obtained from previous assessments.
Once all relevant data are retrieved, the algorithm interprets the results of the recently conducted analysis. If it is possible to extract a historical record from previous assessments with different configurations of the pattern in question, it proceeds to define an appropriate ranking. This process is depicted in Figure 10.
Following the data analysis, the system sends a notification with the results obtained. The recipients of this notification are the members of the CoPs, that is, those responsible for the abstraction of a pattern. The useful information related to the pattern’s ranking is obtained by the system after the pattern has been used multiple times. This information, or more precisely what the system defines, is itself a pattern, but it does not represent new knowledge in the strict sense of the term.
Instead, it helps to improve and evolve the informational asset within an already existing knowledge base. Knowing which configuration is the best will therefore add to the existing information that accompanies a pattern, thus determining a sort of natural evolution of the pattern, in other words, its “release”. After further uses of the same pattern, these results may be refuted, leading to new rankings and consequently new releases. Naturally, what is described here may impact the “constitutive” aspects that contribute to the abstraction of a pattern, such as “input parameters” and/or “output parameters”, leading to changes in the pattern. The impact of the data analysis process on the pattern formalization process is further highlighted in Figure A6 of Appendix A, where the red circle and arrow show the input point into the pattern formalization.
Once again, this will affect the pattern formalization task, where the user will see a pattern abstraction form in which they can enter the additional information obtained from the analysis just performed.
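The ranking step can be sketched as follows; the Data Lake layout and the mean-KPI scoring rule are hypothetical simplifications for illustration.

```python
# Hypothetical sketch of the best-configuration ranking; the data layout and
# mean-KPI scoring rule are assumptions for illustration.
def rank_configurations(assessment_history):
    """
    assessment_history: {config_id: [kpi_score, ...]} collected from previous
    assessments of the same pattern, retrieved from the Data Lake.
    Returns configuration ids ordered from best to worst average score.
    """
    averages = {cfg: sum(scores) / len(scores)
                for cfg, scores in assessment_history.items() if scores}
    return sorted(averages, key=averages.get, reverse=True)

ranking = rank_configurations({"cfg-A": [0.82, 0.91], "cfg-B": [0.88, 0.87, 0.90]})
# e.g., ['cfg-B', 'cfg-A'] -> suggested to the CoP members as the pattern's next release
```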

4.7.2. Association Rule Management

In this case, the data analysis algorithm uses association rules to determine relationships between additional data and the data defined by the Scrum Team in the formal definition of the pattern, as highlighted in Figure 11. This occurs after the retrieval and subsequent analysis of data from a dedicated Data Lake. Based on the results obtained, the algorithm defines the association rules through which new, relevant data can be correlated to the execution of a particular pattern.
The knowledge derived from this portion of the analysis algorithm comes after several uses of a pattern: the analysis algorithm can obtain valid information by having access to multiple observations, which are obtained from the use of a pattern by different users.
The information generated by the data analysis system, in this case, will define additional data to be analyzed and considered in the assessment of a pattern, as highlighted with a red circle and arrow in Figure A7 of Appendix A. These considerations allow us to state that when the system performs the data analysis and sends the results, the recipients of these results will be the members of the Scrum Team. Based on the insights provided, they will proceed to define a new lesson and generate new knowledge.
As previously mentioned, these functionalities bring significant dynamism to knowledge, which does not evolve exclusively as a result of the assessment by developers and the different Scrum Teams within an organization, but also autonomously, leveraging the data derived from the usage of the same patterns across the organization. The process affected by the output of this subprocess is the lesson learned management process, and the task most impacted by this notification is the lesson formalization task. In response to the information received, the members of the Scrum Team will redefine a new lesson, considering new data and relevant KPIs, thus determining the mechanism for generating new knowledge.
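For illustration, the following sketch mines association rules over pattern-usage observations, assuming the mlxtend library and a hypothetical one-hot encoding of the data items involved in each execution; the column names are invented for the example.

```python
# Sketch of association-rule mining over pattern-usage observations (assumed tooling:
# mlxtend; column names hypothetical).
import pandas as pd
from mlxtend.frequent_patterns import apriori, association_rules

# Each row is one pattern execution; each column marks whether a data item was involved.
observations = pd.DataFrame(
    [[1, 1, 0], [1, 1, 1], [1, 0, 1], [1, 1, 0]],
    columns=["vibration_data", "temperature_data", "pressure_data"],
).astype(bool)

frequent = apriori(observations, min_support=0.5, use_colnames=True)
rules = association_rules(frequent, metric="confidence", min_threshold=0.7)

# Rules such as {vibration_data} -> {temperature_data} would be notified to the
# Scrum Team as candidate additional data for a new lesson.
print(rules[["antecedents", "consequents", "confidence"]])
```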

4.7.3. Machine Learning

The following section describes the data analysis process using ML, for which a detailed depiction is presented in Figure 12. The algorithm, while observing the functioning of a specific pattern, retrieves data from the Data Lake, then proceeds to analyze and process it. Once the analysis is complete and the results necessary for optimizing the pattern’s configuration are obtained, the algorithm will aim to optimize the constituent elements that characterize the abstraction of the pattern.
The knowledge derived from this portion of the data analysis algorithm further refines the informational asset related to a pattern, thus resulting in a new release, as highlighted in Figure A8 of Appendix A. Essentially, the impact of the output from this algorithm is such that it affects the formal abstraction of a pattern and its constituent elements, as described in the case of the first data analysis algorithm. In this case as well, the process involved is pattern management, and the task affected by this output is the pattern abstraction task, where the user will view the pattern abstraction form and can enter the additional information obtained from the recently performed analysis.
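A minimal sketch of this optimization step is given below, assuming scikit-learn; the configuration parameters, KPI values, and model choice are illustrative rather than the algorithm actually deployed.

```python
# Hypothetical sketch of the ML-driven optimization step (assumed tooling: scikit-learn):
# a regressor learns how configuration parameters affect the observed KPI, and the best
# candidate configuration is proposed for the next pattern release.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Historical observations from the Data Lake: configuration parameters -> achieved KPI.
X = np.array([[10, 0.1], [20, 0.1], [10, 0.5], [20, 0.5]])  # e.g., batch size, sampling rate
y = np.array([0.72, 0.80, 0.65, 0.77])                      # KPI achieved per run

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# Score candidate configurations and suggest the best one to the pattern abstraction task.
candidates = np.array([[15, 0.1], [15, 0.3], [25, 0.2]])
best = candidates[model.predict(candidates).argmax()]
```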

5. Results and Discussion

To evaluate the success and validity of the proposed framework, validation in real industrial scenarios was crucial. A first pilot study focused on the predictive maintenance of a centrifugal hydraulic pump. Using the Explainable Workflow Designer, the operational workflow was modeled to capture key tasks and dependencies. This modeling helped surface tacit knowledge and codify best practices, which are otherwise difficult to share via traditional means. With the AI Explainability Module, sensor data from the machine were analyzed using predictive algorithms. XAI techniques such as SHAP and LIME provided interpretable results regarding failure causes and critical sensor thresholds. The results were presented to factory operators through the Innovative HCI Module, enabling immediate, contextualized understanding of anomalies. When certain sensor thresholds were exceeded, operators were guided by visual insights and best practices modeled within the framework to act proactively and prevent downtime. This approach empowered human operators to internalize domain knowledge more effectively, facilitated continuous learning, and reduced dependency on technical experts for process understanding.
In a second pilot study, the framework was applied to MRO (Maintenance, Repair, and Overhaul) processes for aircraft engines. The engine lifecycle process was modeled using the Workflow Designer to define tasks and data flows. Predictive analytics were employed to estimate component wear, while XAI visualizations provided clarity on which operational variables had the most impact on engine degradation. The integration of explainability facilitated more strategic maintenance scheduling and improved process governance.
Moreover, a series of assessment tests were conducted in different operational scenarios typical of Smart Manufacturing, Smart Health, and Smart Energy. These assessment tests referred to a series of Key Performance Indicators (KPIs) based on logic and approaches typical of the Agile methodology, from which the knowledge management framework draws inspiration. In particular, the following metrics were considered:
  • Lead Time, i.e., the time required to complete operational activities related to the knowledge patterns obtained and formalized through the framework.
  • Service Level Agreement (SLA), understood as the framework’s ability to ensure high service levels by leveraging the newly acquired knowledge.
  • Error Rate, meaning the rate of errors resulting from applying the newly formalized knowledge.
  • Mean Time To Detection (MTTD) and Mean Time to Repair (MTTR), which refer to the average time needed to identify errors during operations and to resolve them using the acquired knowledge.
To evaluate the performance of the proposed framework, a series of load tests were conducted to observe how system response times change as the number of requests from third-party users increases. Specifically, metrics based on the APDEX (Application Performance Index) standard were examined, and different scenarios were considered by varying the number of samples, i.e., the number of requests, to obtain objective indications regarding system performance and potential degradation. For each scenario, additional aspects such as Response Time Over Time and Response Time Percentiles were analyzed. This choice is strongly linked to the informative value that can be derived from these analyses: Response Time Over Time provides insights into the variation in service performance over time, as well as empirical evidence of potential bottlenecks or queuing as load and time increase, while Response Time Percentiles offer a view of the SLA delivered as the number of users or requests changes.
Regarding the actual performance results, the component successfully handled 100% of the simulated requests in the case of 500 samples, with very high response performance and a slight degradation observed at the 99th percentile. This trend was maintained consistently for samples of 2000 and 2256 requests. However, performance degradation was observed starting at 5000 samples, and suboptimal performance was recorded at 15,000 samples.
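For reference, the standard APDEX score is the fraction of satisfied requests plus half of the tolerating ones; a minimal sketch follows, with an illustrative threshold value rather than the one adopted in the experiments.

```python
# Sketch of the standard APDEX computation; the threshold is illustrative.
def apdex(response_times_ms, threshold_ms=500):
    """APDEX = (satisfied + tolerating / 2) / total, with the standard 4T tolerance band."""
    satisfied = sum(1 for t in response_times_ms if t <= threshold_ms)
    tolerating = sum(1 for t in response_times_ms if threshold_ms < t <= 4 * threshold_ms)
    return (satisfied + tolerating / 2) / len(response_times_ms)

# e.g., apdex([120, 300, 800, 2300]) -> (2 + 1/2) / 4 = 0.625
```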
Additionally, a comparative analysis with contemporary ABPMS frameworks was conducted, showing that the proposed framework introduces distinctive and innovative elements, particularly regarding the integration of knowledge management within the process adaptation cycle. Table 4 summarizes the results of this analysis.
The AI-Augmented Business Process Management System manifesto by Dumas et al. [1] outlines an advanced lifecycle based on six phases (perception, reason, enact, adapt, explain, and improve), supported by reliable AI technologies. While offering a modern view of process evolution, this approach does not detail operational mechanisms for the validation and dissemination of knowledge through specific roles. In particular, it lacks an organizational structure that systematically links learning from experience with process adaptation.
The Self-Adaptive ERP framework proposed by Maged and Kassem [37] represents a significant example of intelligent automation, focusing on the technical adaptation of ERP systems using AI and NLP. However, its approach remains predominantly technological and does not address the socio-organizational dimensions of knowledge management, neglecting the fundamental role of people and professional communities in making continuous innovation sustainable. Similarly, the Large Process Model approach, a central conceptual framework for software-supported BPM in the era of generative AI proposed by Kampik et al. [21], is based on the combination of large language models (LLMs) and knowledge-based systems to generate contextual recommendations. Again, the focus is on data processing and artificial intelligence, without defining roles or mechanisms for the validation, formalization, and structured dissemination of emerging knowledge.
Table 4 compares the different ABPMS frameworks based on their approach to knowledge management. In this context, the proposed framework stands out for its socio-technical approach, which integrates technological tools with collaborative organizational structures. A key element is the introduction of clearly defined roles, such as Scrum Masters and Communities of Practice (CoPs), responsible for collecting, validating, formalizing, and disseminating lessons learned and best practices. This approach transforms operational experience into shared knowledge, feeding a continuous cycle of learning and improvement. The framework also implements a structured iterative cycle that, starting from validated feedback, promotes process adaptation in response to emerging needs and changing contexts. The combination of automation and collaborative knowledge management makes the framework particularly suitable for complex and dynamic organizational contexts, where operational agility depends not only on technology but also on the ability to systematically learn, share, and reuse what the organization learns over time. These features fill gaps in existing frameworks and propose a conceptual evolution of ABPMSs, geared towards enhancing collective intelligence and supporting continuous, sustainable, and adaptive innovation.
The findings of this study show that the proposed framework adequately meets several challenges in the development of ABPMSs, introducing innovative features that address these challenges and move beyond the usual approaches. One of the main challenges concerns situation-aware explainability, i.e., the ability to provide contextually relevant and easily understandable explanations.
The framework addresses this problem through a section dedicated to lessons learned and patterns, in which decisions, actions, and results are documented in a structured format. The provision of always-available and relevant explanations helps users better grasp operational decisions. Furthermore, through advanced search features and links to events and objects within the system, it makes it possible to analyze complex behaviors with an in-depth understanding of organizational dynamics.
Another critical issue is the balance between system autonomy and human control: here, the framework is characterized by an interactive, conversational, forum-based architecture. This architecture favors the active involvement of team members, allowing the system to act independently, but always with accountability. The automated tools suggest actions and recommendations based on context, operating within the constraints of dynamic cooperation while preserving human control. Another main feature of the framework is continuous improvement: centralizing and capturing lessons learned and recurring patterns allows business processes to advance through an iterative approach. Analysis of historical data allows trends to be identified and adapted to changing organizational needs, an approach that improves current operations and also prepares the system to respond effectively to future challenges.
From the point of view of Hybrid Process Intelligence, the framework adopts a collaborative model that treats AI as a “learning apprentice” that adapts to users’ practices rather than imposing rigid workflows. This approach, enhanced through the forum-based architecture, encourages individuals to contribute jointly to developing processes that are more efficient and better suited to real needs. Finally, the framework promotes values such as trust and reliability, with transparency in actions and decisions about handling exceptions and the unexpected. Centralized discussions and organized lessons learned improve user confidence, supported by advanced search tools that provide accurate, auditable information.
Accuracy is one of the most frequently cited dimensions in the field of data quality. It refers specifically to how closely the values contained in the data reflect reality or the true value of the attribute the data are meant to represent. In other words, accurate data are those that correctly represent what they are intended to describe, within a specific context of use [38]. According to the definition provided by the ISO/IEC 25024 standard, accuracy is “the degree to which data has attributes that correctly represent the true value of the intended attribute of a concept or event in a specific context of use”. Based on this definition, all concepts referring to data correctness, the absence of errors, and, in general, a faithful representation of reality fall under the accuracy dimension [39]. A specific aspect related to accuracy is the Data Accuracy Range, an indicator used to assess whether the values in a dataset fall within predefined limits; for example, a piece of data can be considered accurate only if it falls between a defined minimum and maximum value. The Data Accuracy Range is calculated using a simple formula: the number of items with values within the specified range is divided by the total number of items for which such a range has been defined. The result is a value between 0 and 1, where 1 indicates that all data meet the expected range, while lower values suggest the presence of potentially erroneous or abnormal data. This metric therefore makes it possible to objectively assess how well the data align with expectations and whether they reliably represent reality. When the calculated value is low, it may indicate problems in the dataset that could compromise analyses or decision-making processes based on those data.
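The following minimal sketch transcribes this formula directly; the sample values and range limits are illustrative.

```python
# Direct transcription of the Data Accuracy Range metric described above.
def data_accuracy_range(values, lower, upper):
    """Fraction of items whose value falls within the predefined [lower, upper] range."""
    in_range = sum(1 for v in values if lower <= v <= upper)
    return in_range / len(values)  # 1.0 means every item meets the expected range

# e.g., data_accuracy_range([4.9, 5.2, 7.8, 5.0], lower=4.5, upper=6.0) -> 0.75
```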
Moreover, the proposed framework expands the work of Dumas et al. by integrating several innovations, such as historical and contextual data analysis for a deeper understanding of business dynamics, a conversational architecture that promotes collaboration, a modular design that facilitates gradual adoption, and tools for deeper contextual reasoning. These results indicate the framework’s potential to make ABPMSs more adaptive, transparent, and collaborative. The proposed framework thus marks a significant step in the evolution of ABPMSs, as it addresses well-known challenges in the literature and paves the way for more innovative and efficient solutions in the future.

6. Limitations

Although the proposed framework represents significant progress toward the evolution of ABPMSs, it is essential to recognize its inherent limitations. One limitation of the current version concerns the modeling and analysis of the team’s behavioral dimensions. Although the system includes the tracking of discussions, lessons, and lessons learned through structured and contextual metadata, it does not yet incorporate an explicit analytical component aimed at interpreting individual or collective behaviors. As a result, the analysis of behavioral dynamics is only partially supported at this stage.
While the proposed framework is designed to be flexible and scalable, its adoption in Agile environments characterized by fast-paced sprints and high-intensity development cycles requires further consideration. Agile methodologies emphasize rapid iteration, minimal documentation, and continuous delivery, conditions that may conflict with more structured and formal KM practices.
Another aspect is that the implemented pipeline does not currently include mechanisms for continuous learning or automatic re-training. The artificial intelligence components used, such as process mining and XAI, are based on static models trained on predefined datasets and are not updated dynamically as new data become available. Moreover, there is no MLOps infrastructure in place to enable continuous performance monitoring, model lifecycle management, or automated updates. This limits the system’s ability to adapt to evolving contexts or changes in input data.
Another key limitation concerns the adaptability of the framework in heterogeneous organizational settings: the heavy focus on Agile and Scrum methodologies may limit adaptability in more traditional or hierarchical organizations that follow rigid structures and protocols. Finally, as for the formalization of lessons learned and best practices, the framework currently lacks an automatic and structured method for their systematic validation.

7. Future Research

The highlighted limitations represent key points that open new paths of research. As for the adoption of the framework in high-velocity Agile settings, we propose the following directions for future development:
  • Automation of knowledge capture: AI agents embedded in the system could automatically collect and organize knowledge artifacts, minimizing manual effort and allowing seamless knowledge sharing without disrupting sprint workflows.
  • Incremental and contextual integration: The framework could be extended to support incremental knowledge updates that align with the iterative nature of the Agile methodology, enabling continuous documentation of decisions and lessons learned throughout the sprint.
  • Toolchain compatibility: The system architecture could be designed for seamless integration with Agile tools already in use by teams, such as Jira, Trello, or Confluence, facilitating easy access to the reuse of knowledge within daily workflows.
Future work will focus on developing lightweight modules and plug-ins tailored for key Agile ceremonies (e.g., sprint reviews, retrospectives) to enable real-time, low-friction KM contributions while preserving team agility and delivery speed.
As for the scalability of the framework across diverse industry contexts and global operations, future research could explore the introduction of cloud-based distributed systems to support seamless scalability and elasticity. Another research path could explore the application of blockchain for distributed consensus, ensuring secure, scalable, and transparent process synchronization in large contexts. The limitations arising from the heavy focus on Agile and Scrum methodologies could lead to further investigation of hybrid models that blend Agile principles with traditional project management methodologies. Finally, the need to automate the validation process could lead to the definition of KPI-driven automated validation and peer review mechanisms.
Moreover, the framework could be extended with tools and methodologies capable of more deeply addressing behavioral dimensions, which are not yet fully developed. In particular, techniques such as process mining and social network analysis could be integrated to investigate group dynamics, emerging roles, and patterns of knowledge sharing within the Scrum Team. These extensions would enable a deeper understanding of collaborative processes and their impact on knowledge management.
Future development directions also include the adoption of an MLOps framework to enable continuous model updates and automated lifecycle management. This would allow for the implementation of re-training pipelines, automated validation, and controlled deployment, ensuring greater robustness and adaptability over time. In parallel, more dynamic approaches to process mining and XAI will be explored to maintain high levels of interpretability even as models evolve. The long-term goal is to build a resilient, self-adaptive AI ecosystem that remains transparent and aligned with responsible governance principles.

8. Conclusions

This paper responds to the challenges and opportunities of infusing AI into BPMSs by proposing a framework that integrates KM principles with AI-driven advancements, contributing to the evolving field of business process optimization. The framework facilitates effective, dynamic knowledge sharing through a user-friendly, web-based tool for capturing and validating lessons learned and best practices. Furthermore, it advances search techniques that make knowledge retrieval more efficient, filling gaps in existing KM methodologies concerning the still largely unexplored roles of Scrum Masters and CoPs in Agile settings.
This research also demonstrates how AI can transform traditional BPM systems toward real-time optimization, from the automation of knowledge validation to collaboration between human agents and intelligent systems.
The adaptation of the SECI methodology to an Agile context shows how, through empirical practices, an organization can pursue continuous learning and improvement. The focus on modular adoption ensures that the new system can be fitted to the organization’s needs with minimal disruption.
Developing solid explainability mechanisms is essential for building trust in any AI system, making recommendations understandable and actionable for diverse users. Additionally, the ethical implications of AI integration remain to be explored, for example, by preventing bias and protecting personal data. A deeper understanding of behavioral dimensions and patterns in organizational routines would enable more elaborate and adaptable systems.
Another area worth pursuing is increasing the interactivity of the framework, enabling a genuine dialogue between users and the system; proactive hints and real-time feedback could be valuable parts of this. Finally, empirical validation through real-life applications will be essential to evaluate the framework’s effectiveness and fine-tune its design.
The proposed model is a significant step forward in itself, but it can also serve as a basis for substantial further work. Much remains to be done with respect to scalability across industry contexts and organizational contexts in general.

Author Contributions

Conceptualization, D.M. and C.P.; methodology, D.M. and C.P.; validation, D.M. and M.P.; formal analysis, C.P.; investigation, L.L.R.; writing—original draft preparation, B.G.; writing—review and editing, B.G., L.L.R. and M.P.; supervision, D.M.; funding acquisition, D.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by EXPLAIN “EXplainable Process and WorkfLow Design Automation with Artificial INtelligence”—Programma operativo Puglia FESR 2014–2020 Regolamento Regionale PUGLIA per gli aiuti in esenzione N. 17 del 30-09-2014 e ss.mm.ii. Titolo II—Capo 2 AIUTI AI PROGRAMMI INTEGRATI PROMOSSI DA PICCOLE IMPRESE (ART. 27 Reg. n. 17 DEL 30/09/2014 E S.M.I.)—Asse III—Obiettivo specifico 1a (Innovazione)—Azione 1.3 Il futuro alla portata di tutti—Codice Progetto: IFNIMP2.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Conflicts of Interest

Authors D.M., C.P., B.G. and L.L.R. were employed by the company EKA S.r.l. All authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

Abbreviations

The following abbreviations are used in this manuscript:
ABPMS: AI-Augmented Business Process Management System
AI: Artificial Intelligence
BPMS: Business Process Management System
BPM: Business Process Management
BPWH: Best Practice Warehouse
CPA: Cognitive Process Automation
CoP: Community of Practice
DKE: Data KPI Engine
GANs: Generative Adversarial Networks
GenAI: Generative Artificial Intelligence
HCI: Human–Computer Interaction
KM: Knowledge Management
RUL: Remaining Useful Life
UCD: User-Centered Design
HITL: Human-in-the-Loop
BPE: Business Process Engine
MiSB: Middleware Service Bus
KPIs: Key Performance Indicators
LIME: Local Interpretable Model-Agnostic Explanations
SHAP: SHapley Additive exPlanations
LL: Lesson Learned
LPMs: Large Process Models
ML: Machine Learning
NLP: Natural Language Processing
PM: Process Mining
RPA: Robotic Process Automation
SCM: Supply Chain Management
SECI: Socialization, Externalization, Combination, Internalization
SoS: Scrum of Scrums
XAI: Explainable Artificial Intelligence
MRO: Maintenance, Repair, and Overhaul
SLA: Service-Level Agreement
MTTD: Mean Time To Detection
MTTR: Mean Time To Repair
APDEX: Application Performance Index

Appendix A

Appendix A.1. Detailed Architecture of the Proposed Framework

Figure A1. Detailed architecture of the proposed framework.

Appendix A.2. Connections Between Knowledge Management Processes

Figure A2. Pattern Usage and Lesson Learned Management processes following a Pattern modification.
Figure A3. Pattern Usage and Lesson Learned Management processes following a Pattern Assessment.
Figure A4. Lesson Learned Management and Pattern Management processes following a Pattern Assessment.
Figure A5. Link between Knowledge Assessment and Discussion Management processes.
Figure A6. Data Analysis process impact on the Pattern Formalization.
Figure A7. Data Analysis process impact on the Knowledge Creation.
Figure A8. ML process impact on the Pattern Formalization.

References

  1. Dumas, M.; Fournier, F.; Limonad, L.; Marrella, A.; Montali, M.; Rehse, J.R.; Accorsi, R.; Calvanese, D.; De Giacomo, G.; Fahland, D.; et al. AI-augmented business process management systems: A research manifesto. ACM Trans. Manag. Inf. Syst. 2023, 14, 1–19. [Google Scholar] [CrossRef]
  2. Yakyma, A. Knowledge Flow in Scaled Agile Delivery Model. 2011. Available online: http://www.yakyma.com/2011/07/knowledge-flow-in-scaled-agile-delivery.html (accessed on 20 February 2025).
  3. Kovačić, M.; Mutavdžija, M.; Buntak, K.; Pus, I. Using artificial intelligence for creating and managing organizational knowledge. Teh. Vjesn. 2022, 29, 1413–1418. [Google Scholar]
  4. Psarommatis, F.; Kiritsis, D. A hybrid Decision Support System for automating decision making in the event of defects in the era of Zero Defect Manufacturing. J. Ind. Inf. Integr. 2021, 26, 100263. [Google Scholar] [CrossRef]
  5. Enholm, I.M.; Papagiannidis, E.; Mikalef, P.; Krogstie, J. Artificial intelligence and business value: A literature review. Inf. Syst. Front. 2022, 24, 1709–1734. [Google Scholar] [CrossRef]
  6. Taherdoost, H.; Madanchian, M. Artificial intelligence and knowledge management: Impacts, benefits, and implementation. Computers 2023, 12, 72. [Google Scholar] [CrossRef]
  7. Jarrahi, M.H.; Askay, D.; Eshraghi, A.; Smith, P. Artificial intelligence and knowledge management: A partnership between human and AI. Bus. Horizons 2023, 66, 87–99. [Google Scholar] [CrossRef]
  8. Alavi, M.; Leidner, D.; Mousavi, R. Knowledge Management Perspective of Generative Artificial Intelligence (GenAI). J. Assoc. Inf. Syst. 2024, 25, 1–12. [Google Scholar] [CrossRef]
  9. Thakuri, S.; Bon, M.; Cavus, N.; Sancar, N. Artificial Intelligence on Knowledge Management Systems for Businesses: A Systematic Literature Review. TEM J. 2024, 13, 2146–2155. [Google Scholar] [CrossRef]
  10. Linder, A.; Anand, L.; Falk, B.; Schmitt, R. Technical complaint feedback to ramp-up. Procedia CIRP 2016, 51, 99–104. [Google Scholar] [CrossRef]
  11. Chen, E. Empowering artificial intelligence for knowledge management augmentation. Issues Inf. Syst. 2024, 25, 409–416. [Google Scholar]
  12. Casciani, A.; Bernardi, M.L.; Cimitile, M.; Marrella, A. Conversational Systems for AI-Augmented Business Process Management. In Research Challenges in Information Sciences; Springer: Cham, Switzerland, 2024; pp. 183–200. [Google Scholar]
  13. Kokala, A. Business Process Management: The Synergy of Intelligent Automation and AI-Driven Workflows. Int. Res. J. Mod. Eng. Technol. Sci. 2024, 6, 12. [Google Scholar]
  14. Zebec, A.; Indihar Štemberger, M. Creating AI business value through BPM capabilities. Bus. Process Manag. J. 2024, 30, 1–26. [Google Scholar] [CrossRef]
  15. Helo, P.; Hao, Y. Artificial intelligence in operations management and supply chain management: An exploratory case study. Prod. Plan. Control 2022, 33, 1573–1590. [Google Scholar] [CrossRef]
  16. Aggarwal, S. Guidelines for the Use of AI in BPM Systems: A Guide to Follow to USE AI in BPM Systems. Master’s Thesis, Universidade NOVA de Lisboa, Lisbon, Portugal, 2021. [Google Scholar]
  17. Rosemann, M.; Brocke, J.V.; Van Looy, A.; Santoro, F. Business process management in the age of AI–three essential drifts. Inf. Syst. e-Bus. Manag. 2024, 22, 415–429. [Google Scholar] [CrossRef]
  18. Szelągowski, M.; Lupeikiene, A.; Berniak-Woźny, J. Drivers and evolution paths of BPMS: State-of-the-art and future research directions. Informatica 2022, 33, 399–420. [Google Scholar] [CrossRef]
  19. Schaschek, M.; Gwinner, F.; Neis, N.; Tomitza, C.; Zeiß, C.; Winkelmann, A. Managing next generation BP-x initiatives. Inf. Syst. e-Bus. Manag. 2024, 22, 457–500. [Google Scholar] [CrossRef]
  20. Wang, L.; Liu, Z.; Liu, A.; Tao, F. Artificial intelligence in product lifecycle management. Int. J. Adv. Manuf. Technol. 2021, 114, 771–796. [Google Scholar] [CrossRef]
  21. Kampik, T.; Warmuth, C.; Rebmann, A.; Agam, R.; Egger, L.N.; Gerber, A.; Hoffart, J.; Kolk, J.; Herzig, P.; Decker, G.; et al. Large Process Models: A Vision for Business Process Management in the Age of Generative AI. KI-Künstl. Intell. 2024, 1–15. [Google Scholar] [CrossRef]
  22. De Nicola, A.; Formica, A.; Mele, I.; Missikoff, M.; Taglino, F. A comparative study of LLMs and NLP approaches for supporting business process analysis. Enterp. Inf. Syst. 2024, 18, 2415578. [Google Scholar] [CrossRef]
  23. Fahland, D.; Fournier, F.; Limonad, L.; Skarbovsky, I.; Swevels, A.J. How well can large language models explain business processes? arXiv 2024, arXiv:2401.12846. [Google Scholar]
  24. Olatunji, A.O. Machine Learning in Business Process Optimization: A Framework for Efficiency and Decision-Making. J. Basic Appl. Res. Int. 2025, 31, 18–28. [Google Scholar] [CrossRef]
  25. Chapela-Campa, D.; Dumas, M. From process mining to augmented process execution. Softw. Syst. Model. 2023, 22, 1977–1986. [Google Scholar] [CrossRef]
  26. Gabryelczyk, R.; Sipior, J.C.; Biernikowicz, A. Motivations to adopt BPM in view of digital transformation. Inf. Syst. Manag. 2024, 41, 340–356. [Google Scholar] [CrossRef]
  27. Salvadorinho, J.; Teixeira, L. Organizational knowledge in the I4.0 using BPMN: A case study. Procedia Comput. Sci. 2021, 181, 981–988. [Google Scholar] [CrossRef]
  28. Abbasi, M.; Nishat, R.I.; Bond, C.; Graham-Knight, J.B.; Lasserre, P.; Lucet, Y.; Najjaran, H. A Review of AI and Machine Learning Contribution in Predictive Business Process Management (Process Enhancement and Process Improvement Approaches). arXiv 2024, arXiv:2407.11043. [Google Scholar]
  29. Nonaka, I.; Takeuchi, H. The Knowledge-Creating Company: How Japanese Companies Create the Dynamics of Innovation; Oxford University Press: Oxford, UK, 1995. [Google Scholar]
  30. Beck, K.; Jeffries, R.; Highsmith, J.; Grenning, J.; Martin, R.; Schwaber, K.; Cunningham, W.; Sutherland, J.; Mellor, S.; Thomas, D. Manifesto for Agile Software Development (Italian version). 2001. Available online: https://agilemanifesto.org/iso/it/manifesto.html (accessed on 10 April 2025).
  31. Carroll, J.M. Human Computer Interaction (HCI). Encyclopedia of Human-Computer Interaction; The Interaction Design Foundation: Aarhus, Denmark, 2009. [Google Scholar]
  32. da Costa Brito, L.; Quaresma, M. User-Centered Design in Agile Methodologies. Ergodesign HCI 2009, 7, 126–137. [Google Scholar] [CrossRef]
  33. Nielsen, J. Ten Usability Heuristics. 2005. Available online: https://pdfs.semanticscholar.org/5f03/b251093aee730ab9772db2e1a8a7eb8522cb.pdf (accessed on 6 May 2025).
  34. Easa, N.F.; Fincham, R. The application of the socialisation, externalisation, combination and internalisation model in cross-cultural contexts: Theoretical analysis. Knowl. Process Manag. 2012, 19, 103–109. [Google Scholar] [CrossRef]
  35. European Commission. European Commission White Paper on Artificial Intelligence—A European Approach to Excellence and Trust. Available online: https://commission.europa.eu/publications/white-paper-artificial-intelligence-european-approach-excellence-and-trust_en (accessed on 13 March 2025).
  36. European Commission. High-Level Expert Group on Artificial Intelligence. Ethics Guidelines for Trustworthy AI. Available online: https://digital-strategy.ec.europa.eu/it/library/ethics-guidelines-trustworthy-ai (accessed on 5 May 2025).
  37. Maged, A.; Kassem, G. Self-Adaptive ERP: Embedding NLP into Petri-Net creation and Model Matching. In Proceedings of the 2024 International Conference on Computer and Applications (ICCA), Cairo, Egypt, 17–19 December 2024; IEEE: Piscataway, NJ, USA, 2024; pp. 1–6. [Google Scholar]
  38. Miller, R.; Whelan, H.; Chrubasik, M.; Whittaker, D.; Duncan, P.; Gregório, J. A Framework for Current and New Data Quality Dimensions: An Overview. Data 2024, 9, 151. [Google Scholar] [CrossRef]
  39. Gualo, F.; Rodríguez, M.; Verdugo, J.; Caballero, I.; Piattini, M. Data quality certification using ISO/IEC 25012: Industrial experiences. J. Syst. Softw. 2021, 176, 110938. [Google Scholar] [CrossRef]
Figure 1. The proposed methodological and technological framework.
Figure 2. Suggested universal concept of Nonaka’s SECI model.
Figure 3. Framework functional representation.
Figure 4. Overview of the KM process.
Figure 5. Discussion Management process.
Figure 6. Lesson Learned management process.
Figure 7. Pattern management process.
Figure 8. Pattern selection and usage process.
Figure 9. Knowledge Assessment management process.
Figure 10. Data analysis process to validate the best pattern configuration.
Figure 11. Data analysis process using association rules.
Figure 12. Data analysis process model using ML.
Table 1. Scrum framework nomenclature.
| Scrum Framework Nomenclature | Definition |
| Scrum Team | Set of several roles, such as Scrum Master, CoP Member, etc. |
| Scrum Master | Team Leader |
| CoP Member | Unit Leader |
Table 2. Knowledge management system requirements.
| ID | Requirement Title | Description |
| 1 | System Scope | Collection, storage, and visualization of information throughout the lifecycle. Support for discussions, lessons learned, and patterns. Traceability and management of roles and groups. Centralized access to patterns. |
| 2 | Account Management | Management of accounts with different roles based on the user’s position within the group. |
| 3 | System Authentication | Authentication through a basic authentication mechanism, respecting corporate security policies. |
| 4 | Definition of Roles in the Scrum Team | Functions for Scrum Team Members: initiating discussions, formalizing lessons, participating in discussions, and proposing improvements. |
| 5 | Scrum Master Functionality | In addition to the functions of the Scrum Team Member: approving discussions, formalizing lessons, approving lessons learned, and reformulating patterns. |
| 6 | Scrum Member of Scrum Meeting Functionality | The Scrum Master can take on additional functionality to participate in SoS meetings. |
| 7 | CoP Member Functionality | Participation in the validation of lessons learned into patterns and best practices. Functions inherited from the Scrum Master. |
| 8 | Initiation and Formalization of a Discussion | Initiation of discussions by the Scrum Master. Formalization of new knowledge through title, keywords, and description. |
| 9 | Lesson Formalization | Formalization by a user or requested by the team leader. Entry of name, keywords, areas of use, and description. Automatic linking to related discussions. |
| 10 | Lesson Management | Repository for formalized lessons, visible and editable by all users. |
| 11 | Creation of the Lesson Learned | Creation from an existing formalized lesson or pattern. Insertion of related information, inherited from the source lesson. |
| 12 | Approval of Lesson Learned | Validation by the SoS. Input of KPIs and other details required for validation. |
| 13 | Managing Lessons Learned | Repository for lessons learned, visible and improvable by all users. |
| 14 | Pattern Approval | Validation of best practices by the CoP. Additional information and specific constraints are required. |
| 15 | Pattern Management | Classification of content by ID, type, and subject. Advanced search using metadata. |
| 16 | Content Categorization | Classification by type (discussion, lesson, lesson learned, pattern), theme, and unique ID. Search based on tags, scope of use, and date of creation. |
| 17 | Import/Export | Import and export functionality for lessons, lessons learned, and patterns. |
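As an illustration of requirements 15–17 (metadata-based classification and search), the sketch below filters knowledge artifacts by type, tag, scope of use, and creation date. The field names and the in-memory repository are assumptions for illustration only, not the platform’s actual data model.

```python
# Illustrative sketch only: metadata-based search over knowledge artifacts,
# as implied by requirements 15-17. Field names are hypothetical assumptions.
from dataclasses import dataclass
from datetime import date

@dataclass
class KnowledgeArtifact:
    artifact_id: str
    kind: str            # "discussion" | "lesson" | "lesson_learned" | "pattern"
    tags: list
    scope_of_use: str
    created: date

def search(repo, kind=None, tag=None, scope=None, created_after=None):
    """Return artifacts matching all supplied filters (None = ignore filter)."""
    results = []
    for a in repo:
        if kind and a.kind != kind:
            continue
        if tag and tag not in a.tags:
            continue
        if scope and a.scope_of_use != scope:
            continue
        if created_after and a.created <= created_after:
            continue
        results.append(a)
    return results
```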
Table 3. Knowledge taxonomy.
| Term | Definition |
| Discussion | Externalization of new knowledge that emerges during an operational activity. |
| Lesson | Potentially valuable experience, not necessarily applied and/or validated by others, still in the process of being formalized. |
| Lesson Learned | Guideline, advice, or checklist that identifies what was right or wrong in a particular event. |
| Best Practice | Practice that, in a systematic and documented way, allows for the achievement of excellent results. |
| Pattern | Model whose use has been demonstrated, through quantitative data, to reduce time, work, and costs and to increase quality and end-customer satisfaction, or that at least affects some of these requirements. |
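The taxonomy above can be read as a maturity ladder. The following sketch encodes it as explicit levels with a single promotion path gated by the validation steps described in Table 2 (SoS approval, CoP approval); the strictly linear path is a simplifying assumption for illustration.

```python
# Minimal sketch encoding the knowledge taxonomy as maturity levels. The
# single linear promotion path is an illustrative assumption; in practice,
# each promotion is gated by a validation step (formalization, SoS, CoP).
from enum import Enum

class Maturity(Enum):
    DISCUSSION = 1
    LESSON = 2
    LESSON_LEARNED = 3
    BEST_PRACTICE = 4
    PATTERN = 5

PROMOTION = {
    Maturity.DISCUSSION: Maturity.LESSON,            # formalization
    Maturity.LESSON: Maturity.LESSON_LEARNED,        # validated by the SoS
    Maturity.LESSON_LEARNED: Maturity.BEST_PRACTICE, # validated by the CoP
    Maturity.BEST_PRACTICE: Maturity.PATTERN,        # backed by quantitative data
}

def promote(level: Maturity) -> Maturity:
    """Advance one maturity level; Pattern is terminal."""
    if level not in PROMOTION:
        raise ValueError(f"{level.name} cannot be promoted further")
    return PROMOTION[level]
```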
Table 4. Benchmarking of AI-enhanced Business Process Management Systems (ABPMSs).
| Criteria | AI-Augmented BPMS (Dumas et al.) [1] | Self-Adaptive ERP (Maged & Kassem) [37] | Large Process Models (Kampik et al.) [21] | The Proposed Framework |
| Lifecycle Phases | Perception, reason, enact, adapt, explain, improve | Adaptation of ERP processes through AI and NLP | LLM-based contextual recommendations | Perception, validation, formalization, adaptation, dissemination |
| Knowledge Management Focus | Limited; no detailed mechanisms for validation and dissemination | Lacks socio-organizational integration; focuses mainly on technology | Data-driven; lacks structured knowledge sharing | Strong focus with iterative cycles and structured roles |
| Organizational Structure | Not explicitly defined | No defined roles for knowledge integration | No role definition for knowledge processes | Defined roles: Scrum Masters, Communities of Practice (CoPs) |
| Socio-Technical Integration | Mainly technical; organizational learning is implicit | Predominantly technological; no community involvement | Focused on AI; lacks socio-organizational perspective | Socio-technical balance: integrates organizational learning and collaboration |
| Validation Mechanism | Implicit; not role-specific | None; relies on system adaptation | AI-driven; lacks formal validation of lessons learned | Clear mechanisms for validation through CoPs and structured roles |
| Dissemination of Knowledge | Not clearly addressed | No structured dissemination | Limited to LLM recommendations | Explicitly defined; CoPs and Scrum Masters manage sharing |
| Learning from Experience | Implied but not explicit | Lacks focus on organizational learning | Data-driven; no structured learning cycle | Central to the framework; learning cycles drive process adaptation |
| Process Adaptation Cycle | Adaptive but lacks socio-technical context | Technology-driven without social learning | Contextual but lacks structured feedback loops | Iterative cycle driven by validated feedback and structured roles |
| Target Environment | Generic BPM systems | ERP systems | Software-supported BPM | Complex and dynamic organizational contexts |
