A Systematic Model for Process Development Activities to Support Process Intelligence

Process, manufacturing, and service industries currently face a large number of non-trivial challenges, ranging from product conception through design, development, commercialization, and delivery in a customized market environment. Thus, industries can gain profitability by integrating new technologies into their day-to-day tasks. This work presents a model for enterprise process development activities, called the wide intelligent management architecture model, to integrate new technologies for service, process, and manufacturing companies that strive to find the most efficient way towards enterprise and process intelligence. The model comprises and structures three critical systems: the process system, the knowledge system, and the transactional system. As a result, analytical tools belonging to process activities and the transactional data system are guided by a systematic development framework consolidated with formal knowledge models. Thus, the model improves the interaction among the process lifecycle, analytical models, the transactional system, and knowledge. Finally, a case study is presented in which an acrylic fiber production plant applies the proposed model, demonstrating how the three models described in the methodology work together to systematically reach the desired technology application, a life cycle assessment. The results allow us to conclude that the interaction between the semantics of formal knowledge models and the process-transactional system development framework facilitates and simplifies new technology implementation along with enterprise development activities.


Introduction
Our civilization faces acute and critical challenges, such as climate change, safe drinking water availability, food scarcity, and secure energy supplies, which endanger current and future generations. Therefore, society and industry need to shape their activities based on sustainable principles and to efficiently adopt the rapidly evolving new technologies which can potentially handle the challenges mentioned above. Specifically, this work focuses on the integration of new technologies into the decision-making of the process and manufacturing industries. Thus, the proposed methodology applies to the workflow of any productive sector or area where decision-making plays a crucial role.
As for the process and manufacturing industries, complex decision-making lurks at all enterprise levels and across the whole product lifecycle, ranging from product conception, design, development, and production to commercialization and delivery. The need to consider highly complex scenarios results in involved and non-trivial decision-making. However, the advent of new technologies supports the successful development and systematization of new structures and frameworks for reaching informed, reasonable, and wise decisions. Therefore, this work aims to integrate new technologies systematically into the decision-making workflow of the process and manufacturing industries and presents a framework for developing process and product-related activities. Thus, the proposed model allows unveiling the most efficient way towards integrating enterprise decision support systems and process intelligence into actual enterprise processes.
As discussed in Section 1.1 (Decision-Making in the Enterprise), companies have recently adopted decision support systems to handle the complexity of decision-making. However, such systems only tackle part of the complete enterprise structure, and it is necessary to understand the whole picture to reach sensible solutions, as pointed out in Section 1.2 (Enterprise Integration). Therefore, this work combines knowledge management (Section 1.3) and data management (Section 1.4) to propose a framework for integrating different systems and efficiently applying new technological solutions for decision-making.

Decision-Making in the Enterprise
Process and manufacturing industries can be regarded as highly involved systems consisting of multiple business and process units. The organization of the different temporal and geographical scales in such units, as well as of the different enterprise decision levels, is crucial to understand and analyze their behavior. The key objectives are to gain economic efficiency, market position, product quality, flexibility, or reliability [1]. Recently, indicators related to sustainability and environmental impact have also been included as drivers for decision-making. The basis for solving an enterprise system problem and further implementing any action is the representation of the actual system in a model, which captures the features relevant to the observer. Such a model is the basis for decision-making, which is a highly challenging task in these industries due to their inherent complexity.
Therefore, companies have devoted efforts to reaching better decisions during the last decades. Indeed, they have invested substantial resources in exploiting information systems, developing models, and using data to improve decisions. Decision support systems (DSS) are responsible for managing the data and information needed to make decisions. Thus, those systems aim to integrate data transactions with analytical models supporting the decision-making activity at different organizational levels. The work in [2] defines DSS as computer systems that aid the management level of an organization by combining data with advanced analytical models. The work in [3] presents four components supporting classic DSS: (i) a sophisticated database for accessing internal and external data, (ii) an analytical model system for accessing modeling functions, (iii) a graphical user interface that allows humans to interact with the models to make decisions, and (iv) an optimization engine based on mathematical algorithms or intuition/knowledge. Traditionally, DSS focus on a single enterprise unit and lack vision beyond its boundaries. Thus, DSS rely heavily on rigid data and model structures, and they are difficult to adapt to include new algorithms and technologies.

Enterprise Integration
Current trends in the process industry outline the importance of being agile and fully integrated to improve decision-making at all scales in the company. Indeed, integration comprises the whole range of organizational activities from operation to planning and strategy, which differ in physical and temporal scope but are directly related to each other, as decisions made at one level directly affect the others. Companies pursuing integration among different decision levels in the production management environment report substantial economic benefits [4,5]. Therefore, coordinating and integrating information and decisions among the various functions is crucial for improving global performance.
The use of standards is the primary method adopted for enterprise integration work. Groups, committees, and societies have developed those standards in the different geopolitical areas where they apply. Next, some of these standards are reviewed as a brief introduction to their content.
First, the European Committee for Standardization (CEN) and the European Committee for Electrotechnical Standardization (CENELEC) provide standards to characterize, guide, and rule SMEs' activities [6,7]. Other standards were funded by, and are now in use in, United States Air Force and United States Department of Defense agencies; moreover, many organizations apply them for business process capture and improvement.
Following, the International Electrotechnical Commission (IEC) develops the International Standards and Conformity Assessment covering areas such as industrial control programming (IEC 61131-3) or field device integration (IEC 61804-2). The standards aim at enabling the interoperability, efficiency, and safety of electrical, electronic, and information systems [8].
The International Organization for Standardization (ISO) standards are well known and widely used, covering management systems, quality management, information security management, etc. [9]. Thus, the criteria for integration comprise the Enterprise Modeling and Architecture (ISO TC184 SC5 WG1), the Electronic Business Extensible Markup Language (ISO 15000), and the Asset Management System (ISO 55003), among others.
The Manufacturing Execution Systems Association (MESA) presents a set of best management practices and information technology aiming to improve business. MESA focuses on asset performance management, lean manufacturing, product lifecycle management, manufacturing performance metrics, quality, regulatory compliance, and return on investment [10].
Next, the Machinery Information Management Open Systems Alliance (MIMOSA) presents the Open Standards for Physical Asset Management for information management (IM) and information technologies (IT) applied to manufacturing environments [11]. MIMOSA standards recently focus on enabling digital twins, big data, the industrial internet of things, and analytics specifications.
The Object Management Group (OMG) is dedicated to developing technological standards for enterprise integration and distributed broad interoperability [12]. The OMG standards comprise the Business Process Model and Notation (BPMN), the Common Object Request Broker Architecture (CORBA), the Common Warehouse Metamodel (CWM), the Data Distribution Service for Real-Time Systems (DDS), the Unified Modeling Language (UML), and the Model Driven Architecture (MDA) applied to software visual design, execution, and support.
Next, the Process Industry Practices (PIP) consortium collaborates to define common industry standards and best practices focused on design, maintenance, and procurement activities [13]. Besides, PIP practices facilitate knowledge capturing of process control, mechanical, data management, and Piping and Instrumentation Diagrams.
A key element of integration directly points to enterprise models: computational applications within organizations aiming to represent processes, activities, resources, or physical phenomena. These models are essential for driving design, analysis, management, and prognosis in enterprise functions. Nevertheless, the spread of these models confronts several issues in practice. First of all, systems supporting enterprise functions were created independently during past years, resulting in heterogeneous enterprise models; this is the so-called correspondence problem. When different enterprise models refer to the same concept, for example an activity, each model will probably apply a different name, such as activity, operation, or task. Therefore, most of the time, interpreting agents are necessary to allow communication among those enterprise functions. However, no matter how rational the idea of renaming the concepts is, organizational barriers usually impede it. Furthermore, these representations lack an adequate specification of what the model objects mean; they lack an actual semantic definition of the terminology. Instead, concepts are poorly defined, and their interpretations overlap, leading to inconsistent understandings and uses of the knowledge. Finally, the cost of designing, building, and maintaining a model of the enterprise is high. Each model tends to be unique to the enterprise, and its objects are enterprise-specific.
Therefore, some efforts have addressed the issues mentioned above using model standardization. On the one hand, the American National Standards Institute (ANSI) developed the Instrumentation, Systems, and Automation Society (ISA) standards, known as ANSI/ISA standards, for automation and control within the enterprise [14], with wide recognition for process integration. Figure 1 presents the main integration aspects of these standards. On the other hand, the Purdue reference model provides an "environment" for discrete parts manufacturing and stands as the basis for the other models [15]. In this case, certain activities are identified as directly related to shop floor production and organized in a six-level hierarchical model, as depicted in Figure 2, which shows the name of each level and its primary responsibility. Specific applications may require more or fewer than six levels, but six were deemed sufficient for identifying where integration standards are needed. These activities apply to manual operations, automated operations, or a mixture of the two at any level. It is worth mentioning the straightforward subdivision of the six tasks into control enforcement, systems coordination and reporting, and reliability assurance. In the context of any large industrial plant or an entire industrial company based in one location, the tasks would take place at each level of the hierarchy.
Thus, the CIM reference model stands as a reference for computer-integrated manufacturing. It consists of a detailed collection of generic information management and automatic control tasks and their necessary functional requirements for a manufacturing plant. Nevertheless, the scope of the CIM reference model is limited to the integrated information management and automation system elements. As a result, the company's management functions, including planning, finance, purchasing, research, development, engineering, and marketing and sales, are all treated as external influences.
The adoption of standard models is the basis for the integration of enterprise processes. Thus, decision-making heavily relies on both the process models and the technologies which tackle the problem. Therefore, this work considers the systematization of data and knowledge management to reach integration in decision-making.

Knowledge Management
The development of better practices, strategies, and policies is highly related to how organizations use experiences and ideas from customers, suppliers, and employees. Thus, capturing, storing, sharing, and applying knowledge enables the construction of organization intelligence and intellectual assets. Two types of knowledge sources can be generally defined: tangible and intangible. On the one hand, intangible assets are related to skills, expertise, and human resources knowledge. On the other hand, tangible assets are related to data, information, historical records found on databases of customers, suppliers, and employees of the organization [16].
The bases of knowledge management tools can include distributed databases, ontologies, or network maps. This work focuses on the development of formal domain ontologies as the primary technology for knowledge management. Besides, the terms Semantic Web or Web 3.0 are also used to refer to this technology. Ontologies and logic serve as conceptual graphs for knowledge representation when constructing computable models within a specific domain [17]. Additionally, ontologies are defined as formal structures facilitating the acquisition, maintenance, access, sharing, and reuse of information [18,19]. Over the last decades, the Semantic Web has pursued theoretical bases for developing knowledge-based application software that can communicate (i) a shared and common understanding of a domain among people and across application systems, and (ii) an explicit conceptualization that describes the semantics of the data.
Finally, knowledge management systems benefit from ontologies that semantically enrich information and precisely define the meaning of various information artifacts.

Bloom's Cognition Taxonomy
Bloom's Taxonomy is a framework that presents how educational objectives can guide and structure educational goals. The latest revision of this framework, entitled A Taxonomy for Learning, Teaching, and Assessing, defines the cognitive processes related to knowledge [20], shown in Figure 3. The framework considers six major categories, with subactivities for better understanding: remember, understand, apply, analyze, evaluate, and create. Finally, the framework defines four types of knowledge used in cognition: factual, conceptual, procedural, and metacognitive.

Transactional System and Data Management
The performance of enterprise processing activities highly depends on the transactional system's capacity and on how well the data are managed.

Transactional System
A transactional system comprises multiple operations that collect, store, modify, and retrieve data transactions within an enterprise. These systems must support a high number of concurrent users and transaction types over time. Besides, enterprise data are identified by their purpose and type, comprising transactional, analytical, and master data. First, transactional data support the daily operations of an organization; they refer to data created or modified by the operational systems, such as time, place, number, date, price, payment methods, etc. Next, analytical data are numerical measurements that support activities such as decision-making, reporting, querying, or analysis; thus, analytical data are stored and structured as numerical values in dimensional models. Finally, master data represent the key business entities, involving the creation of a single view of the data in a master file or master record. Master data comprise data about sites, inventory levels, demand, products, batches, etc.
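For illustration only, the following sketch (with hypothetical field names, not taken from any specific transactional system) shows how the three data categories differ in structure and purpose:

```python
from dataclasses import dataclass
from datetime import datetime

# Hypothetical record types illustrating the three enterprise data categories.

@dataclass
class TransactionalRecord:     # created or modified by the operational systems
    batch_id: str
    timestamp: datetime
    operation: str             # e.g., "charge reactor"
    quantity_kg: float
    unit_price: float

@dataclass
class AnalyticalRecord:        # numerical measurement stored in a dimensional model
    kpi: str                   # e.g., "energy consumption"
    period: str                # time dimension
    site: str                  # site dimension
    value: float

@dataclass
class MasterRecord:            # single view of a key business entity
    product_id: str
    product_name: str
    recipe_id: str
    default_supplier: str

# Example usage with made-up values.
t = TransactionalRecord("B-001", datetime(2021, 3, 1, 8, 30), "charge reactor", 4000.0, 1.2)
a = AnalyticalRecord("energy consumption", "2021-Q1", "Plant A", 36000.0)
m = MasterRecord("P-ACR-A", "Acrylic fiber A", "MR-001", "Supplier X")
print(t, a, m, sep="\n")
```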

Data Management
Enterprise data management aims to govern business data by retrieving, standardizing, storing, integrating, structuring, and disseminating requested data. The transactional system supports data management by enhancing data transaction features for control, analysis, and decision-making. Thus, an essential feature of data management is communicating all data from different data sources (sensors) and fragmented control systems to all enterprise applications, processes, and entities that require them. Another critical aspect of data management is to store data securely and make them available when needed.

Materials and Methods
New technologies can accomplish their implementation lifecycle on a robust basis supported by the proposed architecture, named the wide intelligence management architecture. The architecture offers three central systems comprising development activities: the process, knowledge, and transactional systems, shown in Table 1. First, the process system model introduces seven development activities systematically ordered towards formalized process maturity. The activities range from process definition, which formalizes the main aspects of how enterprise processes perform, to process intelligence, where human and environmental behavior is taken into account to enrich development activities. Next, the knowledge system model aims to strengthen the integration through formalized knowledge from three main perspectives: the domain area, the expertise area (functional activities), and the experience area, enhancing expert knowledge with success and failure cases. Finally, the third model is related to the transactional data system. This model comprises four main areas: data definition, data improvement, data standardization, and data feeding.

Process System Model
The Modular Process Reference Model aims to define a coherent and structured manner of process evolution to integrate new technologies and business activities. The reference model comprises seven modules defined by the use of analytical tools and data linked to enterprise activities, represented in Figure 4.

Process Definition
This module provides a set of activities aiming to assess enterprise process performance and to support systematic formalization. On the one hand, this module verifies "if", and measures "how much", the existing formalized processes follow the current enterprise activities; this is named the verification phase. Otherwise, the methodology aims to define, design, and standardize the enterprise processes, which is called the definition phase. Thus, a process design phase takes place, followed by validation and verification phases. The steps mentioned above (verification and definition) must be applied to the enterprise transactional system in parallel with the processes.

Process Improvement
The process improvement module makes an exhaustive study of the current processes to perform a re-design phase. This re-design phase considers new tendencies in standards, methods, and technologies. Moreover, good manufacturing practices (GMPs) and standard operating procedures (SOPs) are of paramount importance in the improvement task. At the same time, the transactional data system must pass through a re-design phase to support the process improvements realized. Finally, updates and documentation regarding enterprise processes, resources, and data improvements must follow (with a focus on management).

Process Standardization
This module performs research on standards and models strongly related to the main enterprise processes to consider their future implementation. Finally, as in the previous module, data and structures from the transactional system are standardized accordingly.

Process Optimization
The process optimization module comprises the optimization of processes by using different rigorous and non-rigorous approaches. The process optimization phase aims to provide the necessary data and information, through fundamental calculations based on engineering approaches, to decide on specific objectives and goals within processes. Thus, as the first step, knowledge, data, and information on the processes and systems are crucial to understanding the problem. The model design is then developed by defining a single- or multi-objective function, a single or multiple purposes, and single or multiple scenarios as convenient. Finally, WIMa's optimization solutions are enriched by semantics, through mathematical and process semantic models, allowing easier integration within the enterprise.

Process Automation
The process automation module comprises applications such as business process automation (BPA), digital automation (DA), and robotic process automation (RPA). First, BPA makes use of advanced technologies to reduce human intervention in processing tasks across the enterprise. Thus, BPA aims to enhance efficiency by automating (initializing, executing, and completing) the whole or some parts of a complicated process. Next, DA builds on the BPA system, aiming to digitalize and improve process automation and thus meet the dynamics of the market and customer environment. Finally, RPA is carried out by software agents that aim to mimic human actions within digital systems to optimize business processes by using artificial intelligence agents.

Process Digitalization
The digitalization module creates a digital integration of all the systems found in business processes. Digitalization encompasses process simulation, industrial augmented reality, predictive systems, proactive systems, the industrial internet of things, expert systems, and process virtual twins. Finally, WIMa's solutions facilitate the development of digitalization technologies thanks to the semantic structure that supports easy access to raw or structured data and to the formal knowledge of the processes.

Process Intelligence
This module aims to understand the principles of human behavior through reasoning in order to develop programs for problem solution by machines, using artificial intelligence tools, computational intelligence systems, and formal knowledge models. One of the first tasks is to manage structured knowledge, which can facilitate and empower the system's understanding. Furthermore, this module is directly affected by the transactional system's efficiency, which considers data collection, structuring, and communication.

Data System Model
The transactional system architecture sets up the data management activity. Data management comprises five main activities: data system definition, data system improvement, data standardization, data integration and feeding, and data system dynamics, as shown in Figure 5.

Definition
This activity takes into account the process definition (Section 2.1.1) to create the data model. The data model establishes the relationship between the process model and the data generated by signal sources, such as process equipment, environment sensors, suppliers, or customers. Furthermore, transaction data protocols are defined, and the supporting physical architecture must be capable of carrying those protocols. Finally, the data management plan is set, providing guidelines and procedures for enhancing security, compliance, quality, efficiency, and access.

Improvement
The data improvement activity refines the relationship between data signals and process models and identifies missing and necessary data. The data system requirements derive from the process improvement activity. The required data can then be obtained by performing analytical computations, adding new technologies, or adding other sensors.

Standardization
The data standardization activity is the process of setting data systems into standard formats. It comprises the selection of a common data language, structure, engineering metrics, and time/space scales. Besides, data reconciliation is a crucial task of data integration. Finally, the data structure comprises four language categories, data definition language, data query language, data manipulation language, and transaction control language, in compliance with the process database system.
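As a minimal sketch of the four language categories (using SQLite purely for illustration; the actual database technology and schema are not prescribed by the model), the same workflow can be exercised as follows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Data definition language (DDL): define the standardized structure.
cur.execute("""
    CREATE TABLE material_flow (
        flow_id   TEXT PRIMARY KEY,
        batch_id  TEXT NOT NULL,
        amount_kg REAL NOT NULL,      -- standardized engineering metric (kg)
        direction INTEGER NOT NULL    -- +1 enters the boundary, -1 leaves it
    )
""")

# Data manipulation language (DML): insert or update transactional records.
cur.execute("INSERT INTO material_flow VALUES ('F-001', 'B-001', 4000.0, 1)")
cur.execute("INSERT INTO material_flow VALUES ('F-002', 'B-001', 3950.0, -1)")

# Transaction control language (TCL): commit (or roll back) the transaction.
conn.commit()

# Data query language (DQL): retrieve data for analysis.
cur.execute("SELECT direction, SUM(amount_kg) FROM material_flow GROUP BY direction")
print(cur.fetchall())   # e.g., [(-1, 3950.0), (1, 4000.0)]
```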

Integration and Feeding
Data integration and feeding aim to link and integrate the transactional system with the analytical systems or models. On the one hand, integration connects data among devices and analytical models, devices and devices, and analytical models and analytical models. On the other hand, data feeding is in charge of structuring, sending, and delivering data. Thus, integration and feeding tasks focus on supporting optimization-based decision-making and process action execution that maximize the business's benefit. Besides, material resource planning, distribution requirement planning, or enterprise resource planning systems are set up, if required, as part of the analytical procedure. As a result, the definition of the data-to-parameters and data-to-sets required by the transactional models is established. Finally, at some point, this task considers the feeding and integration of virtual systems.
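A minimal sketch of the data-to-sets and data-to-parameters structuring step is given below; the row layout and names are illustrative assumptions, not the actual plant database:

```python
# Turning transactional rows into the "data-to-sets" and "data-to-parameters"
# structures an analytical model expects. Row layout is hypothetical.
rows = [
    # (batch_id, product_id, unit_id, processing_time_h, demand_kg)
    ("B-001", "P-A", "R-101", 6.5, 4000.0),
    ("B-002", "P-A", "R-102", 6.0, 5000.0),
    ("B-003", "P-B", "R-101", 7.2, 3500.0),
]

# Data-to-sets: index sets for the model.
batches  = {r[0] for r in rows}
products = {r[1] for r in rows}
units    = {r[2] for r in rows}

# Data-to-parameters: parameters indexed by those sets.
processing_time = {(r[0], r[2]): r[3] for r in rows}   # hours per batch/unit pair
demand = {p: 0.0 for p in products}
for r in rows:
    demand[r[1]] += r[4]                                # aggregate demand per product

# These structures can now feed an optimization model or an MRP/DRP/ERP routine.
print(sorted(units), demand)
```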

Dynamics
The data system model's last task replaces fixed data parameters and selected data sets with a mechanism for data collection and structuring. Usually, these mechanisms refer to algorithms, which can become intelligent agents. Finally, the dynamics can enhance the database system, reaching intelligent databases. Intelligent databases are in constant change, adapting to the dynamics of the process systems and other business features.

Knowledge System Model
The knowledge system model organizes how the cognition process evolves, focusing on knowledge management. This cognition process integrates and adapts Bloom's taxonomy types of knowledge (Section 1.3.1) and ontologies, applied to computational systems. Thus, the resulting model comprises four main modules: conceptual, procedural, expertise, and metacognitive, as shown in Figure 6. Finally, the knowledge model system acts as an integrator between the process and data system models.

Conceptual
The conceptual knowledge aims to develop a formal domain ontology. The ontology considers the domain terminology, elements, categories, principles, and generalizations, which can cover chemical, physical, and mechanical phenomena. All this domain information is retrieved from standards, books, handbooks, etc., derived from Sections 2.1.1 and 2.2.1. Additionally, information gathering can consider good manufacturing practices and standard operating procedures derived from Section 2.1.2. Thus, the ontology models those elements in the form of classes (elements involved in the domain area), data properties (data from specifications and sensors), class properties (the relations among elements), axioms (assumptions on behavior), and rules (restrictions on behavior).
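The following sketch illustrates these ontology elements with the owlready2 library and hypothetical names; it is not the actual BaPrOn or EOP model, only an indication of how classes, data properties, class properties, and a simple axiom can be declared and instantiated:

```python
from owlready2 import (get_ontology, Thing, DataProperty,
                       ObjectProperty, FunctionalProperty)

# Hypothetical IRI; the real domain ontologies are far richer.
onto = get_ontology("http://example.org/polymer_plant.owl")

with onto:
    class ProcessEquipment(Thing): pass          # classes: elements of the domain
    class Reactor(ProcessEquipment): pass
    class RawMaterial(Thing): pass

    class consumes(ObjectProperty):              # class property: relation among elements
        domain = [ProcessEquipment]
        range  = [RawMaterial]

    class hasCapacity(DataProperty, FunctionalProperty):   # data property: spec/sensor data
        domain = [ProcessEquipment]
        range  = [float]

    # Axiom (assumption on behavior): every reactor consumes some raw material.
    Reactor.is_a.append(consumes.some(RawMaterial))

# Instantiation, analogous in spirit to the RawMaterial instances of the case study.
acrylonitrile = RawMaterial("Acrylonitrile")
r1 = Reactor("R-101")
r1.hasCapacity = 4000.0
r1.consumes = [acrylonitrile]
print(r1, r1.consumes, r1.hasCapacity)
```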

Procedural
Procedural knowledge focuses on the interaction that processes and humans have through analytical methods and tools, usually derived from procedural models, to perform the processing activity. Thus, the primary sources of information are techniques, methods, theories, models, structures, and procedural recipes (general, site, master, and control recipes). Besides, previous knowledge is stored and well identified within the data system. Finally, procedures, analytical models, and analytical tools are translated into computational algorithms for the reasoning task.

Expertise
Knowledge expertise aims to collect and manage the criteria that determine the best use of analytics or procedures. These criteria are based on learning good habits and how to replicate them, as well as on learning from bad habits and how to avoid them. Finally, expertise knowledge considers creating multiple scenarios to account for the power of knowledge about relations and behavior in the domain, based on the conceptual and procedural knowledge activities.

Metacognitive
Metacognitive knowledge aims to go beyond the current understanding. The current knowledge system is the one in charge of this metacognitive process. The metacognitive module mainly comprises the characterization, classification, reasoning, and creation of knowledge to enhance and reach process intelligence.
Finally, Figure 7 represents the interaction among the three system models. The process system model drives the general maturing process along the diagonal, reaching from process definition to process intelligence. Next, the X-axis represents how the data system model supports the process and matures, starting from data definition up to data dynamics. Last, the Y-axis shows the development of the knowledge system model, which interacts with the other two system models and allows an intelligent technological implementation, reaching from conceptual knowledge to metacognitive knowledge.

Results
This section presents the application of the methodology previously described in a process manufacturing case study.

Case Study
This case study aims to illustrate the WIMa framework's application for the lifecycle assessment (LCA) of an acrylic fiber production plant. The objective is to demonstrate how the three models described in Section 2 work together to reach the desired technology application (in this case, LCA). Life cycle assessment requires a correctly defined process and consistent data to provide sensible results. Furthermore, this is a meaningful example to understand the framework at the process definition level (Section 2.1.1).
The acrylic fiber polymerization process considered in this work was initially presented in [21]. Acrylic fiber production comprises 14 stages in a batch production plant, represented in Figure 8, which involves different material and energy resources. Two alternative production processes are assessed: acrylic fiber A uses acetone as the solvent in the polymerization, and acrylic fiber B uses benzene. This case study describes the process and data definition required to perform a life cycle assessment according to the WIMa procedure; the actual evaluation is beyond the scope of this contribution. For further details regarding the life cycle assessment, please refer to the work in [21]. First of all, the requirements and objectives of the case study are defined. Thus, suppose that "Polymer A.C." wants to develop a high-level decision model agent based on optimization approaches. Besides, they want to standardize their processes to maintain relations with their industrial partners. These requirements comprise the performance of the following levels presented in Table 1:
1. Level one (L1) regarding process definition, conceptual knowledge (process principles, process standards, and data standards), and data definition.
2. Level three (L3) regarding process standardization and data standardization.
3. Level four (L4) regarding process optimization, procedural knowledge, and integration and feeding data.
Next, the performance of every activity is presented in detail, structured according to the three main models: the process model, the data model, and the knowledge model.

Process Model of Polymer Plant
L1. Process definition: Process definition tackles the formalization of the polymerization process itself according to the technical requirements. The process consists of a complete polymerization plant that produces acrylic fibers using acetone as the solvent. The existing documentation comprises the plant flowsheet (Figure 8) and the process recipe, which constitute the current formalization of the plant process activities. Finally, a characterization of the organization is performed, and the results, presented in Tables 2-4, summarize the features related to the general, tactic, and strategic levels of the organization in which the polymerization plant is installed. Overall, the previous information provides clear boundaries of the process and includes all material and energy flows required to perform a life cycle assessment. Therefore, the process considered complies with the process definition model requirements.
L3. Process standardization: The standardization requires following the ANSI/ISA 88 standard. Thus, a semantic model based on the ANSI/ISA 88 standard is used, the so-called Batch Process Ontology (BaPrOn). Its use is related to the instantiation task, which allows a faster and more accurate standardization of the process. As a result, the formulas of the master process recipes were extracted, as shown in Tables 5 and 6. The production plant considers four stages in batch production mode, eight recipe unit procedures, and six recipe operations, as well as twenty-seven different resources (considering material and energy flows). The instantiation of the master recipe results in a set of recipe unit procedures and recipe operations, along with their formula and input, output, and other process parameters. Thus, the environmental performance metric parameters appear included. Overall, the instantiation results in 934 instances concerning 295 classes, 257 object properties, and 33 data properties. The description logic expressivity of the ontology is SHIN(D), where S refers to the attributive language with complement of any concept allowed, not just atomic concepts (ALC); H refers to role hierarchy (subproperties); I refers to inverse properties; N refers to cardinality restrictions, a special case of counting quantification; and (D) refers to the use of data-type properties, data values, or data types. As an example of class instantiation, the RawMaterial class has Input1_1 (Acrylonitrile), Input1_2 (MethylMethacrilate), Input1_3 (VinilChloride), and Input1_4 (Solvent-Acetone) as instances.
L4. Process optimization: The optimization level addresses a multi-objective batch scheduling problem previously reported in the literature (2011, 2012). The problem can be defined as follows: given a set of process operations planning data, including (i) the time horizon, (ii) the set of product recipes, (iii) the equipment technologies for the processing stages, (iv) the product demands, (v) the changeover methods, (vi) the economic data related to costs and prices, and (vii) the environmental data related to raw material, equipment, and product manufacturing environmental interventions, all of them provided by the data model. Four objective functions are relevant for decision-making: productivity (P), total environmental impact (TEI), makespan (M), and total profit (TP). The problem is modeled with an immediate-precedence mathematical formulation, managing the possible use of different product changeover cleaning methods; multiple alternative pieces of equipment at each stage; limited storage policies; and product batching, allocation, and timing constraints.
Such a problem representation is suitable for applying any of the three different optimization strategies considered in this case study. Specifically, we solve the multi-objective problem using mathematical programming with a normalized constraint method (MP), a genetic algorithm (GA), and a hybrid optimization approach (HA). The suitability of each solution method depends on the combination of problem features and objective function and is further explored at level 7, related to process intelligence. At this level, the solution techniques are considered independently, and the knowledge model supports the implementation of the optimization by providing adequate data from the data model.
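As a toy illustration of the makespan/TEI trade-off only (it enumerates sequences by brute force with made-up data and is not the immediate-precedence MILP, GA, or hybrid approach used in the case study), consider the following sketch:

```python
from itertools import permutations, product

# Hypothetical batch data: processing times (h) and per-batch impact scores.
proc_time = {"B1": 6.0, "B2": 7.5, "B3": 5.0}
impact    = {"B1": 12.0, "B2": 9.0, "B3": 15.0}

# Two changeover cleaning methods: a quick rinse is fast but has a higher
# environmental impact than a thorough cleaning (hours, impact).
cleaning = {"quick": (0.5, 4.0), "thorough": (2.0, 1.0)}

def evaluate(seq, methods):
    """Return (makespan, total environmental impact) for one schedule."""
    makespan = sum(proc_time[b] for b in seq) + sum(cleaning[m][0] for m in methods)
    tei = sum(impact[b] for b in seq) + sum(cleaning[m][1] for m in methods)
    return makespan, tei

# Enumerate every sequence and every cleaning-method choice per changeover.
objectives = {evaluate(seq, methods)
              for seq in permutations(proc_time)
              for methods in product(cleaning, repeat=len(proc_time) - 1)}

# Keep the non-dominated (Pareto-optimal) makespan/TEI combinations.
pareto = [p for p in objectives
          if not any(q[0] <= p[0] and q[1] <= p[1] and q != p for q in objectives)]
for makespan, tei in sorted(pareto):
    print(f"makespan={makespan:.1f} h  TEI={tei:.1f}")
```

The sketch only shows why the two objectives conflict through the cleaning-method choice; the real formulation handles equipment assignment, storage policies, and timing constraints as well.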
L7. Process intelligence: This level comprises the development of intelligent agents for the scheduling problem. Indeed, one can use different problem representations for optimization purposes, and the most suitable one depends on the problem features. That is precisely the function provided by the process intelligence framework using agents. The agents perform different functions and are integrated, allowing communication among them. The agents' functions include communication, search, classification, and solution. A common vocabulary is necessary to achieve communication, and all the agents rely on the ontologies described in the knowledge model.
The solution mechanism consists of the following steps: (i) problem definition, (ii) a modeling process for reaching a problem model, (iii) model analysis, (iv) model solutions, and (v) problem implementation. Based on the solutions in (iv), we can make inferences and reach decisions about the problem (v). The assessment of the goodness of the decisions feeds back to the intelligent system to enable learning.
Next, the communication agent prepares the classification agent to analyze the problem using a knowledge-driven classification procedure. A solution strategy is then proposed based on a similitude measure resulting from comparing the problem instance with (i) existing problems tackled in the past and stored in the database and (ii) existing problem approaches from the state of the art. The problem instances solved in the original papers are the basis for the database of this problem. A total of 415 problem solutions are included, with different problem descriptions and objective function values. As a result of the similitude measure, a set of ranked solution approaches is proposed to the decision-maker. Finally, a solution agent uses the solution algorithms to reach the optimal solution for the problem instance. The solution agent also sends the problem solutions to the decision-maker and stores them in the database for future reasoning.
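A minimal sketch of such a similitude-based ranking is given below; the feature names, distance measure, and miniature history are illustrative assumptions, whereas the actual agents rely on the ontology vocabulary and the database of 415 solved instances:

```python
import math

# Hypothetical database of past instances: descriptive features plus the approach
# that worked best (MP = mathematical programming, GA = genetic algorithm, HA = hybrid).
history = [
    {"batches": 8,  "objective": "makespan",     "best": "MP"},
    {"batches": 12, "objective": "makespan",     "best": "MP"},
    {"batches": 40, "objective": "makespan",     "best": "GA"},
    {"batches": 60, "objective": "TEI",          "best": "GA"},
    {"batches": 25, "objective": "productivity", "best": "HA"},
]

def similitude(case, new):
    """Smaller distance = more similar; mismatching objectives are penalized."""
    size_gap = abs(case["batches"] - new["batches"]) / 10.0
    obj_gap = 0.0 if case["objective"] == new["objective"] else 5.0
    return math.hypot(size_gap, obj_gap)

def rank_strategies(new_instance, k=3):
    """Return the k most similar past cases as (strategy, distance) pairs."""
    ranked = sorted(history, key=lambda c: similitude(c, new_instance))
    return [(c["best"], round(similitude(c, new_instance), 2)) for c in ranked[:k]]

# The classification agent would present a ranking like this to the decision-maker.
print(rank_strategies({"batches": 45, "objective": "makespan"}))
```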
The framework was tested with new problem instances, and the results are shown in Table 7. The first column describes the problem size (number of batches for each problem). The second column specifies the objective function. The third column presents the selected solution implementation method, while the fourth column includes the objective function value. Finally, the fifth column stands for the distance to the best optimal solution found. For small problem instances, the rigorous mathematical programming approach is selected, whereas problem instances with many variables are solved using the genetic algorithm. The objective function also has an essential role in the selection of the solution strategy. Indeed, for productivity maximization, the hybrid approach is selected. In most cases, the solution proposed by the agent-based system is within 5% of the optimal solution. Overall, this framework stands for a systematic approach to scheduling model selection and solution implementation, thus supporting high-level decisions by engineers, who do not need to have a thorough understanding of advanced optimization techniques.
The different agents are programmed in Jython because it combines Python as a programming language with the Java APIs needed for communicating with the ontological models.

Knowledge Model of Polymer Plant
L1. Conceptual knowledge: The conceptual knowledge is built upon the enterprise ontology project (EOP) [22]. EOP is an ontology containing three active ontologies: the batch process ontology, the environmental ontology, and the enterprise ontology.
First, the batch process ontology (BaPrOn) tackles features such as the physical, procedural, recipe, and process models based on the ANSI/ISA 88 standard. It focuses on the production operation management of batch processes. Next, the environmental ontology (EVO) considers life cycle assessment and environmental impact category features, which allow tracing and calculating the environmental impact produced by product or process activities. Finally, the enterprise ontology project (EOP) is based on the ANSI/ISA 95 standard. It considers the integration of enterprise activities, such as quality, maintenance, and inventory management. Additionally, EOP also considers financial features to tackle supply chain management activities.
Besides, the EOP model takes into account and models the following key knowledge:
• Production system characterization: comprising the physical, procedural, and recipe (site and general) models.
• Products contemplated: according to the processing order activity in the industry, recipes defining the production requirements and production path for the products in the physical, process, and recipe (master and control) models.
• Resource availability and plant status: provided by the process management and production information management activities.
Finally, Figure 9 shows the first classes found in the taxonomy of the enterprise ontology project.
L4. Procedural knowledge: Mathematical programming has been chosen as the strategy for making the optimization knowledge explicit and available. Thus, this case study makes use of the mathematical modeling ontology (MMO) [23,24] and the operation research ontology (ORO) [25].
On the one hand, MMO aims to represent knowledge of the mathematical domain based on mathematical structures comprising elements, terms, and operations. Thus, the mathematical term is the atomic part of a mathematical expression. Mathematical elements and expressions are related through mathematical operations, which can be of logic or algebraic type. An element or expression can define specific conceptual meanings, such as the processing time, the opening value of a valve, the effort calculation equation, etc. In the same manner, an element or expression has a behavior that is related to variables, constant values, etc. Finally, MMO allows the definition of object-oriented mathematical modeling, relating mathematical elements and expressions with concepts from other semantic representations. In this case study, MMO is integrated with EOP, which allows linking mathematical models and equations with instances of the acrylic fiber process. Figure 10 shows the first classes found in the taxonomy of the mathematical modeling ontology.
On the other hand, ORO aims to capture the knowledge of the operations research area, which is a branch of mathematics. This ontology structures the mathematical expressions fed by MMO in the form of mathematical programming, which allows a formal study and solution of complex problems for the decision-making activity. As a result, an enriched semantic structure is obtained. It considers the main parts of mathematical programming, such as the objective function in the form of an equation and a set of constraints in the form of mathematical equations. In the same manner, logic and algebraic operations are supported by MMO. Figure 11 shows the first classes found in the taxonomy of the operation research ontology.
L7. Metacognitive knowledge: First, the system's semantic definition refers to Tables 2-4 presented in Section 3.1.1. Based on the system instantiation, the current process is introduced semantically, and the indicators, related key features, and engineering metrics are set, such as resource availability, energy consumption, demand non-accomplishment, and the desired cleaning overtimes. Table 8 shows a brief example of the resulting process system setup for monitoring. The first row of this table performs a SWOT analysis, classifying each item as a strength (S), weakness (W), opportunity (O), or threat (T). The following two rows show the indicators and related features coming from the classes representing the process domain concepts. The next row shows the engineering metrics associated with the indicators. Finally, the last two rows refer to the upper and lower bound values defined for evaluating the current performance. Next, the intelligent agent defines the optimization goal statements (maximization or minimization). Then, using the previous decision variable definitions, the system is ready to construct the semantic problem statement definition. Finally, the agent fills the problem statement template automatically to present it in the form of natural language, as follows.
Empty template: "Taking into account -EOP classes found as key variables- variables, and -EOP classes found as key parameters- parameters; -goal statement- -EOP class defined as decision variable- related to -EOP class defined as an indicator- indicator."
Filled template: "Taking into account Processing start time, Storage level variables, and Maximum storage capacity, Batch processing time, Batch due date parameters; Minimize Makespan related to Number of late jobs indicator."
At this level, intelligent agents provide additional capabilities for reasoning using semantic technologies. The main task focuses on decision support for industry 4.0 microenvironments.
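A minimal sketch of the template-filling step is shown below; the query results are hard-coded for illustration, whereas the actual agent retrieves them from the EOP instances:

```python
# Results a reasoning agent might retrieve from the semantic model (hard-coded here).
query_results = {
    "variables":  ["Processing start time", "Storage level"],
    "parameters": ["Maximum storage capacity", "Batch processing time", "Batch due date"],
    "goal":       "Minimize",
    "decision":   "Makespan",
    "indicator":  "Number of late jobs",
}

TEMPLATE = ("Taking into account {variables} variables, and {parameters} parameters; "
            "{goal} {decision} related to {indicator} indicator.")

# Fill the natural-language problem statement from the query results.
statement = TEMPLATE.format(
    variables=", ".join(query_results["variables"]),
    parameters=", ".join(query_results["parameters"]),
    goal=query_results["goal"],
    decision=query_results["decision"],
    indicator=query_results["indicator"],
)
print(statement)
```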

Data Model of Polymer Plant
L1. Data definition:
The data definition is concerned with the data collection and the data system architecture. Accordingly, the transactional system needs to be verified. In this case study, the information related to the recipe, namely, the energy and material flows, is stored in a Structured Query Language (SQL) database. All flows are listed, identified, and quantified in the database, and their sign (positive or negative) indicates whether they enter or leave the process boundaries. The data relating to the material and energy flows stem from the factory floor, and only process engineers have access and permission to modify the data.
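As an illustration of how the stored flows could be retrieved for the LCA boundary (the table and column names are hypothetical and do not reproduce the plant's actual schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("""CREATE TABLE recipe_flow (
                   flow_id TEXT PRIMARY KEY,
                   name    TEXT,
                   kind    TEXT,      -- 'material' or 'energy'
                   amount  REAL       -- positive: enters the boundary, negative: leaves it
               )""")
cur.executemany("INSERT INTO recipe_flow VALUES (?, ?, ?, ?)", [
    ("F-01", "Acrylonitrile",     "material",  4000.0),
    ("F-02", "Solvent (acetone)", "material",  1200.0),
    ("F-03", "Steam",             "energy",    900.0),
    ("F-04", "Acrylic fiber A",   "material", -3800.0),
    ("F-05", "Waste solvent",     "material", -350.0),
])
conn.commit()

# Split the inventory into inputs and outputs of the process boundary by sign.
cur.execute("SELECT name, amount FROM recipe_flow WHERE amount > 0")
inputs = cur.fetchall()
cur.execute("SELECT name, -amount FROM recipe_flow WHERE amount < 0")
outputs = cur.fetchall()
print("LCA inputs:", inputs)
print("LCA outputs:", outputs)
```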
L3. Data standardization: This level aims to standardize the data properties. Data properties comprise data metrics, data language, and data structure. In general, the properties are related to specifications within the processing system, and many of them are ruled by the implemented technology. In contrast, this methodology pursues defining, choosing, and agreeing on the data properties desired for the processing system. In this particular case study, the ANSI/ISA standards define those properties, standardizing the data through the instantiation process. The data model provides specifications for developing interfaces and software requirements. Table 9 presents the properties and details of the Boolean and Direction Type data from EOP (based on ANSI/ISA standards). Next, Table 10 presents the data details from the polymer process.

L4. Data integration and feeding:
This level performs the definition of the data and data sets required by the optimization software or other software. We consider software specialized in mathematical programming and solving, which contains rigorous and non-rigorous approaches. Thus, Tables 11-13 show some of the structured data or data sets required by the optimization activity of the polymer process plant. Besides, the optimization software can call for single data items at any time.
L7. Data dynamics: This level aims to develop an algorithm based on Jython capable of structuring data. For this specific case study, the algorithm was under construction. The strategy focuses on queries and the structuring of triples from the semantic models, which can dynamically define data sets as shown in L4 (data integration and feeding). Finally, we want to point out that the ontologies have a database structure but are semantically enriched and supported by knowledge.
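A minimal sketch of the intended mechanism, assuming rdflib and hypothetical triples rather than the actual Jython agent: a SPARQL query over the semantic model builds a parameter dictionary on demand, so the data set is defined dynamically instead of being hard-coded.

```python
from rdflib import Graph, Literal, Namespace, RDF

EX = Namespace("http://example.org/plant#")
g = Graph()

# Illustrative triples; in practice they come from the instantiated ontologies.
g.add((EX.R101, RDF.type, EX.Reactor))
g.add((EX.R101, EX.hasCapacity, Literal(4000.0)))
g.add((EX.R102, RDF.type, EX.Reactor))
g.add((EX.R102, EX.hasCapacity, Literal(5000.0)))

# Query the semantic model and structure the result as an optimization parameter.
results = g.query("""
    PREFIX ex: <http://example.org/plant#>
    SELECT ?unit ?cap WHERE { ?unit a ex:Reactor ; ex:hasCapacity ?cap }
""")
capacity = {str(row.unit).split("#")[-1]: float(row.cap) for row in results}
print(capacity)   # e.g., {'R101': 4000.0, 'R102': 5000.0}
```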

Discussion
This work introduces the wide intelligent management architecture and its application to acrylic fiber production as a case study. As a result, the production process first creates a process definition (system characterization and flowsheet of the plant), a data definition (database based on the semantic model), and a knowledge conceptualization (a semantic model for representing the concepts and data of the process and system). The standardization of data and concepts has been done using the semantic model to represent the process (ANSI/ISA standards). Thus, using the architecture for data structuring and feeding facilitates the integration and improvement of the life cycle assessment approach. From this point, the acrylic fiber production plant has the basis for developing process automation, process digitalization, or intelligent agents for decision-making. Figure 12 shows the process activities performed in the case study regarding process definition, standardization, optimization, and intelligence (yellow boxes and blue arrows path). Moreover, the acrylic fiber company can perform process improvement, automation, or digitalization based on the current plant status (green arrows pointing to gray boxes). Finally, the wide intelligent management architecture model can be adapted to any company's necessities by choosing how to evolve its processes.

Conclusions
This work presents a novel architecture for technology integration in process activities that systematically manages processes and their related data using formal knowledge. The knowledge models provide additional reasoning capabilities to support decision support systems for enterprise-wide optimization and Industry 4.0 approaches. The architecture comprises and structures three critical systems: the process system, the knowledge system, and the transactional system. As a result, analytical tools belonging to process activities and the transactional data system can be guided by a systematic development framework consolidated with formal knowledge models. Thus, the model improves the interaction among process lifecycles, analytical models, transactional systems, and knowledge. The wide intelligent management architecture model can be seen as an ordered, adaptable, and configurable tool for integrating technologies and process maturity. A critical aspect of this method regards the formal knowledge models, which become usable and reusable during technology integration and maturing. Finally, this method is a new alternative supporting companies when they need to decide "what to do" by explaining to them "why to do it" and "how to do it", taking their processes and characteristics as the starting point.

Data Availability Statement:
The data presented in this study are available on request from the funding institution (see Funding).