Article

A Systematic Model for Process Development Activities to Support Process Intelligence

by Edrisi Muñoz 1,†, Elisabet Capon-Garcia 2,†, Enrique Martinez Muñoz 3,† and Luis Puigjaner 1,*,†
1 Chemical Engineering Department, Process and Environment Engineering Centre, Universitat Politecnica de Catalunya, CEPIMA, EEBE - c. Eduard Maristany 10-14, Ed. I-5, 08019 Barcelona, Spain
2 ABB Switzerland Ltd., Segelhofstrasse 1K, 5405 Baden-Dättwil, Switzerland
3 Academic Area of Engineering and Architecture, Institute of Basic Sciences and Engineering, Autonomous University of the State of Hidalgo, UAEH, Ciudad del Conocimiento, Carretera Pachuca-Tulancingo Km. 4.5, 42164 Hidalgo, Mexico
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Processes 2021, 9(4), 600; https://doi.org/10.3390/pr9040600
Submission received: 30 January 2021 / Revised: 20 March 2021 / Accepted: 24 March 2021 / Published: 30 March 2021

Abstract:
Process, manufacturing, and service industries currently face a large number of non-trivial challenges, ranging from product conception through design, development, commercialization, and delivery in a customized market environment. Thus, industries can benefit from integrating new technologies into their day-to-day tasks, thereby gaining profitability. This work presents a model for enterprise process development activities, called the wide intelligent management architecture model, which integrates new technologies for service, process, and manufacturing companies that strive to find the most efficient way towards enterprise and process intelligence. The model comprises and structures three critical systems: the process system, the knowledge system, and the transactional system. As a result, analytical tools belonging to process activities and the transactional data system are guided by a systematic development framework consolidated with formal knowledge models. Thus, the model improves the interaction among the process lifecycle, analytical models, the transactional system, and knowledge. Finally, a case study is presented in which an acrylic fiber production plant applies the proposed model, demonstrating how the three models described in the methodology work together to systematically reach the desired technology application, in this case life cycle assessment. The results allow us to conclude that the interaction between the semantics of formal knowledge models and the process-transactional system development framework facilitates and simplifies the implementation of new technologies along enterprise development activities.


1. Introduction

Our civilization faces acute and critical challenges, such as climate change, safe drinking water availability, food scarcity, and secure energy supplies, which endanger current and future generations. Therefore, society and industry need to shape their activities based on sustainable principles and to efficiently adopt the rapidly evolving new technologies which can potentially handle the challenges mentioned above. Specifically, this work focuses on the integration of new technologies into the decision-making of the process and manufacturing industries. Thus, the proposed methodology applies to the workflow of any productive sector or area where decision-making plays a crucial role.
As for the process and manufacturing industries, complex decision-making lurks at all enterprise levels and throughout the whole product lifecycle, ranging from product conception, design, and development to production, commercialization, and delivery. The need to consider highly complex scenarios results in involved and non-trivial decision-making. However, the advent of new technologies supports the successful development and systematization of new structures and frameworks for reaching informed, reasonable, and wise decisions. Therefore, this work aims to integrate new technologies systematically into the decision-making workflow of the process and manufacturing industries, and it presents a framework for developing process- and product-related activities. Thus, the proposed model allows unveiling the most efficient way towards integrating enterprise decision support systems and process intelligence into actual enterprise processes.
As discussed in Section 1.1, companies have recently adopted decision support systems to handle the complexity of decision-making. However, such systems only tackle part of the complete enterprise structure, and it is necessary to understand the whole picture to reach sensible solutions, as pointed out in Section 1.2. Therefore, this work combines knowledge management (Section 1.3) and data management (Section 1.4) to propose a framework for reaching the integration of different systems and efficiently applying new technological solutions for decision-making.

1.1. Decision-Making in the Enterprise

Process and manufacturing industries can be regarded as highly involved systems consisting of multiple business and process units. The organization of the different temporal and geographical scales in such units, as well as of the various enterprise decision levels, is crucial to understand and analyze their behavior. The key objectives are to gain economic efficiency, market position, product quality, flexibility, or reliability [1]. Recently, indicators related to sustainability and environmental impact have also been included as drivers for decision-making. The basis for solving an enterprise system problem and further implementing any action is representing the actual system in a model which captures the features relevant to the observer. Such a model is the basis for decision-making, which is a highly challenging task in these industries due to their inherent complexity.
Therefore, companies have devoted efforts to reaching better decisions during the last decades. Indeed, they have invested a large amount of resources in exploiting information systems, developing models, and using data to improve decisions. Decision support systems (DSS) are responsible for managing the data and information necessary for making decisions. Thus, those systems aim to integrate data transactions with analytical models supporting the decision-making activity at different organizational levels. The work in [2] defines DSS as computer systems that aid the management level of an organization by combining data with advanced analytical models. The work in [3] identifies four components of a classic DSS: (i) a sophisticated database for accessing internal and external data, (ii) an analytical model system for accessing modeling functions, (iii) a graphical user interface for allowing humans to interact with the models to make decisions, and (iv) an optimization engine based on mathematical algorithms or intuition/knowledge. Traditionally, DSS focus on a single enterprise unit and lack a vision beyond its boundaries. Thus, DSS rely heavily on rigid data and model structures, and they are difficult to adapt to include new algorithms and technologies. A minimal sketch of these four components is shown below.
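To make the four components above concrete, the following Python sketch wires them together in a few lines. It is purely illustrative: the class and function names are assumptions, not an implementation from [2] or [3].

```python
# Minimal sketch of the four classic DSS components; illustrative names only.
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class Database:                      # (i) access to internal and external data
    records: List[Dict] = field(default_factory=list)
    def query(self, pred: Callable[[Dict], bool]) -> List[Dict]:
        return [r for r in self.records if pred(r)]

@dataclass
class ModelSystem:                   # (ii) analytical modeling functions
    models: Dict[str, Callable] = field(default_factory=dict)
    def run(self, name: str, data: List[Dict]):
        return self.models[name](data)

def optimize(candidates, score):     # (iv) optimization engine (best-of here)
    return max(candidates, key=score)

# (iii) the graphical user interface is reduced to a plain function call:
db = Database([{"unit": "U1", "cost": 3.0}, {"unit": "U2", "cost": 2.5}])
ms = ModelSystem({"cheapest": lambda rows: optimize(rows, lambda r: -r["cost"])})
print(ms.run("cheapest", db.query(lambda r: True)))   # -> {'unit': 'U2', ...}
```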

1.2. Enterprise Integration

Current trends in the process industry outline the importance of being agile and fully integrated to improve decision-making at all scales in the company. Indeed, integration comprises the whole range of organizational activities, from operation to planning and strategy, which differ in physical and temporal scope but are directly related to each other, as decisions made at one level directly affect others. Companies pursuing integration among different decision levels in the production management environment report substantial economic benefits [4,5]. Therefore, coordinating and integrating information and decisions among the various functions is crucial for improving global performance.
The use of standards is the primary method for carrying out enterprise integration. Working groups, committees, and societies have developed these standards in the different geopolitical areas where they apply. The following overview of some widely used standards also serves as a brief introduction to their content.
First, the European Committee for Standardization (CEN) and the European Committee for Electrotechnical Standardization (CENELEC) provide standards to characterize, guide, and rule SMEs’ activities [6]. CEN deliverables comprise European Standards (ENs), draft standards (prENs), Technical Specifications (CEN TSs), Harmonization Documents (HDs), Technical Reports (TRs), and CEN Workshop Agreements (CWAs). Finally, CEN work is coordinated with the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC).
Next, the National Institute of Standards and Technology (NIST) created the Integrated Definition Methods (IDEF). IDEF comprises standards for Function Modeling (IDEF0), Information Modeling (IDEF1), Data Modeling (IDEF1X), Process Modeling (IDEF3), Object-Oriented Design (IDEF4), and Ontology Description (IDEF5), currently maintained by Knowledge Based Systems, Inc. (KBSI) [7]. The standards were funded by, and are now in use at, the United States Air Force and United States Department of Defense agencies. Moreover, many organizations use them for business process capture and improvement.
Following this, the International Electrotechnical Commission (IEC) develops the International Standards and Conformity Assessment, covering areas such as industrial control programming (IEC 61131-3) or field device integration (IEC 61804-2). The standards aim to enable the interoperability, efficiency, and safety of electrical, electronic, and information systems [8].
The International Organization for Standardization (ISO) standards are well-known and widely used, covering management systems, quality management, information security management, etc. [9]. Criteria for integration comprise Enterprise Modelling and Architecture (ISO TC184/SC5/WG1), Electronic Business Extensible Markup Language (ISO 15000), and the Asset Management System (ISO 55003), among others.
The Manufacturing Execution Systems Association (MESA) presents a set of best management practices and information technologies aiming to improve business. MESA focuses on asset performance management, lean manufacturing, product lifecycle management, manufacturing performance metrics, quality, regulatory compliance, and return on investment [10].
Next, the Machinery Information Management Open Systems Alliance (MIMOSA) presents the Open Standards for Physical Asset Management for information management (IM) and information technologies (IT) applied to manufacturing environments [11]. MIMOSA standards have recently focused on enabling digital twins, big data, the industrial internet of things, and analytics specifications.
The Object Management Group (OMG) is dedicated to developing technological standards for enterprise integration and distributed broad interoperability [12]. The OMG standards comprise the Business Process Model and Notation (BPMN), the Common Object Request Broker Architecture (CORBA), the Common Warehouse Metamodel (CWM), the Data-Distribution Service for Real-Time Systems (DDS), the Unified Modeling Language (UML), and the Model Driven Architecture (MDA), applied to software visual design, execution, and support.
Next, the Process Industry Practices (PIP) consortium collaborates to define common industry standards and best practices focused on design, maintenance, and procurement activities [13]. Besides, PIP practices facilitate knowledge capture for process control, mechanical and data management disciplines, and Piping and Instrumentation Diagrams.
A key element of integration directly points to enterprise models: computational applications within organizations aiming to represent processes, activities, resources, or physical phenomena. These models are essential for driving design, analysis, management, and prognosis in enterprise functions. Nevertheless, the spread of these models confronts several issues in practice. First of all, the independent creation of systems supporting enterprise functions during past years has resulted in heterogeneous enterprise models, the so-called correspondence problem. When different enterprise models refer to the same concept, for example an activity, each model will probably use a different name for it, such as activity, operation, or task. Therefore, most of the time, interpreting agents are necessary to allow communication among those enterprise functions. However, no matter how rational the idea of renaming the concepts is, organizational barriers usually impede it. Furthermore, these representations lack an adequate specification of what the model objects mean; they lack an actual semantic definition of the terminology. Instead, concepts are poorly defined, and their interpretations overlap, leading to inconsistent understandings and uses of the knowledge. Finally, the cost of designing, building, and maintaining a model of the enterprise is high. Each model tends to be unique to the enterprise, and objects are enterprise-specific.
Therefore, some efforts have addressed the issues mentioned above using model standardization. In particular, the American National Standards Institute (ANSI) developed the Instrumentation, Systems and Automation Society (ISA) standards, known as ANSI/ISA standards, for automation and control within the enterprise [14], with wide recognition for process integration. Figure 1 presents the main integration aspects of these standards.
On the other hand, the Purdue reference model provides an “environment” for discrete parts manufacturing and serves as the basis for the other models [15]. In this case, certain activities are identified as directly related to shop floor production and organized in a six-level hierarchical model, as depicted in Figure 2. Specific applications may require more or fewer than six levels, but six was deemed sufficient for identifying where integration standards are needed. The following list shows the name of each level and gives its primary responsibility.
  • Level 6 Enterprise: Corporate Management (External Influences)
  • Level 5 Facility: Planning Production
  • Level 4 Section: Material/Resource Supervision
  • Level 3 Cell: Coordinate Multiple Machines
  • Level 2 Station: Command Machine Sequences
  • Level 1 Equipment: Activate Sequences of Motion (Plant Machinery and Equipment)
These activities apply to manual operations, automated operations, or a mixture of the two at any level. It is worth mentioning the possible subdivision of the six levels’ tasks into control enforcement, systems coordination and reporting, and reliability assurance. In the context of any large industrial plant, or an entire industrial company based at one location, these tasks would take place at each level of the hierarchy.
Thus, the computer-integrated manufacturing (CIM) reference model stands as a reference for computer-integrated manufacturing. It consists of a detailed collection of generic information management and automatic control tasks and their necessary functional requirements for a manufacturing plant. Nevertheless, the scope of the CIM reference model is limited to the integrated information management and automation system elements. As a result, the company’s management, including the planning, financial, purchasing, research, development, engineering, and marketing and sales functions, is treated as a set of external influences.
The adoption of standard models is the basis for the integration of enterprise processes. Thus, decision-making heavily relies on both the process models and the technologies which tackle the problem. Therefore, this work considers the systematization of data and knowledge management to reach integration in decision-making.

1.3. Knowledge Management

The development of better practices, strategies, and policies is highly related to how organizations use experiences and ideas from customers, suppliers, and employees. Thus, capturing, storing, sharing, and applying knowledge enables the construction of organizational intelligence and intellectual assets. Two types of knowledge sources can generally be defined: tangible and intangible. On the one hand, intangible assets are related to skills, expertise, and human resources knowledge. On the other hand, tangible assets are related to data, information, and historical records found in databases of the organization’s customers, suppliers, and employees [16].
Knowledge management tools can be based on distributed databases, ontologies, or network maps. This work focuses on the development of formal domain ontologies as the primary technology for knowledge management. The terms Semantic Web and Web 3.0 are also used to refer to this technology. Ontologies and logic serve as conceptual graphs for knowledge representation in constructing computable models within a specific domain [17]. Additionally, ontologies are defined as formal structures facilitating the acquisition, maintenance, access, sharing, and reuse of information [18,19]. Over the last decades, the Semantic Web has pursued theoretical bases for developing knowledge-based application software, through which one can communicate:
  • a shared and common understanding of a domain among people and across application systems and
  • an explicit conceptualization that describes the semantics of the data.
Finally, knowledge management systems benefit from ontologies that semantically enrich information and precisely define the meaning of various information artifacts.
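As an illustration of how a formal domain ontology makes conceptualizations explicit and shareable, the following sketch uses the owlready2 Python library; the library choice, IRI, and the class and property names are assumptions for demonstration only.

```python
# A minimal domain ontology: classes, an object property, and a data property.
from owlready2 import get_ontology, Thing, ObjectProperty, DataProperty

onto = get_ontology("http://example.org/plant.owl")   # hypothetical IRI
with onto:
    class ProcessUnit(Thing): pass
    class Material(Thing): pass
    class consumes(ObjectProperty):      # relation between concepts
        domain = [ProcessUnit]; range = [Material]
    class capacity_kg(DataProperty):     # datatype property (sensor/spec data)
        domain = [ProcessUnit]; range = [float]

reactor = ProcessUnit("Reactor01")       # instances (individuals)
acn = Material("Acrylonitrile")
reactor.consumes = [acn]
reactor.capacity_kg = [5000.0]
onto.save(file="plant.owl")              # shareable, machine-readable model
```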

1.3.1. Bloom’s Cognition Taxonomy

Bloom’s Taxonomy is a framework that presents how educational objectives can guide and structure educational goals. The latest revision of this framework, entitled A Taxonomy for Learning, Teaching, and Assessing, defines the cognitive processes related to knowledge [20], as shown in Figure 3. The framework considers six major categories, with subactivities for better understanding, as follows:
  • Remember: Recognizing, Recalling.
  • Understand: Interpreting, Exemplifying, Classifying, Summarizing, Inferring, Comparing, Explaining.
  • Apply: Executing, Implementing.
  • Analyze: Differentiating, Organizing, Attributing.
  • Evaluate: Checking, Critiquing.
  • Create: Generating, Planning, Producing.
Finally, the framework defines four types of knowledge used in cognition (a minimal encoding of both dimensions is sketched after the list):
  • Factual Knowledge
    • Knowledge of terminology
    • Knowledge of specific details & elements
  • Conceptual Knowledge
    • Knowledge of classifications and categories
    • Knowledge of principles and generalizations
    • Knowledge of theories, models, and structures
  • Procedural Knowledge
    • Knowledge of subject-specific skills and algorithms
    • Knowledge of subject-specific techniques and methods
    • Knowledge of criteria for determining when to use appropriate procedures
  • Metacognitive Knowledge
    • Strategic Knowledge
    • Knowledge about cognitive tasks (appropriate contextual and conditional knowledge)
    • Self-knowledge
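Seen as a data structure, the two dimensions of the taxonomy can be encoded compactly, e.g., for tagging knowledge artifacts in a knowledge system. The following encoding is purely illustrative and is not part of the original framework.

```python
# The six cognitive processes and four knowledge types of Bloom's taxonomy.
from enum import Enum

class CognitiveProcess(Enum):
    REMEMBER = 1; UNDERSTAND = 2; APPLY = 3
    ANALYZE = 4; EVALUATE = 5; CREATE = 6

KNOWLEDGE_TYPES = {
    "factual": ["terminology", "specific details and elements"],
    "conceptual": ["classifications and categories",
                   "principles and generalizations",
                   "theories, models, and structures"],
    "procedural": ["subject-specific skills and algorithms",
                   "subject-specific techniques and methods",
                   "criteria for selecting appropriate procedures"],
    "metacognitive": ["strategic knowledge",
                      "knowledge about cognitive tasks",
                      "self-knowledge"],
}

# Tag an artifact with a knowledge type and a cognitive process:
artifact = {"name": "master recipe", "type": "procedural",
            "process": CognitiveProcess.APPLY}
print(artifact)
```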

1.4. Transactional System and Data Management

The performance of enterprise processing activities highly depends on the transactional system’s capacity and on how well the data are managed.

1.4.1. Transactional System

A transactional system comprises multiple operations that collect, store, modify, and retrieve data transactions within an enterprise. These systems must support a high number of concurrent users and transaction types over time. Besides, enterprise data are identified by their purpose and type, comprising transactional, analytical, and master data. First, transactional data support the daily operations of an organization; they refer to data created or modified by the operational systems, such as time, place, number, date, price, payment method, etc. Next, analytical data are defined as numerical measurements that support activities such as decision-making, reporting, querying, or analysis. Thus, analytical data are stored and structured as numerical values in some dimensional model. Finally, master data represent the key business entities and involve creating a single view of the data in a master file or master record. Master data comprise data about sites, inventory levels, demand, products, batches, etc.
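The distinction among the three data purposes can be illustrated with three hypothetical record types; the field names below are assumptions, not a prescribed schema.

```python
# Transactional, analytical, and master data as illustrative record types.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class TransactionalRecord:      # created/modified by operational systems
    timestamp: datetime
    place: str
    quantity: float
    price: float

@dataclass
class AnalyticalFact:           # numerical measurement in a dimensional model
    measure: str                # e.g., "throughput"
    value: float
    dimensions: dict            # e.g., {"site": "BCN", "month": "2021-03"}

@dataclass
class MasterRecord:             # single view of a key business entity
    entity: str                 # e.g., "product"
    key: str                    # e.g., "acrylic-fiber-A"
    attributes: dict
```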

1.4.2. Data Management

Enterprise data management aims to govern business data by retrieving, standardizing, storing, integrating, structuring, and disseminating requested data. The transactional system supports data management by enhancing data transaction features for control, analysis, and decision-making. Thus, the essential feature of data management is communicating all data from different data sources (sensors) and fragmented control systems to all enterprise applications, processes, and entities that require them. Another critical aspect of data management is to store data securely and make them available when needed.

2. Materials and Methods

New technologies can accomplish their implementation life cycle on a robust base supported by the proposed architecture, named the Wide Intelligence Management Architecture. The presented architecture comprises three central systems of development activities: the process, knowledge, and transactional systems, shown in Table 1. First, the process system model introduces seven development activities systematically ordered towards formalized process maturity. The activities range from process definition, which formalizes the main aspects of how enterprise processes perform, to process intelligence, where human and environmental behavior is taken into account to enrich development activities. Next, the knowledge system model aims to strengthen the integration through formalized knowledge from three main perspectives: the domain area, the expertise area (functional activities), and the experience area, enhancing expertise knowledge with success and failure cases. Finally, the third model is related to the transactional data system. This model comprises four main areas: data definition, data improvement, data standardization, and data feeding.

2.1. Process System Model

The Modular Process Reference Model aims to define a coherent and structured manner of process evolution to integrate new technologies and business activities. The reference model comprises seven modules defined by the use of analytical tools and data linked to enterprise activities, represented in Figure 4.

2.1.1. Process Definition

This module provides a set of activities aiming to assess enterprise process performance and to support systematic formalization. On the one hand, the module verifies whether, and measures how closely, the existing formalized processes follow the current enterprise activities; this is named the verification phase. Otherwise, the methodology aims to define, design, and standardize the enterprise processes, which is called the definition phase. Thus, a process design phase takes place, followed by validation and verification phases. The steps mentioned above (verification and definition) must be applied to the enterprise transactional system in parallel with the processes.

2.1.2. Process Improvement

The process improvement module makes an exhaustive study of the current processes to perform a re-design phase. This re-design phase considers new tendencies in standards, methods, and technologies. Moreover, good manufacturing practices (GMPs) and standard operating procedures (SOPs) are of paramount importance in the improvement task. At the same time, the transactional data system must pass through a re-design phase to support the process improvements realized. Finally, updates and documentation regarding enterprise processes, resources, and data improvements must follow, with a focus on management.

2.1.3. Process Standardization

This module surveys standards and models strongly related to the main enterprise processes to consider their future implementation. Finally, as in the previous module, the data and structures from the transactional system are standardized accordingly.

2.1.4. Process Optimization

The process optimization module addresses processes by using different rigorous and non-rigorous optimization approaches. The process optimization phase aims to provide the data and information necessary, through fundamental calculations based on engineering approaches, to decide on specific objectives and goals within processes. Thus, as a first step, knowledge, data, and information on the processes and systems are crucial to understanding the problem. Model design then proceeds by defining a single- or multi-objective function, a single or multiple purposes, and single or multiple scenarios, as convenient. Finally, WIMa’s optimization solutions are enriched by semantic models of mathematics and processes, allowing easier integration within the enterprise.

2.1.5. Process Automation

The process automation module comprises applications such as business process automation (BPA), digital automation (DA), and robotic process automation (RPA). First, BPA makes use of advanced technologies to reduce human intervention in processing tasks across the enterprise. Thus, BPA aims to enhance efficiency by automating (initializing, executing, and completing) the whole or some parts of a complicated process. Next, DA builds on the BPA system, aiming to digitalize and improve process automation, thus meeting dynamic market and customer demands. Finally, RPA is carried out by software agents that mimic human actions within digital systems, optimizing business processes by using artificial intelligence agents.

2.1.6. Process Digitalization

The digitalization module creates a digital integration of all the systems found in business processes. Digitalization encompasses process simulation, industrial augmented reality, predictive systems, proactive systems, the industrial internet of things, expert systems, and process virtual twins. Finally, WIMa’s solutions facilitate the development of digitalization technologies thanks to the semantic structure that supports easy access to raw or structured data and to the formal knowledge of processes.

2.1.7. Process Intelligence

This module aims to understand the principles of human behavior and reasoning in order to develop programs for solving problems by machines, using artificial intelligence tools, computational intelligence systems, and formal knowledge models. One of the first tasks is to manage structured knowledge, which can facilitate and empower the system’s understanding. Furthermore, this module is directly affected by the transactional system’s efficiency, which concerns data collection, structuring, and communication.

2.2. Data System Model

The transactional system architecture sets up the data management activity. Data management comprises five main activities: data system definition, data system improvement, data standardization, data integration and feeding, and data system dynamics, as shown in Figure 5.

2.2.1. Definition

This activity takes into account the process definition (link) to create the data model. The data model establishes the relationship between the process model and the data generated by signal sources, such as process equipment, environment sensors, suppliers, or customers. Moreover, transaction data protocols are defined, and the supporting physical architecture must be capable of carrying those protocols. Finally, the data management plan is set, providing guidelines and procedures for enhancing security, compliance, quality, efficiency, and access.

2.2.2. Improvement

The data improvement activity refines the relationship between data signals and process models and identifies missing and necessary data. The data system requirements stem from the process improvement activity. The required data can then be obtained by making analytical computations, adding new technologies, or adding other sensors.

2.2.3. Standardization

The data standardization activity is the process of setting data systems into standard formats. It comprises the selection of a common data language, structure, engineering metrics, and time/space scales. Besides, data conciliation is a crucial task of data integration. Finally, the data structure comprises four language categories, namely data definition language, data query language, data manipulation language, and transaction control language, in compliance with the process database system; these are exemplified below.
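The four language categories can be illustrated with an in-memory SQLite database; the table and column names below are hypothetical.

```python
# DDL, DML, DQL, and TCL statements on an in-memory SQLite database.
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
# Data definition language (DDL): define the structure
cur.execute("CREATE TABLE flow (id INTEGER PRIMARY KEY, name TEXT, kg REAL)")
# Data manipulation language (DML): modify the content
cur.execute("INSERT INTO flow (name, kg) VALUES (?, ?)", ("Acrylonitrile", 120.5))
# Data query language (DQL): retrieve the content
print(cur.execute("SELECT name, kg FROM flow").fetchall())
# Transaction control language (TCL): commit or roll back units of work
con.commit()
con.close()
```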

2.2.4. Integration and Feeding

Data integration and feeding aim to link and integrate the transactional system with analytical systems or models. On the one hand, integration connects data among devices and analytical models: devices with analytical models, devices with devices, and analytical models with analytical models. On the other hand, data feeding is in charge of structuring, sending, and delivering data. Thus, the integration and feeding tasks focus on supporting optimization-based decision-making and process action execution that maximize the business’s benefit. Besides, material resource planning, distribution requirement planning, or enterprise resource planning systems are set up as part of the analytical procedure if required. As a result, the definition of the data-to-parameters and data-to-sets mappings required by the models is established. Finally, at some point, this task considers the feeding and integration of virtual systems.

2.2.5. Dynamics

The last task of the data system model replaces fixed data parameters and selected data sets with a mechanism for data collection and structuring. Usually, these mechanisms refer to algorithms, which can become intelligent agents. Finally, the dynamics can enhance the database system, reaching intelligent databases. Intelligent databases are in constant change, adapting to the dynamics of the process systems and other business features.

2.3. Knowledge System Model

The knowledge system model organizes how the cognition process evolves, focusing on knowledge management. This cognition process integrates and adapts Bloom’s taxonomy of knowledge types (Section 1.3.1) and ontologies, applied to computational systems. Thus, the resulting model comprises four main modules: conceptual, procedural, expertise, and metacognitive, as shown in Figure 6. Finally, the knowledge system model acts as an integrator between the process and data system models.

2.3.1. Conceptual

The conceptual knowledge module aims to develop a formal domain ontology. The ontology considers domain terminology, elements, categories, principles, and generalizations, which can cover chemical, physical, and mechanical phenomena. All this domain information is retrieved from standards, books, handbooks, etc., derived from Section 2.1.1 and Section 2.2.1. Additionally, information gathering can consider the good manufacturing practices and standard operating procedures derived from Section 2.1.2. Thus, the ontology models those elements in the form of classes (elements involved in the domain area), data properties (data from specifications and sensors), class properties (the relations among elements), axioms (assumptions on behavior), and rules (restrictions on behavior).

2.3.2. Procedural

Procedural knowledge focuses on the interaction that processes and humans have through analytical methods and tools, usually derived from procedural models, to perform the processing activity. Thus, the primary sources of information are techniques, methods, theories, models, structures, and procedural recipes (general, site, master, and control recipes). Besides, previous knowledge is stored and well identified within the data system. Finally, procedures, analytical models, and analytical tools are translated into computational algorithms for the reasoning task.

2.3.3. Expertise

Knowledge expertise aims to collect and manage the criteria that determine the best use of analytics or procedures. These criteria are based on learning from good habits and how to replicate them, as well as on learning from bad habits and how to avoid them. Finally, expertise knowledge considers creating multiple scenarios to account for the power of knowledge about relations and behavior in the domain, based on the conceptual and procedural knowledge activities.

2.3.4. Metacognitive

Metacognitive knowledge aims to go beyond the current understanding. The current knowledge system is in charge of this metacognitive process, whose main task comprises the characterization, classification, reasoning, and creation of knowledge to enhance and reach process intelligence.
Finally, Figure 7 represents the interaction among the three system models. The process system model drives the general maturing process along the diagonal, going from process definition to process intelligence. Next, the X-axis represents how the data system model supports the process and matures, starting from data definition and reaching data dynamics. Last, the Y-axis shows the development of the knowledge system model, which interacts with the other two system models and allows an intelligent technological implementation, going from conceptual knowledge to metacognitive knowledge.

3. Results

This section presents the application of the methodology previously described in a process manufacturing case study.

3.1. Case Study

This case study aims to illustrate the WIMa framework’s application for the lifecycle assessment (LCA) of an acrylic fiber production plant. The objective is to demonstrate how the three models described in Section 2 work together to reach the desired technology application (in this case, LCA). Life cycle assessment requires a correctly defined process and consistent data to provide sensible results. Furthermore, this is a meaningful example to understand the framework at the process definition level (Section 2.1.1).
The acrylic fiber polymerization process considered in this work was initially presented in [21]. Acrylic fiber production takes place in 14 stages in a batch production plant, represented in Figure 8, which involves different material and energy resources. Two alternative production processes are assessed: acrylic fiber A uses acetone as a solvent in the polymerization, and acrylic fiber B uses benzene. This case study describes the process and data definition required to perform a life cycle assessment according to the WIMa procedure; the actual evaluation is beyond the scope of this contribution. For further details regarding the life cycle assessment, please refer to the work in [21].
First of all, the requirements and objective of the case study are defined. Thus, suppose that “Polymer A.C.” wants to develop a high-level decision model agent based on optimization approaches. Besides, they want to standardize their processes to maintain relations with their industrial partners. These requirements comprise the performance of certain levels presented in Table 1:
  • Level one (L1) regarding process definition, conceptual knowledge (process principles, process standards, and data standards), and data definition.
  • Level three (L3) regarding process standardization and data standardization.
  • Level four (L4) regarding process optimization, procedural knowledge, and integration and feeding data.
  • Level seven (L7) regarding process intelligence, metacognitive knowledge, and data dynamics.
Next, the performance of every activity is presented in detail, structured according to the three main models: the process model, the data model, and the knowledge model.

3.1.1. Process Model of Polymer Plant

L1. Process definition: Process definition tackles the formalization of the polymerization process itself according to the technical requirements. On the one hand, the process consists of a complete polymerization plant that produces acrylic fibers using acetone as solvent. The existing documentation comprises the plant flowsheet (Figure 8) and the process recipe, which are the current formalization of the plant process activities. Finally, a characterization of the organization is performed, and the results, presented in Table 2, Table 3 and Table 4, summarize the features related to the general, tactical, and strategic levels of the organization in which the polymerization plant is installed. Overall, the previous information provides clear boundaries of the process and includes all material and energy flows required to perform a life cycle assessment. Therefore, the process considered complies with the process definition model requirements.
L3. Process standardization: The standardization requires following the ANSI/ISA 88 standard. Thus, a semantic model based on the ANSI/ISA 88 standard, the so-called Batch Process Ontology (BaPrOn), is used for the instantiation task, which allows a faster and more accurate way to standardize the process. As a result, the formulas of the master process recipes were extracted, as shown in Table 5 and Table 6. The production plant considers four stages in batch production mode, eight recipe unit procedures, and six recipe operations, as well as twenty-seven different resources (considering material and energy flows). The instantiation of the master recipe results in a set of recipe unit procedures and recipe operations, along with their formula and input, output, and other process parameters. Thus, the environmental performance metric parameters are included. Overall, the instantiation results in 934 instances concerning 295 classes, 257 object properties, and 33 data properties. The description logic expressivity of the ontology is SHIN(D), where S refers to attributive language with complement of any concept allowed, not just atomic concepts (ALC); H refers to role hierarchy (subproperties); I refers to inverse properties; N refers to cardinality restrictions, a special case of counting quantification; and (D) refers to the use of data-type properties, data values, or data types. As an example of class instantiation, the RawMaterial class has Input1_1 (Acrylonitrile), Input1_2 (MethylMethacrilate), Input1_3 (VinilChloride), and Input1_4 (Solvent-Acetone) as instances; a sketch of such an instantiation is given below.
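The instantiation step can be sketched as follows, again assuming the owlready2 library; the BaPrOn IRI is hypothetical, while the class and instance names follow the example above.

```python
# Instantiating the RawMaterial class with the four inputs named in the text.
from owlready2 import get_ontology, Thing

bapron = get_ontology("http://example.org/BaPrOn.owl")   # hypothetical IRI
with bapron:
    class RawMaterial(Thing): pass

inputs = {"Input1_1": "Acrylonitrile", "Input1_2": "MethylMethacrilate",
          "Input1_3": "VinilChloride", "Input1_4": "Solvent-Acetone"}
for name, material in inputs.items():
    individual = RawMaterial(name)       # one instance per raw-material input
    individual.label = [material]

print([i.name for i in RawMaterial.instances()])
```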
L4. Process optimization: This case study tackles the optimization of the multistage batch plant scheduling problem under sequence-dependent changeovers presented by Capon et al. (2011, 2012). The problem considers the following process operations planning data, all provided by the data model: (i) the time horizon, (ii) the set of product recipes, (iii) the equipment technologies for the processing stages, (iv) the product demands, (v) the changeover methods, (vi) economic data related to costs and prices, and (vii) environmental data related to raw material, equipment, and product manufacturing environmental interventions. Four objective functions are relevant for decision-making: productivity (P), total environmental impact (TEI), makespan (M), and total profit (TP). The problem is modeled using an immediate precedence mathematical formulation, managing the possible use of different product changeover cleaning methods; multiple alternative pieces of equipment at each stage; limited storage policies; and product batching, allocation, and timing constraints. Such a problem representation is suitable for applying any of the three different optimization strategies considered in this case study. Specifically, we solve the multi-objective problem using mathematical programming with a normalized constraint method (MP), a genetic algorithm (GA), and a hybrid optimization approach (HA). Each solution method’s suitability depends on the combination of problem features and objective function and is further explored in level 7, related to process intelligence. At this level, the solution techniques are considered independently, and the knowledge model supports the implementation of the optimization by providing adequate data from the data model.
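To make the immediate-precedence representation concrete, the following is a minimal single-unit, single-objective sketch using the PuLP library (an assumption; the paper does not name its solver stack). The batch data and changeover times are illustrative, and the full multistage, multi-objective formulation of the case study is not reproduced here.

```python
# Immediate-precedence scheduling with sequence-dependent changeovers (MILP).
import pulp

batches = ["B1", "B2", "B3"]
proc = {"B1": 5, "B2": 3, "B3": 4}                 # processing times (h)
cho = {(i, j): 1 for i in batches for j in batches if i != j}  # changeovers (h)

m = pulp.LpProblem("immediate_precedence", pulp.LpMinimize)
X = pulp.LpVariable.dicts("X", list(cho), cat="Binary")  # i immediately before j
C = pulp.LpVariable.dicts("C", batches, lowBound=0)      # completion times
MS = pulp.LpVariable("makespan", lowBound=0)
BIG = sum(proc.values()) + sum(cho.values())

for j in batches:   # at most one immediate predecessor and successor each
    m += pulp.lpSum(X[(i, j)] for i in batches if i != j) <= 1
    m += pulp.lpSum(X[(j, k)] for k in batches if k != j) <= 1
m += pulp.lpSum(X.values()) == len(batches) - 1          # one single sequence

for (i, j) in cho:  # timing: j ends after i plus changeover plus processing
    m += C[j] >= C[i] + cho[(i, j)] + proc[j] - BIG * (1 - X[(i, j)])
for j in batches:
    m += C[j] >= proc[j]
    m += MS >= C[j]

m += MS             # objective: minimize makespan
m.solve(pulp.PULP_CBC_CMD(msg=False))
print({b: C[b].value() for b in batches}, MS.value())
```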
L7. Process intelligence: This level comprises the development of intelligent agents for the scheduling problem. Indeed, different problem representations can be used for optimization purposes, and which is the most suitable depends on the problem features. That is precisely the function provided by the process intelligence framework using agents. The agents perform different functions and are integrated, allowing communication among them. The agents’ functions include communication, search, classification, and solution. A common vocabulary is necessary to achieve communication, and all the agents rely on the ontologies described in the knowledge model.
The solution mechanism consists of the following steps: (i) problem definition, (ii) a modeling process for reaching a problem model, (iii) model analysis, (iv) model solution, and (v) problem implementation. Based on the answers in (iv), inferences can be made and decisions reached about the problem (v). The assessment of the decisions’ goodness feeds back to the intelligent system to enable learning.
Next, the communication agent prepares the classification agent for analyzing the problem using a knowledge-driven classification procedure. Then, a solution strategy is proposed based on a similitude measure resulting from comparing the problem instance with (i) existing problems tackled in the past and stored in the database and (ii) existing problem approaches from the state of the art. The problem instances solved in the original papers are the basis for the database of this problem. A total of 415 problem solutions are included, with different problem descriptions and objective function values. As a result of the similitude measure, a set of ranked solution approaches is proposed to the decision-maker (a minimal sketch is shown below). Finally, a solution agent uses the solution algorithms to reach the optimal solution for the problem instance. The solution agent also sends the problem solutions to the decision-maker and stores them in the database for future reasoning.
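The similitude-based selection can be sketched as a nearest-neighbor ranking over past solutions; the features, weights, and database rows below are illustrative assumptions, not the actual measure used in the case study.

```python
# Rank solution methods by similarity between a new instance and past ones.
history = [   # (problem features, best-performing method) from past solutions
    ({"batches": 8,  "objective": "makespan"},     "MP"),
    ({"batches": 60, "objective": "makespan"},     "GA"),
    ({"batches": 25, "objective": "productivity"}, "HA"),
]

def similitude(a, b):
    size = 1.0 / (1.0 + abs(a["batches"] - b["batches"]))   # size closeness
    obj = 1.0 if a["objective"] == b["objective"] else 0.0  # same objective?
    return 0.5 * size + 0.5 * obj

new_instance = {"batches": 50, "objective": "makespan"}
ranked = sorted(history, key=lambda h: similitude(new_instance, h[0]),
                reverse=True)
print([method for _, method in ranked])   # -> ['GA', 'MP', 'HA']
```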
The framework was tested with new problem instances, and the results are shown in Table 7. The first column describes the problem size (number of batches for each problem). The second column specifies the objective function. The third column presents the selected solution implementation method, while the fourth column includes the objective function value. Finally, the fifth column stands for the distance to the best optimal solution found. For small problem instances, the rigorous mathematical programming approach is selected, whereas problem instances with many variables are solved using the genetic algorithm. The objective function also plays an essential role in the selection of the solution strategy; indeed, for productivity maximization, the hybrid approach is selected. In most cases, the solution proposed by the agent-based system is within 5% of the optimal solution. Overall, this framework stands for a systematic approach to scheduling model selection and solution implementation, thus supporting high-level decisions by engineers who do not need to have a thorough understanding of advanced optimization techniques.
The different agents are programmed in Jython, which combines the Python programming language with Java APIs for communicating with the ontological models.

3.1.2. A Knowledge Model of the Polymer Plant

L1. Conceptual knowledge: A knowledge system harmonizes and manages sets of valuable information, making them accessible for use for specific purposes. In this case study, knowledge conceptualization uses the Enterprise Ontology Project (EOP) [22]. EOP is an ontology containing three active ontologies: the batch process ontology, the environmental ontology, and the enterprise ontology.
First, the batch process ontology (BaPrOn) tackles features such as the physical, procedural, recipe, and process models based on the ANSI/ISA 88 standard. It focuses on the production operation management of batch processes. Next, the environmental ontology (EVO) considers life cycle assessment and environmental impact category features, which allow tracing and calculating the environmental impact produced by product or process activities. Finally, the enterprise ontology project (EOP) is based on the ANSI/ISA 95 standard. It considers the integration of enterprise activities, such as quality, maintenance, and inventory management. Additionally, EOP also considers financial features to tackle supply chain management activities.
Besides, the EOP model takes into account and models the following key knowledge:
  • Production system characterization: comprising physical, procedural, and recipe (site and general) models.
  • Products contemplated: according to the processing order activity in the industry, the recipes define the production requirements and production path for the products in the physical, process, and recipe (master and control) models.
  • Resource availability and plant status: provided by the process management and production information management activities.
Finally, Figure 9 shows the first classes found in the taxonomy of the enterprise ontology project.
L4. Procedural knowledge: Mathematical programming has been chosen as the strategy for making optimization knowledge explicit and available. Thus, this case study makes use of the mathematical modeling ontology (MMO) [23,24] and the operation research ontology (ORO) [25].
On the one hand, MMO aims to represent knowledge of the mathematical domain based on mathematical structures comprising elements, terms, and operations. The mathematical term is the atomic part of a mathematical expression. Mathematical elements and expressions are related through mathematical operations, which can be of logic or algebraic types. An element or expression can carry a specific conceptual meaning, such as a processing time, the opening value of a valve, an effort calculation equation, etc. In the same manner, an element or expression has a behavior that is related to variables, constant values, etc. Finally, MMO allows the definition of object-oriented mathematical modeling, relating mathematical elements and expressions with concepts from other semantic representations. In this case study, MMO is integrated with EOP, which allows linking mathematical models and equations with instances of the acrylic fiber process. Figure 10 shows the first classes found in the taxonomy of the mathematical modeling ontology.
On the other hand, ORO aims to capture the knowledge of the operations research area, a branch of mathematics. This ontology structures the mathematical expressions fed by MMO in the form of mathematical programming, which allows the formal study and solution of complex problems for the decision-making activity. As a result, an enriched semantic structure is obtained. It considers the main parts of a mathematical program, such as the objective function in the form of an equation and a set of constraints in the form of mathematical equations. In the same manner, logic and algebraic operations are supported by MMO. Figure 11 shows the first classes found in the taxonomy of the operation research ontology.
L7. Metacognitive knowledge: This level aims to create an autonomous problem definition agent to construct a semantically enriched problem statement [26]. The agent works in a semantic environment where machines can access explicit knowledge codified in Python and Jython. The strategy comprises the following: (1) semantic definition of the system; (2) recognition of the current situation; (3) setting of the key process features and variables; (4) setting of confidence intervals for the monitoring task; (5) searching for relations among the key features; and (6) definition of the problem statement.
First, the semantic definition of the system refers to Table 2, Table 3 and Table 4, presented in Section 3.1.1. Based on the system instantiation, the current process is introduced semantically, and indicators, related key features, and engineering metrics are set, such as resource availability, energy consumption, unfulfilled demand, and desired cleaning overtimes. Table 8 shows a brief example of the resulting system setup for monitoring. This table performs a SWOT analysis defining strengths (S), weaknesses (W), opportunities (O), or threats (T). The following two rows show the indicators and related features coming from classes representing the process domain concepts. The next row shows the engineering metrics associated with the indicators. Finally, the last two rows refer to the upper and lower bound values defined for evaluating the current performance.
Next, the intelligent agent defines the optimization goal statements (maximization or minimization). Then, using the previous decision variable definitions, the system is ready to construct the semantic problem statement definition. Finally, the agent fills the problem statement template automatically and presents it in the form of natural language, as follows:
Empty template: “Taking into account -EOP classes found as key variables- variables, and -EOP classes found as key parameters- parameters; -Goal statement- -EOP class defined as decision variable- related to -EOP class defined as an indicator- indicator.”
Filled template: “Taking into account Processing start time, Storage level variables, and Maximum storage capacity, Batch processing time, Batch due date parameters; Minimize Makespan related to Number of late jobs indicator.”
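The filling step itself amounts to slotting ontology query results into the fixed template. A minimal sketch follows, with the slot values hardcoded to those of the filled template above; in the actual agent they would be retrieved from the EOP ontology.

```python
# Fill the natural-language problem statement template from named slots.
TEMPLATE = ("Taking into account {variables} variables, and {parameters} "
            "parameters; {goal} {decision} related to {indicator} indicator.")

slots = {
    "variables": "Processing start time, Storage level",
    "parameters": "Maximum storage capacity, Batch processing time, "
                  "Batch due date",
    "goal": "Minimize",
    "decision": "Makespan",
    "indicator": "Number of late jobs",
}
print(TEMPLATE.format(**slots))
```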
At this level, intelligent agents provide additional capabilities for reasoning using semantic technologies. The main task focuses on decision support for Industry 4.0 microenvironments.

3.1.3. The Data Model of the Polymer Plant

L1. Data definition: The data definition is concerned with the data collection and the data system architecture. Accordingly, the transactional system needs to be verified. In this case study, the information related to the recipe, namely the energy and material flows, is stored in a Structured Query Language (SQL) database. All flows are listed, identified, and quantified in the database, and their sign (positive or negative) indicates whether they enter or leave the process boundaries. The data relating to the material and energy flows stem from the factory floor, and only process engineers have access and permission to modify the data.
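An illustrative schema for such a flow database is sketched below with SQLite; the table and column names are assumptions, while the sign convention follows the description above.

```python
# Recipe flow records with signed amounts (+ enters, - leaves the boundaries).
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE recipe_flow (
    id        INTEGER PRIMARY KEY,
    stage     INTEGER NOT NULL,   -- one of the 14 production stages
    flow_name TEXT    NOT NULL,   -- material or energy flow
    amount    REAL    NOT NULL,   -- signed quantity
    unit      TEXT    NOT NULL)""")
con.execute("INSERT INTO recipe_flow (stage, flow_name, amount, unit) "
            "VALUES (1, 'Acrylonitrile', 120.5, 'kg')")
con.execute("INSERT INTO recipe_flow (stage, flow_name, amount, unit) "
            "VALUES (14, 'Acrylic fiber A', -100.0, 'kg')")
for row in con.execute("SELECT stage, flow_name, amount, unit FROM recipe_flow"):
    print(row)
```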
L3. Data standardization: This level aims to standardize the data properties, which comprise data metrics, data language, and data structure. In general, these properties are related to specifications within the processing system, and many of them are ruled by the implemented technology. In contrast, this methodology pursues defined, chosen, and agreed-upon data properties for the processing system. In this particular case study, the ANSI/ISA standards define those properties, standardizing the data through the instantiation process. The data model provides specifications for the developed interfaces and software requirements. Table 9 presents the properties and details of the Boolean and Direction Type data from EOP (based on ANSI/ISA standards).
Next, Table 10 presents data details from the polymer process.
L4. Data integration and feeding: This level performs the definition of the data and data sets required by the optimization software or other software. We consider software specialized in mathematical programming and solving, which contains rigorous and non-rigorous approaches. Thus, Table 11, Table 12 and Table 13 show some of the structured data or data sets required by the optimization activity of the polymer process plant. Besides, the optimization software can call for single data items at any time.
L7. Data dynamics: This level aims to develop an algorithm, based on Jython, capable of structuring data. For this specific case study, the algorithm was still under construction. The strategy focuses on queries and the structure of triples from the semantic models, which can dynamically define data sets such as those shown in L4 (data integration and feeding); a sketch of this idea is given below. Finally, we want to point out that the ontologies have a database structure but are semantically enriched and supported by knowledge.
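The triple-query idea can be sketched with the rdflib Python library (an assumption; the case study relies on Jython and Java APIs). The graph content, IRIs, and query below are illustrative.

```python
# Build an optimization data set dynamically from semantic triples.
from rdflib import Graph, Namespace, Literal, RDF

EX = Namespace("http://example.org/plant#")   # hypothetical namespace
g = Graph()
g.add((EX.Input1_1, RDF.type, EX.RawMaterial))
g.add((EX.Input1_1, EX.amountKg, Literal(120.5)))

q = """SELECT ?m ?kg WHERE {
         ?m a <http://example.org/plant#RawMaterial> ;
            <http://example.org/plant#amountKg> ?kg . }"""
data_set = {str(m).split("#")[-1]: float(kg) for m, kg in g.query(q)}
print(data_set)   # -> {'Input1_1': 120.5}
```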

4. Discussion

This work introduces the wide intelligent management architecture and its application to acrylic fiber production as a case study. As a result, the production process first creates a process definition (system characterization and flowsheet of the plant), a data definition (database based on the semantic model), and a knowledge conceptualization (a semantic model for representing the concepts and data of the process and system). The standardization of data and concepts has been done using the semantic model to represent the process (ANSI/ISA standards). Thus, using the architecture for data structuring and feeding facilitates the integration and improvement of the life cycle assessment approach. From this point, the acrylic fiber production plant has the basis for developing process automation, developing process digitalization, or developing intelligent agents for decision-making. Figure 12 shows the process activities performed in the case study regarding process definition, standardization, optimization, and intelligence (yellow boxes and blue arrows path). Moreover, the acrylic fiber company can perform process improvement, automation, or digitalization based on the current plant status (green arrows pointing to gray boxes). Finally, the wide intelligent management architecture model can be adapted to any company’s necessities by choosing how to evolve its processes.

5. Conclusions

This work presents a novel architecture for technology integration in process activities which systematically manages processes and their related data using formal knowledge. Knowledge models provide additional reasoning capabilities to support decision support systems for enterprise-wide optimization and Industry 4.0 approaches. The architecture comprises and structures three critical systems: the process system, the knowledge system, and the transactional system. As a result, analytical tools belonging to process activities and transactional data systems can be guided by a systematic development framework consolidated with formal knowledge models. Thus, the model improves the interaction among process life cycles, analytical models, transactional systems, and knowledge. The wide intelligent management architecture model can be seen as an ordered, adaptable, and configurable tool for integrating technologies and process maturity. A critical aspect of this method regards the formal knowledge models, which become usable and reusable in technology integration and maturity. Finally, this method is a new alternative for supporting companies when they need to decide “what to do” by explaining to them “why to do it” and “how to do it”, taking their processes and characteristics as the starting point.

Author Contributions

Conceptualization, E.M. and E.C.-G.; methodology, E.M. and E.C.-G.; software, E.C.-G. and E.M.; validation, E.M., L.P. and E.C.-G.; formal analysis, E.M., E.C.-G. and E.M.M.; investigation, E.M., E.C.-G. and L.P.; data curation, E.C.-G.; writing-original draft preparation, E.M., E.C.-G. and L.P.; writing-review and editing, E.M. and L.P.; visualization, E.C.-G. and E.M.M.; supervision, L.P.; project administration, E.M.; funding acquisition, L.P. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Ministerio de Economia, Industria y Competitividad and the European Regional Development Fund, both with grant numbers DPI2017-87435-R and PCIN-2015-001/ELAC2014/ESE0034.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available on request from the funding institution (see Funding).

Acknowledgments

The technical support in this work from the Enterprise WISDOM Company is fully acknowledged.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Venkatasubramanian, V.; Zhao, C.; Joglekar, G.; Jain, A.; Hailemariam, L.; Suresh, P.; Akkisetty, P.; Morris, K.; Reklaitis, G. Ontological informatics infrastructure for pharmaceutical product development and manufacturing. Comput. Chem. Eng. 2006, 30, 1482–1496. [Google Scholar] [CrossRef]
  2. Simon, F.; Murray, T. Decision support systems. Commun. ACM 2007, 50, 39–40. [Google Scholar]
  3. Shim, J.P.; Warkentin, M.; Courtney, J.F.; Power, D.J.; Sharda, R.; Carlsson, C. Past, present, and future of decision support technology. Decis. Support Syst. 2002, 33, 111–126. [Google Scholar] [CrossRef]
  4. Shobrys, D.E.; White, D.C. Planning, scheduling and control systems: Why cannot they work together. Comput. Chem. Eng. 2002, 26, 149–160. [Google Scholar] [CrossRef]
  5. Park, J.; Du, J.; Harjunkoski, I.; Baldea, M. Integration of Scheduling and Control Using Internal Coupling Models. Comput. Chem. Eng. 2014, 33, 529–534. [Google Scholar]
  6. CEN Standards. European Committee for Standardization (CEN). Available online: https://www.cen.eu/about/Pages/default.aspx (accessed on 15 March 2021).
  7. Integrated Definition Methods (IDEF) Standards. Available online: https://www.idef.com/ (accessed on 15 March 2021).
  8. International Electrotechnical Commission (IEC) Standards. Available online: https://www.iec.ch/homepage (accessed on 15 March 2021).
  9. International Organization for Standardization (ISO). Available online: https://www.iso.org/standards.html (accessed on 15 March 2021).
  10. Manufacturing Execution Systems Association (MESA). Available online: http://www.mesa.org/en/modelstrategicinitiatives/MSI.asp (accessed on 15 March 2021).
  11. Machinery Information Management Open Systems Alliance MIMOSA. Available online: https://www.mimosa.org/mimosa-osa-eai/ (accessed on 15 March 2021).
  12. Object Management Group (OMG). Available online: https://www.omg.org/about/index.htm (accessed on 15 March 2021).
  13. Process Industry Practices (PIP). Available online: https://www.pip.org/ (accessed on 15 March 2021).
  14. International Society for Measurement and Control. ISA-88/95 technical report: Using ISA-88 and ISA-95 together. In ISA The Instrumentation, Systems, and Automation Society 2007; Technical Report; ISA: Durham, NC, USA, 2017. [Google Scholar]
  15. Williams, T.J. A Reference Model for Computer Integrated Manufacturing from the Viewpoint of Industrial Automation. IFAC Proc. Vol. 1990, 23, 281–291. [Google Scholar] [CrossRef]
  16. Apostolou, D.; Mentzas, G.; Abecker, A. Ontology-enabled knowledge management at multiple organizational levels. In Proceedings of the 2008 IEEE International Engineering Management Conference, Estoril, Portugal, 28–30 June 2008; pp. 1–6. [Google Scholar]
  17. Sowa, J.F. Conceptual Structures: Information Processing in Mind and Machine; Addison-Wesley: Reading, MA, USA, 1984. [Google Scholar]
  18. Gruber, T.R. A translation approach to portable ontology specifications. Knowl. Acquis. 1993, 5, 199–220. [Google Scholar] [CrossRef]
  19. Fensel, D. A Silver Bullet for Knowledge Management and Electronic Commerce. In Ontologies; Springer: Berlin/Heidelberg, Germany, 2004. [Google Scholar]
  20. Anderson, L.W.; David, R.K.; Benjamin, S.B. A Taxonomy for Learning, Teaching, and Assessing: A Revision of Bloom’s Taxonomy of Educational Objectives; Lorin, W., Anderson, D.K., Eds.; Longman: New York, NY, USA, 2001. [Google Scholar]
  21. Munoz, E.; Capon-Garcia, E.; Puigjaner, L. Supervised Life-Cycle Assessment Using Automated Process Inventory Based on Process Recipes. ACS Sustain. Chem. Eng. 2018, 6, 11246–11254. [Google Scholar] [CrossRef]
  22. Munoz, E.; Capon-Garcia, E.; Espuna, A.; Puigjaner, L. Ontological framework for enterprise-wide integrated decision-making at operational level. Comput. Chem. Eng. 2012, 42, 217–234. [Google Scholar] [CrossRef]
  23. Munoz, E.; Capon-Garcia, E.; Lainez, J.; Espuna, A.; Puigjaner, L. Integration of enterprise levels based on an ontological framework. Chem. Eng. Res. Des. 2012, 91, 1542–1556. [Google Scholar] [CrossRef]
  24. Munoz, E.; Capon-Garcia, E. Intelligent Mathematical Modelling Agent for Supporting Decision-Making at Industry 4.0. In Trends and Applications in Software Engineering. Advances in Intelligent Systems and Computing; Mejia, J., Munoz, M., Rocha, A., Pena, A., Perez-Cisneros, M., Eds.; Springer: Cham, Switzerland, 2019; Volume 865. [Google Scholar]
  25. Munoz, E.; Capon-Garcia, E.; Lainez-Aguirre, J.M.; Espuna, A.; Puigjaner, L. Operations Research Ontology for the Integration of Analytic Methods and Transactional Data. In Trends and Applications in Software Engineering. Advances in Intelligent Systems and Computing; Mejia, J., Munoz, M., Rocha, A., Calvo-Manzano, J., Eds.; Springer: Cham, Switzerland, 2018; Volume 45. [Google Scholar]
  26. Munoz, E.; Capon-Garcia, E.; Puigjaner, L. Advanced Model Design Based on Intelligent System Characterization And Problem Definition. In Computer-Aided Chemical Engineering; Anton, A.K., Edwin, Z., Richard, L., Leyla, Ö., Eds.; Elsevier: Amsterdam, The Netherlands, 2019; Volume 46, pp. 1045–1050. [Google Scholar]
Figure 1. Instrumentation, Systems and Automation Society ISA-95 integration of information schema.
Figure 2. The Purdue Enterprise Reference Architecture (PERA).
Figure 3. Bloom’s taxonomy by the Vanderbilt University Center for Teaching.
Figure 4. Maturity echelons of the process system model.
Figure 5. Maturity echelons of the data system model.
Figure 6. Maturity echelons of the knowledge system model.
Figure 7. Maturity model system integration: the Wide Intelligence Management Architecture’s process, data, and knowledge.
Figure 8. Flowsheet for the acrylic fibers’ production process (it contains 14 recipe elements, divided into eight recipe unit procedures and six recipe operations).
Figure 9. First taxonomical representation of the Enterprise Ontology Project classes.
Figure 10. First taxonomical layer representation of the Mathematical Modeling Ontology classes.
Figure 11. First taxonomical layer representation of the Operations Research Ontology classes.
Figure 12. Potential activities derived from the case study, where GMPs refers to Good Manufacturing Practices, SOPs refers to Standard Operating Procedures, and GUI refers to Graphical User Interface.
Table 1. Overall Intelligence Management Architecture for technology integration through process activities.

Level | Process Model | Knowledge Model | Transactional Model
L1 | Process definition: current process matter | Conceptual: chemical principles, physics principles, mechanics principles | Definition: data definition, data collection
L2 | Improvement: benchmarking | Good manufacturing practices, standard operational procedures | Improvement: data refining, database
L3 | Standardization: tier levels definition, world-class process | Process standards, quality standards | Standardization: data standards, security standards, data metrics, data language, structured data
L4 | Optimization: better performance, key process variables, key process parameters | Procedural: analytic algorithms knowledge, analytical methods knowledge | Integration & feeding: data to parameters, data to sets, planning systems
L5 | Automation: fixed parameters, fixed variables | Expertise: key indicators, set values ranges, good & bad habits algorithms | Fixed data collection, fixed data structuring
L6 | Digitalization: virtual twin processes, new process scenarios | Knowledge-based scenarios | Virtual feeding
L7 | Intelligence: problem characterization, problem classification, intelligent systems, intelligent agents, autonomous decision-making | Metacognitive: model knowledge characterization, model knowledge classification, knowledge reasoning, knowledge creation | Dynamics: automated data collection, automated data structuring, intelligent database
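The echelons of Table 1 lend themselves to a simple machine-readable encoding. The following hedged sketch (our own helper, not part of the architecture's tooling) stores the named activity per level and system and reports what the next echelon would demand:

```python
# Maturity echelons of Table 1 encoded as a lookup; blanks in the table
# are stored as None and skipped when reporting next steps.
MATURITY = {
    1: {"process": "Process definition", "knowledge": "Conceptual", "transactional": "Definition"},
    2: {"process": "Improvement", "knowledge": None, "transactional": "Improvement"},
    3: {"process": "Standardization", "knowledge": None, "transactional": "Standardization"},
    4: {"process": "Optimization", "knowledge": "Procedural", "transactional": "Integration & feeding"},
    5: {"process": "Automation", "knowledge": "Expertise", "transactional": None},
    6: {"process": "Digitalization", "knowledge": None, "transactional": None},
    7: {"process": "Intelligence", "knowledge": "Metacognitive", "transactional": "Dynamics"},
}

def next_steps(current_level: int) -> dict:
    """Return the named activities of the next echelon, skipping blanks."""
    nxt = MATURITY.get(current_level + 1, {})
    return {system: name for system, name in nxt.items() if name is not None}

# After standardization (L3), the next echelon asks for optimization,
# procedural knowledge, and integration & feeding.
print(next_steps(3))
```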
Table 2. General features of the organization for system characterization.

General Feature | Value
Production capacity | Medium
Company size | Medium
Supply chain type | Good availability
Production type | Multi-stage
Market competition type | Low
Environmental regulations | Defined
Demand levels | High volume
Table 3. Tactical features of the organization for system characterization.

Tactical Feature | Value
Transport type | Land
Supply chain objective | Economic
Production policies | Defined
Customer features | —
Supplier features | —
Process flow type | Forward
Material storage type | Limited
Table 4. Strategic features of the organization for system characterization.

Strategic Feature | Value
Production processing | Sequential
Technology | Multitask
Material storage | Limited
Material resource | Not perishable
Processing resources | Limited
Scheduling objective | Timing
Scheduling mode | On-line
Table 5. The formula for the master recipe of the acrylic fiber A production process (part 1 of 2).

Recipe ID | Recipe Type | Element ID | Procedural Element Type | Parameter ID | Parameter Name
MR-01 | Master | RE-3 | Unit procedure | I15 | Output3_2
MR-01 | Master | RE-14 | Unit procedure | I56 | Output14_1
MR-01 | Master | RE-14 | Unit procedure | I57 | Output14_2
MR-01 | Master | RE-2 | Operation | I61 | CleaningWater_total
MR-01 | Master | RE-1 | Unit procedure | I62 | CoolingWater_total
MR-01 | Master | RE-1 | Unit procedure | I63 | Electricity_total
MR-01 | Master | RE-13 | Unit procedure | I50 | Output13_1
MR-01 | Master | RE-1 | Unit procedure | I1 | Input1_1
MR-01 | Master | RE-1 | Unit procedure | I2 | Input1_2
MR-01 | Master | RE-1 | Unit procedure | I3 | Input1_3
MR-01 | Master | RE-1 | Unit procedure | I4 | Input1_4
MR-01 | Master | RE-1 | Unit procedure | I5 | Input1_5
MR-01 | Master | RE-7 | Unit procedure | I26 | Output7_2
MR-01 | Master | RE-3 | Unit procedure | I65 | Steam_total
Table 6. The formula for the master recipe of the acrylic fiber A production process (part 2 of 2).

Resource Type | Subtype | Resource Name | Procedural Information | Value | Unit
Material | By-product | ByProduct1 | Output Parameter | 1750 | kg
Material | By-product | ByProduct3 | Output Parameter | 1217 | kg
Material | By-product | ByProduct4 | Output Parameter | 1734 | kg
Material | Cleaning water | CleaningWaterT1 | Process Parameter | 240,000 | kg
Energetic | Cooling water | CoolingWaterT1 | Process Parameter | 8,613,983 | kg
Energetic | Electricity | ElectricityT1 | Process Parameter | 458,979 | kWh
Material | Final product | FinalProduct1 | Output Parameter | 1000 | kg
Material | Raw material | RawMaterial1 | Input Parameter | 100 | kg
Material | Raw material | RawMaterial2 | Input Parameter | 50 | kg
Material | Raw material | RawMaterial3 | Input Parameter | 25 | kg
Material | Raw material | RawMaterial4 | Input Parameter | 0 | kg
Material | Raw material | RawMaterial5 | Input Parameter | 0 | kg
Material | Residue | Residue1 | Output Parameter | 1974 | kg
Energetic | Steam | SteamP1 | Process Parameter | 441,323 | kg
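Tables 5 and 6 together define the inventory that the Life Cycle Assessment integration consumes. As an illustration only (the normalization step is our assumption, not the authors' exact procedure), the following sketch scales a few Table 6 entries to the 1000 kg functional unit of FinalProduct1:

```python
# Normalize master-recipe formula entries (Table 6) to the functional
# unit so they can be reported per kg of acrylic fiber produced.
from dataclasses import dataclass

@dataclass
class FormulaEntry:
    resource_type: str   # Material / Energetic
    name: str
    role: str            # Input / Output / Process parameter
    value: float
    unit: str

formula = [
    FormulaEntry("Material", "RawMaterial1", "Input", 100.0, "kg"),
    FormulaEntry("Energetic", "ElectricityT1", "Process", 458979.0, "kWh"),
    FormulaEntry("Material", "Residue1", "Output", 1974.0, "kg"),
    FormulaEntry("Material", "FinalProduct1", "Output", 1000.0, "kg"),
]

functional_unit = next(e.value for e in formula if e.name == "FinalProduct1")
inventory = {e.name: e.value / functional_unit
             for e in formula if e.name != "FinalProduct1"}
print(inventory)  # quantities per kg of acrylic fiber produced
```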
Table 7. Results for different problem instances from the agent-based framework at the Polymer plant.

Number of Batches (A/B/C) | Objective Function | Solution Approach | Solution Value | Optimal Solution Value
4/4/4 | Productivity | Mathematical programming | 2170 | 2174
4/4/4 | Environmental impact | Mathematical programming | 48,200 | 48,200
4/4/4 | Makespan | Mathematical programming | 48,000 | 48,000
17/10/13 | Productivity | Hybrid approach | 1301 | 1302
17/10/13 | Environmental impact | Mathematical programming | 218,814 | 217,236
17/10/13 | Makespan | Genetic algorithm | 199,507 | 197,686
20/18/15 | Productivity | Hybrid approach | 1354 | 1356
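Table 7 suggests a pattern: exact mathematical programming suffices for the small 4/4/4 instance, while larger instances resort to hybrid or metaheuristic approaches. A minimal sketch of such a dispatch policy for an intelligent agent follows; the thresholds and fallbacks are illustrative assumptions and do not reproduce every row of the table:

```python
# Illustrative agent policy: pick a solution approach from instance size
# and objective. Thresholds are assumed, not taken from the paper.
def pick_solution_approach(batches: tuple[int, int, int], objective: str) -> str:
    total = sum(batches)
    if total <= 12:                      # small instances stay exact
        return "mathematical programming"
    if objective == "productivity":      # larger productivity runs used a hybrid
        return "hybrid approach"
    return "genetic algorithm"           # otherwise fall back to a metaheuristic

print(pick_solution_approach((17, 10, 13), "makespan"))  # genetic algorithm
```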
Table 8. The semantic search of key features and confidence intervals, where SWOT refers to a strength (S), weakness (W), opportunity (O), or threat (T), EOP refers to Enterprise Ontology Project, and UBV and LBV denote the upper and lower bound values.

SWOT | Indicator | Related Feature | Metric | UBV | LBV
(Data types: SWOT = String; Indicator and Related Feature = EOP_Class; Metric, UBV, and LBV = EOP_DataProperty.)
W | Time delivered hauler | Tardiness finish orders | u/month | 15 | 45
W | Demand unaccomplishment | Unfinished orders | u/month | 5 | 15
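The semantic search summarized in Table 8 can be pictured as a SPARQL query over the EOP graph. In the following self-contained sketch the IRIs and property names (eop:swot, eop:upperBoundValue, and so on) are assumptions for illustration; the two weakness rows of the table are asserted and then retrieved:

```python
# Assert the Table 8 weakness indicators in a small graph, then query
# them back with SPARQL together with metric and bound values.
from rdflib import Graph, Literal, Namespace, RDF

EOP = Namespace("http://example.org/eop#")
g = Graph()
g.bind("eop", EOP)

for name, feature, metric, ubv, lbv in [
    ("TimeDeliveredHauler", "TardinessFinishOrders", "u/month", 15, 45),
    ("DemandUnaccomplishment", "UnfinishedOrders", "u/month", 5, 15),
]:
    ind = EOP[name]
    g.add((ind, RDF.type, EOP.Indicator))
    g.add((ind, EOP.swot, Literal("W")))
    g.add((ind, EOP.relatedFeature, EOP[feature]))
    g.add((ind, EOP.metric, Literal(metric)))
    g.add((ind, EOP.upperBoundValue, Literal(ubv)))
    g.add((ind, EOP.lowerBoundValue, Literal(lbv)))

q = """
PREFIX eop: <http://example.org/eop#>
SELECT ?indicator ?metric ?ubv ?lbv WHERE {
    ?indicator a eop:Indicator ; eop:swot "W" ;
               eop:metric ?metric ;
               eop:upperBoundValue ?ubv ; eop:lowerBoundValue ?lbv .
}"""
for row in g.query(q):
    print(row.indicator, row.metric, row.ubv, row.lbv)
```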
Table 9. ANSI/ISA standard properties for enumeration members.

Enumeration Set | Enumeration Value | Enumeration String | Description
Boolean | 0 | FALSE | Definition of a Boolean value.
Boolean | 1 | TRUE |
Direction Type | 0 | Invalid | Entry not valid.
Direction Type | 1 | Internal | Identifies how a parameter is handled. Internal = only available within the Recipe Element; defined at creation or created as an intermediate value.
Direction Type | 2 | Input | The Recipe Element receives the Value from an external source.
Direction Type | 3 | Output | The Recipe Element creates the Value and makes it available for external use.
Direction Type | 4 | Input/Output | The Recipe Element and an external element exchange the Value, and either may change it.
Direction Type | 5–99 | | Reserved.
Direction Type | 100+ | | User defined.
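The enumeration sets of Table 9 map naturally onto typed enums in application code. A sketch for the Direction Type set follows; the validation helper is ours, while the member names and the reserved/user-defined ranges come from the table:

```python
# Direction Type enumeration set of Table 9 as a typed enum; values
# 5-99 (reserved) and 100+ (user defined) are validated, not enumerated.
from enum import IntEnum

class DirectionType(IntEnum):
    INVALID = 0        # entry not valid
    INTERNAL = 1       # only available within the recipe element
    INPUT = 2          # value received from an external source
    OUTPUT = 3         # value created for external use
    INPUT_OUTPUT = 4   # value exchanged with an external element

def parse_direction(value: int) -> "DirectionType | None":
    """Return the standard member, or None for reserved/user-defined codes."""
    if 5 <= value <= 99:
        return None  # reserved by the standard
    if value >= 100:
        return None  # user-defined extension
    return DirectionType(value)

print(parse_direction(3).name)  # OUTPUT
```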
Table 10. Data properties from EOP within the Polymer process system.

Object/Data Property | Range
hasParameterSource | Resource
hasID_ParameterID | ParameterID
parameter_type | constant; variable
hasEquationAsReferenceValue | MathematicalElement
value | float
engineering_units | string
description | string
scaled | float
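The Table 10 properties can be declared in OWL, for instance with rdflib. In this sketch the eop: namespace is a stand-in: object properties take the class ranges of the table, data properties take the corresponding XSD types, and parameter_type, whose range is the enumeration constant/variable, is left out for brevity:

```python
# Declare the Table 10 properties as OWL object/datatype properties
# with their ranges, then print the resulting Turtle.
from rdflib import Graph, Namespace, RDF, RDFS
from rdflib.namespace import OWL, XSD

EOP = Namespace("http://example.org/eop#")
g = Graph()
g.bind("eop", EOP)

object_props = {"hasParameterSource": EOP.Resource,
                "hasID_ParameterID": EOP.ParameterID,
                "hasEquationAsReferenceValue": EOP.MathematicalElement}
data_props = {"value": XSD.float, "engineering_units": XSD.string,
              "description": XSD.string, "scaled": XSD.float}

for name, rng in object_props.items():
    g.add((EOP[name], RDF.type, OWL.ObjectProperty))
    g.add((EOP[name], RDFS.range, rng))
for name, rng in data_props.items():
    g.add((EOP[name], RDF.type, OWL.DatatypeProperty))
    g.add((EOP[name], RDFS.range, rng))

print(g.serialize(format="turtle"))
```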
Table 11. Capacity data set of the Polymer plant.

Unit_ID | Data Value
ID_R1 | 4000
ID_P1 | 13,000
ID_C1 | 4000
ID_P2 | 5000
IN001B | 5000
IN002A | 5000
IN002B | 11,000
ID-FP00A | 36,000
ID_FP00B | 36,000
ID_R001 | 36,000
ID_R002 | 36,000
Table 12. Subtask times of product A of the Polymer plant.

Task | Unit_ID | Preparation Time | Load Time | Operation Time | Unload Time | Cleaning Time
1 | ID_R1 | 0.20 | 0.00 | 2.00 | 0.30 | 0.25
2 | ID_P1 | 0.20 | 0.00 | 0.30 | 0.75 | 0.25
3 | ID_C1 | 0.50 | 0.30 | 2.50 | 0.00 | 0.75
4 | ID_P2 | 0.20 | 0.00 | 0.75 | 0.00 | 0.25
5 | IN001B | 0.50 | 0.00 | 0.75 | 0.00 | 0.50
6 | IN002A | 0.20 | 0.00 | 0.75 | 0.75 | 0.25
7 | IN002B | 0.30 | 0.75 | 1.00 | 0.00 | 0.25
8 | ID-FP00A | 0.20 | 0.00 | 0.75 | 0.00 | 0.25
9 | ID_FP00B | 0.30 | 0.00 | 0.75 | 0.00 | 0.25
10 | ID_R001 | 0.20 | 0.00 | 0.75 | 0.00 | 0.50
11 | ID_R002 | 0.20 | 0.00 | 0.75 | 0.00 | 0.25
Table 13. Subtask times of product B of the Polymer plant.

Task | Unit_ID | Preparation Time | Load Time | Operation Time | Unload Time | Cleaning Time
1 | ID_R1 | 0.20 | 0.00 | 3.00 | 0.75 | 0.25
2 | ID_P1 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00
3 | ID_C1 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00
4 | ID_P2 | 0.50 | 0.00 | 0.75 | 0.00 | 0.25
5 | IN001B | 0.20 | 0.00 | 0.75 | 0.00 | 0.50
6 | IN002A | 0.30 | 0.00 | 0.75 | 0.00 | 0.25
7 | IN002B | 0.20 | 0.75 | 0.74 | 0.74 | 0.25
8 | ID-FP00A | 0.20 | 0.00 | 0.74 | 0.00 | 0.25
9 | ID_FP00B | 0.50 | 0.00 | 0.74 | 0.00 | 0.25
10 | ID_R001 | 0.20 | 0.00 | 0.74 | 0.00 | 0.50
11 | ID_R002 | 0.20 | 0.00 | 0.74 | 0.00 | 0.25
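Finally, a hedged illustration of how the transactional data of Tables 11–13 feed the scheduling layer: assuming the subtask times are in hours (the tables do not state the unit), the sketch computes the total occupation of a few units for one batch of product A and pairs it with the Table 11 capacities:

```python
# Unit capacities (Table 11) and per-subtask times for product A
# (Table 12, first four units): preparation, load, operation, unload,
# cleaning. Time unit assumed to be hours.
CAPACITY = {"ID_R1": 4000, "ID_P1": 13000, "ID_C1": 4000, "ID_P2": 5000}

SUBTASK_A = {
    "ID_R1": (0.20, 0.00, 2.00, 0.30, 0.25),
    "ID_P1": (0.20, 0.00, 0.30, 0.75, 0.25),
    "ID_C1": (0.50, 0.30, 2.50, 0.00, 0.75),
    "ID_P2": (0.20, 0.00, 0.75, 0.00, 0.25),
}

# Total occupation time of each unit for one batch of product A.
for unit, times in SUBTASK_A.items():
    print(f"{unit}: {sum(times):.2f} h per batch (capacity {CAPACITY[unit]})")
```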