1. Introduction
As with every other aspect of everyday life, the manufacturing domain is strongly influenced by innovations in Information and Communication Technologies (ICT) [1,2]. Companies need to react flexibly to changing demands to remain competitive in a dynamic market [3]. The impact of ICT in this domain is broadly known as Industry 4.0 and ranges from the application of artificial intelligence in robot-assisted production to the usage of Internet of Things (IoT) devices, always connected and controllable just-in-time [4].
Traditionally, a Process Model (PM) is designed by a domain expert to represent, in a standard language such as the Business Process Model and Notation (BPMN), an abstract description of the modus operandi and the set of operations that can be adopted to achieve the expected goal. These elements basically translate into a set of flow objects representing events (such as start, intermediate and stop), activities (practical elementary actions, called tasks) and gateways (decision points where the path adapts based on conditions or events). A particular instantiation of the PM, based on the relevant set of variables and conditions, is called a Process Instance (PI). To transform a PI into an enactable model, each task T must be associated with one or more services available to allow its execution in the physical world, thereby achieving its expected goal. Each service (or combination of services) used in this way is called the grounding service for the task T. The set of evaluated gateways and provided grounding services for a full PI is known as the Process Service Plan (PSP). An additional requirement for full enactability of the PSP by an execution environment is the existence of a contextual environment for service deployment, plus the presence of variable bindings amongst the set of grounding services, to support the exchange of all the information required for a correct service instantiation. During process execution, i.e., the ordered instantiation of the grounding services, a process registry, also known as a PSP log or execution log, is created by the RunTime execution environment and used to track the operations performed and their outcomes. Whenever one or more grounding services in a PSP become, temporarily or definitively, unavailable, we call this a “broken” PSP. This situation requires a dynamic adaptation to allow the process execution to complete, by providing an alternative set of available grounding services for the lacking tasks, also using the information provided by the PSP log.
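To make these notions concrete, a minimal sketch in Python (all names are hypothetical and not part of any actual framework) of how a PSP maps tasks to grounding services, and how a “broken” PSP can be detected, could look as follows:

    from dataclasses import dataclass, field

    @dataclass
    class GroundingService:
        name: str                 # identifier of the concrete service (hypothetical)
        available: bool = True    # False when temporarily or definitively unavailable

    @dataclass
    class ProcessServicePlan:
        groundings: dict = field(default_factory=dict)  # task id -> list of GroundingService

        def broken_tasks(self):
            # a PSP is "broken" when some grounding contains an unavailable service
            return [task for task, services in self.groundings.items()
                    if any(not s.available for s in services)]

    psp = ProcessServicePlan(groundings={
        "T1": [GroundingService("WeldingCell_A")],
        "T2": [GroundingService("RobotArm_3", available=False)],
    })
    assert psp.broken_tasks() == ["T2"]  # T2 needs an alternative grounding at runtime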
There are multiple preconditions for allowing Industry 4.0 real-world applications: for instance, the need to define the domain formally in terms of ontological knowledge, the demand for formalized representations of the executable services and the requirement of independence between business process models, their instantiation in the current context and the available services usable by the executed models. This calls for supporting tools that can provide an effective composition of services in the context of Everything-as-a-Service (XaaS) and Service-Oriented Architecture (SOA) systems, together with their semantic variant, named SemSOA.
Along the same lines, manufacturing business processes have to be designed and executed in a more dynamic production context, thus creating the need for adaptation and optimization at design time as well as at runtime [5]. As a consequence, the design of process models for business applications has to go beyond what the BPMN standard can support, as it needs to comprise representations of functional and non-functional requirements. This exceeds what can be specified in traditional Business Process Modeling (BPM) systems, which include neither semantic representations of product models and manufacturing services nor Key Performance Indicator (KPI) requirements and Quality-of-Service (QoS) aspects. Moreover, effective supporting tools need to provide reliable model optimization to achieve the best executable PSP for business processes. Finally, the provided PSPs should be designed to effectively support incremental re-planning at runtime, in case an included service temporarily fails or becomes unavailable. Additionally, a sustainable approach requires just-in-time service leasing, elastic deployment on request into the cloud, and service monitoring and billing.
Due to the unavailability of solutions tackling these issues in an integrated way [6], we developed a set of components whose cooperation can pragmatically address the presented points. Starting from the ontology necessary for describing and reasoning about the domain and business cases, as well as the wrapping of services into their semantic characterization, the approach should be able to select the set of compliant services available to implement the tasks. Subsequently, it should support the composition of functionally-correct PSPs based on semantic annotations, while optimizing their non-functional aspects, formalized in terms of a Constrained Optimization Problem (COP). The resulting complete PSP is encoded back into specifically-developed BPMN 2.0 extensions. This approach partially bridges the gap between models and executable plans and, at the same time, provides the best variable assignments to optimize the outcome of the execution. Regarding the availability of the optimal PSP, a pragmatic tool for manufacturing in Industry 4.0 should provide an execution environment able to efficiently deploy the grounded services on the cloud, control them and react to possible failures with a smart re-planning policy.
The rest of the paper is organized as follows: in Section 2, the related work is presented; Section 3 describes the set of components envisioned and developed in the CREMA framework; Section 4 introduces an exemplary test case in the manufacturing domain, including a short overview of the scenario; Section 5 follows with a brief description of the role of each component in this context; Section 6 gives some initial thoughts about the modifications that were required to achieve our demonstrator and the extendability of the presented use case towards a pragmatic approach for Industry 4.0 in manufacturing; the conclusions are finally given in Section 7.
2. Related Work
Multiple domains are affected by our proposed approach. This section gives a brief overview of their current status, in particular with respect to the following themes: SOA and its semantic variant used for service matching and composition; the XaaS approach; business process optimization by user-defined KPI and QoS metrics; PSP composition, including variable bindings and optimal configuration; elastic process execution in the cloud; aspects of fault tolerance in the realization of business processes; and deployment of container-based software as a supporting solution for the enactment of heterogeneous services.
A semantic service (also known as a semantic web service) is an approach to support the automatic interpretation of service functions by service-based systems or intelligent agents [7]. Its central idea is to use standardized annotations to convey the functional and non-functional semantics to software agents defined for different tasks, in a way that is not only machine-readable, but also machine-interpretable. In order to achieve this interoperability between heterogeneous components and to allow them to consider any semantic service they encounter, the semantics should be defined using concepts and rules coming from a shared ontology. The ontology itself needs to be formally defined in a widespread format, such as one of the W3C standard languages, OWL2 or RDFS. Applications and agents are consequently in a position to rely on these well-founded formal semantic annotations for their service interpretation, with the final aim of discovering required services with high precision or planning a complex task by composing elementary services in an automated way. There are a number of currently notable frameworks for semantic service description, each with advantages and limitations; examples are OWL-S, WSML, the W3C standard SAWSDL and USDL.
Another important paradigm at the foundation of the present work is the so-called Everything-as-a-Service [8]. It represents an abstraction layer over the actual resources, wrapping them into well-defined public interfaces to support every possible operation, such as search, selection and invocation. On top of this, an XaaS approach clearly separates the concrete service instantiation from its published semantic description, guaranteeing a complete separation between the model and the service-based plan implementing it. For these reasons, this paradigm is widely adopted in the field of cloud computing infrastructure.
The archetype of the semantic service-oriented architecture, combining the semantic aspects and XaaS, is applicable also to the manufacturing domain, allowing the consideration of specialized process models. In this way, it is possible to adopt a proper procedure for semantic service discovery, selection and composition planning, resulting in the creation of a fully-automatic implementation based on the available semantic services.
The key idea is to enable automated understanding of task requirements and services by providing semantic descriptions in a standardized machine-understandable way, using formal ontological definitions [7], for example in OWL2 (W3C standard: https://www.w3.org/TR/owl2-overview/). In [9], the authors proposed SBPM, a framework combining semantic web services and BPM to overcome the problem of automated understanding of processes by machines in a dynamic business environment. Similarly, the authors of [10] proposed sBPMN, which integrates semantic technologies and BPMN to overcome the obvious gap between an abstract representation of process models and actual executable descriptions in BPEL. The work in [11] follows the same track with the proposal of BPMO, an ontology partly based on sBPMN, while [12] took sBPMN as the basis for the Maestro tool, which implements the realization of semantically-annotated business tasks with concrete services by means of automatic discovery and composition. In [13], a reference architecture for SemSOA in BPM was proposed, which aims to address the representation discrepancy between business expertise and IT knowledge by making use of semantic web technologies. All of these proposals rely on formalizations different from (although based on) BPMN or do not aim for a full integration from a formalism point of view. In [14], the authors proposed an approach that uses BPMN extensions to add semantic annotations for the automatic composition of process service plans and to verify their soundness, but this approach does not consider QoS-aware or runtime optimization. Adopting a similar approach, our demonstrator proposes a set of BPMN extensions that not only enables interoperability by offering process model composition, task service selection and process execution, but also provides a way to represent the best values to optimize the QoS and the quality values achieved.
Our optimized PSP creation component applies state-of-the-art semantic service selection technologies [15] to implement annotated process tasks. Non-functional criteria, often referred to as QoS (e.g., costs, execution time, availability), can additionally be considered to find matching services in terms of functional and non-functional requirements [16,17]. Here, optimality with respect to the non-functional QoS specifications is achieved at the process model level by solving a (non)linear multi-objective COP (muCOP) as an integrated follow-up to the pattern-based composition.
Most existing approaches to PSP composition do not cover the combination of functional (semantic) aspects and non-functional (QoS-aware) optimization. For example, [12,18] considered functional semantic annotations to implement business processes by means of a service composition plan.
The work in [19] provided a survey giving an overview of existing approaches and initiatives in this direction and highlighted open research questions. Integrated functional and non-functional optimization has rarely been considered, with the notable exception of [20]. While composition typically includes the computation of possible data flows, our proposed approach additionally finds optimal service variable assignments, which are also required for executing the resulting plans; this is a feature not yet considered by existing work. Moreover, our PSP computation component can re-optimize PSPs at runtime upon request, which is also a novel feature. Finally, our optimization component employs RDF stream processing to react to service changes (non-functional QoS aspects) reported by the service registry. This information can be used to trigger optimizations pro-actively if the RDF stream engine identifies that a previously computed PSP is affected.
Another area of recent innovation is that of micro-services, which stems from the widely adopted service abstraction. Here, many innovations have arisen in the last few years, such as the whole family of techniques for container-based deployment [21]: Docker (https://www.docker.com), rkt (short for CoreOS’s Rocket: https://coreos.com/rkt/) or LXC (Linux Containers: https://linuxcontainers.org/). These are virtual machines (VMs), but in contrast to the historical ones, they are lightweight and devoted to bundling together every requirement of a service, allowing zero-configuration deployment. This means all the software and operating system dependencies and all the service configurations are already contained and pre-built. Despite the original scope of lightweight VMs for the micro-service domain [22], not much time passed before their usefulness was appreciated in other domains, such as the integration of legacy services [23]. In our scenario, Docker is used to facilitate the integration of heterogeneous services into business processes. An additional feature that makes containers a natural choice in cases where the execution is on-demand is their better startup performance coupled with a lower resource footprint, in comparison to established virtual machines [24].
Not much research has been devoted so far to the theme of elastic process execution [25]; we summarize the most relevant publications in this respect. ViePEP (Vienna Platform for Elastic Processes) is an eBPMS (elastic Business Process Management System) created to fuse traditional process engine functionality with a cloud controller [26,27]. Relying on cloud resources, this solution allows executing software-based processes and instantiating process tasks on them. Additionally, ViePEP offers the possibility to optimize service enactment by using the readily obtainable cloud resources in the most cost-effective way, while respecting predefined Service Level Agreements (SLAs) [28]. ViePEP is based, in common with our approach, on software utilities to represent the execution elements of a process task, but does not offer any feature for the automatic selection and composition of the existing services. Consequently, our solution emphasizes the capabilities for automatic service matching and composition and, in the failure case, for process optimization at runtime.
Juhnke et al. [29] provided another similar work: to support the enactment of process tasks, they used on-demand VM cloud-based computational resources. However, they relied on a BPEL-based process representation. Our solution is instead based on BPMN v2.0 extensions, which simplifies the interpretation of the executable process service plans.
Other works that adopt VMs to implement the process tasks composing business processes on cloud resources are Wei and Blake [30], Bessai et al. [31] and Cai et al. [32]. However, each of them lacks automatic service selection capabilities. This can become an issue during process execution: when a reconfiguration is necessary due to service unavailability, the minimum level of flexibility required to support runtime dynamic optimization in an automated fashion is missing.
Another relevant area for the current work is so-called “cloud manufacturing”. An example of work in this area is from Chen et al. [33], who introduced a novel cloud manufacturing framework with auto-scaling capability. The main differences from our work start with their approach of transforming single-user manufacturing functions into cloud services supporting multiple contemporary usages, whereas we rely on SemSOA to abstract from the underlying grounding function into a semantically rich service. Secondly, in the work of Chen, optimality was defined in terms of the minimal number of VMs required to keep the average service time below a predefined threshold. This is much more inflexible than our approach, where the user can define any objective function for the optimization problem. Obviously, this flexibility comes at a cost, in this case the need for the user to clearly understand and coherently formulate the COP, and the computation time required by our solution to provide an optimal solution.
3. CREMA: Towards Industry 4.0 for Manufacturing
The objective of this work is to provide a pragmatic solution for implementing an Industry 4.0 approach in the manufacturing domain. This is based on smart process composition supported by the usage of semantic services, together with the possibility to optimize the service plan through a novel definition of requirements and objectives. That is facilitated by an ad-hoc developed COP language for defining QoS-based functions, which can be embedded into BPMN extensions. As a dynamic and just-in-time adaptation to the frequently-changing execution context and service availability, the proposed approach provides adaptive instance execution, by automatically dealing with “broken” PSPs and repairing them seamlessly using services, or a combination of them.
In the following sections, we present each individual component required to implement this vision: we start with the ontology and its usages (Section 3.1), followed by a short depiction of the helper UI for the semantic annotation of services (Section 3.2). With these two elements in place, it is possible to define the Process Model (PM) in BPMN, together with the additional elements that define its semantic meaning, by using an extended BPMN editor (Section 3.3). When the user requires the execution of the process, an optimized PSP is computed by the system, through semantic service matching and COP solving on the QoS-defined objective function (Section 3.4). This PSP is then executed by the runtime environment (Section 3.5), which uses the invocation and controlling capabilities of the deployment component (Section 3.6) to control the retrieval, instantiation and feedback collection of the services on the cloud resources.
3.4. Process Optimization by COP Solving with Semantic Services
To provide a service-based solution, we developed a one-stop process service plan composition and optimization component for extended BPMN [37]. In order for the proposal to be optimal with respect to the set of possible functionally-valid solutions, it has to make particular choices driven by non-functional requirements, which are expressed as functions of the QoS measures provided by the services. Moreover, it computes concrete settings of service input parameter values, which yield optimal results in terms of the optimization criteria.
This is done by specifying a COP at the process model level, whose solutions dictate which services to choose and which parameter settings to use when calling them. The COP formulation includes information on how to map optimal parameter values to service inputs and service QoS to COP constants. The outputs produced by the optimization component are PSPs encoded in the original BPMN itself by making use of BPMN extensions. Besides the optimal services and input values for calling the services as described above, this also includes possible data flows with parameter bindings among services. Such a PSP implementing the process model can then be instantiated at runtime by a process service plan execution environment.
To achieve this objective, the optimization component follows two sequential steps: (a) it performs a pattern-based composition using semantic service selection for all semantically-annotated process tasks and computes the possible data flows; then, (b) it executes a QoS-aware non-functional optimization by means of COP solving at the process model level. This second step selects particular services out of the sets of functionally-fitting services per task identified previously and provides the optimal settings for service inputs.
This workflow can be applied at design time and at runtime (of a process model execution instance). At design time, the optimization component is called after a process model has been defined, in order to provide an executable implementation of the model as guidance for the execution environment. The runtime case arises as soon as a PSP is executed: the execution environment can query the optimization component again to provide alternative PSPs in the case of an exception during execution (e.g., a service becoming unavailable or failing). For this, the plan-enacting tool should provide to the optimization component not only the PSP it tried to execute, but also the current state of execution. This includes information on which services have already been executed, how gateways have been evaluated and which services caused errors during execution.
The aim of this component in the runtime case is then to provide an alternative solution for the given process instance. That is, it tries to patch the existing PSP: it considers the current state of the world as fixed and not undoable, while trying to re-implement in an optimal way the part of the process model still uncovered or not correctly executed.
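The patching behavior can be summarized by the following sketch (assuming the execution state lists the executed tasks and the failing services; the replan callback stands for the functional composition plus COP optimization described above, and all names are hypothetical):

    def repair_psp(groundings, executed_tasks, failing_services, replan):
        """Patch a PSP at runtime: the executed prefix is fixed and not undoable;
        only the uncovered or incorrectly executed part is re-implemented."""
        fixed = {t: g for t, g in groundings.items() if t in executed_tasks}
        open_tasks = [t for t in groundings if t not in executed_tasks]
        # re-run composition + optimization on the residual model,
        # excluding the services that caused errors
        new_part = replan(open_tasks, exclude=failing_services)
        if new_part is None:
            raise RuntimeError('"broken" PSP: no alternative grounding available')
        return {**fixed, **new_part}

    # e.g. repair_psp({"T1": ["S1"], "T2": ["S2"]}, executed_tasks={"T1"},
    #                 failing_services={"S2"}, replan=my_replanner)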
3.4.2. Process Service Plan
The computation of a PSP is presented in Algorithm 1, which uses four helper functions. The first one is SIM(T, S) in Line 10, which is used to compute the similarity between two IOPE annotations based on a selected measure. Given the semantic descriptions of a task (T) and a service (S) as input, the adopted measures consider a logic-based signature plugin match for inputs and outputs and a logic specification plugin match for preconditions and effects. These matching filters are inspired by the classical plugin matching of components in software engineering. While a plugin match is commonly considered near-optimal, we prioritize services whose semantic descriptions are logically equivalent with respect to the requested functionality. A possible ranking of logic-based semantic matching filters is proposed for iSeM, as shown in [39]. Alternative approaches to semantic service selection learn the optimal weighted aggregation of different types of non-logic-based and logic-based semantic matching filters [40].
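As an illustration of the signature plugin filter, the following simplified sketch reduces logical subsumption to explicit subclass edges in a toy ontology (the actual component reasons over full IOPE annotations):

    # Toy ontology: child concept -> parent concept (direct subclass edges only)
    SUBCLASS = {"MIGWelding": "Welding", "Welding": "Joining"}

    def subsumes(general, specific):
        """True if `specific` is (transitively) a subclass of, or equal to, `general`."""
        while specific is not None:
            if specific == general:
                return True
            specific = SUBCLASS.get(specific)
        return False

    def plugin_match(task, service):
        """Signature plugin match: the service accepts at least the task's inputs
        (its inputs may be more general) and produces the required outputs
        (its outputs may be more specific)."""
        return (all(any(subsumes(s_in, t_in) for s_in in service["in"]) for t_in in task["in"])
                and all(any(subsumes(t_out, s_out) for s_out in service["out"]) for t_out in task["out"]))

    task = {"in": ["MIGWelding"], "out": ["Joining"]}
    svc  = {"in": ["Welding"],    "out": ["Welding"]}
    assert plugin_match(task, svc)  # the service plugs into the task slot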
Algorithm 1: The pseudocode for the process service plan composition.
A second helper function is COPsolve(parameters), used in Line 23 for computing the set of Pareto-optimal solutions of the COP. This is a simple compiler that transforms our COP definition into a running instance of the JaCoP solver (http://jacop.osolpro.com/), using the given set of parameters.
The call to ComposeVariableBindings(solution) computes a possible set of variable bindings, which together define the data flow (Line 26). Bindings are determined by checking the compatibility of the semantic variable types; this ensures a functionally-meaningful assignment beyond simple data type compatibility checking. The overall aim of this function is to connect as many service inputs in the solution as possible with the outputs of services earlier in the execution order determined by the process model definition. Inputs that cannot be bound in that way are considered environmental variables. This ensures the direct executability of the computed service plan.
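A possible shape of this binding computation is sketched below (names are hypothetical; subsumes stands for the semantic type compatibility check, e.g., the one sketched earlier):

    def compose_variable_bindings(ordered_services, subsumes):
        """Bind each service input to the most recent compatible output of an
        earlier service (semantic-type compatibility, not mere data types);
        inputs left unbound become environmental variables."""
        bindings, environment = {}, []
        produced = []  # (service name, output name, semantic type), in execution order
        for svc in ordered_services:
            for in_name, in_type in svc["inputs"].items():
                candidates = [(s, o) for (s, o, t) in reversed(produced)
                              if subsumes(in_type, t)]       # type-compatible output?
                if candidates:
                    bindings[(svc["name"], in_name)] = candidates[0]
                else:
                    environment.append((svc["name"], in_name))  # set by the context
            produced.extend((svc["name"], o, t) for o, t in svc["outputs"].items())
        return bindings, environment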
Please note that the pseudocode leaves out details on the handling of gateways and of the different possible execution paths through the process model for parallel execution and choices. Without loss of generality, the different paths can be considered additional options for generating PSPs, each indicating other gateway decisions and a valid data flow given this decision. The component is able to handle parallel (AND), choice (OR) and exclusive (XOR) gateways. While the AND gateway opens up independent parallel paths and is easy to handle, the XOR and OR gateways with n outgoing branches result in n and 2^n − 1 possible alternative execution paths, respectively, thus widening the problem space significantly. Structurally, however, all these options are handled in a way analogous to what was explained.
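The growth in alternatives can be illustrated in a few lines, assuming a XOR gateway activates exactly one of its n outgoing branches and an OR gateway any non-empty subset of them:

    from itertools import chain, combinations

    def xor_paths(branches):
        return [[b] for b in branches]          # exactly one branch: n options

    def or_paths(branches):
        n = len(branches)                       # any non-empty subset: 2^n - 1 options
        return [list(c) for c in chain.from_iterable(
            combinations(branches, k) for k in range(1, n + 1))]

    assert len(xor_paths(["a", "b", "c"])) == 3
    assert len(or_paths(["a", "b", "c"])) == 7  # 2**3 - 1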
Finally, MergePMwithSolution(PM, Plan) takes care of adding the full metadata section to the original process model to create an executable PSP. This happens at Line 29.
Functional optimization (service selection): the first step in creating a PSP is to select all the possible functionally-valid candidates for each task. We rely on functionally-equivalent exact matches or on plugin matches [41] that are limited to direct subclass relationships. This way, all the logical properties of the PSP (in terms of IOPE) are preserved with respect to the given PM.
This step creates for each task a set of candidates, each being either a simple or a composed service. The selection of their best combination is left to the non-functional optimization, based on the COP solution. Only after this additional phase is the actual service implementation in the returned PSP complete.
Non-functional optimization (optimal service composition): amongst all the possible combinations of services from the candidate pools of the process tasks, the best option (or a Pareto-optimal one, in the case of a multi-objective problem) is chosen as part of the overall solution. This implies solving the COP associated with the PM, such as the example in Listing 2, by minimizing the function TotalCost(X). For an introduction to the BPMN extensions defined in CREMA and used by our components, we refer the reader to [42].
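To give the flavor of this selection step, the following toy optimizer (a pure-Python brute force, not the JaCoP-based component) picks one service per task out of the functionally-fitting candidates so that a user-defined cost is minimized under feasibility constraints, loosely mirroring the one-hot X[] selection of Listing 2:

    from itertools import product

    def optimize(candidates, total_cost, feasible):
        """candidates: {task: [service dicts with QoS fields]};
        returns the cheapest feasible combination (one service per task)."""
        best, best_cost = None, float("inf")
        tasks = list(candidates)
        for combo in product(*(candidates[t] for t in tasks)):
            plan = dict(zip(tasks, combo))
            if not feasible(plan):               # COP constraints, e.g. time budget
                continue
            cost = total_cost(plan)              # user-defined QoS objective
            if cost < best_cost:
                best, best_cost = plan, cost
        return best, best_cost

    candidates = {"Forge": [{"name": "PressA", "cost": 9, "time": 4},
                            {"name": "PressB", "cost": 6, "time": 7}]}
    plan, cost = optimize(candidates,
                          total_cost=lambda p: sum(s["cost"] for s in p.values()),
                          feasible=lambda p: sum(s["time"] for s in p.values()) <= 6)
    assert plan["Forge"]["name"] == "PressA"  # PressB is cheaper but breaks the budget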
3.6. Process Deployment in the Cloud and Execution Control
At the lowest level of the control chain for process execution sits a service deployment and execution control facility, responsible for the on-demand enactment of services on cloud resources. In contrast to purely informational business PMs, the manufacturing domain is characterized by a natural mix of heterogeneous kinds of services. First, there are traditional software services, such as the analysis of values or the computation of control conditions, that are naturally enactable on computational resources.
Typical of this domain, there are also real-world services, such as those that physically manipulate the material and semi-finished pieces to produce the final outcome of the manufacturing activity. These can be welding machines, robot arms, manipulators, numerically-controlled (CNC) machines, lubrication systems, and so on. It is necessary to transform them by adding a software-based representation that works as their digital interface, to enable their deployment and control on cloud resources.
Human-based services are another typical category; examples are the tasks of loading or unloading parts, operating the machines, managing an unexpected condition or collecting information about an environmental condition that is not automatically monitored. For this type of service to be enactable in a distributed computation environment, it is necessary to design user interfaces that act as a communication vector with the human. This additional interfacing facility allows, on the one side, providing instructions and inputs and, on the other, collecting feedback and information to report back to the execution context.
In order to combine these services, we designed an abstraction approach called Proxy Service Wrappers (PSWs), which provide a uniform representation of every different kind of service, allowing their usage as groundings for process tasks.
The basic foundation of a PSW is a set of requirements that need to be implemented by services in order to be integrated. Firstly, each service needs to expose two different endpoints: an Availability endpoint, which should indicate the current leasability of the actual service, taking into account all the limitations affecting it (e.g., concurrent usages, stale states or maintenance operations), and a Start endpoint, which deploys the service using a JSON-encoded input parameter object. As a precondition, the latter may only be called after a positive answer from the former. As a consequence of triggering the Start endpoint, a PSW starts the operation of a software-based service or triggers a real-world interaction, e.g., starts a welding process or signals to a human that he/she can start working on a specific task.
Secondly, a feedback and control channel is required between the executing PSW and the controlling component. This means that each service needs to register with an endpoint to report its status. This can be either the termination of its execution by correct completion (such as a software calculation, the accomplishment of a welding operation, or a human indication that the operation is done) or a report of the occurrence of an error. In the latter case, our component can raise an exception to signal the process execution engine to start the compensation mechanism, obtaining a new PSP that avoids the failing service.
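A minimal PSW skeleton satisfying these first two requirements could look as follows (a sketch using Python and Flask; route names, payload fields and the callback URL are hypothetical):

    import threading
    import requests                  # used to report status on the feedback channel
    from flask import Flask, jsonify, request

    app = Flask(__name__)
    CALLBACK_URL = "http://execution-env/status"   # hypothetical controller endpoint

    def machine_is_busy():
        return False                 # stub: query the physical resource here

    def do_real_world_work(params):
        return {"ok": True}          # stub: trigger the actual operation here

    @app.route("/availability")
    def availability():
        # current leasability, considering concurrent usage, stale state, maintenance
        return jsonify({"available": not machine_is_busy()})

    @app.route("/start", methods=["POST"])
    def start():
        if machine_is_busy():
            return jsonify({"error": "service not leasable"}), 409
        params = request.get_json()  # JSON-encoded input parameter object
        threading.Thread(target=run_and_report, args=(params,)).start()
        return jsonify({"status": "started"}), 202

    def run_and_report(params):
        try:
            result = do_real_world_work(params)   # welding, robot move, human task UI, ...
            requests.post(CALLBACK_URL, json={"status": "done", "result": result})
        except Exception as exc:                  # an error triggers re-planning upstream
            requests.post(CALLBACK_URL, json={"status": "error", "detail": str(exc)})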
Finally, a common technical format for every service is required, to ease their deployment on cloud resources. As it is currently common practice to use containers [21], we adopted this model as well. Our demonstrator uses the Docker Image format to represent every service grounding. This choice has the advantage of using an established, widespread technical solution, which is also able to package all kinds of external resources within the image, thereby supporting our need for packaging heterogeneous services.
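With such an image in place, the on-demand enactment reduces to a few calls (a sketch based on the Docker SDK for Python; the image name and environment variables are placeholders):

    import docker

    client = docker.from_env()

    def deploy_psw(image, env):
        """Pull the grounding image, run it, and release the resources afterwards."""
        client.images.pull(image)                 # retrieve the service grounding
        container = client.containers.run(image, environment=env, detach=True)
        try:
            result = container.wait()             # block until the service terminates
            logs = container.logs().decode()      # collected feedback / returned values
            return result["StatusCode"], logs
        finally:
            container.remove(force=True)          # dispose wrapper, free cloud resources

    # e.g. deploy_psw("registry.local/metal-injection:1.0", {"BATCH_SIZE": "50"})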
Thanks to this restricted set of three requirements, our system design is capable of smoothly integrating all kinds of services from the manufacturing domain. Additionally, these conditions lay the foundation for a real SOA abstraction of the existing manufacturing services.
In Figure 9, the UI of the OSL is shown. It can be used to monitor the status of deployed and running PSWs (in this case, a single service called “metal injection”) and to stop their execution, if necessary.
4. Demonstrative Application
To showcase the flexibility and usefulness of the proposed approach, we envisioned and designed a simple PM for the manufacturing process of bicycle bodies. It encompasses the injection of molten aluminum, together with all the processes required to prepare the cold-chamber molding machine and to control and expel the formed piece of the hull.
As an initial step, we semantically described the services available to implement the different tasks in this manufacturing area. Figure 10 presents the results for three selected services. The figure concentrates only on the semantic aspect; for this reason, the other metadata representing the grounding and the QoS measures for these services are not explicitly depicted.
Figure 11 depicts the BPMN model, a mainly linear model composed of 11 tasks and three exclusive gateways (the first two together define two alternative subpaths, including or excluding one task, while the third is used to control a condition and repeat a task as many times as required). Each task is characterized by its IOPE annotation, as in Figure 5.
It is interesting to note that there is not always a perfect one-to-one correspondence between tasks and services: services can be composed (e.g., one service can be realized as the sequential arrangement of two others), or they can be used as alternatives (at least under certain assumptions, such as plugin compatibility) to implement the same task (e.g., one of the tasks in Figure 11 can be implemented either by a single service or by a composed one from Figure 10). This is also part of the flexibility provided by a SemSOA approach.
Listing 2 presents the optimization problem defined using our COP grammar for the PM in Figure 11. It starts (Line 2) by indicating the type of problem (linear and single-objective); then, it defines the variables (Line 5), an array and two simple variables, followed by a number of constants (Line 6), in both simple and array form. There are nine functions (Lines 8–16), ranging from linear combinations of variables (Lines 14–16) to sums of one or more parameters over the length of the X array (Lines 8–10), passing through non-linear operators such as MAX and MIN (Lines 11–13).
Then, the constraints are presented (Lines 19–22): the first one limits the sum of elements in X to the value of one (this, together with the domain definition of {0, 1} for each entry, allows only one element of the X array to be non-null); the second constraint guarantees that there is enough production time for the required batch; a further constraint on the electricity consumption ensures it does not exceed the amount available during the production time; and a final one secures that the marginal revenue produced by the execution of the current batch of hulls is satisfactory, both with respect to its size and to the average quarterly cash flow. To complete the problem class definition, the objective function is stated, together with its association with a semantic concept (Line 24), to allow its reuse in composing the final service plan. In this case, the objective is to minimize TotalCost(X) for the production.
As presented before, the definition of the current instance of the problem is then given, starting from the domains of the variables (Line 27) and the values of the constants (Lines 28, 29). Finally, in the “INPUT” section, the mappings of semantic concepts (an extract of which is visible in Table 1) in the PM annotation are used to create the automatic binding of produced outputs to incoming inputs (Lines 31–46), based on the service QoS annotations.
Listing 2. The Constraint Optimization Problem (COP) associated with the model in Figure 11.

    1  PROBLEM
    2    TYPE linear single END TYPE
    3    SOLVER both END SOLVER
    4    CLASS
    5      VARIABLES X[] VC Q END VARIABLES
    6      CONSTANTS A1 A2 A3 B1 B2 B3 C1 C2 C3 Time[] Electricity[] ManteinTime[] SetupCost[] AmmCost[]
    7        AcqTime[] Prec[] HoursAvaialble
    8      END CONSTANTS
    9      FUNCTIONS
    10       Setup(T) = SUM(i, 1, X.length, X[i] * SetupCost[i] + X[i] * AmmCost[i])
    11       ExecTime(T) = SUM(i, 1, X.length, X[i] * Time[i])
    12       ExecEletricity(T) = SUM(i, 1, X.length, X[i] * Electricity[i])
    13       ManteinanceTime(T) = MIN{X[i] * ManteinTime[i]}
    14       AcquisitionTime(T) = MAX{X[i] * AcqTime[i]}
    15       Precision(T) = MIN{X[i] * Prec[i]}
    16       AvgTime(T) = A1 * ManteinanceTime(T) + A2 * ExecTime(T) - A3 * Precision(T)
    17       ProdCostDirect(T) = B1 * ExecEletricity(T) - B2 * Precision(T) + B3 * Setup(T)
    18       TotalCost(T) = C1 * AcquisitionTime(T) + C2 * AvgTime(T) + C3 * ProdCostDirect(T)
    19     END FUNCTIONS
    20     CONSTRAINTS
    21       SUM(i, 1, X.length, X[i]) = 1
    22       AvgTime(T) <= HoursAvaialble / BatchSize
    23       ExecEletricity(T) / BatchSize < MaxElectricity
    24       QuaterlyCashFlow / QuaterlyProduction + MinQuaterlyRevenues > ProdCostDirect(T) * BatchSize
    25     END CONSTRAINTS
    26     minimize TotalCost(X) -> http://localhost/examples/Ont.owl#Cost
    27   END CLASS
    28   INSTANCE
    29     DOMAINS X[] {0, 1} VC [120.0, 1275.0] Q [10.5, 1000.0] END DOMAINS
    30     VALUES A1 = 1.5 A2 = 0.2 A3 = 3 B1 = 7.1 B2 = 12.9 B3 = 1.55 C1 = 4.55 C2 = 7.75 C3 = 9.99
    31       HoursAvaialble = (DeliveryDate - StartProduction) / WorkingDayHours - DeliveryTime
    32       INPUT
    33         BatchSize <- (Task_A, http://localhost/examples/Ont.owl#BatchDimension)
    34         MaxElectricity <- (Task_A, http://localhost/examples/Ont.owl#ElectricitzySupplyCapabilities)
    35         DeliveryDate <- (Task_A, http://localhost/examples/Ont.owl#DeliveryDate)
    36         StartProduction <- (Task_A, http://localhost/examples/Ont.owl#StartProduction)
    37         WorkingDayHours <- (Task_A, http://localhost/examples/Ont.owl#WorkingDayHours)
    38         DeliveryTime <- (Task_A, http://localhost/examples/Ont.owl#DeliveryTime)
    39         QuaterlyCashFlow <- (Task_C, http://localhost/examples/Ont.owl#QuaterlyCashFlow)
    40         QuaterlyProduction <- (Task_C, http://localhost/examples/Ont.owl#QuaterlyProduction)
    41         MinQuaterlyRevenues <- (Task_C, http://localhost/examples/Ont.owl#ExpectedQuaterlyRevenues)
    42         Time <- (Task_F, http://localhost/examples/Ont.owl#ForgingTime)
    43         Electricity <- (Task_F, http://localhost/examples/Ont.owl#ElectricityConsumption)
    44         ManteinTime <- (Task_G, http://localhost/examples/Ont.owl#ManteinanceTime)
    45         AcqTime <- (Task_B, http://localhost/examples/Ont.owl#BlockingServiceTime)
    46         Prec <- (Task_F, http://localhost/examples/Ont.owl#ProductionPrecision)
    47         SetupCost <- (Task_C, http://localhost/examples/Ont.owl#SetupCost)
    48         AmmCost <- (Task_C, http://localhost/examples/Ont.owl#AmmortisationCost)
    49       END INPUT
    50     END VALUES
    51   END INSTANCE
    52   OUTPUT
    53     VC -> (Task_F, http://localhost/examples/Ont.owl#VariableCost)
    54     Q -> (Task_F, http://localhost/ontology/fake.owl#Quality)
    55   END OUTPUT
    56 END PROBLEM
The actual values used for comparing and contrasting services during the optimal PSP computation come from their QoS annotations, such as those in Table 2. The final, optional “OUTPUT” section instructs how to map the variables found back into the environment variable assignments (Lines 51, 52), reflecting how service input parameters are supposed to be set in order to yield the optimal objective values, together with the best obtainable value of the objective itself.
5. Results
At first, we defined the flow of invocation and data exchange at the base of the proposed demonstrator behavior. Figure 12 depicts the ordered sequence: at first, the user annotates (Step 1) the services using the provided UI; as a consequence, all the semantic services, complete with metadata about QoS and the executable program offering the service (the so-called service grounding), are stored in a repository (Step 2).
Every dashed line color represents a different type of transferred object: light blue encodes Semantic Services (SS), dark yellow Process Models (PM), whereas green is for Process Service Plans (PSP). It is important to notice that models, instances and service plans are all encoded as BPMN v2.0-compliant XML documents through extension elements: this allows keeping the different stages of the models in a single place and storing them coherently in a unique repository. As for the notation in this representation, continuous lines represent explicit user actions, whereas fine dashed connections indicate implicit component interactions to provide the service to the user. In Step 3, the process model is designed, together with the IOPE characterization and all the required metadata, and stored in the central extended BPMN repository (Step 4). This completes the preparation, up to the moment the user is ready to execute an instance of a defined process model. It is worth noticing that this approach supports the separation of roles between the service annotator, the process model designer and the operator in charge of executing the instance of this manufacturing process. In Step 5, through the RunTime execution environment, the human operator launches an instance of the process, causing it to retrieve the model from the repository (Step 6) and pass it to the optimization component (Step 7). After retrieving the set of available semantic services (Step 8), the optimization component computes a non-dominated process service plan by functional composition and non-functional optimization of the related COP definition and returns it to the RunTime execution component (Step 10). Furthermore, the optimization component stores the plan in the eBPMN archive facility (Step 9).
The process then continues without user interaction, as the RunTime execution component analyzes the produced process service plan one service at a time, in the order corresponding to the process model. Here, the first service (implementing the first task in Figure 11) is considered, and the service metadata are passed in Step 11 to the allocation and service deployment component, together with all the environmental and contextual variables and settings necessary for the service. That enactment component then retrieves the full information set of the service from the repository (Step 12) and, after initializing the environment, deploys its grounding in the cloud (Step 13), monitoring it and collecting the returned value(s). Once the service has finished its task, the deployment wrapper and environment are disposed of, in order to release the cloud resources used by the service. If the current service is correctly executed, the allocation and service deployment component interacts with the RunTime execution to move to the next service used to implement the model, until the full process service plan is completed and the UI returns a positive confirmation of the termination to the user.
However, it is possible that a service cannot be deployed (for instance, when a physical resource is already in use or out of service for unplanned maintenance) or that a failure code is returned.
Figure 13 represents this case: the first task terminated correctly, and the PSP then dictated the use of a specific service for implementing the second task, as this was the best match amongst the semantically quasi-equivalent alternatives from Figure 10; however, the robotic arm used to ground that service is currently unusable, making the service fail.
The interaction flow follows the same evolution as in the no-failure case until Step 13. In Step 14, the service control returns a failure status to the calling component. This triggers the RunTime execution environment (Step 15), which collects all the information about the correctly executed and the failing services for the current process (the PSP log), and requests an alternative PSP (Step 16) from the optimizer. Using the additional knowledge about the execution log and the services currently available (Step 17), a new functional composition and a new QoS-based COP resolution are computed. This generates a viable non-dominated PSP, which is stored (Step 18) and subsequently returned to the execution component (Step 19); the latter is then able to resume the process execution, without needing to abort the already executed services. This is particularly critical in the manufacturing domain, as most of the services are not idempotent and cannot be repeated without affecting the final result or generating scrap and defective parts. If there are no possible services available to implement the process, the user will observe a failure (“broken” PSP). In this demonstrative case, we suppose there is a combination of services that can be used as functionally equivalent to the failed one, even though it is sub-optimal with respect to the COP objective and the QoS measures of the services.
After that, the normal iteration between the execution and the deployment component is restored, using the new PSP, which has an updated grounding for the failed task, composed of two services. Each single service is then individually instantiated: Steps 20 and 25 report the service deployment request; Steps 21 and 26 pinpoint the service details retrieved from the repository; Steps 22 and 27 show the actual deployment of the service in the complete and correct cloud environment; Steps 23 and 28 signal the correct completion of the service execution; whereas Steps 24 and 29 confirm it to the RunTime execution environment, allowing it to proceed with the next service defined by the PSP. Finally, when the last service returns correctly, the execution of the process instance is done, and the human operator is informed through a message in the UI.
6. Discussion
In this section, we present our thoughts about the effort and major barriers involved in extending the presented demonstrative case to general manufacturing-related processes, towards a pragmatic Industry 4.0-enabled approach in physical production contexts. The analysis is organized along the presented components of our demonstrator.
As a base, we proposed a basic general domain ontology, and we showcased how it can be treated and extended to cover the specific sub-domain of a new application. We showed that it is feasible to implement a semantic wrapper for service annotation, and that it is possible to use it to represent any type of service (information retrieval or computation, mechanical/operative tools or robots, and human operators) in an XaaS approach. This wrapping capability also proved able to equip the service description with user-defined QoS. These steps are still a considerable obstacle for process designers, as they are not normally familiar with domain knowledge elicitation and its formalization. Additionally, the need to define the semantic and metadata (QoS) wrapper around all the available services, at the finest granularity possible, is also a big barrier for the adoption of XaaS in manufacturing. On the positive side, once these demanding and challenging tasks are done, it should normally not be necessary to repeat them, as they can support every process in the defined sub-domain. Unfortunately, in reality, both small corrections and, sometimes, big reworks are occasionally necessary, to sharpen the focus of these definitions or to correct misunderstandings in them.
Minimal modifications are necessary on the BPMN editor side to support the extensions to the standard language for including the COP problem description; the service plan, the data bindings and the optimal variable assignment; as well as the execution log. We showed that this is already enough to support the expected functionalities. Designing extended BPMN models equipped with COP definitions and semantic task annotations should not be too challenging, once the previous two steps are internalized. For the rest, these are standard models following the well-known notation, and their graphical representation keeps the usual aspect familiar to the process designer.
From the point of view of process optimization, the main benefits with respect to existing approaches are manifold. The first improvement is in the business process formulation, as it allows the full integration of functional service selection and composition with non-functional optimization based on user-defined QoS and arbitrarily complex objective functions in the COP. This is achieved through our BPMN extensions and thanks to the development of a grammar for the formalization of the optimization problem. Secondly, the produced output, the PSP, is directly enactable by an execution environment, being a complete plan. This means that it is equipped with all the relevant information: service assignments, data flow (variable bindings) and optimal variable assignments for initializing the enactment environment. Finally, encoding the computed PSP in an extended BPMN format allows keeping the model and the plan in a single place, together with the variable assignments and the optimality value achieved. This part is completely transparent to the process designer and to the human operator in charge of supervising the process execution, except for the approval of the proposed service plan, required as an assumption of responsibility.
Regarding the execution part, thanks to a small interpretation effort for the additional model/instance fields, the main advantage is the capability of using the optimal PSP and data binding information, together with the decoupling of the BPMN structure (conditionals and gateways) from the service deployment, which is delayed and delegated to a specialized component.
Regarding service execution and control, our specialized component is a complete novelty in this domain: it allows checking the status (availability and leasability) of any service in the form of a container and supports late deployment on cloud facilities. Furthermore, these last two components work in the background of the demonstrator, except for the very standard UI they provide for controlling the process execution and the service deployment, so they should not represent any barrier to the practical adoption of such an approach in real-world manufacturing industries.