Article

Framework for Selecting Manufacturing Simulation Software in Industry 4.0 Environment

1 Department of Chemical, Materials and Industrial Production Engineering (DICMAPI), University of Naples “Federico II”, Piazzale V. Tecchio 80, 80125 Napoli, Italy
2 Department of Engineering and Architecture, University of Parma, viale G.P. Usberti 181/A, 43124 Parma, Italy
* Author to whom correspondence should be addressed.
Sustainability 2020, 12(15), 5909; https://doi.org/10.3390/su12155909
Submission received: 21 April 2020 / Revised: 19 June 2020 / Accepted: 26 June 2020 / Published: 22 July 2020
(This article belongs to the Collection Smart Production Operations Management and Industry 4.0)

Abstract

Even though the use of simulation software packages is widespread in industrial and manufacturing companies, the criteria and methods proposed in the scientific literature to evaluate them do not adequately help companies identify a package able to enhance the efficiency of their production system. Hence, the main objective of this paper is to develop a framework to guide companies in choosing the most suitable manufacturing simulation software package. The evaluation framework developed in this study is based on two different multi-criteria methods: the analytic hierarchy process (AHP) integrated with benefits, opportunities, costs, risks (BOCR) analysis and the best-worst method (BWM). The framework was developed on the basis of suggestions from the literature and from a panel of experts, both from academia and industry, trying to capture all the facets of the software selection problem. For testing purposes, the proposed approach was applied to a mid-sized enterprise located in the south of Italy, which was facing the problem of buying an effective simulation software package for Participatory Design. From a practical perspective, the application showed that the framework is effective in identifying the most suitable simulation software package according to the needs of the company. From a theoretical point of view, the multi-criteria methods suggested in the framework have never been applied to the problem of selecting simulation software; their usage in this context could bring some advantages compared to other decision-making tools.

1. Introduction

The industrial sector has always been the scene of fundamental innovations and discoveries over the years, which have contributed to revolutionizing and transforming not only the sector itself, but also society as a whole [1]. As a result, the world scenario has become increasingly globalized and competitive. Nowadays, especially in the industrial field, the strongest companies are those that continually aim at renewal, because the only way to get ahead in a dynamic and volatile market is the search for a winning idea and the use of increasingly innovative technologies [2]. These transformations have been given the name of industrial revolutions, and they have deeply involved the manufacturing sector. Mankind has already faced three of these revolutions throughout history, and we are currently in the middle of the Fourth Industrial Revolution, also known as Industry 4.0 [3]. Moeuf et al. [4] defined Industry 4.0—which was introduced in 2011 at the Hannover Fair—as a general concept enabling manufacturing with the elements of tactical intelligence using techniques and technologies such as the Internet of Things (IoT), cloud computing and big data. According to [5], the main pillars of Industry 4.0 are nine technologies, which form the basis of the transition to this new industrial revolution: big data and analytics, autonomous robots, simulation, horizontal and vertical system integration, the Industrial Internet of Things, cybersecurity, the cloud, additive manufacturing and augmented reality. Moreover, the main characteristics of Industry 4.0 are [6]: interconnection, interoperability, virtualization, decentralization, real-time operating system, modularity, cyber-physical systems, Internet of Things, Internet of services and the smart factory.
Simulation is one of the fundamental pillars of Industry 4.0, thanks above all to its flexibility: it can be implemented in virtually any field, from manufacturing to services, design and even healthcare. According to [7], simulation is “the process of designing a model of a real system and conducting experiments with this model for the purpose either of understanding the behaviour of the system or of evaluating various strategies (within the limits imposed by a criterion or set of criteria) for the operation of the system” [8]. Industry 4.0 and its software simulations can largely benefit from the usage of analytics and simulation models [9].
Technological development, globalization and the growing demand coming from the market in terms of product customization and speed of response have forced the manufacturing world to rethink its processes and its organization, laying the foundations for Industry 4.0 [10]. Before investing in the new technologies, enterprise managers need to know their companies’ readiness in order to make the right decisions to preserve or enhance their competitiveness. Industry 4.0 readiness assessment and maturity models can support the management in measuring and setting up a plan for the digital transformation of their company, providing guidance and support to align business strategies and operations. The notion of “maturity” defines “the state of being complete, perfect, or ready” [11] and, in a business context, it describes the ability of an organization for continuous improvement in a particular discipline [12]. Maturity models enable a structured approach for analysing the initial state, in which weaknesses can be identified, the potential for improvement can be shown and specific steps for improvement can be initiated [13]. The degree of maturity defines a certain development state within a scale interval, determined by a starting point, which is the lowest level, and an ending point, the highest level [14]. A shift to a higher level indicates an improvement. A specific level of maturity includes the respective characteristics of previously defined objects and their required characteristics [15]. Hence, maturity models enable users to identify the need for change and to derive the necessary measures to guide the change process. These models represent a path towards an organized and systematic way of doing business. Two approaches for implementing maturity models exist. With a top-down approach [15], a fixed number of maturity stages (levels) is specified first and further corroborated with characteristics that support the initial assumptions about how maturity evolves. When using a bottom-up approach [16], distinct characteristics or assessment items are determined first and clustered, in a second step, into maturity levels to induce a more general view of the different steps of maturity evolution. These maturity models are generally operationalized by submitting a standard questionnaire to the organizations. Answers are mapped onto the defined maturity model and standard recommendations based on the assessed maturity level are provided to the organizations [17].
In recent years, the selection of the most suitable simulation software to correctly face industrial problems has become vital for companies. The choice should be made among several alternatives, and some companies, especially small and medium enterprises (SMEs), may not have the appropriate tools to support that choice [18]. Simulation software selection is a complex decision-making process that is often costly and time consuming; a careful selection can take as long as one year [19]. In short, the task of software package selection has become more complex due to: (1) the difficulty of assessing the applicability of software packages to the business needs of the organization, given the large number of packages available on the market; (2) possible incompatibilities between the different hardware and software systems; (3) the lack of technical knowledge and experience of decision-makers; (4) ongoing improvements in information technology [20]. Simulation software evaluation and selection is a multi-criteria problem, which can be solved by decision analysis techniques [21]. Multi-criteria decision-making (MCDM) methods make it possible to evaluate and decide which alternative is the most suitable according to different criteria; this is their main advantage over other decision-making approaches [22].
Building on the considerations above, this study tries to contribute to the literature by helping enterprise management select the most suitable manufacturing simulation software package for their needs, proposing a new methodology. The evaluation framework developed here makes use of two MCDM methods, namely:
  • Analytic hierarchy process (AHP) integrated with benefits, opportunities, costs, risks (BOCR) analysis [23];
  • Best-worst method (BWM), an approach proposed in recent studies as an alternative to AHP for finding the criteria weights in MCDM problems [24].
From a technical point of view, none of these MCDM approaches has been applied to the problem of selecting simulation software; therefore, their application could lead to some interesting findings.
The paper proceeds as follows. Section 2 reviews the literature relevant to this study, covering in particular Industry 4.0 assessment tools, readiness and maturity models and MCDM methods. Section 3 describes the research methodology followed to derive the proposed framework, while Section 4 details it step by step. In Section 5, the proposed framework is applied to a real case for testing purposes. Finally, the conclusions about the results and contributions made by this study are formalized, highlighting the main limitations and suggesting future research directions.

2. State of the Art

2.1. Industry 4.0 Maturity Models

Industry 4.0 has a strong Information Technology (IT) orientation and is often implemented as part of projects, so it is an appropriate application domain for maturity models, which are commonly used as instruments to conceptualize and measure the maturity of an organization or a process with respect to a specific target state.
In line with the aim of this study, this subsection reviews the literature focused on Industry 4.0 maturity models. Schumacher et al. [25] developed a maturity model and a related tool for assessing the Industry 4.0 maturity of manufacturing enterprises. The model was developed using a multi-methodological approach including a systematic literature review, conceptual modelling and qualitative and quantitative methods for empirical validation. First validations of the model’s structure and content showed that the model was transparent and easy to use and proved its applicability in real production environments. Ganzarain & Errasti [26] proposed a process model as a guiding framework for Industry 4.0 collaborative diversification vision, strategy and action building. To be more precise, the authors suggested a stage process model to guide and train companies to identify new opportunities for diversification within Industry 4.0. Results showed a real need for guided support in developing a company-specific Industry 4.0 vision and specific project planning. Weber et al. [27] presented the Maturity Model for Data-Driven Manufacturing (M2DDM), consisting of six maturity levels that logically build upon each other. It was meant to serve as a guidance for architects, project managers and others who work and collaborate in smart factory projects. With the maturity model SIMMI 4.0 (System Integration Maturity Model Industry 4.0), Leyh et al. [14] provided a toolset for enterprises to assess their current IT system landscape. The authors presented the application of SIMMI 4.0 in selected enterprises as a proof of concept, using a questionnaire-based approach to apply their model. After observing the strengths and weaknesses of existing models and frameworks, Gökalp et al. [28] proposed a new SPICE-based maturity model for Industry 4.0, named Industry 4.0-MM. The aim of the model is to provide a means for assessing a manufacturer’s current Industry 4.0 maturity stage and for identifying concrete measures to help manufacturers reach a higher maturity stage in order to maximize the economic benefits of Industry 4.0. Gracel & Łebkowski [29] developed and presented the ManuTech Maturity Model (MTMM), a manufacturing technology maturity model related to Industry 4.0. The role of this model is to help top management answer critical questions such as, “What is the current level of technological advancement of the factory?” or “How should the manufacturing technologies be deployed to ensure the effective execution of a new Industry 4.0 strategy or new business models?”. Guimarães et al. [30] developed a new method to help an industry select the right Discrete-Event Simulation Software (DESS), which helps improve the productivity of a given process. Their work first proposes a methodology that allows companies to self-assess their current internal processes based on a maturity model and then applies the AHP to support simulation software selection by detailing and weighting the components that are important for the specific company to meet its business objectives. Finally, De Carolis et al. [31] proposed a model to set the ground for the investigation of company digital maturity. The authors subsequently defined a scoring method for maturity assessment, in order to identify the criticalities in implementing the digital transformation and to drive the improvement of the whole system.
Table 1 summarizes the main studies about Industry 4.0 maturity models, along with their structure and evaluation.

2.2. Simulation Software Selection

In recent decades, the issue of selecting simulation software packages has attracted the interest of many researchers, and there are numerous publications on this topic. For this reason, a systematic literature review was set up in order to span the field of simulation software selection and collect the main contributions on the topic. The databases considered for the analysis are Scopus (https://www.scopus.com/) and Google Scholar (https://scholar.google.com/). The two databases were analysed and screened using different combinations of keywords, namely “simulation software”, “criteria”, “evaluation”, “selection”, and “framework”. For a paper to be considered pertinent to the analysis, the combinations of keywords and operators had to be found in the title, abstract or keywords of the article. A large number of documents was found to meet these requirements, as shown in Figure 1.
These documents were further screened according to different criteria. In particular, papers were retained if:
- they were written in English;
- they were peer-reviewed documents. “Grey” literature, i.e., any material, usually not peer-reviewed, produced by institutions and organizations outside classical scientific distribution channels (industrial reports, position papers, or government normative documents), was excluded from the analysis;
- all relevant data were available. Lack of relevant information (e.g., authors’ names, title, journal or conference name) involved the exclusion of the paper;
- they were in line with the goal of the analysis. Studies that only dealt with the application of simulation software to a case study or with the performance evaluation of production lines through simulation were not considered in the analysis;
- the full text was available;
- at least the software selection criteria were discussed in the document.
In the end, as a result of this systematic literature review process, 45 articles emerged as pertinent. Descriptive statistics about the publication year of the articles reviewed are illustrated in Figure 2.
As can be seen from Figure 2, the interest in simulation software selection has intensified in the last ten years, with peaks of five and four papers published in 2014 and 2011, respectively.
A summary of the contributions of the literature in the field of evaluation and selection of software packages is given in Table 2, in which they are classified into six categories. To be more precise, columns 2, 4 and 5 highlight the presence of a structured framework for simulation software selection, of evaluation criteria and of an evaluation technique, respectively. Column 6 indicates whether the proposed selection methodology, evaluation technique or selection criteria have been applied in practice. Column 7 indicates whether the authors only compared the technical features of the software packages. Finally, column 3 indicates whether the authors proposed a maturity model.
Only four papers [18,41,51,63] have proposed a structured framework, evaluation criteria, an evaluation technique and a practical application. Almost all the selected works provide a set of evaluation criteria, but not all of them also suggest an evaluation technique. Moreover, only a few papers present a practical application of the suggested technique or propose a structured framework with the steps to be followed for selecting simulation software. Finally, six works merely compared the technical characteristics of some software packages available on the market. The work of [30] is the first study that tries to relate maturity models for assessing Industry 4.0 readiness to the simulation software selection problem; however, it does not provide a structured framework for simulation software selection.

2.3. MCDM Methods: Evaluation Techniques

This subsection reviews the evaluation techniques (mainly MCDM methods) that could be applied to the simulation software selection problem and summarizes their usage in the scientific literature (see Figure 3).
The first studies proposing MCDM evaluation techniques date back to the 1990s. AHP, in particular, was the first structured method used and applied in problems of selecting the best option among software alternatives [37]. A dedicated software tool called SimSelect was then developed by [38], with the precise aim of solving that problem. Looking at Figure 3, it is clear that AHP still remains the most used and studied MCDM technique [59]. However, despite its wide usage, it is plainly acknowledged in the literature that AHP also has some limitations. For example, this technique does not allow costs to be taken into account in the evaluation [23]. Moreover, possible errors and inconsistencies can arise when assigning weights to the criteria; if applied to the software selection problem, this implies that a company could be doubtful about the final score of a software package and thus about the selection of the most appropriate one [50,54]. In addition, the computational procedure of AHP becomes quite complex if several criteria and alternatives are to be taken into account.
Moving on from these considerations, researchers have started developing different fuzzy MCDM approaches to refine or complement AHP, or have studied how to integrate it with other techniques. More precisely, AHP can be integrated with BOCR analysis for a more comprehensive evaluation which also allows costs to be taken into account in the assessment [23]. Recently, the BWM has been gaining popularity over AHP among researchers, as it simplifies the process of carrying out pairwise comparisons and reduces the occurrence of inconsistencies during comparisons [72]. The analytic network process (ANP), fuzzy analytic hierarchy process (FAHP), Technique for Order Preference by Similarity to Ideal Solution (TOPSIS), preference selection index (PSI) and elimination and choice translating reality (ELECTRE) [67,73,74,75], among other approaches, are all examples of methods that have been created to this end. Most of them have been applied to the problem under examination or to similar problems; for example, TOPSIS has been used for evaluating the performance of IoT in organizations, while AHP, FAHP, TOPSIS and PSI have all been applied to the problem of evaluating and selecting simulation software packages [70,75].
In works expressly dedicated to the software selection problem, specific indexes have sometimes been introduced with the purpose of analysing and classifying the software behaviour. Examples of such studies are those by Rincon et al. [51] and Sahay & Gupta [48]. The former authors defined the weight global quality rate (WGQR) strategy for analysing the behaviour of the software within the hierarchy, as it quantifies the influence of the weight assigned to each category. The WGQR is estimated as a function of the quality rate and of the weights of the categories. Sahay & Gupta [48] instead defined the software Solution Merit Index (SMI), which is indicative of the software utility and its merit to the organization.
To sum up, on the basis of the considerations above, the MCDM approaches suggested for implementation in the proposed framework are AHP coupled with BOCR and the BWM. AHP is indeed a well-structured and well-known MCDM methodology and the most applied in the literature [18]; coupling it with a BOCR analysis allows the decision-maker to take into account the positive and negative aspects of the problem separately, which is expected to improve the effectiveness of the procedure [23]. Next to AHP, the BWM has been selected as it represents an evolution of the former method, is faster from a computational point of view and reduces the risk of inconsistencies during comparisons [72].

3. Research Methodology

The framework presented in this paper was developed to support the evaluation and selection of simulation software packages. Given this aim, the framework design was supported by an appropriate multi-disciplinary panel of experts, composed of 14 members, namely:
  • two academics from the University of Parma and three academics from the University of Naples, chosen among researchers whose interests are in the field of simulation and process modelling;
  • three practitioners from three different software houses developing various kinds of software packages, including simulation software packages;
  • six people from two small-size companies. For each company, representatives were the company’s owner, the plant manager and the purchasing manager.
The panel work was led by the academics as follows. On the basis of their skills in simulation and process modelling, as well as following the suggestions retrieved from the literature (as illustrated in Section 2), the academics set up a preliminary framework for the selection of simulation software packages. This version of the framework consisted of eight steps, with a related description, and was explained to the remaining panel members during a roundtable discussion. Each member was then asked to express their level of agreement with each step on a 5-point scale (1 = totally disagree; 5 = totally agree); each expert could also suggest adding or removing a step, as well as combining steps together. The academics then collected and summarized the outcomes of the roundtable discussion and amended the framework accordingly. The updated version of the framework was submitted to the panel members for a second roundtable discussion, in which the experts were asked to refine their preliminary considerations, as well as to express again their agreement with the steps included in the second version of the framework. At the end of the second roundtable discussion, the panel members expressed a general agreement on the proposed framework, which was, therefore, considered validated. The framework in its final configuration is presented in Section 4.

4. Evaluation Framework for Simulation Software Selection

This section describes the evaluation framework for selecting simulation software. The need for a structured and standardized approach to simulation software selection is evident: evaluating simulation software packages and selecting the best one for an organization is a time-consuming task unless an efficient methodology is used. This study provides such a structured approach. The proposed selection framework is a nine-stage process and its structure is shown in Figure 4.
As can be seen from Figure 4, the proposed framework has been designed to provide an agile but structured approach to the selection of simulation software packages. Indeed, the approach consists of a sequence of stages that could be customised if needed, depending on the specific application and/or industrial problem the simulation software should face.
Stage 1 is the self-assessment of the digital readiness of the company using a maturity model. The maturity model proposed in this paper aims to contribute to the generation of knowledge about how to position an industry in relation to the degree of maturity of its production processes, technology, governance and culture, in order to be able to extract the best outcomes from digitalization and the adoption of a simulation software package. Stage 2 is the identification of the core problem and of the company’s need for purchasing a simulation software package. Stage 3 is the definition of the general objectives and requirements of the simulation software. Stage 4 is an initial survey of the software packages available on the market, carried out to make a preliminary screening of the candidates and obtain a short list of alternatives. Stage 5 is the actual evaluation and selection of the software through:
- the new integrated BOCR-AHP methodology, which consists of six ordered sub-steps and should help select the best software and provide a ranking of the alternatives;
- the BWM approach, which includes eight ordered steps to select the best simulation software from a list of candidates.
Stage 6 is the output of the selection methodology: a ranking of the candidate software packages. Stages 7 to 9 are practical steps. To be more precise, stage 7 involves purchasing the top-ranked software package. Then, in stage 8, a solution of the core problem is implemented using the selected simulation software. Finally, stage 9 involves the maintenance and evolution of the implemented solution.
Each stage will be described in detail in the following subsections.

4.1. Self-Assessment of the Digital Readiness of the Company Using Maturity Model

Before selecting a simulation software package, it is appropriate for companies to carry out a self-assessment of their manufacturing processes through a maturity model, in order to evaluate their readiness to embrace the new Industry 4.0 key enabling technologies. This is a crucial point because it allows the company to understand whether it is really ready to take advantage of the implementation of a simulation software package and of its usage from an Industry 4.0 perspective. If the company is not ready, the self-assessment can help identify the needs for improvement.
The starting point in the development of the model was to identify the possible structure to be used when representing a maturity level. Two types of structures are available in the literature, hierarchical and process-oriented [76]. In this study, a hierarchical structure was adopted. According to this structure, the degree of readiness of a process for the usage/implementation of computer simulation can be evaluated by looking at some specific dimensions. Weckenmann & Akkasoglu [77] note that hierarchical models usually have four to six levels. Following [34,78], the maturity model proposed in this work has five levels—basic, repetitive, defined, integrated and optimal—and is grounded in four dimensions—process, technology, organization and people. For the present study, the levels and dimensions of the maturity model were defined as per Table 3.
In the proposed approach, the maturity level is evaluated by means of a questionnaire survey. A total of 33 questions were formulated for this purpose and grouped according to the four dimensions; for each dimension, questions were classified on the basis of the different aspects investigated, according to the scheme proposed in Appendix A. Most of the questions are to be answered on a perception scale with five options, from “1 = low” to “5 = high”; the remaining questions are yes/no questions.
The maturity level for each dimension is then assessed by assigning numerical values to each response, according to the following rules:
[1 → 0.00; 2 → 2.50; 3 → 5.00; 4 → 7.50; 5 → 10.00] for questions with the response scale from 1 to 5;
[1 → 0.00; 2 → 10.00] for yes/no questions.
The score of each dimension, henceforth referred to as the “individual dimension maturity level” (IDML), is obtained as follows:

$$IDML = \left( \frac{\sum_{i=1}^{n} Q_i}{n} \right) \times 10$$

where:
$Q_i$ is the numerical value associated with the response to question $i$;
$n$ is the number of questions in each dimension.
According to the numerical values set above, the IDML can vary from 0 to 100. The range of possible scores was divided into five equal bands corresponding to the five levels of maturity, as shown in Table 4. The maturity level for each dimension is thus determined using these bands. The “final maturity level evaluated by the model” (FMML) is obtained as the mean of the IDMLs, since all the dimensions have the same weight:
$$FMML = \frac{\sum_{i=1}^{4} IDML_i}{4}$$
The results of this step allow the company to obtain a quick overview of the global maturity level of its operational processes. Taking into account the correspondence between the maturity classification levels and the FMML score, it is easy to deduce that a company with FMML < 40 is not yet in a position to benefit from the permanent usage of a simulation software package.
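As an illustration, the sketch below (in Python, with hypothetical questionnaire answers and an assumed number of questions per dimension) computes the IDML of each dimension and the FMML according to Equations (1) and (2) and the mapping rules above; the factor of 10 reflects the 0–100 range stated for the IDML, and the band boundaries assume the five equal 20-point bands of Table 4.

```python
# Illustrative sketch (Python): IDML and FMML computation following Equations (1) and (2).
# Questionnaire answers and the number of questions per dimension are hypothetical.
from statistics import mean

LIKERT_MAP = {1: 0.0, 2: 2.5, 3: 5.0, 4: 7.5, 5: 10.0}  # 1-5 perception scale
YES_NO_MAP = {"no": 0.0, "yes": 10.0}                    # yes/no questions

# Five equal 20-point bands, assumed to reproduce the classification of Table 4
MATURITY_BANDS = [(20, "Basic"), (40, "Repetitive"), (60, "Defined"),
                  (80, "Integrated"), (100, "Optimal")]

def idml(mapped_answers):
    """Individual dimension maturity level from already-mapped answers (0-10 each)."""
    return mean(mapped_answers) * 10

def band(score):
    return next(level for upper, level in MATURITY_BANDS if score <= upper)

# Hypothetical mapped answers for the four dimensions
dimensions = {
    "Process":      [LIKERT_MAP[a] for a in (3, 4, 3, 3, 4)],
    "Technology":   [LIKERT_MAP[a] for a in (3, 3, 4, 2)] + [YES_NO_MAP["yes"]],
    "Organization": [LIKERT_MAP[a] for a in (3, 4, 3, 3)],
    "People":       [LIKERT_MAP[a] for a in (2, 2, 3, 2)],
}

idmls = {dim: idml(vals) for dim, vals in dimensions.items()}
fmml = mean(idmls.values())
for dim, score in idmls.items():
    print(f"{dim}: IDML = {score:.1f} ({band(score)})")
print(f"FMML = {fmml:.1f} ({band(fmml)})")
```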

4.2. Identification of the Core Problem

One of the most important stages in selecting simulation software packages (as well as in many other decision-making problems) is to clearly state the problem, or class of problems, that the enterprise would like to address by means of the software. In addition, defining the problem also includes stating what the simulation tool is expected to do.
Once the need for purchasing a simulation software package has been established, several factors have to be considered. These factors include, among others, the intended simulation purpose, the existing constraints within the company, the main types of problems to be simulated and some key information regarding the modellers and potential users. Other issues to be taken into account are organizational and financial constraints, which might include the hardware available for use by the simulation software, the budget available for software purchase, installation and maintenance costs, the purchase of additional hardware, the training of personnel, etc. Another constraint is the time available for software evaluation, selection and implementation [40]. Determining the models that are likely to be developed can further help reduce the list of possible software candidates for evaluation. Three main types of models can be distinguished: discrete-event, continuous, or those that combine discrete and continuous elements. The issues relating to the persons involved in software selection, modelling and the use of future models also need to be addressed. Preferably, the same employees should be involved in the process of software selection and modelling. If possible, a team approach should be used.

4.3. Definition of the General Goals and Requirements

Once the core problem has been identified, in stage 3 of the software selection process the group responsible for the evaluation and selection of the software has to define the general goals of the choice based on the overall organizational requirements. These include:
- the areas of application and usage of the software;
- the particular aims of the organizational unit that will use it;
- the identification of the functional and non-functional requirements of the software [51].
This phase must be carried out in the most accurate, complete and detailed way possible, as it will be crucial for selecting the most appropriate software package for the specific application the company needs. Indeed, the list of desired functionalities and features of the software indicates which evaluation criteria are most important for the company and will determine which criteria will have a greater weight in the application of the selection method. Therefore, these functional requirements will be the discriminating factor in the evaluation of the candidate software packages. For example, for a specific application a company could consider software features such as graphics, 3D animation and GUI as more important. The same company, for a different application, could consider different features, such as statistical reports and integration with other technical systems, as more important.

4.4. Preliminary Screening of the Simulation Software Packages Available on the Market

This phase includes a preliminary investigation of the market for software packages that might be suitable candidates, including a high-level review of the major functionalities and features supported by each package. Helpful sources for this preliminary investigation are web-based resources, including the vendors’ websites, professional association catalogues and other third-party reports [59]. The purpose of this initial survey is to shorten the list of software products that can be considered for evaluation and subsequent selection [45]. For example, if the systems to be simulated comprise both discrete and continuous elements, then all packages that cannot support both functionalities could be eliminated. At this stage, several other sources of information have to be consulted. Vendors of software products that seem to be candidates for evaluation should be contacted and asked for assistance, as well as for providing as much information as possible, in addition to software demonstrations and written material. Other literature related to the software being considered ought to be examined and other software users contacted, if possible. The expected outcome of this step is a short list of candidate simulation software packages for the subsequent detailed evaluation.

4.5. Evaluation and Selection of the Best Simulation Software

This subsection describes the effective evaluation and selection methodologies, which represent stage 5 of our proposed framework. As already mentioned, the MCDM methods suggested for application are AHP integrated with BOCR analysis and BWM.

4.5.1. The Integrated BOCR-AHP Methodology

Based on the integration of BOCR analysis and AHP, this methodology is the core of the developed evaluation framework and includes six ordered steps to select the best software from a short list of candidates and rank them.
The AHP, one of the most popular MCDM approaches, introduced by [79], is an effective tool for dealing with complex decision-making, and may help the decision-maker set priorities and make the best decision. By reducing complex decisions to a series of pairwise comparisons, and then synthesizing the results, the AHP helps capture both the subjective and objective aspects of a decision. The AHP functional steps are described in [80]. The AHP breaks down the problem into a hierarchy. In the hierarchical structure on which the elements are placed, the objective (goal) to be achieved is first established, and then split by defining the criteria needed to evaluate the alternatives. The AHP generates a weight for each evaluation criterion according to the decision-maker’s pairwise comparisons of the criteria. For a fixed criterion, the AHP assigns a score to each option according to the decision-maker’s pairwise comparisons of the options based on that criterion. Finally, the AHP combines the criteria weights and the options’ scores, thus determining a global score for each option and a consequent ranking. The global score for a given option is a weighted sum of the scores it obtained with respect to all criteria.
BOCR analysis is a strategic planning technique used to help a decision-maker identify the benefits, opportunities, costs and risks related to a business venture or project and to identify the internal and external factors that are favourable or unfavourable to achieving its objectives [81]. The four parameters that the technique examines are:
- Benefits: characteristics of the business/project that give it an advantage over others;
- Opportunities: elements in the environment that the business/project could exploit to its advantage;
- Costs: characteristics of the business that place the business/project at a disadvantage compared to others;
- Risks: elements in the environment that could trouble the business/project.
Figure 5 briefly summarizes the six steps to be followed to perform an integrated BOCR-AHP analysis for simulation software evaluation and selection, specifying which elements represent the inputs and outputs of each step.
Step 1: Determining a set of decision criteria. The first step consists of defining the set of criteria to be adopted by a company for identifying the best simulation software for a specific application. Based on the literature review and by means of two brainstorming sessions, a three-level hierarchy with 9 criteria and 36 sub-criteria was developed, according to the scheme proposed in Appendix B. Appendix B also provides a description of each criterion/sub-criterion and suggests a possible scale to evaluate their relative importance.
Step 2: Developing the BOCR structure. To embody the BOCR analysis in the simulation software selection process, the criteria should therefore be organized into four groups, covering benefits, opportunities, costs and risks. In the BOCR structure, the top level includes the merit nodes (i.e., the benefits, opportunities, costs and risks perspectives), whereas the second level consists of four sub-hierarchies, one for each perspective, containing the nodes known as control criteria. Figure 6 represents the developed BOCR structure with the classification of the criteria defined in the previous step.
Step 3: Computing the weights of the criteria. This is a typical step of the AHP, which has been slightly modified here to take the BOCR analysis into account. Not all the criteria will have the same importance, so one of the steps in the AHP process is to derive the relative priorities (weights) of the criteria. In order to compute the weights of the different criteria, the AHP starts by creating a pairwise comparison matrix A of the criteria (or sub-criteria). The matrix A is an $n \times n$ real matrix, where $n$ is the number of evaluation criteria considered. Each entry $a_{ij}$ of the matrix A represents the importance of the ith criterion compared to the jth criterion. If $a_{ij} > 1$, then the ith criterion is more important than the jth criterion, while if $a_{ij} < 1$, then the ith criterion is less important than the jth criterion. If two criteria have the same importance, then the entry $a_{ij}$ equals 1.
The entries $a_{ij}$ and $a_{ji}$ satisfy the following constraint:

$$a_{ij} \times a_{ji} = 1$$

Obviously, $a_{ii} = 1$ for all $i$. The relative importance between two criteria is measured according to a numerical scale from 1 to 9. For further details, the reader is referred to [80].
In a BOCR-AHP methodology, pairwise comparisons should be carried out in each sub-hierarchy of each merit. The number of pairwise comparisons depends on the number of factors to be compared and amounts to:

$$\frac{n(n-1)}{2}$$
Once the judgments have been entered, it is necessary to check their consistency. The matrix A is reciprocal and consistent if:
- the reciprocity relationship holds: $a_{ij} = \frac{1}{a_{ji}}$ for each value of $i$ and $j$;
- the transitivity relationship holds: $a_{ij} = a_{ik} \times a_{kj}$ for each value of $i$, $j$ and $k$.
If both conditions hold, the matrix is perfectly consistent.
We now need to calculate the overall priorities, or weights, of the criteria. Two types of methods are available for this purpose: exact and approximate. Although the exact method is not shown in detail here, the general idea is simple: the comparison matrix is raised to successive powers (e.g., squared, and the result squared again, and so forth) until all the columns become identical; the result is called the limit matrix, and at that point any of its columns constitutes the desired set of priorities. This calculation can be done in a spreadsheet, but it is nowadays carried out very easily using AHP-based software packages. Here, one of the approximate methods is used instead, due to its simplicity. The most commonly used method first normalizes the matrix, summing the columns and dividing every element by the sum of the column in which it is located. The priority vector is then approximated by taking the average of each row (adding the values of the row and dividing by the number of columns). The principal eigenvalue is calculated by multiplying each element of the priority vector by the corresponding column sum of the matrix and adding up the resulting values. The consistency index is then used to check consistency. From an analytic point of view, once the matrix A has been built, the normalized pairwise comparison matrix $A_{norm}$ is derived from A by making the sum of the entries in each column equal to 1, i.e., each entry $\bar{a}_{ij}$ of the matrix $A_{norm}$ is computed as

$$\bar{a}_{ij} = \frac{a_{ij}}{\sum_{i=1}^{n} a_{ij}}$$
Finally, the criteria weight vector w is built by averaging the entries on each row of $A_{norm}$, i.e.,

$$w_i = \frac{\sum_{j=1}^{n} \bar{a}_{ij}}{n}$$

The components of the criteria weight vector meet the following constraint:

$$\sum_{i=1}^{n} w_i = 1$$
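As an illustration of this step, the following Python sketch (with a hypothetical 3 × 3 pairwise comparison matrix on the Saaty 1–9 scale) applies the approximate procedure described above: column normalization, row averaging to obtain the weight vector, and the approximated principal eigenvalue used for the consistency check.

```python
# Illustrative sketch (Python) of the approximate AHP weighting procedure.
# The 3x3 pairwise comparison matrix (Saaty 1-9 scale) is hypothetical.
import numpy as np

RANDOM_INDEX = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45}

def ahp_weights(A):
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    A_norm = A / A.sum(axis=0)              # normalized matrix: a_ij / sum of column j
    w = A_norm.mean(axis=1)                 # weight vector: average of each row
    lambda_max = (A.sum(axis=0) * w).sum()  # approximated principal eigenvalue
    ci = (lambda_max - n) / (n - 1)         # consistency index
    cr = ci / RANDOM_INDEX[n] if RANDOM_INDEX[n] else 0.0  # consistency ratio (<= 0.1 acceptable)
    return w, cr

A = [[1,   3,   5],
     [1/3, 1,   2],
     [1/5, 1/2, 1]]
w, cr = ahp_weights(A)
print("weights:", np.round(w, 3), "| consistency ratio:", round(cr, 3))
```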
Step 4: Computing the scores of the alternatives. This step consists of deriving the relative priorities (preferences) of the alternatives with respect to each criterion. For this purpose, a pairwise comparison of all the alternatives against each criterion included in the decision-making model should be carried out. The matrix of option scores is an $m \times n$ real matrix S, where each entry $s_{ij}$ represents the score of the ith option with respect to the jth criterion. To derive these scores, a pairwise comparison matrix $B^{(j)}$ is first built for each of the $n$ criteria, $j = 1, \dots, n$. $B^{(j)}$ is an $m \times m$ real matrix, where $m$ is the number of options (alternatives) evaluated. Each entry $b_{ih}^{(j)}$ of the matrix $B^{(j)}$ represents the evaluation of the ith option compared to the hth option with respect to the jth criterion. If $b_{ih}^{(j)} > 1$, then the ith option is better than the hth option, while if $b_{ih}^{(j)} < 1$, then the ith option is worse than the hth option. If two options are rated as equivalent with respect to the jth criterion, then the entry $b_{ih}^{(j)}$ equals 1. The entries $b_{ih}^{(j)}$ and $b_{hi}^{(j)}$ meet the following constraint:

$$b_{ih}^{(j)} \times b_{hi}^{(j)} = 1$$

and $b_{ii}^{(j)} = 1$ for all $i$.
The same two-step procedure used for the pairwise comparison matrix A should then be applied to each matrix $B^{(j)}$. To be more precise, each entry of the matrix is divided by the sum of the entries in the same column, and the entries on each row are then averaged, thus obtaining the score vectors $s^{(j)}$, $j = 1, \dots, n$. The vector $s^{(j)}$ contains the scores of the evaluated options with respect to the jth criterion. Finally, the score matrix S is obtained as

$$S = \left[ s^{(1)} \cdots s^{(n)} \right]$$

i.e., the jth column of S corresponds to $s^{(j)}$.
Once the weight vector w and the score matrix S have been computed with Equations (6) and (9), respectively, the AHP obtains a vector v of global scores by multiplying S and w. The ith entry $v_i$ of v represents the global score assigned by the AHP to the ith option.
The pairwise comparisons of the alternatives shall be performed for each criterion in each hierarchy of each perspective—benefits, opportunities, costs and risks—yielding a vector v for each merit B, O, C and R.
At the end of this phase, the scores $B_i$, $O_i$, $C_i$ and $R_i$ are obtained, which represent the synthetic results of alternative i under merits B, O, C and R. The best alternative is expected to get the highest priority under the Benefits and Opportunities merits, while the worst alternative is expected to get the highest priority under the Costs and Risks perspectives.
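A minimal numerical sketch of this computation within a single merit is given below; the score matrix S and the weight vector w are hypothetical, with each column of S and the vector w summing to 1 as required by the procedure above.

```python
# Illustrative sketch (Python): within a single merit, the global scores are obtained as v = S w.
import numpy as np

S = np.array([[0.55, 0.30, 0.40],   # scores of alternative 1 with respect to the criteria
              [0.25, 0.45, 0.35],   # alternative 2
              [0.20, 0.25, 0.25]])  # alternative 3
w = np.array([0.65, 0.23, 0.12])    # criteria weights from the previous step
v = S @ w                           # global score of each alternative under this merit
print(np.round(v, 3))
```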
Step 5: Ranking of the alternatives. Once the alternatives have been prioritized with respect to B, O, C and R in their respective hierarchies (including criteria and possible sub-criteria), their scores under each merit are combined using suitable formulas to compute a final global score for each alternative. Five formulas (multiplicative, additive, probabilistic additive, subtractive and multiplicative priority powers) were suggested by [81] as approaches to calculate the overall priorities of the alternatives by combining the scores $(B_i, O_i, C_i, R_i)$ of each alternative under each merit with the corresponding priorities (b, o, c, r). For further details, the reader is referred to [81].
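As an illustration, the sketch below implements two of these combinations in the forms commonly reported in the BOCR literature (the exact definitions are given in [81]; the specific forms used here are an assumption): an additive combination using normalized reciprocals of costs and risks, and a multiplicative priority-powers combination. All merit scores and priorities are hypothetical.

```python
# Illustrative sketch (Python) of two BOCR aggregation formulas (assumed forms; see [81]).
import numpy as np

def additive(B, O, C, R, b=0.25, o=0.25, c=0.25, r=0.25):
    inv_c = (1 / C) / (1 / C).sum()   # normalized reciprocal: low cost -> high score
    inv_r = (1 / R) / (1 / R).sum()   # normalized reciprocal: low risk -> high score
    return b * B + o * O + c * inv_c + r * inv_r

def multiplicative_powers(B, O, C, R, b=0.25, o=0.25, c=0.25, r=0.25):
    return (B ** b) * (O ** o) / ((C ** c) * (R ** r))

B = np.array([0.25, 0.35, 0.40])
O = np.array([0.30, 0.30, 0.40])
C = np.array([0.45, 0.30, 0.25])    # high score = more costly alternative
R = np.array([0.40, 0.35, 0.25])    # high score = riskier alternative
print("additive:              ", np.round(additive(B, O, C, R), 3))
print("multiplicative powers: ", np.round(multiplicative_powers(B, O, C, R), 3))
```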
Step 6: Sensitivity analysis. The overall priorities (global scores of the alternatives) are heavily influenced by the weights given to the respective criteria. It is useful to perform a “what-if” analysis to see how the final results would change if the weights of the criteria were different. This process is called sensitivity analysis and constitutes the sixth step of the proposed methodology. Sensitivity analysis makes it possible to evaluate the robustness of the original ranking and to identify the main drivers that influenced it to the highest extent. This is an important part of the process and, in general, no final decision should be made without performing a sensitivity analysis. When embodying the BOCR analysis in the AHP approach, changes in the criteria weights can be motivated, for instance, by a variation of the relative importance of the four perspectives b, o, c, r. Moreover, a different formula can be used to compute the overall performance of each candidate software package [23]. The implementation of the BOCR analysis also makes it possible to establish priorities among the factors and to carry out sensitivity analyses that evaluate the robustness of the results or the changes in the ranking of the alternatives generated by a variation in the relative importance of the positive/negative aspects of the problem.

4.5.2. The BWM Approach

The BWM approach, proposed by [24], includes eight ordered steps to select the best alternative (a simulation software package in the case under examination) from a short list of possible candidates. As the BWM is an MCDM evaluation method, some steps are the same as those of the AHP methodology, which is currently the most widely used MCDM approach.
Figure 7 briefly summarizes the steps to be followed to implement the BWM approach for simulation software evaluation and selection, highlighting the inputs and outputs of each step.
Step 1: Determining a set of decision criteria. For this description, see “Step 1” of methodology described above, with particular attention to Appendix B.
Step 2: Determining the best and the worst criteria. The description of this step is taken from [24]. The decision-maker identifies the best and the worst criteria in general. This should be done for the first-level criteria (general characteristics, visual aspects, logical aspects, etc.) and for each group of second-level criteria (sub-criteria). If more than one criterion is considered to be the best or the worst, one can choose arbitrarily among them.
Step 3: Determining the preference of the best criterion against all the others. This step involves identifying the preference of the best criterion (B) over all the other criteria using a 9-point scale (numbers between 1 and 9, where 1 = B is equally important to j; 9 = B is extremely more important than j) [80]. The resulting Best-to-Others vector would be:
$$A_B = (a_{B1}, a_{B2}, \dots, a_{Bn})$$

where $a_{Bj}$ indicates the preference of the best criterion B over a generic criterion $j$. It is clear that $a_{BB} = 1$.
Step 4: Determining the preference of all the criteria against the worst. This step determines the preference of all the criteria over the worst criterion (W) using a 9-point scale (numbers between 1 and 9, where 1 = j is equally important to W; 9 = j is extremely more important than W) [80]. The resulting Others-to-Worst vector would be:
$$A_W = (a_{1W}, a_{2W}, \dots, a_{nW})^T$$

where $a_{jW}$ indicates the preference of criterion $j$ over the worst criterion W. It is clear that $a_{WW} = 1$.
Step 5: Finding the optimal weights of the criteria. The optimal weights of the criteria are those for which, for each pair $w_B/w_j$ and $w_j/w_W$, we have $w_B/w_j = a_{Bj}$ and $w_j/w_W = a_{jW}$, where $w_B$ is the weight of the best criterion, $w_j$ the weight of a generic criterion $j$, and $w_W$ the weight of the worst criterion. To satisfy these conditions for all $j$, a solution should be found in which the maximum absolute differences $\left| \frac{w_B}{w_j} - a_{Bj} \right|$ and $\left| \frac{w_j}{w_W} - a_{jW} \right|$ are minimized for all $j$. Considering the non-negativity and sum conditions for the weights, the following problem results:

$$\min \max_j \left\{ \left| \frac{w_B}{w_j} - a_{Bj} \right|, \left| \frac{w_j}{w_W} - a_{jW} \right| \right\}$$

subject to

$$\sum_j w_j = 1, \qquad w_j \geq 0 \ \text{for all } j$$
After solving problem (12) with the procedure described in [24], the optimal weights $(w_1^*, w_2^*, \dots, w_n^*)$ and $\varepsilon^*$ are obtained. For MCDM problems with more than one level, the weights of the different levels should be identified by following the BWM steps, after which the weights of the different levels can be multiplied to determine the global weights.
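The sketch below is one possible way to obtain the weights numerically: it feeds the min-max problem (12), in its original non-linear form, to a general-purpose solver (scipy, SLSQP). This is only an illustrative assumption; the solution procedure of [24] or the linear reformulation of the BWM can be used instead. The comparison vectors are hypothetical.

```python
# Illustrative sketch (Python): solving the min-max problem (12) with a general-purpose solver.
import numpy as np
from scipy.optimize import minimize

def bwm_weights(a_B, a_W, best, worst):
    """a_B: Best-to-Others vector, a_W: Others-to-Worst vector, best/worst: criterion indices."""
    n = len(a_B)
    objective = lambda x: x[n]                      # minimize the auxiliary variable xi
    cons = [{"type": "eq", "fun": lambda x: x[:n].sum() - 1.0}]
    for j in range(n):
        # |w_B / w_j - a_Bj| <= xi   and   |w_j / w_W - a_jW| <= xi
        cons.append({"type": "ineq", "fun": lambda x, j=j: x[n] - abs(x[best] / x[j] - a_B[j])})
        cons.append({"type": "ineq", "fun": lambda x, j=j: x[n] - abs(x[j] / x[worst] - a_W[j])})
    x0 = np.append(np.full(n, 1.0 / n), 0.0)        # uniform weights as starting point
    bounds = [(1e-6, 1.0)] * n + [(0.0, None)]
    res = minimize(objective, x0, bounds=bounds, constraints=cons, method="SLSQP")
    return res.x[:n], res.x[n]                      # optimal weights and epsilon*

a_B = np.array([1.0, 2.0, 4.0, 8.0])   # preferences of the best criterion (index 0) over all
a_W = np.array([8.0, 4.0, 2.0, 1.0])   # preferences of all criteria over the worst (index 3)
w, eps = bwm_weights(a_B, a_W, best=0, worst=3)
print("weights:", np.round(w, 3), "| epsilon*:", round(eps, 3))
```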
Step 6: Comparing software alternatives against each criterion using the aforementioned procedure. According to [24], “In some decision-making problems, we have an alternative value i with respect to criterion j ($p_{ij}$). For instance, think of a car selection problem where fuel consumption is a criterion, and we have information about the fuel consumption of all the alternatives. In some decision-making problems, however, values $p_{ij}$ are not available. For example, think of the same car selection problem where colour is a criterion and there are cars with different colours (not values). In case of the latter problem, where values $p_{ij}$ are not available, the aforementioned procedure is also carried out for the alternatives (comparing alternatives against each criterion) to find $p_{ij}$ (the weight of alternative i with respect to criterion j)”. This is our case. Each software alternative should be evaluated with respect to each criterion to determine the value $p_{ij}$.
Step 7: Calculating the overall scores of the alternatives. The overall scores of the alternatives are calculated as

$$V_i = \sum_{j=1}^{n} w_j p_{ij}$$
Step 8: Ranking of the alternatives. The ranking of the alternatives is accomplished by ordering the global scores in decreasing order. Sorting the values of $V_i$ for all $i$, the best alternative is identified.
At the end of this step, the decision-maker should be able to select the most suitable simulation software for the specific application.
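A minimal sketch of Steps 7 and 8 with hypothetical global weights and $p_{ij}$ values:

```python
# Illustrative sketch (Python): overall scores V_i = sum_j w_j * p_ij and the resulting ranking.
import numpy as np

w = np.array([0.50, 0.30, 0.20])        # global criteria weights from the BWM
P = np.array([[0.20, 0.45, 0.35],       # p_ij: weight of alternative i w.r.t. criterion j
              [0.50, 0.30, 0.40],
              [0.30, 0.25, 0.25]])
V = P @ w                               # overall score of each alternative
ranking = np.argsort(-V) + 1            # alternatives sorted by decreasing V_i (1-based)
print("scores:", np.round(V, 3), "| ranking:", ranking)
```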
Step 9: Sensitivity analysis. Sensitivity analysis is a powerful tool to check the robustness of the model and eliminate bias in data collection and analysis [82]. In order to execute the sensitivity analysis, the weight of the first-level criterion with the highest weight is varied from 0.1 to 0.9 and, subsequently, the weights of all the other first-level criteria are varied accordingly [83]. The next step is to use these new main criteria weights in the BWM methodology again to calculate a new ranking of the alternatives under these different conditions. The results are then compared and analysed.
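A possible implementation of this scenario analysis is sketched below, under the assumption (not stated explicitly above) that the remaining weights are rescaled proportionally so that all weights still sum to 1; the input values are hypothetical.

```python
# Illustrative sketch (Python): vary the top first-level weight from 0.1 to 0.9 and
# recompute the ranking for each scenario (proportional rescaling of the other weights assumed).
import numpy as np

w = np.array([0.50, 0.30, 0.20])        # criterion 0 holds the highest weight
P = np.array([[0.20, 0.45, 0.35],
              [0.50, 0.30, 0.40],
              [0.30, 0.25, 0.25]])      # hypothetical p_ij values

for w_top in np.arange(0.1, 1.0, 0.1):
    rest = w[1:] * (1 - w_top) / w[1:].sum()     # proportional rescaling of the other weights
    w_scenario = np.concatenate(([w_top], rest))
    V = P @ w_scenario
    print(f"w_top = {w_top:.1f} -> ranking: {np.argsort(-V) + 1}")
```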

4.6. Candidate Software Packages Ranking

At the end of the previous stage, the selection methodologies provide as a result a ranking of the candidate simulation software packages, ordered according to the score assigned to them by the procedure. The “best” software will not necessarily be the optimum one, but it will emerge as the best trade-off between all the criteria considered, depending on the importance given to them. The sensitivity analysis will also highlight possible changes in the ranking of the candidate software packages due to possible changes in the weights of the evaluation criteria and/or in the relative importance of the positive/negative aspects of the problem.

4.7. Best Software Solution

Once the decision-maker has selected the most suitable software for the specific application using one of the methodologies described above, the next step is to negotiate an acceptable contract with the software vendor by means of direct contacts with the supplier. The contract should specify which products and services will be provided, where and when they will be used, how the licence can be transferred to other parties, and how long the product will be used. Dates and obligations should be specified precisely to avoid any future misunderstandings. Where a suitable agreement cannot be achieved, or an adequate level of support cannot be ensured, the chosen software (i.e., the top-ranked alternative) will be abandoned and the second-ranked alternative will be examined on the basis of the results of the quantitative evaluation. Basically, this first means going back to the previous stage, to determine a different software product, and then returning to this stage to reach an acceptable agreement with the (new) software supplier. In general, the purchase of the simulation software will be finalized if an acceptable agreement can be reached.

4.8. Implementing the Solution

At this stage, the software package is finally purchased. The next step for the company is to set up training activities for the internal staff on the usage of the software. Once the staff have become proficient in using the simulation software, the company can proceed with the implementation of the software solution by conducting tests and pilot activities followed by a phased rollout. After the implementation of the solution, the company will benefit from all the advantages deriving from the adoption of a simulation software package and can explore all its possible usages.

4.9. Maintaining and Improving the Solution

When the solution has been implemented, the company will need to maintain the tool and ensure the possibility of updating it to accommodate new needs or possibilities, which could bring improvement or revisions to the selected software.

5. Case Study

This section details the full application of the framework proposed in Section 4, for selecting the best simulation software in a real case. The aim of the application is to determine the most suitable simulation software to be used for Participatory Design, a well-established approach also known as co-operative design and frequently used in Scandinavia [84,85]; according to this approach, all stakeholders, and therefore also the end-users, take an active role in co-designing solutions for themselves, to help ensure that the end product meets the needs of its intended user base.
The setting of the application is a mid-sized enterprise (referred to as Company A), located in the south of Italy, whose main business is the production of machined components, 3D printing and additive solutions for the aerospace and automotive industries. The entire evaluation phase was carried out in close contact with the company’s managers, who acted as the key decision-makers for the application in question.

5.1. Stage 1. Self-Assessment of the Digital Readiness of the Company Using Maturity Model

The first stage in the implementation of the proposed procedure consists of evaluating the current maturity level of the company, so as to judge its suitability for the implementation and usage of a simulation software package. To this end, the questionnaire shown in Appendix A and described earlier in the paper was used. Responses were obtained through two meetings and some interviews with the company’s representatives, which took a couple of days. Applying the computational procedure (Equations (1) and (2)) to the responses obtained, the targeted company scored 57.00 in Process (Defined maturity level), 55.00 in Technology (Defined maturity level), 58.00 in Organization (Defined maturity level) and 32.00 in People (Repetitive maturity level). The overall score was therefore 50.5, which classified the company at level 2, but very close to level 3. Managers became aware of the need to improve people management in order to enhance the digital readiness of Company A.
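For clarity, the overall score follows directly from Equation (2) applied to the four dimension scores reported above:

$$FMML = \frac{57.00 + 55.00 + 58.00 + 32.00}{4} = 50.5$$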

5.2. Stages 2 and 3. Core Problem, General Goals and Requirements

For the case under examination, the core problem is to identify the most suitable simulation software tool for Participatory Design. The general goals of the software were well known to the company’s managers, so this step simply consisted in formalising them and required little time. More precisely, the company intended to select a tool with: (1) good performance in simulating manufacturing scenarios; (2) ease of use and of understanding; (3) excellent graphics capabilities; and (4) a good set of animations. These characteristics form the basis for the effectiveness of the software in recreating daily cases with a sufficient level of detail. Indeed, the aim of Participatory Design is to directly involve workers and stakeholders; hence, the level of detail required could be very high. At the same time, the desired software should allow quick model changes and analysis of the effects of these changes, which means that a high simulation speed is desirable. Furthermore, it should allow the import of CAD files of the original machines and of background images of the layout, in order to represent the real process and the real workplace.

5.3. Stage 4. Preliminary Screening of the Simulation Software Packages on the Market

As per the previous step, managers of Company A had already made a market analysis for identifying a possible set of potential candidates and found three software packages, called S1, S2, and S3 for the sake of confidentiality. Each alternative reflects a particular class of simulation software packages: S2 is a discrete-event simulator, S3 is an agent-based simulator and S1 has a continuous engine.

5.4. Stage 5. Evaluation and Selection of the Best Software

5.4.1. The Integrated BOCR-AHP Methodology

The decision criteria set (Step 1—Section 4.5.1) and the BOCR structure (Step 2—Section 4.5.1) were utilized for this case. Then, in order to compute the criteria weights, the pairwise comparison matrices of the criteria under each merit (B, O, C, R) were created using the intensity scale proposed by [80]; this required involving the company’s managers in a meeting lasting approximately three hours. The computational steps were instead carried out by the authors; in particular, the pairwise comparison matrices were used to calculate the vectors of the weights of criteria and sub-criteria under each merit, and the consistency of the judgements was checked. The next step was to derive the relative priorities (preferences) of the alternatives with respect to each criterion and sub-criterion. To this end, the pairwise comparison matrices of the alternatives for each sub-criterion were first determined using the Saaty scale. These matrices were used to calculate the scores of the alternatives for each sub-criterion under each merit, using the procedure described previously; again, the consistency of the judgements was checked. Then, the score vector of the alternatives for the remaining criteria was determined. Finally, the score of the alternatives under each merit, Bi, Oi, Ci and Ri, was calculated; Table 5 summarizes the outcomes of the computation.
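As an illustration of the computations performed at this step, the sketch below derives the priority vector of a pairwise comparison matrix through its principal eigenvector and checks the consistency ratio against Saaty’s random index [80]; the example matrix is hypothetical and does not reproduce the managers’ actual judgements.

```python
import numpy as np

# Saaty's random consistency index for matrices of order 1..10
RANDOM_INDEX = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12,
                6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45, 10: 1.49}

def ahp_priorities(A: np.ndarray):
    """Return the normalized principal eigenvector (priority weights)
    and the consistency ratio CR of a reciprocal pairwise matrix A."""
    n = A.shape[0]
    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)              # principal eigenvalue
    w = np.abs(eigvecs[:, k].real)
    w = w / w.sum()                          # priority vector
    lam_max = eigvals[k].real
    ci = (lam_max - n) / (n - 1)             # consistency index
    cr = ci / RANDOM_INDEX[n] if RANDOM_INDEX[n] else 0.0
    return w, cr

# Hypothetical 3x3 comparison of the alternatives under one sub-criterion
A = np.array([[1.0, 3.0, 1/5],
              [1/3, 1.0, 1/7],
              [5.0, 7.0, 1.0]])
w, cr = ahp_priorities(A)
print(np.round(w, 3), round(cr, 3))  # judgements are acceptable when CR < 0.10
```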
In order to rank the alternatives, their overall score was computed by combining the outcomes obtained in each perspective (Bi, Oi, Ci and Ri), following a multiplicative approach. By setting the same priority for each merit (b = o = c = r = 0.25), the global scores shown in Table 6 were obtained.
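A minimal sketch of this combination step is reported below; with equal merit priorities, the multiplicative form corresponds (up to a monotone transformation) to the ratio BO/(CR), whose values reproduce the global scores of Table 6.

```python
# Sketch of the multiplicative BOCR combination (hypothetical helper, not
# the authors' spreadsheet). The merit scores are those of Table 5.
scores = {            # alternative: (B, O, C, R)
    "S1": (0.491, 0.273, 0.282, 0.651),
    "S2": (0.233, 0.296, 0.507, 0.134),
    "S3": (0.276, 0.437, 0.209, 0.215),
}
b = o = c = r = 0.25  # equal priorities of the four merits

for alt, (B, O, C, R) in scores.items():
    weighted = (B**b * O**o) / (C**c * R**r)  # multiplicative form with merit weights
    plain = (B * O) / (C * R)                 # equal-weight case; matches Table 6
    print(f"{alt}: weighted={weighted:.3f}, BO/CR={plain:.3f}")
# Both expressions rank the alternatives identically: S3 > S2 > S1.
```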
According to the proposed methodology, the most suitable simulation software for Participatory Design turns out to be S3. The second-ranked alternative is S2, while S1 got the lowest score.
A sensitivity analysis was carried out on the results obtained, to investigate possible changes in the ranking of the alternatives if formulae other than the multiplicative approach were used to combine the scores of the alternatives under each perspective (Bi, Oi, Ci, Ri). Moreover, four alternative scenarios were considered as far as the weights of the perspectives are concerned; more precisely, in each scenario one of the perspectives (at a time) was assigned double the importance of the remaining ones (i.e., 0.4 vs. 0.2 each). By combining these weighting scenarios with the different approaches that can be used for the computation of the scores, 25 scenarios were obtained overall for the sensitivity analysis. The corresponding results, in terms of changes in the ranking of the candidate software packages, are graphically shown in Figure 8.
In Figure 8, the first word of each scenario label indicates the perspective assigned the highest relative importance (benefit, cost, etc.), while “base” denotes the situation where the four perspectives are assigned the same importance; the remaining words describe the approach used for computing the scores of the alternatives. Looking at the scores, in the base-multiplicative scenario (which led to the results in Table 6), the ranking of the candidate software packages is S3 > S2 > S1. In general, this ranking is quite robust, as it is observed in most of the scenarios considered (18 scenarios). In six scenarios the ranking is S3 > S1 > S2, while only in the risk-additive scenario is the ranking S2 > S3 > S1. Overall, apart from this latter scenario, S3 is always the top-ranked alternative.
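The sensitivity analysis can be reproduced along the lines of the following sketch; the five combination formulas listed are those commonly adopted for BOCR analyses in the literature [81] and are assumed here to correspond to the approaches mentioned above.

```python
import numpy as np

merits = np.array([  # rows: S1, S2, S3; columns: B, O, C, R (Table 5)
    [0.491, 0.273, 0.282, 0.651],
    [0.233, 0.296, 0.507, 0.134],
    [0.276, 0.437, 0.209, 0.215],
])

def normalize(v):
    return v / v.sum()

def combine(B, O, C, R, b, o, c, r, how):
    """Combine merit scores with formulas commonly used for BOCR analyses [81]."""
    if how == "multiplicative":
        return B * O / (C * R)
    if how == "mult-priorities":
        return B**b * O**o / (C**c * R**r)
    if how == "additive":
        return b*B + o*O + c*normalize(1/C) + r*normalize(1/R)
    if how == "prob-additive":
        return b*B + o*O + c*(1 - C) + r*(1 - R)
    if how == "subtractive":
        return b*B + o*O - c*C - r*R
    raise ValueError(how)

weight_scenarios = {            # base case plus one merit doubled at a time
    "base":        (0.25, 0.25, 0.25, 0.25),
    "benefit":     (0.40, 0.20, 0.20, 0.20),
    "opportunity": (0.20, 0.40, 0.20, 0.20),
    "cost":        (0.20, 0.20, 0.40, 0.20),
    "risk":        (0.20, 0.20, 0.20, 0.40),
}

B, O, C, R = merits.T
for name, (b, o, c, r) in weight_scenarios.items():
    for how in ("additive", "prob-additive", "subtractive",
                "mult-priorities", "multiplicative"):
        order = np.argsort(-combine(B, O, C, R, b, o, c, r, how)) + 1
        print(name, how, "ranking:", [f"S{i}" for i in order])  # 25 scenarios overall
```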

5.4.2. The BWM Implementation

Pairwise comparisons of the software packages for each criterion are shown in Table 7. Aggregation based on the simple average was used to determine the overall weights of the criteria and sub-criteria.
As can be seen from Table 7, in the first-level criteria set, Ease of use is identified as the best criterion and Vendor as the worst. Moreover, within the General Characteristics group, the best criterion is Maximum simulation speed, while the worst is Supported languages. As for the Visual Aspects category, the best criterion is 3D animation and the worst is Graphical background images. Finally, in the Logical Aspects group, the best criterion refers to Coding, while the worst refers to Queueing rules.
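To illustrate the weight-derivation step, the sketch below solves the linear variant of the best-worst model [24] by linear programming; the comparison vectors and the criteria group are hypothetical examples and do not reproduce the managers’ judgements.

```python
import numpy as np
from scipy.optimize import linprog

def bwm_weights(best_to_others, others_to_worst, best_idx, worst_idx):
    """Linear best-worst model: minimise xi subject to
    |w_B - a_Bj * w_j| <= xi, |w_j - a_jW * w_W| <= xi, sum(w) = 1, w >= 0."""
    n = len(best_to_others)
    c = np.zeros(n + 1); c[-1] = 1.0          # decision variables [w_1..w_n, xi]
    A_ub, b_ub = [], []
    for j in range(n):
        e1 = np.zeros(n); e1[best_idx] += 1.0; e1[j] -= best_to_others[j]
        e2 = np.zeros(n); e2[j] += 1.0; e2[worst_idx] -= others_to_worst[j]
        for expr in (e1, e2):                  # linearise the absolute values
            A_ub.append(np.append(expr, -1.0));  b_ub.append(0.0)
            A_ub.append(np.append(-expr, -1.0)); b_ub.append(0.0)
    A_eq = [np.append(np.ones(n), 0.0)]; b_eq = [1.0]   # weights sum to one
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * (n + 1), method="highs")
    return res.x[:n], res.x[-1]               # weights and consistency indicator xi*

# Hypothetical group of 4 criteria: criterion 0 is the best, criterion 3 the worst
best_to_others = [1, 2, 4, 8]    # preference of the best criterion over each criterion
others_to_worst = [8, 4, 2, 1]   # preference of each criterion over the worst criterion
w, xi = bwm_weights(best_to_others, others_to_worst, best_idx=0, worst_idx=3)
print(np.round(w, 3), round(xi, 3))  # fully consistent judgements give xi* = 0
```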
At the end of the process, the following global scores were obtained:
(S1, S2, S3) = (0.315, 0.283, 0.401)
According to the BWM, the most suitable simulation software for Participatory Design is S3. Finally, a sensitivity analysis was performed to check the robustness of the model. To this end, the weight of the best first-level criterion, Ease of Use, was varied from 0.1 to 0.9, and the weights of the remaining first-level criteria were adjusted accordingly so that they still summed to one. A total of ten different runs were performed. Table 8 shows the weights of all the main criteria after each variation in the weight of Ease of Use; the resulting ranking of solutions in the ten scenarios is shown in Table 9.
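The re-weighting scheme underlying Table 8 appears to be a proportional rescaling of the remaining criteria; a small sketch of this assumption, which can be checked against the published values (up to rounding), is reported below.

```python
# Sketch of the weight rescaling assumed for the BWM sensitivity analysis
# (proportional rescaling of the non-varied criteria; not the authors' code).
original = {"GC": 0.126, "VA": 0.126, "LA": 0.076, "IN": 0.126, "OU": 0.076,
            "EU": 0.313, "ST": 0.054, "VE": 0.027, "CO": 0.076}

def rescale(weights, varied="EU", new_value=0.1):
    """Set the varied criterion to new_value and rescale the other weights
    proportionally so that the first-level weights still sum to one."""
    rest = sum(v for k, v in weights.items() if k != varied)
    factor = (1.0 - new_value) / rest
    return {k: (new_value if k == varied else round(v * factor, 3))
            for k, v in weights.items()}

# Approximately reproduces Run1 of Table 8; differences of about 0.001 can
# arise from the rounding of the published weights.
print(rescale(original, new_value=0.1))
```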

5.5. Stage 6. Candidate Ranking

The ranking returned by the application of the BOCR-AHP approach is S3-S2-S1; hence, S3 is the most suitable simulation software. When applying the BWM, the ranking of the alternatives is S3-S1-S2, which confirms the main finding of the previous approach, as S3 again turns out to be the most suitable simulation software (although the second and third positions are swapped).

5.6. Stages 7, 8 and 9. Software Purchase, Implement, Maintain and Improve Solution

These stages form the practical part of the approach and deal with the purchase of the software package, its installation and maintenance at the targeted company. At the time of writing, the purchase and implementation of the selected software at Company A are still ongoing; hence, these stages could not be fully covered by this study and will be analysed in future research.

6. Conclusions

This paper has proposed an original and structured methodology to help the managers of an enterprise in selecting the most suitable manufacturing simulation software package for their needs, taking into account all the positive and negative aspects of such a decision. The methodology embodies a maturity model that is expected to help companies (especially SMEs) understand if they can fully exploit digitalization. Overall, the framework is a structured approach, composed of nine steps that could be customised when needed, depending on the specific application and/or on the industrial problem. In particular, the fifth stage is the true evaluation and selection of the software and is carried out by means of two approaches, namely:
-
the new integrated BOCR-AHP methodology. It consists of six ordered sub-steps and should help select the best software and provide a ranking of alternatives;
-
the BWM approach. It includes eight ordered steps to select the best simulation software from a list of candidates.
In the case under examination, the short list of potential candidates was made of three software packages (S1, S2, and S3). As mentioned before, each alternative reflects a particular class of simulation software packages: S2 is a discrete-event simulator, S3 is an agent-based simulator and S1 has a continuous engine. According to the BOCR-AHP approach, the most suitable simulation software for the problem under examination is S3. The second-ranked alternative is S2, followed by S1, which got the lowest score. The ranking obtained with the BWM confirms the main finding of the previous approach; in particular, S3 still turns out to be the most suitable simulation software. This suggests the robustness of the proposed methodologies.
From a theoretical perspective, this framework has the fundamental advantage of being easy to implement and not requiring significant effort. Indeed, the company’s involvement is needed in the first step of the approach, to answer the questionnaire for the digital readiness self-assessment, and when applying the MCDM methodologies, to gather data and opinions from experts. Moreover, the computational procedure is quite simple, as it can be performed using an Excel spreadsheet.
From a practical perspective, because the analysis is based on a real enterprise, the results are expected to be useful to managers facing similar problems in real scenarios. Moreover, reusable software, frameworks and approaches support the development of Industry 4.0: if similar demands arise, the framework for simulation software selection, or parts of its implementation, can be reused to some extent without being developed again from scratch [85]. Because we have presented an exhaustive case study, the proposed approaches could easily be replicated in other companies whose criteria and sub-criteria for simulation software selection are similar to those presented in this study. The fact that some stages of the evaluation framework can be implemented in Microsoft Excel is also interesting, because this general-purpose software is well known and widespread; this is expected to encourage the application of the framework in practice. Finally, managers can use the proposed framework to monitor the performance of the purchasing process for a simulation software from a continuous improvement perspective (Industry 4.0 era), in which simulation is central to operations management.
A general weakness of this study is that it is hard to evaluate the correctness of the results provided by the framework, as similar methodologies are almost absent from the literature. In addition, the two MCDM approaches suggested for application in the framework have never been applied to the problem of selecting simulation software, thus preventing any direct comparison of the outcomes.
Starting from this study, several future research directions could be undertaken. First, it is important to highlight that, as the hierarchical structure can be updated, criteria and software candidates can be added or removed at any time, meaning that different frameworks could be built starting from that presented in this paper. As a further research direction, the proposed framework could be generalised, with the purpose of applying it to different fields or for the selection of other types of software tools. Moreover, from a technical point of view, this work could be extended by embodying an integrated BOCR-BWM evaluation methodology, which would combine the advantages of the BWM method over AHP, with the advantages of a BOCR analysis.

Author Contributions

T.M. acted as the coordinator of this study and was in charge of the general design of the research activities. D.C. and C.C. developed the framework on the basis of the literature. G.C. developed the first draft of the whole paper. E.B. helped in carrying out the case studies and performed a critical revision of the paper draft. The paper was written together by all of the authors. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Acknowledgments

The authors would like to thank Ralph Riedel and Michael Bojko from Technische Universität Chemnitz for their support and suggestions at the beginning of this research.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

The structure of the questionnaire built for the present approach is shown below.
Table A1. Questionnaire structure.
1. PROCESS
Dimension 1: process—Product data management. Yes/No questions
1.1
Is the product-related data digitally managed in your organization?
1.2
Are the design documents maintained locally (at source) or globally?
1.3
Does your organization maintain a central design repository which can encourage design reuse?
Dimension 1: process—lifecycle. Scale: from 1 to 5 (5 = high; 1 = low)
1.4
How high is the level of standardization of the process?
1.5
Does your organization have defined processes for engineering and manufacturing activities that are documented, standardized and integrated?
1.6
How high is the ability of a process to reconfigure itself after changes have occurred?
1.7
To what extent does the process tend towards continuous improvement?
Dimension 1: process—monitoring and control. Scale: from 1 to 5 (5 = high; 1 = low)
1.8
How often is information tracked and used within a process?
1.9
To what extent are process performances retrieved and used for analysis?
Dimension 1: process—engineering design.
1.10
Does your organization create and maintain the visualizations of the product in graphical forms (2D and 3D)? Yes/No question
1.11
How often are typical digital tools (CAD/CAM/CAE) used for product design in your organization? Scale from 1 to 5 (5 = high; 1 = low)
2. TECHNOLOGY
Dimension 2: technology—automation. Scale: from 1 to 5 (5 = high; 1 = low)
2.1
To what extent is the process automated?
Dimension 2: technology—sensors.
2.2
How do you rate your organization’s current installation of sensors on your plant floor? Scale: from 1 to 5 (5 = high; 1 = low)
2.3
Does your organization follow any process for measuring the criticality of the equipment? Yes/No question
Dimension 2: technology—connectivity.
2.4
What is your organization’s level of enabling information system, applications, tools and infrastructure to ensure end-to-end data collection and sharing? Scale: from 1 to 5 (5 = high; 1 = low)
2.5
To what extent are the systems supporting the processes standardized and integrated? Scale: from 1 to 5 (5 = high; 1 = low)
2.6
Does your organization use the latest modes of communication such as wireless, Bluetooth, mobile, etc. for operations activities? Yes/No question
2.7
Are the systems of record (PLM, ERP, MES, etc.) integrated with other business information and engineering systems? Yes/No question
2.8
Is your organization using service-oriented cloud applications? Yes/No question
2.9
Is your organization using big data applications? Yes/No question
3. ORGANIZATION
Dimension 3: organization—decision-making. Yes/No questions
3.1
Does your organization utilize data trends and patterns to make proactive, real-time decisions to improve operations?
3.2
Is decision-making supported by feedback about the process?
3.3
Does your organization follow a fixed schedule to address maintenance issues?
Dimension 3: organization—strategic planning. Scale: from 1 to 5 (5 = high; 1 = low)
3.4
Is your organization’s strategic planning short-term or long-term focused?
Dimension 3: organization—company.
3.5
Does your organization believe that your competitive strategy depends on digital? Yes/No question
3.6
Does your organization collect data about the way customers use your products? Yes/No question
3.7
How do you assess your company’s privacy and data security? Scale: from 1 to 5 (5 = high; 1 = low)
4. PEOPLE
Dimension 4: people—people management.
4.1
How often does your organization assess digital competency levels? Scale: from 1 to 5 (5 = high; 1 = low)
4.2
Does your organization have development programs in place to upgrade employee/people skills? Yes/No question
4.3
Does your organization implement policies and procedures for capability development? Yes/No question
4.4
On average, how much time per year does your organization allocate to digital skills development? Scale: from 1 to 5 (5 = high; 1 = low)
4.5
How much does your organization invest in targeted digital education and training at all levels of your organization? Scale: from 1 to 5 (5 = high; 1 = low)
4.6
Does your organization have clear and quantifiable goals for measuring the success of your digital strategy? Yes/No question

Appendix B

Table A2 lists the proposed criteria and sub-criteria set for the application, together with their description.
Table A2. Selection and description criteria set.
GENERAL CHARACTERISTICS (C1)
- Compatibility (C1.1): Aspects related to hardware and installation, operating systems supported, computer architecture supported, networked version, multiprocessing capability.
- Integration with other systems (C1.2): Aspects related to portability between different hardware or software platforms. Can the software run external software to perform specialized tasks? Can other software control the software?
- Data security (C1.3): Encryption of simulation models by a password. Encryption standard. User-definable restrictions.
- Maximum simulation speed (C1.4): Maximum speed of a simulation run.
- Efficiency (C1.5): Debugging, on-line error checking and troubleshooting. Backup and recovery.
- Supported languages (C1.6): Number of supported languages.
VISUAL ASPECTS (C2)
- Icons (C2.1): Library of icons. Import icons. Icon editor.
- Graphical background images (C2.2): Possibility to import and/or edit *.dwg, *.dxf, *.bmp files and similar.
- 3D objects (C2.3): 3D objects standard library. Possibility to import and/or edit 3D objects.
- 3D animation (C2.4): Quality of animation during the simulation run (workers carrying parts, machines performing processes, etc.).
- Visual quality (C2.5): Graphical quality of 3D objects and similarity to real objects.
- Virtual reality (C2.6): Experience 3D simulations in virtual reality using a cardboard viewer.
LOGICAL ASPECTS (C3)
- Incorporate merge models (C3.1): Is there a way to combine models to make a complete file?
- Coding (C3.2): Aspects related to coding, such as programming flexibility, access to source code, built-in functions, support for third-party libraries.
- Attributes and variables (C3.3): Possibility to set attributes and variables. Attributes: local values assigned to entities moving through the system. Variables: values available to all entities moving through the system, used to describe its state.
- Routing rules (C3.4): Possibility to send entities to different locations based on prescribed conditions.
- Queuing rules (C3.5): Number of possible queuing rules: First In First Out, Last In First Out, Service In Random Order, Priority, etc.
INPUT (C4)
- Data input (C4.1): Aspects related to input modes, such as interactive input, batch input, data collected automatically or read from a file. Supported input file types. Rejection of illegal inputs.
- Statistical information on input data (C4.2): Determining distributions of raw input data.
OUTPUT (C5)
- Data export (C5.1): Supported output file types, graphics, reports, model execution logs, charts.
- Statistical information on export data (C5.2): Statistical analysis such as distribution fitting, confidence intervals, data mining options, neural networks.
- Export animation (C5.3): Create a movie of the model to view and share.
EASE OF USE (C6)
- Graphical user interface (C6.1): Ease of using the menu-driven interface.
- Graphical model construction (C6.2): Drag and drop objects into the virtual environment.
- Real-time viewing (C6.3): View instantaneous values of variables.
- Mobile application (C6.4): Is there a mobile viewer that makes it easy to share models and to view animation recordings?
SUPPORT AND TRAINING (C7)
- Manual (C7.1): Availability of a manual that explains how to use the tool; quality and clarity of the explanations.
- Training (C7.2): Availability of tutorials.
- Online support (C7.3): Is there an official forum? If so, how active is it?
- Demo version (C7.4): Is there a free demo version? How easy is it to download?
- Updates (C7.5): Frequency of updates.
- Specialized training (C7.6): Training courses. On-site training. Consultancy.
VENDOR (C8)
- Vendor strength (C8.1): Aspects related to credibility, such as how long the company has been trading, company track record, references, supplier reputation and information sources.
COSTS (C9)
- Purchasing cost (C9.1): Cost for a single perpetual license or annual license.
- Implementation cost (C9.2): Installation cost and the cost to become a fluent user of the software.
- Cost of updates (C9.3): Costs involved in keeping the software up-to-date as new versions are released. If there is a new version of the software, can you upgrade without having to purchase a full new license?

References

  1. Pereira, A.C.; Romero, F.C. A review of the meanings and the implications of the Industry 4.0 concept. Procedia Manuf. 2017, 13, 1206–1214. [Google Scholar] [CrossRef]
  2. Ghobakhloo, M. Industry 4.0, digitization, and opportunities for sustainability. J. Clean. Prod. 2020, 252, 119869. [Google Scholar] [CrossRef]
  3. Liao, Y.; Deschamps, F.; Loures, E.D.F.R.; Ramos, L.F.P. Past, present and future of Industry 4.0—A systematic literature review and research agenda proposal. Int. J. Prod. Res. 2017, 8, 1–21. [Google Scholar] [CrossRef]
  4. Moeuf, A.; Pellerin, R.; Lamouri, S.; Tamayo, S.; Barbaray, R. The industrial management of SMEs in the era of Industry 4.0. Int. J. Prod. Res. 2017, 56, 1118–1136. [Google Scholar] [CrossRef] [Green Version]
  5. Rüßmann, M.; Lorenz, M.; Gerbert, P.; Waldner, M.; Justus, J.; Engel, P.; Harnisch, M. Industry 4.0: The Future of Productivity and Growth in Manufacturing Industries; Boston Consulting Group, 2015. Available online: http://www.inovasyon.org/pdf/bcg.perspectives_Industry.4.0_2015 (accessed on 5 March 2020).
  6. Hermann, M.; Pentek, T.; Otto, B. Design principles for industrie 4.0 scenarios. In Proceedings of the 2016 49th Hawaii International Conference on System Sciences (HICSS), Koloa, HI, USA, 5–8 January 2016. [Google Scholar]
  7. Shannon, R.; Johannes, J.D. Systems Simulation: The Art and Science; Prentice-Hall: San Diego, CA, USA, 1975. [Google Scholar]
  8. Ingalls, R. Introduction to simulation. In Proceedings of the 2011 Winter Simulation Conference, Miami, FL, USA, 11–14 December 2011. [Google Scholar]
  9. Chang, V. Presenting cloud business performance for manufacturing organizations. Inf. Syst. Front. 2017, 22, 59–75. [Google Scholar] [CrossRef]
  10. Stock, T.; Seliger, G. Opportunities of sustainable manufacturing in industry 4.0. Procedia CIRP 2016, 40, 536–541. [Google Scholar] [CrossRef] [Green Version]
  11. Tarhan, A.K.; Garousi, V.; Turetken, O.; Söylemez, M.; Garossi, S. Maturity assessment and maturity models in health care: A multivocal literature review. Digit. Health 2020, 6, 2055207620914772. [Google Scholar]
  12. Cusick, J.J. A survey of maturity models from Nolan to DevOps and their applications in process improvement. arXiv 2019, arXiv:1907.01878. [Google Scholar]
  13. Manning, L. Moving from a compliance-based to an integrity-based organizational climate in the food supply chain. Compr. Rev. Food Sci. Food Saf. 2020, 19, 995–1017. [Google Scholar] [CrossRef] [Green Version]
  14. Leyh, C.; Bley, K.; Bay, L. The application of the maturity model SIMMI 4.0 in selected enterprises. In Proceedings of the AMCIS 2017—America’s Conference on Information Systems: A Tradition of Innovation, Boston, MA, USA, 10–12 August 2017. [Google Scholar]
  15. Becker, J.; Knackstedt, R.; Pöppelbuß, J. Developing maturity models for it management—A procedure model and its application. Bus. Inf. Syst. Eng. 2009, 1, 213–222. [Google Scholar] [CrossRef]
  16. Lahrmann, G.; Marx, F.; Mettler, T.; Winter, R.; Wortmann, F. Inductive design of maturity models: applying the Rasch algorithm for design science research. In Proceedings of the 6th International Conference on Service-Oriented Perspectives in Design Science Research (DESRIST 2011), Milwaukee, WI, USA, 5–6 May 2011. [Google Scholar]
  17. Colli, M.; Madsen, O.; Berger, U.; Møller, C.; Wæhrens, B.V.; Bockholt, M.T. Contextualizing the outcome of a maturity assessment for Industry 4.0. IFAC-Pap. 2018, 51, 1347–1352. [Google Scholar] [CrossRef]
  18. Fumagalli, L.; Polenghi, A.; Negri, E.; Roda, I. Framework for simulation software selection. J. Simul. 2019, 13, 286–303. [Google Scholar] [CrossRef]
  19. Zakria, G.; Guan, Z.; Shao, X.; Riaz, Y.; Hameed, U. Selection of simulation software for manufacturing system: Application of analytical hierarchy approach in multi criteria decision making. Adv. Sci. Lett. 2011, 4, 2152–2158. [Google Scholar] [CrossRef]
  20. Lin, J.C.; Hsieh, P.-L. The role of technology readiness in customers’ perception and adoption of self-service technologies. Int. J. Serv. Ind. Manag. 2006, 17, 497–517. [Google Scholar] [CrossRef]
  21. Dorado-Vicente, R.; Gómez-Moreno, A.; Torres-Jiménez, E.; López-Alba, E. An AHP application to select software for engineering education. Comput. Appl. Eng. Educ. 2011, 22, 200–208. [Google Scholar] [CrossRef]
  22. Siksnelyte-Butkiene, I.; Zavadskas, E.K.; Štreimikienė, D. Multi-Criteria Decision-Making (MCDM) for the assessment of renewable energy technologies in a household: A review. Energies 2020, 13, 1164. [Google Scholar] [CrossRef] [Green Version]
  23. Bottani, E.; Centobelli, P.; Murino, T.; Shekarian, E. A QFD-ANP method for supplier selection with benefits, opportunities, costs and risks considerations. Int. J. Inf. Technol. Decis. Mak. 2018, 17, 911–939. [Google Scholar] [CrossRef]
  24. Rezaei, J. Best-worst multi-criteria decision-making method. Omega 2015, 53, 49–57. [Google Scholar] [CrossRef]
  25. Schumacher, A.; Erol, S.; Sihn, W. A maturity model for assessing industry 4.0 readiness and maturity of manufacturing enterprises. Procedia CIRP 2016, 52, 161–166. [Google Scholar] [CrossRef]
  26. Ganzarain, J.; Errasti, N. Three stage maturity model in SME’s toward industry 4.0. J. Ind. Eng. Manag. 2016, 9, 1119. [Google Scholar] [CrossRef]
  27. Weber, C.; Königsberger, J.; Kassner, L.; Mitschang, B. M2DDM—A maturity model for data-driven manufacturing. Procedia CIRP 2017, 63, 173–178. [Google Scholar] [CrossRef]
  28. Gökalp, E.; Şener, U.; Eren, P.E. Development of an assessment model for industry 4.0: Industry 4.0-MM. In Proceedings of the International Conference on Software Process Improvement and Capability Determination (SPICE), Palma de Mallorca, Spain, 4–5 October 2017. [Google Scholar]
  29. Gracel, J.; Łebkowski, P. The concept of industry 4.0-related manufacturing technology maturity model (ManuTech Maturity Model—MTMM). Decis. Mak. Manuf. Serv. 2018, 12, 17–31. [Google Scholar] [CrossRef]
  30. Guimarães, A.M.C.; Leal, J.E.; Mendes, P. Discrete-event simulation software selection for manufacturing based on the maturity model. Comput. Ind. 2018, 103, 14–27. [Google Scholar] [CrossRef]
  31. De Carolis, A.; Macchi, M.; Negri, E.; Terzi, S. A maturity model for assessing the digital readiness of manufacturing companies. In Proceedings of the IFIP Advances in Information and Communication Technology (APMS), Hamburg, Germany, 3–7 September 2017. [Google Scholar]
  32. Grant, F. Simulation in designing and scheduling manufacturing systems. In Design and Analysis of Integrated Manufacturing Systems; National Academy Press: Washington, DC, USA, 1988; pp. 134–147. [Google Scholar]
  33. Law, A.M.; Haider, S.W. Selecting simulation software for manufacturing applications. In Proceedings of the 21st conference on Winter Simulation Conference (WSC ‘89), Washington, DC, USA, 4–6 December 1989; pp. 29–32. [Google Scholar] [CrossRef] [Green Version]
  34. Banks, J.; Aviles, E.; McLaughlin, J.R.; Yuan, R.C. The simulator: New member of the simulation family. Interfaces 1991, 21, 76–86. [Google Scholar] [CrossRef]
  35. Banks, J. Selecting simulation software. In Proceedings of the 1991 Winter Simulation Conference (WSC ’91), Phoenix, AZ, USA, 8–11 December 1991. [Google Scholar]
  36. Mackulak, G.T.; Cochran, J.K.; Savory, P.A. Ascertaining important features for industrial simulation environments. Simulation 1994, 63, 211–221. [Google Scholar] [CrossRef] [Green Version]
  37. Davis, L.; Williams, G. Evaluating and selecting simulation software using the analytic hierarchy process. Integr. Manuf. Syst. 1994, 5, 23–32. [Google Scholar] [CrossRef]
  38. Hlupic, V.; Mann, A. Simselect: a system for simulation software selection. In Proceedings of the 1995 Winter Simulation Conference, Arlington, VA, USA, 3–6 December 1995. [Google Scholar]
  39. Kuljis, J. HCI and simulation packages. In Proceedings of the 1996 Winter Simulation Conference, Coronado, CA, USA, 8–11 December 1996. [Google Scholar]
  40. Hlupic, V.; Paul, R.J. Methodological approach to manufacturing simulation software selection. Comput. Integr. Manuf. Syst. 1996, 9, 49–55. [Google Scholar] [CrossRef]
  41. Bard, J.; Bergevin, A.; DeSilva, A. Evaluating simulation software for postal service use: technique versus perception. IEEE Trans. Eng. Manag. 1997, 44, 31–42. [Google Scholar] [CrossRef]
  42. Nikoukaran, J.; Hlupic, V.; Paul, R.J. Criteria for simulation software evaluation. In Proceedings of the 1998 Winter Simulation Conference (WSC ’98), Washington, DC, USA, 13–16 December 1998. [Google Scholar]
  43. Hlupic, V.; Paul, R.J. Guidelines for selection of manufacturing simulation software. IIE Trans. 1999, 31, 21–29. [Google Scholar] [CrossRef]
  44. Nikoukaran, J.; Hlupic, V.; Paul, R.J. A hierarchical framework for evaluating simulation software. Simul. Pract. Theory 1999, 7, 219–231. [Google Scholar] [CrossRef]
  45. Hlupic, V.; Irani, Z.; Paul, R.J. Evaluation Framework for Simulation Software. Int. J. Adv. Manuf. Technol. 1999, 15, 366–382. [Google Scholar] [CrossRef]
  46. Tewoldeberhan, T.; Verbraeck, A.; Valentin, E.; Bardonnet, G. An evaluation and selection methodology for discrete-event simulation software. In Proceedings of the 2002 Winter Simulation Conference, San Diego, CA, USA, 8–11 December 2002. [Google Scholar]
  47. Arisha, A.; Baradie, M.E. On selection of simulation software for manufacturing application. In Proceedings of the Nineteenth International Manufacturing Conference (IMC-19), Queen’s University of Belfast, Ireland, 28–30 August 2002; pp. 495–507. [Google Scholar]
  48. Sahay, B.; Gupta, A. Development of software selection criteria for supply chain solutions. Ind. Manag. Data Syst. 2003, 103, 97–110. [Google Scholar] [CrossRef]
  49. Lee, H.-S.; Shen, P.-D.; Chih, W.-L. A fuzzy multiple criteria decision making model for software selection. In Proceedings of the 2004 IEEE International Conference on Fuzzy Systems, Budapest, Hungary, 25–29 July 2004. [Google Scholar]
  50. Cochran, J.K.; Chen, H.-N. Fuzzy multi-criteria selection of object-oriented simulation software for production system analysis. Comput. Oper. Res. 2005, 32, 153–168. [Google Scholar] [CrossRef]
  51. Rincon, G.; Alvarez, M.; Pérez, M.; Hernandez, S. A discrete-event simulation and continuous software evaluation on a systemic quality model: An oil industry case. Inf. Manag. 2005, 42, 1051–1066. [Google Scholar] [CrossRef]
  52. Vuksic, V.B.; Ceric, V.; Hlupic, V. Criteria for the evaluation of business process simulation tools. Interdiscip. J. Inf. Knowl. Manag. 2007, 2, 73–88. [Google Scholar]
  53. Alvarez, M.; Rincón, G.; Hernández, S. Evaluation and selection of discrete-event simulation software for the oil industry. Lat. Am. Appl. Res. 2008, 38, 305–312. [Google Scholar]
  54. Azadeh, M.; Shirkouhi, S.N. Evaluating simulation software using fuzzy analytical hierarchy process. In Proceedings of the 2009 Spring Simulation Multiconference (SpringSim ’09), San Diego, CA, USA, 22–27 March 2009. [Google Scholar]
  55. Gupta, A.; Verma, R.; Singh, K. A critical evaluation and comparison of four manufacturing simulators using analytic hierarchy process. Int. J. Eng. Model. 2009, 22, 35–51. [Google Scholar]
  56. Gupta, A.; Verma, R.; Singh, K. A critical study and comparison of manufacturing simulation softwares using analytic hierarchy process. J. Eng. Sci. Technol. 2010, 5, 108–129. [Google Scholar]
  57. Sawant, V.; Mohite, S.; Patil, R. A decision-making methodology for automated guided vehicle selection problem using a preference selection index method. In Technology Systems and Management. Communications in Computer and Information Science; Shah, K., Lakshmi Gorty, V.R., Phirke, A., Eds.; Springer: Berlin/Heidelberg, Germany, 2011; Volume 145. [Google Scholar]
  58. Ayağ, Z. A combined fuzzy AHP-simulation approach to CAD software selection. Int. J. Gen. Syst. 2010, 39, 731–756. [Google Scholar] [CrossRef]
  59. Jadhav, A.S.; Sonar, R. Framework for evaluation and selection of the software packages: A hybrid knowledge based system approach. J. Syst. Softw. 2011, 84, 1394–1407. [Google Scholar] [CrossRef]
  60. Hincu, D.; Andreica, M. The evaluation and selecting process for simulation software using fuzzy sets. Metal. Int. 2012, 17, 141–144. [Google Scholar]
  61. Pezzotta, G.; Pirola, F.; Rondini, A.; Pinto, R.; Ouertani, M.-Z. Towards a methodology to engineer industrial product-service system—Evidence from power and automation industry. CIRP J. Manuf. Sci. Technol. 2016, 15, 19–32. [Google Scholar] [CrossRef]
  62. Ereeş, S.; Kuruoğlu, E.; Moralı, N. An application of analytical hierarchy process for simulation software selection in education area. Front. Sci. 2013, 3, 66–70. [Google Scholar]
  63. Azadeh, A.; Nazari-Shirkouhi, S.; Samadi, H.; Shirkouhi, A.N. An integrated fuzzy group decision making approach for evaluation and selection of best simulation software packages. Int. J. Ind. Syst. Eng. 2014, 18, 256. [Google Scholar] [CrossRef]
  64. Franceschini, R.; Bisgambiglia, P.-A.; Touraille, L.; Hill, D. A survey of modelling and simulation software frameworks using Discrete Event System Specification. Open Access Ser. Inform. 2014, 43, 40–49. [Google Scholar]
  65. Gupta, A. How to select a simulation software. Int. J. Eng. Res. Dev. 2014, 10, 35–41. [Google Scholar]
  66. Jadric, M.; Cukusic, M.; Bralić, A. Comparison of discrete event simulation tools in an academic environment. Croat. Oper. Res. Rev. 2014, 5, 203–219. [Google Scholar] [CrossRef]
  67. Rohaninejad, M.; Kheirkhah, A.; Fattahi, P.; Vahedi-Nouri, B. A hybrid multi-objective genetic algorithm based on the ELECTRE method for a capacitated flexible job shop scheduling problem. Int. J. Adv. Manuf. Technol. 2014, 77, 51–66. [Google Scholar] [CrossRef]
  68. Dias, L.; Vieira, A.A.; Pereira, G.; Oliveira, J. Discrete simulation software ranking—A top list of the worldwide most popular and used tools. In Proceedings of the 2016 Winter Simulation Conference, Washington, DC, USA, 11–14 December 2016. [Google Scholar]
  69. Rashidi, Z.; Rashidi, Z. Evaluation and ranking of discrete simulation tools. J. Electr. Comput. Eng. Innov. 2016, 4, 69–84. [Google Scholar]
  70. Alomair, Y.; Ahmad, I.; Alghamdi, A.; Alhaznawi, S. Evaluating defense simulation packages using analytic hierarchy process. J. Int. Technol. 2016, 17, 831–838. [Google Scholar]
  71. Ejercito, P.M.; Nebrija, K.G.E.; Feria, R.; Lara-Figueroa, L.L. Traffic simulation software review. In Proceedings of the 2017 International Conference on Information, Intelligence, Systems & Applications (IISA), Larnaka, Cyprus, 27–30 August 2017. [Google Scholar]
  72. Govindan, K.; Shankar, K.M.; Kannan, D. Achieving sustainable development goals through identifying and analyzing barriers to industrial sharing economy: A framework development. Int. J. Prod. Econ. 2020, 227, 107575. [Google Scholar] [CrossRef]
  73. Attri, R.; Grover, S. Decision making over the production system life cycle: MOORA method. Int. J. Syst. Assur. Eng. Manag. 2013, 5, 320–328. [Google Scholar] [CrossRef]
  74. Alomair, Y.; Ahmad, I.; Alghamdi, A. A review of evaluation methods and techniques for simulation packages. Procedia Comput. Sci. 2015, 62, 249–256. [Google Scholar] [CrossRef] [Green Version]
  75. Abdel-Basset, M.; Mohamed, M.; Chang, V.; Smarandache, F. IoT and its impact on the electronics market: A powerful decision support system for helping customers in choosing the best product. Symmetry 2019, 11, 611. [Google Scholar] [CrossRef] [Green Version]
  76. Willaert, P.; Bergh, J.V.D.; Willems, J.; Deschoolmeester, D. The process-oriented organisation: A holistic view developing a framework for business process orientation maturity. In Business Process Management (BPM 2007); Alonso, G., Dadam, P., Rosemann, M., Eds.; Springer: Berlin, Germany, 2007; Volume 4717, pp. 1–15. [Google Scholar] [CrossRef]
  77. Weckenmann, A.; Akkasoglu, G. Methodic design of a customized maturity model for geometrical tolerancing. Procedia CIRP 2013, 10, 119–124. [Google Scholar] [CrossRef] [Green Version]
  78. Crosby, P. Quality is Free: The Art of Making Quality Certain; McGraw-Hill: London, UK, 1979. [Google Scholar]
  79. Wind, Y.; Saaty, T.L. Marketing applications of the analytic hierarchy process. Manag. Sci. 1980, 26, 641–658. [Google Scholar] [CrossRef]
  80. Saaty, T.L. Decision making with the analytic hierarchy process. Int. J. Serv. Sci. 2008, 1, 83. [Google Scholar] [CrossRef] [Green Version]
  81. Wijnmalen, D.J. Analysis of benefits, opportunities, costs, and risks (BOCR) with the AHP–ANP: A critical validation. Math. Comput. Model. 2007, 46, 892–905. [Google Scholar] [CrossRef]
  82. Gupta, H.; Barua, M.K. Supplier selection among SMEs on the basis of their green innovation ability using BWM and fuzzy TOPSIS. J. Clean. Prod. 2017, 152, 242–258. [Google Scholar] [CrossRef]
  83. Gupta, H.; Barua, M.K. A framework to overcome barriers to green innovation in SMEs using BWM and Fuzzy TOPSIS. Sci. Total Environ. 2018, 633, 122–139. [Google Scholar] [CrossRef]
  84. Jensen, P. Can participatory ergonomics become ’the way we do things in this firm’—The Scandinavian approach to participatory ergonomics. Ergonomics 1997, 40, 1078–1087. [Google Scholar] [CrossRef]
  85. Chang, V.; Abdel-Basset, M.; Ramachandran, M. Towards a reuse strategic decision pattern framework—From theories to practices. Inf. Syst. Front. 2019, 21, 27–44. [Google Scholar] [CrossRef] [Green Version]
Figure 1. Scheme of the queries and related results.
Figure 2. Distribution of the number of papers as a function of the publication year.
Figure 3. Frequency of usage of multi-criteria decision-making (MCDM) techniques.
Figure 4. The proposed selection framework.
Figure 5. The analytic hierarchy process integrated with benefits, opportunities, costs, risks analysis.
Figure 6. The developed benefits, opportunities, costs, risks structure of selection criteria.
Figure 7. The best-worst model.
Figure 8. Changes in the ranking.
Table 1. Overview of scientific Industry 4.0 maturity models.
Year | Reference | Model | Structure | Evaluation
2016 | [25] | Industry 4.0 maturity model | Hierarchical with five levels | Nine dimensions: strategy, leadership, customers, products, operations, culture, people, governance, technology
2016 | [26] | Three stages maturity model | Hierarchical with five levels | Three dimensions: envision, enable, enact
2017 | [27] | M2DDM | Hierarchical with five levels | Focused on IT systems, which is the only dimension
2017 | [14] | SIMMI 4.0 | Hierarchical with five levels | Four dimensions: vertical integration, horizontal integration, digital product development, cross-sectional technology
2017 | [28] | Industry 4.0 maturity model | Hierarchical with six levels | Five dimensions: asset management, data governance, application management, process transformation, organizational alignment
2018 | [29] | MTMM | Hierarchical with five levels | Eight dimensions: core technologies, people and culture, knowledge management, real-time integration, infrastructure, strategic awareness and alignment, process excellence, cybersecurity
2018 | [30] | DESS maturity model | Hierarchical with five levels | Six dimensions: knowledge of simulation, process standardization, specialist knowledge, process organization, measurement and evaluation, management programs
2018 | [31] | Digital Readiness Assessment Maturity—DREAMY | Hierarchical with five levels | Four dimensions: process, monitoring and controlling, technology, organization
Table 2. Overview of contributions to literature of selected documents.
PaperEvaluation FrameworkMaturity ModelSelection CriteriaEvaluation TechniquePractical ApplicationComparison Software Characteristics
[32] x
[33] x
[34] x x
[35] x
[36] x
[37] xxx
[38] xxx
[39] x x
[40]x x
[41]x xxx
[42] x
[43]x x
[44]x x
[45]x x
[46]x xx
[47] x
[48] xxx
[49] x
[50] xxx
[51]x xxx
[52] x
[53]x xx
[54] xxx
[55] xxx
[56] xxx
[57] xx
[58] x
[59]x xx
[19] xx
[60] xx
[61] x
[62] xxx
[21] xxx
[63]x xxx
[64] x x
[65] xx
[66] x x
[24] x
[67] xx
[68] xx
[69] xxx
[70] xx
[71] x x
[30] xxxx
[18]x xxx
Table 3. Levels and dimensions of maturity model.
LEVEL: Basic | Repetitive | Defined | Integrated | Optimal
DIMENSION: Process
No standardization
Documents only maintained locally
Minimal information sharing
Limited product feedback
Static reports of operational activity
Isolated attempts of standardization
Information is starting to be shared
Processes are formally described
Implementation of standards
Processes controlled and managed statistically and quantitatively
Real-time analytical data processing
Predictive analytics
Integration into corporate processes
Digital oriented processes
Processes improved by incremental innovations
Cognitive analytics
Quantitative goals
Technology
No historical data
Sensors are not connected
Connected devices
Data localized
Software tunable assets
Data in real-time
Self-optimization
Interaction with ecosystem
Converged technology
Real-time infrastructure
Machine learning
Organization
Ad hoc decision-making
No prediction capabilities
Use trial and error or experience for troubleshooting
Minimal strategic planning
Near-term focused
Measurements are made in the form of manual notes
Some evaluation procedures have started to be defined
Process-driven
Longer-term focused
Quality inspections
Quality maintenance procedures
Policy-driven
Long-term focused
Smart decision-making
Limited enterprise-wide integration
Value-oriented
Strategy iterates rapidly in response to competitive opportunities and threats
Implementation of programs such as QPM, OPP, OPM
People
Ad hoc people management | Policies developed for capability improvement | Standardized people management across organization | Quantitative goals for people management in place | Continuous focus on improving individual competence and workforce motivation
Table 4. Score band for maturity levels.
Maturity Level | Score Band
Basic | 0 ≤ IDML < 20
Repetitive | 20 ≤ IDML < 40
Defined | 40 ≤ IDML < 60
Integrated | 60 ≤ IDML < 80
Optimal | 80 ≤ IDML ≤ 100
Table 5. Score of the alternatives under merits B, O, C and R.
Alternative | Bi | Oi | Ci | Ri
S1 | 0.491 | 0.273 | 0.282 | 0.651
S2 | 0.233 | 0.296 | 0.507 | 0.134
S3 | 0.276 | 0.437 | 0.209 | 0.215
Table 6. BOCR-AHP global scores.
Alternative | Global Score
S1 | 0.730
S2 | 1.015
S3 | 2.684
Table 7. Pairwise comparison of the software packages for each criterion (Note: red = worst criterion; green = best criterion).
Criteria | VC to Others (VC, PS, AL) | PS to Others (VC, PS, AL) | AL to Others (VC, PS, AL)
General Characteristics (GC)
Compatibility111/5111/5551
Integration with other systems11/31/3311311
Data security11/333151/31/51
Maximum simulation speed11/91/591551/51
Efficiency11/51/351331/31
Supported languages11/31/5311/3531
Visual Aspects (VA)
Icons131/51/311/7371
Graphical background images111111111
3D objects1731/711/51/351
3D animation1971/911/71/571
Visual quality1751/711/31/531
Virtual reality1571/5131/71/31
Logical Aspects (LA)
Incorporate-merge models11/31/3311311
Coding111111111
Attributes and variables11/31/3311311
Routing rules1311/311131
Queueing rules111111111
Input (IN)
Data Input111/5111/5551
Statistical information on input data111/3111/3331
Output (OU)
Data export11/31/3311311
Statistical information on export data1331/3111/311
Export animation111111111
Ease of use (EU)
Graphical user interface131/31/311/5351
Graphical model construction111111111
Real-time viewing11/51/5511511
Mobile application1991/9111/911
Support and training (ST)
Manual111111111
Tutorial1151151/51/51
Online support11/7171711/71
Demo version11/31/3311311
Updates111111111
Specialized training111111111
Vendor (VE)
Vendor strength11/91/491551/51
Costs (CO)
Acquisition cost131/31/311/5351
Implementation cost111111111
Cost of updates111111111
Table 8. Variation in the weight values after varying the Ease of Use (EU) weight value.
Run | GC | VA | LA | IN | OU | EU | ST | VE | CO
Original | 0.126 | 0.126 | 0.076 | 0.126 | 0.076 | 0.313 | 0.054 | 0.027 | 0.076
Run1 (0.1) | 0.165 | 0.165 | 0.099 | 0.165 | 0.099 | 0.100 | 0.071 | 0.036 | 0.099
Run2 (0.2) | 0.147 | 0.147 | 0.088 | 0.147 | 0.088 | 0.200 | 0.063 | 0.032 | 0.088
Run3 (0.3) | 0.129 | 0.129 | 0.077 | 0.129 | 0.077 | 0.300 | 0.055 | 0.028 | 0.077
Run4 (0.4) | 0.110 | 0.110 | 0.066 | 0.110 | 0.066 | 0.400 | 0.047 | 0.024 | 0.066
Run5 (0.5) | 0.092 | 0.092 | 0.055 | 0.092 | 0.055 | 0.500 | 0.039 | 0.020 | 0.055
Run6 (0.6) | 0.073 | 0.073 | 0.044 | 0.073 | 0.044 | 0.600 | 0.031 | 0.016 | 0.044
Run7 (0.7) | 0.055 | 0.055 | 0.033 | 0.055 | 0.033 | 0.700 | 0.024 | 0.012 | 0.033
Run8 (0.8) | 0.037 | 0.037 | 0.022 | 0.037 | 0.022 | 0.800 | 0.016 | 0.008 | 0.022
Run9 (0.9) | 0.018 | 0.018 | 0.011 | 0.018 | 0.011 | 0.900 | 0.008 | 0.004 | 0.011
Table 9. Ranking of solutions during sensitivity analysis (criterion EU variation).
Solutions | Original | Run1 | Run2 | Run3 | Run4 | Run5 | Run6 | Run7 | Run8 | Run9
S1 | 0.315 | 0.322 | 0.319 | 0.316 | 0.312 | 0.309 | 0.306 | 0.303 | 0.300 | 0.297
S2 | 0.284 | 0.287 | 0.285 | 0.284 | 0.282 | 0.281 | 0.280 | 0.278 | 0.277 | 0.275
S3 | 0.401 | 0.391 | 0.396 | 0.400 | 0.405 | 0.410 | 0.414 | 0.419 | 0.423 | 0.428
