Implementing a Novel Use of Multicriteria Decision Analysis to Select IIoT Platforms for Smart Manufacturing

Abstract: Industry 4.0 is having a great impact on all smart-manufacturing efforts. It is not a single product but a combination of several technologies, one of them being the Industrial Internet of Things (IIoT). Currently, several companies offer widely varied implementation options, which poses a new challenge to organizations that want to adopt IoT in their processes. This challenge calls for multi-criteria analysis to reach a repeatable and justified decision, which requires a set of alternatives and criteria. This paper proposes a new methodology and a comprehensive set of criteria to help organizations make an informed decision through multi-criteria analysis. We suggest a novel use of PROMETHEE-II with a full worked example, from weight calculation up to IoT platform selection, showing that the methodology is an effective reference for other organizations interested in selecting an IoT platform. The proposed criteria stand out from previous work by including not only technical aspects but also economic and social criteria, providing a full view of the problem analyzed. A case study was used to validate the proposed methodology.


Introduction
Industry 4.0 is having a high impact on all industries. It is not a single product, but is composed of several technologies. Boston Consulting Group has defined nine technological pillars for Industry 4.0: cloud, additive manufacturing, simulation, big data and analytics, autonomous robots, augmented reality, integration of horizontal and vertical systems, cybersecurity, and the Industrial Internet of Things (IIoT) [1]. IIoT has been used not only in the manufacturing industry, but has expanded to other industries such as health, travel and transportation, energy, gas and oil, etc. This is one of the main reasons IIoT is often referred to simply as the Internet of Things (IoT) [2]. IIoT is a key factor that allows factories to act intelligently. By adding sensors and actuators to objects, the object becomes intelligent because it can interact with people and other objects, generate data and transactions, and react to environmental data [3,4]. Cities do not ignore this trend, since in certain countries there are plans to turn cities into smart cities [5].
The decision processes that companies follow should be supported by methods that weigh the pros and cons of the multiple points of view that affect the decision. Over time, researchers and practitioners have developed the techniques that today form the domain of Multiple Criteria Decision Analysis (MCDA).

The Industrial Internet of Things (IIoT) continues to evolve. Due to its intrinsic complexity, it is good practice to consult architectural references. IIoT has five main requirements in general [8]: 1) enable communication and connectivity between devices and data processing; 2) establish a mechanism to manage devices, including tasks such as adding or deleting devices and updating software and configurations; 3) gather all the data produced by the devices and then analyze them to provide a meaningful perspective to companies or users; 4) facilitate scalability to handle a growing number of data pipelines, an increasing flow of data, and an increasing number of devices; 5) protect the data by adding the functions needed to provide privacy and trust between devices and users. Table 1 summarizes the multi-layer architectures found in the literature:

Table 1. Multi-layer IIoT architectures in the literature
3 layers: Devices, Communication and Application [10-12]
4 layers: Devices, Communication, Transport and Application [9,12-16]
5 layers: Devices, Local processing, Communication, Transport and Applications [12]
7 layers: Business, Management, Communication, Processing, Acquisition, User interaction and Security [15,17]
8 layers: Physical devices, Communication, Edge or Fog processing, Data storage, Applications, Collaboration and process, Security [18]

A technical architecture provides great value to users because it can be implemented with different products. It is therefore understandable that several companies offer IIoT platforms that can serve these architectures.
Commercial providers aim to offer flexible options, and consumers are responsible for using each component in the way they consider best. The main commercial players identified are, in alphabetical order: Amazon Web Services, Bosch IoT Suite, Google Cloud Platform, IBM Blue Mix (now Watson IoT), Microsoft Azure IoT and Oracle Integrated Cloud [19]. The leading players identified in 2014 by the Gartner Group were AWS and Microsoft, but in 2018 Google entered the leaders quadrant. IBM is considered for its commercial relevance, although it has become a niche player, along with Oracle. Although Bosch IoT does not appear in the panorama detected by Gartner, we include it because it is used in several industries. These suppliers have similar characteristics among them but different value propositions.

MCDA as a tool to select IIoT Platform
Making a decision poses several problems to individuals. Among them are the integration of heterogeneous data, the uncertainty surrounding a decision, and criteria that usually conflict with each other [7,20]. To carry out an MCDA process, a series of tasks is proposed, based on the three generic steps suggested by [21]: i) identify the objective or goal; ii) select the criteria, parameters, factors or attributes; iii) select the alternatives; iv) associate attributes with the criteria; v) select the weighting method to represent the importance of each criterion; and vi) choose the aggregation method. [21] included an additional step, left out of these proposed tasks, that should nevertheless be considered in the discussion before executing the selected action: understanding and comparing the preferences of the decision maker.
MCDA methods can be classified by the basis of the problem, by type, by category, or by the methods used for the analysis. Figure 2 shows a taxonomy adapted from [22]; the methods included in this taxonomy are not exhaustive. MCDA is a collection of systematic methodologies for the comparison, classification and selection of multiple alternatives, each with multiple attributes, based on an evaluation matrix. It is generally used to detect and quantify the decisions and considerations of interested parties (stakeholders) regarding monetary and non-monetary factors, in order to compare alternative courses of action [7,22]. The major division in MCDA lies in the category of methodologies. The first group considers discrete values with a limited number of known alternatives that involve some compensation or trade-off; this group is called Multiple Attribute Decision Making (MADM). The other group is Multiple Objective Decision Making (MODM), whose decision variables take values within a continuous domain with infinite or very numerous options that satisfy the restrictions, preferences or priorities [20]. There is also a classification according to the way criteria are aggregated: the American school, which aggregates into a single criterion, and the European or French school, which uses outranking methods. A mixture of both schools can be considered in indirect approaches, such as the Pairwise Criterion Comparison Approach (PCCA) methods [23]. Once the available alternatives on the market are identified, a new question arises: which method helps to select the appropriate option? To answer it, a literature review was made looking for a) MCDA methods applied to the selection of IIoT platforms and b) the criteria taken into account.
There is little recent information on this subject in the literature. Table 2 summarizes the work found. The selected methods focus on AHP, TOPSIS, and fuzzy variants of AHP and TOPSIS. Outranking methods were not implemented, but were considered as an option or for future work by some authors [24,25]. The selection of an IIoT platform is not dominated by a single criterion, nor is there a single alternative. [26] considered AWS, Azure, Bosch, IBM Watson and Google Cloud among their options, which coincide with some of the alternatives considered in this manuscript. It is therefore interesting to review the criteria they included for MCDA, as summarized in Table 2 (for example, [30] ranked cloud services in 2013 using AHP with the criteria Responsibility, Agility, Service assurance, Cost, Performance, Security and privacy, and Usability [31]). The criteria found in the literature are purely technical, with some hints of economics, and can be found as part of the characteristics of IoT architectures [32]. However, when implementing an IIoT platform, non-technical aspects should also be considered. Since the platforms under consideration are cloud based, it is valid to review the criteria included in previous MCDA exercises for selecting a cloud provider, looking for non-technical aspects.
The criteria for selecting a cloud provider proposed in the CSMIC Framework v2.1 of 2014 (see footnote 1) as the Service Measurement Index (SMI) include topics of organizational, financial and usability interest, together with the technical issues [31]. Some of these criteria can be included to complement the analysis, covering both the technical and the business points of view.
Finally, there is the question of which methods are suitable for this type of problem. Previous work includes AHP, ANP, TOPSIS and fuzzy logic, but leaves methods such as PROMETHEE and ELECTRE for future research. Many more methods are available in the MCDA field. Following the decision tree for selecting an MCDA method written by [23], which considers 56 methods, the number of options can be easily reduced. In the case of selecting an IoT platform with different criteria, the problem has the characteristics of a classification or ranking problem, ordering the options from best to worst. This is useful in real life, since organizations rarely commit themselves to a single option; they have to consider a primary option and a backup option, in case the first proves not to be viable.
The candidate methods found are COMET, NAIADE II, EVAMIX, MAUT, MAVT, SAW, SMART, TOPSIS, UTA, VIKOR, Fuzzy SAW, Fuzzy TOPSIS, Fuzzy VIKOR, PROMETHEE I, PAMSSEM II, Fuzzy PROMETHEE II, AHP + TOPSIS, AHP + VIKOR, Fuzzy AHP + TOPSIS, AHP + Fuzzy TOPSIS, Fuzzy ANP + Fuzzy TOPSIS, AHP, ANP, MACBETH, DEMATEL, REMBRANDT, Fuzzy AHP and Fuzzy ANP. Of the 29 methods suggested by the decision tree, those used in the literature for this type of problem are included. Although it would be a very interesting exercise to compare the 29 methods with each other, it is beyond the scope of this article. We propose to use PROMETHEE II, which has not been used in previous works, although some authors have considered it for future work.

Footnote 1: The Cloud Services Measurement Initiative Consortium (CSMIC) was created by Carnegie Mellon University to develop the Service Measurement Index (SMI); it can be found at https://spark.adobe.com/page/PN39b/

Materials and Methods
In our experience, companies that want to implement IIoT show great enthusiasm for the initiative, but on several occasions have a misconception of what IoT entails. IoT concepts are technical and of great interest to engineers and systems architects, but the business factors, cost aspects, payment methods and commercial conditions are of great interest to senior management, represented by the chief officers (often referred to as the CxO level). In addition, the wide market offer, where suppliers have different prices and service schemes, makes them difficult to compare with each other, or at least makes a linear comparison difficult.
Our proposal identifies and suggests the criteria required for IIoT platform selection in an MCDA exercise with the PROMETHEE-II method, enabling organizations to compare results and make a well-founded decision. This work does not provide a universal and definitive solution; rather, it proposes a methodology that any organization, small or large, can use to decide on the IIoT platform that best suits its circumstances and needs. Following the general MCDA process depicted in Figure 3, the decision objective is the selection of an IIoT platform. The selection of criteria must be consistent with the decision, and each criterion must be independent of the others. Each criterion must also be measured on the same scale and be applicable to all alternatives. Table 3 summarizes the criteria to be used, together with their definitions. Criteria that are qualitative, i.e. based on expert judgement, can be measured on a text-to-number scale. For calculating criteria weights, we propose to use the Analytic Hierarchy Process and the Saaty scale [24,27]. Criteria that are quantitative should consider equal scenarios; for example, the cost of data transmission should be calculated for all alternatives with the same number of devices, the same message size and the same number of messages per day.
The selected criteria are divided into three major areas of interest: technical, economic and social. This is a major enhancement over previous works found in the literature. To identify which area each criterion belongs to, we use a relationship matrix, where we identify whether the criterion has a high, medium or low relationship with each of the areas. The selected criteria are also classified as quantitative or qualitative according to their nature, and are summarized in Table 3. The data collected must be analyzed in different ways; it is important to consider the data flow, real-time analysis, batch processing, and the machine learning algorithms available on the platform.
(From Table 3, the Longevity criterion measures the years the provider has been in the market; a supplier's reputation is expected to increase over the years.) Our proposal includes the profiles of the people who must participate in the expert judgement exercise, something that has not been found in the literature. It is important that they are not dedicated only to technology, in order to enrich the exercise. Table 5 lists the desirable profiles of the people we suggest should be involved in an MCDA exercise as experts. Note that not all roles must necessarily participate, as these positions may vary between organizations.

Methods
Our proposed methodology, shown in Figure 5, consists of several tasks aimed at finding the best alternative. The first task (Activity 1) is to define a decision matrix, taking several subtasks into consideration. It is necessary to find the alternatives available on the market (Activity 1.1). A good source of information is to rely on recognized entities such as Gartner Consulting (Activity 1.1.1); they perform studies to identify the leaders, challengers, niche players and visionaries. Next, the criteria are defined (Activity 1.2), supported by elaborating a relationship matrix (Activity 1.2.1). The criteria have been proposed in Table 3. They consist of fourteen items, named C_i, where i = 1, 2, ..., n and n = 14, arranged in three main areas and supported by the decision matrix shown in Figure 6.

Table 5. Desirable profiles for the expert judgement exercise (areas: T = technical, E = economic, S = social)

CIO, Chief Information Officer (T, E, S). Usually the most important person responsible for technology in any company. Their tasks range from buying IT equipment to directing the workforce in the use of technology.

CTO, Chief Technology Officer (T). The technology director reports to the CIO, which means that they act as support for IoT projects. In larger organizations the work may be too much for just one person, so the CTO takes this responsibility.

CInO, Chief Innovation Officer (T, S). This role is of recent creation and is the one that can counteract the sales-oriented instinct of the business units of a company and design an organizational environment more favourable to innovation.

CSO, Chief Security Officer (T). The main person responsible for the information security program of an organization, who should be consulted before any deployment of technology.

COO, Chief Operations Officer (E). Oversees the business operations of an organization and works to create an operations strategy and communicate it to employees. The COO is very involved in the day-to-day running of the company and will be one of those most affected by an IoT project.

CMO, Chief Marketing Officer. The technology and business aspects of the company are converging. This convergence of technology and marketing reflects the need for the traditional commercial director to adapt to a digital world and, therefore, to participate in any IoT project, expressing an opinion on how to obtain commercial benefit for the company.

CFO, Chief Financial Officer (E). In all company projects there must be the support of the finance director, who controls the economic resources of the company. In an IoT project, the CFO is interested in the investment required, and especially in the return on investment.

HRO, Human Resources Officer. The person who needs to know whether the skills necessary for the project exist in the market, how easy they are to obtain, and where they can be obtained. Among their responsibilities are personnel development plans and the recruitment of human resources.

Business Unit Leaders (T, E, S). The deputy directors and managers who report within each hierarchy are key personnel who can provide good opinions and issue a judgement that is more tactical than strategic. By being more focused on specific projects, their knowledge and sensitivity also become specific, giving value to their expert judgements.

Then Activity 2 starts, where experts grade each criterion in pairwise fashion, using the Saaty scale [35] (Activity 2.1) for pairwise comparison (Table 6) to assign a level of importance of C_i over C_j. The experts' answers are recorded in a square matrix x = [n x n]. Each element x_ij has a numeric value translated from the Saaty scale and, as the comparison is pairwise, the reciprocal x_ji = 1/x_ij when i is not equal to j; when i = j, x_ij = 1. In other words, x_ij corresponds to the importance of C_i over C_j.

Table 6. Saaty scale for pairwise comparison
1: Equal importance. Two elements contribute equally to the objective.
3: Moderate importance. Experience and judgement slightly favor one element over another.
5: Strong importance. Experience and judgement strongly favor one element over another.
7: Very strong importance. One element is favored very strongly over another; its dominance is demonstrated in practice.
9: Extreme importance. The evidence favouring one element over another is of the highest possible order of affirmation.
2, 4, 6, 8: Intermediate values between the values above and below.

When designing the tool to capture the experts' answers, consider the number of pairwise comparisons required, which is easily calculated as (n^2 - n)/2. After all answers have been recorded, the weights w for each C_i must be calculated. To proceed, the matrix values first need to be normalized by obtaining the sum of each column and then dividing each cell by the sum of its corresponding column.
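The construction of the reciprocal comparison matrix described above can be sketched as follows. This is a minimal illustration: the function name and the toy judgements are ours, not part of the methodology, and the upper-triangle judgements are assumed to be recorded row by row.

```python
import numpy as np

def pairwise_matrix(n, judgements):
    """Build the full reciprocal comparison matrix from the
    (n^2 - n)/2 upper-triangle Saaty judgements, recorded row by row."""
    assert len(judgements) == (n * n - n) // 2
    x = np.ones((n, n))                  # diagonal: x_ii = 1
    k = 0
    for i in range(n):
        for j in range(i + 1, n):
            x[i, j] = judgements[k]      # importance of C_i over C_j
            x[j, i] = 1.0 / judgements[k]  # reciprocal x_ji = 1/x_ij
            k += 1
    return x

# Toy example with n = 3 criteria (hypothetical judgements):
m = pairwise_matrix(3, [3, 5, 2])
```

For n = 14 criteria, the same function would expect the 91 judgements mentioned later in the paper.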
From this normalized matrix, the criteria weights w are obtained as the average of each row, w_i = (1/n) * sum over j of x_ij, for j = 1, 2, ..., n. However, it is important to verify whether the weights found are trustworthy and can be applied later. This is achieved by calculating the Consistency Ratio (CR). CR measures how consistent the judgements are relative to a large sample of purely random judgements, known as the Random Index (RI). When CR < 0.1, the weights are acceptable. When CR > 0.1, the judgements are untrustworthy, because they are closer to a random distribution, and the exercise must be repeated. This random distribution, also known as the Saaty random consistency index, is well documented by Saaty [35] and widely used in the literature. As a reference, Table 7 shows the values of RI based on the number of criteria [36].
CR is found as CR = CI / RI, where CI is the Consistency Index and RI is the Random Index. CI is calculated as CI = (lambda_max - n) / (n - 1). To obtain lambda_max, multiply each value of the comparison matrix by its corresponding criterion weight and then sum each row to obtain a weighted sum value (WSM). Each weighted sum value is then divided by the corresponding criterion weight (CW), producing a new column with values lambda_i = WSM_i / CW_i. lambda_max is the average of these values: the sum of all lambda_i divided by the number of rows in the matrix. If CR < 0.1, the calculated weights are accepted as trustworthy and the experts can proceed to grade each alternative S_k on each C_i. For qualitative criteria we propose a qualitative-to-numeric conversion from 1 to 5: each word on the scale (low, below average, average, good, excellent) has a corresponding value in {1, 2, 3, 4, 5}.
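The weight and consistency calculation just described can be sketched in a few lines. This is an illustration only: the RI values are the commonly cited Saaty random indices (the paper's Table 7 takes precedence), and the perfectly consistent 3 x 3 toy matrix is hypothetical.

```python
import numpy as np

# Commonly cited Saaty Random Index (RI) values by matrix size
RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32,
      8: 1.41, 9: 1.45, 10: 1.49, 11: 1.51, 12: 1.48, 13: 1.56, 14: 1.57}

def ahp_weights(x):
    """Column-normalize the comparison matrix, average each row to get
    the weights, and compute the Consistency Ratio (CR)."""
    n = x.shape[0]
    norm = x / x.sum(axis=0)            # divide each cell by its column sum
    w = norm.mean(axis=1)               # criteria weights (average of each row)
    wsm = x @ w                         # weighted sum value per row
    lam = wsm / w                       # lambda_i = WSM_i / CW_i
    lam_max = lam.mean()
    ci = (lam_max - n) / (n - 1)        # Consistency Index
    cr = ci / RI[n] if RI[n] else 0.0   # Consistency Ratio
    return w, cr

# Perfectly consistent toy matrix (implied weights 0.6 / 0.3 / 0.1):
x = np.array([[1, 2, 6], [1/2, 1, 3], [1/6, 1/3, 1]])
w, cr = ahp_weights(x)
```

For a consistent matrix like this one, CR is essentially zero; real expert judgements yield a positive CR that must stay below 0.1.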
Activity 3 consists of evaluating the alternatives using the decision matrix with the weights found and validated. It is necessary to define each criterion's goal: Maximize (also known as a direct or beneficial criterion) or Minimize (also known as an indirect or non-beneficial criterion). This goal setting is important, as it defines the normalization method in Activity 4.
Quantitative criteria just need the value entered as found. For a qualitative criterion, the expert enters a perception of the criterion, which in turn is translated into a numeric value; we propose to use values from 1 to 5, as shown in Table 8. After the whole decision matrix is evaluated, the PROMETHEE-II method can be applied. PROMETHEE-II stands for Preference Ranking Organization Method for Enrichment Evaluations. Version I provides only a partial ranking, reason enough not to use it in our methodology, while version II produces a full ranking. PROMETHEE-II is an extensively documented method, and the reader can find more information in [37,38].
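For readers who want to follow the computation end to end, a compact PROMETHEE-II sketch is shown below. It assumes min-max normalization and the usual criterion (linear preference P = max(0, d)); other preference functions from the PROMETHEE literature could be substituted. The alternatives, weights and goals are hypothetical toy data.

```python
import numpy as np

def promethee2(X, w, maximize):
    """Rank alternatives with PROMETHEE-II: min-max normalize per criterion,
    build aggregated preferences, and return the net flow (higher is better)."""
    m, n = X.shape
    R = np.zeros_like(X, dtype=float)
    for j in range(n):
        lo, hi = X[:, j].min(), X[:, j].max()
        span = hi - lo if hi > lo else 1.0
        R[:, j] = (X[:, j] - lo) / span if maximize[j] else (hi - X[:, j]) / span
    # Aggregated preference pi(a, b) = sum_j w_j * max(0, R_aj - R_bj)
    pi = np.zeros((m, m))
    for a in range(m):
        for b in range(m):
            if a != b:
                pi[a, b] = np.sum(w * np.maximum(0.0, R[a] - R[b]))
    phi_plus = pi.sum(axis=1) / (m - 1)    # leaving flow
    phi_minus = pi.sum(axis=0) / (m - 1)   # entering flow
    return phi_plus - phi_minus            # net flow Phi

# Hypothetical 3 alternatives x 2 criteria (first maximized, second minimized):
X = np.array([[10.0, 200.0], [8.0, 150.0], [6.0, 100.0]])
w = np.array([0.7, 0.3])
net = promethee2(X, w, maximize=[True, False])
best = int(np.argmax(net))
```

Sorting the net flows from highest to lowest gives the full ranking of alternatives, which is exactly the output used in Activity 5.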
Finally, all alternatives are ranked and the best option for the organization can be obtained (Activity 5).

Results
Calculating weights, checking consistency and selecting the best alternative can be difficult to follow in the abstract, so it is better to show an example. In this work, we follow our proposed methodology to obtain the best option for selecting an IIoT platform, calculating the weighted criteria for the three platform vendors located in the leader quadrant of Gartner's magic quadrant (Fig. 4): AWS, Azure and GCP.

Weight Criteria Calculation
The first step in our methodology is to calculate the weights required for platform selection. To achieve this, two things must be done: 1) calculate the weights from the experts' judgement (participants drawn from Table 5) and 2) validate consistency.
Each expert must answer how important criterion i is over criterion j. Using the Saaty scale [35] for pairwise comparison (Table 6), experts can express the importance between two criteria. In our proposed methodology, each expert consulted must answer (14^2 - 14)/2 = 91 comparisons, as there are 14 criteria.
Following the criteria abbreviations proposed in Table 3, and having recorded the experts' judgement for each pairwise comparison, Table 9 shows the matrix with the answers given. We need to obtain the sum of each column, which is used to normalize Table 9, resulting in Table 10. To determine whether the weights are trustworthy, we calculated the Consistency Index and Consistency Ratio. Table 13 shows the values obtained when calculating the WSM, the ratio of each WSM to its weight w_i, and lambda_max, and Equation 5 shows the Consistency Index (CI) calculation. Using the Random Index for N = 14 from Table 7, the Consistency Ratio is then computed. As CR < 0.1, the weights for each criterion are consistent and trustworthy; therefore, they are accepted for use in our decision process.

IIoT Platform Selection
Three cloud platform vendors are considered for this exercise: AWS, Azure and Google Cloud Platform (GCP), listed in alphabetical order. Each vendor brings IoT capacity, different services and pricing schemes that are not directly comparable among vendors. Each organization has its own goals and will answer the criteria weighting process differently, so it is not possible to determine in an absolute fashion which vendor is better than another. For that reason, this scenario is a good fit for our methodology.
Each alternative (call them S_i) needs to be graded on each of the proposed criteria. It is convenient to record this in a table with the criteria identified (here we use the abbreviations suggested in our methodology), specifying whether the criterion is quantitative, i.e. requires a numeric value contained in the criterion domain, or qualitative, requiring the expert's appreciation to be converted into a pre-established numeric value, as shown in Table 14. For the criterion "Available regions" (TAr), AWS has 22 available regions worldwide, Azure offers 55 regions and GCP offers 21. For the criterion "Communication ports" (TCp), AWS offers three options (HTTP, WebSockets, MQTT), Azure offers four (HTTP, AMQP, MQTT, WebSockets), and GCP offers two (HTTP, MQTT). The criterion "Cost" (EC) is the most cumbersome to compare and calculate: AWS uses a mixed schema to estimate IoT costs, Azure is based on messages, and GCP has a traffic-consumption schema. As can be seen, these are not directly comparable, so we estimated costs based on the same scenario for all three vendors.
The scenario consists of 1,000 devices, each sending an 8 KB message at a rate of 2 messages per minute; all estimations are per month. Our compared estimations using each vendor's calculator are summarized in Table 15. Training cost (ETc) takes into consideration the cost of certification: AWS $150.00, Azure $165.00 and GCP $200.00 (at the time of writing). The rest of the criteria are evaluated qualitatively. Table 16 contains the grades provided, together with Max(x_ij) and Min(x_ij) per criterion. To save space, we use S_1 for AWS, S_2 for Azure, and S_3 for GCP.

To normalize the table, we need to consider whether we are maximizing, R_ij = [x_ij - Min(x_ij)] / [Max(x_ij) - Min(x_ij)], or minimizing, R_ij = [Max(x_ij) - x_ij] / [Max(x_ij) - Min(x_ij)], where Max and Min are taken over the alternatives for each criterion. The resulting normalized matrix is in Table 17. Next, we calculate the preference function values, P_j(a, b) = max(0, R_aj - R_bj), resulting in Table 19. Then we calculate the weighted preferences, using the preference function and the weights found (Table 11); each cell holds the value w_j P_j(a, b), and the results are in Table 20. The aggregated preference, the sum of the weighted preferences over all criteria, is shown in Table 21.
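Returning to the cost criterion, the monthly message volume implied by this common scenario, which we fed into each vendor's pricing calculator, can be reproduced with simple arithmetic. We assume a 30-day month and decimal units (1 GB = 1,000,000 KB); the vendors' own calculators may meter traffic differently.

```python
# Common pricing scenario: 1,000 devices, 8 KB messages, 2 messages/minute
devices = 1_000
msg_per_min = 2
msg_size_kb = 8

messages = devices * msg_per_min * 60 * 24 * 30   # messages per month
traffic_gb = messages * msg_size_kb / 1_000_000   # monthly traffic, decimal GB

# messages -> 86,400,000 per month; traffic_gb -> 691.2 GB per month
```

These two figures are what make a message-based schema (Azure) and a traffic-based schema (GCP) comparable on equal terms.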
Preprints (www.preprints.org) | NOT PEER-REVIEWED | Posted: 19 February 2020 doi:10.20944/preprints202002.0269.v1

Next, using the aggregated preference values, we calculate the entering and leaving flows. Table 22 has the arranged values; the right-most column contains the leaving flow (phi+) and the bottom row shows the entering flow (phi-). The leaving and entering flows are calculated as phi+(a) = [1/(m - 1)] * sum over b of pi(a, b) and phi-(a) = [1/(m - 1)] * sum over b of pi(b, a), where m is the number of alternatives and pi is the aggregated preference. As we are using PROMETHEE-II, we need to calculate the net flow Phi. The best way to do this is to build another table with each alternative and its corresponding leaving and entering flows, add a column for the net flow (Phi = phi+ - phi-), and order the net flows from highest to lowest to rank all available alternatives. Table 23 shows the results.

Discussion
The proposed methodology finds the best alternative within a decision matrix using all the criteria, as shown in the worked example. However, as part of this research, we decided to execute two validations. The first uses the proposed methodology with criteria subsets. The second runs the full set of criteria (14 elements) with three different methods: TOPSIS, whose use has been reported in the literature for similar problems, MOORA, and Dimensional Analysis (DA), using the same alternatives and values in the decision matrix.
Our proposed methodology with criteria subsets shows good consistency in the alternative selected, except when we used five criteria. With seven or ten criteria, the result is exactly the same, as shown in Table 24 and Fig. 7. Comparing TOPSIS, MOORA and DA against our proposed methodology, the results are consistent, as all algorithms selected the same alternative with the same number of criteria considered. Table 25 and Fig. 9 show that all three other methods selected the same alternative as our methodology. Because TOPSIS has been used in similar problems, we performed an additional comparison: running TOPSIS against the same criteria subsets, the selected alternative is the same in all cases, as shown in Table 26.
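As a sketch of the cross-method validation, a minimal TOPSIS implementation is shown below (vector normalization, weighting, and closeness to the ideal and anti-ideal solutions). The data are the same hypothetical toy values used earlier, not the paper's decision matrix; on them TOPSIS agrees with the PROMETHEE-II ranking, mirroring the consistency reported in Tables 25 and 26.

```python
import numpy as np

def topsis(X, w, maximize):
    """Rank alternatives with TOPSIS: vector-normalize each criterion,
    weight, then measure distance to ideal and anti-ideal solutions."""
    V = w * X / np.linalg.norm(X, axis=0)          # weighted normalized matrix
    best = np.where(maximize, V.max(axis=0), V.min(axis=0))    # ideal
    worst = np.where(maximize, V.min(axis=0), V.max(axis=0))   # anti-ideal
    d_best = np.linalg.norm(V - best, axis=1)
    d_worst = np.linalg.norm(V - worst, axis=1)
    return d_worst / (d_best + d_worst)            # closeness: higher is better

# Same hypothetical toy data: 3 alternatives x 2 criteria (max, min):
X = np.array([[10.0, 200.0], [8.0, 150.0], [6.0, 100.0]])
w = np.array([0.7, 0.3])
score = topsis(X, w, maximize=np.array([True, False]))
ranked_first = int(np.argmax(score))
```

Unlike PROMETHEE-II's pairwise outranking flows, TOPSIS scores each alternative independently against the ideal point, which is why agreement between the two methods is a meaningful robustness check rather than a foregone conclusion.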

Conclusions
As IIoT and cloud technology advance, new options will become available in the market for organizations. Also, there are relevant aspects that are not only technical, but economic and social. The three alternatives evaluated for this paper are aligned with the leaders identified by Gartner up to 2018; however, this does not assure they will be the only ones in the near, mid or long term.
The proposed criteria follow and adapt to today's vision. People must have "double-deep" abilities, that is, both technical and business skills. That is one of the reasons to add the economic and social angles to the technical criteria. Both have been left out in the literature and in daily practice; our contribution to industry provides these two missing aspects.
Cost is one of the most difficult and confusing comparisons if there is no common scenario to run against each pricing schema. However, as shown in Table 11, cost is not the main driver of an IIoT decision. Security has the highest weight, which is understandable, as organizations' IIoT implementations and solutions will transmit sensitive data. Communication protocols is the second most important criterion, the reasoning being the flexibility required for the different sensors available in the market. Device management and display are very close in importance, which is logical, as organizations need to deploy from tens to thousands of devices for a solution, and having a dashboard to locate and get information about devices is important.
Of the economic and social criteria, the most significant are cost and available resources, respectively; longevity in the market was the least important criterion. This can be read as organizations being open to experiment and learn with newcomers.
It is best to have experts from different backgrounds and responsibilities within the organization. The roles suggested in this methodology (Table 5) cover a large part of the main organizational areas. We decided to include not only the IT department, but also operations, finance, human resources, and business unit leaders, which proves to be aligned with the suggested criteria. By inviting different roles to participate, the criteria weighting becomes more accurate, and therefore the selection process improves. We do not suggest having a single expert provide opinions on criteria weighting: as people may have different understandings or could be biased towards specific criteria, having more than one expert is preferred, and our proposed set of roles provides options for selecting the experts.
The use of the Saaty scale and method to evaluate criteria importance proved to be effective. However, we found the validation of opinions to be even more important, in order to provide trustworthy weights for the selection criteria. In our experiment, the consistency ratio was 0.06, which is acceptable and allows the process to continue. Organizations must use this kind of validation when deciding what is more important among criteria.
As discovered in the literature review (Table 2), most work related to cloud and IoT has focused on AHP and TOPSIS. But selecting an IIoT platform cannot rest on a single winning alternative; it is better to have all alternatives ranked. Our experience shows that in some cases the selected vendor cannot deliver or does not meet other organizational requirements such as terms, legal contracts, conditions, or timing. When this happens, it would be a waste of time to redo the whole MCDA process. That is why PROMETHEE-II proved effective, as it ranks the available alternatives from top to bottom. In our exercise, Azure was the first option, followed by AWS and GCP.
It is important to note that PROMETHEE-II and our methodology will not say which platform or technology is better from an absolute standpoint, but which platform or technology is better suited for the organization, based on the weights and grades provided by experts within the organization.
The paper demonstrated that our proposed methodology is effective for finding the best alternative when selecting an IIoT platform vendor, as it performed consistently with five-, seven- and ten-criteria subsets, as well as when comparing results against other methods. It also contributes to the field of IIoT by providing a novel method to solve a problem that many organizations face or will face at some point. By combining Saaty's weighting method and PROMETHEE-II, decision makers have a good tool to perform the selection. However, if the analysis is limited to technical aspects, the result may be biased and miss important aspects of the market. For example, if the technology is very good and the platform is the most complete and least expensive, but there are no engineers or developers available, or training classes cost a fortune, implementing the platform will be a difficult and expensive project, with hidden costs not detected at inception. That is the reason and justification for including economic and social aspects in the criteria, as our methodology proposes.
IIoT platform selection should not be left to IT departments or to the CIO or CTO alone. Doing so misses the point of view of other important leaders who will use, maintain or benefit from the selected platform. The Chief Operating Officer, leaders from business units, interdisciplinary teams, and even human resources and finance should participate in the MCDA process, as they bring ideas and considerations that are sometimes ignored unintentionally. Our proposed methodology provides a suggested list of key persons to participate, something that was not found in the literature and is very valuable for the decision process.
As a side discovery, comparing price schemas among vendors is not an easy task. We found it very useful to have a common scenario to run against the price schemas. Building a common scenario requires a close-to-reality estimate of usage, number of devices, message size, and frequency of communication. Trying to compare price schemas without such a scenario could lead to misleading information being entered in the PROMETHEE-II grading matrix (Table 16).
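Such a common scenario can be expressed as a small script. The vendor names and per-message prices below are entirely hypothetical and do not reflect any real price sheet; actual vendor pricing varies by tier, region and billing model and must be taken from the official price lists.

```python
# Hypothetical price per million messages, per vendor.
# Real IIoT platform pricing is tiered and must be looked up.
PRICE_PER_M_MSG = {"VendorA": 1.00, "VendorB": 0.80, "VendorC": 1.20}

def monthly_messages(devices, msgs_per_device_per_hour):
    """Message volume for a 30-day month under the common scenario."""
    return devices * msgs_per_device_per_hour * 24 * 30

def monthly_cost(vendor, devices, msgs_per_device_per_hour):
    """Estimated monthly cost for one vendor under the scenario."""
    msgs = monthly_messages(devices, msgs_per_device_per_hour)
    return PRICE_PER_M_MSG[vendor] * msgs / 1_000_000

# Common scenario: 500 devices, 12 messages per device per hour,
# so every vendor's schema is evaluated on identical usage.
costs = {v: monthly_cost(v, 500, 12) for v in PRICE_PER_M_MSG}
```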
The process of performing the calculations and operations is laborious, due to the nature of the algorithms used in our proposed methodology. This inspires us to continue the future work of enhancing the methodology by creating software to facilitate the computation. Another key aspect is the importance grading in Saaty's process: filling the matrix with reciprocal values by hand can easily lead to human error. This also highlights, as part of our future work, the development of a graphical user interface that experts can use to enter the importance between criteria in a friendly fashion, fully automating our methodology when multiple experts participate in the process.
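One simple safeguard such a tool could provide is to capture only the upper-triangle judgments and derive the reciprocals programmatically, removing one common source of data-entry error. The helper below is a sketch of that idea, with made-up judgments for three criteria.

```python
import numpy as np

def build_comparison_matrix(n, upper):
    """Build a full reciprocal Saaty matrix from upper-triangle
    judgments only: upper maps (i, j) with i < j to the judged
    importance of criterion i over criterion j."""
    A = np.ones((n, n))  # diagonal is always 1 (self-comparison)
    for (i, j), v in upper.items():
        A[i, j] = v
        A[j, i] = 1.0 / v  # reciprocal filled automatically
    return A

# Hypothetical judgments for three criteria: only i < j entries
# are typed in; the rest of the matrix is derived.
A = build_comparison_matrix(3, {(0, 1): 3, (0, 2): 5, (1, 2): 2})
```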
Author Contributions: All the authors jointly contributed to the finalization of the paper: R.C. defined the proposed criteria and provided the MCDA options; A.O. supervised the overall process and provided resources; M.E. directed the ranking method; L.P. reviewed R.C.'s development of the methodology; V.G. critically reviewed the concept and design of the paper. All authors have read and agreed to the published version of the manuscript.

Conflicts of Interest:
The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.