Article

Rank-Based Comparative Research Flow Benchmarking the Effectiveness of AHP–GTMA on Aiding Decisions of Shredder Selection by Reference to AHP–TOPSIS

1 Department of Civil Engineering, College of Engineering, National Kaohsiung University of Science and Technology, Kaohsiung 807, Taiwan (ROC)
2 Department of Management Sciences, College of Business and Management, Tamkang University, New Taipei City 251, Taiwan (ROC)
3 Department of Multimedia Design, College of Human Ecology and Design, St John’s University, New Taipei City 251, Taiwan (ROC)
4 The Graduate Institute of Business and Management, College of Management, Chang Gung University, Taoyuan City 333, Taiwan (ROC)
* Author to whom correspondence should be addressed.
Appl. Sci. 2018, 8(10), 1974; https://doi.org/10.3390/app8101974
Submission received: 4 September 2018 / Revised: 7 October 2018 / Accepted: 12 October 2018 / Published: 18 October 2018
(This article belongs to the Special Issue Selected Papers from IEEE ICICE 2018.)

Abstract: The AHP–GTMA (analytic hierarchy process and graph theory and matrix approach) was applied to select the best paper shredder before a company placed a bulk purchase order. However, there is a question as to whether such a relatively recent approach is effective in aiding selection decision problems in industrial/commercial practice. In this paper, a novel multi-measure, rank-based comparative research flow is proposed. The real decision problem mentioned above is solved using the AHP–GTMA and AHP–TOPSIS methods, respectively, with the relevant datasets sourced. Several measures in the proposed flow, i.e., arithmetic, geometric and statistical ones, are combined and used to validate the similarity between the rank order vectors (ROVs) (and thus between the final preferential orders determined over the alternatives) obtained using these two different methods. Since AHP–TOPSIS is a well-established multi-attribute decision-making (MADM) approach that has been successfully applied to many other fields, the similarity validated between these individual results is used to confirm the efficacy of the AHP–GTMA approach and to determine its applicability in practice. In addition, several contributions are delivered for implementing the decision models, e.g., an optimized recursive implementation in R that computes the permanent value of a square ASAM (alternative selection attribute matrix, the computational basis required by AHP–GTMA) of any dimension. The proposed methodological flow, which confirms similarity based on ordinal rank information, is not only convenient in operational practice with ubiquitous tool support (e.g., the vector-based R statistical platform), but also generalizable (to verify between any other pair of results obtained using other MADM methods). This gives options for future research.

1. Introduction

The problem of sourcing required materials is usually a confounding decision for the decision makers (DMs) of an industrial company. Sourcing an appropriate combination of materials for manufacturing has been a key problem, e.g., for the photoelectric industry, although the focus sometimes differs because of the various risk factors present in the supply chain. Multi-objective programming (MOP) models are usually employed as the main tool to tackle these multi-objective decision-making (MODM) problems [1]. The problem of finding the optimal combination of material sourcing for manufacturing shares this MODM property with many existing problems in relevant fields, e.g., on the other side of manufacturing, finding the expected optimal combination of products to produce, i.e., the “product mix” problem [2].
However, sourcing appropriate tools or materials for office use is a different problem for industrial companies, because it usually involves a uni-type bulk purchase (i.e., a “single-lot purchase”). This forms a selection problem that is intrinsically different from the abovementioned MODM problems. One single type of product, among many “alternatives”, must be determined before the bulk-purchasing process begins, and such a decision is usually justified based on the alternatives’ pros and cons in terms of the “attributes”. Prioritizing the alternatives using the attributes considered is the basis for the selection. Therefore, for problem–model fitness, the suitable approach to this problem is usually multi-attribute decision-making (MADM). The following paragraphs illustrate the core properties of this type of decision problem using the paper shredder selection problem commonly faced by industrial companies.
A paper shredder is an electromechanical machine used mainly in offices, although its use in the home is also popular. For DMs, the selection of the best paper shredder from several considered alternatives is an occasional but confounding operational decision. This type of decision is important for business entities before launching a bulk purchase because of some properties of the decision problem itself. As can be summarized from [3], such a decision is important because of the following properties:
(1)
Importance of the shredders for company management. Because of the key roles of such a business machine in security management (i.e., it destroys sensitive or private documents before disposal or recycling, thus preventing the disclosure risk of confidential information) and in efficiently managing paper files (i.e., it destroys dated but still archived physical paper files, some of which have been converted and saved in the digital repository), a wrong decision to select and purchase an unsuitable shredder may lead to some unexpected consequences.
(2)
The post-decision consequence of bulk purchase. Usually, for maintenance convenience and cost saving, a company launches a batch procurement which involves purchasing several shredders of the same type at a time. In this context, failing to choose and decide the most suitable shredder will simply replicate the irrecoverable consequences mentioned above.
(3)
Hard to decide because of the variety of choices. In the market, there are many different types of shredder, each with its own special features. For example, in the purchase case studied by Zhuang, Chiang, Su and Chen [4], there were 26 options. This deters a DM from evaluating them properly and makes it harder to decide among so many alternatives.
(4)
Hard to decide because multiple conflicting selection criteria are present, but these criteria and the relative importance among them remain unknown. On this point, it was claimed in [4] that “… the problem … should address a different set of criteria, in comparison with using at home.” Accordingly, seven relevant main selection criteria to be considered in the “office-use” context were summarized.
Therefore, Zhuang et al. [4] used a hybrid MADM approach, AHP–GTMA (analytic hierarchy process and a graph theory and matrix approach), to model this selection decision problem. For details of how the AHP–GTMA model was formulated in [4], see the digested review in Appendix A. In summary, the main contributions of [3,4] are being the first to scientifically manage/support the encountered selection decision, identifying a suitable criteria set for the problem and solving the problem systematically using the hybrid AHP–GTMA model. These are important for making a purchase decision and for addressing the design issues of paper shredders. However, apart from these, there is a question as to whether such a relatively recent approach is effective in aiding a selection decision problem (such as the abovementioned) in industrial practice.
In terms of operational research (OR) applications, using only one single method to solve a certain type of problem is not always persuasive. Thus, it is often of interest whether another method can be used to solve the same problem, what results that process yields and what comparisons can be made between the method used before and the one used after. That is to say, using an arbitrary approach to obtain a solution could be deemed appropriate, as long as there is problem–model fitness and it is the first time the encountered decision has been scientifically managed. These are exactly the contributions of [4]. However, this also implies that the effectiveness of using the AHP–GTMA method has not yet been cross-validated, or at least verified.
Therefore, this paper proposes a rank-based, multi-measure comparative research flow to benchmark the effectiveness of AHP–GTMA on solving the target selection decision problem. The original paper selection problem is used as the exemplar problem that is to be solved separately by using AHP–GTMA and another rival approach. The rival approach must be a “credible” one to ground the claims made based on the results from the comparisons. In the experiment that applied the proposed flow, the well-known AHP–TOPSIS method, which has been applied to various fields, is introduced as the rival approach not only because of its credibility but also for the sake of its model–problem fitness.
The entire flow compares the results and validates the similarity between the results solved separately using the two models. For the two models used in this study in particular, subject to the validated similarity, the effectiveness of the AHP–GTMA model can be grounded by reference to the results from using the widely applied AHP–TOPSIS method. In other words, the effectiveness of AHP–GTMA in solving a selection problem in practice is verified by the verified similarity between the results.
The proposed flow is rank-based because it takes only the rank order vectors (ROVs) (i.e., the output of most of the MADM models, to the authors’ knowledge [5]) that are solved individually by two MADM models as the required information for input. (A final ordinal rank is usually determined by the ROV, wherein each element indicates the preferential order of an alternative. Usually, this vector is determined by, and is the result of, prioritizing another “score vector” that carries information about the “assessed scores” for all of the alternatives in the given selection problem. See Hwang and Yoon’s book [5] for more detail, which was re-printed in 2011 [6]. In addition, the mentioned term “score” has different forms when different MADM methods are used. For example, using TOPSIS, the relative closeness index (RCI) connotes the “score” for an alternative. For another example, using GTMA, the permanent value of an ASAM matrix connotes the “score” for an alternative. These are the basis to prioritize the alternatives, to name but a few.) It is a multi-measure flow because different concepts are used to solidify and cross-validate the results from comparison: the arithmetic based, the geometrical (vector distance) based and even the statistical (non-parametric) based ones.
Applying the comparative research flow, both methods yield very similar ROVs for the exemplar problem. This is justified in terms of several measures, such as the (arithmetic) difference vector of the two ROVs and the (geometric) Euclidean distance between them. Similarities observed using these measures are further validated (confirmed) using a (statistical) non-parametric test, the sign test, to assess the probability that these two ROVs are drawn from non-identical distributions (as a result, it is very probable that both ROVs come from an identical population). Consequently, for the same real problem, the ROV determined using AHP–GTMA is quite similar to that obtained using AHP–TOPSIS, and thus the effectiveness of applying the AHP–GTMA method is validated. As can be seen, the proposed flow is in fact a quality-control (“QC”) process that examines the similarity in the ROVs step by step to arrive at the eventual evidence that “they passed the similarity QC check”. Note that the outlier concept (from the data analytics field) is also a special design embedded into the comparisons, which is discussed below.
By reviewing the relevant methods systematically, Section 2 addresses the necessity and timeliness of validating the practical effectiveness of the AHP–GTMA approach, while confidence in using AHP–TOPSIS as the rival method is established in terms of its “wide use” and its ability to mitigate the risk of “inconsistency in the results” due to methodological heterogeneity, as it shares a common former phase with AHP–GTMA. Section 3 introduces the comparative research flow that models/solves the same selection decision problem using AHP–GTMA and AHP–TOPSIS individually and compares the results using the proposed flow. Section 4 analyzes the results and discusses the findings/implications drawn from the experiment, and details the new recursive algorithm that determined the permanent values for the latter phase of AHP–GTMA in this study. Section 5 gives concluding remarks.

2. Literature Review

As discussed previously, the most important aim of this study was to examine the practical effectiveness of AHP–GTMA, by reference to the results obtained using AHP–TOPSIS on the same real problem, using the proposed multi-measure, rank-based comparative research flow. Thus, in this section, first, the main method to be benchmarked (i.e., AHP–GTMA, as well as its basis, the GTMA approach), the rival approach that is used (i.e., AHP–TOPSIS, as well as its basis, the TOPSIS approach) and their common former stage (i.e., AHP) are reviewed (in reversed order, for clarity). The current application literature reveals that not only TOPSIS but also the hybrid AHP–TOPSIS are popular methods for MADM. In contrast, relatively few studies have applied the more recent AHP–GTMA approach, even though applications of GTMA are now increasing. Following this systematic review, it is observed that: (1) using AHP–TOPSIS as the rival approach is appropriate because the benchmarked AHP–GTMA approach is compared to a “widely used approach” [7]; and (2) it is suitable to benchmark the effectiveness of AHP–GTMA in practice at this moment because more applications of it are expected (similar to the increasing popularity of applying pure GTMA) but confidence in its application is still insufficient. As for the mentioned problem–data–model fitness of AHP–TOPSIS and the novelty of the proposed recursive algorithm for implementing AHP–GTMA in R, these are relatively minor points for the study and are presented below along with the modeling and model implementation work.
AHP is a widely used MADM approach that was proposed in 1977 [8]. It has been used in many recent application studies (e.g., [9,10,11,12,13,14,15]). AHP uses the concept of pairwise comparisons during investigation to prioritize or rank the criteria and/or the alternatives during calculation. With standard AHP, the involved criteria are pairwise compared with respect to the total decision goal to determine the criteria weight vector (CWV) and establish a priority over them. This is usually called the first phase. The involved alternatives are then pairwise compared with respect to (each of) these criteria to determine the preferred priority sequence over them. For AHP, there is also evidential cross-method integration [16,17,18,19]. To the authors’ knowledge, it is still the most popular MADM method for field applications. For example, in the supply chain field, into this decade, apart from the MODM models, which are usually based on mathematical programming, AHP is the top-rated MADM approach for tackling supplier selection problems [20,21], and this is also true in a 2015 survey of green supplier selection methods [22]. Based on its popularity (i.e., the application domain) and its maturity after four decades of development (i.e., the methodological domain), it is used as the shared initial phase for the two methods that are to be compared.
TOPSIS (Technique for Order Preference by Similarity to an Ideal Solution) was proposed toward the end of the last century [5,23]. In the MADM school of MCDM, it quickly gained popularity. TOPSIS utilizes the concept of relative closeness to the ideal: the “relative closeness” of an alternative is measured as the ratio of its distance to the anti-ideal solution of the model (system) against its summed distances to both the ideal and anti-ideal solutions. The greater this ratio, the closer the alternative is to the ideal solution and the farther it is from the anti-ideal solution. Even after 2010, studies using non-hybrid versions of TOPSIS have been undertaken [24,25,26,27,28,29,30,31,32,33,34,35]. Similar to AHP, these TOPSIS studies can be roughly categorized as methodological improvement studies and model application studies. Beyond those methodological improvement studies, many studies since 2000 have used AHP–TOPSIS [36,37,38,39,40]. This has continued to be a trend over the last three years, as the literature is replete with examples of its use in widely varying fields (e.g., [41,42,43,44,45,46,47,48,49]). Since TOPSIS is another widely used approach, setting up the hybrid of TOPSIS and AHP (AHP–TOPSIS) as the rival approach is an important part of the experimental design of this study.
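The relative-closeness logic described above can be sketched as follows. This is a generic illustration in Python (the study's own implementation is in R), with the decision matrix, weights and criterion directions all hypothetical:

```python
import math

def topsis_rci(D, w, benefit):
    """Return the relative closeness index (RCI) for each alternative.
    D: rows = alternatives, columns = raw criterion scores.
    w: criterion weights. benefit: True for the-more-the-better criteria."""
    m, n = len(D), len(D[0])
    # vector-normalize each column, then apply the weights
    norms = [math.sqrt(sum(D[i][j] ** 2 for i in range(m))) for j in range(n)]
    V = [[w[j] * D[i][j] / norms[j] for j in range(n)] for i in range(m)]
    # ideal and anti-ideal values per criterion
    ideal = [max(V[i][j] for i in range(m)) if benefit[j]
             else min(V[i][j] for i in range(m)) for j in range(n)]
    anti = [min(V[i][j] for i in range(m)) if benefit[j]
            else max(V[i][j] for i in range(m)) for j in range(n)]
    rci = []
    for i in range(m):
        d_plus = math.sqrt(sum((V[i][j] - ideal[j]) ** 2 for j in range(n)))
        d_minus = math.sqrt(sum((V[i][j] - anti[j]) ** 2 for j in range(n)))
        # RCI: distance to anti-ideal over summed distances to both poles
        rci.append(d_minus / (d_plus + d_minus))
    return rci
```

An alternative that coincides with the ideal solution gets RCI = 1 and one that coincides with the anti-ideal gets RCI = 0; ranking by descending RCI yields the preferential order.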
GTMA, since it was proposed, has also become an increasingly popular MADM approach. Graph theory, which forms the basis for GTMA, is centuries old. To the authors’ best knowledge, the first application of graph theory was in a study by Leonhard Euler in 1736 [50]. A decade ago, Rao proposed the idea of using graph theory [51] for MADM, and the method was further systematized into the known graph theory and matrix approach (GTMA) as a formal decision-making method [52]. GTMA represents the performance attributes (of each alternative) in a digraph, translates it into a matrix representation and calculates the permanent value of that matrix as the basis for ranking [53]. Note that, during this process, the permanent function, not the determinant or the permutation function (which differ entirely in meaning), is used, so as to provide the complete information of the assessed alternative without any loss (see [51]). Some recent studies have successfully resolved selection decisions or alternative ranking problems using this method [54,55,56,57,58]. Within the scope of GTMA, in addition to the abovementioned articles, there have been studies using the AHP–GTMA hybrid approach [59,60,61]. These works were all published after 2012 because of the recency of the AHP–GTMA approach. However, to the authors’ knowledge, applications of this method are still relatively few (in comparison with the widely used AHP–TOPSIS).
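The permanent that GTMA uses as its ranking score expands like a determinant but adds all terms instead of alternating signs. A minimal recursive sketch (in Python, for illustration; the study's own recursive implementation is in R):

```python
def permanent(M):
    """Permanent of a square matrix via recursive expansion along the
    first row: like the determinant's cofactor expansion, but every
    term is added (no sign flips)."""
    n = len(M)
    if n == 1:
        return M[0][0]
    total = 0
    for j in range(n):
        # minor: drop row 0 and column j, then recurse
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        total += M[0][j] * permanent(minor)
    return total
```

For example, the permanent of [[1, 2], [3, 4]] is 1·4 + 2·3 = 10, whereas its determinant is 1·4 − 2·3 = −2; it is exactly this all-positive expansion that lets GTMA retain every attribute interaction without cancellation.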
The above literature study clarifies the standpoint of this study. First, methodologically, each MADM model prioritizes the alternatives using a different internal solution logic. This echoes the claim that “algorithms (for MADM) differ in their approach to selecting the ‘best’ solution, so this leads to ‘inconsistency in the results’” [7]. However, in the literature, it is also noted that “there is neither a strong reason to reject a particular school of MCDM, nor a convincing argument to give general preference to one of the many methods” [62]. In this sense, the necessity for further scrutiny of whether the relatively recent AHP–GTMA approach is effective in aiding selection decision problems in industrial practice (which may answer the main research question of this study) should be addressed.
Second, AHP–GTMA and AHP–TOPSIS share a former phase of AHP. This is important to make a solid basis for the comparison works. The fact the rival approach shares a common former phase with the benchmarked approach can control the “inconsistency in the results”, which is a positive antecedence for the comparisons.
Third, AHP–TOPSIS has been a “widely used” approach, in contrast to AHP–GTMA, whose applications are relatively few. The literature contains many examples of the application of AHP–TOPSIS (the articles cited here are only a limited sample). It is a mature and widely used method, and so it is credible. This justifies the appropriateness of using AHP–TOPSIS as the rival approach. Note that this is not a comment on the relative merits of the two approaches, but a claim based on the observed number of cases (i.e., it has been used again and again for selection problems), although such popularity of AHP–TOPSIS could be attributed to the length of time during which it has been used. For example, in a recent study [63], the school of MODM proposed a hybrid mathematical programming approach to solve the site selection and capacity planning problem for the establishment of wind farms. However, in another recent study [49], the school of MADM also successfully used AHP–TOPSIS as an effective MADM modeling method to aid the decision for the selection of solar farms. The reason the latter study chose AHP–TOPSIS should be analogous (to this study), and that study has eventually become yet another addition to the “number of cases” of AHP–TOPSIS.
Fourth, given these relatively limited applications, we find it suitable to benchmark the effectiveness of AHP–GTMA in practice at this moment, because more applications can be expected in the future (similar to the increased popularity of applying pure GTMA) but confidence in its practical application is still insufficient.

3. Decision Case Modeling and the Comparisons: Methods and the Results

3.1. Comparative Research Flow

Most MADM studies suppose that there are m alternatives and n criteria (attributes) for a given decision system of a selection decision problem, P. An m × n “decision matrix”, D, wherein each element contains a value of an alternative that is evaluated based on some considered attribute, is established by data collection. The data collection process could involve many heterogeneous data sources, such as human data, computer data, network data, archived database data, paper profile data or electronic data, etc., subject to the recent trend of data-driven decision-making (DDDM).
Usually, before the model calculations begin, the initial decision matrix (D) is normalized to give another “normalized decision matrix”, which is D’. In addition, the opinion of a DM about the pairwise relationship between any two given attributes can be polled using an AHP-style expert questionnaire. A pairwise comparison matrix, B, is then obtained and this serves as the basis to determine a priority vector, which is the CWV over the considered attributes.
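As an illustration of this preparation step, the sketch below (in Python, with entirely hypothetical data; the study's implementation is in R) normalizes a decision matrix column-wise and derives a CWV from a pairwise comparison matrix B using the geometric-mean-of-rows method, one common approximation of AHP's principal eigenvector:

```python
import math

def normalize_columns(D):
    """D' from D: divide each column by its column sum
    (one common normalization scheme; others exist)."""
    m, n = len(D), len(D[0])
    sums = [sum(D[i][j] for i in range(m)) for j in range(n)]
    return [[D[i][j] / sums[j] for j in range(n)] for i in range(m)]

def cwv_geometric_mean(B):
    """Approximate the AHP criteria weight vector from the pairwise
    comparison matrix B: geometric mean of each row, then normalize."""
    n = len(B)
    gm = [math.prod(B[i]) ** (1.0 / n) for i in range(n)]
    total = sum(gm)
    return [g / total for g in gm]
```

For a perfectly consistent B the geometric-mean weights coincide with the principal eigenvector; for mildly inconsistent matrices (as in real expert data) they are a close, widely used approximation.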
Using the data specifications as defined (and required) by AHP–GTMA and by AHP–TOPSIS, D’ and B are the sufficient input information for AHP–GTMA modeling, whilst D’ and CWV are sufficient for AHP–TOPSIS modeling. By using E = {D’, B} as the data catalogue and solving P using AHP–GTMA, the ROV, ROVAHP–GTMA, whose elements contain the rank orders that are determined for the alternatives, is obtained as the final output of this AHP–GTMA modeling. Similarly, by using F = {D’, CWV} as the data catalogue for P and solving it again using AHP–TOPSIS, another ROV, ROVAHP–TOPSIS, is then obtained as the final output of this AHP–TOPSIS modeling.
In this study, the first and core logic of the proposed comparative research flow is that, for P, whether ROVAHP–GTMA and ROVAHP–TOPSIS are similar is important. That is, if ROVAHP–GTMA “can be” similar to ROVAHP–TOPSIS upon modeling the same known decision problem in practice, the effectiveness of AHP–GTMA (which is relatively new and with fewer existing applications) is therefore verified because the rival AHP–TOPSIS approach is a “widely used” and experienced method.
The second logic pertains to “how to determine”, instead of to “whether there is”, the similarity between the individual results obtained from two models. This involves two questions for the methodology of comparison: (1) The degree to which they are similar must be measured in a numerical sense, but how? (2) In the “individual results”, what type of message should be taken for the similarity measurement?
The latter question is examined first for illustration clarity. To keep the proposed flow convenient to apply, this study advocates that only the information about the ordinal rank, which is the final output of almost every MADM model, be the input of the flow. That is, comparisons should be made and the similarity should be identified solely based on the two ROVs, rather than introducing any other type of result information that the methods may have in common. This is the reason the proposed comparative research flow is “rank-based”.
The answer to the former question relates to the answer to the latter, because the possible way(s) to measure the similarity numerically depend on what is to be measured. In this study, since the similarity between the individually obtained ROVs is of interest, and each ROV contains numerical ordinal rank values, one assessed for each alternative, several tangible/intangible measures can be established from different theoretical aspects, as follows:
(1)
The absolute distance in rank can be calculated directly based on the ordinal rank values individually solved by the two models, while the aggregated absolute distance is a similarity measure in the arithmetic sense. Even though an ordinal rank is not ratio-scaled, the different rank values preserve a meaningful order. Suppose that X is the MADM model to be benchmarked and Y is the MADM model used to benchmark X; these processes are expressed as:
\[
TV_{m \times 1} =
\begin{bmatrix} tv_1 \\ tv_2 \\ \vdots \\ tv_m \end{bmatrix}
= \left|\, ROV_X - ROV_Y \,\right|
= \left|\,
\begin{bmatrix} rov_{x1} \\ rov_{x2} \\ \vdots \\ rov_{xm} \end{bmatrix}_{m \times 1}
-
\begin{bmatrix} rov_{y1} \\ rov_{y2} \\ \vdots \\ rov_{ym} \end{bmatrix}_{m \times 1}
\right|
\tag{1}
\]
\[
\forall i = 1, 2, \ldots, m:\; rov_{zi} \in \{1, 2, \ldots, m\}
\quad \text{and} \quad
\forall i, j \in \{1, 2, \ldots, m\},\, i \neq j:\; rov_{zi} \neq rov_{zj},
\qquad z \in \{X, Y\}
\]
\[
AD_{ROVs} = \sum_{i=1}^{m} tv_i
\tag{2}
\]
where i is the index for the alternatives; z is the index for the methods; TV is a temporary vector which contains the element-wise absolute distances between the rank order values assessed by the two methods; and AD is the aggregated absolute distance, which is the scalar column sum of TV.
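Equations (1) and (2) amount to an element-wise absolute difference and its sum. A minimal sketch in Python (the study's tooling is R; the two ROVs in the test are hypothetical):

```python
def aggregated_absolute_distance(rov_x, rov_y):
    """Return (TV, AD): the element-wise absolute rank differences
    and their scalar sum, per Equations (1) and (2)."""
    assert len(rov_x) == len(rov_y), "ROVs must rank the same alternatives"
    tv = [abs(x - y) for x, y in zip(rov_x, rov_y)]
    return tv, sum(tv)
```

Two identical ROVs give AD = 0; the larger AD is, the more the two methods disagree on the preferential order.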
(2)
The geometric distance can be computed by treating the two valued ROVs as vectors in space and using the Euclidean method. Thus, the vector TV in Equation (1) is treated as a Euclidean vector, based on which the Euclidean distance is computed according to the following formula, giving another similarity measure in the geometric sense:
\[
GD = \left( \sum_{i=1}^{m} tv_i^{2} \right)^{1/2}
\tag{3}
\]
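The geometric measure is simply the Euclidean norm of the rank difference vector; a one-function sketch (Python, hypothetical ROVs):

```python
import math

def euclidean_rank_distance(rov_x, rov_y):
    """Euclidean (geometric) distance GD between two rank order
    vectors, treating them as points in m-dimensional space."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(rov_x, rov_y)))
```

Unlike AD, GD penalizes one large rank disagreement more heavily than several small ones, which is why the two measures complement each other.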
(3)
Another, statistical, similarity measure can be established to confirm and cross-validate the results. This is especially useful when the two ROVs are found to be very similar, or at least similar to some extent, in terms of the two previous arithmetic and geometric measures. With this purpose in mind, we propose to treat the two ROVs as paired samples and apply the non-parametric sign test from the statistics field to establish this similarity measure. The sign test determines whether the two ROVs (not as vectors but as statistical variables) are from non-identical populations. Theoretically, using this test here is suitable because the input to such a non-parametric test, i.e., the ROVs, is not ratio-scaled, while the ordinal relationship is kept. Thus, if the test shows that these two ROVs are not from non-identical populations, and the evidence is far from supporting that they are (from non-identical populations), this statistical observation can be used to confirm the other two similarities observed arithmetically or geometrically. In any case, setting up such an additional intangible similarity measure is a novelty of this study.
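The sign test needs only the signs of the paired differences: under the null hypothesis of identical populations, the number of positive signs among the k non-zero differences follows a Binomial(k, 0.5) distribution. A self-contained two-sided sketch in Python (the study runs its statistics in R; ties are dropped per the usual convention, and the example ROVs are hypothetical):

```python
from math import comb

def sign_test_p_value(rov_x, rov_y):
    """Exact two-sided sign test for paired samples.
    Zero differences (ties) are discarded before counting signs."""
    diffs = [x - y for x, y in zip(rov_x, rov_y) if x != y]
    k = len(diffs)  # number of non-zero paired differences
    if k == 0:
        return 1.0  # no evidence at all against identical populations
    s = sum(1 for d in diffs if d > 0)  # count of positive signs
    tail = min(s, k - s)
    # two-sided p-value: double the smaller tail of Binomial(k, 0.5)
    p = 2 * sum(comb(k, j) for j in range(tail + 1)) / 2 ** k
    return min(p, 1.0)
```

A large p-value means the data are far from supporting non-identical populations, which is exactly the direction of evidence the flow uses to confirm the arithmetic and geometric similarity observations.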
To utilize the existing real case of the paper shredder selection decision in [4] as the exemplar problem of this study (i.e., P), the following subsections begin with the collection of the required datasets (i.e., the union set E ∪ F = {D’, B, CWV}). Problem P is then solved using AHP–GTMA and AHP–TOPSIS, respectively, and two ROVs are obtained. This is followed by using the above systematic comparative research flow to determine the similarity between them in terms of the different measures. As a result, ROVAHP–GTMA is shown to be very similar to ROVAHP–TOPSIS. Using these processes for illustration, the involved methods for modeling and comparison are self-explanatory.

3.2. Description of the Decision Problem and Data Sourcing

In the case used in [4], the size of problem P is defined by (n, m) = (7, 8): there are 8 shredder alternatives and 7 relevant criteria (attributes) in the criteria set, which is: {c1, c2, c3, c4, c5, c6, c7} = {Security level, Width of output strip or segment, Paper can size, Neatness of the shredder (in terms of the total volume of the shredder set), Noise level, Number of supported material types and Price}. Of the seven relevant criteria, c1, c3 and c6 are the-more-the-better (TMTB) criteria, and the other four are the-less-the-better (TLTB). This briefly describes the decision problem, but the required datasets, D’, B and CWV, must be collected before going on.
Based on the source decision matrix (D) in [4], a normalized decision matrix (i.e., D’) is recalculated. It contains the normalized performance score (NPS) vector for each alternative as a row, as shown in Table 1. Compared with Table 1, the results calculated in [4] had some rounding inconsistencies, i.e., values were sometimes rounded to the nearest whole number and sometimes not (e.g., rounded down unconditionally). That is, the version of D’ in [4] did not provide the precision required by the subsequent permanent value calculations for the alternative selection attribute matrices (ASAMs) (because, to compose the ASAM for an alternative, the diagonal of the AHP pairwise comparison matrix is substituted by the NPS vector of that alternative, which affects the permanent function values of the ASAMs). Table 1 shows the correction.
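The diagonal substitution just described is mechanically simple; a sketch in Python (the study's implementation is in R, and the matrices in the test are hypothetical, not the article's Table 1/Table 2 data):

```python
import copy

def compose_asam(B, nps_row):
    """Compose an alternative's ASAM: copy the pairwise comparison
    matrix B and overwrite its diagonal with the alternative's
    normalized performance scores (one row of D')."""
    assert len(B) == len(nps_row), "NPS length must match B's dimension"
    C = copy.deepcopy(B)  # leave the shared B untouched for other alternatives
    for j, score in enumerate(nps_row):
        C[j][j] = score
    return C
```

Each of the m alternatives gets its own ASAM this way, and the permanent of each ASAM then serves as that alternative's GTMA score.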
For source dataset B, which is the pairwise comparison matrix used by AHP–GTMA as the basis to compose the ASAM for each alternative before permanent value determination, and used by AHP–TOPSIS as the attribute weights (i.e., the CWV) for the latter stage of TOPSIS, the previous dataset is followed directly. Although B might fail the consistency check if the threshold for the CR (consistency ratio) is set to 0.1, it passes if the threshold is 0.2, a standard acceptable in many industrial studies. Besides, the CR of this matrix is only slightly greater than 0.1 (CR = 0.1117) and, most importantly, it is a real dataset that was investigated from a DM using the formal yet rigorous AHP expert questionnaire survey [4]. Given such practical value, this dataset is sourced and used in this study, as shown in Table 2.
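The CR mentioned above can be checked as follows. This sketch (Python; the study works in R) estimates λmax by averaging (Bw)i / wi over the rows and uses Saaty's standard random index (RI) table; the priority vector w is assumed to have been computed already, e.g., by the geometric-mean method:

```python
def consistency_ratio(B, w):
    """CR = CI / RI, with CI = (lambda_max - n) / (n - 1) and
    lambda_max estimated as the mean of (B w)_i / w_i over rows."""
    n = len(B)
    # Saaty's random index values for matrix orders 1..10
    RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12,
          6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45, 10: 1.49}
    Bw = [sum(B[i][j] * w[j] for j in range(n)) for i in range(n)]
    lam = sum(Bw[i] / w[i] for i in range(n)) / n  # lambda_max estimate
    ci = (lam - n) / (n - 1)
    return ci / RI[n]
```

A perfectly consistent matrix yields CR = 0; the 0.1 threshold (relaxed to 0.2 in many industrial studies, as noted above) is the conventional acceptance bound.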
The criteria weight vector, CWV, is also re-assessed. Again, this is to mitigate the risk of precision problems during modeling. In the previous study, the imprecise version of this vector was only used to gain rough knowledge about the priority over all considered criteria and to draw practical implications from it (e.g., the bulk purchase concerns for, or design issues of, paper shredders). It was not used at all in the later calculations of [4]. In this study, however, since the elements of this vector are used as the basis for the latter stage of AHP–TOPSIS, their precision matters. Thus, this vector is re-assessed based on B, as:
CWV* = [0.15568452, 0.02337814, 0.11160558, 0.05413003, 0.42072456, 0.06787833, 0.16659885]ᵀ.

3.3. Alternative Ranking/Selection Using AHP–GTMA

Although the decision case was solved in [4] using AHP–GTMA, this subsection re-establishes the model based on the precise datasets presented in Section 3.2, and the model is implemented as an integrated, one-stop program in R to maintain precision consistency during calculation (including the prior steps to obtain Dataset 1 and the precise CWV in Equation (4)). In the previous Excel–R solution of [4], the former stages of AHP–GTMA (i.e., determining the CWV using AHP and composing the ASAMs) were performed in spreadsheets and the latter stages (i.e., computing the permanent values of the alternatives using the ASAMs and ranking them) were done in R. In addition, unlike [4], which implemented GTMA based on dynamic programming (DP), this study proposes a recursive algorithm. The details of this algorithm are discussed in the next section, but its overall flow is presented here along with the solution process.
Following the symbolic conventions and using the datasets sourced in Section 3.2, the algorithm involves the following phases:
(1)
Compose an ASAM matrix for each alternative and archive it.
(2)
Load and compute the permanent value for each alternative.
(3)
Rank the alternatives according to the permanent values obtained in Phase (2).
According to Singh and Rao [52], in Phase (1), for each alternative (i.e., Ai, i = 1…8), an ASAM, Ci, is obtained by replacing the diagonal of B with the NPS vector of that alternative in D’ (i.e., the i-th row of D’). Algorithm 1 details this phase.
Algorithm 1 ComposeASAMsForAlternatives
Input: D’ (Normalized Decision Matrix), B (Pairwise Comparison Matrix)
Output: The ASAM Matrix for Every Alternative in the List
* Usually, the inputs and outputs can be .csv (comma-separated value) files
1: Load the normalized decision matrix (NPS vectors of all of the alternatives as rows) (e.g., 8 × 7):
   D’ = read data file for Dataset 1;
2: Load the source AHP pairwise comparison matrix (e.g., 7 × 7):
   B = read data file for Dataset 2;
3: Initialize the dimension that defines the problem size (e.g., #Criteria = 7 and #Alternatives = 8):
3-1: D’_Mat = as numerical matrix(D’);
3-2: B_Mat = as numerical matrix(B);
3-3: Nalts = count # rows for D’_Mat;
3-4: Ncrit = count # columns for D’_Mat;
4: For every alternative, replace the diagonal axis of B with the associated row in D’ (e.g., completing
 this obtains totally eight 7 × 7 ASAM matrices) and then write all the result ASAM matrices Ci
 (Revised_B_Mat) to output files
4-1: for each i in {1…Nalts} do
4-2:  Revised_B_Mat = B_Mat;
4-3:  for each j in {1…Ncrit} do
4-4:   Revised_B_Mat [j,j] ← D’_Mat [i,j];
4-5:  end for
4-6:  filename = “GTMA_C” + i + “.csv”; //ASAM data file name
4-7:  C_Mat = Revised_B_Mat;
4-8:  write data file for C_Mat to filename;
4-9: end for
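The diagonal substitution in Step 4 can be sketched in a few lines. The following is a minimal Python analogue of this phase (the paper's implementation is in R, and the 3 × 3 comparison matrix and NPS values here are purely hypothetical):

```python
def compose_asam(B, nps_row):
    # Copy the pairwise comparison matrix B and overwrite its
    # diagonal with the alternative's normalized performance
    # scores, as in Step 4 of Algorithm 1.
    C = [row[:] for row in B]
    for j, score in enumerate(nps_row):
        C[j][j] = score
    return C

# Hypothetical 3 x 3 comparison matrix and NPS vector:
B = [[1.0, 2.0, 3.0],
     [0.5, 1.0, 4.0],
     [1.0 / 3.0, 0.25, 1.0]]
nps = [0.9, 0.7, 1.0]
C = compose_asam(B, nps)
print([C[j][j] for j in range(3)])  # diagonal now holds the NPS vector
```

Note that the off-diagonal entries of B remain untouched; only the diagonal is replaced per alternative, which is why one ASAM is produced for each row of D’.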
Phase (2) is to obtain the “index score” for each alternative. The ASAM square matrix for every alternative i (Ci, as obtained from Phase (1)) is loaded, and its permanent value, PVi, is determined. These are exactly the index scores that compose the “score vector”, the basis for ranking. Algorithm 2 shows the algorithm for this phase.
Algorithm 2 ComputePermanentValuesForASAMs
Input: Ci, for every i (The ASAM Matrices for All Alternatives)
Output: P (A Vector of the Permanent Values Computed)
* The function PermRecur() is declared and will be defined later in Algorithm 4
1: Initialize the PermValsVector (e.g., an 8 × 1 vector in this case):
  PermValsVector = as vector (initialize with dimension Nalts×1);
2: For each alternative i, load the square Ci matrix and compute its permanent value; each value should be properly placed in the PermValsVector vector:
2-1: for each i in {1:Nalts} do
2-2:  filename = “GTMA_C” + i + “.csv”;
2-3:  TmpMat = read data file with name filename;
2-4:  PermValsVector[i] = PermRecur(TmpMat);
2-5: end for
3: Retrieve the Index Scores as a Vector, PV:
  PVPermValsVector;
Phase (3) ranks the alternatives according to the index scores in the score vector. For the implementation in R, this can be performed by a single command, “rbind(PV, ROVAHP–GTMA = rank(−PV))”, because the values in PV are TMTB in terms of GTMA. This single-line command is too simple to warrant presentation as a separate algorithm.
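Outside R, the same descending ranking (the largest index score receives rank 1) is easy to reproduce. The following Python sketch uses hypothetical scores; note that it breaks ties by position, whereas R's rank() would assign average ranks:

```python
def rank_desc(scores):
    # Rank so that the largest score receives rank 1, mirroring
    # rank(-PV) in R; ties are broken by position here.
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    ranks = [0] * len(scores)
    for r, i in enumerate(order, start=1):
        ranks[i] = r
    return ranks

pv = [0.12, 0.45, 0.33, 0.27]  # hypothetical index scores
print(rank_desc(pv))  # [4, 1, 2, 3]
```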
Running these phases for the exemplar problem using the datasets established in Section 3.2 gives a solution to PV that contains index scores for the decision alternatives and a rank order vector for them (i.e., ROVAHP–GTMA). In Table 3, the numerical part of the second column shows the value of PV, while the numerical part of the third column shows the value of ROVAHP–GTMA.
This result gives the following preferential order over the alternatives:
A7 ≻ A8 ≈ A6 ≻ A5 ≻ A3 ≈ A2 ≻ A4 ≻ A1.
If they are strictly ordered without any indifference interval, it is:
A7 ≻ A8 ≻ A6 ≻ A5 ≻ A3 ≻ A2 ≻ A4 ≻ A1.
This preferential order concurs with the order obtained by Zhuang et al. [4]. The fact that the two index score vectors assessed using the previous DP-based and the current recurrence-based algorithms are quite close to each other verifies the adequacy and functional correctness of the new implementation. The permanent values obtained previously, shown in the final “Reference” column, differ slightly only because of the imprecise calculation basis arising from the imprecise data sources (see Section 3.2), not because of different implementation logic.

3.4. Alternative Ranking/Selection Using AHP–TOPSIS

The decision problem P is solved using AHP–TOPSIS. Usually, the first steps in applying AHP–TOPSIS are loading the source decision matrix (i.e., the non-normalized D), identifying a suitable normalization method according to the problem context and then obtaining the normalized decision matrix, D’. However, the dataset D’ has already been recomputed in Section 3.2 and is thus available.
According to the data specification defined in Section 3.1 (i.e., dataset F), another solution algorithm for AHP–TOPSIS (also designed and implemented in R) initially loads the D’ matrix and the CWV* vector as the data input and uses them to calculate the “weighted normalized decision matrix” (named the WiNDMa). Semantically, this scales each column of D’ by the corresponding criteria weight:
CWV_M ← diagonalize(CWV*);
WiNDMa ← D’ · CWV_M;
where diagonalize(m) is a function that expands vector m into a square matrix carrying m on the diagonal and zeros elsewhere (e.g., diag() in R) and “·” is the standard matrix multiplication operator of linear algebra.
For the datasets in Section 3.2, CWV* is a 7 × 1 vector, so diagonalize(CWV*) gives a 7 × 7 diagonal matrix. Multiplying the 8 × 7 D’ by this 7 × 7 matrix gives the 8 × 7 WiNDMa matrix, as shown in Table 4.
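The weighting step reduces to scaling each column of D’ by its criteria weight. A small Python sketch follows (the 2 × 3 data are hypothetical; in R this is equivalent to D’ %*% diag(CWV*)):

```python
def weighted_normalized(Dp, cwv):
    # Multiply column j of the normalized decision matrix by the
    # j-th criteria weight to obtain the weighted normalized matrix.
    return [[Dp[i][j] * cwv[j] for j in range(len(cwv))]
            for i in range(len(Dp))]

Dp = [[1.0, 0.5, 0.8],   # hypothetical normalized scores
      [0.6, 1.0, 1.0]]
w = [0.5, 0.3, 0.2]      # hypothetical criteria weights
W = weighted_normalized(Dp, w)
print(W[1])  # the second alternative's weighted row
```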
The most critical element of TOPSIS is to identify the ideal and anti-(negative-)ideal solutions from WiNDMa, which are denoted by A+ and A−. This involves the following algorithmic logic:
A+ ← as vector(initialize with dimension Ncrit×1);
A− ← as vector(initialize with dimension Ncrit×1);
A+[j,1] ← max(WiNDMa[1…Nalts, j]), ∀ j ∈ {1…Ncrit};
A−[j,1] ← min(WiNDMa[1…Nalts, j]), ∀ j ∈ {1…Ncrit};
Applying these equations to WiNDMa, the ideal and anti-ideal alternatives for the decision problem, both being virtual, are identified as:
A+ = [0.15568452, 0.02337814, 0.11160558, 0.05413003, 0.42072456, 0.06787833, 0.16659885]ᵀ and A− = [0.05189484, 0.01753360, 0.04783096, 0.02397804, 0.24748504, 0.04525222, 0.07080747]ᵀ.
It is seen that A+ is identical to CWV* (while A− is not). This finding is discussed extensively on a theoretical basis in Section 4.1.
The third phase of TOPSIS measures the distance from each alternative to the ideal solution (i.e., the “separation measures”) and its distance to the anti-ideal solution. The process uses the following logic:
Sto_A+ ← as vector(initialize with dimension Nalts×1);
Sto_A− ← as vector(initialize with dimension Nalts×1);
Sto_A+[i,1] ← (Σj=1…Ncrit (WiNDMa[i,j] − A+[j,1])²)^(1/2), ∀ integer i ∈ {1…Nalts};
Sto_A−[i,1] ← (Σj=1…Ncrit (WiNDMa[i,j] − A−[j,1])²)^(1/2), ∀ integer i ∈ {1…Nalts};
Applying this logic to the dataset, two separation measure vectors are obtained as follows:
Sto_A+ = [0.18931705, 0.14815937, 0.14437979, 0.20072201, 0.13475571, 0.08288226, 0.12485745, 0.15618098]ᵀ and Sto_A− = [0.06669622, 0.11433725, 0.11295291, 0.06602163, 0.11291660, 0.19251697, 0.15902443, 0.12107992]ᵀ.
The relative closeness index (RCI) for each alternative is then calculated to allow the final ranking to proceed; this is the fourth phase of TOPSIS. An alternative has a higher index score if it is relatively close to the ideal solution and far from the anti-ideal one. This is equivalent to taking the separation from the anti-ideal solution over the total distance by which the alternative is separated from both the ideal and the anti-ideal solutions:
RCI ← as vector(initialize with dimension Nalts×1);
RCI[i,1] ← Sto_A−[i,1]/(Sto_A+[i,1] + Sto_A−[i,1]), ∀ i ∈ {1…Nalts};
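The latter phases above (ideal/anti-ideal identification, separation measures and RCI) can be condensed into one routine. The following Python sketch assumes, as in this study, that all criteria in the weighted normalized matrix are already aligned to TMTB (the toy 2 × 2 matrix is hypothetical):

```python
import math

def topsis_rci(W):
    # Identify the ideal (A+) and anti-ideal (A-) solutions as the
    # columnwise max/min of the weighted normalized matrix, compute
    # the Euclidean separation of each row from both, and return
    # the relative closeness index vector.
    ncrit = len(W[0])
    a_pos = [max(row[j] for row in W) for j in range(ncrit)]
    a_neg = [min(row[j] for row in W) for j in range(ncrit)]
    rci = []
    for row in W:
        s_pos = math.sqrt(sum((row[j] - a_pos[j]) ** 2 for j in range(ncrit)))
        s_neg = math.sqrt(sum((row[j] - a_neg[j]) ** 2 for j in range(ncrit)))
        rci.append(s_neg / (s_pos + s_neg))
    return rci

W = [[0.2, 0.1],   # hypothetical weighted normalized matrix
     [0.4, 0.3]]
print(topsis_rci(W))  # the dominated row scores 0, the dominating row 1
```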
Applying this calculation to the two separation measure vectors, the 8 × 1 RCI vector (RCIV), which carries information about each alternative’s relative proximity to the ideal, is obtained. This is the index score vector obtained by AHP–TOPSIS, as shown in the second column of Table 5.
This RCIV is further used to determine the ranks of the alternatives as a vector, ROVAHP–TOPSIS. With R, a command similar to that in Section 3.3 (i.e., “rbind(RCIV, ROVAHP–TOPSIS = rank(−RCIV))”) is sufficient to obtain this ROV for subsequent comparisons; the resulting values are shown in the last column of Table 5.
This ROV also gives another preferential order over the alternatives as suggested by AHP–TOPSIS:
A6 ≻ A7 ≻ A5 ≻ A3 ≈ A8 ≈ A2 ≻ A1 ≻ A4.
If these alternatives are strictly ordered without any indifference, the preferential order is:
A6 ≻ A7 ≻ A5 ≻ A3 ≻ A8 ≻ A2 ≻ A1 ≻ A4.

3.5. Verifying the Similarity between the Two ROVs Using the Proposed Comparative Research Flow

From Equations (6) and (22), the preferential orders obtained individually using AHP–GTMA and AHP–TOPSIS are, respectively:
Using AHP–GTMA: A7 ≻ A8 ≻ A6 ≻ A5 ≻ A3 ≻ A2 ≻ A4 ≻ A1.
Using AHP–TOPSIS: A6 ≻ A7 ≻ A5 ≻ A3 ≻ A8 ≻ A2 ≻ A1 ≻ A4.
At a glance, these two preferential orders do not seem fully consistent, but further scrutiny gives more insight into this observation. Therefore, the systematic and scientific comparative research flow proposed in Section 3.1 is applied.
Phase (1) of the flow calculates the absolute distance in rank for each alternative directly from the ordinal rank values, and the aggregated absolute distance serves as a similarity measure.
In Table 3 and Table 5, the two ROVs that are individually solved by the two models are:
ROVAHP–GTMA [A1 A2 A3 A4 A5 A6 A7 A8] = [8 6 5 7 4 3 1 2];
ROVAHP–TOPSIS [A1 A2 A3 A4 A5 A6 A7 A8] = [7 6 4 8 3 1 2 5].
According to Equation (1), the difference between these two ROVs is calculated as:
Diff = [A1 A2 A3 A4 A5 A6 A7 A8] = [1 0 1 −1 1 2 −1 −3].
Furthermore, the absolute distance in rank for each alternative is assessed in terms of the TV vector as:
TV = [A1 A2 A3 A4 A5 A6 A7 A8] = [1 0 1 1 1 2 1 3].
According to Equation (2), the aggregate absolute distance for these two ROVs, which is the result of applying the pure arithmetic similarity measure, is the summation of the elements in TV:
AD = 1 + 0 + 1 + 1 + 1 + 2 + 1 + 3 = 10.
Phase (2) is to measure the geometrical distance between the two ROVs in the vector space. As in Euclidean methods, the absolute distances in rank, TV, can be treated directly as a Euclidean vector, so the Euclidean distance (GD) value is determined as:
GD = (1² + 0² + 1² + 1² + 1² + 2² + 1² + 3²)^(1/2) = (18)^(1/2).
This is the result of applying the geometry-based similarity measure. These observations are briefly discussed before going on.
From Equations (23) and (24), one alternative (A2) has the same rank using both methods, and from Equation (26), five alternatives (i.e., A1, A3, A4, A5 and A7) differ in rank by only one place. Only two alternatives differ in rank by more than one place: the rankings for A6 differ by two places and those for A8 by three.
It is seen that the rank orders from both methods are quite similar. Six of the eight alternatives (i.e., A1, A2, A3, A4, A5 and A7) differ in rank by no more than one place, so 75% of the alternatives obtained very similar ranks by both methods. For the two outliers (A6 and A8), the “maximum deviation” in ranking is three places (for A8), which is still not large. There is also supporting evidence in that the Euclidean distance between these two eight-dimensional ROVs is only (18)^(1/2) ≈ 4.24 from Phase (2). These are the results of using tangible measures in the first two phases of the proposed flow.
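The two tangible measures can be reproduced directly from the ROVs. A short Python sketch using the rank order vectors of Equations (23) and (24):

```python
import math

def rank_distances(rov_a, rov_b):
    # AD: aggregated absolute distance in rank (Phase (1));
    # GD: Euclidean distance between the two ROVs (Phase (2)).
    diff = [a - b for a, b in zip(rov_a, rov_b)]
    ad = sum(abs(d) for d in diff)
    gd = math.sqrt(sum(d * d for d in diff))
    return ad, gd

rov_gtma = [8, 6, 5, 7, 4, 3, 1, 2]
rov_topsis = [7, 6, 4, 8, 3, 1, 2, 5]
ad, gd = rank_distances(rov_gtma, rov_topsis)
print(ad, gd)  # 10 and sqrt(18), about 4.2426
```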
Although these tangible measures already suggest similarity, the intangible statistical measure of Phase (3) is used to confirm the similarity found between these two ROVs. This study views the two ROVs as statistical samples and uses the non-parametric Wilcoxon signed-rank test (as offered by R) to test whether they are drawn from non-identical populations, since they are in fact “ordinal variables” from the perspective of data analytics.
Using the ROVAHP–GTMA and ROVAHP–TOPSIS vectors as the parameters for the non-parametric test, the result provides no evidence that they are drawn from non-identical populations: the p-value is 0.9301, far greater than 0.05, so the hypothesis that they come from an identical population cannot be rejected. This completes Phase (3) and finalizes the proposed comparative research flow using an intangible measure for the studied case. For more details, please see the code and experimental results in R in Appendix B.
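The p-value above comes from R's wilcox.test. For illustration, the underlying test statistic (the sum of positive ranks, with zero differences dropped and tied absolute differences given average ranks) can be computed with a short Python sketch:

```python
def signed_rank_statistic(x, y):
    # Wilcoxon signed-rank statistic W+ for paired samples: drop
    # zero differences, rank the absolute differences (average
    # ranks for ties) and sum the ranks of positive differences.
    d = [a - b for a, b in zip(x, y) if a != b]
    abs_sorted = sorted(abs(v) for v in d)

    def avg_rank(v):
        lo = abs_sorted.index(v) + 1        # first 1-based position
        hi = lo + abs_sorted.count(v) - 1   # last 1-based position
        return (lo + hi) / 2

    return sum(avg_rank(abs(v)) for v in d if v > 0)

rov_gtma = [8, 6, 5, 7, 4, 3, 1, 2]
rov_topsis = [7, 6, 4, 8, 3, 1, 2, 5]
print(signed_rank_statistic(rov_gtma, rov_topsis))  # 15.0
```

Converting this statistic into a p-value (handling ties and the normal approximation) is best left to a statistics package, as done with R in this study.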

4. Further Analysis and Discussion

This section focuses on two aspects: the implications of applying the rank-based, multi-measure comparative research flow and the recurrence-based implementation of GTMA, both of which are closely related to the contributions of this study. Section 4.1 discusses the main findings and positive implications based on the modeling results and the comparisons using the proposed method. Section 4.2 and Section 4.3, respectively, detail and benchmark the new implementation form for GTMA, demonstrating its advantages.

4.1. Results/Implications from Modeling/Comparisons

The results in Section 3.5 represent the most important outcome of this study: the similarity shown and validated through the application of the proposed comparative research flow. AHP–GTMA gives a ROV (ROVAHP–GTMA) that is very similar to that obtained using AHP–TOPSIS (ROVAHP–TOPSIS).
In Phase (1), the similarity is shown when the elements in each ROV are viewed as a list. In these lists, the aggregated absolute distance in rank for the eight alternatives is only 10, while 75% of the alternatives have very similar ranks for both methods (only two have a difference in rank strictly greater than one place, and the “maximum deviation” is only three places). In Phase (2), the small Euclidean distance (i.e., (18)^(1/2) ≈ 4.24) measured between the two ROVs (as two vectors in the space) also shows the similarity. Phase (3) then finds further supporting evidence, using the signed-rank test, that the two ROVs, when viewed as statistical variables, are likely drawn from an identical population.
With the progress of the three phases, in terms of methodology, it is shown or implied that:
(1)
The proposed comparative research flow is applicable. This can be justified from its successful application.
(2)
The rank-based, multi-measure flow is suitable for the given comparison problem. As input, the flow requires only the two ROVs, which are available from almost all MADM models. This rank-based property is valuable because the flow can be easily implemented in decision practice. The multiple measures included can cross-validate each other's results. Because these measures are designed from intrinsically heterogeneous theoretical aspects (i.e., arithmetic, geometrical and statistical), they are particularly useful when similarity is observed using one measure but further confidence is sought from the others. Specifically, in this study, the result from Phase (2)'s geometrical measure provides evidence for the similarity observed with Phase (1)'s arithmetic measure, and the result from Phase (3)'s statistical measure further confirms the similarity justified in the two previous phases. In other words, the final outcome infers a “total similarity” between the two ROVs, which further supports the following Point (3). This multi-measure, cross-validation style is another valuable methodological property of the proposed flow.
(3)
The results support the goal of this study: establishing confidence in applying AHP–GTMA in practice. As the major experimental result, AHP–GTMA gives a ROV that is quite similar to that obtained using AHP–TOPSIS, and this has been cross-validated using multiple tangible and intangible measures. The similarity between the ROVs supports the effectiveness of the AHP–GTMA approach. As discussed previously, since AHP–TOPSIS is a mature and widely used MADM method, the fact that the two ROVs are quite similar supports the claim that AHP–GTMA is (and should also be) another effective and suitable method for solving a selection decision problem. In other words, the result gives confidence that using AHP–GTMA to solve other types of operational selection decision problems is feasible.
(4)
The proposed flow can be generalized. In this study, the methodological flow proposed in Section 3.1 is shown to support the required comparative research process between the ROVs individually obtained using AHP–GTMA and AHP–TOPSIS. Since most MADM methods produce a ROV behind the final preferential order (even simple additive weighting (SAW) or the weighted sum model (WSM)), the flow can be generalized to identify the similarity between any other pair of ROVs produced by any other two MADM approaches, where appropriate. This should be an important contribution of this study to the methodological field of MADM.
In addition to these implications drawn from the proposed flow and its application, there is another interesting observation: Section 3.4 shows that the ideal solution vector using TOPSIS is identical to the criteria weight vector obtained using AHP in the initial stage of AHP–TOPSIS, i.e., A+ = CWV*. This deserves further discussion.
After scrutiny, we found that this is a consequence of the normalization method used. In this study, the method used to normalize D into D’ is as follows:
D’[i,j] ← D[i,j]/max(D[1…Nalts, j]), for any TMTB criterion j, j ∈ {1…Ncrit};
D’[i,j] ← min(D[1…Nalts, j])/D[i,j], for any TLTB criterion j, j ∈ {1…Ncrit};
As can be seen, this process also aligns the direction of optimization to TMTB. Therefore, in Table 1, each column of D’ contains at least one “1”, which tags the alternative(s) that perform the best in terms of that column's criterion. When TOPSIS is used to calculate the WiNDMa, each of these 1s is multiplied by the counterpart element in CWV*, depending on the criterion being considered. For example, in Table 1, following the normalization, only A7 performs the best for criterion c1 and is assigned a value of 1. This value, multiplied by the first element in CWV* (=0.15568452), becomes exactly the assessed weight for c1, as shown in Table 4. Criterion by criterion, the ideal-identification process selects these elements from WiNDMa to produce the ideal solution vector, A+. In this example, the [A7, c1] cell in Table 4, whose value is 0.15568452, is regarded as the ideal value for the c1 criterion, so it is assigned as the first element of A+.
Therefore, the normalization method used to normalize the source decision matrix at the beginning is responsible for this phenomenon. This finding, although minor, is interesting because it raises the question for future research of whether this affects any part of the solution process for the hybrid AHP–TOPSIS approach.
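The normalization rules above can be sketched in Python. The 2 × 2 decision matrix below is purely hypothetical, with the first criterion TMTB and the second TLTB:

```python
def normalize(D, tmtb):
    # Column-by-column normalization: TMTB columns are divided by
    # their column maximum, TLTB columns map x to (column min)/x,
    # so each column's best-performing alternative becomes exactly 1.
    nalts, ncrit = len(D), len(D[0])
    Dp = [[0.0] * ncrit for _ in range(nalts)]
    for j in range(ncrit):
        col = [D[i][j] for i in range(nalts)]
        for i in range(nalts):
            Dp[i][j] = D[i][j] / max(col) if tmtb[j] else min(col) / D[i][j]
    return Dp

D = [[10.0, 4.0],    # hypothetical raw scores
     [20.0, 2.0]]
Dp = normalize(D, [True, False])
print(Dp)  # each column holds a 1 for its best alternative
```

Because every column of the result contains a 1 at its best alternative, the columnwise maxima of the weighted matrix reproduce the weights themselves, which is exactly why A+ coincides with CWV*.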

4.2. Code Optimization: The Recursive Implementation of the Permanent Value Determination Algorithm

This subsection presents the recurrence-based implementation of permanent value determination in more detail. Theoretically, it can handle a square matrix of any size for GTMA, thus removing the limit of the previous implementation based on dynamic programming (DP), wherein the dimension was fixed by the problem. It should therefore be an additional contribution of this study, because the permanent value determination of an ASAM matrix is key to the computations of GTMA and because the DP-based algorithm in [4] requires further code optimization (due to the aforementioned fixed problem size and because it requires separate functions to complete the computational task).
Using the concept of DP, Zhuang et al. [4] wrote a series of functions to compute the permanent value of each ASAM matrix. In contrast to calculating the determinant of a numeric square matrix, permanent value calculation is not supported by R's default packages. Thus, an R function named Per2×2() was written to obtain the permanent value of a 2 × 2 matrix. This function is called by another function, Per3×3(), which calculates the permanent value of a 3 × 3 matrix. Per3×3() is in turn called inside Per4×4(), and so on, up to Per7×7(), where 7 was defined by the problem size. This DP-based implementation style follows the following logic for any problem case where #Criteria = Ncrit:
Per_n×n(M) = Σj=1…n M[j,1] × Per_(n−1)×(n−1)(M − [j,1]), for all 2 < n ≤ Ncrit
where the symbol “−” is the row/column deduction operator (i.e., it specifies which row and/or column of matrix M should be removed to obtain a new matrix of decreased dimension); Per2×2() is the fundamental building block function written according to the mathematical definition; and Ncrit is the number of criteria defined by the actual problem size.
As can be seen from Equation (31), with the DP-based implementation style, a series of (Ncrit − 1) permanent functions must be written to complete the final calculation of PerNcrit×Ncrit(). For example, for the studied problem case where Ncrit = 7, six functions from Per2×2() to Per7×7() were written. This becomes an inevitable drawback as the problem size grows. If, for another decision problem, the degree is higher (i.e., Ncrit > 7 or ≫ 7), more Pern×n() functions must be written, so debugging and maintenance become more difficult and the process is less resilient to changes in problem size. This study addresses the issue by implementing a recurrence-based permanent function, which also extends the limit on ASAM matrix dimension to any order.
Following the divide-and-conquer concept, a recursive PerNcrit×Ncrit() function takes MNcrit×Ncrit as the parameter but divides the problem into Ncrit sub-problems. These sub-problems are all solved with Per(Ncrit−1)×(Ncrit−1)(M(Ncrit−1)×(Ncrit−1)) in a lower dimension, where the matrix order is decremented by 1. Repeating this process until n = 3, the problem becomes an indivisible unit (i.e., Per3×3(M3×3)), which can easily be hard-coded by reference to the definition of the permanent value. Using the results from the lower levels, the problem is then “conquered back” from n = 3, through n = 4, …, until n = Ncrit, where the entire problem is solved. With this recursive implementation style, only two functions are required to complete the calculation, regardless of the problem size and matrix dimension: a recurrence-based Pern×n(Mn×n) whose boundary condition is n = 3 (“bouncing back” when this condition is met) and the Per3×3(M3×3) basic building block.
The algorithms for these two required functions are presented in Algorithms 3 and 4, respectively. Both have been implemented in R to determine the permanent values in this study. Algorithm 3 shows the Per3×3() building block first, following its mathematical definition, because it is called by the main recurrence-based permanent value determination algorithm in Algorithm 4.
Algorithm 3 Per3×3
Input: Temp_Mat
Output: >0 for the Permanent Value of Temp_Mat; = −1 for Dimensional Inconsistency (Non-square
 Matrix Input) or Dimensional Conflict (the Degree is not 3)
1: Initialize the sum value:
   TheSum = 0;
2:  Accumulate the sum according to mathematical formulae of permanent value:
2-1: #Row = count # rows for Temp_Mat;
2-2: #Col = count # columns for Temp_Mat;
2-3: if #Row equals to #Col do     // Check if Temp_Mat is a square matrix
2-4:  if #Row equals to 3 do    // Check if Temp_Mat has the 3x3 dimension
2-5:   TheSum = TheSum + Temp_Mat [1,1] × Temp_Mat [2,2] × Temp_Mat [3,3];
2-6:   TheSum = TheSum + Temp_Mat [1,2] × Temp_Mat [2,3] × Temp_Mat [3,1];
2-7:   TheSum = TheSum + Temp_Mat [1,3] × Temp_Mat [2,1] × Temp_Mat [3,2];
2-8:   TheSum = TheSum + Temp_Mat [1,3] × Temp_Mat [2,2] × Temp_Mat [3,1];
2-9:   TheSum = TheSum + Temp_Mat [1,2] × Temp_Mat [2,1] × Temp_Mat [3,3];
2-10:   TheSum = TheSum + Temp_Mat [1,1] × Temp_Mat [3,2] × Temp_Mat [2,3];
2-11:  else do
2-12:   TheSum = (−1);
2-13:  end if
2-14: else do
2-15:  TheSum = (−1);
2-16: end if
3:  Return the TheSum value to the calling algorithm:
   return TheSum;
Algorithm 4 PermRecur
Input: A Square Matrix Temp_Mat of Degree ≥ 3
Output: The Permanent Value of Temp_Mat
* The function Per3×3() is defined in Algorithm 3
1: Initialize TheSum and measure the degree of Temp_Mat:
1-1: TheSum = 0;
1-2: #Row = count # rows for Temp_Mat;
1-3: #Col = count # columns for Temp_Mat;
2: Perform suitable operations based on the dimension of Temp_Mat:
2-1: if #Row equals to #Col do      //Check if Temp_Mat is a square matrix
2-2:  if #Row equals to 3 do //Check if the degree of Temp_Mat has reached 3x3
2-3:   TheSum = Per3×3(Temp_Mat);
2-4:  else
2-5:    for each i in {1 … #Row} do
2-6:    Reduced_Mat = remove row i from Temp_Mat;
2-7:    Reduced_Mat = remove column 1 from Reduced_Mat; //Obtain the reduced matrix of
       Temp_Mat by removing the i-th row and the first column
2-8:    TheSum = TheSum + Temp_Mat[i,1] × PermRecur(Reduced_Mat);
                           // Accumulate TheSum
2-9:   end for
2-10:  end if
2-11: else
2-12:  TheSum ← (−1);   //Dimensional inconsistency, as in Algorithm 3
2-13:  end if
3: Return TheSum value to the calling algorithm:
   return TheSum;
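The recursion of Algorithms 3 and 4 can be compacted further in languages with natural recursion. The Python sketch below expands along the first column, as in Equation (31), but recurses down to the 1 × 1 base case instead of hard-coding a 3 × 3 building block:

```python
def permanent(M):
    # Permanent of a square matrix by expansion along the first
    # column; unlike the determinant, every term is added.
    n = len(M)
    if any(len(row) != n for row in M):
        raise ValueError("permanent is defined for square matrices only")
    if n == 1:
        return M[0][0]
    total = 0
    for i in range(n):
        # Reduced matrix: drop row i and the first column.
        reduced = [row[1:] for k, row in enumerate(M) if k != i]
        total += M[i][0] * permanent(reduced)
    return total

print(permanent([[1, 2], [3, 4]]))             # 1*4 + 2*3 = 10
print(permanent([[1] * 4 for _ in range(4)]))  # 4! = 24
```

As a sanity check, the permanent of an all-ones n × n matrix equals n!, which is a convenient unit test independent of any decision dataset. Like the R implementation, this naive expansion has factorial cost, which is acceptable for the 7 × 7 ASAMs used here.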

4.3. The Performance of the Recursive Implementation

The performance of the new GTMA implementation is an interesting issue. Owing to space limitations, only the performance of the improved recursive algorithm, rather than that of the entire R program, is measured, because the computations that determine the permanent values are the bottleneck of the GTMA computations.
To benchmark this, the program is revised and a timing mechanism is inserted at the outer calling layer, i.e., at the ComputePermanentValuesForASAMs() function that calls PermRecur() (Algorithm 2 calling Algorithm 4).
For the problem size (Nalts = 8 and Ncrit = 7) and the data of the studied decision case, the resulting outputs, which are timestamps in units of seconds.milliseconds, are listed in Table 6.
The start time and stop time for each call to the PermRecur() function in R are shown in the second and third columns. The last column and the last row provide auxiliary information for the benchmarking process.
The results show that, on the testing platform (a portable PC with an Intel i7 2.6 GHz CPU, 16 GB RAM, an HDD and two SSDs, Windows 10 64-bit as the OS, and R Studio v1.0.143 with R x64 v3.3.1 installed), each call to the PermRecur() function takes about 0.253 s for one 7 × 7 ASAM matrix. For this decision case, only about 2.022 s are required to complete the permanent value calculation step of GTMA, subject to the given problem size.
Finally, in terms of efficiency, the recursive implementation that facilitates the solution process for GTMA in Section 3.3 is another contribution of this study, and the implementations of the algorithms in Section 4.2 can be generalized to compute the permanent value of a square matrix of any dimension. In terms of decision support system (DSS) design (rather than decision problem modeling), this implementation method is shown by benchmarking to be efficient. Note that a similar function (permanent value computation in R, with a lower default number of decimal places) only became available in the BonsonSample package in September 2017, whereas the implementation work for this study had started in May 2017. During the experimental stage of this study, in 2017, neither R nor its packages supported the required permanent value computation, so the relevant functions were implemented on the R platform in parallel and developed for the MADM research purpose.

5. Conclusions

This work started from the doubt placed on the effectiveness of AHP–GTMA in aiding industrial selection decisions. A systematic comparative research flow was proposed. Following this flow, the effectiveness of the relatively new AHP–GTMA approach was examined by reference to the results of another widely used approach, AHP–TOPSIS. Given the encouraging final outcome, the contributions of this study are reviewed and summarized below.
The first contribution relates to the proposed methodological flow, which determines whether there is similarity between the final solutions obtained using two MADM methods. As only the final rank information is used (rather than other information from the solution process) and only the ROVs in these final solutions (available from almost all MADM models) are required, the proposed flow should be a convenient tool that can easily be applied in operational practice. Beyond the ROVs from AHP–GTMA and AHP–TOPSIS (as in this study), it can be applied to determine the similarity between any two ROVs produced by any other two MADM methods, where appropriate. This is the methodological reason for naming the flow a “rank-based” one.
In addition, the flow includes several heterogeneous measures to determine the similarity between the ROVs of two methods. It suggests establishing both tangible and intangible measures, whose results can be cross-validated against each other. The tangible measures can be arithmetic (i.e., the difference vector, the absolute distance in rank for an alternative and the aggregated absolute distance) and/or geometrical (i.e., the Euclidean distance), while other concepts from the data analytics field can also be integrated (e.g., the number of outliers, the percentage of alternatives ranked very similarly and the maximum deviation in rank). Another niche is the establishment of an intangible measure from the perspective of statistics: the non-parametric test concept frequently used in the data science field is introduced, and it fits the problem context well. These measures are important for eventually claiming the similarity between two methods on a solid basis. This is the methodological reason for naming the proposed flow a “multi-measure” one.
The second contribution pertains to model selection in real practice. Through the application of the comparative research flow, the operational effectiveness of the AHP–GTMA model in solving the shredder selection decision problem is shown (verified by the fact that this model obtains a solution quite similar to that of AHP–TOPSIS, the widely used model), thereby establishing confidence in using this relatively new but less-applied model in future practice.
The third contribution, although relatively minor for this study, concerns model implementation and is two-fold: the algorithmic optimization of the GTMA computations and the implementation of TOPSIS in R. The recursive implementation, which computes the permanent value of a square matrix of any dimension, facilitates the solution process for GTMA and contributes to decision support system (DSS) design: it avoids writing a separate function for each problem size and makes maintenance easy. The implementation of TOPSIS in R is also unique to this study.
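As a hedged illustration of the kind of recursive routine described here (the name PermRecur appears in Tables 3 and 6; the body below is an assumed reconstruction, not the authors’ exact code), the permanent of a square matrix of any dimension can be computed by expansion along the first row, with every cofactor taking a plus sign:

```r
# Illustrative recursive permanent: like the determinant's cofactor
# expansion along row 1, but without sign alternation.
PermRecur <- function(M) {
  n <- nrow(M)
  if (n == 1) return(M[1, 1])
  total <- 0
  for (j in seq_len(n)) {
    minor <- M[-1, -j, drop = FALSE]  # delete row 1 and column j
    total <- total + M[1, j] * PermRecur(minor)
  }
  total
}

PermRecur(matrix(1, 3, 3))  # permanent of the all-ones 3 x 3 matrix: 3! = 6
```

Because one function handles every dimension, the same routine serves the 7 × 7 ASAMs here and any larger problem without extra code.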
There are also boundary conditions. In this study, the datasets were sourced from a paper shredder selection decision case, on which the findings (e.g., the similarity in the results) were justified. Even though these data come from a real case, it would be of interest to know whether another existing selection decision problem, in another management decision domain, can be modeled and solved using the same methods as those used here (i.e., AHP–GTMA and AHP–TOPSIS), and whether the result would mirror this study’s (i.e., yield similar rank orders); this could be a subject for future work.
Finally, identifying similarities in results across more MADM methods is another significant topic for future study, for which the proposed similarity confirmation method would be of use.

Author Contributions

Z.-Y.Z.: funding acquisition, methodology, software and writing (original draft); C.-C.L.: validation; C.-Y.C.: conceptualization, supervision and writing (review and editing); and C.-R.S.: data curation and software.

Funding

This research was funded by the Ministry of Science and Technology (Taiwan, ROC), grant numbers MOST 106-2410-H-038-001 and 106-2410-H-038-003, and by Taipei Medical University, grant number TMU 105-AE1-B46. The APC was funded by MOST 107-2410-H-992-046.

Acknowledgments

Note that the original idea of this study was presented orally at the 2017 IEEE International Conference on Innovation, Communication and Engineering (IEEE ICICE 2017), 5–11 November 2017, Kunming, China [64], as well as at the 2018 TIKI International Conference on Innovation, Communication and Engineering (TIKI ICICE 2018), 9–14 November 2018, Hangzhou, China [65]. This research was refined continuously over that span, so the authors sincerely thank these conferences for providing a platform for exchanging ideas, and thank the anonymous colleagues for their valuable recommendations.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. A Digest Summary of the Methodological Flow of [4]

In [4], the approach recently proposed by Singh and Rao [52] (named “AHGTMA” in that article) was used. Because no ready set of criteria was summarized in the literature, a set of selection criteria was also determined, whereby seven common attributes deemed important by the DM (decision-maker) were defined. Using the concepts of data-driven decision-making [66] and core BI strategic thinking [67], the catalogue data for the 26 alternative shredders were collected from various data sources (e.g., the Internet, brochures and price quotations), cleaned and quantized as required. Eight alternatives remained after pre-processing, which considered the agent’s ability to issue local invoices and applied the “complete case” rule (retaining only alternatives with no missing value on any criterion). This formed a data warehouse that stored the 8 × 7 “source decision matrix”, which describes how each alternative performs on each selection criterion. An 8 × 7 “normalized performance matrix” was then obtained to serve as the basis for the later GTMA calculations. Next, an AHP-style survey was conducted: the “source pairwise comparison matrix”, a 7 × 7 matrix, was filled with the DM’s opinions and, as a by-product of this phase, a criteria weight vector (CWV) defining the priority of the attributes in the DM’s mind was obtained using AHP. By spreading the elements of a row of the “normalized performance matrix” (1 × 7, in fact the performance vector of an alternative) over the diagonal of the “source pairwise comparison matrix”, a square 7 × 7 ASAM (alternative selection attribute matrix) was obtained, whose permanent value was regarded as the “index score” for that alternative. Repeating this process for all of the alternatives yielded the index scores, which form an 8 × 1 “final score vector”.
As a final outcome, based on this vector, the alternatives were ranked, so the best (or winner) shredder(s) could be identified.
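The ASAM construction step described above can be sketched in R as follows (a minimal, assumed implementation; the function and argument names are illustrative):

```r
# Sketch: place an alternative's 1 x 7 normalized performance vector on
# the diagonal of the 7 x 7 pairwise comparison matrix B to form its ASAM.
build_asam <- function(B, perf_row) {
  stopifnot(nrow(B) == ncol(B), length(perf_row) == nrow(B))
  A <- B
  diag(A) <- perf_row   # off-diagonal entries keep the DM's comparisons
  A
}
```

The index score of the alternative is then the permanent value of the returned matrix; repeating this over the eight rows of the normalized performance matrix yields the 8 × 1 final score vector.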

Appendix B. Code Segment in R for the Non-Parametric Test

> wilcox.test( ROV_AHPGTMA, ROV_AHPTOPSIS, paired = TRUE )
  Wilcoxon signed rank test with continuity correction
  data: ROV_AHPGTMA and ROV_AHPTOPSIS
  V = 15, p-value = 0.9301
  alternative hypothesis: true location shift is not equal to 0
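For reproducibility, the two ROV arguments above correspond to the rank columns of Tables 3 and 5; under that assumption, a self-contained version of the call is:

```r
# Rank order vectors for A1..A8 (Table 3: AHP-GTMA; Table 5: AHP-TOPSIS)
ROV_AHPGTMA   <- c(8, 6, 5, 7, 4, 3, 1, 2)
ROV_AHPTOPSIS <- c(7, 6, 4, 8, 3, 1, 2, 5)

# Paired Wilcoxon signed-rank test; R drops the one zero difference,
# warns about ties and applies a continuity correction, matching the
# output above (V = 15).
wilcox.test(ROV_AHPGTMA, ROV_AHPTOPSIS, paired = TRUE)
```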

References

  1. Chang, C.-T.; Chou, Y.-Y.; Zhuang, Z.Y. A practical expected-value-approach model to assess the relevant procurement costs. J. Oper. Res. Soc. 2015, 66, 539–553.
  2. Zhuang, Z.Y.; Chang, S.C. Deciding product mix based on time-driven activity-based costing by mixed integer programming. J. Intell. Manuf. 2017, 28, 959–974.
  3. Zhuang, Z.-Y.; Chen, C.-Y.; Su, C.-R. Modelling/solving a shredder choosing decision problem with analytic hierarchy process (AHP) and graph theory and matrix approach (GTMA). In Proceedings of the 2017 IEEE International Conference on Applied System Innovation (IEEE ICASI 2017), Sapporo, Japan, 13–17 May 2017; p. 709.
  4. Zhuang, Z.-Y.; Chiang, I.-J.; Su, C.-R.; Chen, C.-Y. Modelling the decision of paper shredder selection using analytic hierarchy process and graph theory and matrix approach. Adv. Mech. Eng. 2017, 9, 1–11.
  5. Hwang, C.-L.; Yoon, K. Multiple Attribute Decision Making: Methods and Applications; Springer-Verlag: New York, NY, USA, 1981.
  6. Tzeng, G.H.; Huang, J.J. Multiple Attribute Decision Making: Methods and Applications; Chapman and Hall/CRC: Boca Raton, FL, USA, 2011.
  7. Zanakis, S.H.; Solomon, A.; Wishart, N.; Dublish, S. Multi-attribute decision making: A simulation comparison of select methods. Eur. J. Oper. Res. 1998, 107, 507–529.
  8. Saaty, T.L. A scaling method for priorities in hierarchical structures. J. Math. Psychol. 1977, 15, 59–62.
  9. Govindan, K.; Kaliyan, M.; Kannan, D.; Haq, A.N. Barriers analysis for green supply chain management implementation in Indian industries using analytic hierarchy process. Int. J. Prod. Econ. 2014, 147, 555–568.
  10. Akaa, O.U.; Abu, A.; Spearpoint, M.; Giovinazzi, S. A group-AHP decision analysis for the selection of applied fire protection to steel structures. Fire Saf. J. 2016, 86, 95–105.
  11. Bian, T.; Hu, J.; Deng, Y. Identifying influential nodes in complex networks based on AHP. Phys. A Stat. Mech. Appl. 2017, 479, 422–436.
  12. Dweiri, F.; Kumar, S.; Khan, S.A.; Jain, V. Designing an integrated AHP based decision support system for supplier selection in automotive industry. Expert Syst. Appl. 2016, 62, 273–283.
  13. Dong, Q.; Cooper, O. An orders-of-magnitude AHP supply chain risk assessment framework. Int. J. Prod. Econ. 2016, 182, 144–156.
  14. Hillerman, T.; Souza, J.C.F.; Reis, A.C.B.; Carvalho, R.N. Applying clustering and AHP methods for evaluating suspect healthcare claims. J. Comput. Sci. 2017, 19, 97–111.
  15. Erdogan, S.A.; Šaparauskas, J.; Turskis, Z. Decision making in construction management: AHP and Expert Choice approach. Procedia Eng. 2017, 172, 270–276.
  16. Szulecka, J.; Zalazar, E.M. Forest plantations in Paraguay: Historical developments and a critical diagnosis in a SWOT-AHP framework. Land Use Policy 2017, 60, 384–394.
  17. Li, W.; Yu, S.; Pei, H.; Zhao, C.; Tian, B. A hybrid approach based on fuzzy AHP and 2-tuple fuzzy linguistic method for evaluation in-flight service quality. J. Air Transp. Manag. 2017, 60, 49–64.
  18. Kokangül, A.; Polat, U.; Dağsuyu, C. A new approximation for risk assessment using the AHP and Fine Kinney methodologies. Saf. Sci. 2017, 91, 24–32.
  19. Samuel, O.W.; Asogbon, G.M.; Sangaiah, A.K.; Fang, P.; Li, G. An integrated decision support system based on ANN and Fuzzy AHP for heart failure risk prediction. Expert Syst. Appl. 2017, 68, 163–172.
  20. Agarwal, P.; Sahai, M.; Mishra, V.; Bag, M.; Singh, V. A review of multi-criteria decision making techniques for supplier evaluation and selection. Int. J. Ind. Eng. Comput. 2011, 2, 801–810.
  21. Ho, W.; Xu, X.; Dey, P.K. Multi-criteria decision making approaches for supplier evaluation and selection: A literature review. Eur. J. Oper. Res. 2010, 202, 16–24.
  22. Govindan, K.; Rajendran, S.; Sarkis, J.; Murugesan, P. Multi criteria decision making approaches for green supplier evaluation and selection: A literature review. J. Clean. Prod. 2015, 98, 66–83.
  23. Hwang, C.-L.; Lai, Y.-J.; Liu, T.-Y. A new approach for multiple objective decision making. Comput. Oper. Res. 1993, 20, 889–899.
  24. Mir, M.A.; Ghazvinei, P.T.; Sulaiman, N.M.N.; Basri, N.E.A.; Saheri, S.; Mahmood, N.Z.; Jahan, A.; Begum, R.A.; Aghamohammadi, N. Application of TOPSIS and VIKOR improved versions in a multi criteria decision analysis to develop an optimized municipal solid waste management model. J. Environ. Manag. 2016, 166, 109–115.
  25. Zhou, S.; Liu, W.; Chang, W. An improved TOPSIS with weighted hesitant vague information. Chaos Solitons Fractals 2016, 89, 47–53.
  26. Wang, X.; Peng, B. Determining the value of the port transport waters: Based on improved TOPSIS model by multiple regression weighting. Ocean Coast. Manag. 2015, 107, 37–45.
  27. Liu, S.; Chan, F.T.; Ran, W. Multi-attribute group decision-making with multi-granularity linguistic assessment information: An improved approach based on deviation and TOPSIS. Appl. Math. Model. 2013, 37, 10129–10140.
  28. Kuo, R.J.; Wu, Y.H.; Hsu, T.S. Integration of fuzzy set theory and TOPSIS into HFMEA to improve outpatient service for elderly patients in Taiwan. J. Chin. Med. Assoc. 2012, 75, 341–348.
  29. Bao, Q.; Ruan, D.; Shen, Y.; Hermans, E.; Janssens, D. Improved hierarchical fuzzy TOPSIS for road safety performance evaluation. Knowl.-Based Syst. 2012, 32, 84–90.
  30. Gupta, H.; Barua, M.K. Supplier selection among SMEs on the basis of their green innovation ability using BWM and fuzzy TOPSIS. J. Clean. Prod. 2017, 152, 242–258.
  31. Walczak, D.; Rutkowska, A. Project rankings for participatory budget based on the fuzzy TOPSIS method. Eur. J. Oper. Res. 2017, 260, 706–714.
  32. He, Y.H.; Wang, L.B.; He, Z.Z.; Xie, M. A fuzzy TOPSIS and rough set based approach for mechanism analysis of product infant failure. Eng. Appl. Artif. Intell. 2016, 47, 25–37.
  33. Kang, D.; Jang, W.; Park, Y. Evaluation of e-commerce websites using fuzzy hierarchical TOPSIS based on ES-QUAL. Appl. Soft Comput. 2016, 42, 53–65.
  34. Kannan, D.; de Sousa Jabbour, A.B.L.; Jabbour, C.J.C. Selecting green suppliers based on GSCM practices: Using fuzzy TOPSIS applied to a Brazilian electronics company. Eur. J. Oper. Res. 2014, 233, 432–447.
  35. Mahdevari, S.; Shahriar, K.; Esfahanipour, A. Human health and safety risks management in underground coal mines using fuzzy TOPSIS. Sci. Total. Environ. 2014, 488, 85–99.
  36. Pi, W.-N. Supplier evaluation using AHP and TOPSIS. J. Sci. Eng. Technol. 2005, 1, 75–83.
  37. Lin, M.-C.; Wang, C.-C.; Chen, M.-S.; Chang, C.-A. Using AHP and TOPSIS approaches in customer-driven product design process. Comput. Ind. 2008, 59, 17–31.
  38. Dağdeviren, M.; Yavuz, S.; Kılınç, N. Weapon selection using the AHP and TOPSIS methods under fuzzy environment. Expert Syst. Appl. 2009, 36, 8143–8151.
  39. Torfi, F.; Farahani, R.Z.; Rezapour, S. Fuzzy AHP to determine the relative weights of evaluation criteria and Fuzzy TOPSIS to rank the alternatives. Appl. Soft Comput. 2010, 10, 520–528.
  40. Amiri, M.P. Project selection for oil-fields development by using the AHP and fuzzy TOPSIS methods. Expert Syst. Appl. 2010, 37, 6218–6224.
  41. Yu, X.; Guo, S.; Guo, J.; Huang, X. Rank B2C e-commerce websites in e-alliance based on AHP and fuzzy TOPSIS. Expert Syst. Appl. 2011, 38, 3550–3557.
  42. Büyüközkan, G.; Çifçi, G. A combined fuzzy AHP and fuzzy TOPSIS based strategic analysis of electronic service quality in healthcare industry. Expert Syst. Appl. 2012, 39, 2341–2354.
  43. Patil, S.K.; Kant, R. A fuzzy AHP-TOPSIS framework for ranking the solutions of knowledge management adoption in Supply Chain to overcome its barriers. Expert Syst. Appl. 2014, 41, 679–693.
  44. Taylan, O.; Kabli, M.R.; Saeedpoor, M.; Vafadarnikjoo, A. Commentary on ‘Construction projects selection and risk assessment by Fuzzy AHP and Fuzzy TOPSIS methodologies’ [Applied Soft Computing 17 (2014): 105–116]. Appl. Soft Comput. 2015, 36, 419–421.
  45. Prakash, C.; Barua, M.K. Integration of AHP-TOPSIS method for prioritizing the solutions of reverse logistics adoption to overcome its barriers under fuzzy environment. J. Manuf. Syst. 2015, 37, 599–615.
  46. Sekhar, C.; Patwardhan, M.; Vyas, V. A Delphi-AHP-TOPSIS based framework for the prioritization of intellectual capital indicators: A SMEs perspective. Procedia-Soc. Behav. Sci. 2015, 189, 275–284.
  47. Goyal, T.; Kaushal, S. An intelligent scheduling scheme for real-time traffic management using cooperative Game Theory and AHP-TOPSIS methods for next generation telecommunication networks. Expert Syst. Appl. 2017, 86, 125–134.
  48. Karahalios, H. The application of the AHP-TOPSIS for evaluating ballast water treatment systems by ship operators. Transp. Res. Part D Transp. Environ. 2017, 52, 172–184.
  49. Sindhu, S.; Nehra, V.; Luthra, S. Investigation of feasibility study of solar farms deployment using hybrid AHP-TOPSIS analysis: Case study of India. Renew. Sustain. Energy Rev. 2017, 73, 496–511.
  50. Alexanderson, G. About the cover: Euler and Königsberg’s Bridges: A historical view. Bull. Am. Math. Soc. 2006, 43, 567–573.
  51. Rao, R.V. Decision Making in the Manufacturing Environment: Using Graph Theory and Fuzzy Multiple Attribute Decision Making Methods; Springer Science & Business Media: Berlin, Germany, 2007.
  52. Singh, D.; Rao, R. A hybrid multiple attribute decision making method for solving problems of industrial environment. Int. J. Ind. Eng. Comput. 2011, 2, 631–644.
  53. Geetha, N.K.; Sekar, P. Graph Theory Matrix Approach—A Qualitative Decision Making Tool. Mater. Today Proc. 2017, 4, 7741–7749.
  54. Darvish, M.; Yasaei, M.; Saeedi, A. Application of the graph theory and matrix methods to contractor ranking. Int. J. Proj. Manag. 2009, 27, 610–619.
  55. Kulkarni, S. Graph theory and matrix approach for performance evaluation of TQM in Indian industries. TQM Mag. 2005, 17, 509–526.
  56. Jain, V.; Raj, T. Modeling and analysis of FMS performance variables by ISM, SEM and GTMA approach. Int. J. Prod. Econ. 2016, 171, 84–96.
  57. Rao, K.V.; Murthy, P.B.G.S.N.; Vidhu, K.P. Assignment of weightage to machining characteristics to improve overall performance of machining using GTMA and utility concept. CIRP J. Manuf. Sci. Technol. 2017, 18, 152–158.
  58. Chou, J.S.; Ongkowijoyo, C.S. Risk-based group decision making regarding renewable energy schemes using a stochastic graphical matrix model. Autom. Constr. 2014, 37, 98–109.
  59. Fathi, M.R.; Safari, H.; Faghih, A. Integration of graph theory and matrix approach with fuzzy AHP for equipment selection. J. Ind. Eng. Manag. 2013, 6, 477–494.
  60. Chaghooshi, A.J.; Safari, H.; Fathi, M.R. Integration of fuzzy AHP and fuzzy GTMA for location selection of gas pressure reducing stations: A case study. J. Manag. Res. 2012, 4, 152–169.
  61. Lanjewar, P.B.; Rao, R.V.; Kale, A.V. Assessment of alternative fuels for transportation using a hybrid graph theory and analytic hierarchy process method. Fuel 2015, 154, 9–16.
  62. Hanne, T. Meta decision problems in multiple criteria decision making. In Multicriteria Decision Making: Advances in MCDM Models, Algorithms, Theory, and Applications; Gal, T., Stewart, T., Hanne, T., Eds.; Springer Science & Business Media: Berlin, Germany, 2013.
  63. Zhuang, Z.-Y.; Hocine, A. Meta Goal Programming Approach for Solving Multi-Criteria de Novo Programming Problem. Eur. J. Oper. Res. 2018, 265, 228–238.
  64. Zhuang, Z.-Y.; Lin, C.-C.; Chen, C.-Y. Applying AHP+GTMA and AHP+TOPSIS on solving the same paper shredder selection problem and comparing the results. In Proceedings of the 2017 IEEE International Conference on Innovation, Communication and Engineering (IEEE ICICE 2017), Kunming, China, 5–12 November 2017; p. 168.
  65. Zhuang, Z.-Y.; Lin, C.-C.; Chen, C.-Y. A systematic flow to compare the rank order vectors from solving the paper shredder selection problem using AHP–GTMA and AHP–TOPSIS. In Proceedings of the 2018 TIKI International Conference on Innovation, Communication and Engineering (TIKI ICICE 2018), Hangzhou, China, 9–14 November 2018.
  66. Marr, B. Data Strategy: How to Profit from a World of Big Data, Analytics and the Internet of Things; Kogan Page (Limited): London, UK, 2017; ISBN 9780749479855.
  67. Williams, S. Business Intelligence Strategy and Big Data Analytics: A General Management Perspective; Morgan Kaufmann: Boston, MA, USA, 2016; ISBN 9780128091982.
Table 1. Problem Dataset 1 (Matrix D’): recalculated normalized decision matrix.

Alts. × Attrib. | c1 | c2 | c3 | c4 | c5 | c6 | c7
A1 | 1/3 | 3/4 | 0.714286 | 0.634245 | 0.714286 | 1 | 0.425018
A2 | 2/3 | 3/4 | 1 | 0.450433 | 0.769231 | 1 | 0.433333
A3 | 2/3 | 3/4 | 0.973392 | 0.445553 | 0.769231 | 1 | 0.470483
A4 | 2/3 | 3/4 | 0.648928 | 0.649547 | 0.588235 | 1 | 0.548614
A5 | 2/3 | 1 | 0.428571 | 1 | 0.769231 | 2/3 | 0.770619
A6 | 2/3 | 3/4 | 0.571429 | 0.860711 | 1 | 2/3 | 0.784777
A7 | 1 | 1 | 0.865237 | 0.442971 | 0.714286 | 1 | 0.987549
A8 | 2/3 | 3/4 | 0.428571 | 0.904556 | 0.684932 | 1 | 1
Table 2. Problem Dataset 2 (Matrix B): pairwise comparison matrix (Data source: [4]).

Attr. × Attr. | c1 | c2 | c3 | c4 | c5 | c6 | c7
c1 | 1 | 7 | 1 | 3 | 1/3 | 5 | 1
c2 | 1/7 | 1 | 1/5 | 1/3 | 1/9 | 1/3 | 1/9
c3 | 1 | 5 | 1 | 5 | 1/5 | 1 | 1/3
c4 | 1/3 | 3 | 1/5 | 1 | 1/9 | 1/3 | 1
c5 | 3 | 9 | 5 | 9 | 1 | 7 | 5
c6 | 1/5 | 3 | 1 | 3 | 1/7 | 1 | 1/5
c7 | 1 | 9 | 3 | 1 | 1/5 | 5 | 1
Table 3. The index scores and rank orders assessed by GTMA for the alternatives.

Ai | Index Score (PermRecur(Ci)) | Rank | Reference
A1 | 9663.529 | 8 | 9663.533
A2 | 10,430.966 | 6 | 10,430.86
A3 | 10,431.094 | 5 | 10,431.03
A4 | 10,049.697 | 7 | 10,049.56
A5 | 10,534.207 | 4 | 10,534.19
A6 | 10,571.709 | 3 | 10,571.69
A7 | 11,654.630 | 1 | 11,654.58
A8 | 10,576.246 | 2 | 10,576.28
Table 4. The weighted normalized decision matrix (matrix WiNDMa).

Alts. × Attrib. | c1 | c2 | c3 | c4 | c5 | c6 | c7
A1 | 0.05189484 | 0.01753360 | 0.07971827 | 0.03433167 | 0.30051750 | 0.06787833 | 0.07080747
A2 | 0.10378968 | 0.01753360 | 0.11160558 | 0.02438196 | 0.32363430 | 0.06787833 | 0.07219283
A3 | 0.10378968 | 0.01753360 | 0.10863593 | 0.02411780 | 0.32363430 | 0.06787833 | 0.07838198
A4 | 0.10378968 | 0.01753360 | 0.07242396 | 0.03515999 | 0.24748500 | 0.06787833 | 0.09139847
A5 | 0.10378968 | 0.02337814 | 0.04783096 | 0.05413003 | 0.32363430 | 0.04525222 | 0.12838416
A6 | 0.10378968 | 0.01753360 | 0.06377462 | 0.04659033 | 0.42072460 | 0.04525222 | 0.13074293
A7 | 0.15568452 | 0.02337814 | 0.09656527 | 0.02397804 | 0.30051750 | 0.06787833 | 0.16452456
A8 | 0.10378968 | 0.01753360 | 0.04783096 | 0.04896363 | 0.28816750 | 0.06787833 | 0.16659885
Table 5. The RCI scores assessed by TOPSIS for the alternatives and the rank orders.

Ai | RCI Score Vector | Rank
A1 | 0.2605186 | 7
A2 | 0.4355761 | 6
A3 | 0.4389373 | 4
A4 | 0.2475097 | 8
A5 | 0.4559113 | 3
A6 | 0.6990469 | 1
A7 | 0.5601782 | 2
A8 | 0.4367003 | 5
Table 6. The Fundamental Building Block of Permanent Value Determination.

For Ai | Start Time (s·ms) | Stop Time (s·ms) | PermTimeVector
PermRecur(C1) | 43.974 | 44.230 | 0.2552431
PermRecur(C2) | 44.230 | 44.482 | 0.2526710
PermRecur(C3) | 44.482 | 44.735 | 0.2526729
PermRecur(C4) | 44.735 | 44.994 | 0.2587240
PermRecur(C5) | 44.994 | 45.251 | 0.2566819
PermRecur(C6) | 45.251 | 45.505 | 0.2541721
PermRecur(C7) | 45.506 | 45.760 | 0.2547059
PermRecur(C8) | 45.760 | 46.012 | 0.2521710
Total Elapsed | 2.022281 | Average | 0.2527851

