Rank-Based Comparative Research Flow Benchmarking the Effectiveness of AHP–GTMA on Aiding Decisions of Shredder Selection by Reference to AHP–TOPSIS

Abstract: The AHP–GTMA (analytic hierarchy process and graph theory and matrix approach) has been applied to select the best paper shredder before a company places a bulk purchase order. However, there is a question as to whether such a relatively recent approach is effective in aiding selection decision problems in industrial/commercial practice. In this paper, a novel multi-measure, rank-based comparative research flow is proposed. The real decision problem mentioned above is solved using the AHP–GTMA and the AHP–TOPSIS methods, respectively, with the relevant datasets sourced. Several measures in the proposed flow, i.e., arithmetical, geometrical and statistical ones, are multiplexed and used to validate the similarity between the rank order vectors (ROVs) (and thus between the final preferential orders determined over the alternatives) obtained using these two different methods. Because AHP–TOPSIS is a well-established multi-attribute decision-making (MADM) approach that has been successfully applied in many other fields, the similarity validated between these individual results using the proposed method is used to confirm the efficacy of the AHP–GTMA approach and to determine its applicability in practice. In addition, along with this study, some further contributions are rendered for implementing the decision models, e.g., an optimized recursive implementation in R to compute the permanent value of a square ASAM (alternative selection attribute matrix, the computational basis required by AHP–GTMA) of any dimension. The proposed methodological flow for confirming similarity based on ordinal rank information is not only convenient in operational practice with ubiquitous tool support (e.g., the vector-based R statistical platform), but also generalizable (to verify between another pair of results obtained using any other MADM methods). This opens options for future research.


Introduction
The problem of sourcing required materials is usually a confounding decision for the decision makers (DMs) of an industrial company. Sourcing an appropriate combination of materials for manufacturing has been a key problem, e.g., for the photoelectric industry, although the focus sometimes differs because of the various risk factors present in the supply chain. Multi-objective programming (MOP) models are usually employed as the main tool to tackle these multi-objective decision-making (MODM) problems [1]. The problem of finding the optimal combination of material sourcing for manufacturing shares this property (i.e., MODM) with many existing problems in relevant fields, e.g., on the other side of manufacturing, finding the expected optimal combination of products to produce, i.e., the "product mix" problem [2].
However, sourcing appropriate tools or materials for office use is a different problem for industrial companies, because it usually involves a uni-type bulk purchase (i.e., a "single-lot purchase"). This forms a selection problem that is intrinsically different from the abovementioned MODM problems. One single type of product, among many "alternatives", must be determined before the bulk-purchasing process begins, and such a decision is usually justified based on the alternatives' pros and cons in terms of the "attributes". Prioritizing the alternatives encountered using the attributes considered is the basis for the selection. Therefore, for problem-model fitness, the suitable approach to this problem is usually multi-attribute decision-making (MADM). The following paragraphs illustrate the core properties of this type of decision problem using the paper shredder selection problem that industrial companies usually face.
A paper shredder, which is an electromechanical facility, is used in offices, although its use in the home is also popular. For decision-makers (DMs), the selection of the best paper shredder from several considered alternatives is an occasional but confounding operational decision. This type of decision is important for business entities before launching a bulk purchase because of some properties of the decision problem itself. As can be summarized from [3], such a decision is important because of the following properties: (1) Importance of the shredders for company management. Because of the key roles of such a business machine in security management (i.e., it destroys sensitive or private documents before disposal or recycling, thus preventing the disclosure risk of confidential information) and in efficiently managing paper files (i.e., it destroys dated but still archived physical paper files, some of which have been converted and saved in the digital repository), a wrong decision to select and purchase an unsuitable shredder may lead to unexpected consequences. (2) The post-decision consequence of bulk purchase. Usually, for maintenance convenience and cost saving, a company launches a batch procurement that involves purchasing several shredders of the same type at a time. In this context, failing to choose the most suitable shredder simply replicates the irrecoverable consequences mentioned above. (3) Hard to decide because of the variety of choices. The market offers many different types of shredder, each with its own special features. For example, in the purchase case studied by Zhuang, Chiang, Su and Chen [4], there were 26 options. Such variety deters a DM from assessing the options properly and makes it harder to decide among so many alternatives. (4) Hard to decide because multiple conflicting selection criteria are present, while these criteria and the relative importance among them remain unknown.
For this point, it was claimed in [4] that " . . . the problem . . . should address a different set of criteria, in comparison with using at home." According to this, seven relevant main selection criteria which should be considered in the "office-use" context were summarized.
Therefore, Zhuang et al. [4] used a hybrid MADM approach, AHP-GTMA (analytic hierarchy process and graph theory and matrix approach), to model this selection decision problem. For how the AHP-GTMA model was formulated in [4], see the condensed review in Appendix A. In summary, the main contributions of [3,4] are scientifically managing/supporting the encountered selection decision for the first time, identifying a suitable criteria set for the problem and solving the problem systematically using the hybrid AHP-GTMA model. These are important for making a purchase decision and for addressing the design issues of paper shredders. However, apart from these, there remains a question as to whether such a relatively recent approach is effective in aiding a selection decision problem (such as the abovementioned) in industrial practice.
By reviewing the relevant methods systematically, Section 2 addresses the necessity and timeliness of validating the practical effectiveness of the AHP-GTMA approach, while confidence in using AHP-TOPSIS as the rival method is established in terms of its "wide use" and its ability to mitigate the risk of "inconsistency in the results" due to methodological heterogeneity, as it shares a common former phase with AHP-GTMA. Section 3 introduces the comparative research flow that models/solves the same selection decision problem using AHP-GTMA and AHP-TOPSIS individually and compares the results using the proposed flow. Section 4 analyzes the results and discusses the findings/implications drawn from the experiment, and details the new recursive algorithm that determines the permanent values for the latter phase of AHP-GTMA in this study. Section 5 gives concluding remarks.

Literature Review
As discussed previously, the most important aim of this study is to examine the practical effectiveness of AHP-GTMA, by reference to the results obtained using AHP-TOPSIS on the same real problem, using the proposed multi-measure, rank-based comparative research flow. Thus, in this section, the main method to be benchmarked (i.e., AHP-GTMA as well as its basis, the GTMA approach), the rival approach used (i.e., AHP-TOPSIS as well as its basis, the TOPSIS approach) and their common former stage (i.e., AHP) are reviewed (in reverse order, for clarity). The current application literature reveals that not only TOPSIS but also the hybrid AHP-TOPSIS are popular methods for MADM. In contrast, relatively few studies have applied the more recent AHP-GTMA approach, even though applications of GTMA are now increasing. Following this systematic review, it is observed that: (1) using AHP-TOPSIS as the rival approach is appropriate because the benchmarked AHP-GTMA approach is then compared to a "widely used approach" [7]; and (2) it is suitable to benchmark the effectiveness of AHP-GTMA in practice at this moment because more applications of it are expected (similar to the increasing popularity of applying pure GTMA) but confidence in its application is still insufficient. As for the mentioned problem-data-model fitness of AHP-TOPSIS and the novelty of the proposed recursive algorithm for implementing AHP-GTMA in R, these are relatively minor aspects of the study and are presented below along with the modeling and model implementation work.
AHP is a widely used MADM approach that was proposed in 1977 [8]. It has been used in many recent application studies (e.g., [9][10][11][12][13][14][15]). AHP uses pairwise comparisons during investigation to prioritize or rank the criteria and/or the alternatives during calculation. With standard AHP, the involved criteria are pairwise compared with respect to the total decision goal to determine the criteria weight vector (CWV) and establish a priority over them. This is usually called the first phase. The involved alternatives are then pairwise compared with respect to (each of) these criteria to determine the preferred priority sequence over them. For AHP, there is also evidence of cross-method integration [16][17][18][19]. To the authors' knowledge, it is still the most popular MADM method for field applications. For example, in the supply chain field, until this decade, except for the MODM models, which are usually based on mathematical programming, AHP has been the top-rated MADM approach for tackling supplier selection problems [20,21]; this is also true in a 2015 survey of green supplier selection methods [22]. Based on its popularity (i.e., the application domain) and its maturity after decades of development (i.e., the methodological domain), AHP is used as the shared initial phase for the two methods that are to be compared. TOPSIS (Technique for Order Preference by Similarity to an Ideal Solution) was proposed at the end of the last century [5,23]. In the MADM school of MCDM, it gained popularity quickly. TOPSIS utilizes the concept of relative closeness to the ideal, where the "relative closeness" of an alternative is measured by the ratio of its distance to the anti-ideal solution of the model (system) against the summed distances to both the ideal and anti-ideal solutions. The greater this ratio is, the closer the alternative is to the ideal solution and the farther it is from the anti-ideal solution.
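The relative closeness just described can be stated compactly in standard TOPSIS notation; with $S_i^{+}$ and $S_i^{-}$ denoting the distances of alternative $i$ to the ideal and anti-ideal solutions, respectively:

```latex
RC_i = \frac{S_i^{-}}{S_i^{+} + S_i^{-}}, \qquad 0 \le RC_i \le 1,
```

so that $RC_i = 1$ is attained only at the ideal solution itself and $RC_i = 0$ only at the anti-ideal solution.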
Even after 2010, studies using the non-hybrid versions of TOPSIS have been undertaken [24][25][26][27][28][29][30][31][32][33][34][35]. Similar to AHP, these TOPSIS studies can be roughly categorized as methodological improvement studies and model application studies.
In those methodological improvement studies, we found that many studies since 2000 have used AHP-TOPSIS [36][37][38][39][40]. This has continued to be a trend in the last three years because the literature is replete with examples of its use in widely varying fields (e.g., [41][42][43][44][45][46][47][48][49]). Since TOPSIS is another widely used approach, setting up a hybrid approach that uses both TOPSIS and AHP (AHP-TOPSIS) as the rival approach is an important experimental design of this study.
GTMA has also become an increasingly popular MADM approach since it was proposed. Graph theory, which forms the basis for GTMA, is centuries old. To the authors' best knowledge, the first application of graph theory was in a study by Leonhard Euler in 1736 [50]. A decade ago, Rao proposed the idea of using graph theory [51] for MADM, and the method was further systematized into the known graph theory and matrix approach (GTMA) as a formal decision-making method [52]. GTMA represents the performance attributes (of each alternative) in a digraph, translates it into a matrix representation and calculates the permanent function value of that matrix as the basis for ranking [53]. Note that, during this process, the permanent function, and neither the determinant function nor the permutation function (which differs totally in meaning), is used, so that the complete information of the assessed alternative is provided without any loss (see [51]). Some recent studies have successfully resolved selection decisions or alternative ranking problems using this method [54][55][56][57][58]. Within the scope of GTMA, in addition to the abovementioned articles, there have been studies using the AHP-GTMA hybrid approach [59][60][61]. These works were all published after 2012 because of the recency of the AHP-GTMA approach. However, to the authors' knowledge, applications of this method are relatively few (in comparison with the widely used AHP-TOPSIS).
The above literature study establishes the standpoint of this study. First, methodologically, each MADM model prioritizes the alternatives using a different internal solution logic. This echoes the claim that "algorithms (for MADM) differ in their approach to selecting the 'best' solution, so this leads to 'inconsistency in the results'" [7]. However, in the literature, it is also noted that "there is neither a strong reason to reject a particular school of MCDM, nor a convincing argument to give general preference to one of the many methods" [62]. In this sense, the necessity for further scrutiny of whether the relatively recent AHP-GTMA approach is effective in aiding selection decision problems in industrial practice (which answers the main research question of this study) should be addressed.
Second, AHP-GTMA and AHP-TOPSIS share a former phase of AHP. This is important to make a solid basis for the comparison works. The fact the rival approach shares a common former phase with the benchmarked approach can control the "inconsistency in the results", which is a positive antecedence for the comparisons.
Third, AHP-TOPSIS has been a "widely used" approach, in contrast to AHP-GTMA, whose applications are relatively few. The literature contains many examples of the application of AHP-TOPSIS (the articles cited here are only a limited sample). It is a mature and widely used method and is therefore credible. This justifies the appropriateness of using AHP-TOPSIS as the rival approach. Note that this is not a comment on the relative merits of the two approaches, but a claim made based on the observed number of cases (i.e., it has been used again and again for selection problems), although such popularity of AHP-TOPSIS could be attributed to the length of time during which it has been used. For example, in a recent study from the MODM school [63], a hybrid mathematical programming approach was proposed to solve the site selection and capacity planning problem for the establishment of wind farms. However, in another recent study from the MADM school [49], AHP-TOPSIS was also successfully used as an effective MADM modeling method to aid the decision for the selection of solar farms. The reason for the latter study to choose AHP-TOPSIS should be analogous (to this study), and that study has eventually become yet another addition to the "number of cases" of AHP-TOPSIS.
Fourth, given its relatively limited applications, it is suitable to benchmark the effectiveness of AHP-GTMA in practice at this moment, because more applications can be expected in the future (similar to the increased popularity of applying pure GTMA) but confidence in its practical application is still insufficient.

Comparative Research Flow
Most MADM studies suppose that there are m alternatives and n criteria (attributes) for a given decision system of a selection decision problem, P. An m × n "decision matrix", D, wherein each element contains a value of an alternative that is evaluated based on some considered attribute, is established by data collection. The data collection process could involve many heterogeneous data sources, such as human data, computer data, network data, archived database data, paper profile data or electronic data, in line with the recent trend of data-driven decision-making (DDDM).
Usually, before the model calculations begin, the initial decision matrix (D) is normalized to give another "normalized decision matrix", which is D'. In addition, the opinion of a DM about the pairwise relationship between any two given attributes can be polled using an AHP-style expert questionnaire. A pairwise comparison matrix, B, is then obtained and this serves as the basis to determine a priority vector, which is the CWV over the considered attributes.
Using the data specifications as defined (and required) by AHP-GTMA and by AHP-TOPSIS, D' and B are sufficient input information for AHP-GTMA modeling, whilst D' and CWV are sufficient for AHP-TOPSIS modeling. By using E = {D', B} as the data catalogue and solving P using AHP-GTMA, the ROV, ROV_AHP-GTMA, whose elements contain the rank orders determined for the alternatives, is obtained as the final output of the AHP-GTMA modeling. Similarly, by using F = {D', CWV} as the data catalogue for P and solving it again using AHP-TOPSIS, another ROV, ROV_AHP-TOPSIS, is obtained as the final output of the AHP-TOPSIS modeling.
In this study, the first and core logic of the proposed comparative research flow is that, for P, whether ROV AHP-GTMA and ROV AHP-TOPSIS are similar is important. That is, if ROV AHP-GTMA "can be" similar to ROV AHP-TOPSIS upon modeling the same known decision problem in practice, the effectiveness of AHP-GTMA (which is relatively new and with fewer existing applications) is therefore verified because the rival AHP-TOPSIS approach is a "widely used" and experienced method.
The second logic pertains to "how to determine", instead of to "whether there is", the similarity between the individual results obtained from two models. This involves two questions for the methodology of comparison: (1) The degree to which they are similar must be measured in a numerical sense, but how? (2) In the "individual results", what type of message should be taken for the similarity measurement?
The latter question is examined first for clarity of illustration. To keep the proposed flow convenient to use when it is applied, this study advocates that only the ordinal rank information, which is the final output of almost every MADM model, be the input of the flow. That is, comparisons should be made and the similarity identified solely based on the two ROVs, rather than introducing any other type of result information that the methods may have in common. This is the reason the proposed comparative research flow is "rank-based".
The answer to the former question relates to the answer just given, because the possible ways to measure the similarity numerically depend on what is to be measured. In this study, the similarity between the individually obtained ROVs is of interest, and each ROV contains the numerical ordinal rank values assessed for the alternatives, so several tangible/intangible measures can be established in terms of different theoretical aspects, as follows:

(1) The absolute distance in rank can be calculated directly based on the ordinal rank values individually solved by the two models, and the aggregated absolute distance is a similarity measure in the arithmetic sense. Even though an ordinal rank is not ratio-scaled, the different rank values preserve a meaningful order. Suppose that X is the MADM model to be benchmarked and Y is the MADM model used to benchmark X; these processes are expressed as:

TV_i = |ROV_X,i − ROV_Y,i|, i = 1, . . . , m, (1)

AD = Σ_{i=1..m} TV_i, (2)

where i is the index for the alternatives; TV is a temporary vector that contains the element-wise absolute distances between the rank order values assessed by the two methods; and AD is the aggregated absolute distance, which is the scalar column sum of TV.

(2) The geometrical distance can be computed by treating the two valued ROVs as vectors in space and using the Euclidean method. Thus, the vector TV in Equation (1) is treated as a Euclidean vector, based on which the Euclidean distance is computed according to the following formula to obtain another similarity measure in the geometrical sense:

ED = sqrt( Σ_{i=1..m} TV_i² ). (3)

(3) Another, statistical, similarity measure can be established to confirm and cross-validate the results. This is especially useful when the two ROVs are found to be very similar, or at least similar to some extent, in terms of the two previous arithmetic and geometric measures.
With this purpose in mind, we propose to treat the two ROVs as paired samples and apply the non-parametric signed-rank test from the statistics field to establish this similarity measure. The signed-rank test determines whether the two ROVs (not as vectors but as statistical variables) come from non-identical populations.
Theoretically, using this test here is suitable because its input, i.e., the ROVs, is not ratio-scaled but nonetheless keeps the ordinal relationship. Thus, if the test shows that these two ROVs are not from non-identical populations, and the evidence is far from supporting that they are, this statistics-based observation can be used to confirm the other two similarities observed arithmetically or geometrically. Regardless, setting up such an additional intangible similarity measure is a novelty of this study.
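The three measures above can be sketched as follows in Python (the study's own implementation is in R); the function name is illustrative, and `scipy.stats.wilcoxon` is used for the signed-rank test only when SciPy is available:

```python
import math

def similarity_measures(rov_x, rov_y):
    """Compare two rank order vectors with the three measures of the flow.

    rov_x -- ROV of the benchmarked model X (e.g., AHP-GTMA)
    rov_y -- ROV of the rival model Y (e.g., AHP-TOPSIS)
    """
    # Equation (1): element-wise absolute distances in rank
    tv = [abs(x - y) for x, y in zip(rov_x, rov_y)]
    # Equation (2): aggregated absolute distance (arithmetic measure)
    ad = sum(tv)
    # Equation (3): Euclidean distance, treating TV as a vector (geometric measure)
    ed = math.sqrt(sum(d * d for d in tv))
    # Statistical measure: non-parametric signed-rank test on the paired ranks;
    # a large p-value means there is no evidence of non-identical populations.
    try:
        from scipy.stats import wilcoxon
        p_value = wilcoxon(rov_x, rov_y).pvalue
    except ImportError:  # SciPy not installed; skip the statistical measure
        p_value = None
    return ad, ed, p_value
```

For two identical ROVs, AD = 0 and ED = 0; the closer both are to zero, the more similar the two preferential orders.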
To utilize the existing real case for the paper shredder selection decision in [4] as the exemplar problem of this study (i.e., P), the following subsections begin with the collection of the required datasets (i.e., the union set E∪F = {D', B, CWV}). Problem P is then solved using AHP-GTMA and AHP-TOPSIS, respectively, and two ROVs are obtained. This is followed by using the above systematic comparative research flow to determine the similarity between them in terms of the different measures. As a result, ROV_AHP-GTMA is shown to be very similar to ROV_AHP-TOPSIS. Through these processes, the involved methods for modeling and comparison are illustrated in a self-explanatory manner.

Description of the Decision Problem and Data Sourcing
In the used case in [4], the size of problem P is defined by: (n, m) = (7,8), for which there are 8 shredder alternatives and 7 relevant criteria (attributes) in the criteria set, which is: {c1, c2, c3, c4, c5, c6, c7} = {Security level, Width of output strip or segment, Paper can size, Neatness of the shredder (in terms of the total volume of the shredder set), Noise level, Number of supported material types and Price}. Of the seven relevant criteria, c1, c3 and c6 are the-more-the-better (TMTB) criteria, and the other four are the-less-the-better (TLTB). This briefly describes the decision problem, but the required datasets, D', B and CWV, must be collected before going on.
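As a concrete illustration of how mixed criterion orientations can be handled during normalization, the sketch below uses a common linear max/min scheme in Python. This scheme is an assumption for illustration only; the exact normalization used in [4] follows that study's own GTMA conventions:

```python
def normalize_decision_matrix(D, benefit):
    """Linear normalization of a decision matrix with mixed criteria.

    D       -- list of rows, one per alternative (m x n)
    benefit -- list of n booleans: True for the-more-the-better (TMTB)
               criteria, False for the-less-the-better (TLTB) ones
    """
    n = len(D[0])
    cols = [[row[j] for row in D] for j in range(n)]
    Dn = []
    for row in D:
        new_row = []
        for j, v in enumerate(row):
            if benefit[j]:
                new_row.append(v / max(cols[j]))   # TMTB: higher raw score -> closer to 1
            else:
                new_row.append(min(cols[j]) / v)   # TLTB: lower raw score -> closer to 1
        Dn.append(new_row)
    return Dn
```

With this scheme, every normalized column becomes benefit-oriented and its best value is 1, regardless of whether the raw criterion was TMTB or TLTB.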
Based on the source decision matrix (D) in [4], a normalized decision matrix (i.e., D') is recalculated and obtained. It contains the normalized performance score (NPS) vector for each alternative as a row, as shown in Table 1. As can be seen by comparison with Table 1, the results calculated in [4] had some rounding inconsistencies, i.e., values were sometimes rounded to the nearest whole number and sometimes not (e.g., rounded down unconditionally). That is, the version of D' in [4] did not provide the precision required by the subsequent permanent value calculations for the alternative selection attribute matrices (ASAMs), because, to compose the ASAM for an alternative, the diagonal of the AHP pairwise comparison matrix is substituted by the NPS vector of that alternative, which affects the permanent function values of the ASAMs. Table 1 shows the correction. For source dataset B, which is the pairwise comparison matrix used by AHP-GTMA as the basis for composing the ASAM of each alternative before permanent value determination, and used by AHP-TOPSIS to derive the attribute weights (i.e., the CWV) for the latter stage of TOPSIS, the previous dataset is followed directly. Although it might fail the consistency check if the threshold for the CR (consistency ratio) is set to 0.1, it passes if the threshold is 0.2, a standard acceptable in many industrial studies. Moreover, the CR of this matrix is only a little greater than 0.1 (CR = 0.1117) and, most importantly, it is a real dataset investigated from a DM using the formal yet rigorous style of an AHP expert questionnaire survey [4]. Given such practical value, this dataset is sourced and used in this study, as shown in Table 2. The criteria weight vector, CWV, is also re-assessed, again to mitigate the risk of precision problems during modeling.
In the previous study, the imprecise version of this vector was only used to gain rough knowledge about the priority over all considered criteria and to draw practical implications from it (e.g., bulk purchase concerns for, or design issues of, paper shredders). It was entirely unused by the later calculations in [4]. In this study, however, since the elements of this vector are used as the basis for the latter stage of AHP-TOPSIS, their precision matters. Thus, this vector, denoted CWV*, is re-assessed from B, as shown in Equation (4).
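One common way to re-assess a priority vector from a pairwise comparison matrix, sketched here in Python, is the row geometric-mean approximation together with Saaty's consistency check. This is an illustrative sketch under that assumption, not necessarily the exact (eigenvector-based) procedure used in the study:

```python
import math

# Saaty's random consistency index, indexed by matrix order n (standard values)
RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12,
      6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45, 10: 1.49}

def ahp_priority_vector(B):
    """Approximate the AHP priority vector (CWV) by the row geometric-mean method."""
    n = len(B)
    gm = [math.prod(row) ** (1.0 / n) for row in B]  # geometric mean of each row
    total = sum(gm)
    return [g / total for g in gm]                   # normalize so the weights sum to 1

def consistency_ratio(B, w):
    """CR = CI / RI, with lambda_max estimated from the product B.w."""
    n = len(B)
    Bw = [sum(B[i][j] * w[j] for j in range(n)) for i in range(n)]
    lam = sum(Bw[i] / w[i] for i in range(n)) / n    # estimate of lambda_max
    ci = (lam - n) / (n - 1)                         # consistency index
    return ci / RI[n]
```

Applied to a perfectly consistent matrix, `consistency_ratio` returns 0; for the sourced B of this study, the reported value is CR = 0.1117.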

Alternative Ranking/Selection Using AHP-GTMA
Although the decision case was solved in [4] using AHP-GTMA, this subsection re-establishes the model based on the precise datasets presented in Section 3.2, and the model is implemented as an integrated, one-stop program in R to maintain precision consistency during calculation (including the prior steps to obtain Dataset 1 and the precise CWV in Equation (4)). In the previous Excel-R solution of [4], the former stages of AHP-GTMA (i.e., determining the CWV using AHP and composing the ASAMs) were performed in spreadsheets and the latter stages (i.e., computing the permanent values of the alternatives using the ASAMs and ranking them) were done in R. In addition, unlike [4], which implemented GTMA based on dynamic programming (DP), this study proposes a recursive algorithm. Other details of this algorithm are discussed in the next section, but its overall flow is presented here along with the solution process.
Following the symbolic conventions and using the datasets sourced in Section 3.2, the algorithm involves three phases: (1) compose an ASAM for each alternative and archive it; (2) load each ASAM and compute the permanent value for each alternative; and (3) rank the alternatives.
According to Singh and Rao [52], in Phase (1), for each alternative (i.e., A_i, i = 1 . . . 8), an ASAM, C_i, is obtained by replacing the diagonal of B with the NPS of that alternative in D' (i.e., the i-th row of D'). Algorithm 1 shows the algorithm of this phase.
Phase (2) obtains the "index score" for each alternative. The square ASAM of every alternative i (C_i, as obtained from Phase (1)) is loaded, and its permanent value, PV_i, is determined. These are exactly the index scores that compose the "score vector", the basis for ranking. Algorithm 2 shows the algorithm for this phase.
Phase (3) ranks the alternatives according to the index scores in the score vector. For the implementation in R, this can be performed by "rbind(PV, ROV_AHP-GTMA = rank(-PV))", because the values in PV are TMTB in terms of GTMA. Being a single-line command, it does not warrant a separate algorithm listing.
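The three phases can be sketched end-to-end as follows, here in Python rather than the study's R. The permanent is computed by recursive expansion along the first row, which is one natural way to realize the recursive algorithm described; the study's actual R implementation may differ in detail:

```python
def compose_asam(B, nps_row):
    """Phase (1): ASAM C_i = B with its diagonal replaced by alternative i's NPS."""
    n = len(B)
    C = [row[:] for row in B]
    for j in range(n):
        C[j][j] = nps_row[j]
    return C

def permanent(M):
    """Phase (2): permanent of a square matrix by recursive first-row expansion.

    Like the Laplace expansion of a determinant, but every term is ADDED
    (no alternating signs), so no information in the matrix cancels out.
    """
    n = len(M)
    if n == 1:
        return M[0][0]
    total = 0.0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in M[1:]]  # delete row 0, column j
        total += M[0][j] * permanent(minor)
    return total

def rank_desc(scores):
    """Phase (3): rank alternatives, 1 = best (highest permanent value),
    mirroring R's rank(-PV)."""
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    ranks = [0] * len(scores)
    for r, i in enumerate(order, start=1):
        ranks[i] = r
    return ranks
```

Note that this naive recursion takes roughly factorial time in the matrix order, which is acceptable for the 7 × 7 ASAMs of this case but grows quickly with dimension.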
Running these phases for the exemplar problem using the datasets established in Section 3.2 gives a solution to PV that contains the index scores for the decision alternatives and a rank order vector for them (i.e., ROV_AHP-GTMA). In Table 3, the numerical part of the second column shows the value of PV, and the numerical part of the third column shows the value of ROV_AHP-GTMA.
This result gives the following preferential order over the alternatives, strictly ordered without any indifference interval: A7 A8 A6 A5 A3 A2 A4 A1. This preferential order concurs with the order obtained by Zhuang et al. [4]. The fact that the two index score vectors assessed using the previous DP-based and the current recursion-based algorithms are quite close to each other verifies the adequacy and functional correctness of the new implementation. The permanent values in the final "Reference" column, which were obtained previously, are slightly different, but only because of the imprecise calculation basis of the imprecise data sources (see Section 3.2), not because of any calculation error or different implementation logic.

Alternative Ranking/Selection Using AHP-TOPSIS
The decision problem P is now solved using AHP-TOPSIS. Usually, the first step in applying AHP-TOPSIS involves loading the source decision matrix (i.e., the non-normalized D), identifying a suitable normalization method according to the problem context and then obtaining the normalized decision matrix, D'. However, the dataset D' has already been recomputed in Section 3.2 and is thus available.
According to the data specification defined in Section 3.1 (i.e., dataset F), another solution algorithm for AHP-TOPSIS (also designed and implemented in R) initially loads the D' matrix and the CWV* vector as the data input and uses them to calculate the "weighted normalized decision matrix" (denoted WiNDMa). Semantically, each column of D' is scaled by the corresponding criterion weight:

WiNDMa = D' · diag(CWV*),

where diag(CWV*) is the diagonal matrix with the elements of CWV* on its diagonal and "·" is the standard matrix multiplication operator of linear algebra. Equivalently, in vector-based tools such as R, WiNDMa is the element-wise product of D' with the row-wise replication of the transposed CWV*. For the datasets in Section 3.2, CWV* is a 7 × 1 vector, so diag(CWV*) is 7 × 7, and multiplying the 8 × 7 D' by it gives the 8 × 7 WiNDMa matrix, as shown in Table 4. The most critical element of TOPSIS is to identify the ideal and anti-(negative-)ideal solutions from WiNDMa, denoted A+ and A−. Because each column of D' is benefit-oriented after normalization, the elements of A+ are the column-wise maxima of WiNDMa and those of A− are the column-wise minima:

A+_j = max_i WiNDMa_ij, A−_j = min_i WiNDMa_ij, j = 1, . . . , 7.

Applying these equations to WiNDMa, the ideal and anti-ideal alternatives for the decision problem, both being virtual, are identified. It is seen that A+ is identical to CWV* (while A− is not). This finding is discussed extensively on a theoretical basis in Section 4.1.
The third phase of TOPSIS measures the distance from each alternative to the ideal (i.e., the "separation measures") and the distance to the anti-ideal:

S_i^+ = sqrt( Σ_j (WiNDMa_ij − A+_j)² ), S_i^− = sqrt( Σ_j (WiNDMa_ij − A−_j)² ), i = 1, . . . , 8.

Applying this logic to the dataset, two separation measure vectors are obtained. The relative closeness index (RCI) for each alternative is then calculated to allow the final ranking to proceed. This is the fourth phase of TOPSIS. An alternative has a higher score for this index if it is relatively close to the ideal solution and far from the anti-ideal. This is equivalent to determining the separation from the anti-ideal solution over the entire distance by which it is separated from both the ideal and the anti-ideal solutions:

RCI_i = S_i^− / (S_i^+ + S_i^−).

Substituting the separation measures into this formula, the 8 × 1 RCI vector (RCIV), which carries information about the alternatives' relative proximity to the ideal, is obtained. This is the index score vector obtained by AHP-TOPSIS, as shown in the second column of Table 5.
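Phases 2 to 4 just described can be sketched compactly; this Python version (the study implements them in R) takes a per-criterion orientation flag, in case any column of the weighted matrix remains cost-oriented:

```python
import math

def topsis_rank_scores(W, benefit):
    """TOPSIS phases 2-4 on a weighted normalized decision matrix W (m x n).

    benefit[j] is True for benefit-oriented (TMTB) columns, False otherwise.
    Returns the relative closeness index (RCI) of each alternative.
    """
    n = len(W[0])
    cols = [[row[j] for row in W] for j in range(n)]
    # Phase 2: ideal (A+) and anti-ideal (A-) virtual alternatives
    A_pos = [max(cols[j]) if benefit[j] else min(cols[j]) for j in range(n)]
    A_neg = [min(cols[j]) if benefit[j] else max(cols[j]) for j in range(n)]
    rci = []
    for row in W:
        # Phase 3: Euclidean separations from the ideal and the anti-ideal
        s_pos = math.sqrt(sum((v - a) ** 2 for v, a in zip(row, A_pos)))
        s_neg = math.sqrt(sum((v - a) ** 2 for v, a in zip(row, A_neg)))
        # Phase 4: relative closeness; larger = closer to the ideal
        rci.append(s_neg / (s_pos + s_neg))
    return rci
```

For the case study, where the normalization already makes every column benefit-oriented, `benefit` would simply be `[True] * 7`.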
This RCIV is further used to determine the ranks of the alternatives as a vector, which is ROV AHP-TOPSIS. For example, with the use of R, a command similar to that used in Section 3.3 (i.e., "rbind(RCIV, ROV_AHP_TOPSIS = rank(-RCIV))") is sufficient to obtain this ROV for subsequent comparisons; the resulting values are shown in the last column of Table 5.
This ROV also gives another preferential order over the alternatives, as suggested by AHP-TOPSIS. If these alternatives are strictly ordered, without considering possible indifference, the preferential order is: A6 ≻ A7 ≻ A5 ≻ A3 ≻ A8 ≻ A2 ≻ A1 ≻ A4.

Verifying the Similarity between the Two ROVs Using the Proposed Comparative Research Flow
From Equations (6) and (22), the preferential orders obtained individually using AHP-GTMA and AHP-TOPSIS are, respectively: Using AHP-GTMA: A7 ≻ A8 ≻ A6 ≻ A5 ≻ A3 ≻ A2 ≻ A4 ≻ A1. Using AHP-TOPSIS: A6 ≻ A7 ≻ A5 ≻ A3 ≻ A8 ≻ A2 ≻ A1 ≻ A4. At a glance, these two preferential orders are not perfectly consistent, but further scrutiny gives more insight into this observation. Therefore, the systematic, scientific comparative research flow proposed in Section 3.1 takes place.
Phase (1) of the flow calculates the absolute distance in rank for each alternative directly from the ordinal rank values; the aggregated absolute distance then serves as a similarity measure.
From Tables 3 and 5, according to Equation (1), the difference between these two ROVs is calculated, and the absolute distance in rank for each alternative is assessed in terms of the TV vector. According to Equation (2), the aggregated absolute distance for these two ROVs, which is the result of applying the pure arithmetic similarity measure, is the summation of the elements in TV: AD = 1 + 0 + 1 + 1 + 1 + 2 + 1 + 3 = 10.
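Phase (1) can be reproduced directly from the two preferential orders given above. The following sketch (illustrative Python; the paper works in R) encodes the ranks of A1..A8 read off those orders and computes the difference vector, TV, and AD:

```python
# Ranks of A1..A8 under each method, read off the preferential orders:
# AHP-GTMA:   A7 > A8 > A6 > A5 > A3 > A2 > A4 > A1
# AHP-TOPSIS: A6 > A7 > A5 > A3 > A8 > A2 > A1 > A4
ROV_GTMA   = [8, 6, 5, 7, 4, 3, 1, 2]
ROV_TOPSIS = [7, 6, 4, 8, 3, 1, 2, 5]

diff = [g - t for g, t in zip(ROV_GTMA, ROV_TOPSIS)]  # Equation (1): difference vector
TV   = [abs(d) for d in diff]                         # absolute distance in rank per alternative
AD   = sum(TV)                                        # Equation (2): aggregated absolute distance
```

This reproduces TV = (1, 0, 1, 1, 1, 2, 1, 3) and AD = 10, as stated in the text.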
In Phase (2), the Euclidean distance between the two ROVs (viewed as vectors) is then computed; this is the result of applying the geometry-based similarity measure. Let us briefly discuss these observations before going on.
It is initially seen in Equations (23) and (24) that one alternative (A2) has the same rank using both methods and, in Equation (26), that five alternatives (i.e., A1, A3, A4, A5, and A7) differ in rank by only one place. Only two alternatives differ in rank by more than one place: the rankings for A6 differ by two places and those for A8 by three.
It is seen that the rank orders from both methods are quite similar. Six of the eight alternatives (i.e., A1, A2, A3, A4, A5, and A7) differ in rank by no more than one place, so 75% of the alternatives obtain very similar ranks from both methods. For the two outliers (A6 and A8), the "maximum deviation" in ranking is three places (for A8), which is still not large. There is also supporting evidence from Phase (2): the distance between these two eight-dimensional ROVs is small (the sum of squared rank differences is only 18, i.e., a Euclidean distance of √18 ≈ 4.24). These are the results of using tangible measures in the first two phases of the proposed flow.
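The Phase (2) figure can be checked with a few lines (illustrative Python; the ranks are those read off the two preferential orders above):

```python
import math

ROV_GTMA   = [8, 6, 5, 7, 4, 3, 1, 2]  # ranks of A1..A8 under AHP-GTMA
ROV_TOPSIS = [7, 6, 4, 8, 3, 1, 2, 5]  # ranks of A1..A8 under AHP-TOPSIS

sq   = sum((g - t) ** 2 for g, t in zip(ROV_GTMA, ROV_TOPSIS))  # sum of squared differences
dist = math.sqrt(sq)                                            # Euclidean distance
```

The sum of squared rank differences is 18 (the value of 18 reported in the text corresponds to this quantity), giving a Euclidean distance of √18 ≈ 4.24.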
Although these tangible measures suggest similarity, the intangible statistical measure of Phase (3) is used to confirm the similarity found between the two ROVs. This study views the two ROVs as statistical samples and uses the non-parametric Wilcoxon signed-rank test (as also offered by R) to test whether they are drawn from non-identical populations, since they are in fact "ordinal variables" from the perspective of data analytics.
Using the ROV AHP-GTMA and ROV AHP-TOPSIS vectors as the parameters of the non-parametric test, the result does not support the hypothesis that they are drawn from non-identical populations: the p-value is 0.9301, far greater than 0.05, so the null hypothesis that they come from an identical population cannot be rejected, and the data are highly consistent with it. This completes Phase (3) and finalizes the proposed comparative research flow, using an intangible measure, for the studied case. For more details, please see the code and experimental results in R in Appendix B.
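The paper obtains the p-value of 0.9301 from R's paired `wilcox.test()`. The test statistic itself can be computed by hand, as sketched below (illustrative Python; zero differences are dropped and tied absolute differences receive average ranks, following the usual signed-rank procedure):

```python
# Wilcoxon signed-rank statistic for the paired rank differences.
ROV_GTMA   = [8, 6, 5, 7, 4, 3, 1, 2]
ROV_TOPSIS = [7, 6, 4, 8, 3, 1, 2, 5]

d = [g - t for g, t in zip(ROV_GTMA, ROV_TOPSIS) if g != t]  # drop zero differences (A2)
abs_sorted = sorted(abs(x) for x in d)

# Assign average ranks to tied absolute differences.
rank = {}
i = 0
while i < len(abs_sorted):
    j = i
    while j < len(abs_sorted) and abs_sorted[j] == abs_sorted[i]:
        j += 1
    rank[abs_sorted[i]] = (i + 1 + j) / 2  # average of positions i+1 .. j
    i = j

W_plus  = sum(rank[abs(x)] for x in d if x > 0)  # rank sum of positive differences
W_minus = sum(rank[abs(x)] for x in d if x < 0)  # rank sum of negative differences
```

Here W+ = 15 and W− = 13 over the n = 7 non-zero differences (W+ + W− = n(n+1)/2 = 28); such a near-even split of the rank sums is exactly what yields a large p-value, consistent with the 0.9301 reported from R.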

Further Analysis and Discussion
This section focuses on two aspects: the implications of applying the rank-based, multi-measure comparative research flow and the recurrence-based implementation for GTMA, both of which are closely related to the contributions of this study. Section 4.1 discusses the main findings and positive implications based on the results from modeling and the comparisons using the proposed method. Sections 4.2 and 4.3, respectively, detail and benchmark the new implementation form for GTMA and demonstrate its superiority.

Results/Implications from Modeling/Comparisons
The results in Section 3.5 may represent the most important outcome of this study: the similarity that is shown and validated through the application of the methodologically proposed comparative research flow. In short, AHP-GTMA gives a ROV (ROV AHP-GTMA) that is very similar to the one obtained using AHP-TOPSIS (ROV AHP-TOPSIS).
In Phase (1), the similarity is shown when the elements in each ROV are viewed as a list. In these lists, the aggregated absolute distance in rank for the eight alternatives is only 10, while 75% of the alternatives have very similar ranks under both methods (only two have a difference in rank strictly greater than one place, and the "maximum deviation" is only 3). In Phase (2), the small distance measured between these two ROVs as vectors in space (a sum of squared rank differences of only 18, i.e., a Euclidean distance of about 4.24) also shows the similarity. Phase (3) then finds further supporting evidence, via the signed-rank test, that the two ROVs, when viewed as statistical variables, are likely drawn from an identical population.
With the progress of the three phases, in terms of methodology, the following points are shown or implied. (1) The proposed comparative research flow is applicable, as justified by its successful application. (2) The rank-based, multi-measure flow is suitable for the given comparison problem. As its input, the flow requires only the two ROVs, which are available from almost all MADM models. This rank-based property should be valuable in that the flow can be easily implemented in decision practice. The multiple measures that are included can cross-validate each other's results. Because these measures are designed and established from intrinsically heterogeneous theoretical aspects (i.e., arithmetic, geometrical, and statistical), they are particularly useful when similarity is observed using one measure but further confidence is sought from the others. Specifically, in this study, the result of Phase (2)'s geometrical measure provides evidence for the similarity observed using Phase (1)'s arithmetic measure, and the result of Phase (3)'s statistical measure further confirms the similarity justified by the two previous phases. In other words, the final outcome implies a "total similarity" between the two ROVs, which further supports Point (3). This multi-measure, cross-validation style should also be a valuable methodological property of the proposed flow. (3) The flow supports the target of this study: establishing confidence in applying AHP-GTMA in practice. As the major experimental result, AHP-GTMA gives a ROV that is quite similar to the one obtained using AHP-TOPSIS, and this has been cross-validated using multiple tangible and intangible measures. The similarity between the ROVs supports the effectiveness of the AHP-GTMA approach. As discussed previously, since AHP-TOPSIS is a mature and "widely used" MADM method, the fact that the two ROVs are quite similar supports the claim that AHP-GTMA is (and should also be) another effective and suitable method for solving a selection decision problem. In other words, the result gives confidence that using AHP-GTMA to solve other types of operational selection decision problems is feasible. (4) The proposed flow can be generalized. In this study, the methodological flow proposed in Section 3.1 is shown to support the required comparative research process between the ROVs obtained individually using AHP-GTMA and AHP-TOPSIS. Since most MADM methods produce a ROV behind the final preferential order (even simple additive weighting (SAW) or the weighted sum model (WSM)), the flow can be generalized to identify the similarity between any other pair of ROVs produced by any other two MADM approaches, where appropriate. This should be an important contribution that this study makes to the methodological field of MADM.
In addition to the implications drawn from the proposed flow and its application, there is another interesting observation: Section 3.4 shows that the ideal solution vector from TOPSIS is identical to the criteria weight vector obtained using AHP in the initial stage of AHP-TOPSIS, i.e., A+ = CWV*. This warrants extended discussion.
After scrutiny, we found that this is because of the normalization method that is used. In this study, D is normalized as D' using a method that also aligns the direction of optimization to TMTB (the-more-the-better). Therefore, in Table 1, each column of D' contains at least one element equal to 1, marking the alternative(s) that perform best on that column's criterion. When TOPSIS calculates the WiNDMa, these 1s are multiplied by the counterpart element in CWV*, depending on the criterion being considered. For example, in Table 1, following the normalization, only A7 performs best for c1 and is assigned a value of 1. When multiplied by the first element in CWV* (=0.15568452), this becomes exactly the assessed weight for c1, as shown in Table 4. Criterion by criterion, the ideal-identification process selects these elements from WiNDMa to produce the ideal solution vector, A+. For this example, the [A7, c1] cell in Table 4, whose value is 0.15568452, is regarded as the ideal value for the c1 criterion, so it is assigned as the first element of A+. Therefore, the normalization method applied to the source decision matrix at the beginning is responsible for this phenomenon. This finding, although minor, is interesting because it poses a question for future research: whether this affects any part of the solution process for the hybrid AHP-TOPSIS approach.
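The mechanism can be demonstrated in a few lines. The sketch below (illustrative Python, hypothetical 3 × 2 benefit-criteria data, not the paper's Table 1) uses a max-based linear normalization, which is one normalization consistent with the description above (each column's best alternative receives the value 1); cost criteria would need an extra TMTB alignment step:

```python
# Why A+ coincides with CWV*: after a max-based normalization, every
# column of D' contains a 1, so the column maximum after weighting is
# exactly the criterion's weight.

def normalize_max(D):
    """d'_ij = d_ij / max_i(d_ij): the column-best alternative gets 1."""
    cols = list(zip(*D))
    maxs = [max(c) for c in cols]
    return [[row[j] / maxs[j] for j in range(len(row))] for row in D]

D = [[10.0, 4.0], [8.0, 8.0], [6.0, 2.0]]  # hypothetical decision matrix
w = [0.6, 0.4]                             # hypothetical criteria weights
D_norm = normalize_max(D)
W = [[row[j] * w[j] for j in range(len(w))] for row in D_norm]
A_pos = [max(col) for col in zip(*W)]      # ideal solution = column maxima
```

Here `A_pos` reproduces `w` exactly, which is the phenomenon A+ = CWV* observed in Section 3.4.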

Code Optimization: The Recursive Implementation of the Permanent Value Determination Algorithm
The recurrence-based implementation of the permanent value determination algorithm is presented here in more detail. Theoretically, it can deal with a square matrix of any size for GTMA, so it breaks the limit of the previous implementation based on dynamic programming (DP), wherein the dimension was fixed by the problem. It should therefore be an additional contribution of this study, because permanent value determination for an ASAM matrix is key to the computations of GTMA and because the DP-based algorithm in [4] requires further code optimization (owing to the aforementioned fixed problem size and because it requires separate functions to complete the computational task).
Using the concept of DP, Zhuang et al. [4] wrote a series of functions to compute the permanent value of each ASAM matrix. In contrast to the determinant, calculation of the permanent value of a numeric square matrix was not supported by R's default packages. Thus, an R function named Per2×2() was written to obtain the permanent value of a 2 × 2 matrix. This function is called inside the Per3×3() function that calculates the permanent value of a 3 × 3 matrix; Per3×3() is in turn called inside Per4×4(), and so on, up to Per7×7(), where 7 was defined by the problem size. This DP-based implementation style follows this logic for any problem case where #Criteria = Ncrit: each PerNcrit×Ncrit() expands its matrix and calls Per(Ncrit−1)×(Ncrit−1)() on the reduced matrices, where the symbol "−" is the row/column deduction operator (i.e., indicating which row and/or column of matrix M should be removed to obtain a new matrix of decreased dimension), Per2×2() is the fundamental building block function written according to the mathematical definition, and Ncrit is the number of criteria defined by the actual problem size. As can be seen from Equation (31), with the DP-based implementation style, a series of (Ncrit − 1) permanent functions must be written to complete the final calculation job of PerNcrit×Ncrit(). For example, for the used problem case where Ncrit = 7, six functions from Per2×2() to Per7×7() were written. This becomes an inevitable drawback as the problem size grows. If, for another decision problem, the degree is higher (i.e., Ncrit > 7 or ≫ 7), more Pern×n() functions must be written, so debugging and maintenance would be more difficult and the process less resilient to possible changes in problem size.
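The drawback of the fixed-size chain is easiest to see in code. The sketch below (illustrative Python; the original chain in [4] is in R) shows the hand-written per-order functions, of which a real problem with Ncrit = 7 would need six:

```python
# DP-style fixed-size chain: one hand-written function per matrix order.

def per_2x2(M):
    """Permanent of a 2 x 2 matrix, by definition."""
    return M[0][0] * M[1][1] + M[0][1] * M[1][0]

def minor(M, col):
    """Remove row 0 and the given column (the '-' operator in Eq. (31))."""
    return [row[:col] + row[col + 1:] for row in M[1:]]

def per_3x3(M):
    return sum(M[0][j] * per_2x2(minor(M, j)) for j in range(3))

def per_4x4(M):
    return sum(M[0][j] * per_3x3(minor(M, j)) for j in range(4))

# ... and so on, up to per_7x7(): Ncrit - 1 near-identical functions.
```

Each function differs only in the order it handles, which is exactly the duplication that the recursive implementation in the next subsection removes.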
This study addresses the issue by implementing a recurrence-based permanent function, which also lifts the limit on ASAM matrix dimension so that a matrix of any order can be handled.
Following the divide-and-conquer concept, a recursive PerNcrit×Ncrit() function takes MNcrit×Ncrit as its parameter but divides the problem into Ncrit sub-problems. These sub-problems are all solved with Per(Ncrit−1)×(Ncrit−1)(M(Ncrit−1)×(Ncrit−1)) in a lower dimension, where the matrix order is decremented by 1. Repeating this process until n = 3, the problem becomes an indivisible unit (i.e., Per3×3(M3×3)), which can easily be hard-coded by reference to the definition of the permanent value. Depending on the results from the lower levels, the problem is "conquered back" from n = 3, n = 4, . . . , until n = Ncrit, when the entire problem is solved. With this recursive implementation style, only two functions are required to complete the calculation, regardless of the problem size and matrix dimension: a recurrence-based Pern×n(Mn×n), whose boundary condition is n = 3 ("bouncing back" when this condition is met), and the Per3×3(M3×3) basic building block.
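The two-function structure just described can be sketched as follows (illustrative Python; the study's actual Algorithms 3 and 4 are in R). The expansion is along the first row, with all terms added, unlike the determinant's alternating signs:

```python
# Recurrence-based permanent: only two functions, for any matrix order.

def per_3x3(M):
    """Base case, written out from the definition of the permanent."""
    return (M[0][0] * (M[1][1] * M[2][2] + M[1][2] * M[2][1])
          + M[0][1] * (M[1][0] * M[2][2] + M[1][2] * M[2][0])
          + M[0][2] * (M[1][0] * M[2][1] + M[1][1] * M[2][0]))

def perm_recur(M):
    """Permanent of a square matrix of any order n >= 3."""
    n = len(M)
    if n == 3:
        return per_3x3(M)  # boundary condition: "bounce back" at n = 3
    total = 0
    for j in range(n):
        sub = [row[:j] + row[j + 1:] for row in M[1:]]  # drop row 0, column j
        total += M[0][j] * perm_recur(sub)              # conquer the sub-problem
    return total
```

Note that this expansion has O(n!) cost, which is acceptable for the small Ncrit typical of MADM problems (e.g., Ncrit = 7 here) but would not scale to large matrices.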
Algorithms for these two required functions are presented as Algorithms 3 and 4, respectively; both have been implemented in R to determine the permanent values in this study. Algorithm 3 shows the Per3×3() building block, written according to its mathematical definition, first, because it is called by the main recurrence-based permanent value determination algorithm in Algorithm 4.

The Performance of the Recursive Implementation
The performance of the new GTMA implementation is an interesting issue. Owing to space limitations, only the performance of the improved recursive algorithm, rather than that of the entire R program, is measured, because the computations that determine the permanent values create the bottleneck in GTMA computations.
To benchmark this, the program is revised and a timing mechanism is inserted at the outer calling layer, i.e., at the ComputePermanentValuesForASAMs() function that calls PermRecur() (i.e., Function-2 calling Function-4).
For the problem size (Nalt = 8 and Ncrit = 7) and the data of the used decision case, the outputs, which are timestamps in units of seconds.milliseconds, are listed in Table 6. The start and stop times for each call to the PermRecur() function in R are shown in the second and third columns. The last column and the last row give auxiliary information for the benchmarking process.
The results show that, on the testing platform (a portable PC with an Intel i7 2.6 GHz CPU, 16 GB RAM, an HDD and two SSDs, Windows 10 64-bit as the OS, and RStudio v1.0.143 and R x64 v3.3.1 installed), each call to the PermRecur() function requires about 0.253 s for one 7 × 7 ASAM matrix. For this decision case, only about 2.022 s are required to complete the permanent value calculation step for GTMA, subject to the given problem size.
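The benchmarking idea, i.e., timestamping each permanent call at the outer calling layer, can be sketched as below (illustrative Python; the function names here are stand-ins for the paper's R functions, and the tiny 2 × 2 example is hypothetical):

```python
import time

def compute_permanents_for_asams(asams, per_fn):
    """Call per_fn on each ASAM, recording the wall-clock time per call.

    Mirrors the idea of wrapping PermRecur() calls with start/stop
    timestamps at the outer calling layer; returns (value, seconds) pairs.
    """
    timings = []
    for M in asams:
        start = time.perf_counter()
        value = per_fn(M)
        stop = time.perf_counter()
        timings.append((value, stop - start))
    return timings

# Hypothetical usage with a hard-coded 2 x 2 permanent as per_fn:
timings = compute_permanents_for_asams(
    [[[1, 2], [3, 4]]],
    lambda M: M[0][0] * M[1][1] + M[0][1] * M[1][0],
)
```

Summing the per-call durations gives the total time for the permanent value step, corresponding to the 2.022 s reported for the eight ASAMs of this case.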
Finally, in terms of efficiency, the recursive implementation that facilitates the solution process for GTMA in Section 3.3 should be another contribution of this study, and the implementations of the algorithms in Section 4.2 can be generalized to compute the permanent value of a square matrix of any dimension. In terms of decision support system (DSS) design (rather than decision problem modeling), this implementation method is shown by benchmarking to be efficient. Note that a similar function (for permanent value computation in R, with a lower default number of decimal places) has been offered by the BonsonSample package only since September 2017, whereas the relevant implementation work for this study started in May 2017. During the experimental stage of this study, in 2017, neither R nor its packages appeared to support the required permanent value computation, so the relevant functions were implemented on the R platform in parallel and developed for the MADM research purpose.

Conclusions
This work started with doubt about the effectiveness of AHP-GTMA in aiding industrial selection decisions. A systematic comparative research flow was proposed, according to which the effectiveness of the relatively new AHP-GTMA approach was examined by reference to the results of another "widely used" approach, AHP-TOPSIS. Based on this encouraging final outcome, the possible contributions of this study are reviewed and summarized below.
The first contribution relates to the proposed methodological flow, which determines whether there is similarity between the final solutions obtained using two MADM methods. As only the final rank information is used (rather than other information from the solution process), and only the ROVs in these final solutions (available from almost all MADM models) are required, the proposed flow should be a convenient tool that can easily be applied in operational practice. Beyond the ROVs from AHP-GTMA and AHP-TOPSIS (as in this study), it can be applied to determine the similarity between any two ROVs produced by any other two MADM methods, where appropriate. This is the methodological reason for calling the flow "rank-based".
In addition, the flow includes several heterogeneous measures to determine the similarity between the ROVs from the two methods. It suggests establishing both tangible and intangible measures, so that the results from these measures can be cross-validated or confirmed. The tangible measures can be arithmetic (i.e., the difference vector, the absolute distance in rank for an alternative, and the aggregated absolute distance) and/or geometrical (i.e., the Euclidean distance), while other concepts from the data analytics field can also be integrated (e.g., the number of outliers, the percentage of alternatives ranked very similarly, and the maximum deviation in rank). Another niche is the establishment of an intangible measure from the perspective of statistics: the non-parametric test concept frequently used in the data science field is introduced, and it fits the problem context. These measures are important for eventually claiming the similarity between the two methods on a solid basis. The above is also the methodological reason for calling the proposed flow "multi-measure".
The second contribution pertains to model selection in real practice. Through the application of the comparative research flow, the operational effectiveness of the AHP-GTMA model in solving the shredder selection decision problem is shown (verified by the fact that this decision model obtains a solution quite similar to that obtained by AHP-TOPSIS, the "widely used" model); thus, confidence in using this relatively new but less-applied model in future practice is established.
The third contribution, although relatively minor for this study, concerns model implementation and is two-fold: the algorithmic optimization of the GTMA computations and the implementation of TOPSIS in R. The recursive implementation that computes the permanent value of a square matrix of any dimension facilitates the solution process for GTMA and contributes to DSS design; it avoids writing many functions for a larger problem size and eases maintenance. The implementation of TOPSIS in R is also unique to this study.
There are also boundary conditions. In this study, the datasets were sourced from a paper shredder selection decision case, based on which the findings (e.g., the similarity in the results) were justified. Even though these data come from a real case, it would be of interest to know whether another existing selection decision problem, in another management decision domain, can be modeled and solved using the same methods as this study (i.e., AHP-GTMA and AHP-TOPSIS), and whether the result would be analogous to this study's (i.e., yielding similar rank orders); this could be a subject for future study.
Finally, identifying similarities in results across more MADM methods is another significant topic for future study, for which the proposed similarity confirmation method would be of use.

Acknowledgments: Parts of this work were presented and refined over a time span, so the authors sincerely thank these conferences for providing a platform for exchanging ideas and thank the anonymous colleagues for their valuable recommendations.

Conflicts of Interest:
The authors declare no conflict of interest.