Article

A Knowledge Discovery Education Framework Targeting the Effective Budget Use and Opinion Explorations in Designing Specific High Cost Product

1 Department of Management Science, College of Management, National Chiao Tung University, Hsinchu 30010, Taiwan
2 Department of Civil Engineering, College of Engineering, National Kaohsiung University of Science and Technology, Kaohsiung 80778, Taiwan
3 Department of Information Management, College of Management, National Defence University, Taipei 11258, Taiwan
* Author to whom correspondence should be addressed.
Sustainability 2018, 10(8), 2742; https://doi.org/10.3390/su10082742
Submission received: 30 May 2018 / Revised: 20 July 2018 / Accepted: 23 July 2018 / Published: 3 August 2018

Abstract
For an R&D institution designing a specific high-investment-cost product, the budget is usually ‘large but limited’. Allocating such a budget to the directions with key potential benefits (e.g., core technologies) requires, at first and at least, a priority over the involved design criteria, so as to discover the relevant decision knowledge for a suitable budgeting plan. This problem becomes crucial when the designed product is relevant to the security and military sustainability of a nation, e.g., a next generation fighter. This study presents a science education framework that helps to obtain such knowledge and close the opinion gaps. It involves several main tutorial phases: constructing and confirming the set of design criteria, establishing a decision hierarchy, assessing the preferential structures of the decision makers (DMs) (individually or on a group basis), and performing decision analyses designed to identify the homogeneity and heterogeneity of the opinions in the decision group. The entire framework was applied in a training course held in a large R&D institution, after which the trained staff successfully applied these knowledge discovery processes to plan the budget for the fighter design works and to close the opinion gaps present. Beyond the budgeting priority (e.g., the discrimination between ‘more important criteria’ and less important ones), the staff’s practical exercises also yielded several interesting empirical findings. For example (but not limited to these), the results from using two measures (statistical correlation vs. geometrical cosine similarity) to identify the opinion gaps are almost identical; DMs’ considerations under the various constructs are sometimes consistent, but often hard to keep consistent; and the two methods (degree of divergence (DoD) vs. number of observed subgroups (NSgs)) used to understand the diversity of opinions under the constructs yield different results. The proposed education framework meets the recent trend of data-driven decision-making, and the teaching materials also provide an update for science education.

1. Introduction

For institutions focused on the R&D of new, high-tech, high-investment-cost products, the scale of the budget is usually large. However, the budget is nevertheless insufficient to cope with every aspect of the requirements, even though it is huge relative to the budget allocated for developing other products [1,2]. This is especially true when developing a single high cost product [3,4], e.g., a fighting aircraft (fighter) [5]. As such, to compete for the resources, departments responsible for designing the different functions of such a product often insist on their individual interests out of departmental selfishness (i.e., they usually think that the features they develop are the most important). As a consequence, quarrels over such resource conflicts are observed, and the decision makers (DMs) in such a research institution are in charge of coordinating and leveraging the relevant resources.
Based on the belief that the ‘limited’ budget should be allocated to the directions with key benefits (e.g., the real core technologies to be developed) and that a ‘satisfactory’ investment should be pursued (rather than an ‘optimal’ one, which is impossible under the resource constraints), a priority over the involved product design criteria should be determined [6,7] for resource allocation. This priority is better determined scientifically, because doing so not only mitigates internal pressures but also avoids a ‘wrong decision’ and provides a numerical basis for an appropriate budget allocation (the subsequent R&D works being irreversible). Therefore, a staff training course is required that incubates the budgeting staff’s ability to understand the real intentions and/or preferences of the strategic DMs, and a systematic yet scientific education framework should be established to guide the process of this course.
This study fills the gap by presenting one such science education framework to enhance this ‘decision ability’ and to perform the relevant knowledge discovery process, so as to: (1) understand the knowledge that can be discovered from the DMs’ minds, which is helpful for making a scientific budgeting plan for the design of the product on a group basis; and (2) understand, numerically and visually, the heterogeneity (opinion gaps) and homogeneity present inside the DM group, so as to make suggestions or justify the ways to close the gaps. The framework has been applied to hold a training course for the DMs in a large R&D institution, which received a ‘limited’ budget and the task of designing a next generation fighter. The tutorial courses for (1) were taught, and a strategic budgeting plan that could meet the reached consensus was successfully made according to the results obtained from real data. The tutorial courses for (2) were also taught and, as a result, the staff successfully discovered the relevant knowledge about closing the opinion gaps by using numerical and graphical decision analysis methods. This experience suggests that such a framework can be generalised to other science education programs held in practice to establish staff abilities, where the focus and context are similar.
The proposed framework also serves two other purposes for relevant science teaching: (1) it is a good material for teaching the latest data-driven decision-making (DDDM) theories/methods, and (2) it is a suitable update to the fighter selection example, which has been used in multi-criteria decision-making (MCDM) textbooks for decades. For the former, some methods from the data analytics field have been included in the teaching material, and the results obtained from them are data-driven (as will be shown later). For the latter, the case studied in the training course, the budget-use problem before designing a next generation fighter, is a suitable update for the ‘conventional fighter selection’ case that has long been used as an exemplar for teaching non-DDDM methods. Together, the above two paragraphs form a short summary of the goals of this study.
Section 2 reviews the literature. The state of the art of educational training for R&D institutions is reviewed. As the decision case used here (i.e., the design budget planning problem) is important for demonstrating the science education framework and for addressing the main research question of this study, it is also reviewed. Following this, the inclusion of the relevant decision criteria via a literature study, which was the work done in the initial phase of the proposed education framework, is also presented as part of this section. Section 3 introduces the methodology of this study. The application of the middle phases of the proposed framework, in terms of the Delphi method and AHP, as well as the main empirical results obtained from the training course exercises, is illustrated in Section 4. Section 5 demonstrates several extensive analyses to discriminate and discover similarities in the individual opinions of the DMs, which constitute the final phase of the framework. Section 6 presents the relevant discussions, conclusions, and recommendations for future work.

2. Literature Study

In this section, the relevant literature is reviewed step by step. In Section 2.1, educational courses that were successfully held for training employees in various institutions are reviewed; it is found that few articles discuss the specific topic of management (decision) science education training in R&D institutions, let alone propose a systematic framework for such training. As the decision case is important, because it is used in the training course to illustrate the proposed educational framework, is its first successful application, and can be used to address the main research question, its problem context is discussed in Section 2.2. Section 2.3 establishes the criteria set for the studied decision case by reviewing the relevant literature; it is also a good example to demonstrate the review-based criteria set filtering/establishment process of the framework (see Phase I in Figure 2).

2.1. Educational Training for R&D Institutions: State of the Art

Several studies discuss education/training in businesses and the relationship between education/training and the performance of the business institution. Bartel found that businesses that implemented new employee training programs in 1983 received significantly larger increases in labour productivity growth from 1983 to 1986 [8]. Storey and Westhead found that the take-up of management training in small firms is lower than in large firms [9]. Loan-Clarke et al. mentioned that the topic of management training and development (MTD) in small enterprises is relatively under-researched. In their study, the results showed that investments made in MTD were significantly influenced by the organisational characteristics of ownership, size, number of managers, and family management, but 85% of the studied sample considered investment in MTD to be linked to business success [10]. Moreover, Harel and Tzafrir found that training practices affected perceived organizational performance in a statistically significant way [11]. Antonacopoulou considered that learning is linked to competitiveness at the levels of both the national economy and organisations [12]. Bartel believed that the employer’s return on investment (ROI) in training might be much higher than that of other actions [13].
Ibrahim and Soufani mentioned that, for developing and growing small and medium enterprises (SMEs), management training can be treated as an effective way to provide enterprises with management expertise. They also found that the SME sector in Canada suffers from a high failure rate and that most failing firms lacked management skills and planning; this situation can potentially be improved with training and education in different business areas [14]. Simpson et al. investigated the possible factors of success in small service sector organizations, and their study showed clear evidence that education/training had a positive effect on the success of the business [15]. Aw et al. described how Taiwanese firms invested heavily in R&D and worker training to strengthen their ability to benefit from their exposure to the export market [16]. Salas et al. emphasize the importance of training for organizations, because training and development activities allow organizations to adapt, compete, excel, innovate, produce, improve service, reach goals, and be safe. Moreover, they also mention that, every year, organizations in the United States (US) spend billions on training [17].
From the above literature, it is understood that companies expect to improve their performance and strengthen their operational efficiency through employee education and training. Nevertheless, it is also understood that none of these studies has highlighted (or is related to) training issues for/in R&D institutions, let alone discussed the topic of management science education for establishing the ‘decision ability’ of employees. As this study aims to propose a suitable framework for such education and a guide for teaching the relevant training courses, evidence supporting the above claims can be glimpsed from the institution case used in this study.
The studied case is a large R&D institution whose mission is to develop various types of aviation platforms and a number of related devices. Every year, it invests a considerable budget in employee education and training: 4.25% of the total personnel budget is used for this purpose. The accumulated time spent on education and training is nearly 28,000 h/year (the annual sum, over the courses held, of the number of participants times the course hours), which determines the institution’s total training cost. In addition, each employee receives at least 50 per-capita training hours per year.
However, given the budget allocated for employee training, it can be observed that, in the case institution, the courses were all taught for the following knowledge domains: engineering research, system development, financing/accounting, and even routine administrative tasks. This empirical observation carries two very important implications. First, the training courses that established the employees’ various abilities in the R&D institution have been held for years, yet training in such institutions is not well studied (given the lack of relevant studies in the literature, as discussed previously). Second, and more importantly, these training courses do not include or involve a management (decision) science tutorial, even though there have been tutorials for establishing other abilities in the management domain in a broader sense (e.g., the courses on financing/accounting and administrative tasks).
Therefore, given these observations, the perspective of management science or decision analysis needs to be addressed and strengthened in the education and training programs of an R&D institution. In the studied case institution, most employees are scientists and engineers; only 8% of the employees are finance, human resource, and administration staff (including the inter-departmental budgeting staff who make the budget plan, which is to be approved by the higher level decision makers). Nevertheless, if staff possess professional knowledge and ability in management (decision) science, they become able to make a better plan for budget allocation, especially when the budget is ‘large but limited’ for developing a specific high cost product (see Section 1). This addresses the novelty of the management science educational framework proposed by this study.

2.2. The Problem Context of the Specific High Cost Product Design Decision Case

The style of contemporary wars has changed, and the outcome is often determined by the advantages of weapons. This requires advances in a country’s technological developments and superior abilities to equip and seamlessly build the relevant technologies into weapon systems.
For an air force, one of the most important weapon systems is the fighting aircraft (i.e., the fighter). The functional capability, performance, and combat power of fighters have been, and still are, key topics of military strategy, and they strongly affect the overall security of a nation. Nowadays, the demand for establishing a new fleet of next generation fighters as a substitute for the traditional fleets is emerging fast [18], because destabilisation is rising in some areas of the world due to military expansion, e.g., [19]. Therefore, the R&D of fighting aircraft and the relevant purchases are indispensable and inevitable for enhancing the overall military strength of a country.
To fulfil this demand, some countries, such as the US, Russia, and the PRC, have developed several types of fifth generation fighters [20] (see Figure 1 for examples); in other words, the strategic plan in these countries mainly concerns fighter design. Some other countries purchase and source next generation fighters and establish an additional fighter fleet as the new power of their air force [21,22,23]; in other words, the strategic plan in these countries mainly concerns fighter selection. A hybrid of these two cases is the development project of the F-35, the most expensive fighter in military history. In this project, the US provided the principal funding, and additional minor funds came from partner countries. These partner countries are either NATO members or close US allies, namely the UK, Italy, Australia, Canada, Norway, Denmark, the Netherlands, Turkey, Japan, South Korea, and Singapore [24]. They decided to join the F-35 development program because they can purchase and own the product afterwards, so as to enhance their air combat ability by equipping a fleet of such fighters. At the other extreme are countries such as Taiwan (ROC), which would like to select and buy next generation fighters, but sourcing them from one country would displease another; eventually, it decided to design a new fighter by itself. Figure 1 shows some well-developed next generation fighters.
As the latter fighter selection decision problem has already been addressed in the literature, this study focuses on the fighter design decision problem. The proposed framework can help guide the design and R&D process of next generation fighters as well as the allocation of the ‘huge but limited’ budget. The fact that the fighter selection problem has been addressed for decades can be seen from the earlier textbooks on multi-attribute decision-making (MADM) [25], which deal with the selection of a ‘traditional fighter’, to a recently published article on the selection of ‘new fighters’ [26]. Apart from fighting aircraft, the selection problem for ‘normal aircraft’ (to meet passengers’ travel and airline companies’ route network demands) has also remained popular until recently [27]. As can be seen, these studies rarely tackled the design problem of next generation fighters systematically, so as to unveil the relevant knowledge about budgeting for the design and R&D works of next generation fighters. In other words, the problem context of the specific high cost product design case shows its niche and also meets the academic criteria for case selection: the studied target is new (compared to conventional fighters) and the solved problem is also novel (compared to gaining decision knowledge for aircraft selection).
To have a fleet of next generation fighters, there are two main types of decisions: selection and design. A shared feature of these decisions is that the monetary amount involved is large compared to designing or selecting other types of products. For those facing the fighter design decision, the R&D project usually receives a big budget, but the budget is nevertheless insufficient to cover all of the required features. Therefore, with such a ‘huge but limited’ budget, it is necessary to allocate it properly to the designs that are really critical. Identifying the critical designs and technologies ensures that the relevant R&D works can be executed within the given limited budget, and doing so may yield a more satisfactory outcome. But how can the relevant budgeting staff understand which set of technologies the strategic DMs regard as really key and essential to the project? Helping these staff obtain such knowledge is the main purpose of the training course, which was designed using the proposed educational framework. This is, exactly, the main research question of the study.

2.3. Criteria Set Establishment

The current situation reveals that next generation fighters must possess excellent combat capabilities to fulfil the demands of a variety of new forms of future wars. According to a review of the recent developments of fighters around the world, a next generation fighter should show excellent abilities (or capabilities) in the following aspects: (1) hypersonic flight, (2) super-cruise capability, (3) vertical/short take-off and landing capability, (4) super manoeuvrability, (5) multi-mission execution capability, (6) beyond visual range awareness capability, (7) advanced cockpit and human-machine interface, (8) rapid electronic warfare countermeasures and interference capability, (9) super information advantage/artificial intelligence capability, (10) stealth, (11) beyond visual range integrated attack capability, and (12) various weapon systems integrating capability. That is, the development of a next generation fighter not only requires considerable time and a huge amount of resources intrinsically, but it also involves a wide range of functional aspects. The above list of aspects in fact forms a set of criteria for the design decision problem. As shown in Section 3, these criteria are organized under four constructs, and a decision hierarchy including them is confirmed using the Delphi method (i.e., Phase II). The point here is to illustrate Phase I of the proposed educational framework with the review process performed by the staff in the course, so as to lend support to the inclusion of the abovementioned criteria for the decision case.
Air superiority is one of the required capabilities of a next generation fighter. Generally, a fighter with air superiority is considered to have an effective performance in a dogfight thanks to stealth and high manoeuvrability, and it should be able to surprise the enemy while surviving missile fire. Moreover, a new generation fighter aircraft also requires super-cruise ability [28]. The F-22 is a well-known 5th generation fighter aircraft: a combination of speed, stealth, manoeuvrability and integrated avionics gives the F-22 multi-role fighter the ability to gain access to, and survive in, high threat environments [29]. For future combat aircraft, Munjulury et al. proposed a stealth design with super-cruise capability [30]. Yang et al. also claim that a modern (modernized) fighter should have stealth, super-manoeuvrability, super-sonic cruising, and super avionics for battle awareness and effectiveness [31]. In addition, the USAF has long been concerned with, and invested in, hypersonic military aircraft development, especially the hypersonic bomber. Since hypersonic military aircraft can quickly reach any zone on the earth within a few hours and accomplish their missions [32], hypersonic flight is another key capability that a next generation fighter should have.
For air superiority, in addition to stealth capability, other capabilities are also required, such as secure bases, superior situational awareness, and BVR missiles [33]. Lahtinen et al. also emphasize that interception capability beyond visual range is one important feature of a modern fighter [34]. Hence, a next generation fighter would be equipped with BVR missiles that let pilots fire at the enemy from far away [35], and it should also have a strong situational awareness capability to detect the enemy’s information [31]. So, a next generation fighter should have BVR awareness and attack capabilities. In this electronic age, a fighter’s electronic warfare system is an important part of its electronic self-defence system [36]. Usually, powerful electronic warfare countermeasures equipment is an important self-defence system used on board a fighter [37]. A modern fighter aircraft should be equipped with precise electronic warfare equipment [38,39] to execute electronic warfare/reconnaissance missions. In short, powerful electronic warfare countermeasures and interference is another capability that a modern fighter should possess.
Moreover, to suppress the enemy’s air defence, a new fighter should have the capability to play multiple roles in combat and execute multiple missions [40]. Murman described multi-mission effectiveness as one of the requirements of a new generation fighter [41]. Tirpak also mentioned that operational expenses can be reduced if multi-role military aircraft have a good capability for integrating weapon systems [42]. In the F-35 fighter program mentioned previously, the manufacturer of the F-35C, Lockheed Martin, set a new standard in weapon systems integration, including lethality, maintainability, combat radius, and payload, bringing true multi-mission power projection capability from the sea [43]. Many multi-role fighter aircraft have been employed to execute multiple missions in many countries, the PRC being one of them: its PLAAF is acquiring and fielding a new generation of multi-role fighters [44]. Therefore, for a next generation fighter, the capability to integrate various weapon systems is another important function.
For modern fighter pilots, too much information can come from the many automated sensors/devices or from their co-working teams, so information overload is a serious issue. It is important for pilots of modern fighters to have supporting systems that filter and provide the information that is really necessary and assist them to integrate seamlessly with new technologies and new warfare strategies [45]. As the development of human-machine (human-computer) interaction (HCI) for fighters continues, it is necessary for a new generation fighter to be equipped with a good human-machine interface [46]. Large touch screen technology is one such interface in an advanced cockpit [47]. For a modern fighter, an advanced cockpit can improve the pilot’s situation awareness (SA) while alleviating workload [48]. Wang et al. proposed that the fighter cockpit interface should be based on eye movement tracking, so as to optimize the instrument configuration and improve the HCI in the fighter cockpit [49]. Furthermore, Groh emphasized that the US military should leverage the power of information in network-centric warfare: since fighters play key roles in network-centric warfare, they should have a superior advantage in terms of information [50]. James also emphasized the importance of continual information advantage for war fighters over adversaries [51]. In [52], the roles of the most advanced expert systems in military applications are addressed, and the F-22 Raptor is mentioned as a next generation fighter example. This reflects the earlier claim made in [53]: it is necessary to build weapon systems with artificial intelligence (AI) tools and techniques.
Finally, if a runway is damaged or the landing distance is limited, a next generation fighter should have vertical/short take-off and landing capability; an additional benefit of this capability is to improve future close air support (CAS) capabilities [54].
Therefore, from the features summarized in the fifth generation fighter aircraft cases and from the above long but thorough literature study, it is found that the design of a next generation fighter requires the following functions: stealth, super manoeuvrability, super cruise, hypersonic flight, BVR awareness, BVR attack, good electronic warfare countermeasures and interference, multi-mission execution, various weapon systems integration, superior information advantage with artificial intelligence, and vertical/short take-off and landing. Furthermore, it should also have a good human-machine interface in an advanced cockpit to assist pilots in handling critical information and accomplishing their missions smoothly. As such, the decision to design such a new fighter involves the abovementioned 11 functional capabilities and one advanced interface as the evaluation criteria. The operational definitions of these decision factors are summarized in Table 1.
Finally, the ‘fighting aircraft selection problem’ has long been a classic textbook example of MADM methods [25]. However, as this example has been used for several decades, both the case data and the selection criteria set are relatively dated (‘dated’ relative to the new types of criteria, mentioned above, considered for developing new fighters). Besides, as discussed previously, most recent MADM models are proposed for solving the ‘selection’ problem of (existing) fighting aircraft or normal aircraft [26,27], rather than the ‘design’ and R&D issues of fighters. This feature distinguishes the case studied here from other aircraft (type) selection cases, in terms of the intrinsic type of problem to be solved.

3. Methodology

3.1. The Proposed Knowledge Discovery Education Framework: An Overview

The proposed science education framework is a systematic flow that organizes several fundamental steps helpful for teaching the concepts and the relevant methods, so as to discover the relevant knowledge about the budget-use decision scientifically.
To illustrate, using the decision problem case of this study as the example: in Phase I, it was taught that the design criteria for developing the next generation fighter should be studied thoroughly. Clues for the inclusion of the 12 relevant criteria were sought in both the industrial literature and the academic literature (see Section 2.3). This was followed by Phase II, which establishes a decision hierarchy that organises these criteria under a total decision goal and confirms it with the expert DMs using the qualitative Delphi method.
In Phase III, the staff were guided to conduct face-to-face interviews with the DMs, who were asked to fill in the expert questionnaires designed according to the decision hierarchy confirmed in Phase II. This yielded the pairwise comparison matrices for each DM. The consistency analysis of the analytic hierarchy process (AHP) was applied to validate the answers, and the trained staff re-interviewed a DM if the result was negative (inconsistent). Following this survey, the criteria prioritizing phase of AHP (i.e., criteria weight vector determination, or CWV-determination) was taught and performed, so as to assess, for each DM, the priority of the constructs w.r.t. the total decision goal as well as the priority of the criteria w.r.t. each construct. The staff were then taught how to aggregate these individual opinions, so as to obtain knowledge about the real priority of the design (for effective budget use) that is scientific and numerical.
In the final Phase IV of the education training, ways of performing decision analysis were taught to the staff extensively. The aims of this phase are to obtain an overall picture of the individual opinions in the interest group and to discriminate and discover similarities among the individual opinions. To do so, some methods and tools frequently used in the recent DDDM field were taught. For example, by treating the individual CWVs as (valued) statistical variables, the Pearson correlation coefficient between each pair of ‘variables’ can be identified. This forms the basis for establishing a ‘correlation matrix’, and the staff were taught to visualise such a matrix (for the constructs w.r.t. the total decision goal) as a heat map. In addition, they also learned how to build an undirected network graph for social network analysis (SNA), in terms of in-between closeness, by utilising the data elements in the matrix.
Apart from the statistical aspect, it was also taught that these CWVs can be viewed, geometrically, as multi-dimensional vectors in space. This allows the computation of the cosine similarity index between every two CWVs, and the staff were taught to plot the computed cosine similarity values as heat maps (one for the constructs and further ones for the criteria under each construct) and to utilise them as the base information for plotting the network diagrams.
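To make the two measures concrete, the sketch below computes both a Pearson correlation matrix and a cosine similarity matrix from a set of construct-level CWVs. The numbers are purely illustrative placeholders (the survey data are not reproduced here), and the snippet is only a minimal Python sketch of the idea rather than the exact course material; either matrix can then be rendered as a heat map or thresholded into an adjacency matrix for an undirected SNA graph.

```python
import numpy as np

# Hypothetical construct-level CWVs (rows = DMs, columns = the four constructs
# PC-A..PC-D). These values are illustrative only, not the study's survey data.
cwvs = np.array([
    [0.40, 0.10, 0.25, 0.25],
    [0.35, 0.15, 0.20, 0.30],
    [0.30, 0.12, 0.28, 0.30],
    [0.38, 0.10, 0.22, 0.30],
])

# Statistical view: Pearson correlation between every pair of DM opinion vectors.
corr_matrix = np.corrcoef(cwvs)

# Geometrical view: cosine similarity between every pair of DM opinion vectors.
unit = cwvs / np.linalg.norm(cwvs, axis=1, keepdims=True)
cos_matrix = unit @ unit.T

print(np.round(corr_matrix, 2))   # basis for the correlation heat map / SNA graph
print(np.round(cos_matrix, 2))    # basis for the cosine similarity heat map
```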
More importantly, by doing this work, the information about how close the opinions of the DMs were could be observed and cross-validated using two separate methods, and knowledge about the heterogeneity and homogeneity of the DM opinions in the interest group was obtained using different measures. In the tutorial, another important visualisation method, the decision tree method for clustering analysis, was also taught. Based on the individual opinion data, this method shows how the DM opinions, overall, are clustered into subgroups, which provides additional information to corroborate the analytical results from the heat maps and network diagrams, i.e., for prudence.
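The exact tree-based clustering procedure used in the course is not detailed here; as a stand-in, the following sketch uses hierarchical (Ward) clustering on the same kind of opinion vectors to show how subgroup membership can be extracted and how the merge tree can be visualised. The data and the choice of two subgroups are hypothetical.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Hypothetical construct-level CWVs for five DMs (illustrative only).
cwvs = np.array([
    [0.40, 0.10, 0.25, 0.25],
    [0.38, 0.10, 0.22, 0.30],
    [0.30, 0.12, 0.28, 0.30],
    [0.20, 0.30, 0.26, 0.24],
    [0.22, 0.28, 0.26, 0.24],
])

# Build the merge tree over the opinion vectors and cut it into two subgroups.
Z = linkage(cwvs, method="ward")
subgroups = fcluster(Z, t=2, criterion="maxclust")
print(subgroups)   # subgroup label per DM, e.g., close-opinion DMs share a label

# scipy.cluster.hierarchy.dendrogram(Z) renders the tree for visual inspection.
```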
Note that, unlike in Phase III where the knowledge obtained by the staff was for budget allocation, the analytical results obtained in Phase IV helped the staff to narrow the opinion gaps, as such knowledge is important for numerically supporting the internal coordination work. In addition, the DDDM-based methods taught in this phase are relatively new to science education, in contrast to the conventional teaching materials for conducting decision analysis, e.g., sensitivity analysis.
The components that constitute the proposed science educational framework are summarized in Figure 2. Not limited to the example training course held, it can be generalised to guide other tutorial courses that enhance the ‘decision ability’ of staff facing a similar budget-use problem when designing a specific high cost product. A suitable application of the framework can thus answer the main research question of this study, namely “how can the relevant budgeting staff understand which set of technologies the strategic DMs regard as really key and essential to the project?”, as mentioned previously in Section 2.2.

3.2. Methods Used for Phases I and II

The process of obtaining the operational definitions of the decision factors and establishing the set of criteria in Table 1 is exactly the methodological body of Phase I of the educational framework (see Figure 2), which was taught to the users in the training course: to design a specific high cost product, the initial decision factors should be carefully selected and determined from both industrial data catalogues and the literature. A substantial body of reviewed literature (as reviewed above) is necessary to establish a ‘confident’ criteria set before making such decisions. Note that we intentionally reviewed the criteria set established for the decision problem in the example case in great detail (see Section 2.3), so as to demonstrate the review process that should be taught and performed in Phase I of the educational framework and to form a solid basis for the other results obtained later in the training course case.
Immediately following this, for presentation clarity and fluency, we also present the methodological body of Phase II here. In Phase II, the staff of the R&D institution were trained in another important decision ability, which is to organize the criteria set determined in Phase I into a decision hierarchy, so as to have a solid basis for the subsequent AHP-based surveys and the group-based assessment of the priorities (in Phase III). For Phase II, the well-known qualitative Delphi method is taught, along with its role in the two main steps of this phase (i.e., a first step to establish a reasonable decision hierarchy and a second one to confirm the established hierarchy).
To illustrate, in the training course practice, specifically for this step, the staff were asked to consult the potential DMs in face-to-face meetings, so as to confirm the appropriateness of the 12 determined criteria and to add an additional ‘layer of constructs’ above the 12 criteria. Eventually, in the tutorial, a set of four constructs was determined by the DMs (i.e., engine capability, flying control capability, avionics and awareness capability, and integration capability), and the 12 criteria were placed (mounted) under these constructs. As such, the decision hierarchy shown in Figure 3 was formed. In the second step, the staff were asked to confirm this ‘supposed hierarchy’ with all potential DMs via e-mail. For this step, all 10 potential DMs agreed with the hierarchy without exception in their reply e-mails. For more detail, please see Section 4.1.2.
Since Phase II’s qualitative Delphi method was proposed more than 70 years ago and has been a popular method with many successful applications [57], for reasons of space its review is omitted here; the quantitative AHP method, which is taught and used in Phase III, and its applications in the relevant fields require further scrutiny.

3.3. The AHP Method Suggested for Phase III’s Main Knowledge Discovery Work

This subsection reviews the theoretical basis of the relevant processes of Phase III. These are also the tutorial materials used for conducting the surveys, performing the modelling works, and making the (in)consistency analysis, in terms of AHP.
As can be seen in Section 2.3, the design decision problem of fighters becomes increasingly confounding because the set of criteria is growing large. The set is too complicated to be considered directly when making the right decision, and this should be true for other similar design decision problems. On the other hand, the development of the various multi-attribute decision-making (MADM) methods is maturing day by day, and they can meet this demand.
In the tutorial, AHP, as the most popular MADM method, is taught as the quantitative evaluation model (see Figure 2) for the staff to obtain the relevant knowledge about the ‘priorities’, i.e., the priorities of the constructs (as determined in Phase II) and the priorities of the criteria under the constructs (as determined in Phase I and confirmed in Phase II), because it is a proven and trusted method in real practice. That is, AHP is taught to the course-participating staff in order to determine the priority (the obtained decision knowledge) for strategically planning the relevant R&D investments, and this is the main decision-ability development target of the staff training.
AHP is a well-known MADM approach that has been applied for decades since it was proposed [58]. A standard AHP allows for a hierarchical organisation of the overall decision goal, the criteria (constructs) and the sub-criteria (criteria), with the alternatives placed in the bottom layer. The process of AHP is roughly divided into two phases: a first phase to determine the weights for the criteria and a second to prioritize the alternatives. The literature is abundant with applications, and this is still true in this decade; for reasons of space, only the following recent articles, which involve a variety of applications (but are not limited to these), are cited [59,60,61,62,63,64,65,66,67]. As can be seen, the popularity of AHP is mainly based on its effectiveness in solving real decision problems, so many hybrid models also involve the use of AHP [68,69,70,71]. Recent studies also extend AHP with fuzzy set logic, e.g., in terms of intuitionistic fuzzy sets [72,73,74]. In short, AHP is not only a well-proven MADM approach, but also a suitable model for this study because it is a ratio-scaled, compensatory group MADM model that offers a quantitative basis for the evaluations.
In this study, only the first phase of AHP is used, to conduct the relevant surveys, perform the consistency analyses, and assess the preferential structure of the opinion group (i.e., the CWV-determination process). As such, the priority and importance of each performance requirement item for designing a next generation fighter can be determined scientifically (and quantitatively), so as to establish a systematic model that allows a DM to make relevant assessments of the design problem of next generation fighters. Then, the ‘look’ of the new fighter can be depicted, and this will guide the R&D process, in that the input portfolio of the relevant resources can be determined, subject to the functional needs, in terms of the viable technologies.
Suppose there is a pairwise comparison matrix M of size n × n, where n is the number of criteria being pairwise compared; the process to determine the CWV based on this square matrix is reviewed as follows.
M = \begin{bmatrix} m_{11}=1 & m_{12} & \cdots & m_{1n} \\ m_{21} & m_{22}=1 & \cdots & m_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ m_{n1} & m_{n2} & \cdots & m_{nn}=1 \end{bmatrix}, \quad \forall i,j,\ i \neq j:\ m_{ij} \in \left\{ \tfrac{1}{9}, \tfrac{1}{7}, \tfrac{1}{5}, \tfrac{1}{3}, 1, 3, 5, 7, 9 \right\}
In the experiment, as in many other studies that utilise AHP, the even ratios are not used for recording the results of the pairwise comparisons. The column-sums vector of this matrix is thus:
V = \begin{bmatrix} \sum_{i=1}^{n} m_{i1} & \sum_{i=1}^{n} m_{i2} & \cdots & \sum_{i=1}^{n} m_{in} \end{bmatrix}_{(1 \times n)}
Dividing each column of M element-wise by the corresponding entry of this vector, another square matrix, M′, is obtained:
M' = \begin{bmatrix} m'_{11} = 1/\sum_{i=1}^{n} m_{i1} & m'_{12} = m_{12}/\sum_{i=1}^{n} m_{i2} & \cdots & m'_{1n} = m_{1n}/\sum_{i=1}^{n} m_{in} \\ m'_{21} = m_{21}/\sum_{i=1}^{n} m_{i1} & m'_{22} = 1/\sum_{i=1}^{n} m_{i2} & \cdots & m'_{2n} = m_{2n}/\sum_{i=1}^{n} m_{in} \\ \vdots & \vdots & \ddots & \vdots \\ m'_{n1} = m_{n1}/\sum_{i=1}^{n} m_{i1} & m'_{n2} = m_{n2}/\sum_{i=1}^{n} m_{i2} & \cdots & m'_{nn} = m_{nn}/\sum_{i=1}^{n} m_{in} \end{bmatrix}
The CWV assessed using any pairwise comparison matrix M is thus obtained by calculating the row-sums vector of M′, which is:
CWV = \begin{bmatrix} \sum_{j=1}^{n} m'_{1j} & \sum_{j=1}^{n} m'_{2j} & \cdots & \sum_{j=1}^{n} m'_{nj} \end{bmatrix}^{T}
As both the main knowledge discovery process (Phase III) and the subsequent analyses (Phase IV) focus on the assessed CWVs, only the general form of CWV-determination in AHP is reviewed here (and was taught in the class before the staff went on to the real surveys). For the CR-based consistency analysis, which is another well-known component of AHP, please check the relevant articles, since this study relied on Expert Choice as a tool at the earlier survey stage to assist the consistency analysis processes.
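For readers who wish to reproduce the CWV-determination step without Expert Choice, the following minimal Python sketch implements the column-normalisation procedure above (the row sums are divided by n so that the weights sum to one) together with Saaty's standard consistency ratio. The 4 × 4 matrix is a hypothetical example, not one of the surveyed matrices, and Expert Choice's internal computations may differ in detail.

```python
import numpy as np

def ahp_cwv(M: np.ndarray) -> np.ndarray:
    """CWV by the column-normalisation method: normalise each column, average the rows."""
    M_prime = M / M.sum(axis=0)      # divide each column by its column sum
    return M_prime.mean(axis=1)      # row means, so the weights sum to 1

def consistency_ratio(M: np.ndarray, w: np.ndarray) -> float:
    """Saaty's CR = CI / RI with the standard random index table."""
    n = M.shape[0]
    lam_max = float(np.mean((M @ w) / w))       # estimate of the principal eigenvalue
    ci = (lam_max - n) / (n - 1)
    ri = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45}[n]
    return ci / ri

# Hypothetical pairwise comparison matrix for the four constructs (odd-ratio scale).
M = np.array([
    [1.0, 3.0, 3.0, 1.0],
    [1/3, 1.0, 1.0, 1/5],
    [1/3, 1.0, 1.0, 1/3],
    [1.0, 5.0, 3.0, 1.0],
])
w = ahp_cwv(M)
print(np.round(w, 3), "CR =", round(consistency_ratio(M, w), 3))  # CR <= 0.10 passes
```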

4. Courses Discovering the Knowledge for Budgeting

Phase III of the proposed education framework discovers the knowledge that facilitates making a suitable budget allocation decision on a numerical basis, building on the set of criteria identified in Phase I and the decision hierarchy established and confirmed in Phase II. In the training course, the staff learned how to investigate the opinions and calculate, for each DM, the priority of the constructs and the priorities of the criteria involved under them according to the decision hierarchy, and how to aggregate and assess the priorities based on the group opinions.
The latter is exactly the knowledge needed to establish a guideline that supports making a scientifically sound resource allocation decision about the use of the ‘limited but large’ budget, a common decision when designing a specific high R&D cost product. The ability to obtain such knowledge is a key decision ability for the staff (e.g., employees of the budget planning department in this case) who are eager to know the real preferences of the DMs in the interest group, to justify the priority over the relevant technologies to be developed, to identify the technologies that are really critical for their next generation fighter R&D, and to make a reasonable budgeting plan subject to the ‘limited but huge’ budget received.
By directly introducing the tutorial process and materials of the training course given to the relevant staff in the institution, this section illustrates the survey works of both Phases II and III and the main knowledge discovery works of Phase III.

4.1. For the Survey Works

4.1.1. The Opinion Group

The trained staff members were asked to choose 10 potential DMs to constitute the opinion (interest) group. The Phase II works were done by consulting these DMs via phone or e-mail; after Phase II, the decision hierarchy was confirmed, prior to Phase III (see Section 2.2). Then, for Phase III, the staff were taught and asked to conduct the relevant AHP-style surveys using the expert questionnaires designed according to the obtained hierarchy. In this training course, the same group of DMs were also the interviewees.
The 10 DMs are from one of the well-known aeronautical systems R&D institutions in Taiwan (ROC). These DMs are experienced fighter R&D experts: most have worked for more than 20 years, and some of them are R&D project chiefs or R&D unit supervisors. They have professional knowledge of fighter R&D work and are familiar with fighter R&D expense forecasting. Moreover, most of them are members of the budget allocation committee, so they play a key role in deciding the uses of the fighter R&D budget fund. All of the expert DMs are male; eight of them are experienced seniors (over 50 and in service for over 21 years), while the other two are under 50 and have been in service for 5–10 years and 11–20 years, respectively. Three of them are department managers or above, four are advisors, and three are R&D engineers. As for academic degrees, four of them hold a Ph.D. and six hold a Master of Science (M.Sc.). The stratification of the population is presented in Table 2.

4.1.2. The Phase II Works Using the Delphi Method

As discussed in Section 2.3, in Phase II of the training course, the Delphi method was taught and used (by the staff) for two purposes: (i) to establish a reasonable decision hierarchy, including the introduction of the construct layer, the determination of the set of constructs, and the tree structuring; and (ii) to consult the DMs and confirm that the established decision hierarchy is reasonable. This phase, including the training courses and the exercise, started in December 2017 and ended when the AHP survey work began (see Section 4.1.1). The works involved three rounds of face-to-face interviews, e-mails, and phone conversations with the DMs, and as a consequence, the decision hierarchy in Figure 3 was confirmed. For presentation simplicity, the constructs and the criteria in the hierarchy are organized and coded in Table 3.

4.1.3. The Survey Works of AHP

The tutorial covered how to conduct the AHP surveys and polls with the expert questionnaires designed by reference to the decision hierarchy, how to fill the source analytical data sets in terms of the pairwise comparison matrices, and how to perform the consistency (ratio) analysis based on them.
The materials were taught and the exercise was carried out during the period from 12 February to 15 March 2018. The DMs in the interest group (see Section 4.1.1) were interviewed by the staff, and each DM was asked to fill in the five AHP-style expert questionnaires designed according to the confirmed decision hierarchy (i.e., one for pairwise comparison of the importance within the main construct set, {PC-A, PC-B, PC-C, PC-D}, and four for comparing the importance within the criteria sets under each construct, e.g., {AC-1, AC-2, AC-3}). During the interview process, the staff were asked to bring a notebook PC with the Expert Choice software (ver. 11.1.3322) installed. In each round of interviews, the answers to the pairwise comparison questions in the expert questionnaires were directly recorded, so that a total of five pairwise comparison matrices were obtained for each DM. After each round of interviews, a consistency analysis was performed to test whether there was inconsistency in the obtained results. This was again done using the Expert Choice software as a tool, and if the consistency analysis failed (i.e., by default, the software uses the threshold consistency ratio C.R. = C.I./R.I. > 0.10), the respondent was re-interviewed for one more round after a period of sufficient rest. This process was repeated until every pairwise comparison matrix had passed the consistency check.
As a result, perhaps because of the expertise of the opinion group members and the limited number of items to be compared in each questionnaire, eight of the 10 DMs passed the consistency check right after the first round of interviews. Only one of them was interviewed twice, and one was interviewed a third time to pass the check. This also reflects the fact that our teaching material kept the hierarchy within the psychological limit of ‘the number of items a human can compare using AHP’ [75,76,77]. Methodologically speaking, this is exactly a good effect of organizing the considered criteria under an additional layer of constructs.

4.2. For the Analytical Steps Based on AHP

After the training course on the AHP survey (see Section 4.1.1), the staff were taught how to assess the preferential opinions in terms of priority vectors (for the constructs and for the subset of criteria under each construct) for every individual DM, how to aggregate these individual opinions (again for the constructs and for the subset of criteria under each construct), and how to prioritize the overall sequence for investment budget planning.

4.2.1. For the Main Constructs

After the exercise in Section 4.1.1, the pairwise comparison matrices, all of which passed the consistency check, are ready for the determination of the CWVs. In addition to the theoretical courses on the calculations for determining the individual opinions in terms of CWVs (as discussed and presented in Section 2.3), to obtain the aggregated opinion the staff were guided through a convenient function offered by Expert Choice, which assesses the ‘aggregated CWV’ based on the data sets sourced from the 10 DMs. So, at first, for this case, the operations to obtain the aggregated CWV for the four constructs under (w.r.t.) the total design decision goal were demonstrated, and the relative importance among them (i.e., PC-A, PC-B, PC-C, and PC-D) was assessed in Table 4. Using a software tool is preferable for the staff, even though they have learned the theories; this is also more convenient for us when teaching this step.
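Expert Choice's aggregation routine is used as a black box in the course; a common and reasonable stand-in, sketched below with hypothetical numbers, is to aggregate the individual CWVs by their geometric mean and renormalise so that the group weights sum to one.

```python
import numpy as np

# Hypothetical construct-level CWVs (PC-A..PC-D) from three DMs; illustrative only.
individual_cwvs = np.array([
    [0.40, 0.10, 0.22, 0.28],
    [0.35, 0.14, 0.25, 0.26],
    [0.33, 0.12, 0.27, 0.28],
])

# Geometric-mean aggregation of the individual weight vectors, renormalised.
geo_mean = np.exp(np.log(individual_cwvs).mean(axis=0))
group_cwv = geo_mean / geo_mean.sum()
print(np.round(group_cwv, 3))   # aggregated group-level construct weights
```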
How to read the numbers in Table 4 (interpretation) was another part of the tutorial course. As seen from the result, the construct considered the most important is the fighter’s engine capability (PC-A), followed by integration capability (PC-D) and avionics and awareness capability (PC-C), while flying control capability (PC-B) is regarded as relatively less important. The engine capability outranks the other constructs because it accounts for more than a third of the total weight (36.6%). The integration capability and the avionics and awareness capability are almost equally important, as each contributes about one fourth of the weight for the design decision, their assessed importance being 26.9% and 24.5%, respectively.
These observations reveal that, when designing a next generation fighter, engine capability (PC-A) is of the greatest importance. So, if more budget can be allocated to the relevant R&D works for it, the designed fighter has a greater chance of meeting the requirements. In the meanwhile, since the integration and the avionics and awareness capabilities of a fighter (see the weight numbers for PC-C and PC-D) are also relatively important, the budget allowed for designing the relevant functions should also be assigned to a certain extent. Compared with these, the role of flying control capability (PC-B) is relatively minor. This is an important observation, because flying control was deemed important when designing a traditional fighter, but it becomes a minor concern when designing a next generation fighter.

4.2.2. For the Criteria Involved under (w.r.t.) Each Construct

In the training course, the criteria involved w.r.t. each of the four constructs were analysed accordingly. This did not cost much time because similar work had already been done for the constructs w.r.t. (under) the total design decision goal. In each of the four exercises, the ‘aggregated opinion determination’ was also performed on a group-opinion basis, i.e., using Expert Choice, each exercise uses and aggregates the 10 CWVs assessed individually for the DMs. Note that how to read the results is, once again, an important training element of this step.
At first, the trained staff were asked to assess the CWV for the 3 criteria involved under PC-A, which are AC-1, AC-2, and AC-3; the result is shown in Table 5. As observed, AC-1 (vertical/short take-off and landing) is the most important design criterion w.r.t. the engine capability construct (PC-A), as its importance reaches 70.5%, over 2/3. Compared to AC-1, AC-2 (super-cruise) is relatively less important, while AC-3 (hypersonic) is far less important. For the staff, this is the important knowledge that will guide the decision when allocating the budget that has been assigned to PC-A.
Next, the exercise was carried out for the two criteria involved under PC-B, namely BC-1 and BC-2; the ‘correct answer’ of this exercise is shown in Table 6. As seen, under this construct, which is relatively minor for next generation fighter design, multi-mission execution capability (BC-1) is relatively important, with an importance of about 2/3, while super manoeuvrability accounts for only about 1/3.
Thirdly, the same exercise was carried out for the four criteria involved under PC-C, i.e., CC-1, CC-2, CC-3, and CC-4. The result is shown in Table 7. As seen, under this construct of medium (moderate) importance, super information advantage and AI capability (CC-1) is more important than the other three criteria, accounting for more than 1/3 of the weight (36.1%), but no dominance is shown: CC-2 (beyond visual range awareness) is also a significant criterion (28.2%, more than 1/4), and even the least important criterion, CC-4 (advanced cockpit and human-machine interface), still has an importance of 15.8%, over 1/7. The ‘correct answer’ of this exercise carries two implications for technology management and education: (1) a topic that is actively discussed, such as AI, does not necessarily dominate everything in importance; and (2) understanding the real preferences and needs, rather than being led by popular technologies, is the power of using the proposed management science education framework.
Finally, the exercise was carried out again for the 3 criteria involved under PC-D, namely DC-1, DC-2, and DC-3. The CWV evaluations under this construct are shown in Table 8. By interpreting the result, it is easily observed that no single criterion outranks the other two in relative importance: every criterion contributes an importance to the integration capability construct (PC-D) of almost exactly one-third. Moreover, although DC-1 (stealth) is the most important criterion, the gap between it and the least contributing criterion, DC-3, is as narrow as 7.2% (i.e., 36.6% vs. 29.4%).

4.2.3. For the Overall Priority Analysis

In addition to the training courses arranged and taught for analysing the priority of the constructs and the priority of the criteria involved w.r.t. a construct (Section 3.3), another course was given to the staff for understanding the other type of knowledge about budget planning, which is the ‘overall (synthesized) priority’ of all criteria.
Figure 4 shows an overall assessment of the priority and the synthesised individual weights of the 12 criteria considered in the decision case (see their definitions in Table 1). These are obtained by multiplying each assessed CWV in Table 5, Table 6, Table 7 and Table 8 under a construct by the scalar value that is the corresponding weight of that construct (in Table 4) and then ranking them, as follows:
$$W_{PC} = CWV_{PC} = \left[\, w_a \;\; w_b \;\; w_c \;\; w_d \,\right], \qquad W_{AC} = w_a \cdot CWV_{AC}, \quad W_{BC} = w_b \cdot CWV_{BC}, \quad W_{CC} = w_c \cdot CWV_{CC}, \quad W_{DC} = w_d \cdot CWV_{DC}$$
where $W_{XC}$, $X \in \{P, A, B, C, D\}$, is, respectively, the absolute weight vector of the constructs ($X = P$) or of the criteria involved under each construct; $CWV_{XC}$, $X \in \{P, A, B, C, D\}$, are the relative importance vectors used for the calculations, sourced from Table 4, Table 5, Table 6, Table 7 and Table 8.
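As a teaching aid, this synthesis step can be reproduced with a few lines of R. The following is a minimal sketch only: the construct weights and any local weights not quoted in the text are arbitrary placeholders rather than the study's figures, and the variable names are not from the authors' program.

```r
# Minimal sketch of the synthesis (global-weight) step; all numeric values
# are illustrative placeholders, not the study's actual assessments.
w_construct <- c(PC_A = 0.35, PC_B = 0.10, PC_C = 0.25, PC_D = 0.30)  # construct weights (placeholders)
cwv <- list(                                                          # local CWVs (placeholders)
  PC_A = c(AC1 = 0.705, AC2 = 0.200, AC3 = 0.095),
  PC_B = c(BC1 = 0.670, BC2 = 0.330),
  PC_C = c(CC1 = 0.361, CC2 = 0.282, CC3 = 0.199, CC4 = 0.158),
  PC_D = c(DC1 = 0.366, DC2 = 0.340, DC3 = 0.294)
)

# W_XC = w_X * CWV_XC for each construct, then rank all 12 global weights.
global_w <- unlist(Map(function(w, v) w * v, w_construct, cwv))
round(sort(global_w, decreasing = TRUE), 3)
```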
Although the theoretical part of this step had already been covered in class, the staff were asked to perform the exercise once more, directly using Expert Choice, which provides this synthesis function by default. The core of the exercise was therefore 'how to discover the decision-relevant knowledge', i.e., establishing the decision ability of the staff.
First, the credibility of the source data sets can be observed. As read from Figure 4, the average inconsistency of the 50 source pairwise comparison matrices, after all consistency checks have passed, is as low as 0.02.
Next, the overall most and least important criteria can be identified and compared. The importance of AC-1 (vertical/short take-off and landing capability of the fighter) is salient, as it has an absolute weight of 17.2% (>1/6). In contrast, AC-3 is the least important one (2%). Interestingly, the difference is more than eightfold, even though both criteria fall under the same primary construct (PC-A), which is engine capability.
Thirdly, the knowledge about the set of 'more important criteria' can be discovered. For this purpose, a threshold can be set to discriminate these criteria from the 'not that important' or 'not important' ones. In this study, t = 0.9 was used as the classification threshold, and as a consequence, six of the 12 criteria constitute the set of 'more important criteria' (i.e., DC-1, DC-2, CC-1, DC-3, and CC-2, in addition to AC-1, as discussed above). That is, half of the criteria are classified as more important, while the other half are classified as less important. According to this classification, the six more important criteria account for 72.4% of the total importance in the decision problem. This implies that this subset of criteria dominates the future design works and should be the priority during R&D budget planning, even if the budget allocation ratios are not strictly followed.
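A short R sketch of this classification step is given below. Note that the exact rule behind the threshold t = 0.9 is an assumption here: the sketch reads it as comparing each synthesized weight against t times the equal-weight baseline (1/12), which is only one plausible interpretation, and the function name is hypothetical.

```r
# Hypothetical sketch of the 'more important criteria' classification.
# ASSUMPTION: t = 0.9 is interpreted as a multiple of the equal-weight
# baseline 1/n; the paper's exact rule may differ.
classify_criteria <- function(global_w, t = 0.9) {
  baseline <- 1 / length(global_w)                       # 1/12 for this case
  more <- global_w[global_w >= t * baseline]
  list(more_important = names(sort(more, decreasing = TRUE)),
       share_of_total = sum(more))                       # reported as 72.4% in the study
}

classify_criteria(global_w)   # 'global_w' comes from the synthesis sketch above
```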
Finally, further observations can be made about how these 'more important criteria' are distributed over the main constructs. Firstly, these criteria belong to the PC-A, PC-C, and PC-D primary constructs, and none of them is under the PC-B construct (the 'flying control capability' construct). Secondly, the most important construct (PC-A, engine capability) has only one 'more important' criterion under it (i.e., AC-1), but that criterion is the most important one overall. In contrast, all three criteria under the next most important construct (i.e., the integration capability, PC-D) are included in this more important criteria set. Moreover, two of the four criteria under the third most important construct (i.e., the avionics and awareness capability, PC-C) are included. This implies that, in terms of the information of the 'six more important criteria', the constructs 'engine capability', 'integration capability', and 'avionics and awareness capability' are the focuses. This echoes the knowledge obtained in Section 2.2, i.e., the priority of the constructs that was assessed using the information of the CWVs directly.
In short, this section presents Phases II and III of the proposed education framework, which establish the ability to discover the relevant decision knowledge for designing a specific high cost product. These phases are illustrated by the training course recently given to the employees of a large R&D institution, which received a 'big' budget for next generation fighter design (the specific high cost product) but needed to allocate the 'limited' budget effectively. Through the taught theory classes and the assigned exercises, the staff successfully learned how to organise and confirm the decision hierarchy for the given decision using the Delphi method, how to design the expert questionnaires, how to conduct the AHP-style surveys, how to examine the inconsistency in the source data set, and how to discover the key decision-supporting knowledge from the priority sequences of the constructs and criteria. Establishing these abilities for the relevant staff in the R&D institution is the aim of the proposed education framework, because they thereby learn how to allocate the 'big but limited' budget cogently yet appropriately, as well as how to read extra decision-relevant information from interpreting the results.

5. Tutorials for Decision Analysis Identifying Opinion Gaps and Implications

This section presents the teaching steps/materials designed for Phase IV (further decision analysis) of the framework. Again, these are illustrated by using and following the results from the CWV-determination exercises obtained in Section 3. It is specifically noted that the courses taught here are not the common sensitivity analysis that is conventional in operational research. Instead, to fulfil the users' major demand of analysing the similarities and diversities of the DM opinions, various perspectives and methods, either statistically or geometrically based, are 'borrowed' from the field of big data analytics and used in the teaching works for Phase IV.

5.1. Required Pre-Processes for the Analyses

Since the decision analysis focuses on the discriminations and similarities among the DM opinions in the interest group, their individual opinions (both the priority over the constructs and the priorities over the criteria with respect to the four constructs) must be re-assessed in terms of 'individual CWVs'.
So, at first, the staff are taught how to convert the source model datasets (recorded in Expert Choice) into .CSV files. This yields 50 .CSV files, each containing one pairwise comparison matrix (i.e., five matrices for each of the 10 DMs).
Secondly, a tutorial on writing an additional R program for data pre-processing is given. Because the consistency analyses were already handled by the software tool, only the AHP computations for determining the individual CWVs are coded in the R program. With the 50 input files supplied, the program loads the files in order and derives a CWV from each pairwise comparison matrix. It then compiles these CWVs column-by-column, grouped by the total goal and by each of the four constructs rather than by DM. This yields five new matrices, each covering the 10 DMs, as shown in Table 9; a minimal sketch of such a pre-processing program is given below.
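The following R sketch shows one way such a pre-processing program could look. The directory name, the file layout, and the geometric-mean prioritisation are assumptions (the authors' program may read different files and may use the principal-eigenvector method instead).

```r
# Hypothetical pre-processing sketch: read 50 pairwise comparison matrices
# from .CSV files and derive a CWV from each one (geometric-mean method is
# an ASSUMPTION; the eigenvector method could be used instead).
files <- list.files("pcm_csv", pattern = "\\.csv$", full.names = TRUE)  # 50 exported files

derive_cwv <- function(path) {
  A  <- as.matrix(read.csv(path, header = FALSE))                # n x n pairwise comparison matrix
  gm <- apply(A, 1, function(row) prod(row)^(1 / length(row)))   # row geometric means
  gm / sum(gm)                                                   # normalise so the weights sum to 1
}

cwv_list <- lapply(files, derive_cwv)

# Compile the CWVs column-by-column (one column per DM) for one context,
# e.g., the goal-level matrices, assumed here to be the first 10 files.
ACW_PC <- do.call(cbind, cwv_list[1:10])
colnames(ACW_PC) <- paste0("DM-", 1:10)
```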
Mathematically, the above process obtains aggregated criteria weight (ACW) matrices, which are expressed as follows:
$$ACW_c = \left[\, CWV_c^{1} \;\; CWV_c^{2} \;\; \cdots \;\; CWV_c^{K} \,\right]_{N(c) \times K}, \qquad CWV_c^{k} = \left[\, w_{c,1}^{k} \;\; w_{c,2}^{k} \;\; \cdots \;\; w_{c,N(c)}^{k} \,\right]^{T}_{N(c) \times 1}, \qquad c \in \{P, A, B, C, D\}$$
where $k$ is the identifier of a DM (in this case, $K = 10$ and $k = 1, \ldots, 10$); $c$ connotes the total goal (so $CWV_{PC}$ are the CWVs for the primary constructs) or any one of the constructs (in this case, $c$ = 'AC', 'BC', 'CC', or 'DC' for constructs A, B, C, and D, so $CWV_{AC}$, $CWV_{BC}$, $CWV_{CC}$, and $CWV_{DC}$ are, respectively, the CWVs for the criteria under each construct); and $N(c)$ is the number of constructs or criteria under $c$ (in this case, $N(PC) = 4$, $N(AC) = 3$, $N(BC) = 2$, $N(CC) = 4$, and $N(DC) = 3$).
These ACW matrices are the base information for the subsequent analyses because, as will be shown later, the required decision analyses can only be conducted when the relevant information is measured, established, and reorganised in this way. Note that in these tables, the criteria under each construct are left unordered to keep faith with the original questionnaire. For example, in Table 9d, under the CC construct, the row order of the criteria is CC-2 (beyond visual range awareness), CC-4 (advanced cockpit and human-machine interface), CC-3 (rapid electronic warfare countermeasures and interference), and then CC-1 (super information/AI capability). This preserves the order used in the investigation, which is not the order previously used to present the results for these criteria (i.e., for presentation simplicity, these criteria were intentionally reordered and renumbered in the previous sections). However, this does not affect the subsequent analyses, as long as the elements of each CWV keep an identical semantic order within the analytical context, which is the case for the data in Table 9.
Thirdly, prior to the subsequent analysis, two observations are made based on Table 9. The aggregated CWVs assessed by the R program (see the numbers in the 'aggregated' column of Table 9a–e) can be cross-validated against the CWVs aggregated in Table 4, Table 5, Table 6, Table 7 and Table 8 by the commercial Expert Choice utility. It is found that, because R's core computations carry more decimal digits (while the numbers presented here are rounded), the loss in precision is relatively smaller when determining the CWVs using Equations (1)–(4) in R, so the results slightly differ from those obtained using Expert Choice. Next, it is shown that the rank order of the criteria under each construct is preserved (i.e., the priority order of criteria is identical for any pair of CWVs assessed separately using the R program and Expert Choice). These observations provide the grounds and the confidence to conduct the subsequent analyses. The ways to make these observations were also taught, as a supplement to the education material of Phase IV.

5.2. Analysis in Terms of Statistical Correlations

In Phase IV of the training course, an initial analysis is taught to discover the knowledge about how the individual CWVs over the four primary constructs are correlated with each other. In terms of statistics, this is done by treating every column (a CWV) in Table 9a as a ‘statistical variable’ and then examining the Pearson correlation coefficient for each pair of them.
As Table 9a contains the CWVs of the 10 DMs, the correlation coefficients for a total of $\binom{10}{2} = 45$ pairs of CWVs are calculated using R. In R, the entire matrix is regarded as a multivariate sample (except for the greyed part), and a 'correlation matrix' for the multivariate sample is computed. The obtained correlation matrix is flipped upside down and visualised as a heat map, which is shown in Figure 5.
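A compact sketch of this computation and visualisation in R is shown below; the variable names are assumptions, and base graphics are used for the heat map (the authors may have used a dedicated plotting package instead).

```r
# Hypothetical sketch: Pearson correlations between the DM columns of the
# goal-level ACW matrix, rendered as a simple heat map with base graphics.
corr <- cor(ACW_PC, method = "pearson")        # 10 x 10 matrix; 45 distinct pairs

heatmap(corr, Rowv = NA, Colv = NA, symm = TRUE, scale = "none",
        revC = TRUE, col = heat.colors(25), margins = c(6, 6))
```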
As can be seen in Figure 5, for the four constructs under the total decision goal, the CWV pairs (DM-6, DM-8), (DM-5, DM-10), and (DM-4, DM-5) are strongly positively correlated. The correlation coefficients of these pairs are >0.9 (i.e., 0.979, 0.967, and 0.961, respectively), but the latter two observations are not transitive to the (DM-4, DM-10) pair (0.879). The opposite extreme, a strong negative correlation, is the CWV pair (DM-8, DM-9) (−0.865). Overall, the heat map suggests that the DMs' opinions are quite diversified over the four constructs under the total goal of fighter aircraft design, in terms of the correlation coefficients calculated between their CWVs.
Following this observation, a further analysis is made using the network diagram of social network analysis (SNA), while betweenness centrality is used to establish the edges of the graph. Based on the data in the correlation matrix, an SNA graph for the CWVs is obtained, as shown in Figure 6.
In Figure 6, the obtained diagram shows that the opinion group is divided into two isolated subgroups, and the absence of any bridge between them implies a divergence in the opinions. DM-4, DM-5, and DM-10, who were previously discussed as being coherent, are clustered in the upper-right subgroup. DM-9, who was previously discussed as being very negatively correlated with DM-8, is also clustered in this subgroup and is far from DM-8. The remaining DMs are clustered in the lower-left subgroup. This means that their opinions are correlated enough to be clustered together, according to the shown structure, but none of them correlates strongly enough with the upper-right subgroup to form a bridge toward it.
This observation is confirmed when a decision tree is established to cluster the assessed CWVs of the individual DMs, as shown in Figure 7. In Figure 7, every DM in the upper-right subgroup of the network diagram (Figure 6) falls in the left sub-tree and is classified as one category (yellow boxed), while every DM in the lower-left subgroup of Figure 6 falls in the right sub-tree and is classified as another category (blue boxed). In addition, the decision tree gives supplementary information about how dissimilar two opinions are, by branching at different levels according to the dissimilarities revealed by the correlation coefficients. Moreover, using this tree, the knowledge about which two DMs' opinions are the most correlated with each other, as well as which DM's opinion is closest to them, is easily discovered.
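One way to produce such a tree is sketched below, assuming (as the branching description suggests) that it is a hierarchical clustering built on a correlation-based dissimilarity of 1 − r; the linkage choice is an assumption, since the authors' exact settings are not reported.

```r
# Hypothetical sketch: hierarchically cluster the DMs using a
# correlation-based dissimilarity (1 - r) and cut the tree into two groups.
d    <- as.dist(1 - corr)                 # 'corr' is the 10 x 10 Pearson matrix above
tree <- hclust(d, method = "complete")    # complete linkage is an assumption
plot(tree, main = "Clustering of DM opinions (correlation-based)", xlab = "", sub = "")

cutree(tree, k = 2)                       # the two-subgroup partition discussed in the text
```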

5.3. Analysing the Similarities in Terms of Geometrical Cosine Similarity

In Table 9, there are five matrices compiling the resulting CWVs: one for the construct weights under the total goal (Table 9a) and four for the criteria weights under each construct (Table 9b–e). In Section 5.2, the matrix composed of the construct weights was used as the teaching example, and the outcome revealed that the opinion group of DMs is divided into two salient subgroups, in terms of the correlation relationships analysed between pairs of individual CWVs. That analysis was performed by treating the individual CWVs as statistical variables, i.e., from the viewpoint of statistics.
In this subsection, the perspective is altered. By treating the individual CWVs directly, from the viewpoint of geometry, as multi-dimensional vectors in space (rather than as multiple statistical variables), the similarity relationships (rather than the correlation relationships) are identified using the cosine similarity method, and further knowledge is obtained by visualising the results based on it. As similarity (or distance) measures are a commonly studied topic in MADM, this subsection conducts a thorough analysis not only for the matrix compiled for the construct weights (Table 9a), but also for each matrix containing the criteria weights (Table 9b–e).
Again, the process of writing an R program for the cosine similarity analysis was demonstrated and taught to the staff; the cosine similarity value of each pair of vectors (CWVs) was calculated and summarised, and the corresponding heat maps, based on the compiled matrices in Table 9a–e, are rendered in Figure 8a–e.
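The core of such a program can be written in a few lines, as in the sketch below; the helper-function name is hypothetical, and base graphics are again assumed for the heat maps.

```r
# Hypothetical sketch: pairwise cosine similarities between the DM columns
# of an ACW matrix (e.g., ACW_PC for the construct weights).
cosine_matrix <- function(M) {
  cp    <- crossprod(M)            # t(M) %*% M, i.e., all pairwise dot products
  norms <- sqrt(diag(cp))          # Euclidean length of each column (each CWV)
  cp / outer(norms, norms)         # cosine of the angle between every pair
}

cos_pc <- cosine_matrix(ACW_PC)
heatmap(cos_pc, Rowv = NA, Colv = NA, symm = TRUE, scale = "none",
        revC = TRUE, col = heat.colors(25), margins = c(6, 6))
```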
From these results, the initial observation is that, unlike the correlation coefficients shown in Figure 5, the cosine similarity values in Figure 8a–e are all positive. The reason is that the elements of every CWV (as assessed from each DM) revealing a priority are all positive because of the weight postulation (i.e., every weight is positive and all weights sum to 1). Every CWV therefore lies in the non-negative orthant (the analogue of Quadrant I), regardless of the dimensionality of the vector space, so the angle between any two CWVs never exceeds 90° (and hence no cosine similarity value assessed for any two CWVs is negative).
A second observation is that, roughly, when two DMs have a greater cosine similarity in Figure 8a, they also have a larger correlation in Figure 5, and vice versa. Moreover (and again roughly), when two DMs have a relatively small similarity value in Figure 8a (i.e., <0.6; see the cells whose colour ranges from pink through white to blue), they tend to have a negative correlation in Figure 5 (see the grey cells in Figure 5). The fact that the tendencies of the associated cosine similarity values and correlation coefficients agree cross-validates the experimental results, because, mathematically, the Pearson correlation coefficient equals the cosine similarity computed on the mean-centred (de-shifted) vectors, so the two measures tend to move together.
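For reference, the relation invoked here can be written out explicitly: for two CWVs $x$ and $y$ with means $\bar{x}$ and $\bar{y}$,

$$r(x, y) \;=\; \frac{\sum_{i}(x_i - \bar{x})(y_i - \bar{y})}{\sqrt{\sum_{i}(x_i - \bar{x})^2}\,\sqrt{\sum_{i}(y_i - \bar{y})^2}} \;=\; \cos\!\big(x - \bar{x}\mathbf{1},\; y - \bar{y}\mathbf{1}\big),$$

i.e., the correlation coefficient is exactly the cosine similarity of the mean-centred vectors, which explains why the two heat maps show broadly consistent patterns even though the raw (uncentred) cosine values are never negative.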
The third observation is that, compared with the diversified opinions of the DMs for the criteria under constructs PC-B (flying control capability), PC-C (avionics and awareness capability), and PC-D (integration capability), their opinions are rather convergent for the criteria under PC-A (engine capability). This is reflected in Figure 8b, in that the lowest cosine similarity value for the three criteria under PC-A (AC-1, AC-2, and AC-3) is still very high (0.758, as assessed between DM-7 and DM-9) and the heat map contains no colour other than red. This is not true in Figure 8c–e, as those heat maps are visibly colourful. This observation implies that, in the future, the DMs may easily reach a consensus about the relevant designs and the budget plan for the R&D works pertaining to 'engine capability', whereas for the works pertaining to the flying control, avionics and awareness, and integration capabilities, more negotiations are perhaps required to reach a consensus.
The fourth observation concerns the remaining 'colourful' constructs, wherein the DMs' opinions are less homogeneous. Although for the criteria under the PC-B construct (BC-1 and BC-2) there are 18 pairs of DMs whose CWVs are very similar (counting the fully red cells in Figure 8c), under this very construct there are also six pairs of DMs whose CWVs are very dissimilar (i.e., cosine similarity <0.35, with 'relatively or very blue' cells). Meanwhile, under the PC-C construct, the heat map (Figure 8d) shows only five fully red cells and only two relatively or very blue cells. Under the PC-D construct, the heat map (Figure 8e) shows nine fully red cells but no relatively or very blue cell. These comparisons indicate the diversification of the DMs' opinions under these constructs. From this analysis, it is recognised that the degree of divergence (DoD) of the opinions under PC-B is greater than that under PC-C, which in turn is greater than that under PC-D, while the DoD under PC-A is the smallest.
Similar to Section 5.2, these cosine similarity values can be further analysed using the SNA network diagrams, so as to identify the subgroups of DMs in terms of their opinions over the primary constructs and under each construct. Again, the 'igraph' library is loaded and the visualisation program is written in R, using a value of 0.7 as the edge limit for establishing the graphs. The obtained network diagrams are shown in Figure 9.
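A minimal sketch of such a graph-building step is given below, using the 0.7 edge limit stated above; the layout, styling, and variable names are assumptions rather than the authors' exact settings.

```r
# Hypothetical sketch: build an SNA graph whose edges connect DM pairs with
# cosine similarity above the 0.7 edge limit, then inspect the subgroups.
library(igraph)

adj <- cos_pc                      # 10 x 10 cosine similarity matrix
adj[adj < 0.7] <- 0                # drop edges below the limit
diag(adj) <- 0                     # no self-loops

g <- graph_from_adjacency_matrix(adj, mode = "undirected", weighted = TRUE)
plot(g, vertex.label = colnames(cos_pc), vertex.size = 25,
     edge.width = 2 * E(g)$weight)

components(g)$membership           # connected components = subgroups of DMs
```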
As can be seen in Figure 9a, the opinion group is again divided into two subgroups. The set of DMs in the upper-right subgroup is {DM-4, DM-5, DM-9, DM-10}, which is identical to the upper-right subgroup obtained when the DMs were grouped using the correlation coefficients in Figure 6. This implies that, whether the statistics-based correlation coefficient or the geometry-based cosine similarity is used to assess the opinions of the DMs, the grouping scenarios are the same for the constructs w.r.t. the total decision goal of designing a next generation fighter. However, there are also differences.
As shown by comparing Figure 6 and Figure 9a, the relations between the DMs within each subgroup, in terms of the links (edges), are different. This is because the statistics-based correlation and the geometry-based cosine similarity are intrinsically different measures. More importantly, in Figure 9a the two subgroups are no longer isolated. Based on the obtained cosine similarity values, and using the same betweenness/closeness settings, several bridges have been established from the lower-left subgroup to the upper-right subgroup: there are bridges from DM-1 in the lower-left subgroup to DM-4, DM-5, and DM-10 in the upper-right one, and there is also a bridge from DM-2 in the lower-left subgroup to DM-10 in the upper-right one. These observations imply that, when the statistics-based correlations were used as the basis for SNA for this particular problem, relatively less knowledge about the relationships among the DMs' opinions was revealed. This further implies that, for analysing a problem similar to the studied one, the geometry-based cosine similarity is a good tool, since it preserves the information of scale and shift.
From the results, the divergences in the DMs' opinions are also observed for the criteria under the constructs. In Figure 9b, there is only one group, wherein the links between pairs of nodes are quite saturated. This echoes the previous observation that the opinions of all DMs are quite homogeneous (coherent) for the three criteria involved under the engine capability construct. However, there is more than one group in Figure 9c–e.
For the two criteria under the flying control capability construct, in Figure 9c, there are three subgroups: the opinion of DM-3 and that of DM-4 each form a single-node subgroup, separate from the large subgroup of the other eight DMs. However, these subgroups are not independent, because there is a bridge between the two single-node subgroups, and each of them also establishes bridges to both DM-7 and DM-9 in the lower, large subgroup.
For the four criteria under the avionics and awareness capability construct, in Figure 9d, there are only two bridged subgroups, one relatively large and the other relatively small. There are links between every pair of nodes in the smaller subgroup (i.e., {DM-1, DM-3, DM-6}), and each node in this subgroup bridges to the larger subgroup via a link to DM-5.
For the three criteria under the integration capability construct, in Figure 9e, there are five subgroups in total, including one large subgroup of six DMs and four single-node subgroups, and three main observations can be made. First, within the upper-right large subgroup, the opinions are very consistent, as shown by the fact that the similarities among these six DMs form a complete graph. Second, DM-6, DM-9, and DM-10, who are in three single-node subgroups, might form another subgroup if the edge limit threshold were decreased, because each pair of them already has a bridge; this is not true for DM-2. Finally, each single-node subgroup bridges to the large one: DM-2 and DM-6 each have two bridges, while DM-9 and DM-10 each have three, to the edge nodes of the large subgroup (the subset {DM-4, DM-5, DM-8} of that group).
Another important finding from these rendered results is that the DMs' considerations under different constructs are often hard to keep consistent. That is, two DMs who gave very similar or even identical opinions for the criteria under one construct may diverge when they consider the criteria under another construct. For example, the opinions of DM-7 and DM-10 are very similar under the avionics and awareness construct (see Figure 8d and Figure 9d), but far from each other under the integration capability construct (see Figure 8e and Figure 9e). Other examples can also be easily identified; these reflect the common variety and diversity of people's considerations across different decision-making issues.
Yet another important finding is that, if the SNA graphs presented in Figure 9 are associated with the results in the heat maps, it is easily found that the DoD of the opinions under a construct (which is judged from the distribution of cosine similarity values) is not completely analogous to the grouping scenario in the network diagram visualised for the same construct. The previous discussions revealed that, when the DoD of the opinions under the constructs is ordered, the following priority holds:
$$DoD(PC\text{-}B) > DoD(PC\text{-}C) > DoD(PC\text{-}D) > DoD(PC\text{-}A)$$
But, when the number of subgroups (NSgs) of DMs under each construct is analysed in the network diagram, the order is:
$$NSgs(PC\text{-}D) > NSgs(PC\text{-}B) > NSgs(PC\text{-}C) > NSgs(PC\text{-}A)$$
If, under the integration capability construct, the opinions of the DMs in the {DM-6, DM-9, DM-10} subset are further viewed as one subgroup (as discussed), then at best the following order is yielded:
$$\big(NSgs(PC\text{-}B) = NSgs(PC\text{-}D)\big) > NSgs(PC\text{-}C) > NSgs(PC\text{-}A)$$
In other words, the order of the number of subgroups of DM opinions is, in any case, different from the order of the DoD of these opinions. This implies that DoD, which measures the absolute diversity of the cosine similarity values assessed between every pair of DM opinions, is different from NSgs, which results from classifying the DM opinions by branching upon the magnitudes of the cosine similarity values. That is, DoD and NSgs are different concepts in nature, although both measure the 'diversity' in the opinions of DMs. Such methodological knowledge should be important for making other similar decision analyses in the future, in addition to the empirical knowledge further explored using the SNA network diagrams (i.e., how the DMs' opinions are grouped under each construct).
Finally, a decision tree is once again rendered, this time based on the cosine similarity values between pairs of DM opinions assessed for the four constructs, as shown in Figure 10. Overall, when compared to the tree in Figure 7, which classified the DMs based on the correlations among their opinions, the tree here classifies the DMs based on the geometrical similarities among their opinions. As can be directly observed, in terms of the four primary constructs w.r.t. the total design decision goal, both trees classify the entire opinion group into the same two DM subgroups, i.e., the {DM-4, DM-5, DM-9, DM-10} subset and the {DM-1, DM-2, DM-3, DM-6, DM-7, DM-8} subset. From this observation, the results from classifying based on geometrical similarity and from classifying based on statistical correlation are cross-validated.
However, under close scrutiny, the trees in Figure 7 and Figure 10 have different shapes. For example, in the left subtree of Figure 7, the opinions of DM-5 and DM-10 are the most correlated, but in the left subtree of Figure 10, the opinions of DM-9 and DM-10 are the most similar. This is again due to the difference in what the two trees branch upon.
Nevertheless, since it has been cross-validated from two data perspectives that the DM opinions are divided into two subgroups, and the reason for the remaining differences has been explained, the process of discovering such knowledge for the decision case (i.e., how the DMs are classified by their attitudes toward the constructs) demonstrates the effectiveness of Phase IV of the proposed education framework. As the materials for these further decision analyses were taught to the staff in this phase, the staff obtained a practical guide for coping with the two subgroups' opinions in the interest group, which are quite divergent at the top level of decision-making, according to the extensive analyses made in this section.

6. Discussion, Conclusions, and Recommendations

6.1. Discussion

So far, through the application of the proposed science education framework, the phases involved in the framework have been illustrated. An employee training course was given to the staff of a large R&D institution, with the chief aim of incubating the staff's decision ability so that they could make a plan for using the 'large but limited' budget appropriately. Afterwards, the staff relevant to the budgeting works became able to handle the knowledge discovery processes: to understand (and mine) the preference structures of the DMs, to obtain a priority over the key technologies to be developed (functions or design criteria), which also serves as the sequence for budget allocation, and to make the extensive decision analyses so as to close the opinion gaps in the interest group. In the abovementioned training process, the staff were asked to do the exercises after taking the relevant courses for the involved phases and steps, and the results from these practices are also of considerable empirical value.
In Phase I, by using the suggested literature study method (to review both the academic and the industrial literature rigorously and to establish clear operational definitions for the design criteria), the staff successfully identified a suitable set of criteria for the encountered problem (i.e., the 12 criteria for next generation fighter design) (see Section 2.2). In Phase II, by using the tree construction logic and the Delphi method that were taught, the staff constructed a suitable decision hierarchy (with a total decision goal and four constructs over the 12 criteria) and confirmed the established hierarchy with the experts (see Section 2.2 and Section 3.2). The main decision-relevant knowledge obtained in these two phases, i.e., the 12 criteria and the confirmed decision hierarchy, is also an empirical contribution of this study. The criteria in the set to be considered are (ranked by importance): (C1) vertical/short take-off and landing capability, (C2) stealth, (C3) beyond visual range integrated attack capability, (C4) super information advantage and AI capability, (C5) various weapon systems integrating capability, (C6) beyond visual range awareness capability, (C7) rapid electronic warfare countermeasures and interference capability, (C8) multi-mission execution capability, (C9) super cruise capability, (C10) advanced cockpit and human-machine interface, (C11) super manoeuvrability, and (C12) hypersonic.
In Phase III, with the relevant teaching materials taught (see Section 2.3), the 10 DMs who hold the real decision power were visited by the staff and served as the interviewee experts filling in the questionnaires of the AHP-based investigations (see Section 4.1.1), while the consistency of the results was verified 'on the fly', because the staff were asked to record the answers as pairwise comparison matrices and validate the consistency immediately using the Expert Choice software (see Section 4.1.3). Then, based on this source data set, the aggregated priorities, in terms of CWVs, were assessed on a group basis. These revealed several preferential structures of the entire decision group: one over the four constructs (w.r.t. the total goal) and four over the criteria w.r.t. each construct (see Section 4.2.1 and Section 4.2.2). These were then the basis for evaluating the overall importance of each design criterion; a total rank over all criteria was obtained from this information (see Section 4.2.3), and the knowledge for allocating the budget was obtained. Eventually, this study found that the sum of the weights of the top six 'more important criteria' is 72.4%, which accounts for almost 3/4 of the total importance. This means that most of the budget should be invested in these design criteria. A further observation revealed that the weight of the most important criterion, vertical/short take-off and landing capability (C1 above), is 17.2%, so this design criterion has the first priority to receive more budget. The weights of stealth (C2), beyond visual range integrated attack capability (C3), and super information advantage and AI capability (C4) are 12.7%, 11.7%, and 11.6%, respectively. As the weights of these criteria are almost on a par, they share the second priority for receiving the budget. After investing in the first six more important criteria, if a surplus remains in the budget, the six less important criteria should also receive some R&D budget. All of this is not only significant information for budget allocation (for the institution), but also empirical knowledge for further study (for the industry).
In Phase IV, the staff were first taught to compile the individual CWVs into several matrices, in which each column is treated either as a valued statistical variable or as a vector in space (see Section 5.1). Further analyses were then made in terms of correlation (see Section 5.2) and similarity (see Section 5.3), while the statistics-based Pearson correlation coefficient and the geometry-based cosine similarity value between every pair of columns were computed. The analytical results were rendered using heat maps, SNA graphs (i.e., the network diagrams), and decision trees, and observations were made on these visualised results. The diversities in the opinions of the DMs under the total goal and under every construct were thus identified. This whole set of knowledge, besides the teaching material, is also an empirical contribution of this study (e.g., the interesting fact that all of the DMs can reach a consensus for the criteria under the engine capability construct but not for those under any other construct, as well as the priority order of the degrees of opinion divergence (DoD) under the constructs and the different grouping scenarios of the DMs under them).
In addition, by associating and comparing the results obtained based on correlation and based on similarity, further interesting findings and implications, both empirical and methodological, were derived. These are as follows:
(1)
It was found that, w.r.t. the total goal, the grouping scenarios for the DMs are identical (i.e., they are divided into the same two subgroups, so this important empirical finding is cross-validated). Nevertheless, the two subgroups are completely isolated in terms of the correlation relationships, while they are bridged in terms of the similarity relationships, because the statistics-based and geometry-based measures are intrinsically different.
(2)
The DMs' considerations under different constructs are sometimes consistent (under PC-A) but often hard to keep consistent (under PC-B, PC-C, and PC-D). This reflects the common variety and diversity of people's considerations across different decision-making issues.
(3)
It was also found that the two methods used to understand (and prioritise) the opinions' diversity under the different constructs differ, in that the measure of DoD (which is judged from the distribution of cosine similarity values under a construct) is not analogous to the measure of NSgs (which results from classifying the DM opinions in the visualised network diagram under the same construct), because DoD and NSgs are different concepts in nature, although both measure the 'diversity in the opinions of DMs'. This methodological knowledge is important for making other similar decision analyses in the future.
(4)
Additional knowledge is derived from comparing the decision trees that cluster the DMs in terms of correlation and in terms of similarity, respectively. Both trees classify the entire opinion group into two DM subgroups w.r.t. the total goal, i.e., {DM-4, DM-5, DM-9, DM-10} and {DM-1, DM-2, DM-3, DM-6, DM-7, DM-8}. As such, the results from classifying based on geometrical similarity and based on statistical correlation once again agree with each other. However, when further scrutinised, their tree shapes are in fact different, which in turn explains the inconsistencies of opinions identified in the network graphs. Even so, the knowledge about how the DMs are classified into two categories according to their main attitudes toward the four primary constructs is not only another significant empirical contribution, but also a practical guide for coping with the divergent opinions of these groups at the top level of decision-making.
As such, the whole set of knowledge obtained during the staff's exercises, whether empirical or methodological, is also fruitful, in addition to the proposed framework itself and the teaching materials used in the training courses. The main research question of this study, which is to scientifically understand the relevant decision knowledge about allocating such a 'big but limited' budget for the design of a specific high cost product, is thereby addressed.
Several methods, such as Delphi, AHP, and DDDM, are adopted in the proposed science education framework to explore multi-criteria decision-making issues. To further discuss the features of the proposed framework, a SWOT analysis is used to present the differences between our framework and the solutions based on the traditional Delphi and AHP methods. The SWOT analysis of the proposed framework is shown in Table 10.

6.2. Conclusions

This paper presents a science education framework for the decision knowledge discovery process. It has been applied and verified empirically through a training course held for the employees of a large R&D institution, where the aim of the course was to incubate the 'decision ability' of the staff for making budgeting decisions. The institution has just received a large budget and an order to design a next generation fighter and needs to make a strategic plan targeting effective budget use for the relevant design and R&D works. The education framework includes several systematic tutorial phases, which help the staff or students to discover the knowledge needed for a scientific yet cogent budget allocation decision using the opinions of the DMs on a group basis, and to manage the gaps and similarities within the interest group numerically and graphically. Given the importance of the decision to allocate such a 'large but still limited' budget (because such budgets are usually allocated for developing high cost products that affect a nation's security and military sustainability as well as the regional balance), the importance of the framework should be axiomatic.
The Phase I tutorial establishes the relevant staff's ability to filter and choose a suitable set of criteria pertaining to the encountered decision problem of high-cost specific product design, while stressing that the only way to obtain a solid basis for determining the criteria set is a thorough review of both the academic and the industrial literature. Phase II develops the abilities to construct an appropriate 'decision hierarchy' and to confirm it using the qualitative Delphi method, while some important processes (e.g., multi-channel communications, the introduction of an additional construct layer, and keeping the number of items in a subtree fit) are taught. Phase III establishes the ability to discover the knowledge about 'a priority', which facilitates a resource (budget) allocation decision that can be scientifically managed. Courses on both the theoretical and the application aspects of AHP (e.g., expert questionnaire design, interview-based surveys, consistency analysis, prioritising the constructs and the criteria under a construct in terms of CWVs for each individual DM, the group-based priority assessments, and the overall synthesized analysis) are given, and in each step the staff are asked to do an exercise. Phase IV extends the staff's ability to make further decision analyses. In this case, the teaching material is designed to meet the specific purpose raised by the trained staff, which is to analyse the heterogeneity and homogeneity of the DM opinions present in the interest group, rather than to conduct a common sensitivity analysis.
Since this paper also presents the body of the teaching materials used in the employee training courses and the practical outcomes, the effectiveness of the proposed science education framework should be evident after such empirical verification. In the large R&D institution, which needed to make a strategic budget allocation decision for a project to design a next generation fighter under the condition that the total budget is insufficient to cover all desired functions, the staff successfully learned how to identify the set of criteria for the design works (Phase I tutorial), how to construct and confirm the decision hierarchy (Phase II tutorial), how to discover the relevant knowledge about the 'priorities' to be set for budget allocation (Phase III tutorial), and how to make the extensive decision analyses to understand the opinion gaps in the interest group (Phase IV tutorial). As is also seen, the content prepared and taught for Phase IV is novel to science education, because different data perspectives from the big data analytics field are taken and several DDDM analytical/visualisation methods are introduced and used.
As a final overview of the entire study and the possible merits of the work performed, a critical analysis is given in Figure 11. In this critical analysis, the items 'what', 'where', 'when', and 'who' are used to describe this study; the items 'how', 'why', and 'what if' are used to analyse it; and the items 'so what' and 'what next' are used to evaluate it.

6.3. Recommendations

As confidence has been gained from the successful application of the proposed framework, the applied case will serve as a reference and a guideline when another employee training course is held for other similar large-scale projects (to develop single high investment cost products), since the R&D works are irreversible once the 'large but limited' budget has been planned. In these projects, the budget (as well as the relevant resources) should be allocated and spent very carefully (so the plan must be determined scientifically); it is therefore necessary to understand the 'priority to invest' for the listed functions to be designed and the set of functions that are truly key, important, and in accordance with the nation's technological development strategy.
From the perspective of methodological improvements, this study paves several ways for future research on the issues of fighter R&D in real practice. For example, a real budget allocation plan can be decided by selecting the best plan among several probable alternatives using other MADM methods (e.g., the TOPSIS and GTMA methods), based on the aggregated CWVs and the information of a 'decision matrix' that contains the attribute values of each alternative plan (e.g., how a plan generates returns on investment for every criterion). For another example, a new budget plan can also be determined by incorporating MODM methods, e.g., goal programming (GP), whereby the CWVs can be used as model parameters for setting the weights of the goal criteria when constructing the model.
Moreover, for the methodology to discriminate the diversity of the DMs' opinions and to confirm the observed grouping scenario, other measurement and analytical bases could also be introduced besides the statistics-based correlation coefficients (which view the individual CWVs as statistical variables) and the geometry-based cosine similarity values (which treat the CWVs as vectors in a vector space), e.g., probability values from non-parametric tests, which regard the CWVs as independent equal-length samples and test whether they come from different populations or have different distributions.
Finally, from the view of pedagogy, future works are mainly two-fold, i.e., widening the application of the proposed science education framework to employee education in other industries, as well as to student education in universities. For the former type of application, more similar training courses can be planned and implemented for audiences from other industries who face similar decision problems, e.g., scientifically allocating the budget for designing a second generation submarine or battleship, or for building proton radio-therapeutic equipment. For the latter type of application, both the DDDM-based teaching framework and the 'next generation' fighter 'design' case would be suitable updates to the teaching materials, which have for decades relied on non-DDDM-based decision methods and the 'conventional' fighter 'selection' case in the relevant textbooks.

Author Contributions

Conceptualization, funding acquisition and investigation, L.-P.C.; Methodology, visualization and writing (original draft and preparation), Z.-Y.Z.; Validation, project administration and writing (review and editing), C.-H.F.; Supervision, J.-H.H.

Funding

This research was funded by the Ministry of Science and Technology, Taiwan (ROC), grant numbers 106-2410-H-038-001 and 106-2410-H-038-003.

Acknowledgments

The authors thank Wen-Chao Yeh for his selfless devotion to visualising the results in Section 4. The funding institutions of this study, as well as their associated grant numbers, are as follows: 106-2410-H-038-001, Ministry of Science and Technology, Taiwan (ROC), 2017(–2018); 106-2410-H-038-003, Ministry of Science and Technology, Taiwan (ROC), 2018(–2019).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Baker, N.R. R&D project selection models: An assessment. IEEE Trans. Eng. Manag. 1974, 4, 165–171. [Google Scholar]
  2. Jia, Q.S. Efficient computing budget allocation for simulation-based policy improvement. IEEE Trans. Autom. Sci. Eng. 2012, 9, 342–352. [Google Scholar] [CrossRef]
  3. Ballantine, J.A.; Galliers, R.; Stray, S.J. The use and importance of financial appraisal techniques in the IS/IT investment decision-making process—Recent UK evidence. Proj. Apprais. 1995, 10, 233–241. [Google Scholar] [CrossRef]
  4. Ballantine, J.; Stray, S. Financial appraisal and the IS/IT investment decision making process. J. Inf. Technol. 1998, 131, 3–14. [Google Scholar] [CrossRef]
  5. Cakmak, M.A.; Gokpinar, E.S. Research & development project selection model and process approach in defense industry related programs: First phase-concept approval decision. In Proceedings of the Portland International Conference on Management of Engineering & Technology (PICMET 2007), Portland, OR, USA, 5–9 August 2007; pp. 2225–2235. [Google Scholar]
  6. Lu, L.Y.; Wu, C.H.; Kuo, T.C. Environmental principles applicable to green supplier evaluation by using multi-objective decision analysis. Int. J. Prod. Res. 2007, 45, 4317–4331. [Google Scholar] [CrossRef]
  7. Lin, S.S.; Juang, Y.S.; Chen, M.Y.; Yu, C.J. Analysis of green design criteria and alternative evaluation processes for 3C products. In Proceedings of the Asia Pacific Industrial Engineering & Management Systems Conference (APIEMS), Kitakyushu, Japan, 14–16 December 2009; pp. 14–16. [Google Scholar]
  8. Bartel, A.P. Productivity gains from the implementation of employee training programs. Ind. Relat. A J. Econ. Soc. 1994, 33, 411–425. [Google Scholar] [CrossRef]
  9. Storey, D.J.; Westhead, P. Management training in small firms—A case of market failure? Hum. Resour. Manag. J. 1997, 7, 61–71. [Google Scholar] [CrossRef]
  10. Loan-Clarke, J.; Boocock, G.; Smith, A.; Whittaker, J. Investment in management training and development by small businesses. Empl. Relat. 1999, 21, 296–311. [Google Scholar] [CrossRef]
  11. Harel, G.H.; Tzafrir, S.S. The effect of human resource management practices on the perceptions of organizational and market performance of the firm. Hum. Resour. Manag. 1999, 383, 185–199. [Google Scholar] [CrossRef]
  12. Antonacopoulou, E.P. Reconnecting education, development and training through learning: A holographic perspective. Educ. Train. 2000, 42, 255–264. [Google Scholar] [CrossRef]
  13. Bartel, A.P. Measuring the employer’s return on investments in training: Evidence from the literature. Ind. Relat. A J. Econ. Soc. 2000, 39, 502–524. [Google Scholar] [CrossRef]
  14. Ibrahim, A.B.; Soufani, K. Entrepreneurship education and training in Canada: A critical assessment. Educ. Train. 2002, 44, 421–430. [Google Scholar] [CrossRef]
  15. Simpson, M.; Tuck, N.; Bellamy, S. Small business success factors: The role of education and training. Educ. Train. 2004, 46, 481–491. [Google Scholar] [CrossRef]
  16. Aw, B.Y.; Roberts, M.J.; Winston, T. Export market participation, investments in R&D and worker training, and the evolution of firm productivity. World Econ. 2007, 30, 83–104. [Google Scholar]
  17. Salas, E.; Tannenbaum, S.I.; Kraiger, K.; Smith-Jentsch, K.A. The science of training and development in organizations: What matters in practice? Psychol. Sci. Public Int. 2012, 13, 74–101. [Google Scholar] [CrossRef] [PubMed]
  18. Wyss, M.; Wilner, A. The next generation fighter club: How shifting markets will shape Canada’s F-35 debate. Can. Mil. J. 2012, 12, 18–27. [Google Scholar]
  19. Chiang, Y.-Y. Taiwan President Says China’s Military Expansion Could Destabilize Asia. The New York Times. 2017. Available online: https://cn.nytimes.com/china/20171230/taiwan-china-tsai-ing-wen/zh-hant/dual/ (accessed on 26 March 2018).
  20. Marcum, M. A Comparative Study of Global Fighter Development Timelines. SITC Policy Briefs 2014, 3, 1–5. [Google Scholar]
  21. Zientek, J.B. Promoting Japan and South Korea’s Role in East Asian Security. Strategy Research Project of U.S. Army War College, Carlisle Barracks, PA, USA. 2010. Available online: http://www.dtic.mil/dtic/tr/fulltext/u2/a521814.pdf (accessed on 15 May 2018).
  22. Ausink, J.A.; Taylor, W.W.; Bigelow, J.H.; Brancato, K. Investment Strategies for Improving Fifth-Generation Fighter Training; RAND Corporation: Santa Monica, CA, USA, 2011; Available online: http://www.dtic.mil/dtic/tr/fulltext/u2/a537970.pdf (accessed on 15 May 2018).
  23. Charles, M.B.; Sinnewe, E. India’s indigenization of military aircraft design and manufacturing: Towards a fifth-generation fighter. In The Political Economy of Conflict in South Asia; Webb, M.J., Wijeweera, A., Eds.; Palgrave Macmillan: London, UK, 2015; pp. 93–113. [Google Scholar]
  24. Lockheed Martin Aeronautics. Lockheed Martin F-35 Lightning II. Wikipedia. 2006. Available online: https://en.wikipedia.org/wiki/Lockheed_Martin_F-35_Lightning_II (accessed on 15 March 2018).
  25. Hwang, C.-L.; Yoon, K.-S. Multiple Attribute Decision Making Methods and Applications: A State-of-the-Art Survey; Springer: Berlin/Heidelberg, Germany, 1981; ISBN 978-3-540-10558-9. [Google Scholar]
  26. Ali, Y.; Asghar, A.; Muhammad, N.; Salman, A. Selection of a Fighter Aircraft to Improve the Effectiveness of Air Combat in the War on Terror: Pakistan Air Force—A Case in Point. Int. J. Anal. Hierarchy Process 2017, 9. [Google Scholar] [CrossRef]
  27. Dožić, S.; Kalić, M. An AHP approach to aircraft selection process. Transp. Res. Procedia 2014, 3, 165–174. [Google Scholar] [CrossRef]
  28. Atique, M.S.A.; Barman, S.; Nafi, A.S.; Bellah, M.; Salam, M.A. Design of a fifth generation air superiority fighter. In Proceedings of the 11th AIP Conference Proceedings, Dhaka, Bangladesh, 18–20 December 2015; Volume 1754, p. 060003. [Google Scholar]
  29. Gertler, J. Air force F-22 Fighter Program; Library of Congress, Congressional Research Service: Washington, DC, USA, 2013. [Google Scholar]
  30. Munjulury, R.C.; Staack, I.; Abdalla, A.M.; Melin, T.; Jouannet, C.; Krus, P. Knowledge-based design for future combat aircraft concepts. In Proceedings of the 29th Congress of the International Council of the Aeronautical Sciences, St. Petersburg, Russia, 7–12 September 2014. [Google Scholar]
  31. Yang, R.; Shen, C.; Huang, F. Air combat tactics among the fourth generation fighters. J. Autom. Control Eng. 2015, 3, 290–293. [Google Scholar] [CrossRef]
  32. Johnson, K.F. The Need for Speed: Hypersonic Aircraft and the Transformation of Long Range Airpower. Ph.D. Thesis, Air University, Maxwell Air Force Base, AL, USA, 2005; pp. 1–73. [Google Scholar]
  33. Stillion, J.; Perdue, S. Air Combat Past, Present and Future. Briefing Slides, RAND Project Air Force, RAND Corporation. 2008. Available online: http://www.aereo.jor.br/wp-content/uploads/2016/02/2008_RAND_Pacific_View_Air_ Combat_Briefing.pdf (accessed on 4 May 2018).
  34. Lahtinen, T.M.; Koskelo, J.P.; Laitinen, T.; Leino, T.K. Heart rate and performance during combat missions in a flight simulator. Aviat. Space Environ. Med. 2007, 78, 387–391. [Google Scholar] [PubMed]
  35. Narayana, R.; Sudesh, K.K.; Girija, G.; Debanjan, M. Situation and threat assessment in BVR combat. In Proceedings of the AIAA Guidance, Navigation, and Control Conference, Guidance, Navigation, and Control and Co-located Conferences, Portland, OR, USA, 8–11 August 2011; pp. 1–6. [Google Scholar]
  36. Horváth, J. JAS 39 Gripen in Air Operations 23: Introduction. The Official Website of REPÜLÉSTUDOMÁNY. 2013. Available online: http://www.repulestudomany.hu/kulonszamok/2013_cikkek/2013-2-30-Jozsef_Horvath.pdf (accessed on 30 April 2018).
  37. Wolff, S. Using executable VDM++ models in an industrial application-self-defense system for fighter aircraft. Tech. Rep. Electron. Comp. Eng. 2012, 1, 1–18. [Google Scholar]
  38. Pratt, M. Marine Corps Aerial Electronic Warfare into the Future. United States Marine Corps, Command and Staff College, School of Advanced Warfighting, Marine Corps University. 2003. Available online: http://www.dtic.mil/dtic/tr/fulltext/u2/a510474.pdf (accessed on 28 April 2018).
  39. Layton, P. Combat thunderstorm, a new form of air warfare. Defence Today 2014, 11, 2–8. [Google Scholar]
  40. Hathaway, D.C. Germinating a New SEAD: The Implications of Executing the SEAD Mission in a UCAV. Ph.D. Thesis, Air University, Maxwell Air Force Base, AL, USA, 2001; pp. 1–88. [Google Scholar]
  41. Murman, E.M. Lean Aerospace Engineering; Littlewood Lecture AIAA-2008-4; Massachusetts Institute of Technology: Cambridge, MA, USA, 2008. [Google Scholar]
  42. Tirpak, J.A. Bomber Questions. Air Force Mag. 2001, 84, 36–43. [Google Scholar]
  43. Ball, S. The New Reality. In Signals: The Lockheed Martin UK News Update; Rood, P., Ed.; Lockheed Martin UK, Winter 2010/2011: London, UK, 2010; pp. 2–4. [Google Scholar]
  44. Schriver, R.; Stokes, M. Evolving Capabilities of the Chinese People’s Liberation Army: Consequences of Coercive Aerospace Power for United States Conventional Deterrence. Official Website of the Project 2049 Institute. 2008. Available online: https://project2049.net/documents/ChineseCoerciveAerospaceCampaign.pdf (accessed on 3 May 2018).
  45. Helldin, T.; Falkman, G. Human-centered automation for improving situation awareness in the fighter aircraft domain. In Proceedings of the 2012 IEEE International Multi-Disciplinary Conference on Cognitive Methods in Situation Awareness and Decision Support (CogSIMA), New Orleans, LA, USA, 6–8 March 2012; pp. 191–197. [Google Scholar]
  46. Alfredson, J.; Holmberg, J.; Andersson, R.; Wikforss, M. Applied cognitive ergonomics design principles for fighter aircraft. In Proceedings of the 2011 International Conference on Engineering Psychology and Cognitive Ergonomics, Orlando, FL, USA, 9–14 July 2011; Springer: Berlin/Heidelberg, Germany, 2011; pp. 473–483. [Google Scholar]
  47. Kaminani, S. Human computer interaction issues with touch screen interfaces in the flight deck. In Proceedings of the 30th IEEE/AIAA Digital Avionics Systems Conference (DASC 2011), Seattle, DC, USA, 16–20 October 2011. [Google Scholar]
  48. Mulgund, S.; Rinkus, G.; Illgen, C.; Zacharias, G. Situation Awareness Modeling and Pilot State Estimation for Tactical Cockpit Interfaces. In Proceedings of the Human-Computer Interaction International 1997, San Francisco, CA, USA, 24–29 August 1997. [Google Scholar]
  49. Wang, H.Y.; Bian, T.; Xue, C.Q. Experimental evaluation of fighter’s interface layout based on eye tracking. Electro Mech. Eng. 2011, 27, 50–53. [Google Scholar]
  50. Groh, J.L. Network-Centric Warfare: Leveraging the Power of Information. In U.S. Army War College Guide to National Security Issues; U.S. Army War College: Carlisle, PA, USA, 2008; pp. 323–338. [Google Scholar]
  51. James, L.D. Airmen: Delivering decision advantage. Air Space Power J. 2012, 6, 4–11. [Google Scholar]
  52. Shukla Shubhendu, S.; Vijay, J. Applicability of artificial intelligence in different fields of life. Int. J. Sci. Eng. Res. 2013, 1, 28–35. [Google Scholar]
  53. Capraro, G.T.; Berdan, G.B.; Liuzzi, R.A.; Wicks, M.C. Artificial intelligence and sensor fusion. In Proceedings of the IEEE International Conference on Integration of Knowledge Intensive Multi-Agent Systems 2003, Cambridge, MA, USA, 30 September–4 October 2003; pp. 591–595. [Google Scholar] [Green Version]
  54. Bolkcom, C.; Anthony, M. F-35 Lightning II Joint Strike Fighter (JSF) Program: Background, Status, and Issues; CRS Report for Congress RL30563; Congressional Research Service, Library of Congress: Washington, DC, USA, 2009. [Google Scholar]
  55. Higby, L.C.P. Promise and reality: Beyond visual range (BVR) air-to-air combat. In Air War College (AWC) Electives Program: Air Power Theory, Doctrine, and Strategy: 1945–Present; Air War College Seminar, Virginia Military Institute, Air University: Sumter, CA, USA, 2005. [Google Scholar]
56. Arik, M.; Akan, O.B. Enabling cognition on electronic counter-measure systems against next-generation radars. In Proceedings of the 2015 IEEE Military Communications Conference (MILCOM 2015), Tampa, FL, USA, 26–28 October 2015; pp. 1103–1108. [Google Scholar]
57. Okoli, C.; Pawlowski, S.D. The Delphi method as a research tool: An example, design considerations and applications. Inf. Manag. 2004, 42, 15–29. [Google Scholar] [CrossRef]
58. Saaty, T.L. A scaling method for priorities in hierarchical structures. J. Math. Psychol. 1977, 15, 234–281. [Google Scholar] [CrossRef]
  59. Akaa, O.U.; Abu, A.; Spearpoint, M.; Giovinazzi, S. A Group-AHP Decision Analysis for the Selection of Applied Fire Protection to Steel Structures. Fire Saf. J. 2016, 86, 95–105. [Google Scholar] [CrossRef]
  60. Bian, T.; Hu, J.; Deng, Y. Identifying influential nodes in complex networks based on AHP. Phys. A Stat. Mech. Appl. 2017, 479, 422–436. [Google Scholar] [CrossRef]
  61. Dong, Q.; Cooper, O. An orders-of-magnitude AHP supply chain risk assessment framework. Int. J. Prod. Econ. 2016, 182, 144–156. [Google Scholar] [CrossRef]
  62. Dweiri, F.; Kumar, S.; Khan, S.A.; Jain, V. Designing an integrated AHP based decision support system for supplier selection in automotive industry. Expert Syst. Appl. 2016, 62, 273–283. [Google Scholar] [CrossRef]
  63. Erdogan, S.A.; Šaparauskas, J.; Turskis, Z. Decision making in construction management: AHP and expert choice approach. Procedia Eng. 2017, 172, 270–276. [Google Scholar] [CrossRef]
  64. Govindan, K.; Kaliyan, M.; Kannan, D.; Haq, A.N. Barriers analysis for green supply chain management implementation in Indian industries using analytic hierarchy process. Int. J. Prod. Econ. 2014, 147, 555–568. [Google Scholar] [CrossRef]
  65. Hillerman, T.; Souza, J.C.F.; Reis, A.C.B.; Carvalho, R.N. Applying clustering and AHP methods for evaluating suspect healthcare claims. J. Comp. Sci. 2017, 19, 97–111. [Google Scholar] [CrossRef]
  66. Nikou, S.; Mezei, J. Evaluation of mobile services and substantial adoption factors with analytic hierarchy process (AHP). Telecomm. Policy 2013, 37, 915–929. [Google Scholar] [CrossRef]
  67. Samuel, O.W.; Asogbon, G.M.; Sangaiah, A.K.; Fang, P.; Li, G. An integrated decision support system based on ANN and Fuzzy AHP for heart failure risk prediction. Expert Syst. Appl. 2017, 68, 163–172. [Google Scholar] [CrossRef]
  68. Ho, H.-P.; Chang, C.-T.; Ku, C.-Y. On the location selection problem using analytic hierarchy process and multi-choice goal programming. Int. J. Syst. Sci. 2013, 44, 94–108. [Google Scholar] [CrossRef]
  69. Kokangül, A.; Polat, U.; Dağsuyu, C. A new approximation for risk assessment using the AHP and Fine Kinney methodologies. Saf. Sci. 2017, 91, 24–32. [Google Scholar] [CrossRef]
  70. Li, W.; Yu, S.; Pei, H.; Zhao, C.; Tian, B. A hybrid approach based on fuzzy AHP and 2-tuple fuzzy linguistic method for evaluation in-flight service quality. J. Air Transp. Manag. 2017, 60, 49–64. [Google Scholar] [CrossRef]
  71. Szulecka, J.; Zalazar, E.M. Forest plantations in Paraguay: Historical developments and a critical diagnosis in a SWOT-AHP framework. Land Use Policy 2017, 60, 384–394. [Google Scholar] [CrossRef]
  72. Xu, Z.; Liao, H. Intuitionistic fuzzy analytic hierarchy process. IEEE Trans. Fuzzy Syst. 2014, 22, 749–761. [Google Scholar] [CrossRef]
  73. Sadiq, R.; Tesfamariam, S. Environmental decision-making under uncertainty using intuitionistic fuzzy analytic hierarchy process (IF-AHP). Stoch. Environ. Res. Risk Assess. 2009, 23, 75–91. [Google Scholar] [CrossRef]
  74. Zhuang, Z.-Y.; Yang, L.-W.; Lee, M.-H.; Wang, C.-Y. ‘MEAN+R’: Implementing a web-based, multi-participant decision support system using the prevalent MEAN architecture with R based on a revised intuitionistic-fuzzy multiple attribute decision-making model. Microsyst. Technol. 2018, in press. [Google Scholar] [CrossRef]
  75. Fernandez, J.F.G.; Marquez, A.C. Managing Maintenance Strategy. In Maintenance Management in Network Utilities: Framework and Practical Implementation; Fernandez, J.F.G., Marquez, A.C., Eds.; Springer: Berlin, Germany, 2012; Chapter 6; pp. 149–183. [Google Scholar]
  76. Marquez, A.C. Criticality Analysis for Asset Priority Setting. In The Maintenance Management Framework: Models and Methods for Complex Systems Maintenance; Marquez, A.C., Ed.; Springer: Berlin, Germany, 2007; Chapter 9; pp. 107–126. [Google Scholar]
  77. Zhuang, Z.-Y.; Chiang, I.-J.; Su, C.-R.; Chen, C.-Y. Modelling the decision of paper shredder selection using analytic hierarchy process and graph theory and matrix approach. Adv. Mech. Eng. 2017, 9, 1–11. [Google Scholar] [CrossRef]
Figure 1. Types of the Well-Known Fifth Generation Fighting Aircraft.
Figure 2. The Proposed Scientific Decision Knowledge Exploration Education Framework.
Figure 3. Constructing the Decision Hierarchy for the New Generation Fighter Design Problem.
Figure 4. The Overall Priority of the Criteria.
Figure 5. Heat Map of the Correlations between DMs' Opinions (CWVs) w.r.t. the Total Design Decision Goal.
Figure 6. The Network Diagram for the DMs' Opinions w.r.t. the Total Design Decision Goal.
Figure 7. Tree Classifying the DMs w.r.t. the Total Goal Based on the Correlations between Opinions.
Figure 8. Heat Maps for the Similarities between DMs' Opinions (CWVs).
Figure 9. Network Diagrams Visualised Based on the Similarities between the Opinions of DMs.
Figure 10. Tree Classifying the DMs w.r.t. the Total Goal Based on the Similarities between the Individual Opinions.
Figure 11. A Critical Analysis.
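Figures 7 and 10 group the DMs into tree structures based on the correlations and the cosine similarities between their opinions. The exact tree-building procedure used by the authors is not restated here; the sketch below assumes one plausible realisation, agglomerative hierarchical clustering over a (1 − similarity) distance with SciPy, and uses a purely hypothetical 4 × 4 similarity matrix for illustration.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

# Hypothetical pairwise similarity matrix for four DMs (illustrative values only);
# in the exercises this would be the correlation or cosine-similarity matrix
# computed from the DMs' criteria weight vectors (see Table 9 below).
sim = np.array([
    [1.00, 0.92, 0.35, 0.40],
    [0.92, 1.00, 0.30, 0.45],
    [0.35, 0.30, 1.00, 0.88],
    [0.40, 0.45, 0.88, 1.00],
])

# Turn similarity into a distance, condense it, and cluster agglomeratively.
dist = 1.0 - sim
np.fill_diagonal(dist, 0.0)            # guard against rounding on the diagonal
Z = linkage(squareform(dist, checks=False), method="average")

# Cutting the dendrogram at a chosen distance yields subgroups of like-minded DMs,
# one possible way to obtain tree-style groupings such as those in Figures 7 and 10.
labels = fcluster(Z, t=0.3, criterion="distance")
print("Subgroup label per DM:", labels)
```

Other linkage rules (single, complete, Ward) would give different cut points; the choice above is an assumption, not the authors' prescription.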
Table 1. Operational definitions of next generation fighter design decision factors (each entry gives the evaluation factor, followed by its operational definition).
Hypersonic: The features of sonic speed are divided into the following four categories:
Subsonic: less than 0.8 Mach
Transonic: less than 1.2 Mach and greater than 0.8 Mach
Supersonic: less than 5.0 Mach and greater than 1.2 Mach
Hypersonic: greater than 5.0 Mach
Hypersonic, also known as very high supersonic, means a speed far above the supersonic regime. In general, at Mach 5 and above there are integrative effects that do not occur at supersonic speeds and that are important for propulsion systems and vehicles. This technology has a decisive influence on rapid combat and mobile combat.
Supercruise capability: Cruising speed is the speed at which a fighter flies while the engine consumes the minimum fuel over a given flying distance. Similarly, supercruise speed is the speed at which a fighter remains supersonic with minimum fuel consumption over a given distance; it usually refers to a fighter flying above Mach 1.5 for more than 30 min after the engine stops using the afterburner.
Vertical/short take-off and landing capability: Vertical take-off and landing refers to a fixed-wing aircraft lifting off or landing vertically, i.e., without a runway. The requirement for short take-off and landing (STOL), in turn, is that the fighter can take off and land within a 300–500 m ground run in the fully equipped condition (fully loaded with available weapons), a distance further reduced to 250 m for the new generation fighter.
Super maneuverability: Super manoeuvrability of fighters is a capability comprehensively evaluated by manoeuvrability and mobility, demonstrated by the capability to change manoeuvre state and dimension; in brief, the capability to change position.
Evaluating super manoeuvrability refers to assessing the variation of indicators such as acceleration capability, climbing velocity, steadiness, transient circling angular velocity, and rolling velocity within a certain flying time.
Multi-mission execution capability: Multi-mission execution capability means that the original model can be adapted to execute multiple missions, rather than a single task, without much modification. The capability is demonstrated by the integrative execution of two or more missions, such as air combat, counter-surface attack, reconnaissance, bombardment, and electronic warfare.
Beyond visual range awareness capability: Beyond visual range, abbreviated as BVR, refers to distances beyond the range of the naked eye, at which high-tech devices must be relied upon to detect or deploy weapons against an unknown target [55]. The distance is not yet well defined or unified, but is approximately counted in tens of kilometres.
Therefore, BVR awareness capability refers not only to the capability to sense the dimensions of time and space corresponding to environmental factors beyond visual range in a specific event, but also to process and understand the meaning of those factors and, ultimately, to predict the outcome when variations, such as time or a certain incident, are added to the algorithm.
Advanced cockpit and human-machine interface: An advanced cockpit and human-machine interface refers to a fighter equipped with an integrative display showing the various information provided by the avionic fire-control system and sensors, covering not only fire control, fuel, loaded weapons, and radar warning, but also the tactical path and the situational sensing of engaging fighters.
It usually incorporates a large-sized display to show BVR and whole-field situational information for the fighter and, in addition, applies a helmet-mounted display to show information within visual distance and other tactical sensing indicators.
Furthermore, an advanced cockpit and human-machine interface incorporates advanced devices such as hands-on throttle-and-stick (HOTAS) controls, touch-based interfaces, and helmet tracking control.
Rapid electronic warfare countermeasures and interference capability: Electronic warfare countermeasures refer to a fighter's capability to suppress or devastate enemies through the application of electromagnetic equipment or other means. Usually, the countermeasure mission covers interfering with the enemy's reception of electromagnetic signals and even with the enemy's electromagnetic devices.
Moreover, fighters also need the capability to reduce or suppress the enemy's counteraction, which usually involves methods such as changing radar channels, electromagnetic wave frequencies, or radio communication channels. In total, fighters are supposed to be equipped with both active and passive interference capabilities [56].
Active interference means actively transmitting signals to prevent the enemy from receiving, or even effectively using, electromagnetic signals for communication, for example, by dispatching jamming signals to cause the enemy's communications to fail.
Conversely, passive interference means the fighter does not transmit signals actively. Instead, it uses chaff to clutter radar and special coating materials to reduce the infrared signature, shortening the detectable distance or lowering the possibility of being detected, ultimately interfering with the enemy's use of electromagnetic signals.
Super information advantage/artificial intelligence capability: Along with the development and application of internet technology, fighters have become a critical part of the modern combat operation and command system. Equipped with ultra-high-speed information processing and integrative information exchange capabilities, a fighter can analyse strategy from the surrounding combat information or various command information and, further, simultaneously select the best combat strategy from multiple options, ranging from independent or joint operation to commanding peers for execution, by virtue of its superior information processing capability.
Stealth: By integrating specialised techniques and designs, including surface coatings, material properties, special compound materials, and shape design, fighters can lower the possibility of being detected or shorten the detectable distance. The principal military stealth technology development focuses on reducing radar, infrared, visible light, and sound wave detection.
Beyond visual range integrated attack capability: Beyond visual range (BVR) refers to distances beyond the range of the naked eye, at which high-tech devices must be relied upon to detect or deploy weapons against an unknown target [55]. The distance is not yet well defined or unified, but is approximately counted in tens of kilometres.
Therefore, beyond visual range integrated attack capability (BVR attack capability) indicates that a fighter can apply multiple weapons, such as active/semi-active radar homing systems, to conduct attacks beyond visual range using its equipped avionic devices or information provided by the command system.
Various weapon systems integrating capability: The various weapon systems integrated with a fighter should include not only traditional ammunition, electronic warfare equipment, and reconnaissance photographic devices but also new-concept weapons, such as directed energy weapons (DEWs) like laser, microwave, and particle-beam weapons, kinetic weapons like kinetic kill vehicles, and electromagnetic guns. Fighters equipped with various weapon systems will enhance combat capability and return on investment (ROI) and win through unpredictable moves.
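As an aside, the speed-regime thresholds stated for the Hypersonic factor in Table 1 can be restated as a small classification helper. This is only a sketch that re-encodes the table's Mach boundaries; the handling of the exact boundary values is an assumption (the table states strict inequalities on both sides), and the function name is ours.

```python
def speed_regime(mach: float) -> str:
    """Classify a flight speed (in Mach) into the four regimes listed in Table 1."""
    if mach < 0.8:
        return "subsonic"
    elif mach < 1.2:
        return "transonic"
    elif mach < 5.0:
        return "supersonic"
    return "hypersonic"

# Quick checks against the thresholds in Table 1.
print(speed_regime(0.9))   # transonic
print(speed_regime(1.6))   # supersonic (e.g., a supercruising fighter above Mach 1.5)
print(speed_regime(6.0))   # hypersonic
```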
Table 2. The different stratifications of the decision makers (DMs) interviewed during exercising.
Stratification | Type | #DMs | %
Gender | Male | 10 | 100%
Gender | Female | 0 | 0%
Degree | Ph.D. | 4 | 40%
Degree | M.Sc. | 6 | 60%
Occupancy | Managing | 3 | 30%
Occupancy | Advising | 4 | 40%
Occupancy | Staff | 3 | 30%
In Service | 5–10 years | 1 | 10%
In Service | 11–20 years | 1 | 10%
In Service | >21 years | 8 | 80%
Table 3. The organized decision hierarchy that is confirmed using the Delphi method.
Decision Goal: The Suitable Design of a Next Generation Fighting Aircraft
Construct (PC-A) Engine Capability: Vertical/short Take-off/landing Capability (AC-1); Super-cruise Capability (AC-2); Hypersonic (AC-3)
Construct (PC-B) Flying Control Capability: Multi-mission Execution Capability (BC-1); Super Maneuverability (BC-2)
Construct (PC-C) Avionics and Awareness Capability: Super Information Advantage and AI Capability (CC-1); Beyond-visual Range Awareness Capability (CC-2); Rapid e-Warfare Countermeasures and Interference Capability (CC-3); Advanced Cockpit and Human Machine Interface (CC-4)
Construct (PC-D) Integration Capability: Stealth (DC-1); Beyond-visual Range Integrated Attack Capability (DC-2); Various Weapon Systems Integrating Capability (DC-3)
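For readers following along in code, the hierarchy confirmed in Table 3 can be held in a simple nested structure before any pairwise comparisons are entered. The sketch below is one possible representation; the dictionary layout and key naming are ours, not prescribed by the framework, and the closing comment simply restates the standard AHP synthesis rule.

```python
# The confirmed hierarchy of Table 3 as a plain nested mapping (layout is ours).
decision_hierarchy = {
    "goal": "The Suitable Design of a Next Generation Fighting Aircraft",
    "constructs": {
        "PC-A Engine Capability": [
            "AC-1 Vertical/short Take-off/landing Capability",
            "AC-2 Super-cruise Capability",
            "AC-3 Hypersonic",
        ],
        "PC-B Flying Control Capability": [
            "BC-1 Multi-mission Execution Capability",
            "BC-2 Super Maneuverability",
        ],
        "PC-C Avionics and Awareness Capability": [
            "CC-1 Super Information Advantage and AI Capability",
            "CC-2 Beyond-visual Range Awareness Capability",
            "CC-3 Rapid e-Warfare Countermeasures and Interference Capability",
            "CC-4 Advanced Cockpit and Human Machine Interface",
        ],
        "PC-D Integration Capability": [
            "DC-1 Stealth",
            "DC-2 Beyond-visual Range Integrated Attack Capability",
            "DC-3 Various Weapon Systems Integrating Capability",
        ],
    },
}

# In AHP synthesis, a criterion's global weight is its local weight under the
# construct multiplied by the construct's weight under the total goal.
print(sum(len(c) for c in decision_hierarchy["constructs"].values()), "criteria in total")  # 12
```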
Table 4. The priority of the primary constructs w.r.t. new generation fighter total design goal.
Construct | Relative Importance | Ordinal Rank
(PC-A) Engine capability | 0.366 | 1
(PC-D) Integration capability | 0.269 | 2
(PC-C) Avionics and awareness capability | 0.245 | 3
(PC-B) Flying control capability | 0.120 | 4
Consistency analysis: Inconsistency = 0.00934 with 0 missing judgments.
Table 5. The priority of the criteria w.r.t. engine capability construct.
Criteria | Relative Importance | Ordinal Rank
(AC-1) Vertical/short take-off and landing capability | 0.705 | 1
(AC-2) Super-cruise capability | 0.215 | 2
(AC-3) Hypersonic | 0.080 | 3
Consistency analysis: Inconsistency = 0.03 with 0 missing judgments.
Table 6. The priority of the criteria w.r.t. flying control capability construct.
Criteria | Relative Importance | Ordinal Rank
(BC-1) Multi-mission execution capability | 0.636 | 1
(BC-2) Super manoeuvrability | 0.364 | 2
Consistency analysis: Inconsistency = 0 with 0 missing judgments.
Table 7. The priority of the criteria w.r.t. avionics and awareness capability construct.
Criteria | Relative Importance | Ordinal Rank
(CC-1) Super information advantage/AI capability | 0.361 | 1
(CC-2) Beyond visual range awareness capability | 0.282 | 2
(CC-3) Rapid electronic warfare countermeasures and interference capability | 0.199 | 3
(CC-4) Advanced cockpit and human-machine interface | 0.158 | 4
Consistency analysis: Inconsistency = 0.02 with 0 missing judgments.
Table 8. The priority of the criteria w.r.t. integration capability construct.
Criteria | Relative Importance | Ordinal Rank
(DC-1) Stealth | 0.366 | 1
(DC-2) Beyond visual range integrated attack capability | 0.339 | 2
(DC-3) Various weapon systems integrating capability | 0.294 | 3
Consistency analysis: Inconsistency = 0.03 with 0 missing judgments.
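Tables 4–8 list the group-level relative importances together with an "Inconsistency" value, which in standard AHP usage [58] is the consistency ratio of the underlying pairwise comparison matrix. As a hedged sketch of how such numbers are produced, the snippet below derives a priority vector via the principal eigenvector and computes the consistency ratio; the 3 × 3 judgement matrix is illustrative only, not the judgements actually collected in the training course, and the random-index table holds the usual published values.

```python
import numpy as np

# Saaty's random-index values, indexed by matrix order n (here up to n = 8).
RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32, 8: 1.41}

def ahp_priorities(A: np.ndarray):
    """Return the principal-eigenvector priority vector and the consistency
    ratio of a reciprocal pairwise comparison matrix A."""
    n = A.shape[0]
    eigvals, eigvecs = np.linalg.eig(A)
    k = int(np.argmax(eigvals.real))          # index of the principal eigenvalue
    w = np.abs(eigvecs[:, k].real)
    w = w / w.sum()                           # normalised relative importances
    ci = (eigvals[k].real - n) / (n - 1)      # consistency index
    cr = ci / RI[n] if RI[n] > 0 else 0.0     # consistency ratio ('inconsistency')
    return w, cr

# Illustrative 3x3 judgement matrix (not the judgements collected in the course).
A = np.array([[1.0, 3.0, 7.0],
              [1/3, 1.0, 3.0],
              [1/7, 1/3, 1.0]])
w, cr = ahp_priorities(A)
print("priorities:", np.round(w, 3))
print("consistency ratio:", round(cr, 3))
```

A consistency ratio of roughly 0.1 or less is conventionally taken as acceptable, which is consistent with the small inconsistency values reported in Tables 4–8.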
Table 9. The compiled matrices of criteria weight vectors (CWVs).
(a) The Matrix of CWVs under the Total Goal Compiled for the DMs
Criteria | DM-1 | DM-2 | DM-3 | DM-4 | DM-5 | DM-6 | DM-7 | DM-8 | DM-9 | DM-10 | Aggregated
PC-A | 0.429669 | 0.543040 | 0.439606 | 0.129464 | 0.091736 | 0.645984 | 0.620258 | 0.465819 | 0.062500 | 0.183708 | 0.361179
PC-B | 0.042796 | 0.135989 | 0.080010 | 0.040752 | 0.066449 | 0.222832 | 0.081912 | 0.277140 | 0.062500 | 0.136885 | 0.114727
PC-C | 0.113017 | 0.076465 | 0.411034 | 0.309636 | 0.228508 | 0.086187 | 0.243843 | 0.161070 | 0.437500 | 0.172599 | 0.223986
PC-D | 0.414518 | 0.244505 | 0.069350 | 0.520148 | 0.613307 | 0.044997 | 0.053986 | 0.095970 | 0.437500 | 0.506808 | 0.300109
(b) The Compiled Matrix of CWVs under the Engine Capability Construct (PC-A)
Criteria | DM-1 | DM-2 | DM-3 | DM-4 | DM-5 | DM-6 | DM-7 | DM-8 | DM-9 | DM-10 | Aggregated
AC-3 | 0.062254 | 0.075284 | 0.077839 | 0.065391 | 0.080688 | 0.070421 | 0.243756 | 0.056743 | 0.066667 | 0.077839 | 0.087688
AC-2 | 0.236438 | 0.124351 | 0.234432 | 0.199419 | 0.292328 | 0.206212 | 0.066933 | 0.294638 | 0.466667 | 0.234432 | 0.235585
AC-1 | 0.701308 | 0.800365 | 0.687729 | 0.735190 | 0.626984 | 0.723367 | 0.689311 | 0.648619 | 0.466667 | 0.687729 | 0.676727
(c) The Compiled Matrix of CWVs under the Flying Control Capability Construct (PC-B)
Criteria | DM-1 | DM-2 | DM-3 | DM-4 | DM-5 | DM-6 | DM-7 | DM-8 | DM-9 | DM-10 | Aggregated
BC-2 | 0.142857 | 0.166667 | 0.833333 | 0.875000 | 0.250000 | 0.125000 | 0.500000 | 0.250000 | 0.500000 | 0.166667 | 0.380952
BC-1 | 0.857143 | 0.833333 | 0.166667 | 0.125000 | 0.750000 | 0.875000 | 0.500000 | 0.750000 | 0.500000 | 0.833333 | 0.619048
(d) The Compiled Matrix of CWVs under the Avionics and Awareness Capability Construct (PC-C)
Criteria | DM-1 | DM-2 | DM-3 | DM-4 | DM-5 | DM-6 | DM-7 | DM-8 | DM-9 | DM-10 | Aggregated
CC-1 | 0.607784 | 0.118621 | 0.608204 | 0.037579 | 0.206250 | 0.357440 | 0.250392 | 0.279167 | 0.122375 | 0.208063 | 0.279587
CC-2 | 0.044341 | 0.152306 | 0.047210 | 0.306580 | 0.164583 | 0.088841 | 0.050765 | 0.391667 | 0.425406 | 0.068007 | 0.173971
CC-3 | 0.185842 | 0.073467 | 0.196865 | 0.217168 | 0.341667 | 0.503527 | 0.081885 | 0.164583 | 0.047306 | 0.117358 | 0.192967
CC-4 | 0.162033 | 0.655606 | 0.147721 | 0.438673 | 0.287500 | 0.050193 | 0.616958 | 0.164583 | 0.404914 | 0.606572 | 0.353475
(e) The Compiled Matrix of CWVs under the Integration Capability Construct (PC-D)
Criteria | DM-1 | DM-2 | DM-3 | DM-4 | DM-5 | DM-6 | DM-7 | DM-8 | DM-9 | DM-10 | Aggregated
DC-1 | 0.665070 | 0.234432 | 0.723506 | 0.327778 | 0.327778 | 0.091528 | 0.702839 | 0.333333 | 0.261111 | 0.090352 | 0.375773
DC-2 | 0.231082 | 0.077839 | 0.193186 | 0.411111 | 0.261111 | 0.707060 | 0.182234 | 0.333333 | 0.327778 | 0.555927 | 0.328066
DC-3 | 0.103847 | 0.687729 | 0.083308 | 0.261111 | 0.411111 | 0.201412 | 0.114927 | 0.333333 | 0.411111 | 0.353721 | 0.296161
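Figures 5–10 compare the DMs using two pairwise measures over these CWVs: statistical correlation and geometrical cosine similarity. The following is a minimal sketch of both computations using the construct-level vectors of Table 9(a); it illustrates the measures rather than reproducing the authors' full analysis pipeline. The final line records an observation from these data: the "Aggregated" column appears to agree, up to rounding, with the arithmetic mean of the ten individual weight vectors.

```python
import numpy as np

# CWVs under the total goal from Table 9(a); rows: PC-A, PC-B, PC-C, PC-D,
# columns: DM-1 ... DM-10 (the 'Aggregated' column is omitted here).
cwv = np.array([
    [0.429669, 0.543040, 0.439606, 0.129464, 0.091736, 0.645984, 0.620258, 0.465819, 0.062500, 0.183708],
    [0.042796, 0.135989, 0.080010, 0.040752, 0.066449, 0.222832, 0.081912, 0.277140, 0.062500, 0.136885],
    [0.113017, 0.076465, 0.411034, 0.309636, 0.228508, 0.086187, 0.243843, 0.161070, 0.437500, 0.172599],
    [0.414518, 0.244505, 0.069350, 0.520148, 0.613307, 0.044997, 0.053986, 0.095970, 0.437500, 0.506808],
])
dm_vectors = cwv.T                                   # one 4-element CWV per DM

# Pearson correlations between the DMs' CWVs (the kind of matrix behind Figure 5).
corr = np.corrcoef(dm_vectors)

# Cosine similarities between the same vectors (the kind of matrix behind Figure 8).
norms = np.linalg.norm(dm_vectors, axis=1, keepdims=True)
cos = (dm_vectors @ dm_vectors.T) / (norms @ norms.T)

print("correlation  DM-1 vs DM-2:", round(float(corr[0, 1]), 3))
print("cosine sim.  DM-1 vs DM-2:", round(float(cos[0, 1]), 3))

# Observation: on these data the 'Aggregated' column of Table 9(a) agrees, up to
# rounding, with the arithmetic mean of the ten individual weight vectors.
print("column means:", np.round(cwv.mean(axis=1), 6))   # ~ (0.361179, 0.114727, 0.223986, 0.300109)
```

Either matrix can then be rendered as a heat map, thresholded into a network diagram, or fed to a clustering routine to obtain the subgroup structures discussed in the paper.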
Table 10. A SWOT analysis of the proposed science education framework.
Strength: For exploring multi-criteria decision-making issues, many solutions use the Delphi and AHP methods to set the evaluation criteria, build a hierarchical evaluation model, and thereby obtain a numerically assessed, group-based priority for the constructs and for the criteria under each construct. The proposed framework likewise uses Delphi and AHP to obtain such group-based priorities, but it additionally uses several DDDM methods to analyse the opinions of individual DMs and to obtain the similarities and diversities between or among the DMs' opinions. The results of these DDDM analyses help users to understand, completely and deeply, the ideas of an individual DM or of groups of DMs for a multi-criteria decision-making issue.
Weakness: Compared with traditional studies of multi-criteria decision-making issues, the proposed education framework teaches users more methods, so that they can obtain more information with which to identify opinion gaps and implications that may help them to solve their decision-making issue further. Therefore, in addition to understanding the Delphi and AHP methods, users need to spend extra time familiarising themselves with the DDDM analysis methods in the proposed framework, and they will also take more time to analyse the DMs' opinions with these methods; this is the shortcoming of the proposed framework.
Opportunity: With the DDDM analysis, the proposed framework can help users to further understand the similarities and diversities between or among the opinions of an individual DM or groups of DMs, using correlation, cosine similarity, SNA network diagrams, heat maps, decision trees, etc. The results of these analyses may stimulate other views on the DMs' multi-criteria decision-making opinions.
Threat: With the DDDM analysis, the proposed framework might expose the differing opinions of individual DMs or groups of DMs and reveal opinion gaps between or among them. If there is no good bridge for communicating opinions among the individuals or groups of DMs, the presented results might cause enmity among the DMs, which is a hidden worry for an R&D institution that wishes to carry out a large R&D project smoothly.