Article

Optimizing Smart Campus Solutions: An Evidential Reasoning Decision Support Tool

Department of Industrial Engineering, American University of Sharjah, Sharjah P.O. Box 26666, United Arab Emirates
* Author to whom correspondence should be addressed.
Smart Cities 2023, 6(5), 2308-2346; https://doi.org/10.3390/smartcities6050106
Submission received: 6 July 2023 / Revised: 10 August 2023 / Accepted: 22 August 2023 / Published: 1 September 2023

Abstract

Smart technologies have become increasingly prevalent in various industries due to their potential for energy cost reduction, productivity gains, and sustainability. Smart campuses, which are educational institutions that implement smart technologies, have emerged as a specific application of these technologies. However, implementing available smart technologies is often not feasible due to various limitations, such as funding and cultural restrictions. In response, this study develops a mathematical decision-making tool based on the evidential reasoning (ER) approach and implemented in Python. The tool aims to assist universities in prioritizing smart campus solutions tailored to their specific needs. The research combines a comprehensive literature review with insights from stakeholder surveys to identify six principal objectives and four foundational technologies underpinning smart campus solutions. Additionally, six critical success factors and nine functional clusters of smart campus solutions are pinpointed, and evaluated through the ER approach. The developed decision-support tool underwent validation through various statistical tests and was found to be highly reliable, making it a generalized tool for worldwide use with different alternatives and attributes. The proposed tool provides universities with rankings and utilities to determine necessary smart applications based on inputs such as implementation cost, operation cost, maintenance cost, implementation duration, resource availability, and stakeholders’ perceived benefit.

1. Introduction

Smart cities are viewed as a necessary future development in response to global climate and ecological issues. However, not all of the available smart city technologies and frameworks are feasible or optimal [1]. As it would be expensive to replace an entire city’s infrastructure for experimentation purposes, it might be best to test smart applications on a smaller scale, such as a university campus [2]. Given that smart campuses have the potential to become models for future smart cities, as they emulate many aspects of cities within a limited area, but with fewer stakeholders and more control over assets, transforming traditional campuses into smart ones could contribute to the strategic development of the education sector and enhance its competitiveness. As such, universities, which typically host various functions within their campuses, are in a unique position to experiment with cutting-edge technologies, making them excellent testing grounds for different smart solutions [3]. By doing so, campuses can help to improve the socio-ecological aspects of their surrounding areas and contribute to the progress of their communities, regions, and cities [2].
According to [4], the increased accessibility of higher education across many countries and cultures has led to a significant growth in the number of students since the 1990s. Therefore, with the new innovative technologies and the reshaping of several aspects of human lives, educational institutions must develop and adapt to these new technologies and continue to innovate to meet the needs of a diverse student population. According to [5], such challenges present an opportunity for universities to experiment with different technologies to increase students’ comfort and satisfaction. Moreover, most new students were raised in a tech-driven, connected world [6], leading to high expectations for on-demand experiences and opportunities for investing in smart technologies for campus use.
In their work, Dong et al. [2] defined the term ‘smart’ as the ability to learn quickly, make intelligent judgments, and respond promptly to problems. They argue that a system can be considered smart when it can autonomously provide services that meet the dynamic needs of users. It is important to note that a smart campus goes beyond a single smart application like a chatbot. A smart campus employs networked applications to promote collaboration, enhance resource efficiency, conserve resources, and ultimately create a more enjoyable environment for all.
Smart applications utilize recent information and communication technologies to manage smart buildings, territories, and businesses [7]. Within an educational environment, smart applications leverage sensors, databases, and wireless access to provide for users [8]. A smart campus can be deployed through multiple routes based on various factors, but certain major principles need to be addressed to positively impact stakeholders’ experiences when implementing a smart campus [9]. Firstly, it should be intuitive and easy to use, with great design achieved by exploring user roles and experiences. Secondly, it should be modular, adaptive, and flexible to accommodate the ever-changing needs of the campus and its users. Thirdly, it should be intelligent by implementing AI-based solutions to make better predictions, using the data generated by various stakeholders. Lastly, the smart campus should facilitate collaboration with external stakeholders and universities worldwide, while also offering global scalability and positive data-driven experiences [9].
Furthermore, Alnaaj et al. [10] proposed a strategic framework for smart campuses, which includes models of smart campuses and the positive impact of implementing smart applications in areas such as waste and energy management, transportation, and security. However, implementing all of these applications may not be economically or culturally feasible for most campuses. Therefore, a decision-making tool is necessary to make informed decisions based on stakeholders’ interests, including management, students, faculty, and staff. The challenge of deciding which smart applications are suitable for a particular campus is the focus of this paper. While there is extensive research on smart technologies, little attention has been paid to the problem of choice. Not all smart applications are appropriate for every campus due to factors such as location, demographics, culture, cost, and assets. In support of this, [2,11] emphasize the need for a decision-making tool to determine which combination of smart applications is optimal for a given campus. In addition, [8] also argues that the concept of smart campuses lacks a clear definition and is usually discussed only from a technical perspective, overlooking the perceptions of key stakeholders. This study, therefore, aims to identify the success factors of smart campus applications and develop a decision-making tool to aid university management in ranking the most viable applications. The research objectives include defining the hallmarks of a smart campus, understanding strategic success factors, designing and implementing a decision-making tool, and validating the tool’s performance. As such, this paper is set out to develop an understanding of a smart campus and its potential benefits, and propose a decision support tool that aids stakeholders to make a selective decision prior to investing into the smart campus application.

2. Materials and Methods

The purpose of this section is to provide a comprehensive understanding of the current state of the smart campus concepts, definitions, enabling technologies, and applications. The section also identifies the key critical success factors required for transitioning from a traditional campus to a smart campus, as well as the decision-making techniques and multi-criteria decision-making methods that can help stakeholders make strategic decisions for transforming a traditional campus into a smart campus.

2.1. Methodological Steps

This section outlines the methodological steps undertaken to achieve the study’s objective of creating a decision support tool for converting a conventional campus into a smart campus.
Stage I—Literature Review: A comprehensive review of smart campus concepts, definitions, enabling technologies, and applications was conducted. Essential factors for a successful transition to a smart campus were identified, along with multi-criteria decision-making techniques.
Stage II—Data Acquisition: Data collection involved two surveys: one to assess the relative importance of smart campus alternatives among fifty-six stakeholders, and another to assign weights to critical success factors through input from nine experts. Five external experts provided utilities and optimal alternatives for ten decision-making scenarios, which were subsequently compared with those generated by the developed decision support tool.
Stage III—Tool Development and Validation: The development of the ER decision support tool involves creating a model to calculate attribute weights, derived from critical success factors for smart campus development, through a survey of experts in Stage II. The averaged weights from experts’ votes will inform the ER model, programmed in Python to process three-dimensional belief tensors, yielding average utility and ranking of alternatives. Validation will comprise randomly generating and assessing 50 belief tensors through the tool and experts, comparing results via paired t-tests and Receiver Operating Characteristics (ROC) curve analysis. A comprehensive assessment using metrics including accuracy, precision, sensitivity, specificity, negative predicted value, and F1 score will follow.

2.2. Smart Campus Definition and Benefits

Al Naaj et al. [10] define a smart campus as one that offers personalized and environmental services, as well as information services, which necessitate the integration of all of these elements to establish a digital campus. Similarly, Prandi et al. [12] explain that a smart campus is an advanced version of an intelligent environment, utilizing advanced information and communication technologies to enable interaction with space and data. Musa et al. [13] describe a smart campus as a platform that provides efficient technology and infrastructure to improve services that facilitate education, research, and overall student experience, while Malatji [14] defines it as an entity that interacts intelligently with its environment and stakeholders, including students.
However, Ahmed et al. [15] argue that the continuous development of technology renders it impossible to define smart campuses using a single platform or definition. Rather, the definition depends on various criteria and features. Although several definitions have been proposed for smart campuses, the authors note that they often lack the incorporation of end users’ perspectives. Nonetheless, these definitions underscore the importance of a sophisticated infrastructure capable of supporting the operation of a smart campus, which must be defined and identified.
Traditional educational institutions have experienced various benefits by implementing smart technology on their campuses, such as enhanced student learning, improved quality of life, reduced operating costs, increased safety and security, and improved environmental sustainability [2,3,8,16,17,18,19]. As such, some of the benefits reaped from smart campus technologies include:
-
Workflow automation: Smart technology can analyze user preferences and deliver personalized services tailored to their needs, enabling multiple intricate tasks to be executed with minimal effort [1]. This includes quicker check-out of library books and automatic deduction of funds when exiting restaurants for meals consumed, saving valuable time.
-
Safety and security: Smart safety and security systems enable automated, real-time preventive and remedial actions when a threat arises, compared to traditional systems. They enhance student satisfaction and increase the appeal of universities to prospective students and parents [16]. For example, Peking University in China has implemented facial recognition cameras at entry gates, and Beijing Normal University uses voice recognition for dormitory access [16].
-
Teaching and learning: Smart campuses can foster operational resilience through virtual labs, digital ports, and remote learning, even in adverse circumstances [3,16]. American University in Washington DC is using augmented reality and virtual reality to offer virtual campus tours [16]. Hence, immersive AR applications are also effective learning aids for different disciplines, with universities planning to employ them in their curricula [3].
-
Strategic management: Smart technologies use data to recommend improvements and increase the efficiency of systems, reducing operational costs [8]. Platforms with intelligent capabilities use robust analytical tools and quick reporting to examine data related to campus-wide resource consumption, student interests, facilities management, transportation demand, and movement patterns, improving efficiency.
-
Resource conservation: Smart technologies regulate and automate energy use, saving stakeholders energy and money and satisfying rising demands for environmental sustainability [2,18]. They also decrease water consumption, parking, and traffic issues, benefiting financial, environmental, and social aspects [2,19].
Having a clear understanding of the benefits and limitations of smart solutions is crucial for prioritizing their implementation and developing effective evaluation criteria. By identifying key benefits, such as increased efficiency and improved sustainability, decision makers can better allocate resources and plan for successful implementation. Moreover, by identifying potential challenges and limitations, such as high implementation costs and data privacy concerns, stakeholders can proactively address these issues and mitigate their impact on the implementation process.
In addition to understanding the benefits and limitations of smart solutions, it is also essential to identify the enabling technologies that underpin a smart campus. These may include sensors, data analytics platforms, and communication networks, among others. By comprehensively identifying and understanding these technologies, stakeholders can better evaluate their suitability for specific smart solutions and identify opportunities for integration and optimization.

2.3. Enabling Technologies of the Smart Campus

According to Zhang et al. [20], cloud computing, Internet of Things (IoT), virtual reality (VR), augmented reality (AR), and artificial intelligence (AI) are among the key enabling technologies for smart campuses. To better comprehend the state-of-the-art technologies that contribute to the success of smart campuses, a concise overview of these technologies is presented in Table 1.
To fully capitalize on the potential benefits of these technologies, it is crucial for decision makers to have a comprehensive understanding of their advantages. By doing so, they can effectively assess the available options and select applications that align with their university’s objectives and constraints. It is, therefore, essential to identify the distinct categories of smart campus applications, along with their primary features, as these technologies serve as the building blocks of most smart campus applications, which will be elaborated upon in the following section.

2.4. Smart Campus Applications

To implement a smart campus, various technologies like IoT devices, AI, and big data analytics need to be integrated, but budget constraints, infrastructure limitations, and organizational capacity can limit their implementation. Therefore, decision makers need to evaluate available options and select applications that align with the university’s institutional goals and priorities. As such, nine distinct categories of smart campus applications have been identified through a thorough literature review [11,15,26,27,28], including smart learning management systems, smart classrooms, smart campus operations, smart transportation systems, sustainable energy management, waste and water management, smart geographic information systems, and safe learning environments.
Our research on transitioning a traditional campus into a smart campus began with a focus on defining its objectives and underlying technologies, as well as categorizing smart campus solutions into functional families. This initial step was crucial to gaining a comprehensive understanding of the subject matter, allowing us to identify and explore available technologies and their potential benefits for a smart campus. However, a successful transition requires more than just technology; it also involves addressing critical success factors. To identify these factors, we conducted an exploratory literature review, which forms the basis of the next section. By addressing potential strategic limitations, the critical success factors identified in the literature review will provide valuable insights for developing an effective decision support tool that can guide the transition of traditional campuses into smart campuses.

2.5. Critical Success Factors for a Smart Campus

In the context of transitioning a traditional campus into a smart campus, decision makers face a complex set of challenges and trade-offs. To make informed decisions, it is essential to break down the decision problem into its fundamental attributes, which can be measured and evaluated. Therefore, identifying critical success factors is key to developing an effective decision support system that can help decision makers navigate these challenges. Through an extensive literature review, we have identified a set of six attributes that are crucial to consider when selecting alternatives for introducing new technology on a university campus. These attributes include implementation cost, operation cost, maintenance cost, project duration, stakeholders’ benefit, and resource availability. By underpinning these critical success factors, decision makers can make relevant choices that are consistent with the objectives of establishing a smart campus, optimizing the benefits for all stakeholders, and ensuring a sustainable and cost-effective implementation:
-
Implementation cost: Decision makers in a university setting should conduct a cost-benefit analysis when considering the feasibility of new technology. The implementation cost is a critical factor that can determine the success of the technology [29,30,31].
-
Operation cost: Operating costs refer to the expenses related to staffing, electricity, storage rental, and security that are incurred after a system has been implemented [32]. If these costs are excessively high, it can reduce the justification for implementing the technology. Therefore, operating costs play a crucial role in the decision-making process for selecting alternatives when introducing new technology on a university campus [32,33].
-
Maintenance cost: Like the operation cost, maintenance cost [34,35] is a crucial element in any investment analysis and helps determine an asset’s economic life.
-
Project duration: The duration of project implementation significantly influences decision makers’ technology preferences and, therefore, the development of a decision-making tool [30]. Faster implementation times generally increase the attractiveness of an alternative.
-
Stakeholders’ benefit: The primary objective of establishing a smart campus is to optimize the advantages for all stakeholders, including students, staff, the faculty, and the management team. If the benefits for stakeholders do not exceed the system’s costs, then it should not be implemented [36,37]. The expected stakeholder benefit is a vital attribute of the considered decision support system, which may be estimated based on the financial benefits of implementing a system, such as savings or cash inflows. In the case of a smart campus, some stakeholders may derive satisfaction from using the system rather than witnessing any cash inflows or savings.
-
Resource availability: For new technology to be advocated on a university campus, the university must have sufficient funds to invest in its development [33,38,39]. Therefore, resource availability is another critical success factor.
This study has included an extensive literature review aimed at defining a smart campus by identifying its key objectives, technologies, applications, and strategic success factors. Through this review, the study has identified six primary objectives, four key technologies, nine functional groups of applications, and six crucial success factors. These findings provide valuable insights for the development of a decision support tool that can aid stakeholders in making informed decisions regarding the adoption of various smart campus applications. Multi-criteria decision-making techniques can be used to create this tool, and we will introduce the main techniques that can be employed to achieve this goal in the following sections.

2.6. Decision-Making Techniques

Effective decision making is a crucial aspect of management across all industries, as it is often considered the primary function of any management team. The decisions made can have a significant impact on an organization’s growth in various areas, and each decision-making process aims to achieve specific goals that enable the organization to expand and succeed in multiple directions. However, organizations encounter several obstacles in fulfilling their objectives, particularly in areas such as administration, marketing, operations, and finance. As a result, decision-making processes are employed to effectively overcome these challenges and accomplish their goals [40]. To achieve this, [41] argues that the process of making a decision can be divided into six distinct steps, as demonstrated in Figure 1.
Different decision-making methods can be categorized based on the techniques they employ, such as multiple-criteria decision-making (MCDM), mathematical programming (MP), artificial intelligence (AI), and theory-based methods [42]. MCDM, for instance, is a multi-step process that involves structuring and executing a formal decision-making process, assessing multiple alternatives against various criteria, and providing a recommendation based on the best-fit alternatives, which have been evaluated under multiple criteria [43]. MP methods, as described in [44], aim to optimize objectives while accounting for various constraints and boundaries that stakeholders must consider when making decisions. Examples of MP methods include goal programming, linear programming, and stochastic programming. AI, on the other hand, refers to a machine’s ability to learn from past experiences, adapt to new inputs, and perform tasks that are similar to those executed by humans, as defined in [45]. AI systems can either assist or replace human decision makers.
As such, MCDM can handle conflicting quantitative and qualitative criteria, enabling decision makers to select the best-fit alternatives from a set of options in uncertain and risky situations [46]. This study therefore considers MCDM to be the most suitable approach for making strategic smart campus decisions. The main MCDM approaches are presented in the next section.

2.7. Multi-Criteria Decision-Making Method

The process of Multicriteria Decision-Making (MCDM) is complex and dynamic, consisting of two levels: managerial and engineering. At the managerial level, objectives are identified, and the most advantageous option, deemed “optimal”, is selected. This level emphasizes the multicriteria aspect of decision-making, and the decision makers, usually public officials, have the power to accept or reject the proposed solution from the engineering level [47]. Belton and Stewart in [48] describe MCDM as an “umbrella term to describe a collection of formal approaches which seek to take explicit account of multiple criteria in helping individuals or groups of individuals to explore decisions that matter”. In other words, MCDM is a valuable approach when there are various criteria that have conflicting priorities and are valued differently by stakeholders and decision makers. It becomes easier to assess multiple courses of action based on these criteria, which are crucial aspects of the decision-making process that involve human judgement and preferences. As a result of this evident benefit, MCDM has become increasingly popular and widely used in recent decades.
Furthermore, there is a vast range of MCDM techniques in the literature, each with its own strengths and weaknesses. One popular approach is the outranking synthesis method, which connects alternatives based on the decision maker’s preferences and helps to identify the best solution. PROMETHEE and ELECTRE are two examples of this method [49]. Another technique is the interactive local judgement (ILJ) approach, which involves a cycle of computation and discussion to produce successive solutions and gather additional information about the decision maker’s preferences [50,51]. Other approaches include Multi-Attribute Utility Theory (MAUT), simple multi-attribute rating technique (SMART), and analytic hierarchy process (AHP). Each of these methods offers unique advantages and disadvantages, and the choice of which one to use depends on the specific context of the decision-making problem. The following paragraph explores the strengths and weaknesses of each of the main MCDM methods [18,49,52,53,54,55]:
ELECTRE: An iterative outranking method based on concordance analysis that can consider uncertainty and vagueness. However, it is not easily explainable, and the strengths and weaknesses of the alternatives are not clearly identified.
PROMETHEE: An outranking method that has several versions for partial ranking, complete ranking, and interval-based ranking. It is easy to use, but it does not provide a clear method to assign weights to the different criteria.
Analytic hierarchy process (AHP): A decision-making technique that makes pairwise comparisons among various options and factors. It is easy to apply and is scalable, but there is a risk of rank reversal and inconsistencies between judgment and ranking criteria.
Technique for order of preference by similarity to an ideal solution (TOPSIS): A method that ranks alternatives based on their distance to an ideal solution in a multidimensional computing space. It is easy to use and has a simple process, but it relies on Euclidean distance to obtain the solution, which ignores the correlation of the attributes.
Multi-Attribute Utility Theory (MAUT): A method that assigns a level of usefulness or value to every potential result and then identifies the option with the greatest utility. It can take uncertainty into account and incorporate the preferences of the decision maker, but it is data-intensive.
Fuzzy set theory: Useful for dealing with imprecise and uncertain data, but difficult to develop and requires multiple simulations before use.
Case-based Reasoning (CBR): Retrieves cases similar to the current decision-making problem from an existing database of cases and proposes a solution based on previous solutions. It is not data-intensive and requires little maintenance, but is sensitive to inconsistent data.
Data envelopment analysis (DEA): Measures the relative efficiencies of alternatives against each other and can handle multiple inputs and outputs. It cannot deal with imprecise data.
Simple multi-attribute rating technique (SMART): One of the most basic types of MAUT that is simple and requires less effort by decision-makers. However, it may not be convenient for complex real-life decision-making problems, where criteria affecting the decision-making process are usually interrelated.
Goal programming: Can solve a decision-making problem with an infinite set of alternatives but needs to be used in combination with other MCDM methods to determine the weight coefficients.
Simple additive weighting (SAW): Establishes a value function based on a simple addition of scores representing the goal achievement under each criterion, multiplied by particular weights. It is easy to understand and apply, but it may not be suitable for complex decision-making problems.
Evidential reasoning (ER): The evidential reasoning (ER) approach is a decision support framework that uses a belief structure to address decision-making problems involving qualitative and quantitative criteria. ER combines the utility theory, probability theory, and theory of evidence to define decision problems and aggregate evidence. The approach comprises two components: the knowledge base and the inference engine. The knowledge base includes domain knowledge, decision parameters, additional factors, and user beliefs. The inference engine uses the Dempster–Shafer theorem to define probability and evaluation grades and combine beliefs. ER handles both qualitative and quantitative attributes, avoids distortion of data during transformation, and handles stochastic and incomplete attributes. ER provides accurate and precise data while capturing various types of uncertainties.
When it comes to developing decision-making tools for optimizing smart campus applications, ER stands out as the best approach due to its unique ability to handle both qualitative and quantitative attributes without introducing any data distortion. Furthermore, ER is highly effective at handling uncertain and incomplete attributes in a stochastic environment, making it an ideal method for strategic decision-making. With its superior accuracy and precision, ER can capture various types of uncertainties that may arise.
The next section, therefore, delves into the theory underlying the approach in greater detail, elucidating its mathematical formulation, including parameters such as aggregated probability mass, degrees of belief, and utility. By presenting a more in-depth analysis of these key concepts, this section will provide a comprehensive understanding of the approach and its underlying principles, which will be used by this study to develop the decision support tool for a smart campus.

2.8. Evidential Reasoning Approach

The ER approach combines input information and infers evidence for an alternative using the Dempster–Shafer (DS) hypothesis [56]. The approach deals with a multi-criteria decision problem with $L$ basic attributes under a general attribute $y$. Since the utility of the general attribute $y$ is challenging to measure directly, several more operational indicators are measured to estimate it, which are the basic attributes. Thus, a general attribute can be divided into basic attributes. Table 2 presents the main parameters used in the ER approach.
Table 2. Main parameters used in the ER approach [56,57].
- Basic attributes, $A$: $A = \{a_1, a_2, \ldots, a_i, \ldots, a_L\}$, where $L$ is the number of basic attributes.
- Evaluation grades of an alternative, $E$: $E = \{e_1, e_2, \ldots, e_n, \ldots, e_N\}$, where $N$ is the number of evaluation grades.
- User beliefs (degrees of belief), $\beta_{i,n}$: $\beta_{i,n}$ denotes the degree of belief for the basic attribute $a_i$ at the evaluation grade $e_n$ (the degree to which the user is confident about a particular assertion), with $0 \le \sum_{n=1}^{N} \beta_{i,n} \le 1 \ \forall i$. An assessment of an attribute is complete when $\sum_{n=1}^{N} \beta_{i,n} = 1$ and incomplete when $\sum_{n=1}^{N} \beta_{i,n} < 1$.
- Attribute weights, $\omega_i$: Each attribute needs to be assigned a weight, with $0 \le \omega_i \le 1 \ \forall i$ and $\sum_{i=1}^{L} \omega_i = 1$.
- Utilities, $u_n$: Determine the desirability of an alternative, where the highest possible utility value of 1 is assigned to the most desirable evaluation grade $e_n$.
- Probability masses, $m_{i,n}$: $m_{i,n} = \omega_i \cdot \beta_{i,n}$ is the probability mass of a basic attribute $a_i$ at evaluation grade $e_n$; it represents how well $a_i$ supports the claim that the assessment of $y$ is $e_n$. In case of uncertainty, the remaining probability mass is $m_i^* = 1 - \sum_{n=1}^{N} m_{i,n} = 1 - \omega_i \sum_{n=1}^{N} \beta_{i,n}$. It can be decomposed into two components, $\underline{m}_i$ and $\tilde{m}_i$: $\underline{m}_i = 1 - \omega_i$ is the remaining probability mass unassigned to attribute $a_i$ due to incomplete weight, while $\tilde{m}_i = \omega_i \left(1 - \sum_{n=1}^{N} \beta_{i,n}\right)$ is the remaining probability mass unassigned to attribute $a_i$ due to incomplete degrees of belief, so that $m_i^* = \underline{m}_i + \tilde{m}_i$.
To evaluate an alternative’s overall performance on the general attribute $y$ in the evidential reasoning approach, it is necessary to aggregate the data at the basic attribute level. This aggregation is carried out through a recursive algorithm, which performs $L-1$ iterations. At each recursion, the algorithm computes several measures that are aggregated across the basic attributes $a_1$ to $a_j$, where $j = 2, \ldots, L$:
- $M_{j,n}$ denotes the probability mass aggregated at evaluation grade $e_n$. It is calculated as:
  $M_{j,n} = K_j \left( M_{j-1,n} \cdot m_{j,n} + M_{j-1}^* \cdot m_{j,n} + M_{j-1,n} \cdot m_j^* \right)$
- $K_j$ is a normalization factor that ensures the aggregated probability masses remain between 0 and 1 in each recursion of the ER algorithm. It is calculated as:
  $K_j = \left[ 1 - \sum_{t=1}^{N} \sum_{\substack{k=1 \\ k \neq t}}^{N} M_{j-1,t} \cdot m_{j,k} \right]^{-1}, \quad j = 2, \ldots, L$
- $M_j^*$ is the unassigned probability mass aggregated over all evaluation grades. It is the sum of $\underline{M}_j$ and $\tilde{M}_j$:
  $M_j^* = \underline{M}_j + \tilde{M}_j$
- $\underline{M}_j$ is the unassigned probability mass due to incomplete weight, aggregated over all evaluation grades. It is calculated as:
  $\underline{M}_j = K_j \, \underline{M}_{j-1} \cdot \underline{m}_j, \quad j = 2, \ldots, L$
- $\tilde{M}_j$ is the unassigned probability mass due to incomplete degrees of belief, aggregated across the basic attributes $a_1$ to $a_j$ and all evaluation grades. It is calculated as:
  $\tilde{M}_j = K_j \left( \tilde{M}_{j-1} \cdot \tilde{m}_j + \underline{M}_{j-1} \cdot \tilde{m}_j + \tilde{M}_{j-1} \cdot \underline{m}_j \right), \quad j = 2, \ldots, L$
with the initial values $M_{1,n} = m_{1,n}$, $\underline{M}_1 = \underline{m}_1$, and $\tilde{M}_1 = \tilde{m}_1$.
Once the aggregated probability masses $M_{L,n}$, $\underline{M}_L$, and $\tilde{M}_L$ have been calculated through the recursive ER algorithm, the aggregated degrees of belief can be calculated, where $B_n$ is the aggregated degree of belief for the general attribute $y$ assessed to the evaluation grade $e_n$:
$B_n = \dfrac{M_{L,n}}{1 - \underline{M}_L} \quad \forall n$
whereas the unassigned, aggregated degree of belief for the general attribute $y$ is $B^*$:
$B^* = 1 - \sum_{n=1}^{N} B_n = \dfrac{\tilde{M}_L}{1 - \underline{M}_L}$
Finally, the aggregated utility for $y$ is denoted by $U$. Under a complete assessment with no assessment uncertainty, the aggregated utility is calculated as:
$U = \sum_{n=1}^{N} B_n u_n$
However, if assessment uncertainty exists, i.e., there exists an unassigned belief, then a utility interval $[U_{min}, U_{max}]$ is calculated instead, where $U_{min}$ and $U_{max}$ are the minimum and maximum utilities of $y$ for the considered alternative, respectively. The endpoints of the utility interval are defined as:
$U_{min} = \sum_{n=1}^{N} B_n u_n + B^* u_N$
$U_{max} = \sum_{n=1}^{N} B_n u_n + B^* u_1$
Thus, the average aggregated utility, $U_{avg}$, is the midpoint of the utility interval:
$U_{avg} = \dfrac{U_{max} + U_{min}}{2}$
The model evaluates different alternatives by assigning utility values to each alternative based on various criteria. These utility values are then used to calculate an average aggregated utility value for each alternative, which is used to determine the optimal alternative.
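To make the recursive aggregation concrete, the following minimal Python sketch implements the equations above for a single alternative. It is an illustrative implementation, not the study’s actual code; the function name er_aggregate and the example weights, beliefs, and grade utilities are assumptions introduced here for demonstration.

```python
import numpy as np

def er_aggregate(weights, beliefs, utilities):
    """Recursive ER aggregation for one alternative (illustrative sketch).

    weights   : (L,) attribute weights, summing to 1
    beliefs   : (L, N) degrees of belief; a row may sum to < 1 (incomplete assessment)
    utilities : (N,) utility of each evaluation grade (most desirable grade = 1)
    Returns (B, B_star, U_min, U_max, U_avg).
    """
    L, N = beliefs.shape
    m = weights[:, None] * beliefs                      # m_{i,n} = w_i * beta_{i,n}
    m_under = 1.0 - weights                             # unassigned mass due to incomplete weight
    m_tilde = weights * (1.0 - beliefs.sum(axis=1))     # unassigned mass due to incomplete belief

    # Initialise the aggregation with the first attribute
    M, M_under, M_tilde = m[0].copy(), m_under[0], m_tilde[0]

    # Recursively combine attributes 2..L
    for j in range(1, L):
        K = 1.0 / (1.0 - sum(M[t] * m[j, k]
                             for t in range(N) for k in range(N) if k != t))
        M_star = M_under + M_tilde                      # unassigned mass aggregated so far
        M_new = K * (M * m[j] + M_star * m[j] + M * (m_under[j] + m_tilde[j]))
        M_tilde = K * (M_tilde * m_tilde[j] + M_under * m_tilde[j] + M_tilde * m_under[j])
        M_under = K * (M_under * m_under[j])
        M = M_new

    # Aggregated degrees of belief and utility interval
    B = M / (1.0 - M_under)
    B_star = M_tilde / (1.0 - M_under)
    U_min = B @ utilities + B_star * utilities.min()    # unassigned belief placed at the worst grade
    U_max = B @ utilities + B_star * utilities.max()    # unassigned belief placed at the best grade
    return B, B_star, U_min, U_max, (U_min + U_max) / 2.0

# Example: one alternative assessed on 3 attributes over 3 grades (poor, average, good)
w = np.array([0.5, 0.3, 0.2])
beliefs = np.array([[0.1, 0.3, 0.6],
                    [0.0, 0.5, 0.4],    # incomplete assessment (row sums to 0.9)
                    [0.2, 0.2, 0.6]])
u = np.array([0.0, 0.5, 1.0])
print(er_aggregate(w, beliefs, u))
```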
The ER approach serves as the backbone of our decision support tool. It involves multiple stages:
Attribute weight determination: A panel of experts was engaged, comprising faculty, staff, and operational managers, to establish the relative importance of attributes. Utilizing the Nominal Group Technique, experts provided their final voted weights, which were then used as input for the ER model.
Model implementation and programming: The ER model, driven by the established attribute weights, was programmed in Python. This model processes belief tensors, which are three-dimensional matrices representing degrees of belief at the intersection of alternative, attribute, and evaluation grade. Python coding ensured efficient computation of average utility for each alternative and its subsequent ranking.
Validation and reliability: The model’s validity was rigorously tested. A paired t-test was utilized to compare the model-generated results with expert-provided results, while Receiver Operating Characteristics (ROC) curve analysis was conducted to evaluate the model’s performance. Key metrics, including accuracy, precision, sensitivity, specificity, negative predicted value, and F1 score, were utilized for a comprehensive evaluation.

3. Data Collection, Implementation, Results and Validation

This section shares the data collection process and analysis conducted to develop the proposed evidential reasoning decision support tool for a smart campus. As such, two of the key components of the proposed decision tool are validated; namely, alternatives and attribute weights, using two different surveys. Attribute weights are then tested through weight perturbations. To ensure the model’s accuracy and reliability, various statistical tests are employed. All of the relevant ethical approval considerations for data collection have been followed, and the Institutional Research Board consent was granted prior to the data collection process.

3.1. Validation of Smart Campus Alternative Applications

To validate nine functional families of smart campus applications detailed in Section 2.4, a survey was conducted targeting 56 stakeholders from a renowned higher education institution, including students, alumni, and staff who were not necessarily smart campus experts. The participants were asked to rate the importance of the smart campus applications on a scale from 0 to 5. The results of the average scores and analysis of variance data are shown in Table 3 below.
In addition, the stakeholders’ opinion data were subjected to a single-factor, two-way ANOVA test to determine if the proposed smart campus alternatives represented distinct categories. The ANOVA test results presented in Table 4 show a statistically significant difference in stakeholder preference among the nine alternatives, with a p-value of 0.0004.
In conclusion, the survey conducted among 56 stakeholders demonstrated positive ratings for all nine functional families of smart campus applications. The stakeholders’ opinions exhibited significant variation among the different alternatives, as supported by the statistically significant results of the single-factor, two-way ANOVA test (p-value = 0.0004). This analysis confirmed that the proposed smart campus alternatives represent distinct categories, each with its own level of stakeholder preference. The obtained findings not only validate the selection of the nine smart campus applications but also provide valuable insights for the implementation of the ER decision tool. With the endorsement of these alternatives by the stakeholders, the ER decision tool can now be developed using these nine categories as input.

3.2. Attribute Weights

The evidential reasoning (ER) approach is a decision-making framework that combines input information to infer evidence for a particular alternative using the Dempster–Shafer (DS) hypothesis [57]. It is particularly useful for addressing multi-criteria decision problems that involve L basic attributes under a general attribute y. Directly measuring the utility of the general attribute can be difficult, so the approach relies on several more operational indicators, which are the basic attributes, to estimate it. This allows the general attribute to be divided into more manageable basic attributes. To depict the relationship between the general and basic attributes, an Architectural Theory Diagram (ATD) is used, which consists of a two-layer hierarchy (as shown in Figure 2). In this study, the basic attributes used in the decision support tool were derived from the critical success factors identified in the literature, including implementation cost, operation cost, maintenance cost, project duration, stakeholder benefits, and resource availability.
When faced with an MCDM problem, it is common to assign weights to decision attributes based on their relative importance. However, since historical data to derive weights for decision attributes may not always be available, a weight allocation logic must be employed [58]. In this study, a consensus-based approach was used to collect weights for the decision-making framework. Two popular consensus methods are the Delphi and the Nominal Group Technique (NGT) [59]. For this research, the NGT method [60,61] was proposed to allocate weights to decision attributes. Nine domain experts (Group A experts) assigned weights to decision parameters, discussed and justified their assigned weights for each attribute, and revised them if necessary. The facilitator then determined the final weight of each attribute through a voting process. The attribute weights collected through the NGT are summarized in Table 5 for easy reference, with the final weights obtained by averaging the attribute weights.
The attributes considered in the ER approach include implementation cost, maintenance cost, operation cost, project duration, stakeholder benefits, and resource availability. The attribute weights collected through the consensus-based approach using the NGT are crucial for the decision-making tool. These weights represent the relative importance of each attribute in the ER approach and play a vital role in evaluating smart campus alternative applications. By assigning weights to these attributes, the decision tool can effectively prioritize and assess the smart campus alternative applications based on their performance in these key areas.
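As a simple illustration of this averaging step, a minimal Python sketch is shown below; the vote values are hypothetical and do not reproduce the weights reported in Table 5.

```python
import numpy as np

# Hypothetical NGT votes: one row per expert, one column per attribute
# (implementation cost, operation cost, maintenance cost, project duration,
#  stakeholders' benefit, resource availability)
votes = np.array([
    [0.25, 0.15, 0.10, 0.10, 0.25, 0.15],
    [0.20, 0.20, 0.10, 0.10, 0.30, 0.10],
    [0.30, 0.10, 0.15, 0.05, 0.25, 0.15],
])

final_weights = votes.mean(axis=0)
final_weights /= final_weights.sum()   # enforce sum(w_i) = 1, as required by the ER model
print(final_weights)
```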

3.3. Decision-Making Scenarios Generation

To validate the decision tool, belief tensors were randomly generated and given to five decision-making experts (referred to as Group B) who had more than 7 years of experience in decision-making tasks. Each expert was provided with 10 decision-making scenarios represented by 10 belief tensors. Each tensor was three-dimensional, consisting of 162 degrees of belief, since each run of the model requires a degree of belief to be reported at the intersection of the six attributes in Table 5, the nine smart campus alternative categories in Section 2.4, and the three evaluation grades of the attributes (Figure 2), listed in Table 6 below.
A belief tensor was presented using a spreadsheet file (see Table 7), where the user provides a degree of belief for each of the six attributes and nine alternatives against three evaluation grades on a scale of 0 to 1, representing the confidence level from 0 to 100%. To avoid overwhelming the experts, all 1620 elements were randomly generated in advance and provided to the participants, who were asked to convert the given beliefs to utilities using their judgment and experience. This resulted in 50 sets of inputs and expert-provided outputs.
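A minimal sketch of how such random belief tensors could be generated is shown below; the sampling scheme (uniform draws scaled by a random completeness factor) is an assumption, as the exact random-generation procedure is not specified in the study.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

N_ALTERNATIVES, N_ATTRIBUTES, N_GRADES = 9, 6, 3    # 9 x 6 x 3 = 162 degrees of belief per tensor

def random_belief_tensor():
    """One random scenario: each (alternative, attribute) slice holds degrees of
    belief over the three grades and sums to at most 1 (incomplete assessments allowed)."""
    raw = rng.random((N_ALTERNATIVES, N_ATTRIBUTES, N_GRADES))
    raw /= raw.sum(axis=2, keepdims=True)                                # normalise each slice to 1
    completeness = rng.uniform(0.7, 1.0, size=(N_ALTERNATIVES, N_ATTRIBUTES, 1))
    return raw * completeness                                            # row sums now in [0.7, 1.0]

scenarios = [random_belief_tensor() for _ in range(10)]                  # 10 scenarios per expert
```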
Corresponding to each belief tensor, each expert was also asked to select the best alternative, typically the ones which were assigned the maximum utility. The utilities provided by the experts and the corresponding optimal alternatives chosen by the experts are reported in Appendix A, Table A1.
In order to validate the performance of the developed decision-making support tool, the utilities and optimal choices determined by the decision-making experts for the 50 randomly generated scenarios (Table 8) will be compared to those obtained by the decision tool. This comparison will be conducted in Section 3.7. By comparing the results from the decision-making experts with the outcomes produced by the decision tool, the validation process aims to assess the tool’s effectiveness and accuracy in determining utilities and optimal choices across various scenarios. This analysis will provide valuable insights into the tool’s performance, highlighting its strengths and areas for improvement. Through this validation process, stakeholders can gain confidence in the reliability of the decision-making support tool and its ability to assist in making informed choices.

3.4. Decision Support System Implementation

To implement the decision support tool effectively, two main components need to be developed: the knowledge capture matrix and the knowledge manipulation algorithms. The knowledge capture matrix is constructed based on findings from the literature review (nine different smart application alternatives), while the six attribute weights are provided by domain experts (Section 3.2). The knowledge manipulation, or aggregation engine, is developed using the Dempster–Shafer (DS) algorithm. The decision support tool is built using Python version 3.9, with the PyCharm Community Edition 2022.3.2 serving as the Integrated Development Environment (IDE). The decision support tool’s architecture is depicted in Figure 3, showcasing three structural layers [62]: the presentation, application, and data processing layers. The presentation layer acts as the user interface, allowing users to interact with the decision support tool. The application layer handles calculations and facilitates communication between different layers. The data processing layer contains the necessary knowledge resources. The decision support tool utilizes the inference engine based on the DS theory. This engine recursively combines probability masses and beliefs of basic attributes to generate utilities for decision alternatives. The evidential reasoning (ER) approach is used to normalize the combined probability masses.
The logic of the inference engine is illustrated in the flowchart presented in Figure 4.
The decision tool relies on two essential inputs to function effectively: a vector of weight attributes, as developed in Section 3.2, and a beliefs tensor representing the user’s assessment of the nine smart campus alternative applications across the six attributes. By combining these inputs, the model is able to calculate the utility of each alternative, providing valuable insights for decision making. In the given example (Figure 5), the smart GIS (Geographical Information Systems) application for a smart campus emerges as the alternative with the highest utility value. It is closely followed by smart classroom applications, smart administration, and smart campus operations. These rankings are determined based on the calculated utility values, allowing stakeholders to identify the most favorable alternatives within the context of the assessed attributes.
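To illustrate how the two inputs are combined, the sketch below ranks the nine alternatives by their average aggregated utility. It assumes the er_aggregate function from the sketch in Section 2.8; the weight vector and grade utilities shown are placeholders, not the values elicited in this study.

```python
import numpy as np
# Assumes er_aggregate(...) as defined in the sketch in Section 2.8.

GRADE_UTILITIES = np.array([0.0, 0.5, 1.0])                    # poor, average, good
WEIGHTS = np.array([0.20, 0.15, 0.15, 0.10, 0.25, 0.15])       # placeholder attribute weights

def rank_alternatives(belief_tensor, weights=WEIGHTS, utilities=GRADE_UTILITIES):
    """Compute U_avg for each alternative in a (9, 6, 3) belief tensor and rank them."""
    u_avg = [er_aggregate(weights, alt_beliefs, utilities)[-1]  # last return value is U_avg
             for alt_beliefs in belief_tensor]
    order = np.argsort(u_avg)[::-1]                             # descending utility
    return [(f"A{i + 1}", round(float(u_avg[i]), 3)) for i in order]
```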
The decision tool’s ability to generate utility values facilitates a comprehensive evaluation of the smart campus alternative applications, aiding stakeholders in making informed decisions regarding their implementation. By considering both the weighted attributes and the user’s assessment, the tool provides a structured approach for assessing and comparing the potential benefits of different alternatives, enabling the identification of the most suitable options for a smart campus environment.
In the next section, the decision tool’s inputs and outputs are validated and its performance is assessed.

3.5. Validation of Attribute Weights

The attribute weights used in this study were collected through the NGT (Nominal Group Technique). To validate the attribute weights (determined in Section 3.2), the model’s sensitivity to weight perturbations was tested. Five belief tensors were chosen, which produced outputs identical to the expert-recommended alternatives. For each attribute, the weights were changed by 25% and 50%, resulting in four new sets of weights per attribute (as shown in Figure 6). The model was then run 24 times for each of the five belief tensors, resulting in a total of 120 runs. After each run, the optimal alternative and its utility were recorded and compared against expert-provided utilities. Additionally, the decision accuracy of the decision tool was compared to truth data, and confusion matrices were developed [62,63]. The decision tool performance was reported in terms of accuracy, sensitivity, and specificity for each set of weights. Table A2 of Appendix A summarizes how the attribute weights affect system performance, which can be more clearly visualized using the bar chart in Figure 6.
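A minimal sketch of this perturbation scheme follows; renormalizing the perturbed vector so the weights still sum to 1 is an assumption, since the adjustment of the remaining weights is not specified in the study.

```python
import numpy as np

def perturb_weights(weights, attribute_index, factor):
    """Scale one attribute's weight by (1 + factor), e.g. factor = +/-0.25 or +/-0.50,
    then renormalise so the weight vector still sums to 1 (assumed adjustment)."""
    w = np.asarray(weights, dtype=float).copy()
    w[attribute_index] *= (1.0 + factor)
    return w / w.sum()

base = np.array([0.20, 0.15, 0.15, 0.10, 0.25, 0.15])          # placeholder expert weights
perturbed_sets = [perturb_weights(base, i, f)
                  for i in range(len(base))
                  for f in (-0.50, -0.25, 0.25, 0.50)]          # 4 new weight sets per attribute
# 6 attributes x 4 perturbations = 24 model runs per belief tensor
```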
To compare the decision tool’s performance against expert-provided utilities, only the utilities of the optimal alternatives (shown in Table A3, Appendix A) were used. Furthermore, a comparison of the decision tool’s decision accuracy against the truth data was performed and reported in Table A4, Appendix A. Finally, Table A5 of Appendix A shows how the weights of attributes affect the decision tool’s performance. Figure 6 illustrates that the model’s accuracy varies with changes in attribute weights, thereby validating the weights collected through the ‘Nominal Group Technique.’
When expert-allocated weights are used, the model achieves 100% accuracy. Once the validated alternatives and weights were obtained, they were programmed into the tool, which was then run through 50 randomly generated belief tensors in the validation dataset. In the following section, various outputs for a single tool run are presented to demonstrate the decision tool’s performance.

3.6. Decision Tool Output

The main outputs of the proposed decision tool are a vector of the average utilities, denoted as U a v g , for each alternative. Additionally, the tool produces an optimally ranked list of the smart campus alternatives, in descending order of U a v g . If the end user wishes to receive a single recommendation, it can be obtained from the top alternative in the ranked list. The average utility of each alternative is demonstrated in the PyCharm console, as shown in Figure 7.
To make the results more accessible and understandable, the decision tool provides visual representations of the outputs using bar charts. Figure 8 presents a bar chart that displays the utilities of different smart campus alternatives. Among these alternatives, “A8” is recommended as the top choice for establishing the smart campus because it has the highest utility value. Additionally, the decision tool offers graphical reports of certain intermediary variables, which can be valuable for expert users who want to understand the results of the decision tool. One such variable is Bn, which represents the aggregated degree of belief by evaluation grade for an alternative.
Figure 9 illustrates a bar chart showing the Bn values for the optimal alternative. In this example, the optimal alternative received 77% of assessments as “good”, 14.9% as “average”, and only 3.5% as “poor”. This breakdown of assessments provides further insights into the decision-making process of the model, allowing users to better understand and interpret the outcomes. By presenting the results through visual representations, the decision tool enhances clarity and facilitates a more intuitive understanding of the outputs. These visualizations enable users to easily identify the recommended alternative and gain deeper insights into the underlying assessment and belief aggregation processes.

3.7. Decision Tool Validity

The decision tool was validated by comparing the utilities and optimal alternatives generated by the decision tool (Table A6) to the utilities and smart campus optimal alternatives collected from experts in Table 6, using 50 randomly generated belief tensors. Furthermore, for each belief tensor, the assigned expert was asked to select the best alternative with the highest utility.

3.7.1. Raw Data Summary

The decision tool was validated using a dataset of 50 truth data points. The decision-tool-generated utilities and optimal alternatives were compared to their expert-provided counterparts in Table A7 of Appendix A. A scatter chart was used to plot the decision-tool-generated utilities and the expert-provided utilities of the optimal alternatives in Figure 10, to visualize the degree of alignment between the decision tool’s results and the experts’ opinions.
Figure 10 shows that the line for expert-provided utilities largely overlaps with the one for decision-tool-generated utilities for the 50 test runs. This suggests that the proposed decision support correctly selected almost all of the optimal alternatives for the given 50 cases.
To rigorously validate the model output, a paired t-test will be conducted to determine if the two distributions (expert-provided vs. decision-tool-generated utilities) significantly differ from each other. However, before applying this statistical test, a normality test will also be conducted to ensure that both distributions are normally distributed.

3.7.2. Normality Test

The two distributions have been plotted separately in histograms in Figure 11a,b.
To determine if the distributions are bell-shaped, a numerical approach to the normality test is recommended, as it may not be apparent from the histograms (Figure 11a,b). IBM SPSS Statistics version 22 was used to conduct the normality test on both the decision-tool-generated and expert-provided utilities, with 50 tests per 50 cases, as shown in Table 9. There were no missing data for this test, as indicated in Table 8. The skewness values for the decision-tool-generated and expert-provided utilities were reported to be 0.279 and 0.258, respectively, both of which fall between −0.5 and 0.5. These values indicate that both datasets are reasonably symmetrical. The kurtosis values were found to be 0.561 and 0.564. A kurtosis value of less than 3 suggests that the tails of the distribution are thinner than in the normal distribution and that the peak of the considered distribution is lower, indicating that the data are light-tailed or lack outliers [64].
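The same checks can be reproduced outside SPSS; the sketch below computes skewness, kurtosis, and a Shapiro–Wilk test with SciPy on placeholder data (the study’s actual utility values are listed in Appendix A).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
tool_utilities = rng.normal(0.70, 0.05, 50)                      # placeholder tool-generated utilities
expert_utilities = tool_utilities + rng.normal(0.0, 0.02, 50)    # placeholder expert-provided utilities

for name, data in [("tool", tool_utilities), ("expert", expert_utilities)]:
    print(name,
          "skewness:", round(float(stats.skew(data)), 3),
          "kurtosis:", round(float(stats.kurtosis(data, fisher=False)), 3),  # Pearson's definition; normal = 3
          "Shapiro-Wilk p:", round(float(stats.shapiro(data).pvalue), 3))
```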

3.7.3. Paired t-Test

Since the sample data passed the normality test, a parametric test could be used to validate the decision tool. Therefore, a paired t-test was performed to validate the decision-tool-generated results against the truth data provided by the experts. The results of the paired two-sample t-test are presented in Table 10. The paired t-test for means yielded a p-value of 0.597, which is greater than the 0.05 significance level. Thus, it can be concluded that there is no statistically significant difference between the decision-tool-generated and expert-provided utilities. It can be inferred that the utilities generated by the decision tool are very similar to the utilities provided by the experts. In the following sections, the performance of the decision tool is rigorously assessed to determine how closely it can emulate a decision-making expert.
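For reference, an equivalent paired t-test can be run in Python with SciPy, as in the sketch below; the utility pairs shown are placeholders, while the study’s 50 actual pairs are reported in Appendix A.

```python
import numpy as np
from scipy import stats

tool_utilities = np.array([0.72, 0.65, 0.81, 0.58, 0.69])      # placeholder pairs
expert_utilities = np.array([0.70, 0.66, 0.80, 0.60, 0.68])

t_stat, p_value = stats.ttest_rel(tool_utilities, expert_utilities)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")                  # p > 0.05 -> no significant difference
```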

3.8. Decision Tool Performance Assessment

The decision tool’s performance has been assessed with the help of three powerful instruments: confusion matrices, the area under the ROC curve, and six other performance metrics.

3.8.1. Confusion Matrices

A confusion matrix is a table that is used to evaluate the performance of a classification model. It compares the predicted class of each sample to its actual class and categorizes them into four different measures: True Positive (TP), True Negative (TN), False Positive (FP), and False Negative (FN) (Table 11).
In the context of Table 12, the confusion matrix was used to validate the performance of the proposed tool. It showcases the tool’s predictive capabilities across multiple alternatives (A1–A9) and how they align with expert-provided actual alternatives. This step highlights the tool’s ability to make accurate predictions, forming the foundation for further analysis.
Table 13 furthers our understanding by offering a detailed breakdown of True Positives (TP), False Positives (FP), True Negatives (TN), and False Negatives (FN) for each individual alternative. This breakdown not only reveals the tool’s accuracy for each alternative but also uncovers potential areas for improvement, aiding in refining the decision support model.
Table 14 shows the pooled confusion matrix for decision tool validation, where the actual and predicted values are compared across all of the alternatives. The pooled confusion matrix shows that the decision tool predicted 47 True Positives, 3 False Positives, 3 False Negatives, and 397 True Negatives. The pooled confusion matrix allows for an overall evaluation of the decision tool’s performance, showing that it correctly predicted most alternatives, with a low number of false positives and false negatives.
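The pooling step can be illustrated with a short sketch: a multi-class confusion matrix is built from the actual and predicted alternatives, and the per-class counts are collapsed into pooled TP, FP, FN, and TN figures. The labels below are illustrative; the study’s 50 actual/predicted pairs are in Appendix A.

```python
import numpy as np
from sklearn.metrics import confusion_matrix

actual    = ["A8", "A3", "A1", "A8", "A5", "A2"]               # illustrative expert choices
predicted = ["A8", "A3", "A2", "A8", "A5", "A2"]               # illustrative tool choices
labels = [f"A{i}" for i in range(1, 10)]

cm = confusion_matrix(actual, predicted, labels=labels)        # 9 x 9 multi-class matrix (cf. Table 12)

n_classes, n_samples = cm.shape[0], int(cm.sum())
tp = int(np.trace(cm))                 # correct predictions
fp = n_samples - tp                    # each wrong prediction is a false positive for the predicted class...
fn = n_samples - tp                    # ...and a false negative for the actual class
tn = n_classes * n_samples - tp - fp - fn
print(tp, fp, fn, tn)                  # with the study's data this pooling yields 47, 3, 3, 397
```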

3.8.2. Performance Metrics

Based on the four measures (TP, FP, TN, FN), more meaningful measures of the tool’s performance, such as precision, negative predicted value, sensitivity, specificity, accuracy, and F1 score, can be calculated.
Precision = TP / (TP + FP)
Negative Predicted Value = TN / (TN + FN)
Sensitivity (Recall, or True Positive Rate) = TP / (TP + FN)
Specificity = TN / (TN + FP)
Accuracy = (TP + TN) / (TP + TN + FP + FN)
According to Mausner et al. [66], a model’s precision is the proportion of the cases it predicts as positive that are actually positive, while the negative predicted value is the proportion of the cases it predicts as negative that are actually negative. A model’s sensitivity is the ratio of correctly identified positive cases to all actual positive cases, and its specificity is the ratio of correctly classified negative cases to all actual negative cases. The accuracy measure indicates the percentage of all cases correctly classified by the model.
Furthermore, the F1 score combines precision and sensitivity (recall) into a single overall measure of performance, computed as their harmonic mean [67].
F1 Score (Performance) = 2 × (Precision × Recall) / (Precision + Recall)
In Table 15, the performance metrics offer a comprehensive assessment of the decision support model’s effectiveness. Precision, negative predicted value, sensitivity, specificity, accuracy, and the F1 score emphasize the tool’s strengths in correctly identifying both positive and negative cases. These metrics, derived from the confusion matrices (Table 14), showcase the decision tool’s well-rounded performance and its potential to contribute effectively to decision-making processes.
The decision support model achieved a negative predicted value of 0.9925, meaning that over 99% of the cases it classified as negative were indeed negative, and a sensitivity of 0.94, correctly detecting 47 of the 50 actual positive cases. Its specificity was 0.9925 (99.25%) and its accuracy 0.9867, indicating that the tool classifies over 98% of cases correctly. The F1 score, the harmonic mean of precision and recall, was 0.94, further confirming the strong performance of the decision support tool.
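As a quick arithmetic check, the metrics in Table 15 follow directly from the pooled counts in Table 14. The short sketch below recomputes them; it is illustrative only and not part of the published tool.

```python
# Minimal sketch: the six performance metrics of Table 15 recomputed from the
# pooled counts of Table 14 (TP = 47, FP = 3, FN = 3, TN = 397).
TP, FP, FN, TN = 47, 3, 3, 397

precision   = TP / (TP + FP)
npv         = TN / (TN + FN)                  # negative predicted value
sensitivity = TP / (TP + FN)                  # recall / true positive rate
specificity = TN / (TN + FP)
accuracy    = (TP + TN) / (TP + TN + FP + FN)
f1_score    = 2 * precision * sensitivity / (precision + sensitivity)

print(f"precision={precision:.4f}, NPV={npv:.4f}, sensitivity={sensitivity:.4f}")
print(f"specificity={specificity:.4f}, accuracy={accuracy:.4f}, F1={f1_score:.4f}")
```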

3.8.3. Receiver Operating Characteristics (ROC)

The ROC curve is a visual representation of a model’s performance that helps compare different classifiers for a given task, and the area under the ROC curve (AUC) is a common metric used to validate the effectiveness of a classifier. The ROC curve was generated using IBM SPSS V.22, and the AUC was calculated to better characterize the decision tool’s performance. The ROC curve plots the model’s sensitivity (true positive rate) against the false positive rate; the closer the curve lies to the upper-left corner, the better the model’s performance. The results are presented in Table 16 and Figure 12.
The AUC of the proposed decision tool was 0.734, indicating that it performs better than random guessing (AUC = 0.5) but is not a perfect classifier (AUC = 1). The asymptotic significance was 0.178, which is greater than 0.05, suggesting that the developed decision support tool is valid.
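A comparable AUC can also be computed programmatically. The sketch below assumes a binary framing in which each validation run is labelled by whether the tool's top-ranked alternative matched the expert's choice and scored by the tool's utility; both the labels and scores shown are illustrative, not the study's data.

```python
# Minimal sketch (assumed framing, not the authors' SPSS procedure): AUC for
# the tool's utilities as a score for whether its choice matched the expert's.
from sklearn.metrics import roc_auc_score, roc_curve

# 1 = the tool's top-ranked alternative matched the expert's, 0 = it did not
matched   = [1, 1, 0, 1, 1, 0, 1, 1, 0, 1]                      # illustrative labels
utilities = [0.85, 0.80, 0.61, 0.79, 0.83, 0.55, 0.75, 0.82, 0.60, 0.81]

auc = roc_auc_score(matched, utilities)
fpr, tpr, thresholds = roc_curve(matched, utilities)
print(f"AUC = {auc:.3f}")   # 0.5 = random guessing, 1.0 = perfect classifier
```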

4. Discussion, Conclusions and Future Work

Universities worldwide are striving to modernize their traditional campuses and incorporate smart technology. However, implementing a variety of smart applications can leave campus management unsure about the optimal order of implementation and the key success factors and potential benefits. To address this issue, a comprehensive decision-making model has been developed to help a university’s strategic management team to prioritize smart campus solutions based on its specific circumstances.
Previous research has highlighted the importance of smart technologies in various industries, including the potential benefits of smart campuses in the education sector. The current study builds upon this knowledge by proposing a comprehensive decision-making model specifically tailored for universities.
Firstly, a literature survey identified six main objectives: enhancing workflow automation, safety and security, teaching and learning, strategic management, and resource conservation. These objectives are supported by four underlying technologies: cloud computing, IoT, AR and VR, and AI. Smart campus solutions were then grouped into nine functional families through a stakeholder opinion survey, followed by a statistical analysis to ensure clear, non-overlapping groupings. These findings align with previous studies that have emphasized the same objectives for smart applications, and the integration of cloud computing, IoT, AR and VR, and AI is consistent with the evolving landscape of smart technologies.
Secondly, six critical success factors at the strategic level were identified from the literature, including the three types of costs (implementation, operation, and maintenance), implementation duration, resource availability, and stakeholders’ perceived benefit. These success factors were included as attributes in the multiple criteria decision analysis (MCDA) problem for evaluating each alternative. After reviewing various decision-making frameworks, the evidential reasoning (ER) approach was selected as the most appropriate among 12 surveyed MCDA methods to apply to the smart campus strategic decision problem.
The ER model was formulated mathematically, and its aggregation step follows the Dempster–Shafer evidence-combination algorithm. To determine the relative importance of the attributes, a group of experts consisting of faculty, staff, and operational managers applied the Nominal Group Technique and voted on the final weights. The ER model was then programmed in Python; it takes as input a three-dimensional tensor of degrees of belief, where each element is the degree of belief at the intersection of an alternative, an attribute, and an evaluation grade, and it outputs the average utility of each alternative together with a ranking in descending order of average utility, the top-ranked option being the optimal alternative.
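To make the aggregation step concrete, the sketch below illustrates, in simplified form, the recursive evidential reasoning combination described by Yang and Xu [57] for a single alternative. It is not the authors' implementation, and the attribute weights, grade utilities, and belief matrix shown are hypothetical.

```python
import numpy as np

def er_aggregate(beliefs, weights, grade_utilities):
    """Aggregate one alternative's attribute assessments with the recursive
    evidential reasoning (ER) algorithm and return its average utility.

    beliefs         : (L attributes x N grades) degrees of belief; each row sums to <= 1
    weights         : L attribute weights summing to 1
    grade_utilities : utility assigned to each of the N evaluation grades (ascending)
    """
    L, _ = beliefs.shape
    # Basic probability masses for the first attribute
    m = weights[0] * beliefs[0]                       # mass assigned to each grade
    m_bar = 1.0 - weights[0]                          # mass unassigned due to weighting
    m_tilde = weights[0] * (1.0 - beliefs[0].sum())   # mass unassigned due to incompleteness

    for i in range(1, L):
        mi = weights[i] * beliefs[i]
        mi_bar = 1.0 - weights[i]
        mi_tilde = weights[i] * (1.0 - beliefs[i].sum())

        # Normalisation factor resolving the conflict between the two bodies of evidence
        conflict = m.sum() * mi.sum() - (m * mi).sum()
        K = 1.0 / (1.0 - conflict)

        m_H, mi_H = m_bar + m_tilde, mi_bar + mi_tilde
        m_new = K * (m * mi + m_H * mi + m * mi_H)
        m_tilde = K * (m_tilde * mi_tilde + m_bar * mi_tilde + m_tilde * mi_bar)
        m_bar = K * (m_bar * mi_bar)
        m = m_new

    beta = m / (1.0 - m_bar)                          # combined degrees of belief per grade
    beta_H = m_tilde / (1.0 - m_bar)                  # residual (unassigned) belief
    u = np.asarray(grade_utilities, dtype=float)
    u_max = beta @ u + beta_H * u.max()
    u_min = beta @ u + beta_H * u.min()
    return float((u_max + u_min) / 2.0)               # average utility

# Hypothetical example: one alternative assessed on three attributes against the
# grades (Poor, Average, Good) with utilities (0.0, 0.5, 1.0).
weights = np.array([0.4, 0.35, 0.25])
beliefs = np.array([[0.1, 0.3, 0.6],
                    [0.0, 0.5, 0.5],
                    [0.2, 0.3, 0.4]])   # last assessment is incomplete (sums to 0.9)
print(round(er_aggregate(beliefs, weights, [0.0, 0.5, 1.0]), 3))
```

Looping such a function over each alternative's slice of the belief tensor and sorting the resulting utilities in descending order would reproduce the ranking behaviour described above.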
To validate the decision tool, 50 belief tensors were randomly generated and assessed in parallel by the model and by a group of decision-making experts who provided utilities for each alternative based on their experience and judgment. The model’s validity was evaluated using a paired t-test, which indicated no statistically significant difference between the expert-provided and model-generated results at the 0.05 significance level. Additionally, the area under the model’s ROC curve was 0.734, indicating that the model outperformed random guessing. The model’s performance was further evaluated using six metrics: accuracy, precision, sensitivity, specificity, negative predicted value, and F1 score. All metrics were above 90%, highlighting the soundness of the model and its ability to reliably emulate a decision-making expert.
The validation of the ER approach as a highly reliable decision-making tool reinforces the findings and contributes to the broader context of decision support in the implementation of smart solutions. By utilizing expert opinions and comparing the model-generated results with human experts’ assessments, the study demonstrates the tool’s effectiveness in emulating decision-making expertise.
The implications of these findings extend beyond the specific context of the study. The decision-making tool can serve as a generalized tool for worldwide use, helping universities globally make informed decisions about implementing smart campus solutions. This contributes to the modernization of traditional campuses on a global scale and facilitates the adoption of smart technologies in the education sector.
The findings of this research provide valuable insights into the optimization of smart campus solutions through the proposed decision-making tool based on the evidential reasoning (ER) approach. The tool addresses the challenge of prioritizing smart applications based on specific requirements and circumstances of universities. By considering various attributes such as implementation cost, operation cost, maintenance cost, implementation duration, resource availability, and stakeholders’ perceived benefit, the tool provides rankings and utilities for different smart applications.
Although the study successfully achieved its aim, a few limitations and possible areas of improvement were identified:
  • When applying the tool to another university campus, the attribute weights should be adjusted to reflect that university’s unique values hierarchy.
  • The tool assumes that the university’s leadership is not bound by any strategic commitments, such as sustainability goals, so adding “Strategic Alignment” as an extra attribute to the tool in the future may be useful.
  • The tool outlines the digital transformation plan for a fully traditional campus, so if the campus has already incorporated some smart applications, the decision-making tool must consider these existing systems by excluding them from the alternatives.
  • Implicit costs, such as opportunity cost, are not accounted for in the tool, so investigating how to incorporate them would be interesting.
  • The tool currently does not prompt the user for the institution’s budget, so an alternative that exceeds the budget is not excluded from the analysis; a simple mechanism could automatically screen out alternatives whose annualized costs exceed the budget (a minimal sketch follows this list).
  • The evaluation of cost is left to subjective interpretation, so establishing more objective and concrete definitions of cost would be helpful.
  • To increase the tool’s resolution, more evaluation grades can be added, such as “Excellent”, “Very Good”, “Good”, “Average”, “Below Average”, “Bad”, and “Very Poor”, or a continuous utility function can be assumed, but this may negatively impact user experience.
  • Expanding the tool’s scope by including other important factors, such as social impact, environmental impact, and ethical considerations, to help universities make more socially responsible and sustainable decisions.
  • Collaborating with other universities to gather more data and insights on their smart campus implementation strategies and to better understand the variations in universities’ priorities and values.
  • Investigating the applicability of the developed tool to other industries, such as healthcare or manufacturing, to see if the tool can be adapted for other decision-making contexts.
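As an illustration of the budget screen mentioned above, the following sketch filters out alternatives whose annualized cost exceeds a hypothetical budget; all names and figures are placeholders.

```python
# Minimal sketch of the budget screen suggested above; the alternative names,
# annualized costs, and budget figure are hypothetical.
def screen_by_budget(annualized_costs, budget):
    """Drop alternatives whose annualized cost exceeds the available budget."""
    return {alt: cost for alt, cost in annualized_costs.items() if cost <= budget}

costs = {"A1": 120_000, "A2": 80_000, "A3": 210_000}   # hypothetical annualized costs
print(screen_by_budget(costs, budget=150_000))          # {'A1': 120000, 'A2': 80000}
```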
This study has developed a comprehensive decision-making model that addresses the challenge of prioritizing smart applications in universities. The validation of the model confirms its reliability and effectiveness, and its generalizability enables universities worldwide to make informed decisions about implementing smart campus solutions. The identified limitations and potential areas for improvement offer valuable directions for future research, enhancing the tool’s usability, objectivity, and applicability across various contexts.

Author Contributions

This study builds on an earlier study initiated by V.A., who is the principal investigator. V.A. and M.F.K. contributed to conducting the research and conceptualizing the findings. Z.B. contributed to the writing and structuring of the paper as well as the conceptualization of the findings, and was closely involved in its editing and visualization. All authors, including N.B., contributed to the editing and visualization of the paper. All authors have read and agreed to the published version of the manuscript.

Funding

The authors acknowledge the support of the American University of Sharjah under the Open Access Program. This paper represents the opinions of the authors and is not intended to represent the position or opinions of the American University of Sharjah.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Tabulated Results

Table A1. Expert-provided utilities per alternative.
No.UtilitiesOptimal Alternative
A1A2A3A4A5A6A7A8A9
1100000000A1
2010000000A2
3001000000A3
4000100000A4
5000010000A5
6000001000A6
7000000100A7
8000000010A8
9000000001A9
10100000000A1
11100000000A1
120.250.250.750.250.250.250.250.250.25A3
130.250.250.250.750.250.250.250.250.25A4
140.250.250.250.250.750.250.250.250.25A5
150.250.250.250.250.250.750.250.250.25A6
160.250.250.250.250.250.250.750.250.25A7
170.250.250.250.250.250.250.250.750.25A8
180.250.250.250.250.250.250.250.250.75A9
190.50.20.20.20.20.20.20.20.2A1
200.20.50.20.20.20.20.20.20.2A2
210.20.20.50.20.20.20.20.20.2A3
220.20.20.20.50.20.20.20.20.2A4
230.20.20.20.20.50.20.20.20.2A5
240.20.20.20.20.20.50.20.20.2A6
250.20.20.20.20.50.20.50.20.2A7
260.20.20.20.20.50.20.20.50.2A8
270.20.20.20.20.50.20.20.50.2A8
280.50.50.60.60.60.70.60.650.5A6
290.590.670.450.490.490.60.580.580.7A9
300.650.70.450.550.60.650.560.70.6A2
310.70.750.50.40.650.60.60.750.6A2
320.40.550.60.60.60.750.60.650.7A6
330.60.60.60.720.550.650.70.150.7A4
340.6510.650.60.60.60.70.60.70.15A6
350.70.150.650.650.60.60.60.70.55A1
360.150.650.650.60.70.550.60.60.7A5
370.150.650.70.60.70.550.60.60.7A9
380.60.70.550.70.650.650.150.650.6A2
390.450.70.50.250.80.70.350.80.6A5
400.450.750.550.650.550.80.650.550.55A6
410.750.50.80.70.60.780.750.50.7A3
420.60.30.60.60.50.60.30.750.4A8
430.650.550.80.70.50.70.450.60.7A3
440.30.60.70.50.80.650.30.50.65A5
450.30.450.550.30.60.50.20.80.45A8
460.650.650.80.70.550.850.550.550.7A6
470.60.150.450.40.30.450.30.60.45A8
480.50.450.50.40.40.70.50.650.35A6
490.40.350.450.40.450.450.40.550.35A8
500.650.40.60.350.50.450.350.450.45A1
Table A2. Attribute weight variation.
AttributesAttribute Being ControlledActual WeightWeight Decreased by 25%Weight Decreased by 50%Weight Increased by 25%Weight Increased by 50%
Set 1Set 2Set 3Set 4Set 5
Implementation CostSmartcities 06 00106 i0010.20.150.10.250.3
Operation Cost 0.150.160.170.140.13
Maintenance Cost 0.10.110.120.090.08
Stakeholders’ Benefit 0.250.260.270.240.23
Implementation Duration 0.150.160.170.140.13
Resource Availability 0.150.160.170.140.13
Implementation Cost 0.20.20750.2150.19250.185
Operation CostSmartcities 06 00106 i0010.150.11250.0750.18750.225
Maintenance Cost 0.10.10750.1150.09250.085
Stakeholders’ Benefit 0.250.25750.2650.24250.235
Implementation Duration 0.150.15750.1650.14250.135
Resource Availability 0.150.15750.1650.14250.135
Implementation Cost 0.20.2050.210.1950.19
Operation Cost 0.150.1550.160.1450.14
Maintenance CostSmartcities 06 00106 i0010.10.0750.050.1250.15
Stakeholders’ Benefit 0.250.2550.260.2450.24
Implementation Duration 0.150.1550.160.1450.14
Resource Availability 0.150.1550.160.1450.14
Implementation Cost 0.20.21520.2250.18750.175
Operation Cost 0.150.16250.1750.13750.125
Maintenance Cost 0.10.11250.1250.08750.075
Stakeholders’ BenefitSmartcities 06 00106 i0010.250.18750.1250.31250.375
Implementation Duration 0.150.16250.1750.13250.125
Resource Availability 0.150.16250.1750.13750.125
Implementation Cost 0.20.20750.2150.19250.185
Operation Cost 0.150.11250.0750.18750.225
Maintenance Cost 0.10.10750.1150.09250.085
Stakeholders’ Benefit 0.250.25750.2650.24250.235
Implementation DurationSmartcities 06 00106 i0010.150.15750.1650.14250.135
Resource Availability 0.150.15750.1650.14250.135
Implementation Cost 0.20.20750.2150.19250.185
Operation Cost 0.150.11250.0750.18750.225
Maintenance Cost 0.10.10750.1150.09250.085
Stakeholders’ Benefit 0.250.25750.2650.24250.235
Implementation Duration 0.150.15750.1650.14250.135
Resource AvailabilitySmartcities 06 00106 i0010.150.15750.1650.14250.135
Table A3. Variation in the utility of the optimal alternative due to change in weight.
0.55Beliefs Tensor 50.66Beliefs Tensor 40.61Beliefs Tensor 30.85Beliefs Tensor 20.81Beliefs Tensor 1Utilities under original weights
0.550.690.600.860.79Implementation CostDecreasing Weight by 25%
0.540.680.640.860.81Operation Cost
0.560.660.600.850.82Maintenance Cost
0.590.650.650.840.79Stakeholder’s Benefit
0.540.680.640.860.81Implementation Duration
0.540.680.640.860.81Resource Availability
0.550.740.590.870.77Implementation CostDecreasing Weight by 50%
0.530.700.670.870.80Operation Cost
0.660.660.600.850.58Maintenance Cost
0.630.630.690.830.78Stakeholder’s Benefit
0.530.700.670.870.80Implementation Duration
0.530.700.670.870.80Resource Availability
0.550.690.630.840.83Implementation CostIncreasing Weight by 25%
0.560.630.630.850.82Operation Cost
0.540.660.620.850.80Maintenance Cost
0.510.680.600.860.83Stakeholder’s Benefit
0.560.630.630.850.82Implementation Duration
0.560.630.630.850.82Resource Availability
0.550.690.840.840.83Implementation CostIncreasing Weight by 50%
0.570.610.650.840.57Operation Cost
0.520.670.630.850.79Maintenance Cost
0.470.710.590.870.85Stakeholder’s Benefit
0.570.610.650.840.57Implementation Duration
0.570.610.650.840.57Resource Availability
Table A4. Variation in the optimal alternative due to changes in weight.
Actual OutputDecreasing Weight by 25%Decreasing Weight by 50%Increasing Weight by 25%Increasing Weight by 50%
Implementation CostOperation CostMaintenance CostStakeholder’s BenefitImplementation DurationResource AvailabilityImplementation CostOperation CostMaintenance CostStakeholder’s BenefitImplementation DurationResource AvailabilityImplementation CostOperation CostMaintenance CostStakeholders BenefitImplementation DurationResource AvailabilityImplementation CostOperation CostMaintenance CostStakeholder’s BenefitImplementation DurationResource Availability
Beliefs Tensor 1
A8A8A8A8A8A8A8A8A8A8A8A8A8A8A8A8A8A8A8A8A8A8A8A8A8
Beliefs Tensor 2
A6A6A6A6A6A6A6A6A6A6A6A6A6A6A6A6A6A6A6A6A6A6A6A6A6
Beliefs Tensor 3
A1A8A1A1A8A1A1A8A1A1A8A1A1A1A8A1A1A8A8A6A8A1A1A8A8
Beliefs Tensor 4
A8A6A8A8A8A8A8A6A8A8A8A8A8A8A8A8A6A8A8A8A8A8A6A8A8
Beliefs Tensor 5
A8A8A8A8A8A8A8A8A8A8A8A8A8A8A8A8A8A8A8A8A8A8A8A8A8
Table A5. Variation in the tool’s performance metrics due to change in weight.
AccuracySpecificitySensitivityNegative Predicted ValuePrecision
1.01.01.01.01.0Value under Original Weights
0.70.670.750.80.6Implementation CostDecreasing Weight by 25%
1.01.01.01.01.0Operation Cost
1.01.01.01.01.0Maintenance Cost
0.90.831.01.00.8Stakeholder’ s Benefit
1.01.01.01.01.0Implementation Duration
1.01.01.01.01.0Resource Availability
0.70.670.750.80.6Implementation CostDecreasing Weight by 50%
1.01.01.01.01.0Operation Cost
1.01.01.01.01.0Maintenance Cost
0.90.831.01.00.8Stakeholder’s Benefit
1.01.01.01.01.0Implementation Duration
1.01.01.01.01.0Resource Availability
1.01.01.01.01.0Implementation CostIncreasing Weight by 25%
0.90.831.01.00.8Operation Cost
1.01.01.01.01.0Maintenance Cost
0.750.750.750.750.75Stakeholder’s Benefit
0.90.831.01.00.8Implementation Duration
0.90.831.01.00.8Resource Availability
0.90.831.01.00.8Implementation CostIncreasing Weight by 50%
0.90.831.01.00.8Operation Cost
1.01.01.01.01.0Maintenance Cost
0.750.750.750.750.75Stakeholder’ s Benefit
0.90.831.01.00.8Implementation Duration
0.90.831.01.00.8Resource Availability
Table A6. Decision-tool-generated utilities based on the beliefs tensors provided to Group B.
ExpertUtilities Generated by Decision ToolOutput
A1A2A3A4A5A6A7A8A9
1100000000A1
2010000000A2
3001000000A3
4000100000A4
5000010000A5
6000001000A6
7000000100A7
8000000010A8
9000000001A9
10100000000A1
11100000000A1
120.250.250.750.250.250.250.250.250.25A3
130.250.250.250.750.250.250.250.250.25A4
140.250.250.250.250.750.250.250.250.25A5
150.250.250.250.250.250.750.250.250.25A6
160.250.250.250.250.250.250.750.250.25A7
170.250.250.250.250.250.250.250.750.25A8
180.250.250.250.250.250.250.250.250.75A9
190.50.16340.16340.16340.1630.16340.16340.16340.1634A1
200.16340.50.16340.16340.1630.16340.16340.16340.1634A2
210.16340.16340.50.16340.1630.16340.16340.16340.1634A3
220.16340.16340.16340.50.1630.16340.16340.16340.1634A4
230.16340.16340.16340.16340.50.16340.16340.16340.1634A5
240.16340.16340.16340.16340.1630.50.16340.16340.1634A6
250.16340.16340.16340.16340.1630.16340.50.16340.1634A7
260.16340.16340.16340.16340.1630.16340.16340.50.1634A8
270.16340.16340.16340.16340.1630.16340.16340.50.1634A8
280.42700.54390.60790.61080.6100.69460.57920.67840.4500A6
290.57920.67840.45000.42700.5430.60790.61080.61000.6946A9
300.67840.69460.42700.54390.6070.61080.61000.69460.5792A2
310.73440.74190.51740.42910.6420.60630.59670.74190.6082A2
320.42700.54390.60790.61080.6100.69460.57920.66290.6794A6
330.60790.61080.61000.69460.5790.66290.67940.16340.6946A4
340.66290.67940.60790.61080.6100.69460.57920.69460.1634A6
Table A7. Comparison of the decision tool’s results to the experts’ decisions.
Test RunsDecision Tool UtilityDecision Tool ResultsExpert UtilityExpert Results
11A11A1
21A21A2
31A31A3
41A41A4
51A51A5
61A61A6
71A71A7
81A81A8
91A91A9
101A11A1
111A11A1
120.75A30.75A3
130.75A40.75A4
140.75A50.75A5
150.75A60.75A6
160.75A70.75A7
170.75A80.75A8
180.75A90.75A9
190.5A10.5A1
200.5A20.5A2
210.5A30.5A3
220.5A40.5A4
230.5A50.5A5
240.5A60.5A6
250.5A70.5A7
260.5A80.5A8
270.5A80.5A8
280.69A60.7A6
290.69A90.6A9
300.694614A20.7A2
310.741931A20.75A2
320.694614A60.75A6
330.694614A40.7A4
340.694614A60.7A6
350.694614A10.7A1
360.694614A50.7A5
370.694614A50.7A9
380.694614A20.7A2
390.83354A50.8A5
400.797288A60.8A6
410.802289A30.8A3
420.747877A80.75A8
430.798422A30.8A3
440.82466A50.8A5
450.812593A80.8A8
460.85234A60.85A6
470.612789A10.6A8
480.661435A80.7A6
490.551644A80.55A8
500.658589A10.65A1

References

  1. Hoang, G.T.T.; Dupont, L.; Camargo, M. Application of Decision-Making Methods in Smart City Projects: A Systematic Literature Review. Smart Cities 2019, 2, 27. [Google Scholar] [CrossRef]
  2. Dong, Z.Y.; Zhang, Y.; Yip, C.; Swift, S.; Beswick, K. Smart campus: Definition, framework, technologies, and services. IET Smart Cities 2020, 2, 43–54. [Google Scholar] [CrossRef]
  3. Norma, L. The Future of Higher Education: Smart Campuses. 2019. Available online: https://spaces4learning.com/articles/2019/03/01/smart-campuses.aspx (accessed on 11 January 2022).
  4. John, S.; Mantz, Y. Capability and Quality in Higher Education. 2013, p. 224. Available online: https://www.taylorfrancis.com/books/edit/10.4324/9781315042046/capability-quality-higher-education-john-stephenson-mantz-yorke (accessed on 5 January 2022).
  5. Misra, S.; McMahon, G. Diversity in Higher Education: The Three Rs. J. Educ. Bus. 2006, 82, 40–43. [Google Scholar] [CrossRef]
  6. Valentine, G.; Holloway, S.; Bingham, N. The Digital Generation?: Children, ICT and the Everyday Nature of Social Exclusion. Antipode 2002, 34, 296–315. [Google Scholar] [CrossRef]
  7. Anonymous. SMART—NetLingo The Internet Dictionary: Online Dictionary of Computer and Internet Terms, Acronyms, Text Messaging, Smileys. 2020. Available online: https://www.netlingo.com/word/smart.php (accessed on 19 January 2022).
  8. Bilal, K. Smart Campus: Benefits, Trends, and Technology—Part 1. 2019. Available online: https://www.wrld3d.com/blog/smart-campus-trends/ (accessed on 6 February 2022).
  9. Roy, M.; Tushar, H. Smart Campus|The Next-Generation Campus. Available online: https://www2.deloitte.com/us/en/pages/consulting/solutions/next-generation-smart-campus.html (accessed on 8 February 2022).
  10. Karam, A.; Vian, A.; Sara, S. A Strategic Framework for Smart Campus. 2020, pp. 790–798. Available online: http://ieomsociety.org/ieom2020/papers/488.pdf (accessed on 15 March 2022).
  11. Muhamad, W.; Kurniawan, N.B.; Suhardi; Yazid, S. Smart campus features, technologies, and applications: A systematic literature review. In Proceedings of the 2017 International Conference on Information Technology Systems and Innovation (ICITSI), Bandung, Indonesia, 23–24 October 2017; pp. 384–391. [Google Scholar]
  12. Prandi, C.; Monti, L.; Ceccarini, C.; Salomoni, P. Smart Campus: Fostering the Community Awareness Through an Intelligent Environment. Mob. Netw. Appl. 2020, 25, 945–952. [Google Scholar] [CrossRef]
  13. Musa, M.; Ismail, M.N.; Fudzee, M.F.M. A survey on smart campus implementation in Malaysia. JOIV Int. J. Inform. Vis. 2021, 5, 51–56. [Google Scholar] [CrossRef]
  14. Malatji, E.M. The development of a smart campus-african universities point of view. In Proceedings of the 2017 8th International Renewable Energy Congress (IREC), Amman, Jordan, 21–23 March 2017. [Google Scholar]
  15. Ahmed, V.; Alnaaj, K.A.; Saboor, S. An investigation into stakeholders’ perception of smart campus criteria: The American university of Sharjah as a case study. Sustainability 2020, 12, 5187. [Google Scholar] [CrossRef]
  16. Vanhooijdon, R. Smart Campuses Are the Future of Higher Education. 2019. Available online: https://www.richardvanhooijdonk.com/blog/en/smart-campuses-are-the-future-of-higher-education (accessed on 4 May 2022).
  17. Sun, L.; Chen, G.; Xiong, H.; Guo, C. Cluster Analysis in Data-Driven Management and Decisions. J. Manag. Sci. Eng. 2017, 2, 227–251. [Google Scholar] [CrossRef]
  18. Saadi, M.; Noor, M.T.; Imran, A.; Toor, W.T.; Mumtaz, S.; Wuttisittikulkij, L. IoT Enabled Quality of Experience Measurement for Next Generation Networks in Smart Cities. Sustain. Cities Soc. 2020, 60, 102266. [Google Scholar] [CrossRef]
  19. Alghamdi, A.; Shetty, S. Survey toward a smart campus using the internet of things. In Proceedings of the 2016 IEEE 4th International Conference on Future Internet of Things and Cloud (FiCloud), Vienna, Austria, 22–24 August 2016; pp. 235–239. [Google Scholar]
  20. Zhang, Y.; Yip, C.; Lu, E.; Dong, Z.Y. A Systematic Review on Technologies and Applications in Smart Campus: A Human-Centered Case Study. IEEE Access 2022, 10, 16134–16149. [Google Scholar] [CrossRef]
  21. Azure, M. What Is Cloud Computing? A Beginner’s Guide. Available online: https://azure.microsoft.com/en-us/resources/cloud-computing-dictionary/what-is-cloud-computing/ (accessed on 29 June 2022).
  22. Eckhoff, D.; Wagner, I. Privacy in the smart city—Applications, technologies, challenges, and solutions. IEEE Commun. Surv. Tutor. 2017, 20, 489–516. [Google Scholar] [CrossRef]
  23. Jamei, E.; Mortimer, M.; Seyedmahmoudian, M.; Horan, B.; Stojcevski, A. Investigating the role of virtual reality in planning for sustainable smart cities. Sustainability 2017, 9, 2006. [Google Scholar] [CrossRef]
  24. Gregory, K.; Joseph, R. Augmented Reality. 2012. Available online: https://www.elsevier.com/books/augmented-reality/kipper/978-1-59749-733-6 (accessed on 2 July 2022).
  25. Copeland, B.J. Artificial Intelligence. Available online: https://www.britannica.com/technology/artificial-intelligence (accessed on 2 July 2022).
  26. Doulai, P. Smart and flexible campus: Technology enabled university education. In Proceedings of the World Internet and Electronic Cities Conference (WIECC), Kish Island, Iran, 1–3 May 2001. [Google Scholar]
  27. Wang, M.; Ng, J.W.P. Intelligent mobile cloud education: Smart anytime-anywhere learning for the next generation campus environment. In Proceedings of the 2012 Eighth International Conference on Intelligent Environments, Washington, DC, USA, 26–29 June 2012; pp. 149–156. [Google Scholar]
  28. Kwok, L. A vision for the development of i-campus. Smart Learn. Env. 2015, 2, 122. [Google Scholar] [CrossRef]
  29. Hwa, R.T.; Chiung, L.; Hsin, C. A Hybrid Approach to IT Project Selection. 2008. Available online: http://www.wseas.us/e-library/transactions/economics/2008/30-924N.pdf (accessed on 13 July 2022).
  30. Atkinson, R. Project management: Cost, time and quality, two best guesses and a phenomenon, its time to accept other success criteria. Int. J. Proj. Manag. 1999, 17, 337–342. [Google Scholar] [CrossRef]
  31. Jiang, J.J.; Klein, G. Project selection criteria by strategic orientation. Inf. Manag. 1999, 36, 63–75. [Google Scholar] [CrossRef]
  32. Muralidhar, K.; Santhanam, R.; Wilson, R.L. Using the analytic hierarchy process for information system project selection. Inf. Manag. 1990, 18, 87–95. [Google Scholar] [CrossRef]
  33. Kendrick, J.; Saaty, D. Use Analytic Hierarchy Process For Project Selection. ASQ Six Sigma Forum Mag. 2007, 6, 22. [Google Scholar]
  34. Stewart, T.J. A Multi-criteria Decision Support System for R&D Project Selection. J. Oper. Res. Soc. 1991, 42, 17–26. [Google Scholar] [CrossRef]
  35. Lee, J.W.; Kim, S.H. An integrated approach for interdependent information system project selection. Int. J. Proj. Manag. 2001, 19, 111–118. [Google Scholar] [CrossRef]
  36. Meade, L.M.; Presley, A. R&D project selection using the analytic network process. IEEE Trans. Eng. Manag. 2002, 49, 59–66. [Google Scholar] [CrossRef]
  37. Mohanty, R.P.; Agarwal, R.; Choudhury, A.K.; Tiwari, M.K. A fuzzy ANP-based approach to R&D project selection: A case study. Int. J. Prod. Res. 2005, 43, 5199–5216. [Google Scholar] [CrossRef]
  38. Chen, J.; Askin, R.G. Project selection, scheduling and resource allocation with time dependent returns. Eur. J. Oper. Res. 2009, 193, 23–34. [Google Scholar] [CrossRef]
  39. Liu, S.; Wang, C. Optimizing project selection and scheduling problems with time-dependent resource constraints. Autom. Constr. 2011, 20, 1110–1119. [Google Scholar] [CrossRef]
  40. Juneja, P. What Is Decision Making? Available online: https://www.managementstudyguide.com/what-is-decision-making.htm (accessed on 25 August 2022).
  41. Peter, F.D. The Effective Decision. 1967. Available online: https://hbr.org/1967/01/the-effective-decision (accessed on 27 August 2022).
  42. Liu, Y.; Eckert, C.; Bris, G.Y.-L.; Petit, G. A fuzzy decision tool to evaluate the sustainable performance of suppliers in an agrifood value chain. Comput. Ind. Eng. 2019, 127, 196–212. [Google Scholar] [CrossRef]
  43. Chai, J.; Liu, J.N.; Ngai, E.W. Application of decision-making techniques in supplier selection: A systematic review of literature. Expert Syst. Appl. 2013, 40, 3872–3885. [Google Scholar] [CrossRef]
  44. Luenberger, D.G.; Ye, Y. Linear and Nonlinear Programming; Springer: Cham, Switzerland, 1984. [Google Scholar]
  45. Duan, Y.; Edwards, J.S.; Dwivedi, Y.K. Artificial intelligence for decision making in the era of Big Data–evolution, challenges and research agenda. Int. J. Inf. Manag. 2019, 48, 63–71. [Google Scholar] [CrossRef]
  46. Haddad, M.; Sanders, D. Selection of discrete multiple criteria decision making methods in the presence of risk and uncertainty. Oper. Res. Perspect. 2018, 5, 357–370. [Google Scholar] [CrossRef]
  47. Opricovic, S.; Tzeng, G. Compromise solution by MCDM methods: A comparative analysis of VIKOR and TOPSIS. Eur. J. Oper. Res. 2004, 156, 445–455. [Google Scholar] [CrossRef]
  48. Belton, V.; Stewart, T.J. Problem Structuring. Multiple Criteria Decision Analysis; Springer: Boston, MA, USA, 2002. [Google Scholar]
  49. Mark, V.; Patrick, T.H. An Analysis of Multi-Criteria Decision Making Method. Int. J. Oper. Res. 2013, 10, 56–66. [Google Scholar]
  50. Zardari, N.H.; Ahmed, K.; Shirazi, S.M.; Yusop, Z.B. Weighting Methods and Their Effects on Multi-Criteria Decision Making Model Outcomes in Water Resources Management; Springer: Cham, Switzerland, 2015. [Google Scholar]
  51. Sumaryanti, L.; Rahayu, T.K.; Prayitno, A.; Salju. Comparison study of SMART and AHP method for paddy fertilizer recommendation in decision support system. IOP Conf. Ser. Earth Environ. Sci. 2019, 343, 12207. [Google Scholar] [CrossRef]
  52. Roy, B. Decision-aid and decision-making. Eur. J. Oper. Res. 1990, 45, 324–331. [Google Scholar] [CrossRef]
  53. Janse, B. Multiple Criteria Decision Analysis (MCDA). 2018. Available online: https://www.toolshero.com/decision-making/multiple-criteria-decision-analysis-mcda (accessed on 29 October 2022).
  54. Urošević, K.; Gligorić, Z.; Miljanović, I.M.; Čedomir; Beljić, B.; Gligorić, M.G.; Moreno-Jiménez, J. Novel Methods in Multiple Criteria Decision-Making Process (MCRAT and RAPS)—Application in the Mining Industry. Mathematics 2021, 9, 1980. [Google Scholar] [CrossRef]
  55. Xu, D.D.; Yang, J.B. Introduction to Multi-Criteria Decision Making and the Evidential Reasoning Approach. Available online: https://personalpages.manchester.ac.uk/staff/jian-bo.yang/JB%20Yang%20Book_Chapters/XuYang_MSM_WorkingPaperFinal.pdf (accessed on 12 November 2022).
  56. Ferson, S.; Sentz, K. Combination of Evidence in Dempster-Shafer Theory; SAND2002-0835; Sandia National Laboratories: Livermore, CA, USA, 2002.
  57. Yang, J.; Xu, D. On the evidential reasoning algorithm for multiple attribute decision analysis under uncertainty. IEEE Trans. Syst. Man Cybern. Part A Syst. Hum. 2002, 32, 289–304. [Google Scholar] [CrossRef]
  58. Figueira, J.; Greco, S.; Ehrgott, M.; Roy, B. Paradigms and challenges. In Multiple Criteria Decision Analysis: State of the Art Surveys; Springer: New York, NY, USA, 2005; Volume 78, pp. 3–24. [Google Scholar]
  59. McMillan, S.S.; King, M.; Tully, M.P. How to use the nominal group and Delphi techniques. Int. J. Clin. Pharm. 2016, 38, 655–662. [Google Scholar] [CrossRef] [PubMed]
  60. Delbecq, A.L.; Van de Ven, A.H. A Group Process Model for Problem Identification and Program Planning. J. Appl. Behav. Sci. 1971, 7, 466–492. [Google Scholar] [CrossRef]
  61. Delbecq, A.L.; Van de Ven, A.H.; Gustafson, D.H. Group Techniques for Program Planning: A Guide to Nominal Group and Delphi Processes; Scott Foresman Company: Glenview, IL, USA, 1975. [Google Scholar]
  62. Pressman, R.S. Software Engineering: A Practitioners Approach, 6th ed.; McGraw-Hill: New York, NY, USA, 2005. [Google Scholar]
  63. Tanase, M.; Le Toan, T.; de la Riva, J.; Santoro, M. An Estimation of Change in Forest Area in Central Siberia Using Multi Temporal SAR Data. 2009. Available online: http://www.tma.ro/files/An%20estimation%20of%20change%20in%20forest%20area%20in%20central%20siberia%20using%20multi%20temporal%20SAR%20data.pdf (accessed on 13 November 2022).
  64. Bai, J.; Ng, S. Tests for Skewness, Kurtosis, and Normality for Time Series Data. J. Bus. Econ. Stat. 2005, 23, 49–60. [Google Scholar] [CrossRef]
  65. Rajalakshmi, R.; Aravindan, C. A Naive Bayes approach for URL classification with supervised feature selection and rejection framework. Comput. Intell. 2018, 34, 363–396. [Google Scholar] [CrossRef]
  66. Mausner, J.S.; Bahn, A.K.; Kramer, S. Mausner & Bahn Epidemiology: An Introductory Text, 2nd ed.; Saunders: Philadelphia, PA, USA, 1985. [Google Scholar]
  67. Hasnain, M.; Pasha, M.F.; Ghani, I.; Imran, M.; Alzahrani, M.Y.; Budiarto, R. Evaluating Trust Prediction and Confusion Matrix Measures for Web Services Ranking. IEEE Access 2020, 8, 90847–90861. [Google Scholar] [CrossRef]
Figure 1. Steps in Decision Making; Source: Author.
Figure 2. Decision model representation through ATD.
Figure 3. Structural design of the decision tool components.
Figure 4. Inference engine flowchart.
Figure 5. Visual summary of the model inputs and outputs; Source: Author.
Figure 6. Bar chart of changes in accuracy due to changes in weight.
Figure 7. Example model output: ranked list of smart campus alternatives by average utility.
Figure 8. Example model output: Pareto chart of average utilities of alternatives.
Figure 9. Example model output: Bar chart of aggregated belief for the optimal alternative.
Figure 10. Comparison of decision-tool-generated utilities to expert-provided utilities.
Figure 11. (a) Histogram of expert-provided utilities. (b) Histogram of decision-tool-generated utilities.
Figure 12. Receiver Operating Characteristic (ROC) curve.
Table 1. Enabling Technologies of Smart Campus (Source: Author).
TechnologyMeaningBenefits
Cloud computing [21]Cloud computing refers to the provision of computing resources, storage, and software, such as servers, databases, networking, and analytics, through the internet which can be addressed as “the cloud.”
  • Flexible
  • Minimum interaction with the service provider
  • Low operation cost
  • High efficiency
Internet of Things [22]Global infrastructure that supports the information society by linking physical and virtual objects using current and developing interoperable technologies for communication and information exchange.
  • Real-time tracking
  • Automation of operations
  • Accurate monitoring
Virtual reality [23]A simulated world generated by a computer and creates the impression of a physical reality surrounding the user.
  • Dynamic virtual environment
  • In-depth learning
  • Efficient communication tool
Augmented reality [24]An interface technology that augments the view of the real world with digitally created virtual content, such as visual elements, sound, or other sensory stimuli, thereby allowing a seamless overlay between computer-generated content and our real-world perceptions.
  • Efficient learning aid
  • Hyper-realistic visualization
  • Easy campus navigation
Artificial intelligence [25]The science by which machines learn from experience, adapt to new inputs and generate solutions that would otherwise be very difficult to obtain using analytical techniques.
  • Pattern recognition
  • Forecasting
  • Planning & control
Table 3. Data summary for the stakeholders’ opinion survey.
GroupsCountSumAverageVariance
A.1. Smart Learning Management System562424.3214285710.512987013
A.2. Smart Campus Operations562524.50.290909091
A.3. Safe Learning Environment562304.1071428571.188311688
A.4. Smart Geographic Information System562193.9107142861.100974026
A.5. Smart Administrative System562454.3750.565909091
A.6. Waste and Water Management System562434.3392857140.700974026
A.7. Sustainable Energy Management System562444.3571428570.815584416
A.8. Smart Classrooms562153.8392857140.937337662
A.9. Smart Transportation System562354.1964285710.778896104
Table 4. Results of the single-factor, two-way ANOVA test on stakeholder survey data.
ANOVA
Source of VariationSSdfMSFp-ValueF Crit
Between Groups22.378.002.79663.65210.00041.9571
Within Groups379.05495.000.7658
Total401.43503.00
Table 5. Attribute weights provided by Group A experts.
General AttributesExp. 1Exp. 2Exp. 3Exp. 4Exp. 5Exp. 6Exp. 7Exp. 8Exp. 9Average Weights
Implementation Cost0.20.20.250.30.20.20.30.30.250.20
Maintenance Cost0.150.100.20.10.100.150.10.10.10.10
Operation Cost0.150.150.150.10.150150.10.10.20.15
Project Duration0.150.150.10.10.150.150.10.10.10.15
Stakeholder’s Benefit0.20.250.20.30.250.20.30.30.250.25
Resource Availability0.150.150.10.10.150.150.10.10.10.15
Table 6. Evaluation grades for each attribute.
General AttributesEvaluation Grades
Development CostLow (Good), Medium (Average), High (Poor)
Maintenance CostLow (Good), Medium (Average), High (Poor)
Operation CostLow (Good), Medium (Average), High (Poor)
Implementation DurationShort (Good), Medium (Average), Long (Poor)
Stakeholder’s BenefitHigh (Good), Medium (Average), Low (Poor)
Resource AvailabilityYes (Good), Not Sure (Average), No (Poor)
Table 7. Example of a generated decision-making scenario.
AttributeEvaluation GradeAlternatives
A1A2A3A4A5A6A7A8A9
Implementation CostGood00.500.30.90.70.10.60.4
Average0.50.50.50.30.050.30.20.30.6
Poor0.500.50.40.0500.70.10
Operation CostGood0.60.60.60.10.60.40.30.90.7
Average0.30.30.30.20.30.60.30.050.3
Poor0.10.10.10.70.100.40.050
Maintenance CostGood0.10.60.40.30.90.700.50
Average0.20.30.60.30.050.30.50.50.5
Poor0.70.100.40.0500.500.5
Stakeholder’s BenefitGood00.500.10.60.40.60.60.6
Average0.50.50.50.20.30.60.30.30.3
Poor0.500.50.70.100.10.10.1
Implementation DurationGood0.60.60.600.500.10.60.4
Average0.30.30.30.50.50.50.20.30.6
Poor0.10.10.10.500.50.70.10
Resource AvailabilityGood0.40.250.40.10.60.400.50
Average0.60.250.60.20.30.60.50.50.5
Poor00.500.70.100.500.5
Table 8. Case processing summary for t-test.
Cases
ValidMissingTotal
NPercentNPercentNPercent
Decision-tool-Generated Utilities50100.0%00.0%50100.0%
Expert-Provided Utilities50100.0%00.0%50100.0%
Table 9. Descriptive statistics of the t-test.
StatisticStd. Error
Decision-tool-Generated UtilitiesMean0.71900.02374
95% Confidence Interval for MeanLower Bound0.6703
Upper Bound0.7677
5% Trimmed Mean0.7156
Median0.6946
Variance0.016
Std. Deviation0.12562
Minimum0.50
Maximum1.00
Range0.50
Interquartile Range0.13
Skewness0.2790.441
Kurtosis0.5610.858
Expert-Provided UtilitiesMean0.71680.02376
95% Confidence Interval for MeanLower Bound0.6680
Upper Bound0.7655
5% Trimmed Mean0.7131
Median0.7000
Variance0.016
Std. Deviation0.12573
Minimum0.50
Maximum1.00
Range0.50
Interquartile Range0.15
Skewness0.2580.441
Kurtosis0.5640.858
Table 10. Results of the paired two-sample t-test for means.
Decision-Tool-Generated UtilitiesExpert-Provided Utilities
Mean0.747646180.7464
Variance0.0283707050.028460245
Observations5050
Pearson Correlation0.995167299
Hypothesized Mean Difference0
Df49
t Stat0.531646386
P (T <= t) one-tail0.298686545
t Critical one-tail1.676550893
P (T <= t) two-tail0.597373089
t Critical two-tail2.009575199
Table 11. Confusion matrix structure [65].
Validation | Predicted Positive | Predicted Negative
Actual Positive | True Positive (TP) | False Negative (FN) (Type II Error)
Actual Negative | False Positive (FP) (Type I Error) | True Negative (TN)
Table 12. Confusion matrix for decision tool validation.
Decision Tool Predicted Alternatives
Expert-Provided Actual Alternatives A1A2A3A4A5A6A7A8A9
A16
A2 5
A3 5
A4 4
A5 6
A6 8 1
A7 3
A81 7
A9 1 3
Table 13. Four crucial measures for decision tool validation.
A1A2A3A4A5A6A7A8A9
TP655468373
FP100010010
TN355631716
FN000001011
Table 14. Pooled confusion matrix for decision tool validation.
Actual Value | Predicted Positive | Predicted Negative
Positive | True Positive (TP) = 47 | False Negative (FN) = 3
Negative | False Positive (FP) = 3 | True Negative (TN) = 397
Table 15. Overall model performance measures.
Performance Measure | Value
Precision | TP / (TP + FP) = 47 / (47 + 3) = 0.94
Negative Predicted Value | TN / (TN + FN) = 397 / (397 + 3) = 0.9925
Sensitivity | TP / (TP + FN) = 47 / (47 + 3) = 0.94
Specificity | TN / (TN + FP) = 397 / (397 + 3) = 0.9925
Accuracy | (TP + TN) / (TP + TN + FP + FN) = (47 + 397) / (47 + 397 + 3 + 3) = 0.9867
F1 Score | 2 × (0.94 × 0.94) / (0.94 + 0.94) = 0.94
Table 16. Area under the ROC curve for decision-tool-generated utilities.
Area | 0.734
Std. Error (under the nonparametric assumption) | 0.071
Asymptotic Significance | 0.178
Asymptotic 95% Confidence Interval | Lower Bound = 0.594; Upper Bound = 0.874
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
