Abstract
With the rapid growth of customer data in financial institutions, such as trusts, issues of data quality have become increasingly prominent. The main challenge lies in constructing an effective evaluation method that ensures accurate and efficient assessment of customer data quality when dealing with massive customer data. In this paper, we construct a data quality evaluation index system based on the analytic hierarchy process through a comprehensive investigation of existing research on data quality. Then, redundant features are filtered based on the Shapley value, and the multiple linear regression model is employed to adjust the weight of different indices. Finally, a case study of the customer and institution information of a trust institution is conducted. The results demonstrate that the utilization of completeness, accuracy, timeliness, consistency, uniqueness, and compliance to establish a quality evaluation index system proves instrumental in conducting extensive and in-depth research on data quality measurement dimensions. Additionally, the data quality evaluation approach based on multiple linear regression facilitates the batch scoring of data, and the incorporation of the Shapley value facilitates the elimination of invalid features. This enables the intelligent evaluation of large-scale data quality for financial data.
1. Introduction
Quality evaluation commonly pertains to the methodologies that organizations employ to attain strategic objectives, establish regulatory compliance, and monitor data integrity []. In the past, data collected from various transaction systems within companies or institutions were often regarded as a mere by-product of business operations, possessing limited value beyond the transactions themselves []. Today, as the complexity and volume of data continue to escalate, numerous companies and businesses are compelled to adapt their business models accordingly. Customers demand personalized products, while service products must be industrialized. These factors inevitably influence business processes and organizational strategies, and high-quality data are a prerequisite for meeting these changing business demands and achieving corporate agility objectives []. A single data management solution is insufficient to meet these demands; therefore, different approaches to data problems must be integrated [].
Financial institutions must ensure the completeness and accuracy of their data so that regulatory bodies can better understand how the institutions operate, predict risks, and take corresponding measures []. In the era of rapid big data advancement, financial institutions have been compelled to enhance their management, processing, and analysis of vast volumes of personal customer information, driven by the goals of enhancing business efficiency and strengthening risk management capabilities. Institutions increasingly utilize machine learning technologies to analyze customer behavior, predict market trends, and identify fraudulent behavior. These technologies, in turn, must be supported by large amounts of high-quality data; therefore, financial institutions must establish effective data governance and evaluation mechanisms to ensure data quality.
Mcgilvray [] introduced a methodical yet adaptable framework consisting of ten steps to address an organization’s business needs and data quality issues. Within this framework, data quality dimensions are employed to precisely define, measure, and effectively manage the quality of data. Omara et al. [] presented a comprehensive compilation of prevalent data quality dimensions, encompassing accuracy, completeness, consistency, and timeliness. Furthermore, they proposed a novel approach to gauge row completeness utilizing a data mining model built on neural networks. Peltier et al. [] measured data quality from a systems perspective with more dimensions and considered customer data quality to facilitate the development of personalized interactive marketing initiatives. Taleb et al. [] developed a scheme to evaluate data quality on large datasets using sampling strategies, adopting data quality measurement metrics tailored to specific data quality dimensions. Juddoo [] investigated a variety of data quality metrics, with a specific focus on the measurement and quantification of particular dimensions such as completeness and consistency. Recently, the analytic hierarchy process (AHP) has been used to integrate various data quality dimensions as well as expert preferences into a comprehensive data quality score [], and it has been applied to healthcare [], financial statements [], and cloud computing [].
Regarding data quality evaluation systems, most experts consider only qualitative or quantitative aspects, citing a single evaluation method such as fault tree analysis, the AHP, or gray relational analysis. Some scholars combine commonly used engineering evaluation methods to build models that integrate qualitative and quantitative approaches. Surveying this literature, we find that the selection of data quality indicators has not been unified and that most studies rely on traditional methods that lack innovation. This leads to the main contributions of this paper, listed below:
- We proposed a new system for constructing quality evaluation indicators by surveying the current status of data quality evaluation research. This system forms primary indicators according to completeness, accuracy, timeliness, consistency, uniqueness, and compliance. We used AHP to determine the weight among these indicators.
- We proposed a data quality evaluation method based on multiple linear regression (MLR) to objectively evaluate data quality. In addition, we employed the Shapley value for feature selection to reduce the complexity of the model and shorten the training time.
- By using customer-related table information from a specific financial institution as an example, we verified the feasibility and accuracy of the quality evaluation system and the quality evaluation model. Additionally, we found that the evaluation and processing of incremental customer form information could be automated.
The remainder of this paper is organized as follows: In Section 2, we formulate the primary indicators and adopt the AHP to determine their weights. In Section 3, we construct a quality evaluation model using the MLR algorithm and use the Shapley value to evaluate the contribution of each indicator to the scoring model, facilitating feature selection. In Section 4, we use the customer information system of a trust company as an example for empirical analysis. In Section 5, we summarize the results of this study and discuss its contributions.
2. Construction of Quality Evaluation Indicator System
The construction of the evaluation system should consider the following points []:
- Quality evaluation focuses on each dataset as the target. Every department is responsible for managing multiple datasets []. These datasets are typically represented as information source tables.
- Data usability is the fundamental prerequisite for effective data utilization. The data resource centralization department can contribute by identifying factors that impede the basic usability of the data. These factors include empty data items, irregular formats, incomplete data, and other similar issues (Table 1).
Table 1.
Problem situations and examples of some data.
- Data flexibility is also a crucial consideration. When evaluating the quality of customer data, it is essential to account for the data's unique characteristics while maintaining a certain level of flexibility. For instance, empty fields may simply be the result of non-required fields [].
Because the data span multiple systems and are vast in volume, it is crucial to strike a balance between the quality and efficiency of the evaluation process []. To address this, a quality evaluation index framework for trust data should be established, consisting of six dimensions: completeness, accuracy, timeliness, consistency, uniqueness, and compliance.
2.1. Basic Attributes of Quality Evaluation
A completeness evaluation encompasses three secondary indicators: record completeness, attribute completeness, and value completeness. Two types of fields warrant particular attention:
- Required fields: These fields must be filled in accordance with the business rules of each department or according to the specifications outlined in the data dictionary.
- Key fields: These fields serve as unique primary keys or facilitate associations with related data tables. Examples of such key fields include the identity card number, the unified social credit identifier, and other identification numbers or codes that act as primary keys in related associations.
Data accuracy encompasses three aspects: logical accuracy, value accuracy, and conceptual accuracy (Table 2). For example, if the identity card number of a record is inaccurate or does not correspond to the name, that record is counted as one piece of erroneous data.
Table 2.
Some types of data accuracy and related descriptions.
The timeliness requirement differs depending on the update frequencies of various information resource tables. As a result, rules can be established differently [].
A consistency evaluation measures the level of data association, assessing the logical accuracy and completeness of the same information subject across different datasets [].
A uniqueness evaluation gauges the extent of data duplication. For a dataset consisting of four fields (customer number, customer name, customer status, and gender), if N data records share the same values across all four fields, they produce N − 1 duplicate records. As presented in Table 3, we identified a total of two duplicate data records.
Table 3.
Example of duplicate data records.
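As a minimal illustration of this counting rule, the sketch below tallies duplicates with pandas; the field names and records are hypothetical:

```python
import pandas as pd

# Hypothetical customer records; two rows fully duplicate an earlier one.
df = pd.DataFrame({
    "customer_no":   ["C001", "C002", "C002", "C003", "C002"],
    "customer_name": ["Li",   "Wang", "Wang", "Zhao", "Wang"],
    "status":        ["A",    "A",    "A",    "B",    "A"],
    "gender":        ["F",    "M",    "M",    "F",    "M"],
})

# duplicated() marks every repeat after the first occurrence,
# so N identical rows contribute N - 1 duplicates, as in the text.
n_duplicates = df.duplicated().sum()
uniqueness_score = 100 * (1 - n_duplicates / len(df))
print(n_duplicates, round(uniqueness_score, 2))  # 2 duplicates
```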
Compliance ensures that the data types and precision of each data field fall within the specified range. For those fields governed by industry unified standards, adherence to the relevant standards is required. In the absence of unified standards, compliance is based on the data dictionary or relevant regulations provided by each department.
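As an illustrative sketch of such compliance checks, field-level validation can be expressed with regular expressions; the specific format rules below (18-character identity numbers with an embedded birth date, 11-digit mobile numbers) are assumptions for illustration rather than an institution's actual data dictionary:

```python
import re
from datetime import datetime

ID_PATTERN = re.compile(r"^\d{17}[\dX]$")  # 18 chars: 17 digits + digit or 'X'

def id_number_compliant(id_no: str) -> bool:
    """Length/format check plus embedded birth-date validation (digits 7-14)."""
    if not ID_PATTERN.match(id_no):
        return False
    try:
        datetime.strptime(id_no[6:14], "%Y%m%d")  # rejects invalid month/day
    except ValueError:
        return False
    return True

# Hypothetical mobile-number rule: 11 digits starting with 1.
MOBILE_PATTERN = re.compile(r"^1\d{10}$")

print(id_number_compliant("11010519491231002X"))  # True
print(id_number_compliant("110105194913310020"))  # False: month 13 is invalid
```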
2.2. Determination of Indicator Weights Based on the AHP
The AHP is characterized by its ability to convert judgments into pairwise comparisons of the importance of several factors []. The complete process involves decomposition, judgment, synthesis, and other steps, effectively limiting the influence of decision makers' subjective judgment.
2.2.1. Single Scoring
To score a dataset from an information resource table, the scores of each secondary indicator can be computed using the scoring rules above. The total score of the evaluated dataset can then be calculated by taking the weighted average:

$$Y = \sum_{j=1}^{n} w_j x_j,$$

where $Y$ is the total score of the evaluated dataset, $w_j$ is the weight of the $j$-th secondary indicator, $x_j$ is its score, and $n$ is the number of secondary indicators.
In the evaluation of a data set from an information resource table, the scoring range for each indicator ranges from 0 to 100 points. The scoring rules and formula are detailed in Table 4.
Table 4.
Scoring rules and scoring formulas for data quality evaluation indicators.
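A minimal sketch of this weighted-average scoring, with hypothetical secondary indicator scores and AHP weights:

```python
# Hypothetical secondary-indicator scores (0-100) and AHP weights (sum to 1).
scores  = {"record_completeness": 90.0, "attribute_completeness": 85.0,
           "value_completeness": 78.0}
weights = {"record_completeness": 0.40, "attribute_completeness": 0.35,
           "value_completeness": 0.25}

# Y = sum_j w_j * x_j, as in the single-scoring formula above.
Y = sum(weights[k] * scores[k] for k in scores)
print(round(Y, 2))  # 85.25
```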
2.2.2. Comprehensive Scoring
Trusts continuously update the collected data. Therefore, if it is necessary to showcase the overall quality of a data set from a specific information resource table over a certain time frame, the numerous evaluation outcomes within this period can be comprehensively processed:
$$\bar{Y}_i = \frac{1}{K_i} \sum_{k=1}^{K_i} Y_{ik},$$

where $Y_{ik}$ represents the $k$-th single data quality evaluation score of information resource table $i$, $k$ is the order of evaluation, $K_i$ represents the total number of times the data table is evaluated during the evaluation period, and $i$ is the number of each information resource table.
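A small sketch of this comprehensive processing, assuming the single-evaluation scores are stored in a table (all values hypothetical):

```python
import pandas as pd

# Hypothetical single-evaluation scores: table i evaluated K_i times.
evals = pd.DataFrame({
    "table_id": ["T1", "T1", "T1", "T2", "T2"],
    "score":    [82.4, 85.1, 80.9, 91.0, 89.5],
})

# Comprehensive score = mean of the K_i single scores per table.
print(evals.groupby("table_id")["score"].mean().round(2))
```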
3. Construction of Quality Evaluation Model Based on Multiple Linear Regression
In this study, we present a machine learning-based data quality evaluation method. The method revolves around the creation of a data quality evaluation indicator system. The key steps of this approach are as follows:
- Select relevant customer information data from the customer information database, and preprocess the data based on the proposed label generation algorithm.
- Extract quality evaluation indicators and associated data, constructing a quality evaluation indicator system suitable for machine learning models.
- Train and test the machine learning model to complete its construction.
- Evaluate the results using performance indicators.
3.1. Data Acquisition and Indicator Extraction
As shown in Figure 1, we extracted the table under evaluation from the ECIF (Enterprise Customer Information Facility) database using the data quality evaluation indicators. To transform the actual customer data into a trainable dataset for the machine learning model, we introduced a label generation algorithm built on the constructed data quality evaluation indicator system. We then used the overall single-table quality evaluation score, which represents the data quality, as the output indicator, and the scores of the six primary indicators as the input indicators for the machine learning-based quality evaluation model.
Figure 1.
Quality evaluation model based on multiple linear regression.
3.2. Label Generation Algorithm
Suppose there are $p$ primary indicators ($i = 1, 2, \dots, p$), and each primary indicator $i$ has $q_i$ secondary indicators. A given data table is scored based on the data quality evaluation system established above. Specifically, Equation (3) is utilized to generate the corresponding label for this data table:

$$Y = \sum_{i=1}^{p} w_i \sum_{j=1}^{q_i} v_{ij} x_{ij}, \quad (3)$$

where $w_i$ represents the weight corresponding to primary indicator $i$, $v_{ij}$ represents the weight corresponding to its $j$-th secondary indicator, and $x_{ij}$ is the corresponding secondary indicator score. In the end, each table is associated with a quality evaluation score, which serves as the label for that table.
On the basis of the existing large data table, we randomly sampled different numbers of data entries and generated data subtables. We then applied the label generation algorithm to produce labels for each table. Next, we compiled a trainable assessment table based on each table’s primary indicators and their respective final quality evaluation scores. This process led to the formation of a dataset that includes n samples.
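The sketch below illustrates how such a trainable dataset might be assembled; the weights, score distribution, and column names are hypothetical stand-ins for the actual indicator system:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

def label_table(primary_scores: np.ndarray, primary_weights: np.ndarray) -> float:
    """Equation (3) collapsed to the primary level: weighted sum -> table label."""
    return float(primary_scores @ primary_weights)

# Hypothetical: p = 6 primary indicators with AHP weights summing to 1.
w = np.array([0.25, 0.20, 0.10, 0.15, 0.10, 0.20])

rows = []
for _ in range(500):                      # 500 randomly sampled subtables
    x = rng.uniform(50, 100, size=6)      # stand-in indicator scores per subtable
    rows.append(list(x) + [label_table(x, w)])

cols = ["completeness", "accuracy", "timeliness",
        "consistency", "uniqueness", "compliance", "quality_score"]
dataset = pd.DataFrame(rows, columns=cols)  # n samples: 6 inputs + 1 label
```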
To examine the relationships among the independent variables and between the independent and dependent variables, we conducted a Pearson correlation analysis of the data. Once the correlations among the indicators were determined, we could identify and remove strongly correlated indicators to streamline the model and reduce the training time.
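Continuing the hypothetical `dataset` from the previous sketch, the correlation analysis and a simple threshold-based removal of strongly correlated inputs might look as follows (the 0.9 threshold and grayscale colormap are illustrative assumptions):

```python
import matplotlib.pyplot as plt

# Pearson correlation among indicator columns and with the label.
corr = dataset.corr(method="pearson")

plt.imshow(corr, cmap="gray", vmin=-1, vmax=1)   # heatmap as in Figure 4
plt.xticks(range(len(corr)), corr.columns, rotation=90)
plt.yticks(range(len(corr)), corr.columns)
plt.colorbar()
plt.tight_layout()
plt.show()

# Drop one of any pair of highly correlated *input* indicators (threshold 0.9).
inputs = corr.columns[:-1]                        # exclude the label column
to_drop = {b for i, a in enumerate(inputs) for b in inputs[i + 1:]
           if abs(corr.loc[a, b]) > 0.9}
```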
3.3. Feature Selection Based on Shapley Value
Feature selection and hyperparameter tuning are two important steps in any machine learning task. They usually improve performance but are expensive in time: the more parameter combinations, or the more exhaustive the selection process, the longer it takes. The usual approach is to combine tuning and feature selection. For feature selection, we adopt a ranking-based selection algorithm, which iteratively removes the least important features and retrains the model until convergence is reached. SHAP supports this ranking: instead of using the default variable importances generated by gradient boosting, we rank features by their Shapley values and retain the best ones, e.g., those with the highest values.
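The following is a sketch of such ranking-based selection with the shap library; the gradient boosting model, cross-validation setup, and stopping rule are illustrative choices rather than the paper's exact procedure:

```python
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import cross_val_score

def shap_rank_select(X, y, min_features=3):
    """Iteratively drop the feature with the lowest mean |SHAP| value
    until cross-validated performance starts to degrade."""
    features = list(X.columns)
    best_score = -np.inf
    while len(features) > min_features:
        model = GradientBoostingRegressor().fit(X[features], y)
        score = cross_val_score(model, X[features], y, cv=5).mean()
        if score < best_score:          # convergence: performance dropped
            break
        best_score = score
        shap_values = shap.TreeExplainer(model).shap_values(X[features])
        importance = np.abs(shap_values).mean(axis=0)   # mean |SHAP| per feature
        features.pop(int(np.argmin(importance)))        # remove weakest feature
    return features
```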
The Shapley value originates from cooperative game theory and is a distribution method based on contribution. It requires that profits and costs be fairly shared among each agent in an alliance []. The Shapley value has numerous applications in machine learning, including data pricing, federated learning, interpretability, reinforcement learning, and feature selection [].
The Shapley value of member $i$ in an alliance of $n$ members is given by []:

$$\phi_i(v) = \sum_{S \subseteq N \setminus \{i\}} \frac{|S|!\,(n - |S| - 1)!}{n!} \left[ v(S \cup \{i\}) - v(S) \right],$$

where $\frac{|S|!\,(n - |S| - 1)!}{n!}$ represents the probability of member $i$ joining the alliance $S$. The denominator, $n!$, is the number of permutations of the $n$ members, and the numerator, $|S|!\,(n - |S| - 1)!$, is the number of permutations in which the first $|S|$ members entering form the alliance $S$. Upon joining the alliance, member $i$ acquires a certain value, denoted as $v(S \cup \{i\}) - v(S)$; this marginal value represents the contribution that member $i$ makes to the alliance $S$.
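For intuition, the following brute-force sketch computes exact Shapley values by enumerating all join orders, which is feasible only for small $n$; the characteristic function is a toy example:

```python
from itertools import permutations
from math import factorial

def shapley_values(members, value):
    """Exact Shapley values: average each member's marginal contribution
    over all n! orders in which members can join the alliance."""
    phi = {m: 0.0 for m in members}
    for order in permutations(members):
        coalition = set()
        for m in order:
            phi[m] += value(coalition | {m}) - value(coalition)
            coalition.add(m)
    n_fact = factorial(len(members))
    return {m: total / n_fact for m, total in phi.items()}

# Toy characteristic function v(S): hypothetical alliance worths.
worth = {frozenset(): 0, frozenset({"A"}): 10,
         frozenset({"B"}): 20, frozenset({"A", "B"}): 40}
v = lambda s: worth[frozenset(s)]
print(shapley_values(["A", "B"], v))  # {'A': 15.0, 'B': 25.0}
```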
In numerous scenarios involving data mining and machine learning, we frequently encounter challenges, such as the high dimensionality of features [] and unknown relationships. In this study, we adopted the Powershap algorithm to select and eliminate indicators [].
The following steps outline the automated feature selection methodology implemented through Powershap, which builds on the genetic algorithm (a usage sketch follows the list):
- Population initialization: A set of feature subsets is randomly produced, marking the formation of the initial population.
- Fitness evaluation: A fitness function is implemented to assess the aptness of each entity, employing metrics like accuracy, F1 score, and so forth.
- Selection process: Individuals exhibiting higher fitness are chosen to serve as the base for the next-generation population as determined by the fitness function assessment results.
- Crossover procedure: Using a crossover operation, select individuals with superior fitness are randomly amalgamated to generate new offspring.
- Mutation procedure: To enhance the population’s diversity, random alterations are introduced to the new offspring through a mutation operation.
- Iteration: Steps 2–5 are reiterated until the predetermined stop conditions are satisfied.
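In practice, the open-source powershap package exposes this selection through a scikit-learn-style selector. A minimal usage sketch, assuming the fit/transform interface from the package documentation and reusing the hypothetical `dataset` from the Section 3.2 sketch (the binarized label is an illustrative choice):

```python
from powershap import PowerShap
from sklearn.linear_model import LogisticRegressionCV

X = dataset.drop(columns="quality_score")
# Binarized label for the classification-based selector (illustrative).
y = (dataset["quality_score"] > dataset["quality_score"].median()).astype(int)

selector = PowerShap(model=LogisticRegressionCV(max_iter=1000))
selector.fit(X, y)                  # ranks features via their Shapley values
X_selected = selector.transform(X)  # keeps only the informative indicators
```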
3.4. Regression Model Building
Linear regression is a linear model that assumes a linear relationship between the input variables and a single output variable. Specifically, using the linear regression model, the output variable $y$ can be calculated from a linear combination of a set of input variables $x$, that is, $y = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \cdots + \beta_p x_p$. If there are two or more independent variables, such linear regression analysis is called multiple linear regression (MLR). The MLR model is a relatively simple regression model in machine learning. After using the Shapley value to eliminate model features, the weights calculated based on the AHP are no longer accurate. Therefore, we use MLR to recalculate the weights.
The MLR model assumes that, if the secondary indicators are reasonably set, the quality of the data table can be considered only along the dimensions of the primary indicators []. If the primary indicator values and the quality of the data table are related linearly, a multivariate regression model can be established []: $y = \beta_0 + \beta_1 x_1 + \cdots + \beta_p x_p + \varepsilon$. The most suitable coefficients $\beta$ can be found by solving the MLR problem. We employed gradient descent to determine the optimal parameters for the indicators after eliminating irrelevant features from the model.
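As a compact sketch of this fitting procedure, the following code fits an MLR model by batch gradient descent on synthetic data (the learning rate, epoch count, and coefficient values are illustrative assumptions):

```python
import numpy as np

def fit_mlr_gd(X, y, lr=0.1, epochs=20_000):
    """Multiple linear regression fitted by batch gradient descent on MSE."""
    n, p = X.shape
    Xb = np.hstack([np.ones((n, 1)), X])          # prepend intercept column
    beta = np.zeros(p + 1)
    for _ in range(epochs):
        grad = (2 / n) * Xb.T @ (Xb @ beta - y)   # gradient of the MSE loss
        beta -= lr * grad
    return beta                                    # [b0, b1, ..., bp]

# Synthetic check: recover known coefficients from noisy indicator scores.
rng = np.random.default_rng(1)
X = rng.uniform(0, 1, size=(200, 3))
y = 5 + X @ np.array([0.5, 0.3, 0.2]) + rng.normal(0, 0.01, size=200)
print(np.round(fit_mlr_gd(X, y), 2))  # ~[5.0, 0.5, 0.3, 0.2]
```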
4. Results
Financial institutions, such as trusts, have a large number of customers. Analyzing the current status of customers is extremely important for enhancing the accuracy of customer marketing and improving the user experience of customer services []. This section takes the personal customer table and institutional customer table in the customer information integration system of a trust institution as examples to illustrate the practical application of the data quality evaluation model.
4.1. Quality Evaluation Indicator System
By following the AHP procedure with the primary and secondary indicators selected in Figure 2, the decision makers indicate a preference or priority for each decision alternative in terms of how it contributes to each criterion, as shown in Table 5.
Figure 2.
Dimensions of quality evaluation system.
Table 5.
Average random consistency (RI).
Then, the following can be performed manually or automatically by the AHP software Expert Choice:
- Synthesizing the pair-wise comparison matrix (example: Table 5).
- Calculating the priority vector for a criterion such as experience (example: Table 5).
- Calculating the consistency ratio.
- Calculating $\lambda_{\max}$, the largest eigenvalue of the pair-wise comparison matrix.
- Calculating the consistency index, $CI = (\lambda_{\max} - n)/(n - 1)$.
- Selecting the appropriate value of the random consistency index, $RI$, from Table 5.
- Checking the consistency of the pair-wise comparison matrix to determine whether the decision maker's comparisons were consistent.
Here, $w$ is the eigenvector corresponding to the largest eigenvalue $\lambda_{\max}$.
Now, we find the consistency index as $CI = (\lambda_{\max} - n)/(n - 1)$.
Selecting the appropriate random consistency index for a matrix size of five from Table 5 gives the corresponding $RI$ value (1.12 in Saaty's standard table). We then calculate the consistency ratio as $CR = CI/RI$.
As the value of $CR$ is less than 0.1, the judgments are acceptable. Similarly, the pair-wise comparison matrices and priority vectors for the remaining criteria can be found, as shown in Table 6, Table 7, Table 8 and Table 9, respectively; a worked numerical sketch of this consistency check follows the tables.
Table 6.
Quality evaluation system evaluation matrix.
Table 7.
Integrity assessment matrix.
Table 8.
Accuracy evaluation matrix.
Table 9.
Weights of quality evaluation indicators.
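For concreteness, the following is a worked sketch of the eigenvector weights and consistency check in NumPy. The 5 × 5 pair-wise comparison matrix is hypothetical, not the paper's actual judgment matrix; $RI = 1.12$ is Saaty's standard random index for $n = 5$:

```python
import numpy as np

# Hypothetical 5x5 pairwise comparison matrix (Saaty's 1-9 scale, reciprocal).
A = np.array([
    [1,   3,   5,   3,   7],
    [1/3, 1,   3,   1,   5],
    [1/5, 1/3, 1,   1/3, 3],
    [1/3, 1,   3,   1,   5],
    [1/7, 1/5, 1/3, 1/5, 1],
], dtype=float)

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
lam_max = eigvals.real[k]          # largest eigenvalue
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                       # priority vector (indicator weights)

n = A.shape[0]
CI = (lam_max - n) / (n - 1)       # consistency index
RI = 1.12                          # Saaty's random index for n = 5
CR = CI / RI                       # consistency ratio; accept if < 0.1
print(np.round(w, 3), round(lam_max, 3), round(CR, 3))
```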
Regarding completeness, out of 264,762 data items (14,709 records with 18 attributes each), 67,839 items had null values. This accounted for approximately 25.62% of the data, resulting in a completeness score of 81.70.
Concerning accuracy, among the 14,709 data records, no data were updated or created later than the current time, but 5191 records had expired certificates. Roughly 35.29% of the records exhibited accuracy issues, resulting in an accuracy score of 91.58.
In terms of timeliness, we did not identify any problems within the 14,709 data records analyzed, warranting a perfect timeliness score of 100.
In terms of consistency, out of the 14,709 data records, 13,869 records belonging to the same information subject within the dataset did not meet the criteria for logical accuracy. This suggested that approximately 94.29% of the data exhibited consistency issues, resulting in a consistency score of 5.71.
Regarding uniqueness, we did not find any instances of noncompliant record uniqueness among the 14,709 data records, resulting in a uniqueness score of 100.
In terms of compliance, among the 14,709 data records, 169 records did not conform to the requirement of 18-digit identity card numbers. Further scrutiny of the birth year, month, day, and last digit revealed no records violating the year or month requirements; however, 1380 records did not comply with the day requirement. Additionally, for contact information, 4908 records had noncompliant mobile phone numbers. Overall, the number of noncompliant data items reached 6457, accounting for approximately 21.95% of the total data items. Consequently, the compliance score was calculated as 58.43.
Considering the six dimensions of data quality and the data quality evaluation model, the individual quality evaluation score for the consignment customer form was determined to be 82.39 as shown in Table 10 and Figure 3.
Table 10.
Individual customer completeness statistical results.
Figure 3.
Integrity statistics results.
4.2. Multiple Linear Regression Quality Evaluation Model
We created the evaluation dataset using the label generation algorithm and the quality evaluation indicator system. The descriptive statistical results of various indicators of the trust agency’s individual customer table and institutional customer table are shown in Table 11 and Table 12.
Table 11.
Descriptive statistics of indicators in the personal customer table.
Table 12.
Descriptive statistical table of indicators in the institutional client table.
To examine the presence of correlations among the independent variables and between the independent and dependent variables, we performed a Pearson correlation analysis of the data. The results were visualized through a heatmap, as depicted in Figure 4, showcasing the correlations between the variables. In the heatmap, black represents a negative correlation, white represents a positive correlation, and the diagonal represents the correlation of each variable with itself, which is always 1.
Figure 4.
Heat map of correlation between variables.
We observed that the correlation between the various indicator variables was not notably high, suggesting their independence from each other and the absence of multicollinearity. Moreover, the correlation between the independent variables and the dependent variable was quite strong. Among these correlations, the compliance_score exhibited the highest correlation, with a coefficient of 0.93. This result indicated that compliance, as a first-layer indicator, exerted the most significant influence on the final quality score. Additionally, the integrity_score and accuracy_score also demonstrated notable correlations with the quality score, with coefficients of 0.64 and 0.59, respectively.
The modeling approach for the MLR model in this study had the following steps: (1) the indicator data and quality score data were normalized; (2) the dataset was divided into a training set (70% of the data) and a testing set (the remaining 30%); (3) the MLR model was fitted using the gradient descent algorithm to derive the optimal weight values. The experimental results are shown in Figure 5 and Figure 6, and a sketch of these steps follows.
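A sketch of these three steps with scikit-learn, reusing the hypothetical `dataset` from the Section 3.2 sketch (SGDRegressor stands in for the gradient descent solver; hyperparameters are illustrative):

```python
from sklearn.linear_model import SGDRegressor
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler

X = dataset.drop(columns="quality_score").values
y = dataset["quality_score"].values

X = MinMaxScaler().fit_transform(X)                          # (1) normalize
X_tr, X_te, y_tr, y_te = train_test_split(                   # (2) 70/30 split
    X, y, test_size=0.3, random_state=42)
reg = SGDRegressor(max_iter=5000, tol=1e-6).fit(X_tr, y_tr)  # (3) GD fit
print(reg.coef_, reg.intercept_, reg.score(X_te, y_te))      # weights and R^2
```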
Figure 5.
True value and predicted value of quality evaluation score.
Figure 6.
Test set loss function graph.
Figure 5 shows the real and predicted values on the test set, where blue denotes the predicted values and red the real values. It is evident from the figure that a discernible variance exists between the predicted and actual values. The magnitude of this difference is relatively small, however, which suggests a high level of accuracy in the prediction model.
In real-world applications, each sample represents a distinct customer table. Table 4, Table 5, Table 6 and Table 7 demonstrate the evaluation of the consignment personal customer information table and the general personal customer information table from the trust institution using two different methods: the conventional scoring technique and a machine learning-based method. The time invested and the quality assessment scores obtained from the AHP are approximate representations of the outcomes achieved by the manual scoring method currently employed by the trust.
According to the incremental experimental results, we observed that the time required by both methods increased as the sample size grew. When the number of customer tables doubled, the time spent on the traditional method also doubled, whereas the time expenditure of the machine learning-based approach increased only slightly. Moreover, the quality assessment scores generated through the MLR method were relatively higher than those of the traditional quality evaluation techniques. By employing Powershap for feature selection, we excluded redundant data, thus reducing the time expenditure and improving score precision (Table 13).
Table 13.
The results for samples of different orders of magnitude were calculated based on the AHP and the method proposed in this paper.
4.3. Feature Selection Based on Shapley Value
We initially split the data into a 7:3 ratio for training and testing purposes. Subsequently, we categorized the data into six classes, representing the six primary indicators. Then, we constructed a Powershap selector using the chosen classification model, which, in this study, was the cross-validated logistic regression (logistic regression CV) algorithm.
When employing the logistic regression CV algorithm model for 200 iterations, we observed that significant features accounted for five-sixths of the total (Figure 7). By considering impact scores and other statistical results, the timeliness factor could be excluded from the primary indicators. This exclusion not only reduced the required effort, but also improved the precision in determining the weight values for subsequent analysis.
Figure 7.
The importance of each feature.
5. Conclusions
This article organizes and analyzes the existing quality evaluation indicators, and comprehensively uses completeness, accuracy, timeliness, consistency, uniqueness and compliance to establish a quality evaluation indicator system for trusts and other financial institutions. Preliminary weight determination is made based on AHP. Since there are many indicators, the Shapley value is introduced for feature selection. A multiple linear regression model is used to determine new weights for the screened indicators. The model is verified using customer data and institutional data from a trust company’s customer information integration system.
Through data quality scoring, trust data can be graded for quality, thereby improving data quality and promoting the transfer and transformation of data results from trusts and other financial institutions. Using machine learning regression algorithms to evaluate data quality has the advantages of quantifiable evaluation results, more intuitive results, and higher accuracy. In addition, in the face of massive customer data, the intelligent data quality evaluation model based on machine learning is faster and more convenient, and the evaluation results are more accurate and objective.
Therefore, the data quality evaluation model based on the regression algorithm model can not only be used to evaluate the quality of trust customer data but can also be used in other industries and government departments to evaluate the quality of other types of data, including their own, as well as the benefits and value that the data can bring.
Author Contributions
Conceptualization, M.L. and Y.Y.; methodology, M.L.; software, M.L.; validation, M.L., J.L. and Y.Y.; formal analysis, M.L.; investigation, J.L.; resources, Y.Y.; data curation, M.L.; writing—original draft preparation, M.L.; writing—review and editing, J.L.; visualization, M.L.; supervision, J.L.; project administration, J.L. All authors have read and agreed to the published version of the manuscript.
Funding
This research received no external funding.
Data Availability Statement
Data sharing not applicable.
Conflicts of Interest
The authors declare no conflicts of interest.
References
- Weber, K.; Otto, B.; Osterle, H. One Size Does Not Fit All—A Contingency Approach to Data Governance. J. Data Inf. Qual. 2009, 1, 4. [Google Scholar] [CrossRef]
- Begg, C.; Caira, T. Exploring the SME quandary: Data governance in practise in the small to medium-sized enterprise sector. Electron. J. Inf. Syst. Eval. 2012, 15, 3–13. [Google Scholar]
- Newman, D.; Logan, D. Governance is an essential building block for enterprise information management. Gart. Res. Stamford 2006, 13, 4. [Google Scholar]
- Niemi, E. Designing a data governance framework. In Proceedings of the IRIS Conference, Turku, Finland, 16–19 August 2011. [Google Scholar]
- Karkošková, S. Data governance model to enhance data quality in financial institutions. Inf. Syst. Manag. 2023, 40, 90–110. [Google Scholar] [CrossRef]
- Mcgilvray, D. Executing Data Quality Projects Ten Steps to Quality Data and Trusted Information; Elsevier: Amsterdam, The Netherlands, 2008; Volume 12, pp. 2–4. [Google Scholar]
- Omara, E.; Said, T.E.; Mousa, M. Employing neural networks for assessment of data quality with emphasis on data completeness. Int. J. Artif. Intell. Mach. Learn. 2011, 11, 21. [Google Scholar]
- Peltier, J.W.; Zahay, D.; Lehmann, D.R. Organizational learning and CRM success: A model for linking organizational practices, customer data quality, and performance. J. Interact. Mark. 2013, 27, 1–13. [Google Scholar] [CrossRef]
- Taleb, I.; Kassabi, H.; Serhani, M.A.; Dssouli, R.; Bouhaddioui, C. Big Data Quality: A Quality Dimensions Evaluation. Ubiquitous Intelligence and Computing. In Proceedings of the Advanced and Trusted Computing, Scalable Computing and Communications, Cloud and Big Data Computing, Internet of People, and Smart World Congress, Toulouse, France, 18–21 July 2016. [Google Scholar]
- Juddoo, S. Overview of data quality challenges in the context of Big Data. In Proceedings of the 2015 International Conference on Computing, Communication and Security (ICCCS), Pointe aux Piments, Mauritius, 4–5 December 2015; pp. 1–9. [Google Scholar]
- Madhikermi, M.; Kubler, S.; Robert, J.; Buda, A.; Främling, K. Data quality assessment of maintenance reporting procedures. Expert Syst. Appl. 2016, 63, 145–164. [Google Scholar] [CrossRef]
- Mashoufi, M.; Ayatollahi, H.; Khorasani-Zavareh, D.; Talebi Azad Boni, T. Data quality in health care: Main concepts and assessment methodologies. Methods Inf. Med. 2023, 62, 5–18. [Google Scholar] [CrossRef]
- Uzoka, F.M. AHP-based system for strategic evaluation of financial information. Inf. Knowl. Syst. Manag. 2005, 5, 49–61. [Google Scholar]
- Khan, A.W.; Khan, M.U.; Khan, J.A.; Ahmad, A.; Khan, K.; Zamir, M.; Kim, W.; Ijaz, M.F. Analyzing and evaluating critical challenges and practices for software vendor organizations to secure big data on cloud computing: An AHP-based systematic approach. IEEE Access 2021, 9, 107309–107332. [Google Scholar] [CrossRef]
- Alam, M.K. A systematic qualitative case study: Questions, data collection, NVivo analysis and saturation. Qual. Res. Organ. Manag. Int. J. 2021, 16, 1–31. [Google Scholar] [CrossRef]
- Gomes, V.C.F.; Queiroz, G.R.; Ferreira, K.R. An overview of platforms for big earth observation data management and analysis. Remote Sens. 2020, 12, 1253. [Google Scholar] [CrossRef]
- Liu, Y.; Wang, Y.; Xue, F.; Chen, J. A hybrid approach for supplier selection based on quality management system evaluation and grey relational analysis. J. Intell. Fuzzy Syst. 2021, 41, 1149–1159. [Google Scholar]
- Malik, S.; Tahir, M.; Sardaraz, M.; Alourani, A. A resource utilization prediction model for cloud data centers using evolutionary algorithms and machine learning techniques. Appl. Sci. 2022, 12, 2160. [Google Scholar] [CrossRef]
- Titus, B.D.; Brown, K.; Helmisaari, H.S.; Vanguelova, E.; Stupak, I.; Evans, A.; Clarke, N.; Guidi, C.; Bruckman, V.J.; Varnagiryte-Kabasinskiene, I.; et al. Sustainable forest biomass: A review of current residue harvesting guidelines. Energy Sustain. Soc. 2021, 11, 10. [Google Scholar] [CrossRef]
- Hou, Y.; Biljecki, F. A comprehensive framework for evaluating the quality of street view imagery. Int. J. Appl. Earth Obs. Geoinf. 2022, 115, 103094. [Google Scholar] [CrossRef]
- Sun, H.; Chen, Z. Interval neutrosophic hesitant fuzzy AHP method based on combined weights. J. Intell. Fuzzy Syst. 2021, 41, 8015–8028. [Google Scholar]
- Wang, Y.; Liu, Z.; Cai, C.; Xue, L.; Ma, Y.; Shen, H.; Chen, X.; Liu, L. Research on the optimization method of integrated energy system operation with multi-subject game. Energy 2022, 21, 123305. [Google Scholar] [CrossRef]
- Liu, J.; Huang, J.; Zhou, Y.; Li, X.; Ji, S.; Xiong, H.; Dou, D. From distributed machine learning to federated learning: A survey. Knowl. Inf. Syst. 2022, 64, 885–917. [Google Scholar] [CrossRef]
- Chen, H.; Covert, I.C.; Lundberg, S.M.; Lee, S.I. Algorithms to estimate Shapley value feature attributions. Nat. Mach. Intell. 2023, 5, 590–601. [Google Scholar] [CrossRef]
- Liu, L.; Li, Z.; Yang, J. An emergency plan evaluation model based on combined DEA and TOPSIS methods. J. Clean. Prod. 2021, 315, 62. [Google Scholar]
- Kitiyodom, P.; Chindapa, P. Development of an emergency response plan assessment model for hazardous chemical accidents in Thailand. J. Loss Prev. Process. Ind. 2021, 70, 307. [Google Scholar]
- Wen, C.; Yang, J.; Gan, L.; Pan, Y. Big data driven Internet of Things for credit evaluation and early warning in finance. Future Gener. Comput. Syst. 2021, 34, 295–307. [Google Scholar] [CrossRef]
- Liapis, C.M.; Kotsiantis, S. Energy Balance Forecasting: An Extensive Multivariate Regression Models Comparison. In Proceedings of the 12th Hellenic Conference on Artificial Intelligence, Corfu, Greece, 7–9 September 2022; pp. 1–7. [Google Scholar]
- Tiwari, P. Bank affection and customer retention: An empirical investigation of customer trust, satisfaction, loyalty. SN Bus. Econ. 2022, 2, 54. [Google Scholar] [CrossRef]