Article

Identifying Suitability for Data Reduction in Imbalanced Time-Series Datasets

by Dominic Sanderson and Tatiana Kalganova *
Department of Electronic and Electrical Engineering, Brunel University of London, Uxbridge UB8 3PH, UK
* Author to whom correspondence should be addressed.
Submission received: 19 February 2025 / Revised: 16 April 2025 / Accepted: 18 April 2025 / Published: 8 May 2025

Abstract

Occupancy detection for large buildings enables optimised control of indoor systems based on occupant presence, reducing the energy costs of heating and cooling. Through machine learning models, occupancy detection is achieved with an accuracy of over 95%. However, to achieve this, large amounts of data are collected with little consideration of which of the collected data are most useful to the task. This paper demonstrates methods to identify if data may be removed from the imbalanced time-series training datasets to optimise the training process and model performance. It also describes how the calculation of the class density of a dataset may be used to identify if a dataset is suitable for data reduction, and how dataset fusion may be used to combine occupancy datasets. The results show that over 50% of a training dataset may be removed from imbalanced datasets while maintaining performance, reducing training time and energy cost by over 40%. This indicates that a data-centric approach to developing artificial intelligence applications is as important as selecting the best model.

1. Introduction

Indoor occupancy detection is an important task for operations such as energy saving, management, and security [1,2]. It allows for automated control of heating, ventilation, and air conditioning (HVAC) by only powering these systems when occupant presence is detected. A classic approach to occupancy detection for indoor environments is to deploy multiple homogeneous sensors, positioned around the interior for maximum coverage, and feed these data into an artificial intelligence (AI) model. These sensors collect thousands of datapoints each day, resulting in copious datasets which require processing, cleaning, and validating before they can be used to estimate occupancy. To coalesce these data from multiple sources, embedded or “edge” devices are ideal due to their small size and ease of use. Edge devices are low-power and low-compute devices, making them appropriate for energy-saving solutions, but this also makes them incapable of running the increasingly complex AI models that have become commonplace. Cloud computing allows these devices to send data to a more powerful machine for classification, but data transmission has a high energy cost, especially when large amounts of data must be sent multiple times a minute [3]. To alleviate this cost, it is beneficial to perform as much processing as possible, as early in the pipeline as possible, on the edge device, reducing the amount of data to be sent. In order to optimise these devices for use on such large collections of data, data analysis can give us an understanding of which data are more or less relevant to the task at hand, and therefore which data may be removed from the training dataset. This further minimises the amount of data transmitted and stored. Additionally, environmental datasets often suffer from noise and class imbalance, which can lead to bias in training [4,5]. By reducing the amount of data in the majority class, class imbalance may be alleviated and the cost of transmission reduced. This paper aims to show that, depending on the attributes of the data, derived from centroid distance and class density, some data may be removed and model performance maintained. It also aims to find the compatibility of low-compute Random Forest (RF) algorithms with data reduction to maximise data efficiency. The final aim is to perform dataset fusion on occupancy datasets to observe if previously obtained data may be used with newly collected data for a more robust classifier. The experimentation includes reduction strategies inspired by previous works on image data to identify the compatibility of these methods with lower-dimensional environmental sensor data.

1.1. Related Works

This paper discusses three topics: data reduction to find the most useful data in the time-series domain, machine learning for occupancy detection, and dataset fusion of time-series data.
With the rapid expansion of AI, the models and data necessary for its operation have grown substantially, posing sustainability challenges for users [6]. Also, the environmental effects of AI have become an increasing concern, leading to the trend towards green AI [7]. For these reasons, there is a strong desire to be able to train AI more cheaply. This includes both more efficient AI models and more efficient use of training data. Generally, an increase in data can enhance model performance; however, recent research has focused on identifying the most useful data to streamline collection efforts and model training [8]. This targeted approach reduces the resources required for training without sacrificing accuracy or performance.
Class imbalance can lead to bias, where the model favours a class’s samples due to its larger sample size. To counter this, imbalance can be relieved by adding artificial samples to the smaller (minority) class, or by removing samples from the larger (majority) class. These processes are called oversampling and undersampling, respectively [9]. Undersampling can be considered a data reduction technique, as it reduces the size of the dataset.
Data pruning [10] is a technique that reduces an entire dataset by assigning each datapoint a ‘parameter influence’ and then removing the datapoints with the lowest influence. In this way, not only is data utility optimised, but the computational load of training the model is also reduced. A similar alternative is to perform preliminary training for a few iterations on the complete dataset to first identify the most impactful features; after these features are identified, only they are used to train for the full duration [11]. Research by Toneva et al. [12] used the ‘forgetting score’ as a metric to group data, eliminating less forgotten, and therefore less useful, data before training on the refined dataset. This practice further minimises time and computational resources, increasing overall efficiency.
Dimensionality reduction is a popular technique which transforms data into a lower-dimensional space. This reduction aids in data visualisation and addresses challenges associated with the ‘curse of dimensionality’, which can impede data grouping and analysis [13]. Principal component analysis (PCA), for example, identifies and analyses principal components within a dataset, which may then be used as feature sets for training, replacing raw data [14]. By using only these essential components, models experience a reduction in complexity and operational costs without sacrificing key data insights or performance. Dimensionality reduction is popular for reducing image data [15], but when reducing time-series data it is important to consider the temporal nature of the data, and that feature order must be preserved. PCA has proven to be useful as a preprocessing step before applying machine learning, but it assumes linearity in the data. Kernel PCA (KPCA) resolves this by applying a kernel to the data, allowing it to handle non-linearity, but it has a high computational overhead [16].
Understanding the most useful data in a dataset is important, but it is also important to understand which data are most useful at different stages of training. Usually, training AI models involves several iterations of computation on the full dataset. However, recent work on dynamic data inclusion shows an alternative in which a model is first trained on a subset of the data, and data exposure is gradually increased over time [17]. After identifying which data are ‘easy’ to learn and which are ‘hard’, by means of parameter influence or the forgetting score, training may be performed with only the easier data for speed, and the model fine-tuned with the harder data once its parameters have improved.
Many of these methods produce a data subset which is used for partial training. While this is an improvement over training on a full dataset, there is a desire to identify the usefulness of data without having to use AI at all. The work in [18] aims to identify redundant data in a dataset, which may be removed before any model is trained. By calculating the ‘class density’, each class is given a score which may be used to quantify how much data may be removed before training.
Occupancy detection using AI holds significant potential for optimising HVAC systems to achieve greater energy efficiency and cost savings [19,20]. While methods such as camera-based systems can be employed to detect occupancy, these techniques are often seen as intrusive by users, as they may infringe on privacy and personal comfort [21,22]. To address these concerns, non-intrusive sensing methods are preferred. These alternatives rely on environmental indicators such as temperature, humidity, and CO2 levels within a space to infer human presence. Such non-intrusive techniques offer a viable solution for occupancy detection, achieving notable success while preserving user privacy [21,23].
AI models are most often trained on data specific to the domain they are to operate in. However, data from the real world may be combined to make them more heterogeneous and informative, increasing the reliability of the classification and the quality of the extracted information [24]. ‘Data fusion’ refers to the combination of multiple features into a single dataset, which is used in regression or classification [25]. In the context of occupancy detection, this is often referred to as sensor fusion, as multiple sensor readings are concatenated into one complete dataset. ‘Dataset fusion’ allows AI models to perform more reliably in new domains by introducing data from multiple sources. By combining datasets, information from other domains becomes available, giving improved performance when tested in those domains. This is a popular method in image classification tasks, as new images are easily resized to match the original data [26]. However, time-series datasets are seldom in the same format as each other, due to the domain-specific information they capture. This is true even for datasets that aim to capture the same information; in the case of occupancy detection, for example, different occupancy datasets capture different types of data, such as temperature and humidity, image and sound, or altitude and location. Attempting to fuse these datasets therefore raises issues such as mismatched data formats or missing data, which make training difficult, and data pre-processing must be performed in order to homogenise datasets prior to fusion. There has been less attention given to fusing complete occupancy datasets together to improve model robustness.

1.2. Research Gap and Contributions

There has been recent research on data reduction for time-series data, but not on occupancy data specifically. Moreover, there is a research gap in fusing occupancy datasets. To address these gaps, this paper describes data reduction techniques, based on previous work on image datasets, developed for time-series data. More specifically, these techniques sort time-series data by their distance from the data centroid, and reduction is performed based on this metric.
This paper aims to show the effects of data reduction in two aspects: reduction across all classes indiscriminately, and reduction of only the larger class. This comparison will show if undersampling is better than pure, random data reduction in the context of occupancy data. Also, this paper investigates the effects of varying amounts of data reduction to observe the optimal amount of reduction for best performance. This paper aims to then correlate these findings with class density, to observe if this technique is applicable to time-series data, where it has previously only been tested on high-dimensionality image datasets. Finally, this paper demonstrates the effectiveness of data reduction techniques on individual datasets and fused datasets to identify the suitability of data reduction on one-dimensional data.
We define the most useful data as the data that best describe a model, while the least useful data are those which do not provide any new information to the model. We define sufficient data as the amount of data required to successfully train an AI model. By identifying the sufficient amount of data, it is possible to optimise training by not spending resources training on data that do not contribute further to model performance.
Our contributions are as follows:
  • This paper introduces methods of data reduction for time-series data based on previously established techniques for 2D image data;
  • This paper shows, through experimentation, the benefits and drawbacks of varying amounts of data reduction on time-series data;
  • This paper compares data reduction on the larger of imbalanced classes and data reduction on the entire dataset to identify the effects of data undersampling in conjunction with our novel data reduction strategies;
  • This paper shows the correlation between class density and model performance after data reduction, to show how data reduction may be suitable for a given dataset;
  • This paper shows the suitability of dataset fusion for occupancy datasets, in combination with data reduction.

2. Materials and Methods

We aim to identify the most useful data in a dataset. We achieve this by calculating centroid distances, which consolidate all of a datapoint’s features (the sensor values) into one variable. This allows the data to be organised by a single metric, which our data reduction techniques use to select data for removal, giving us reduced datasets. These reduced datasets are used for training and testing, and the results are compared to show which data reduction method, and therefore which data, is most beneficial for training. However, as occupancy data commonly have an imbalanced number of datapoints in each class, we also propose a method of balancing data by removing data from only the larger class. This addresses both the issue of class imbalance and the abundance of less useful data.

2.1. Dataset Preparation and Fusion

Multiple open-source datasets have been developed for the purpose of occupancy detection with machine learning [27,28]. The dataset selected for this experiment is the HPDMobile dataset [29], due to it having multiple sites of homogeneous environmental sensor data, making it ideal for dataset fusion. It is an open-source dataset that collects data from six sites, with each site containing the same type of sensors and using the same capture methods. Each sensor device captures temperature, humidity, and the volatile organic compound (VOC) count. Each site has either four or five of these sensors, which equates to twelve or fifteen features for each site, respectively. Each datapoint has an associated ground truth of the number of occupants, but for the sake of simplicity, the experimentation aims to differentiate between ‘some’ or ‘no’ occupants.
The HPDMobile dataset is not originally formatted by site, but by sensor; each sensor’s data are stored in a separate file. To make a complete dataset, each file of sensor data is sorted by location, time, and date and aggregated into a table for each site. The result is a dataset CSV file for each of the six sites. Details of the six subsets of the HPDMobile dataset are shown in Table 1.
The HPDMobile dataset has on average 7% of data missing across all sites and time frames due to sensor synchronisation and duplicate dropping [29]. As gaps in data cause a loss of information, and the Random Forest algorithm does not support missing data, this issue is addressed by filling these gaps artificially through data imputation. K-Nearest-Neighbour averaging was selected as an appropriate imputation technique for this purpose [30]. Prior to any training, each site’s dataset undergoes imputation: for each missing datapoint in each dataset, the three most similar datapoints are averaged to give the missing value. The KNNImputer package from scikit-learn is used for this.
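For illustration, a minimal sketch of this imputation step is given below, assuming each site’s table is loaded into a pandas DataFrame (the column names are hypothetical):

import pandas as pd
from sklearn.impute import KNNImputer

# Hypothetical slice of one site's table: rows are timestamps,
# columns are sensor readings, NaN marks a missing value.
site = pd.DataFrame({
    "temp_s1":  [21.0, None, 21.4, 21.5, 21.3],
    "humid_s1": [40.1, 40.3, None, 40.8, 40.5],
    "voc_s1":   [120.0, 118.0, 119.0, None, 121.0],
})

# Fill each gap with the average of the 3 most similar datapoints,
# as described above.
imputer = KNNImputer(n_neighbors=3)
site_filled = pd.DataFrame(imputer.fit_transform(site), columns=site.columns)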
When attempting to fuse datasets, there may be issues of mismatched data formats or missing data, which make training difficult. Data pre-processing must be performed in order to homogenise datasets prior to their fusion. In the case of the HPDMobile dataset, there are six individual site datasets, some containing 5 sensors (a total of 15 features) and some containing 4 sensors (12 features). To fuse the individual site datasets together, and to make comparison between sites simpler, each dataset is homogenised into the same shape with the same number of features by removing the least important sensor, and therefore its 3 features. The least important sensor for the larger sites is identified by classifying each of the 5-sensor datasets and using the scikit-learn feature importances metric: the importances for each feature are collected and summed per sensor, and the sensor with the smallest sum is identified as the least important and omitted from the dataset before training. Sensor importances are highlighted in Table 1.
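A sketch of this sensor-removal step is given below, with synthetic data standing in for a five-sensor site; the column-to-sensor layout is an assumption made for illustration:

import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in for a 5-sensor site: 15 features (3 per sensor).
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 15))
y = rng.integers(0, 2, size=1000)
sensor_of_feature = np.repeat(np.arange(5), 3)  # columns 0-2 -> sensor 0, etc.

clf = RandomForestClassifier(n_estimators=100).fit(X, y)

# Sum the per-feature importances for each sensor; drop the weakest sensor
# and its 3 features to homogenise the site to 12 features.
sensor_importance = np.bincount(sensor_of_feature,
                                weights=clf.feature_importances_)
least_important = int(np.argmin(sensor_importance))
X_homogenised = X[:, sensor_of_feature != least_important]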
To create a fused dataset from the six individual site datasets, each site is first split into training and test sets. Then, each training set is combined into one large training dataset. The test sets are not combined but tested on individually, after the model is trained on the fused dataset. Figure 1 shows this procedure.
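A minimal sketch of this fusion procedure, assuming each homogenised site dataset is held in a pandas DataFrame (the function and variable names are hypothetical), might look as follows:

import pandas as pd
from sklearn.model_selection import train_test_split

def fuse_sites(site_frames, test_size=0.2, seed=0):
    """Split each site, pool the training splits, keep test sets per site.

    site_frames: dict mapping a site name to its DataFrame (hypothetical).
    """
    train_parts, test_sets = [], {}
    for name, df in site_frames.items():
        train_df, test_df = train_test_split(df, test_size=test_size,
                                             random_state=seed)
        train_parts.append(train_df)
        test_sets[name] = test_df          # evaluated individually later
    fused_train = pd.concat(train_parts, ignore_index=True)
    return fused_train, test_sets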

2.2. Centroid Distance Calculation

Five strategies for data reduction were developed for this paper, four of which use centroid distance as an identifier for reduction. Algorithm 1 shows the process of calculating each centroid distance.
Algorithm 1 Centroid distance calculation
 1: for each Dataset do
 2:     split the Data into 2 classes: Occupied and Not Occupied
 3:     for each Feature in each Class do
 4:         Feature Centroid = mean average of the Feature
 5:         for each Datapoint in the Dataset do
 6:             Feature Distances = difference between the Feature and the Feature Centroid
 7:             Centroid Distance = linear normalisation of all Feature Distances
 8:         end for
 9:     end for
10: end for
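A compact NumPy rendering of Algorithm 1 is sketched below. The exact form of the ‘linear normalisation’ is not specified above, so min-max scaling of the per-feature distances, averaged into a single value per datapoint, is assumed here:

import numpy as np

def centroid_distances(X, y):
    """Per-datapoint distance to its class centroid (Algorithm 1 sketch).

    X: (n_samples, n_features) array of sensor values; y: binary labels.
    'Linear normalisation' is interpreted as min-max scaling each feature's
    distances to [0, 1] and averaging -- an assumption, as the paper leaves
    the exact normalisation unspecified.
    """
    dist = np.empty(len(X))
    for cls in np.unique(y):
        mask = y == cls
        centroid = X[mask].mean(axis=0)          # per-feature class centroid
        feat_dist = np.abs(X[mask] - centroid)   # per-feature distances
        span = feat_dist.max(axis=0) - feat_dist.min(axis=0)
        span[span == 0] = 1.0                    # guard constant features
        scaled = (feat_dist - feat_dist.min(axis=0)) / span
        dist[mask] = scaled.mean(axis=1)         # one scalar per datapoint
    return dist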

2.3. Data Reduction Strategies

Once the data centroids are calculated, they are used to identify which data to remove from the training dataset. This paper contains two experiments for data reduction: data balancing through undersampling, and data reduction on both classes (pure data reduction). Data balancing aims to set the class distribution to 50:50, alleviating the effects of class imbalance while reducing the amount of data used, by removing data from the dominant class. For example, if a dataset has two classes with 100 and 300 datapoints, the data balancing methods aim to reduce the majority class from 300 datapoints to 100. In total, this would be a reduction of ((300 − 100)/(100 + 300)) = 50.0%. As this is quite a large reduction, the experimentation described below caps the amount of reduction at 5%, 10%, 25%, 50%, and 75%, as well as the maximum. This allows us to observe the effects of varying the amount of reduction. Pure data reduction aims to identify the effects of reducing data in both classes indiscriminately. For consistency between experiments, the same reduction caps are used for pure data reduction. As each site dataset has a different class balance, the maximum amount of data to remove to balance the classes differs across datasets: datasets with greater class imbalance require more reduction to balance the classes, and vice versa.
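As a sketch, the number of majority-class datapoints to remove under a given cap may be computed as below; reading the cap as a fraction of the whole dataset is an assumption, chosen to match the worked example above:

def majority_removal(n_minority, n_majority, cap):
    """Number of majority-class datapoints to drop under a reduction cap.

    The cap is read as a fraction of the whole dataset, matching the worked
    example above (balancing 100:300 removes 200 of 400 = 50%); whether the
    paper caps on the whole dataset or on the majority class alone is an
    assumption here.
    """
    to_balance = n_majority - n_minority          # removal giving 50:50
    capped = int(cap * (n_minority + n_majority)) # cap in datapoints
    return min(to_balance, capped)

# The paper's example: maximum (uncapped) balancing of 100 vs 300.
assert majority_removal(100, 300, cap=1.0) == 200   # 50% of the dataset
assert majority_removal(100, 300, cap=0.25) == 100  # capped at 25%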
Figure 2 shows the reduction methods developed. These are as follows:
  • Random exclusion—random datapoints are removed from the training set.
  • Central exclusion—datapoints with the smallest class centroid distance are removed.
  • Lateral exclusion—datapoints with the largest class centroid distance are removed.
  • Data even—datapoints from the largest density of class centroid distances are removed. This effectively cuts the top off the tallest columns in the centroid distribution plots.
  • Data squash—from each of 10 bins of centroid distances, a number of datapoints proportional to that bin’s density is removed. This effectively flattens all columns in the centroid distribution plots, proportionally to the size of each column.
Figure 2. Centroid distance-based reduction strategies. Original dataset distribution in blue; reduced dataset in orange. (a) Random reduction: Data are removed from the dataset at random. (b) Central reduction: Datapoints with the smallest centroid distance are removed. (c) Lateral reduction: Datapoints with the largest centroid distance are removed. (d) Even reduction: Datapoints from the largest density of centroid distances are removed. (e) Squash reduction: Datapoints are removed proportionally to the local density.
The central and lateral exclusion methods are based on similar work on 2D image data [18], and data even and data squash were developed for this paper.
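For illustration, the central, lateral, and squash strategies might be implemented as below, assuming the centroid distances from Algorithm 1 have already been computed for the class being reduced:

import numpy as np

rng = np.random.default_rng(0)

def central_or_lateral(distances, n_remove, strategy):
    """Central: drop the n_remove smallest distances; lateral: the largest."""
    order = np.argsort(distances)
    drop = order[:n_remove] if strategy == "central" else order[-n_remove:]
    keep = np.ones(len(distances), dtype=bool)
    keep[drop] = False
    return keep

def squash(distances, n_remove, n_bins=10):
    """Remove from each of n_bins bins in proportion to the bin's density."""
    edges = np.linspace(distances.min(), distances.max(), n_bins + 1)
    bin_of = np.clip(np.digitize(distances, edges) - 1, 0, n_bins - 1)
    keep = np.ones(len(distances), dtype=bool)
    for b in range(n_bins):
        members = np.flatnonzero(bin_of == b)
        quota = min(round(n_remove * len(members) / len(distances)),
                    len(members))
        keep[rng.choice(members, size=quota, replace=False)] = False
    return keep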

2.4. Class Density Calculation

Class density, or label density [18,31], is a measure of the aggregate similarity of datapoints within each class of a dataset. A low class density suggests the datapoints of that class are unique, while a high class density suggests many datapoints hold the same features. For the latter case, it stands to reason that similar datapoints may be removed to reduce a dataset without taking away key features from it.
Equation (1) shows how class density is calculated, where $d_i$ is the density of class $i$ among all $n$ classes, $c_i$ is the count of samples for that class, and $\sigma_{ik}$ is the standard deviation of the $k$-th of the $m$ Gaussians of the $m$-dimensional class $i$:

$$ d_i = n \cdot c_i \cdot \Big( \sum_{j=1}^{n} c_j \Big)^{-1} \cdot \Big( \frac{1}{m} \sum_{k=1}^{m} \sigma_{ik} \Big)^{-1} \qquad (1) $$
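A direct implementation of Equation (1) is sketched below; taking $\sigma_{ik}$ as the standard deviation of feature $k$ within class $i$ (one Gaussian per feature) is an interpretation of the description above:

import numpy as np

def class_densities(X, y):
    """Class density d_i of Equation (1) for each class.

    sigma_ik is taken here as the standard deviation of feature k within
    class i (one Gaussian per feature), following the description above.
    """
    classes = np.unique(y)
    n, total = len(classes), len(y)
    d = {}
    for cls in classes:
        Xc = X[y == cls]
        mean_sigma = Xc.std(axis=0).mean()   # (1/m) * sum_k sigma_ik
        d[cls] = n * len(Xc) / total / mean_sigma
    return d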
For each experiment in this paper, we calculate the class densities to identify the effect the different reduction strategies have on class density. With this knowledge, and the corresponding model performance, we can observe the importance of specific data to overall model performance. We may also use this information to identify if a dataset may be reduced before attempting any data reduction; findings by [18] suggest that by reducing data from the denser classes and converging each class’s density towards a value of 1, model performance may be maintained. We identify if this is true for the HPDMobile dataset and, by extension, other time-series datasets.

2.5. Metrics and Model

Accuracy is traditionally used to measure the performance of AI models but, especially in the case of unbalanced datasets, it is known to favour the majority class, a phenomenon known as the accuracy paradox [32]. To avoid this, model performance is also measured by the area under the receiver operating characteristic curve (AUC-ROC) [33]. This single value captures the trade-off between a model’s sensitivity and its specificity instead of considering these metrics individually, and is considered a more descriptive metric than accuracy for biased datasets in binary classification tasks. The experimental results in this paper show both accuracy and AUC-ROC. The p-values of each experiment are also given to show whether there is a statistically significant difference between the results and the test benchmark; a p-value below 0.05 is taken to indicate statistical significance.
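For illustration, both metrics may be computed as below; the paper does not name the significance test used, so an independent two-sample t-test over the repeated runs is assumed here (all values are illustrative):

import numpy as np
from scipy.stats import ttest_ind
from sklearn.metrics import roc_auc_score

# AUC-ROC for one run: true labels vs predicted class-1 probabilities.
y_true = np.array([0, 0, 1, 1, 1])
y_prob = np.array([0.1, 0.4, 0.35, 0.8, 0.9])
auc = roc_auc_score(y_true, y_prob)

# One AUC-ROC value per repeated run, for the benchmark and for a reduced
# dataset (illustrative numbers only).
benchmark_aucs = [0.981, 0.979, 0.982, 0.980, 0.981]
reduced_aucs = [0.980, 0.978, 0.981, 0.979, 0.980]
_, p_value = ttest_ind(benchmark_aucs, reduced_aucs)
significant = p_value < 0.05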
Multiple models were considered for the experimentation, and preliminary testing was performed on each to identify the most suitable model for the rest of the experimentation. The candidates were Random Forest (RF), XGBoost, Convolutional Neural Network (CNN), and Long Short-Term Memory (LSTM) models. The RF model is available as part of the scikit-learn library, XGBoost is provided by the xgboost library through its scikit-learn-compatible interface, and the CNN and LSTM models were created using the PyTorch library. Below are the configurations of each model:
  • Random Forest Algorithm (RF)
    Maximum depth: unlimited;
    Number of estimators: 100.
  • XGBoost
    Maximum depth: unlimited;
    Number of estimators: 100;
    Tree method: ‘approx’.
  • Convolutional Neural Network (CNN)
    Layer configuration: 3 convolutional layers with batch normalization; 2 fully connected final layers;
    Learning rate: 0.001;
    Optimiser: Adam;
    Loss function: Binary cross-entropy;
    Data window size: 6 datapoints.
  • Long Short-Term Memory Network (LSTM)
    Number of layers: 4 LSTM layers, 1 fully connected layer;
    Hidden layer size: 250;
    Bidirectional: False;
    Learning rate: 0.001;
    Optimiser: Adam;
    Loss function: Binary cross-entropy;
    Data window size: 6 datapoints.
Table 2 shows the results from preliminary testing on Site Alpha of the HPDMobile dataset, where the RF model was the best-performing model. The RF and XGBoost models are also the simpler models to train with, while the CNN and LSTM models require the input data’s sequencing to be preserved; this adds a level of complexity which may be avoided by using a model that does not require sequence preservation. Also, as one objective of this study is deployability on edge devices with computational constraints, a computationally simpler model is preferred. The Random Forest model was therefore selected as a simple, single-loop model for classification; a minimal sketch of the selected configuration follows.
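The sketch below uses the hyperparameters listed above on synthetic stand-in data:

import numpy as np
from sklearn.ensemble import RandomForestClassifier

# The configuration listed above: 100 estimators and unlimited depth
# (max_depth=None, scikit-learn's default). Data here are synthetic.
rng = np.random.default_rng(0)
X, y = rng.normal(size=(500, 12)), rng.integers(0, 2, size=500)

model = RandomForestClassifier(n_estimators=100, max_depth=None)
model.fit(X, y)
occupancy_prob = model.predict_proba(X)[:, 1]  # probability of 'Occupied'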

2.6. Hardware and Power Calculation

As the focus of this paper is to reduce the cost of operating occupancy classification on edge devices, it is important to consider the hardware the experiments are carried out on. Due to the large number of experiments and the amount of data logging to be performed, experimentation is performed on a desktop PC. The code is designed to be transferable to an IoT device for deployment, but in the context of this paper, the following hardware is used for experimentation:
  • CPU: Intel i7-11700k;
  • RAM: 16 GB DDR4;
  • OS: Windows 10.
The power consumption of the CPU is measured using HWiNFO software [34] while the model is trained. For the processor used, the power consumption is 14 W when idle and 46 W when busy; the power consumption attributable to the program is the difference between the two, 32 W. This value is consistent between experiments regardless of tree depth or dataset size, although these factors instead increase the runtime of training. In the UK, according to the Department for Energy Security and Net Zero [35], this translates to 6.623 g of CO2 emissions per hour (0.032 kWh per hour at an implied grid carbon intensity of roughly 207 gCO2e/kWh).
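From these figures, the energy and CO2 cost of a training run may be estimated as below; the grid intensity of roughly 207 gCO2e/kWh is implied by the stated values rather than quoted directly from [35]:

# Energy and CO2 for a training run, from the figures above.
POWER_W = 46 - 14                  # busy minus idle draw = 32 W
GRID_G_PER_KWH = 6.623 / 0.032     # intensity implied by the paper, ~207 g/kWh

def training_footprint(runtime_s):
    """Return (energy in kWh, CO2 in grams) for a run of runtime_s seconds."""
    energy_kwh = (POWER_W / 1000) * (runtime_s / 3600)
    return energy_kwh, energy_kwh * GRID_G_PER_KWH

# e.g. a 44 s benchmark run (Site Charlie, Table 29):
energy, co2 = training_footprint(44)   # ~0.00039 kWh, ~0.08 g CO2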

3. Results

3.1. Experiments on Individual Sites

For each experiment performed, the datasets were split into training and testing sets at a ratio of 80:20. Each experiment was run five times and results were averaged.
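For illustration, this protocol might be implemented as below; re-seeding the split on each run is an assumption, as the text only states that each experiment was run five times and the results averaged:

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

def run_experiment(X, y, n_runs=5, test_size=0.2):
    """80:20 split, five runs, averaged AUC-ROC (the protocol above)."""
    scores = []
    for seed in range(n_runs):
        X_tr, X_te, y_tr, y_te = train_test_split(
            X, y, test_size=test_size, random_state=seed)
        model = RandomForestClassifier(n_estimators=100).fit(X_tr, y_tr)
        scores.append(roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))
    return float(np.mean(scores))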
Experimentation was performed to observe the effects of reducing the data by varying amounts, from different areas in the data distribution. Due to the different class balances of each dataset, class balancing removes more data for more imbalanced classes, and vice versa. Table 3 shows the maximum percentage reduction of the larger class, and the dataset overall. It also shows the densities of the classes, derived from research in [18]. This will be used as a metric to identify suitability for data reduction.

3.1.1. Experimental Benchmark

Before data reduction was performed, the models were trained with the full dataset to acquire the benchmark results. These results are shown in Table 4. The runtime of each of these experiments was less than one minute.

3.1.2. Site Alpha

Site Alpha has 147,750 datapoints and a class balance of 20:80. Figure 3 shows the results of varying degrees of data reduction on the single dataset. Table 5 and Table 6 show the p-values of the AUC-ROC of each experiment.
Figure 3a,b show that model accuracy decreases steadily as more data are removed, regardless of whether the removed data are from the majority class or both. Figure 3c,d show a similar drop in performance. However, the AUC-ROC score may be maintained with majority class reduction of up to 50%. This is interesting behaviour, as the accuracy up to this amount of reduction decreases. By performing reduction in this way, we may improve the model’s ability to avoid false positives and false negatives. It is also important to note that the p-values of all experiments corresponding to majority class reduction indicate that the results are not statistically differentiable from the benchmark, apart from at the maximum reduction. For the maximum reduction, the performance is clearly worse, hence the differentiation. For every other case, performance is maintained while reducing the amount of data.
Table 7 and Table 8 show the class densities for each experiment. Table 7 shows that, for each reduction method, the densities of each class become closer, up to a reduction cap of 50%. At maximum reduction, the minority class, class 0, has a greater class density than class 1. At the same time, both the accuracy and the AUC-ROC of the model decrease by a relatively large amount. This supports the theory that data may be reduced in order to balance the density of each class towards a value of 1, but further reduction that leads to an imbalance causes the model to deteriorate. Also, Table 8 shows that by performing data reduction on both classes, the difference in class density between classes 0 and 1 does not change by a significant amount. This may explain why the accuracy and AUC-ROC decrease as the amount of data is reduced, while they do not with reduction only on the majority class.
Figure 3. Experimental results for Site Alpha test set. (a) Accuracy, Majority class reduced; (b) Accuracy, Both classes reduced; (c) AUC, Majority class reduced; (d) AUC, Both classes reduced.
Table 5. p-values of the AUC metrics of each experiment of reduction on the majority class, on Site Alpha test set. Values marked with an * have a statistically significant difference from the benchmark.
Reduction   Random          Central         Lateral         Even            Squash
5%          0.268           0.182           0.616           0.794           0.329
10%         0.971           0.255           0.906           0.652           0.625
25%         0.74            0.972           0.688           0.482           0.673
50%         0.605           0.731           0.948           0.341           0.239
Max         2.25 × 10⁻⁴ *   2.63 × 10⁻⁵ *   1.49 × 10⁻⁴ *   1.59 × 10⁻⁴ *   1.18 × 10⁻⁴ *
Table 6. p-values of the AUC metrics of each experiment of reduction on both classes, on Site Alpha test set. Values marked with an * have a statistically significant difference from the benchmark.
Reduction   Random          Central         Lateral         Even            Squash
5%          0.273           7.04 × 10⁻³ *   4.42 × 10⁻² *   5.92 × 10⁻²     5.56 × 10⁻³ *
10%         3.51 × 10⁻³ *   0.267           1.44 × 10⁻² *   0.16            3.48 × 10⁻² *
25%         1.71 × 10⁻² *   5.21 × 10⁻³ *   1.21 × 10⁻³ *   1.98 × 10⁻⁴ *   2.83 × 10⁻³ *
50%         3.96 × 10⁻⁴ *   2.56 × 10⁻⁵ *   2.64 × 10⁻⁵ *   7.09 × 10⁻⁴ *   2.11 × 10⁻⁴ *
Max         1.85 × 10⁻⁵ *   3.14 × 10⁻⁵ *   1.25 × 10⁻⁴ *   1.73 × 10⁻⁶ *   3.52 × 10⁻⁵ *
Table 7. Class density of Site Alpha dataset after data reduction on the majority class only. R: random reduction, C: central reduction, L: lateral reduction, E: even reduction, S: squash reduction.
Data Reduced   Class 0 (Not Occupied)                Class 1 (Occupied)
               R      C      L      E      S          R      C      L      E      S
5%             0.707  0.712  0.670  0.687  0.704      1.565  1.565  1.582  1.565  1.551
10%            0.739  0.721  0.722  0.756  0.740      1.565  1.562  1.552  1.511  1.550
25%            0.812  0.875  0.848  0.848  0.882      1.507  1.485  1.482  1.389  1.447
50%            1.112  1.185  1.176  1.161  1.133      1.324  1.308  1.315  1.112  1.311
Max            1.782  1.734  1.725  1.796  1.621      1.008  1.010  0.977  0.680  0.968
Table 8. Class density of Site Alpha dataset after data reduction across both classes. R: random reduction, C: central reduction, L: lateral reduction, E: even reduction, S: squash reduction.
Data Reduced   Class 0 (Not Occupied)                Class 1 (Occupied)
               R      C      L      E      S          R      C      L      E      S
5%             0.720  0.706  0.719  0.643  0.707      1.549  1.555  1.579  1.591  1.572
10%            0.782  0.669  0.634  0.704  0.709      1.565  1.579  1.598  1.532  1.557
25%            0.792  0.752  0.781  0.635  0.622      1.562  1.573  1.580  1.531  1.597
50%            0.707  0.716  0.735  0.595  0.633      1.571  1.569  1.582  1.458  1.579
Max            0.766  0.804  0.850  0.554  0.541      1.556  1.571  1.622  1.340  1.594

3.1.3. Site Beta

Site Beta has 146,879 datapoints and a class balance of 40:60. This dataset has relatively few datapoints and less class imbalance than the others in the HPDMobile dataset, causing a lower maximum reduction of 34%. Figure 4 shows the results of varying degrees of data reduction on the Site Beta dataset, and Table 9 and Table 10 show the p-values of the AUC-ROC of each experiment.
Both the accuracy and AUC-ROC values change by less than 0.2% as the amount of data is reduced. This is due to much less reduction being required to balance the classes, compared to the reduction performed for Site Alpha. However, there is still a very slight drop in both accuracy and AUC-ROC. Most of the p-values show no statistically significant difference from the benchmark, except for some of the more extreme reduction amounts. These results show a larger difference from the benchmark, which raises questions about the stability of the model after reducing the dataset.
Table 11 and Table 12 show that the densities of both classes are above 1 for all experiments except the maximum reduction of the majority class, where class 1’s density is between 0.9 and 1. This suggests that the reduction performed is not enough to shift the densities towards 1. Alternative methods may be required to optimise datasets like Site Beta, where classes are already closely balanced.
Figure 4. Experimental results for Site Beta test set. (a) Accuracy, Majority class reduced; (b) Accuracy, Both classes reduced; (c) AUC, Majority class reduced; (d) AUC, Both classes reduced.
Table 9. p-values of the AUC metrics of each experiment of reduction on the majority class, on Site Beta test set. Values marked with an * have a statistically significant difference from the benchmark.
Reduction   Random          Central         Lateral         Even            Squash
5%          0.955           0.684           0.591           0.258           0.325
10%         0.929           0.486           0.32            0.672           0.376
25%         4.65 × 10⁻² *   4.82 × 10⁻³ *   0.297           0.483           0.251
Max         3.47 × 10⁻² *   0.589           5.31 × 10⁻²     1.71 × 10⁻² *   4.05 × 10⁻² *
Table 10. p-values of the AUC metrics of each experiment of reduction on both classes, on Site Beta test set. Values marked with an * have a statistically significant difference from the benchmark.
Reduction   Random          Central         Lateral         Even            Squash
5%          7.27 × 10⁻²     0.151           0.349           0.657           0.368
10%         0.784           3.04 × 10⁻² *   0.389           0.564           2.12 × 10⁻² *
25%         0.142           0.111           7.13 × 10⁻²     0.247           0.523
Max         1.80 × 10⁻² *   0.124           1.38 × 10⁻⁴ *   1.00 × 10⁻² *   9.75 × 10⁻²
Table 11. Class density of Site Beta dataset after data reduction on the majority class only. R: random reduction, C: central reduction, L: lateral reduction, E: even reduction, S: squash reduction.
Data Reduced   Class 0 (Not Occupied)                Class 1 (Occupied)
               R      C      L      E      S          R      C      L      E      S
5%             1.111  1.132  1.112  1.114  1.126      1.172  1.153  1.164  1.157  1.149
10%            1.144  1.157  1.162  1.149  1.158      1.137  1.146  1.146  1.119  1.120
25%            1.277  1.276  1.279  1.263  1.298      1.055  1.047  1.035  1.004  1.021
Max            1.383  1.345  1.371  1.345  1.387      0.963  1.002  0.970  0.917  0.952
Table 12. Class density of Site Beta dataset after each data reduction method, with reduction on both classes. R: random reduction, C: central reduction, L: lateral reduction, E: even reduction, S: squash reduction.
Data Reduced   Class 0 (Not Occupied)                Class 1 (Occupied)
               R      C      L      E      S          R      C      L      E      S
5%             1.083  1.097  1.087  1.091  1.071      1.195  1.180  1.198  1.180  1.189
10%            1.087  1.088  1.094  1.062  1.079      1.191  1.194  1.192  1.187  1.177
25%            1.057  1.078  1.085  1.047  1.061      1.205  1.198  1.186  1.156  1.185
Max            1.085  1.138  1.114  1.052  1.068      1.190  1.184  1.218  1.126  1.174

3.1.4. Site Charlie

Site Charlie has one of the larger class imbalances, and therefore larger maximum reduction caps, with a maximum reduction of 72%. It is also one of the larger datasets, with over 300,000 datapoints. Figure 5 shows the experimental results and Table 13 and Table 14 show the p-values of the AUC-ROC of each experiment.
Figure 5a shows that up to a reduction cap of 50%, accuracy is above the benchmark, with the lateral data reduction method at a reduction cap of 10% performing best. With the maximum reduction cap (at 72.111%), however, the performance is below the benchmark for all strategies. This indicates that a delicate balance is needed for data reduction to ensure that too much data is not removed. This is further explained by the class densities; Table 15 shows that as more of the majority class is reduced, both classes’ densities converge around a value of 1. At a reduction cap of 50%, the combined difference between each density and 1 is smallest, which is where the AUC-ROC is greatest across all strategies. However, the results are less stable, as shown by the size of each box. Furthermore, Table 16 shows that reducing data across both classes does little to move the class densities, and model performance is correspondingly worse.
Figure 5. Experimental results for Site Charlie test set. (a) Accuracy, Majority class reduced; (b) Accuracy, Both classes reduced; (c) AUC, Majority class reduced; (d) AUC, Both classes reduced.
Table 13. p-values of the AUC metrics of each experiment of reduction on the majority class, on Site Charlie test set. Values marked with an * have a statistically significant difference from the benchmark.
Reduction   Random          Central         Lateral         Even            Squash
5%          0.14            4.96 × 10⁻² *   3.08 × 10⁻² *   1.18 × 10⁻² *   0.215
10%         2.05 × 10⁻³ *   7.93 × 10⁻³ *   6.49 × 10⁻⁴ *   2.93 × 10⁻² *   8.29 × 10⁻³ *
25%         2.96 × 10⁻² *   2.23 × 10⁻² *   6.12 × 10⁻⁴ *   1.41 × 10⁻² *   8.71 × 10⁻³ *
50%         6.60 × 10⁻³ *   9.01 × 10⁻³ *   1.74 × 10⁻² *   2.63 × 10⁻³ *   3.41 × 10⁻³ *
Max         7.02 × 10⁻²     0.389           4.31 × 10⁻⁴ *   0.365           2.89 × 10⁻² *
Table 14. p-values of the AUC metrics of each experiment of reduction on both classes, on Site Charlie test set. Values marked with an * have a statistically significant difference from the benchmark.
Reduction   Random          Central         Lateral         Even            Squash
5%          0.138           0.116           0.299           1.71 × 10⁻³ *   1.23 × 10⁻³ *
10%         0.233           0.461           0.3             0.396           0.924
25%         0.863           0.705           0.994           2.66 × 10⁻² *   0.804
50%         2.42 × 10⁻² *   0.255           0.194           6.01 × 10⁻⁴ *   2.02 × 10⁻³ *
Max         3.48 × 10⁻⁴ *   5.56 × 10⁻³ *   2.32 × 10⁻³ *   9.48 × 10⁻⁵ *   3.73 × 10⁻⁴ *
Table 15. Class density of Site Charlie dataset after data reduction on the majority class only. R: random reduction, C: central reduction, L: lateral reduction, E: even reduction, S: squash reduction.
Data Reduced   Class 0 (Not Occupied)                Class 1 (Occupied)
               R      C      L      E      S          R      C      L      E      S
5%             0.663  0.662  0.664  0.663  0.661      1.512  1.524  1.519  1.511  1.513
10%            0.689  0.689  0.690  0.693  0.695      1.492  1.493  1.491  1.487  1.489
25%            0.794  0.793  0.793  0.791  0.791      1.449  1.423  1.421  1.403  1.414
50%            1.046  1.033  1.046  1.048  1.042      1.260  1.271  1.222  1.202  1.251
Max            1.458  1.466  1.460  1.457  1.467      0.985  0.981  0.991  0.891  0.957
Table 16. Class density of Site Charlie dataset after data reduction across both classes. R: random reduction, C: central reduction, L: lateral reduction, E: even reduction, S: squash reduction.
Data Reduced   Class 0 (Not Occupied)                Class 1 (Occupied)
               R      C      L      E      S          R      C      L      E      S
5%             0.636  0.637  0.640  0.637  0.636      1.531  1.541  1.531  1.529  1.530
10%            0.638  0.639  0.639  0.632  0.637      1.534  1.532  1.531  1.525  1.528
25%            0.635  0.635  0.638  0.636  0.635      1.538  1.537  1.526  1.512  1.522
50%            0.639  0.636  0.635  0.633  0.633      1.533  1.538  1.526  1.489  1.512
Max            0.634  0.646  0.638  0.631  0.631      1.556  1.563  1.554  1.445  1.509

3.1.5. Site Delta

Site Delta has similar properties to Site Alpha, with 146,879 datapoints and a class balance of 21:79. Figure 6 shows the experimental results, and Table 17 and Table 18 show the p-values of the AUC-ROC of each experiment.
Figure 6a,b show similar behaviour to each other, where accuracy drops as greater reduction is performed. Figure 6c shows that the AUC-ROC is maintained up to a 75% reduction for class balance. However, Figure 6d shows that the AUC-ROC drops by a larger amount with reduction across both classes. Site Delta has a large class imbalance, which suggests that some balancing is important for optimal results. The p-values of the results for all but three experiments (data squash at 10% and at maximum reduction, and lateral reduction at 25%) show no statistically significant difference from the benchmark. This suggests once again that, through this method of data reduction, optimal model performance is maintained.
Table 19 and Table 20 show the class densities after each reduction method for Site Delta. As with the other sites, reduction on both classes does little to bring the class densities towards a value of 1. For majority class reduction, an optimal amount of reduction to achieve balanced class densities is between reduction caps of 50% and 75%. This demonstrates the delicate balance needed to find the optimal amount of data to remove for the best performance.
Figure 6. Experimental results for Site Delta test set. (a) Accuracy, Majority class reduced; (b) Accuracy, Both classes reduced; (c) AUC, Majority class reduced; (d) AUC, Both classes reduced.
Table 17. p-values of the AUC metrics of each experiment of reduction on the majority class, on Site Delta test set. Values marked with an * have a statistically significant difference from the benchmark.
Reduction   Random          Central         Lateral         Even            Squash
5%          0.704           8.17 × 10⁻²     0.711           0.396           0.287
10%         0.199           0.35            7.70 × 10⁻²     0.582           1.59 × 10⁻² *
25%         0.949           0.168           4.32 × 10⁻² *   0.757           0.212
50%         0.415           0.553           0.807           0.473           0.184
75%         0.375           5.83 × 10⁻²     0.778           0.802           0.891
Max         0.113           0.254           5.27 × 10⁻²     0.444           3.14 × 10⁻² *
Table 18. p-values of the AUC metrics of each experiment of reduction on both classes, on Site Delta test set. Values marked with an * have a statistically significant difference from the benchmark.
Reduction   Random          Central         Lateral         Even            Squash
5%          0.198           0.918           0.625           0.226           0.591
10%         0.869           0.144           0.155           0.967           0.686
25%         2.07 × 10⁻² *   3.73 × 10⁻² *   3.20 × 10⁻³ *   5.53 × 10⁻²     2.61 × 10⁻² *
50%         2.98 × 10⁻³ *   1.00 × 10⁻⁴ *   4.39 × 10⁻⁴ *   1.59 × 10⁻⁴ *   1.42 × 10⁻² *
75%         1.70 × 10⁻⁴ *   2.11 × 10⁻⁵ *   4.50 × 10⁻⁵ *   8.12 × 10⁻⁴ *   1.02 × 10⁻⁴ *
Max         5.72 × 10⁻⁵ *   5.59 × 10⁻³ *   2.34 × 10⁻³ *   4.75 × 10⁻⁴ *   3.29 × 10⁻⁴ *
Table 19. Class density of Site Delta dataset after data reduction on the majority class only. R: random reduction, C: central reduction, L: lateral reduction, E: even reduction, S: squash reduction.
Data Reduced   Class 0 (Not Occupied)                Class 1 (Occupied)
               R      C      L      E      S          R      C      L      E      S
5%             0.578  0.561  0.586  0.556  0.559      1.566  1.573  1.553  1.549  1.550
10%            0.633  0.586  0.603  0.633  0.590      1.534  1.559  1.544  1.504  1.523
25%            0.671  0.679  0.662  0.675  0.645      1.475  1.520  1.503  1.414  1.468
50%            0.932  0.969  0.938  0.954  0.914      1.334  1.412  1.352  1.152  1.294
75%            1.354  1.494  1.484  1.493  1.420      1.052  1.052  1.103  0.737  0.957
Max            1.506  1.456  1.583  1.531  1.509      1.074  1.062  0.969  0.685  0.909
Table 20. Class density of Site Delta dataset after data reduction across both classes. R: random reduction, C: central reduction, L: lateral reduction, E: even reduction, S: squash reduction.
Data Reduced   Class 0 (Not Occupied)                Class 1 (Occupied)
               R      C      L      E      S          R      C      L      E      S
5%             0.533  0.544  0.558  0.577  0.578      1.577  1.572  1.573  1.553  1.555
10%            0.563  0.516  0.561  0.542  0.557      1.605  1.600  1.579  1.549  1.557
25%            0.627  0.545  0.546  0.518  0.567      1.581  1.577  1.572  1.512  1.540
50%            0.602  0.634  0.559  0.515  0.537      1.663  1.591  1.573  1.416  1.538
75%            0.615  0.569  0.581  0.464  0.632      1.639  1.704  1.634  1.299  1.498
Max            0.573  0.562  0.663  0.445  0.629      1.688  1.593  1.615  1.278  1.474

3.1.6. Site Epsilon

Site Epsilon is the smallest site in the dataset, with only 129,599 datapoints. It is also among the most imbalanced of the site datasets, with a class balance of 24:76. Figure 7 shows the experimental results, and Table 21 and Table 22 show the p-values of the AUC-ROC of each experiment.
Much like with Site Delta, Figure 7a,b show that data reduction causes a drop in performance of up to 0.4%. Like the experiments for sites Alpha, Beta, and Delta, the AUC-ROC does not decrease with accuracy until the maximum reduction.
Table 23 shows that with each reduction cap, the density of class 0 increases while the density of class 1 decreases, as with the other sites. However, class 0’s density passes through a value of 1 between reduction caps of 10% and 25%, while class 1 passes through a value of 1 between a reduction cap of 50% and the maximum (for all reduction methods except random). This makes it difficult to identify the best reduction cap for this dataset, and perhaps an alternative method of data reduction would be more appropriate. Table 24 shows that, similarly to sites Charlie and Delta, as data are reduced, the densities move further from a value of 1. This is linked to a drop in both accuracy and AUC-ROC.
Figure 7. Experimental results for Site Epsilon test set. (a) Accuracy, Majority class reduced; (b) Accuracy, Both classes reduced; (c) AUC, Majority class reduced; (d) AUC, Both classes reduced.
Table 21. p-values of the AUC metrics of each experiment of reduction on the majority class, on Site Epsilon test set. Values marked with an * have a statistically significant difference from the benchmark.
Reduction   Random          Central         Lateral         Even            Squash
5%          0.761           0.618           2.51 × 10⁻² *   0.376           0.919
10%         6.61 × 10⁻²     0.373           0.541           0.844           0.522
25%         0.745           0.765           0.207           0.173           0.991
50%         0.186           1.91 × 10⁻² *   8.94 × 10⁻²     1.57 × 10⁻² *   0.282
Max         1.03 × 10⁻³ *   8.73 × 10⁻³ *   1.79 × 10⁻³ *   1.47 × 10⁻² *   1.37 × 10⁻³ *
Table 22. p-values of the AUC metrics of each experiment of reduction on both classes, on Site Epsilon test set. Values marked with an * have a statistically significant difference from the benchmark.
Reduction   Random          Central         Lateral         Even            Squash
5%          0.856           0.513           0.676           0.513           0.593
10%         0.639           0.447           6.03 × 10⁻²     0.127           0.997
25%         3.89 × 10⁻² *   4.75 × 10⁻² *   0.203           0.105           1.51 × 10⁻² *
50%         7.18 × 10⁻³ *   2.48 × 10⁻² *   1.58 × 10⁻³ *   1.21 × 10⁻³ *   5.37 × 10⁻³ *
Max         2.01 × 10⁻⁴ *   1.07 × 10⁻³ *   3.55 × 10⁻³ *   8.09 × 10⁻⁴ *   5.19 × 10⁻⁴ *
Table 23. Class density of Site Epsilon dataset after data reduction on the majority class only. R: random reduction, C: central reduction, L: lateral reduction, E: even reduction, S: squash reduction.
Data Reduced   Class 0 (Not Occupied)                Class 1 (Occupied)
               R      C      L      E      S          R      C      L      E      S
5%             0.862  0.932  0.878  0.909  0.907      1.383  1.378  1.391  1.363  1.370
10%            0.944  0.946  0.956  0.969  0.954      1.377  1.341  1.369  1.325  1.340
25%            1.061  1.041  1.079  1.115  1.050      1.319  1.302  1.279  1.207  1.265
50%            1.406  1.400  1.458  1.406  1.456      1.200  1.124  1.168  0.963  1.097
Max            1.834  1.790  1.813  1.836  1.814      1.008  0.977  0.973  0.692  0.935
Table 24. Class density of Site Epsilon dataset after data reduction across both classes. R: random reduction, C: central reduction, L: lateral reduction, E: even reduction, S: squash reduction.
Data Reduced   Class 0 (Not Occupied)                Class 1 (Occupied)
               R      C      L      E      S          R      C      L      E      S
5%             0.906  0.898  0.876  0.884  0.871      1.394  1.386  1.403  1.382  1.387
10%            0.850  0.874  0.873  0.877  0.877      1.406  1.386  1.386  1.372  1.381
25%            0.886  0.883  0.882  0.883  0.875      1.392  1.379  1.417  1.335  1.367
50%            0.858  0.863  0.883  0.878  0.874      1.404  1.443  1.422  1.262  1.365
Max            0.864  0.873  0.883  0.884  0.876      1.349  1.446  1.417  1.183  1.371

3.1.7. Site Fazbear

Site Fazbear is the largest of the sites, with 328,319 datapoints, and the only site to feature more datapoints in the ‘Occupied’ class, class 1. It is also the most balanced site, with a class balance of 47:53, giving a maximum reduction of 12.111%. Figure 8 shows the experimental results, and Table 25 and Table 26 show the p-values of the AUC-ROC of each experiment.
Before analysing the box and whisker plots, we note that the p-values indicate no statistically significant difference from the benchmark, except at 5% reduction with the data squash method, for which the AUC-ROC is slightly better than the benchmark. For all other experiments, there is no statistical significance. This can be explained by the very small amount of reduction performed for this dataset, due to its natural class balance of 47:53. Considering the class densities, shown in Table 27 and Table 28, the densities reach values closer to 1 in the experiments with reduction on both classes, unlike for the previous sites. The values reach just under 1.2 for class 0 and just above 0.95 for class 1 at the maximum reduction on both classes. As with the AUC-ROC, there is very little change in densities because of the small amount of reduction performed. This indicates that this method of balancing datasets through reduction is not aggressive enough for already closely balanced datasets.
Figure 8. Experimental results for Site Fazbear test set. (a) Accuracy, Majority class reduced; (b) Accuracy, Both classes reduced; (c) AUC, Majority class reduced; (d) AUC, Both classes reduced.
Table 25. p-values of the AUC metrics of each experiment of reduction on the majority class, on Site Fazbear test set. Values marked with an * have a statistically significant difference from the benchmark.
Reduction   Random          Central         Lateral         Even            Squash
5%          0.191           0.297           0.182           0.101           3.72 × 10⁻² *
10%         0.993           9.94 × 10⁻²     0.834           0.435           0.489
Max         6.08 × 10⁻²     7.90 × 10⁻²     0.384           0.594           0.207
Table 26. p-values of the AUC metrics of each experiment of reduction on both classes, on Site Fazbear test set. Values marked with an * have a statistically significant difference from the benchmark.
Reduction   Random          Central         Lateral         Even            Squash
5%          0.534           0.574           0.138           0.18            0.568
10%         0.596           0.476           6.91 × 10⁻²     5.22 × 10⁻²     0.47
Max         0.727           0.742           0.669           0.442           0.929
Table 27. Class density of Site Fazbear dataset after data reduction on the majority class only. R: random reduction, C: central reduction, L: lateral reduction, E: even reduction, S: squash reduction.
Data Reduced   Class 0 (Not Occupied)                Class 1 (Occupied)
               R      C      L      E      S          R      C      L      E      S
5%             1.221  1.216  1.208  1.190  1.213      0.918  0.925  0.926  0.931  0.918
10%            1.208  1.172  1.194  1.158  1.169      0.945  0.952  0.944  0.950  0.949
Max            1.174  1.163  1.153  1.154  1.151      0.956  0.958  0.966  0.952  0.959
Table 28. Class density of Site Fazbear dataset after data reduction across both classes. R: random reduction, C: central reduction, L: lateral reduction, E: even reduction, S: squash reduction.
Data Reduced   Class 0 (Not Occupied)                Class 1 (Occupied)
               R      C      L      E      S          R      C      L      E      S
5%             1.263  1.249  1.256  1.239  1.242      0.896  0.896  0.896  0.893  0.893
10%            1.247  1.268  1.269  1.233  1.229      0.893  0.885  0.900  0.885  0.898
Max            1.240  1.257  1.232  1.249  1.230      0.897  0.893  0.903  0.877  0.895

3.1.8. Discussion—Individual Site Datasets

First, we discuss the performance of each site individually. We have identified that datasets Alpha, Charlie, and Delta show promising results when performing reduction on the majority class: the AUC-ROC may be maintained, so long as class density shifts towards 1 through data reduction. The class densities of sites Beta, Epsilon, and Fazbear do not converge around 1 as the amount of data is reduced, and the AUC-ROC decreases slightly as a result. Site Epsilon has one of the largest class imbalances, but still fails to improve after class balancing through data reduction; this dataset also has the fewest datapoints, which may explain the poor performance after data reduction is performed. Therefore, we cannot rely solely on class imbalance as a criterion for data reduction. Instead, class density offers insight into a dataset that might not be immediately apparent: it reflects not only class imbalance but also whether a dataset has sufficient data for the methods described in this paper. Class density may therefore be considered a metric that encompasses both class balance and data sufficiency, and can be used to determine if data reduction is applicable.
Table 29 shows the runtimes for all the experiments. It illustrates the benefit data reduction brings to energy and CO2 cost, as runtime, energy use, and CO2 emissions are directly correlated. For most experiments, increasing the amount of reduction reduces the runtime. Sites Alpha, Charlie, Delta, and Epsilon have relatively large class imbalances, so more data are removed to balance their classes. Sites Beta and Fazbear are less imbalanced and therefore remove less data. This is especially apparent for Site Fazbear, where the class-balancing experiments actually increase the runtime: the overhead of identifying which data to remove is high for this site because of its size, and since so few data are removed, the training time remains close to that of the benchmark. Consistent with this, the experiments that do not balance the classes are in most cases slightly faster, as they avoid the additional overhead of splitting the dataset by class before reduction. This again shows that the data reduction strategies introduced in this paper are not applicable to all datasets, Site Fazbear among them. It also shows that for datasets such as Site Charlie, the runtime may be nearly halved (from 44 s to 26 s) by reducing with a cap of 50%, which also improves model performance.
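Since runtime, energy use, and CO2 emissions scale together, the saving can be estimated directly from the measured runtimes. The sketch below assumes an average power draw of 65 W and a grid factor of 0.207 kg CO2e per kWh; both figures are illustrative assumptions, as in practice they would come from a power monitor and the national conversion-factor tables.

    # Hedged sketch: converting a measured training runtime into energy and CO2.
    def training_cost(runtime_s, avg_power_w=65.0, kg_co2e_per_kwh=0.207):
        energy_kwh = avg_power_w * runtime_s / 3_600_000  # W * s (joules) -> kWh
        co2_kg = energy_kwh * kg_co2e_per_kwh
        return energy_kwh, co2_kg

    # Site Charlie: a 50% reduction cuts runtime from 44 s to 26 s (Table 29).
    for label, t in [("benchmark", 44), ("50% reduction", 26)]:
        kwh, co2 = training_cost(t)
        print(f"{label}: {kwh * 1000:.3f} Wh, {co2 * 1000:.3f} g CO2e")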
To identify which data reduction strategy performs best, we focus on the best-performing scenarios, as they are the most stable and the most useful, and we also consider the statistical significance of the results. For example, Site Alpha maintains its AUC-ROC up to 50% reduction, but none of the reduction strategies differ significantly from the benchmark, so no strategy can definitively be identified as superior there. Site Beta produces only results inferior to the benchmark. Site Charlie shows an increase in AUC-ROC that is statistically different from the benchmark; at 50% reduction, the data squash method performs best. Site Delta has two results that stand out, the data squash method at 10% reduction and lateral reduction at 25%, with data squash superior at an average AUC-ROC increase of 0.137%. For Site Epsilon, a 5% lateral reduction performs best, with an average AUC-ROC increase of 0.0301%. For Site Fazbear, a 5% reduction with the data squash method improves the AUC-ROC by an average of 0.0272%. In conclusion, the lateral and squash reduction methods are among the best performing.
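For reference, the sketch below gives one plausible reading of how four of the compared strategies select datapoints to discard, ranking each class's datapoints by distance to the class centroid. This interpretation is inferred from the strategy names and from the paper's use of centroid distance; it is not a verbatim restatement of the implemented procedures, and the squash method is omitted because its mechanics are not described in this section.

    # Hedged sketch: four centroid-distance-based readings of the strategies.
    import numpy as np

    def reduce_class(X, frac, strategy="random", seed=0):
        """Drop `frac` of one class's datapoints X according to `strategy`."""
        n_drop = int(frac * len(X))
        if n_drop == 0:
            return X
        dist = np.linalg.norm(X - X.mean(axis=0), axis=1)  # distance to class centroid
        ranked = np.argsort(dist)                          # nearest centroid first
        if strategy == "random":
            drop = np.random.default_rng(seed).choice(len(X), n_drop, replace=False)
        elif strategy == "central":  # assumption: remove points nearest the centroid
            drop = ranked[:n_drop]
        elif strategy == "lateral":  # assumption: remove points farthest from it
            drop = ranked[-n_drop:]
        elif strategy == "even":     # assumption: remove evenly across the ranking
            drop = ranked[:: max(1, len(X) // n_drop)][:n_drop]
        else:
            raise ValueError(f"unknown strategy: {strategy}")
        keep = np.setdiff1d(np.arange(len(X)), drop)
        return X[keep]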

3.2. Experiments on Fused Dataset

By fusing all of the sites into one large dataset, we may observe the ability of a single model to generalise on multiple different test sets from different environments. We may then see if data reduction will benefit the model further, as it can with the individual site datasets. Table 30 shows the details of the fused dataset. It has over four million datapoints, whereas the largest individual site has just under 330,000.
Table 31 shows the benchmark results of experimentation on the fused dataset, with no reduction. The model is trained on a fused training set and tested on individual site test sets.
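A minimal sketch of this protocol is given below, following the 80:20 per-site split of Figure 1. load_site is a hypothetical helper for loading one site's features and occupancy labels, and the RF settings and chronological (unshuffled) split are illustrative choices rather than the paper's exact configuration.

    # Hedged sketch: fuse per-site training splits, score one model per site.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    sites = ["Alpha", "Beta", "Charlie", "Delta", "Epsilon", "Fazbear"]
    splits = {}
    for site in sites:
        X, y = load_site(site)  # hypothetical: features and occupancy labels
        # shuffle=False keeps the time-series order within each split.
        splits[site] = train_test_split(X, y, test_size=0.2, shuffle=False)

    X_train = np.vstack([splits[s][0] for s in sites])
    y_train = np.concatenate([splits[s][2] for s in sites])

    model = RandomForestClassifier(n_estimators=100, n_jobs=-1).fit(X_train, y_train)
    for site, (_, X_test, _, y_test) in splits.items():
        auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
        print(f"{site}: AUC-ROC = {auc:.3%}")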
For all sites, the accuracy and AUC-ROC are lower than their non-fused counterparts. For sites Beta and Charlie, the accuracy is substantially lower, by around 25–30%. For sites Alpha, Beta, Delta, and Epsilon, the AUC-ROC is around 30–35% lower. Despite the training set containing more data, the model is unable to classify well. Sites Charlie and Fazbear are the largest of the original datasets, meaning they are affected least by the additional data; this may explain why their AUC-ROCs are slightly above those of the other sites.
The runtime of training on the fused dataset is 51 min. This is a dramatic increase from the sub-minute runtimes on the individual sites: the fused training set holds roughly fourteen times more data than the largest individual site, yet takes over seventy times longer to train on, showing that runtime grows superlinearly with the amount of data used. Because of this long runtime, only the maximum reduction was performed to balance the classes, and each experiment was performed only once.
Table 32 shows the properties of the reduced fused dataset. Over half of the majority class was reduced to balance the classes, giving a total dataset reduction of 38.9%.
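These figures follow from the class balance of the fused dataset. Undersampling the majority class to parity allows a maximum majority-class reduction of (p_maj − p_min)/p_maj and a total dataset reduction of p_maj − p_min, where p_maj and p_min are the majority and minority class fractions. Back-solving from the reported 38.864% total reduction gives an exact balance of roughly 30.6:69.4 (the 30:70 in Table 30 is rounded), and 0.38864/0.69432 ≈ 55.97% reproduces the reported majority-class reduction.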
Table 33 shows the accuracy of the RF model trained on the reduced fused dataset. Table 34 shows the AUC-ROC. Table 35 shows the class densities of the fused dataset after each reduction method.
Table 32. Reduced fused dataset properties.

Number of Datapoints | Balanced Class Reduction | Total Dataset Reduction
2,816,518 | 55.972% | 38.864%
Table 33. Reduced fused dataset accuracies. Values in bold indicate best-performing reduction strategy.

Site | Random | Central | Lateral | Even | Squash
Alpha | 65.781% | 65.991% | 66.367% | 66.516% | 67.253%
Beta | 75.323% | 75.803% | 75.014% | 74.929% | 75.320%
Charlie | 67.267% | 67.361% | 67.207% | 67.176% | 67.469%
Delta | 66.793% | 66.187% | 66.479% | 67.045% | 66.684%
Epsilon | 69.228% | 68.611% | 68.387% | 68.777% | 69.090%
Fazbear | 87.051% | 87.095% | 86.941% | 86.728% | 87.337%
Table 34. Reduced fused dataset AUC-ROC. Values in bold indicate best-performing reduction strategy.

Site | Random | Central | Lateral | Even | Squash
Alpha | 72.403% | 72.648% | 73.022% | 73.028% | 73.546%
Beta | 75.904% | 76.346% | 75.626% | 75.445% | 75.834%
Charlie | 77.896% | 77.831% | 77.713% | 77.688% | 77.979%
Delta | 74.268% | 73.250% | 74.039% | 74.539% | 74.072%
Epsilon | 73.863% | 73.187% | 72.951% | 73.093% | 73.576%
Fazbear | 86.914% | 86.969% | 86.807% | 86.599% | 87.220%
Table 35. Class density of reduced fused dataset. R: random reduction, C: central reduction, L: lateral reduction, E: even reduction, S: squash reduction.

Class 0 (Not Occupied) R | C | L | E | S | Class 1 (Occupied) R | C | L | E | S
1.040 | 1.039 | 1.027 | 1.029 | 1.032 | 0.995 | 0.975 | 1.007 | 0.8869 | 0.980
The results show that for all sites except Beta and Fazbear, the accuracy decreases further when data are removed. Accuracy averages around 65–75% for all sites except Site Fazbear, which reaches 87%; this is still worse than the performance achieved when training on the individual training datasets. As for AUC-ROC, performance increases from the benchmark for all sites except Charlie and Fazbear. As these two sites have the highest benchmark AUC-ROC, it is notable that they perform worse after data reduction; this may be because data reduction is able to remove more training data from these sites, as there are more data to lose.

3.3. Discussion—Fused Dataset

The fused dataset shows poor performance in both the reduced and non-reduced experiments. Not only are the accuracy and AUC-ROC scores inferior to those of the individually trained models, but the runtime is far longer. This methodology is therefore not appropriate for energy saving or for running on low-compute devices. It does, however, give some insight into the importance of using the correct data: despite each site dataset containing the same types of data (temperature, humidity, and VOC), the differences between sites mean that datasets cannot be fused in this way.
The runtime of training on the fused dataset is 51 min, and 28 min for the reduced fused dataset. While reduction brings a significant decrease, 28 min is still far longer than the sub-minute runtimes observed when training on the individual datasets. Since energy use and CO2 emissions are proportional to runtime, this entails an equivalent increase in energy and CO2 cost, which is incompatible with the aim of moving towards green AI. This shows that using too much data can be detrimental to training in both model performance and cost, despite the similarities between the original data and the additional data.

4. Conclusions

This paper has identified that class density may be used as a metric to qualify a dataset for reduction. The results have shown that for datasets like Site Charlie, which are abundant in data and heavily imbalanced, data reduction on the majority class can improve model performance while reducing the computation required to train the model, since less data must be processed. For datasets like sites Alpha and Delta, which are either abundant in data or highly imbalanced, data reduction can at least maintain performance. For highly balanced or less abundant datasets such as sites Beta, Epsilon, and Fazbear, data reduction is not as beneficial. By calculating the class densities of a dataset while gradually reducing the data, suitability can be assessed: if the class densities converge towards a value of 1, the dataset may be reduced while performance is maintained.
A direction for future work might be to first perform data reduction on each individual site dataset and then attempt dataset fusion; this should ensure that the most important data are retained at the reduction stage. Alternatively, the datasets could be reduced to a much smaller set, with more data removed than is needed to balance the classes. Fusing such heavily reduced datasets may retain the most informative datapoints and improve model reliability in new domains.

Author Contributions

Conceptualization, D.S. and T.K.; methodology, D.S.; software, D.S.; validation, D.S.; formal analysis, D.S. and T.K.; investigation, D.S.; resources, D.S.; data curation, D.S.; writing—original draft preparation, D.S.; writing—review and editing, T.K.; visualization, D.S.; supervision, T.K.; project administration, T.K.; funding acquisition, T.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by InnovateUK, project number 10097909.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The open-source dataset used in this study may be found here: https://springernature.figshare.com/collections/A_High-Fidelity_Residential_Building_Occupancy_Detection_Dataset/5364449 (accessed on 16 April 2024).

Acknowledgments

This research was performed as part of the D-XPERT AI-Based Recommender System for Smart Energy Saving.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
HVAC      Heating, ventilation and air conditioning
T         Temperature
H         Humidity
VOC       Volatile organic compound
ML        Machine learning
AI        Artificial intelligence
PCA       Principal component analysis
AUC-ROC   Area under the receiver operating characteristic curve
RF        Random Forest
CNN       Convolutional Neural Network
LSTM      Long Short-Term Memory
KNN       K-Nearest Neighbour
csv       Comma-Separated Value

Figure 1. Dataset fusion procedure. Data from each site are split between train and test sets at a ratio of 80:20.
Table 1. HPDMobile dataset information. Class balance ratio is between classes ‘Not Occupied’ and ‘Occupied’. Each sensor contains three features: temperature, humidity, and VOC.

Site | Number of Datapoints | Original Number of Sensors / Derived Number of Features | Least Important Sensor | Class Balance Ratio (Not Occ:Occ)
Alpha | 147,750 | 5:15 | 4 | 20:80
Beta | 146,879 | 4:12 | N/A | 40:60
Charlie | 302,399 | 5:15 | 0 | 22:78
Delta | 146,879 | 5:15 | 4 | 21:79
Epsilon | 129,599 | 5:15 | 4 | 24:76
Fazbear | 328,319 | 4:12 | N/A | 47:53
Table 2. Preliminary test results on RF, XGBoost, CNN, and LSTM models with HPDMobile dataset Site Alpha.

Model | Accuracy | AUC-ROC
RF | 98.744% | 97.143%
XGBoost | 95.128% | 93.054%
CNN | 91.021% | 89.783%
LSTM | 85.470% | 85.393%
Table 3. HPDMobile class balance and class density properties, and maximum reduction amounts after class balancing.

Site | Number of Datapoints | Class Balance (Not Occ:Occ) | Balanced Class Max Reduction | Total Dataset Reduction at Max Balancing | Class Density (Not Occ:Occ)
Alpha | 147,750 | 20:80 | 74.912% | 59.887% | 0.674:1.585
Beta | 146,879 | 40:60 | 34.599% | 20.918% | 1.068:1.201
Charlie | 302,399 | 22:78 | 72.111% | 56.386% | 0.639:1.532
Delta | 146,879 | 21:79 | 77.569% | 63.358% | 0.547:1.576
Epsilon | 129,599 | 24:76 | 67.755% | 51.235% | 0.886:1.392
Fazbear | 328,319 | 47:53 | 12.111% | 6.446% | 1.260:0.892
Table 4. Experimental benchmarks of accuracy and AUC-ROC with RF model.

Site | Accuracy | AUC-ROC
Alpha | 98.813% | 98.812%
Beta | 99.613% | 99.613%
Charlie | 99.755% | 99.612%
Delta | 99.589% | 99.271%
Epsilon | 99.692% | 99.574%
Fazbear | 99.367% | 99.368%
Table 29. Runtimes for experiments on each individual site at each reduction amount for experiments to balance classes and without class balancing. A: Site Alpha, B: Site Beta, C: Site Charlie, D: Site Delta, E: Site Epsilon, F: Site Fazbear.

Reduction Amount | Class Balancing Runtimes (s) A | B | C | D | E | F | No Class Balancing Runtimes (s) A | B | C | D | E | F
None | 15 | 15 | 44 | 18 | 15 | 42 | 15 | 15 | 44 | 18 | 15 | 42
5% | 15 | 14 | 43 | 18 | 18 | 49 | 15 | 15 | 41 | 18 | 15 | 42
10% | 14 | 14 | 41 | 18 | 17 | 49 | 14 | 14 | 43 | 18 | 17 | 40
25% | 12 | 12 | 37 | 15 | 13 | - | 12 | 11 | 36 | 15 | 16 | -
50% | 10 | - | 26 | 10 | 10 | - | 9 | - | 27 | 10 | 9 | -
75% | - | - | - | 6 | - | - | - | - | - | 6 | - | -
Max% | 7 | 11 | 20 | 6 | 8 | 48 | 6 | 12 | 18 | 6 | 7 | 40
Table 30. Fused dataset properties.

Number of Datapoints | Class Balance (Not Occ:Occ) | Class Density (Not Occ:Occ)
4,599,960 | 30:70 | 0.624:1.386
Table 31. Fused dataset benchmark.

Site | Accuracy | AUC-ROC
Alpha | 82.576% | 59.915%
Beta | 65.829% | 57.186%
Charlie | 91.316% | 84.657%
Delta | 84.504% | 61.632%
Epsilon | 79.618% | 59.643%
Fazbear | 69.956% | 71.670%