1. Introduction
Deep neural networks (DNNs) have demonstrated remarkable success in various computer vision tasks such as image classification, object detection, and video analysis [1,2,3,4,5,6,7]. However, deploying these networks on resource-constrained devices such as mobile phones, drones, and wearable gadgets remains challenging due to their high computational and memory demands, even though such devices are active targets for intelligent applications built around core visual capabilities like image recognition and object detection. To address these challenges, numerous network compression and acceleration techniques have been developed, including quantization [8,9], low-rank approximation [10,11], knowledge distillation [12], and network pruning [13,14,15,16].
Network pruning, in particular, has gained significant attention as an effective method to reduce the size and complexity of deep neural networks without substantially compromising their performance [14,16,17,18,19,20,21,22,23,24,25]. Pruning methods can be broadly categorized into unstructured pruning and structured pruning. Unstructured pruning removes individual connections (weights), resulting in unstructured sparsity that often requires specific hardware or library support for actual speedup. In contrast, structured pruning removes entire filters and can achieve actual speedup and compression on standard hardware.
Despite the advancements in pruning techniques, several challenges remain. Existing methods often rely on heuristics or complex criteria to determine the importance of filters, which can lead to suboptimal pruning decisions. Furthermore, many methods do not effectively account for the distribution and variability of filter importance across different layers of the network.
To address these challenges, we introduce a novel pruning method that optimizes the subnetwork search space through entropy-guided adjustments; entropy-guided principles have also been widely adopted in other domains [26]. By refining the search space, the proposed method enhances both accuracy and efficiency, enabling the identification of high-performing subnetworks with reduced computational demands. The proposed method systematically focuses on promising regions of the parameter space, resulting in significant improvements in the final pruned model's performance.
The primary contributions of this paper are summarized as follows:
We introduce a novel neural network pruning framework, Entropy-Guided Search Space Optimization (EGSSO), which refines the search space based on information entropy;
We implement outlier detection and normalization techniques to stabilize entropy values and enhance pruning reliability;
We demonstrate the effectiveness of EGSSO through extensive experiments on benchmark datasets, achieving notable gains in both accuracy and computational efficiency over existing pruning methods.
2. Related Works
Network pruning has been extensively studied to reduce the size and complexity of deep neural networks while maintaining acceptable performance. This section reviews various strategies, highlighting their key methodologies and limitations.
Early work on unstructured weight pruning removed individual weights from the network, leading to sparsity that often depended on specialized hardware or software for practical speedup. Several methods used L1/L2 norm pruning, median pruning, or Taylor expansion to estimate weight importance [8,15]. Filter pruning emerged as a structured alternative, removing entire filters to achieve genuine acceleration on standard hardware. Representative approaches include the following:
Ranking and Pruning Filters: Criteria based on weight magnitude, activation statistics, or the Average Percentage of Zero Activations (APoZ) [27].
Reconstruction Error Minimization: Methods such as ThiNet [28] and Neuron Importance Score Propagation (NISP) [29] prune filters with minimal reconstruction error to preserve accuracy, though they can be computationally expensive.
Similarity Measurement: Algorithms relying on the geometric median or clustering, including Filter Pruning via Geometric Median (FPGM) [30], effectively remove redundant filters but can introduce additional overhead.
Recent work introduced more advanced techniques:
Discrimination-aware Pruning: Exploits additional loss functions so that pruned channels retain discriminative power, as shown in Discrimination-aware Channel Pruning (DCP) [31] and Discrimination-aware Kernel Pruning (DKP) [19].
Random Channel Pruning: Provides a baseline by randomly sampling subnetworks [32] but often demands extensive searches and may yield suboptimal results.
Structured Redundancy Reduction (SRR): Builds a graph for each layer to measure redundancy, pruning filters in layers deemed most redundant [33].
Dependency Graph (DepGraph) [34]: Automatically models inter/intra-layer dependencies to facilitate group-level pruning, though it may be limited by complex architectures and hyperparameter sensitivity.
CORING [35]: Employs tensor decomposition (HOSVD) to retain the multidimensional structure of filters. While it can reduce complexity, large-scale applications may still face fine-tuning challenges.
Despite progress, existing methods still exhibit certain limitations:
Randomness in Subnetwork Search: Techniques such as random channel pruning can be inefficient, requiring extensive searching to find high-quality subnetworks [32].
Lack of Layer Importance Consideration: Many methods overlook the variation in filter importance across different layers, risking the removal of crucial filters.
Heuristic-based Decisions: Filters are often pruned based on heuristic criteria that may not accurately capture their overall contribution to performance.
Computational Overhead: Methods that minimize reconstruction error or measure filter similarity can involve heavy computations, reducing pruning efficiency.
Neural network pruning has thus evolved through various trade-offs and strategies, yet the above shortcomings indicate a need for more robust, efficient pruning. The entropy-based approach proposed here seeks to address these gaps by leveraging information entropy and outlier detection to guide pruning decisions, delivering improved accuracy and efficiency for network compression. Recent works introduce optimization-based formulations to learn pruning decisions more explicitly. The differentiable pruning method [16] replaces discrete filter selection with continuous mask parameterization and optimizes pruning via optimal-transport-based soft top-k operators. These methods offer fine-grained sparsity control but require iterative Sinkhorn updates and backpropagation through relaxed masks, thereby increasing computational overhead. Bi-level optimization frameworks [23] formulate pruning as a nested optimization problem in which model weights are updated in the inner loop, while pruning parameters are optimized in the outer loop. Although such methods can produce high-quality subnetworks, they typically rely on implicit gradients or second-order approximations, resulting in substantial training cost. Our proposed EGSSO avoids continuous mask relaxation and bi-level optimization entirely. It leverages normalized information entropy to refine the pruning search space and performs lightweight subnetwork sampling, enabling efficient and stable pruning without additional gradient dependencies.
3. Methods
This section introduces our neural network pruning algorithm based on entropy-guided search space optimization (EGSSO). Traditional methods often rely on uniform or random search spaces, leading to suboptimal outcomes. In contrast, EGSSO calculates the entropy of the weights in each convolutional layer to guide pruning: layers with higher entropy retain more filters, while lower-entropy layers are pruned more aggressively. Outlier detection and normalization further refine these entropy values, ensuring stability and directing the search toward promising subnetwork configurations.
Figure 1 illustrates the EGSSO workflow, from baseline training to iterative pruning and evaluation. As shown in Algorithm 1 and visualized in Figure 1, EGSSO proceeds in four stages: (a) layer-wise information entropy computation; (b) outlier detection and normalization; (c) subnetwork searching with entropy-guided bounds; and (d) iterative candidate evaluation.
Algorithm 1: EGSSO, entropy-guided search space optimization for pruning (pseudocode; see Section 3.6 for a step-by-step description).
3.1. Information Entropy for Efficient Neural Network Pruning
Consider a deep convolutional neural network $\mathcal{N}$, where the $l$-th layer has a weight tensor $W^{(l)} \in \mathbb{R}^{c_{\mathrm{out}} \times c_{\mathrm{in}} \times k \times k}$, with $c_{\mathrm{in}}$ and $c_{\mathrm{out}}$ denoting the number of input and output channels, and $k$ the kernel size. The convolution operation at layer $l$ is defined as follows:

$$Y_i^{(l)} = W_i^{(l)} * X^{(l)}, \quad i = 1, \dots, c_{\mathrm{out}}, \tag{1}$$

where $X^{(l)}$ is the input tensor, $Y_i^{(l)}$ is the output of the $i$-th filter, and $W_i^{(l)}$ are its weights. Structured pruning seeks a subset of filters $\mathcal{S}^{(l)} \subseteq \{1, \dots, c_{\mathrm{out}}\}$ that minimizes the loss $\mathcal{L}$ over the dataset $\mathcal{D}$, subject to a sparsity constraint:

$$\min_{\mathcal{S}^{(l)}} \; \mathcal{L}\big(\mathcal{N}(\mathcal{S}^{(l)}); \mathcal{D}\big) \quad \text{s.t.} \quad |\mathcal{S}^{(l)}| \le \kappa, \tag{2}$$

where $\kappa$ denotes the target number of remaining filters. As defined in Equation (3), the information entropy [36] measures the uncertainty in a weight distribution, and we compute the layer-wise entropy according to Equation (4). For a discrete random variable $X$ with outcomes $x_1, \dots, x_n$ and probabilities $p(x_i)$:

$$H(X) = -\sum_{i=1}^{n} p(x_i) \log p(x_i). \tag{3}$$
In neural networks, a higher entropy weight distribution suggests richer, more critical information, while lower entropy implies less contribution to the network’s performance.
To compute layer-wise entropy, the filters in each convolutional layer are flattened and binned into 50 bins to form a probability density function. The entropy is then

$$H = -\sum_{i=1}^{50} p_i \log (p_i + \epsilon), \tag{4}$$

where $p_i$ is the probability in bin $i$, and $\epsilon$ avoids the logarithm of zero. This entropy guides our pruning: layers with higher entropy values remain largely intact, whereas layers with lower entropy are pruned more aggressively. Each layer's entropy is stored in a dictionary, and layers not meant to be pruned are skipped. Outlier detection and normalization further stabilize these entropy measurements, preventing extreme values from distorting the pruning decisions.
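As a concrete illustration, the sketch below computes this 50-bin entropy for every Conv2d layer of a PyTorch model; the helper names (`layer_entropy`, `compute_entropies`) and the rule of treating every Conv2d as prunable are our own simplifying assumptions, not the paper's exact implementation.

```python
import numpy as np
import torch
import torch.nn as nn

def layer_entropy(weight: torch.Tensor, bins: int = 50, eps: float = 1e-12) -> float:
    """Entropy of a layer's weight distribution, as in Equation (4)."""
    w = weight.detach().cpu().numpy().ravel()   # flatten all filters of the layer
    counts, _ = np.histogram(w, bins=bins)      # 50-bin empirical distribution
    p = counts / counts.sum()                   # probability mass per bin
    return float(-np.sum(p * np.log(p + eps)))  # eps avoids log(0)

def compute_entropies(model: nn.Module) -> dict:
    """Collect each convolutional layer's entropy in a dictionary."""
    entropies = {}
    for name, module in model.named_modules():
        if isinstance(module, nn.Conv2d):       # layers not meant to be pruned would be skipped here
            entropies[name] = layer_entropy(module.weight)
    return entropies
```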
By quantitatively assessing each layer’s information content, EGSSO focuses on retaining the filters most crucial to network performance, yielding a more efficient and effective pruning process.
3.2. Outlier Detection and Normalization of Information Entropy
The second step in our method involves detecting and replacing outliers among the entropy values, as they can distort the subsequent normalization process and degrade the performance of the subnetwork search. To address this, we first calculate the mean ($\mu$) and standard deviation ($\sigma$) of the entropy values across all convolutional layers. We then use the z-score method, as defined in Equation (5), to identify outliers [37,38]; the z-score measures how far a data point deviates from the mean in units of standard deviations:

$$z_i = \frac{H_i - \mu}{\sigma}, \tag{5}$$

where $H_i$ is the entropy of the $i$-th convolutional layer, and $\mu$ and $\sigma$ are the mean and standard deviation of the entropy values, respectively.
Figure 2 visualizes the entropy values across 59 layers, with the horizontal axis showing the raw entropy values and the vertical axis listing the convolutional layers of the network. Colored rectangles indicate the outlier detection results at different z-score thresholds; each rectangle is labeled with its corresponding z-score, highlighting the detected outliers for each threshold.
Once the outliers are detected, we replace each outlier with the nearest non-outlying value. Detecting and replacing outliers in this way yields a more stable and consistent entropy distribution, which is crucial for the subsequent normalization and subnetwork sampling steps and enhances the robustness and effectiveness of our pruning strategy.
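A minimal sketch of this step, continuing from an entropy dictionary keyed by layer name, is given below; the threshold default and the tie-breaking by absolute distance are illustrative assumptions.

```python
import numpy as np

def replace_outliers(entropies: dict, z_thresh: float = 1.0) -> dict:
    """Replace outlier entropies (Equation (5)) with the nearest non-outlying value."""
    names = list(entropies)
    values = np.array([entropies[n] for n in names])
    mu, sigma = values.mean(), values.std()
    z = np.abs(values - mu) / (sigma + 1e-12)   # z-score of each layer's entropy
    inliers = values[z <= z_thresh]
    if inliers.size == 0:                       # degenerate case: nothing to anchor on
        return dict(entropies)
    corrected = values.copy()
    for i in np.where(z > z_thresh)[0]:
        # nearest non-outlying value by absolute distance
        corrected[i] = inliers[np.argmin(np.abs(inliers - values[i]))]
    return dict(zip(names, corrected.tolist()))
```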
The third step in our method involves normalizing the entropy values so that they are on a comparable scale. This normalization uses the min–max technique, as defined in Equation (6), which scales the entropy values to the range $[0, 1]$:

$$\hat{H}_i = \frac{H_i - H_{\min}}{H_{\max} - H_{\min}}, \tag{6}$$

where $H_i$ represents the original entropy value of the $i$-th convolutional layer, $H_{\min}$ is the minimum entropy value across all layers, and $H_{\max}$ is the maximum. The normalized entropy value $\hat{H}_i$ lies in the range $[0, 1]$.
The primary reason for normalizing entropy values to the range $[0, 1]$ is to facilitate the subsequent subnetwork search. With all entropy values on the same scale, we can systematically determine appropriate pruning levels across different layers, ensuring a balanced and efficient pruning strategy.
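In code, Equation (6) reduces to a few lines; the sketch below continues from the corrected entropy dictionary of the previous step.

```python
def normalize_entropies(entropies: dict) -> dict:
    """Min-max normalize entropy values to [0, 1], as in Equation (6)."""
    values = list(entropies.values())
    h_min, h_max = min(values), max(values)
    span = (h_max - h_min) or 1.0               # guard against a constant entropy vector
    return {name: (h - h_min) / span for name, h in entropies.items()}
```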
3.3. Subnetwork Search Algorithm Design
This section describes the subnetwork sampling rules, explains how the search space is generated, and compares the search space of the proposed method with that of an existing approach. For each convolutional layer, the retention ratio $r_i$ is sampled from an interval whose lower and upper bounds are determined by the normalized entropy value $\hat{H}_i$ of the $i$-th layer; $r_{\min}$ is a minimum retention ratio parameter (typically 0.2), $\alpha$ and $\beta$ are search-space parameters that scale the bounds, and $\gamma$ is a reduction factor. This sampling strategy adjusts each layer's retention ratio according to its entropy, aiming for more informed pruning decisions.
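Since the bound formulas are stated only symbolically above, the sketch below uses one plausible instantiation—lower bound $\max(r_{\min}, \alpha\hat{H}_i - \gamma)$ and upper bound $\min(1, \beta\hat{H}_i)$—purely as an illustrative assumption; only the roles of $r_{\min}$, $\alpha$, $\beta$, and $\gamma$ follow the text.

```python
import random

def sample_retention_ratios(norm_entropies: dict, r_min: float = 0.2,
                            alpha: float = 1.0, beta: float = 1.0,
                            gamma: float = 0.1) -> dict:
    """Sample a per-layer retention ratio inside entropy-guided bounds.

    The bound formulas here are illustrative assumptions; the paper defines
    them via r_min, the search-space parameters alpha and beta, and a
    reduction factor gamma.
    """
    ratios = {}
    for name, h in norm_entropies.items():
        low = max(r_min, alpha * h - gamma)       # entropy-dependent lower bound
        high = min(1.0, max(low, beta * h))       # entropy-dependent upper bound
        ratios[name] = random.uniform(low, high)  # sample within the interval
    return ratios
```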
Figure 3 illustrates a comparison of the existing method and the proposed entropy-guided method on a detection network trained on the VisDrone dataset. Along the horizontal axis are 59 convolutional layers. The vertical axis shows the retention ratio search space from 0.0 to 1.0. Two bars are shown for each layer, one for the existing method and one for the proposed method.
In the existing method, the orange bar on the left extends uniformly from the minimum ratio of 0.2 up to 1.0, thus covering a broad but unstructured search space. While this allows for flexibility, it can include many suboptimal regions that slow down the search and can degrade overall subnetwork quality.
In contrast, the proposed method uses an entropy-based approach for each layer, depicted by the blue bar on the right. Layers with higher entropy values are assigned both upper and lower bounds at higher retention ratios, reflecting their richer information content and the need to preserve more filters. Conversely, layers with lower entropy are assigned proportionally smaller search intervals. As a result, the search range naturally adapts to each layer’s importance, and the dashed red line at 0.2 indicates the common lower limit below which neither bar can drop.
By restricting the search space to more relevant intervals, the proposed method increases the likelihood of discovering subnetwork configurations with stronger performance while also reducing unnecessary exploration of less promising ratios. Experiments show that this tailored strategy not only accelerates the search process but also leads to pruned models with better overall performance.
3.4. Theoretical Analysis of Outlier Detection in Subnetwork Search
(1) Setup and Notation. Let $H_1, \dots, H_n$ denote the entropy values of $n$ convolutional layers. We define the following:

$$H_{\min} = \min_{1 \le i \le n} H_i, \qquad H_{\max} = \max_{1 \le i \le n} H_i.$$

To ensure these entropy values are comparable across layers, we apply min–max normalization:

$$\hat{H}_i = \frac{H_i - H_{\min}}{H_{\max} - H_{\min}},$$

where $\hat{H}_i$ ideally captures the relative importance of each layer. Larger values of $\hat{H}_i$ indicate that a layer's weight distribution is more diverse or informative.
(2) Impact of Outliers on Normalization and Search. In practice, a single extreme outlier $H_{\max}$ can inflate the range $H_{\max} - H_{\min}$, forcing most layers' entropy values $H_i$ to be much smaller by comparison. Consequently,

$$\hat{H}_i = \frac{H_i - H_{\min}}{H_{\max} - H_{\min}} \approx 0 \quad \text{for most } i,$$

meaning many $\hat{H}_i$ values become compressed into a narrow interval, thereby losing important distinctions among layers.
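A small numeric example with made-up entropy values makes this compression concrete:

```python
import numpy as np

h = np.array([2.0, 2.1, 2.2, 2.3, 9.0])      # the last layer is an extreme outlier
h_norm = (h - h.min()) / (h.max() - h.min())
print(h_norm)   # approx. [0, 0.014, 0.029, 0.043, 1] -- four layers squeezed near 0

h[-1] = 2.3                                   # replace the outlier with the nearest non-outlier
h_norm = (h - h.min()) / (h.max() - h.min())
print(h_norm)   # approx. [0, 0.333, 0.667, 1, 1] -- layer distinctions restored
```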
Following the subnetwork search algorithm of the previous subsection, each layer $i$ is assigned a retention ratio $r_i$ based on $\hat{H}_i$. If $\hat{H}_i$ is artificially low because an outlier inflates $H_{\max}$, the resulting $r_i$ will also be disproportionately low, leading to excessive pruning of an otherwise critical layer. As shown in Figure 4, which visualizes the outlier detection and replacement process, removing extreme anomalous values not only eliminates this distortion but also reduces the overall variance of the normalized entropy, demonstrating that outlier correction effectively stabilizes the entropy-guided search space.
(3) Why Outlier Detection is Necessary. Without outlier correction, a single extreme can skew the entire normalization scale, compromising the search space for pruning ratios. After outlier replacement, the effective range of entropies reflects a more realistic variability among layers. Consequently, the subsequent subnetwork search can assign retention ratios appropriately, preserving high-entropy (critical) layers and pruning low-entropy (less influential) ones more reliably.
In summary, detecting and replacing outliers is pivotal for maintaining the integrity of entropy-guided pruning, ensuring stable subnetwork sampling and preventing a few extreme values from dictating the network’s compression strategy.
3.5. Pruning Based on $L_1$ Norms
Once the retention ratio $r_i$ for each layer is determined, we use the $L_1$ norm to select filters for retention. For a convolutional layer with weight tensor $W$, the $L_1$ norm of the $k$-th filter is defined as follows:

$$\|W_k\|_1 = \sum_{c} \sum_{u} \sum_{v} \big|W_k(c, u, v)\big|,$$

where $W_k$ represents the weights of the $k$-th filter, and $c$, $u$, and $v$ are the channel and spatial indices within the filter.

After computing the $L_1$ norm for all filters in the $i$-th layer, we determine the number of filters to retain as

$$n_i = \lceil r_i \cdot N_i \rceil,$$

where $N_i$ is the total number of filters in that layer, and $r_i$ is the corresponding retention ratio. We then sort all filters by their $L_1$ norms and select the top $n_i$ filters:

$$\mathcal{S}_i = \operatorname{Top-}n_i\big(\{\|W_k\|_1 : k = 1, \dots, N_i\}\big).$$

We construct a binary mask $M$ to retain only the selected filters:

$$M_k = \begin{cases} 1, & k \in \mathcal{S}_i, \\ 0, & \text{otherwise}, \end{cases}$$

and apply it to the weight tensor $W$:

$$W' = M \odot W,$$

where $W'$ denotes the pruned model parameters after channel pruning. We then compute the FLOPs of the pruned and original models; if the retained computation matches the prescribed Target Remaining Rate within a small tolerance $E$, the pruning configuration is accepted. We set the Target Remaining Rate to 0.4, 0.5, and 0.6, and $E$ to 0.02.
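The selection and masking steps translate directly into a few tensor operations; the following minimal PyTorch sketch returns a boolean filter mask, leaving the propagation of kept indices to subsequent layers (as in Figure 1) to the surrounding pruning code.

```python
import math
import torch

def l1_filter_mask(weight: torch.Tensor, ratio: float) -> torch.Tensor:
    """Keep the top ceil(ratio * N) filters of a conv weight (N, C, Kh, Kw) by L1 norm."""
    n_filters = weight.shape[0]
    n_keep = max(1, math.ceil(ratio * n_filters))  # number of filters to retain
    l1 = weight.abs().sum(dim=(1, 2, 3))           # per-filter L1 norm
    keep_idx = torch.topk(l1, n_keep).indices      # indices of the strongest filters
    mask = torch.zeros(n_filters, dtype=torch.bool)
    mask[keep_idx] = True
    return mask
```

Applying the mask zeroes the pruned filters; physically removing them would instead slice the tensor with `weight[mask]` and shrink the next layer's input channels accordingly.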
3.6. Pruning Algorithm Design Based on the Entropy-Guided Search Space Optimization
This section provides a comprehensive description of our proposed EGSSO algorithm, encompassing the following key components: entropy calculation, outlier detection, normalization, subnetwork sampling, and iterative evaluation.
Initially, we trained the neural network to establish a baseline performance. We then computed the information entropy of each convolutional layer’s weights, quantifying their uncertainty or randomness within the network. Layers with higher entropy values tend to contain more diverse and informative weights, while lower entropy layers are more likely to be redundant.
To ensure accurate entropy-based pruning, outlier detection was performed to identify and replace extreme values that could otherwise skew the normalization. This correction step stabilizes the entropy distribution and improves the reliability of subsequent pruning decisions.
Next, the entropy values were normalized via min–max scaling to a common range of [0, 1]. This provides a consistent basis for determining the retention ratios across layers. Specifically, layers with higher normalized entropy retained a larger proportion of filters, whereas layers with lower entropy were pruned more aggressively. The retention ratio for each layer was obtained by computing upper and lower limits and randomly sampling within these bounds, ensuring an optimal balance between model performance and parameter reduction.
If the subnetwork obtained through the above pruning scheme meets the FLOPs (floating-point operations) requirement, it is deemed a successful candidate. We then evaluate its performance using adaptive batch normalization [32], which recalibrates batch normalization statistics to reflect the reduced number of filters. This step offers a quick way to assess the subnetwork's potential accuracy without expensive fine-tuning. If the performance meets the desired criteria, the subnetwork is retained; otherwise, the process iterates, refining the retention ratios and pruning masks until an optimal subnetwork is found.
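Adaptive BN evaluation amounts to re-estimating BatchNorm running statistics on a few training batches without updating any weights; a minimal sketch, assuming a PyTorch model and data loader, follows.

```python
import torch
import torch.nn as nn

@torch.no_grad()
def adaptive_bn(model: nn.Module, loader, num_batches: int = 50, device: str = "cuda"):
    """Recalibrate BN running statistics for a pruned subnetwork (no weight updates)."""
    for m in model.modules():
        if isinstance(m, nn.BatchNorm2d):
            m.reset_running_stats()   # discard the pre-pruning statistics
            m.momentum = None         # use a cumulative moving average instead
    model.train()                     # BN layers update their stats only in train mode
    for i, (images, _) in enumerate(loader):
        if i >= num_batches:
            break
        model(images.to(device))      # each forward pass refreshes the BN stats
    model.eval()
```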
Finally, once the optimal subnetwork is identified, we perform fine-tuning to restore any lost accuracy. The training procedure is based on the YOLOv5 framework [39].
4. Experiments
4.1. Settings
To validate the effectiveness of the proposed method, extensive experiments were conducted on multiple datasets, including COCO [40] and VisDrone [41], using the YOLOv5 neural network framework [39].
An accurate evaluation of subnetwork performance before fine-tuning plays a crucial role in the pruning pipeline. In this work, we follow the approach proposed by EagleEye [32], which employs adaptive batch normalization (BN) to recalibrate BN statistics for each pruned subnetwork. EagleEye's quantitative correlation analysis indicates that subnetwork performance assessed with adaptive BN correlates strongly with the final fine-tuned accuracy, with a Pearson correlation coefficient [42] of up to 0.793. This strong correlation makes the subnetwork selection criterion more reliable, thereby improving the effectiveness and efficiency of our overall pruning procedure.
4.2. Evaluation Metrics
To evaluate the performance of our proposed pruning method, we utilize two widely accepted metrics: Mean Average Precision (mAP) and Floating Point Operations (FLOPs).
Mean Average Precision (mAP). First, the IoU is computed for each predicted bounding box with respect to the ground-truth boxes. Precision and recall are then calculated as

$$\text{Precision} = \frac{TP}{TP + FP}, \qquad \text{Recall} = \frac{TP}{TP + FN}.$$

The average precision (AP) of each class is the area under its precision–recall curve, and the mean average precision (mAP) is computed by averaging the APs across all classes:

$$\text{mAP} = \frac{1}{N} \sum_{i=1}^{N} AP_i,$$

where $N$ is the number of classes, and $AP_i$ is the average precision for class $i$. We specifically use mAP@0.5, which refers to the mAP calculated at an IoU threshold of 0.5.
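For reference, a compact sketch of the IoU and precision/recall computations underlying these metrics is given below; a full mAP evaluation additionally builds per-class precision–recall curves and integrates the area under them, which detection toolkits handle.

```python
def iou(box_a, box_b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter + 1e-12)

def precision_recall(tp: int, fp: int, fn: int):
    """Precision and recall from true/false positive and false negative counts."""
    return tp / (tp + fp + 1e-12), tp / (tp + fn + 1e-12)

def mean_ap(ap_per_class):
    """mAP: the mean of the per-class average precisions."""
    return sum(ap_per_class) / len(ap_per_class)
```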
Floating Point Operations (FLOPs). The FLOPs for a convolutional layer can be computed using the following formula:

$$\text{FLOPs}_{\text{conv}} = 2 \cdot H \cdot W \cdot C_{\text{in}} \cdot C_{\text{out}} \cdot K_h \cdot K_w,$$

where $H$ and $W$ are the height and width of the output feature map, $C_{\text{in}}$ and $C_{\text{out}}$ are the numbers of input and output channels, and $K_h$ and $K_w$ are the height and width of the convolution kernel. For a fully connected layer, the FLOPs can be calculated as follows:

$$\text{FLOPs}_{\text{fc}} = 2 \cdot I \cdot O,$$

where $I$ and $O$ are the numbers of input and output units, respectively. The total FLOPs for a neural network is the sum of the FLOPs of all its layers.
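These formulas map directly to code; the sketch below assumes the common convention of counting a multiply–accumulate as two operations, matching the reconstructed formulas above.

```python
def conv_flops(h_out: int, w_out: int, c_in: int, c_out: int,
               k_h: int, k_w: int) -> int:
    """FLOPs of a convolutional layer (multiplications and additions both counted)."""
    return 2 * h_out * w_out * c_in * c_out * k_h * k_w

def fc_flops(n_in: int, n_out: int) -> int:
    """FLOPs of a fully connected layer."""
    return 2 * n_in * n_out

# Example: a 3x3 conv producing a 56x56 map with 64 -> 128 channels, plus a classifier head
total = conv_flops(56, 56, 64, 128, 3, 3) + fc_flops(512, 1000)
```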
4.3. Comparison of Search Results
In this section, we compare the subnetwork search results of the proposed method (EGSSO) against existing random-pruning methods [24,32] for different pruning retention ratios (0.4, 0.5, 0.6) on both the COCO and VisDrone datasets. All results discussed here correspond to subnetworks before fine-tuning.
Figure 5 integrates all experimental results into a single four-row layout: (1) top row: COCO histograms at 0.4, 0.5, 0.6; (2) second row: VisDrone histograms at 0.4, 0.5, 0.6; (3) third row: COCO scatter plots at the same ratios; (4) bottom row: VisDrone scatter plots. In each histogram, the horizontal axis indicates the mAP range, while the vertical axis gives the frequency of subnetworks falling into each bin. Subnetworks from the proposed method (blue) tend to occupy higher mAP bins more frequently, whereas the existing method (red) concentrates more mass in lower ranges. For example, at a retention ratio of 0.4 on COCO (top-left histogram), EGSSO shows a notably heavier tail in the high-mAP region compared to the existing method. A similar pattern is observed on VisDrone (second-row histograms), where EGSSO consistently yields a more favorable mAP distribution.
Turning to the scatter plots (Figure 6), we visualize the mAP of each sampled subnetwork (vertical axis) against its identifier (horizontal axis). Red points correspond to the existing method and blue points to EGSSO. We also highlight the top 10 subnetworks from each approach with green (existing) or gold (proposed) markers. Notably, EGSSO typically occupies higher mAP bands, with its top subnetworks reaching significantly above the red markers. This clear separation underscores EGSSO's capability to search more effectively for high-performing subnetworks under various pruning ratios.
Table 1 further quantifies these observations. Remaining Rate denotes the target pruning ratio (e.g., 40%, 50%, 60%). Max-Existing and Max-Proposed record the highest mAP reached by the existing method and EGSSO, respectively. Search-Count-Existing Max is the number of subnetworks the existing method sampled before achieving its maximum mAP. Search-Count-First-Exceed indicates how many subnetworks EGSSO sampled before surpassing the existing method's best mAP for the first time. Count-Exceeding-Existing Max shows how many of EGSSO's subnetworks exceeded the existing method's top mAP.
Analyzing these metrics, the results show that EGSSO not only achieves a higher maximum mAP in every scenario but also does so more efficiently. For instance, at a 40% remaining rate on COCO, the existing method requires 985 searches to reach its best mAP of 0.01367, whereas EGSSO first surpasses that value at its 12th sampled subnetwork and ultimately attains a maximum of 0.02566 mAP. Similar patterns emerge at 50% and 60% on both COCO and VisDrone, demonstrating EGSSO's robust advantage in search efficiency and performance.
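The search-efficiency columns of Table 1 can be derived mechanically from the two methods' sampled mAP sequences; a short sketch, with hypothetical list inputs, follows.

```python
def search_metrics(existing_maps: list, proposed_maps: list):
    """Derive the Table 1 columns from per-sample mAP sequences of both methods."""
    max_existing = max(existing_maps)
    max_proposed = max(proposed_maps)
    # searches the existing method needed before reaching its own maximum (1-indexed)
    count_existing_max = existing_maps.index(max_existing) + 1
    # index of the first EGSSO sample that beats the existing method's best, if any
    first_exceed = next(
        (i + 1 for i, m in enumerate(proposed_maps) if m > max_existing), None)
    # how many EGSSO samples exceed the existing method's best
    count_exceeding = sum(m > max_existing for m in proposed_maps)
    return (max_existing, max_proposed, count_existing_max,
            first_exceed, count_exceeding)
```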
In summary, the distributional evidence from the Figure 5 histograms and Figure 6 scatter plots, together with the summarized metrics in Table 1, confirms that EGSSO outperforms the existing random-pruning methods across a range of pruning ratios. By leveraging layer-wise entropy to guide retention ratios, EGSSO effectively restricts the search space to more promising subnetworks, leading to consistently higher mAP and faster convergence in subnetwork discovery.
4.4. Performance Comparison of Fine-Tuned Subnetwork
After obtaining candidate subnetworks, it is crucial to evaluate their performance after fine-tuning. Table 2 and Table 3 present the post-fine-tuning mAP of EGSSO and several existing methods on two YOLOv5 variants (YOLOv5m and YOLOv5L) evaluated on the COCO dataset. We report mAP@0.5 and mAP@0.5:0.95 to measure detection accuracy under different IoU thresholds, along with FLOPs and the number of parameters to reflect computational and model-size efficiency.
As shown in Table 2, our EGSSO approach achieves competitive accuracy on YOLOv5m (mAP@0.5 of 62.1% and mAP@0.5:0.95 of 44.4%), outperforming several existing pruning or lightweight baselines such as EagleEye, PAGCP, and DSD while reducing FLOPs to 24.8 G and parameters to only 6.18 M. This balance between accuracy and model size indicates EGSSO's potential for resource-constrained environments that still demand high detection accuracy. Since PAGCP belongs to gradient-based saliency pruning, where channel importance is inferred from performance-aware gradients, comparing against it provides a qualitative reference. EGSSO still achieves higher accuracy with fewer FLOPs, suggesting that entropy-guided search-space refinement can outperform gradient-based saliency estimation while maintaining stronger efficiency.
Similarly, for YOLOv5L (Table 3), EGSSO maintains superior mAP at different remaining rates (e.g., 64.5% at a 60% rate) and exhibits fewer parameters and lower GFLOPs than its competitors. Notably, when the target remaining rate is set to 60%, EGSSO yields 67.4 GFLOPs (58.3% of the baseline) with a mAP@0.5 of 64.5%, surpassing methods like HFP [45] and TCFP [46] by a significant margin in terms of overall efficiency.
Overall, these fine-tuned results validate the robustness of EGSSO across different YOLOv5 variants. By leveraging entropy-guided pruning and refining the subnetwork search space, EGSSO consistently finds subnetworks that preserve accuracy while reducing computational cost. This highlights the method’s effectiveness in practical deployment scenarios, where models must balance performance, speed, and memory footprint.
4.5. Ablation Study
This section analyzes the effect of each component in Entropy-Guided Search Space Optimization (EGSSO), focusing on outlier detection under different z-score thresholds and the impact on subnetwork performance (before fine-tuning).
4.5.1. Motivation and Setup
The core idea of EGSSO is to guide pruning ratios according to layer-wise entropy values, but outlier detection can further refine these values by removing anomalous measurements that would otherwise distort the min–max normalization. To assess this, several z-score thresholds are tested for detecting outlier entropy values: a smaller threshold flags more entropy values as outliers, while a larger threshold is more lenient. Figure 7 reports how many entropy values are deemed outliers at each threshold on COCO (left) and VisDrone (right).
4.5.2. Threshold Effect on Outlier Counts
Figure 7 shows that, at a z-score threshold of 0.0 or 0.5, nearly all 59 layers contain entropy values flagged as outliers. The count drops rapidly at even moderate thresholds such as 1.0 or 1.5 and tends to zero beyond 2.5. This suggests that a moderate threshold (1.0 or 1.5) focuses on genuine anomalies without over-filtering normal entropy values.
4.5.3. Influence on Subnetwork Search
To examine how outlier detection contributes to final subnetwork quality (before fine-tuning), four configurations are compared: (1) existing random pruning, (2) entropy-guided pruning without outlier detection, (3) entropy-guided pruning with z = 1.0, and (4) entropy-guided pruning with z = 1.5. Subnetworks are ranked by mAP, and the top 20 are displayed in Figure 8 for COCO (left) and VisDrone (right) at a pruning retention ratio of 0.4.
Figure 8 indicates that existing random pruning yields lower mAP overall. The entropy-guided approach without outlier detection performs better but still trails the versions that include outlier detection. Using z = 1.0 or z = 1.5 shifts the top-20 curves upward, showing higher mAP among the best subnetworks on both datasets. The slightly relaxed threshold (z = 1.5) preserves more mildly atypical entropy values, sometimes leading to a higher first-ranked or fifth-ranked mAP.
In summary, these ablation results validate that outlier detection effectively refines the entropy distribution, boosting the subnetwork sampling quality. A moderate threshold strikes a good balance between ignoring extremely distorted layers and retaining legitimate, high-entropy ones. Thus, we include outlier detection in our final EGSSO pipeline, as it consistently leads to improved top subnetwork performance across datasets and pruning ratios.
5. Discussion
In this paper, EGSSO demonstrates significant improvements in pruning efficiency and accuracy preservation on object detection architectures. By substantially reducing FLOPs and parameter counts while maintaining competitive mAP performance, our entropy-guided search-space optimization proves to be an effective strategy for structured pruning. These results highlight the strength of leveraging information entropy and outlier correction to refine the search space and discover high-quality subnetworks efficiently.
Hardware-level evaluation is also an essential aspect of assessing the practical deployability of pruning methods. While this paper focuses on algorithmic efficiency in terms of FLOPs and parameter reduction, future work will extend our analysis to real edge platforms, including NVIDIA Jetson modules, ARM-based mobile SoCs, and dedicated AI accelerators. Systematic measurements of inference latency, memory footprint, and energy consumption on these devices will provide a more comprehensive understanding of EGSSO’s effectiveness in resource-constrained settings and further validate its suitability for real-world deployment.
Although this paper focuses on object detection models, EGSSO's framework is architecture-agnostic. The entropy computation, outlier correction, and search-space refinement operate purely on layer weights, making the method naturally applicable to other visual tasks. Future research will explore extending EGSSO to image classification architectures such as ResNet-50 [47] and lightweight semantic segmentation models such as DeepLab [48]. These investigations will help further verify the generality and scalability of EGSSO across a broader range of deep learning applications.
6. Conclusions
This paper introduces EGSSO, an entropy-guided pruning framework that integrates outlier detection to reduce parameters while preserving accuracy. By refining subnetwork sampling and iteratively optimizing the pruned model, EGSSO effectively focuses computational resources on the most informative parts of the network, resulting in a more efficient and accurate pruning process. Experimental results on multiple benchmarks show that EGSSO not only discovers high-performing subnetworks more frequently but also achieves these results with fewer search iterations and reduced pruning time cost. Fine-tuning on YOLOv5m and YOLOv5L further verifies EGSSO’s ability to maintain strong mAP with fewer FLOPs and parameters, underscoring its suitability for deployment in resource-constrained settings. Future extensions could investigate EGSSO’s applicability to other architectures, such as Transformers, and explore combinations with quantization or knowledge distillation for additional efficiency gains. These avenues may broaden EGSSO’s scope as a robust and scalable approach to neural network compression in real-world applications.
Author Contributions
Conceptualization, Y.Q. and L.N.; methodology, Y.Q. and L.N.; software, Y.Q. and L.N.; validation, Y.Q., L.N., F.S. and Z.C.; formal analysis, Y.Q. and L.N.; investigation, Y.Q. and L.N.; resources, F.S. and Z.C.; data curation, L.N., F.S. and Z.C.; writing—original draft preparation, Y.Q. and L.N.; writing—review & editing, Y.Q. and L.N.; visualization, Y.Q. and L.N.; supervision, K.Y.; project administration, K.Y.; funding acquisition, K.Y. Y.Q. and L.N. contributed equally to this work and share first authorship. All authors have read and agreed to the published version of the manuscript.
Funding
This research received no external funding.
Data Availability Statement
The datasets are available on the project pages of COCO [40] and VisDrone [41].
Acknowledgments
This work was supported by JSPS KAKENHI 22H00548 and JST CRONOS JPMJCS24K4.
Conflicts of Interest
The authors declare no conflicts of interest.
References
- Kirillov, A.; Mintun, E.; Ravi, N.; Mao, H.; Rolland, C.; Gustafson, L.; Xiao, T.; Whitehead, S.; Berg, A.C.; Lo, W.Y.; et al. Segment anything. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Paris, France, 2–6 October 2023; pp. 4015–4026. [Google Scholar]
- Qiu, Y.; Niu, L.; Sha, F. Multipath 3D-Conv encoder and temporal-sequence decision for repetitive-action counting. Expert Syst. Appl. 2024, 249, 123760. [Google Scholar] [CrossRef]
- Liu, Z.; Ning, J.; Cao, Y.; Wei, Y.; Zhang, Z.; Lin, S.; Hu, H. Video swin transformer. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; pp. 3202–3211. [Google Scholar]
- Zhu, L.; Wang, X.; Ke, Z.; Zhang, W.; Lau, R.W. Biformer: Vision transformer with bi-level routing attention. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 17–24 June 2023; pp. 10323–10333. [Google Scholar]
- Niu, L.; Liao, J.; Sha, F.; Cheng, Z.; Qiu, Y. An Adaptive Auxiliary Training Method of Autoencoders and Its Application in Anomaly Detection. In Proceedings of the International Conference on Neural Information Processing, Changsha, China, 20–23 November 2023; Springer: Singapore, 2023; pp. 524–540. [Google Scholar]
- Qiu, Y.; Sha, F.; Niu, L. DKA-YOLO: Enhanced Small Object Detection via Dilation Kernel Aggregation Convolution Modules. IEEE Access 2024, 12, 187353–187366. [Google Scholar] [CrossRef]
- Irie, K.; Yicheng, Q.; Nishikawa, K. On Improving the Accuracy of Object Detection for High Resolution Images Based on SSD. In Proceedings of the 2021 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC), Tokyo, Japan, 14–17 December 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 392–399. [Google Scholar]
- Rokh, B.; Azarpeyvand, A.; Khanteymoori, A. A comprehensive survey on model quantization for deep neural networks in image classification. ACM Trans. Intell. Syst. Technol. 2023, 14, 1–50. [Google Scholar]
- Xu, K.; Han, L.; Tian, Y.; Yang, S.; Zhang, X. Eq-net: Elastic quantization neural networks. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Paris, France, 2–6 October 2023; pp. 1505–1514. [Google Scholar]
- Li, Y.; Yu, Y.; Zhang, Q.; Liang, C.; He, P.; Chen, W.; Zhao, T. Losparse: Structured compression of large language models based on low-rank and sparse approximation. In Proceedings of the International Conference on Machine Learning. PMLR, Honolulu, HI, USA, 23–29 July 2023; pp. 20336–20350. [Google Scholar]
- Qiu, Y.; Sha, F.; Niu, L.; Zhang, G. Fire anomaly detection based on low-rank adaption fine-tuning and localization using Gradient Filtering. Appl. Soft Comput. 2025, 171, 112782. [Google Scholar] [CrossRef]
- Li, Z.; Li, X.; Yang, L.; Zhao, B.; Song, R.; Luo, L.; Li, J.; Yang, J. Curriculum temperature for knowledge distillation. Proc. Aaai Conf. Artif. Intell. 2023, 37, 1504–1512. [Google Scholar] [CrossRef]
- He, Y.; Xiao, L. Structured pruning for deep convolutional neural networks: A survey. IEEE Trans. Pattern Anal. Mach. Intell. 2023, 46, 2900–2919. [Google Scholar] [CrossRef]
- Ye, H.; Zhang, B.; Chen, T.; Fan, J.; Wang, B. Performance-aware approximation of global channel pruning for multitask cnns. IEEE Trans. Pattern Anal. Mach. Intell. 2023, 45, 10267–10284. [Google Scholar] [CrossRef]
- Cheng, H.; Zhang, M.; Shi, J.Q. A Survey on Deep Neural Network Pruning: Taxonomy, Comparison, Analysis, and Recommendations. IEEE Trans. Pattern Anal. Mach. Intell. 2024, 46, 10558–10578. [Google Scholar] [CrossRef]
- Li, Y.; van Gemert, J.C.; Hoefler, T.; Moons, B.; Eleftheriou, E.; Verhoef, B.E. Differentiable transportation pruning. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Paris, France, 2–6 October 2023; pp. 16957–16967. [Google Scholar]
- Luo, J.H.; Wu, J. Neural network pruning with residual-connections and limited-data. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 1458–1467. [Google Scholar]
- Gao, S.; Huang, F.; Cai, W.; Huang, H. Network pruning via performance maximization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 9270–9280. [Google Scholar]
- Liu, J.; Zhuang, B.; Zhuang, Z.; Guo, Y.; Huang, J.; Zhu, J.; Tan, M. Discrimination-aware network pruning for deep model compression. IEEE Trans. Pattern Anal. Mach. Intell. 2021, 44, 4035–4051. [Google Scholar] [CrossRef]
- Lin, M.; Zhang, Y.; Li, Y.; Chen, B.; Chao, F.; Wang, M.; Li, S.; Tian, Y.; Ji, R. 1xn pattern for pruning convolutional neural networks. IEEE Trans. Pattern Anal. Mach. Intell. 2022, 45, 3999–4008. [Google Scholar] [CrossRef]
- Rachwan, J.; Zügner, D.; Charpentier, B.; Geisler, S.; Ayle, M.; Günnemann, S. Winning the lottery ahead of time: Efficient early network pruning. In Proceedings of the International Conference on Machine Learning. PMLR, Baltimore, MD, USA, 17–23 July 2022; pp. 18293–18309. [Google Scholar]
- Li, Y.; Adamczewski, K.; Li, W.; Gu, S.; Timofte, R.; Van Gool, L. Revisiting random channel pruning for neural network compression. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; pp. 191–201. [Google Scholar]
- Yang, C.; Zhao, P.; Li, Y.; Niu, W.; Guan, J.; Tang, H.; Qin, M.; Ren, B.; Lin, X.; Wang, Y. Pruning parameterization with bi-level optimization for efficient semantic segmentation on the edge. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 17–24 June 2023; pp. 15402–15412. [Google Scholar]
- Sun, X.; Shi, H. Towards Better Structured Pruning Saliency by Reorganizing Convolution. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, Waikoloa, HI, USA, 3–8 January 2024; pp. 2204–2214. [Google Scholar]
- Qiu, Z.; Wei, P.; Yao, M.; Zhang, R.; Kuang, Y. Channel Pruning Method Based on Decoupling Feature Scale Distribution in Batch Normalization Layers. IEEE Access 2024, 12, 48865–48880. [Google Scholar] [CrossRef]
- Rabi, M.; Abarkan, I.; Sarfarazi, S.; Ferreira, F.P.V.; Alkherret, A.J. Automated design and optimization of concrete beams reinforced with stainless steel. Struct. Concr. 2025. early view. [Google Scholar] [CrossRef]
- Hu, H.; Peng, R.; Tai, Y.W.; Tang, C.K. Network trimming: A data-driven neuron pruning approach towards efficient deep architectures. arXiv 2016, arXiv:1607.03250. [Google Scholar] [CrossRef]
- Luo, J.H.; Wu, J.; Lin, W. Thinet: A filter level pruning method for deep neural network compression. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 5058–5066. [Google Scholar]
- Yu, R.; Li, A.; Chen, C.F.; Lai, J.H.; Morariu, V.I.; Han, X.; Gao, M.; Lin, C.Y.; Davis, L.S. Nisp: Pruning networks using neuron importance score propagation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 9194–9203. [Google Scholar]
- He, Y.; Liu, P.; Wang, Z.; Hu, Z.; Yang, Y. Filter pruning via geometric median for deep convolutional neural networks acceleration. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 4340–4349. [Google Scholar]
- Zhuang, Z.; Tan, M.; Zhuang, B.; Liu, J.; Guo, Y.; Wu, Q.; Huang, J.; Zhu, J. Discrimination-aware channel pruning for deep neural networks. Adv. Neural Inf. Process. Syst. 2018, 31. [Google Scholar]
- Li, B.; Wu, B.; Su, J.; Wang, G. Eagleeye: Fast sub-net evaluation for efficient neural network pruning. In Proceedings of the Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, 23–28 August 2020; Proceedings, Part II 16. Springer: Cham, Switzerland, 2020; pp. 639–654. [Google Scholar]
- Wang, Z.; Li, C.; Wang, X. Convolutional neural network pruning with structural redundancy reduction. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 14913–14922. [Google Scholar]
- Fang, G.; Ma, X.; Song, M.; Mi, M.B.; Wang, X. Depgraph: Towards any structural pruning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 17–24 June 2023; pp. 16091–16101. [Google Scholar]
- Pham, V.T.; Zniyed, Y.; Nguyen, T.P. Efficient tensor decomposition-based filter pruning. Neural Netw. 2024, 178, 106393. [Google Scholar]
- Shannon, C.E. A mathematical theory of communication. Bell Syst. Tech. J. 1948, 27, 379–423. [Google Scholar] [CrossRef]
- Dibley, M.J.; Staehling, N.; Nieburg, P.; Trowbridge, F.L. Interpretation of Z-score anthropometric indicators derived from the international growth reference. Am. J. Clin. Nutr. 1987, 46, 749–762. [Google Scholar] [CrossRef] [PubMed]
- Cheadle, C.; Vawter, M.P.; Freed, W.J.; Becker, K.G. Analysis of Microarray Data Using Z Score Transformation. J. Mol. Diagn. 2003, 5, 73–81. [Google Scholar] [CrossRef]
- Jocher, G.; Nishimura, K.; Mineeva, T.; Vilariño, A. YOLOv5. 2020. Available online: https://github.com/ultralytics/yolov5 (accessed on 31 August 2024).
- Lin, T.Y.; Maire, M.; Belongie, S.; Hays, J.; Perona, P.; Ramanan, D.; Dollár, P.; Zitnick, C.L. Microsoft coco: Common objects in context. In Proceedings of the Computer Vision–ECCV 2014: 13th European Conference, Zurich, Switzerland, 6–12 September 2014; Proceedings, Part V 13. Springer: Cham, Switzerland, 2014; pp. 740–755. [Google Scholar]
- Zhu, P.; Wen, L.; Du, D.; Bian, X.; Fan, H.; Hu, Q.; Ling, H. Detection and tracking meet drones challenge. IEEE Trans. Pattern Anal. Mach. Intell. 2021, 44, 7380–7399. [Google Scholar] [CrossRef]
- Soper, H.; Young, A.; Cave, B.; Lee, A.; Pearson, K. On the distribution of the correlation coefficient in small samples. Appendix II to the papers of “Student” and RA Fisher. Biometrika 1917, 11, 328–413. [Google Scholar]
- Wang, C.Y.; Bochkovskiy, A.; Liao, H.Y.M. Scaled-yolov4: Scaling cross stage partial network. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 13029–13038. [Google Scholar]
- Ge, Z. Yolox: Exceeding yolo series in 2021. arXiv 2021, arXiv:2107.08430. [Google Scholar] [CrossRef]
- Enderich, L.; Timm, F.; Burgard, W. Holistic filter pruning for efficient deep neural networks. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, Virtual, 5–9 January 2021; pp. 2596–2605. [Google Scholar]
- Jeon, J.; Kim, J.; Kang, J.K.; Moon, S.; Kim, Y. Target capacity filter pruning method for optimized inference time based on YOLOv5 in embedded systems. IEEE Access 2022, 10, 70840–70849. [Google Scholar] [CrossRef]
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar]
- Chen, L.C.; Papandreou, G.; Schroff, F.; Adam, H. Rethinking atrous convolution for semantic image segmentation. arXiv 2017, arXiv:1706.05587. [Google Scholar] [CrossRef]
Figure 1.
Overview of the proposed entropy-guided pruning (EGSSO). We compute layer-wise weight entropies $H_i$; outliers in $\{H_i\}$ are detected by a z-score test and replaced to stabilize the scale, after which the corrected entropies are normalized to $[0, 1]$. Each normalized entropy $\hat{H}_i$ is mapped to a per-layer retention ratio $r_i$, forming a bar profile across layers. Within every layer, filters are ranked along the filter dimension by an $L_1$ importance score, and the top-ranked filters are kept while the rest are pruned. The dashed connections indicate the propagation of the kept indices to subsequent layers to maintain tensor compatibility, producing the final pruned weights.
Figure 2.
The original information entropy values with outliers marked. Each row corresponds to one of the 59 layers of the model, and the horizontal axis shows its raw layer-wise information entropy. The colored rectangles mark entries flagged as outliers under different z-score thresholds, illustrating that the thresholds isolate only a few extreme outlier layers.
Figure 3.
Comparison of search spaces. For each of the 59 layers in the model, the orange bar shows the existing uniform search space and the blue bar shows our entropy-guided bounds derived from the normalized entropies. Layers with higher normalized entropy receive higher and narrower bounds, whereas low-entropy layers receive proportionally lower intervals; the dashed red line denotes the common lower limit of 0.2. This restriction steers sampling toward more promising subnetworks, improving search efficiency and final network performance.
Figure 4.
(a) Visualization of the distributions of the normalized entropy value before vs. after outlier detection. (b) Variance of entropy of all layers before vs. after outlier detection.
Figure 5.
Histograms of subnetwork mAP distributions at pruning retention ratios of 0.4, 0.5, and 0.6 on the COCO and VisDrone datasets. Blue bars represent the proposed EGSSO method, and red bars indicate the existing baseline. For each sub-figure, the mean, median, and maximum mAP values of both methods are also provided to illustrate overall performance differences.
Figure 6.
Scatter plots of subnetwork mAP values at pruning retention ratios of 0.4, 0.5, and 0.6 on the COCO and VisDrone datasets. Each point corresponds to one sampled subnetwork (horizontal axis: subnetwork identifier; vertical axis: mAP). Green markers denote the top-10 subnetworks discovered by the existing method, while gold markers highlight the top-10 subnetworks obtained by the proposed EGSSO.
Figure 7.
Number of entropy values flagged as outliers under different z-score thresholds on the COCO (left) and VisDrone (right) datasets.
Figure 8.
Top-20 subnetworks (before fine-tuning) at a retention ratio of 0.4 for COCO (left) and VisDrone (right) under different configurations.
Table 1.
Comparison of subnetwork performance (before fine-tuning) between existing random-pruning methods [24,32] and EGSSO.
| Dataset | Remaining Rate | Max-Existing | Max-Proposed | Search-Count-Existing Max | Search-Count-First-Exceed | Count-Exceeding-Existing Max |
|---|---|---|---|---|---|---|
| COCO | 40% | 0.01367 | 0.02566 | 985 | 12 | 226 |
| | 50% | | | 1898 | 389 | 5 |
| | 60% | | | 751 | 1000 | 8 |
| VisDrone | 40% | | | 1302 | 1 | 1941 |
| | 50% | | | 897 | 1 | 984 |
| | 60% | | | 82 | 1 | 927 |
Table 2.
Results of pruning YOLOv5m on COCO with the target remaining rate of 0.5.
| Method | mAP@0.5 (%) | mAP@0.5:0.95 (%) | FLOPs (G) | Parameters (M) |
|---|---|---|---|---|
| YOLOv5m [39] (baseline) | 63.1 | 44.5 | 51.3 | 21.4 |
| YOLOv5s [39] | 55.4 | 36.7 | 17.0 | 7.3 |
| YOLOv4-Tiny [43] | 42.6 | 23.6 | 7.8 | 6.5 |
| YOLOX-S [44] | 58.7 | 39.6 | 26.8 | 9.0 |
| EagleEye [32] | 58.7 | 40.2 | 26.8 | 10.4 |
| PAGCP [14] | 60.7 | 41.5 | 23.5 | 7.7 |
| DSD [25] | 61.14 | 40.94 | 26.2 | 7.3 |
| Ours | 62.1 | 44.4 | 24.8 | 6.18 |
Table 3.
Results of pruning YOLOv5L on COCO with the target remaining rates of 0.5 and 0.6.
| Method | Target Remaining Rate (FLOPs) | mAP@0.5 (%) | mAP@0.5:0.95 (%) | Parameters (M) | GFLOPs (Remaining Rate%) |
|---|---|---|---|---|---|
| Baseline | - | 66.9 | 48.2 | 47.06 | 115.6 |
| HFP [45] | 60% | 63.1 | 43.6 | 16.09 | 67.6 (58.5) |
| | 50% | 63.5 | 43.4 | 12.36 | 57.2 (49.5) |
| EagleEye [32] | 60% | 63.8 | 45.0 | 23.95 | 66.9 (57.9) |
| | 50% | 63.7 | 44.6 | 23.72 | 55.9 (48.4) |
| TCFP [46] | 60% | 63.6 | 44.2 | 28.73 | 72.8 (63.0) |
| | 50% | 61.8 | 42.7 | 24.08 | 61.9 (53.5) |
| Ours | 60% | 64.5 | 45.3 | 21.36 | 67.4 (58.3) |
| | 50% | 64.3 | 45.0 | 17.03 | 56.3 (48.7) |