Figure 1.
Schematic representation of the ECA110-Pooling procedure applied to an activation window. The process involves four sequential steps: flattening, binarization, transformation via Rule 110, and normalized reduction.
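The four steps in the caption can be sketched in code. This is a minimal illustration, not the authors' reference implementation: the mean threshold and a single automaton step (T = 1) follow the defaults reported in the ablation study (Table 2), while the periodic boundary condition and the choice of reduction (mean of the activations at surviving cells, falling back to the window mean when no cell survives) are assumptions of this sketch.

```python
def eca110_pool(window, steps=1):
    """ECA110-Pooling sketch: flatten -> binarize -> Rule 110 -> reduce.

    `window` is a 2-D activation window (list of lists of floats).
    """
    # 1. Flatten the k x k window into a 1-D sequence of activations.
    flat = [v for row in window for v in row]

    # 2. Binarize against the window mean (the default threshold in Table 2).
    mean = sum(flat) / len(flat)
    cells = [1 if v >= mean else 0 for v in flat]

    # 3. Apply Rule 110 for `steps` iterations. Each cell's new state is bit
    #    (4*left + 2*center + right) of the Wolfram code 110 = 0b01101110.
    #    Periodic (wrap-around) boundaries are an assumption of this sketch.
    n = len(cells)
    for _ in range(steps):
        cells = [(110 >> (4 * cells[(i - 1) % n]
                          + 2 * cells[i]
                          + cells[(i + 1) % n])) & 1
                 for i in range(n)]

    # 4. Normalized reduction (assumed): average the activations at surviving
    #    cells; fall back to the plain window mean if no cell survives.
    kept = [v for v, c in zip(flat, cells) if c == 1]
    return sum(kept) / len(kept) if kept else mean
```

For example, `eca110_pool([[0.1, 0.9], [0.4, 0.6]])` pools a 2 × 2 window into a single scalar; with `steps=0` the automaton transform is skipped and only the binarized mask is used.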
Figure 2.
CNN pipeline with ECA Rule 110 pooling, illustrating the transform–reduce branch within the overall architecture.
Figure 3.
Aggregated comparison of pooling methods across ImageNet (subset), CIFAR-10, and Fashion-MNIST. Top-1 Accuracy, Error Rate, and F1-score are averaged across datasets and reported at multiple training epochs (20, 100, 500, 1000, 5000, 10,000, and 50,000). ECA110-Pooling consistently achieves superior performance while maintaining efficiency.
Figure 4.
Comparative convergence dynamics of pooling operators (Max, Average, Median, Min, Kernel, and ECA110) across ImageNet (subset), CIFAR-10, and Fashion-MNIST datasets, evaluated over training epochs ranging from 20 to 50,000.
Figure 5.
Comparative runtime and model size complexity of pooling operators (normalized relative to MaxPooling). KernelPooling introduces parameter overhead, while ECA110 adds only a minor constant-time runtime factor.
Figure 6.
Comparison of average epoch time and accuracy across pooling methods. ECA110 achieves the best balance, with superior accuracy and only minor runtime overhead relative to MaxPooling.
Figure 7.
Efficiency–performance trade-off between ECA110-Pooling and state-of-the-art architectures (ResNet-50, DenseNet-121, EfficientNet-B0, MobileNetV2, ViT-Small). Bars indicate averaged Top-1 Accuracy (%), the secondary line denotes average training time per epoch (s), and annotations report model sizes (MB). ECA110 achieves competitive classification accuracy while remaining significantly more lightweight and computationally efficient.
Table 1.
Transition table for ECA Rule 110. Each triplet of neighboring binary states is mapped to a new state.
| Triplet | 111 | 110 | 101 | 100 | 011 | 010 | 001 | 000 |
|---|---|---|---|---|---|---|---|---|
| New state | 0 | 1 | 1 | 0 | 1 | 1 | 1 | 0 |
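The transition table can be implemented directly as a lookup. A minimal sketch (the periodic boundary condition is an assumption, not specified by the table itself):

```python
# Transition table for Rule 110, exactly as in Table 1.
RULE_110 = {
    (1, 1, 1): 0, (1, 1, 0): 1, (1, 0, 1): 1, (1, 0, 0): 0,
    (0, 1, 1): 1, (0, 1, 0): 1, (0, 0, 1): 1, (0, 0, 0): 0,
}

def step(cells):
    """One synchronous Rule 110 update with periodic boundaries (assumed)."""
    n = len(cells)
    return [RULE_110[(cells[(i - 1) % n], cells[i], cells[(i + 1) % n])]
            for i in range(n)]
```

Equivalently, the new state for neighborhood (l, c, r) is bit 4l + 2c + r of the Wolfram code 110 = 0b01101110, which is where the rule's name comes from.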
Table 2.
Ablation study on CIFAR-10 at 10,000 epochs (Top-1 accuracy, %).
| Ablation | Variant | Top-1 Accuracy (%) | Diff. |
|---|---|---|---|
| Threshold | Mean (default) | 93.0 | (ref) |
| | Median | 92.9 | −0.1 |
| | Fixed (0.5) | 92.8 | −0.2 |
| ECA rule | ECA 110 | 93.0 | (ref) |
| | ECA 90 | 92.6 | −0.4 |
| | ECA 30 | 92.5 | −0.5 |
| | ECA 184 | 92.6 | −0.4 |
| Steps T | 0 (no transform) | 92.0 | −1.0 |
| | 1 | 93.0 | (ref) |
| | 2 | 93.0 | ≈0 |
| | 3 | 92.9 | −0.1 |
Table 3.
Benchmark datasets utilized in the present study.
| Dataset | #Classes | #Images | Modality |
|---|---|---|---|
| ImageNet (subset) | 100 | 100,000 | RGB |
| CIFAR-10 | 10 | 60,000 | RGB |
| Fashion-MNIST | 10 | 70,000 | Grayscale |
Table 4.
Train/test splits applied to all datasets.
| Case | Training Set | Testing Set |
|---|---|---|
| Case 1 | 80% | 20% |
| Case 2 | 65% | 35% |
| Case 3 | 50% | 50% |
Table 5.
Evaluation metrics for assessing pooling operators.
| Metric | Description/Role |
|---|---|
| Top-1 Classification Accuracy | Proportion of test samples correctly classified; primary measure of discriminative capacity. |
| Error Rate | Complement of accuracy (100% − Accuracy); emphasizes misclassification frequency. |
| F1-Score | Harmonic mean of precision and recall; balances false positives and false negatives, useful under class imbalance. |
| Training Time per Epoch | Average wall-clock time per epoch; quantifies computational overhead of pooling strategies. |
| Model Size | Number of trainable parameters (in MB); highlights complexity and memory footprint, especially for learnable pooling. |
| Convergence Behavior | Stability and rate of accuracy/loss convergence; captures optimization dynamics across epochs. |
| Statistical Significance | Validates observed differences using ANOVA, Tukey’s HSD, Wilcoxon Signed-Rank, and paired t-tests. Ensures robustness of conclusions. |
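The first three metrics in the table can be computed as follows. This is a minimal sketch for a single class; the multi-class scores in the paper would additionally average F1 over classes (the exact averaging scheme is not stated, so macro-averaging would be an assumption).

```python
def top1_accuracy(y_true, y_pred):
    """Fraction of samples whose predicted label matches the reference."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def error_rate(y_true, y_pred):
    """Complement of accuracy, i.e. 1 - accuracy (reported as % in the tables)."""
    return 1.0 - top1_accuracy(y_true, y_pred)

def f1_score(y_true, y_pred, positive=1):
    """Harmonic mean of precision and recall for one class."""
    pairs = list(zip(y_true, y_pred))
    tp = sum(t == positive and p == positive for t, p in pairs)
    fp = sum(t != positive and p == positive for t, p in pairs)
    fn = sum(t == positive and p != positive for t, p in pairs)
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)
```

For instance, with `y_true = [1, 1, 0, 0]` and `y_pred = [1, 0, 0, 0]`, accuracy is 0.75, the error rate is 0.25, and F1 for the positive class is 2/3.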
Table 6.
Comparative evaluation of pooling methods on the ImageNet subset across train/test splits and training epochs. Results are reported as Top-1 Accuracy (%), Error Rate (%), and F1-score (%).
| Method | Split | Acc (20) | Err (20) | F1 (20) | Acc (100) | Err (100) | F1 (100) | Acc (500) | Err (500) | F1 (500) | F1 (1000) |
|---|---|---|---|---|---|---|---|---|---|---|---|
| MaxPooling | 80/20 | 58.2 | 41.8 | 58.0 | 65.5 | 34.5 | 65.2 | 70.3 | 29.7 | 70.1 | 71.8 |
| | 65/35 | 56.7 | 43.3 | 56.5 | 64.2 | 35.8 | 64.0 | 69.1 | 30.9 | 68.9 | 70.6 |
| | 50/50 | 55.0 | 45.0 | 54.7 | 62.9 | 37.1 | 62.7 | 67.6 | 32.4 | 67.4 | 69.3 |
| AveragePooling | 80/20 | 57.5 | 42.5 | 57.3 | 64.3 | 35.7 | 64.1 | 69.2 | 30.8 | 69.0 | 70.6 |
| | 65/35 | 55.9 | 44.1 | 55.7 | 63.1 | 36.9 | 62.9 | 68.0 | 32.0 | 67.8 | 69.5 |
| | 50/50 | 54.2 | 45.8 | 54.0 | 61.8 | 38.2 | 61.6 | 66.6 | 33.4 | 66.4 | 68.1 |
| MedianPooling | 80/20 | 58.0 | 42.0 | 57.8 | 64.8 | 35.2 | 64.6 | 69.8 | 30.2 | 69.6 | 71.0 |
| | 65/35 | 56.4 | 43.6 | 56.2 | 63.6 | 36.4 | 63.4 | 68.5 | 31.5 | 68.3 | 69.9 |
| | 50/50 | 54.8 | 45.2 | 54.6 | 62.3 | 37.7 | 62.1 | 67.1 | 32.9 | 66.9 | 68.6 |
| MinPooling | 80/20 | 48.6 | 51.4 | 48.0 | 54.2 | 45.8 | 53.7 | 58.7 | 41.3 | 58.3 | 60.0 |
| | 65/35 | 47.0 | 53.0 | 46.5 | 52.8 | 47.2 | 52.4 | 57.5 | 42.5 | 57.0 | 59.0 |
| | 50/50 | 45.3 | 54.7 | 44.8 | 51.5 | 48.5 | 51.0 | 56.0 | 44.0 | 55.5 | 57.6 |
| KernelPooling | 80/20 | 59.3 | 40.7 | 59.1 | 66.0 | 34.0 | 65.8 | 71.0 | 29.0 | 70.8 | 72.4 |
| | 65/35 | 57.7 | 42.3 | 57.5 | 64.7 | 35.3 | 64.5 | 69.7 | 30.3 | 69.5 | 71.2 |
| | 50/50 | 56.1 | 43.9 | 55.9 | 63.3 | 36.7 | 63.1 | 68.3 | 31.7 | 68.1 | 69.9 |
| ECA110Pooling | 80/20 | 60.0 | 40.0 | 59.8 | 66.9 | 33.1 | 66.7 | 71.7 | 28.3 | 71.5 | 72.8 |
| | 65/35 | 58.4 | 41.6 | 58.2 | 65.6 | 34.4 | 65.4 | 70.5 | 29.5 | 70.3 | 71.7 |
| | 50/50 | 56.8 | 43.2 | 56.6 | 64.1 | 35.9 | 63.9 | 69.0 | 31.0 | 68.8 | 70.5 |
| Method | Split | Acc (1000) | Err (1000) | Acc (5000) | Err (5000) | F1 (5000) | Acc (10,000) | Err (10,000) | F1 (10,000) | Acc (50,000) | Err (50,000) | F1 (50,000) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| MaxPooling | 80/20 | 72.0 | 28.0 | 72.8 | 27.2 | 72.6 | 73.0 | 27.0 | 72.8 | 73.1 | 26.9 | 72.9 |
| | 65/35 | 70.8 | 29.2 | 71.4 | 28.6 | 71.2 | 71.6 | 28.4 | 71.4 | 71.7 | 28.3 | 71.5 |
| | 50/50 | 69.5 | 30.5 | 70.1 | 29.9 | 69.9 | 70.2 | 29.8 | 70.0 | 70.3 | 29.7 | 70.1 |
| AveragePooling | 80/20 | 70.8 | 29.2 | 71.6 | 28.4 | 71.4 | 71.8 | 28.2 | 71.6 | 71.9 | 28.1 | 71.7 |
| | 65/35 | 69.7 | 30.3 | 70.4 | 29.6 | 70.2 | 70.6 | 29.4 | 70.4 | 70.7 | 29.3 | 70.5 |
| | 50/50 | 68.3 | 31.7 | 69.0 | 31.0 | 68.8 | 69.1 | 30.9 | 68.9 | 69.2 | 30.8 | 69.0 |
| MedianPooling | 80/20 | 71.2 | 28.8 | 71.9 | 28.1 | 71.7 | 72.0 | 28.0 | 71.8 | 72.0 | 28.0 | 71.8 |
| | 65/35 | 70.1 | 29.9 | 70.8 | 29.2 | 70.6 | 70.9 | 29.1 | 70.7 | 71.0 | 29.0 | 70.8 |
| | 50/50 | 68.8 | 31.2 | 69.4 | 30.6 | 69.2 | 69.5 | 30.5 | 69.3 | 69.5 | 30.5 | 69.3 |
| MinPooling | 80/20 | 60.5 | 39.5 | 61.0 | 39.0 | 60.5 | 61.1 | 38.9 | 60.6 | 61.2 | 38.8 | 60.7 |
| | 65/35 | 59.3 | 40.7 | 59.8 | 40.2 | 59.4 | 60.0 | 40.0 | 59.6 | 60.0 | 40.0 | 59.6 |
| | 50/50 | 58.0 | 42.0 | 58.4 | 41.6 | 58.0 | 58.5 | 41.5 | 58.1 | 58.6 | 41.4 | 58.2 |
| KernelPooling | 80/20 | 72.6 | 27.4 | 73.5 | 26.5 | 73.3 | 73.7 | 26.3 | 73.5 | 73.8 | 26.2 | 73.6 |
| | 65/35 | 71.4 | 28.6 | 72.2 | 27.8 | 72.0 | 72.4 | 27.6 | 72.2 | 72.5 | 27.5 | 72.3 |
| | 50/50 | 70.1 | 29.9 | 70.9 | 29.1 | 70.7 | 71.0 | 29.0 | 70.8 | 71.1 | 28.9 | 70.9 |
| ECA110Pooling | 80/20 | 73.0 | 27.0 | 73.8 | 26.2 | 73.6 | 74.0 | 26.0 | 73.8 | 74.1 | 25.9 | 73.9 |
| | 65/35 | 71.9 | 28.1 | 72.7 | 27.3 | 72.5 | 72.9 | 27.1 | 72.7 | 73.0 | 27.0 | 72.8 |
| | 50/50 | 70.7 | 29.3 | 71.4 | 28.6 | 71.2 | 71.6 | 28.4 | 71.4 | 71.7 | 28.3 | 71.5 |
Table 7.
Comparative evaluation of pooling methods on CIFAR-10 across train/test splits and training epochs. Results are reported as Top-1 Accuracy (%), Error Rate (%), and F1-score (%).
| Method | Split | Acc (20) | Err (20) | F1 (20) | Acc (100) | Err (100) | F1 (100) | Acc (500) | Err (500) | F1 (500) | F1 (1000) |
|---|---|---|---|---|---|---|---|---|---|---|---|
| MaxPooling | 80/20 | 75.4 | 24.6 | 75.0 | 85.1 | 14.9 | 84.9 | 90.2 | 9.8 | 90.0 | 91.3 |
| | 65/35 | 74.0 | 26.0 | 73.6 | 84.0 | 16.0 | 83.8 | 89.3 | 10.7 | 89.1 | 90.5 |
| | 50/50 | 72.6 | 27.4 | 72.2 | 82.8 | 17.2 | 82.6 | 88.4 | 11.6 | 88.2 | 89.7 |
| AveragePooling | 80/20 | 74.8 | 25.2 | 74.5 | 84.6 | 15.4 | 84.4 | 89.7 | 10.3 | 89.5 | 91.0 |
| | 65/35 | 73.3 | 26.7 | 73.0 | 83.4 | 16.6 | 83.2 | 88.8 | 11.2 | 88.6 | 90.1 |
| | 50/50 | 71.9 | 28.1 | 71.6 | 82.1 | 17.9 | 81.9 | 87.9 | 12.1 | 87.7 | 89.3 |
| MedianPooling | 80/20 | 75.1 | 24.9 | 74.8 | 84.9 | 15.1 | 84.7 | 90.0 | 10.0 | 89.8 | 91.1 |
| | 65/35 | 73.6 | 26.4 | 73.3 | 83.7 | 16.3 | 83.5 | 89.0 | 11.0 | 88.8 | 90.3 |
| | 50/50 | 72.2 | 27.8 | 71.9 | 82.5 | 17.5 | 82.3 | 88.2 | 11.8 | 88.0 | 89.4 |
| MinPooling | 80/20 | 68.9 | 31.1 | 68.6 | 77.8 | 22.2 | 77.5 | 85.5 | 14.5 | 85.2 | 86.9 |
| | 65/35 | 67.5 | 32.5 | 67.2 | 76.5 | 23.5 | 76.2 | 84.6 | 15.4 | 84.2 | 86.0 |
| | 50/50 | 66.0 | 34.0 | 65.6 | 75.2 | 24.8 | 74.9 | 83.8 | 16.2 | 83.4 | 85.2 |
| KernelPooling | 80/20 | 76.0 | 24.0 | 75.7 | 85.5 | 14.5 | 85.3 | 90.6 | 9.4 | 90.4 | 91.7 |
| | 65/35 | 74.5 | 25.5 | 74.2 | 84.3 | 15.7 | 84.1 | 89.7 | 10.3 | 89.5 | 90.9 |
| | 50/50 | 73.1 | 26.9 | 72.8 | 83.0 | 17.0 | 82.8 | 88.8 | 11.2 | 88.6 | 90.2 |
| ECA110Pooling | 80/20 | 76.8 | 23.2 | 76.5 | 86.3 | 13.7 | 86.1 | 91.2 | 8.8 | 91.0 | 92.3 |
| | 65/35 | 75.3 | 24.7 | 75.0 | 85.0 | 15.0 | 84.8 | 90.3 | 9.7 | 90.1 | 91.5 |
| | 50/50 | 73.9 | 26.1 | 73.6 | 83.8 | 16.2 | 83.6 | 89.4 | 10.6 | 89.2 | 90.8 |
| Method | Split | Acc (1000) | Err (1000) | Acc (5000) | Err (5000) | F1 (5000) | Acc (10,000) | Err (10,000) | F1 (10,000) | Acc (50,000) | Err (50,000) | F1 (50,000) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| MaxPooling | 80/20 | 91.5 | 8.5 | 91.9 | 8.1 | 91.7 | 92.0 | 8.0 | 91.8 | 92.1 | 7.9 | 91.9 |
| | 65/35 | 90.7 | 9.3 | 91.1 | 8.9 | 90.9 | 91.2 | 8.8 | 91.0 | 91.3 | 8.7 | 91.1 |
| | 50/50 | 89.9 | 10.1 | 90.3 | 9.7 | 90.1 | 90.4 | 9.6 | 90.2 | 90.5 | 9.5 | 90.3 |
| AveragePooling | 80/20 | 91.1 | 8.9 | 91.5 | 8.5 | 91.3 | 91.6 | 8.4 | 91.4 | 91.7 | 8.3 | 91.5 |
| | 65/35 | 90.3 | 9.7 | 90.7 | 9.3 | 90.5 | 90.8 | 9.2 | 90.6 | 90.9 | 9.1 | 90.7 |
| | 50/50 | 89.5 | 10.5 | 89.9 | 10.1 | 89.7 | 90.0 | 10.0 | 89.8 | 90.1 | 9.9 | 89.9 |
| MedianPooling | 80/20 | 91.3 | 8.7 | 91.7 | 8.3 | 91.5 | 91.8 | 8.2 | 91.6 | 91.9 | 8.1 | 91.7 |
| | 65/35 | 90.5 | 9.5 | 90.9 | 9.1 | 90.7 | 91.0 | 9.0 | 90.8 | 91.1 | 8.9 | 90.9 |
| | 50/50 | 89.7 | 10.3 | 90.1 | 9.9 | 89.8 | 90.2 | 9.8 | 89.9 | 90.3 | 9.7 | 90.0 |
| MinPooling | 80/20 | 87.3 | 12.7 | 87.7 | 12.3 | 87.3 | 87.8 | 12.2 | 87.4 | 87.9 | 12.1 | 87.5 |
| | 65/35 | 86.4 | 13.6 | 86.8 | 13.2 | 86.4 | 86.9 | 13.1 | 86.5 | 87.0 | 13.0 | 86.6 |
| | 50/50 | 85.6 | 14.4 | 86.0 | 14.0 | 85.6 | 86.1 | 13.9 | 85.7 | 86.2 | 13.8 | 85.8 |
| KernelPooling | 80/20 | 91.9 | 8.1 | 92.3 | 7.7 | 92.1 | 92.4 | 7.6 | 92.2 | 92.5 | 7.5 | 92.3 |
| | 65/35 | 91.1 | 8.9 | 91.5 | 8.5 | 91.3 | 91.6 | 8.4 | 91.4 | 91.7 | 8.3 | 91.5 |
| | 50/50 | 90.4 | 9.6 | 90.8 | 9.2 | 90.6 | 90.9 | 9.1 | 90.7 | 91.0 | 9.0 | 90.8 |
| ECA110Pooling | 80/20 | 92.5 | 7.5 | 92.9 | 7.1 | 92.7 | 93.0 | 7.0 | 92.8 | 93.1 | 6.9 | 92.9 |
| | 65/35 | 91.7 | 8.3 | 92.1 | 7.9 | 91.9 | 92.2 | 7.8 | 92.0 | 92.3 | 7.7 | 92.1 |
| | 50/50 | 91.0 | 9.0 | 91.4 | 8.6 | 91.2 | 91.5 | 8.5 | 91.3 | 91.6 | 8.4 | 91.4 |
Table 8.
Comparative evaluation of pooling methods on Fashion-MNIST across train/test splits and training epochs. Results are reported as Top-1 Accuracy (%), Error Rate (%), and F1-score (%).
| Method | Split | Acc (20) | Err (20) | F1 (20) | Acc (100) | Err (100) | F1 (100) | Acc (500) | Err (500) | F1 (500) | F1 (1000) |
|---|---|---|---|---|---|---|---|---|---|---|---|
| MaxPooling | 80/20 | 89.8 | 10.2 | 89.6 | 93.6 | 6.4 | 93.5 | 95.3 | 4.7 | 95.2 | 95.6 |
| | 65/35 | 89.0 | 11.0 | 88.8 | 92.9 | 7.1 | 92.8 | 94.8 | 5.2 | 94.7 | 94.8 |
| | 50/50 | 88.3 | 11.7 | 88.1 | 92.2 | 7.8 | 92.1 | 94.3 | 5.7 | 94.2 | 94.3 |
| AveragePooling | 80/20 | 89.5 | 10.5 | 89.3 | 93.3 | 6.7 | 93.2 | 95.0 | 5.0 | 94.9 | 95.3 |
| | 65/35 | 88.7 | 11.3 | 88.5 | 92.6 | 7.4 | 92.5 | 94.5 | 5.5 | 94.4 | 94.6 |
| | 50/50 | 87.9 | 12.1 | 87.7 | 91.9 | 8.1 | 91.8 | 94.0 | 6.0 | 93.9 | 94.0 |
| MedianPooling | 80/20 | 90.0 | 10.0 | 89.8 | 93.9 | 6.1 | 93.8 | 95.6 | 4.4 | 95.5 | 95.8 |
| | 65/35 | 89.2 | 10.8 | 89.0 | 93.2 | 6.8 | 93.1 | 95.0 | 5.0 | 94.9 | 95.1 |
| | 50/50 | 88.5 | 11.5 | 88.3 | 92.5 | 7.5 | 92.4 | 94.5 | 5.5 | 94.4 | 94.5 |
| MinPooling | 80/20 | 85.6 | 14.4 | 85.3 | 89.7 | 10.3 | 89.5 | 92.1 | 7.9 | 91.9 | 92.3 |
| | 65/35 | 84.8 | 15.2 | 84.5 | 89.0 | 11.0 | 88.7 | 91.4 | 8.6 | 91.1 | 91.6 |
| | 50/50 | 84.1 | 15.9 | 83.8 | 88.3 | 11.7 | 88.0 | 90.8 | 9.2 | 90.5 | 91.0 |
| KernelPooling | 80/20 | 90.4 | 9.6 | 90.2 | 94.3 | 5.7 | 94.2 | 96.0 | 4.0 | 96.0 | 96.0 |
| | 65/35 | 89.6 | 10.4 | 89.4 | 93.6 | 6.4 | 93.5 | 95.4 | 4.6 | 95.3 | 95.4 |
| | 50/50 | 88.9 | 11.1 | 88.7 | 92.9 | 7.1 | 92.8 | 94.9 | 5.1 | 94.8 | 94.9 |
| ECA110Pooling | 80/20 | 90.9 | 9.1 | 90.7 | 94.7 | 5.3 | 94.6 | 96.2 | 3.8 | 96.1 | 96.3 |
| | 65/35 | 90.1 | 9.9 | 89.9 | 94.1 | 5.9 | 94.0 | 95.8 | 4.2 | 95.7 | 95.8 |
| | 50/50 | 89.4 | 10.6 | 89.2 | 93.4 | 6.6 | 93.3 | 95.3 | 4.7 | 95.2 | 95.4 |
| Method | Split | Acc (1000) | Err (1000) | Acc (5000) | Err (5000) | F1 (5000) | Acc (10,000) | Err (10,000) | F1 (10,000) | Acc (50,000) | Err (50,000) | F1 (50,000) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| MaxPooling | 80/20 | 95.7 | 4.3 | 95.9 | 4.1 | 95.8 | 96.0 | 4.0 | 95.9 | 96.0 | 4.0 | 95.9 |
| | 65/35 | 95.0 | 5.0 | 95.2 | 4.8 | 95.1 | 95.3 | 4.7 | 95.2 | 95.3 | 4.7 | 95.2 |
| | 50/50 | 94.4 | 5.6 | 94.6 | 5.4 | 94.5 | 94.6 | 5.4 | 94.5 | 94.6 | 5.4 | 94.5 |
| AveragePooling | 80/20 | 95.4 | 4.6 | 95.6 | 4.4 | 95.5 | 95.7 | 4.3 | 95.6 | 95.7 | 4.3 | 95.6 |
| | 65/35 | 94.7 | 5.3 | 94.9 | 5.1 | 94.8 | 95.0 | 5.0 | 94.9 | 95.0 | 5.0 | 94.9 |
| | 50/50 | 94.1 | 5.9 | 94.3 | 5.7 | 94.2 | 94.3 | 5.7 | 94.2 | 94.3 | 5.7 | 94.2 |
| MedianPooling | 80/20 | 95.9 | 4.1 | 96.1 | 3.9 | 96.0 | 96.2 | 3.8 | 96.1 | 96.2 | 3.8 | 96.1 |
| | 65/35 | 95.2 | 4.8 | 95.4 | 4.6 | 95.3 | 95.5 | 4.5 | 95.4 | 95.5 | 4.5 | 95.4 |
| | 50/50 | 94.6 | 5.4 | 94.8 | 5.2 | 94.7 | 94.9 | 5.1 | 94.8 | 94.9 | 5.1 | 94.8 |
| MinPooling | 80/20 | 92.6 | 7.4 | 92.8 | 7.2 | 92.5 | 92.9 | 7.1 | 92.6 | 92.9 | 7.1 | 92.6 |
| | 65/35 | 91.9 | 8.1 | 92.1 | 7.9 | 91.8 | 92.2 | 7.8 | 91.9 | 92.2 | 7.8 | 91.9 |
| | 50/50 | 91.3 | 8.7 | 91.5 | 8.5 | 91.2 | 91.5 | 8.5 | 91.2 | 91.5 | 8.5 | 91.2 |
| KernelPooling | 80/20 | 96.1 | 3.9 | 96.2 | 3.8 | 96.1 | 96.2 | 3.8 | 96.1 | 96.2 | 3.8 | 96.1 |
| | 65/35 | 95.5 | 4.5 | 95.6 | 4.4 | 95.5 | 95.6 | 4.4 | 95.5 | 95.6 | 4.4 | 95.5 |
| | 50/50 | 95.0 | 5.0 | 95.1 | 4.9 | 95.0 | 95.1 | 4.9 | 95.0 | 95.1 | 4.9 | 95.0 |
| ECA110Pooling | 80/20 | 96.4 | 3.6 | 96.5 | 3.5 | 96.4 | 96.6 | 3.4 | 96.5 | 96.6 | 3.4 | 96.5 |
| | 65/35 | 95.9 | 4.1 | 96.0 | 4.0 | 95.9 | 96.1 | 3.9 | 96.0 | 96.1 | 3.9 | 96.0 |
| | 50/50 | 95.5 | 4.5 | 95.6 | 4.4 | 95.5 | 95.7 | 4.3 | 95.6 | 95.7 | 4.3 | 95.6 |
Table 9.
Aggregated comparative performance of pooling methods across all datasets (ImageNet subset, CIFAR-10, Fashion-MNIST) and training epochs. Results are averaged for Top-1 Accuracy, Error Rate, and F1-score.
| Pooling Method | Top-1 Accuracy (%) | Error Rate (%) | F1-Score (%) |
|---|---|---|---|
| MaxPooling | 85.0 | 15.0 | 84.7 |
| AveragePooling | 84.5 | 15.5 | 84.2 |
| MedianPooling | 84.8 | 15.2 | 84.5 |
| MinPooling | 80.0 | 20.0 | 79.6 |
| KernelPooling | 86.0 | 14.0 | 85.8 |
| ECA110-Pooling | 87.2 | 12.8 | 87.0 |
Table 10.
Computational complexity and parameterization of the six pooling operators. Here k denotes the window size, C the number of channels, and T the number of automaton steps in ECA110.
| Pooling Method | Time Complexity | Extra Parameters | Remarks |
|---|---|---|---|
| MaxPooling | O(C·k²) | None | Selects strongest activations |
| AveragePooling | O(C·k²) | None | Computes local averages |
| MedianPooling | O(C·k²·log k) | None | Requires sorting per window |
| MinPooling | O(C·k²) | None | Selects weakest activations |
| KernelPooling | O(C·k²) | O(k²) learnable weights | Learnable weighted aggregation |
| ECA110-Pooling | O(C·k²·T) | None | Rule-based transform + reduction |
Table 11.
Aggregated efficiency results: average training time per epoch and model size across ImageNet (subset), CIFAR-10, and Fashion-MNIST.
| Pooling Method | ImageNet Time (s/epoch) | ImageNet Size (MB) | CIFAR-10 Time (s/epoch) | CIFAR-10 Size (MB) | Fashion-MNIST Time (s/epoch) | Fashion-MNIST Size (MB) |
|---|---|---|---|---|---|---|
| MaxPooling | 128.0 | 4.75 | 35.2 | 4.75 | 12.4 | 4.75 |
| AveragePooling | 130.6 | 4.75 | 35.9 | 4.75 | 12.7 | 4.75 |
| MedianPooling | 162.6 | 4.75 | 44.7 | 4.75 | 15.8 | 4.75 |
| MinPooling | 127.5 | 4.75 | 35.1 | 4.75 | 12.4 | 4.75 |
| KernelPooling | 148.5 | 4.80 | 40.8 | 4.80 | 14.4 | 4.80 |
| ECA110Pooling | 133.1 | 4.76 | 36.6 | 4.76 | 12.9 | 4.76 |
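The normalization used in Figure 5 can be reproduced from the ImageNet column of the table above; a short sketch, using the reported epoch times:

```python
# Average ImageNet (subset) epoch times from Table 11, in seconds.
epoch_time = {
    "MaxPooling": 128.0, "AveragePooling": 130.6, "MedianPooling": 162.6,
    "MinPooling": 127.5, "KernelPooling": 148.5, "ECA110Pooling": 133.1,
}

# Runtime factor relative to MaxPooling, the baseline used in Figure 5.
relative = {method: round(t / epoch_time["MaxPooling"], 2)
            for method, t in epoch_time.items()}
```

This yields a factor of about 1.04 for ECA110Pooling (roughly 4% overhead), versus 1.27 for MedianPooling and 1.16 for KernelPooling.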
Table 12.
One-way ANOVA results across pooling operators for each dataset and split, reported across training epochs (20–1000). Between-group df = 5, within-group df = 24. Each cell lists the F-statistic, the p-value, and whether the difference is significant at α = 0.05.
| Dataset | Split | 20 | 100 | 500 | 1000 |
|---|---|---|---|---|---|
| ImageNet (subset) | 80/20 | 3.41, 0.017, Yes | 4.26, 0.007, Yes | 6.18, 0.001, Yes | 7.34, <0.001, Yes |
| | 65/35 | 3.12, 0.021, Yes | 4.05, 0.009, Yes | 5.91, 0.002, Yes | 7.12, <0.001, Yes |
| | 50/50 | 2.89, 0.032, Yes | 3.87, 0.011, Yes | 5.66, 0.002, Yes | 6.94, <0.001, Yes |
| CIFAR-10 | 80/20 | 4.89, 0.003, Yes | 6.72, 0.001, Yes | 8.05, <0.001, Yes | 9.47, <0.001, Yes |
| | 65/35 | 4.51, 0.005, Yes | 6.38, 0.001, Yes | 7.81, <0.001, Yes | 9.12, <0.001, Yes |
| | 50/50 | 4.18, 0.008, Yes | 6.05, 0.002, Yes | 7.54, <0.001, Yes | 8.91, <0.001, Yes |
| Fashion-MNIST | 80/20 | 3.02, 0.028, Yes | 4.75, 0.004, Yes | 6.41, 0.001, Yes | 7.96, <0.001, Yes |
| | 65/35 | 2.81, 0.034, Yes | 4.51, 0.006, Yes | 6.12, 0.001, Yes | 7.64, <0.001, Yes |
| | 50/50 | 2.64, 0.041, Yes | 4.26, 0.008, Yes | 5.91, 0.002, Yes | 7.31, <0.001, Yes |
Table 13.
One-way ANOVA results across pooling operators for each dataset and split, reported across training epochs (5000–50,000). Between-group df = 5, within-group df = 24. Each cell lists the F-statistic, the p-value, and whether the difference is significant at α = 0.05.
| Dataset | Split | 5000 | 10,000 | 50,000 |
|---|---|---|---|---|
| ImageNet (subset) | 80/20 | 9.11, <0.001, Yes | 11.02, <0.001, Yes | 12.85, <0.001, Yes |
| | 65/35 | 8.87, <0.001, Yes | 10.74, <0.001, Yes | 12.45, <0.001, Yes |
| | 50/50 | 8.65, <0.001, Yes | 10.43, <0.001, Yes | 12.12, <0.001, Yes |
| CIFAR-10 | 80/20 | 11.38, <0.001, Yes | 12.24, <0.001, Yes | 13.10, <0.001, Yes |
| | 65/35 | 11.03, <0.001, Yes | 11.87, <0.001, Yes | 12.73, <0.001, Yes |
| | 50/50 | 10.77, <0.001, Yes | 11.59, <0.001, Yes | 12.41, <0.001, Yes |
| Fashion-MNIST | 80/20 | 9.88, <0.001, Yes | 11.07, <0.001, Yes | 12.31, <0.001, Yes |
| | 65/35 | 9.55, <0.001, Yes | 10.77, <0.001, Yes | 11.98, <0.001, Yes |
| | 50/50 | 9.11, <0.001, Yes | 10.41, <0.001, Yes | 11.64, <0.001, Yes |
Table 14.
Tukey’s HSD pairwise comparisons across pooling operators for each dataset, split ratio, and selected training epochs (20, 500, 5000). Cells list significant differences (direction and adjusted p). Abbreviations: Max = MaxPooling, Avg = AveragePooling, Med = MedianPooling, Min = MinPooling, Ker = KernelPooling, ECA = ECA110-Pooling; n.s. = not significant.
| Dataset | Epochs | 80/20 | 65/35 | 50/50 |
|---|---|---|---|---|
| ImageNet (subset) | 20 | Max > Min (p < 0.05); n.s. (ECA vs Max/Avg/Ker) | Max > Min (p < 0.05); n.s. | Max > Min (p < 0.05); n.s. |
| | 500 | ECA > Avg (p < 0.05); Max > Min (p < 0.001); Ker > Min (p < 0.01) | ECA > Avg (p < 0.05); Max > Min (p < 0.001) | ECA > Avg (p < 0.05); Max > Min (p < 0.001) |
| | 5000 | ECA > Max (p < 0.05); ECA > Avg (p < 0.01); ECA > Min (p < 0.001) | same pattern | same pattern |
| CIFAR-10 | 20 | Max > Min (p < 0.01); Med > Min (p < 0.05); n.s. (ECA vs Max) | same pattern | same pattern |
| | 500 | ECA > Avg (p < 0.01); ECA > Min (p < 0.001); Max > Min (p < 0.001) | same pattern | same pattern |
| | 5000 | ECA > Max (p < 0.01); ECA > Avg (p < 0.01); ECA > Min (p < 0.001) | same pattern | same pattern |
| Fashion-MNIST | 20 | Med > Min (p < 0.05); n.s. (ECA vs Med/Max) | same pattern | same pattern |
| | 500 | ECA > Min (p < 0.001); Med > Min (p < 0.001); Max > Min (p < 0.001) | same pattern | same pattern |
| | 5000 | ECA > Max (p < 0.05); ECA > Avg (p < 0.01); ECA > Min (p < 0.001) | same pattern | same pattern |
Table 15.
Paired t-test (two-sided) between ECA110-Pooling (ECA) and baseline pooling operators, reported at selected epochs (20, 500, 5000) and split ratios.
| Dataset | Epochs | 80/20 | 65/35 | 50/50 |
|---|---|---|---|---|
| ImageNet (subset) | 20 | ECA > Min () | ECA > Min () | ECA > Min () |
| | 500 | ECA > Max (); ECA > Avg (); ECA > Med (); ECA > Min () | ECA > Max (); ECA > Avg (); ECA > Min () | ECA > Avg (); ECA > Min () |
| | 5000 | ECA > Max (); ECA > Avg (); ECA > Med (); ECA > Min () | ECA > Max (); ECA > Avg (); ECA > Min () | ECA > Avg (); ECA > Min () |
| CIFAR-10 | 20 | ECA > Min () | ECA > Min () | ECA > Min () |
| | 500 | ECA > Max (); ECA > Avg (); ECA > Med (); ECA > Min () | ECA > Max (); ECA > Avg (); ECA > Min () | ECA > Avg (); ECA > Min () |
| | 5000 | ECA > Max (); ECA > Avg (); ECA > Med (); ECA > Min (); ECA > Ker () | ECA > Max (); ECA > Avg (); ECA > Min () | ECA > Avg (); ECA > Min () |
| Fashion-MNIST | 20 | ECA > Min () | ECA > Min () | ECA > Min () |
| | 500 | ECA > Max (); ECA > Avg (); ECA > Min () | ECA > Avg (); ECA > Min () | ECA > Min () |
| | 5000 | ECA > Max (); ECA > Avg (); ECA > Med (); ECA > Min () | ECA > Avg (); ECA > Min () | ECA > Min () |
Table 16.
Wilcoxon signed-rank test between ECA110-Pooling (ECA) and baseline pooling operators, reported at selected epochs and split ratios.
| Dataset | Epochs | 80/20 | 65/35 | 50/50 |
|---|---|---|---|---|
| ImageNet (subset) | 20 | ECA > Min () | ECA > Min () | ECA > Min () |
| | 500 | ECA > Max (); ECA > Avg (); ECA > Min () | ECA > Avg (); ECA > Min () | ECA > Min () |
| | 5000 | ECA > Max (); ECA > Avg (); ECA > Med (); ECA > Min () | ECA > Max (); ECA > Avg (); ECA > Min () | ECA > Avg (); ECA > Min () |
| CIFAR-10 | 20 | ECA > Min () | ECA > Min () | ECA > Min () |
| | 500 | ECA > Max (); ECA > Avg (); ECA > Min () | ECA > Avg (); ECA > Min () | ECA > Min () |
| | 5000 | ECA > Max (); ECA > Avg (); ECA > Med (); ECA > Min (); ECA > Ker () | ECA > Max (); ECA > Avg (); ECA > Min () | ECA > Avg (); ECA > Min () |
| Fashion-MNIST | 20 | ECA > Min () | ECA > Min () | ECA > Min () |
| | 500 | ECA > Max (); ECA > Avg (); ECA > Min () | ECA > Avg (); ECA > Min () | ECA > Min () |
| | 5000 | ECA > Max (); ECA > Avg (); ECA > Med (); ECA > Min () | ECA > Avg (); ECA > Min () | ECA > Min () |
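The paired tests in Tables 15 and 16 compare matched runs of two pooling operators, so the statistic is computed from per-run differences. A minimal sketch of the paired t-statistic in pure Python; the sample accuracies below are illustrative values, not the paper's actual runs:

```python
from math import sqrt
from statistics import mean, stdev

def paired_t(x, y):
    """Paired t-statistic for matched samples x and y: mean(d) / (sd(d) / sqrt(n))."""
    d = [a - b for a, b in zip(x, y)]
    return mean(d) / (stdev(d) / sqrt(len(d)))

# Illustrative (hypothetical) per-run Top-1 accuracies over five matched runs.
eca = [93.1, 92.8, 93.0, 93.2, 92.9]
mx = [92.0, 91.8, 92.1, 92.3, 91.9]
t = paired_t(eca, mx)  # compare |t| against t_{alpha/2, n-1} for significance
```

A two-sided p-value then follows from the t-distribution with n − 1 degrees of freedom; the Wilcoxon variant replaces the raw differences with signed ranks, which makes it robust to non-normal differences.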
Table 17.
Comparison of ECA110-Pooling with SOTA architectures across datasets, split ratios, and training epochs. Each cell reports Top-1 Accuracy/Error Rate/F1-score (%).
| Dataset | Method | 80/20 (500 ep.) | 80/20 (5000 ep.) | 80/20 (10,000 ep.) | 65/35 (500 ep.) |
|---|---|---|---|---|---|
| ImageNet (subset) | ResNet-50 | 70.3/29.7/70.0 | 75.9/24.1/75.7 | 76.1/23.9/75.9 | 68.4/31.6/68.2 |
| | DenseNet-121 | 71.0/29.0/70.8 | 76.8/23.2/76.7 | 76.9/23.1/76.8 | 69.0/31.0/68.7 |
| | EfficientNet-B0 | 72.1/27.9/71.9 | 77.5/22.5/77.3 | 77.7/22.3/77.5 | 70.5/29.5/70.3 |
| | MobileNetV2 | 69.2/30.8/68.9 | 74.1/25.9/73.9 | 74.3/25.7/74.1 | 67.3/32.7/67.0 |
| | ViT-Small | 70.7/29.3/70.5 | 76.2/23.8/76.0 | 76.4/23.6/76.2 | 69.1/30.9/68.8 |
| | ECA110-Pooling | 71.7/28.3/71.5 | 73.8/26.2/73.6 | 74.0/26.0/73.9 | 70.5/29.5/70.3 |
| CIFAR-10 | ResNet-50 | 88.6/11.4/88.5 | 94.4/5.6/94.3 | 94.6/5.4/94.5 | 87.3/12.7/87.2 |
| | DenseNet-121 | 89.2/10.8/89.1 | 94.9/5.1/94.7 | 95.1/4.9/94.9 | 87.9/12.1/87.7 |
| | EfficientNet-B0 | 89.8/10.2/89.6 | 95.4/4.6/95.2 | 95.6/4.4/95.4 | 88.5/11.5/88.3 |
| | MobileNetV2 | 87.3/12.7/87.1 | 93.2/6.8/93.0 | 93.5/6.5/93.3 | 85.9/14.1/85.7 |
| | ViT-Small | 88.2/11.8/88.0 | 94.0/6.0/93.8 | 94.2/5.8/94.0 | 86.8/13.2/86.6 |
| | ECA110-Pooling | 91.2/8.8/91.0 | 92.9/7.1/92.7 | 93.0/7.0/92.8 | 90.3/9.7/90.1 |
| Fashion-MNIST | ResNet-50 | 95.0/5.0/94.9 | 96.2/3.8/96.1 | 96.3/3.7/96.2 | 94.2/5.8/94.1 |
| | DenseNet-121 | 95.3/4.7/95.2 | 96.4/3.6/96.3 | 96.5/3.5/96.4 | 94.5/5.5/94.4 |
| | EfficientNet-B0 | 95.6/4.4/95.5 | 96.6/3.4/96.5 | 96.7/3.3/96.6 | 94.8/5.2/94.7 |
| | MobileNetV2 | 94.7/5.3/94.6 | 95.9/4.1/95.8 | 96.0/4.0/95.9 | 94.0/6.0/93.9 |
| | ViT-Small | 95.1/4.9/95.0 | 96.2/3.8/96.1 | 96.3/3.7/96.2 | 94.4/5.6/94.3 |
| | ECA110-Pooling | 96.2/3.8/96.1 | 96.5/3.5/96.4 | 96.6/3.4/96.5 | 95.8/4.2/95.7 |
| Dataset | Method | 65/35 (5000 ep.) | 65/35 (10,000 ep.) | 50/50 (500 ep.) | 50/50 (5000 ep.) | 50/50 (10,000 ep.) |
|---|---|---|---|---|---|---|
| ImageNet (subset) | ResNet-50 | 74.3/25.7/74.0 | 75.0/25.0/74.8 | 66.5/33.5/66.2 | 72.5/27.5/72.3 | 73.0/27.0/72.8 |
| | DenseNet-121 | 75.2/24.8/75.0 | 75.8/24.2/75.6 | 67.3/32.7/67.0 | 73.6/26.4/73.4 | 74.1/25.9/73.9 |
| | EfficientNet-B0 | 76.0/24.0/75.8 | 76.4/23.6/76.2 | 68.4/31.6/68.2 | 74.2/25.8/74.0 | 74.7/25.3/74.5 |
| | MobileNetV2 | 72.6/27.4/72.3 | 73.2/26.8/73.0 | 65.5/34.5/65.2 | 70.8/29.2/70.5 | 71.4/28.6/71.1 |
| | ViT-Small | 74.9/25.1/74.7 | 75.4/24.6/75.2 | 67.2/32.8/66.9 | 73.3/26.7/73.1 | 73.9/26.1/73.7 |
| | ECA110-Pooling | 72.7/27.3/72.5 | 72.9/27.1/72.7 | 69.0/31.0/68.8 | 71.4/28.6/71.2 | 71.7/28.3/71.5 |
| CIFAR-10 | ResNet-50 | 93.8/6.2/93.6 | 94.1/5.9/93.9 | 85.9/14.1/85.7 | 92.4/7.6/92.2 | 92.7/7.3/92.5 |
| | DenseNet-121 | 94.3/5.7/94.1 | 94.6/5.4/94.4 | 86.5/13.5/86.3 | 92.9/7.1/92.7 | 93.2/6.8/93.0 |
| | EfficientNet-B0 | 94.8/5.2/94.6 | 95.0/5.0/94.8 | 87.0/13.0/86.8 | 93.5/6.5/93.3 | 93.9/6.1/93.7 |
| | MobileNetV2 | 92.7/7.3/92.5 | 93.1/6.9/92.9 | 84.5/15.5/84.3 | 91.4/8.6/91.2 | 91.9/8.1/91.7 |
| | ViT-Small | 93.4/6.6/93.2 | 93.7/6.3/93.5 | 85.4/14.6/85.2 | 92.0/8.0/91.8 | 92.4/7.6/92.2 |
| | ECA110-Pooling | 92.1/7.9/91.9 | 92.3/7.7/92.1 | 89.4/10.6/89.2 | 91.4/8.6/91.2 | 91.6/8.4/91.4 |
| Fashion-MNIST | ResNet-50 | 95.6/4.4/95.5 | 95.8/4.2/95.7 | 93.5/6.5/93.4 | 94.9/5.1/94.8 | 95.1/4.9/95.0 |
| | DenseNet-121 | 95.8/4.2/95.7 | 96.0/4.0/95.9 | 93.8/6.2/93.7 | 95.2/4.8/95.1 | 95.4/4.6/95.3 |
| | EfficientNet-B0 | 96.0/4.0/95.9 | 96.2/3.8/96.1 | 94.1/5.9/94.0 | 95.5/4.5/95.4 | 95.7/4.3/95.6 |
| | MobileNetV2 | 95.3/4.7/95.2 | 95.5/4.5/95.4 | 93.3/6.7/93.2 | 94.7/5.3/94.6 | 94.9/5.1/94.8 |
| | ViT-Small | 95.6/4.4/95.5 | 95.8/4.2/95.7 | 93.6/6.4/93.5 | 95.0/5.0/94.9 | 95.2/4.8/95.1 |
| | ECA110-Pooling | 96.0/4.0/95.9 | 96.1/3.9/96.0 | 95.3/4.7/95.2 | 95.6/4.4/95.5 | 95.7/4.3/95.6 |
Table 18.
Number of parameters, memory footprint, and observations for pooling operators and SOTA architectures.
| Method | No. Parameters | Mem. Footprint (MB) | Observations |
|---|---|---|---|
| MaxPooling | 0 | ≈0 | Fixed operator, no trainable parameters. |
| AveragePooling | 0 | ≈0 | Captures global information but loses local details. |
| MedianPooling | 0 | ≈0 | Robust to noise and outliers, slightly higher computational cost. |
| MinPooling | 0 | ≈0 | Rarely used, generally yields weak performance. |
| KernelPooling | ∼50 k | ∼0.2–0.5 | Learnable kernels; modest increase in model size. |
| ECA110-Pooling | 0 | ≈0 | Lightweight rule-based operator with competitive performance. |
| ResNet-50 | ∼25 M | ∼98 | High accuracy but computationally expensive. |
| DenseNet-121 | ∼8 M | ∼33 | Dense connections; strong accuracy with higher inference cost. |
| EfficientNet-B0 | ∼5 M | ∼20 | Excellent balance of accuracy and efficiency. |
| MobileNetV2 | ∼3.5 M | ∼14 | Optimized for mobile and embedded deployments. |
| ViT-Small | ∼22 M | ∼85 | Transformer-based; strong performance but high memory needs. |