CE-FPN-YOLO: A Contrast-Enhanced Feature Pyramid for Detecting Concealed Small Objects in X-Ray Baggage Images
Abstract
1. Introduction
- A novel feature pyramid network, termed the Contrast-Enhanced Feature Pyramid Network (CE-FPN), is proposed for integration into the YOLO framework. It incorporates a contrast-guided multi-branch fusion module to enhance the representation of small and low-contrast objects in cluttered X-ray security images.
- The proposed multi-branch design simultaneously improves boundary sensitivity and semantic consistency across feature scales, enabling more accurate detection of visually ambiguous items, such as plastic lighters, even under occlusion and overlapping conditions.
- Extensive experiments demonstrate that CE-FPN consistently outperforms mainstream FPN-based detectors, delivering notable accuracy gains while remaining computationally lightweight.
2. Related Work
2.1. Traditional Methods for X-Ray Baggage Object Detection
2.2. Deep Learning-Based Methods for X-Ray Baggage Object Detection
3. Methodology
3.1. Design Motivation
3.2. Architecture of Proposed Method
3.3. CE-FPN
3.3.1. Feature Representation and Motivation
- x: semantically rich but lacking spatial detail,
- z: rich in spatial detail (edges and textures) but semantically shallow,
- y: intermediate-level features, serving as the fusion base.
3.3.2. High-Level Semantic Enhancement
3.3.3. Low-Level Contrast Reweighting
Algorithm 1: Forward propagation of the CE-FPN module.
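The algorithm listing itself did not survive extraction, but the forward pass that Sections 3.3.2–3.3.4 describe (semantic enhancement of the high-level branch, contrast reweighting of the low-level branch, learnable aggregation at the fusion base) can be sketched roughly as follows. The helper names, the nearest-neighbour/average-pool resampling, and the specific contrast measure are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def upsample2x(f):
    # nearest-neighbour upsampling of a (C, H, W) feature map
    return f.repeat(2, axis=1).repeat(2, axis=2)

def downsample2x(f):
    # 2x2 average pooling of a (C, H, W) feature map
    c, h, w = f.shape
    return f.reshape(c, h // 2, 2, w // 2, 2).mean(axis=(2, 4))

def local_contrast(f, eps=1e-6):
    # per-position contrast map: mean absolute deviation from the spatial
    # mean, normalised to [0, 1]; stands in for the contrast-guided weight
    m = np.abs(f - f.mean(axis=(1, 2), keepdims=True)).mean(axis=0)
    return (m - m.min()) / (m.max() - m.min() + eps)

def ce_fpn_forward(x, y, z, alpha=(1/3, 1/3, 1/3)):
    """Fuse high-level x, mid-level y, low-level z at y's resolution.

    x: (C, H/2, W/2)  semantically rich, spatially coarse
    y: (C, H, W)      intermediate fusion base
    z: (C, 2H, 2W)    detail-rich, semantically shallow
    alpha: aggregation weights (fixed here; learnable in the paper)
    """
    x_up = upsample2x(x)                  # high-level semantic enhancement
    z_dn = downsample2x(z)
    z_dn = z_dn * local_contrast(z_dn)    # low-level contrast reweighting
    a = np.asarray(alpha) / np.sum(alpha) # normalised aggregation weights
    return a[0] * x_up + a[1] * y + a[2] * z_dn
```

Here `alpha` stands in for the learnable aggregation weights of Section 3.3.4; in an actual network they would be trained parameters normalised by a softmax.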
3.3.4. Learnable Feature Aggregation
4. Experiments
4.1. Dataset and Experimental Configuration
4.2. Evaluation Metrics
4.2.1. Precision and Recall
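For reference, the two metrics reduce to simple ratios over true positives (TP), false positives (FP), and false negatives (FN); a minimal helper with the generic definitions, not the authors' evaluation code:

```python
def precision_recall(tp, fp, fn):
    """Precision = TP / (TP + FP); Recall = TP / (TP + FN)."""
    precision = tp / (tp + fp) if (tp + fp) > 0 else 0.0
    recall = tp / (tp + fn) if (tp + fn) > 0 else 0.0
    return precision, recall
```

For example, 90 correct detections with 10 false alarms and 30 missed objects gives precision 0.90 and recall 0.75.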
4.2.2. Mean Average Precision at IoU = 0.5 (mAP@50)
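Per-class AP at IoU = 0.5 is obtained by sweeping the confidence threshold and integrating the precision-recall envelope; mAP@50 is then the mean over classes. The helper below is a generic all-point-interpolation sketch (detections are pre-matched to ground truth at IoU ≥ 0.5), not the authors' evaluation code:

```python
def average_precision(scores, is_tp, num_gt):
    """AP for one class from per-detection confidences and TP flags.

    scores: detection confidences; is_tp: True if matched at IoU >= 0.5;
    num_gt: number of ground-truth boxes for this class.
    """
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    tp = fp = 0
    recalls, precisions = [], []
    for i in order:                      # sweep threshold from high to low
        tp += is_tp[i]
        fp += not is_tp[i]
        recalls.append(tp / num_gt)
        precisions.append(tp / (tp + fp))
    # precision envelope: each precision becomes the max to its right
    for i in range(len(precisions) - 2, -1, -1):
        precisions[i] = max(precisions[i], precisions[i + 1])
    ap, prev_r = 0.0, 0.0
    for r, p in zip(recalls, precisions):  # area under the envelope
        ap += (r - prev_r) * p
        prev_r = r
    return ap
```

mAP@50 is simply `sum(average_precision(...) for each class) / num_classes`.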
4.3. Analysis of Results
4.3.1. Ablation Experiment
4.3.2. Generalization Across Architectures
4.3.3. Controlled Experiment
4.3.4. Qualitative Analysis
4.3.5. Precision–Recall Trade-Off in Safety-Critical Scenarios
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
- Rahman, F.; Wu, D. A Statistical Method to Schedule the First Exam Using Chest X-Ray in Lung Cancer Screening. Mathematics 2025, 13, 2623.
- Do, T.H.; Le, H.; Dang, M.H.H.; Nguyen, V.D.; Do, P. Cross-Domain Approach for Automated Thyroid Classification Using Diff-Quick Images. Mathematics 2025, 13, 2191.
- Tariq, M.; Choi, K. YOLO11-Driven Deep Learning Approach for Enhanced Detection and Visualization of Wrist Fractures in X-Ray Images. Mathematics 2025, 13, 1419.
- Nazarov, V.G.; Prokhorov, I.V.; Yarovenko, I.P. Identification of an Unknown Substance by the Methods of Multi-Energy Pulse X-ray Tomography. Mathematics 2023, 11, 3263.
- Andriyanov, N. Using ArcFace Loss Function and Softmax with Temperature Activation Function for Improvement in X-ray Baggage Image Classification Quality. Mathematics 2024, 12, 2547.
- Mery, D.; Riffo, V.; Zscherpel, U.; Mondragon, G.; Lillo, I.; Zuccar, I.; Lóbel, H.; Carrasco, M. GDXray: The database of X-ray images for nondestructive testing. J. Nondestruct. Eval. 2015, 34, 42.
- Mery, D.; Katsaggelos, A.K. A Logarithmic X-Ray Imaging Model for Baggage Inspection: Simulation and Object Detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Honolulu, HI, USA, 21–26 July 2017; pp. 283–290.
- Akcay, S.; Kundegorski, M.E.; Willcocks, C.G.; Breckon, T.P. Using Deep Convolutional Neural Network Architectures for Object Classification and Detection Within X-Ray Baggage Security Imagery. IEEE Trans. Inf. Forensics Secur. 2018, 13, 2203–2215.
- Miao, C.; Xie, L.; Wan, F.; Su, C.; Liu, H.; Jiao, J.; Ye, Q. SIXray: A Large-Scale Security Inspection X-Ray Benchmark for Prohibited Item Discovery in Overlapping Images. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 15–20 June 2019; pp. 2119–2128.
- Wei, Y.; Tao, R.; Wu, Z.; Ma, Y.; Zhang, L.; Liu, X. Occluded Prohibited Items Detection: An X-ray Security Inspection Benchmark and De-occlusion Attention Module. In Proceedings of the 28th ACM International Conference on Multimedia, Seattle, WA, USA, 12–16 October 2020; pp. 1388–1396.
- Tao, R.; Wei, Y.; Jiang, X.; Li, H.; Qin, H.; Wang, J.; Ma, Y.; Zhang, L.; Liu, X. Towards real-world X-ray security inspection: A high-quality benchmark and lateral inhibition module for prohibited items detection. In Proceedings of the IEEE International Conference on Computer Vision, Montreal, BC, Canada, 11–17 October 2021; pp. 10923–10932.
- Ma, C.; Zhuo, L.; Li, J.; Zhang, Y.; Zhang, J. Occluded prohibited object detection in X-ray images with global Context-aware Multi-Scale feature Aggregation. Neurocomputing 2023, 519, 1–16.
- Isaac-Medina, B.K.S.; Yucer, S.; Bhowmik, N.; Breckon, T.P. Seeing Through the Data: A Statistical Evaluation of Prohibited Item Detection Benchmark Datasets for X-Ray Security Screening. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Vancouver, BC, Canada, 17–24 June 2023; pp. 530–539.
- Ma, C.; Du, L.; Gao, Z.; Zhuo, L.; Wang, M. A Coarse to Fine Detection Method for Prohibited Object in X-ray Images Based on Progressive Transformer Decoder. In Proceedings of the 32nd ACM International Conference on Multimedia, MM ’24, New York, NY, USA, 16–19 June 2024; pp. 2700–2708.
- Zhu, Z.; Zhu, Y.; Wang, H.; Wang, N.; Ye, J.; Ling, X. FDTNet: Enhancing frequency-aware representation for prohibited object detection from X-ray images via dual-stream transformers. Eng. Appl. Artif. Intell. 2024, 133, 108076.
- Hassan, T.; Hassan, B.; Owais, M.; Velayudhan, D.; Dias, J.; Ghazal, M.; Werghi, N. Incremental convolutional transformer for baggage threat detection. Pattern Recognit. 2024, 153, 110493.
- Meng, X.; Feng, H.; Ren, Y.; Zhang, H.; Zou, W.; Ouyang, X. Transformer-based dual-view X-ray security inspection image analysis. Eng. Appl. Artif. Intell. 2024, 138, 109382.
- Azizzadeh Mehmandost Olya, B.; Mohebian, R.; Bagheri, H.; Mahdavi Hezaveh, A.; Khan Mohammadi, A. Toward real-time fracture detection on image logs using deep convolutional neural network YOLOv5. Interpretation 2024, 12, SB9–SB18.
- Dong, S.; Hao, J.; Zeng, L.; Yang, X.; Wang, L.; Ji, C.; Zhong, Z.; Chen, S.; Fu, K. A Deep Learning Object Detection Method for Fracture Identification Using Conventional Well Logs. IEEE Trans. Geosci. Remote Sens. 2024, 62, 1–16.
- Ren, S.; He, K.; Girshick, R.; Sun, J. Faster R-CNN: Towards real-time object detection with region proposal networks. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 1137–1149.
- Lin, T.; Goyal, P.; Girshick, R.; He, K.; Dollár, P. Focal loss for dense object detection. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 2980–2988.
- Yang, Z.; Liu, S.; Hu, H.; Wang, L.; Lin, S. RepPoints: Point Set Representation for Object Detection. In Proceedings of the IEEE International Conference on Computer Vision, Seoul, Republic of Korea, 27 October–2 November 2019; pp. 9657–9666.
- Jocher, G.; Qiu, J. Ultralytics YOLO11. Available online: https://docs.ultralytics.com/models/yolo11/ (accessed on 14 November 2025).
- Jocher, G. Ultralytics YOLOv5. Available online: https://github.com/ultralytics/yolov5 (accessed on 14 November 2025).
- Li, C.; Li, L.; Jiang, H.; Weng, K.; Geng, Y.; Li, L.; Ke, Z.; Li, Q.; Cheng, M.; Nie, W.; et al. YOLOv6: A single-stage object detection framework for industrial applications. arXiv 2022, arXiv:2209.02976.
- Wang, C.; Bochkovskiy, A.; Liao, H.M. YOLOv7: Trainable Bag-of-Freebies Sets New State-of-the-Art for Real-Time Object Detectors. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 18–22 June 2023; pp. 7464–7475.
- Jocher, G.; Chaurasia, A.; Qiu, J. Ultralytics YOLOv8. Available online: https://docs.ultralytics.com/models/yolov8/ (accessed on 14 November 2025).
- Wang, C.Y.; Yeh, I.H.; Liao, H.Y.M. YOLOv9: Learning What You Want to Learn Using Programmable Gradient Information. In European Conference on Computer Vision; Springer Nature: Cham, Switzerland, 2025; pp. 1–21.
- Wang, A.; Chen, H.; Liu, L.; Chen, K.; Lin, Z.; Han, J.; Ding, G. YOLOv10: Real-Time End-to-End Object Detection. In Advances in Neural Information Processing Systems; Curran Associates, Inc.: Red Hook, NY, USA, 2024; Volume 37, pp. 107984–108011.
- Tian, Y.; Ye, Q.; Doermann, D. YOLOv12: Attention-centric real-time object detectors. arXiv 2025, arXiv:2502.12524.
- Lei, M.; Li, S.; Wu, Y.; Hu, H.; Zhou, Y.; Zheng, X.; Ding, G.; Du, S.; Wu, Z.; Gao, Y. YOLOv13: Real-Time Object Detection with Hypergraph-Enhanced Adaptive Visual Perception. arXiv 2025, arXiv:2506.17733.
- Liao, H.; Huang, B.; Gao, H. Feature-Aware Prohibited Items Detection for X-Ray Images. In Proceedings of the 2023 IEEE International Conference on Image Processing (ICIP), Kuala Lumpur, Malaysia, 8–11 October 2023; pp. 1040–1044.
- Liu, W.; Sun, D.; Wang, Y.; Chen, Z.; Han, X.; Yang, H. ABTD-Net: Autonomous Baggage Threat Detection Networks for X-ray Images. In Proceedings of the 2023 IEEE International Conference on Multimedia and Expo (ICME), Brisbane, Australia, 10–14 July 2023; pp. 1229–1234.
- Guan, F.; Zhang, H.; Wang, X. An improved YOLOv8 model for prohibited item detection with deformable convolution and dynamic head. J. Real-Time Image Process. 2025, 22, 84.
- Wang, A.; Yuan, P.; Wu, H.; Iwahori, Y.; Liu, Y. Improved YOLOv8 for Dangerous Goods Detection in X-ray Security Images. Electronics 2024, 13, 3238.
- Zhang, W.; Zhu, Q.; Li, Y.; Li, H. MAM Faster R-CNN: Improved Faster R-CNN based on Malformed Attention Module for object detection on X-ray security inspection. Digit. Signal Process. 2023, 139, 104072.
- Huang, X.; Zhang, Y. ScanGuard-YOLO: Enhancing X-ray Prohibited Item Detection with Significant Performance Gains. Sensors 2024, 24, 102.
- Wang, M.; Du, H.; Mei, W.; Wang, S.; Yuan, D. Material-aware Cross-channel Interaction Attention (MCIA) for occluded prohibited item detection. Vis. Comput. 2023, 39, 2865–2877.
- Wang, B.; Ding, H.; Chen, C. AC-YOLOv4: An object detection model incorporating attention mechanism and atrous convolution for contraband detection in X-ray images. Multimed. Tools Appl. 2024, 83, 26485–26504.
- Li, M.; Ma, B.; Wang, H.; Chen, D.; Jia, T. GADet: A Geometry-Aware X-Ray Prohibited Items Detector. IEEE Sens. J. 2024, 24, 1665–1678.
- Li, M.; Jia, T.; Wang, H.; Ma, B.; Lu, H.; Lin, S.; Cai, D.; Chen, D. AO-DETR: Anti-Overlapping DETR for X-Ray Prohibited Items Detection. IEEE Trans. Neural Netw. Learn. Syst. 2025, 36, 12076–12090.
- Zhu, X.; Su, W.; Lu, L.; Li, B.; Wang, X.; Dai, J. Deformable DETR: Deformable Transformers for End-to-End Object Detection. arXiv 2020, arXiv:2010.04159.
- Li, M.; Jia, T.; Lu, H.; Ma, B.; Wang, H.; Chen, D. MMCL: Boosting deformable DETR-based detectors with multi-class min-margin contrastive learning for superior prohibited item detection. arXiv 2024, arXiv:2406.03176.
- Chang, A.; Zhang, Y.; Zhang, S.; Zhong, L.; Zhang, L. Detecting prohibited objects with physical size constraint from cluttered X-ray baggage images. Knowl.-Based Syst. 2022, 237, 107916.
- Zhao, C.; Zhu, L.; Dou, S.; Deng, W.; Wang, L. Detecting Overlapped Objects in X-Ray Security Imagery by a Label-Aware Mechanism. IEEE Trans. Inf. Forensics Secur. 2022, 17, 998–1009.

| Class | Training | Testing | Total |
|---|---|---|---|
| Portable Charger 1 | 9919 | 2502 | 12,421 |
| Portable Charger 2 | 6216 | 1572 | 7788 |
| Water | 2471 | 621 | 3092 |
| Laptop | 8046 | 1996 | 10,042 |
| Mobile Phone | 43,204 | 10,631 | 53,835 |
| Tablet | 3921 | 997 | 4918 |
| Cosmetic | 7969 | 1980 | 9949 |
| Non-metallic Lighter | 706 | 177 | 883 |
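The split in the table above can be verified mechanically, and doing so makes explicit the imbalance that makes the non-metallic lighter class hard: roughly 60x fewer samples than the most common class. The dictionary below simply re-enters the (training, testing) counts from the table:

```python
# (training, testing) counts per class, copied from the dataset table
counts = {
    "Portable Charger 1": (9919, 2502),
    "Portable Charger 2": (6216, 1572),
    "Water": (2471, 621),
    "Laptop": (8046, 1996),
    "Mobile Phone": (43204, 10631),
    "Tablet": (3921, 997),
    "Cosmetic": (7969, 1980),
    "Non-metallic Lighter": (706, 177),
}
# each class total should match the "Total" column of the table
totals = {cls: tr + te for cls, (tr, te) in counts.items()}
rarest = min(totals, key=totals.get)          # Non-metallic Lighter
most_common = max(totals, key=totals.get)     # Mobile Phone
imbalance = totals[most_common] / totals[rarest]
```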
| Class | Base P | CE-FPN P | ΔP | Base R | CE-FPN R | ΔR | Base mAP@50 | CE-FPN mAP@50 | ΔmAP |
|---|---|---|---|---|---|---|---|---|---|
| Portable_Charger_1 | 93.8 | 92.5 | −1.3 | 94.3 | 95.2 | +0.9 | 96.0 | 96.4 | +0.4 |
| Portable_Charger_2 | 88.8 | 88.7 | −0.1 | 95.1 | 95.4 | +0.3 | 95.3 | 95.7 | +0.4 |
| Water | 89.9 | 89.9 | 0.0 | 91.7 | 92.3 | +0.6 | 94.2 | 94.2 | 0.0 |
| Laptop | 92.7 | 91.3 | −1.4 | 99.5 | 99.5 | 0.0 | 97.9 | 98.3 | +0.4 |
| Mobile_Phone | 93.4 | 93.0 | −0.4 | 98.9 | 98.9 | 0.0 | 98.3 | 98.2 | −0.1 |
| Tablet | 92.7 | 91.6 | −1.1 | 94.1 | 95.4 | +1.3 | 96.2 | 97.1 | +0.9 |
| Cosmetic | 63.7 | 61.6 | −2.1 | 72.3 | 72.5 | +0.2 | 70.5 | 70.9 | +0.4 |
| Nonmetallic_Lighter | 82.0 | 74.4 | −7.6 | 25.8 | 31.1 | +5.3 | 35.8 | 45.9 | +10.1 |
| Overall | 87.1 | 85.4 | −1.7 | 84.0 | 85.0 | +1.0 | 85.5 | 87.1 | +1.6 |
| Method | mAP@50 |
|---|---|
| Baseline (YOLO11s) | 85.5 |
| w/o Low | 85.3 |
| w/o High | 87.0 |
| w/o High & Low | 85.4 |
| Ours (YOLO11s + CE-FPN) | 87.1 |
| Module | mAP@50 |
|---|---|
| Baseline (YOLO11s) | 85.5 |
| SENet | 85.7 |
| CBAM | 86.0 |
| BiFPN | 85.3 |
| ASFF | 86.2 |
| Ours (YOLO11s + CE-FPN) | 87.1 |
| YOLO Version | Params (M) | GFLOPs | mAP@50 |
|---|---|---|---|
| YOLOv5s | 7.0 | 16.0 | 84.2 |
| YOLOv5s + CE-FPN | 13.5 | 34.6 | 86.3 |
| YOLOv8s | 11.1 | 28.7 | 85.4 |
| YOLOv8s + CE-FPN | 14.3 | 36.3 | 86.5 |
| YOLO11s | 9.42 | 21.3 | 85.5 |
| YOLO11s + CE-FPN | 11.1 | 38.2 | 87.1 |
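The accuracy gain and added cost of CE-FPN per base model can be read directly off the table; the snippet below just recomputes the deltas from the figures above:

```python
# (params_M, gflops, map50): base model vs. base + CE-FPN, from the table
variants = {
    "YOLOv5s": ((7.0, 16.0, 84.2), (13.5, 34.6, 86.3)),
    "YOLOv8s": ((11.1, 28.7, 85.4), (14.3, 36.3, 86.5)),
    "YOLO11s": ((9.42, 21.3, 85.5), (11.1, 38.2, 87.1)),
}
deltas = {
    name: {
        "map_gain": round(ce[2] - base[2], 1),                 # mAP@50 points
        "extra_gflops_pct": round(100 * (ce[1] / base[1] - 1)),  # % more compute
    }
    for name, (base, ce) in variants.items()
}
```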
| Resolution | Model | Params (M) | GFLOPs | mAP@50 | Year |
|---|---|---|---|---|---|
| Low | FAPID [32] | – | – | 79.4 | 2023 |
| | ABTD-Net [33] | – | – | 72.7 | 2023 |
| | Faster R-CNN [34] | 41.2 | 215.7 | 81.7 | 2015 |
| | YOLOv5s [11] | 7.0 | 16.0 | 81.7 | 2020 |
| | YOLOv6m [34] | 52.0 | 161.2 | 83.1 | 2022 |
| | YOLOv5m [34] | 25.1 | 64.0 | 82.6 | 2021 |
| | YOLOv8s [35] | 11.1 | 28.7 | 82.3 | 2023 |
| | YOLOv8m [34] | 25.9 | 78.7 | 82.5 | 2023 |
| | YOLOv9m [34] | 20.0 | 76.5 | 83.1 | 2024 |
| | YOLOv10m [34] | 16.5 | 63.4 | 82.7 | 2024 |
| | YOLO11m [34] | 20.0 | 67.7 | 83.5 | 2024 |
| | YOLO11s [23] | 9.4 | 21.3 | 82.7 | 2024 |
| | YOLOv12s | 9.07 | 19.3 | 82.3 | 2025 |
| | YOLOv13s | 9.4 | 21.3 | 85.5 | 2025 |
| | YOLOv5s + CE-FPN | 13.5 | 34.4 | 82.7 | – |
| | YOLOv8s + CE-FPN | 14.4 | 36.7 | 82.9 | – |
| | YOLO11s + CE-FPN | 16.4 | 60.5 | 84.0 | – |
| High | Faster R-CNN [36] | 41.2 | 215.7 | 84.3 | 2015 |
| | YOLOX [36] | 54.3 | 165.7 | 83.8 | 2021 |
| | RetinaNet [36] | 36.3 | 217.0 | 82.8 | 2017 |
| | RepPoints [36] | 36.6 | 198.6 | 83.2 | 2019 |
| | MAM Faster R-CNN [36] | 47.7 | 247.3 | 85.1 | 2023 |
| | YOLOv5s [37] | 7.0 | 16.0 | 84.2 | 2020 |
| | YOLOv6s [37] | 18.5 | 45.3 | 84.9 | 2022 |
| | YOLOv7 [37] | 36.5 | 103.3 | 85.8 | 2022 |
| | YOLOv8s [37] | 11.1 | 28.7 | 85.4 | 2023 |
| | YOLOv10s | 8.07 | 24.8 | 84.1 | 2024 |
| | YOLO11s [23] | 9.4 | 21.3 | 85.5 | 2024 |
| | YOLOv12s | 9.07 | 19.3 | 84.2 | 2025 |
| | YOLOv13s | 9.4 | 21.3 | 85.5 | 2025 |
| | YOLOv5s + CE-FPN | 13.5 | 34.4 | 86.3 | – |
| | YOLOv8s + CE-FPN | 14.4 | 36.7 | 86.5 | – |
| | YOLO11s + CE-FPN | 16.4 | 60.5 | 87.1 | – |
| Method | Backbone | Epoch | Input Size | mAP@50 | FO | ST | SC | UT | MU | Params | FPS | Year |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| DOAM [10] | ResNet50 | MAX | – | 82.41 | 86.71 | 68.58 | 90.23 | 78.84 | 87.67 | 90.79M | – | ACM MM20 |
| MCIA-FPN [38] | ResNet101 | MAX | – | 82.59 | 89.08 | 74.48 | 89.99 | 86.13 | 89.75 | – | – | TVC23 |
| AC-YOLOv4 [39] | CSPDarknet53 | 300 | 416 | 85.69 | – | – | – | – | – | – | – | MTA23 |
| GADet-S [40] | Modified CSP v5 | 60 | 320 | 69.6 | 72.6 | 43.6 | 86.6 | 67.5 | 77.5 | 8.94M | 116 | JSEN24 |
| GADet-L [40] | Modified CSP v5 | 60 | 320 | 77.7 | 81.8 | 54.0 | 89.8 | 77.5 | 85.2 | 54.16M | 75 | JSEN24 |
| GADet-X [40] | Modified CSP v5 | 60 | 320 | 78.1 | 83.1 | 56.3 | 89.8 | 75.7 | 85.5 | 99.01M | 56 | JSEN24 |
| FDTNet [15] | ResNeXt101 | 12 | 512 | 82.04 | 87.90 | 60.20 | 96.10 | 78.90 | 87.10 | 66.17M | – | EAAI24 |
| FDTNet [15] | ResNeXt101 | 12 | 1333 | 88.02 | 91.50 | 74.60 | 97.60 | 85.20 | 91.20 | 66.17M | – | EAAI24 |
| DINO [41] | ResNet50 | – | 640 | 78.2 | 83.2 | 58.8 | 89.4 | 72.7 | 86.7 | 58.38M | 54 | ICLR23 |
| DINO [41] | Swin-L | – | 640 | 80.0 | 84.2 | 61.1 | 89.0 | 78.9 | 86.6 | 229.0M | 40 | ICLR23 |
| AO-DETR [41] | ResNet50 | 15 | 640 | 87.2 | 90.0 | 80.1 | 90.8 | 85.6 | 89.5 | 58.38M | 29 | TNNLS24 |
| AO-DETR [41] | Swin-L | 15 | 640 | 89.0 | 89.4 | 80.4 | 97.8 | 87.4 | 90.0 | 229.0M | 15 | TNNLS24 |
| Deformable DETR [42] | ResNet50 | – | 640 | 63.4 | 70.1 | 29.0 | 86.0 | 55.7 | 76.4 | 52.14M | 60 | ICLR21 |
| RT-DETR+MMCL [43] | ResNet50 | – | 640 | 62.5 | 65.9 | 22.3 | 86.4 | 57.1 | 80.7 | 42.81M | 64 | arXiv24 |
| POD-F-R [12] | ResNet50 | 24 | 1333 | 84.9 | 88.7 | 76.0 | 88.9 | 82.8 | 88.1 | 118.32M | 7 | IJON23 |
| POD-F-X [12] | ResNeXt50 | 24 | 1333 | 86.1 | 89.4 | 78.7 | 90.6 | 83.3 | 88.7 | 119.67M | 6 | IJON23 |
| XDet [44] | ResNet50 | MAX | 1280 | 86.69 | 90.42 | 75.95 | 91.46 | 84.31 | 91.29 | 41.19M | 25 | KBS22 |
| ATSS+LAreg [45] | ResNet50 | 12 | 1280 | 87.39 | 92.78 | 71.17 | 96.61 | 83.45 | 92.92 | – | – | TIFS22 |
| ATSS+LAcls [45] | ResNet50 | 12 | 1280 | 88.26 | 90.04 | 74.99 | 97.60 | 85.70 | 92.96 | – | – | TIFS22 |
| Ours | CSPDarknet53 | 200 | 1280 | 89.8 | 92.4 | 76.5 | 98.8 | 87.7 | 93.5 | 16.36M | 149 | – |
| Precision (%) | YOLO11s Recall (%) | YOLO11s + CE-FPN Recall (%) | Δ Recall |
|---|---|---|---|
| 70 | 94.6 | 94.6 | +0.0 |
| 75 | 88.8 | 92.0 | +3.2 |
| 80 | 78.6 | 80.3 | +1.7 |
| 85 | 51.9 | 61.7 | +9.8 |
| 90 | 41.7 | 51.4 | +9.7 |
| 95 | 26.5 | 30.5 | +4.0 |
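Operating points like those in the table come from sweeping the confidence threshold along the precision-recall curve and reading off the best recall that still meets each precision target. A generic extraction helper (a sketch, not the authors' evaluation code):

```python
def recall_at_precision(precisions, recalls, target):
    """Highest recall achievable with precision at or above `target`.

    precisions/recalls: paired values sampled along the PR curve.
    Returns 0.0 if the target precision is never reached.
    """
    feasible = [r for p, r in zip(precisions, recalls) if p >= target]
    return max(feasible, default=0.0)
```

On a toy curve with precision [0.97, 0.92, 0.86, 0.70] paired with recall [0.2, 0.5, 0.8, 0.95], the operating point at a 90% precision target is recall 0.5.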
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Cheng, Q.; Cai, Z.; Lin, Y.; Li, J.; Lan, T. CE-FPN-YOLO: A Contrast-Enhanced Feature Pyramid for Detecting Concealed Small Objects in X-Ray Baggage Images. Mathematics 2025, 13, 4012. https://doi.org/10.3390/math13244012

