PSF-C-Net: A Counterfactual Deep Learning Model for Person Re-Identification Based on Random Cropping Patch and Shuffling Filling
Abstract
1. Introduction
- We propose a counterfactual framework that improves person re-identification accuracy by explicitly incorporating long-range dependent features. The framework adopts a dual-path counterfactual architecture with shared convolutional parameters, taking factual and counterfactual instances as inputs simultaneously to disrupt contextual relationships, and it performs controllable counterfactual operations in the prediction space to interpret global features without introducing additional parameters.
- We propose a method for generating counterfactual data at the instance level, using a counterfactual strategy to construct counterfactual instances directly in the instance space. The goal is to build an explicit counterfactual space efficiently, so that counterfactual predictions can be carried out reliably. Specifically, we randomly crop equal-size patches from the original image, shuffle them, and fill them back into the image. This breaks the contextual relationships within the image without introducing any additional noise.
- We evaluate the proposed method on two widely used person re-identification benchmarks, Market-1501 and DukeMTMC-reID. The experimental results show that the method consistently achieves state-of-the-art performance across a variety of scenarios.
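The crop-shuffle-fill operation described in the second contribution can be sketched as follows. This is a minimal illustration, not the authors' released implementation; the function name `rcpsf`, the patch size, and the patch count are illustrative assumptions.

```python
import numpy as np

def rcpsf(img, patch_size=16, num_patches=4, rng=None):
    """Random Cropping Patch and Shuffling Filling (illustrative sketch).

    Crops `num_patches` equal-size square patches at random locations,
    shuffles them, and writes them back at the same locations. Local
    context is broken, but no new pixel values (noise) are introduced.
    """
    rng = np.random.default_rng(rng)
    h, w = img.shape[:2]
    out = img.copy()
    # Random top-left corners for each patch (patches may overlap).
    ys = rng.integers(0, h - patch_size + 1, size=num_patches)
    xs = rng.integers(0, w - patch_size + 1, size=num_patches)
    # Copy all patches first so overlapping writes do not corrupt reads.
    patches = [img[y:y + patch_size, x:x + patch_size].copy()
               for y, x in zip(ys, xs)]
    # Shuffle the patches and fill them back at the original positions.
    order = rng.permutation(num_patches)
    for (y, x), k in zip(zip(ys, xs), order):
        out[y:y + patch_size, x:x + patch_size] = patches[k]
    return out
```

In training, such a counterfactual instance would be generated from each input image with some probability p (Section 4.3.1) and fed through the second path of the dual-path architecture.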
2. Related Work
2.1. Deep Model for Person Re-Identification
2.2. Causal Counterfactual in Vision
3. Methods
3.1. Person Instance Long-Range Features Extraction
3.2. RCPSF of Counterfactual Data Generation
3.3. Counterfactual Data Learning
4. Experiments
4.1. Datasets and Evaluation Metrics
4.1.1. Dataset
4.1.2. Evaluation Metric
4.1.3. Implementation Details
4.2. Ablation Experiments
4.2.1. Superiority of the Model
4.2.2. Training Parameters
4.2.3. RCPSF Improves Different Baseline Models
4.3. Choice of Parameter Setting
4.3.1. Counterfactual Data Generation Probability p
4.3.2. Counterfactual Loss Weights
4.4. Effectiveness of Each Loss
4.5. Ranking Results
4.6. Comparison with State-of-the-Art Methods
4.7. Baseline Meets State of the Art
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Wang, Z.; Jiang, J.; Yu, Y.; Satoh, S. Incremental Re-Identification by Cross-Direction and Cross-Ranking Adaption. IEEE Trans. Multimedia 2019, 21, 2376–2386. [Google Scholar] [CrossRef]
- Zheng, L.; Yang, Y.; Hauptmann, A.G. Person re-identification: Past, present and future. arXiv 2016, arXiv:1610.02984. [Google Scholar]
- Wang, C.; Zhang, Q.; Huang, C.; Liu, W.; Wang, X. Mancs: A multi-task attentional network with curriculum sampling for person re-identification. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 365–381. [Google Scholar]
- Simonyan, K.; Zisserman, A. Very Deep Convolutional Networks for Large-scale Image Recognition. In Proceedings of the International Conference on Learning Representation, San Diego, CA, USA, 7–9 May 2015. [Google Scholar]
- Jia, M.; Cheng, X.; Lu, S.; Zhang, J. Learning Disentangled Representation Implicitly Via Transformer for Occluded Person Re-Identification. IEEE Trans. Multimedia 2022, 25, 1294–1305. [Google Scholar] [CrossRef]
- Zhang, J.; Niu, L.; Zhang, L. Person Re-Identification with Reinforced Attribute Attention Selection. IEEE Trans. Image Process. 2021, 30, 603–616. [Google Scholar] [CrossRef] [PubMed]
- Gong, X.; Yao, Z.; Li, X.; Fan, Y.; Luo, B.; Fan, J.; Lao, B. LAG-Net: Multi-Granularity Network for Person Re-Identification via Local Attention System. IEEE Trans. Multimedia 2022, 24, 217–229. [Google Scholar] [CrossRef]
- Zheng, Z.; Yang, X.; Yu, Z.; Zheng, L.; Yang, Y.; Kautz, J. Joint Discriminative and Generative Learning for Person Re-Identification. In Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 15–20 June 2019; pp. 2133–2142. [Google Scholar] [CrossRef]
- Yang, M.; Liu, F.; Chen, Z.; Shen, X.; Hao, J.; Wang, J. CausalVAE: Disentangled Representation Learning via Neural Structural Causal Models. In Proceedings of the 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, TN, USA, 20–25 June 2021; pp. 9588–9597. [Google Scholar] [CrossRef]
- Rao, Y.; Chen, G.; Lu, J.; Zhou, J. Counterfactual Attention Learning for Fine-Grained Visual Categorization and Re-identification. In Proceedings of the 2021 IEEE/CVF International Conference on Computer Vision (ICCV), Montreal, QC, Canada, 10–17 October 2021; pp. 1005–1014. [Google Scholar] [CrossRef]
- Zhong, Z.; Zheng, L.; Kang, G.; Li, S.; Yang, Y. Random erasing data augmentation. arXiv 2017, arXiv:1708.04896. [Google Scholar] [CrossRef]
- Sun, Y.; Zheng, L.; Yang, Y.; Tian, Q.; Wang, S. Beyond part models: Person retrieval with refined part pooling (and a strong convolutional baseline). In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 480–496. [Google Scholar]
- Wang, G.; Yuan, Y.; Chen, X.; Li, J.; Zhou, X. Learning discriminative features with multiple granularities for person re-identification. In Proceedings of the 26th ACM International Conference on Multimedia, Seoul, Republic of Korea, 22–26 October 2018; pp. 274–282. [Google Scholar]
- Fan, X.; Luo, H.; Zhang, X.; He, L.; Zhang, C.; Jiang, W. SCPNet: Spatial-Channel Parallelism Network for Joint Holistic and Partial Person Re-identification. In Computer Vision—ACCV 2018; Jawahar, C., Li, H., Mori, G., Schindler, K., Eds.; Springer: Berlin/Heidelberg, Germany, 2019. [Google Scholar]
- Miao, J.; Wu, Y.; Liu, P.; Ding, Y.; Yang, Y. Pose-guided feature alignment for occluded person re-identification. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Seoul, Republic of Korea, 27 October–2 November 2019; pp. 542–551. [Google Scholar]
- Fu, X.; Huang, F.; Zhou, Y.; Ma, H.; Xu, X.; Zhang, L. Cross-Modal Cross-Domain Dual Alignment Network for RGB-Infrared Person Re-Identification. IEEE Trans. Circuits Syst. Video Technol. 2022, 32, 6874–6887. [Google Scholar] [CrossRef]
- Farooq, A.; Awais, M.; Kittler, J.; Akbari, A.; Khalid, S.S. Cross Modal Person Re-identification with Visual-Textual Queries. In Proceedings of the 2020 IEEE International Joint Conference on Biometrics (IJCB), Houston, TX, USA, 28 September–1 October 2020; pp. 1–8. [Google Scholar] [CrossRef]
- Yang, X.; Liu, L.; Wang, N.; Gao, X. A Two-Stream Dynamic Pyramid Representation Model for Video-Based Person Re-Identification. IEEE Trans. Image Process. 2021, 30, 6266–6276. [Google Scholar] [CrossRef] [PubMed]
- Chen, H.; Wang, Y.; Shi, Y.; Yan, K.; Geng, M.; Tian, Y.; Xiang, T. Deep Transfer Learning for Person Re-Identification. In Proceedings of the 2018 IEEE 4th International Conference on Multimedia Big Data (BigMM), Xi’an, China, 13–16 September 2018; pp. 1–5. [Google Scholar] [CrossRef]
- Pearl, J.; Mackenzie, D. The Book of Why: The New Science of Cause and Effect; Basic Books: New York, NY, USA, 2018. [Google Scholar]
- Jin, X.; Lan, C.; Zeng, W.; Chen, Z.; Zhang, L. Style Normalization and Restitution for Generalizable Person Re-Identification. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 13–19 June 2020; pp. 3140–3149. [Google Scholar] [CrossRef]
- Zheng, L.; Shen, L.; Tian, L.; Wang, S.; Wang, J.; Tian, Q. Scalable person re-identification: A benchmark. In Proceedings of the IEEE International Conference on Computer Vision, Santiago, Chile, 13–16 December 2015; pp. 1116–1124. [Google Scholar]
- Ristani, E.; Solera, F.; Zou, R.; Cucchiara, R.; Tomasi, C. Performance measures and a data set for multi-target, multi-camera tracking. In Proceedings of the European Conference on Computer Vision (ECCV) Workshops, Amsterdam, The Netherlands, 8–16 October 2016. [Google Scholar]
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar]
- Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet classification with deep convolutional neural networks. Adv. Neural Inf. Process. Syst. 2012, 25. [Google Scholar] [CrossRef]
- Fan, X.; Jiang, W.; Luo, H.; Fei, M. SphereReID: Deep hypersphere manifold embedding for person re-identification. J. Vis. Commun. Image Represent. 2019, 60, 51–58. [Google Scholar] [CrossRef]
- Luo, H.; Gu, Y.; Liao, X.; Lai, S.; Jiang, W. Bag of Tricks and a Strong Baseline for Deep Person Re-Identification. In Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Long Beach, CA, USA, 15–20 June 2019; pp. 1487–1495. [Google Scholar] [CrossRef]
- Zheng, Z.; Zheng, L.; Yang, Y. Unlabeled samples generated by gan improve the person re-identification baseline in vitro. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017. [Google Scholar]
- Zheng, F.; Deng, C.; Sun, X.; Jiang, X.; Guo, X.; Yu, Z.; Huang, F.; Ji, R. Pyramidal person re-identification via multi-loss dynamic training. In Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 15–20 June 2019; pp. 8514–8522. [Google Scholar]
- Yun, S.; Han, D.; Oh, S.J.; Chun, S.; Choe, J.; Yoo, Y. CutMix: Regularization Strategy to Train Strong Classifiers with Localizable Features. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), Seoul, Republic of Korea, 27 October–2 November 2019; pp. 6023–6032. [Google Scholar] [CrossRef]
- Liu, X.; Shen, F.; Zhao, J.; Nie, C. RandoMix: A mixed sample data augmentation method with multiple mixed modes. Multimedia Tools Appl. 2024, 1–17. [Google Scholar] [CrossRef]
- Hendrycks, D.; Mu, N.; Cubuk, E.D.; Zoph, B.; Gilmer, J.; Lakshminarayanan, B. AugMix: A Simple Data Processing Method to Improve Robustness and Uncertainty. arXiv 2020, arXiv:1912.02781. [Google Scholar]
- Seo, J.-W.; Jung, H.-G.; Lee, S.-W. Self-augmentation: Generalizing deep networks to unseen classes for few-shot learning. Neural Netw. 2021, 138, 140–149. [Google Scholar] [CrossRef] [PubMed]
- Zheng, Z.; Zheng, L.; Yang, Y. A Discriminatively Learned CNN Embedding for Person Reidentification. ACM Trans. Multimedia Comput. Commun. Appl. 2018, 14, 1–20. [Google Scholar] [CrossRef]
- Szegedy, C.; Vanhoucke, V.; Ioffe, S.; Shlens, J.; Wojna, Z. Rethinking the Inception Architecture for Computer Vision. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 2818–2826. [Google Scholar]
- Zheng, M.; Karanam, S.; Wu, Z.; Radke, R.J. Re-identification with consistent attentive Siamese networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 15–20 June 2019; pp. 5735–5744. [Google Scholar]
- Quan, R.; Dong, X.; Wu, Y.; Zhu, L.; Yang, Y. Auto-ReID: Searching for a part-aware ConvNet for person re-identification. In Proceedings of the 2019 IEEE/CVF International Conference on Computer Vision (ICCV), Seoul, Republic of Korea, 27 October–2 November 2019; pp. 3750–3759. [Google Scholar]
- Park, H.; Ham, B. Relation Network for Person Re-Identification. Proc. AAAI Conf. Artif. Intell. 2020, 34, 11839–11847. [Google Scholar] [CrossRef]
- Hou, R.; Ma, B.; Chang, H.; Gu, X.; Shan, S.; Chen, X. Interaction-and-aggregation network for person re-identification. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 15–20 June 2019; pp. 9317–9326. [Google Scholar]
- Yang, W.; Huang, H.; Zhang, Z.; Chen, X.; Huang, K.; Zhang, S. Towards rich feature discovery with class activation maps augmentation for person re-identification. In Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 15–20 June 2019; pp. 1389–1398. [Google Scholar]
- Chen, B.; Deng, W.; Hu, J. Mixed high-order attention network for person re-identification. In Proceedings of the 2019 IEEE/CVF International Conference on Computer Vision (ICCV), Seoul, Republic of Korea, 27 October–2 November 2019; pp. 371–381. [Google Scholar]
- Chen, G.; Lin, C.; Ren, L.; Lu, J.; Zhou, J. Self-critical attention learning for person re-identification. In Proceedings of the 2019 IEEE/CVF International Conference on Computer Vision (ICCV), Seoul, Republic of Korea, 27 October–2 November 2019; pp. 9637–9646. [Google Scholar]
- Kalayeh, M.M.; Basaran, E.; Gokmen, M.; Kamasak, M.E.; Shah, M. Human semantic parsing for person re-identification. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 1062–1071. [Google Scholar]
- Ren, M.; He, L.; Liao, X.; Liu, W.; Wang, Y.; Tan, T. Learning Instance-level Spatial-Temporal Patterns for Person Re-identification. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), Montreal, QC, Canada, 10–17 October 2021; pp. 14910–14919. [Google Scholar] [CrossRef]
- Wang, G.; Lai, J.; Huang, P.; Xie, X. Spatial-temporal person re-identification. In Proceedings of the AAAI Conference on Artificial Intelligence, Honolulu, HI, USA, 27 January–1 February 2019; Volume 33, No. 1. [Google Scholar]
- Zhang, Z.; Lan, C.; Zeng, W.; Chen, Z. Densely semantically aligned person re-identification. In Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 15–20 June 2019; pp. 667–676. [Google Scholar]
- Tay, C.-P.; Roy, S.; Yap, K.-H. AANet: Attribute attention network for person re-identifications. In Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 15–20 June 2019; pp. 7134–7143. [Google Scholar]
- Nguyen, B.X.; Nguyen, B.D.; Do, T.; Tjiputra, E.; Tran, Q.D.; Nguyen, A. Graph-based Person Signature for Person Re-Identifications. In Proceedings of the 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Virtual, 19–25 June 2021; pp. 3487–3496. [Google Scholar] [CrossRef]
- He, S.; Luo, H.; Wang, P.; Wang, F.; Li, H.; Jiang, W. TransReID: Transformer-based Object Re-Identification. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), Montreal, QC, Canada, 10–17 October 2021; pp. 14993–15002. [Google Scholar] [CrossRef]
- Wang, G.; Yang, S.; Liu, H.; Wang, Z.; Yang, Y.; Wang, S.; Yu, G.; Zhou, E.; Sun, J. High-order information matters: Learning relation and topology for occluded person re-identification. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 13–19 June 2020; pp. 6449–6458. [Google Scholar]
- Chen, G.; Zhang, T.; Lu, J.; Zhou, J. Deep meta metric learning. In Proceedings of the 2019 IEEE/CVF International Conference on Computer Vision (ICCV), Seoul, Republic of Korea, 27 October–2 November 2019; pp. 9547–9556. [Google Scholar]
- Luo, C.; Chen, Y.; Wang, N.; Zhang, Z.-X. Spectral feature transformation for person re-identification. In Proceedings of the 2019 IEEE/CVF International Conference on Computer Vision (ICCV), Seoul, Republic of Korea, 27 October–2 November 2019; pp. 4976–4985. [Google Scholar]
- Sun, Y.; Cheng, C.; Zhang, Y.; Zhang, C.; Zheng, L.; Wang, Z.; Wei, Y. Circle loss: A unified perspective of pair similarity optimization. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 13–19 June 2020; pp. 6398–6407. [Google Scholar]
| Methods | Market-1501 Rank-1 | Market-1501 mAP | Duke-MTMC Rank-1 | Duke-MTMC mAP |
|---|---|---|---|---|
| B | 94.5 | 85.9 | 86.4 | 76.4 |
| | 94.9 | 86.4 | 86.6 | 76.6 |
| | 94.8 | 86.6 | 86.8 | 76.7 |
| (overall) | 95.2 | 87.3 | 87.1 | 76.9 |
| Methods | Market-1501 #Params (K) | Duke-MTMC #Params (K) |
|---|---|---|
| Baseline | 25,668 | 25,892 |
| Proposed method | 27,688 | 27,616 |
| Method | Model | RCPSF | Market-1501 Rank-1 | Market-1501 mAP | Duke-MTMC Rank-1 | Duke-MTMC mAP |
|---|---|---|---|---|---|---|
| IDE | ResNet18 | No | 79.87 | 57.37 | 67.73 | 46.87 |
| IDE | ResNet18 | Yes | 82.83 | 63.02 | 71.20 | 51.83 |
| IDE | ResNet34 | No | 82.93 | 62.34 | 71.63 | 49.71 |
| IDE | ResNet34 | Yes | 85.23 | 66.02 | 74.02 | 54.63 |
| IDE | ResNet50 | No | 83.14 | 63.56 | 71.99 | 51.29 |
| IDE | ResNet50 | Yes | 85.64 | 68.50 | 74.86 | 56.53 |
| Baseline | ResNet50 | No | 94.5 | 85.9 | 86.4 | 76.4 |
| Baseline | ResNet50 | Yes | 94.9 | 86.4 | 86.6 | 76.6 |
| Baseline | ResNet101 | No | 94.5 | 87.1 | 87.6 | 77.6 |
| Baseline | ResNet101 | Yes | 94.9 | 87.4 | 88.1 | 77.8 |
| Baseline | IBN-Net50-a | No | 95.0 | 88.2 | 90.1 | 79.1 |
| Baseline | IBN-Net50-a | Yes | 95.5 | 88.4 | 90.5 | 79.3 |
| Pyramid | ResNet50 | No | 95.7 | 88.2 | 89.0 | 79.0 |
| Pyramid | ResNet50 | Yes | 96.1 | 88.6 | 89.4 | 79.3 |
| Method | Model | Data Augmentation | Market-1501 Rank-1 | Market-1501 mAP | Duke-MTMC Rank-1 | Duke-MTMC mAP |
|---|---|---|---|---|---|---|
| PSF-C-Net | ResNet50 | Random Crop | 94.3 | 86.3 | 86.2 | 76.3 |
| PSF-C-Net | ResNet50 | Cutmix [30] | 94.5 | 86.5 | 86.5 | 76.4 |
| PSF-C-Net | ResNet50 | Random Erasing [11] | 94.6 | 86.7 | 86.6 | 76.5 |
| PSF-C-Net | ResNet50 | RandomMix [31] | 94.3 | 86.8 | 86.4 | 76.3 |
| PSF-C-Net | ResNet50 | AugMix [32] | 94.9 | 86.6 | 86.3 | 76.2 |
| PSF-C-Net | ResNet50 | Self-Augmentation [33] | 94.8 | 86.4 | 86.5 | 76.1 |
| PSF-C-Net | ResNet50 | RCPSF (Proposed method) | 95.2 | 87.3 | 87.1 | 76.9 |
| | | | | Rank-1 | mAP |
|---|---|---|---|---|---|
| ✗ | ✓ | ✓ | ✓ | 81.7 | 62.3 |
| ✓ | ✗ | ✓ | ✓ | 91.8 | 81.5 |
| ✓ | ✓ | ✗ | ✓ | 91.8 | 82.5 |
| ✓ | ✓ | ✓ | ✗ | 94.8 | 86.1 |
| ✓ | ✓ | ✓ | ✓ | 95.2 | 87.3 |
| Type | Methods | Backbone | Market-1501 Rank-1 | Market-1501 Rank-5 | Market-1501 mAP | Duke-MTMC Rank-1 | Duke-MTMC Rank-5 | Duke-MTMC mAP |
|---|---|---|---|---|---|---|---|---|
| Stripe-Based | PCB+RPP [12] | ResNet50 | 93.8 | 97.5 | 81.6 | 83.3 | - | 69.2 |
| Stripe-Based | MGN [13] | ResNet50 | 95.7 | - | 86.9 | 88.7 | - | 78.4 |
| Stripe-Based | Pyramid [29] | ResNet50 | 95.7 | 98.4 | 88.2 | 89.0 | 94.7 | 79.0 |
| Stripe-Based | Auto-ReID [37] | Searched | 94.5 | - | 85.1 | - | - | - |
| Stripe-Based | GCP [38] | ResNet50 | 95.2 | - | 88.9 | 89.7 | - | 78.6 |
| Attention-Based | IANet [39] | ResNet50 | 94.4 | - | 83.1 | 87.1 | - | 73.4 |
| Attention-Based | CASN+PCB [36] | ResNet50 | 94.4 | - | 82.8 | 87.7 | - | 73.7 |
| Attention-Based | CAMA [40] | ResNet50 | 94.7 | 98.1 | 84.5 | 85.8 | - | 72.9 |
| Attention-Based | MHN-6 [41] | ResNet50 | 95.1 | 98.1 | 85.0 | 89.1 | 94.6 | 77.2 |
| Attention-Based | SCAL [42] | ResNet50 | 95.8 | 98.5 | 88.9 | 89.0 | 95.1 | 79.6 |
| Attention-Based | CAL [10] | ResNet50 | 95.5 | 98.5 | 89.5 | 90.0 | 96.1 | 80.5 |
| Extra Semantics-Based | SPReID [43] | ResNet152 | 92.5 | - | 81.3 | 84.4 | - | 71.0 |
| Extra Semantics-Based | AANet [47] | ResNet50 | 93.9 | - | 82.5 | 86.4 | - | 72.6 |
| Extra Semantics-Based | DSA-reID [46] | ResNet50 | 95.7 | - | 87.6 | 86.2 | - | 74.3 |
| Extra Semantics-Based | HONet [50] | ResNet50 | 94.2 | - | 84.9 | 86.9 | - | 75.6 |
| Extra Semantics-Based | GPS [48] | ResNet50 | 95.2 | 98.4 | 87.8 | 88.2 | 95.2 | 78.7 |
| Extra Semantics-Based | TransReID [49] | ViT-B/16 | 95.2 | - | 89.5 | 90.7 | - | 82.6 |
| Extra Semantics-Based | st-reID [44] | ResNet50 | 98.1 | 99.3 | 87.6 | 94.4 | 97.4 | 83.9 |
| Extra Semantics-Based | InSTD [45] | ResNet50 | 97.6 | 99.5 | 90.8 | 95.7 | 97.2 | 89.1 |
| Global feature | DMML [51] | ResNet50 | 93.5 | - | 81.6 | 85.9 | - | 73.7 |
| Global feature | SFT [52] | ResNet50 | 93.4 | - | 82.7 | 86.9 | - | 73.2 |
| Global feature | Circle [53] | ResNet50 | 94.2 | - | 84.9 | - | - | - |
| Global feature | Baseline [27] | ResNet50 | 94.5 | - | 85.9 | 86.4 | - | 76.4 |
| Global feature | Baseline [27] (RK) | ResNet50 | 95.4 | - | 94.2 | 90.3 | - | 89.1 |
| Global feature | PSF-C-Net (Ours) | ResNet50 | 95.2 | 98.7 | 87.3 | 87.1 | 93.9 | 76.9 |
| Global feature | PSF-C-Net (Ours) (RK) | ResNet50 | 96.5 | 98.7 | 94.8 | 91.2 | 93.8 | 89.8 |
| Methods | Backbone | Market-1501 Rank-1 | Market-1501 Rank-5 | Market-1501 mAP | Duke-MTMC Rank-1 | Duke-MTMC Rank-5 | Duke-MTMC mAP |
|---|---|---|---|---|---|---|---|
| CAL [10] | ResNet50 | 95.5 | 98.5 | 89.5 | 90.0 | 96.1 | 80.5 |
| CAL † | ResNet50 | 95.7 | 98.6 | 89.8 | 90.5 | 96.2 | 80.7 |
| MGN [13] | ResNet50 | 95.7 | - | 86.9 | 87.7 | - | 78.4 |
| MGN † | ResNet50 | 95.8 | - | 87.2 | 88.9 | - | 78.6 |
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Sun, R.; Chen, Q.; Dong, H.; Zhang, H.; Wang, M. PSF-C-Net: A Counterfactual Deep Learning Model for Person Re-Identification Based on Random Cropping Patch and Shuffling Filling. Mathematics 2024, 12, 1957. https://doi.org/10.3390/math12131957