Learning Part-Based Features for Vehicle Re-Identification with Global Context
Abstract
1. Introduction
2. Motivation and Related Works
Vehicle Re-Identification Datasets
- Attribute-based models, which generally use a separate dedicated network for the identification of attributes or local parts.
- Part-based models, which use single or multiple branches to extract local features under different partitioning schemes. These methods may or may not include a global branch that learns global features. Single-branch models are inherently less complex than multi-branch models because they contain fewer convolutional layers (a minimal sketch of a typical partitioning scheme is given after this list).
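To make the partitioning idea concrete, the following is a minimal sketch of PCB-style horizontal-stripe partitioning of a backbone feature map, a scheme commonly used by single-branch part-based models. The tensor shapes and the number of parts are illustrative assumptions and do not describe the specific design used in this work.

```python
# Hedged sketch: PCB-style horizontal-stripe part pooling.
# The number of parts (6) and feature-map shape are assumed for illustration.
import torch
import torch.nn as nn


class PartPooling(nn.Module):
    def __init__(self, num_parts: int = 6):
        super().__init__()
        self.num_parts = num_parts

    def forward(self, feat_map: torch.Tensor):
        # feat_map: (batch, channels, height, width) from a CNN backbone.
        # Split the height dimension into equal horizontal stripes and
        # average-pool each stripe into one local descriptor per part.
        stripes = feat_map.chunk(self.num_parts, dim=2)
        return [s.mean(dim=(2, 3)) for s in stripes]  # each: (batch, channels)


# Example: a ResNet-style feature map of size (8, 2048, 24, 12)
parts = PartPooling(6)(torch.randn(8, 2048, 24, 12))
print(len(parts), parts[0].shape)  # 6 torch.Size([8, 2048])
```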
3. Proposed Unified Framework for Global and Part-Based Local Features
3.1. Data Pre-Processing
- The images are first resized to a fixed size (384,192). The resizing is performed to match the input size expected by the pretrained backbone model.
- Random Horizontal Flip—This is an image data augmentation technique in which an input image is flipped horizontally with a given probability.
- Random Erasing—Using this technique, a region of the image is randomly selected and its pixels are erased. This encourages robustness during training.
- Finally, the images are tensorized and normalized to the same mean (0.485, 0.456, 0.406) and standard deviation (0.229, 0.224, 0.225) as standard ImageNet [7] images, since the backbone architecture is pretrained on ImageNet and expects inputs normalized with these statistics. A minimal sketch of this pipeline is given after this list.
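As an illustration, the pre-processing steps listed above can be composed with torchvision as follows. This is a minimal sketch: the flip and erasing probabilities shown here are assumed values, not settings reported in this work.

```python
# Hedged sketch of the pre-processing pipeline using torchvision.
# Only the resize target, the use of flip/erasing, and the ImageNet
# normalization statistics come from the text; other parameters are assumed.
import torchvision.transforms as T

train_transform = T.Compose([
    T.Resize((384, 192)),                    # fixed input size expected by the backbone
    T.RandomHorizontalFlip(p=0.5),           # flip with a given probability (0.5 assumed)
    T.ToTensor(),                            # tensorize to [0, 1]
    T.Normalize(mean=[0.485, 0.456, 0.406],  # ImageNet statistics
                std=[0.229, 0.224, 0.225]),
    T.RandomErasing(p=0.5),                  # erase a random region (parameters assumed)
])
```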
3.2. Backbone Network
3.3. Proposed GLSIPNet Model
3.4. Global–Local Similarity Score Generator
4. Experimental Results and Analysis
- Our proposed model clearly outperforms the baseline, thereby establishing the significance of global–local similarity-induced part learning in addressing the VReID problem using local part-based features. Our method achieves improvements of 2.5% (mAP) on the VeRi dataset and 2.4%, 3.3%, and 2.8% (mAP) on the small, medium, and large variants of the VehicleId dataset, respectively (a brief sketch of how these retrieval metrics are computed is given after this list).
- Our proposed model also performs better than the other comparison models in terms of mAP on the VeRi dataset and Rank-5 accuracy on the VehicleId dataset.
- The models proposed in [51,52], which outperform our model in Rank-1 and Rank-5 accuracy on the VeRi dataset, fail to do so on the VehicleId dataset. Similarly, the models proposed in [35,48] achieve higher Rank-1 accuracy, whereas our method achieves higher Rank-5 accuracy on the VeRi dataset. Considering its low complexity and consistent performance, our approach therefore offers a better trade-off than the higher-complexity attention-based models.
- Our proposed model achieves better results in part-based vehicle re-identification by incorporating a global aspect without adding complexity. This approach proves more effective than directly incorporating a global feature branch, as demonstrated by the comparison with SAN [27] in Table 2 and Table 3.
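For reference, the reported metrics follow the standard re-identification retrieval protocol: mean average precision (mAP) over all queries and cumulative matching characteristic (CMC) Rank-k accuracy. The sketch below is a simplified illustration of that protocol, not code from this work; it omits the same-camera gallery filtering commonly applied on VeRi.

```python
# Hedged sketch of the standard re-ID retrieval metrics (mAP and CMC Rank-k),
# computed from a query-vs-gallery distance matrix.
import numpy as np


def reid_metrics(dist, q_ids, g_ids, max_rank=10):
    """dist: (num_query, num_gallery) distances; q_ids, g_ids: identity labels."""
    aps = []
    cmc_hits = np.zeros(max_rank)
    for i in range(dist.shape[0]):
        order = np.argsort(dist[i])                       # gallery sorted by increasing distance
        matches = (g_ids[order] == q_ids[i]).astype(float)
        if matches.sum() == 0:
            continue                                      # no true match in the gallery
        # CMC: Rank-k counts a hit if a true match appears within the top k results
        first_hit = int(np.argmax(matches))
        if first_hit < max_rank:
            cmc_hits[first_hit:] += 1
        # Average precision for this query
        cum_hits = np.cumsum(matches)
        precision = cum_hits / (np.arange(len(matches)) + 1)
        aps.append((precision * matches).sum() / matches.sum())
    n = len(aps)
    return np.mean(aps), cmc_hits / n                     # mAP, CMC (Rank-1 ... Rank-max_rank)
```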
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Babu, R.; Rajitha, B. Accident Detection through CCTV Surveillance. In Proceedings of the 2022 IEEE Students Conference on Engineering and Systems (SCES), Prayagraj, India, 1–3 July 2022; pp. 1–6. [Google Scholar] [CrossRef]
- Wang, H.; Hou, J.; Chen, N. A Survey of Vehicle Re-Identification Based on Deep Learning. IEEE Access 2019, 7, 172443–172469. [Google Scholar] [CrossRef]
- Yan, L.; Li, K.; Gao, R.; Wang, C.; Xiong, N. An Intelligent Weighted Object Detector for Feature Extraction to Enrich Global Image Information. Appl. Sci. 2022, 12, 7825. [Google Scholar] [CrossRef]
- Huan, W.; Shcherbakova, G.; Sachenko, A.; Yan, L.; Volkova, N.; Rusyn, B.; Molga, A. Haar Wavelet-Based Classification Method for Visual Information Processing Systems. Appl. Sci. 2023, 13, 5515. [Google Scholar] [CrossRef]
- Liu, X.; Liu, W.; Ma, H.; Fu, H. Large-scale vehicle re-identification in urban surveillance videos. In Proceedings of the IEEE International Conference on Multimedia and Expo (ICME), Seattle, WA, USA, 11–15 July 2016. [Google Scholar]
- Liu, X.; Liu, W.; Mei, T.; Ma, H. PROVID: Progressive and Multimodal Vehicle Reidentification for Large-Scale Urban Surveillance. IEEE Trans. Multimed. 2018, 20, 645–658. [Google Scholar] [CrossRef]
- Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet Classification with Deep Convolutional Neural Networks. In Proceedings of the Advances in Neural Information Processing Systems; Pereira, F., Burges, C., Bottou, L., Weinberger, K., Eds.; Curran Associates, Inc.: Red Hook, NY, USA, 2012; Volume 25. [Google Scholar]
- Liu, H.; Tian, Y.; Wang, Y.; Pang, L.; Huang, T. Deep Relative Distance Learning: Tell the Difference between Similar Vehicles. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 2167–2175. [Google Scholar] [CrossRef]
- Liu, S.; Deng, W. Very deep convolutional neural network based image classification using small training sample size. In Proceedings of the 2015 3rd IAPR Asian Conference on Pattern Recognition (ACPR), Kuala Lumpur, Malaysia, 3–6 November 2015; pp. 730–734. [Google Scholar] [CrossRef]
- Szegedy, C.; Liu, W.; Jia, Y.; Sermanet, P.; Reed, S.; Anguelov, D.; Erhan, D.; Vanhoucke, V.; Rabinovich, A. Going deeper with convolutions. In Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Los Alamitos, CA, USA, 7–12 June 2015; pp. 1–9. [Google Scholar] [CrossRef]
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar] [CrossRef]
- Chen, H.; Lagadec, B.; Bremond, F. Partition and Reunion: A Two-Branch Neural Network for Vehicle Re-identification. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, Long Beach, CA, USA, 15–20 June 2019. [Google Scholar]
- Howard, A.G.; Zhu, M.; Chen, B.; Kalenichenko, D.; Wang, W.; Weyand, T.; Andreetto, M.; Adam, H. MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications. arXiv 2017, arXiv:1704.04861. [Google Scholar]
- Sun, Y.; Zheng, L.; Li, Y.; Yang, Y.; Tian, Q.; Wang, S. Learning Part-based Convolutional Features for Person Re-Identification. IEEE Trans. Pattern Anal. Mach. Intell. 2021, 43, 902–917. [Google Scholar] [CrossRef] [PubMed]
- Chen, X.; Yu, H.; Hu, C.; Wang, H. Multi-Branch Feature Learning Network via Global-Local Self-Distillation for Vehicle Re-Identification. IEEE Trans. Veh. Technol. 2024, 73, 12415–12425. [Google Scholar] [CrossRef]
- Pang, X.; Zheng, Y.; Nie, X.; Yin, Y.; Li, X. Multi-axis interactive multidimensional attention network for vehicle re-identification. Image Vis. Comput. 2024, 144, 104972. [Google Scholar] [CrossRef]
- Kanacı, A.; Zhu, X.; Gong, S. Vehicle Re-identification in Context. In Proceedings of the Pattern Recognition: 40th German Conference, GCPR 2018, Stuttgart, Germany, 9–12 October 2018; Brox, T., Bruhn, A., Fritz, M., Eds.; Springer International Publishing: Cham, Switzerland, 2019; pp. 377–390. [Google Scholar]
- Lou, Y.; Bai, Y.; Liu, J.; Wang, S.; Duan, L. VERI-Wild: A Large Dataset and a New Method for Vehicle Re-Identification in the Wild. In Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 15–20 June 2019; pp. 3230–3238. [Google Scholar] [CrossRef]
- Tang, Z.; Naphade, M.; Liu, M.; Yang, X.; Birchfield, S.; Wang, S.; Kumar, R.; Anastasiu, D.; Hwang, J. CityFlow: A City-Scale Benchmark for Multi-Target Multi-Camera Vehicle Tracking and Re-Identification. In Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Los Alamitos, CA, USA, 15–20 June 2019; pp. 8789–8798. [Google Scholar] [CrossRef]
- Yan, K.; Tian, Y.; Wang, Y.; Zeng, W.; Huang, T. Exploiting Multi-grain Ranking Constraints for Precisely Searching Visually-similar Vehicles. In Proceedings of the 2017 IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017; pp. 562–570. [Google Scholar] [CrossRef]
- Zhou, Y.; Liu, L.; Shao, L. Vehicle Re-Identification by Deep Hidden Multi-View Inference. IEEE Trans. Image Process. 2018, 27, 3275–3287. [Google Scholar] [CrossRef]
- Wang, H.; Peng, J.; Chen, D.; Jiang, G.; Zhao, T.; Fu, X. Attribute-Guided Feature Learning Network for Vehicle Reidentification. IEEE Multimed. 2020, 27, 112–121. [Google Scholar] [CrossRef]
- Zhao, Y.; Shen, C.; Wang, H.; Chen, S. Structural Analysis of Attributes for Vehicle Re-Identification and Retrieval. IEEE Trans. Intell. Transp. Syst. 2020, 21, 723–734. [Google Scholar] [CrossRef]
- Li, Z.; Shi, Y.; Ling, H.; Chen, J.; Liu, B.; Wang, R.; Zhao, C. Viewpoint Disentangling and Generation for Unsupervised Object Re-ID. ACM Trans. Multimed. Comput. Commun. Appl. 2024, 20. [Google Scholar] [CrossRef]
- He, B.; Li, J.; Zhao, Y.; Tian, Y. Part-Regularized Near-Duplicate Vehicle Re-Identification. In Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 15–20 June 2019; pp. 3992–4000. [Google Scholar] [CrossRef]
- Yan, B.; Liu, Y.; Yan, W. A Novel Fusion Perception Algorithm of Tree Branch/Trunk and Apple for Harvesting Robot Based on Improved YOLOv8s. Agronomy 2024, 14, 1895. [Google Scholar] [CrossRef]
- Qian, J.; Jiang, W.; Luo, H.; Yu, H. Stripe-based and attribute-aware network: A two-branch deep model for vehicle re-identification. Meas. Sci. Technol. 2020, 31, 095401. [Google Scholar] [CrossRef]
- Wang, Z.; Tang, L.; Liu, X.; Yao, Z.; Yi, S.; Shao, J.; Yan, J.; Wang, S.; Li, H.; Wang, X. Orientation Invariant Feature Embedding and Spatial Temporal Regularization for Vehicle Re-identification. In Proceedings of the 2017 IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017; pp. 379–387. [Google Scholar] [CrossRef]
- Zhu, J.; Zeng, H.; Lei, Z.; Liao, S.; Zheng, L.; Cai, C. A Shortly and Densely Connected Convolutional Neural Network for Vehicle Re-identification. In Proceedings of the 2018 24th International Conference on Pattern Recognition (ICPR), Beijing, China, 20–24 August 2018; pp. 3285–3290. [Google Scholar] [CrossRef]
- Zhu, J.; Du, Y.; Hu, Y.; Zheng, L.; Cai, C. VRSDNet: Vehicle re-identification with a shortly and densely connected convolutional neural network. Multimed. Tools Appl. 2019, 78, 29043–29057. [Google Scholar] [CrossRef]
- Zhu, J.; Huang, J.; Zeng, H.; Ye, X.; Li, B.; Lei, Z.; Zheng, L. Object Reidentification via Joint Quadruple Decorrelation Directional Deep Networks in Smart Transportation. IEEE Internet Things J. 2020, 7, 2944–2954. [Google Scholar] [CrossRef]
- Peng, J.; Wang, H.; Zhao, T.; Fu, X. Learning multi-region features for vehicle re-identification with context-based ranking method. Neurocomputing 2019, 359, 427–437. [Google Scholar] [CrossRef]
- Cho, Y.; Kim, W.J.; Hong, S.; Yoon, S.E. Part-based Pseudo Label Refinement for Unsupervised Person Re-identification. In Proceedings of the 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), New Orleans, LA, USA, 18–24 June 2022; pp. 7298–7308. [Google Scholar] [CrossRef]
- Wang, H.; Peng, J.; Jiang, G.; Xu, F.; Fu, X. Discriminative feature and dictionary learning with part-aware model for vehicle re-identification. Neurocomputing 2021, 438, 55–62. [Google Scholar] [CrossRef]
- Chen, X.; Sui, H.; Fang, J.; Feng, W.; Zhou, M. Vehicle Re-Identification Using Distance-Based Global and Partial Multi-Regional Feature Learning. IEEE Trans. Intell. Transp. Syst. 2021, 22, 1276–1286. [Google Scholar] [CrossRef]
- Sun, W.; Dai, G.; Zhang, X.; He, X.; Chen, X. TBE-Net: A Three-Branch Embedding Network with Part-Aware Ability and Feature Complementary Learning for Vehicle Re-Identification. IEEE Trans. Intell. Transp. Syst. 2022, 23, 14557–14569. [Google Scholar] [CrossRef]
- Xu, Y.; Jiang, Z.; Men, A.; Pei, J.; Ju, G.; Yang, B. Attentional Part-based Network for Person Re-identification. In Proceedings of the 2019 IEEE Visual Communications and Image Processing (VCIP), Sydney, NSW, Australia, 1–4 December 2019; pp. 1–4. [Google Scholar] [CrossRef]
- Wang, G.; Yuan, Y.; Chen, X.; Li, J.; Zhou, X. Learning Discriminative Features with Multiple Granularities for Person Re-Identification. In Proceedings of the 26th ACM International Conference on Multimedia, Seoul, Republic of Korea, 22–26 October 2018; Association for Computing Machinery: New York, NY, USA, 2018; pp. 274–282. [Google Scholar] [CrossRef]
- Qian, J.; Pan, M.; Tong, W.; Law, R.; Wu, E.Q. URRNet: A Unified Relational Reasoning Network for Vehicle Re-Identification. IEEE Trans. Veh. Technol. 2023, 72, 11156–11168. [Google Scholar] [CrossRef]
- Li, J.; Gong, X. Unleashing the Potential of Pre-Trained Diffusion Models for Generalizable Person Re-Identification. Sensors 2025, 25, 552. [Google Scholar] [CrossRef]
- Lv, K.; Han, S.; Lin, Y. Identity-Guided Spatial Attention for Vehicle Re-Identification. Sensors 2023, 23, 5152. [Google Scholar] [CrossRef]
- Bai, L.; Rong, L. Vehicle re-identification with multiple discriminative features based on non-local-attention block. Sci. Rep. 2024, 14, 31386. [Google Scholar] [CrossRef] [PubMed]
- Gong, R.; Zhang, X.; Pan, J.; Guo, J.; Nie, X. Vehicle Reidentification Based on Convolution and Vision Transformer Feature Fusion. IEEE Multimed. 2024, 31, 61–68. [Google Scholar] [CrossRef]
- Wang, Y.; Zhang, P.; Wang, D.; Lu, H. Other tokens matter: Exploring global and local features of Vision Transformers for Object Re-Identification. Comput. Vis. Image Underst. 2024, 244, 104030. [Google Scholar] [CrossRef]
- Huang, F.; Lv, X.; Zhang, L. Coarse-to-fine sparse self-attention for vehicle re-identification. Knowl.-Based Syst. 2023, 270, 110526. [Google Scholar] [CrossRef]
- Zhong, Z.; Zheng, L.; Cao, D.; Li, S. Re-ranking Person Re-identification with k-Reciprocal Encoding. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 3652–3661. [Google Scholar] [CrossRef]
- Xu, Y.; Jiang, N.; Zhang, L.; Zhou, Z.; Wu, W. Multi-scale Vehicle Re-identification Using Self-adapting Label Smoothing Regularization. In Proceedings of the ICASSP 2019—2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Brighton, UK, 12–17 May 2019; pp. 2117–2121. [Google Scholar] [CrossRef]
- Chu, R.; Sun, Y.; Li, Y.; Liu, Z.; Zhang, C.; Wei, Y. Vehicle Re-Identification with Viewpoint-Aware Metric Learning. In Proceedings of the 2019 IEEE/CVF International Conference on Computer Vision (ICCV), Seoul, Republic of Korea, 27 October–2 November 2019; pp. 8281–8290. [Google Scholar] [CrossRef]
- Kuma, R.; Weill, E.; Aghdasi, F.; Sriram, P. Vehicle Re-identification: An Efficient Baseline Using Triplet Embedding. In Proceedings of the 2019 International Joint Conference on Neural Networks (IJCNN), Budapest, Hungary, 14–19 July 2019; pp. 1–9. [Google Scholar] [CrossRef]
- Lin, W.; Li, Y.; Yang, X.; Peng, P.; Xing, J. Multi-View Learning for Vehicle Re-Identification. In Proceedings of the 2019 IEEE International Conference on Multimedia and Expo (ICME), Shanghai, China, 8–12 July 2019; pp. 832–837. [Google Scholar] [CrossRef]
- Quispe, R.; Lan, C.; Zeng, W.; Pedrini, H. AttributeNet: Attribute enhanced vehicle re-identification. Neurocomputing 2021, 465, 84–92. [Google Scholar] [CrossRef]
- Zhang, X.; Zhang, R.; Cao, J.; Gong, D.; You, M.; Shen, C. Part-Guided Attention Learning for Vehicle Instance Retrieval. IEEE Trans. Intell. Transp. Syst. 2022, 23, 3048–3060. [Google Scholar] [CrossRef]
- Li, J.; Yu, C.; Shi, J.; Zhang, C.; Ke, T. Vehicle Re-identification method based on Swin-Transformer network. Array 2022, 16, 100255. [Google Scholar] [CrossRef]
- He, S.; Luo, H.; Wang, P.; Wang, F.; Li, H.; Jiang, W. TransReID: Transformer-Based Object Re-Identification. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), Montreal, QC, Canada, 11–17 October 2021; pp. 15013–15022. [Google Scholar]
- Li, B.; Liu, P.; Fu, L.; Li, J.; Fang, J.; Xu, Z.; Yu, H. VehicleGAN: Pair-flexible Pose Guided Image Synthesis for Vehicle Re-identification. In Proceedings of the 2024 IEEE Intelligent Vehicles Symposium (IV), Jeju Island, Republic of Korea, 2–5 June 2024; pp. 447–453. [Google Scholar] [CrossRef]
- Sun, K.; Pang, X.; Zheng, M.; Nie, X.; Li, X.; Zhou, H.; Yin, Y. Heterogeneous context interaction network for vehicle re-identification. Neural Netw. 2024, 169, 293–306. [Google Scholar] [CrossRef] [PubMed]
| Dataset | Model | mAP (%) | Rank-1 (%) | Rank-5 (%) | Rank-10 (%) |
|---|---|---|---|---|---|
| VeRi Dataset | GLSIPNet | 76.76 | 94.63 | 97.55 | 98.92 |
|  | Baseline | 74.17 | 94.04 | 97.37 | 98.56 |
|  | GLSIPNet-RR | 80.99 | 95.70 | 97.55 | 98.68 |
|  | Baseline-RR | 78.75 | 94.69 | 96.36 | 97.67 |
| VehicleId Dataset (Small) | GLSIPNet | 88.46 | 85.78 | 98.34 | 99.24 |
|  | Baseline | 86.01 | 82.90 | 97.15 | 98.68 |
|  | GLSIPNet-RR | 88.84 | 86.26 | 98.26 | 99.29 |
|  | Baseline-RR | 86.54 | 83.48 | 97.36 | 98.71 |
| VehicleId Dataset (Medium) | GLSIPNet | 85.08 | 82.21 | 95.73 | 98.00 |
|  | Baseline | 81.78 | 78.55 | 93.78 | 96.92 |
|  | GLSIPNet-RR | 85.08 | 82.15 | 95.92 | 98.04 |
|  | Baseline-RR | 82.04 | 78.78 | 94.04 | 97.07 |
| VehicleId Dataset (Large) | GLSIPNet | 82.66 | 79.63 | 93.88 | 96.98 |
|  | Baseline | 79.87 | 76.81 | 91.16 | 95.52 |
|  | GLSIPNet-RR | 83.01 | 80.12 | 93.80 | 97.02 |
|  | Baseline-RR | 80.16 | 77.13 | 91.22 | 95.58 |
| Sl. No. | Method | mAP (%) | R-1 (%) | R-5 (%) |
|---|---|---|---|---|
| 1 | SLSR [47] | 65.13 | 91.24 | NR |
| 2 | VANet [48] | 66.34 | 89.78 | 95.99 |
| 3 | Batch Sample [49] | 67.55 | 90.23 | 96.42 |
| 4 | Part-regularized near-duplicate [25] | 74.30 | 94.30 | 98.70 |
| 5 | MRL + Softmax Loss [50] | 78.50 | 94.30 | 98.70 |
| 6 | MRM [23] | 68.55 | 91.77 | 95.82 |
| 7 | SAN [27] | 72.5 | 93.3 | 97.1 |
| 8 | TCPM [34] | 74.59 | 93.98 | 97.13 |
| 9 | DGPM [35] | 79.39 | 96.19 | 98.09 |
| 10 | AttributeNet [51] | 80.1 | 97.1 | 98.6 |
| 11 | PGAN [52] | 79.30 | 96.5 | 98.30 |
| 12 | Swin Transformer [53] | 78.6 | 97.3 | NR |
| 13 | TransReID [54] | 78.2 | 96.5 | NR |
| 14 | VehicleGAN [55] | 74.2 | 93.6 | 97.3 |
| 15 | URRNet [39] | 72.2 | 93.1 | 97.1 |
| 16 | MIMANet [16] | 79.89 | 94.99 | 98.81 |
| 17 | CFSA [45] | 79.89 | 94.99 | 98.81 |
| 18 | GLSIPNet + RR (Ours) | 80.99 | 95.7 | 97.55 |
| Sl. No. | Method | Rank-1 Small (%) | Rank-1 Medium (%) | Rank-1 Large (%) | Rank-5 Small (%) | Rank-5 Medium (%) | Rank-5 Large (%) |
|---|---|---|---|---|---|---|---|
| 1 | SLSR [47] | 75.10 | 71.80 | 68.70 | 89.70 | 86.10 | 83.10 |
| 2 | VANet [48] | 88.12 | 83.17 | 80.35 | 97.29 | 95.14 | 92.97 |
| 3 | Batch Sample [49] | 78.80 | 73.41 | 69.33 | 96.17 | 92.57 | 89.45 |
| 4 | Part-regularized near-duplicate [25] | 78.40 | 75.00 | 74.20 | 92.30 | 88.30 | 86.40 |
| 5 | MRL + Softmax Loss [50] | 84.80 | 80.90 | 78.40 | 96.9 | 94.1 | 92.1 |
| 6 | MRM [23] | 76.64 | 74.20 | 70.86 | 92.34 | 88.54 | 84.82 |
| 7 | SAN [27] | 79.7 | 78.4 | 75.6 | 94.3 | 91.3 | 88.3 |
| 8 | TCPM [34] | 81.96 | 78.82 | 74.58 | 96.38 | 94.29 | 90.71 |
| 9 | DGPM [35] | 88.56 | 87.63 | 86.31 | 94.69 | 94.29 | 93.34 |
| 10 | AttributeNet [51] | 86.0 | 81.9 | 79.6 | 97.4 | 95.1 | 92.7 |
| 11 | PGAN [52] | NR | NR | 77.8 | NR | NR | 92.1 |
| 12 | TransReID [54] | 83.6 | NR | NR | 97.1 | NR | NR |
| 13 | VehicleGAN [55] | 83.5 | 78.2 | 75.7 | 96.5 | 93.2 | 90.6 |
| 14 | URRNet [39] | 76.5 | 73.7 | 68.2 | 96.5 | 92.0 | 89.6 |
| 15 | MDFENet [42] | 83.66 | 80.78 | 77.88 | NR | NR | NR |
| 16 | MIMANet [16] | 83.28 | 80.14 | 77.72 | 96.31 | 93.71 | 91.29 |
| 17 | HCI-Net [56] | 83.8 | 79.4 | 76.4 | 96.5 | 92.7 | 91.2 |
| 18 | GLSIPNet + RR (Ours) | 86.26 | 82.15 | 80.12 | 98.26 | 95.92 | 93.80 |