Automatic Vehicle Recognition: A Practical Approach with VMMR and VCR
Abstract
1. Introduction
- Top-1 accuracy;
- Top-2 accuracy;
- Per-class precision, recall, and F1-score;
- Cohen’s κ coefficient.
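The metrics above can be sketched in a few lines of NumPy; the probabilities and labels below are toy values for illustration only, not results from the paper.

```python
import numpy as np

def top_k_accuracy(probs, labels, k=1):
    """Fraction of samples whose true label is among the k highest-scoring classes."""
    topk = np.argsort(probs, axis=1)[:, -k:]  # indices of the k largest scores per row
    return float(np.mean([labels[i] in topk[i] for i in range(len(labels))]))

def cohens_kappa(y_true, y_pred, n_classes):
    """Agreement between predictions and labels, corrected for chance agreement."""
    cm = np.zeros((n_classes, n_classes))
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    n = cm.sum()
    po = np.trace(cm) / n                         # observed agreement
    pe = (cm.sum(0) * cm.sum(1)).sum() / n ** 2   # agreement expected by chance
    return (po - pe) / (1 - pe)

# Toy check: 3 samples, 3 classes; sample 1 is wrong at top-1 but covered at top-2.
probs = np.array([[0.7, 0.2, 0.1],
                  [0.1, 0.5, 0.4],
                  [0.3, 0.4, 0.3]])
labels = np.array([0, 2, 1])
print(top_k_accuracy(probs, labels, k=1))  # 2 of 3 correct
print(top_k_accuracy(probs, labels, k=2))  # all 3 covered
```

Per-class precision, recall, and F1 follow the usual definitions from the same confusion matrix.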
- YOLOv8-based vehicle detection;
- Segmentation-guided VCR;
- HOG feature extraction for shape descriptors;
- SVM-based vehicle make-model classification.
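The HOG-plus-SVM branch can be illustrated with a simplified descriptor. This NumPy sketch omits block normalization and other refinements of the full HOG of Dalal and Triggs [7], and the cell size and bin count are illustrative defaults rather than the paper's settings; the resulting vectors would then be fed to an SVM classifier (e.g., scikit-learn's LinearSVC).

```python
import numpy as np

def hog_descriptor(img, cell=8, bins=9):
    """Simplified HOG: per-cell histograms of gradient orientation, weighted by magnitude."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180  # unsigned orientation in [0, 180)
    h, w = img.shape
    feats = []
    for i in range(0, h - cell + 1, cell):
        for j in range(0, w - cell + 1, cell):
            a = ang[i:i + cell, j:j + cell].ravel()
            m = mag[i:i + cell, j:j + cell].ravel()
            hist, _ = np.histogram(a, bins=bins, range=(0, 180), weights=m)
            feats.append(hist / (np.linalg.norm(hist) + 1e-6))  # per-cell L2 normalization
    return np.concatenate(feats)

# A 64x64 grayscale crop yields 8x8 = 64 cells of 9 bins each.
desc = hog_descriptor(np.random.rand(64, 64))
print(desc.shape)  # (576,)
```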
2. Related Work and Dataset
2.1. Related Work
2.2. Major Vehicle Recognition Datasets
3. Proposed Solution
- COARSE—creates a partition with 9 components;
- FINE—creates a partition with 59 components.
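The two granularities are related by a fine-to-coarse lookup: every FINE part id belongs to exactly one COARSE group, so a coarse mask can be derived from a fine one by table indexing. The actual assignment is defined by the dataset annotations; the grouping below is a hypothetical placeholder used only to show the mechanics.

```python
import numpy as np

# Hypothetical lookup: each FINE part id (0..58) maps to one COARSE group (0..8).
# The real grouping comes from the dataset's annotation scheme, not from this array.
fine_to_coarse = np.repeat(np.arange(9), [7, 7, 7, 7, 7, 7, 7, 5, 5])  # 59 entries

def coarsen(fine_mask):
    """Collapse a pixel-wise FINE part mask into its COARSE counterpart."""
    return fine_to_coarse[fine_mask]

fine_mask = np.random.randint(0, 59, size=(4, 4))
print(coarsen(fine_mask))  # 4x4 mask with values in 0..8
```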
4. Experimental Workflow
4.1. Datasets
- Vehicles were recorded while in motion, with speeds ranging from 15 km/h to 70 km/h, reflecting actual traffic conditions;
- Data was collected under different weather conditions, including sunny, rainy, and foggy;
- Images were captured during various periods of the day with the exception of nighttime;
- The camera angle of view was approximately 40 degrees on the horizontal front side and up to 60 degrees on the vertical front side.
4.2. Experimental Setup
- EfficientNetV2, known for high accuracy with optimized parameter efficiency [8];
- MobileNetV3, designed for lightweight, fast inference on edge devices [9];
- ResNet50, a widely adopted residual network capable of deep feature extraction [10];
- ViT-B16, leveraging attention mechanisms to capture long-range dependencies in visual data [11];
- ConvNeXt, a modernized convolutional network inspired by transformer architectures, combining convolutional efficiency with design elements from Vision Transformers [33];
- Resized Crop—we randomly crop a region of the image and then resize it to 224 × 224 pixels, keeping between 85% and 100% of the original input;
- Horizontal Flip—with a probability of 80% the image is flipped horizontally;
- Rotation—the image is randomly rotated by up to ±15 degrees;
- Affine Transformation—up to 10% translation, between 90% and 110% scaling, and up to ±15 degrees shear;
- Gaussian Blur—with a 30% probability a slight blur is applied using a kernel size of 3 × 3 or 5 × 5;
- Brightness—with a probability of 70%, the image brightness is shifted by up to ±20 units;
- Contrast—with a probability of 70%, the image contrast is varied between 0.8 and 1.2.
4.3. Results
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Neagu, A.-C.; Ciubotaru, B.-I. Use cases of artificial intelligence in military systems. In Proceedings of the 20th International Scientific Conference “Strategies XXI Technologies—Military Applications, Simulation And Resources”, Bucharest, Romania, 28 March 2024; “Carol I” National Defence University Publishing House: Bucharest, Romania, 2024; Volume 20, pp. 151–155. [Google Scholar]
- Sadiq, S.; Sultan, K.; Sheraz, M.; Chuah, T.C.; Hashmi, M.U. Towards Efficient Vehicle Recognition: A Unified System for VMMR, ANPR, and Color Classification. Comput. Mater. Contin. 2025, 85, 3945–3963. [Google Scholar] [CrossRef]
- Munoz, A.; Thomas, N.; Vapsi, A.; Borrajo, D. Veri-Car: Towards open-world vehicle information retrieval. Neural Comput. Appl. 2025, 37, 15183–15221. [Google Scholar] [CrossRef]
- Kim, J. Deep learning-based vehicle type and color classification to support safe autonomous driving. Appl. Sci. 2024, 14, 1600. [Google Scholar] [CrossRef]
- Varghese, R.; Sambath, M. Yolov8: A novel object detection algorithm with enhanced performance and robustness. In Proceedings of the 2024 International Conference on Advances in Data Engineering and Intelligent Computing Systems (ADICS), Chennai, India, 18–19 April 2024; IEEE: Piscataway, NJ, USA, 2024; pp. 1–6. [Google Scholar] [CrossRef]
- Chen, L.C.; Papandreou, G.; Schroff, F.; Adam, H. Rethinking atrous convolution for semantic image segmentation. arXiv 2017, arXiv:1706.05587. [Google Scholar] [CrossRef]
- Dalal, N.; Triggs, B. Histograms of oriented gradients for human detection. In Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’05), San Diego, CA, USA, 20–25 June 2005; IEEE: Piscataway, NJ, USA, 2005; Volume 1, pp. 886–893. [Google Scholar] [CrossRef]
- Tan, M.; Le, Q.V. EfficientNetV2: Smaller models and faster training. In Proceedings of the International Conference on Machine Learning (PMLR), Virtual Event, 18–24 July 2021; Volume 139, pp. 10096–10106. [Google Scholar]
- Howard, A.; Sandler, M.; Chu, G.; Chen, L.-C.; Chen, B.; Tan, M.; Wang, W.; Zhu, Y.; Pang, R.; Vasudevan, V.; et al. Searching for MobileNetV3. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), Seoul, Republic of Korea, 27 October–2 November 2019; IEEE: Piscataway, NJ, USA, 2020; pp. 1314–1324. [Google Scholar] [CrossRef]
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; IEEE: Piscataway, NJ, USA, 2016; pp. 770–778. [Google Scholar] [CrossRef]
- Dosovitskiy, A.; Beyer, L.; Kolesnikov, A.; Weissenborn, D.; Zhai, X.; Unterthiner, T.; Dehghani, M.; Minderer, M.; Heigold, G.; Gelly, S.; et al. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv 2021, arXiv:2010.11929. [Google Scholar] [CrossRef]
- Gayen, S.; Maity, S.; Singh, P.K.; Geem, Z.W.; Sarkar, R. Two decades of vehicle make and model recognition–survey, challenges and future directions. J. King Saud Univ. Comput. Inf. Sci. 2024, 36, 101885. [Google Scholar] [CrossRef]
- Tafazzoli, F.; Frigui, H.; Nishiyama, K. A large and diverse dataset for improved vehicle make and model recognition. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Honolulu, HI, USA, 21–26 July 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 1–8. [Google Scholar] [CrossRef]
- Buzzelli, M.; Segantin, L. Revisiting the compcars dataset for hierarchical car classification: New annotations, experiments, and results. Sensors 2021, 21, 596. [Google Scholar] [CrossRef]
- Lyu, Y.; Schiopu, I.; Cornelis, B.; Munteanu, A. Framework for vehicle make and model recognition—A new large-scale dataset and an efficient two-branch–two-stage deep learning architecture. Sensors 2022, 22, 8439. [Google Scholar] [CrossRef] [PubMed]
- Komolovaite, D.; Krisciunas, A.; Lagzdinyte-Budnike, I.; Budnikas, A.; Rentelis, D. Vehicle make detection using the transfer learning approach. Elektron. Elektrotechnika 2022, 28, 55–64. [Google Scholar] [CrossRef]
- Zhang, C.; Li, Q.; Liu, C.; Zhang, Y.; Zhao, D.; Ji, C.; Wang, J. A Fine-Grained Car Recognition Method Based on a Lightweight Attention Network and Regularized Fine-Tuning. Electronics 2025, 14, 211. [Google Scholar] [CrossRef]
- Manzoor, M.A.; Morgan, Y.; Bais, A. Real-time vehicle make and model recognition system. Mach. Learn. Knowl. Extr. 2019, 1, 611–629. [Google Scholar] [CrossRef]
- Lv, C.; Kumari, S.; Singh, P.; Wang, H.; Kumar, S.; Liu, W.; Madaan, V.; Agrawal, P. Vehicle Detection and Classification Using an Ensemble of EfficientDet and YOLOv8. PeerJ Comput. Sci. 2024, 10, e2233. [Google Scholar] [CrossRef] [PubMed]
- Hu, M.; Wu, Y.; Fan, J.; Jing, B. Joint semantic intelligent detection of vehicle color under rainy conditions. Mathematics 2022, 10, 3512. [Google Scholar] [CrossRef]
- Stavrothanasopoulos, K.; Gkountakos, K.; Ioannidis, K.; Tsikrika, T.; Vrochidis, S.; Kompatsiaris, I. Vehicle Color Identification Framework using Pixel-level Color Estimation from Segmentation Masks of Car Parts. In Proceedings of the 5th International Conference on Image Processing Applications and Systems (IPAS), Genova, Italy, 5–7 December 2022; IEEE: Piscataway, NJ, USA, 2023; pp. 1–7. [Google Scholar] [CrossRef]
- Lima, G.E.; Laroca, R.; Santos, E.; Nascimento, E.; Menotti, D. Toward enhancing vehicle color recognition in adverse conditions: A dataset and benchmark. In Proceedings of the 37th SIBGRAPI Conference on Graphics, Patterns and Images (SIBGRAPI), Manaus, Brazil, 30 September–3 October 2024; IEEE: Piscataway, NJ, USA, 2024; pp. 1–6. [Google Scholar] [CrossRef]
- Ayub, A.; Kim, H. GAN-based data augmentation with vehicle color changes to train a vehicle detection CNN. Electronics 2024, 13, 1231. [Google Scholar] [CrossRef]
- Semiromizadeh, N.; Manzari, O.N.; Shokouhi, S.B.; Mirzakuchaki, S. Enhancing Vehicle Make and Model Recognition with 3D Attention Modules. In Proceedings of the 14th International Conference on Computer and Knowledge Engineering (ICCKE), Mashhad, Iran, 19–20 November 2024; IEEE: Piscataway, NJ, USA, 2025; pp. 87–92. [Google Scholar] [CrossRef]
- Nafzi, M.; Brauckmann, M.; Glasmachers, T. Vehicle shape and color classification using convolutional neural network. arXiv 2019, arXiv:1905.08612. [Google Scholar] [CrossRef]
- Liu, D. Progressive multi-task anti-noise learning and distilling frameworks for fine-grained vehicle recognition. IEEE Trans. Intell. Transp. Syst. 2024, 25, 10667–10678. [Google Scholar] [CrossRef]
- Wolf, S.; Loran, D.; Beyerer, J. Knowledge-distillation-based label smoothing for fine-grained open-set vehicle recognition. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, WACV Workshop 2024, Waikoloa, HI, USA, 1–6 January 2024; IEEE: Piscataway, NJ, USA, 2024; pp. 330–340. [Google Scholar]
- Sielemann, A.; Wolf, S.; Roschani, M.; Ziehn, J.; Beyerer, J. Synset boulevard: A synthetic image dataset for VMMR. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Yokohama, Japan, 13–17 May 2024; IEEE: Piscataway, NJ, USA, 2024; pp. 9146–9153. [Google Scholar] [CrossRef]
- Berwo, M.A.; Khan, A.; Fang, Y.; Fahim, H.; Javaid, S.; Mahmood, J.; Abideen, Z.U.; Syam, M.S. Deep Learning Techniques for Vehicle Detection and Classification from Images/Videos: A Survey. Sensors 2023, 23, 4832. [Google Scholar] [CrossRef]
- Xie, S.; Girshick, R.; Dollár, P.; Tu, Z.; He, K. Aggregated Residual Transformations for Deep Neural Networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 1492–1500. [Google Scholar] [CrossRef]
- Liu, Z.; Mao, H.; Wu, C.-Y.; Feichtenhofer, C.; Darrell, T.; Xie, S. A ConvNet for the 2020s. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), New Orleans, LA, USA, 18–24 June 2022; IEEE: Piscataway, NJ, USA, 2022; pp. 11966–11976. [Google Scholar] [CrossRef]
- Liu, X.; Liu, W.; Zheng, J.; Yan, C.; Mei, T. Beyond the parts: Learning multi-view cross-part correlation for vehicle re-identification. In Proceedings of the 28th ACM International Conference on Multimedia, Seattle, WA, USA, 12–16 October 2020; Association for Computing Machinery: New York, NY, USA, 2020; pp. 907–915. [Google Scholar] [CrossRef]
- Todi, A.; Narula, N.; Sharma, M.; Gupta, U. ConvNext: A Contemporary Architecture for Convolutional Neural Networks for Image Classification. In Proceedings of the 3rd International Conference on Innovative Sustainable Computational Technologies (CISCT), Dehradun, India, 8–9 September 2023; IEEE: Piscataway, NJ, USA, 2023; pp. 1–6. [Google Scholar] [CrossRef]
- Deng, J.; Dong, W.; Socher, R.; Li, L.J.; Li, K.; Li, F.-F. Imagenet: A large-scale hierarchical image database. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA, 20–25 June 2009; IEEE: Piscataway, NJ, USA, 2009; pp. 248–255. [Google Scholar] [CrossRef]
- Kingma, D.P.; Ba, J. Adam: A method for stochastic optimization. arXiv 2014, arXiv:1412.6980. [Google Scholar] [CrossRef]








| No. | Dataset (Name: Year) | Availability • Images • Classes | Annotation Type • VMMR/VCR Usage | Advantages/Disadvantages | References |
|---|---|---|---|---|---|
| 1 | Stanford Cars (Cars196): 2013 | Public (research mirrors) • 16,185 images • 196 classes (MMY: make, model, year) | Image-level make-model-year; devkit has bboxes. • VMMR usage: fine-grained classification. | + Clean, fine-grained labels; standard FGVC (fine-grained visual classification) benchmark. − Web images; limited surveillance viewpoints. | [2,3,12,17,24,25] |
| 2 | NTOU-MMR: 2014 | Public (research) • 6639 images (2846 train/3793 test) • 29 classes | Image-level make-model (frontal surveillance). • VMMR usage: traditional ML baselines and real-traffic testbed. | + Early real-traffic set; simple splits. − Small; class imbalance; narrow views. | [12,18] |
| 3 | CompCars (Web + Surveillance): 2015 | Non-commercial research license • 136,726 web + 27,618 parts + 50,000 surveillance • 1716 models (163 makes) | Web: bboxes/viewpoints/attributes; Surveillance: front-view labels. • VMMR usage: large-scale fine-grained classification; hierarchy; parts. | + Large, multi-scenario; rich attributes. − License restricted; web and surveillance domain gap; surveillance is front-view only. | [12,14,26,27] |
| 4 | VehicleID (PKU-Peking University): 2016 | Public (registration/request) • 221,763 images • 26,267 IDs (subset with model labels) | Re-ID IDs; partial model labels. • VMMR usage: limited; mainly Re-ID. | + Large multi-camera Re-ID set. − Daytime bias; incomplete make/model labels for VMMR. | [12] |
| 5 | VeRi-776: 2016 | Public (by request; non-commercial) • ≈50 k images • 776 IDs | IDs, bboxes, type/color/brand, plate and spatio-temporal meta. • VCR/VMMR usage: attributes; Re-ID benchmarks. | + Rich attributes and multi-view; widely benchmarked. − Moderate size; single-city domain. | [12] |
| 6 | BoxCars116k: 2017 | Public • 116,286 images • 693 fine-grained classes | 3D bbox/viewpoint ‘unpacking’; fine-grained make-model-submodel-year. • VMMR usage: surveillance FGVC (fine-grained visual classification). | + Many viewpoints; strong for surveillance FGVC. − Long-tail classes; limited non-appearance attrs. | [12] |
| 7 | VMMRdb: 2017 | Public (MIT license) • 291,752 images • 9170 classes (MMY: make, model, year) | Image-level make-model-year (no bbox). • VMMR usage: very large class coverage across decades. | + Huge coverage; fine-grained. − Quality/label noise; heavy imbalance. | [12,13] |
| 8 | VeRi-Wild: 2019 | Public (request; non-commercial) • 416,314 images • 40,671 IDs | Re-ID IDs from 174 cameras over a month; vehicle crops. • VCR/VMMR usage: attributes via Re-ID; domain for surveillance. | + Very large; high variability (view/illumination/occlusion). − No make/model labels; imbalance across IDs. | [12] |
| 9 | MVP (Multi-grained Vehicle Parsing): 2020 | Parsing annotations public (images from VeRi/AICITY19/VeRi-Wild) • 24,000 images • 10 coarse parts/59 fine parts | Pixel-level part masks for part-aware training. • VMMR/VCR usage: parts-aware recognition and color estimation. | + Enables part-aware VMMR/Re-ID; complements appearance cues. − Sourced from other sets; no make/model class labels. | [12] |
| 10 | DVMM: 2022 | Public (by request) • Large-scale VMMR dataset. 281,133 images, 326 models, 23 makes | Framework + dataset for VMMR; two-branch, two-stage deep learning. • VMMR usage: fine-grained make/model. | + Newer large-scale data; efficient two-branch/two-stage baseline. − Access/logistics may be by request; region bias possible. | [15] |
| 11 | Synset Boulevard: 2024 | Public (synthetically generated images) • 500,000 images • 800 vehicle classes | Synthetic 3D renders with segmentation, pose, color metadata. • VMMR/VCR usage: pretraining; domain randomization; rare-case coverage. | + Perfect labels; controllable domains. − Synthetic-to-real gap; limited realism in adverse conditions. | [28] |
| 12 | GLPD (Global License Plate Dataset): 2024 | Public (request) • 1.2 M images • Global fleet coverage | ALPR-centric with vehicle attributes (bboxes, plate text, make-model-color). • VMMR/VCR usage: unified AVR (ANPR + VMMR + VCR) and cross-attribute checks. | + Massive, multi-region coverage; rich labels. − Heavily ALPR-focused; quality varies; not solely VMMR/VCR. | [2] |
| 13 | UFPR-VCR: 2024 | Academic license (request; free for non-commercial research) • 10,039 images • 11 colors (9502 unique vehicles) | Color labels; frontal and rear; includes night/occlusion. • VCR usage: color classification under adverse/night conditions. | + Hard VCR benchmark (night/adverse) with validated color labels. − Color-only task; sourced from Brazilian ALPR sets. | [22] |
| 14 | Car-1000: 2025 | Public (research license) • 240,000 images • 1000 fine-grained classes | Image-level make-model–year; balanced web and traffic imagery. • VMMR/VCR usage: balanced fine-grained VMMR; optional color benchmarking. | + Balanced multi-region data; supports multitask VMMR/VCR. − Mixed lighting quality; limited rare-model coverage. | [12,17,24] |
| No. | Make | Model | Generation | Train | Test |
|---|---|---|---|---|---|
| 1 | BMW | 5 Series | 2009–2013 | 32 | 9 |
| 2 | BMW | X3 | 2010–2013 | 36 | 10 |
| 3 | Dacia | Logan, MCV, Sandero | 2016–2020 | 184 | 46 |
| 4 | Dacia | Duster | 2013–2021 | 71 | 28 |
| 5 | Dacia | Duster | 2024 | 50 | 13 |
| 6 | Dacia | Logan, MCV | 2004–2012 | 107 | 27 |
| 7 | Dacia | Logan, Sandero | 2020–2024 | 125 | 32 |
| 8 | Dacia | Spring | 2021 | 33 | 9 |
| 9 | Ford | Focus | 2004–2008 | 39 | 10 |
| 10 | Renault | Megane | 2003–2009 | 36 | 10 |
| 11 | Renault | Megane | 2016–2020 | 39 | 10 |
| 12 | Renault | Clio | 2013–2019 | 37 | 10 |
| 13 | Skoda | Octavia | 2004–2008 | 24 | 7 |
| 14 | Skoda | Octavia | 2019–2024 | 26 | 7 |
| 15 | Toyota | Auris | 2013–2018 | 12 | 4 |
| 16 | Volkswagen | Golf | 2000–2003 | 126 | 32 |
| 17 | Volkswagen | Golf | 2008–2012 | 58 | 15 |
| 18 | Volkswagen | Passat | 2000–2005 | 68 | 17 |
| 19 | Volkswagen | Passat | 2005–2010 | 180 | 46 |
| 20 | Volkswagen | Passat | 2010–2014 | 55 | 14 |
| 21 | Volkswagen | Passat | 2014–2019 | 44 | 12 |
| 22 | Volkswagen | Tiguan | 2008–2011 | 77 | 20 |
| 23 | Volkswagen | Touran | 2003–2006 | 57 | 15 |
| 24 | Volkswagen | Up | 2012–2023 | 28 | 8 |
| Model | White | Blue | Yellow | Silver | Gray | Black | Red | Green | Weighted Acc | Avg. Acc |
|---|---|---|---|---|---|---|---|---|---|---|
| EfficientNetV2 | 100 | 95.08 | 97.78 | 100 | 79.69 | 91.89 | 100 | 88.89 | 94.93 | 94.17 |
| MobileNetV3 | 94.44 | 96.72 | 97.78 | 92.11 | 95.31 | 86.49 | 100 | 77.78 | 94.08 | 92.58 |
| ViT-B16 | 100 | 98.36 | 97.78 | 97.37 | 84.38 | 87.84 | 100 | 44.44 | 94.08 | 88.77 |
| ResNet50 | 100 | 88.52 | 97.78 | 98.68 | 85.94 | 94.59 | 100 | 77.78 | 94.93 | 92.91 |
| ConvNeXt | 100 | 98.36 | 97.78 | 96.05 | 96.88 | 79.73 | 100 | 55.56 | 94.50 | 90.54 |
| Model | White | Blue | Yellow | Silver | Gray | Black | Red | Green | Weighted Acc | Avg. Acc |
|---|---|---|---|---|---|---|---|---|---|---|
| EfficientNetV2 | 100 | 96.72 | 100 | 100 | 100 | 98.65 | 100 | 88.89 | 99.15 | 98.03 |
| MobileNetV3 | 100 | 98.36 | 100 | 100 | 100 | 100 | 100 | 88.89 | 99.58 | 98.41 |
| ViT-B16 | 100 | 100 | 97.78 | 100 | 98.44 | 100 | 100 | 44.44 | 99.15 | 96.75 |
| ResNet50 | 100 | 98.36 | 100 | 100 | 98.44 | 100 | 100 | 88.89 | 99.37 | 98.21 |
| ConvNeXt | 100 | 100 | 100 | 100 | 100 | 98.65 | 100 | 88.89 | 99.58 | 98.44 |
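The tables report two averages because the test set is imbalanced across colors: weighted accuracy counts every sample equally, while average (macro) accuracy counts every class equally, so a rare class such as Green drags the macro figure down while barely moving the weighted one. A small NumPy sketch with invented per-class numbers illustrates the difference.

```python
import numpy as np

# Per-class accuracy (%) and test-set class counts; illustrative numbers only.
per_class_acc = np.array([100.0, 95.0, 90.0, 50.0])
class_counts  = np.array([   60,   60,   60,    9])  # imbalanced: last class is rare

weighted_acc = np.average(per_class_acc, weights=class_counts)  # sample-weighted
macro_acc    = per_class_acc.mean()                             # every class equal

print(round(weighted_acc, 2), round(macro_acc, 2))  # 92.86 83.75
```

The rare class at 50% costs the macro average over 10 points but the weighted average barely 2.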
| Model | White | Blue | Yellow | Silver | Gray | Black | Red | Green | Avg. F1-Score |
|---|---|---|---|---|---|---|---|---|---|
| EfficientNetV2 | 100 | 97.48 | 98.88 | 98.70 | 83.61 | 88.89 | 99.31 | 80 | 93.36 |
| MobileNetV3 | 97.14 | 97.52 | 98.88 | 92.11 | 86.52 | 90.78 | 99.31 | 82.35 | 93.08 |
| ViT-B16 | 100 | 97.56 | 98.88 | 97.37 | 84.38 | 85.53 | 99.31 | 61.54 | 90.57 |
| ResNet50 | 100 | 93.91 | 98.88 | 94.94 | 88 | 91.50 | 99.31 | 82.35 | 93.61 |
| ConvNeXt | 100 | 95.24 | 98.88 | 95.42 | 87.94 | 88.72 | 99.31 | 66.67 | 91.52 |
| Model | Cohen’s Kappa |
|---|---|
| EfficientNetV2 | 0.94 |
| MobileNetV3 | 0.93 |
| ViT-B16 | 0.93 |
| ResNet50 | 0.94 |
| ConvNeXt | 0.93 |
| Class | Precision | Recall | F1-Score |
|---|---|---|---|
| BMW 5 Series 2009–2013 | 100 | 88.89 | 94.12 |
| BMW X3 2010–2013 | 90.91 | 100 | 95.24 |
| Dacia Logan, MCV, Sandero 2016–2020 | 95.74 | 97.83 | 96.77 |
| Dacia Duster 2013–2021 | 100 | 96.43 | 98.18 |
| Dacia Duster 2024 | 92.31 | 92.31 | 92.31 |
| Dacia Logan, MCV 2004–2012 | 89.66 | 96.30 | 92.86 |
| Dacia Logan, Sandero 2020–2024 | 96.88 | 96.88 | 96.88 |
| Dacia Spring 2021 | 100 | 66.67 | 80 |
| Ford Focus 2004–2008 | 90 | 90 | 90 |
| Renault Megane 2003–2009 | 76.92 | 100 | 86.96 |
| Renault Megane 2016–2020 | 100 | 90 | 94.74 |
| Renault Clio 2013–2019 | 100 | 90 | 94.74 |
| Skoda Octavia 2004–2008 | 100 | 100 | 100 |
| Skoda Octavia 2019–2024 | 100 | 100 | 100 |
| Toyota Auris 2013–2018 | 100 | 75 | 85.71 |
| Volkswagen Golf 2000–2003 | 96.97 | 100 | 98.46 |
| Volkswagen Golf 2008–2012 | 88.24 | 100 | 93.75 |
| Volkswagen Passat 2000–2005 | 93.33 | 82.35 | 87.50 |
| Volkswagen Passat 2005–2010 | 93.75 | 97.83 | 95.74 |
| Volkswagen Passat 2010–2014 | 100 | 85.71 | 92.31 |
| Volkswagen Passat 2014–2019 | 91.67 | 91.67 | 91.67 |
| Volkswagen Tiguan 2008–2011 | 100 | 95 | 97.44 |
| Volkswagen Touran 2003–2006 | 93.75 | 100 | 96.77 |
| Volkswagen Up 2012–2023 | 100 | 100 | 100 |
| Overall | 95.42 | 93.04 | 93.84 |
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Istrate, A.; Boboc, M.-G.; Hritcu, D.-T.; Rastoceanu, F.; Grozea, C.; Enache, M. Automatic Vehicle Recognition: A Practical Approach with VMMR and VCR. AI 2025, 6, 329. https://doi.org/10.3390/ai6120329

