Advancements in Embedded Vision Systems for Automotive: A Comprehensive Study on Detection and Recognition Techniques
Abstract
1. Introduction
- Colorimetric Techniques: This division includes strategies that rely on color segmentation to localize candidate regions.
- Geometrical Approaches: This segment comprises methods that exploit geometric properties such as shape and symmetry.
- Learning-Based Algorithms: This category encompasses techniques that incorporate learning paradigms for identification and classification tasks.
- Feature-Based Methods: This category encompasses techniques wherein features are hand-crafted by domain experts.
- Deep Learning Approaches: This classification pertains to methods that employ deep learning algorithms for feature extraction and pattern recognition tasks [31].
- Sign supports must not encroach on the traffic lanes and must be positioned as far as possible from surfaces accessible to vehicles.
- Supports for gantries, jibs, etc., must generally be isolated by safety guardrails.
- To make road traffic safer and easier.
- To remind you of certain traffic regulations.
- To indicate and recall the various special regulations.
- To provide information to the road user.
- RQ1: What are the most efficient embedded vision techniques for road element detection?
- RQ2: What are the current limitations of these systems under real-world constraints?
- RQ3: Which unexplored areas can lead to safer, more scalable ADAS integration?
- RQ4: What types of processors are most commonly used in embedded vision systems for traffic sign and lane detection, and how do they impact system performance under real-time constraints?
2. Materials and Methods
2.1. Types of Signs
- Road infrastructure server;
- From one vehicle to another.
2.2. The Different Systems Existing in the Automotive Industry
a. Advantages of TSR Systems
- Facilitate accurate traffic sign readings;
- Ensure uninterrupted performance;
- Provide complete speed-limit information, helping to avoid accidents;
- Integrate image data with other systems;
- Read all types of plates, both infrared-reflective and non-reflective.
b. Limitations of TSR Systems
- A road sign outside the camera’s detection zone will not be detected.
- These systems operate only within their design limits and merely assist the driver.
- The driver must remain attentive while driving and remains fully responsible for their actions.
- Passive application: informs the driver, by means of pictograms or sounds, that they are entering a zone with a new limit indicated by a traffic sign. In this case, it is the driver’s decision whether or not to obey the signal.
- Active application: intervenes in the car automatically when a sign is detected. For example, if the driver is traveling at excessive speed and the TSR system detects a stop sign while the car registers no intention to stop, a braking command is sent directly to the vehicle to avoid a possible accident, as sketched below.
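To make the passive/active distinction concrete, the following minimal decision-logic sketch routes a detection either to a warning or to an intervention; the types, names, and thresholds are hypothetical illustrations, not taken from any cited system:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    sign_type: str      # e.g., "stop", "speed_limit"
    value: float = 0.0  # e.g., speed limit in km/h

def handle_detection(det: Detection, speed_kmh: float, active_mode: bool) -> str:
    """Route a TSR detection to a passive alert or an active intervention.

    Hypothetical logic illustrating the passive/active split described above;
    real systems add confirmation, hysteresis, and driver-override safeguards.
    """
    if det.sign_type == "stop" and speed_kmh > 5.0:
        if active_mode:
            return "BRAKE_COMMAND"          # active: intervene directly
        return "ALERT: stop sign ahead"     # passive: pictogram/sound only
    if det.sign_type == "speed_limit" and speed_kmh > det.value:
        return f"ALERT: limit {det.value:.0f} km/h exceeded"
    return "NO_ACTION"

# Example: a stop sign detected while driving at 60 km/h in active mode
print(handle_detection(Detection("stop"), speed_kmh=60.0, active_mode=True))
```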
2.3. Horizontal Signs: Road Lanes
- Enhanced Perception for Safety: Image processing algorithms are key to enhancing a vehicle’s perception of its surroundings. They process visual data from cameras to detect objects, lanes, signs, and pedestrians, which is vital for safety features like collision avoidance and lane-keeping assistance [34,42].
- Real-Time Processing: Embedded vision systems must process and analyze visual data in real time to be effective. Image processing techniques allow for the quick interpretation of data, enabling immediate responses to dynamic road conditions [43].
- Machine Learning Integration: The integration of machine learning with image processing has led to more accurate and adaptive vision systems. These systems can learn from vast amounts of data, improving their ability to recognize and respond to various traffic scenarios over time [44].
- Reduced Computational Load: Advanced image processing techniques help in reducing the computational load on embedded systems. By preprocessing visual data and extracting relevant features, these systems can operate efficiently without compromising on speed or accuracy.
- Sensor Fusion: Although camera-based vision systems are fundamental in enabling perception for autonomous vehicles, they exhibit several inherent limitations. Environmental conditions such as fog, shadows, glare, or heavy rain can reduce the reliability of image-based detection. Moreover, vision alone, especially from monocular cameras, fails to provide accurate depth information, which is critical for tasks like distance estimation and obstacle avoidance. In contrast, sensors like LiDAR and radar offer more consistent depth measurements and greater resilience to weather variations, though they may lack the resolution and semantic detail provided by visual sensors. By combining complementary sensor modalities, sensor fusion techniques significantly enhance perception reliability. As demonstrated in [45], fusing visual data with LiDAR or radar improves detection confidence, reduces false positives, and strengthens decision-making, particularly in edge-case scenarios [46].
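As a minimal sketch of such late fusion, assuming a calibrated setup in which LiDAR points are already expressed in the camera frame (all function and variable names here are illustrative, not from the cited works), camera detections can be assigned LiDAR depths as follows:

```python
import numpy as np

def fuse_depth(boxes, lidar_xyz, K):
    """Attach a depth estimate to each camera bounding box (late-fusion sketch).

    boxes: (N, 4) array of [x1, y1, x2, y2] pixel boxes from the camera detector.
    lidar_xyz: (M, 3) LiDAR points already in the camera frame (assumed calibrated).
    K: (3, 3) camera intrinsic matrix.
    Returns the median LiDAR depth per box (NaN if no points fall inside).
    """
    pts = lidar_xyz[lidar_xyz[:, 2] > 0]        # keep points in front of the camera
    uvw = (K @ pts.T).T                         # project points to the image plane
    uv = uvw[:, :2] / uvw[:, 2:3]
    depths = []
    for x1, y1, x2, y2 in boxes:
        inside = (uv[:, 0] >= x1) & (uv[:, 0] <= x2) & \
                 (uv[:, 1] >= y1) & (uv[:, 1] <= y2)
        depths.append(np.median(pts[inside, 2]) if inside.any() else np.nan)
    return np.array(depths)
```

The median is a simple robust choice against stray points; real pipelines also gate by temporal consistency and sensor confidence.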
2.4. Harnessing Artificial Intelligence in the Automotive Sector
- Visual perception: object recognition or scene description.
- Understanding of written or spoken natural language: automatic translation, automatic production of press articles, and sentiment analysis.
- Automatic analysis by “understanding” a query and returning relevant results, even if the result does not contain the words of the query.
- Autonomous decision-making for ADASs and autonomous vehicles.
a. Exploring Artificial Intelligence Technologies
b. The Usefulness of AI for Embedded Vision Systems in the Automotive Sector
2.5. Vision Transformers in Object Detection and Tracking
3. Related Work
3.1. Traditional Tracking Approach
3.2. Deep Learning-Based Tracking Approach
3.3. Three-Dimensional Perception and Structured Scene Understanding
3.4. Dataset Description
3.4.1. German Traffic Sign Recognition Benchmark (GTSRB)
3.4.2. LISA Traffic Sign Dataset
3.4.3. Cityscapes Dataset
3.4.4. TuSimple Lane Detection Dataset
3.4.5. CULane Dataset
3.4.6. U.S. Traffic Signs Dataset
3.4.7. Traffic Sign Dataset—Classification
3.4.8. Caltech Pedestrian Dataset
3.4.9. KITTI Dataset
3.4.10. Malaysia Roads Dataset
3.4.11. STS Dataset (Simulated Traffic Sign Dataset)
3.4.12. Belgian Traffic Sign Classification Dataset
3.4.13. Driver Inattention and Traffic Sign Dataset
3.4.14. Text-Based Traffic Sign Dataset in Chinese and English (TTSDCE) Dataset
3.4.15. Comprehensive Analysis and Comparison of Automotive Datasets
4. Results and Discussions
4.1. Image Processing Methods
4.2. The Color-Based Methods
4.2.1. The Methods That Use the RGB Space
4.2.2. Methods That Use Non-Linear Color Spaces
4.2.3. The Methods That Use Linear Color Spaces
- On individual images, the best results were obtained with the RGB space; on video, however, this was not the case.
- The HSV space yields better results, provided the execution-time constraint is relaxed; a minimal segmentation sketch follows.
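As a minimal sketch of HSV-based segmentation under these constraints (OpenCV; the red-hue thresholds are indicative values, not those reported by the cited works):

```python
import cv2

def segment_red_signs(frame_bgr):
    """Segment red traffic-sign candidates in HSV space (illustrative thresholds).

    Red wraps around the hue axis, so two hue ranges are combined; the BGR-to-HSV
    conversion adds per-frame cost, which is the execution-time constraint
    mentioned above.
    """
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    lower = cv2.inRange(hsv, (0, 100, 80), (10, 255, 255))
    upper = cv2.inRange(hsv, (160, 100, 80), (179, 255, 255))
    mask = cv2.bitwise_or(lower, upper)
    # Morphological opening removes small false-positive blobs
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
```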
4.3. Geometry-Based Methods
4.3.1. Hough Transform
The Detection of the Roadway
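A classical roadway-detection pipeline chains edge detection, a region-of-interest mask, and the probabilistic Hough transform. The sketch below uses OpenCV with indicative parameters, not the exact configuration of any cited work:

```python
import cv2
import numpy as np

def detect_lane_lines(frame_bgr):
    """Classical lane detection: grayscale -> Canny -> ROI mask -> probabilistic Hough.

    The Canny thresholds and trapezoidal ROI are illustrative; tuned values
    depend on camera mounting and image resolution.
    """
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(cv2.GaussianBlur(gray, (5, 5), 0), 50, 150)
    h, w = edges.shape
    roi = np.zeros_like(edges)
    polygon = np.array([[(0, h), (w, h), (int(0.55 * w), int(0.6 * h)),
                         (int(0.45 * w), int(0.6 * h))]], dtype=np.int32)
    cv2.fillPoly(roi, polygon, 255)          # keep only the road region
    masked = cv2.bitwise_and(edges, roi)
    return cv2.HoughLinesP(masked, rho=1, theta=np.pi / 180, threshold=40,
                           minLineLength=30, maxLineGap=100)
```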
Detection of Traffic Signs
4.3.2. HOG Transforms
4.4. Centroids and Contours
4.5. Comparative Analysis of Traffic Signs and Road Lane Detection Methods
4.6. Artificial Intelligence Methods
4.7. Recognition Methods
4.7.1. Learning Methods Based on Manually Extracted Features
- Acquisition: The input image is acquired.
- Pre-processing: Initial preprocessing tasks are performed.
- Segmentation: The image is segmented using color information from the HSV color model.
- Morphological Closure: Refinement of the segmented image occurs.
- Region Filtering: Filtering based on region properties and shape signature is applied.
- Region Cropping: The desired region (likely containing the traffic sign) is cropped.
- Classification: The extracted sign region is classified using automatic feature extraction via a deep CNN; a skeleton of this pipeline is sketched below.
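A skeleton of this acquisition-to-classification chain, assuming OpenCV and an arbitrary classifier callable standing in for the deep CNN (thresholds and area limits are illustrative):

```python
import cv2

def sign_pipeline(frame_bgr, classifier):
    """Skeleton of the pipeline listed above.

    `classifier` is any callable mapping a cropped BGR patch to a class label
    (e.g., a deep CNN); the HSV bounds and minimum area are illustrative.
    """
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)                  # segmentation (HSV)
    mask = cv2.inRange(hsv, (0, 100, 80), (10, 255, 255))
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)            # morphological closure
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    labels = []
    for c in contours:
        if cv2.contourArea(c) < 400:                                  # region filtering
            continue
        x, y, w, h = cv2.boundingRect(c)
        crop = frame_bgr[y:y + h, x:x + w]                            # region cropping
        labels.append(classifier(crop))                               # CNN classification
    return labels
```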
4.7.2. Deep Learning Methods
4.8. Impact of Lighting Conditions on Traffic Sign Detection: Methodological and Dataset Analysis
4.9. Parameter Tuning and Its Impact on Model Performance
4.9.1. Comparison of Strategies
- High Iteration Techniques: Methods like Automated Hyperparameter Search (1500 iterations) and Meta-Learning Hyperparameter Tuning (1300 iterations) prioritize exhaustive exploration of parameter spaces, yielding accuracy improvements of +4.2% to +5.5%. These approaches are ideal for systems that require maximum precision and reliability.
- Moderate Iteration Techniques: Approaches such as ShuffleNet with YOLOv5 tuning (950 iterations) achieve accuracy gains of +4.8% and cost reductions of −18%, offering scalability for systems with moderate resource availability.
- Dynamic Iteration Techniques: Crowdsourced tuning strategies adapt to variable workloads, providing flexibility while maintaining strong performance.
a. Recommendations for Optimization
- Adopt Bayesian Optimization: This method consistently delivers reliable improvements, making it a strong choice for moderately constrained systems (see the sketch after this list).
- Utilize Dynamic Tuning Approaches: Techniques such as crowdsourced optimization provide scalability for real-time or large-scale applications.
- Optimize Lightweight Architectures: Models like the reparametrized YOLOX-s demonstrate that combining lightweight designs with effective parameter tuning achieves high efficiency and accuracy.
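As a minimal sketch of the Bayesian-style tuning recommended above, the following uses Optuna, whose default TPE sampler performs sequential model-based search. The objective is a synthetic stand-in for a real train-and-evaluate loop, and all search ranges are illustrative:

```python
import optuna

def train_and_evaluate(lr, momentum, batch_size):
    """Placeholder for the user's training loop; would return validation mAP.

    A synthetic surrogate stands in here so the sketch runs end to end.
    """
    return 1.0 - abs(lr - 1e-3) * 100 - abs(momentum - 0.9) - batch_size * 1e-4

def objective(trial):
    # Sample one hyperparameter configuration per trial
    lr = trial.suggest_float("lr", 1e-5, 1e-2, log=True)
    momentum = trial.suggest_float("momentum", 0.6, 0.98)
    batch_size = trial.suggest_categorical("batch_size", [16, 32, 64])
    return train_and_evaluate(lr, momentum, batch_size)

study = optuna.create_study(direction="maximize")  # TPE sampler by default
study.optimize(objective, n_trials=50)
print(study.best_params, study.best_value)
```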
4.9.2. Research Directions
4.10. Limitations in Traffic Sign and Road Marking Detection
4.10.1. Geographic Variations in Traffic Signs
4.10.2. Challenges with Road Markings
4.10.3. Dataset Limitations
4.10.4. Enhanced Performance
4.11. Proposed Solutions in Traffic Sign and Road Marking Detection
4.11.1. Transformers Models
4.11.2. Modeling Spatial Hierarchies
4.11.3. Addressing Complex Backgrounds and Enhancing Model Performance
4.12. Advancing Intelligent Transportation Through Hybrid Methodologies
The Region of Interest (ROI)
Category | Technique Used | Detection Score (%) | Recognition Score (%) | Dataset | Execution Time (ms) | False Positive Rate (%) | Type of ROI | Reference | Method | Hardware Platform/Real-Time Specs | FPS (Frames per Second)
---|---|---|---|---|---|---|---|---|---|---|---
Traffic Signs | RANSAC, ROI, Triangular Shapes | 95 | Not specified | Belgium Road Code | Not specified | 2.5 | Static | M. Boumediene [210] | Traditional | Not specified | N/S
Traffic Signs | ROI, SVM, CNN | 94 | 98.33 | GTSRB | 20 | 3 | Static | N. Hasan [211] | Hybrid | GPU (unspecified), ~20 ms | ~50 FPS
Traffic Signs | ROI, YOLOv8 | 96 | Not specified | CCTSDB dataset | Real time | 1.8 | Dynamic | Y. Luo [212] | Hybrid | Jetson Xavier NX @ 21 FPS, 15 W | 21 FPS
Traffic Signs | ROI, CNN, Color Segmentation | 95 | 94 | TTSDCE dataset | 25 | 2.3 | Static | Y. Zhu [213] | Hybrid | GPU (assumed), ~25 ms | ~40 FPS (est.)
Road Lane Boundaries | ROI, Dynamic, Modified Hough Transform | 96 | N/A | Road videos | Real time | Not specified | Dynamic | Y. Shen [214] | Hybrid | PC-based, estimated real time | RT (est.)
Road Lane Boundaries | ROI, Hough Transform | 98 | N/A | 640 × 480 pixel video | 5 | 1.2 | Static | M. H. Syed [215] | Traditional | Desktop CPU (5 ms) | ~200 FPS (est.)
Road Lane Boundaries | ROI, Adaptive, Stereo Vision | 97 | N/A | Road dataset | Not specified | Not specified | Dynamic | Yingfo Chen [216] | Hybrid | Jetson TX2 | 15–20 FPS
Road Lane Boundaries | ROI, CNN | 97 | N/A | Several datasets | Real time | Not specified | Dynamic | A. Gudigar [217] | Hybrid | GPU or Jetson (N/S) | 30–40 FPS (est.)
Road Lane Boundaries | ROI, Segmentation | 93 | N/A | KITTI | 15 | 2.8 | Static | S. P. Narote [114] | Traditional | Not specified | N/S
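To illustrate the Static/Dynamic distinction in the table’s Type of ROI column, a minimal sketch follows; the helper names are hypothetical, and frames are assumed to be NumPy image arrays:

```python
def static_roi(frame, top_frac=0.55):
    """Static ROI: always keep the lower part of the frame, where lanes appear."""
    h = frame.shape[0]
    return frame[int(top_frac * h):, :]

def dynamic_roi(frame, prev_box, margin=20):
    """Dynamic ROI: crop around the previous detection, expanded by a margin."""
    x1, y1, x2, y2 = prev_box
    h, w = frame.shape[:2]
    return frame[max(0, y1 - margin):min(h, y2 + margin),
                 max(0, x1 - margin):min(w, x2 + margin)]
```

A static ROI is cheap and predictable, while a dynamic ROI tracks the scene and cuts the search space further at the cost of extra bookkeeping.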
4.13. Analysis of Histogram Equalization and CLAHE Techniques for Traffic Sign and Lane Detection
For global histogram equalization (HE), each original intensity $r_k$ is mapped to a new intensity $s_k$ according to

$$s_k = T(r_k) = (L - 1)\sum_{j=0}^{k} p_r(r_j), \qquad p_r(r_j) = \frac{n_j}{N},$$

where:
- $s_k$ is the new normalized intensity;
- $r_k$ is the original pixel intensity;
- L is the total number of intensity levels (typically 256 for an 8-bit image);
- $p_r(r_j)$ is the probability of intensity $r_j$, with $n_j$ pixels at that intensity;
- N is the total number of pixels.

CLAHE applies the same mapping on local tiles with a clipped histogram, $h_{\text{clip}}(r_j) = \min\big(h(r_j), \beta\big)$, where:
- $T_{\text{local}}$ is the HE transformation applied locally;
- $\beta$ is the clip limit, the threshold controlling the maximum amplification.
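As a minimal sketch of how both enhancements are applied in practice, using OpenCV’s `equalizeHist` and `createCLAHE` (the clip limit and tile size are illustrative defaults):

```python
import cv2

def enhance(gray, clip_limit=2.0, tile=(8, 8)):
    """Apply global HE and CLAHE to a grayscale image for comparison.

    clip_limit corresponds to the amplification threshold (beta above);
    tile sets the local regions on which CLAHE computes its histograms.
    """
    he = cv2.equalizeHist(gray)
    clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=tile).apply(gray)
    return he, clahe
```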
Authors | Year | Techniques Used | Histogram Equalization | CLAHE | Application | Results/Key Values |
---|---|---|---|---|---|---|
Utkarsh Dubey, Rahul Kumar Chaurasiya [220] | 2021 | CLAHE, CNN (ResNet) | Not Used | Used | Traffic sign recognition | Classification accuracy: 95.7% after CLAHE |
Jiana Yao et al. [221] | 2023 | Histogram Equalization, CLAHE, Mask RCNN | Used | Used | Detection and recognition of traffic signs | 12% increase in F1-Score under low-light conditions |
Chen [223] | 2024 | CLAHE, YOLO | Not Used | Used | Recognition of road markings | Detection rate improved to 91% under low-light conditions |
Manongga [224] | 2024 | CLAHE, Fusion Linear Image Enhancement, YOLOv7 | Not Used | Used | Detection of markings | Detection rate: 89.6%; reduction in false positives by 15% |
Yan [165] | 2023 | CLAHE, Lightweight Model | Not Used | Used | Traffic sign recognition | Accuracy improvement of 8% compared to non-enhanced images |
Wang [225] | 2023 | Histogram Equalization, CLAHE | Used | Used | Detection of signs | Performance improvement to 93.2% in nighttime scenarios |
Shuen Zhao et al. [226] | 2024 | CLAHE, Gamma Correction, CNN | Not Used | Used | Signs and markings | Accuracy: 94.8%; reduction in inference time by 25%
Sun [227] | 2024 | CLAHE, YOLOv5 | Not Used | Used | Detection of signs | Detection rate: 95.2%; increase in detection speed by 20% |
Prasanthi [219] | 2022 | Histogram Equalization, CLAHE, CNN | Used | Used | Lanes and signs | 15% improvement in mAP metrics under low-light conditions |
Qin [218] | 2019 | Histogram Equalization | Used | Not Used | Improvement in sign image | 20% contrast improvement; reduction in detection errors |
4.14. Board Experimentation for Detection and Recognition
4.14.1. Traffic Signs
4.14.2. Road Lines
5. Research Gaps and Challenges
- Dataset Limitations
- Real-Time Performance on Embedded Platforms
- Sensor Fusion Complexity
- Explainability and Trustworthiness
- Lack of Standard Benchmarks
6. Conclusions
7. Future Research Directions
Author Contributions
Funding
Conflicts of Interest
References
- Zheng, L.; Sayed, T.; Mannering, F. Modeling traffic conflicts for use in road safety analysis: A review of analytic methods and future directions. Anal. Methods Accid. Res. 2021, 29, 100142. [Google Scholar] [CrossRef]
- Hu, Y.; Ou, J.; Hu, L. A review of research on traffic conflicts based on intelligent vehicles perception technology. In Proceedings of the 2019 International Conference on Advances in Construction Machinery and Vehicle Engineering: ICACMVE 2019, Changsha, China, 14–16 May 2019; pp. 137–142. [Google Scholar] [CrossRef]
- Al-Turjman, F.; Lemayian, J.P. Intelligence, security, and vehicular sensor networks in internet of things (IoT)-enabled smart-cities: An overview. Comput. Electr. Eng. 2020, 87, 106776. [Google Scholar] [CrossRef]
- Barodi, A.; Zemmouri, A.; Bajit, A.; Benbrahim, M.; Tamtaoui, A. Intelligent Transportation System Based on Smart Soft-Sensors to Analyze Road Traffic and Assist Driver Behavior Applicable to Smart Cities. Microprocess. Microsyst. 2023, 100, 104830. [Google Scholar] [CrossRef]
- Macioszek, E.; Tumminello, M.L. Simulating Vehicle-to-Vehicle Communication at Roundabouts. Transp. Probl. 2024, 19, 45–57. [Google Scholar] [CrossRef]
- Ouerhani, Y.; Alfalou, A.; Desthieux, M.; Brosseau, C. Advanced driver assistance system: Road sign identification using VIAPIX system and a correlation technique. Opt. Lasers Eng. 2016, 89, 184–194. [Google Scholar] [CrossRef]
- Weber, M.; Weiss, T.; Gechter, F.; Kriesten, R. Approach for improved development of advanced driver assistance systems for future smart mobility concepts. Auton. Intell. Syst. 2023, 3, 2. [Google Scholar] [CrossRef]
- Waykole, S.; Shiwakoti, N.; Stasinopoulos, P. Review on lane detection and tracking algorithms of advanced driver assistance system. Sustainability 2021, 13, 11417. [Google Scholar] [CrossRef]
- Bao, Z.; Hossain, S.; Lang, H.; Lin, X. A review of high-definition map creation methods for autonomous driving. Eng. Appl. Artif. Intell. 2023, 122, 106125. [Google Scholar] [CrossRef]
- Belim, S.V.; Belim, S.Y.; Khiryanov, E.V. Hierarchical System for Recognition of Traffic Signs Based on Segmentation of Their Images. Information 2023, 14, 335. [Google Scholar] [CrossRef]
- Kim, T.Y.; Lee, S.H. Combustion and Emission Characteristics of Wood Pyrolysis Oil-Butanol Blended Fuels in a Di Diesel Engine. Int. J. Automot. Technol. 2015, 16, 903–912. [Google Scholar] [CrossRef]
- Barodi, A.; Bajit, A.; Benbrahim, M.; Tamtaoui, A. An Enhanced Approach in Detecting Object Applied to Automotive Traffic Roads Signs. In Proceedings of the 6th International Conference on Optimization and Applications, ICOA 2020, Beni Mellal, Morocco, 20–21 April 2020; pp. 1–6. [Google Scholar] [CrossRef]
- Barodi, A.; Bajit, A.; Benbrahim, M.; Tamtaoui, A. Applying Real-Time Object Shapes Detection to Automotive Traffic Roads Signs. In Proceedings of the 2020 International Symposium on Advanced Electrical and Communication Technologies, ISAECT 2020, Virtual, 25–27 November 2020. [Google Scholar] [CrossRef]
- Barodi, A.; Zemmouri, A.; Bajit, A.; Benbrahim, M.; Tamtaoui, A. An Explainable Model for Detection and Recognition of Traffic Road Signs. In Explainable Artificial Intelligence for Intelligent Transportation Systems; CRC Press: Boca Raton, FL, USA, 2023; pp. 171–206. [Google Scholar] [CrossRef]
- Barodi, M.; Soudane, M.A.; Lalaoui, S. The Organizational Change Conduct: A Lever for the Moroccan Public Digital Transformation. In International Conference on Advanced Technologies for Humanity; Springer: Cham, Switzerland, 2025; pp. 3–10. [Google Scholar] [CrossRef]
- Tian, J.; Liu, S.; Zhong, X.; Zeng, J. LSD-based adaptive lane detection and tracking for ADAS in structured road environment. Soft Comput. 2021, 25, 5709–5722. [Google Scholar] [CrossRef]
- Li, J.; Jiang, F.; Yang, J.; Kong, B.; Gogate, M.; Dashtipour, K.; Hussain, A. Lane-DeepLab: Lane semantic segmentation in automatic driving scenarios for high-definition maps. Neurocomputing 2021, 465, 15–25. [Google Scholar] [CrossRef]
- Chen, W.; Wang, W.; Wang, K.; Li, Z.; Li, H.; Liu, S. Lane departure warning systems and lane line detection methods based on image processing and semantic segmentation: A review. J. Traffic Transp. Eng. 2020, 7, 748–774. [Google Scholar] [CrossRef]
- Zhu, Y.; Zhang, C.; Zhou, D.; Wang, X.; Bai, X.; Liu, W. Traffic sign detection and recognition using fully convolutional network guided proposals. Neurocomputing 2016, 214, 758–766. [Google Scholar] [CrossRef]
- Ruta, A.; Li, Y.; Liu, X. Real-time traffic sign recognition from video by class-specific discriminative features. Pattern Recognit. 2010, 43, 416–430. [Google Scholar] [CrossRef]
- Megalingam, R.K.; Thanigundala, K.; Musani, S.R.; Nidamanuru, H.; Gadde, L. Indian traffic sign detection and recognition using deep learning. Int. J. Transp. Sci. Technol. 2023, 12, 683–699. [Google Scholar] [CrossRef]
- Barodi, A.; Bajit, A.; Benbrahim, M.; Tamtaoui, A. Improving the transfer learning performances in the classification of the automotive traffic roads signs. E3S Web Conf. 2021, 234, 64. [Google Scholar] [CrossRef]
- Barodi, M.; Lalaoui, S. Évaluation du Niveau d’Ouverture des Acteurs Publiques Quant à la Nouvelle Réforme Publique Marocaine [Evaluation of the Level of Openness of Public Actors Regarding the New Moroccan Public Reform]; Ibn Tofail University: Kénitra, Morocco, 2022. [Google Scholar]
- Parsa, A.; Farhadi, A. Measurement and control of nonlinear dynamic systems over the internet (IoT): Applications in remote control of autonomous vehicles. Automatica 2018, 95, 93–103. [Google Scholar] [CrossRef]
- Wang, W.; Lin, H.; Wang, J. CNN based lane detection with instance segmentation in edge-cloud computing. J. Cloud Comput. 2020, 9, 27. [Google Scholar] [CrossRef]
- Kortli, Y.; Gabsi, S.; Voon, L.F.C.L.Y.; Jridi, M.; Merzougui, M.; Atri, M. Deep embedded hybrid CNN–LSTM network for lane detection on NVIDIA Jetson Xavier NX. Knowl. Based Syst. 2022, 240, 107941. [Google Scholar] [CrossRef]
- Chowdhury, K.; Kapoor, R. Relevance of Smart Management of Road Traffic System Using Advanced Intelligence. In Optimized Computational Intelligence Driven Decision-Making; Wiley: Chichester, UK, 2024; pp. 131–150. [Google Scholar]
- Rezaee, K.; Khosravi, M.R.; Attar, H.; Menon, V.G.; Khan, M.A.; Issa, H.; Qi, L. IoMT-Assisted Medical Vehicle Routing Based on UAV-Borne Human Crowd Sensing and Deep Learning in Smart Cities. IEEE Internet Things J. 2023, 10, 18529–18536. [Google Scholar] [CrossRef]
- Bishop, R. Intelligent vehicle R&D: A review and contrast of programs worldwide and emerging trends. Ann. Des Télécommunications 2005, 60, 228–263. [Google Scholar] [CrossRef]
- Elmquist, A.; Negrut, D. Methods and Models for Simulating Autonomous Vehicle Sensors. IEEE Trans. Intell. Veh. 2020, 5, 684–692. [Google Scholar] [CrossRef]
- Barodi, A.; Bajit, A.; Zemmouri, A.; Benbrahim, M.; Tamtaoui, A. Improved Deep Learning Performance for Real-Time Traffic Sign Detection and Recognition Applicable to Intelligent Transportation Systems. Int. J. Adv. Comput. Sci. Appl. 2022, 13, 712–723. [Google Scholar] [CrossRef]
- Francis, S. ADAS: Features of Advanced Driver Assistance Systems. 2017, pp. 1–2. Available online: https://roboticsandautomationnews.com/2017/07/01/adas-features-of-advanced-driver-assistance-systems/13194/ (accessed on 1 June 2025).
- Zemmouri, A.; Elgouri, R.; Alareqi, M.; Dahou, H.; Benbrahim, M.; Hlou, L. A comparison analysis of PWM circuit with arduino and FPGA. ARPN J. Eng. Appl. Sci. 2017, 12, 4679–4683. [Google Scholar]
- Mohamed, B.; Siham, L. Moroccan Public Administration in the Era of Artificial Intelligence: What Challenges to Overcome? In Proceedings of the 2023 9th International Conference on Optimization and Applications (ICOA) 2023, Abu Dhabi, United Arab Emirates, 5–6 October 2023. [Google Scholar] [CrossRef]
- Ye, X.Y.; Hong, D.S.; Chen, H.H.; Hsiao, P.Y.; Fu, L.C. A two-stage real-time YOLOv2-based road marking detector with lightweight spatial transformation-invariant classification. Image Vis. Comput. 2020, 102, 103978. [Google Scholar] [CrossRef]
- De Paula, M.B.; Jung, C.R. Real-time detection and classification of road lane markings. In Proceedings of the 2013 XXVI Conference on Graphics, Patterns and Images, Arequipa, Peru, 5–8 August 2013; pp. 83–90. [Google Scholar] [CrossRef]
- Taamneh, M. Investigating the role of socio-economic factors in comprehension of traffic signs using decision tree algorithm. J. Safety Res. 2018, 66, 121–129. [Google Scholar] [CrossRef]
- Taamneh, M.; Alkheder, S. Traffic sign perception among Jordanian drivers: An evaluation study. Transp. Policy 2018, 66, 17–29. [Google Scholar] [CrossRef]
- Serna, C.G.; Ruichek, Y. Classification of Traffic Signs: The European Dataset. IEEE Access 2018, 6, 78136–78148. [Google Scholar] [CrossRef]
- Barodi, A.; Bajit, A.; Tamtaoui, A.; Benbrahim, M. An Enhanced Artificial Intelligence-Based Approach Applied to Vehicular Traffic Signs Detection and Road Safety Enhancement. Adv. Sci. Technol. Eng. Syst. J. 2021, 6, 672–683. [Google Scholar] [CrossRef]
- Obayd, M.; Zemmouri, A.; Barodi, A.; Benbrahim, M. Advanced Diagnostic Techniques for Automotive Systems: Innovations and AI-Driven Approaches. In International Conference on Advanced Sustainability Engineering and Technology; Springer: Cham, Switzerland, 2025; pp. 485–496. [Google Scholar] [CrossRef]
- Nikolić, Z. Embedded vision in advanced driver assistance systems. Adv. Comput. Vis. Pattern Recognit. 2014, 68, 45–69. [Google Scholar] [CrossRef]
- Duong, T.T.; Seo, J.H.; Tran, T.D.; Young, B.J.; Jeon, J.W. Evaluation of embedded systems for automotive image processing. In Proceedings of the 2018 19th IEEE/ACIS International Conference on Software Engineering, Artificial Intelligence, Networking and Parallel/Distributed Computing (SNPD), Busan, Republic of Korea, 27–29 June 2018; pp. 123–128. [Google Scholar] [CrossRef]
- Udendhran, R.; Balamurugan, M.; Suresh, A.; Varatharajan, R. Enhancing image processing architecture using deep learning for embedded vision systems. Microprocess. Microsyst. 2020, 76, 103094. [Google Scholar] [CrossRef]
- Zemmouri, A.; Barodi, A.; Elgouri, R.; Benbrahim, M. Proposal automatic water purging system for machinery in high humidity environments controlled by an ECU. Comput. Electr. Eng. 2024, 120, 109775. [Google Scholar] [CrossRef]
- Nahata, D.; Othman, K. Exploring the challenges and opportunities of image processing and sensor fusion in autonomous vehicles: A comprehensive review. AIMS Electron. Electr. Eng. 2023, 7, 271–321. [Google Scholar] [CrossRef]
- Mueller, C.; Mezhuyev, V. AI Models and Methods in Automotive Manufacturing: A Systematic Literature Review. In Recent Innovations in Artificial Intelligence and Smart Applications; Springer: Cham, Switzerland, 2022; pp. 1–25. [Google Scholar]
- Bodenhausen, U. Quick Start with AI for Automotive Development: Five Process Changes and One New Process. In Internationales Stuttgarter Symposium: Automobil-und Motorentechnik; Springer: Cham, Switzerland, 2021; pp. 247–262. [Google Scholar]
- Barodi, M.; Lalaoui, S. Civil servants’ readiness for AI adoption: The role of change management in Morocco’s public sector. Probl. Perspect. Manag. 2025, 23, 63–75. [Google Scholar] [CrossRef]
- Barodi, M.; Lalaoui, S. The Readiness of Civil Servants to Join the Era of Artificial Intelligence: A Case Study of Moroccan Public Administration. Chang. Manag. An Int. J. 2025, 25, 1–21. [Google Scholar] [CrossRef]
- da Silva Neto, V.J.; Chiarini, T. The Platformization of Science: Towards a Scientific Digital Platform Taxonomy. Minerva 2023, 61, 1–29. [Google Scholar] [CrossRef]
- Barodi, M.; Lalaoui, S. Le management du changement: Un levier de la réforme publique au Maroc [Change Management: A Lever for Public Reform in Morocco]. Rev. Int. du Cherch. 2022, 5, 1–18. [Google Scholar]
- Espina-Romero, L.; Guerrero-Alcedo, J. Fields Touched by Digitalization: Analysis of Scientific Activity in Scopus. Sustainability 2022, 14, 14425. [Google Scholar] [CrossRef]
- He, S. An endogenous intelligent architecture for wireless communication networks. Wirel. Networks 2024, 30, 1069–1084. [Google Scholar] [CrossRef]
- Barodi, M.; Yassine, H.; Hicham, E.G.; Abdellatif, R.; Khalid, R.; Lalaoui, S. Assessing the Relevance of Change Management Strategy in Moroccan Public Sector Reform. J. IUS Kaji. Huk. dan Keadilan 2024, 12, 447–471. [Google Scholar] [CrossRef]
- Reinhardt, D.; Jesorsky, O.; Traub, M.; Denis, J.; Notton, P. Electronic Components and Systems for Automotive Applications; Langheim, J., Ed.; Lecture Notes in Mobility; Springer: Cham, Switzerland, 2019; ISBN 978-3-030-14155-4. [Google Scholar]
- Garikapati, D.; Shetiya, S.S. Autonomous Vehicles: Evolution of Artificial Intelligence and Learning Algorithms. arXiv 2024, arXiv:2402.17690. [Google Scholar] [CrossRef]
- Zhang, X.; Liao, X.P.; Tu, J.C. A Study of Bibliometric Trends in Automotive Human–Machine Interfaces. Sustainability 2022, 14, 9262. [Google Scholar] [CrossRef]
- Nagy, M.; Lăzăroiu, G. Computer Vision Algorithms, Remote Sensing Data Fusion Techniques, and Mapping and Navigation Tools in the Industry 4.0-Based Slovak Automotive Sector. Mathematics 2022, 10, 3543. [Google Scholar] [CrossRef]
- Pavel, M.I.; Tan, S.Y.; Abdullah, A. Vision-Based Autonomous Vehicle Systems Based on Deep Learning: A Systematic Literature Review. Appl. Sci. 2022, 12, 6831. [Google Scholar] [CrossRef]
- Schlicht, P. AI in the Automotive Industry. In Work and AI 2030; Springer Fachmedien Wiesbaden: Wiesbaden, Germany, 2023; pp. 257–265. [Google Scholar]
- Vermesan, O.; John, R.; Pype, P.; Daalderop, G.; Kriegel, K.; Mitic, G.; Lorentz, V.; Bahr, R.; Sand, H.E.; Bockrath, S.; et al. Automotive Intelligence Embedded in Electric Connected Autonomous and Shared Vehicles Technology for Sustainable Green Mobility. Front. Futur. Transp. 2021, 2, 688482. [Google Scholar] [CrossRef]
- Zhang, Y.; Dhua, A.S.; Kiselewich, S.J.; Bauson, W.A. Challenges of Embedded Computer Vision in Automotive Safety Systems. In Embedded Computer Vision; Springer: London, UK, 2009; pp. 257–279. [Google Scholar]
- Mehta, S.; Rastegari, M. Mobilevit: Light-Weight, General-Purpose, and Mobile-Friendly Vision Transformer. arXiv 2022, arXiv:2110.02178. [Google Scholar]
- Xu, G.; Hao, Z.; Luo, Y.; Hu, H.; An, J. DeViT: Decomposing Vision Transformers for Collaborative Inference in Edge Devices. IEEE Trans. Mob. Comput. 2023, 23, 5917–5932. [Google Scholar] [CrossRef]
- Setyawan, N.; Kurniawan, G.W.; Sun, C.-C.; Kuo, W.-K.; Hsieh, J.-W. Fast-COS: A Fast One-Stage Object Detector Based on Reparameterized Attention Vision Transformer for Autonomous Driving. arXiv 2025, arXiv:2502.07417. [Google Scholar]
- Lai-Dang, Q.-V. A Survey of Vision Transformers in Autonomous Driving: Current Trends and Future Directions. arXiv 2024, arXiv:2403.07542. [Google Scholar] [CrossRef]
- Zemmouri, A.; Barodi, A.; Dahou, H.; Alareqi, M.; Elgouri, R.; Hlou, L.; Benbrahim, M. A microsystem design for controlling a DC motor by pulse width modulation using MicroBlaze soft-core. Int. J. Electr. Comput. Eng. 2023, 13, 1437. [Google Scholar] [CrossRef]
- Rinosha, S.M.J.; Gethsiyal Augasta, M. Review of recent advances in visual tracking techniques. Multimed. Tools Appl. 2021, 80, 24185–24203. [Google Scholar] [CrossRef]
- Juan, O.; Keriven, R.; Postelnicu, G. Stochastic Motion and the Level Set Method in Computer Vision: Stochastic Active Contours. Int. J. Comput. Vis. 2006, 69, 7–25. [Google Scholar] [CrossRef]
- Preusser, T.; Kirby, R.M.; Pätz, T. Image Processing and Computer Vision with Stochastic Images. In Stochastic Partial Differential Equations for Computer Vision with Uncertain Data; Springer: Cham, Switzerland, 2017; pp. 81–116. [Google Scholar]
- Panagakis, Y.; Kossaifi, J.; Chrysos, G.G.; Oldfield, J.; Nicolaou, M.A.; Anandkumar, A.; Zafeiriou, S. Tensor Methods in Computer Vision and Deep Learning. Proc. IEEE 2021, 109, 863–890. [Google Scholar] [CrossRef]
- Guo, M.-H.; Xu, T.-X.; Liu, J.-J.; Liu, Z.-N.; Jiang, P.-T.; Mu, T.-J.; Zhang, S.-H.; Martin, R.R.; Cheng, M.-M.; Hu, S.-M. Attention mechanisms in computer vision: A survey. Comput. Vis. Media 2022, 8, 331–368. [Google Scholar] [CrossRef]
- Chang, C.-H.; Hung, J.C.; Chang, J.-W. Exploring the Potential of Webcam-Based Eye-Tracking for Traditional Eye-Tracking Analysis. In International Conference on Frontier Computing; Springer: Singapore, 2024; pp. 313–316. [Google Scholar]
- Zarindast, A.; Sharma, A. Opportunities and Challenges in Vehicle Tracking: A Computer Vision-Based Vehicle Tracking System. Data Sci. Transp. 2023, 5, 3. [Google Scholar] [CrossRef]
- Vongkulbhisal, J.; De la Torre, F.; Costeira, J.P. Discriminative Optimization: Theory and Applications to Computer Vision. IEEE Trans. Pattern Anal. Mach. Intell. 2019, 41, 829–843. [Google Scholar] [CrossRef]
- Chen, Z.; Du, Y.; Deng, J.; Zhuang, J.; Liu, P. Adaptive Hyper-Feature Fusion for Visual Tracking. IEEE Access 2020, 8, 68711–68724. [Google Scholar] [CrossRef]
- Walia, G.S.; Ahuja, H.; Kumar, A.; Bansal, N.; Sharma, K. Unified Graph-Based Multicue Feature Fusion for Robust Visual Tracking. IEEE Trans. Cybern. 2020, 50, 2357–2368. [Google Scholar] [CrossRef]
- Cao, J.; Pang, J.; Kitani, K. Multi-Object Tracking by Hierarchical Visual Representations. In Proceedings of the 2024 IEEE International Conference on Robotics and Automation (ICRA), Yokohama, Japan, 13–17 May 2024. [Google Scholar]
- Wahid, A.; Yahya, M.; Breslin, J.G.; Intizar, M.A. Self-Attention Transformer-Based Architecture for Remaining Useful Life Estimation of Complex Machines. Procedia Comput. Sci. 2023, 217, 456–464. [Google Scholar] [CrossRef]
- Li, F.; Zhang, S.; Yang, J.; Feng, Z.; Chen, Z. Rail-PillarNet: A 3D Detection Network for Railway Foreign Object Based on LiDAR. Comput. Mater. Contin. 2024, 80, 3819–3833. [Google Scholar] [CrossRef]
- Chen, Z.; Yang, J.; Chen, L.; Li, F.; Feng, Z.; Jia, L.; Li, P. RailVoxelDet: An Lightweight 3D Object Detection Method for Railway Transportation Driven by on-Board LiDAR Data. IEEE Internet Things J. 2025, 12, 37175–37189. [Google Scholar] [CrossRef]
- Li, Z.; Wang, W.; Li, H.; Xie, E.; Sima, C.; Lu, T.; Yu, Q.; Dai, J. BEVFormer: Learning Bird’s-Eye-View Representation From LiDAR-Camera via Spatiotemporal Transformers. IEEE Trans. Pattern Anal. Mach. Intell. 2025, 47, 2020–2036. [Google Scholar] [CrossRef]
- Yu, Z.; Li, J.; Wei, Y.; Lyu, Y.; Tan, X. Combining Camera–LiDAR Fusion and Motion Planning Using Bird’s-Eye View Representation for End-to-End Autonomous Driving. Drones 2025, 9, 281. [Google Scholar] [CrossRef]
- Qi, C.R.; Liu, W.; Wu, C.; Su, H.; Guibas, L.J. Frustum PointNets for 3D Object Detection from RGB-D Data. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018; pp. 918–927. [Google Scholar]
- Keshun, Y.; Puzhou, W.; Yingkui, G. Toward Efficient and Interpretative Rolling Bearing Fault Diagnosis via Quadratic Neural Network With Bi-LSTM. IEEE Internet Things J. 2024, 11, 23002–23019. [Google Scholar] [CrossRef]
- Houben, S.; Stallkamp, J.; Salmen, J.; Schlipsing, M.; Igel, C. Detection of traffic signs in real-world images: The German traffic sign detection benchmark. In Proceedings of the 2013 International Joint Conference on Neural Networks (IJCNN), Dallas, TX, USA, 4–9 August 2013. [Google Scholar] [CrossRef]
- Maldonado-Bascon, S.; Lafuente-Arroyo, S.; Gil-Jimenez, P.; Gomez-Moreno, H.; Lopez-Ferreras, F. Road-Sign Detection and Recognition Based on Support Vector Machines. IEEE Trans. Intell. Transp. Syst. 2007, 8, 264–278. [Google Scholar] [CrossRef]
- Cordts, M.; Omran, M.; Ramos, S.; Rehfeld, T.; Enzweiler, M.; Benenson, R.; Franke, U.; Roth, S.; Schiele, B. The Cityscapes Dataset for Semantic Urban Scene Understanding. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 3213–3223. [Google Scholar]
- Neven, D.; De Brabandere, B.; Georgoulis, S.; Proesmans, M.; Van Gool, L. Towards End-to-End Lane Detection: An Instance Segmentation Approach. In Proceedings of the 2018 IEEE Intelligent Vehicles Symposium (IV), Rio de Janeiro, Brazil, 8–13 July 2018; pp. 286–291. [Google Scholar]
- Pan, X.; Shi, J.; Luo, P.; Wang, X.; Tang, X. Spatial as Deep: Spatial CNN for Traffic Scene Understanding. Proc. AAAI Conf. Artif. Intell. 2018, 32, 7276–7283. [Google Scholar] [CrossRef]
- Stallkamp, J.; Schlipsing, M.; Salmen, J.; Igel, C. Man vs. computer: Benchmarking machine learning algorithms for traffic sign recognition. Neural Netw. 2012, 32, 323–332. [Google Scholar] [CrossRef]
- Haque, W.A.; Arefin, S.; Shihavuddin, A.S.M.; Hasan, M.A. DeepThin: A novel lightweight CNN architecture for traffic sign recognition without GPU requirements. Expert Syst. Appl. 2021, 168, 114481. [Google Scholar] [CrossRef]
- Dollar, P.; Wojek, C.; Schiele, B.; Perona, P. Pedestrian Detection: An Evaluation of the State of the Art. IEEE Trans. Pattern Anal. Mach. Intell. 2012, 34, 743–761. [Google Scholar] [CrossRef] [PubMed]
- Geiger, A.; Lenz, P.; Stiller, C.; Urtasun, R. Vision meets robotics: The KITTI dataset. Int. J. Rob. Res. 2013, 32, 1231–1237. [Google Scholar] [CrossRef]
- Jiang, S.; Huang, Z.; Qian, K.; Luo, Z.; Zhu, T.; Zhong, Y.; Tang, Y.; Kong, M.; Wang, Y.; Jiao, S.; et al. A Survey on Vision-Language-Action Models for Autonomous Driving. arXiv 2025, arXiv:2506.24044. [Google Scholar]
- Madam, A.; Yusof, R. Malaysian traffic sign dataset for traffic sign detection and recognition systems. J. Telecommun. Electron. Comput. Eng. 2016, 8, 137–143. [Google Scholar]
- Larsson, F.; Felsberg, M. Using Fourier Descriptors and Spatial Models for Traffic Sign Recognition. In Scandinavian Conference on Image Analysis; Springer: Berlin/Heidelberg, Germany, 2011; pp. 238–249. [Google Scholar]
- Mathias, M.; Timofte, R.; Benenson, R.; Van Gool, L. Traffic sign recognition—How far are we from the solution? In Proceedings of the 2013 International Joint Conference on Neural Networks (IJCNN 2013-Dallas), Dallas, TX, USA, 4–9 August 2013. [Google Scholar] [CrossRef]
- Youssef, A.; Albani, D.; Nardi, D.; Bloisi, D.D. Fast Traffic Sign Recognition Using Color Segmentation and Deep Convolutional Networks. In International Conference on Advanced Concepts for Intelligent Vision Systems; Springer: Cham, Switzerland, 2016; pp. 205–216. [Google Scholar] [CrossRef]
- Zhang, X.; He, L.; Chen, J.; Wang, B.; Wang, Y.; Zhou, Y. Multiattention Mechanism 3D Object Detection Algorithm Based on RGB and LiDAR Fusion for Intelligent Driving. Sensors 2023, 23, 8732. [Google Scholar] [CrossRef]
- Yasas Mahima, K.T.; Perera, A.G.; Anavatti, S.; Garratt, M. Toward Robust 3D Perception for Autonomous Vehicles: A Review of Adversarial Attacks and Countermeasures. IEEE Trans. Intell. Transp. Syst. 2024, 25, 19176–19202. [Google Scholar] [CrossRef]
- Lillo-Castellano, J.M.; Mora-Jiménez, I.; Figuera-Pozuelo, C.; Rojo-Álvarez, J.L. Traffic sign segmentation and classification using statistical learning methods. Neurocomputing 2015, 153, 286–299. [Google Scholar] [CrossRef]
- Chen, Z.; Yang, J.; Kong, B. A Robust Traffic Sign Recognition System for Intelligent Vehicles. In Proceedings of the 2011 Sixth International Conference on Image and Graphics, Hefei, China, 12–15 August 2011; pp. 975–980. [Google Scholar] [CrossRef]
- Saravanan, G.; Yamuna, G.; Nandhini, S. Real time implementation of RGB to HSV/HSI/HSL and its reverse color space models. In Proceedings of the 2016 International Conference on Communication and Signal Processing (ICCSP), Melmaruvathur, India, 6–8 April 2016; pp. 462–466. [Google Scholar] [CrossRef]
- Zakir, U.; Leonce, A.N.J.; Edirisinghe, E.A. Road sign segmentation based on colour spaces: A comparative study. In Proceedings of the Computer Graphics and Imaging, Innsbruck, Austria, 17–19 February 2010; pp. 72–79. [Google Scholar] [CrossRef]
- Farhat, W.; Sghaier, S.; Faiedh, H.; Souani, C. Design of efficient embedded system for road sign recognition. J. Ambient Intell. Humaniz. Comput. 2019, 10, 491–507. [Google Scholar] [CrossRef]
- Liu, C.; Li, S.; Chang, F.; Wang, Y. Machine Vision Based Traffic Sign Detection Methods: Review, Analyses and Perspectives. IEEE Access 2019, 7, 86578–86596. [Google Scholar] [CrossRef]
- Gomez-Moreno, H.; Maldonado-Bascon, S.; Gil-Jimenez, P.; Lafuente-Arroyo, S. Goal evaluation of segmentation algorithms for traffic sign recognition. IEEE Trans. Intell. Transp. Syst. 2010, 11, 917–930. [Google Scholar] [CrossRef]
- Venetsanopoulos, A.N.; Plataniotis, K.N. Color Image Processing and Applications; Springer: Cham, Switzerland, 2013; ISBN 9783540669531. [Google Scholar]
- De La Escalera, A.; Armingol, J.M.; Pastor, J.M.; Rodríguez, F.J. Visual sign information extraction and identification by deformable models for intelligent vehicles. IEEE Trans. Intell. Transp. Syst. 2004, 5, 57–68. [Google Scholar] [CrossRef]
- Song, L.; Liu, Z.; Duan, H.; Liu, N. A Color-Based Image Segmentation Approach for Traffic Scene Understanding. In Proceedings of the 2017 13th International Conference on Semantics, Knowledge and Grids (SKG), Beijing, China, 13–14 August 2017; pp. 33–37. [Google Scholar] [CrossRef]
- Manjunatha, H.T.; Danti, A.; ArunKumar, K.L. A Novel Approach for Detection and Recognition of Traffic Signs for Automatic Driver Assistance System Under Cluttered Background; Springer: Singapore, 2019; Volume 1035, ISBN 9789811391804. [Google Scholar]
- Narote, S.P.; Bhujbal, P.N.; Narote, A.S.; Dhane, D.M. A review of recent advances in lane detection and departure warning system. Pattern Recognit. 2018, 73, 216–234. [Google Scholar] [CrossRef]
- Yang, T.; Long, X.; Sangaiah, A.K.; Zheng, Z.; Tong, C. Deep detection network for real-life traffic sign in vehicular networks. Comput. Networks 2018, 136, 95–104. [Google Scholar] [CrossRef]
- Chahid, M.; Zemmouri, A.; Barodi, A.; Kartita, M.; Benbrahim, M. Classification of Multiple Eye Diseases, Parallel Feature Extraction with Transfer Learning. In International Conference on Advanced Sustainability Engineering and Technology; Springer: Cham, Switzerland, 2025; pp. 56–64. [Google Scholar]
- Wei, X.; Zhang, Z.; Chai, Z.; Feng, W. Research on Lane Detection and Tracking Algorithm Based on Improved Hough Transform. In Proceedings of the 2018 IEEE International Conference of Intelligent Robotic and Control Engineering (IRCE), Lanzhou, China, 24–27 August 2018; pp. 263–269. [Google Scholar] [CrossRef]
- Farag, W.; Saleh, Z. Road lane-lines detection in real-time for advanced driving assistance systems. In Proceedings of the 2018 International Conference on Innovation and Intelligence for Informatics, Computing, and Technologies (3ICT), Sakhier, Bahrain, 18–20 November 2018; pp. 1–8. [Google Scholar] [CrossRef]
- Xu, S.; Wang, J.; Wu, P.; Shou, W.; Wang, X.; Chen, M. Vision-based pavement marking detection and condition assessment-a case study. Appl. Sci. 2021, 11, 3152. [Google Scholar] [CrossRef]
- Bente, T.F.; Szeghalmy, S.; Fazekas, A. Detection of lanes and traffic signs painted on road using on-board camera. In Proceedings of the 2018 IEEE International Conference on Future IoT Technologies (Future IoT), Eger, Hungary, 18–19 January 2018; pp. 1–7. [Google Scholar] [CrossRef]
- García-Garrido, M.Á.; Sotelo, M.Á.; Martín-Gorostiza, E. Fast traffic sign detection and recognition under changing lighting conditions. In Proceedings of the 2006 IEEE Intelligent Transportation Systems Conference, Toronto, ON, Canada, 17–20 September 2006; pp. 811–816. [Google Scholar] [CrossRef]
- Loy, G.; Zelinsky, A. A fast radial symmetry transform for detecting points of interest. Lect. Notes Comput. Sci. 2002, 2350, 358–368. [Google Scholar] [CrossRef]
- González, Á.; García-Garrido, M.Á.; Llorca, D.F.; Gavilán, M.; Fernández, J.P.; Alcantarilla, P.F.; Parra, I.; Herranz, F.; Bergasa, L.M.; Sotelo, M.Á.; et al. Automatic Traffic Signs and Panels Inspection System Using Computer Vision. IEEE Trans. Intell. Transp. Syst. 2011, 12, 485–499. [Google Scholar]
- Romdhane, N.B.; Mliki, H.; El Beji, R.; Hammami, M. Combined 2d/3d traffic signs recognition and distance estimation. In Proceedings of the 2016 IEEE Intelligent Vehicles Symposium (IV), Gothenburg, Sweden, 19–22 June 2016; pp. 355–360. [Google Scholar] [CrossRef]
- Kartita, M.; Zemmouri, A.; Barodi, A.; Chahid, M.; Benbrahim, M. Evaluating OpenCL, OpenMP, MPI and CUDA for Embedded Systems. In International Conference on Advanced Sustainability Engineering and Technology; Springer: Cham, Switzerland, 2025; pp. 65–79. [Google Scholar]
- Greenhalgh, J.; Mirmehdi, M. Real-time detection and recognition of road traffic signs. IEEE Trans. Intell. Transp. Syst. 2012, 13, 1498–1506. [Google Scholar] [CrossRef]
- Chen, L.; Li, Q.; Li, M.; Mao, Q. Traffic sign detection and recognition for intelligent vehicle. In Proceedings of the 2011 IEEE Intelligent Vehicles Symposium (IV), Baden-Baden, Germany, 5–9 June 2011; pp. 908–913. [Google Scholar] [CrossRef]
- Tsai, C.Y.; Liao, H.C.; Hsu, K.J. Real-time embedded implementation of robust speed-limit sign recognition using a novel centroid-to-contour description method. IET Comput. Vis. 2017, 11, 407–414. [Google Scholar] [CrossRef]
- Zaklouta, F.; Stanciulescu, B. Real-time traffic sign recognition in three stages. Rob. Auton. Syst. 2014, 62, 16–24. [Google Scholar] [CrossRef]
- García-Garrido, M.Á.; Sotelo, M.Á.; Martín-Gorostiza, E. Fast Road Sign Detection Using Hough Transform for Assisted Driving of Road Vehicles. In International Conference on Computer Aided Systems Theory; Springer: Berlin/Heidelberg, Germany, 2005; pp. 543–548. [Google Scholar]
- Borrego-carazo, J.; Castells-rufas, D.; Biempica, E.; Carrabina, J. Resource-Constrained Machine Learning for ADAS: A Systematic Review. IEEE Access 2020, 8, 40573–40598. [Google Scholar] [CrossRef]
- Guan, J.; An, F.; Zhang, X.; Chen, L.; Mattausch, H.J. Real-Time Straight-Line Detection for XGA-Size Videos by Hough Transform with Parallelized Voting Procedures. Sensors 2017, 17, 270. [Google Scholar] [CrossRef]
- Gupta, A.; Choudhary, A. A Framework for Camera based Real-Time Lane and Road Surface Marking Detection and Recognition. IEEE Trans. Intell. Veh. 2018, 3, 476–485. [Google Scholar] [CrossRef]
- Park, M.W.; Park, J.P.; Korea, S.; Jung, S.K. Real-time Vehicle Detection using Equi-Height Mosaicking Image. In Proceedings of the 2013 Research in Adaptive and Convergent Systems, Montreal, QC, Canada, 1–4 October 2013; pp. 171–176. [Google Scholar]
- Huang, D.Y.; Chen, C.H.; Chen, T.Y.; Hu, W.C.; Feng, K.W. Vehicle detection and inter-vehicle distance estimation using single-lens video camera on urban/suburb roads. J. Vis. Commun. Image Represent. 2017, 46, 250–259. [Google Scholar] [CrossRef]
- Gudigar, A.; Chokkadi, S.; Raghavendra, U.; Acharya, U.R. Local texture patterns for traffic sign recognition using higher order spectra. Pattern Recognit. Lett. 2017, 94, 202–210. [Google Scholar] [CrossRef]
- Villalón-Sepúlveda, G.; Torres-Torriti, M.; Flores-Calero, M. Traffic sign detection system for locating road intersections and roundabouts: The chilean case. Sensors 2017, 17, 1207. [Google Scholar] [CrossRef]
- Ellahyani, A.; El Ansari, M. Mean shift and log-polar transform for road sign detection. Multimed. Tools Appl. 2017, 76, 24495–24513. [Google Scholar] [CrossRef]
- Lee, J.; Seo, Y.W.; Zhang, W.; Wettergreen, D. Kernel-based traffic sign tracking to improve highway workzone recognition for reliable autonomous driving. In Proceedings of the 16th International IEEE Conference on Intelligent Transportation Systems (ITSC 2013), The Hague, The Netherlands, 6–9 October 2013; pp. 1131–1136. [Google Scholar] [CrossRef]
- Hao, Q.; Tao, Y.; Cao, J.; Tang, M.; Cheng, Y.; Zhou, D.; Ning, Y.; Bao, C.; Cui, H. Retina-like imaging and its applications: A brief review. Appl. Sci. 2021, 11, 7058. [Google Scholar] [CrossRef]
- Cheng, J.C.P.; Wang, M. Automated detection of sewer pipe defects in closed-circuit television images using deep learning techniques. Autom. Constr. 2018, 95, 155–171. [Google Scholar] [CrossRef]
- Kalms, L.; Rettkowski, J.; Hamme, M.; Gohringer, D. Robust lane recognition for autonomous driving. In Proceedings of the 2017 Conference on Design and Architectures for Signal and Image Processing (DASIP), Dresden, Germany, 27–29 September 2017; pp. 1–6. [Google Scholar] [CrossRef]
- Kumtepe, Ö.; Akar, G.B.; Yüncü, E. On Vehicle Aggressive Driving Behavior Detection Using Visual Information. In Proceedings of the 2015 23rd Signal Processing and Communications Applications Conference (SIU), Malatya, Turkey, 16–19 May 2015; pp. 1–4. [Google Scholar] [CrossRef]
- Anderson, R. Feasibility Study on the Utilization of Microsoft HoloLens to Increase Driving Conditions Awareness. In Proceedings of the 2019 SoutheastCon, Huntsville, AL, USA, 11–14 April 2019; pp. 1–8. [Google Scholar]
- Song, W.; Yang, Y.; Fu, M.; Li, Y.; Wang, M. Lane Detection and Classification for Forward Collision Warning System Based on Stereo Vision. IEEE Sens. J. 2018, 18, 5151–5163. [Google Scholar] [CrossRef]
- Zhang, J.; Huang, Q.; Wu, H.; Liu, Y. A shallow network with combined pooling for fast traffic sign recognition. Information 2017, 8, 45. [Google Scholar] [CrossRef]
- Huang, Z.; Yu, Y.; Gu, J.; Liu, H. An Efficient Method for Traffic Sign Recognition Based on Extreme Learning Machine. IEEE Trans. Cybern. 2017, 47, 920–933. [Google Scholar] [CrossRef]
- Liang, M.; Yuan, M.; Hu, X.; Li, J.; Liu, H. Traffic sign detection by ROI extraction and histogram features-based recognition. In Proceedings of the 2013 International Joint Conference on Neural Networks (IJCNN), Dallas, TX, USA, 4–9 August 2013. [Google Scholar] [CrossRef]
- Abdi, L.; Meddeb, A. Spatially Enhanced Bags of Visual Words Representation to Improve Traffic Signs Recognition. J. Signal Process Syst. 2018, 90, 1729–1741. [Google Scholar] [CrossRef]
- Jose, A.; Thodupunoori, H.; Nair, B.B. Combining Viola-Jones Framework and Deep Learning; Springer: Singapore, 2019; ISBN 9789811336003. [Google Scholar]
- Gudigar, A.; Chokkadi, S.; Raghavendra, U.; Acharya, U.R. Multiple thresholding and subspace based approach for detection and recognition of traffic sign. Multimed. Tools Appl. 2017, 76, 6973–6991. [Google Scholar] [CrossRef]
- Ellahyani, A.; El Ansari, M.; Lahmyed, R.; Trémeau, A. Traffic sign recognition method for intelligent vehicles. J. Opt. Soc. Am. A 2018, 35, 1907. [Google Scholar] [CrossRef]
- Azimi, S.M.; Fischer, P.; Korner, M.; Reinartz, P. Aerial LaneNet: Lane-Marking Semantic Segmentation in Aerial Imagery Using Wavelet-Enhanced Cost-Sensitive Symmetric Fully Convolutional Neural Networks. IEEE Trans. Geosci. Remote Sens. 2019, 57, 2920–2938. [Google Scholar] [CrossRef]
- Aziz, S.; Mohamed, E.A.; Youssef, F. Traffic sign recognition based on multi-feature fusion and ELM classifier. Procedia Comput. Sci. 2018, 127, 146–153. [Google Scholar] [CrossRef]
- Malik, Z.; Siddiqi, I. Detection and Recognition of Traffic Signs from Road Scene Images. In Proceedings of the 2014 12th International Conference on Frontiers of Information Technology, Islamabad, Pakistan, 17–19 December 2014; pp. 330–335. [Google Scholar] [CrossRef]
- Dhar, P.; Abedin, M.Z.; Biswas, T.; Datta, A. Traffic sign detection—A new approach and recognition using convolution neural network. In Proceedings of the 2017 IEEE Region 10 Humanitarian Technology Conference (R10-HTC), Dhaka, Bangladesh, 21–23 December 2017; pp. 416–419. [Google Scholar] [CrossRef]
- Arcos-García, Á.; Álvarez-García, J.A.; Soria-Morillo, L.M. Evaluation of deep neural networks for traffic sign detection systems. Neurocomputing 2018, 316, 332–344. [Google Scholar] [CrossRef]
- Arcos-García, Á.; Álvarez-García, J.A.; Soria-Morillo, L.M. Deep neural network for traffic sign recognition systems: An analysis of spatial transformers and stochastic optimisation methods. Neural Netw. 2018, 99, 158–165. [Google Scholar] [CrossRef]
- Haghighat, A.K.; Ravichandra-Mouli, V.; Chakraborty, P.; Esfandiari, Y.; Arabi, S.; Sharma, A. Applications of Deep Learning in Intelligent Transportation Systems; Springer: Singapore, 2020; Volume 2, ISBN 0123456789. [Google Scholar]
- Bangquan, X.; Xiong, W.X. Real-time embedded traffic sign recognition using efficient convolutional neural network. IEEE Access 2019, 7, 53330–53346. [Google Scholar] [CrossRef]
- Ma, L.; Stückler, J.; Wu, T.; Cremers, D. Detailed Dense Inference with Convolutional Neural Networks via Discrete Wavelet Transform. arXiv 2018, arXiv:1808.01834. [Google Scholar] [CrossRef]
- Abdi, L. Deep Learning Traffic Sign Detection, Recognition and Augmentation. In Proceedings of the Symposium on Applied Computing, Marrakech, Morocco, 3–7 April 2017; pp. 131–136. [Google Scholar]
- Qin, Z.; Wang, H.; Li, X. Ultra Fast Structure-Aware Deep Lane Detection. In Proceedings of the 16th European Conference, Glasgow, UK, 23–28 August 2020; Proceedings, Part XXIV; Lecture Notes in Computer Science; Springer: Cham, Switzerland, 2020; Volume 12369, pp. 276–291. [Google Scholar] [CrossRef]
- Zou, Q.; Jiang, H.; Dai, Q.; Yue, Y.; Chen, L.; Wang, Q. Robust lane detection from continuous driving scenes using deep neural networks. IEEE Trans. Veh. Technol. 2020, 69, 41–54. [Google Scholar] [CrossRef]
- Yan, Y.; Deng, C.; Ma, J.; Wang, Y.; Li, Y. A Traffic Sign Recognition Method Under Complex Illumination Conditions. IEEE Access 2023, 11, 39185–39196. [Google Scholar] [CrossRef]
- Lim, X.R.; Lee, C.P.; Lim, K.M.; Ong, T.S.; Alqahtani, A.; Ali, M. Recent Advances in Traffic Sign Recognition: Approaches and Datasets. Sensors 2023, 23, 4674. [Google Scholar] [CrossRef]
- Kaleybar, J.M.; Khaloo, H.; Naghipour, A. Efficient Vision Transformer for Accurate Traffic Sign Detection. In Proceedings of the 2023 13th International Conference on Computer and Knowledge Engineering (ICCKE), Mashhad, Iran, 1–2 November 2023; pp. 36–41. [Google Scholar] [CrossRef]
- Toshniwal, D.; Loya, S.; Khot, A.; Marda, Y. Optimized Detection and Classification on GTRSB: Advancing Traffic Sign Recognition with Convolutional Neural Networks. arXiv 2024, arXiv:2403.08283. [Google Scholar] [CrossRef]
- Redmon, J.; Divvala, S.; Girshick, R.; Farhadi, A. You only look once: Unified, real-time object detection. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 779–788. [Google Scholar] [CrossRef]
- Ren, S.; He, K.; Girshick, R.; Sun, J. Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 1137–1149. [Google Scholar] [CrossRef] [PubMed]
- Lin, T.Y.; Goyal, P.; Girshick, R.; He, K.; Dollar, P. Focal Loss for Dense Object Detection. IEEE Trans. Pattern Anal. Mach. Intell. 2020, 42, 318–327. [Google Scholar] [CrossRef]
- Onorato, G. Bayesian Optimization for Hyperparameters Tuning in Neural Networks. arXiv 2024, arXiv:2410.21886. [Google Scholar] [CrossRef]
- Shi, B. On the Hyperparameters in Stochastic Gradient Descent with Momentum. J. Mach. Learn. Res. 2024, 25, 1–40. [Google Scholar]
- Victoria, A.H.; Maragatham, G. Automatic tuning of hyperparameters using Bayesian optimization. Evol. Syst. 2021, 12, 217–223. [Google Scholar] [CrossRef]
- Kumaravel, T.; Shanmugaveni, V.; Natesan, P.; Shruthi, V.K.; Kowsalya, M.; Malarkodi, M.S. Optimizing Hyperparameters in Deep Learning Algorithms for Self-Driving Vehicles in Traffic Sign Recognition. In Proceedings of the 2024 International Conference on Science Technology Engineering and Management (ICSTEM), Coimbatore, India, 26–27 April 2024; pp. 1–7. [Google Scholar]
- Kim, T.; Park, S.; Lee, K. Traffic Sign Recognition Based on Bayesian Angular Margin Loss for an Autonomous Vehicle. Electronics 2023, 12, 3073. [Google Scholar] [CrossRef]
- Jaiswal, A.; Deepali; Sachdeva, N. Bayesian Optimized Traffic Sign Recognition on Social Media Data Using Deep Learning. In International Conference on Data Science and Applications; Springer: Singapore, 2024; pp. 499–513. [Google Scholar]
- Liu, L.; Wang, L.; Ma, Z. Improved lightweight YOLOv5 based on ShuffleNet and its application on traffic signs detection. PLoS ONE 2024, 19, e0310269. [Google Scholar] [CrossRef]
- Huang, M.; Wan, Y.; Gao, Z.; Wang, J. Real-time traffic sign detection model based on multi-branch convolutional reparameterization. J. Real-Time Image Process. 2023, 20, 57. [Google Scholar] [CrossRef]
- Fridman, L.; Terwilliger, J.; Jenik, B. DeepTraffic: Crowdsourced Hyperparameter Tuning of Deep Reinforcement Learning Systems for Multi-Agent Dense Traffic Navigation. arXiv 2018, arXiv:1801.02805. [Google Scholar]
- Yi, H.; Bui, K.-H.N. An Automated Hyperparameter Search-Based Deep Learning Model for Highway Traffic Prediction. IEEE Trans. Intell. Transp. Syst. 2021, 22, 5486–5495. [Google Scholar] [CrossRef]
- Yalamanchili, S.; Kodepogu, K.; Manjeti, V.B.; Mareedu, D.; Madireddy, A.; Mannem, J.; Kancharla, P.K. Optimizing Traffic Sign Detection and Recognition by Using Deep Learning. Int. J. Transp. Dev. Integr. 2024, 8, 131–139. [Google Scholar] [CrossRef]
- Bui, K.-H.N.; Yi, H. Optimal Hyperparameter Tuning using Meta-Learning for Big Traffic Datasets. In Proceedings of the 2020 IEEE International Conference on Big Data and Smart Computing (BigComp), Busan, Republic of Korea, 19–22 February 2020; pp. 48–54. [Google Scholar]
- Rubio, A.; Demoor, G.; Chalmé, S.; Sutton-Charani, N.; Magnier, B. Sensitivity Analysis of Traffic Sign Recognition to Image Alteration and Training Data Size. Information 2024, 15, 621. [Google Scholar] [CrossRef]
- Maletzky, A.; Thumfart, S.; Wruß, C. Comparing the Machine Readability of Traffic Sign Pictograms in Austria and Germany. arXiv 2021, arXiv:2109.02362. [Google Scholar] [CrossRef]
- Alom, M.R.; Opi, T.A.; Palok, H.I.; Shakib, M.N.; Hossain, M.P.; Rahaman, M.A. Enhanced Road Lane Marking Detection System: A CNN-Based Approach for Safe Driving. In Proceedings of the 2023 5th International Conference on Sustainable Technologies for Industry 5.0 (STI), Dhaka, Bangladesh, 9–10 December 2023; pp. 1–6. [Google Scholar]
- Hosseini, S.H.; Ghaderi, F.; Moshiri, B.; Norouzi, M. Road Sign Classification Using Transfer Learning and Pre-trained CNN Models. In Proceedings of the International Conference on Artificial Intelligence and Smart Vehicles, Tehran, Iran, 24–25 May 2023; Springer: Cham, Switzerland, 2023; pp. 39–52. [Google Scholar]
- Yang, Z.; Zhao, C.; Maeda, H.; Sekimoto, Y. Development of a Large-Scale Roadside Facility Detection Model Based on the Mapillary Dataset. Sensors 2022, 22, 9992. [Google Scholar] [CrossRef]
- Jurisic, F.; Filkovic, I.; Kalafatic, Z. Multiple-dataset traffic sign classification with OneCNN. In Proceedings of the 2015 3rd IAPR Asian Conference on Pattern Recognition (ACPR), Kuala Lumpur, Malaysia, 3–6 November 2015; pp. 614–618. [Google Scholar]
- Bayoudh, K.; Hamdaoui, F.; Mtibaa, A. Transfer learning based hybrid 2D-3D CNN for traffic sign recognition and semantic road detection applied in advanced driver assistance systems. Appl. Intell. 2021, 51, 124–142. [Google Scholar] [CrossRef]
- Ma, X.; Zhang, T.; Xu, C. GCAN: Graph Convolutional Adversarial Network for Unsupervised Domain Adaptation. In Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 15–20 June 2019; pp. 8258–8268. [Google Scholar] [CrossRef]
- Chen, S.; Zhang, Z.; Zhang, L.; He, R.; Li, Z.; Xu, M.; Ma, H. A Semi-Supervised Learning Framework Combining CNN and Multiscale Transformer for Traffic Sign Detection and Recognition. IEEE Internet Things J. 2024, 11, 19500–19519. [Google Scholar] [CrossRef]
- Zhu, Y.; Yan, W.Q. Traffic sign recognition based on deep learning. Multimed. Tools Appl. 2022, 81, 17779–17791. [Google Scholar] [CrossRef]
- Zhang, L.; Yang, K.; Han, Y.; Li, J.; Wei, W.; Tan, H.; Yu, P.; Zhang, K.; Yang, X. TSD-DETR: A lightweight real-time detection transformer of traffic sign detection for long-range perception of autonomous driving. Eng. Appl. Artif. Intell. 2025, 139, 109536. [Google Scholar] [CrossRef]
- Yang, Y.; Peng, H.; Li, C.; Zhang, W.; Yang, K. LaneFormer: Real-Time Lane Exaction and Detection via Transformer. Appl. Sci. 2022, 12, 9722. [Google Scholar] [CrossRef]
- Kumar, A.D. Novel Deep Learning Model for Traffic Sign Detection Using Capsule Networks. arXiv 2018, arXiv:1805.04424. [Google Scholar] [CrossRef]
- Liu, X.; Yan, W.Q. Traffic-light sign recognition using capsule network. Multimed. Tools Appl. 2021, 80, 15161–15171. [Google Scholar] [CrossRef]
- Ma, L.; Li, Y.; Li, J.; Yu, Y.; Junior, J.M.; Goncalves, W.N.; Chapman, M.A. Capsule-Based Networks for Road Marking Extraction and Classification From Mobile LiDAR Point Clouds. IEEE Trans. Intell. Transp. Syst. 2021, 22, 1981–1995. [Google Scholar] [CrossRef]
- Wang, J.; Chen, Y.; Dong, Z.; Gao, M. Improved YOLOv5 network for real-time multi-scale traffic sign detection. Neural Comput. Appl. 2023, 35, 7853–7865. [Google Scholar] [CrossRef]
- Han, Y.; Wang, F.; Wang, W.; Li, X.; Zhang, J. YOLO-SG: Small traffic signs detection method in complex scene. J. Supercomput. 2024, 80, 2025–2046. [Google Scholar] [CrossRef]
- Sun, C.; Wen, M.; Zhang, K.; Meng, P.; Cui, R. Traffic sign detection algorithm based on feature expression enhancement. Multimed. Tools Appl. 2021, 80, 33593–33614. [Google Scholar] [CrossRef]
- Kandasamy, K.; Natarajan, Y.; Sri Preethaa, K.R.; Ali, A.A.Y. A Robust TrafficSignNet Algorithm for Enhanced Traffic Sign Recognition in Autonomous Vehicles Under Varying Light Conditions. Neural Process. Lett. 2024, 56, 241. [Google Scholar] [CrossRef]
- Saadna, Y.; Behloul, A. An overview of traffic sign detection and classification methods. Int. J. Multimed. Inf. Retr. 2017, 6, 193–210. [Google Scholar] [CrossRef]
- Wei, H.; Zhang, Q.; Qian, Y.; Xu, Z.; Han, J. MTSDet: Multi-scale traffic sign detection with attention and path aggregation. Appl. Intell. 2023, 53, 238–250. [Google Scholar] [CrossRef]
- Zhou, S.; Wang, H.; Nie, C.; Zhang, H.; Sun, Z. Design and Experimental Evaluation of Nighttime Traffic-Sign Detection and Classification Based on Low-Light Enhancement. In Proceedings of the 2022 6th CAA International Conference on Vehicular Control and Intelligence (CVCI), Nanjing, China, 28–30 October 2022; pp. 1–6. [Google Scholar]
- Fleyeh, H. Traffic signs color detection and segmentation in poor light conditions. In Proceedings of the MVA2005 IAPR Conference on Machine Vision Applications, Tsukuba Science City, Japan, 16–18 May 2005; pp. 306–309. [Google Scholar]
- Ayaou, T.; Beghdadi, A.; Karim, A.; Amghar, A. Enhancing Road Signs Segmentation Using Photometric Invariants. arXiv 2020, arXiv:2010.13844. [Google Scholar] [CrossRef]
- Papagianni, S.; Iliopoulou, C.; Kepaptsoglou, K.; Stathopoulos, A. Decision-Making Framework to Allocate Real-Time Passenger Information Signs at Bus Stops: Model Application in Athens, Greece. Transp. Res. Rec. 2017, 2647, 61–70. [Google Scholar] [CrossRef]
- Ertler, C.; Mislej, J.; Ollmann, T.; Porzi, L.; Neuhold, G.; Kuang, Y. The Mapillary Traffic Sign Dataset for Detection and Classification on a Global Scale. In Proceedings of the 16th European Conference on Computer Vision (ECCV 2020), Glasgow, UK, 23–28 August 2020; Lecture Notes in Computer Science; Springer: Cham, Switzerland, 2020; Volume 12368, pp. 68–84. [Google Scholar] [CrossRef]
- Boumediene, M.; Cudel, C.; Basset, M.; Ouamri, A. Triangular traffic signs detection based on RSLD algorithm. Mach. Vis. Appl. 2013, 24, 1721–1732. [Google Scholar] [CrossRef]
- Hasan, N.; Anzum, T.; Jahan, N. Traffic Sign Recognition System (TSRS): SVM and Convolutional Neural Network. Lect. Notes Netw. Syst. 2021, 145, 69–79. [Google Scholar] [CrossRef]
- Luo, Y.; Ci, Y.; Jiang, S.; Wei, X. A novel lightweight real-time traffic sign detection method based on an embedded device and YOLOv8. J. Real-Time Image Process. 2024, 21, 24. [Google Scholar] [CrossRef]
- Zhu, Y.; Liao, M.; Yang, M.; Liu, W. Cascaded Segmentation-Detection Networks for Text-Based Traffic Sign Detection. IEEE Trans. Intell. Transp. Syst. 2018, 19, 209–219. [Google Scholar] [CrossRef]
- Shen, Y.; Bi, Y.; Yang, Z.; Liu, D.; Liu, K.; Du, Y. Lane line detection and recognition based on dynamic ROI and modified firefly algorithm. Int. J. Intell. Robot. Appl. 2021, 5, 143–155. [Google Scholar] [CrossRef]
- Syed, M.H.; Kumar, S. Road Lane Line Detection Based on ROI Using Hough Transform Algorithm. Lect. Notes Netw. Syst. 2023, 421, 567–580. [Google Scholar] [CrossRef]
- Chen, Y.; Wong, P.K.; Yang, Z.-X. A New Adaptive Region of Interest Extraction Method for Two-Lane Detection. Int. J. Automot. Technol. 2021, 22, 1631–1649. [Google Scholar] [CrossRef]
- Zakaria, N.J.; Shapiai, M.I.; Ghani, R.A.; Yassin, M.N.M.; Ibrahim, M.Z.; Wahid, N. Lane Detection in Autonomous Vehicles: A Systematic Review. IEEE Access 2023, 11, 3729–3765. [Google Scholar] [CrossRef]
- Qin, Y.Y.; Cui, W.; Li, Q.; Zhu, W.; Li, X.G. Traffic Sign Image Enhancement in Low Light Environment. Procedia Comput. Sci. 2019, 154, 596–602. [Google Scholar] [CrossRef]
- Prasanthi, B.; Kantheti, K.R. Lane Detection and Traffic Sign Recognition using OpenCV and Deep Learning for Autonomous Vehicles. Int. Res. J. Eng. Technol. (IRJET) 2022, 9, 478–480. [Google Scholar]
- Dubey, U.; Chaurasiya, R.K. Efficient Traffic Sign Recognition Using CLAHE-Based Image Enhancement and ResNet CNN Architectures. Int. J. Cogn. Informatics Nat. Intell. 2022, 15, 295811. [Google Scholar] [CrossRef]
- Yao, J.; Huang, B.; Yang, S.; Xiang, X.; Lu, Z. Traffic sign detection and recognition under low illumination. Mach. Vis. Appl. 2023, 34, 75. [Google Scholar] [CrossRef]
- Dewi, C.; Chernovita, H.P.; Philemon, S.A.; Ananta, C.A.; Dai, G.; Chen, A.P.S. Integration of YOLOv9 and Contrast Limited Adaptive Histogram Equalization for Nighttime Traffic Sign Detection. Math. Model. Eng. Probl. 2025, 12, 37–45. [Google Scholar] [CrossRef]
- Chen, R.-C.; Dewi, C.; Zhuang, Y.-C.; Chen, J.-K. Contrast Limited Adaptive Histogram Equalization for Recognizing Road Marking at Night Based on Yolo Models. IEEE Access 2023, 11, 92926–92942. [Google Scholar] [CrossRef]
- Manongga, W.E.; Chen, R.; Jiang, X.; Chen, R. Enhancing road marking sign detection in low-light conditions with YOLOv7 and contrast enhancement techniques. Int. J. Appl. Sci. Eng. 2023, 21, 1–10. [Google Scholar] [CrossRef]
- Wang, T.; Qu, H.; Liu, C.; Zheng, T.; Lyu, Z. LLE-STD: Traffic Sign Detection Method Based on Low-Light Image Enhancement and Small Target Detection. Mathematics 2024, 12, 3125. [Google Scholar] [CrossRef]
- Zhao, S.; Gong, Z.; Zhao, D. Traffic signs and markings recognition based on lightweight convolutional neural network. Vis. Comput. 2024, 40, 559–570. [Google Scholar] [CrossRef]
- Sun, X.; Liu, K.; Chen, L.; Cai, Y.; Wang, H. LLTH-YOLOv5: A Real-Time Traffic Sign Detection Algorithm for Low-Light Scenes. Automot. Innov. 2024, 7, 121–137. [Google Scholar] [CrossRef]
- Lopez-Montiel, M.; Orozco-Rosas, U.; Sanchez-Adame, M.; Picos, K.; Ross, O.H.M. Evaluation Method of Deep Learning-Based Embedded Systems for Traffic Sign Detection. IEEE Access 2021, 9, 101217–101238. [Google Scholar] [CrossRef]
- Siddiqui, F.; Amiri, S.; Minhas, U.I.; Deng, T.; Woods, R.; Rafferty, K.; Crookes, D. FPGA-based processor acceleration for image processing applications. J. Imaging 2019, 5, 16. [Google Scholar] [CrossRef] [PubMed]
- El Hajjouji, I.; Mars, S.; Asrih, Z.; El Mourabit, A. A novel FPGA implementation of Hough Transform for straight lane detection. Eng. Sci. Technol. Int. J. 2020, 23, 274–280. [Google Scholar] [CrossRef]
- Zemmouri, A.; Alareqi, M.; Elgouri, R.; Benbrahim, M.; Hlou, L. Integration and implementation of system-on-a-programmable-chip (SOPC) in FPGA. J. Theor. Appl. Inf. Technol. 2015, 76, 127–133. [Google Scholar]
- Zemmouri, A.; Barodi, A.; Alareqi, M.; Elgouri, R.; Hlou, L.; Benbrahim, M. Proposal of a reliable embedded circuit to control a stepper motor using microblaze soft-core processor. Int. J. Reconfigurable Embed. Syst. 2022, 11, 215. [Google Scholar] [CrossRef]
- Lam, D.K.; Du, C.V.; Pham, H.L. QuantLaneNet: A 640-FPS and 34-GOPS/W FPGA-Based CNN Accelerator for Lane Detection. Sensors 2023, 23, 6661. [Google Scholar] [CrossRef]
- Zemmouri, A.; Elgouri, R.; Alareqi, M.; Benbrahim, M.; Hlou, L. Design and implementation of pulse width modulation using hardware/software microblaze soft-core. Int. J. Power Electron. Drive Syst. 2017, 8, 167–175. [Google Scholar] [CrossRef]
- Isa, I.S.B.M.; Yeong, C.J.; Shaari Azyze, N.L.A. bin M. Real-time traffic sign detection and recognition using Raspberry Pi. Int. J. Electr. Comput. Eng. 2022, 12, 331–338. [Google Scholar] [CrossRef]
- Triki, N.; Karray, M.; Ksantini, M. A Real-Time Traffic Sign Recognition Method Using a New Attention-Based Deep Convolutional Neural Network for Smart Vehicles. Appl. Sci. 2023, 13, 4739. [Google Scholar] [CrossRef]
- Han, Y.; Virupakshappa, K.; Pinto, E.V.S.; Oruklu, E. Hardware/software co-design of a traffic sign recognition system using Zynq FPGAs. Electronics 2015, 4, 1062–1089. [Google Scholar] [CrossRef]
- Farhat, W.; Faiedh, H.; Souani, C.; Besbes, K. Real-time embedded system for traffic sign recognition based on ZedBoard. J. Real-Time Image Process. 2019, 16, 1813–1823. [Google Scholar] [CrossRef]
- Hmida, R.; Ben Abdelali, A.; Mtibaa, A. Hardware implementation and validation of a traffic road sign detection and identification system. J. Real-Time Image Process. 2018, 15, 13–30. [Google Scholar] [CrossRef]
- Malmir, S.; Shalchian, M. Design and FPGA implementation of dual-stage lane detection, based on Hough transform and localized stripe features. Microprocess. Microsyst. 2019, 64, 12–22. [Google Scholar] [CrossRef]
- Teo, T.Y.; Sutopo, R.; Lim, J.M.Y.; Wong, K.S. Innovative lane detection method to increase the accuracy of lane departure warning system. Multimed. Tools Appl. 2021, 80, 2063–2080. [Google Scholar] [CrossRef]
- Gajjar, H.; Sanyal, S.; Shah, M. A comprehensive study on lane detecting autonomous car using computer vision. Expert Syst. Appl. 2023, 233, 120929. [Google Scholar] [CrossRef]
- Suder, J.; Podbucki, K.; Marciniak, T.; Dabrowski, A. Low complexity lane detection methods for light photometry system. Electronics 2021, 10, 1665. [Google Scholar] [CrossRef]
- Guo, Y.; Zhou, J.; Dong, Q.; Bian, Y.; Li, Z.; Xiao, J. A lane-level localization method via the lateral displacement estimation model on expressway. Expert Syst. Appl. 2024, 243, 122848. [Google Scholar] [CrossRef]
SALSA Step | Description |
---|---|
Search | A comprehensive literature search was conducted using major scientific databases, including IEEE Xplore, MDPI, SpringerLink, ScienceDirect, and Wiley, covering the period from 2010 to 2025. Boolean keyword combinations such as “traffic sign recognition”, “lane detection”, “ADAS”, “embedded vision systems”, and “deep learning in automotive” were employed. The search yielded over 200 potentially relevant sources, including journal articles, conference papers, and technical reports. |
Appraisal | A systematic selection process was applied based on predefined inclusion and exclusion criteria. Selected studies demonstrated scientific rigor, empirical contributions, and direct relevance to embedded vision systems for road safety. Non-peer-reviewed materials, non-empirical works, and out-of-scope studies were excluded. The final pool of studies reflects the most impactful contributions to the field. |
Synthesis | The retained articles were categorized into six thematic domains: (i) traffic sign detection and recognition, (ii) lane detection and departure warning, (iii) vision algorithms and deep learning models, (iv) sensor fusion and embedded architectures, (v) benchmark datasets and evaluation metrics, and (vi) hardware implementation and real-time constraints. This structure provides a coherent synthesis of the state-of-the-art technologies. |
Analysis | The categorized works were critically analyzed to identify strengths, limitations, and future directions. Special focus was placed on real-time challenges, robustness under adverse conditions, hardware–software integration, and dataset limitations. This analysis highlights technological gaps and proposes avenues for further research in intelligent embedded automotive systems. |
Tracking Approach | Description | Equations and Parameters
---|---|---
Stochastic Method | Incorporates randomness and probability to model uncertainty in data; used in noisy or dynamic scenarios [34,70,71]. | Gaussian distribution: $f(x) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{(x-\mu)^2}{2\sigma^2}}$ (1); Kalman filter prediction: $\hat{x}_{k\mid k-1} = A\hat{x}_{k-1\mid k-1} + Bu_k$ (2); Kalman filter update: $\hat{x}_{k\mid k} = \hat{x}_{k\mid k-1} + K_k\big(z_k - H\hat{x}_{k\mid k-1}\big)$ (3); Monte Carlo estimate: $\mathbb{E}[f(x)] \approx \frac{1}{N}\sum_{i=1}^{N} f(x_i)$ (4); MCMC (Metropolis–Hastings) acceptance: $\alpha = \min\left(1, \frac{P(x')\,q(x \mid x')}{P(x)\,q(x' \mid x)}\right)$ (5); SGD update: $\theta_{t+1} = \theta_t - \eta\,\nabla_\theta L(\theta_t)$ (6)
Deterministic Method | Uses predefined rules for consistent outputs; effective in stable conditions [72,73]. | Image filtering (convolution): $g(x,y) = \sum_{i}\sum_{j} f(x-i,\,y-j)\,h(i,j)$ (7); Edge detection (gradient magnitude): $\lVert\nabla I\rVert = \sqrt{(\partial I/\partial x)^2 + (\partial I/\partial y)^2}$ (8); Geometric transform: $P' = T \cdot P$ (9)
Generative Method | Models the underlying data distribution P(X); useful for image synthesis and representation learning [74]. | Data log-likelihood: $\mathcal{L}(\theta) = \sum_{i} \log P_\theta(x_i)$ (10); GAN objective: $\min_G \max_D\ \mathbb{E}_{x \sim p_{data}}[\log D(x)] + \mathbb{E}_{z \sim p_z}[\log(1 - D(G(z)))]$ (11); VAE loss: $\mathcal{L} = \mathbb{E}_{q(z \mid x)}[\log p(x \mid z)] - D_{KL}\big(q(z \mid x) \parallel p(z)\big)$ (12)
Discriminative Method | Learns decision boundaries between classes by modeling P(Y\|X); applied in classification and detection [75,76]. | Logistic regression: $P(y{=}1 \mid x) = \frac{1}{1 + e^{-(w^\top x + b)}}$ (13); SVM margin: $\frac{2}{\lVert w \rVert}$ (14); Neural network (cross-entropy) loss: $L = -\sum_{i} y_i \log \hat{y}_i$ (15)
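Of the stochastic entries above, the Kalman filter (Equations (2) and (3)) is the one most commonly embedded in sign- and lane-tracking pipelines. The following is a minimal sketch, assuming a constant-velocity motion model and NumPy only; the noise covariances and the synthetic measurements are illustrative assumptions, not values from the cited works.

```python
import numpy as np

# Constant-velocity Kalman filter tracking a detection center (cx, cy).
# State x = [cx, cy, vx, vy]^T; all noise levels are illustrative.
dt = 1.0                                    # one frame between measurements
A = np.array([[1, 0, dt, 0],                # state transition (Equation (2))
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], dtype=float)
H = np.array([[1, 0, 0, 0],                 # only position is observed
              [0, 1, 0, 0]], dtype=float)
Q = np.eye(4) * 1e-2                        # process noise covariance
R = np.eye(2) * 1.0                         # measurement noise covariance

x = np.zeros((4, 1))                        # initial state estimate
P = np.eye(4) * 10.0                        # initial uncertainty

def kalman_step(z):
    """One predict/update cycle for a measurement z = [[cx], [cy]]."""
    global x, P
    x = A @ x                               # predict (Equation (2))
    P = A @ P @ A.T + Q
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # Kalman gain
    x = x + K @ (z - H @ x)                 # update (Equation (3))
    P = (np.eye(4) - K @ H) @ P
    return x[:2].ravel()                    # filtered (cx, cy)

# Noisy detections of a sign center drifting right at ~2 px/frame
rng = np.random.default_rng(0)
for t in range(10):
    z = np.array([[2.0 * t + rng.normal()], [50.0 + rng.normal()]])
    print(t, kalman_step(z).round(2))
```

Under short occlusions the update step is simply skipped and the predict step carries the track forward, which is why this filter pairs well with frame-level detectors.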
Tracking Approach | Description | Equations and Parameters
---|---|---
Deep Feature-Based | Uses hierarchical CNN representations and transfer learning (e.g., VGG16, ResNet) for classification and detection tasks [77]. | Weight update: $w_{t+1} = w_t - \eta\,\frac{\partial L}{\partial w_t}$ (16); Feature map: $F^{(l)} = \sigma\big(W^{(l)} * F^{(l-1)} + b^{(l)}\big)$ (17); Softmax: $\hat{y}_i = \frac{e^{z_i}}{\sum_{j} e^{z_j}}$ (18)
Hyper Feature-Based | Combines handcrafted (HOG, color histograms) and CNN features using multi-modal integration for robust tracking [78,79]. | HOG gradient: $m(x,y) = \sqrt{G_x^2 + G_y^2}$, $\theta(x,y) = \arctan(G_y/G_x)$ (19); Unified graph fusion (20)
Transformer-Based (ViT) | Uses self-attention mechanisms for global feature modeling; models like MobileViT and Fast-COS improve runtime performance [80]. | Self-attention: $\mathrm{Attention}(Q,K,V) = \mathrm{softmax}\!\left(\frac{QK^\top}{\sqrt{d_k}}\right)V$ (21)
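Equation (21) is compact but easy to misread, so the following NumPy sketch makes the tensor shapes explicit; the patch count, embedding width, and random projection matrices are illustrative assumptions rather than any cited model's configuration.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)     # subtract max for stability
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention (Equation (21)) over tokens X."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv            # project tokens to Q, K, V
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)             # pairwise token affinities
    return softmax(scores, axis=-1) @ V         # attention-weighted values

rng = np.random.default_rng(0)
X = rng.normal(size=(16, 32))                   # 16 image patches, 32-dim each
Wq, Wk, Wv = (rng.normal(size=(32, 32)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)      # (16, 32)
```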
3D Approach | Description | Equations and Parameters
---|---|---
Lightweight 3D Detection (Voxel-Based) | LiDAR point clouds are discretized into voxel grids, and features are aggregated per voxel to reduce computation [85]. | Voxelization: $V_{ijk} = \{\, p \in P : p \text{ falls in bin } (i,j,k) \,\}$ (22), where $P$ = point cloud and $(i,j,k)$ index the voxel bin ranges.
Bird’s-Eye-View (BEV) Representation | Transforms 3D coordinates into a 2D top-down plane for detection/segmentation [83]. | Projection: $(u,v) = \big(\lfloor x/\Delta \rfloor, \lfloor y/\Delta \rfloor\big)$ (23), where $(x, y, z)$ = 3D point, $(u, v)$ = BEV coordinates, and $\Delta$ = grid cell size.
Multi-Sensor Fusion | Features from LiDAR, camera, and radar are combined via weighted or attention-based fusion [85]. | Weighted fusion: $F_{fused} = \alpha F_1 + (1 - \alpha) F_2$ (24), where $F_1$, $F_2$ = feature vectors and $\alpha$ = fusion weight.
Non-Destructive Measurement (Quadratic NN) | Quadratic neural networks constrained by physics provide interpretable diagnostics under zero-fault conditions [86]. | Quadratic model: $y = x^\top Q x + W^\top x + b$ (25), where $x$ = input vector, $Q$ = quadratic coefficient matrix, $W$ = weight vector, $b$ = bias.
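To ground Equations (22) and (23), the sketch below voxelizes a synthetic point cloud and rasterizes it into a BEV occupancy grid; the scene extents, voxel size, and per-voxel feature (a simple point count) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic LiDAR returns: x in [0, 40] m, y in [-20, 20] m, z in [-2, 2] m
P = rng.uniform(low=[0, -20, -2], high=[40, 20, 2], size=(5000, 3))

VOXEL = 0.5                                    # voxel edge length in meters

# Voxelization (Equation (22)): integer bin index per point, then aggregate
idx = np.floor(P / VOXEL).astype(int)
voxels = {}
for key in map(tuple, idx):
    voxels[key] = voxels.get(key, 0) + 1       # per-voxel point count
print(f"{len(voxels)} occupied voxels from {len(P)} points")

# BEV projection (Equation (23)): drop z, rasterize (x, y) onto a 2D grid
u = (P[:, 0] / VOXEL).astype(int)              # forward axis -> rows
v = ((P[:, 1] + 20.0) / VOXEL).astype(int)     # lateral axis, shifted >= 0
bev = np.zeros((int(40 / VOXEL), int(40 / VOXEL)))
np.add.at(bev, (np.clip(u, 0, bev.shape[0] - 1),
                np.clip(v, 0, bev.shape[1] - 1)), 1)
print("BEV grid:", bev.shape, "max points per cell:", int(bev.max()))
```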
Dataset Name | Category | Number of Images | Number of Classes | Training Images | Validation Images | Test Images | Image Resolution | Annotation Type | Captured Environments | Data Augmentation | Metadata Availability | License | Official Link |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
GTSRB Dataset | Traffic Sign Recognition | 50,000+ | 43 | 39,209 | 12,630 | 12,630 | Varies | Bounding Box | Urban | Yes | No | Open Source | https://benchmark.ini.rub.de/ (accessed on 12 June 2025) |
LISA Dataset | Traffic Sign Recognition | 6610+ | 47 | 4700+ | 1000+ | 1000+ | 1280 × 960 | Bounding Box | Urban/Suburban | No | No | Open Source | https://cvrr.ucsd.edu/home (accessed on 12 June 2025)
Cityscapes Dataset | Urban Scene Segmentation | 25,000+ | 30 | 5000 | 500 | 19,500 | 2048 × 1024 | Pixel-Level | Urban | No | No | Open Source | https://www.cityscapes-dataset.com/ (accessed on 12 June 2025) |
TuSimple Dataset | Lane Detection | 6408 | Not Applicable | 3626 | 358 | 2782 | 1280 × 720 | Lane Points | Highways | No | No | Open Source | https://paperswithcode.com/dataset/tusimple (accessed on 12 June 2025) |
CULane Dataset | Lane Detection | 133,000 | Not Applicable | Approx. 100,000 | Approx. 20,000 | Approx. 13,000 | 1920 × 1080 | Lane Points | Urban/Suburban | No | No | Open Source | https://xingangpan.github.io/projects/CULane.html (accessed on 12 June 2025) |
U.S. Traffic Signs Dataset | Traffic Sign Recognition | Varies | 100+ | Varies | Varies | Varies | Varies | Bounding Box | Urban/Suburban | No | No | Proprietary | Not Publicly Available |
Traffic Sign Dataset - Classification | Traffic Sign Recognition | 6960 | 58 | Varies | Varies | 2000 | Varies | Bounding Box | Urban/Rural | No | No | Open Source | https://www.kaggle.com/datasets/ahemateja19bec1025/traffic-sign-dataset-classification (accessed on 12 June 2025)
Caltech Pedestrian Dataset | Pedestrian Detection | 250,000+ | 1 | 200,000+ | 30,000+ | 20,000+ | 640 × 480 | Bounding Box | Urban | No | No | Open Source | https://www.kaggle.com/datasets/kalvinquackenbush/caltechpedestriandataset (accessed on 12 June 2025) |
KITTI Dataset | Multiple Vision Tasks | Varies | Varies | Varies | Varies | Varies | Varies | Bounding Box, 3D | Mixed | Yes | Yes | Open Source | http://www.cvlibs.net/datasets/kitti/ (accessed on 12 June 2025) |
Malaysia Roads Dataset | Road Markings | Thousands | Not Applicable | Varies | Varies | Varies | Varies | Bounding Box | Urban | No | No | Unknown | Not Publicly Available |
GNSS Dataset | Geolocation | Varies | Not Applicable | Varies | Varies | Varies | Varies | Metadata | Mixed | No | Yes | Proprietary | Not Publicly Available |
STS Dataset | Traffic Sign Recognition | Varies | Varies | Varies | Varies | Varies | Varies | Bounding Box | Mixed | No | No | Unknown | Not Publicly Available |
BTSC Dataset | Traffic Sign Classification | 10,000+ | 62 | 7000+ | 1500+ | 1500+ | Varies | Bounding Box | Urban/Suburban | No | No | Open Source | https://btsd.ethz.ch/shareddata/ (accessed on 12 June 2025) |
DITS Dataset | Driver Attention | Varies | Not Applicable | Varies | Varies | Varies | Varies | Driver Metadata | Mixed | No | Yes | Open Source | https://universe.roboflow.com/basharsss1998-gmail-com/dits (accessed on 12 June 2025) |
TTSDCE Dataset | Traffic Signs | 1800 | Multilingual | 1500 | N/A | 300 | 300 × 300 to 1280 × 720 | Bounding boxes, classes | Highways, urban streets | No | No | Open Source | http://www.aaroads.com (accessed on 12 June 2025)
Color | H | S | V |
---|---|---|---|
Blue | 0.47 ≤ H ≤ 0.72 or 0.85 ≤ H ≤ 1 | S ≥ 0.3 | V ≥ 0.11
Red | H ≤ 0.03 or H ≥ 0.94 | S ≥ 0.15 | V ≥ 0.07
Method | ThR1 | ThR2 | ThB1 | ThB2 | ThY1 | ThY2 | ThY3
---|---|---|---|---|---|---|---
HST | 10 | 300 | 190 | 270 | 20 | 60 | 150
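Thresholds such as those above are applied per pixel in HSV space. The sketch below, assuming OpenCV and rescaling the blue ranges of the first table to OpenCV's H in [0, 180] and S, V in [0, 255] encoding, extracts blue sign candidates; the input file name and the minimum-area filter are hypothetical.

```python
import cv2
import numpy as np

img = cv2.imread("road_scene.jpg")               # hypothetical test frame
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# 0.47 <= H <= 0.72, S >= 0.3, V >= 0.11 rescaled to OpenCV's value ranges
lower_blue = np.array([int(0.47 * 180), int(0.30 * 255), int(0.11 * 255)])
upper_blue = np.array([int(0.72 * 180), 255, 255])
mask = cv2.inRange(hsv, lower_blue, upper_blue)  # binary mask of blue pixels

# Keep only blobs large enough to plausibly be a sign
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)
candidates = [cv2.boundingRect(c) for c in contours
              if cv2.contourArea(c) > 400]
print(f"{len(candidates)} blue sign candidate region(s)")
```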
Sign Type | Signs Detected (%) | Signs Validated (%) |
---|---|---|
STOP | 99.92 | 98.92 |
Circles | 99.92 | 98.92 |
Rectangles | 99.46 | 96.79 |
Triangles | 99.94 | 99.94 |
Total | 99.81 | 99.64 |
Method | Processing Time (s) |
---|---|
N. Romdhane [124] | 0.957 |
J. Greenhalgh [126] | 0.972 |
L. Chen [127] | 0.984 |
Category | Authors | Method | Dataset | Detection Rate (%) | Time (ms) | False Alarms
---|---|---|---|---|---|---
Traffic Signs | F. Zaklouta [129] | HOG/SVM Linear | 14,763 training images | 90.90 | 55.54 | -
 | Ruta [20] | Color Distance Transform (CDT) | 13,287 images with radius between 15 and 25 pixels | 90.30 | - | 9%
 | M. García-Garrido [130] | Hough Transform | Spanish road code | 99 | 30 | 2%
 | M. García-Garrido [121] | Hough Transform | Triangular signs (Belgian road code) | 94.2–97.3 | 20 | -
 | A. Youssef [100] | HOG | GTSDB | 89.71–98.67 | 197–693 | -
 | J. Borrego-Carazo [131] | CtC | 42,413 images (German and Belgian) | 99 | 30 | -
Road Lane | Jungang Guan [132] | Hough Transform | Video (1920 × 1080 pixels) at 14.3 ms/frame on average | 99 | 5.4 | -
 | W. Farage [118] | Hough Transform + Canny | Video (960 × 540 pixels) | 99 | 10 | -
 | A. Gupta [133] | Grassmann’s discriminant analysis | Video (320 × 240 pixels) | 95 | 28–36.47 | 0.83
 | M. Park [134] | HOG | Video (640 × 480 pixels) | 88.19 | 51 | -
 | D. Y. Huang [135] | HOG-SVM | Video (320 × 240 pixels) | 94.08 | N/A | -
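Several of the lane entries above share the same Canny-plus-Hough skeleton. The following OpenCV sketch shows that pipeline end to end; the edge thresholds, lower-half ROI, and slope filter are illustrative assumptions, not any cited author's exact settings.

```python
import cv2
import numpy as np

frame = cv2.imread("highway_frame.jpg")          # hypothetical input frame
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, 50, 150)                 # gradient-based edge map

# Restrict to the lower half of the image, where lane markings appear
h, w = edges.shape
roi = np.zeros_like(edges)
roi[h // 2:, :] = edges[h // 2:, :]

# Probabilistic Hough transform: straight segments in the edge map
lines = cv2.HoughLinesP(roi, rho=1, theta=np.pi / 180, threshold=50,
                        minLineLength=40, maxLineGap=20)
for x1, y1, x2, y2 in (lines.reshape(-1, 4) if lines is not None else []):
    slope = (y2 - y1) / (x2 - x1 + 1e-6)
    if abs(slope) > 0.3:                         # drop near-horizontal clutter
        cv2.line(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)
cv2.imwrite("highway_frame_lanes.jpg", frame)
```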
Method | Technique | Dataset | Precision (%)
---|---|---|---
HOS | K-NN | BTSC | 98.89
 | | GTSRB | 97.84
 | LDA | BTSC | 97.90
 | | GTSRB | 97.47
Method | Dataset | Precision (%) | Recall (%) | AUC (%)
---|---|---|---|---
Log-polar transformation | STS | 94.15 | 93.87 | 95.17
 | GTSDB | 94.03 | 92.98 | 94.22
Category | Authors | Method | Precision (%) | Time (ms)
---|---|---|---|---
Road Lane | Ö. Kumtepe [143] | Viola–Jones | 90 | --
 | R. Anderson [144] | Viola–Jones and YOLO | -- | 14.8
 | W. Song [145] | Hough + CNN | 99.6 | --
Category | Authors | Method | Dataset | Speed Limits (%) | Prohibition (%) | Danger (%) | Precision (%) | Time (ms)
---|---|---|---|---|---|---|---|---
Traffic Signs | J. Zhang [146] | HOS-LDA | GTSRB | 99.93 | 99.44 | 99.13 | 97.84 | 0.64
 | Ruta [147] | HOG + KELM | GTSRB, BTSC, MASTIF | 99.54 | 100 | 98.96 | 98.56 | 3.9
 | M. Liang [148] | HOG + Color | GTSDB | 86.91 | 92 | 86.34 | 89.49 | --
 | L. Abdi [149] | Viola–Jones | GTSRB | 44.87 | 90.81 | 46.26 | 64.66 | --
 | A. Jose [150] | Viola–Jones + CNN | GTSRB | 94.10 | -- | 21.43 | 90 | --
 | A. Gudigar [151] | Log-polar | GTSDB, GTSRB | 98.46 | 98.55 | 98.62 | 98.31 | 0.40
 | A. Ellahyani [152] | Log-polar | GTSRB | -- | -- | -- | 97.96 | 51.35
Dataset | Method | Precision (%) | Time (s)
---|---|---|---
GTSDB | HOG | 95.70 | 0.08
 | CLBP | 96.88 | 1.21
 | Gabor | 94.10 | 2.32
 | HOG + CLBP | 97.03 | 1.49
 | HOG + Gabor | 96.90 | 2.57
 | CLBP + Gabor | 96.40 | 3.54
 | HOG + CLBP + Gabor | 99.10 | 3.68
BTSC | HOG | 94.98 | 0.06
 | CLBP | 95.50 | 1.18
 | Gabor | 93.18 | 2.09
 | HOG + CLBP | 96.58 | 1.27
 | HOG + Gabor | 96.74 | 2.42
 | CLBP + Gabor | 97.04 | 3.40
 | HOG + CLBP + Gabor | 98.30 | 3.50
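The HOG rows above all follow the same extract-then-classify pattern. Below is a minimal sketch of that pipeline, assuming scikit-image for HOG and scikit-learn for the linear SVM; random arrays stand in for sign crops, so the printed score is meaningless and only the pipeline structure is the point.

```python
import numpy as np
from skimage.feature import hog
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
images = rng.random((200, 64, 64))               # stand-in 64x64 gray crops
labels = rng.integers(0, 2, size=200)            # binary toy labels

def hog_features(img):
    # 9 orientation bins over 8x8 cells, block-normalized: a standard setup
    return hog(img, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), feature_vector=True)

X = np.stack([hog_features(im) for im in images])
clf = LinearSVC().fit(X[:150], labels[:150])     # train on first 150 crops
print("toy accuracy:", clf.score(X[150:], labels[150:]))
```

Swapping `hog_features` for a CLBP or Gabor extractor, or concatenating several feature vectors as in the best-performing rows, leaves the rest of the pipeline unchanged.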
Algorithm | Time (relative) | Scenario I | Scenario II |
---|---|---|---|
SIFT | 2.1 t | 100% | 93.75% |
BRISK | 1.4 t | 93.75% | 87.5% |
SURF | t | 93.75% | 81.25% |
Classifier | Method | Precision
---|---|---
SVM (Cubic) | SURF | 82.0%
SVM (Quadratic) | | 81.0%
ANN | | 93.3%
KNN | | 76.0%
Decision Trees | | 71.0%
Ensembles (AdaBoost) | | 68.0%
CNN (Proposed) | CNN Extractor | 97.0%
Model | mAP (%) | FPS | Memory (MB) |
---|---|---|---|
Faster R-CNN Inception ResNet V2 | 95.77 | 2.26 | 18,250.45 |
R-FCN ResNet 101 | 95.15 | 11.70 | 3509.75 |
Faster R-CNN ResNet 101 | 95.08 | 8.11 | 6134.71 |
Faster R-CNN ResNet 50 | 91.52 | 9.61 | 5256.45 |
Faster R-CNN Inception V2 | 90.62 | 17.08 | 2175.21 |
SSD Inception V2 | 66.10 | 42.12 | 284.51 |
SSD MobileNet | 61.64 | 66.03 | 94.70 |
Category | Authors | Dataset | Method | Precision (%) | Recall (%)
---|---|---|---|---|---
Learning methods based on manually extracted features | Abedin et al. [49] | 200 images | SURF + ANN | -- | 97
 | Malik et al. [14] | 172 images | SIFT / SURF / BRISK | -- | 93.75 / 81.25 / 87.50
Deep learning methods | A. Haghighat [159] | GTSDB | CNN architecture | 99.4 | --
 | X. Bangquan [160] | GTSRB and LISA US | CNN architecture | 96.80 | --
Category | Authors | Dataset | Method | Precision (%) | Recall (%)
---|---|---|---|---|---
Learning methods based on manually extracted features | Lingni Ma [161] | Cityscapes | DWT + CNN | 92.80 | --
 | L. Abdi [162] | GTSRB | Haar cascade + CNN | 98.81 | 98.22
Deep learning methods | Z. Qin [163] | TuSimple, CULane | RNN architecture | 96.06 | --
 | Q. Zou [164] | TuSimple | CNN + RNN | 97.30 | 90.50
Method | Key Strength | Weakness | Performance Accuracy (%) | Processing Time (ms) | False Positive Rate (%) | Robustness to Occlusion | Hardware Dependency |
---|---|---|---|---|---|---|---|
Adaptive Image Enhancement [165] | Improves image quality in low light and glare | Limited for extreme lighting changes | 85 | 30 | 10 | Moderate | Moderate |
Color-Based Segmentation [108] | Effectively isolates signs under low-light conditions | Sensitivity to noise and shadows | 80 | 25 | 12 | Low | Low |
Photometric Invariants [166] | Handles illumination variations robustly | Requires high computational resources | 88 | 40 | 8 | High | High |
YOLO-Based Models [167] | High detection accuracy and speed | Performance depends on training data quality | 92 | 15 | 6 | High | High |
Combined Transfer Learning and YOLO [168] | Improved detection in diverse lighting conditions | Needs more training data for optimization | 91 | 20 | 5 | High | High |
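Among the enhancement methods compared above, contrast-limited adaptive histogram equalization (CLAHE) is the preprocessing step most often paired with detectors in the cited nighttime studies. A minimal OpenCV sketch is shown below, equalizing only the luminance channel so sign colors survive; the clip limit, tile size, and file names are illustrative assumptions.

```python
import cv2

frame = cv2.imread("night_scene.jpg")            # hypothetical low-light frame
lab = cv2.cvtColor(frame, cv2.COLOR_BGR2LAB)     # L = lightness, a/b = color
l, a, b = cv2.split(lab)

clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
l_eq = clahe.apply(l)                            # local contrast on L only

enhanced = cv2.cvtColor(cv2.merge((l_eq, a, b)), cv2.COLOR_LAB2BGR)
cv2.imwrite("night_scene_clahe.jpg", enhanced)   # then feed to the detector
```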
Dataset Name | Number of Images | Lighting Scenarios | Resolution Range | Annotation Type | Diversity in Sign Types | Geographical Coverage | Scientific Reference |
---|---|---|---|---|---|---|---|
GTSRB | 51,839 | Normal daylight, limited low-light data | 30 × 30 to 256 × 256 | Bounding boxes | High | Germany | https://benchmark.ini.rub.de/ (accessed on 11 June 2025) |
Cityscapes | 25,000 | Daylight with some shadow variations | 2048 × 1024 | Semantic segmentation | Moderate | Global (urban-focused) | https://www.cityscapes-dataset.com/ (accessed on 11 June 2025) |
TTSDCE | 1800 | Daylight and low light with some urban conditions | 300 × 300 to 1280 × 720 | Bounding boxes | Moderate | China and English-speaking regions | N/A (self-collected)
KITTI | 14,999 | Varied conditions including shadows and glare | Varied (~1242 × 375) | Bounding boxes and lanes | Low | Germany | http://www.cvlibs.net/datasets/kitti/ (accessed on 11 June 2025) |
Mapillary Traffic Sign Dataset | 30,000 | Highly diverse lighting scenarios | Varied (~1920 × 1080) | Bounding boxes, semantic segmentation | High | Global | https://arxiv.org/abs/1909.04422 (accessed on 11 June 2025) |
Optimization Method | Number of Iterations | Impact on Accuracy (%) | Impact on Computational Cost (%) |
---|---|---|---|
Pelican Optimization Algorithm (POA) and Cuckoo Search Algorithm (CSA) [175] | 1000 | 4.5 | −15% |
Bayesian Optimization with Angular Margin Loss [176] | 1200 | 5.2 | −12% |
Bayesian Optimization [177] | 800 | 3.8 | −10% |
ShuffleNet with YOLOv5 tuning [178] | 950 | 4.8 | −18% |
Reparameterized YOLOX-s [179] | 1100 | 5 | −16% |
Crowdsourced Hyperparameter Tuning [180] | Variable | 6 | −20% |
Automated Hyperparameter Search [181] | 1500 | 4.2 | −14% |
Adaptive Hyperparameter Selection [182] | 900 | 4.7 | −13% |
Meta-Learning Hyperparameter Tuning [183] | 1300 | 5.5 | −17% |
Sensitivity Analysis with Hyperparameter Adjustment [184] | 1100 | 4 | −11% |
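Every strategy in the table refines the same loop: sample hyperparameters, train, score, keep the best. The sketch below shows that loop as plain random search over a toy search space; `train_and_score` is a hypothetical stand-in that a real study would replace with an actual training run on a sign dataset.

```python
import random

def train_and_score(lr, batch_size, depth):
    # Hypothetical objective peaking near lr = 1e-3, batch = 64, depth = 5
    return (1.0 - abs(depth - 5) * 0.02
                - abs(batch_size - 64) / 640
                - abs(lr - 1e-3) * 50)

space = {
    "lr": lambda: 10 ** random.uniform(-5, -1),   # log-uniform learning rate
    "batch_size": lambda: random.choice([16, 32, 64, 128]),
    "depth": lambda: random.randint(3, 8),
}

random.seed(0)
best, best_score = None, float("-inf")
for _ in range(100):                              # iteration budget
    cand = {name: sample() for name, sample in space.items()}
    score = train_and_score(**cand)
    if score > best_score:
        best, best_score = cand, score
print(best, round(best_score, 3))
```

Bayesian and meta-learning tuners differ only in how the next candidate is chosen, which is what allows the reduced iteration counts and computational costs reported above.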
Least Squares Method | Accuracy (%) | FP Rate | FN Rate |
---|---|---|---|
Quadratic curve | 90.22 | 0.1259 | 0.0895 |
Cubic curve | 93.22 | 0.0954 | 0.0715 |
Quartic curve | 91.58 | 0.1061 | 0.0845 |
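The curve orders in the table map directly to the degree of a polynomial least-squares fit. The sketch below, assuming NumPy, fits quadratic, cubic, and quartic models x = f(y) to synthetic lane points and compares RMS residuals; the synthetic lane shape and noise level are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
y = np.linspace(0, 100, 60)                       # image rows along the lane
x_true = 0.002 * y**2 - 0.3 * y + 120             # gently curving lane
x = x_true + rng.normal(scale=1.5, size=y.shape)  # noisy marking detections

for degree, name in [(2, "quadratic"), (3, "cubic"), (4, "quartic")]:
    coeffs = np.polyfit(y, x, degree)             # least-squares fit
    rms = np.sqrt(np.mean((np.polyval(coeffs, y) - x) ** 2))
    print(f"{name:9s} RMS residual: {rms:.3f} px")
```

Higher orders fit the sample more tightly but can generalize worse to unseen rows, which is consistent with the table's drop from the cubic to the quartic curve.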
Model | Detection Accuracy (%) | False Positive Rate (%) | Processing Time (ms) | Robustness to Complex Backgrounds | Scalability to Larger Datasets | Hardware Dependency |
---|---|---|---|---|---|---|
YOLO-SG [200] | 95.3 | 4.7 | 25 | High | High | GPU Required |
ESSD [201] | 93.2 | 6.8 | 30 | Moderate | High | GPU Required |
TrafficSignNet [202] | 91.5 | 8.5 | 28 | Moderate | Moderate | Moderate |
Color-Based Segmentation [203] | 85 | 15 | 20 | Low | Low | Low |
HOG-Based Detection | 87 | 12 | 35 | Moderate | Moderate | Low |
Faster R-CNN | 92 | 7 | 40 | High | High | GPU Required |
SSD with FPN | 93.8 | 6.2 | 32 | High | High | GPU Required |
Multi-Scale Attention Network [204] | 91.2 | 7.5 | 31 | High | High | GPU Required |
Resource | Color | Morphology |
---|---|---|
No. of cores | 32 | 1,643,588 (41%) |
FF | 41,624 (39%) | 33,545 (63%) |
LUT | 29,945 (56%) | 48 (22%) |
BRAM | 60 (42%) | 112 (80%) |
Cycles/Pixel | 160 | 26 |
Exec. (ms) | 19.7 (8.7 *) | 41.3 (18.3 *) |
Speed-up | 4.5 (10.3 *) | 9.6 (21.75 *) |
Category | Authors | Board | Datasets | Precision (%)
---|---|---|---|---
Traffic Signs | F. Zaklouta [235] | Raspberry Pi | GTSRB | 90
 | N. Triki [236] | Raspberry Pi | GTSRB | 98.56
 | Yan Han [237] | FPGA | U.S. traffic signs | 95
 | W. Farhat [238] | Zynq FPGA | Original dataset | 97.72
 | R. Hmida [239] | Virtex-5 FPGA | Tunisian and European road signs | 91
Road Lane | S. Malmir [240] | XC7K325T2FFG900C FPGA | Caltech and KITTI datasets | 97.80
 | T. Y. Teo [241] | Raspberry Pi | Malaysia roads | 95.24
 | H. Gajjar [242] | Raspberry Pi Pico and NVIDIA Jetson Nano | Depth camera data captured with the OpenNI library | 98.50
 | J. Suder [243] | Raspberry Pi 4 Model B and NVIDIA Jetson Nano | CULane dataset | 97
 | Y. Guo [244] | NVIDIA TITAN XP GPU | GNSS data | 99.4
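Most of the Raspberry Pi deployments above reduce to loading a compact trained model and running one forward pass per camera frame. The sketch below shows that loop with TensorFlow Lite's Python runtime; the model file, input resolution, and random stand-in frame are hypothetical.

```python
import numpy as np
from tflite_runtime.interpreter import Interpreter

# Load a compact sign classifier exported to TFLite (hypothetical file)
interpreter = Interpreter(model_path="sign_classifier.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

frame = np.random.rand(1, 32, 32, 3).astype(np.float32)  # stand-in camera crop
interpreter.set_tensor(inp["index"], frame)
interpreter.invoke()                                      # one forward pass
scores = interpreter.get_tensor(out["index"])[0]
print("predicted class:", int(np.argmax(scores)))
```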