Search Results (66)

Search Parameters:
Keywords = street lighting detection

19 pages, 1948 KB  
Article
Graph-MambaRoadDet: A Symmetry-Aware Dynamic Graph Framework for Road Damage Detection
by Zichun Tian, Xiaokang Shao and Yuqi Bai
Symmetry 2025, 17(10), 1654; https://doi.org/10.3390/sym17101654 - 5 Oct 2025
Viewed by 432
Abstract
Road-surface distress poses a serious threat to traffic safety and imposes a growing burden on urban maintenance budgets. While modern detectors based on convolutional networks and Vision Transformers achieve strong frame-level performance, they often overlook an essential property of road environments—structural symmetry within road networks and damage patterns. We present Graph-MambaRoadDet (GMRD), a symmetry-aware and lightweight framework that integrates dynamic graph reasoning with state–space modeling for accurate, topology-informed, and real-time road damage detection. Specifically, GMRD employs an EfficientViM-T1 backbone and two DefMamba blocks, whose deformable scanning paths capture sub-pixel crack patterns while preserving geometric symmetry. A superpixel-based graph is constructed by projecting image regions onto OpenStreetMap road segments, encoding both spatial structure and symmetric topological layout. We introduce a Graph-Generating State–Space Model (GG-SSM) that synthesizes sparse sample-specific adjacency in O(M) time, further refined by a fusion module that combines detector self-attention with prior symmetry constraints. A consistency loss promotes smooth predictions across symmetric or adjacent segments. The full INT8 model contains only 1.8 M parameters and 1.5 GFLOPs, sustaining 45 FPS at 7 W on a Jetson Orin Nano—eight times lighter and 1.7× faster than YOLOv8-s. On RDD2022, TD-RD, and RoadBench-100K, GMRD surpasses strong baselines by up to +6.1 mAP50:95 and, on the new RoadGraph-RDD benchmark, achieves +5.3 G-mAP and +0.05 consistency gain. Qualitative results demonstrate robustness under shadows, reflections, back-lighting, and occlusion. By explicitly modeling spatial and topological symmetry, GMRD offers a principled solution for city-scale road infrastructure monitoring under real-time and edge-computing constraints. Full article
(This article belongs to the Section Computer)
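
A minimal sketch of such a consistency term, assuming per-segment class probabilities and an edge list of adjacent or symmetric segment pairs (the abstract does not give the paper's exact formulation), could look like this in PyTorch:

```python
import torch

def consistency_loss(probs: torch.Tensor, edges: torch.Tensor) -> torch.Tensor:
    """Graph smoothness penalty: adjacent or symmetric road segments
    should receive similar damage predictions.

    probs: (N, C) per-segment class probabilities
    edges: (E, 2) long tensor of segment-index pairs from the road graph
    """
    i, j = edges[:, 0], edges[:, 1]
    return ((probs[i] - probs[j]) ** 2).sum(dim=1).mean()

# Added to the detection objective with a small weight, e.g.:
# total_loss = detection_loss + 0.1 * consistency_loss(probs, edges)
```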

35 pages, 2863 KB  
Article
DeepSIGNAL-ITS—Deep Learning Signal Intelligence for Adaptive Traffic Signal Control in Intelligent Transportation Systems
by Mirabela Melinda Medvei, Alin-Viorel Bordei, Ștefania Loredana Niță and Nicolae Țăpuș
Appl. Sci. 2025, 15(17), 9396; https://doi.org/10.3390/app15179396 - 27 Aug 2025
Viewed by 1236
Abstract
Urban traffic congestion remains a major contributor to vehicle emissions and travel inefficiency, prompting the need for adaptive and intelligent traffic management systems. In response, we introduce DeepSIGNAL-ITS (Deep Learning Signal Intelligence for Adaptive Lights in Intelligent Transportation Systems), a unified framework that leverages real-time traffic perception and learning-based control to optimize signal timing and reduce congestion. The system integrates vehicle detection via the YOLOv8 architecture at roadside units (RSUs) and manages signal control using Proximal Policy Optimization (PPO), guided by global traffic indicators such as accumulated vehicle waiting time. Secure communication between RSUs and cloud infrastructure is ensured through Transport Layer Security (TLS)-encrypted data exchange. We validate the framework through extensive simulations in SUMO across diverse urban settings. Simulation results show an average 30.20% reduction in vehicle waiting time at signalized intersections compared to baseline fixed-time configurations derived from OpenStreetMap (OSM). Furthermore, emissions assessed via the HBEFA-based model in SUMO reveal measurable reductions across pollutant categories, underscoring the framework’s dual potential to improve both traffic efficiency and environmental sustainability in simulated urban environments. Full article
(This article belongs to the Section Transportation and Future Mobility)
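
The waiting-time signal that guides the PPO agent can be grounded in SUMO's TraCI API. A minimal sketch, assuming the reward is simply the step-to-step reduction in accumulated waiting time (the abstract does not specify the exact shaping; a PPO implementation such as stable-baselines3 would consume this signal):

```python
import traci  # SUMO's Python API

# traci.start(["sumo", "-c", "intersection.sumocfg"])  # config name is a placeholder

def total_accumulated_wait() -> float:
    """Network-wide sum of per-vehicle accumulated waiting times."""
    return sum(traci.vehicle.getAccumulatedWaitingTime(v)
               for v in traci.vehicle.getIDList())

# One RL environment step: apply the chosen signal phase, advance the
# simulation, and reward the reduction in accumulated waiting time.
prev_wait = total_accumulated_wait()
traci.simulationStep()
reward = prev_wait - total_accumulated_wait()  # positive when congestion eases
```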

19 pages, 8749 KB  
Article
Applying Computer Vision for the Detection and Analysis of the Condition and Operation of Street Lighting
by Sunggat Aiymbay, Ainur Zhumadillayeva, Eric T. Matson, Bakhyt Matkarimov and Bigul Mukhametzhanova
Symmetry 2025, 17(8), 1294; https://doi.org/10.3390/sym17081294 - 11 Aug 2025
Viewed by 838
Abstract
Urban safety critically depends on effective street lighting systems; however, rapidly expanding cities, such as Astana, face considerable challenges in maintaining these systems due to the inefficiency, high labor intensity, and error-prone nature of conventional manual inspection methods. This necessitates an urgent shift toward automated, accurate, and scalable monitoring systems capable of quickly identifying malfunctioning streetlights. In response, this study introduces an advanced computer vision-based approach for automated detection and analysis of street lighting conditions. Leveraging high-resolution dashcam footage collected under diverse nighttime weather conditions, we constructed a robust dataset of 4260 carefully annotated frames highlighting streetlight poles and lamps. To significantly enhance detection accuracy, we propose the novel YOLO-CSE model, which integrates a Channel Squeeze-and-Excitation (CSE) module into the YOLO (You Only Look Once) detection architecture. The CSE module leverages the inherent symmetry of streetlight structures, such as the bilateral symmetry of poles and the radial symmetry of lamps, to dynamically recalibrate feature channels, emphasizing spatially repetitive and geometrically uniform patterns. By modifying the bottleneck layer through the addition of an extra convolutional layer and the SE block, the model learns richer, more discriminative feature representations, particularly for small or distant lamps under partial occlusion or low illumination. A comprehensive comparative analysis demonstrates that YOLO-CSE outperforms conventional YOLO variants and state-of-the-art models, achieving a mean average precision (mAP) of 0.798, recall of 0.794, precision of 0.824, and an F1 score of 0.808. The model’s symmetry-aware design enhances robustness to urban clutter (e.g., asymmetric noise from headlights or signage) while maintaining real-time efficiency. These results validate YOLO-CSE as a scalable solution for smart cities, where symmetry principles bridge geometric priors with computational efficiency in infrastructure monitoring. Full article
(This article belongs to the Special Issue Symmetry in Advancing Digital Signal and Image Processing)
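
The squeeze-and-excitation mechanism at the heart of the CSE module is standard: global average pooling squeezes each channel to a scalar, and a small gating network re-weights the channels. A conventional PyTorch SE block is sketched below; the paper's CSE variant additionally adds an extra convolution in the YOLO bottleneck, which is not reproduced here:

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Channel squeeze-and-excitation: pool -> bottleneck MLP -> sigmoid gate."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        weights = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * weights  # re-weight channels, preserving spatial layout
```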

21 pages, 4763 KB  
Article
AI-Based Counting of Traffic Participants: An Explorative Study Using Public Webcams
by Anton Galich, Dorothee Stiller, Michael Wurm and Hannes Taubenböck
Future Transp. 2025, 5(3), 87; https://doi.org/10.3390/futuretransp5030087 - 7 Jul 2025
Viewed by 1068
Abstract
This paper explores the potential of public webcams as a source of data for transport research. Eight different open-source object detection models were tested on three publicly accessible webcams located in the city of Brunswick, Germany. Fifteen images at different lighting conditions (bright light, dusk, and night) were selected from each webcam and manually labelled with regard to the following six categories: cars, persons, bicycles, trucks, trams, and buses. The manual counts in these six categories were then compared to the number of counts found by the object detection models. The results show that public webcams constitute a useful source of data for transport research. In bright light conditions, applying out-of-the-box object detection models can yield reliable counts of cars or persons in public squares, streets, and junctions. However, the detection of cars and persons was not reliably accurate at dusk or night. Thus, different object detection models might have to be used to generate accurate counts in different lighting conditions. Furthermore, the object detection models worked less well for identifying trams, buses, bicycles, and trucks. Hence fine-tuning and adapting the models to the specific webcams might be needed to achieve satisfactory results for these four types of traffic participants. Full article
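
The out-of-the-box counting workflow can be approximated with any off-the-shelf detector. A sketch using the ultralytics API follows; the model file and confidence threshold are placeholder choices, and note that COCO has no "tram" class, so "train" is the nearest label:

```python
from collections import Counter
from ultralytics import YOLO

model = YOLO("yolov8x.pt")  # one of several off-the-shelf detectors one might test
CATEGORIES = {"car", "person", "bicycle", "truck", "train", "bus"}

def count_objects(image_path: str, conf: float = 0.25) -> Counter:
    """Count detections per category in a single webcam frame."""
    result = model(image_path, conf=conf)[0]
    names = (result.names[int(c)] for c in result.boxes.cls)
    return Counter(n for n in names if n in CATEGORIES)

# Compare against manual labels, e.g. Counter({"car": 12, "person": 3})
print(count_objects("webcam_frame.jpg"))
```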

17 pages, 5935 KB  
Technical Note
Merging Various Types of Remote Sensing Data and Social Participation GIS with AI to Map the Objects Affected by Light Occlusion
by Yen-Chun Lin, Teng-To Yu, Yu-En Yang, Jo-Chi Lin, Guang-Wen Lien and Shyh-Chin Lan
Remote Sens. 2025, 17(13), 2131; https://doi.org/10.3390/rs17132131 - 21 Jun 2025
Viewed by 616
Abstract
This study proposes a practical integration of an existing deep learning model (YOLOv9-E) and social participation GIS using multi-source remote sensing data to identify asbestos-containing materials located on the side of a building affected by light occlusions. These objects are often undetectable by traditional vertical or oblique photogrammetry, yet their precise localization is essential for effective removal planning. By leveraging the mobility and responsiveness of citizen investigators, we conducted fine-grained surveys in community spaces that were often inaccessible using conventional methods. The YOLOv9-E model demonstrated robustness on mobile-captured images, enriched with geolocation and orientation metadata, which improved the association between detections and specific buildings. By comparing results from Google Street View and field-based social imagery, we highlight the complementary strengths of both sources. Rather than introducing new algorithms, this study focuses on an applied integration framework to improve detection coverage, spatial precision, and participatory monitoring for environmental risk management. The dataset comprised 20,889 images, with 98% being used for training and validation and 2% being used for independent testing. The YOLOv9-E model achieved an mAP50 of 0.81 and an F1-score of 0.85 on the test set. Full article

24 pages, 12563 KB  
Article
Analyzing Gaze During Driving: Should Eye Tracking Be Used to Design Automotive Lighting Functions?
by Korbinian Kunst, David Hoffmann, Anıl Erkan, Karina Lazarova and Tran Quoc Khanh
J. Eye Mov. Res. 2025, 18(2), 13; https://doi.org/10.3390/jemr18020013 - 10 Apr 2025
Viewed by 1063
Abstract
In this work, an experiment was designed in which a defined route consisting of country roads, highways, and urban roads was driven by 20 subjects during the day and at night. The test vehicle was equipped with GPS and a camera, and the subject wore head-mounted eye-tracking glasses to record gaze. Gaze distributions for country roads, highways, urban roads, and specific urban roads were then calculated and compared. The day/night comparisons showed that the horizontal fixation distribution of the subjects was wider during the day than at night over the whole test distance. When the distributions were divided into urban roads, country roads, and motorways, the difference was also seen in each road environment. For the vertical distribution, no clear differences between day and night can be seen for country roads or urban roads. In the case of the highway, the vertical dispersion is significantly lower, so the gaze is more focused. On highways and urban roads there is a tendency for the gaze to be lowered. The differentiation between a residential road and a main road in the city made it clear that gaze behavior differs significantly depending on the urban area. For example, the residential road led to a broader gaze behavior, as the sides of the street were scanned much more often in order to detect potential hazards lurking between parked cars at an early stage. This paper highlights the contradictory results of eye-tracking research and shows that it is not advisable to define a holy grail of gaze distribution for all environments. Gaze is highly situational and context-dependent, and generalized gaze distributions should not be used to design lighting functions. The research highlights the importance of an adaptive light distribution that adapts to the traffic situation and the environment, always providing good visibility for the driver and allowing a natural gaze behavior. Full article

15 pages, 2935 KB  
Article
Evaluation of Pothole Detection Performance Using Deep Learning Models Under Low-Light Conditions
by Yuliia Zanevych, Vasyl Yovbak, Oleh Basystiuk, Nataliya Shakhovska, Solomiia Fedushko and Sotirios Argyroudis
Sustainability 2024, 16(24), 10964; https://doi.org/10.3390/su162410964 - 13 Dec 2024
Cited by 4 | Viewed by 5776
Abstract
In our interconnected society, prioritizing the resilience and sustainability of road infrastructure has never been more critical, especially in light of growing environmental and climatic challenges. By harnessing data from various sources, we can proactively enhance our ability to detect road damage. This approach will enable us to make well-informed decisions for timely maintenance and implement effective mitigation strategies, ultimately leading to safer and more durable road systems. This paper presents a new method for detecting road potholes during low-light conditions, particularly at night when influenced by street and traffic lighting. We examined and assessed various advanced machine learning and computer vision models, placing a strong emphasis on deep learning algorithms such as YOLO, as well as the combination of Grad-CAM++ with feature pyramid networks for feature extraction. Our approach utilized innovative data augmentation techniques, which enhanced the diversity and robustness of the training dataset, ultimately leading to significant improvements in model performance. The study results reveal that the proposed YOLOv11+FPN+Grad-CAM model achieved a mean average precision (mAP) score of 0.72 for the 50–95 IoU thresholds, outperforming other tested models, including YOLOv8 Medium with a score of 0.611. The proposed model also demonstrated notable improvements in key metrics, with mAP50 and mAP75 values of 0.88 and 0.791, reflecting enhancements of 1.5% and 5.7%, respectively, compared to YOLOv11. These results highlight the model’s superior performance in detecting potholes under low-light conditions. By leveraging a specialized dataset for nighttime scenarios, the approach offers significant advancements in hazard detection, paving the way for more effective and timely driver alerts and ultimately contributing to improved road safety. This paper makes several key contributions, including implementing advanced data augmentation methods and a thorough comparative analysis of various YOLO-based models. Future plans involve developing a real-time driver warning application, introducing enhanced evaluation metrics, and demonstrating the model’s adaptability in diverse environmental conditions, such as snow and rain. The contributions significantly advance the field of road maintenance and safety by offering a robust and scalable solution for pothole detection, particularly in developing countries. Full article
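
The abstract names data augmentation as a key ingredient without detailing the recipe. A hypothetical low-light pipeline with albumentations — darkening, gamma shifts, sensor noise, and motion blur, all placeholder choices rather than the paper's — might look like:

```python
import albumentations as A

night_aug = A.Compose(
    [
        A.RandomBrightnessContrast(brightness_limit=(-0.5, 0.0), p=0.7),  # darken only
        A.RandomGamma(gamma_limit=(60, 120), p=0.5),
        A.GaussNoise(p=0.3),                # low-light sensor noise
        A.MotionBlur(blur_limit=5, p=0.2),
    ],
    bbox_params=A.BboxParams(format="yolo", label_fields=["class_labels"]),
)
# augmented = night_aug(image=img, bboxes=boxes, class_labels=labels)
```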

27 pages, 7600 KB  
Article
Spiking Neural Networks for Real-Time Pedestrian Street-Crossing Detection Using Dynamic Vision Sensors in Simulated Adverse Weather Conditions
by Mustafa Sakhai, Szymon Mazurek, Jakub Caputa, Jan K. Argasiński and Maciej Wielgosz
Electronics 2024, 13(21), 4280; https://doi.org/10.3390/electronics13214280 - 31 Oct 2024
Cited by 2 | Viewed by 2908
Abstract
This study explores the integration of Spiking Neural Networks (SNNs) with Dynamic Vision Sensors (DVSs) to enhance pedestrian street-crossing detection in adverse weather conditions—a critical challenge for autonomous vehicle systems. Utilizing the high temporal resolution and low latency of DVSs, which excel in dynamic, low-light, and high-contrast environments, this research evaluates the effectiveness of SNNs compared to traditional Convolutional Neural Networks (CNNs). The experimental setup involved a custom dataset from the CARLA simulator, designed to mimic real-world variability, including rain, fog, and varying lighting conditions. Additionally, the JAAD dataset was adopted to allow for evaluations using real-world data. The SNN models were optimized using Temporally Effective Batch Normalization (TEBN) and benchmarked against well-established deep learning models, concerning their accuracy, computational efficiency, and energy efficiency in complex weather conditions. This study also conducted a comprehensive analysis of energy consumption, highlighting the significant reduction in energy usage achieved by SNNs when processing DVS data. The results indicate that SNNs, when integrated with DVSs, not only reduce computational overhead but also dramatically lower energy consumption, making them a highly efficient choice for real-time applications in autonomous vehicles (AVs). Full article
(This article belongs to the Special Issue Autonomous and Connected Vehicles)
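
The reported energy savings rest on event-driven, sparse computation. The paper's models use TEBN, which is not reproduced here; the sketch below only illustrates the leaky integrate-and-fire dynamics by which a spiking neuron processes a DVS-style binary event stream one event at a time:

```python
import numpy as np

def lif_neuron(events: np.ndarray, beta: float = 0.9, threshold: float = 1.0):
    """Leaky integrate-and-fire over a binary event stream of shape (T,)."""
    v, out = 0.0, []
    for s in events:
        v = beta * v + s          # leaky membrane integration
        fired = v >= threshold
        out.append(int(fired))
        if fired:
            v -= threshold        # soft reset after a spike
    return np.array(out)

print(lif_neuron(np.array([1, 0, 1, 1, 0, 0, 1])))
```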

37 pages, 5927 KB  
Article
Object and Pedestrian Detection on Road in Foggy Weather Conditions by Hyperparameterized YOLOv8 Model
by Ahmad Esmaeil Abbasi, Agostino Marcello Mangini and Maria Pia Fanti
Electronics 2024, 13(18), 3661; https://doi.org/10.3390/electronics13183661 - 14 Sep 2024
Cited by 3 | Viewed by 3993
Abstract
Connected cooperative and automated (CAM) vehicles and self-driving cars need to achieve robust and accurate environment understanding. With this aim, they are usually equipped with sensors and adopt multiple sensing strategies, also fused among them to exploit their complementary properties. In recent years, artificial intelligence such as machine learning- and deep learning-based approaches have been applied for object and pedestrian detection and prediction reliability quantification. This paper proposes a procedure based on the YOLOv8 (You Only Look Once) method to discover objects on the roads such as cars, traffic lights, pedestrians and street signs in foggy weather conditions. In particular, YOLOv8 is a recent release of YOLO, a popular neural network model used for object detection and image classification. The obtained model is applied to a dataset including about 4000 foggy road images and the object detection accuracy is improved by changing hyperparameters such as epochs, batch size and augmentation methods. To achieve good accuracy and few errors in detecting objects in the images, the hyperparameters are optimized by four different methods, and different metrics are considered, namely accuracy factor, precision, recall, precision–recall and loss. Full article
(This article belongs to the Special Issue Applications and Challenges of Image Processing in Smart Environment)
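
The hyperparameter tuning described here maps directly onto the ultralytics training API. A coarse grid-search sketch follows; the dataset YAML and grid values are placeholders, and the paper's four distinct optimization methods are not reproduced:

```python
from itertools import product
from ultralytics import YOLO

grid = {"epochs": [50, 100], "batch": [8, 16], "mosaic": [0.5, 1.0]}
best_cfg, best_map50 = None, -1.0
for epochs, batch, mosaic in product(*grid.values()):
    model = YOLO("yolov8n.pt")
    model.train(data="foggy_roads.yaml", imgsz=640,
                epochs=epochs, batch=batch, mosaic=mosaic)
    map50 = model.val().box.map50  # precision/recall live in the same metrics object
    if map50 > best_map50:
        best_cfg, best_map50 = (epochs, batch, mosaic), map50
print("best:", best_cfg, best_map50)
```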

23 pages, 8631 KB  
Article
Analysis of Road Safety Perception and Influencing Factors in a Complex Urban Environment—Taking Chaoyang District, Beijing, as an Example
by Xinyu Hou and Peng Chen
ISPRS Int. J. Geo-Inf. 2024, 13(8), 272; https://doi.org/10.3390/ijgi13080272 - 31 Jul 2024
Cited by 6 | Viewed by 1961
Abstract
Measuring human perception of environmental safety and quantifying the street view elements that affect human perception of environmental safety are of great significance for improving the urban environment and residents’ safety perception. However, domestic large-scale quantitative research on the safety perception of Chinese local cities needs to be deepened. Therefore, this paper chooses Chaoyang District in Beijing as the research area. Firstly, the network safety perception distribution of Chaoyang District is calculated and presented through the CNN model trained based on the perception dataset constructed by Chinese local cities. Then, the street view elements are extracted from the street view images using image semantic segmentation and target detection technology. Finally, the street view elements that affect the road safety perception are identified and analyzed based on LightGBM and SHAP interpretation framework. The results show the following: (1) the overall safety perception level of Chaoyang District in Beijing is high; (2) the number of motor vehicles and the proportion of the area of roads, skies, and sidewalks are the four factors that have the greatest impact on environmental safety perception; (3) there is an interaction between different street view elements on safety perception, and the proportion and number of street view elements have interaction on safety perception; (4) in the sections with the lowest, moderate, and highest levels of safety perception, the influence of street view elements on safety perception is inconsistent. Finally, this paper summarizes the results and points out the shortcomings of the research. Full article
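
The LightGBM-plus-SHAP step follows a standard pattern: fit a gradient-boosted model from street-view element features to the CNN's perception score, then attribute predictions with a tree explainer. A sketch with synthetic stand-in data (feature names are illustrative, not the paper's):

```python
import numpy as np
import pandas as pd
import lightgbm as lgb
import shap

# Synthetic stand-in data: real features would be street-view element
# statistics per road segment; y is the CNN's safety-perception score.
rng = np.random.default_rng(0)
X = pd.DataFrame(rng.random((500, 4)),
                 columns=["vehicles", "road_ratio", "sky_ratio", "sidewalk_ratio"])
y = 0.6 * X["sky_ratio"] - 0.4 * X["vehicles"] + rng.normal(0, 0.05, 500)

model = lgb.LGBMRegressor(n_estimators=300, learning_rate=0.05)
model.fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)
shap.summary_plot(shap_values, X)  # ranks elements by impact on perceived safety
```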

15 pages, 2633 KB  
Article
Energy Efficiency in Public Lighting Systems Friendly to the Environment and Protected Areas
by Carlos Velásquez, Francisco Espín, María Ángeles Castro and Francisco Rodríguez
Sustainability 2024, 16(12), 5113; https://doi.org/10.3390/su16125113 - 16 Jun 2024
Cited by 13 | Viewed by 4590
Abstract
Solid-state lighting technology, such as LED devices, is critical to improving energy efficiency in street lighting systems. In Ecuador, government policies have established the obligation to exclusively use LED systems starting in 2023, except in special projects. Ecuador, known for its vast biodiversity, protects its national parks, which are rich in flora, fauna and natural resources, through international institutions and agreements such as UNESCO, CBD and CITES. Although reducing electrical consumption usually measures energy efficiency, this article goes further. It considers aspects such as the correlated color temperature in the lighting design of protected areas, light pollution and the decrease in energy quality due to harmonic distortion. Measurements of the electromagnetic spectrum of the light sources were made in an area in the Galápagos National Park of Ecuador, revealing highly correlated color temperatures that can affect ecosystem cycles. In addition, the investigation detected levels of light pollution increasing the night sky brightness and a notable presence of harmonic distortion in the electrical grid. Using simulations to predict the behavior of these variables offers an efficient option to help preserve protected environments and the quality of energy supply while promoting energy savings. Full article
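
Total harmonic distortion, one of the quantities assessed here, is the ratio of the harmonic magnitudes to the fundamental, THD = sqrt(sum_k V_k^2) / V_1. A sketch of that computation for a sampled waveform, assuming Ecuador's 60 Hz grid:

```python
import numpy as np

def thd(signal: np.ndarray, fs: float, f0: float = 60.0, n_harmonics: int = 19) -> float:
    """Total harmonic distortion of a sampled voltage/current waveform."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)

    def mag(f):  # magnitude of the bin nearest to frequency f
        return spectrum[np.argmin(np.abs(freqs - f))]

    fundamental = mag(f0)
    harmonics = [mag(k * f0) for k in range(2, n_harmonics + 1)]
    return float(np.sqrt(np.sum(np.square(harmonics))) / fundamental)

t = np.arange(0, 0.2, 1 / 10_000)
wave = np.sin(2 * np.pi * 60 * t) + 0.08 * np.sin(2 * np.pi * 180 * t)
print(thd(wave, fs=10_000))  # ~0.08: an 8% third harmonic
```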

24 pages, 9047 KB  
Article
Integrated Investigations to Study the Materials and Degradation Issues of the Urban Mural Painting Ama Il Tuo Sogno by Jorit Agoch
by Giulia Germinario, Andrea Luigia Logiodice, Paola Mezzadri, Giorgia Di Fusco, Roberto Ciabattoni, Davide Melica and Angela Calia
Sustainability 2024, 16(12), 5069; https://doi.org/10.3390/su16125069 - 14 Jun 2024
Cited by 5 | Viewed by 2148
Abstract
This paper focuses on an integrated approach to study the materials and the degradation issues in the urban mural painting Ama Il Tuo Sogno, painted by the famous street artist Jorit Agoch in Matera (Italy). The study was conducted in the framework of a conservation project, aiming to contrast a progressive decay affecting the artifact that started a few months after its creation. Multi-analytical techniques were used to investigate the stratigraphy and chemical composition of the pictorial film within a low-impact analytical protocol for sustainable diagnostics. They included polarized light microscopy in UV and VIS reflected light, FTIR spectroscopy, Py-GC-HRAMS, and SEM-EDS. The mineralogical–petrographic composition of the mortar employed in the pictorial support was also studied with optical microscopy of thin sections and X-ray diffractometry. To know the mechanism underlying the degradation, IR thermography was performed in situ to establish the waterways and the distribution of the humidity in the mural painting. In addition, ion chromatography and X-ray diffractometry were used to identify and quantify the soluble salts and to understand their sources. The overall results allowed us to determine the chemical composition of the binder and pigments within the pictorial layers, the mineralogical–petrographic characteristics of the mortar of the support, and the execution technique of the painting. They also highlighted a correlation between the presence of humidity in the painted mural and the salt damage. The mineralogical phases were detected in the mural materials by XRD, and the results of ion chromatographic analyses suggested a supply of soluble salts mainly from the mortar of the support. Finally, the study provided basic knowledge for planning appropriate sustainable conservation measures. Full article

27 pages, 10879 KB  
Article
Fusion of Google Street View, LiDAR, and Orthophoto Classifications Using Ranking Classes Based on F1 Score for Building Land-Use Type Detection
by Nafiseh Ghasemian Sorboni, Jinfei Wang and Mohammad Reza Najafi
Remote Sens. 2024, 16(11), 2011; https://doi.org/10.3390/rs16112011 - 3 Jun 2024
Cited by 8 | Viewed by 2420
Abstract
Building land-use type classification using earth observation data is essential for urban planning and emergency management. Municipalities usually do not hold a detailed record of building land-use types in their jurisdictions, and there is a significant need for a detailed classification of this data. Earth observation data can be beneficial in this regard, because of their availability and requiring a reduced amount of fieldwork. In this work, we imported Google Street View (GSV), light detection and ranging-derived (LiDAR-derived) features, and orthophoto images to deep learning (DL) models. The DL models were trained on building land-use type data for the Greater Toronto Area (GTA). The data was created using building land-use type labels from OpenStreetMap (OSM) and web scraping. Then, we classified buildings into apartment, house, industrial, institutional, mixed residential/commercial, office building, retail, and other. Three DL-derived classification maps from GSV, LiDAR, and orthophoto images were combined at the decision level using the proposed ranking classes based on the F1 score method. For comparison, the classifiers were combined using fuzzy fusion as well. The results of two independent case studies, Vancouver and Fort Worth, showed that the proposed fusion method could achieve an overall accuracy of 75%, up to 8% higher than the previous study using CNNs and the same ground truth data. Also, the results showed that while mixed residential/commercial buildings were correctly detected using GSV images, the DL models confused many houses in the GTA with mixed residential/commercial because of their similar appearance in GSV images. Full article
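
The abstract does not spell out the ranking rule, so the following is only one plausible reading of "ranking classes based on F1 score": each source's vote for its predicted class is weighted by that source's per-class validation F1, and the highest-scoring class wins:

```python
def fuse(preds: dict, f1: dict) -> int:
    """Decision-level fusion. preds: {source: predicted class index};
    f1: {source: list of per-class validation F1 scores}."""
    scores = {}
    for src, c in preds.items():
        scores[c] = scores.get(c, 0.0) + f1[src][c]
    return max(scores, key=scores.get)

# Toy example with two classes and the three sources used in the paper:
f1 = {"gsv": [0.80, 0.60], "lidar": [0.50, 0.70], "ortho": [0.60, 0.65]}
print(fuse({"gsv": 0, "lidar": 1, "ortho": 1}, f1))  # -> 1 (0.70 + 0.65 > 0.80)
```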

20 pages, 32718 KB  
Article
Characterizing Dust and Biomass Burning Events from Sentinel-2 Imagery
by Simone Lolli, Luciano Alparone, Alberto Arienzo and Andrea Garzelli
Atmosphere 2024, 15(6), 672; https://doi.org/10.3390/atmos15060672 - 31 May 2024
Cited by 2 | Viewed by 1932
Abstract
The detection and evaluation of biomass burning and dust events are critical for understanding their impact on air quality, climate, and human health, particularly in the Mediterranean region. This research pioneers an innovative methodology that uses Sentinel-2 multispectral (MS) imagery to meticulously pinpoint and analyze long-transport dust outbreaks and biomass burning phenomena, originating both locally and transported from remote areas. We developed the dust/biomass burning (DBB) composite normalized differential index, a tool that identifies clear, dusty, and biomass burning scenarios in the selected region. The DBB index jointly employs specific Sentinel-2 bands: B2-B3-B4 for visible light analysis, and B11 and B12 for short-wave infrared (SWIR), exploiting the specificity of each wavelength to assess the presence of different aerosols. A key feature of the DBB index is its normalization by the surface reflectance of the scene, which ensures independence from the underlying texture, such as streets and buildings, for urban areas. The differentiation involves the comparison of the top-of-atmosphere (TOA) reflectance values from aerosol events with those from clear-sky reference images, thereby constituting a sort of calibration. The index is tailored for urban settings, where Sentinel-2 imagery provides a decametric spatial resolution and revisit time of 5 days. The average values of DBB achieve a 96% match with the coarse-mode aerosol optical depths (AOD), measured by a local station of the AERONET network of sun-photometers. In future studies, the map of DBB could be integrated with that achieved from Sentinel-3 images, which offer similar spectral bands, albeit with much less fine spatial resolution, yet benefit from daily coverage. Full article
(This article belongs to the Special Issue Haze and Related Aerosol Air Pollution in Remote and Urban Areas)
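
The exact DBB formula is not given in the abstract; purely as an illustration of a composite normalized differential index, one could contrast visible and SWIR reflectance anomalies against a clear-sky reference and normalize by the scene's reflectance:

```python
import numpy as np

def dbb_index(event: dict, clear: dict) -> np.ndarray:
    """Illustrative sketch only, not the paper's formula.
    event/clear: dicts of TOA reflectance arrays keyed 'B2'..'B12'."""
    vis = np.mean([event[b] - clear[b] for b in ("B2", "B3", "B4")], axis=0)
    swir = np.mean([event[b] - clear[b] for b in ("B11", "B12")], axis=0)
    base = np.mean([clear[b] for b in ("B2", "B3", "B4", "B11", "B12")], axis=0)
    return (vis - swir) / np.maximum(base, 1e-6)  # texture-independent via normalization
```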

13 pages, 5686 KB  
Article
Traffic Sign Recognition Using Multi-Task Deep Learning for Self-Driving Vehicles
by Khaldaa Alawaji, Ramdane Hedjar and Mansour Zuair
Sensors 2024, 24(11), 3282; https://doi.org/10.3390/s24113282 - 21 May 2024
Cited by 9 | Viewed by 5151
Abstract
Over the coming years, the advancement of driverless transport systems for people and goods that are designed to be used on fixed routes will revolutionize the transportation system. Therefore, for a safe transportation system, detecting and recognizing traffic signals based on computer vision has become increasingly important. Deep learning approaches, particularly convolutional neural networks, have shown exceptional performance in various computer vision applications. The goal of this research is to precisely detect and recognize traffic signs that are present on the streets using computer vision and deep learning techniques. Previous work has focused on symbol-based traffic signals, where popular single-task learning models have been trained and tested. Therefore, several comparisons have been conducted to select accurate single-task learning models. For further improvement, these models are employed in a multi-task learning approach. Indeed, multi-task learning algorithms are built by sharing the convolutional layer parameters between the different tasks. Hence, for the multi-task learning approach, different experiments have been carried out using pre-trained architectures like, for instance, InceptionResNetV2 and DenseNet201. A range of traffic signs and traffic lights are employed to validate the designed model. An accuracy of 99.07% is achieved when the entire network has been trained. To further enhance the accuracy of the model for traffic signs obtained from the street, a region of interest module is added to the multi-task learning module to accurately extract the traffic signs available in the image. To check the effectiveness of the adopted methodology, the designed model has been successfully tested in real-time on a few Riyadh highways. Full article
(This article belongs to the Special Issue Advances in Intelligent Transportation Systems Based on Sensor Fusion)
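
Hard parameter sharing of this kind — one pretrained backbone, one head per task — is straightforward to express in Keras. A sketch with InceptionResNetV2, one of the architectures named above (the class counts are placeholders, not the paper's):

```python
import tensorflow as tf

# Shared convolutional backbone; both task heads train through it.
backbone = tf.keras.applications.InceptionResNetV2(
    include_top=False, weights="imagenet", input_shape=(224, 224, 3), pooling="avg")
x = backbone.output
sign_head = tf.keras.layers.Dense(43, activation="softmax", name="sign")(x)
light_head = tf.keras.layers.Dense(4, activation="softmax", name="light")(x)

model = tf.keras.Model(backbone.input, [sign_head, light_head])
model.compile(optimizer="adam",
              loss={"sign": "categorical_crossentropy",
                    "light": "categorical_crossentropy"})
model.summary()
```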
