Search Results (39)

Search Parameters:
Keywords = DLS LiDAR

27 pages, 12509 KB  
Article
Assessing the Impact of LiDAR Density and Input Features on Forest Canopy Height Estimation Through XGBoost, CNN, and CNN + Transformer Approaches
by Sergio Sierra, Rubén Ramo, Marc Padilla and Adolfo Cobo
Remote Sens. 2026, 18(4), 584; https://doi.org/10.3390/rs18040584 - 13 Feb 2026
Cited by 1 | Viewed by 830
Abstract
Accurate estimation of the forest Canopy Height Model (CHM) is essential for understanding ecosystem structure, biomass, and carbon dynamics. Machine learning (ML) and deep learning (DL) models have shown strong potential when trained using Light Detection and Ranging (LiDAR)-derived canopy heights as reference data. However, the influence of LiDAR point density on model performance remains poorly understood. This study systematically evaluates three modeling frameworks, Extreme Gradient Boosting (XGBoost), Convolutional Neural Networks (CNNs), and hybrid CNN–Transformer architectures, for predicting CHM across multiple regions in Spain, using LiDAR data with varying point densities (1–5 points/m²) from the Plan Nacional de Ortofotografía Aérea (PNOA). Models were trained using multiple sources of remote sensing data: Sentinel-2 spectral bands (visible and near-infrared bands at 10 m), Sentinel-1 VH backscatter, and topographic variables (slope and elevation). Model performance was assessed in terms of predictive accuracy, generalization capacity, and sensitivity to LiDAR point density. Results show that CNN-based approaches outperform classical ML across all datasets, and that integrating Transformer modules further improves performance, particularly in areas with higher LiDAR density. The best results were achieved on the dataset with the highest point density (5 points/m²), where the CNN + Transformer model reached a Mean Absolute Error (MAE) of 3.37 ± 0.1 m and an R² of 0.66 ± 0.02. Lower-density LiDAR datasets degraded the accuracy of all methods. These findings highlight that both model architecture and LiDAR point density critically influence CHM estimation. The results provide practical guidance for designing large-scale structural mapping workflows in regions with heterogeneous LiDAR coverage, helping to support forest management, carbon accounting, and environmental decision-making. Full article
(This article belongs to the Special Issue Multimodal Remote Sensing Data Fusion, Analysis and Application)
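The MAE and R² figures quoted above are standard regression metrics. As a generic sketch (not the authors' code; the height values below are made up for illustration), they can be computed as:

```python
import numpy as np

def mae(y_true, y_pred):
    # Mean Absolute Error, in the same units as the canopy heights (metres)
    return np.mean(np.abs(y_true - y_pred))

def r2(y_true, y_pred):
    # Coefficient of determination: 1 - SS_res / SS_tot
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return 1.0 - ss_res / ss_tot

# Hypothetical LiDAR reference heights vs. model predictions (metres)
heights_lidar = np.array([12.0, 18.5, 7.2, 25.1, 14.8])
heights_pred  = np.array([11.2, 19.9, 8.0, 23.5, 15.6])
print(mae(heights_lidar, heights_pred))  # 1.08
print(r2(heights_lidar, heights_pred))
```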

33 pages, 7494 KB  
Article
AI-Driven Wetland Mapping Across Diverse Natural Regions of Alberta, Canada, Using Combined Airborne and Satellite Remote Sensing Data
by Michael A. Merchant, Joshua Evans, Rebecca Edwards, Lyle Boychuk, John Simms, Jennifer N. Hird, Jenet Dooley, Thuy Doan, Sydney Toni, Danielle Cobbaert, Amanda Cooper, Craig Mahoney, Kristyn Mayner, Mina Nasr, Nicole Skakun, Marsha Trites-Russell and Cynthia N. McClain
Remote Sens. 2026, 18(3), 507; https://doi.org/10.3390/rs18030507 - 4 Feb 2026
Viewed by 1788
Abstract
This study evaluates the performance of artificial intelligence (AI) technologies for wetland classification in the province of Alberta, Canada, using integrated remote sensing inputs, including airborne light detection and ranging (LiDAR), orthophotography, and multi-sensor satellite imagery (Sentinel-1, Sentinel-2, PlanetScope). Our primary objective was to assess whether AI-driven modelling approaches, specifically machine learning (ML) and deep learning (DL), can meet Alberta’s provincial wetland mapping standards. We hypothesized that integrating high-resolution LiDAR with multi-seasonal optical and radar data composites into advanced AI algorithms would achieve the required classification accuracy, detail, and minimum mapping unit targets. We tested several methodologies in four ecologically distinct pilot areas representing Alberta’s Boreal, Grassland, and Parkland Natural Regions. AI models included ensemble ML using Extreme Gradient Boosting (XGBoost) and Random Forest, and a DL U-Net convolutional neural network (CNN). AI models were trained on expert-labelled photoplots and validated using in situ field surveys. Our findings demonstrate that both ML and DL models met and, in several cases, exceeded the provincial mapping standards, with validation overall accuracies surpassing 70% (form), 80% (class), and 90% (wetland–upland). U-Net CNN models generally produced the highest overall accuracies and most precise wetland extent delineation, but XGBoost offered finer detail and granularity for detailed mapping of rare wetland forms. Integrating LiDAR data and derivatives further enhanced model performance, improving accuracy by as much as 13%. Based on these outcomes, we provide a set of recommendations for scaling up these approaches, focusing on model selection, LiDAR imagery integration, and the continued value of field surveys to support the operational scaling of AI-driven classification approaches for wetland inventory updates across Alberta’s diverse landscapes. However, key challenges remain in scaling up this approach due to the cost of acquiring high-resolution LiDAR and satellite imagery. Full article
(This article belongs to the Special Issue Application of Remote Sensing Technology in Wetland Ecology)
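The overall-accuracy thresholds above (wetland–upland, class, form) are computed from a confusion matrix. A minimal sketch with a hypothetical 2×2 wetland–upland matrix (the counts are invented, not from the study):

```python
import numpy as np

def overall_accuracy(conf):
    # Overall accuracy = correctly mapped pixels / all pixels,
    # i.e. the trace of the confusion matrix divided by its total.
    # Rows = reference classes, columns = mapped classes.
    return np.trace(conf) / conf.sum()

# Hypothetical wetland-upland confusion matrix (pixel counts)
conf = np.array([[900,  50],   # reference wetland
                 [ 40, 910]])  # reference upland
print(overall_accuracy(conf))  # ~0.953, i.e. above the 90% wetland-upland target
```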

19 pages, 668 KB  
Article
Analysis of Using Machine Learning Application Possibilities for the Detection and Classification of Topographic Objects
by Katarzyna Kryzia, Aleksandra Radziejowska, Justyna Adamczyk and Dominik Kryzia
ISPRS Int. J. Geo-Inf. 2026, 15(2), 59; https://doi.org/10.3390/ijgi15020059 - 27 Jan 2026
Viewed by 782
Abstract
The growing availability of spatial data from remote sensing, laser scanning (LiDAR), and photogrammetric techniques stimulates the dynamic development of methods for the automatic detection and classification of topographic objects. In recent years, both classical machine learning (ML) algorithms and deep learning (DL) methods have found wide application in the analysis of large and complex data sets. Despite significant achievements, the literature on the subject remains scattered, and a comprehensive review that systematically compares algorithm classes with respect to data modality, performance, and application context is still needed. The aim of this article is to provide a critical analysis of the current state of research on the use of ML and DL algorithms in the detection and classification of topographic objects. The theoretical foundations of selected methods, their applications to various data sources, and the accuracy and computational requirements reported in the literature are presented. Attention is paid to comparing classical ML algorithms (including SVM, RF, KNN) with modern deep architectures (CNN, U-Net, ResNet) with respect to different data types, such as satellite imagery, aerial orthophotos, and LiDAR point clouds, indicating their effectiveness in the context of cartographic and elevation data. The article also discusses the main challenges related to data availability, model interpretability, and computational costs, and points to promising directions for further research. The summary of the results shows that DL methods are frequently reported to achieve segmentation and classification accuracies several to more than ten percentage points higher than those of classical ML approaches, depending on data type and object complexity, particularly in the analysis of raster data and LiDAR point clouds. The conclusions emphasize the practical significance of these methods for spatial planning, infrastructure monitoring, and environmental management, as well as their potential in the automation of topographic analysis. Full article
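As a minimal illustration of the kind of classical-ML comparison the review surveys (synthetic tabular data standing in for image- or LiDAR-derived features; not taken from any reviewed study), SVM, RF, and KNN can be benchmarked side by side with scikit-learn:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# Synthetic stand-in for per-object features (3 topographic classes)
X, y = make_classification(n_samples=600, n_features=8, n_informative=5,
                           n_classes=3, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# Fit each classical classifier and record held-out accuracy
scores = {}
for name, clf in [("SVM", SVC(random_state=0)),
                  ("RF", RandomForestClassifier(random_state=0)),
                  ("KNN", KNeighborsClassifier())]:
    scores[name] = clf.fit(X_tr, y_tr).score(X_te, y_te)
print({k: round(v, 3) for k, v in scores.items()})
```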

20 pages, 2057 KB  
Article
Applying Deep Learning to Bathymetric LiDAR Point Cloud Data for Classifying Submerged Environments
by Nabila Tabassum, Henri Giudici, Vimala Nunavath and Ivar Oveland
Appl. Sci. 2025, 15(24), 12914; https://doi.org/10.3390/app152412914 - 8 Dec 2025
Viewed by 887
Abstract
Subsea environments are vital for global biodiversity, climate regulation, and human activities such as fishing, transport, and resource extraction. Accurate mapping and monitoring of these ecosystems are essential for sustainable management. Airborne LiDAR bathymetry (ALB) provides high-resolution underwater data but produces large and complex datasets that make efficient analysis challenging. This study employs deep learning (DL) models for the multi-class classification of ALB waveform data, comparing two recurrent neural networks, i.e., Long Short-Term Memory (LSTM) and Bidirectional LSTM (BiLSTM). A preprocessing pipeline was developed to extract and label waveform peaks corresponding to five classes: sea surface, water, vegetation, seabed, and noise. Experimental results from two datasets demonstrated high classification accuracy for both models, with LSTM achieving 95.22% and 94.85%, and BiLSTM obtaining 94.37% and 84.18% on Dataset 1 and Dataset 2, respectively. Results show that the LSTM exhibited robustness and generalization, confirming its suitability for modeling causal, time-of-flight ALB signals. Overall, the findings highlight the potential of DL-based ALB data processing to improve underwater classification accuracy, thereby supporting safe navigation, resource management, and marine environmental monitoring. Full article
(This article belongs to the Special Issue AI for Sustainability and Innovation—2nd Edition)
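The preprocessing pipeline described above extracts and labels waveform peaks before classification. A generic sketch of peak extraction on a synthetic ALB-like waveform (the waveform shape, thresholds, and peak positions are illustrative assumptions, not the paper's pipeline):

```python
import numpy as np
from scipy.signal import find_peaks

# Hypothetical ALB return waveform: a strong sea-surface echo at sample 40
# and a weaker seabed echo at sample 150, over a small noise floor.
t = np.arange(200)
waveform = (1.0 * np.exp(-0.5 * ((t - 40) / 4) ** 2)     # sea-surface peak
            + 0.4 * np.exp(-0.5 * ((t - 150) / 6) ** 2)  # seabed peak
            + 0.02)                                      # noise floor

# Candidate echoes = local maxima above an amplitude threshold,
# separated by a minimum sample distance.
peaks, props = find_peaks(waveform, height=0.1, distance=10)
print(peaks)  # sample indices of the two echoes
```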

29 pages, 7711 KB  
Article
Fundamentals of Controlled Demolition in Structures: Real-Life Applications, Discrete Element Methods, Monitoring, and Artificial Intelligence-Based Research Directions
by Julide Yuzbasi
Buildings 2025, 15(19), 3501; https://doi.org/10.3390/buildings15193501 - 28 Sep 2025
Cited by 3 | Viewed by 3185
Abstract
Controlled demolition is a critical engineering practice that enables the safe and efficient dismantling of structures while minimizing risks to the surrounding environment. This study presents, for the first time, a detailed, structured framework for understanding the fundamental principles of controlled demolition by outlining key procedures, methodologies, and directions for future research. Through original, carefully designed charts and full-scale numerical simulations, including two 23-story building scenarios with different delay and blasting sequences, this paper provides real-life insights into the effects of floor-to-floor versus axis-by-axis delays on structural collapse behavior, debris spread, and toppling control. Beyond traditional techniques, this study explores how emerging technologies, such as real-time structural monitoring via object tracking, LiDAR scanning, and Unmanned Aerial Vehicle (UAV)-based inspections, can be further advanced through the integration of artificial intelligence (AI). Potential deep learning (DL)- and machine learning (ML)-based applications of tools like Convolutional Neural Network (CNN)-based digital twins, YOLO object detection, and XGBoost classifiers are highlighted as promising avenues for future research. These technologies could support real-time decision-making, automation, and risk assessment in demolition scenarios. Furthermore, vision-language models such as SAM and Grounding DINO are discussed as enabling technologies for real-time risk assessment, anomaly detection, and adaptive control. By sharing insights from full-scale observations and proposing a forward-looking analytical framework, this work lays a foundation for intelligent and resilient demolition practices. Full article
(This article belongs to the Section Building Structures)

54 pages, 2856 KB  
Review
Applications, Trends, and Challenges of Precision Weed Control Technologies Based on Deep Learning and Machine Vision
by Xiangxin Gao, Jianmin Gao and Waqar Ahmed Qureshi
Agronomy 2025, 15(8), 1954; https://doi.org/10.3390/agronomy15081954 - 13 Aug 2025
Cited by 11 | Viewed by 6268
Abstract
Advanced computer vision (CV) and deep learning (DL) are essential for sustainable agriculture via automated vegetation management. This paper methodically reviews advancements in these technologies for agricultural settings, analyzing their fundamental principles, designs, system integration, and practical applications. The amalgamation of transformer topologies with convolutional neural networks (CNNs) in models such as YOLO (You Only Look Once) and Mask R-CNN (Region-Based Convolutional Neural Network) markedly enhances target recognition and semantic segmentation. The integration of LiDAR (Light Detection and Ranging) with multispectral imagery significantly improves recognition accuracy in intricate situations. Moreover, the integration of deep learning models with control systems, which include laser modules, robotic arms, and precision spray nozzles, facilitates the development of intelligent robotic mowing systems that significantly diminish chemical herbicide consumption and enhance operational efficiency relative to conventional approaches. Significant obstacles persist, including restricted environmental adaptability, real-time processing limitations, and inadequate model generalization. Future directions entail the integration of varied data sources, the development of streamlined models, and the enhancement of intelligent decision-making systems, establishing a framework for the advancement of sustainable agricultural technology. Full article
(This article belongs to the Special Issue Research Progress in Agricultural Robots in Arable Farming)

34 pages, 2523 KB  
Technical Note
A Technical Note on AI-Driven Archaeological Object Detection in Airborne LiDAR Derivative Data, with CNN as the Leading Technique
by Reyhaneh Zeynali, Emanuele Mandanici and Gabriele Bitelli
Remote Sens. 2025, 17(15), 2733; https://doi.org/10.3390/rs17152733 - 7 Aug 2025
Cited by 3 | Viewed by 5048
Abstract
Archaeological research fundamentally relies on detecting features to uncover hidden historical information. Airborne (aerial) LiDAR technology has significantly advanced this field by providing high-resolution 3D terrain maps that enable the identification of ancient structures and landscapes with improved accuracy and efficiency. This technical note comprehensively reviews 45 recent studies to critically examine the integration of Machine Learning (ML) and Deep Learning (DL) techniques, particularly Convolutional Neural Networks (CNNs), with airborne LiDAR derivatives for automated archaeological feature detection. The review highlights the transformative potential of these approaches, revealing their capability to automate feature detection and classification, thus enhancing efficiency and accuracy in archaeological research. CNN-based methods, employed in 32 of the reviewed studies, consistently demonstrate high accuracy across diverse archaeological features. For example, ancient city walls were delineated with 94.12% precision using U-Net, Maya settlements with 95% accuracy using VGG-19, and with an IoU of around 80% using YOLOv8, and shipwrecks with a 92% F1-score using YOLOv3 aided by transfer learning. Furthermore, traditional ML techniques like random forest proved effective in tasks such as identifying burial mounds with 96% accuracy and ancient canals. Despite these significant advancements, the application of ML/DL in archaeology faces critical challenges, including the scarcity of large, labeled archaeological datasets, the prevalence of false positives due to morphological similarities with natural or modern features, and the lack of standardized evaluation metrics across studies. This note underscores the transformative potential of LiDAR and ML/DL integration and emphasizes the crucial need for continued interdisciplinary collaboration to address these limitations and advance the preservation of cultural heritage. Full article
(This article belongs to the Special Issue Application of Remote Sensing in Cultural Heritage Research II)
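Several of the detection scores quoted above (e.g., the ~80% IoU for Maya settlements with YOLOv8) rest on the Intersection-over-Union measure. A minimal, generic box-IoU sketch with hypothetical feature footprints (not data from any reviewed study):

```python
def box_iou(a, b):
    # Boxes as (x_min, y_min, x_max, y_max).
    # IoU = intersection area / union area, in [0, 1].
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

# Hypothetical detection vs. ground-truth footprint of a burial mound
print(box_iou((0, 0, 10, 10), (5, 5, 15, 15)))  # 25 / 175, about 0.143
```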

27 pages, 658 KB  
Systematic Review
Advances in the Automated Identification of Individual Tree Species: A Systematic Review of Drone- and AI-Based Methods in Forest Environments
by Ricardo Abreu-Dias, Juan M. Santos-Gago, Fernando Martín-Rodríguez and Luis M. Álvarez-Sabucedo
Technologies 2025, 13(5), 187; https://doi.org/10.3390/technologies13050187 - 6 May 2025
Cited by 4 | Viewed by 5221
Abstract
The classification and identification of individual tree species in forest environments are critical for biodiversity conservation, sustainable forestry management, and ecological monitoring. Recent advances in drone technology and artificial intelligence have enabled new methodologies for detecting and classifying trees at an individual level. However, significant challenges persist, particularly in heterogeneous forest environments with high species diversity and complex canopy structures. This systematic review explores the latest research on drone-based data collection and AI-driven classification techniques, focusing on studies that classify specific tree species rather than generic tree detection. Following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines, peer-reviewed studies from the last decade were analyzed to identify trends in data acquisition instruments (e.g., RGB, multispectral, hyperspectral, LiDAR), preprocessing techniques, segmentation approaches, and machine learning (ML) algorithms used for classification. Findings of this study reveal that deep learning (DL) models, particularly convolutional neural networks (CNNs), are increasingly replacing traditional ML methods such as random forest (RF) or support vector machines (SVMs), because no separate feature extraction phase is needed: feature extraction is implicit in the DL models. The integration of LiDAR with hyperspectral imaging further enhances classification accuracy but remains limited due to cost constraints. Additionally, we discuss the challenges of model generalization across different forest ecosystems and propose future research directions, including the development of standardized datasets and improved model architectures for robust tree species classification. This review provides a comprehensive synthesis of existing methodologies, highlighting both advancements and persistent gaps in AI-driven forest monitoring. Full article
(This article belongs to the Collection Review Papers Collection for Advanced Technologies)

28 pages, 13811 KB  
Article
MMTSCNet: Multimodal Tree Species Classification Network for Classification of Multi-Source, Single-Tree LiDAR Point Clouds
by Jan Richard Vahrenhold, Melanie Brandmeier and Markus Sebastian Müller
Remote Sens. 2025, 17(7), 1304; https://doi.org/10.3390/rs17071304 - 5 Apr 2025
Cited by 6 | Viewed by 1948
Abstract
Trees play a critical role in climate regulation, biodiversity, and carbon storage as they cover approximately 30% of the global land area. Nowadays, Machine Learning (ML) is key to automating large-scale tree species classification based on active and passive sensing systems, with a recent trend favoring data fusion approaches for higher accuracy. The use of 3D Deep Learning (DL) models has improved tree species classification by capturing structural and geometric data directly from point clouds. We propose a fully Multimodal Tree Species Classification Network (MMTSCNet) that processes Light Detection and Ranging (LiDAR) point clouds, Full-Waveform (FWF) data, derived features, and bidirectional, color-coded depth images in their native data formats without any modality transformation. We conduct several experiments as well as an ablation study to assess the impact of data fusion. Classification performance on the combination of Airborne Laser Scanning (ALS) data with FWF data scored the highest, achieving an Overall Accuracy (OA) of nearly 97%, a Mean Average F1-score (MAF) of nearly 97%, and a Kappa Coefficient of 0.96. Results for the other data subsets show that the ALS data in combination with or even without FWF data produced the best results, closely followed by the UAV-borne Laser Scanning (ULS) data. Additionally, it is evident that the inclusion of FWF data provided significant benefits to the classification performance, resulting in an increase in the MAF of +4.66% for the ALS data, +4.69% for the ULS data under leaf-on conditions, and +2.59% for the ULS data under leaf-off conditions. The proposed model is also compared to a state-of-the-art unimodal 3D-DL model (PointNet++) as well as a feature-based unimodal DL architecture (DSTCN). The MMTSCNet architecture outperformed the other models by several percentage points, depending on the characteristics of the input data. Full article
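One of MMTSCNet's input modalities is a depth image derived from the point cloud. As a loose, generic sketch of that idea only (grid size, resolution, and points are invented here; the paper's actual depth images are bidirectional and color-coded):

```python
import numpy as np

def depth_image(points, res=0.5, size=8):
    # Project a 3D point cloud onto an XY grid, keeping the highest z per
    # cell - a simple stand-in for a depth-image modality in a fusion network.
    n = int(size / res)
    img = np.full((n, n), np.nan)  # NaN marks empty cells
    ix = np.clip((points[:, 0] / res).astype(int), 0, n - 1)
    iy = np.clip((points[:, 1] / res).astype(int), 0, n - 1)
    for x, y, z in zip(ix, iy, points[:, 2]):
        if np.isnan(img[y, x]) or z > img[y, x]:
            img[y, x] = z
    return img

# Hypothetical single-tree points (x, y, z in metres)
pts = np.array([[1.0, 1.0, 2.0], [1.1, 1.1, 5.0], [6.0, 6.0, 9.5]])
img = depth_image(pts)
print(img[2, 2], img[12, 12])  # 5.0 (two points share a cell), 9.5
```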

29 pages, 4530 KB  
Systematic Review
Advances in Deep Learning for Semantic Segmentation of Low-Contrast Images: A Systematic Review of Methods, Challenges, and Future Directions
by Claudio Urrea and Maximiliano Vélez
Sensors 2025, 25(7), 2043; https://doi.org/10.3390/s25072043 - 25 Mar 2025
Cited by 11 | Viewed by 7476
Abstract
The semantic segmentation (SS) of low-contrast images (LCIs) remains a significant challenge in computer vision, particularly for sensor-driven applications like medical imaging, autonomous navigation, and industrial defect detection, where accurate object delineation is critical. This systematic review develops a comprehensive evaluation of state-of-the-art deep learning (DL) techniques to improve segmentation accuracy in LCI scenarios by addressing key challenges, such as diffuse boundaries and regions with similar pixel intensities, which limit conventional methods. Key advancements include attention mechanisms, multi-scale feature extraction, and hybrid architectures combining Convolutional Neural Networks (CNNs) with Vision Transformers (ViTs), which expand the Effective Receptive Field (ERF), improve feature representation, and optimize information flow. We compare the performance of 25 models, evaluating accuracy (e.g., mean Intersection over Union (mIoU), Dice Similarity Coefficient (DSC)), computational efficiency, and robustness across benchmark datasets relevant to automation and robotics. This review identifies limitations, including the scarcity of diverse, annotated LCI datasets and the high computational demands of transformer-based models. Future opportunities emphasize lightweight architectures, advanced data augmentation, integration with multimodal sensor data (e.g., LiDAR, thermal imaging), and ethically transparent AI to build trust in automation systems. This work contributes a practical guide for enhancing LCI segmentation, improving mean accuracy metrics like mIoU by up to 15% in sensor-based applications, as evidenced by benchmark comparisons. It serves as a concise, comprehensive guide for researchers and practitioners advancing DL-based LCI segmentation in real-world sensor applications. Full article
(This article belongs to the Section Sensing and Imaging)
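The mIoU and DSC metrics used in the 25-model comparison can be sketched generically as follows (tiny made-up masks for illustration, not benchmark data):

```python
import numpy as np

def miou(pred, true, n_classes):
    # Mean Intersection over Union, averaged over classes that occur
    # in either the prediction or the ground truth.
    ious = []
    for c in range(n_classes):
        inter = np.sum((pred == c) & (true == c))
        union = np.sum((pred == c) | (true == c))
        if union > 0:
            ious.append(inter / union)
    return float(np.mean(ious))

def dice(pred, true):
    # Dice Similarity Coefficient for a binary foreground mask (class 1)
    inter = np.sum((pred == 1) & (true == 1))
    return 2 * inter / (np.sum(pred == 1) + np.sum(true == 1))

true = np.array([[0, 0, 1, 1],
                 [0, 0, 1, 1]])
pred = np.array([[0, 0, 0, 1],
                 [0, 0, 1, 1]])
m_val = miou(pred, true, 2)   # (4/5 + 3/4) / 2 = 0.775
d_val = dice(pred, true)      # 2*3 / (3+4) = 6/7
print(m_val, d_val)
```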

28 pages, 1683 KB  
Article
Energy-Saving Geospatial Data Storage—LiDAR Point Cloud Compression
by Artur Warchoł, Karolina Pęzioł and Marek Baścik
Energies 2024, 17(24), 6413; https://doi.org/10.3390/en17246413 - 20 Dec 2024
Cited by 3 | Viewed by 2751
Abstract
In recent years, the growth of digital data has been unimaginable. This also applies to geospatial data. One of the largest data types is LiDAR point clouds. Their large volumes on disk, both at the acquisition and processing stages and in the final versions, translate into a high demand for disk space and therefore electricity. It is therefore obvious that, in order to reduce energy consumption, lower the carbon footprint of the activity and promote sustainability in the digitization of the industry, lossless compression of the aforementioned datasets is a good solution. In this article, a new format for point clouds—3DL—is presented, the effectiveness of which is compared with 21 available formats that can contain LiDAR data. A total of 404 processes were carried out to validate the 3DL file format. The validation was based on four LiDAR point clouds stored in LAS files: two files derived from ALS (airborne laser scanning), one in the local coordinate system and the other in PL-2000; and two obtained by TLS (terrestrial laser scanning), also with the same georeferencing (local and national PL-2000). During the research, each LAS file was saved 101 different ways in 22 different formats, and the results were then compared in several ways (according to the coordinate system, ALS and TLS data, both types of data within a single coordinate system, and the time of processing). The validated solution (3DL) achieved CR (compression rate) results of around 32% for ALS data and around 42% for TLS data, while the best solutions reached 15% for ALS and 34% for TLS. On the other hand, the worst method inflated the file to as much as 424.92% of its original size (ALS_PL2000). This significant reduction in file size contributes to a significant reduction in energy consumption during the storage of LiDAR point clouds, their transmission over the internet and/or during copy/transfer. For all solutions, rankings were developed according to CR and CT (compression time) parameters. Full article
(This article belongs to the Special Issue Low-Energy Technologies in Heavy Industries)
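The CR and CT parameters used in the rankings can be illustrated generically with a general-purpose compressor (zlib here is only a stand-in; 3DL and the 21 compared formats are not reproduced, and the payload is synthetic):

```python
import time
import zlib
import numpy as np

# Hypothetical LiDAR-like payload: quantized x, y, z coordinates
rng = np.random.default_rng(0)
points = (rng.random((100_000, 3)) * 1000).astype(np.int32)
raw = points.tobytes()

# CT (compression time): wall-clock time of the compression step
t0 = time.perf_counter()
packed = zlib.compress(raw, level=6)
ct = time.perf_counter() - t0

# CR (compression rate): compressed size as a percentage of the original,
# matching the convention above where lower is better and >100% means inflation
cr = 100 * len(packed) / len(raw)
print(f"CR = {cr:.1f}%, CT = {ct:.3f}s")
```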

32 pages, 6180 KB  
Article
Improving Sewer Damage Inspection: Development of a Deep Learning Integration Concept for a Multi-Sensor System
by Jan Thomas Jung and Alexander Reiterer
Sensors 2024, 24(23), 7786; https://doi.org/10.3390/s24237786 - 5 Dec 2024
Cited by 10 | Viewed by 5241
Abstract
The maintenance and inspection of sewer pipes are essential to urban infrastructure but remain predominantly manual, resource-intensive, and prone to human error. Advancements in artificial intelligence (AI) and computer vision offer significant potential to automate sewer inspections, improving reliability and reducing costs. However, the existing vision-based inspection robots fail to provide data quality sufficient for training reliable deep learning (DL) models. To address these limitations, we propose a novel multi-sensor robotic system coupled with a DL integration concept. Following a comprehensive review of the current 2D (image) and 3D (point cloud) sewage pipe inspection methods, we identify key limitations and propose a system incorporating a camera array, front camera, and LiDAR sensor to optimise surface capture and enhance data quality. Damage types are assigned to the sensor best suited for their detection and quantification, while tailored DL models are proposed for each sensor type to maximise performance. This approach enables the optimal detection and processing of relevant damage types, achieving higher accuracy for each compared to single-sensor systems. Full article
(This article belongs to the Special Issue AI-Based Computer Vision Sensors & Systems)

28 pages, 18069 KB  
Article
An AI-Based Deep Learning with K-Mean Approach for Enhancing Altitude Estimation Accuracy in Unmanned Aerial Vehicles
by Prot Piyakawanich and Pattarapong Phasukkit
Drones 2024, 8(12), 718; https://doi.org/10.3390/drones8120718 - 29 Nov 2024
Cited by 4 | Viewed by 2858
Abstract
In the rapidly evolving domain of Unmanned Aerial Vehicles (UAVs), precise altitude estimation remains a significant challenge, particularly for lightweight UAVs. This research presents an innovative approach to enhance altitude estimation accuracy for camera-less UAVs weighing under 2 kg, utilizing advanced AI deep learning algorithms. The primary novelty of this study lies in its unique integration of unsupervised and supervised learning techniques. By synergistically combining K-Means clustering with a multiple-input deep learning regression-based model (DL-KMA), we have achieved substantial improvements in altitude estimation accuracy. This methodology represents a significant advancement over conventional approaches in UAV technology. Our experimental design involved comprehensive field data collection across two distinct altitude environments, employing a high-precision Digital Laser Distance Meter (Class II) as the reference standard. This rigorous approach facilitated a thorough evaluation of our model’s performance across varied terrains, ensuring robust and reliable results. The outcomes of our study are particularly noteworthy, with the model demonstrating remarkably low Mean Squared Error (MSE) values across all data clusters, ranging from 0.011 to 0.072. These results not only indicate significant improvements over traditional methods but also establish a new benchmark in UAV altitude estimation accuracy. A key innovation in our approach is the elimination of costly additional hardware such as Light Detection and Ranging (LiDAR), offering a cost-effective, software-based solution. This advancement has broad implications, enhancing the accessibility of advanced UAV technology and expanding its potential applications across diverse sectors including precision agriculture, urban planning, and emergency response.
This research represents a significant contribution to the integration of AI and UAV technology, potentially unlocking new possibilities for UAV applications. By enhancing the capabilities of lightweight UAVs, we are not merely improving a technical aspect but revolutionizing the potential applications of UAVs across industries. Our work sets the stage for safer, more reliable, and more precise UAV operations, marking a pivotal moment in the evolution of aerial technology in an increasingly UAV-dependent world. Full article
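The DL-KMA idea — cluster the sensor readings first, then fit a separate regressor inside each cluster — can be sketched with a toy 1-D k-means and per-cluster least-squares lines standing in for the deep regression heads. Everything below is an illustrative simplification under those assumptions, not the authors' implementation:

```python
import random

def kmeans_1d(xs, k, iters=20, seed=0):
    """Tiny 1-D k-means: returns cluster centers and a label per sample."""
    rng = random.Random(seed)
    centers = rng.sample(xs, k)
    labels = [0] * len(xs)
    for _ in range(iters):
        # Assign each sample to its nearest center, then recompute centers.
        labels = [min(range(k), key=lambda j: abs(x - centers[j])) for x in xs]
        for j in range(k):
            members = [x for x, l in zip(xs, labels) if l == j]
            if members:
                centers[j] = sum(members) / len(members)
    return centers, labels

def fit_per_cluster(xs, ys, labels, k):
    """Closed-form least-squares line y = a*x + b fitted separately inside
    each cluster (stand-in for per-cluster deep regression). Assumes each
    cluster spans more than one distinct x value."""
    models = {}
    for j in range(k):
        pts = [(x, y) for x, y, l in zip(xs, ys, labels) if l == j]
        n = len(pts)
        sx = sum(x for x, _ in pts); sy = sum(y for _, y in pts)
        sxx = sum(x * x for x, _ in pts); sxy = sum(x * y for x, y in pts)
        a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
        b = (sy - a * sx) / n
        models[j] = (a, b)
    return models
```

At prediction time a new reading is first assigned to its nearest cluster center, then passed to that cluster's regressor — the same two-stage flow the abstract attributes to DL-KMA.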

14 pages, 6043 KB  
Article
Developing Site-Specific Prescription Maps for Sugarcane Weed Control Using High-Spatial-Resolution Images and Light Detection and Ranging (LiDAR)
by Kerin F. Romero and Muditha K. Heenkenda
Land 2024, 13(11), 1751; https://doi.org/10.3390/land13111751 - 25 Oct 2024
Cited by 6 | Viewed by 3091
Abstract
Sugarcane is a perennial grass species grown mainly for sugar production and one of the most significant crops in Costa Rica, where ideal growing conditions support its cultivation. Weed control is a critical aspect of sugarcane farming, traditionally managed through preventive or corrective mechanical and chemical methods. However, these methods can be time-consuming and costly. This study aimed to develop site-specific, variable-rate prescription maps for weed control using remote sensing. High-spatial-resolution images (5 cm) and Light Detection And Ranging (LiDAR) data were acquired using a Micasense Rededge-P camera and a DJI L1 sensor mounted on a drone. Precise locations of weeds were collected for calibration and validation. The Normalized Difference Vegetation Index (NDVI) derived from the multispectral images separated vegetation cover from soil. A deep learning (DL) algorithm further classified the vegetation cover into sugarcane and weeds. The DL model performed well without overfitting, reaching a classification accuracy of 87% against the validation samples. The density and average heights of weed patches were extracted from the LiDAR-derived canopy height model and used to derive site-specific prescription maps for weed control. This efficient and precise alternative to traditional methods could optimize weed control, reduce herbicide usage, and provide a more profitable yield. Full article
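The vegetation/soil split follows the standard formula NDVI = (NIR − Red) / (NIR + Red). A minimal per-pixel sketch — the 0.3 threshold is an illustrative assumption, not the study's calibrated value:

```python
def ndvi(nir, red, eps=1e-9):
    """Normalized Difference Vegetation Index for one pixel:
    NDVI = (NIR - Red) / (NIR + Red), bounded in [-1, 1].
    eps guards against division by zero on dark pixels."""
    return (nir - red) / (nir + red + eps)

def is_vegetation(nir, red, threshold=0.3):
    """Simple NDVI threshold separating vegetation cover from bare soil.
    The 0.3 cut-off is a common rule of thumb, assumed here for illustration."""
    return ndvi(nir, red) > threshold
```

In the workflow the abstract describes, pixels passing this mask would then go to the DL classifier for the finer sugarcane-versus-weed decision.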

37 pages, 92018 KB  
Article
Semantic Mapping of Landscape Morphologies: Tuning ML/DL Classification Approaches for Airborne LiDAR Data
by Marco Cappellazzo, Giacomo Patrucco, Giulia Sammartano, Marco Baldo and Antonia Spanò
Remote Sens. 2024, 16(19), 3572; https://doi.org/10.3390/rs16193572 - 25 Sep 2024
Cited by 6 | Viewed by 3007
Abstract
The interest in the enhancement of innovative solutions in the geospatial data classification domain from integrated aerial methods is rapidly growing. The transition from unstructured to structured information is essential to set up and arrange geodatabases and cognitive systems such as digital twins capable of monitoring territorial, urban, and general conditions of natural and/or anthropized space, predicting future developments, and considering risk prevention. This research is based on the study of classification methods and the consequent segmentation of low-altitude airborne LiDAR data in highly forested areas. In particular, the proposed approaches investigate integrating unsupervised classification methods and supervised Neural Network strategies, starting from unstructured point-based data formats. Furthermore, the research adopts Machine Learning classification methods for geo-morphological analyses derived from DTM datasets. This paper also discusses the results from a comparative perspective, suggesting possible generalization capabilities concerning the case study investigated. Full article
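A typical hand-crafted DTM-derived feature for such ML geo-morphological analyses is per-cell slope from finite differences. The following is a generic sketch of that feature computation, not the authors' pipeline:

```python
import math

def slope_degrees(dtm, cell_size):
    """Per-cell slope (degrees) from a DTM grid using central differences,
    falling back to one-sided differences at the grid border."""
    rows, cols = len(dtm), len(dtm[0])
    out = [[0.0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            i0, i1 = max(i - 1, 0), min(i + 1, rows - 1)
            j0, j1 = max(j - 1, 0), min(j + 1, cols - 1)
            dz_di = (dtm[i1][j] - dtm[i0][j]) / ((i1 - i0) * cell_size)
            dz_dj = (dtm[i][j1] - dtm[i][j0]) / ((j1 - j0) * cell_size)
            # Slope = arctan of the gradient magnitude, converted to degrees.
            out[i][j] = math.degrees(math.atan(math.hypot(dz_di, dz_dj)))
    return out
```

Such per-cell slope (often alongside aspect and curvature) would form part of the feature vector fed to a Machine Learning classifier over the DTM.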
