Search Results (4,177)

Search Parameters:
Keywords = cloud images

21 pages, 2245 KiB  
Article
Extraction of Corrosion Damage Features of Serviced Cable Based on Three-Dimensional Point Cloud Technology
by Tong Zhu, Shoushan Cheng, Haifang He, Kun Feng and Jinran Zhu
Materials 2025, 18(15), 3611; https://doi.org/10.3390/ma18153611 - 31 Jul 2025
Abstract
The corrosion of high-strength steel wires is a key factor impacting the durability and reliability of cable-stayed bridges. In this study, the corrosion pit features on a high-strength steel wire, which had been in service for 27 years, were extracted and modeled using three-dimensional point cloud data obtained through 3D surface scanning. The Otsu method was applied for image binarization, and each corrosion pit was geometrically represented as an ellipse. Key pit parameters—including length, width, depth, aspect ratio, and a defect parameter—were statistically analyzed. Results of the Kolmogorov–Smirnov (K–S) test at a 95% confidence level indicated that the directional angle component (θ) did not conform to any known probability distribution. In contrast, the pit width (b) and defect parameter (Φ) followed a generalized extreme value distribution, the aspect ratio (b/a) matched a Beta distribution, and both the pit length (a) and depth (d) were best described by a Gaussian mixture model. The obtained results provide a valuable reference for assessing the stress state, in-service performance, and predicted remaining service life of operational stay cables.
(This article belongs to the Section Construction and Building Materials)
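
For readers who want a concrete feel for the statistical workflow sketched in this abstract (Otsu binarization followed by distribution fitting and a K–S check), the following Python snippet is a minimal illustration on synthetic data; the pit-width values and variable names are assumptions, not the authors' code.

```python
# Sketch: Otsu binarization of a depth map and a K-S goodness-of-fit test for
# pit widths, using synthetic data in place of the scanned wire surface.
import numpy as np
from skimage.filters import threshold_otsu
from scipy import stats

depth_map = np.random.rand(512, 512)              # stand-in for the scanned depth image
pit_mask = depth_map > threshold_otsu(depth_map)  # Otsu separates pits from sound surface

pit_widths = np.random.gamma(2.0, 0.15, 500)      # hypothetical pit-width samples (mm)

# Fit a generalized extreme value distribution and test it at the 95% level.
shape, loc, scale = stats.genextreme.fit(pit_widths)
stat, p_value = stats.kstest(pit_widths, "genextreme", args=(shape, loc, scale))
print(f"K-S statistic = {stat:.3f}, p = {p_value:.3f}")
print("GEV not rejected at 95% level" if p_value > 0.05 else "GEV rejected")
```
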
28 pages, 5699 KiB  
Article
Multi-Modal Excavator Activity Recognition Using Two-Stream CNN-LSTM with RGB and Point Cloud Inputs
by Hyuk Soo Cho, Kamran Latif, Abubakar Sharafat and Jongwon Seo
Appl. Sci. 2025, 15(15), 8505; https://doi.org/10.3390/app15158505 - 31 Jul 2025
Abstract
Recently, deep learning algorithms have been increasingly applied in construction for activity recognition, particularly for excavators, to automate processes and enhance safety and productivity through continuous monitoring of earthmoving activities. These deep learning algorithms analyze construction videos to classify excavator activities for earthmoving purposes. However, previous studies have solely focused on single-source external videos, which limits the activity recognition capabilities of deep learning algorithms. This paper introduces a novel multi-modal deep learning-based methodology for recognizing excavator activities, utilizing multi-stream input data. It processes point clouds and RGB images using the two-stream convolutional neural network–long short-term memory (CNN-LSTM) method to extract spatiotemporal features, enabling the recognition of excavator activities. A comprehensive dataset comprising 495,000 video frames of synchronized RGB and point cloud data was collected across multiple construction sites under varying conditions. The dataset encompasses five key excavator activities: Approach, Digging, Dumping, Idle, and Leveling. To assess the effectiveness of the proposed method, the performance of the two-stream CNN-LSTM architecture is compared with that of single-stream CNN-LSTM models on the same RGB and point cloud datasets, separately. The results demonstrate that the proposed multi-stream approach achieved an accuracy of 94.67%, outperforming existing state-of-the-art single-stream models, which achieved 90.67% accuracy for the RGB-based model and 92.00% for the point cloud-based model. These findings underscore the potential of the proposed activity recognition method, making it highly effective for automatic real-time monitoring of excavator activities, thereby laying the groundwork for future integration into digital twin systems for proactive maintenance and intelligent equipment management.
(This article belongs to the Special Issue AI-Based Machinery Health Monitoring)
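
A minimal PyTorch sketch of the two-stream CNN-LSTM idea summarized above: per-frame CNN features feed an LSTM in each stream, and the two temporal embeddings are concatenated for five-way activity classification. Layer sizes and names are illustrative assumptions, not the authors' architecture.

```python
# Sketch of a two-stream CNN-LSTM: per-frame CNN features -> LSTM per stream,
# concatenated embeddings -> activity classifier (5 classes). Sizes are assumptions.
import torch
import torch.nn as nn

class StreamEncoder(nn.Module):
    def __init__(self, in_channels, hidden=128):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(in_channels, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.lstm = nn.LSTM(32, hidden, batch_first=True)

    def forward(self, clip):                      # clip: (B, T, C, H, W)
        b, t = clip.shape[:2]
        feats = self.cnn(clip.flatten(0, 1)).view(b, t, -1)
        _, (h, _) = self.lstm(feats)
        return h[-1]                              # (B, hidden)

class TwoStreamCNNLSTM(nn.Module):
    def __init__(self, num_classes=5):
        super().__init__()
        self.rgb = StreamEncoder(in_channels=3)   # RGB video stream
        self.pcd = StreamEncoder(in_channels=1)   # point cloud rendered as depth frames
        self.head = nn.Linear(128 * 2, num_classes)

    def forward(self, rgb_clip, pcd_clip):
        return self.head(torch.cat([self.rgb(rgb_clip), self.pcd(pcd_clip)], dim=1))

model = TwoStreamCNNLSTM()
logits = model(torch.rand(2, 8, 3, 64, 64), torch.rand(2, 8, 1, 64, 64))
print(logits.shape)  # (2, 5) -> Approach, Digging, Dumping, Idle, Leveling
```
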

19 pages, 3397 KiB  
Article
FEMNet: A Feature-Enriched Mamba Network for Cloud Detection in Remote Sensing Imagery
by Weixing Liu, Bin Luo, Jun Liu, Han Nie and Xin Su
Remote Sens. 2025, 17(15), 2639; https://doi.org/10.3390/rs17152639 - 30 Jul 2025
Abstract
Accurate and efficient cloud detection is critical for maintaining the usability of optical remote sensing imagery, particularly in large-scale Earth observation systems. In this study, we propose FEMNet, a lightweight dual-branch network that combines state space modeling with convolutional encoding for multi-class cloud segmentation. The Mamba-based encoder captures long-range semantic dependencies with linear complexity, while a parallel CNN path preserves spatial detail. To address the semantic inconsistency across feature hierarchies and limited context perception in decoding, we introduce the following two targeted modules: a cross-stage semantic enhancement (CSSE) block that adaptively aligns low- and high-level features, and a multi-scale context aggregation (MSCA) block that integrates contextual cues at multiple resolutions. Extensive experiments on five benchmark datasets demonstrate that FEMNet achieves state-of-the-art performance across both binary and multi-class settings, while requiring only 4.4M parameters and 1.3G multiply–accumulate operations. These results highlight FEMNet’s suitability for resource-efficient deployment in real-world remote sensing applications.

20 pages, 4467 KiB  
Article
Delineation of Dynamic Coastal Boundaries in South Africa from Hyper-Temporal Sentinel-2 Imagery
by Mariel Bessinger, Melanie Lück-Vogel, Andrew Luke Skowno and Ferozah Conrad
Remote Sens. 2025, 17(15), 2633; https://doi.org/10.3390/rs17152633 - 29 Jul 2025
Abstract
The mapping and monitoring of coastal regions are critical to ensure their sustainable use and viability in the long term. Delineation of coastlines is becoming increasingly important in the light of climate change and rising sea levels. However, many coastlines are highly dynamic; therefore, mono-temporal assessments of coastal ecosystems and coastlines are mere snapshots of limited practical value for space-based planning. Understanding of the spatio-temporal dynamics of coastal ecosystem boundaries is important not only to inform ecosystem management but also for a meaningful delineation of the high-water mark, which is used as a benchmark for coastal spatial planning in South Africa. This research aimed to use hyper-temporal Sentinel-2 imagery to extract ecological zones on the coast of KwaZulu-Natal, South Africa. A total of 613 images, collected between 2019 and 2023, were classified into four distinct coastal ecological zones—vegetation, bare, surf, and water—using a Random Forest model. Across all classifications, the percentage of each of the four classes’ occurrence per pixel over time was determined. This enabled the identification of ecosystem locations, spatially static ecosystem boundaries, and the occurrence of ecosystem boundaries with a more dynamic location over time, such as the non-permanent vegetation zone of the foredune area as well as the intertidal zone. The overall accuracy of the model was 98.13%, while the Kappa coefficient was 0.975, with user’s and producer’s accuracies ranging between 93.02% and 100%. These results indicate that cloud-based analysis of Sentinel-2 time series holds potential not just for delineating coastal ecosystem boundaries, but also for enhancing the understanding of spatio-temporal dynamics between them, to inform meaningful environmental management, spatial planning, and climate adaptation strategies.
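
The per-pixel occurrence statistic described in this abstract can be illustrated with a short sketch: classify each scene of a time stack with a Random Forest and count how often each class appears at every pixel. The band count, class labels, and synthetic data below are assumptions.

```python
# Sketch: classify each scene in a Sentinel-2 time stack with a Random Forest,
# then compute how often each class occurs per pixel over time. Data are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

n_scenes, h, w, n_bands, n_classes = 20, 64, 64, 10, 4   # 0=veg, 1=bare, 2=surf, 3=water

X_train = np.random.rand(2000, n_bands)                  # stand-in training spectra
y_train = np.random.randint(0, n_classes, 2000)
rf = RandomForestClassifier(n_estimators=200).fit(X_train, y_train)

stack = np.random.rand(n_scenes, h, w, n_bands)          # stand-in image time series
labels = np.stack([rf.predict(img.reshape(-1, n_bands)).reshape(h, w) for img in stack])

# Per-pixel occurrence (%) of each class across all scenes; dynamic boundaries show
# up as pixels where no single class dominates (e.g. the intertidal zone).
occurrence = np.stack([(labels == c).mean(axis=0) * 100 for c in range(n_classes)])
print(occurrence.shape)  # (4, 64, 64)
```
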

36 pages, 9354 KiB  
Article
Effects of Clouds and Shadows on the Use of Independent Component Analysis for Feature Extraction
by Marcos A. Bosques-Perez, Naphtali Rishe, Thony Yan, Liangdong Deng and Malek Adjouadi
Remote Sens. 2025, 17(15), 2632; https://doi.org/10.3390/rs17152632 - 29 Jul 2025
Abstract
One of the persistent challenges in multispectral image analysis is the interference caused by dense cloud cover and its resulting shadows, which can significantly obscure surface features. This becomes especially problematic when attempting to monitor surface changes over time using satellite imagery, such as from Landsat-8. In this study, rather than simply masking visual obstructions, we aimed to investigate the role and influence of clouds within the spectral data itself. To achieve this, we employed Independent Component Analysis (ICA), a statistical method capable of decomposing mixed signals into independent source components. By applying ICA to selected Landsat-8 bands and analyzing each component individually, we assessed the extent to which cloud signatures are entangled with surface data. This process revealed that clouds contribute to multiple ICA components simultaneously, indicating their broad spectral influence. With this influence on multiple wavebands, we managed to configure a set of components that could perfectly delineate the extent and location of clouds. Moreover, because Landsat-8 lacks cloud-penetrating wavebands, such as those in the microwave range (e.g., SAR), the surface information beneath dense cloud cover is not captured at all, making it physically impossible for ICA to recover what is not sensed in the first place. Despite these limitations, ICA proved effective in isolating and delineating cloud structures, allowing us to selectively suppress them in reconstructed images. Additionally, the technique successfully highlighted features such as water bodies, vegetation, and color-based land cover differences. These findings suggest that while ICA is a powerful tool for signal separation and cloud-related artifact suppression, its performance is ultimately constrained by the spectral and spatial properties of the input data. Future improvements could be realized by integrating data from complementary sensors—especially those operating in cloud-penetrating wavelengths—or by using higher spectral resolution imagery with narrower bands.
(This article belongs to the Section Environmental Remote Sensing)
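
A minimal sketch of the ICA step discussed above, assuming synthetic data in place of Landsat-8 bands: unmix pixel spectra with FastICA, then zero out components judged to be cloud-dominated before reconstructing the bands. The chosen component indices are hypothetical.

```python
# Sketch: unmix selected Landsat-8-like bands into independent components with FastICA;
# cloud signatures typically load on several components at once. Data are synthetic.
import numpy as np
from sklearn.decomposition import FastICA

h, w, n_bands = 128, 128, 6
bands = np.random.rand(h, w, n_bands)          # stand-in for selected Landsat-8 bands

X = bands.reshape(-1, n_bands)                 # one spectrum per pixel
ica = FastICA(n_components=n_bands, random_state=0)
sources = ica.fit_transform(X)                 # (pixels, components)
component_images = sources.T.reshape(n_bands, h, w)

# Suppress components identified as cloud-dominated, then reconstruct the bands.
cloud_components = [0, 2]                      # hypothetical indices chosen by inspection
sources[:, cloud_components] = 0.0
reconstructed = ica.inverse_transform(sources).reshape(h, w, n_bands)
print(component_images.shape, reconstructed.shape)
```
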

23 pages, 8942 KiB  
Article
Optical and SAR Image Registration in Equatorial Cloudy Regions Guided by Automatically Point-Prompted Cloud Masks
by Yifan Liao, Shuo Li, Mingyang Gao, Shizhong Li, Wei Qin, Qiang Xiong, Cong Lin, Qi Chen and Pengjie Tao
Remote Sens. 2025, 17(15), 2630; https://doi.org/10.3390/rs17152630 - 29 Jul 2025
Abstract
The equator’s unique combination of high humidity and temperature renders optical satellite imagery highly susceptible to persistent cloud cover. In contrast, synthetic aperture radar (SAR) offers a robust alternative due to its ability to penetrate clouds with microwave imaging. This study addresses the challenges of cloud-induced data gaps and cross-sensor geometric biases by proposing an advanced optical and SAR image-matching framework specifically designed for cloud-prone equatorial regions. We use a prompt-driven visual segmentation model with automatic prompt point generation to produce cloud masks that guide cross-modal feature-matching and joint adjustment of optical and SAR data. This process results in a comprehensive digital orthophoto map (DOM) with high geometric consistency, retaining the fine spatial detail of optical data and the all-weather reliability of SAR. We validate our approach across four equatorial regions using five satellite platforms with varying spatial resolutions and revisit intervals. Even in areas with more than 50 percent cloud cover, our method maintains sub-pixel accuracy at manual check points and delivers comprehensive DOM products, establishing a reliable foundation for downstream environmental monitoring and ecosystem analysis.
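
One way to picture the mask-guided matching step is a simple filter that discards optical-SAR correspondences falling on clouded pixels; the sketch below uses a placeholder mask and synthetic keypoints rather than the paper's segmentation model.

```python
# Sketch: reject optical-SAR feature matches that fall on cloud-covered pixels.
# The cloud mask would come from the prompt-driven segmentation step described
# above; here it is a placeholder array, and the keypoints are synthetic.
import numpy as np

h, w = 1024, 1024
cloud_mask = np.zeros((h, w), dtype=bool)          # True where the optical image is clouded
cloud_mask[200:500, 300:700] = True                # hypothetical cloud region

optical_pts = np.random.rand(500, 2) * [w, h]      # (x, y) keypoints in the optical image
sar_pts = optical_pts + np.random.randn(500, 2)    # corresponding SAR keypoints

cols = optical_pts[:, 0].astype(int).clip(0, w - 1)
rows = optical_pts[:, 1].astype(int).clip(0, h - 1)
keep = ~cloud_mask[rows, cols]                     # keep only cloud-free correspondences

optical_kept, sar_kept = optical_pts[keep], sar_pts[keep]
print(f"{keep.sum()} / {len(keep)} matches retained for the joint adjustment")
```
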

24 pages, 4396 KiB  
Article
Study of the Characteristics of a Co-Seismic Displacement Field Based on High-Resolution Stereo Imagery: A Case Study of the 2024 MS7.1 Wushi Earthquake, Xinjiang
by Chenyu Ma, Zhanyu Wei, Li Qian, Tao Li, Chenglong Li, Xi Xi, Yating Deng and Shuang Geng
Remote Sens. 2025, 17(15), 2625; https://doi.org/10.3390/rs17152625 - 29 Jul 2025
Abstract
The precise characterization of surface rupture zones and associated co-seismic displacement fields from large earthquakes provides critical insights into seismic rupture mechanisms, earthquake dynamics, and hazard assessments. Stereo-photogrammetric digital elevation models (DEMs), produced from high-resolution satellite stereo imagery, offer reliable global datasets that are suitable for the detailed extraction and quantification of vertical co-seismic displacements. In this study, we utilized pre- and post-event WorldView-2 stereo images of the 2024 Ms7.1 Wushi earthquake in Xinjiang to generate DEMs with a spatial resolution of 0.5 m and corresponding terrain point clouds with an average density of approximately 4 points/m². Subsequently, we applied the Iterative Closest Point (ICP) algorithm to perform differencing analysis on these datasets. Special care was taken to reduce influences from terrain changes such as vegetation growth and anthropogenic structures. Ultimately, by maintaining sufficient spatial detail, we obtained a three-dimensional co-seismic displacement field with a resolution of 15 m within grid cells measuring 30 m near the fault trace. The results indicate a clear vertical displacement distribution pattern along the causative sinistral–thrust fault, exhibiting alternating uplift and subsidence zones that follow a characteristic “high-in-center and low-at-ends” profile, along with localized peak displacement clusters. Vertical displacements range from approximately 0.2 to 1.4 m, with a maximum displacement of ~1.46 m located in the piedmont region north of the Qialemati River, near the transition between alluvial fan deposits and bedrock. Horizontal displacement components in the east-west and north-south directions are negligible, consistent with focal mechanism solutions and surface rupture observations from field investigations. The successful extraction of this high-resolution vertical displacement field validates the efficacy of satellite-based high-resolution stereo-imaging methods for overcoming the limitations of GNSS and InSAR techniques in characterizing near-field surface displacements associated with earthquake ruptures. Moreover, this dataset provides robust constraints for investigating fault-slip mechanisms within near-surface geological contexts.
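
A rough sketch of the point cloud differencing idea, assuming the Open3D library and synthetic pre- and post-event clouds: align the epochs with ICP, then difference elevations to approximate vertical displacement.

```python
# Sketch: align pre- and post-event terrain point clouds with ICP (Open3D assumed),
# then difference elevations to expose vertical co-seismic displacement.
import numpy as np
import open3d as o3d

pre = o3d.geometry.PointCloud()
post = o3d.geometry.PointCloud()
xy = np.random.rand(5000, 2) * 100.0
pre.points = o3d.utility.Vector3dVector(np.c_[xy, np.random.rand(5000)])         # pre-event
post.points = o3d.utility.Vector3dVector(np.c_[xy, np.random.rand(5000) + 0.5])  # post-event

# Point-to-point ICP (ideally on stable, off-fault areas) provides the rigid alignment.
result = o3d.pipelines.registration.registration_icp(
    post, pre, 1.0,
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint())
post.transform(result.transformation)

# After alignment, per-point elevation differences approximate vertical displacement
# (this toy example keeps a one-to-one point correspondence by construction).
dz = np.asarray(post.points)[:, 2] - np.asarray(pre.points)[:, 2]
print(f"mean vertical offset: {dz.mean():.2f} m")
```
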

20 pages, 2776 KiB  
Article
Automatic 3D Reconstruction: Mesh Extraction Based on Gaussian Splatting from Romanesque–Mudéjar Churches
by Nelson Montas-Laracuente, Emilio Delgado Martos, Carlos Pesqueira-Calvo, Giovanni Intra Sidola, Ana Maitín, Alberto Nogales and Álvaro José García-Tejedor
Appl. Sci. 2025, 15(15), 8379; https://doi.org/10.3390/app15158379 - 28 Jul 2025
Abstract
This research introduces an automated 3D virtual reconstruction system tailored for architectural heritage (AH) applications, contributing to the ongoing paradigm shift from traditional CAD-based workflows to artificial intelligence-driven methodologies. It reviews recent advancements in machine learning and deep learning—particularly neural radiance fields (NeRFs) and their successor, Gaussian splatting (GS)—as state-of-the-art techniques in the domain. The study advocates for replacing point cloud data in heritage building information modeling workflows with image-based inputs, proposing a novel “photo-to-BIM” pipeline. A proof-of-concept system is presented, capable of processing photographs or video footage of ancient ruins—specifically, Romanesque–Mudéjar churches—to automatically generate 3D mesh reconstructions. The system’s performance is assessed using both objective metrics and subjective evaluations of mesh quality. The results confirm the feasibility and promise of image-based reconstruction as a viable alternative to conventional methods. The study successfully developed a system for automated 3D mesh reconstruction of AH from images, applying GS and Mip-Splatting, which proved superior in noise reduction, before extracting meshes via surface-aligned Gaussian splatting. This photo-to-mesh pipeline signifies a viable step towards HBIM.

20 pages, 5843 KiB  
Article
Accurate and Robust Train Localization: Fusing Degeneracy-Aware LiDAR-Inertial Odometry and Visual Landmark Correction
by Lin Yue, Peng Wang, Jinchao Mu, Chen Cai, Dingyi Wang and Hao Ren
Sensors 2025, 25(15), 4637; https://doi.org/10.3390/s25154637 - 26 Jul 2025
Abstract
To overcome the limitations of current train positioning systems, including low positioning accuracy and heavy reliance on track transponders or GNSS signals, this paper proposes a novel LiDAR-inertial and visual landmark fusion framework. Firstly, an IMU preintegration factor considering the Earth’s rotation and a LiDAR-inertial odometry factor accounting for degenerate states are constructed to adapt to railway train operating environments. Subsequently, a lightweight network based on an improved YOLO is used for recognizing reflective kilometer posts, while PaddleOCR extracts numerical codes. High-precision vertex coordinates of kilometer posts are obtained by jointly using the LiDAR point cloud and the image detection box. Next, a kilometer post factor is constructed, and multi-source information is optimized within a factor graph framework. Finally, onboard experiments conducted on real railway vehicles demonstrate high-precision landmark detection at 35 FPS with 94.8% average precision. The proposed method delivers robust positioning within 5 m RMSE accuracy for high-speed, long-distance train travel, establishing a novel framework for intelligent railway development.
(This article belongs to the Section Navigation and Positioning)
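
The factor graph fusion described above can be illustrated with a toy example, assuming the GTSAM Python bindings are available: odometry-like between-factors chain the poses, and a recognized kilometer post contributes an absolute correction. The 2D poses and noise values are placeholders, not the paper's formulation.

```python
# Sketch: a toy 2D factor graph mixing relative odometry factors with one absolute
# "kilometer post" prior, optimized with GTSAM (Python bindings assumed installed).
import numpy as np
import gtsam

graph = gtsam.NonlinearFactorGraph()
odom_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.2, 0.2, 0.05]))
post_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.05, 0.05, 0.02]))

# Anchor the first pose, then chain LiDAR-inertial odometry as between-factors.
graph.add(gtsam.PriorFactorPose2(0, gtsam.Pose2(0.0, 0.0, 0.0), post_noise))
for k in range(3):
    graph.add(gtsam.BetweenFactorPose2(k, k + 1, gtsam.Pose2(10.0, 0.0, 0.0), odom_noise))

# A recognized kilometer post gives an absolute correction at pose 3 (values assumed).
graph.add(gtsam.PriorFactorPose2(3, gtsam.Pose2(30.5, 0.2, 0.0), post_noise))

initial = gtsam.Values()
for k in range(4):
    initial.insert(k, gtsam.Pose2(10.0 * k + 1.0, 0.5, 0.0))   # deliberately perturbed guess

result = gtsam.LevenbergMarquardtOptimizer(graph, initial).optimize()
print(result.atPose2(3))
```
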

30 pages, 3451 KiB  
Article
Integrating Google Maps and Smooth Street View Videos for Route Planning
by Federica Massimi, Antonio Tedeschi, Kalapraveen Bagadi and Francesco Benedetto
J. Imaging 2025, 11(8), 251; https://doi.org/10.3390/jimaging11080251 - 25 Jul 2025
Abstract
This research addresses the long-standing dependence on printed maps for navigation and highlights the limitations of existing digital services like Google Street View and Google Street View Player in providing comprehensive solutions for route analysis and understanding. The absence of a systematic approach to route analysis, issues related to insufficient street view images, and the lack of proper image mapping for desired roads remain unaddressed by current applications, which are predominantly client-based. In response, we propose an innovative automatic system designed to generate videos depicting road routes between two geographic locations. The system calculates the route and presents it both conventionally, emphasizing the path on a two-dimensional representation, and in a multimedia format. A prototype is developed based on a cloud-based client–server architecture, featuring three core modules: frames acquisition, frames analysis and elaboration, and the persistence of metadata information and computed videos. The tests, encompassing both real-world and synthetic scenarios, have produced promising results, showcasing the efficiency of our system. By providing users with a real and immersive understanding of requested routes, our approach fills a crucial gap in existing navigation solutions. This research contributes to the advancement of route planning technologies, offering a comprehensive and user-friendly system that leverages cloud computing and multimedia visualization for an enhanced navigation experience.
(This article belongs to the Section Computer Vision and Pattern Recognition)
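
The step of persisting computed videos can be illustrated with a short sketch that assembles acquired street-view frames into a route video using OpenCV; the file paths, frame rate, and codec below are assumptions.

```python
# Sketch: stitch a sequence of acquired street-view frames into a route video
# with OpenCV. File paths, frame rate, and codec are assumptions.
import glob
import cv2

frame_paths = sorted(glob.glob("frames/route_*.jpg"))   # hypothetical frame files
first = cv2.imread(frame_paths[0])
height, width = first.shape[:2]

writer = cv2.VideoWriter("route.mp4", cv2.VideoWriter_fourcc(*"mp4v"), 10.0, (width, height))
for path in frame_paths:
    frame = cv2.imread(path)
    if frame is not None and frame.shape[:2] == (height, width):
        writer.write(frame)            # frames are already ordered along the route
writer.release()
print(f"wrote route.mp4 from {len(frame_paths)} frames")
```
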

8 pages, 4452 KiB  
Proceeding Paper
Synthetic Aperture Radar Imagery Modelling and Simulation for Investigating the Composite Scattering Between Targets and the Environment
by Raphaël Valeri, Fabrice Comblet, Ali Khenchaf, Jacques Petit-Frère and Philippe Pouliguen
Eng. Proc. 2025, 94(1), 11; https://doi.org/10.3390/engproc2025094011 - 25 Jul 2025
Abstract
The high resolution of the Synthetic Aperture Radar (SAR) imagery, in addition to its capability to see through clouds and rain, makes it a crucial remote sensing technique. However, SAR images are very sensitive to radar parameters, the observation geometry and the scene’s characteristics. Moreover, for a complex scene of interest with targets located on a rough soil, a composite scattering between the target and the surface occurs and creates distortions on the SAR image. These characteristics can make the SAR images difficult to analyse and process. To better understand the complex EM phenomena and their signature in the SAR image, we propose a methodology to generate raw SAR signals and SAR images for scenes of interest with a target located on a rough surface. With this prospect, the entire radar acquisition chain is considered: the sensor parameters, the atmospheric attenuation, the interactions between the incident EM field and the scene, and the SAR image formation. Simulation results are presented for a rough dielectric soil and a canonical target considered as a Perfect Electric Conductor (PEC). These results highlight the importance of the composite scattering signature between the target and the soil. Its power is 21 dB higher than that of the target for the target–soil configuration considered. Finally, these simulations allow for the retrieval of characteristics present in actual SAR images and show the potential of the presented model in investigating EM phenomena and their signatures in SAR images.
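
As a much-reduced illustration of one link in the radar acquisition chain, the sketch below simulates a single point-target echo of a linear FM pulse and applies a matched filter (range compression) in one dimension; it is not the full raw-signal simulator described in the paper.

```python
# Sketch: 1-D pulse compression for a single point target - transmit a chirp,
# receive a delayed noisy echo, and apply a matched filter (cross-correlation).
import numpy as np

fs, T, B = 100e6, 10e-6, 30e6                 # sample rate, pulse length, bandwidth (assumed)
t = np.arange(0, T, 1 / fs)
chirp = np.exp(1j * np.pi * (B / T) * t**2)   # linear FM pulse

delay_samples = 300                           # echo delay set by the target range (assumed)
echo = np.zeros(2048, dtype=complex)
echo[delay_samples:delay_samples + len(chirp)] = 0.1 * chirp
echo += 0.01 * (np.random.randn(2048) + 1j * np.random.randn(2048))

compressed = np.abs(np.correlate(echo, chirp, mode="valid"))
print("peak at sample", int(np.argmax(compressed)))   # close to the target delay (300)
```
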

15 pages, 2993 KiB  
Article
A Joint LiDAR and Camera Calibration Algorithm Based on an Original 3D Calibration Plate
by Ziyang Cui, Yi Wang, Xiaodong Chen and Huaiyu Cai
Sensors 2025, 25(15), 4558; https://doi.org/10.3390/s25154558 - 23 Jul 2025
Abstract
An accurate extrinsic calibration between LiDAR and cameras is essential for effective sensor fusion, directly impacting the perception capabilities of autonomous driving systems. Although prior calibration approaches using planar and point features have yielded some success, they suffer from inherent limitations. Specifically, methods that rely on fitting planar contours using depth-discontinuous points are prone to systematic errors, which hinder the precise extraction of the 3D positions of feature points. This, in turn, compromises the accuracy and robustness of the calibration. To overcome these challenges, this paper introduces a novel 3D calibration plate incorporating the gradient depth, localization markers, and corner features. At the point cloud level, the gradient depth enables the accurate estimation of the 3D coordinates of feature points. At the image level, corner features and localization markers facilitate the rapid and precise acquisition of 2D pixel coordinates, with minimal interference from environmental noise. This method establishes a rigorous and systematic framework to enhance the accuracy of LiDAR–camera extrinsic calibrations. In a simulated environment, experimental results demonstrate that the proposed algorithm achieves a rotation error below 0.002 radians and a translation error below 0.005 m.
(This article belongs to the Section Sensing and Imaging)
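
Once the 3D coordinates of calibration-plate feature points (from the point cloud) and their 2D pixel locations (from the image) are available, the extrinsic transform can be estimated with a PnP solve, as in the hedged sketch below; the point values and camera intrinsics are placeholders.

```python
# Sketch: once 3D feature points (from the LiDAR point cloud) and their 2D pixel
# locations (from the image) are extracted, a PnP solve gives the extrinsics.
# The coordinates and camera intrinsics below are placeholders.
import numpy as np
import cv2

pts_3d = np.array([[0.0, 0.0, 2.0], [0.5, 0.0, 2.1], [0.5, 0.5, 2.2],
                   [0.0, 0.5, 2.1], [0.25, 0.25, 2.15], [0.1, 0.4, 2.05]], dtype=np.float64)
pts_2d = np.array([[320.0, 240.0], [420.0, 242.0], [418.0, 340.0],
                   [322.0, 338.0], [370.0, 290.0], [340.0, 320.0]], dtype=np.float64)

K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)

ok, rvec, tvec = cv2.solvePnP(pts_3d, pts_2d, K, dist, flags=cv2.SOLVEPNP_ITERATIVE)
R, _ = cv2.Rodrigues(rvec)          # rotation matrix of the LiDAR-to-camera transform
print(ok, R.shape, tvec.ravel())
```
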

31 pages, 4937 KiB  
Article
Proximal LiDAR Sensing for Monitoring of Vegetative Growth in Rice at Different Growing Stages
by Md Rejaul Karim, Md Nasim Reza, Shahriar Ahmed, Kyu-Ho Lee, Joonjea Sung and Sun-Ok Chung
Agriculture 2025, 15(15), 1579; https://doi.org/10.3390/agriculture15151579 - 23 Jul 2025
Abstract
Precise monitoring of vegetative growth is essential for assessing crop responses to environmental changes. Conventional methods of geometric characterization of plants such as RGB imaging, multispectral sensing, and manual measurements often lack precision or scalability for growth monitoring of rice. LiDAR offers high-resolution, non-destructive 3D canopy characterization; while it has shown success in other crops such as vineyards, its applications in rice cultivation across different growth stages remain underexplored. This study addresses that gap by using LiDAR for geometric characterization of rice plants at early, middle, and late growth stages. The objective of this study was to characterize rice plant geometry such as plant height, canopy volume, row distance, and plant spacing using the proximal LiDAR sensing technique at three different growth stages. A commercial LiDAR sensor (model: VLP-16, Velodyne Lidar, San Jose, CA, USA) was mounted on a wheeled aluminum frame, and data collection, preprocessing, visualization, and geometric feature characterization were performed using a commercial software solution, Python (version 3.11.5), and a custom algorithm. Manual measurements were compared with the LiDAR 3D point cloud measurements, demonstrating high precision in estimating plant geometric characteristics. LiDAR-estimated plant height, canopy volume, row distance, and spacing were 0.5 ± 0.1 m, 0.7 ± 0.05 m³, 0.3 ± 0.00 m, and 0.2 ± 0.001 m at the early stage; 0.93 ± 0.13 m, 1.30 ± 0.12 m³, 0.32 ± 0.01 m, and 0.19 ± 0.01 m at the middle stage; and 0.99 ± 0.06 m, 1.25 ± 0.13 m³, 0.38 ± 0.03 m, and 0.10 ± 0.01 m at the late growth stage. These measurements closely matched manual observations across the three stages. RMSE values ranged from 0.01 to 0.06 m and r² values ranged from 0.86 to 0.98 across parameters, confirming the high accuracy and reliability of proximal LiDAR sensing under field conditions. Although precision was achieved across growth stages, complex canopy structures under field conditions posed segmentation challenges. Further advances in point cloud filtering and classification are required to reliably capture such variability.
(This article belongs to the Section Artificial Intelligence and Digital Agriculture)
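
A simple sketch of the kind of geometric characterization reported above, using a synthetic point cloud: a percentile-based plant height and a convex hull canopy volume stand in for the custom algorithm.

```python
# Sketch: derive plant height and canopy volume from a (synthetic) crop point cloud.
# Percentile-based height and a convex hull volume are simple stand-ins for the
# custom characterization algorithm described in the abstract.
import numpy as np
from scipy.spatial import ConvexHull

points = np.random.rand(20000, 3) * [1.0, 1.0, 0.9]    # x, y, z in metres (synthetic canopy)

ground_level = np.percentile(points[:, 2], 2)           # robust ground estimate
top_level = np.percentile(points[:, 2], 98)             # robust canopy top
plant_height = top_level - ground_level

canopy = points[points[:, 2] > ground_level + 0.05]     # drop near-ground returns
canopy_volume = ConvexHull(canopy).volume               # m^3, convex approximation

print(f"height ~ {plant_height:.2f} m, canopy volume ~ {canopy_volume:.2f} m^3")
```
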

14 pages, 7931 KiB  
Article
Characteristics of Surface Temperature Inversion at the Muztagh-Ata Site on the Pamir Plateau
by Dai-Ping Zhang, Wen-Bo Gu, Ali Esamdin, Chun-Hai Bai, Hu-Biao Niu, Li-Yong Liu and Ji-Cheng Zhang
Atmosphere 2025, 16(8), 897; https://doi.org/10.3390/atmos16080897 - 23 Jul 2025
Abstract
In this paper, based on all the data from September 2021 to June 2024 collected by a 30 m meteorological tower and a differential image motion monitor (DIMM) at the Muztagh-Ata site located on the Pamir Plateau in western Xinjiang, China, we study the characteristics of the surface temperature inversion and its effect on astronomical seeing at the site. The results show the following: The temperature inversion at the Muztagh-Ata site is highly pronounced at night; it is typically distributed below a height of about 18 m; it weakens and disappears gradually after sunrise, while it forms gradually after sunset and remains stable during the night; and it is weaker in spring and summer but stronger in autumn and winter. Correlation studies with meteorological parameters show the following: increases in both cloud coverage and humidity weaken the temperature inversion; the inversion strength exhibits a bimodal distribution with respect to wind speed; southwesterly winds prevail at a frequency of 73.76% and are typically accompanied by strong temperature inversions. Finally, from the statistical patterns, we found that strong temperature inversions at the Muztagh-Ata site usually bring better seeing by suppressing atmospheric optical turbulence.
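
The inversion statistics summarized above reduce to simple tower arithmetic: the sketch below computes an inversion strength as the temperature difference between an upper level and a near-surface level and flags nighttime inversion occurrence. Column names and data are assumptions, not the site's records.

```python
# Sketch: quantify surface temperature inversion from tower records as the
# difference between an upper level (~18 m) and a near-surface level (~2 m).
# Column names and the synthetic data are assumptions, not the site's format.
import numpy as np
import pandas as pd

times = pd.date_range("2023-01-01", periods=48, freq="30min")
df = pd.DataFrame({
    "T_2m": 5 + 3 * np.sin(np.linspace(0, 2 * np.pi, 48)),
    "T_18m": 6 + 2.5 * np.sin(np.linspace(0, 2 * np.pi, 48)),
}, index=times)

df["inversion_strength"] = df["T_18m"] - df["T_2m"]     # positive => inversion present
night = df.between_time("20:00", "06:00")
print(f"nighttime inversion frequency: {(night['inversion_strength'] > 0).mean():.0%}")
print(f"mean nighttime strength: {night['inversion_strength'].mean():.2f} K")
```
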

17 pages, 2307 KiB  
Article
DeepBiteNet: A Lightweight Ensemble Framework for Multiclass Bug Bite Classification Using Image-Based Deep Learning
by Doston Khasanov, Halimjon Khujamatov, Muksimova Shakhnoza, Mirjamol Abdullaev, Temur Toshtemirov, Shahzoda Anarova, Cheolwon Lee and Heung-Seok Jeon
Diagnostics 2025, 15(15), 1841; https://doi.org/10.3390/diagnostics15151841 - 22 Jul 2025
Abstract
Background/Objectives: The accurate identification of insect bites from images of skin is daunting due to the fine gradations among diverse bite types, variability in human skin response, and inconsistencies in image quality. Methods: For this work, we introduce DeepBiteNet, a new ensemble-based deep learning model designed to perform robust multiclass classification of insect bites from RGB images. Our model aggregates three semantically diverse convolutional neural networks—DenseNet121, EfficientNet-B0, and MobileNetV3-Small—using a stacked meta-classifier designed to aggregate their predicted outcomes into an integrated, discriminatively strong output. Our technique balances heterogeneous feature representation with suppression of individual model biases. Our model was trained and evaluated on a hand-collected set of 1932 labeled images representing eight classes, consisting of common bites such as mosquito, flea, and tick bites, and unaffected skin. Our domain-specific augmentation pipeline introduced realistic variability in lighting, occlusion, and skin tone, thereby boosting generalizability. Results: Our model, DeepBiteNet, achieved a training accuracy of 89.7%, validation accuracy of 85.1%, and test accuracy of 84.6%, and surpassed fifteen benchmark CNN architectures on all key indicators, viz., precision (0.880), recall (0.870), and F1-score (0.875). Our model, optimized for mobile deployment with quantization and TensorFlow Lite, enables rapid on-device computation and eliminates reliance on cloud-based processing. Conclusions: Our work shows how ensemble learning, when carefully designed and combined with realistic data augmentation, can boost the reliability and usability of automatic insect bite diagnosis. Our model, DeepBiteNet, forms a promising foundation for future integration with mobile health (mHealth) solutions and may complement early diagnosis and triage in dermatologically underserved regions.
(This article belongs to the Special Issue Artificial Intelligence in Biomedical Diagnostics and Analysis 2024)
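
A hedged sketch of the stacking idea behind the ensemble described above, using torchvision backbones with their classifier heads removed and a small meta-classifier on the concatenated features; the layer sizes and fusion scheme are illustrative assumptions, not the published DeepBiteNet configuration.

```python
# Sketch: stack three torchvision backbones behind a small meta-classifier.
# Weights, feature fusion, and sizes are illustrative assumptions.
import torch
import torch.nn as nn
from torchvision import models

class StackedEnsemble(nn.Module):
    def __init__(self, num_classes=8):
        super().__init__()
        self.backbones = nn.ModuleList([
            models.densenet121(weights=None),
            models.efficientnet_b0(weights=None),
            models.mobilenet_v3_small(weights=None),
        ])
        for m in self.backbones:
            m.classifier = nn.Identity()          # expose pooled features instead of logits
        with torch.no_grad():                     # infer the fused feature dimension
            dim = sum(m(torch.zeros(1, 3, 224, 224)).shape[1] for m in self.backbones)
        self.meta = nn.Sequential(nn.Linear(dim, 256), nn.ReLU(), nn.Linear(256, num_classes))

    def forward(self, x):
        feats = torch.cat([m(x) for m in self.backbones], dim=1)
        return self.meta(feats)

model = StackedEnsemble().eval()
with torch.no_grad():
    print(model(torch.rand(2, 3, 224, 224)).shape)   # (2, 8) bite classes
```
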
