Search Results (114)

Search Parameters:
Keywords = small-scale cloud application

21 pages, 3865 KB  
Article
An Improved Model Based on YOLOv8 for Small Object Detection and Recognition
by Jia He and Suyun Luo
Information 2026, 17(2), 173; https://doi.org/10.3390/info17020173 - 9 Feb 2026
Viewed by 160
Abstract
With the rapid advancement of remote sensing technology, remote sensing images are increasingly being used in applications such as geographical monitoring, disaster warning, and urban planning. However, detecting small objects, such as vehicles and small buildings, in such imagery remains challenging due to complex backgrounds, weak features, and interference from factors like terrain, clouds, and lighting, leading to high rates of missed detections and false alarms. To tackle these issues, this paper proposes an improved YOLOv8-based framework for small object detection in remote sensing images. The enhancements include a multi-scale feature fusion mechanism, optimized data augmentation strategies incorporating super-resolution techniques, and a redesigned loss function that emphasizes small objects. These refinements significantly improve the model’s ability to extract discriminative features and detect small targets against cluttered backgrounds. Experimental results demonstrate improved performance over the baseline across multiple metrics, including precision, recall, mAP50, and mAP50-95, particularly for challenging categories like small vehicles and buildings. This work addresses a key technical bottleneck in small object detection and offers theoretical and practical guidance for subsequent research. Full article
(This article belongs to the Section Artificial Intelligence)
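As a rough illustration of the loss reweighting idea this entry describes, the sketch below up-weights an IoU-based box loss for ground-truth boxes that cover only a small fraction of the image. It is a generic, hypothetical example, not the authors' loss: the weighting formula, the alpha value, and the plain IoU loss are assumptions.

import numpy as np

def box_area(boxes):
    """Areas of [x1, y1, x2, y2] boxes, shape (N, 4)."""
    return (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])

def iou(pred, gt):
    """Element-wise IoU between matched predicted and ground-truth boxes."""
    x1 = np.maximum(pred[:, 0], gt[:, 0])
    y1 = np.maximum(pred[:, 1], gt[:, 1])
    x2 = np.minimum(pred[:, 2], gt[:, 2])
    y2 = np.minimum(pred[:, 3], gt[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    union = box_area(pred) + box_area(gt) - inter
    return inter / np.maximum(union, 1e-9)

def small_object_weighted_loss(pred, gt, img_area, alpha=2.0):
    """IoU loss per box, up-weighted for boxes covering a small image fraction.
    alpha controls how strongly small boxes are emphasized (assumed value)."""
    base = 1.0 - iou(pred, gt)                    # standard IoU loss per box
    frac = box_area(gt) / img_area                # relative object size
    weight = 1.0 + alpha * (1.0 - np.sqrt(frac))  # larger weight for smaller boxes
    return float(np.mean(weight * base))

pred = np.array([[10.0, 10.0, 20.0, 22.0]])
gt = np.array([[11.0, 10.0, 21.0, 20.0]])
print(small_object_weighted_loss(pred, gt, img_area=640 * 640))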

24 pages, 30102 KB  
Article
Developing 3D River Channel Modeling with UAV-Based Point Cloud Data
by Taesam Lee and Yejin Kong
Remote Sens. 2026, 18(3), 495; https://doi.org/10.3390/rs18030495 - 3 Feb 2026
Viewed by 231
Abstract
Accurate characterization of river channel geometry is essential for hydrological and hydraulic analyses, yet the increasing use of unmanned aerial vehicle (UAV) photogrammetry introduces challenges related to uneven point density, shadow-induced data gaps, and spurious outliers. This study proposed a novel approach for reconstructing 3D river channels from UAV-derived point clouds, centered on K-nearest neighbor local regression (KLR), and compared it with the LOWESS model. Method performance was examined through controlled simulations of trapezoidal, triangular, and U-shaped synthetic channels, where KLR consistently preserved morphological fidelity and produced lower RMSE than LOWESS, particularly at channel bends and bed undulations, while a heuristic neighborhood-selection approach remained robust across varying data densities. In the synthetic channel experiments, the proposed KLR method achieves RMSE values below 0.06 across all tested geometries, whereas LOWESS produces substantially larger errors, with RMSE values exceeding 0.9 for all channel shapes. Subsequent application to two South Korean field sites reinforced these findings. In the data-scarce Migok-cheon stream, KLR effectively interpolated missing surfaces while maintaining geomorphic realism, whereas LOWESS generated over-smoothed representations. Within the dense Ogsan Bridge dataset, KLR retained small-scale bed features critical for hydraulic simulations and cross-sectional delineation, while LOWESS obscured local variability. Overall, the results demonstrate that KLR provides a more reliable and computationally efficient framework for UAV-based 3D river channel reconstruction, with clear implications for hydraulic modeling, flood risk management, and the advancement of digital-twin systems in operational hydrology. Full article
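The following is a minimal sketch of K-nearest-neighbour local linear regression for gridding bed elevations from a UAV point cloud, the general idea behind the KLR approach summarized above; the neighbourhood size k and the local plane model are assumptions, not the authors' exact formulation.

import numpy as np
from scipy.spatial import cKDTree

def klr_surface(points, queries, k=30):
    """K-nearest-neighbour local linear regression for bed elevation.
    points : (N, 3) UAV point cloud (x, y, z); queries : (M, 2) grid nodes.
    For each query, a plane z = a + b*dx + c*dy is fit to its k neighbours,
    with coordinates centred on the query so the intercept a is the estimate."""
    tree = cKDTree(points[:, :2])
    _, idx = tree.query(queries, k=k)
    z_hat = np.empty(len(queries))
    for i, (q, nb) in enumerate(zip(queries, idx)):
        local = points[nb]
        A = np.column_stack([np.ones(k), local[:, 0] - q[0], local[:, 1] - q[1]])
        coef, *_ = np.linalg.lstsq(A, local[:, 2], rcond=None)
        z_hat[i] = coef[0]
    return z_hat

pts = np.random.rand(5000, 3)                        # synthetic stand-in point cloud
grid = np.stack(np.meshgrid(np.linspace(0, 1, 20),
                            np.linspace(0, 1, 20)), axis=-1).reshape(-1, 2)
bed = klr_surface(pts, grid, k=30)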

39 pages, 2492 KB  
Systematic Review
Cloud, Edge, and Digital Twin Architectures for Condition Monitoring of Computer Numerical Control Machine Tools: A Systematic Review
by Mukhtar Fatihu Hamza
Information 2026, 17(2), 153; https://doi.org/10.3390/info17020153 - 3 Feb 2026
Viewed by 340
Abstract
Condition monitoring has come to the forefront of intelligent manufacturing and is particularly important in Computer Numerical Control (CNC) machining, where reliability, precision, and productivity are crucial. Traditional monitoring methods, which mostly rely on single sensors, localized data capture, and offline interpretation, are proving inadequate for current machining processes. Limited in scale and computational power, and lacking real-time responsiveness, they are poorly suited to dynamic, data-intensive production environments. Recent progress in the Industrial Internet of Things (IIoT), cloud computing, and edge intelligence has driven a shift toward distributed monitoring architectures capable of acquiring, processing, and interpreting large volumes of heterogeneous machining data. These innovations have enabled more adaptive decision-making, supporting predictive maintenance, machining stability, longer tool life, and data-driven optimization in manufacturing businesses. Following a PRISMA-guided search and screening of major scientific databases, this systematic review qualitatively synthesizes over 180 peer-reviewed studies selected under explicit inclusion criteria. It provides a comprehensive, architecture-oriented look at sensor technologies, data acquisition systems, cloud–edge–IoT frameworks, and digital twin implementations, while identifying ongoing challenges related to industrial scalability, standardization, and deployment maturity. The combination of cloud platforms and edge intelligence is of particular interest, with emphasis on how the two balance computational load and latency and improve system reliability. The review also synthesizes major advances in sensor technologies, data collection approaches, machine operations, machine learning and deep learning methods, and digital twins, and concludes with a comparative analysis of current capabilities and limitations drawn from the literature and reported industrial case applications. Key issues, such as data inconsistency, lack of standardization, cyber threats, and legacy system integration, are critically analyzed. Finally, emerging research directions are outlined, including hybrid cloud–edge intelligence, advanced AI models, and adaptive multisensory fusion oriented toward autonomous, self-evolving CNC monitoring systems in line with the Industry 4.0 and Industry 5.0 paradigms. Full article

20 pages, 5585 KB  
Article
Integrating NDVI and Multisensor Data with Digital Agriculture Tools for Crop Monitoring in Southern Brazil
by Danielle Elis Garcia Furuya, Édson Luis Bolfe, Taya Cristo Parreiras, Victória Beatriz Soares and Luciano Gebler
AgriEngineering 2026, 8(2), 48; https://doi.org/10.3390/agriengineering8020048 - 2 Feb 2026
Viewed by 239
Abstract
The monitoring of perennial and annual crops requires different analytical approaches due to their contrasting phenological dynamics and management practices. This study investigates the temporal behavior of the Normalized Difference Vegetation Index (NDVI) derived from Harmonized Landsat and Sentinel-2 (HLS) imagery to characterize apple, grape, soybean, and maize crops in Vacaria, Southern Brazil, between January 2024 and April 2025. NDVI time series were extracted from cloud-free HLS observations and analyzed using raw, interpolated, and Savitzky–Golay-smoothed data, supported by field reference points collected with the AgroTag application. Distinct NDVI temporal patterns were observed, with apple and grape showing higher stability and soybean and maize exhibiting stronger seasonal variability. Descriptive statistics derived from 112 observation dates confirmed these differences, highlighting the ability of HLS-based NDVI time series to capture crop-specific phenological patterns at the municipal scale. Complementary analysis using the SATVeg platform demonstrated consistency in long-term vegetation trends while evidencing scale limitations of coarse-resolution data for small perennial plots. Overall, the findings demonstrate that the NDVI enables robust monitoring of mixed agricultural landscapes, with complementary spatial resolutions and analytical tools enhancing crop-specific phenological analysis. Full article
(This article belongs to the Special Issue Remote Sensing for Enhanced Agricultural Crop Management)
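For reference, the NDVI computation and Savitzky–Golay smoothing mentioned above reduce to a few lines; the synthetic reflectance series, window length, and polynomial order below are illustrative assumptions, not values from the study.

import numpy as np
from scipy.signal import savgol_filter

def ndvi(nir, red):
    """Normalized Difference Vegetation Index from NIR and red reflectance."""
    return (nir - red) / np.maximum(nir + red, 1e-9)

# Hypothetical reflectance time series standing in for 112 cloud-free HLS dates.
rng = np.random.default_rng(0)
t = np.arange(112)
red = 0.10 + 0.04 * np.cos(2 * np.pi * t / 112) + rng.normal(0, 0.01, t.size)
nir = 0.40 + 0.20 * np.sin(2 * np.pi * t / 112) + rng.normal(0, 0.02, t.size)

series = ndvi(nir, red)
# Savitzky-Golay smoothing of the (regularly spaced) NDVI series; window length
# and polynomial order are assumed values.
smooth = savgol_filter(series, window_length=11, polyorder=2)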

24 pages, 2221 KB  
Perspective
Digital Twins in Poultry Farming: Deconstructing the Evidence Gap Between Promise and Performance
by Suresh Raja Neethirajan
Appl. Sci. 2026, 16(3), 1317; https://doi.org/10.3390/app16031317 - 28 Jan 2026
Viewed by 170
Abstract
Digital twins, understood as computational replicas of poultry production systems updated in real time by sensor data, are increasingly invoked as transformative tools for precision livestock farming and sustainable agriculture. They are credited with enhancing feed efficiency, reducing greenhouse gas emissions, enabling earlier disease detection, and improving animal welfare. Yet close examination of the published evidence reveals that these promises rest on a surprisingly narrow empirical foundation. Across the available literature, no peer-reviewed study has quantified the full lifecycle carbon footprint of digital twin infrastructure in poultry production. Only one field-validated investigation reports a measurable improvement in feed conversion ratio attributable to digital optimization, and that study’s design constrains its general applicability. A standardized performance assessment framework specific to poultry has not been established. Quantitative evaluations of reliability are scarce, limited to a small number of studies reporting data loss, sensor degradation, and cloud system downtime, and no work has documented abandonment timelines or reasons for discontinuation. The result is a pronounced gap between technological aspiration and verified performance. Progress in this domain will depend on small-scale, deeply instrumented deployments capable of generating the longitudinal, multidimensional evidence required to substantiate the environmental and operational benefits attributed to digital twins. Full article

20 pages, 5876 KB  
Article
Dynamic Die-Forging Scene Semantic Segmentation via Point Cloud–BEV Feature Fusion with Star Encoding
by Xuewen Feng, Aiming Wang, Guoying Meng, Yiyang Xu, Jie Yang, Xiaohan Cheng, Yijin Xiong and Juntao Wang
Sensors 2026, 26(2), 708; https://doi.org/10.3390/s26020708 - 21 Jan 2026
Viewed by 224
Abstract
Semantic segmentation of workpieces and die cavities is critical for intelligent process monitoring and quality control in hammer die-forging. However, the field of 3D point cloud segmentation currently faces prominent limitations in forging scenario adaptation: existing state-of-the-art (SOTA) methods are predominantly optimized for road driving or indoor scenes, where targets have stable poses and regular surfaces. They lack dedicated designs for capturing the fine-grained deformation characteristics of forging workpieces and for alleviating the multi-scale feature misalignment caused by large pose variations, which are key pain points in forging segmentation. Consequently, these methods fail to balance the segmentation accuracy and real-time efficiency required for practical forging applications. To address this gap, this paper proposes a novel semantic segmentation framework fusing 3D point cloud and bird’s-eye-view (BEV) representations for complex die-forging scenes. Specifically, a Star-based encoding module is designed in the BEV encoding stage to enhance the capture of fine-grained workpiece deformation characteristics. A hierarchical feature-offset alignment mechanism is developed in decoding to alleviate multi-scale spatial and semantic misalignment, facilitating efficient cross-layer fusion. Additionally, a weighted adaptive fusion module enables complementary information interaction between the point cloud and BEV modalities to improve precision. We evaluate the proposed method on our self-constructed simulated and real die-forging point cloud datasets. The results show that when trained solely on simulated data and tested directly in real-world scenarios, our method achieves an mIoU that surpasses RPVNet by 1.1%. After fine-tuning with a small amount of real data, the mIoU further improves by 5%, reaching optimal performance. Full article
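As background for the point cloud–BEV fusion described above, the sketch below shows a generic projection of a point cloud onto a bird's-eye-view height grid; the grid extent and resolution are assumptions, and the paper's Star encoding and fusion modules are not reproduced here.

import numpy as np

def point_cloud_to_bev(points, x_range=(-2.0, 2.0), y_range=(-2.0, 2.0), res=0.02):
    """Project an (N, 3) point cloud onto a bird's-eye-view grid.
    Each cell stores the maximum height of the points falling into it;
    ranges and resolution are illustrative, not the paper's settings."""
    nx = int((x_range[1] - x_range[0]) / res)
    ny = int((y_range[1] - y_range[0]) / res)
    bev = np.full((nx, ny), -np.inf)
    ix = ((points[:, 0] - x_range[0]) / res).astype(int)
    iy = ((points[:, 1] - y_range[0]) / res).astype(int)
    valid = (ix >= 0) & (ix < nx) & (iy >= 0) & (iy < ny)
    np.maximum.at(bev, (ix[valid], iy[valid]), points[valid, 2])
    bev[np.isinf(bev)] = 0.0          # empty cells set to zero height
    return bev

pts = np.random.rand(10000, 3) * np.array([4.0, 4.0, 1.0]) - np.array([2.0, 2.0, 0.0])
height_map = point_cloud_to_bev(pts)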

19 pages, 5302 KB  
Article
LSSCC-Net: Integrating Spatial-Feature Aggregation and Adaptive Attention for Large-Scale Point Cloud Semantic Segmentation
by Wenbo Wang, Xianghong Hua, Cheng Li, Pengju Tian, Yapeng Wang and Lechao Liu
Symmetry 2026, 18(1), 124; https://doi.org/10.3390/sym18010124 - 8 Jan 2026
Viewed by 314
Abstract
Point cloud semantic segmentation is a key technology for applications such as autonomous driving, robotics, and virtual reality. Current approaches rely heavily on local relative coordinates and simplistic attention mechanisms to aggregate neighborhood information. This often leads to an ineffective joint representation of geometric perturbations and feature variations, coupled with a lack of adaptive selection of salient features during context fusion. To address these limitations, we propose LSSCC-Net, a novel segmentation framework based on LACV-Net. First, a spatial-feature dynamic aggregation module is designed to fuse offset information through symmetric interaction between spatial positions and feature channels, thus supplementing local structural information. Second, a dual-dimensional attention mechanism (spatial and channel) is introduced, symmetrically deploying attention modules in both the encoder and decoder to prioritize salient information extraction. Finally, the Lovász-Softmax loss is used as an auxiliary loss to optimize the training objective. The proposed method is evaluated on two public benchmark datasets, achieving mIoU scores of 83.6% on Toronto3D and 65.2% on S3DIS. Compared with the baseline LACV-Net, LSSCC-Net shows notable improvements in challenging categories: the IoU for “road mark” and “fence” on Toronto3D increased by 3.6% and 8.1%, respectively. These results indicate that LSSCC-Net more accurately characterizes complex boundaries and fine-grained structures, enhancing segmentation of small-scale targets and category boundaries. Full article
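The mIoU figures quoted above follow the standard confusion-matrix definition, as in the sketch below; this is the generic metric, not code from the paper, and classes absent from both prediction and ground truth are simply scored as zero here for brevity.

import numpy as np

def miou(pred, gt, num_classes):
    """Mean intersection-over-union from flattened per-point integer labels."""
    cm = np.zeros((num_classes, num_classes), dtype=np.int64)
    np.add.at(cm, (gt, pred), 1)                      # confusion matrix (rows = GT)
    inter = np.diag(cm).astype(float)
    union = cm.sum(axis=0) + cm.sum(axis=1) - np.diag(cm)
    iou = inter / np.maximum(union, 1)                # per-class IoU
    return iou, float(np.mean(iou))

gt = np.random.randint(0, 5, 10000)
pred = np.random.randint(0, 5, 10000)
per_class_iou, mean_iou = miou(pred, gt, num_classes=5)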

19 pages, 4426 KB  
Article
A Smart AIoT-Based Mobile Application for Plant Disease Detection and Environment Management in Small-Scale Farms Using MobileViT
by Mohamed Bahaa, Abdelrahman Hesham, Fady Ashraf and Lamiaa Abdel-Hamid
AgriEngineering 2026, 8(1), 11; https://doi.org/10.3390/agriengineering8010011 - 1 Jan 2026
Viewed by 753
Abstract
Small-scale farms produce more than one-third of the world’s food supply, making them a crucial contributor to global food security. In this study, an artificial intelligence of things (AIoT) framework is introduced for smart small-scale farm management. For plant disease detection, the lightweight MobileViT model, which integrates vision transformer and convolutional modules, was utilized to efficiently capture both global and local image features. Data augmentation and transfer learning were employed to enhance the model’s overall performance. MobileViT achieved a test accuracy of 99.5%, with per-class precision, recall, and F1-score ranging between 0.92 and 1.00 on the benchmark PlantVillage dataset (14 species, 38 classes), outperforming several standard deep convolutional networks, including MobileNet, ResNet, and Inception, by 2–12%. Additionally, an LLM-powered interactive chatbot was integrated to provide farmers with instant plant care suggestions. For plant environment management, the powerful, cost-effective ESP32 microcontroller was utilized as the core processing unit responsible for collecting sensor data (e.g., soil moisture), controlling actuators (e.g., a water pump for irrigation), and maintaining connectivity with Google Firebase Cloud. Finally, a mobile application was developed to integrate the AI and IoT system capabilities, providing users with a reliable platform for smart plant disease detection and environment management. Each system component was tested individually before being incorporated into the mobile application and tested in real-world scenarios. The presented AIoT-based solution has the potential to enhance crop productivity within small-scale farms while promoting sustainable farming practices and efficient resource management. Full article
(This article belongs to the Special Issue Precision Agriculture: Sensor-Based Systems and IoT-Enabled Machinery)
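A minimal transfer-learning sketch in the spirit of the workflow above, using a pretrained MobileViT from the timm library re-headed for 38 classes; the variant name, frozen-backbone strategy, input size, and optimizer settings are assumptions rather than the authors' configuration, and pretrained weights are downloaded on first use.

import timm
import torch
import torch.nn.functional as F

# Pretrained MobileViT with a new 38-class classifier head (assumed timm variant).
model = timm.create_model("mobilevit_s", pretrained=True, num_classes=38)

# Transfer learning: freeze the backbone and train only the classifier head first.
head_params = {id(p) for p in model.get_classifier().parameters()}
for p in model.parameters():
    p.requires_grad = id(p) in head_params

optimizer = torch.optim.AdamW([p for p in model.parameters() if p.requires_grad], lr=1e-3)

images = torch.randn(4, 3, 256, 256)     # stand-in for a batch of leaf images
labels = torch.randint(0, 38, (4,))
loss = F.cross_entropy(model(images), labels)
loss.backward()
optimizer.step()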

19 pages, 3122 KB  
Article
Feasibility of Deep Learning-Based Iceberg Detection in Land-Fast Arctic Sea Ice Using YOLOv8 and SAR Imagery
by Johnson Bailey and John Stott
Remote Sens. 2025, 17(24), 3998; https://doi.org/10.3390/rs17243998 - 11 Dec 2025
Viewed by 775
Abstract
Iceberg detection in Arctic sea-ice environments is essential for navigation safety and climate monitoring, yet remains challenging due to observational and environmental constraints. The scarcity of labelled data, limited optical coverage caused by cloud and polar night conditions, and the small, irregular signatures of icebergs in synthetic aperture radar (SAR) imagery make automated detection difficult. This study evaluates the environmental feasibility of applying a modern deep learning model for iceberg detection within land-fast sea ice. We adapt a YOLOv8 convolutional neural network within the Dual Polarisation Intensity Ratio Anomaly Detector (iDPolRAD) framework using dual-polarised Sentinel-1 SAR imagery from the Franz Josef Land region, validated against Sentinel-2 optical data. A total of 2344 icebergs were manually labelled to generate the training dataset. Results demonstrate that the network is capable of detecting icebergs embedded in fast ice with promising precision under highly constrained data conditions (precision = 0.81; recall = 0.68; F1 = 0.74; mAP = 0.78). These findings indicate that deep learning can function effectively within the physical and observational limitations of current Arctic monitoring, establishing a foundation for future large-scale applications once broader datasets become available. Full article
(This article belongs to the Special Issue Applications of SAR for Environment Observation Analysis)
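Adapting an off-the-shelf YOLOv8 detector to labelled SAR chips, as described above, typically looks like the following Ultralytics sketch; the dataset YAML, image size, epoch count, and confidence threshold are placeholders, and "iceberg_sar.yaml" and "scene_chip.png" are hypothetical files, not the study's data.

from ultralytics import YOLO

# Fine-tune a small pretrained YOLOv8 model on annotated SAR image chips.
model = YOLO("yolov8n.pt")
model.train(data="iceberg_sar.yaml", imgsz=640, epochs=100, batch=16)

metrics = model.val()                                   # precision, recall, mAP on the val split
detections = model.predict("scene_chip.png", conf=0.25) # detections for one SAR chip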

19 pages, 2253 KB  
Article
A Domain-Adversarial Mechanism and Invariant Spatiotemporal Feature Extraction Based Distributed PV Forecasting Method for EV Cluster Baseline Load Estimation
by Zhiyu Zhao, Qiran Li, Bo Bo, Po Yang, Xuemei Li, Zhenghao Wu, Ge Wang and Hui Ren
Electronics 2025, 14(23), 4709; https://doi.org/10.3390/electronics14234709 - 29 Nov 2025
Cited by 2 | Viewed by 325
Abstract
Against the backdrop of high-penetration distributed photovoltaic (DPV) integration into distribution networks, the limited measurability of small-scale DPV systems poses significant challenges to accurately estimating the baseline load of electric vehicle (EV) clusters, making effective forecasting of DPV power output essential. DPV power output is influenced by scattered geographical distribution and abrupt weather changes, leading to complex spatiotemporal distribution shifts that markedly degrade the generalization capability of traditional models relying on historical statistical patterns. To enhance robustness in such complex and dynamic environments, this paper proposes a domain-adversarial architecture for ultra-short-term DPV power forecasting, designed to support baseline load estimation for EV clusters by extracting spatiotemporally invariant features that are robust to distribution shifts. First, a Graph Attention Network (GAT) is utilized to capture spatial dependencies among PV stations, characterizing asynchronous power fluctuations caused by factors such as cloud movement. Next, the spatiotemporally fused features generated by the GAT are adaptively partitioned into multiple distribution domains using Hierarchical Density-Based Spatial Clustering of Applications with Noise (HDBSCAN), providing pseudo-supervised signals for subsequent adversarial learning. Finally, a Temporal Convolutional Network (TCN)-based domain-adversarial mechanism is introduced, in which gradient reversal training forces the feature extractor to discard domain-specific characteristics, thereby extracting spatiotemporally invariant features across domains. Experimental results on real-world distributed PV datasets validate the effectiveness of the proposed method in improving prediction accuracy and generalization capability under transitional weather conditions. Full article
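The gradient reversal training mentioned above is commonly implemented with a small autograd function, as in the standard DANN-style sketch below; the lambda value and the toy domain classifier are assumptions, not the paper's network.

import torch

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; scales gradients by -lambda in the backward
    pass, so the feature extractor is trained to fool the domain classifier."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

def grad_reverse(x, lam=1.0):
    return GradReverse.apply(x, lam)

# Usage sketch: encoder features pass through the reversal layer before a domain
# classifier predicts the (e.g., HDBSCAN-derived) domain label.
feats = torch.randn(8, 64, requires_grad=True)
domain_head = torch.nn.Linear(64, 4)
domain_logits = domain_head(grad_reverse(feats, lam=0.5))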

20 pages, 4682 KB  
Article
EAS-Det: Edge-Aware Semantic Feature Fusion for Robust 3D Object Detection in LiDAR Point Clouds
by Huishan Wang, Jie Ma, Yuehua Zhao, Jianlei Zhang and Fangwei Chen
Remote Sens. 2025, 17(22), 3743; https://doi.org/10.3390/rs17223743 - 18 Nov 2025
Viewed by 838
Abstract
Accurate 3D object detection and localization in LiDAR point clouds are crucial for applications such as autonomous driving and UAV-based monitoring. However, existing detectors often suffer from the loss of critical geometric information during network processing, mainly due to downsampling and pooling operations. This leads to imprecise object boundaries and degraded detection accuracy, particularly for small objects. To address these challenges, we propose Edge-Aware Semantic Feature Fusion for Detection (EAS-Det), a lightweight, plug-and-play framework for LiDAR-based perception. The core module, Edge-Semantic Interaction (ESI), employs a dual-attention mechanism to adaptively fuse geometric edge cues with high-level semantic context, yielding multi-scale representations that preserve structural details while enhancing contextual awareness. EAS-Det is compatible with mainstream backbones such as PointPillars and PV-RCNN. Extensive experiments on the KITTI and Waymo datasets demonstrate consistent and significant improvements, achieving up to 10.34% and 8.66% AP gains for pedestrians and cyclists, respectively, on the KITTI benchmark. These results underscore the effectiveness and generalizability of EAS-Det for robust 3D object detection in complex real-world environments. Full article
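As a simplified stand-in for the attention-based edge–semantic fusion described above, the sketch below gates two per-point feature maps with a learned sigmoid weight; it is not the ESI module itself, and the gating design and tensor shapes are assumptions.

import torch
import torch.nn as nn

class GatedFusion(nn.Module):
    """Fuse edge-aware and semantic features with a learned per-channel gate,
    a simplified placeholder for attention-based fusion."""
    def __init__(self, channels):
        super().__init__()
        self.gate = nn.Sequential(nn.Linear(2 * channels, channels), nn.Sigmoid())

    def forward(self, edge_feat, sem_feat):           # both (N, C) per-point features
        g = self.gate(torch.cat([edge_feat, sem_feat], dim=-1))
        return g * edge_feat + (1.0 - g) * sem_feat   # convex, gated combination

fused = GatedFusion(64)(torch.randn(1024, 64), torch.randn(1024, 64))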

18 pages, 7743 KB  
Article
Improved Daytime Cloud Detection Algorithm in FY-4A’s Advanced Geostationary Radiation Imager
by Xiao Zhang, Song-Ying Zhao and Rui-Xuan Tang
Atmosphere 2025, 16(9), 1105; https://doi.org/10.3390/atmos16091105 - 20 Sep 2025
Viewed by 737
Abstract
Cloud detection is an indispensable step in satellite remote sensing of cloud properties and of surface objects affected by cloud occlusion. Nevertheless, interfering targets such as snow and haze pollution are easily misjudged as clouds by most current algorithms, so a robust cloud detection algorithm is urgently needed, especially for regions with high latitudes or severe air pollution. This paper demonstrated that the Advanced Geostationary Radiation Imager (AGRI), a passive detector onboard the FY-4A satellite, is prone to misjudging the dense aerosols in haze pollution as clouds during the daytime, and constructed a concise, computationally fast algorithm based on the spectral information of AGRI’s 14 bands. The study adjusted a previously proposed cloud mask rectification algorithm for the Moderate Resolution Imaging Spectroradiometer (MODIS), rectified the MODIS cloud detection results, and used them as reference cloud mask data. The algorithm was constructed using adjusted Fisher discrimination analysis (AFDA) and spectral spatial variability (SSV) methods over four different underlying surfaces (land, desert, snow, and water) and two seasons (summer and winter), and divides identification into two steps: screening confident cloud clusters, and then capturing the broken clouds that are difficult to recognize. In the first step, channels with obvious differences between cloudy and cloud-free areas were selected, and AFDA was utilized to build a weighted-sum formula across the normalized spectral data of the selected bands; this transforms the traditional dynamic-threshold test on multiple bands into a simple test of a single summation value. In the second step, SSV was used to capture broken clouds by calculating the standard deviation (STD) of the spectra in every 3 × 3-pixel window, quantifying spectral homogeneity at a small scale. To assess the algorithm’s spatial and temporal generalizability, two evaluations were conducted: one examining four key regions and another assessing three different times on a single day over East China. The results showed that the algorithm achieves excellent accuracy across the four underlying surfaces, is insensitive to the main interferences such as haze and snow, and shows strong detection capability for broken clouds. Its low computational complexity allows widespread application to different regions and times of day, indicating that fast and robust cloud detection can be achieved with this method. Full article
(This article belongs to the Section Atmospheric Techniques, Instruments, and Modeling)
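The spectral spatial variability step described above amounts to a local standard deviation in a 3 × 3 window, as in the sketch below; the synthetic band and the threshold value are assumptions, not the paper's settings.

import numpy as np
from scipy.ndimage import uniform_filter

def local_std(band, size=3):
    """Standard deviation of reflectance within each size x size pixel window,
    used to flag spectrally heterogeneous pixels such as broken clouds."""
    mean = uniform_filter(band, size=size)
    mean_sq = uniform_filter(band * band, size=size)
    return np.sqrt(np.maximum(mean_sq - mean * mean, 0.0))

# Hypothetical use: a pixel is a broken-cloud candidate if its 3x3 variability
# exceeds a threshold (the threshold here is an assumed value).
band = np.random.rand(256, 256)
broken_cloud_candidates = local_std(band) > 0.08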

19 pages, 2082 KB  
Article
Multi-Scale Grid-Based Semantic Surface Point Generation for 3D Object Detection
by Xin-Fu Chen, Chun-Chieh Lee, Jung-Hua Lo, Chi-Hung Chuang and Kuo-Chin Fan
Electronics 2025, 14(17), 3492; https://doi.org/10.3390/electronics14173492 - 31 Aug 2025
Viewed by 888
Abstract
3D object detection is a crucial technology in fields such as autonomous driving and robotics. As a direct representation of the 3D world, point cloud data plays a vital role in feature extraction and geometric representation. However, in real-world applications, point cloud data often suffers from occlusion, resulting in incomplete observations and degraded detection performance. Existing methods, such as PG-RCNN, generate semantic surface points within each Region of Interest (RoI) using a single grid size. However, a fixed grid scale cannot adequately capture multi-scale features. A grid that is too small may miss fine structures—especially problematic when dealing with small or sparse objects—while a grid that is too large may introduce excessive background noise, reducing the precision of feature representation. To address this issue, we propose an enhanced PG-RCNN architecture with a Multi-Scale Grid Attention Module as the core contribution. This module improves the expressiveness of point features by aggregating multi-scale information and dynamically weighting features from different grid resolutions. Using a simple linear transformation, we generate attention weights to guide the model to focus on regions that contribute more to object recognition, while effectively filtering out redundant noise. We evaluate our method on the KITTI 3D object detection validation set. Experimental results show that, compared to the original PG-RCNN, our approach improves performance on the Cyclist category by 2.66% and 2.54% in the Moderate and Hard settings, respectively. Additionally, our approach shows more stable performance on small object detection tasks, with an average improvement of 2.57%, validating the positive impact of the Multi-Scale Grid Attention Module on fine-grained geometric modeling, and highlighting the efficiency and generalizability of our model. Full article
(This article belongs to the Special Issue Digital Signal and Image Processing for Multimedia Technology)
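A minimal sketch of attention-weighted aggregation over several grid resolutions, the general mechanism this entry describes; the single linear scoring layer and the tensor shapes are assumptions, not the paper's exact module.

import torch
import torch.nn as nn

class MultiScaleGridAttention(nn.Module):
    """Aggregate RoI grid features sampled at several grid sizes using attention
    weights produced by one linear layer; a simplified sketch, not the paper's module."""
    def __init__(self, channels):
        super().__init__()
        self.score = nn.Linear(channels, 1)

    def forward(self, feats):                          # feats: (num_scales, N, C)
        w = torch.softmax(self.score(feats), dim=0)    # (num_scales, N, 1) weights
        return (w * feats).sum(dim=0)                  # (N, C) fused grid features

rois = torch.randn(3, 216, 128)     # assumed: 3 grid scales, 216 grid points, 128 channels
fused = MultiScaleGridAttention(128)(rois)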

25 pages, 7748 KB  
Article
A Deep Learning Approach to Identify Rock Bolts in Complex 3D Point Clouds of Underground Mines Captured Using Mobile Laser Scanners
by Dibyayan Patra, Pasindu Ranasinghe, Bikram Banerjee and Simit Raval
Remote Sens. 2025, 17(15), 2701; https://doi.org/10.3390/rs17152701 - 4 Aug 2025
Cited by 3 | Viewed by 1850
Abstract
Rock bolts are crucial components in the subterranean support systems in underground mines that provide adequate structural reinforcement to the rock mass to prevent unforeseen hazards like rockfalls. This makes frequent assessments of such bolts critical for maintaining rock mass stability and minimising risks in underground mining operations. Where manual surveying of rock bolts is challenging due to the low-light conditions in the underground mines and the time-intensive nature of the process, automated detection of rock bolts serves as a plausible solution. To that end, this study focuses on the automatic identification of rock bolts within medium- to large-scale 3D point clouds obtained from underground mines using mobile laser scanners. Existing techniques for automated rock bolt identification primarily rely on feature engineering and traditional machine learning approaches. However, such techniques lack robustness as these point clouds present several challenges due to data noise, varying environments, and complex surrounding structures. Moreover, the target rock bolts are extremely small objects within large-scale point clouds and are often partially obscured due to the application of reinforcement shotcrete. Addressing these challenges, this paper proposes an approach termed DeepBolt, which employs a novel two-stage deep learning architecture specifically designed for handling severe class imbalance for the automatic and efficient identification of rock bolts in complex 3D point clouds. The proposed method surpasses state-of-the-art semantic segmentation models by up to 42.5% in Intersection over Union (IoU) for rock bolt points. Additionally, it outperforms existing rock bolt identification techniques, achieving a 96.41% precision and 96.96% recall in classifying rock bolts, demonstrating its robustness and effectiveness in complex underground environments. Full article
(This article belongs to the Special Issue New Perspectives on 3D Point Cloud (Third Edition))
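Handling the severe class imbalance noted above often starts with inverse-frequency class weights in the loss, as sketched below; this is a generic technique, not DeepBolt's two-stage architecture, and the toy label distribution is an assumption.

import numpy as np
import torch

def inverse_frequency_weights(labels, num_classes):
    """Per-class weights inversely proportional to point frequency, a common way
    to counter the imbalance between rock-bolt and background points."""
    counts = np.bincount(labels, minlength=num_classes).astype(np.float64)
    weights = counts.sum() / np.maximum(counts * num_classes, 1.0)
    return torch.tensor(weights, dtype=torch.float32)

# Toy labels: roughly one bolt point for every seven background points (assumed).
labels = np.random.choice([0, 0, 0, 0, 0, 0, 0, 1], size=100_000)
criterion = torch.nn.CrossEntropyLoss(weight=inverse_frequency_weights(labels, 2))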

17 pages, 4914 KB  
Article
Large-Scale Point Cloud Semantic Segmentation with Density-Based Grid Decimation
by Liangcun Jiang, Jiacheng Ma, Han Zhou, Boyi Shangguan, Hongyu Xiao and Zeqiang Chen
ISPRS Int. J. Geo-Inf. 2025, 14(7), 279; https://doi.org/10.3390/ijgi14070279 - 17 Jul 2025
Cited by 4 | Viewed by 3156
Abstract
Accurate segmentation of point clouds into categories such as roads, buildings, and trees is critical for applications in 3D reconstruction and autonomous driving. However, large-scale point cloud segmentation encounters challenges such as uneven density distribution, inefficient sampling, and limited feature extraction capabilities. To address these issues, this paper proposes RT-Net, a novel framework that incorporates a density-based grid decimation algorithm for efficient preprocessing of outdoor point clouds. The proposed framework helps alleviate the problem of uneven density distribution and improves computational efficiency. RT-Net also introduces two modules: Local Attention Aggregation, which extracts local detailed features of points using an attention mechanism, enhancing the model’s recognition ability for small-sized objects; and Attention Residual, which integrates local details of point clouds with global features by an attention mechanism to improve the model’s generalization ability. Experimental results on the Toronto3D, Semantic3D, and SemanticKITTI datasets demonstrate the superiority of RT-Net for small-sized object segmentation, achieving state-of-the-art mean Intersection over Union (mIoU) scores of 86.79% on Toronto3D and 79.88% on Semantic3D. Full article
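Density-based grid decimation, as named above, can be approximated by capping the number of points kept per voxel so that dense regions are thinned more than sparse ones; the sketch below is a generic version with an assumed voxel size and cap, not the paper's algorithm.

import numpy as np

def density_grid_decimation(points, voxel=0.5, max_per_voxel=16, seed=0):
    """Decimate a point cloud on a voxel grid, keeping at most max_per_voxel
    randomly chosen points per cell; voxel size and cap are illustrative values."""
    rng = np.random.default_rng(seed)
    keys = np.floor(points[:, :3] / voxel).astype(np.int64)
    _, inv = np.unique(keys, axis=0, return_inverse=True)   # voxel id per point
    keep = np.zeros(len(points), dtype=bool)
    counts = {}
    for i in rng.permutation(len(points)):                  # random order within voxels
        c = counts.get(inv[i], 0)
        if c < max_per_voxel:
            keep[i] = True
            counts[inv[i]] = c + 1
    return points[keep]

cloud = np.random.rand(200000, 3) * np.array([20.0, 20.0, 5.0])
thinned = density_grid_decimation(cloud)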
