
Search Results (45)

Search Parameters:
Keywords = categories of cloud simulators

21 pages, 5895 KB  
Article
Intelligent 3D Potato Cutting Simulation System Based on Multi-View Images and Point Cloud Fusion
by Ruize Xu, Chen Chen, Fanyi Liu and Shouyong Xie
Agriculture 2025, 15(19), 2088; https://doi.org/10.3390/agriculture15192088 - 7 Oct 2025
Viewed by 773
Abstract
The quality of seed pieces is crucial for potato planting. Each seed piece should contain viable potato eyes and maintain a uniform size for mechanized planting. However, existing intelligent methods are limited by a single view, making it difficult to satisfy both requirements simultaneously. To address this problem, we present an intelligent 3D potato cutting simulation system. A sparse 3D point cloud of the potato is reconstructed from multi-perspective images, which are acquired with a single-camera rotating platform. Subsequently, the 2D positions of potato eyes in each image are detected using deep learning, from which their 3D positions are mapped via back-projection and a clustering algorithm. Finally, the cutting paths are optimized by a Bayesian optimizer, which incorporates both the potato’s volume and the locations of its eyes, and generates cutting schemes suitable for different potato size categories. Experimental results showed that the system achieved a mean absolute percentage error of 2.16% (95% CI: 1.60–2.73%) for potato volume estimation, a potato eye detection precision of 98%, and a recall of 94%. The optimized cutting plans showed a volume coefficient of variation below 0.10 and avoided damage to the detected potato eyes, producing seed pieces that each contained potato eyes. This work demonstrates that the system can effectively utilize the detected potato eye information to obtain seed pieces containing potato eyes and having uniform size. The proposed system provides a feasible pathway for high-precision automated seed potato cutting. Full article
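The two headline metrics of this abstract, mean absolute percentage error for volume estimation and the coefficient of variation for seed-piece uniformity, are standard and easy to reproduce. A minimal sketch follows; the volumes below are hypothetical illustration data, not values from the paper:

```python
def mape(estimated, actual):
    """Mean absolute percentage error, in percent."""
    return 100.0 * sum(abs(e - a) / a for e, a in zip(estimated, actual)) / len(actual)

def coefficient_of_variation(volumes):
    """Standard deviation over mean; the paper reports values below 0.10."""
    mean = sum(volumes) / len(volumes)
    var = sum((v - mean) ** 2 for v in volumes) / len(volumes)
    return var ** 0.5 / mean

# Hypothetical estimated vs. ground-truth potato volumes (cm^3)
est = [102.0, 98.5, 201.0, 150.5]
true = [100.0, 100.0, 205.0, 148.0]
print(round(mape(est, true), 2))

# Hypothetical seed-piece volumes cut from one potato
pieces = [48.0, 52.0, 50.0, 49.0]
print(round(coefficient_of_variation(pieces), 3))
```

A CV below 0.10 means the seed pieces deviate from their mean volume by less than 10% on average, which is the paper's uniformity criterion for mechanized planting.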
(This article belongs to the Section Artificial Intelligence and Digital Agriculture)

54 pages, 18368 KB  
Article
LUME 2D: A Linear Upslope Model for Orographic and Convective Rainfall Simulation
by Andrea Abbate and Francesco Apadula
Meteorology 2025, 4(4), 28; https://doi.org/10.3390/meteorology4040028 - 3 Oct 2025
Cited by 1 | Viewed by 931
Abstract
Rainfall is the result of complex cloud microphysical processes, and estimating its intensity and duration is a key task in assessing precipitation magnitude. In mountainous areas, extreme rainfall may cause severe side effects on the ground, triggering geo-hydrological hazards (floods and landslides) that impact people, human activities, buildings, and infrastructure. A tool able to reconstruct rainfall processes in an easy and understandable way is therefore valuable for non-expert stakeholders and researchers who deal with rainfall management. In this work, an evolution of LUME (Linear Upslope Model Experiment), designed to simplify the study of the rainfall process, is presented. The main novelties of the new version, called LUME 2D, are (1) the 2D domain extension, (2) the inclusion of warm-rain and cold-rain bulk-microphysical schemes (with snow and hail categories), and (3) the simulation of convective precipitation. The model was completely rewritten in Python (version 3.11) and tested on a heavy rainfall event that occurred in Piedmont in April 2025. Using a 2D spatial and temporal interpolation of radiosonde data, the model reconstructed a realistic rainfall field for the event, reproducing the rainfall intensity pattern rather accurately. Applying the cold-microphysics schemes, the snow and hail amounts were evaluated, and the amplification of rainfall intensity due to moist convection activation was detected in the results. LUME 2D has proven to be an easy tool for further studies of intense rainfall events, improving understanding and highlighting their peculiarities in a straightforward way suitable for non-expert users. Full article
(This article belongs to the Special Issue Early Career Scientists' (ECS) Contributions to Meteorology (2025))

23 pages, 348 KB  
Review
Machine Learning-Based Quality Control for Low-Cost Air Quality Monitoring: A Comprehensive Review of the Past Decade
by Yong-Hyuk Kim and Seung-Hyun Moon
Atmosphere 2025, 16(10), 1136; https://doi.org/10.3390/atmos16101136 - 27 Sep 2025
Viewed by 1725
Abstract
Air pollution poses major risks to public health, driving the adoption of low-cost sensor (LCS) networks for fine-grained and real-time monitoring. However, the variable accuracy of LCS data compared with reference instruments necessitates robust quality control (QC) frameworks. Over the past decade, machine learning (ML) has emerged as a powerful tool to calibrate sensors, detect anomalies, and mitigate drift in large-scale deployment. This survey reviews advances in three methodological categories: traditional ML models, deep learning architectures, and hybrid or unsupervised methods. We also examine spatiotemporal QC frameworks that exploit redundancies across time and space, as well as real-time implementations based on edge–cloud architectures. Applications include personal exposure monitoring, integration with atmospheric simulations, and support for policy decision making. Despite these achievements, several challenges remain. Traditional models are lightweight but often fail to generalize across contexts, while deep learning models achieve higher accuracy but demand large datasets and remain difficult to interpret. Spatiotemporal approaches improve robustness but face scalability constraints, and real-time systems must balance computational efficiency with accuracy. Broader adoption will also require clear standards, reliable uncertainty quantification, and sustained trust in corrected data. In summary, ML-based QC shows strong potential but is still constrained by data quality, transferability, and governance gaps. Future work should integrate physical knowledge with ML, leverage federated learning for scalability, and establish regulatory benchmarks. Addressing these challenges will enable ML-driven QC to deliver reliable, high-resolution data that directly support science-based policy and public health. Full article
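The simplest instance of the "traditional ML" calibration this review covers is an ordinary least-squares fit mapping raw low-cost sensor readings onto collocated reference measurements. The sketch below uses synthetic PM2.5 readings invented for illustration; real calibration pipelines typically add covariates such as temperature and humidity:

```python
def fit_linear_calibration(raw, reference):
    """Ordinary least squares y = a*x + b mapping raw LCS readings
    to collocated reference-instrument values."""
    n = len(raw)
    mx = sum(raw) / n
    my = sum(reference) / n
    a = sum((x - mx) * (y - my) for x, y in zip(raw, reference)) \
        / sum((x - mx) ** 2 for x in raw)
    b = my - a * mx
    return a, b

# Hypothetical PM2.5 pairs: the low-cost sensor under-reads by a factor
# of 1.2 with a constant offset of 2 ug/m^3
raw = [10.0, 20.0, 30.0, 40.0]
ref = [14.0, 26.0, 38.0, 50.0]
a, b = fit_linear_calibration(raw, ref)
print(round(a, 3), round(b, 3))  # slope 1.2, intercept 2.0
```

Once fitted, corrected readings are simply `a * x + b`; the review's point is that such lightweight models are cheap to deploy on-sensor but often fail to generalize across sites and seasons.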
(This article belongs to the Special Issue Emerging Technologies for Observation of Air Pollution (2nd Edition))
35 pages, 2863 KB  
Article
DeepSIGNAL-ITS—Deep Learning Signal Intelligence for Adaptive Traffic Signal Control in Intelligent Transportation Systems
by Mirabela Melinda Medvei, Alin-Viorel Bordei, Ștefania Loredana Niță and Nicolae Țăpuș
Appl. Sci. 2025, 15(17), 9396; https://doi.org/10.3390/app15179396 - 27 Aug 2025
Viewed by 1963
Abstract
Urban traffic congestion remains a major contributor to vehicle emissions and travel inefficiency, prompting the need for adaptive and intelligent traffic management systems. In response, we introduce DeepSIGNAL-ITS (Deep Learning Signal Intelligence for Adaptive Lights in Intelligent Transportation Systems), a unified framework that leverages real-time traffic perception and learning-based control to optimize signal timing and reduce congestion. The system integrates vehicle detection via the YOLOv8 architecture at roadside units (RSUs) and manages signal control using Proximal Policy Optimization (PPO), guided by global traffic indicators such as accumulated vehicle waiting time. Secure communication between RSUs and cloud infrastructure is ensured through Transport Layer Security (TLS)-encrypted data exchange. We validate the framework through extensive simulations in SUMO across diverse urban settings. Simulation results show an average 30.20% reduction in vehicle waiting time at signalized intersections compared to baseline fixed-time configurations derived from OpenStreetMap (OSM). Furthermore, emissions assessed via the HBEFA-based model in SUMO reveal measurable reductions across pollutant categories, underscoring the framework’s dual potential to improve both traffic efficiency and environmental sustainability in simulated urban environments. Full article
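The abstract says only that the PPO controller is guided by accumulated vehicle waiting time. One common formulation of such a reward, shown here as an assumption rather than the authors' exact function, is the reduction in total waiting time between consecutive control steps:

```python
def waiting_time_reward(prev_total_wait, curr_total_wait):
    """Reward = reduction in accumulated vehicle waiting time since the
    last control step. Positive when the signal plan drained queues,
    negative when congestion grew."""
    return prev_total_wait - curr_total_wait

# Hypothetical waiting-time totals (seconds) summed over all approach lanes
assert waiting_time_reward(340.0, 290.0) == 50.0   # queues shrank -> reward
assert waiting_time_reward(290.0, 310.0) == -20.0  # queues grew -> penalty
```

In a SUMO-based setup like the one described, the per-step totals would typically be aggregated from per-vehicle waiting times queried through the simulator's API.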
(This article belongs to the Section Transportation and Future Mobility)

32 pages, 3244 KB  
Article
Exploring Industry 4.0 Technologies Implementation to Enhance Circularity in Spanish Manufacturing Enterprises
by Juan-José Ortega-Gras, María-Victoria Bueno-Delgado, José-Francisco Puche-Forte, Josefina Garrido-Lova and Rafael Martínez-Fernández
Sustainability 2025, 17(17), 7648; https://doi.org/10.3390/su17177648 - 25 Aug 2025
Cited by 2 | Viewed by 2572
Abstract
Industry 4.0 (I4.0) is reshaping manufacturing by integrating advanced digital technologies and is increasingly seen as an enabler of the circular economy (CE). However, most research treats digitalisation and circularity separately, with limited empirical insight regarding their combined implementation. This study investigates I4.0 adoption to support sustainability and CE across industries, focusing on how enterprise size influences adoption patterns. Based on survey data from 69 enterprises, the research examines which technologies are applied, at what stages of the product life cycle, and what barriers and drivers influence uptake. Findings reveal a modest but growing adoption led by the Internet of Things (IoT), big data, and integrated systems. While larger firms implement more advanced tools (e.g., robotics and simulation), smaller enterprises favour accessible solutions (e.g., IoT and cloud computing). A positive link is observed between digital adoption and CE practices, though barriers remain significant. Five main categories of perceived obstacles are identified: political/institutional, financial, social/market-related, technological/infrastructural, and legal/regulatory. Attitudinal resistance, particularly in micro and small enterprises, emerges as an additional challenge. Based on these insights, and to support the twin transition, the paper proposes targeted policies, including expanded funding, streamlined procedures, enhanced training, and tools for circular performance monitoring. Full article
(This article belongs to the Special Issue Achieving Sustainability: Role of Technology and Innovation)

32 pages, 2917 KB  
Article
Self-Adapting CPU Scheduling for Mixed Database Workloads via Hierarchical Deep Reinforcement Learning
by Suchuan Xing, Yihan Wang and Wenhe Liu
Symmetry 2025, 17(7), 1109; https://doi.org/10.3390/sym17071109 - 10 Jul 2025
Cited by 4 | Viewed by 2318
Abstract
Modern database systems require autonomous CPU scheduling frameworks that dynamically optimize resource allocation across heterogeneous workloads while maintaining strict performance guarantees. We present a novel hierarchical deep reinforcement learning framework augmented with graph neural networks to address CPU scheduling challenges in mixed database environments comprising Online Transaction Processing (OLTP), Online Analytical Processing (OLAP), vector processing, and background maintenance workloads. Our approach introduces three key innovations: first, a symmetric two-tier control architecture where a meta-controller allocates CPU budgets across workload categories using policy gradient methods while specialized sub-controllers optimize process-level resource allocation through continuous action spaces; second, graph neural network-based dependency modeling that captures complex inter-process relationships and communication patterns while preserving inherent symmetries in database architectures; and third, meta-learning integration with curiosity-driven exploration enabling rapid adaptation to previously unseen workload patterns without extensive retraining. The framework incorporates a multi-objective reward function balancing Service Level Objective (SLO) adherence, resource efficiency, symmetric fairness metrics, and system stability. Experimental evaluation through high-fidelity digital twin simulation and production deployment demonstrates substantial performance improvements: 43.5% reduction in p99 latency violations for OLTP workloads and 27.6% improvement in overall CPU utilization, with successful scaling to 10,000 concurrent processes maintaining sub-3% scheduling overhead. This work represents a significant advancement toward truly autonomous database resource management, establishing a foundation for next-generation self-optimizing database systems with implications extending to broader orchestration challenges in cloud-native architectures. 
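The top tier of the two-tier architecture allocates CPU budgets across the four workload categories. As an illustrative sketch only (a softmax over preference scores stands in for the paper's policy-gradient meta-controller), the allocation step might look like:

```python
import math

def allocate_budgets(scores, total_cpus):
    """Meta-controller step: normalize per-category preference scores with
    a softmax, then split the CPU budget proportionally across the
    OLTP / OLAP / vector / maintenance workload categories."""
    exps = [math.exp(s) for s in scores]
    z = sum(exps)
    return [total_cpus * e / z for e in exps]

# Hypothetical scores favoring OLTP, then OLAP, vector, and maintenance
shares = allocate_budgets([2.0, 1.0, 0.5, 0.0], total_cpus=64)
print([round(s, 1) for s in shares])
```

Each category's sub-controller would then distribute its share among individual processes; the softmax guarantees the shares are positive and sum exactly to the total budget.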
(This article belongs to the Section Computer)

22 pages, 7106 KB  
Article
Enhancing Highway Scene Understanding: A Novel Data Augmentation Approach for Vehicle-Mounted LiDAR Point Cloud Segmentation
by Dalong Zhou, Yuanyang Yi, Yu Wang, Zhenfeng Shao, Yanjun Hao, Yuyan Yan, Xiaojin Zhao and Junkai Guo
Remote Sens. 2025, 17(13), 2147; https://doi.org/10.3390/rs17132147 - 23 Jun 2025
Viewed by 1059
Abstract
The intelligent extraction of highway assets is pivotal for advancing transportation infrastructure and autonomous systems, yet traditional methods relying on manual inspection or 2D imaging struggle with sparse, occluded environments, and class imbalance. This study proposes an enhanced MinkUNet-based framework to address data scarcity, occlusion, and imbalance in highway point cloud segmentation. A large-scale dataset (PEA-PC Dataset) was constructed, covering six key asset categories, addressing the lack of specialized highway datasets. A hybrid conical masking augmentation strategy was designed to simulate natural occlusions and enhance local feature retention, while semi-supervised learning prioritized foreground differentiation. The experimental results showed that the overall mIoU reached 73.8%, with the IoU of bridge railings and emergency obstacles exceeding 95%. The IoU of columnar assets increased from 2.6% to 29.4% through occlusion perception enhancement, demonstrating the effectiveness of this method in improving object recognition accuracy. The framework balances computational efficiency and robustness, offering a scalable solution for sparse highway scenes. However, challenges remain in segmenting vegetation-occluded pole-like assets due to partial data loss. This work highlights the efficacy of tailored augmentation and semi-supervised strategies in refining 3D segmentation, advancing applications in intelligent transportation and digital infrastructure. Full article
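The hybrid conical masking augmentation removes points falling inside a cone to mimic natural occlusion. The exact geometry is not given in the abstract, so the sketch below assumes a downward-opening vertical cone with a fixed half-angle; apex placement and angle are illustrative parameters:

```python
def conical_mask(points, apex, half_angle_tan=0.5):
    """Drop points inside a downward-opening vertical cone with its apex at
    `apex`, simulating an occlusion. A point is masked when its horizontal
    distance from the cone axis is less than tan(half-angle) times its
    vertical drop below the apex."""
    ax, ay, az = apex
    kept = []
    for x, y, z in points:
        drop = az - z
        horiz = ((x - ax) ** 2 + (y - ay) ** 2) ** 0.5
        if drop > 0 and horiz < half_angle_tan * drop:
            continue  # inside the cone -> masked out
        kept.append((x, y, z))
    return kept

pts = [(0.0, 0.0, 1.0), (5.0, 0.0, 1.0)]
print(conical_mask(pts, apex=(0.0, 0.0, 3.0)))  # -> [(5.0, 0.0, 1.0)]
```

Training on clouds with such holes punched out encourages the network to retain local features even when poles or railings are partially hidden by vegetation or traffic.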

24 pages, 1764 KB  
Article
Planning Energy-Efficient Smart Industrial Spaces for Industry 4.0
by Viviane Bessa Ferreira, Raphael de Aquino Gomes, José Luis Domingos, Regina Célia Bueno da Fonseca, Thiago Augusto Mendes, Georgios Bouloukakis, Bruno Barzellay Ferreira da Costa and Assed Naked Haddad
Eng 2025, 6(3), 53; https://doi.org/10.3390/eng6030053 - 16 Mar 2025
Cited by 1 | Viewed by 1455
Abstract
Given the significant increase in electricity consumption, especially in the industrial and commercial categories, exploring new energy sources and developing innovative technologies are essential. The fourth industrial revolution (Industry 4.0) and digital transformation are not just buzzwords; they offer real opportunities for energy sustainability, using technologies such as cloud computing, artificial intelligence, and the Internet of Things (IoT). In this context, this study focuses on improving energy efficiency in smart spaces within the context of Industry 4.0 by utilizing the SmartParcels framework. This framework creates a detailed and cost-effective plan for equipping specific areas of smart communities, commonly referred to as parcels. By adapting this framework, we propose an integrated model for planning and implementing IoT applications that optimizes service utilization while adhering to operational and deployment cost constraints. The model considers multiple layers, including sensing, communication, computation, and application, and adopts an optimization approach to meet the needs related to IoT deployment. In simulated industrial environments, it demonstrated scalability and economic viability, achieving high service utility and ensuring broad geographic coverage with minimal redundancy. Furthermore, the use of heuristics for device reuse and geophysical mapping selection promotes cost-effectiveness and energy sustainability, highlighting the framework’s potential for large-scale applications in diverse industrial contexts. Full article
(This article belongs to the Special Issue Feature Papers in Eng 2024)

38 pages, 2036 KB  
Article
Advancing Cybersecurity with Honeypots and Deception Strategies
by Zlatan Morić, Vedran Dakić and Damir Regvart
Informatics 2025, 12(1), 14; https://doi.org/10.3390/informatics12010014 - 31 Jan 2025
Cited by 11 | Viewed by 20737
Abstract
Cybersecurity threats are becoming more intricate, requiring preemptive actions to safeguard digital assets. This paper examines the function of honeypots as critical instruments for threat detection, analysis, and mitigation. A novel methodology for comparative analysis of honeypots is presented, offering a systematic framework to assess their efficacy. Seven honeypot solutions, namely Dionaea, Cowrie, Honeyd, Kippo, Amun, Glastopf, and Thug, are analyzed, encompassing various categories, including SSH and HTTP honeypots. The solutions are assessed via simulated network attacks and comparative analyses based on established criteria, including detection range, reliability, scalability, and data integrity. Dionaea and Cowrie exhibited remarkable versatility and precision, whereas Honeyd revealed scalability benefits despite encountering data quality issues. The research emphasizes the smooth incorporation of honeypots with current security protocols, including firewalls and incident response strategies, while offering comprehensive insights into attackers’ tactics, techniques, and procedures (TTPs). Emerging trends are examined, such as incorporating machine learning for adaptive detection and creating cloud-based honeypots. Recommendations for optimizing honeypot deployment include strategic placement, comprehensive monitoring, and ongoing updates. This research provides a detailed framework for selecting and implementing honeypots customized to organizational requirements. Full article

34 pages, 15986 KB  
Article
A Comprehensive Framework for Transportation Infrastructure Digitalization: TJYRoad-Net for Enhanced Point Cloud Segmentation
by Zhen Yang, Mingxuan Wang and Shikun Xie
Sensors 2024, 24(22), 7222; https://doi.org/10.3390/s24227222 - 12 Nov 2024
Viewed by 1737
Abstract
This research introduces a cutting-edge approach to traffic infrastructure digitization, integrating UAV oblique photography with LiDAR point clouds for high-precision, lightweight 3D road modeling. The proposed method addresses the challenge of accurately capturing the current state of infrastructure while minimizing redundancy and optimizing computational efficiency. A key innovation is the development of the TJYRoad-Net model, which achieves over 85% mIoU segmentation accuracy by including a traffic feature computing (TFC) module composed of three critical components: the Regional Coordinate Encoder (RCE), the Context-Aware Aggregation Unit (CAU), and the Hierarchical Expansion Block. Comparative analysis segments the point clouds into road and non-road categories, achieving centimeter-level registration accuracy with RANSAC and ICP. Two lightweight surface reconstruction techniques are implemented: (1) algorithmic reconstruction, which delivers a 6.3 mm elevation error at 95% confidence in complex intersections, and (2) template matching, which replaces road markings, poles, and vegetation using bounding boxes. These methods ensure accurate results with minimal memory overhead. The optimized 3D models have been successfully applied in driving simulation and traffic flow analysis, providing a practical and scalable solution for real-world infrastructure modeling and analysis. These applications demonstrate the versatility and efficiency of the proposed methods in modern traffic system simulations. Full article
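The headline segmentation metric here, mean intersection-over-union (mIoU), is computed per class and averaged. A minimal reference implementation over flat label lists (the toy labels are invented for illustration):

```python
def miou(pred, truth, num_classes):
    """Mean intersection-over-union across classes present in either the
    prediction or the ground truth, from flat per-point label lists."""
    ious = []
    for c in range(num_classes):
        inter = sum(1 for p, t in zip(pred, truth) if p == c and t == c)
        union = sum(1 for p, t in zip(pred, truth) if p == c or t == c)
        if union:
            ious.append(inter / union)
    return sum(ious) / len(ious)

# Toy two-class example: class 0 = non-road, class 1 = road
pred = [0, 0, 1, 1]
truth = [0, 1, 1, 1]
print(round(miou(pred, truth, 2), 3))  # -> 0.583
```

Because each class contributes equally regardless of its point count, mIoU rewards models that also segment rare asset categories well, which is why it is the metric of choice for imbalanced point-cloud benchmarks.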

19 pages, 1829 KB  
Article
Refined Prior Guided Category-Level 6D Pose Estimation and Its Application on Robotic Grasping
by Huimin Sun, Yilin Zhang, Honglin Sun and Kenji Hashimoto
Appl. Sci. 2024, 14(17), 8009; https://doi.org/10.3390/app14178009 - 7 Sep 2024
Cited by 2 | Viewed by 3216
Abstract
Estimating the 6D pose and size of objects is crucial in the task of visual grasping for robotic arms. Most current algorithms still require the 3D CAD model of the target object to match with the detected points, and they are unable to predict the object’s size, which significantly limits the generalizability of these methods. In this paper, we introduce category priors and extract high-dimensional abstract features from both the observed point cloud and the prior to predict the deformation matrix of the reconstructed point cloud and the dense correspondence between the reconstructed and observed point clouds. Furthermore, we propose a staged geometric correction and dense correspondence refinement mechanism to enhance the accuracy of regression. In addition, a novel lightweight attention module is introduced to further integrate the extracted features and identify potential correlations between the observed point cloud and the category prior. Ultimately, the object’s translation, rotation, and size are obtained by mapping the reconstructed point cloud to a normalized canonical coordinate system. Through extensive experiments, we demonstrate that our algorithm outperforms existing methods in terms of performance and accuracy on commonly used benchmarks for this type of problem. Additionally, we implement the algorithm in robotic arm-grasping simulations, further validating its effectiveness. Full article
(This article belongs to the Special Issue Artificial Intelligence and Its Application in Robotics)

24 pages, 16093 KB  
Article
Inspecting Pond Fabric Using Unmanned Aerial Vehicle-Assisted Modeling, Smartphone Augmented Reality, and a Gaming Engine
by Naai-Jung Shih, Yun-Ting Tasi, Yi-Ting Qiu and Ting-Wei Hsu
Remote Sens. 2024, 16(6), 943; https://doi.org/10.3390/rs16060943 - 7 Mar 2024
Cited by 2 | Viewed by 1808
Abstract
Historical farm ponds have been designed, maintained, and established as heritage sites or cultural landscapes. Has their gradually evolving function resulted in changes to the landscape influenced by their degenerated nature and the new urban fabric? This study aimed to assess the interaction between urban fabrics and eight farm ponds in Taoyuan by determining the demolition ratio of ponds subject to the transit-oriented development (TOD) of infrastructure and to evaluate land cover using historical maps, unmanned aerial vehicle (UAV)-assisted 3D modeling, smartphone augmented reality (AR), and a gaming engine to inspect and compare well-developed or reactivated ponds and peripheries. A 46% reduction in pond area around Daxi Interchange was an important indicator of degeneration in the opposite direction to TOD-based instrumentation. Three-dimensional skyline analysis enabled us to create an urban context matrix to be used in the simulations. Nearly 55 paired AR comparisons were made with 100 AR cloud-accessed models from the Augment® platform, and we produced a customized interface to align ponds with landmark construction or other ponds using Unreal Engine®. Smartphone AR is a valuable tool for situated comparisons and was used to conduct analyses across nine categories, from buildings and infrastructure to the intensity and stage of development. The gaming engine handled large point models with high detail and was supported by a customized blueprint. We found that 3D virtual dynamics highlighted the evolving interstitial space and role substitution of the agricultural fabric. This combination of heterogeneous platforms provides a practical method of preserving heritage and enables conflict resolution through policy and TOD instrumentation. Full article

14 pages, 6281 KB  
Technical Note
Creating and Leveraging a Synthetic Dataset of Cloud Optical Thickness Measures for Cloud Detection in MSI
by Aleksis Pirinen, Nosheen Abid, Nuria Agues Paszkowsky, Thomas Ohlson Timoudas, Ronald Scheirer, Chiara Ceccobello, György Kovács and Anders Persson
Remote Sens. 2024, 16(4), 694; https://doi.org/10.3390/rs16040694 - 16 Feb 2024
Cited by 1 | Viewed by 2869
Abstract
Cloud formations often obscure optical satellite-based monitoring of the Earth’s surface, thus limiting Earth observation (EO) activities such as land cover mapping, ocean color analysis, and cropland monitoring. The integration of machine learning (ML) methods within the remote sensing domain has significantly improved performance for a wide range of EO tasks, including cloud detection and filtering, but there is still much room for improvement. A key bottleneck is that ML methods typically depend on large amounts of annotated data for training, which are often difficult to come by in EO contexts. This is especially true when it comes to cloud optical thickness (COT) estimation. A reliable estimation of COT enables more fine-grained and application-dependent control compared to using pre-specified cloud categories, as is common practice. To alleviate the COT data scarcity problem, in this work, we propose a novel synthetic dataset for COT estimation, which we subsequently leverage for obtaining reliable and versatile cloud masks on real data. In our dataset, top-of-atmosphere radiances have been simulated for 12 of the spectral bands of the Multispectral Imagery (MSI) sensor onboard Sentinel-2 platforms. These data points have been simulated under consideration of different cloud types, COTs, and ground surface and atmospheric profiles. Extensive experimentation of training several ML models to predict COT from the measured reflectivity of the spectral bands demonstrates the usefulness of our proposed dataset. In particular, by thresholding COT estimates from our ML models, we show on two satellite image datasets (one that is publicly available, and one which we have collected and annotated) that reliable cloud masks can be obtained. The synthetic data, the newly collected real dataset, code and models have been made publicly available. Full article
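The abstract's "thresholding COT estimates" step is a one-liner in practice: per-pixel cloud optical thickness predictions are binarized into a mask, with the threshold chosen per application. A minimal sketch (the threshold values and COT figures below are illustrative, not from the paper):

```python
def cot_to_cloud_mask(cot_estimates, threshold=1.0):
    """Binarize per-pixel cloud optical thickness estimates into a cloud
    mask (1 = cloudy). A lower threshold also flags thin clouds; a higher
    one keeps only optically thick clouds."""
    return [1 if cot > threshold else 0 for cot in cot_estimates]

cots = [0.0, 0.4, 1.5, 12.3]
print(cot_to_cloud_mask(cots))                 # -> [0, 0, 1, 1]
print(cot_to_cloud_mask(cots, threshold=0.3))  # -> [0, 1, 1, 1]
```

This application-dependent control is exactly what the authors argue COT regression buys over fixed, pre-specified cloud categories.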
(This article belongs to the Section Atmospheric Remote Sensing)

16 pages, 4112 KB  
Communication
A Cloud Detection Algorithm Based on FY-4A/GIIRS Infrared Hyperspectral Observations
by Jieying Ma, Yi Liao and Li Guan
Remote Sens. 2024, 16(3), 481; https://doi.org/10.3390/rs16030481 - 26 Jan 2024
Cited by 3 | Viewed by 2559
Abstract
Cloud detection is an essential preprocessing step when using satellite-borne infrared hyperspectral sounders for data assimilation and atmospheric retrieval. In this study, we propose a cloud detection algorithm based solely on the sensitivity and detection characteristics of the FY-4A Geostationary Interferometric Infrared Sounder (GIIRS), without relying on other instruments. The algorithm consists of four steps: (1) combining observed radiances with clear-sky radiances simulated by the Community Radiative Transfer Model (CRTM) to identify clear fields of view (FOVs); (2) determining the number of clouds within adjacent 2 × 2 FOVs via a principal component analysis of the observed radiances; (3) checking for large observed-radiance differences between adjacent 2 × 2 FOVs to detect mixtures of clear sky and cloud; and (4) treating adjacent 2 × 2 FOVs as a cloud-detection cluster and, following the three steps above, selecting an appropriate classification threshold. The classification results within each cloud-detection cluster were divided into three categories: clear, partly cloudy, or overcast. The proposed algorithm was tested on one month of GIIRS observations from May 2022, and the cloud detection and classification results were compared with the FY-4A Advanced Geostationary Radiation Imager (AGRI) operational cloud mask products to evaluate its performance. The results showed that the algorithm’s performance is strongly influenced by surface type. Over all-day observations, recognition performance was highest over the ocean, followed by land, and lowest over deep inland water. The algorithm recognized clear skies better at night over ocean and land surfaces, while it performed better for partly cloudy and overcast conditions during the day. Over inland water, however, the algorithm consistently exhibited lower cloud recognition performance during both day and night. Moreover, in contrast to the GIIRS Level 2 cloud mask (CLM) product, the proposed algorithm was able to identify partly cloudy conditions. The algorithm’s classification results departed slightly from those of the AGRI cloud mask product in areas with clear-sky/cloud boundaries and minimal convective cloud coverage; this was attributed to clear sky being misclassified as partly cloudy at the sounder’s coarse spatial resolution. AGRI CLM products, temporally and spatially collocated to the GIIRS FOVs, served as the reference. Of the reference partly cloudy FOVs, 40.6% were consistently classified as partly cloudy by the proposed algorithm. Compared with the GIIRS L2 product, the proposed algorithm improved identification performance by around 10%. Full article
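Step (1) of the algorithm, screening clear FOVs by comparing observed radiances against CRTM-simulated clear-sky radiances, can be sketched as a simple spectral-departure test. The function, tolerance, and toy spectra below are illustrative assumptions, not values from the paper:

```python
import numpy as np

def is_clear_fov(obs_radiance: np.ndarray,
                 clear_sim_radiance: np.ndarray,
                 rel_tol: float = 0.02) -> bool:
    """Flag a field of view as clear when the observed spectrum stays close
    to a model-simulated clear-sky spectrum (illustrative sketch of step 1).

    Clouds depress the radiances of window channels, so a large mean
    relative departure from the clear-sky simulation indicates cloud
    contamination within the FOV.
    """
    rel_diff = np.abs(obs_radiance - clear_sim_radiance) / clear_sim_radiance
    return bool(rel_diff.mean() < rel_tol)

# Toy 4-channel spectra (arbitrary units): the clear observation tracks the
# simulation closely; the cloudy observation is depressed in all channels.
sim = np.array([100.0, 98.0, 95.0, 90.0])
clear_obs = np.array([99.5, 98.2, 94.8, 90.3])
cloudy_obs = np.array([80.0, 79.0, 76.0, 72.0])
clear_flag = is_clear_fov(clear_obs, sim)    # small departure: clear
cloudy_flag = is_clear_fov(cloudy_obs, sim)  # large departure: cloudy
```

In the real algorithm this test is only the first of four steps; the PCA over 2 × 2 FOV clusters and the cluster-level thresholding then refine the clear/partly cloudy/overcast classification.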

26 pages, 14905 KB  
Article
Semantic Segmentation and Roof Reconstruction of Urban Buildings Based on LiDAR Point Clouds
by Xiaokai Sun, Baoyun Guo, Cailin Li, Na Sun, Yue Wang and Yukai Yao
ISPRS Int. J. Geo-Inf. 2024, 13(1), 19; https://doi.org/10.3390/ijgi13010019 - 5 Jan 2024
Cited by 13 | Viewed by 6697
Abstract
In urban point cloud scenes, the diversity of feature types makes it a primary challenge to effectively extract building-category point clouds. This paper therefore proposes the Enhanced Local Feature Aggregation Semantic Segmentation Network (ELFA-RandLA-Net), based on RandLA-Net, which perceives local details more effectively by learning the geometric and semantic features of urban feature point clouds, enabling end-to-end acquisition of building-category point clouds. After extracting individual buildings via clustering, the RANSAC algorithm is used to segment each building point cloud into planes, and the roof planes are identified automatically according to the point cloud cloth simulation filtering principle. Finally, to address roof reconstruction failures caused by missing vertical roof-plane data, we introduce a vertical-plane inference method that ensures the accuracy of the roof topology reconstruction. Experiments on semantic segmentation and building reconstruction with the Dublin data show that ELFA-RandLA-Net improves the building-segmentation IoU by 9.11% over RandLA-Net, and the proposed reconstruction method outperforms the classical PolyFit method. Full article
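The RANSAC plane-segmentation step applied to single-building point clouds can be illustrated with a minimal generic implementation. Everything here (function name, iteration count, distance tolerance, toy data) is an assumption for illustration, not the paper's code:

```python
import numpy as np

def ransac_plane(points: np.ndarray, n_iter: int = 200,
                 dist_thresh: float = 0.05, rng_seed: int = 0):
    """Fit the dominant plane in a point cloud with RANSAC.

    Repeatedly samples 3 points, forms the plane through them, and keeps
    the candidate with the most inliers within `dist_thresh`. Returns the
    best plane as (unit normal, offset d) plus a boolean inlier mask.
    """
    rng = np.random.default_rng(rng_seed)
    best_mask, best_model = None, None
    for _ in range(n_iter):
        idx = rng.choice(len(points), size=3, replace=False)
        p0, p1, p2 = points[idx]
        normal = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(normal)
        if norm < 1e-9:            # degenerate (collinear) sample: skip
            continue
        normal /= norm
        d = -normal @ p0
        dist = np.abs(points @ normal + d)   # point-to-plane distances
        mask = dist < dist_thresh
        if best_mask is None or mask.sum() > best_mask.sum():
            best_mask, best_model = mask, (normal, d)
    return best_model, best_mask

# Toy "roof": 100 points on the sloped plane z = 0.5*x, plus 10 clutter
# points far below it. RANSAC should recover the roof plane as dominant.
rng = np.random.default_rng(1)
xy = rng.uniform(0, 10, size=(100, 2))
roof = np.column_stack([xy, 0.5 * xy[:, 0]])
clutter = np.column_stack([rng.uniform(0, 10, size=(10, 2)), np.full(10, -5.0)])
pts = np.vstack([roof, clutter])
model, inliers = ransac_plane(pts)
```

In the full pipeline this fit would be run repeatedly, removing each plane's inliers before the next fit, so that one building decomposes into several roof and wall planes.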
