Topic Editors

Dr. Zhenyu Yu
Department of Computer System and Technology, Faculty of Computer Science and Information Technology, Universiti Malaya, Kuala Lumpur 50603, Malaysia
Prof. Dr. Mohd Yamani Idna Idris
Department of Computer System and Technology, Faculty of Computer Science and Information Technology, Universiti Malaya, Kuala Lumpur 50603, Malaysia
Prof. Dr. Yu Li
School of Geomatics, Liaoning Technical University, Fuxin 123000, China
Dr. Aleksandar Dj Valjarević
Faculty of Geography, University of Belgrade, Studentski Trg 3/3, 11000 Belgrade, Serbia

Advances in Sensor Data Fusion and AI for Environmental Monitoring

Abstract submission deadline
30 July 2026
Manuscript submission deadline
30 September 2026

Topic Information

Dear Colleagues,

Environmental monitoring increasingly demands high-resolution, real-time, and reliable information to guide sustainability, disaster response, and ecosystem management. Advances in sensor technologies—including satellites, airborne platforms, in situ stations, and Internet of Things (IoT) devices—have enabled the collection of vast amounts of heterogeneous data (e.g., spectral, structural, chemical, and meteorological observations). However, transforming these multimodal streams into actionable insights remains challenging, necessitating effective data fusion strategies and robust artificial intelligence frameworks. This Topic emphasizes research at the intersection of sensor data fusion and AI-driven analytics, aiming to highlight innovations that integrate multi-source data for accurate environmental assessment and predictive modeling.

Contributions are encouraged in areas such as:

(1) Novel fusion algorithms for heterogeneous sensor integration, especially those improving spatial and temporal resolution;

(2) Deep learning models tailored to fused data for applications such as land-cover change, air and water quality, forest health, and disaster prediction;

(3) Scalable and efficient architectures—e.g., edge-to-cloud systems or federated learning—for near‑real‑time monitoring;

(4) Case studies demonstrating improved decision-making in forestry, agriculture, urban planning, or conservation.

Ultimately, this Topic seeks to showcase multidisciplinary approaches that leverage sensor fusion and AI to advance environmental science and inform sustainable resource management.

Dr. Zhenyu Yu
Prof. Dr. Mohd Yamani Idna Idris
Prof. Dr. Yu Li
Dr. Aleksandar Dj Valjarević
Topic Editors

Keywords

  • artificial intelligence (AI)
  • sensor data fusion
  • remote sensing
  • forest resource assessment
  • environmental monitoring
  • smart agriculture
  • machine learning
  • multisource data integration
  • ecological modeling
  • sustainable management

Participating Journals

Journal Name                                     Impact Factor   CiteScore   Launched   First Decision (median)   APC
Geosciences                                      2.1             5.1         2011       23.6 days                 CHF 1800
ISPRS International Journal of Geo-Information   2.8             7.2         2012       33.1 days                 CHF 1900
Remote Sensing                                   4.1             8.6         2009       24.3 days                 CHF 2700
Sensors                                          3.5             8.2         2001       17.8 days                 CHF 2600
Data                                             2.0             5.0         2016       25 days                   CHF 1600

Preprints.org is a multidisciplinary platform offering a preprint service designed to facilitate the early sharing of your research. It supports and empowers your research journey from the very beginning.

MDPI Topics is collaborating with Preprints.org and has established a direct connection between MDPI journals and the platform. Authors are encouraged to take advantage of this opportunity by posting their preprints at Preprints.org prior to publication:

  1. Share your research immediately: Disseminate your ideas prior to publication and establish priority for your work.
  2. Safeguard your intellectual contribution: Protect your ideas with a time-stamped preprint that serves as proof of your research timeline.
  3. Boost visibility and impact: Increase the reach and influence of your research by making it accessible to a global audience.
  4. Gain early feedback: Receive valuable input and insights from peers before submitting to a journal.
  5. Ensure broad indexing: Preprints are indexed by Web of Science (Preprint Citation Index), Google Scholar, Crossref, SHARE, PrePubMed, Scilit, and Europe PMC.

Published Papers (5 papers)

25 pages, 15438 KB  
Article
Day–Night All-Sky Scene Classification with an Attention-Enhanced EfficientNet
by Wuttichai Boonpook, Peerapong Torteeka, Kritanai Torsri, Daroonwan Kamthonkiat, Yumin Tan, Asamaporn Sitthi, Patcharin Kamsing, Chomchanok Arunplod, Utane Sawangwit, Thanachot Ngamcharoensuktavorn and Kijnaphat Suksod
ISPRS Int. J. Geo-Inf. 2026, 15(2), 66; https://doi.org/10.3390/ijgi15020066 - 3 Feb 2026
Abstract
All-sky cameras provide continuous hemispherical observations essential for atmospheric monitoring and observatory operations; however, automated classification of sky conditions in tropical environments remains challenging due to strong illumination variability, atmospheric scattering, and overlapping thin-cloud structures. This study proposes EfficientNet-Attention-SPP Multi-scale Network (EASMNet), a physics-aware deep learning framework for robust all-sky scene classification using hemispherical imagery acquired at the Thai National Observatory. The proposed architecture integrates Squeeze-and-Excitation (SE) blocks for radiometric channel stabilization, the Convolutional Block Attention Module (CBAM) for spatial–semantic refinement, and Spatial Pyramid Pooling (SPP) for hemispherical multi-scale context aggregation within a fully fine-tuned EfficientNetB7 backbone, forming a domain-aware atmospheric representation framework. A large-scale dataset comprising 122,660 RGB images across 13 day–night sky-scene categories was curated, capturing diverse tropical atmospheric conditions including humidity, haze, illumination transitions, and sensor noise. Extensive experimental evaluations demonstrate that EASMNet achieves 93% overall accuracy, outperforming representative convolutional (VGG16, ResNet50, DenseNet121) and transformer-based architectures (Swin Transformer, Vision Transformer). Ablation analyses confirm the complementary contributions of hierarchical attention and multi-scale aggregation, while class-wise evaluation yields F1-scores exceeding 0.95 for visually distinctive categories such as Day Humid, Night Clear Sky, and Night Noise. Residual errors are primarily confined to physically transitional and low-contrast atmospheric regimes. These results validate EASMNet as a reliable, interpretable, and computationally feasible framework for real-time observatory dome automation, astronomical scheduling, and continuous atmospheric monitoring, and provide a scalable foundation for autonomous sky-observation systems deployable across diverse climatic regions.
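As a rough illustration of the architecture this abstract describes, the following PyTorch sketch composes an EfficientNet-B7 backbone with SE channel attention, a CBAM-style spatial gate, and spatial pyramid pooling ahead of a 13-class head. It is a minimal sketch under assumed shapes (torchvision's efficientnet_b7, 224x224 input), not the authors' EASMNet implementation.

```python
# Minimal sketch (not the authors' code) of the EASMNet idea: an EfficientNet
# backbone refined by channel attention (SE), spatial attention (CBAM-style),
# and spatial pyramid pooling before a 13-way sky-scene classifier.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import efficientnet_b7  # assumes torchvision >= 0.13

class SEBlock(nn.Module):                      # squeeze-and-excitation channel attention
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())
    def forward(self, x):
        w = self.fc(x.mean(dim=(2, 3)))        # global average pool -> channel weights
        return x * w[:, :, None, None]

class SpatialAttention(nn.Module):             # CBAM-style spatial gate
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size=7, padding=3)
    def forward(self, x):
        s = torch.cat([x.mean(1, keepdim=True), x.amax(1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.conv(s))

def spp(x, levels=(1, 2, 4)):                  # spatial pyramid pooling -> fixed-length vector
    return torch.cat([F.adaptive_max_pool2d(x, l).flatten(1) for l in levels], dim=1)

class EASMNetSketch(nn.Module):
    def __init__(self, num_classes=13):
        super().__init__()
        self.backbone = efficientnet_b7(weights=None).features  # 2560-channel feature map
        self.se, self.sa = SEBlock(2560), SpatialAttention()
        self.head = nn.Linear(2560 * sum(l * l for l in (1, 2, 4)), num_classes)
    def forward(self, x):
        f = self.sa(self.se(self.backbone(x)))
        return self.head(spp(f))

logits = EASMNetSketch()(torch.randn(2, 3, 224, 224))  # -> shape (2, 13)
```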

21 pages, 12301 KB  
Article
Visual Localization Algorithm with Dynamic Point Removal Based on Multi-Modal Information Association
by Jing Ni, Boyang Gao, Hongyuan Zhu, Minkun Zhao and Xiaoxiong Liu
ISPRS Int. J. Geo-Inf. 2026, 15(2), 60; https://doi.org/10.3390/ijgi15020060 - 30 Jan 2026
Abstract
To enhance the autonomous navigation capability of intelligent agents in complex environments, this paper presents a visual localization algorithm for dynamic scenes that leverages multi-source information fusion. The proposed approach is built upon an odometry framework integrating LiDAR, camera, and IMU data, and incorporates the YOLOv8 model to extract semantic information from images, which is then fused with laser point cloud data. We design a dynamic point removal method based on multi-modal association, which links 2D image masks to 3D point cloud regions, applies Euclidean clustering to differentiate static and dynamic points, and subsequently employs PnP-RANSAC to eliminate any remaining undetected dynamic points. This process yields a robust localization algorithm for dynamic environments. Experimental results on datasets featuring dynamic objects and a custom-built hardware platform demonstrate that the proposed dynamic point removal method significantly improves both the robustness and accuracy of the visual localization system. These findings confirm the feasibility and effectiveness of our system, showcasing its capabilities in precise positioning and autonomous navigation in complex environments.
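To make the mask-to-point association and PnP-RANSAC refinement concrete, the sketch below shows one way they could be wired up with NumPy and OpenCV. The pinhole camera model, the mask source, and the matched landmark arrays are hypothetical stand-ins, not the paper's code.

```python
# Minimal sketch (assumed geometry, not the paper's implementation) of the
# dynamic-point removal idea: project LiDAR points into the image, flag points
# falling inside a YOLOv8 instance mask as dynamic, keep the static remainder,
# and refine the camera pose with PnP-RANSAC (OpenCV).
import numpy as np
import cv2

def project(points_cam, K):
    """Project Nx3 camera-frame points to pixel coordinates via intrinsics K."""
    uv = (K @ points_cam.T).T
    return uv[:, :2] / uv[:, 2:3]

def static_point_mask(points_cam, mask, K):
    """mask: HxW boolean union of YOLOv8 dynamic-object masks."""
    uv = np.round(project(points_cam, K)).astype(int)
    h, w = mask.shape
    valid = (points_cam[:, 2] > 0) & (uv[:, 0] >= 0) & (uv[:, 0] < w) \
            & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    dynamic = np.zeros(len(points_cam), dtype=bool)
    dynamic[valid] = mask[uv[valid, 1], uv[valid, 0]]
    return ~dynamic  # True for points treated as static

def refine_pose(pts3d, pts2d, K):
    """pts3d/pts2d: matched static landmarks and their image observations
    (hypothetical inputs). RANSAC inliers drop residual undetected dynamics."""
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        pts3d.astype(np.float64), pts2d.astype(np.float64), K, None,
        reprojectionError=3.0)
    return rvec, tvec, inliers
```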

17 pages, 4983 KB  
Article
TAGNet: A Tidal Flat-Attentive Graph Network Designed for Airborne Bathymetric LiDAR Point Cloud Classification
by Ahram Song
ISPRS Int. J. Geo-Inf. 2025, 14(12), 466; https://doi.org/10.3390/ijgi14120466 - 28 Nov 2025
Abstract
Airborne LiDAR bathymetry (ALB) provides dense three-dimensional point clouds that enable the detailed mapping of tidal flat environments. However, surface classification using these point clouds remains challenging due to residual noise, water surface reflectivity, and subtle class boundaries that persist even after standard preprocessing. To address these challenges, this study introduces Tidal flat-Attentive Graph Network (TAGNet), a graph-based deep learning framework designed to leverage both local geometric relationships and global contextual cues for the point-wise classification of tidal flat surface classes. The model incorporates multi-scale EdgeConv layers for capturing fine-grained neighborhood structures and employs squeeze-and-excitation channel attention to enhance global feature representation. To validate TAGNet’s effectiveness, classification was conducted on ALB point clouds collected from adjacent tidal flat regions, focusing on four major surface classes: exposed flat, sea surface, sea floor, and vegetation. In benchmarking tests against baseline models, including Dynamic Graph Convolutional Neural Network, PointNeXt with Single-Scale Grouping, and PointNet Transformer, TAGNet consistently achieved higher macro F1-scores. Moreover, ablation studies isolating positional encoding, attention mechanisms, and detrended Z-features confirmed their complementary contributions to TAGNet’s performance. Notably, the full TAGNet outperformed all baselines by a substantial margin, particularly when distinguishing closely related classes, such as sea floor and exposed flat. These findings highlight the potential of graph-based architectures specifically designed for ALB data in enhancing the precision of coastal monitoring and habitat mapping.
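The two building blocks this abstract names, EdgeConv over k-nearest-neighbor graphs and squeeze-and-excitation channel attention, can be sketched compactly in PyTorch. The following illustrative point-wise classifier for the four surface classes is not TAGNet itself; layer widths and the neighborhood size k are assumptions.

```python
# Minimal sketch (not TAGNet) of EdgeConv + SE attention for point-wise
# classification of ALB point clouds into 4 surface classes.
import torch
import torch.nn as nn

def knn(x, k):                     # x: (B, N, C) -> neighbor indices (B, N, k)
    d = torch.cdist(x, x)
    return d.topk(k + 1, largest=False).indices[:, :, 1:]  # drop self

class EdgeConv(nn.Module):
    def __init__(self, c_in, c_out, k=16):
        super().__init__()
        self.k = k
        self.mlp = nn.Sequential(nn.Linear(2 * c_in, c_out), nn.ReLU(inplace=True))
    def forward(self, x):          # x: (B, N, C)
        idx = knn(x, self.k)
        nbrs = torch.gather(x.unsqueeze(1).expand(-1, x.size(1), -1, -1), 2,
                            idx.unsqueeze(-1).expand(-1, -1, -1, x.size(-1)))
        # edge features: [center point, neighbor offset], then max over neighbors
        edge = torch.cat([x.unsqueeze(2).expand_as(nbrs), nbrs - x.unsqueeze(2)], dim=-1)
        return self.mlp(edge).amax(dim=2)

class SE1d(nn.Module):             # squeeze-and-excitation over point features
    def __init__(self, c, r=4):
        super().__init__()
        self.fc = nn.Sequential(nn.Linear(c, c // r), nn.ReLU(inplace=True),
                                nn.Linear(c // r, c), nn.Sigmoid())
    def forward(self, x):          # x: (B, N, C)
        return x * self.fc(x.mean(dim=1)).unsqueeze(1)

class PointClassifierSketch(nn.Module):
    def __init__(self, num_classes=4):  # exposed flat, sea surface, sea floor, vegetation
        super().__init__()
        self.e1, self.e2 = EdgeConv(3, 64), EdgeConv(64, 128)
        self.se, self.head = SE1d(128), nn.Linear(128, num_classes)
    def forward(self, xyz):
        return self.head(self.se(self.e2(self.e1(xyz))))

logits = PointClassifierSketch()(torch.randn(2, 1024, 3))  # -> (2, 1024, 4)
```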

24 pages, 10966 KB  
Article
UAV-Based Wellsite Reclamation Monitoring Using Transformer-Based Deep Learning on Multi-Seasonal LiDAR and Multispectral Data
by Dmytro Movchan, Zhouxin Xi, Angeline Van Dongen, Charumitha Selvaraj and Dani Degenhardt
Remote Sens. 2025, 17(20), 3440; https://doi.org/10.3390/rs17203440 - 15 Oct 2025
Abstract
Monitoring reclaimed wellsites in boreal forest environments requires accurate, scalable, and repeatable methods for assessing vegetation recovery. This study evaluates the use of uncrewed aerial vehicle (UAV)-based light detection and ranging (LiDAR) and multispectral (MS) imagery for individual tree detection, crown delineation, and classification across five reclaimed wellsites in Alberta, Canada. A deep learning workflow using 3D convolutional neural networks was applied to LiDAR and MS data collected in spring, summer, and autumn. Results show that LiDAR alone provided high accuracy for tree segmentation and height estimation, with a mean intersection over union (mIoU) of 0.94 for vegetation filtering and an F1-score of 0.82 for treetop detection. Incorporating MS data improved deciduous/coniferous classification, with the highest accuracy (mIoU = 0.88) achieved using all five spectral bands. Coniferous species were classified more accurately than deciduous species, and classification performance declined for trees shorter than 2 m. Spring conditions yielded the highest classification accuracy (mIoU = 0.93). Comparisons with ground measurements confirmed a strong correlation for tree height estimation (R2 = 0.95; root mean square error = 0.40 m). Limitations of this technique included lower performance for short, multi-stemmed trees and deciduous species, particularly willow. This study demonstrates the value of integrating 3D structural and spectral data for monitoring forest recovery and supports the use of UAV remote sensing for scalable post-disturbance vegetation assessment. The trained models used in this study are publicly available through the TreeAIBox plugin to support further research and operational applications.
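For reference, the mIoU figures quoted above are computed in the standard way from a confusion matrix. A minimal NumPy sketch (with hypothetical label rasters) is:

```python
# Mean intersection over union (mIoU) from predicted and reference label maps,
# via a confusion matrix. Inputs are hypothetical integer class rasters.
import numpy as np

def mean_iou(pred, ref, num_classes):
    """pred, ref: integer arrays of equal shape with values in [0, num_classes)."""
    cm = np.bincount(num_classes * ref.ravel() + pred.ravel(),
                     minlength=num_classes ** 2).reshape(num_classes, num_classes)
    inter = np.diag(cm).astype(float)
    union = cm.sum(axis=0) + cm.sum(axis=1) - inter
    iou = np.where(union > 0, inter / np.maximum(union, 1), np.nan)
    return np.nanmean(iou)          # ignore classes absent from both maps

# e.g., a binary vegetation/ground mask comparison
pred = np.random.randint(0, 2, (512, 512))
ref = np.random.randint(0, 2, (512, 512))
print(f"mIoU = {mean_iou(pred, ref, 2):.3f}")
```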

27 pages, 16753 KB  
Article
A 1°-Resolution Global Ionospheric TEC Modeling Method Based on a Dual-Branch Input Convolutional Neural Network
by Nian Liu, Yibin Yao and Liang Zhang
Remote Sens. 2025, 17(17), 3095; https://doi.org/10.3390/rs17173095 - 5 Sep 2025
Abstract
Total Electron Content (TEC) is a fundamental parameter characterizing the electron density distribution in the ionosphere. Traditional global TEC modeling approaches predominantly rely on mathematical methods (such as spherical harmonic function fitting), often resulting in models suffering from excessive smoothing and low accuracy. While the 1° high-resolution global TEC model released by MIT offers improved temporal-spatial resolution, it exhibits regions of data gaps. Existing ionospheric image completion methods frequently employ Generative Adversarial Networks (GANs), which suffer from drawbacks such as complex model structures and lengthy training times. We propose a novel high-resolution global ionospheric TEC modeling method based on a Dual-Branch Convolutional Neural Network (DB-CNN) designed for the completion and restoration of incomplete 1°-resolution ionospheric TEC images. The novel model utilizes a dual-branch input structure: the background field, generated using the International Reference Ionosphere (IRI) model TEC maps, and the observation field, consisting of global incomplete TEC maps coupled with their corresponding mask maps. An asymmetric dual-branch parallel encoder, feature fusion, and residual decoder framework enables precise reconstruction of missing regions, ultimately generating a complete global ionospheric TEC map. Experimental results demonstrate that the model achieves Root Mean Square Errors (RMSE) of 0.30 TECU and 1.65 TECU in the observed and unobserved regions, respectively, in simulated data experiments. For measured experiments, the RMSE values are 1.39 TECU and 1.93 TECU in the observed and unobserved regions. Validation results utilizing Jason-3 altimeter-measured VTEC demonstrate that the model achieves stable reconstruction performance across all four seasons and various time periods. In key-day comparisons, its STD and RMSE consistently outperform those of the CODE global ionospheric model (GIM). Furthermore, a long-term evaluation from 2021 to 2024 reveals that, compared to the CODE model, the DB-CNN achieves average reductions of 38.2% in STD and 23.5% in RMSE. This study provides a novel dual-branch input convolutional neural network-based method for constructing 1°-resolution global ionospheric products, offering significant application value for enhancing GNSS positioning accuracy and space weather monitoring capabilities.
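A minimal PyTorch sketch of the dual-branch idea, one encoder for the IRI background field and one for the observed TEC map plus its mask, with fused features decoded back to a 1-degree (180x360) grid, might look as follows. The channel counts and the residual-style compositing at the end are assumptions, not the paper's DB-CNN.

```python
# Minimal sketch (not the paper's DB-CNN): two parallel encoders, feature
# fusion, and a decoder that fills only the unobserved regions of the TEC map.
import torch
import torch.nn as nn

def conv_block(c_in, c_out, stride=2):
    return nn.Sequential(nn.Conv2d(c_in, c_out, 3, stride, 1),
                         nn.BatchNorm2d(c_out), nn.ReLU(inplace=True))

class DualBranchSketch(nn.Module):
    def __init__(self):
        super().__init__()
        self.bg_enc = nn.Sequential(conv_block(1, 32), conv_block(32, 64))   # background field
        self.obs_enc = nn.Sequential(conv_block(2, 32), conv_block(32, 64))  # obs TEC + mask
        self.fuse = nn.Conv2d(128, 64, 1)                                    # feature fusion
        self.dec = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            conv_block(64, 32, stride=1),
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Conv2d(32, 1, 3, 1, 1))
    def forward(self, bg, obs, mask):
        f = torch.cat([self.bg_enc(bg), self.obs_enc(torch.cat([obs, mask], 1))], 1)
        out = self.dec(self.fuse(f))
        return mask * obs + (1 - mask) * out  # keep observations, fill gaps only

bg = torch.randn(1, 1, 180, 360)              # IRI background TEC map (1-degree grid)
obs = torch.randn(1, 1, 180, 360)             # incomplete observed TEC map
mask = torch.randint(0, 2, (1, 1, 180, 360)).float()  # 1 = observed pixel
tec = DualBranchSketch()(bg, obs, mask)       # -> (1, 1, 180, 360) completed map
```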
