Search Results (3,946)

Search Parameters:
Keywords = remote sensing technology

22 pages, 5692 KiB  
Article
RiceStageSeg: A Multimodal Benchmark Dataset for Semantic Segmentation of Rice Growth Stages
by Jianping Zhang, Tailai Chen, Yizhe Li, Qi Meng, Yanying Chen, Jie Deng and Enhong Sun
Remote Sens. 2025, 17(16), 2858; https://doi.org/10.3390/rs17162858 (registering DOI) - 16 Aug 2025
Abstract
The accurate identification of rice growth stages is critical for precision agriculture, crop management, and yield estimation. Remote sensing technologies, particularly multimodal approaches that integrate high spatial and hyperspectral resolution imagery, have demonstrated great potential in large-scale crop monitoring. Multimodal data fusion offers complementary and enriched spectral–spatial information, providing novel pathways for crop growth stage recognition in complex agricultural scenarios. However, the lack of publicly available multimodal datasets specifically designed for rice growth stage identification remains a significant bottleneck that limits the development and evaluation of relevant methods. To address this gap, we present RiceStageSeg, a multimodal benchmark dataset captured by unmanned aerial vehicles (UAVs), designed to support the development and assessment of segmentation models for rice growth monitoring. RiceStageSeg contains paired centimeter-level RGB and 10-band multispectral (MS) images acquired during several critical rice growth stages, including jointing and heading. Each image is accompanied by fine-grained, pixel-level annotations that distinguish between the different growth stages. We establish baseline experiments using several state-of-the-art semantic segmentation models under both unimodal (RGB-only, MS-only) and multimodal (RGB + MS fusion) settings. The experimental results demonstrate that multimodal feature-level fusion outperforms unimodal approaches in segmentation accuracy. RiceStageSeg offers a standardized benchmark to advance future research in multimodal semantic segmentation for agricultural remote sensing. The dataset will be made publicly available on GitHub (v0.11.0, accessed on 1 August 2025). Full article
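The abstract does not describe the baseline fusion architecture in detail; as a minimal, hypothetical sketch, feature-level RGB + MS fusion amounts to concatenating the two branches' feature maps along the channel axis before the segmentation head (all names and shapes below are illustrative, not from the paper):

```python
import numpy as np

def fuse_features(rgb_feats: np.ndarray, ms_feats: np.ndarray) -> np.ndarray:
    """Feature-level fusion by channel concatenation.

    rgb_feats: (C1, H, W) feature maps from an RGB branch.
    ms_feats:  (C2, H, W) feature maps from a multispectral branch.
    Returns a (C1 + C2, H, W) array for a downstream segmentation head.
    """
    if rgb_feats.shape[1:] != ms_feats.shape[1:]:
        raise ValueError("spatial dimensions must match before fusion")
    return np.concatenate([rgb_feats, ms_feats], axis=0)

# Toy inputs: 3 RGB-derived channels and 10 MS bands on an 8x8 grid.
rgb = np.random.rand(3, 8, 8)
ms = np.random.rand(10, 8, 8)
fused = fuse_features(rgb, ms)
print(fused.shape)  # (13, 8, 8)
```

In a real model the two branches would first be brought to a common spatial resolution; the concatenation step itself is this simple.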

20 pages, 31614 KiB  
Article
Fine-Scale Classification of Dominant Vegetation Communities in Coastal Wetlands Using Color-Enhanced Aerial Images
by Yixian Liu, Yiheng Zhang, Xin Zhang, Chunguang Che, Chong Huang, He Li, Yu Peng, Zishen Li and Qingsheng Liu
Remote Sens. 2025, 17(16), 2848; https://doi.org/10.3390/rs17162848 - 15 Aug 2025
Abstract
Monitoring salt marsh vegetation in the Yellow River Delta (YRD) wetland is the basis of wetland research, which is of great significance for the further protection and restoration of wetland ecological functions. In the existing remote sensing technologies for wetland salt marsh vegetation classification, the object-oriented classification method effectively produces landscape patches similar to wetland vegetation and improves the spatial consistency and accuracy of the classification. However, the vegetation classes of the YRD are mixed with uneven distribution, irregular texture, and significant color variation. In order to solve the problem, this study proposes a fine-scale classification of dominant vegetation communities using color-enhanced aerial images. The color information is used to extract the color features of the image. Various features including spectral features, texture features and vegetation features are extracted from the image objects and used as inputs for four machine learning classifiers: random forest (RF), support vector machine (SVM), k-nearest neighbor (KNN) and maximum likelihood (MLC). The results showed that the accuracy of the four classifiers in classifying vegetation communities was significantly improved by adding color features. RF had the highest OA and Kappa coefficients of 96.69% and 0.9603. This shows that the classification method based on color enhancement can effectively distinguish between vegetation and non-vegetation and extract each vegetation type, which provides an effective technical route for wetland vegetation classification in aerial imagery. Full article
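The paper's exact color-enhancement features are not given in this listing; a toy sketch of one plausible ingredient (per-object color statistics in HSV space, later fed to a classifier such as RF or SVM) might look like this, with the function name and feature choice being assumptions:

```python
import colorsys

import numpy as np

def color_features(obj_pixels: np.ndarray) -> np.ndarray:
    """Per-object color feature vector: mean and std of H, S, V.

    obj_pixels: (N, 3) RGB values in [0, 1] for one segmented image object.
    """
    hsv = np.array([colorsys.rgb_to_hsv(*p) for p in obj_pixels])
    return np.concatenate([hsv.mean(axis=0), hsv.std(axis=0)])

# Toy object: uniformly green pixels -> hue 1/3, full saturation, no spread.
green = np.tile([0.0, 1.0, 0.0], (20, 1))
feats = color_features(green)
print(feats[:3])  # mean H, S, V ≈ [0.333, 1.0, 1.0]
```

Such a vector would be appended to the spectral, texture, and vegetation features described in the abstract before classification.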
(This article belongs to the Special Issue Remote Sensing in Coastal Vegetation Monitoring)

21 pages, 6984 KiB  
Article
Limitations of Polar-Orbiting Satellite Observations in Capturing the Diurnal Variability of Tropospheric NO2: A Case Study Using TROPOMI, GOME-2C, and Pandora Data
by Yichen Li, Chao Yu, Jing Fan, Meng Fan, Ying Zhang, Jinhua Tao and Liangfu Chen
Remote Sens. 2025, 17(16), 2846; https://doi.org/10.3390/rs17162846 - 15 Aug 2025
Abstract
Nitrogen dioxide (NO2) plays a crucial role in environmental processes and public health. In recent years, NO2 pollution has been monitored using a combination of in situ measurements and satellite remote sensing, supported by the development of advanced retrieval algorithms. With advancements in satellite technology, large-scale NO2 monitoring is now feasible through instruments such as GOME-2C and TROPOMI. However, the fixed local overpass times of polar-orbiting satellites limit their ability to capture the complete diurnal cycle of NO2, introducing uncertainties in emission estimation and pollution trend analysis. In this study, we evaluated differences in NO2 observations between GOME-2C (morning overpass at ~09:30 LT) and TROPOMI (afternoon overpass at ~13:30 LT) across three representative regions—East Asia, Central Africa, and Europe—that exhibit distinct emission sources and atmospheric conditions. By comparing satellite-derived tropospheric NO2 column densities with ground-based measurements from the Pandora network, we analyzed spatial distribution patterns and seasonal variability in NO2 concentrations. Our results show that East Asia experiences the highest NO2 concentrations in densely populated urban and industrial areas. During winter, lower boundary layer heights and weakened photolysis processes lead to stronger accumulation of NO2 in the morning. In Central Africa, where biomass burning is the dominant emission source, afternoon fire activity is significantly higher, resulting in a substantial difference (1.01 × 10^16 molecules/cm^2) between GOME-2C and TROPOMI observations. Over Europe, NO2 pollution is primarily concentrated in Western Europe and along the Mediterranean coast, with seasonal peaks in winter. In high-latitude regions, weaker solar radiation limits the photochemical removal of NO2, causing concentrations to continue rising into the afternoon. These findings demonstrate that differences in polar-orbiting satellite overpass times can significantly affect the interpretation of daily NO2 variability, especially in regions with strong diurnal emissions or meteorological patterns. This study highlights the observational limitations of fixed-time satellites and offers an important reference for the future development of geostationary satellite missions, contributing to improved strategies for NO2 pollution monitoring and control. Full article
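The overpass-time effect quantified in this abstract can be illustrated with a toy calculation (the values below are invented for illustration, not the paper's data): the diurnal difference is simply the afternoon retrieval minus the co-located morning retrieval.

```python
import numpy as np

# Hypothetical co-located tropospheric NO2 columns (molecules/cm^2)
# at three sites, illustrating a morning-vs-afternoon comparison:
morning = np.array([8.0e15, 1.2e16, 6.5e15])    # ~09:30 LT, GOME-2C-like
afternoon = np.array([1.9e16, 2.1e16, 1.8e16])  # ~13:30 LT, TROPOMI-like
diff = afternoon - morning                      # per-site diurnal difference
print(diff.mean())  # ≈ 1.05e16, the same order as the paper's 1.01e16 figure
```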
30 pages, 1292 KiB  
Review
Advances in UAV Remote Sensing for Monitoring Crop Water and Nutrient Status: Modeling Methods, Influencing Factors, and Challenges
by Xiaofei Yang, Junying Chen, Xiaohan Lu, Hao Liu, Yanfu Liu, Xuqian Bai, Long Qian and Zhitao Zhang
Plants 2025, 14(16), 2544; https://doi.org/10.3390/plants14162544 - 15 Aug 2025
Abstract
With the advancement of precision agriculture, Unmanned Aerial Vehicle (UAV)-based remote sensing has been increasingly employed for monitoring crop water and nutrient status due to its high flexibility, fine spatial resolution, and rapid data acquisition capabilities. This review systematically examines recent research progress and key technological pathways in UAV-based remote sensing for crop water and nutrient monitoring. It provides an in-depth analysis of UAV platforms, sensor configurations, and their suitability across diverse agricultural applications. The review also highlights critical data processing steps—including radiometric correction, image stitching, segmentation, and data fusion—and compares three major modeling approaches for parameter inversion: vegetation index-based, data-driven, and physically based methods. Representative application cases across various crops and spatiotemporal scales are summarized. Furthermore, the review explores factors affecting monitoring performance, such as crop growth stages, spatial resolution, illumination and meteorological conditions, and model generalization. Despite significant advancements, current limitations include insufficient sensor versatility, labor-intensive data processing chains, and limited model scalability. Finally, the review outlines future directions, including the integration of edge intelligence, hybrid physical–data modeling, and multi-source, three-dimensional collaborative sensing. This work aims to provide theoretical insights and technical support for advancing UAV-based remote sensing in precision agriculture. Full article
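Of the three inversion approaches the review compares, the vegetation index-based one is the simplest to illustrate; NDVI, for example, is a fixed band-ratio formula. A sketch (not code from the review):

```python
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray, eps: float = 1e-9) -> np.ndarray:
    """NDVI = (NIR - Red) / (NIR + Red); eps guards against zero division."""
    return (nir - red) / (nir + red + eps)

nir = np.array([0.5, 0.4])   # near-infrared reflectance (toy values)
red = np.array([0.1, 0.4])   # red reflectance (toy values)
print(ndvi(nir, red))  # ≈ [0.667, 0.0]
```

Index values like these are then regressed against ground-truth water or nutrient measurements, which is what distinguishes the index-based pathway from the data-driven and physically based ones.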

20 pages, 7578 KiB  
Article
Cross Attention Based Dual-Modality Collaboration for Hyperspectral Image and LiDAR Data Classification
by Khanzada Muzammil Hussain, Keyun Zhao, Yang Zhou, Aamir Ali and Ying Li
Remote Sens. 2025, 17(16), 2836; https://doi.org/10.3390/rs17162836 - 15 Aug 2025
Abstract
Advancements in satellite sensor technology have enabled access to diverse remote sensing (RS) data from multiple platforms. Hyperspectral Image (HSI) data offers rich spectral detail for material identification, while LiDAR captures high-resolution 3D structural information, making the two modalities naturally complementary. By fusing HSI and LiDAR, we can mitigate the limitations of each and improve tasks like land cover classification, vegetation analysis, and terrain mapping through more robust spectral–spatial feature representation. However, traditional multi-scale feature fusion models often struggle with aligning features effectively, which can lead to redundant outputs and diminished spatial clarity. To address these issues, we propose the Cross Attention Bridge for HSI and LiDAR (CAB-HL), a novel dual-path framework that employs a multi-stage cross-attention mechanism to guide the interaction between spectral and spatial features. In CAB-HL, features from each modality are refined across three progressive stages using cross-attention modules, which enhance contextual alignment while preserving the distinctive characteristics of each modality. These fused representations are subsequently integrated and passed through a lightweight classification head. Extensive experiments on three benchmark RS datasets demonstrate that CAB-HL consistently outperforms existing state-of-the-art models, confirming its strength in learning deep joint representations for multimodal classification tasks. Full article
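The CAB-HL modules themselves are not reproduced in this listing, but their core operation, scaled dot-product cross-attention in which one modality's tokens query the other's, can be sketched in a few lines (variable names and shapes are hypothetical):

```python
import numpy as np

def softmax(x: np.ndarray, axis: int = -1) -> np.ndarray:
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(queries: np.ndarray, context: np.ndarray) -> np.ndarray:
    """Scaled dot-product cross-attention: one modality attends to the other.

    queries: (N, d) tokens from, e.g., the HSI branch.
    context: (M, d) tokens from, e.g., the LiDAR branch.
    Returns (N, d): each query token re-expressed as weighted context.
    """
    d = queries.shape[1]
    scores = queries @ context.T / np.sqrt(d)   # (N, M) similarity matrix
    weights = softmax(scores, axis=-1)          # each row sums to 1
    return weights @ context

rng = np.random.default_rng(0)
hsi = rng.normal(size=(5, 8))     # 5 hypothetical HSI tokens
lidar = rng.normal(size=(7, 8))   # 7 hypothetical LiDAR tokens
out = cross_attention(hsi, lidar)
print(out.shape)  # (5, 8)
```

In a full model this would use learned query/key/value projections and run in both directions; the sketch shows only the attention arithmetic.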
(This article belongs to the Special Issue Artificial Intelligence Remote Sensing for Earth Observation)

24 pages, 2715 KiB  
Systematic Review
Application of Remote Sensing and Geographic Information Systems for Monitoring and Managing Chili Crops: A Systematic Review
by Ziyue Wang, Md Ali Akber and Ammar Abdul Aziz
Remote Sens. 2025, 17(16), 2827; https://doi.org/10.3390/rs17162827 - 14 Aug 2025
Abstract
Chili (Capsicum sp.) is a high-value crop cultivated by farmers, but its production is vulnerable to weather extremes (such as irregular rainfall, high temperatures, and storms), pest and disease outbreaks, and spatially fragmented cultivation, resulting in unstable yields and income. Remote sensing (RS) and geographic information systems (GIS) offer promising tools for the timely, spatially explicit monitoring of chili crops. Despite growing interest in agricultural applications of these technologies, no systematic review has yet synthesized how RS and GIS have been used in chili production. This systematic review addresses this gap by evaluating the existing literature on methodological approaches and thematic trends in the use of RS and GIS in chili crop monitoring and management. Following Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines, a comprehensive literature search was conducted using predefined keywords across Scopus, Web of Science, and Google Scholar. Sixty-five peer-reviewed articles published through January 2025 were identified and grouped into different thematic areas: crop mapping, biotic stress, abiotic stress, land suitability, crop health, soil and fertilizer management, and others. The findings indicate RS predominantly serves as the primary analytical method (82% of studies), while GIS primarily supports spatial integration and visualization. Key research gaps identified include limitations in spatial resolution, insufficient integration of intelligent predictive models, and limited scalability for smallholder farming contexts. The review highlights the need for future research incorporating high-resolution RS data, advanced modelling techniques, and spatial decision-support frameworks. These insights aim to guide researchers, agronomists, and policymakers toward enhanced precision monitoring and digital innovation in chili crop production. Full article
(This article belongs to the Special Issue Advances in Multi-Sensor Remote Sensing for Vegetation Monitoring)

64 pages, 20332 KiB  
Review
Reviewing a Decade of Structural Health Monitoring in Footbridges: Advances, Challenges, and Future Directions
by JP Liew, Maria Rashidi, Khoa Le, Ali Matin Nazar and Ehsan Sorooshnia
Remote Sens. 2025, 17(16), 2807; https://doi.org/10.3390/rs17162807 - 13 Aug 2025
Abstract
Aging infrastructure is a growing concern worldwide, with many bridges exceeding 50 years of service, prompting questions about their structural integrity. Over the past decade, the deterioration of bridges has driven extensive research into Structural Health Monitoring (SHM), a tool for early detection of structural deterioration, with particular emphasis on remote-sensing technologies. This review combines a scientometric analysis and a state-of-the-art review to assess recent advancements in the field. From a dataset of 702 publications (2014–2024), 171 relevant papers were analyzed, covering key SHM aspects including sensing devices, data acquisition, processing, damage detection, and reporting. Results show a 433% increase in publications, with the United States leading in output (28.65%) and Glisic, B. and collaborators forming the largest research cluster (11.7%). Accelerometers are the most commonly used sensors (50.88%), and data processing dominates the research focus (50.29%). Key challenges identified include cost (noted in 17.5% of studies), data corruption, and WSN limitations, particularly energy supply. Trends show notable growth in AI applications (400%) and increasing interest in low-cost, crowdsourced SHM using smartphones, MEMS, and cameras. These findings highlight both progress and future opportunities in SHM of footbridges. Full article

25 pages, 5956 KiB  
Article
Research on Crop Classification Using U-Net Integrated with Multimodal Remote Sensing Temporal Features
by Zhihui Zhu, Yuling Chen, Chengzhuo Lu, Minglong Yang, Yonghua Xia, Dewu Huang and Jie Lv
Sensors 2025, 25(16), 5005; https://doi.org/10.3390/s25165005 - 13 Aug 2025
Abstract
Crop classification plays a vital role in acquiring the spatial distribution of agricultural crops, enhancing agricultural management efficiency, and ensuring food security. With the continuous advancement of remote sensing technologies, achieving efficient and accurate crop classification using remote sensing imagery has become a prominent research focus. Conventional approaches largely rely on empirical rules or single-feature selection (e.g., NDVI or VV) for temporal feature extraction, lacking systematic optimization of multimodal feature combinations from optical and radar data. To address this limitation, this study proposes a crop classification method based on feature-level fusion of multimodal remote sensing data, integrating the complementary advantages of optical and SAR imagery to overcome the temporal and spatial representation constraints of single-sensor observations. The study was conducted in Story County, Iowa, USA, focusing on the growth cycles of corn and soybean. Eight vegetation indices (including NDVI and NDRE) and five polarimetric features (including VV and VH) were constructed and analyzed. Using a random forest algorithm to assess feature importance, NDVI+NDRE and VV+VH were identified as the optimal feature combinations. Subsequently, 16 scenes of optical imagery (Sentinel-2) and 30 scenes of radar imagery (Sentinel-1) were fused at the feature level to generate a multimodal temporal feature image with 46 channels. Using Cropland Data Layer (CDL) samples as reference data, a U-Net deep neural network was employed for refined crop classification and compared with single-modal results. Experimental results demonstrated that the fusion model outperforms single-modal approaches in classification accuracy, boundary delineation, and consistency, achieving training, validation, and test accuracies of 95.83%, 91.99%, and 90.81%, respectively. Furthermore, consistent improvements were observed across evaluation metrics, including F1-score, precision, and recall. Full article
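The 46-channel temporal feature image described above is, at its simplest, the per-date feature maps stacked along the channel axis; a toy sketch with hypothetical array sizes:

```python
import numpy as np

# Hypothetical per-date feature maps, each (H, W): 16 optical scenes
# (e.g., an NDVI/NDRE composite per Sentinel-2 date) and 30 radar scenes
# (e.g., VV/VH backscatter per Sentinel-1 date).
H, W = 32, 32
optical = [np.random.rand(H, W) for _ in range(16)]
radar = [np.random.rand(H, W) for _ in range(30)]

# Feature-level fusion: stack all dates into one 46-channel temporal image
# that a U-Net-style classifier can consume.
stack = np.stack(optical + radar, axis=0)
print(stack.shape)  # (46, 32, 32)
```

In practice the two sensors' scenes must first be co-registered and resampled to a common grid before stacking.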
(This article belongs to the Section Smart Agriculture)

33 pages, 7399 KiB  
Article
A DMA Engine for On-Board Real-Time Imaging Processing of Spaceborne SAR Based on a Dedicated Instruction Set
by Ao Zhang, Zhu Yang, Yongrui Li, Ming Xu and Yizhuang Xie
Electronics 2025, 14(16), 3209; https://doi.org/10.3390/electronics14163209 - 13 Aug 2025
Abstract
With advancements in remote sensing technology and very-large-scale integration (VLSI) circuit technology, the Earth observation capabilities of spaceborne synthetic aperture radar (SAR) have continuously improved, leading to significantly increased performance demands for on-board SAR real-time imaging processors. Currently, the low data access efficiency of traditional direct memory access (DMA) engines remains a critical technical bottleneck limiting the real-time processing performance of SAR imaging systems. To address this limitation, this paper proposes a dedicated instruction set for spaceborne SAR data transfer control, leveraging the memory access characteristics of DDR4 SDRAM and common data read/write address jump patterns during on-board SAR real-time imaging processing. This instruction set can significantly reduce the number of instructions required in DMA engine data access operations and optimize data access logic patterns. While effectively reducing memory resource usage, it also substantially enhances the data access efficiency of DMA engines. Based on the proposed dedicated instruction set, we designed a DMA engine optimized for efficient data access in on-board SAR real-time imaging processing scenarios. Module-level performance tests were conducted on this engine, and full-process imaging experiments were performed using an FPGA-based SAR imaging system. Experimental results demonstrate that, under spaceborne SAR imaging processing conditions, the proposed DMA engine achieves a receive data bandwidth of 2.385 GB/s and a transmit data bandwidth of 2.649 GB/s at a 200 MHz clock frequency, indicating excellent memory access bandwidth and efficiency. Furthermore, tests show that the complete SAR imaging system incorporating this DMA engine processes a 16 k × 16 k SAR image using the Chirp Scaling (CS) algorithm in 1.2325 s, representing a significant improvement in timeliness compared to existing solutions. Full article
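The dedicated instruction set itself is not published in this listing; the underlying idea, encoding regular address-jump patterns as a few compact descriptors rather than one command per transfer, can be sketched as follows (the encoding and all names are invented for illustration):

```python
def compress_addresses(addrs):
    """Compress an address sequence into (base, stride, count) descriptors.

    Mimics the idea of describing regular address-jump patterns (e.g.,
    row-wise reads during a SAR corner turn) with few instructions
    instead of one per word.
    """
    if not addrs:
        return []
    descs = []
    base, stride, count = addrs[0], None, 1
    for prev, cur in zip(addrs, addrs[1:]):
        step = cur - prev
        if stride is None or step == stride:
            stride = step
            count += 1
        else:
            descs.append((base, stride, count))   # close the current run
            base, stride, count = cur, None, 1    # start a new run
    descs.append((base, stride if stride is not None else 0, count))
    return descs

# Row-major reads of a 2x4 tile of 4-byte words with a 1024-byte row pitch:
addrs = [r * 1024 + c * 4 for r in range(2) for c in range(4)]
print(compress_addresses(addrs))  # [(0, 4, 4), (1024, 4, 4)]
```

A hardware DMA engine would expand such descriptors back into individual memory commands, which is where the instruction-count reduction described in the abstract comes from.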

27 pages, 4588 KiB  
Article
Remote Sensing as a Sentinel for Safeguarding European Critical Infrastructure in the Face of Natural Disasters
by Miguel A. Belenguer-Plomer, Omar Barrilero, Paula Saameño, Inês Mendes, Michele Lazzarini, Sergio Albani, Naji El Beyrouthy, Mario Al Sayah, Nathan Rueche, Abla Mimi Edjossan-Sossou, Tommaso Monopoli, Edoardo Arnaudo and Gianfranco Caputo
Appl. Sci. 2025, 15(16), 8908; https://doi.org/10.3390/app15168908 - 13 Aug 2025
Abstract
Critical infrastructure, such as transport networks, energy facilities, and urban installations, is increasingly vulnerable to natural hazards and climate change. Remote sensing technologies, namely satellite imagery, offer solutions for monitoring, evaluating, and enhancing the resilience of these vital assets. This paper explores how applications based on synthetic aperture radar (SAR) and optical satellite imagery contribute to the protection of critical infrastructure by enabling near real-time monitoring and early detection of natural hazards for actionable insights across various European critical infrastructure sectors. Case studies demonstrate the integration of remote sensing data into geographic information systems (GISs) for promoting situational awareness, risk assessment, and predictive modeling of natural disasters. These include floods, landslides, wildfires, and earthquakes. Accordingly, this study underlines the role of remote sensing in supporting long-term infrastructure planning and climate adaptation strategies. The presented work supports the goals of the European Union (EU-HORIZON)-sponsored ATLANTIS project, which focuses on strengthening the resilience of critical EU infrastructures by providing authorities and civil protection services with effective tools for managing natural hazards. Full article

35 pages, 7825 KiB  
Review
Approaches for Assessment of Soil Moisture with Conventional Methods, Remote Sensing, UAV, and Machine Learning Methods—A Review
by Songthet Chinnunnem Haokip, Yogesh A. Rajwade, K. V. Ramana Rao, Satya Prakash Kumar, Andyco B. Marak and Ankur Srivastava
Water 2025, 17(16), 2388; https://doi.org/10.3390/w17162388 - 12 Aug 2025
Abstract
Soil moisture is a fundamental constituent of the hydrological system of the Earth and its ecological systems, playing a pivotal role in agricultural productivity, climate modeling, and water resource management. This review comprehensively examines conventional and advanced approaches for estimating or measuring soil moisture, including in situ methods, remote sensing technologies, UAV-based monitoring, and machine learning-driven models. Emphasis is placed on the evolution of soil moisture measurement from destructive gravimetric techniques to non-invasive, high-resolution sensing systems. The paper highlights how machine learning models such as random forests, support vector machines, and neural networks are increasingly used for modeling intricate soil moisture dynamics with data from multiple sources. A bibliometric analysis further underscores the research trends and identifies key contributors, regions, and technologies in this domain. The findings advocate for the integration of physics-based understanding, sensor technologies, and data-driven approaches to enhance prediction accuracy, spatiotemporal coverage, and decision-making capabilities. Full article
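The destructive gravimetric baseline that this review starts from is a one-line formula: oven-dry a sample and relate the lost water mass to the dry soil mass. A minimal sketch:

```python
def gravimetric_water_content(wet_mass_g: float, dry_mass_g: float) -> float:
    """Oven-drying method: water mass lost divided by dry soil mass."""
    return (wet_mass_g - dry_mass_g) / dry_mass_g

# A 125 g field sample that weighs 100 g after oven drying:
print(gravimetric_water_content(125.0, 100.0))  # 0.25
```

Everything else the review covers (sensors, remote sensing retrievals, learned models) can be read as non-destructive proxies calibrated against this quantity.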

34 pages, 1262 KiB  
Review
Deep Learning-Based Fusion of Optical, Radar, and LiDAR Data for Advancing Land Monitoring
by Yizhe Li and Xinqing Xiao
Sensors 2025, 25(16), 4991; https://doi.org/10.3390/s25164991 - 12 Aug 2025
Abstract
Accurate and timely land monitoring is crucial for addressing global environmental, economic, and societal challenges, including climate change, sustainable development, and disaster mitigation. While single-source remote sensing data offers significant capabilities, inherent limitations such as cloud cover interference (optical), speckle noise (radar), or limited spectral information (LiDAR) often hinder comprehensive and robust characterization of land surfaces. Recent advancements in synergistic harmonization technology for land monitoring, along with enhanced signal processing techniques and the integration of machine learning algorithms, have significantly broadened the scope and depth of geosciences. Therefore, it is essential to summarize the comprehensive applications of synergistic harmonization technology for geosciences, with a particular focus on recent advancements. Most of the existing review papers focus on the application of a single technology in a specific area, highlighting the need for a comprehensive review that integrates synergistic harmonization technology. This review provides a comprehensive overview of advancements in land monitoring achieved through the synergistic harmonization of optical, radar, and LiDAR satellite technologies. It details the unique strengths and weaknesses of each sensor type, highlighting how their integration overcomes individual limitations by leveraging complementary information. This review analyzes current data harmonization and preprocessing techniques, various data fusion levels, and the transformative role of machine learning and deep learning algorithms, including emerging foundation models. Key applications across diverse domains such as land cover/land use mapping, change detection, forest monitoring, urban monitoring, agricultural monitoring, and natural hazard assessment are discussed, demonstrating enhanced accuracy and scope. Finally, this review identifies persistent challenges such as technical complexities in data integration, issues with data availability and accessibility, validation hurdles, and the need for standardization. It proposes future research directions focusing on advanced AI, novel fusion techniques, improved data infrastructure, integrated “space–air–ground” systems, and interdisciplinary collaboration to realize the full potential of multi-sensor satellite data for robust and timely land surface monitoring. Supported by deep learning, this synergy will improve our ability to monitor land surface conditions more accurately and reliably. Full article

22 pages, 33740 KiB  
Article
Detection of Pine Wilt Disease in UAV Remote Sensing Images Based on SLMW-Net
by Xiaoli Yuan, Guoxiong Zhou, Yongming Yan and Xuewu Yan
Plants 2025, 14(16), 2490; https://doi.org/10.3390/plants14162490 - 11 Aug 2025
Abstract
The pine wood nematode is responsible for pine wilt disease, which poses a significant threat to forest ecosystems worldwide. If not quickly detected and removed, the disease spreads rapidly. Advancements in UAV and image detection technologies are crucial for disease monitoring, enabling efficient and automated identification of pine wilt disease. However, challenges persist in the detection of pine wilt disease, including complex UAV imagery backgrounds, difficulty extracting subtle features, and prediction frame bias. In this study, we develop a specialized UAV remote sensing pine forest ARen dataset and introduce a novel pine wilt disease detection model, SLMW-Net. Firstly, the Self-Learning Feature Extraction Module (SFEM) is proposed, combining a convolutional operation and a learnable normalization layer, which effectively solves the problem of difficult feature extraction from pine trees in complex backgrounds and reduces the interference of irrelevant regions. Secondly, the MicroFeature Attention Mechanism (MFAM) is designed to enhance the capture of the tiny features of pine trees in the early stages of nematode infection by combining Grouped Attention and Gated Feed-Forward. Then, the Weighted and Linearly Scaled IoU Loss (WLIoU Loss) is introduced, which combines weight adjustment with linear stretch truncation to improve the learning strategy and enhance model performance and generalization. SLMW-Net is trained on the self-built ARen dataset and compared with seven existing methods. The experimental results show that SLMW-Net outperforms all other methods, achieving an mAP@0.5 of 86.7% and an mAP@0.5:0.95 of 40.1%. Compared to the backbone model, the mAP@0.5 increased from 83.9% to 86.7%. Therefore, the proposed SLMW-Net demonstrates strong capabilities in addressing the three major challenges of pine wilt disease detection, helping to protect forest health and maintain ecological balance. Full article
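The WLIoU loss itself is not given in this listing, but the IoU quantity underlying it (and the reported mAP@0.5 metric) is standard; a plain box-IoU sketch:

```python
def box_iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# Two partially overlapping boxes: intersection area 1, union area 7.
print(box_iou((0, 0, 2, 2), (1, 1, 3, 3)))  # ≈ 0.143 (1/7)
```

mAP@0.5 counts a prediction as correct when this value exceeds 0.5 against a ground-truth box; the paper's loss adds its own weighting and scaling on top of IoU.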
(This article belongs to the Special Issue Advances in Artificial Intelligence for Plant Research)

34 pages, 4433 KiB  
Article
Estimation of Residential Vacancy Rate in Underdeveloped Areas of China Based on Baidu Street View Residential Exterior Images: A Case Study of Nanning, Guangxi
by Weijia Zeng, Binglin Liu, Yi Hu, Weijiang Liu, Yuhe Fu, Yiyue Zhang and Weiran Zhang
Algorithms 2025, 18(8), 500; https://doi.org/10.3390/a18080500 - 11 Aug 2025
Abstract
Housing vacancy rate is a key indicator for evaluating urban sustainable development. Owing to rapid urbanization, population outflow, and insufficient industrial support, housing vacancy is particularly prominent in China's underdeveloped regions, yet the lack of official data and the limitations of traditional survey methods have restricted in-depth research. This study proposes a vacancy rate estimation method based on Baidu Street View residential exterior images and deep learning. Taking Nanning, Guangxi as a case study, an automatic discrimination model for residential vacancy status is constructed by identifying visual clues such as window occlusion, balcony debris accumulation, and facade maintenance condition. The study first uses the Baidu Street View API to collect images of residential communities in Nanning; after manual annotation and field verification, a labeled dataset is constructed. A pre-trained deep learning model (ResNet50) is then fine-tuned on the labeled street view images and applied to estimate community-level vacancy rates, and GIS spatial analysis is used to reveal the spatial distribution pattern and influencing factors of the vacancy rate. The results show that street view images can effectively capture vacancy characteristics that are difficult to identify with traditional remote sensing and indirect indicators, providing a refined data source and methodological innovation for housing vacancy research in underdeveloped regions. The study further finds that the residential vacancy rate in Nanning shows significant spatial differentiation, with markedly different vacancy drivers in the old urban area and emerging areas. This study expands the application boundary of computer vision in urban research and fills a research gap on vacancy in underdeveloped areas. Its results can provide a scientific basis for governments to optimize housing planning, for developers to invest rationally, and for residents to make home purchase decisions, thereby helping to improve urban sustainable development and governance capabilities. Full article
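Once a per-image classifier has judged each residential facade as vacant or occupied, estimating a community's vacancy rate reduces to simple aggregation. A minimal sketch of that downstream step (the function name and the 0.5 threshold are illustrative assumptions; the article's actual pipeline uses a fine-tuned ResNet50 to produce the per-image predictions):

```python
def vacancy_rate(probabilities, threshold=0.5):
    """Fraction of units classified as vacant, given per-image vacancy probabilities in [0, 1]."""
    if not probabilities:
        raise ValueError("need at least one prediction")
    # Count images whose predicted vacancy probability meets the threshold
    vacant = sum(1 for p in probabilities if p >= threshold)
    return vacant / len(probabilities)
```

Community-level rates computed this way can then be joined to spatial units for GIS analysis of their distribution.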
(This article belongs to the Special Issue Algorithms for Smart Cities (2nd Edition))

16 pages, 1318 KiB  
Perspective
Shared Presence via XR Communication and Interaction Within a Dynamically Updated Digital Twin of a Smart Space: Conceptual Framework and Research Challenges
by Lea Skorin-Kapov, Maja Matijasevic, Ivana Podnar Zarko, Mario Kusek, Darko Huljenic, Vedran Skarica, Darian Skarica and Andrej Grguric
Appl. Sci. 2025, 15(16), 8838; https://doi.org/10.3390/app15168838 - 11 Aug 2025
Abstract
The integration of emerging eXtended Reality (XR) technologies, digital twins (DTs), smart environments, and advanced mobile and wireless networks is set to enable novel forms of immersive interaction and communication. This paper proposes a high-level conceptual framework for shared presence via XR-based communication and interaction within a virtual reality (VR) representation of the digital twin of a smart space. The digital twin is continuously updated and synchronized, both spatially and temporally, with a physical smart space equipped with sensors and actuators. This architecture enables interactive experiences and fosters a sense of co-presence between a local user in the smart physical environment using augmented reality (AR) and a remote VR user engaging through the digital counterpart. We present our lab deployment architecture, which serves as the basis for ongoing experimental work on testing and integrating the functionalities defined in the conceptual framework. Finally, key technology requirements and research challenges are outlined, providing a foundation for future research on immersive, interconnected XR systems. Full article
(This article belongs to the Special Issue Extended Reality (XR) and User Experience (UX) Technologies)
