Search Results (719)

Search Parameters:
Keywords = remote interference

20 pages, 1207 KB  
Article
EDM-UNet: An Edge-Enhanced and Attention-Guided Model for UAV-Based Weed Segmentation in Soybean Fields
by Jiaxin Gao, Feng Tan and Xiaohui Li
Agriculture 2025, 15(24), 2575; https://doi.org/10.3390/agriculture15242575 - 12 Dec 2025
Abstract
Weeds compete with soybeans for light, water and nutrients, inhibiting soybean growth and reducing yield and quality. To address the low efficiency, high environmental risk and insufficient weed identification accuracy of traditional weed management in complex farmland scenarios, this study proposes a weed segmentation method for soybean fields based on unmanned aerial vehicle remote sensing. The method enhances channel feature selection by introducing a lightweight ECA module, improves target boundary recognition by combining Canny edge detection, and designs directional consistency filtering and morphological post-processing to optimize the spatial structure of the segmentation results. The experimental results show that EDM-UNet achieves the best performance on the self-built dataset, with MIoU, Recall and Precision on the test set reaching 89.45%, 93.53% and 94.78%, respectively. In terms of inference speed, EDM-UNet also performs well, with an FPS of 40.36, which meets the requirements of real-time detection. Compared with the baseline network, the MIoU, Recall and Precision of EDM-UNet increased by 6.71%, 5.67% and 3.03%, respectively, while the FPS decreased by 11.25. In addition, performance evaluation experiments were conducted under different degrees of weed interference, and the model showed good detection results in all cases, verifying that it can accurately segment weeds in soybean fields. This research provides an efficient solution for weed segmentation in complex farmland environments that balances computational efficiency and segmentation accuracy, and has practical value for advancing smart agricultural technology.
(This article belongs to the Section Artificial Intelligence and Digital Agriculture)
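
The abstract attributes part of EDM-UNet's gain to a lightweight ECA channel-attention block inside a U-Net-style segmenter. As a rough illustration of that idea (not the authors' code; the kernel size and placement are assumptions), an ECA-style module can be sketched in PyTorch as follows:

```python
import torch
import torch.nn as nn

class ECA(nn.Module):
    """Lightweight channel attention in the spirit of the ECA module the abstract mentions."""
    def __init__(self, channels: int, k_size: int = 3):
        super().__init__()
        self.avg_pool = nn.AdaptiveAvgPool2d(1)            # squeeze spatial dims to 1x1
        self.conv = nn.Conv1d(1, 1, kernel_size=k_size,
                              padding=k_size // 2, bias=False)  # cross-channel 1D conv
        self.sigmoid = nn.Sigmoid()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, H, W) feature map from a U-Net encoder/decoder stage
        y = self.avg_pool(x)                                 # (B, C, 1, 1)
        y = self.conv(y.squeeze(-1).transpose(-1, -2))       # (B, 1, C)
        y = self.sigmoid(y.transpose(-1, -2).unsqueeze(-1))  # (B, C, 1, 1)
        return x * y                                         # reweight channels

feats = torch.randn(2, 64, 128, 128)
print(ECA(64)(feats).shape)  # torch.Size([2, 64, 128, 128])
```
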
18 pages, 4553 KB  
Article
Changes of Terrace Distribution in the Qinba Mountain Based on Deep Learning
by Xiaohua Meng, Zhihua Song, Xiaoyun Cui and Peng Shi
Sustainability 2025, 17(24), 10971; https://doi.org/10.3390/su172410971 - 8 Dec 2025
Viewed by 83
Abstract
The Qinba Mountains in China span six provinces and are characterized by a large population, rugged terrain, steep peaks, deep valleys, and scarce flat land, making large-scale agricultural development challenging. Terraced fields serve as the core cropland type in this region, playing a vital role in preventing soil erosion on sloping farmland and expanding agricultural production space; they also function as a crucial medium for sustaining the ecosystem services of mountainous areas. As a transitional zone between China's northern and southern climates and a vital ecological barrier, the Qinba Mountains' terraced ecosystems have undergone significant spatial changes over the past two decades due to compound factors including the Grain-for-Green Program, urban expansion, and population outflow. However, large-scale, long-term, high-resolution monitoring of terraced fields in this region still faces technical bottlenecks. On one hand, traditional remote sensing interpretation methods rely on manually designed features, making them ill-suited to the fragmented, multi-scale distribution and terrain-shadow interference of Qinba terraces. On the other hand, the lack of high-resolution historical imagery means that low-resolution data lack the accuracy and spatial detail needed to capture dynamic changes in terraced fields. This study aims to fill the gap in detailed dynamic monitoring of terraced fields in the Qinba Mountains. Using image tiles created from Landsat-8 satellite imagery collected between 2017 and 2020, it employs three deep learning semantic segmentation models: DeepLabV3 based on ResNet-34, U-Net, and PSPNet. Through optimization strategies such as data augmentation and transfer learning, the study achieves 15-m-resolution remote sensing interpretation of terraced fields in the Qinba Mountains from 2000 to 2020. Comparative results revealed that DeepLabV3 demonstrated significant advantages in identifying terraced fields: Mean Pixel Accuracy (MPA) reached 79.42%, Intersection over Union (IoU) was 77.26%, the F1 score attained 80.98, and the Kappa coefficient reached 0.7148, all outperforming the U-Net and PSPNet models. The model's accuracy is not uniform but is highly contingent on the topographic context; it excels in environments archetypal of mid-altitudes with moderately steep slopes. Based on this, we created a set of tiles integrating multi-source data from RGB imagery and a DEM. The fusion model, which incorporates DEM-derived topographic data, demonstrates improvement across these aspects. Dynamic monitoring based on the optimal model indicates that terraced fields in the Qinba Mountains expanded between 2000 and 2020: the total area was 57,834 km2 in 2000 and had increased to 63,742 km2 by 2020, an approximate growth rate of 8.36%. Sichuan, Gansu, and Shaanxi provinces contributed the majority of this expansion, accounting for 71% of the newly added terraced fields. Over the 20-year period, the center of gravity of terraced fields shifted upward: the area of terraced fields above 500 m in elevation increased, while that below 500 m decreased. Terraced fields surrounding urban areas declined, and mountainous slopes at higher elevations became the primary source of newly constructed terraces.
This study not only establishes a technical paradigm for the refined monitoring of terraced field resources in mountainous regions but also provides critical data support and theoretical foundations for implementing sustainable land development in the Qinba Mountains, and it holds significant practical value for advancing regional sustainable development.
(This article belongs to the Section Sustainable Agriculture)
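
The model comparison above is reported in terms of Mean Pixel Accuracy, IoU and the Kappa coefficient. For readers reproducing such an evaluation, a minimal NumPy sketch of these three scores for a binary terrace/non-terrace mask (the class convention is an assumption, not taken from the paper) is:

```python
import numpy as np

def segmentation_scores(pred: np.ndarray, truth: np.ndarray):
    """Mean pixel accuracy, IoU and Cohen's kappa for a binary terrace mask (1 = terrace)."""
    tp = np.sum((pred == 1) & (truth == 1))
    tn = np.sum((pred == 0) & (truth == 0))
    fp = np.sum((pred == 1) & (truth == 0))
    fn = np.sum((pred == 0) & (truth == 1))
    n = tp + tn + fp + fn
    mpa = 0.5 * (tp / (tp + fn) + tn / (tn + fp))                 # mean per-class pixel accuracy
    iou = tp / (tp + fp + fn)                                     # intersection over union, terrace class
    po = (tp + tn) / n                                            # observed agreement
    pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n**2   # chance agreement
    kappa = (po - pe) / (1 - pe)
    return mpa, iou, kappa

pred = np.random.randint(0, 2, (512, 512))
truth = np.random.randint(0, 2, (512, 512))
print(segmentation_scores(pred, truth))
```
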

24 pages, 5153 KB  
Article
Temperature-Field Driven Adaptive Radiometric Calibration for Scan Mirror Thermal Radiation Interference in FY-4B GIIRS
by Xiao Liang, Yaopu Zou, Changpei Han, Pengyu Huang, Libing Li and Yuanshu Zhang
Remote Sens. 2025, 17(24), 3948; https://doi.org/10.3390/rs17243948 - 6 Dec 2025
Viewed by 109
Abstract
To meet the growing demand for quantitative remote sensing applications in GIIRS radiometric calibration, this paper proposes a temperature field-driven adaptive scan mirror thermal radiation interference correction method. Based on the on-orbit deep space observation data from the Fengyun-4B satellite, this paper systematically analyzes the thermal radiation interference characteristics caused by scan mirror deflection and constructs the first scan mirror thermal radiation response model suitable for GIIRS. On the basis of this model, this paper further introduces the dynamic variation characteristics of the internal thermal environment of the instrument, enabling adaptive response and compensation for radiation disturbances. This method overcomes the limitations of relying on static calibration parameters and improves the generality and robustness of the model. Independent validation results show that this method effectively suppresses the interference of scan mirror deflection on instrument background radiation and enhances the consistency of the deep space and blackbody spectral diurnal variation time series. After correction, the average system bias of the interference-sensitive channel decreased by 94%, and the standard deviation of the radiance bias fell from 2.5 mW/m2·sr·cm−1 to below 0.5 mW/m2·sr·cm−1. In the O-B test, the maximum improvement in relative standard deviation reached 0.15 K.
(This article belongs to the Special Issue Remote Sensing Data Preprocessing and Calibration)
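
The correction removes a scan-mirror self-emission term that tracks the instrument's internal temperature field. A schematic of the underlying arithmetic, with a Planck radiance in the paper's mW/(m2·sr·cm−1) units and a made-up scalar coupling coefficient standing in for the fitted, angle-dependent response, is sketched below; it is not the GIIRS calibration chain itself.

```python
import numpy as np

C1 = 1.191042e-5   # first radiation constant, mW / (m^2 sr cm^-4)
C2 = 1.4387752     # second radiation constant, cm K

def planck_wn(wavenumber_cm: float, temp_k: float) -> float:
    """Blackbody radiance in mW/(m^2 sr cm^-1) at a given wavenumber and temperature."""
    return C1 * wavenumber_cm**3 / np.expm1(C2 * wavenumber_cm / temp_k)

def correct_scene(radiance, wavenumber_cm, mirror_temp_k, coupling):
    """Subtract a mirror self-emission term that scales with the mirror temperature field.
    `coupling` is a placeholder scalar; the paper fits an angle/temperature-dependent response."""
    return radiance - coupling * planck_wn(wavenumber_cm, mirror_temp_k)

scene = np.array([95.0, 96.2, 94.8])               # measured radiances, mW/(m^2 sr cm^-1)
print(correct_scene(scene, 700.0, 290.0, 0.002))   # illustrative LWIR channel at 700 cm^-1
```
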

26 pages, 5797 KB  
Article
ASGT-Net: A Multi-Modal Semantic Segmentation Network with Symmetric Feature Fusion and Adaptive Sparse Gating
by Wendie Yue, Kai Chang, Xinyu Liu, Kaijun Tan and Wenqian Chen
Symmetry 2025, 17(12), 2070; https://doi.org/10.3390/sym17122070 - 3 Dec 2025
Viewed by 247
Abstract
In the field of remote sensing, accurate semantic segmentation is crucial for applications such as environmental monitoring and urban planning. Effective fusion of multi-modal data is a key factor in improving land cover classification accuracy. To address the limitations of existing methods, such as inadequate feature fusion, noise interference, and insufficient modeling of long-range dependencies, this paper proposes ASGT-Net, an enhanced multi-modal fusion network. The network adopts an encoder-decoder architecture, with the encoder featuring a symmetric dual-branch structure based on a ResNet50 backbone and a hierarchical feature extraction framework. At each layer, Adaptive Weighted Fusion (AWF) modules are introduced to dynamically adjust the feature contributions from different modalities. Additionally, this paper innovatively introduces an alternating mechanism of Learnable Sparse Attention (LSA) and Adaptive Gating Fusion (AGF): LSA selectively activates salient features to capture critical spatial contextual information, while AGF adaptively gates multi-modal data flows to suppress common conflicting noise. These mechanisms work synergistically to significantly enhance feature integration, improve multi-scale representation, and reduce computational redundancy. Experiments on the ISPRS benchmark datasets (Vaihingen and Potsdam) demonstrate that ASGT-Net outperforms current mainstream multi-modal fusion techniques in both accuracy and efficiency.
(This article belongs to the Section Computer)
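
The Adaptive Weighted Fusion idea, re-weighting what each modality contributes at a given encoder stage, can be illustrated with a small PyTorch module. The gating design here (global pooling plus a softmax over two modality logits) is an assumption for illustration, not the paper's exact AWF block.

```python
import torch
import torch.nn as nn

class AdaptiveWeightedFusion(nn.Module):
    """Fuses two modality feature maps with content-dependent weights (AWF-style sketch)."""
    def __init__(self, channels: int):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(2 * channels, 2, kernel_size=1),   # one logit per modality
        )

    def forward(self, rgb: torch.Tensor, dsm: torch.Tensor) -> torch.Tensor:
        logits = self.gate(torch.cat([rgb, dsm], dim=1))  # (B, 2, 1, 1)
        w = torch.softmax(logits, dim=1)                  # modality weights sum to 1
        return w[:, 0:1] * rgb + w[:, 1:2] * dsm

fused = AdaptiveWeightedFusion(256)(torch.randn(1, 256, 64, 64), torch.randn(1, 256, 64, 64))
print(fused.shape)  # torch.Size([1, 256, 64, 64])
```
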

36 pages, 22245 KB  
Article
CMSNet: A SAM-Enhanced CNN–Mamba Framework for Damaged Building Change Detection in Remote Sensing Imagery
by Jianli Zhang, Liwei Tao, Wenbo Wei, Pengfei Ma and Mengdi Shi
Remote Sens. 2025, 17(23), 3913; https://doi.org/10.3390/rs17233913 - 3 Dec 2025
Viewed by 394
Abstract
In war and explosion scenarios, buildings often suffer varying degrees of damage characterized by complex, irregular, and fragmented spatial patterns, posing significant challenges for remote sensing–based change detection. Additionally, the scarcity of high-quality datasets limits the development and generalization of deep learning approaches. To overcome these issues, we propose CMSNet, an end-to-end framework that integrates the structural priors of the Segment Anything Model (SAM) with the efficient temporal modeling and fine-grained representation capabilities of CNN–Mamba. Specifically, CMSNet adopts CNN–Mamba as the backbone to extract multi-scale semantic features from bi-temporal images, while SAM-derived visual priors guide the network to focus on building boundaries and structural variations. A Pre-trained Visual Prior-Guided Feature Fusion Module (PVPF-FM) is introduced to align and fuse these priors with change features, enhancing robustness against local damage, non-rigid deformations, and complex background interference. Furthermore, we construct a new RWSBD (Real-world War Scene Building Damage) dataset based on Gaza war scenes, comprising 42,732 annotated building damage instances across diverse scales, offering a strong benchmark for real-world scenarios. Extensive experiments on RWSBD and three public datasets (CWBD, WHU-CD, and LEVIR-CD+) demonstrate that CMSNet consistently outperforms eight state-of-the-art methods in both quantitative metrics (F1, IoU, Precision, Recall) and qualitative evaluations, especially in fine-grained boundary preservation, small-scale change detection, and complex scene adaptability. Overall, this work introduces a novel detection framework that combines foundation model priors with efficient change modeling, along with a new large-scale war damage dataset, contributing valuable advances to both research and practical applications in remote sensing change detection. Additionally, the strong generalization ability and efficient architecture of CMSNet highlight its potential for scalable deployment and practical use in large-area post-disaster assessment.
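
The PVPF-FM described above injects SAM-derived structural priors into the change features. A toy version of that fusion step, assuming the prior arrives as a single-channel boundary/mask map (an assumption; the real module is more involved), could look like this:

```python
import torch
import torch.nn as nn

class PriorGuidedFusion(nn.Module):
    """Toy fusion of a SAM-style structural prior with bi-temporal change features."""
    def __init__(self, channels: int):
        super().__init__()
        self.prior_proj = nn.Conv2d(1, channels, kernel_size=3, padding=1)  # lift mask prior to feature space
        self.mix = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, change_feat: torch.Tensor, prior_mask: torch.Tensor) -> torch.Tensor:
        prior = self.prior_proj(prior_mask)             # (B, C, H, W)
        gated = change_feat * torch.sigmoid(prior)      # emphasise building boundaries
        return self.mix(torch.cat([gated, prior], dim=1))

out = PriorGuidedFusion(64)(torch.randn(1, 64, 128, 128), torch.rand(1, 1, 128, 128))
print(out.shape)  # torch.Size([1, 64, 128, 128])
```
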

32 pages, 22810 KB  
Article
Research on Forest Fire Smoke and Cloud Separation Method Based on Fisher Discriminant Analysis
by Jiayi Zhang, Jun Pan, Yehan Sun, Lijun Jiang and Kaifeng Liu
Remote Sens. 2025, 17(23), 3880; https://doi.org/10.3390/rs17233880 - 29 Nov 2025
Viewed by 160
Abstract
In remote sensing monitoring of forest fires, smoke and clouds exhibit similar spectral characteristics in satellite imagery, which can easily lead to clouds being misjudged as smoke. This incorrect discrimination may result in missed detections or false alarms of fire points. The precise differentiation of smoke and clouds has become increasingly challenging, significantly limiting the ability to accurately identify fires in their early stages. Additionally, electromagnetic waves penetrating the smoke and clouds interact with the underlying surface, which interferes with the effective separation of smoke and clouds. In response to these issues, this paper systematically studies the impact mechanism of different underlying surfaces on the spectral response of smoke and clouds. We constructed a dataset using sample collection and gradation methods. It contains smoke at varying concentrations and clouds of different thicknesses over three typical underlying surfaces: vegetation, soil, and water. Based on the analysis of spectral characteristics, analysis of variance (ANOVA) was applied to screen sensitive bands suitable for the separation of smoke and clouds. Furthermore, considering the distribution characteristics of smoke and cloud samples in spectral space, single-band threshold models, visible-band index (VBI) models, ratio index models, and Fisher smoke and cloud recognition index (FSCRI) models were developed for the three typical underlying surfaces. The validation results demonstrate that the FSCRI models significantly outperform the other models in terms of both robustness and accuracy. Their recognition accuracy rates for smoke and clouds over the vegetation, soil and water underlying surfaces reached 95.5%, 93.5% and 99%, respectively. The proposed method effectively suppresses cloud interference to improve smoke and cloud separation. This capability enables more accurate early detection of forest fires and localization of their sources.
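
The FSCRI models are built on Fisher discriminant analysis over the ANOVA-selected bands. The core computation, a projection direction derived from the pooled within-class scatter plus a midpoint threshold, is sketched below on synthetic reflectances; the band count, values and decision rule are illustrative assumptions, not the paper's fitted indices.

```python
import numpy as np

def fisher_index(smoke: np.ndarray, cloud: np.ndarray):
    """Fisher discriminant weights for a smoke-vs-cloud index built from sensitive bands.
    smoke/cloud: (n_samples, n_bands) reflectance matrices."""
    mu_s, mu_c = smoke.mean(axis=0), cloud.mean(axis=0)
    sw = np.cov(smoke, rowvar=False) + np.cov(cloud, rowvar=False)    # within-class scatter
    w = np.linalg.solve(sw, mu_s - mu_c)                              # projection direction
    threshold = 0.5 * (smoke @ w).mean() + 0.5 * (cloud @ w).mean()   # midpoint decision rule
    return w, threshold

rng = np.random.default_rng(0)
smoke = rng.normal(0.30, 0.05, (200, 4))   # synthetic reflectances in four sensitive bands
cloud = rng.normal(0.55, 0.05, (200, 4))
w, t = fisher_index(smoke, cloud)
print("smoke correctly flagged:", float(((smoke @ w) > t).mean()))
```
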

22 pages, 32335 KB  
Article
MAIENet: Multi-Modality Adaptive Interaction Enhancement Network for SAR Object Detection
by Yu Tong, Kaina Xiong, Jun Liu, Guixing Cao and Xinyue Fan
Remote Sens. 2025, 17(23), 3866; https://doi.org/10.3390/rs17233866 - 28 Nov 2025
Viewed by 168
Abstract
Synthetic aperture radar (SAR) object detection offers significant advantages in remote sensing applications, particularly under adverse weather conditions or low-light environments. However, single-modal SAR image object detection encounters numerous challenges, including speckle noise, limited texture information, and interference from complex backgrounds. To address these issues, we present the Modality-Aware Adaptive Interaction Enhancement Network (MAIENet), a multimodal detection framework designed to effectively extract complementary information from both SAR and optical images, thereby enhancing object detection performance. MAIENet comprises three primary components: a batch-wise splitting and channel-wise concatenation (BSCC) module, a modality-aware adaptive interaction enhancement (MAIE) module, and a multi-directional focus (MF) module. The BSCC module extracts and reorganizes features from each modality to preserve their distinct characteristics. The MAIE module facilitates deeper cross-modal fusion through channel reweighting, deformable convolutions, atrous convolution, and attention mechanisms, enabling the network to emphasize critical modal information while reducing interference. By integrating features from various spatial directions, the MF module expands the receptive field, allowing the model to adapt more effectively to complex scenes. The MAIENet framework is end-to-end trainable and can be seamlessly integrated into existing detection networks with minimal modifications. Experimental results on the publicly available OGSOD-1.0 dataset demonstrate that MAIENet achieves superior performance compared with existing methods, achieving 90.8% mAP50.
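
The BSCC module's job, regrouping SAR and optical samples that share a batch into one channel-wise paired tensor, is simple enough to show directly; the convolutional projection and the SAR-first batch layout are assumptions made only for this sketch.

```python
import torch
import torch.nn as nn

class BSCC(nn.Module):
    """Batch-wise split + channel-wise concat, loosely following the BSCC idea in the abstract:
    SAR and optical samples stacked along the batch axis are regrouped into one multi-channel tensor."""
    def __init__(self, in_channels: int, out_channels: int):
        super().__init__()
        self.proj = nn.Conv2d(2 * in_channels, out_channels, kernel_size=3, padding=1)

    def forward(self, stacked: torch.Tensor) -> torch.Tensor:
        sar, optical = torch.chunk(stacked, 2, dim=0)   # split the doubled batch
        fused = torch.cat([sar, optical], dim=1)        # pair modalities channel-wise
        return self.proj(fused)

stacked = torch.randn(8, 3, 256, 256)                   # 4 SAR + 4 optical images
print(BSCC(3, 64)(stacked).shape)                       # torch.Size([4, 64, 256, 256])
```
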

19 pages, 4927 KB  
Article
Enhanced Remote Sensing Object Detection via AFDNet: Integrating Dual-Sensing Attention and Dynamic Bounding Box Optimization
by Ziyan Wang, Miao Fang and Xiaofei Zhang
Algorithms 2025, 18(12), 751; https://doi.org/10.3390/a18120751 - 28 Nov 2025
Viewed by 193
Abstract
Existing remote sensing object detection methods struggle with challenges such as complex background interference, variable object scales, and class imbalance due to a lack of coordinated internal optimization. This paper proposes AFDNet, a novel RSOD algorithm that establishes an internal collaborative evolution mechanism to systematically enhance the model's feature perception and localization capabilities in complex scenes. AFDNet achieves this through three tightly coupled, co-evolving components: (1) a channel–spatial dual-sensing module that adaptively focuses on crucial features and suppresses background noise; (2) a dynamic bounding box optimization module that integrates distance-aware and scale-normalization strategies, significantly boosting localization accuracy and regression robustness for multi-scale objects; and (3) a Gaussian adaptive activation unit that enhances the model's nonlinear fitting capability for better detail extraction under weak conditions. Extensive experiments on two public datasets, RSOD and NWPU VHR-10, verify the performance of AFDNet. AFDNet achieved a leading 95.16% mAP@50 on the RSOD dataset and 96.52% mAP@50 on the NWPU VHR-10 dataset, significantly better than mainstream detection models. This study verifies the effectiveness of introducing internal co-evolution mechanisms and provides a novel and reliable solution for high-precision remote sensing target detection.
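
The abstract's "Gaussian adaptive activation unit" is not specified in detail here; one plausible reading, offered purely as an assumption, is an activation that modulates its input with a learnable Gaussian bump, which is easy to express in PyTorch:

```python
import torch
import torch.nn as nn

class GaussianAdaptiveActivation(nn.Module):
    """One possible form of a 'Gaussian adaptive activation': the input is modulated by a
    Gaussian bump with learnable centre and width (not necessarily the paper's exact unit)."""
    def __init__(self):
        super().__init__()
        self.mu = nn.Parameter(torch.zeros(1))
        self.log_sigma = nn.Parameter(torch.zeros(1))   # parameterise sigma > 0 via exp

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        sigma = torch.exp(self.log_sigma)
        return x * torch.exp(-((x - self.mu) ** 2) / (2 * sigma ** 2))

act = GaussianAdaptiveActivation()
print(act(torch.linspace(-3, 3, 7)))
```
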

19 pages, 3672 KB  
Article
DualRecon: Building 3D Reconstruction from Dual-View Remote Sensing Images
by Ruizhe Shao, Hao Chen, Jun Li, Mengyu Ma and Chun Du
Remote Sens. 2025, 17(23), 3793; https://doi.org/10.3390/rs17233793 - 22 Nov 2025
Viewed by 524
Abstract
Large-scale and rapid 3D reconstruction of urban areas holds significant practical value. Recently, methods that reconstruct buildings from off-nadir imagery have gained attention for their potential to meet the demand for large-scale, time-sensitive reconstruction applications. These methods typically estimate the building height and footprint position by extracting the building roof and the roof-to-footprint offset within a single off-nadir image. However, the reconstruction accuracy of these methods is primarily constrained by two issues: first, errors in single-view building detection, and second, the inaccurate extraction of offsets, which is often a consequence of these detection errors as well as interference from shadow occlusion. To address these challenges, we propose DualRecon, a method for 3D building reconstruction from heterogeneous dual-view remote sensing imagery. In contrast to single-image detection methods, DualRecon achieves more accurate 3D information extraction for reconstruction by fusing and correlating building information across different views. This success can be attributed to three key advantages of DualRecon. First, DualRecon fuses the two input views and extracts building objects based on the fused image features, thereby improving the accuracy of building detection and localization. Second, compared to the roof-to-footprint offset, the disparity offset of the same rooftop between different views is less affected by interference from shadows and occlusions. Our method leverages this disparity offset to determine building height, which enhances the accuracy of height estimation. Third, we designed DualRecon with a three-branch architecture optimally tailored to the dual-view 3D information extraction task. Moreover, this paper introduces BuildingDual, the first large-scale dual-view 3D building reconstruction dataset. It comprises 3789 image pairs containing 288,787 building instances, where each instance is annotated with its respective roofs in both views, roof-to-footprint offset, footprint, and the disparity offset of the roof. Experiments on this dataset demonstrate that DualRecon achieves more accurate reconstruction results than existing methods when performing 3D building reconstruction from dual-view remote sensing imagery. Our data and code will be made publicly available.
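
The key geometric insight, that the rooftop's disparity between two off-nadir views encodes building height more robustly than the shadow-prone roof-to-footprint offset, can be illustrated with a deliberately simplified closed form (assuming both views share the projection azimuth, an assumption the learned pipeline does not require):

```python
import math

def height_from_disparity(disparity_m: float, off_nadir_1_deg: float, off_nadir_2_deg: float) -> float:
    """Building height from the rooftop disparity between two views.
    Simplifying assumption: both views look along the same azimuth, so the h*tan(theta)
    displacements are collinear and the disparity is their difference."""
    t1 = math.tan(math.radians(off_nadir_1_deg))
    t2 = math.tan(math.radians(off_nadir_2_deg))
    return abs(disparity_m) / abs(t1 - t2)

# a 12 m rooftop shift between a 25-degree and a 5-degree acquisition
print(round(height_from_disparity(12.0, 25.0, 5.0), 1), "m")
```
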

19 pages, 2710 KB  
Article
Internet of Things-Based Electromagnetic Compatibility Monitoring (IEMCM) Architecture for Biomedical Devices
by Chiedza Hwata, Gerard Rushingabigwi, Omar Gatera, Didacienne Mukalinyigira, Celestin Twizere, Bolaji N. Thomas and Diego H. Peluffo-Ordóñez
Appl. Sci. 2025, 15(22), 12337; https://doi.org/10.3390/app152212337 - 20 Nov 2025
Cited by 1 | Viewed by 390
Abstract
Electromagnetic compatibility is the capability of electrical and electronic equipment to function properly around devices radiating electromagnetic energy, without mutual disturbance. Hospital environments contain numerous devices operating simultaneously and sharing resources. Undetected electromagnetic interference can cause medical device malfunctions, exposing patients and staff to risk. Traditional monitoring is time-consuming and relies on expert interpretation. An Internet of Things-enabled embedded system architecture for remote and real-time monitoring of electromagnetic fields from medical devices is proposed. It integrates frequency probes, a Raspberry Pi 4, and a communication module. A three-month study conducted at Muhima District Hospital, Kigali, Rwanda, demonstrated the system's effectiveness in monitoring electromagnetic field levels and transmitting them to the cloud. The signals were benchmarked against International Electrotechnical Commission and Rwanda Standards Board standards. Alerts are triggered when thresholds are exceeded, with results plotted on web and mobile interfaces. Emissions were highest at noon, when the equipment was most active, and lower after 1:30 PM, indicating reduced activity. The recorded electric-field statistics for the sample include a mean of 1.0028, a minimum of 0.7228, and a maximum of 1.3515. Among the five filters evaluated, the Savitzky–Golay filter performed best, with an MSE of 0.235 and an SNR of 9.308. A 412 ms average latency and 24 h operation were achieved, offering a portable solution for hospital safety and equipment optimization.
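
Since the study singles out the Savitzky–Golay filter as the best of the five evaluated, a minimal sketch of that smoothing step plus a threshold alert is shown below using SciPy; the window length, polynomial order, field units and threshold value are illustrative assumptions, not the deployed configuration.

```python
import numpy as np
from scipy.signal import savgol_filter

FIELD_THRESHOLD = 1.3   # illustrative limit; the paper benchmarks against IEC/RSB standards

def smooth_and_alert(samples: np.ndarray):
    """Savitzky-Golay smoothing of an electric-field stream plus a simple threshold alert,
    mirroring the filtering and alerting steps described in the abstract."""
    smoothed = savgol_filter(samples, window_length=11, polyorder=3)
    mse = float(np.mean((samples - smoothed) ** 2))
    snr_db = 10 * np.log10(np.mean(smoothed ** 2) / max(mse, 1e-12))
    alerts = np.flatnonzero(smoothed > FIELD_THRESHOLD)
    return smoothed, mse, snr_db, alerts

rng = np.random.default_rng(1)
field = 1.0 + 0.1 * np.sin(np.linspace(0, 6, 200)) + rng.normal(0, 0.05, 200)
_, mse, snr, alerts = smooth_and_alert(field)
print(f"MSE={mse:.4f}  SNR={snr:.1f} dB  alert samples={alerts.size}")
```
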

31 pages, 6735 KB  
Article
Comparison of Vegetation Indices from Sentinel-2 on Table Grape Plastic-Covered Vineyards: Utilisation of Spectral Correction and Correlation with Yield
by Giuseppe Roselli, Giovanni Gentilesco, Antonio Serra and Antonio Coletta
Horticulturae 2025, 11(11), 1385; https://doi.org/10.3390/horticulturae11111385 - 17 Nov 2025
Viewed by 435
Abstract
Climate change represents a critical challenge for viticulture worldwide, primarily through increased heat stress, more frequent and severe drought periods, and unseasonal rainfall events. There is increasing evidence of its negative effects on both thermal regimes, potentially leading to accelerated phenology and unbalanced sugar-to-acid ratios, and hydric regimes, causing water stress that impacts berry development and final yield. The use of plastic covering in vineyards is a widespread technique, particularly in regions with high climatic variability such as the Mediterranean Basin (e.g., Southern Italy, Spain, Greece), aimed at protecting both vegetation and grapes from external factors such as hail, heavy rainfall, wind, and extreme solar radiation, which can cause physical damage, promote fungal diseases, and lead to berry sunburn. This study explores the impact of six distinct commercial plastic films, with varying optical properties, on the retrieval and accuracy of vegetation indices derived from Sentinel-2 imagery in a mid-season table grape vineyard (Autumn Crisp®) in Southern Italy during the 2024 growing season. Laboratory spectroradiometric analyses were conducted to measure film-specific transmittance and reflectance factors from 200 to 1500 nm, enabling the development of a first-order linear spectral correction model applied to Sentinel-2 imagery. Vegetation indices (NDVI, CVI, GNDVI, LWCI) were corrected for plastic interference and analysed through univariate statistics and Principal Component Analysis. Results showed that after applying the spectral correction model, film T2 displayed the highest NDVI value (0.73). Films T3 and T4, characterised by high visible-light transmittance (>39%) and low reflectance (<11% in the Red/NIR), resulted in lower vine vigour and photosynthetic activity, with mean corrected NDVI values of 0.70, though still significantly higher than those of films T1 (0.65) and T5 (0.67). Films T6 and T1 were associated with greater water conservation, as indicated by the highest mean LWCI values (T6: 0.59; T1: 0.52), but lower chlorophyll-related signals, evidenced by the lowest mean CVI values (T6: 1.31; T1: 1.74) and GNDVI values (T6: 0.46; T1: 0.48). Among the corrected indices, NDVI demonstrated strong positive correlations with yield (r = 0.900) and total soluble solids per vine (TSS*vine, in kg), a key quality parameter representing the total sugar yield (r = 0.883), supporting its suitability as an index of vine productivity and fruit quality. The proposed correction method significantly improves the reliability of remote sensing in covered vineyards, as demonstrated by the strong correlations between corrected NDVI and yield (R2 = 0.810) and sugar content (R2 = 0.779), relationships that were not analysable with the uncorrected data. It may also guide film selection, opting for high-transmittance films (e.g., T2, T3) for yield or water-conserving films (e.g., T6) for stress mitigation, and irrigation strategies, such as using the corrected LWCI for precision scheduling. Future efforts should include angular effects and ground-truth validation to enhance correction accuracy and operational relevance.
(This article belongs to the Section Fruit Production Systems)
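
The paper's first-order linear spectral correction uses lab-measured film transmittance and reflectance to recover canopy reflectance before computing indices such as NDVI. One simple first-order form of such a correction is sketched below; the functional form, coefficients and band values are assumptions for illustration, not the fitted model from the study.

```python
def correct_band(observed: float, film_reflectance: float, film_transmittance: float) -> float:
    """First-order correction of a band measured through a plastic film:
    observed ~ rho_film + tau_film^2 * canopy (one pass down and up, no multiple scattering).
    The coefficients and functional form here are illustrative, not the paper's fitted model."""
    return (observed - film_reflectance) / film_transmittance**2

red_obs, nir_obs = 0.08, 0.42             # apparent Sentinel-2 B4 / B8 reflectance under film
red = correct_band(red_obs, 0.06, 0.80)   # film rho and tau as measured in the lab (made-up values)
nir = correct_band(nir_obs, 0.05, 0.85)
ndvi = (nir - red) / (nir + red)
print(round(ndvi, 2))
```
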

23 pages, 4775 KB  
Article
Standardized Dataset and Image-Subspace-Based Method for Strip-Mode Synthetic Aperture Radar Block-Type Radio Frequency Interference Suppression
by Fuping Fang, Sinong Quan, Shiqi Xing, Dahai Dai and Yuanrong Tian
Remote Sens. 2025, 17(22), 3688; https://doi.org/10.3390/rs17223688 - 11 Nov 2025
Viewed by 527
Abstract
Synthetic aperture radar (SAR), as a high-resolution microwave remote sensing imaging technology, plays an indispensable role in both military and civilian applications. However, in complex electromagnetic countermeasure environments, radio frequency interference (RFI) severely degrades SAR imaging quality. SAR anti-interference, as a countermeasure method, therefore has significant practical value. Although deep learning-based anti-interference techniques have demonstrated notable advantages, two key issues remain unresolved: (1) strong coupling between interference suppression and SAR imaging, since most existing methods rely on raw echo data, leading to a complex processing pipeline and error accumulation; and (2) scarcity of labeled data, as the lack of high-quality labeled data severely restricts model deployment. To address these challenges, this work constructs a standardized dataset and conducts comprehensive validation experiments based on it. The main contributions are as follows. Firstly, this work establishes a mathematical model for block-type interference, laying a theoretical foundation for the subsequent construction of RFI-polluted data. Secondly, this work constructs a block-type interference dataset, which includes labeled data constructed by our laboratory and open-source data from the Sentinel-1 satellites, providing reliable data support for deep learning. Thirdly, this work proposes an image subspace-based interference suppression method, which eliminates the dependence on raw echo data and significantly simplifies the processing pipeline. Finally, this work provides a fair comparison of current works, summarizes existing problems, and looks forward to possible future research directions.
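
The proposed method works in the image domain rather than on raw echoes. The generic subspace operation behind such approaches, projecting out the few dominant singular components in which block-type RFI concentrates, can be sketched as follows; the rank choice and the synthetic interference are assumptions, and the paper's actual method is considerably more refined.

```python
import numpy as np

def suppress_block_rfi(image: np.ndarray, rfi_rank: int = 1) -> np.ndarray:
    """Generic image-subspace idea: block-type RFI tends to concentrate in a few dominant
    singular components of the image patch, so those components are projected out.
    Note: some scene energy is inevitably removed along with them."""
    u, s, vt = np.linalg.svd(image, full_matrices=False)
    s[:rfi_rank] = 0.0                      # discard the interference-dominated subspace
    return u @ np.diag(s) @ vt

rng = np.random.default_rng(2)
scene = rng.rayleigh(1.0, (256, 256))                             # speckle-like background
rfi = 20.0 * np.outer(np.ones(256), rng.standard_normal(256))     # rank-1 block artefact
cleaned = suppress_block_rfi(scene + rfi, rfi_rank=1)
print("mean absolute residual vs clean scene:", np.abs(cleaned - scene).mean())
```
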

25 pages, 183005 KB  
Article
Optimizing Cotton Cultivation Through Variable Rate Seeding: An Enabling Methodology
by João de Mendonça Naime, Ivani de Oliveira Negrão Lopes, Eduardo Antonio Speranza, Carlos Manoel Pedro Vaz, Júlio Cezar Franchini dos Santos, Ricardo Yassushi Inamasu, Sérgio das Chagas, Mathias Xavier Schelp and Leonardo Vecchi
AgriEngineering 2025, 7(11), 382; https://doi.org/10.3390/agriengineering7110382 - 11 Nov 2025
Viewed by 375
Abstract
This study develops a practical, on-farm methodology for optimizing cotton cultivation through Variable Rate Seeding (VRS), utilizing existing farm data and remote sensing, while minimizing operational interference. The methodology involved an experimental design across five rainfed cotton fields on a Brazilian commercial farm, testing four seeding rates (90%, 100%, 110%, 120%) within grid cells using a 4 × 4 Latin square design. Management zones (MZs) were defined using existing soil clay content and elevation data, augmented by twelve vegetation indices from Sentinel-2 satellite imagery and K-Means clustering. Statistical analysis evaluated plant population density's effect on cotton yield and its association with MZs. For the 2023/2024 season, results showed no positive yield response to increasing plant density above field averages, with negative responses in many plots (e.g., 84% in Field A), suggesting potential gains from reducing rates. The association between population density effect classes and MZs was highly significant with moderate to relatively strong Cramer's V values (up to 0.47), indicating MZs effectively distinguished response areas. Lower clay content consistently correlated with yield losses at higher densities. This work empowers farm managers to conduct their own site-specific experimentation for optimal seed populations, enhancing precision agriculture and resource efficiency.
(This article belongs to the Section Sensors Technology and Precision Agriculture)
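
The association between density-response classes and management zones is quantified with Cramer's V. The calculation from a contingency table is short enough to show; the counts below are made up purely to demonstrate the arithmetic and do not come from the study.

```python
import numpy as np
from scipy.stats import chi2_contingency

def cramers_v(table: np.ndarray) -> float:
    """Cramer's V for the association between yield-response classes and management zones."""
    chi2, _, _, _ = chi2_contingency(table)
    n = table.sum()
    k = min(table.shape) - 1
    return float(np.sqrt(chi2 / (n * k)))

# rows: response to higher density (negative / neutral / positive); columns: management zones
table = np.array([[42, 18,  6],
                  [10, 25, 12],
                  [ 3,  9, 20]])
print(round(cramers_v(table), 2))
```
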

16 pages, 3567 KB  
Article
DCSC Mamba: A Novel Network for Building Change Detection with Dense Cross-Fusion and Spatial Compensation
by Rui Xu, Renzhong Mao, Yihui Yang, Weiping Zhang, Yiteng Lin and Yining Zhang
Information 2025, 16(11), 975; https://doi.org/10.3390/info16110975 - 11 Nov 2025
Viewed by 347
Abstract
Change detection in remote sensing imagery plays a vital role in urban planning, resource monitoring, and disaster assessment. However, current methods, including CNN-based approaches and Transformer-based detectors, still suffer from false change interference, irregular regional variations, and the loss of fine-grained details. To address these issues, this paper proposes a novel building change detection network named Dense Cross-Fusion and Spatial Compensation Mamba (DCSC Mamba). The network adopts a Siamese encoder–decoder architecture, where dense cross-scale fusion is employed to achieve multi-granularity integration of cross-modal features, thereby enhancing the overall representation of multi-scale information. Furthermore, a spatial compensation module is introduced to effectively capture both local details and global contextual dependencies, improving the recognition of complex change patterns. By integrating dense cross-fusion with spatial compensation, the proposed network exhibits a stronger capability in extracting complex change features. Experimental results on the LEVIR-CD and SYSU-CD datasets demonstrate that DCSC Mamba achieves superior performance in detail preservation and robustness against interference. Specifically, it achieves F1 scores of 90.29% and 79.62%, and IoU scores of 82.30% and 66.13% on the two datasets, respectively, validating the effectiveness and robustness of the proposed method in challenging change detection scenarios.
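
Dense cross-scale fusion, in essence, brings every encoder scale to a common resolution before mixing, so that fine boundaries and broader context inform the same change map. A bare-bones PyTorch sketch of that idea (not the DCSC Mamba block itself) is:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DenseCrossScaleFusion(nn.Module):
    """Resize every encoder scale to the finest resolution and mix them with a 1x1 conv."""
    def __init__(self, channels_per_scale, out_channels):
        super().__init__()
        self.mix = nn.Conv2d(sum(channels_per_scale), out_channels, kernel_size=1)

    def forward(self, feats):
        target = feats[0].shape[-2:]                     # finest spatial size
        resized = [F.interpolate(f, size=target, mode="bilinear", align_corners=False)
                   for f in feats]
        return self.mix(torch.cat(resized, dim=1))

feats = [torch.randn(1, 64, 128, 128), torch.randn(1, 128, 64, 64), torch.randn(1, 256, 32, 32)]
print(DenseCrossScaleFusion([64, 128, 256], 128)(feats).shape)  # torch.Size([1, 128, 128, 128])
```
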

33 pages, 11140 KB  
Article
OWTDNet: A Novel CNN-Mamba Fusion Network for Offshore Wind Turbine Detection in High-Resolution Remote Sensing Images
by Pengcheng Sha, Sujie Lu, Zongjie Xu, Jianhai Yu, Lei Li, Yibo Zou and Linlin Zhao
J. Mar. Sci. Eng. 2025, 13(11), 2124; https://doi.org/10.3390/jmse13112124 - 10 Nov 2025
Viewed by 415
Abstract
Real-time monitoring of offshore wind turbines (OWTs) through satellite remote sensing imagery is considered an essential process for large-scale infrastructure surveillance in ocean engineering. Current detection systems, however, are constrained by persistent technical limitations, including prohibitive deployment costs, insufficient discriminative power for learned features, and susceptibility to environmental interference. To address these challenges, a dual-branch architecture named OWTDNet is proposed, which integrates global contextual modeling via State Space Models (SSMs) with CNN-based local feature extraction for high-resolution OWT detection. The primary branch utilizes a Mamba-structured encoder with linear computational complexity to establish long-range spatial dependencies, while an auxiliary Blurring-MobileNetv3 (B-Mv3) branch is designed to compensate for the local feature extraction deficiencies inherent in SSMs. Additionally, a novel Feature Alignment Module (FAM) is introduced to systematically coordinate cross-modal feature fusion between the Mamba and CNN branches through channel-wise recalibration and position-aware alignment mechanisms. This module not only enables complementary feature integration but also enhances turbine-specific responses through attention-driven feature modulation. Comprehensive experimental validation demonstrated the superiority of the proposed framework, achieving a mean average precision (AP) of 47.1% on 40,000 × 40,000-pixel satellite imagery, while maintaining practical computational efficiency (127.7 s per image processing time).
(This article belongs to the Section Ocean Engineering)
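
Running a detector over a 40,000 × 40,000-pixel scene implies tiling with overlap and merging detections afterwards. A small helper for generating such tile windows is sketched below; the tile size and overlap are arbitrary illustrative choices, not the paper's settings.

```python
def iter_tiles(height: int, width: int, tile: int = 1024, overlap: int = 128):
    """Yield (top, left, bottom, right) windows for running a detector over very large
    satellite scenes, e.g. the 40,000 x 40,000-pixel imagery mentioned in the abstract."""
    step = tile - overlap
    for top in range(0, max(height - overlap, 1), step):
        for left in range(0, max(width - overlap, 1), step):
            bottom = min(top + tile, height)
            right = min(left + tile, width)
            yield top, left, bottom, right

windows = list(iter_tiles(40_000, 40_000))
print(len(windows), windows[0], windows[-1])
```
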
