Search Results (1,287)

Search Parameters:
Keywords = spectral fusion

17 pages, 4072 KB  
Article
MKF-NET: KAN-Enhanced Vision Transformer for Remote Sensing Image Segmentation
by Ning Ye, Yi-Han Xu, Wen Zhou, Gang Yu and Ding Zhou
Appl. Sci. 2025, 15(20), 10905; https://doi.org/10.3390/app152010905 - 10 Oct 2025
Abstract
Remote sensing images, which capture surface information from aerial or satellite platforms, are of great significance in fields such as environmental monitoring, urban planning, agricultural management, and disaster response. However, because ground cover types are complex and diverse and their spectral characteristics vary widely, high-quality semantic segmentation of remote sensing images still faces many challenges, such as blurred target boundaries and difficulty in recognizing small-scale objects. To address these issues, this study proposes a novel deep learning model, MKF-NET. By fusing KAN convolution with the Vision Transformer (ViT) and combining multi-scale feature extraction with a dense connection mechanism, the model significantly improves semantic segmentation performance on remote sensing images. Experiments were conducted on the LoveDA dataset to systematically evaluate the segmentation performance of MKF-NET against several existing deep learning models (U-Net, UNet++, DeepLabv3+, TransUNet, and U-KAN). The results show that MKF-NET performs best on many indicators, achieving a pixel precision of 78.53%, a pixel accuracy of 79.19%, a mean class accuracy of 76.50%, and a mean intersection over union (mIoU) of 64.31%, providing efficient technical support for remote sensing image analysis. Full article
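
The indicators reported above (pixel accuracy, mean class accuracy, mean IoU) all derive from a class confusion matrix. The NumPy sketch below shows one standard way to compute them from predicted and reference label maps; it is a generic illustration with made-up labels rather than the authors' evaluation code, and the seven-class setting is only an assumption matching LoveDA.

    import numpy as np

    def segmentation_metrics(pred, target, num_classes):
        """Pixel accuracy, mean class accuracy, and mean IoU from a confusion matrix."""
        mask = (target >= 0) & (target < num_classes)
        cm = np.bincount(num_classes * target[mask] + pred[mask],
                         minlength=num_classes ** 2).reshape(num_classes, num_classes)
        tp = np.diag(cm).astype(float)
        pixel_acc = tp.sum() / cm.sum()
        class_acc = tp / (cm.sum(axis=1) + 1e-12)                  # per-class recall
        iou = tp / (cm.sum(axis=1) + cm.sum(axis=0) - tp + 1e-12)  # per-class IoU
        return pixel_acc, class_acc.mean(), iou.mean()

    # Random label maps stand in for network output and ground truth.
    pred = np.random.default_rng(0).integers(0, 7, size=(256, 256))
    target = np.random.default_rng(1).integers(0, 7, size=(256, 256))
    print(segmentation_metrics(pred, target, num_classes=7))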

21 pages, 1768 KB  
Review
Evolution of Deep Learning Approaches in UAV-Based Crop Leaf Disease Detection: A Web of Science Review
by Dorijan Radočaj, Petra Radočaj, Ivan Plaščak and Mladen Jurišić
Appl. Sci. 2025, 15(19), 10778; https://doi.org/10.3390/app151910778 - 7 Oct 2025
Viewed by 200
Abstract
The integration of unmanned aerial vehicles (UAVs) and deep learning (DL) has significantly advanced crop disease detection by enabling scalable, high-resolution, and near real-time monitoring within precision agriculture. This systematic review analyzes peer-reviewed literature indexed in the Web of Science Core Collection as articles or proceeding papers through 2024. The main selection criterion was combining “unmanned aerial vehicle*” OR “UAV” OR “drone” with “deep learning”, “agriculture” and “leaf disease” OR “crop disease”. Results show a marked surge in publications after 2019, with China, the United States, and India leading research contributions. Multirotor UAVs equipped with RGB sensors are predominantly used due to their affordability and spatial resolution, while hyperspectral imaging is gaining traction for its enhanced spectral diagnostic capability. Convolutional neural networks (CNNs), along with emerging transformer-based and hybrid models, demonstrate high detection performance, often achieving F1-scores above 95%. However, critical challenges persist, including limited annotated datasets for rare diseases, high computational costs of hyperspectral data processing, and the absence of standardized evaluation frameworks. Addressing these issues will require the development of lightweight DL architectures optimized for edge computing, improved multimodal data fusion techniques, and the creation of publicly available, annotated benchmark datasets. Advancements in these areas are vital for translating current research into practical, scalable solutions that support sustainable and data-driven agricultural practices worldwide. Full article

24 pages, 73520 KB  
Article
2C-Net: A Novel Spatiotemporal Dual-Channel Network for Soil Organic Matter Prediction Using Multi-Temporal Remote Sensing and Environmental Covariates
by Jiale Geng, Chong Luo, Jun Lu, Depiao Kong, Xue Li and Huanjun Liu
Remote Sens. 2025, 17(19), 3358; https://doi.org/10.3390/rs17193358 - 3 Oct 2025
Viewed by 245
Abstract
Soil organic matter (SOM) is essential for ecosystem health and agricultural productivity. Accurate prediction of SOM content is critical for modern agricultural management and sustainable soil use. Existing digital soil mapping (DSM) models, when processing temporal data, primarily focus on modeling the changes in input data across successive time steps. However, they do not adequately model the relationships among different input variables, which hinders the capture of complex data patterns and limits prediction accuracy. To address this problem, this paper proposes a novel deep learning model, the 2-Channel Network (2C-Net), which leverages sequential multi-temporal remote sensing images to improve SOM prediction. The network separates the input data into temporal and spatial data, processing them through independent temporal and spatial channels. The temporal data comprise multi-temporal Sentinel-2 spectral reflectance, while the spatial data consist of environmental covariates including climate and topography. The Multi-sequence Feature Fusion Module (MFFM) is proposed to globally model spectral data across multiple bands and time steps, and the Diverse Convolutional Architecture (DCA) extracts spatial features from the environmental data. Experimental results show that 2C-Net outperforms the baseline model (CNN-LSTM) and mainstream machine learning models for DSM, with R2 = 0.524, RMSE = 0.884 (%), MAE = 0.581 (%), and MSE = 0.781 (%)². Furthermore, this study demonstrates the importance of sequential spectral data for SOM inversion and concludes that, for the SOM inversion task, the bare soil period after tilling is a more important time window than other bare soil periods. The 2C-Net model effectively captures spatiotemporal features, offering high-accuracy SOM predictions and supporting future DSM and soil management. Full article
(This article belongs to the Special Issue Remote Sensing in Soil Organic Carbon Dynamics)
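
The dual-channel layout described above, with a temporal channel for multi-temporal spectral reflectance and a spatial channel for environmental covariates fused before regression, can be sketched in a few lines of PyTorch. This is a minimal stand-in rather than the paper's MFFM/DCA design: the LSTM temporal branch, the small CNN spatial branch, and all layer sizes are assumptions.

    import torch
    import torch.nn as nn

    class TwoChannelSOMRegressor(nn.Module):
        """Two independent channels (temporal spectra, spatial covariates) fused for SOM regression."""
        def __init__(self, n_bands=10, n_covariates=8, hidden=64):
            super().__init__()
            # Temporal channel: an LSTM over the per-date spectral vectors.
            self.temporal = nn.LSTM(input_size=n_bands, hidden_size=hidden, batch_first=True)
            # Spatial channel: a small CNN over a patch of environmental covariates.
            self.spatial = nn.Sequential(
                nn.Conv2d(n_covariates, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            )
            self.head = nn.Sequential(nn.Linear(hidden + 32, 64), nn.ReLU(), nn.Linear(64, 1))

        def forward(self, spectra, covariates):
            # spectra: (batch, time_steps, n_bands); covariates: (batch, n_covariates, H, W)
            _, (h, _) = self.temporal(spectra)
            fused = torch.cat([h[-1], self.spatial(covariates)], dim=1)
            return self.head(fused).squeeze(-1)   # predicted SOM content

    model = TwoChannelSOMRegressor()
    som = model(torch.randn(4, 6, 10), torch.randn(4, 8, 9, 9))   # 6 dates, 10 bands, 9x9 patches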

36 pages, 4484 KB  
Review
Research Progress of Deep Learning-Based Artificial Intelligence Technology in Pest and Disease Detection and Control
by Yu Wu, Li Chen, Ning Yang and Zongbao Sun
Agriculture 2025, 15(19), 2077; https://doi.org/10.3390/agriculture15192077 - 3 Oct 2025
Viewed by 251
Abstract
With the rapid advancement of artificial intelligence technology, the widespread application of deep learning in computer vision is driving the transformation of agricultural pest detection and control toward greater intelligence and precision. This paper systematically reviews the evolution of agricultural pest detection and control technologies, with a special focus on the effectiveness of deep-learning-based image recognition methods for pest identification, as well as their integrated applications in drone-based remote sensing, spectral imaging, and Internet of Things sensor systems. Through multimodal data fusion and dynamic prediction, artificial intelligence has significantly improved the response times and accuracy of pest monitoring. On the control side, the development of intelligent prediction and early-warning systems, precision pesticide-application technologies, and smart equipment has advanced the goals of eco-friendly pest management and ecological regulation. However, challenges such as high data-annotation costs, limited model generalization, and constrained computing power on edge devices remain. Moving forward, further exploration of cutting-edge approaches such as self-supervised learning, federated learning, and digital twins will be essential to build more efficient and reliable intelligent control systems, providing robust technical support for sustainable agricultural development. Full article

19 pages, 7222 KB  
Article
Multi-Channel Spectro-Temporal Representations for Speech-Based Parkinson’s Disease Detection
by Hadi Sedigh Malekroodi, Nuwan Madusanka, Byeong-il Lee and Myunggi Yi
J. Imaging 2025, 11(10), 341; https://doi.org/10.3390/jimaging11100341 - 1 Oct 2025
Viewed by 190
Abstract
Early, non-invasive detection of Parkinson’s Disease (PD) using speech analysis offers promise for scalable screening. In this work, we propose a multi-channel spectro-temporal deep-learning approach for PD detection from sentence-level speech, a clinically relevant yet underexplored modality. We extract and fuse three complementary time–frequency representations—mel spectrogram, constant-Q transform (CQT), and gammatone spectrogram—into a three-channel input analogous to an RGB image. This fused representation is evaluated across CNNs (ResNet, DenseNet, and EfficientNet) and Vision Transformer using the PC-GITA dataset, under 10-fold subject-independent cross-validation for robust assessment. Results showed that fusion consistently improves performance over single representations across architectures. EfficientNet-B2 achieves the highest accuracy (84.39% ± 5.19%) and F1-score (84.35% ± 5.52%), outperforming recent methods using handcrafted features or pretrained models (e.g., Wav2Vec2.0, HuBERT) on the same task and dataset. Performance varies with sentence type, with emotionally salient and prosodically emphasized utterances yielding higher AUC, suggesting that richer prosody enhances discriminability. Our findings indicate that multi-channel fusion enhances sensitivity to subtle speech impairments in PD by integrating complementary spectral information. Our approach implies that multi-channel fusion could enhance the detection of discriminative acoustic biomarkers, potentially offering a more robust and effective framework for speech-based PD screening, though further validation is needed before clinical application. Full article
(This article belongs to the Special Issue Celebrating the 10th Anniversary of the Journal of Imaging)
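
The core preprocessing step, fusing three time-frequency representations into a three-channel, RGB-like input, can be sketched with librosa and NumPy. This is a hedged illustration rather than the authors' pipeline: the sampling rate and output size are assumptions, and because librosa has no built-in gammatone spectrogram, that channel is left as a clearly marked placeholder.

    import numpy as np
    import librosa
    from skimage.transform import resize

    def three_channel_input(path, size=(224, 224)):
        """Stack mel, CQT, and (placeholder) gammatone spectrograms into one 3-channel 'image'."""
        y, sr = librosa.load(path, sr=16000)   # sampling rate is an assumption
        mel = librosa.power_to_db(librosa.feature.melspectrogram(y=y, sr=sr, n_mels=128))
        cqt = librosa.amplitude_to_db(np.abs(librosa.cqt(y=y, sr=sr)))
        # Gammatone channel: substitute a dedicated gammatone implementation here;
        # the mel spectrogram is used only as a placeholder in this sketch.
        gamma = mel.copy()
        chans = [resize(s, size, anti_aliasing=True) for s in (mel, cqt, gamma)]
        # Per-channel min-max scaling so the channels are comparable, as with RGB images.
        chans = [(c - c.min()) / (c.max() - c.min() + 1e-8) for c in chans]
        return np.stack(chans, axis=-1)   # shape: (H, W, 3)

    # x = three_channel_input("sentence.wav")   # hypothetical file path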

23 pages, 10418 KB  
Article
Daily Water Mapping and Spatiotemporal Dynamics Analysis over the Tibetan Plateau
by Qi Feng, Kai Yu and Luyan Ji
Hydrology 2025, 12(10), 257; https://doi.org/10.3390/hydrology12100257 - 30 Sep 2025
Viewed by 261
Abstract
The Tibetan Plateau, known as the “Asian Water Tower”, contains thousands of lakes that are sensitive to climate variability and human activities. To investigate their long-term and short-term dynamics, we developed a daily surface-water mapping dataset covering the period from 2000 to 2024 based on MODIS daily reflectance time series (MOD09GQ/MYD09GQ and MOD09GA/MYD09GA). A hybrid methodology combining per-pixel spectral indices, superpixel segmentation, and fusion of Terra and Aqua results was applied, followed by temporal interpolation to produce cloud-free daily water maps. Validation against Landsat classifications and the 30 m global water dataset indicates an overall accuracy of 96.89% and a mean relative error below 9.1%, confirming the robustness of our dataset. Based on this dataset, we analyzed the spatiotemporal evolution of 1293 lakes (each at least 5 km² in area). Results show that approximately 87.7% of lakes expanded, with the fastest growth reaching +43.18 km²/yr, whereas 12.3% shrank, with the largest decrease being −5.91 km²/yr. Seasonal patterns reveal that most lakes reach their maximum extent in October and minimum extent in January. This study provides a long-term, cloud-free daily water mapping product for the Tibetan Plateau, which can serve as a valuable resource for future research on regional hydrology, ecosystem vulnerability, and climate–water interactions in high-altitude regions. Full article
(This article belongs to the Special Issue Advances in Cold Regions' Hydrology and Hydrogeology)
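
The per-pixel spectral-index stage of such a pipeline can be illustrated with a short NumPy sketch. The abstract does not state which indices were used, so the McFeeters NDWI and the zero threshold below are assumptions, not the paper's exact method.

    import numpy as np

    def ndwi_water_mask(green, nir, threshold=0.0):
        """Per-pixel water flag via NDWI = (green - NIR) / (green + NIR); values above the
        threshold are treated as water. Band choice and threshold are illustrative only."""
        green = green.astype(np.float64)
        nir = nir.astype(np.float64)
        ndwi = (green - nir) / (green + nir + 1e-10)   # avoid division by zero
        return ndwi > threshold

    # Example: small reflectance tiles (water reflects more green than NIR).
    green = np.array([[0.10, 0.12, 0.30], [0.11, 0.09, 0.28], [0.10, 0.10, 0.31]])
    nir   = np.array([[0.02, 0.03, 0.40], [0.02, 0.02, 0.38], [0.03, 0.02, 0.42]])
    print(ndwi_water_mask(green, nir))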

25 pages, 5189 KB  
Article
Day-Ahead Photovoltaic Station Power Prediction Driven by Weather Typing: A Collaborative Modelling Approach Based on Multi-Feature Fusion Spectral Clustering and DCS-NsT-BiLSTM
by Mao Yang, Sihan Guo, Jianfeng Che, Wei He, Kang Wu and Wei Xu
Electronics 2025, 14(19), 3836; https://doi.org/10.3390/electronics14193836 - 27 Sep 2025
Viewed by 196
Abstract
To address the challenge of effectively tracking weather-induced power fluctuation trends in day-ahead PV power forecasting, this paper proposes a joint forecasting framework oriented to weather classification. For the weather classification module, a spectral clustering method incorporating multivariate feature fusion-based evaluation is introduced to address the limitation that conventional clustering models fail to effectively identify power fluctuations caused by dynamic weather variations. Simultaneously, to tackle non-stationary fluctuations and local abrupt changes in PV power forecasting, a non-stationary Transformer-BiLSTM model optimised using the Differentiated Creative Search (DCS) algorithm (DCS-NsT-BiLSTM) is proposed. This model enables the co-optimisation of global and local features under diverse weather patterns. The proposed method takes into consideration the climatic typology of PV power plants, thereby overcoming the insensitivity of traditional clustering models to high-dimensional non-stationary data. Furthermore, the approach utilises the novel intelligent optimisation algorithm DCS to update the key hyperparameters of the forecasting model, which in turn enhances the accuracy of day-ahead PV power generation forecasting. Applied to a photovoltaic power station in Jilin Province, China, this method reduced the mean root mean square error by 4.63% across various weather conditions, effectively validating the proposed methodology. Full article
(This article belongs to the Section Industrial Electronics)
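
The weather-typing step, clustering days into weather classes from fused multivariate features before training class-specific forecasters, can be sketched with scikit-learn's SpectralClustering. The feature columns, the three-cluster setting, and the nearest-neighbour affinity below are assumptions, and the DCS-NsT-BiLSTM forecaster itself is not shown.

    import numpy as np
    from sklearn.cluster import SpectralClustering
    from sklearn.preprocessing import StandardScaler

    # Hypothetical daily feature matrix: one row per day, columns such as mean irradiance,
    # irradiance variance, cloud-cover index, temperature, and power-fluctuation measures.
    rng = np.random.default_rng(0)
    daily_features = rng.normal(size=(365, 6))

    X = StandardScaler().fit_transform(daily_features)
    weather_labels = SpectralClustering(
        n_clusters=3,                 # e.g. sunny / cloudy / overcast-rainy; count is an assumption
        affinity="nearest_neighbors",
        n_neighbors=10,
        assign_labels="kmeans",
        random_state=0,
    ).fit_predict(X)
    # Each day is assigned a weather type; a separate forecaster would then be trained per type.
    print(np.bincount(weather_labels))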

23 pages, 11596 KB  
Article
Combined Hyperspectral Imaging with Wavelet Domain Multivariate Feature Fusion Network for Bioactive Compound Prediction of Astragalus membranaceus var. mongholicus
by Suning She, Zhiyun Xiao and Yulong Zhou
Agriculture 2025, 15(19), 2009; https://doi.org/10.3390/agriculture15192009 - 25 Sep 2025
Viewed by 250
Abstract
The pharmacological quality of Astragalus membranaceus var. mongholicus (AMM) is determined by its bioactive compounds, and developing a rapid prediction method is essential for quality assessment. This study proposes a predictive model for AMM bioactive compounds using hyperspectral imaging (HSI) and wavelet domain multivariate features. The model employs techniques such as the first-order derivative (FD) algorithm and the continuum removal (CR) algorithm for initial feature extraction. Unlike existing models that primarily focus on a single-feature extraction algorithm, the proposed tree-structured feature extraction module based on discrete wavelet transform and one-dimensional convolutional neural network (1D-CNN) integrates FD and CR, enabling robust multivariate feature extraction. Subsequently, the multivariate feature cross-fusion module is introduced to implement multivariate feature interaction, facilitating mutual enhancement between high- and low-frequency features through hierarchical recombination. Additionally, a multi-objective prediction mechanism is proposed to simultaneously predict the contents of flavonoids, saponins, and polysaccharides in AMM, effectively leveraging the enhanced, recombined spectral features. During testing, the model achieved excellent predictive performance with R2 values of 0.981 for flavonoids, 0.992 for saponins, and 0.992 for polysaccharides. The corresponding RMSE values were 0.37, 0.04, and 0.86; RPD values reached 7.30, 10.97, and 11.16; while MAE values were 0.14, 0.02, and 0.38, respectively. These results demonstrate that integrating multivariate features extracted through diverse methods with 1D-CNN enables efficient prediction of AMM bioactive compounds using HSI. Full article
(This article belongs to the Section Artificial Intelligence and Digital Agriculture)
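
The wavelet-domain feature idea, taking the derivative of a spectrum and splitting it into low- and high-frequency coefficient sets for a 1D-CNN, can be sketched with NumPy and PyWavelets. This is a simplified illustration: continuum removal, the tree-structured module, and the cross-fusion stage are omitted, and the db4 wavelet with three decomposition levels is an assumption.

    import numpy as np
    import pywt

    def wavelet_domain_features(spectrum, wavelet="db4", level=3):
        """First-order derivative of a reflectance spectrum, then a discrete wavelet
        decomposition into low- and high-frequency coefficients a 1D-CNN could consume."""
        fd = np.gradient(spectrum)                       # first-order derivative (FD)
        coeffs = pywt.wavedec(fd, wavelet, level=level)  # [approx, detail_L, ..., detail_1]
        low_freq, high_freq = coeffs[0], np.concatenate(coeffs[1:])
        return low_freq, high_freq

    # Hypothetical 200-band hyperspectral pixel spectrum.
    spectrum = np.sin(np.linspace(0, 6, 200)) + 0.01 * np.random.default_rng(1).normal(size=200)
    low, high = wavelet_domain_features(spectrum)
    print(low.shape, high.shape)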

22 pages, 6045 KB  
Article
Early Warning of Anthracnose on Illicium verum Through the Synergistic Integration of Environmental and Remote Sensing Time Series Data
by Junji Li, Yuxin Zhao, Tianteng Zhang, Jiahui Du, Yucai Li, Ling Wu and Xiangnan Liu
Remote Sens. 2025, 17(19), 3294; https://doi.org/10.3390/rs17193294 - 25 Sep 2025
Viewed by 193
Abstract
Anthracnose on Illicium verum Hook.f. (I. verum) significantly affects the yield and quality of I. verum, and timely detection methods are urgently needed for early control. However, early warning is difficult because of two major challenges: the sparse availability of optical remote sensing observations caused by frequent cloud and rain interference, and the weak spectral responses caused by infestation during the early stages. This article proposes a framework for early warning of anthracnose on I. verum that combines high-frequency environmental (meteorological and topographical) data with Sentinel-2 remote sensing time-series data, using a Time-Aware Long Short-Term Memory (T-LSTM) network with an attention mechanism (At-T-LSTM). First, all available environmental and remote sensing data during the study period were analyzed to characterize early anthracnose outbreaks, and sensitive features were selected as the algorithm input. On this basis, to address the unequal temporal lengths of the environmental and remote sensing time series, the At-T-LSTM model incorporates a time-aware mechanism to capture intra-feature temporal dependencies, while a self-attention layer quantifies inter-feature interaction weights, enabling effective multi-source feature time-series fusion. The results show that the proposed framework achieves a spatial accuracy (F1-score) of 0.86 and a temporal accuracy of 83% in early-stage detection, demonstrating high reliability. By integrating remote sensing features with environmental drivers, this approach enables multi-feature collaborative modeling for the risk assessment and monitoring of I. verum anthracnose, effectively mitigating the impact of sparse observations and significantly improving the accuracy of early warnings. Full article
(This article belongs to the Special Issue Application of Remote Sensing in Agroforestry (Third Edition))

24 pages, 7350 KB  
Article
An Attention-Driven Multi-Scale Framework for Rotating-Machinery Fault Diagnosis Under Noisy Conditions
by Le-Min Xu, Pak Kin Wong, Zhi-Jiang Gao, Zhi-Xin Yang, Jing Zhao and Xian-Bo Wang
Electronics 2025, 14(19), 3805; https://doi.org/10.3390/electronics14193805 - 25 Sep 2025
Viewed by 383
Abstract
Failures of rotating machinery, such as bearings and gears, are a critical concern in industrial systems, leading to significant operational downtime and economic losses. A primary research challenge is achieving accurate fault diagnosis under complex industrial noise, where weak fault signatures are often masked by interference signals. This problem is particularly acute in demanding applications like offshore wind turbines, where harsh operating conditions and high maintenance costs necessitate highly robust and reliable diagnostic methods. To address this challenge, this paper proposes a novel Multi-Scale Domain Convolutional Attention Network (MSDCAN). The method integrates enhanced adaptive multi-domain feature extraction with a hybrid attention mechanism, combining information from the time, frequency, wavelet, and cyclic spectral domains with domain-specific attention weighting. A core innovation is the hybrid attention fusion mechanism, which enables cross-modal interaction between deep convolutional features and domain-specific features, enhanced by channel attention modules. The model’s effectiveness is validated on two public benchmark datasets for key rotating components. On the Case Western Reserve University (CWRU) bearing dataset, the MSDCAN achieves accuracies of 97.3% under clean conditions, 96.6% at 15 dB signal-to-noise ratio (SNR), 94.4% at 10 dB SNR, and a robust 85.5% under severe 5 dB SNR. To further validate its generalization, on the Xi’an Jiaotong University (XJTU) gear dataset, the model attains accuracies of 94.8% under clean conditions, 95.0% at 15 dB SNR, 83.6% at 10 dB SNR, and 63.8% at 5 dB SNR. These comprehensive results quantitatively validate the model’s superior diagnostic accuracy and exceptional noise robustness for rotating machinery, establishing a strong foundation for its application in reliable condition monitoring for complex systems, including wind turbines. Full article
(This article belongs to the Special Issue Advances in Condition Monitoring and Fault Diagnosis)
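
As a rough illustration of what multi-domain feature extraction means here, the sketch below computes a few handcrafted descriptors of a vibration segment in the time, frequency, and wavelet domains. It is only a toy example: the paper's learned MSDCAN features, the cyclic spectral domain, and the attention fusion are not reproduced, and the sampling rate and wavelet choice are assumptions.

    import numpy as np
    import pywt

    def multi_domain_features(signal, fs=12000):
        """Toy descriptors of a vibration segment in the time, frequency, and wavelet domains."""
        x = np.asarray(signal, dtype=float)
        time_feats = [x.mean(), x.std(), np.sqrt(np.mean(x**2)),             # mean, std, RMS
                      np.max(np.abs(x)) / (np.sqrt(np.mean(x**2)) + 1e-12)]  # crest factor
        spec = np.abs(np.fft.rfft(x))
        freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
        freq_feats = [freqs[np.argmax(spec[1:]) + 1]]                        # dominant frequency (skip DC)
        coeffs = pywt.wavedec(x, "db4", level=4)
        wavelet_feats = [float(np.sum(c**2)) for c in coeffs]                # per-sub-band energy
        return np.array(time_feats + freq_feats + wavelet_feats)

    segment = np.random.default_rng(0).normal(size=2048)   # stand-in for a bearing signal window
    print(multi_domain_features(segment).shape)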

21 pages, 3946 KB  
Article
Research on Non-Destructive Detection Method and Model Optimization of Nitrogen in Facility Lettuce Based on THz and NIR Hyperspectral
by Yixue Zhang, Jialiang Zheng, Jingbo Zhi, Jili Guo, Jin Hu, Wei Liu, Tiezhu Li and Xiaodong Zhang
Agronomy 2025, 15(10), 2261; https://doi.org/10.3390/agronomy15102261 - 24 Sep 2025
Viewed by 290
Abstract
Considering the growing demand for modern facility agriculture, it is essential to develop non-destructive technologies for assessing lettuce nutritional status. To overcome the limitations of traditional methods, which are destructive and time-consuming, this study proposes a multimodal non-destructive nitrogen detection method for lettuce based on multi-source imaging. The approach integrates terahertz time-domain spectroscopy (THz-TDS) and near-infrared hyperspectral imaging (NIR-HSI) to achieve rapid and non-invasive nitrogen detection. Spectral imaging data of lettuce samples under different nitrogen gradients (20–150%) were simultaneously acquired using a THz-TDS system (0.2–1.2 THz) and a NIR-HSI system (1000–1600 nm), with image segmentation applied to remove background interference. During data processing, Savitzky–Golay smoothing, MSC (for THz data), and SNV (for NIR data) were employed for combined preprocessing, and sample partitioning was performed using the SPXY algorithm. Subsequently, SCARS/iPLS/IRIV algorithms were applied for THz feature selection, while RF/SPA/ICO methods were used for NIR feature screening, followed by nitrogen content prediction modeling with LS-SVM and KELM. Furthermore, small-sample learning was utilized to fuse crop feature information from the two modalities, providing a more comprehensive and effective detection strategy. The results demonstrated that the THz-based model with SCARS-selected power spectrum features and an RBF-kernel LS-SVM achieved the best predictive performance (R2 = 0.96, RMSE = 0.20), while the NIR-based model with ICO features and an RBF-kernel LS-SVM achieved the highest accuracy (R2 = 0.967, RMSE = 0.193). The fusion model, combining SCARS and ICO features, exhibited the best overall performance, with training accuracy of 96.25% and prediction accuracy of 95.94%. This dual-spectral technique leverages the complementary responses of nitrogen in molecular vibrations (THz) and organic chemical bonds (NIR), significantly enhancing model performance. To the best of our knowledge, this is the first study to realize the synergistic application of THz and NIR spectroscopy in nitrogen detection of facility-grown lettuce, providing a high-precision, non-destructive solution for rapid crop nutrition diagnosis. Full article
(This article belongs to the Special Issue Crop Nutrition Diagnosis and Efficient Production)
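
The feature-level fusion idea, preprocessing each modality, concatenating the selected features, and fitting a kernel regressor, can be sketched as follows. Everything here is illustrative: the data are random stand-ins, only SNV preprocessing is shown (MSC and the SCARS/ICO selection steps are omitted), and scikit-learn's SVR is used as a stand-in for LS-SVM.

    import numpy as np
    from sklearn.svm import SVR

    def snv(spectra):
        """Standard normal variate: centre and scale each spectrum individually."""
        mu = spectra.mean(axis=1, keepdims=True)
        sd = spectra.std(axis=1, keepdims=True)
        return (spectra - mu) / (sd + 1e-12)

    # Hypothetical data: 60 lettuce samples, 150 NIR bands, 80 THz frequencies,
    # and measured nitrogen content (the real study uses SCARS/ICO-selected features).
    rng = np.random.default_rng(0)
    nir = snv(rng.normal(size=(60, 150)))
    thz = rng.normal(size=(60, 80))
    nitrogen = rng.uniform(2.0, 5.0, size=60)

    # Feature-level fusion: concatenate the two modalities per sample, then fit an
    # RBF-kernel regressor (SVR here as a stand-in for LS-SVM).
    fused = np.hstack([thz, nir])
    model = SVR(kernel="rbf", C=10.0, gamma="scale").fit(fused[:45], nitrogen[:45])
    print(model.predict(fused[45:])[:5])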

25 pages, 17562 KB  
Article
SGFNet: Redundancy-Reduced Spectral–Spatial Fusion Network for Hyperspectral Image Classification
by Boyu Wang, Chi Cao and Dexing Kong
Entropy 2025, 27(10), 995; https://doi.org/10.3390/e27100995 - 24 Sep 2025
Viewed by 323
Abstract
Hyperspectral image classification (HSIC) involves analyzing high-dimensional data that contain substantial spectral redundancy and spatial noise, which increases the entropy and uncertainty of feature representations. Reducing such redundancy while retaining informative content in spectral–spatial interactions remains a fundamental challenge for building efficient and accurate HSIC models. Traditional deep learning methods often rely on redundant modules or lack sufficient spectral–spatial coupling, limiting their ability to fully exploit the information content of hyperspectral data. To address these challenges, we propose SGFNet, which is a spectral-guided fusion network designed from an information–theoretic perspective to reduce feature redundancy and uncertainty. First, we designed a Spectral-Aware Filtering Module (SAFM) that suppresses noisy spectral components and reduces redundant entropy, encoding the raw pixel-wise spectrum into a compact spectral representation accessible to all encoder blocks. Second, we introduced a Spectral–Spatial Adaptive Fusion (SSAF) module, which strengthens spectral–spatial interactions and enhances the discriminative information in the fused features. Finally, we developed a Spectral Guidance Gated CNN (SGGC), which is a lightweight gated convolutional module that uses spectral guidance to more effectively extract spatial representations while avoiding unnecessary sequence modeling overhead. We conducted extensive experiments on four widely used hyperspectral benchmarks and compared SGFNet with eight state-of-the-art models. The results demonstrate that SGFNet consistently achieves superior performance across multiple metrics. From an information–theoretic perspective, SGFNet implicitly balances redundancy reduction and information preservation, providing an efficient and effective solution for HSIC. Full article
(This article belongs to the Section Multidisciplinary Applications)

27 pages, 5776 KB  
Article
R-SWTNet: A Context-Aware U-Net-Based Framework for Segmenting Rural Roads and Alleys in China with the SQVillages Dataset
by Jianing Wu, Junqi Yang, Xiaoyu Xu, Ying Zeng, Yan Cheng, Xiaodong Liu and Hong Zhang
Land 2025, 14(10), 1930; https://doi.org/10.3390/land14101930 - 23 Sep 2025
Viewed by 294
Abstract
Rural road networks are vital for rural development, yet narrow alleys and occluded segments remain underrepresented in digital maps due to irregular morphology, spectral ambiguity, and limited model generalization. Traditional segmentation models struggle to balance local detail preservation and long-range dependency modeling, prioritizing either local features or global context alone. Hypothesizing that integrating hierarchical local features and global context will mitigate these limitations, this study aims to accurately segment such rural roads by proposing R-SWTNet, a context-aware U-Net-based framework, and constructing the SQVillages dataset. R-SWTNet integrates ResNet34 for hierarchical feature extraction, Swin Transformer for long-range dependency modeling, ASPP for multi-scale context fusion, and CAM-Residual blocks for channel-wise attention. The SQVillages dataset, built from multi-source remote sensing imagery, includes 18 diverse villages with adaptive augmentation to mitigate class imbalance. Experimental results show R-SWTNet achieves a validation IoU of 54.88% and F1-score of 70.87%, outperforming U-Net and Swin-UNet, and with less overfitting than R-Net and D-LinkNet. Its lightweight variant supports edge deployment, enabling on-site road management. This work provides a data-driven tool for infrastructure planning under China’s Rural Revitalization Strategy, with potential scalability to global unstructured rural road scenes. Full article
(This article belongs to the Section Land Innovations – Data and Machine Learning)

28 pages, 14783 KB  
Article
HSSTN: A Hybrid Spectral–Structural Transformer Network for High-Fidelity Pansharpening
by Weijie Kang, Yuan Feng, Yao Ding, Hongbo Xiang, Xiaobo Liu and Yaoming Cai
Remote Sens. 2025, 17(19), 3271; https://doi.org/10.3390/rs17193271 - 23 Sep 2025
Viewed by 435
Abstract
Pansharpening fuses multispectral (MS) and panchromatic (PAN) remote sensing images to generate outputs with high spatial resolution and spectral fidelity. Nevertheless, conventional methods relying primarily on convolutional neural networks or unimodal fusion strategies frequently fail to bridge the sensor modality gap between MS and PAN data. Consequently, spectral distortion and spatial degradation often occur, limiting high-precision downstream applications. To address these issues, this work proposes a Hybrid Spectral–Structural Transformer Network (HSSTN) that enhances multi-level collaboration through comprehensive modelling of spectral–structural feature complementarity. Specifically, the HSSTN implements a three-tier fusion framework. First, an asymmetric dual-stream feature extractor employs a residual block with channel attention (RBCA) in the MS branch to strengthen spectral representation, while a Transformer architecture in the PAN branch extracts high-frequency spatial details, thereby reducing modality discrepancy at the input stage. Subsequently, a target-driven hierarchical fusion network utilises progressive crossmodal attention across scales, ranging from local textures to multi-scale structures, to enable efficient spectral–structural aggregation. Finally, a novel collaborative optimisation loss function preserves spectral integrity while enhancing structural details. Comprehensive experiments conducted on QuickBird, GaoFen-2, and WorldView-3 datasets demonstrate that HSSTN outperforms existing methods in both quantitative metrics and visual quality. Consequently, the resulting images exhibit sharper details and fewer spectral artefacts, showcasing significant advantages in high-fidelity remote sensing image fusion. Full article
(This article belongs to the Special Issue Artificial Intelligence in Hyperspectral Remote Sensing Data Analysis)
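
Spectral fidelity in pansharpening is commonly quantified with the spectral angle mapper (SAM), which measures the mean angle between corresponding pixel spectra of the fused and reference images. The sketch below is a generic implementation of that standard metric, not the paper's evaluation code.

    import numpy as np

    def spectral_angle_mapper(fused, reference):
        """Mean spectral angle in degrees between (H, W, bands) images; lower means
        less spectral distortion in the pansharpened result."""
        f = fused.reshape(-1, fused.shape[-1]).astype(float)
        r = reference.reshape(-1, reference.shape[-1]).astype(float)
        dot = np.sum(f * r, axis=1)
        norms = np.linalg.norm(f, axis=1) * np.linalg.norm(r, axis=1) + 1e-12
        angles = np.arccos(np.clip(dot / norms, -1.0, 1.0))
        return np.degrees(angles.mean())

    # Example with random 4-band patches standing in for reference and fused images.
    rng = np.random.default_rng(0)
    ref = rng.uniform(0, 1, size=(64, 64, 4))
    fus = ref + 0.02 * rng.normal(size=ref.shape)
    print(spectral_angle_mapper(fus, ref))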

28 pages, 3554 KB  
Review
Angle Effects in UAV Quantitative Remote Sensing: Research Progress, Challenges and Trends
by Weikang Zhang, Hongtao Cao, Dabin Ji, Dongqin You, Jianjun Wu, Hu Zhang, Yuquan Guo, Menghao Zhang and Yanmei Wang
Drones 2025, 9(10), 665; https://doi.org/10.3390/drones9100665 - 23 Sep 2025
Viewed by 451
Abstract
In recent years, unmanned aerial vehicle (UAV) quantitative remote sensing technology has demonstrated significant advantages in fields such as agricultural monitoring and ecological environment assessment. However, achieving the goal of quantification still faces major challenges due to the angle effect. This effect, caused by the bidirectional reflectance distribution function (BRDF) of surface targets, leads to significant spectral response variations at different observation angles, thereby affecting the inversion accuracy of physicochemical parameters, internal components, and three-dimensional structures of ground objects. This study systematically reviewed 48 relevant publications from 2000 to the present, retrieved from the Web of Science Core Collection through keyword combinations and screening criteria. The analysis revealed a significant increase in both the number of publications and citation frequency after 2017, with research spanning multiple disciplines such as remote sensing, agriculture, and environmental science. The paper comprehensively summarizes research progress on the angle effect in UAV quantitative remote sensing. Firstly, its underlying causes based on BRDF mechanisms and radiative transfer theory are explained. Secondly, multi-angle data acquisition techniques, processing methods, and their applications across various research fields are analyzed, considering the characteristics of UAV platforms and sensors. Finally, in view of the current challenges, such as insufficient fusion of multi-source data and poor model adaptability, it is proposed that in the future, methods such as deep learning algorithms and multi-platform collaborative observation need to be combined to promote theoretical innovation and engineering application in the research of the angle effect in UAV quantitative remote sensing. This paper provides a theoretical reference for improving the inversion accuracy of surface parameters and the development of UAV remote sensing technology. Full article