Search Results (363)

Search Parameters:
Keywords = RGB band

25 pages, 4926 KB  
Article
Generating Multispectral Point Clouds for Digital Agriculture
by Isabella Subtil Norberto, Antonio Maria Garcia Tommaselli and Milton Hirokazu Shimabukuro
AgriEngineering 2025, 7(12), 407; https://doi.org/10.3390/agriengineering7120407 - 2 Dec 2025
Viewed by 295
Abstract
Digital agriculture is increasingly important for plant-level analysis, enabling detailed assessments of growth, nutrition and overall condition. Multispectral point clouds are promising due to the integration of geometric and radiometric information. Although RGB point clouds can be generated with commercial terrestrial scanners, multi-band multispectral point clouds are rarely obtained directly. Most existing methods are limited to aerial platforms, restricting close-range monitoring and plant-level studies. Efficient workflows for generating multispectral point clouds from terrestrial sensors, while ensuring geometric accuracy and computational efficiency, are still lacking. Here, we propose a workflow combining photogrammetric and computer vision techniques to generate high-resolution multispectral point clouds by integrating terrestrial light detection and ranging (LiDAR) and multispectral imagery. Bundle adjustment estimates the camera’s position and orientation relative to the LiDAR reference system. A frustum-based culling algorithm reduces the computational cost by selecting only relevant points, and an occlusion removal algorithm assigns spectral attributes only to visible points. The results showed that colourisation is effective when bundle adjustment uses an adequate number of well-distributed ground control points. The generated multispectral point clouds achieved high geometric consistency between overlapping views, with displacements varying from 0 to 9 mm, demonstrating stable alignment across perspectives. Despite some limitations due to wind during acquisition, the workflow enables the generation of high-resolution multispectral point clouds of vegetation.
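The frustum-based culling step lends itself to a compact sketch. Below is a minimal pinhole-camera version; the intrinsic matrix `K`, pose `R`/`t`, and the near/far thresholds are illustrative assumptions, not the paper's actual parameters:

```python
import numpy as np

def frustum_cull(points, K, R, t, width, height, near=0.1, far=100.0):
    """Keep only the points that fall inside a pinhole camera's frustum.

    A minimal sketch of frustum-based culling: transform LiDAR points
    into the camera frame, project them, and keep those that land inside
    the image bounds between the near and far planes.
    points: (N, 3) array of LiDAR points.
    """
    cam = (R @ points.T).T + t              # LiDAR frame -> camera frame
    z = cam[:, 2]
    z_safe = np.where(z > near, z, 1.0)     # avoid dividing by z <= 0
    proj = (K @ cam.T).T
    u = proj[:, 0] / z_safe                 # pixel coordinates
    v = proj[:, 1] / z_safe
    keep = (z > near) & (z < far) & \
           (u >= 0) & (u < width) & (v >= 0) & (v < height)
    return points[keep]
```

In the paper's workflow, only the surviving points would then be tested for occlusion and assigned spectral attributes.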
(This article belongs to the Section Remote Sensing in Agriculture)

20 pages, 13362 KB  
Article
Portable Multispectral Imaging System for Sodium Nitrite Detection via Griess Reaction on Cellulose Fiber Sample Pads
by Chanwit Kataphiniharn, Nawapong Unsuree, Suwatwong Janchaysang, Sumrerng Lumjeak, Tatpong Tulyananda, Thidarat Wangkham, Preeyanuch Srichola, Thanawat Nithiwutratthasakul, Nattaporn Chattham and Sorasak Phanphak
Sensors 2025, 25(23), 7323; https://doi.org/10.3390/s25237323 - 2 Dec 2025
Viewed by 613
Abstract
This study presents a custom-built, portable multispectral imaging (MSI) system integrated with computer vision for sodium nitrite detection via the Griess reaction on paper-based substrates. The MSI system was used to investigate the absorption characteristics of sodium nitrite at concentrations from 0 to 10 ppm across nine spectral bands spanning 360–940 nm on para-aminobenzoic acid (PABA) and sulfanilamide (SA) substrates. Upon forming azo dyes with N-(1-naphthyl) ethylenediamine (NED), the PABA and SA substrates exhibited strong absorption near 545 nm and 540 nm, respectively, as measured by a spectrometer. This agrees with the 550 nm MSI images, in which higher sodium nitrite concentration regions appeared darker due to increased absorption. A concentration-correlation analysis was conducted for each spectral band. The normalized difference index (NDI), constructed from the most and least correlated bands at 550 nm and 940 nm, showed a stronger correlation with sodium nitrite concentration than the single best-performing band for both substrates. The NDI increased the coefficient of determination (R2) by approximately 19.32% for PABA–NED and 19.89% for SA–NED. This improvement was further confirmed under varying illumination conditions and through comparison with a conventional smartphone RGB imaging approach, in which the MSI-based NDI showed substantially superior performance. The enhancement is attributed to improved contrast, illumination normalization by the NDI, and the narrower spectral bands of the MSI compared with RGB imaging. In addition, the NDI framework enabled effective image segmentation, classification, and visualization, improving both interpretability and usability and providing a practical guideline for developing more robust models with larger training datasets. The proposed MSI system offers strong advantages in portability, sub-minute acquisition time, and operational simplicity, enabling rapid, on-site, and non-destructive chemical analysis.
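A normalized difference index of the kind described above has the standard form (A − B) / (A + B); a minimal sketch, with the two band images assumed to be co-registered 2-D arrays:

```python
import numpy as np

def normalized_difference_index(band_a, band_b, eps=1e-12):
    """NDI between two band images: (A - B) / (A + B).

    band_a and band_b would be, e.g., the 550 nm and 940 nm MSI bands
    used in the study; eps guards against division by zero on dark
    pixels. This is a generic sketch, not the authors' exact pipeline.
    """
    a = np.asarray(band_a, dtype=float)
    b = np.asarray(band_b, dtype=float)
    return (a - b) / (a + b + eps)
```

The ratio form is what gives the index its illumination-normalizing behavior: a uniform brightness scale factor cancels between numerator and denominator.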
(This article belongs to the Section Optical Sensors)

20 pages, 4705 KB  
Article
Forest Aboveground Biomass Estimation Using High-Resolution Imagery and Integrated Machine Learning
by Jiaqi Liu, Maohua Liu, Tao Shen, Fei Yan and Zeyuan Zhou
Forests 2025, 16(12), 1777; https://doi.org/10.3390/f16121777 - 26 Nov 2025
Viewed by 365
Abstract
This study quantifies forest aboveground biomass (AGB) using integrated remote sensing features from high-resolution GaoFen-7 (GF-7) satellite imagery. We combined texture features, vegetation indices, and RGB spectral bands to improve estimation accuracy. Three machine learning algorithms—Random Forest (RF), Gradient Boosting Tree (GBT), and XGBoost—were compared with a stacking ensemble model using five-fold cross-validation on forest plots in Beijing’s Daxing District. Feature importance was evaluated through SHAP to identify key predictive variables. Results show that texture features exhibit scale-dependent predictive power, while visible-band vegetation indices strongly correlate with AGB. The Stacking ensemble achieved optimal performance (R2 = 0.62, RMSE = 57.34 Mg/ha, MAE = 39.99 Mg/ha), outperforming XGBoost (R2 = 0.59), RF (R2 = 0.58), and GBT (R2 = 0.57). Compared to the best individual model, Stacking improved R2 by 5.1% and effectively mitigated over- and underestimation biases. These findings demonstrate the effectiveness of ensemble learning for forest AGB estimation and suggest potential for regional-scale carbon monitoring applications.
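The stacking idea — base learners whose predictions feed a meta-model — can be illustrated with a numpy-only toy sketch. Here least-squares fits on disjoint feature subsets stand in for RF/GBT/XGBoost, and the data are synthetic; a proper stack would also use out-of-fold base predictions, as the paper's five-fold setup implies:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((200, 6))                                    # toy "features"
y = X @ np.array([3.0, 1.0, 0.0, 2.0, 0.0, 1.0]) + rng.normal(0, 0.1, 200)

X_tr, y_tr, X_te, y_te = X[:100], y[:100], X[100:], y[100:]

def fit_ls(A, y):
    """Ordinary least squares via numpy's lstsq."""
    w, *_ = np.linalg.lstsq(A, y, rcond=None)
    return w

ones = lambda n: np.ones((n, 1))
A1_tr = np.hstack([X_tr[:, :3], ones(100)])   # base learner 1: first 3 features
A2_tr = np.hstack([X_tr[:, 3:], ones(100)])   # base learner 2: last 3 features
w1, w2 = fit_ls(A1_tr, y_tr), fit_ls(A2_tr, y_tr)

# Meta-model: a linear combination of the two base predictions
P_tr = np.column_stack([A1_tr @ w1, A2_tr @ w2, np.ones(100)])
wm = fit_ls(P_tr, y_tr)

# Stacked prediction on held-out data
A1_te = np.hstack([X_te[:, :3], ones(100)])
A2_te = np.hstack([X_te[:, 3:], ones(100)])
y_hat = np.column_stack([A1_te @ w1, A2_te @ w2, np.ones(100)]) @ wm
r2 = 1 - np.sum((y_te - y_hat) ** 2) / np.sum((y_te - y_te.mean()) ** 2)
```

Neither base model sees all six informative features, yet the meta-model recovers most of the signal by combining them — the same mechanism that lets the paper's stack outperform its individual learners.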
(This article belongs to the Section Forest Inventory, Modeling and Remote Sensing)

27 pages, 4518 KB  
Article
Study on the Detection Model of Tea Red Scab Severity Class Using Hyperspectral Imaging Technology
by Weibin Wu, Ting Tang, Yuxin Duan, Wenlong Qiu, Linhui Duan, Jinhong Lv, Yunfang Zeng, Jiacheng Guo and Yuanqiang Luo
Agriculture 2025, 15(22), 2372; https://doi.org/10.3390/agriculture15222372 - 16 Nov 2025
Viewed by 369
Abstract
Tea red scab, a contagious disease affecting tea plants, can infect both buds and mature leaves. This study developed discrimination models to assess the severity of this disease using RGB and hyperspectral images. The models were constructed from a total of 1188 samples collected in May 2024. The results demonstrated that the model based on hyperspectral imaging (HSI) data significantly outperformed the RGB-based model. Four spectral preprocessing methods were applied, among which the combination of SNV, SG, and FD (SNV-SG-FD) proved to be the most effective. To better capture long-range dependencies among spectral bands, a hybrid architecture integrating a Gated Recurrent Unit (GRU) with a one-dimensional convolutional neural network (1D-CNN), termed CNN-GRU, was proposed. This hybrid model was compared against standalone CNN and GRU benchmarks. The hyperparameters of the CNN-GRU model were optimized using the Newton-Raphson-based optimizer (NRBO) algorithm. The proposed NRBO-optimized SNV-SG-FD-CNN-GRU model achieved superior performance, with accuracy, precision, recall, and F1-score reaching 92.94%, 92.54%, 92.42%, and 92.43%, respectively. Significant improvements were observed across all evaluation metrics compared to the single-model alternatives, confirming the effectiveness of both the hybrid architecture and the optimization strategy.

21 pages, 23269 KB  
Article
Wavelet-Guided Zero-Reference Diffusion for Unsupervised Low-Light Image Enhancement
by Yuting Peng, Xiaojun Guo, Mengxi Xu, Bing Ding, Bei Sun and Shaojing Su
Electronics 2025, 14(22), 4460; https://doi.org/10.3390/electronics14224460 - 16 Nov 2025
Viewed by 711
Abstract
Low-light image enhancement (LLIE) remains a challenging task due to the scarcity of paired training data and the complex signal-dependent noise inherent in low-light scenes. To address these issues, this paper proposes a fully unsupervised framework named Wavelet-Guided Zero-Reference Diffusion (WZD) for natural low-light image restoration. WZD leverages an ImageNet-pre-trained diffusion prior and a multi-scale representation of the Discrete Wavelet Transform (DWT) to restore natural illumination from a single dark image. Specifically, the input low-light image is first processed by a Practical Exposure Corrector (PEC) to provide an initial robust luminance baseline. It is then converted from the RGB to the YCbCr color space. The Y channels of the input image and the current diffusion estimate are decomposed into four orthogonal sub-bands—LL, LH, HL, and HH—and fused via learnable, step-wise weights while preserving structural integrity. An exposure control loss and a detail consistency loss are jointly employed to suppress over/under-exposure and preserve high-frequency details. Unlike recent approaches that rely on complex supervised training or lack physical guidance, our method integrates wavelet guidance with a zero-reference learning framework, incorporates the PEC module as a physical prior, and achieves significant improvements in detail preservation and noise suppression without requiring paired training data. Comprehensive experiments on the LOL-v1, LOL-v2, and LSRW datasets demonstrate that WZD achieves a superior or competitive performance, surpassing all referenced unsupervised methods. Ablation studies confirm the critical roles of the PEC prior, YCbCr conversion, wavelet-guided fusion, and the joint loss function. WZD also enhances the performance of downstream tasks, verifying its practical value.
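The LL/LH/HL/HH decomposition mentioned above is a one-level 2-D wavelet transform. A minimal sketch with an averaging (unnormalized) Haar filter — the paper's actual DWT basis and normalization are not specified here, so treat this as illustrative:

```python
import numpy as np

def haar_dwt2(img):
    """One-level 2-D Haar DWT into the four sub-bands LL, LH, HL, HH.

    img: 2-D array with even dimensions (e.g. a Y channel after
    RGB -> YCbCr conversion). Uses the averaging Haar variant:
    low-pass = pairwise mean, high-pass = pairwise half-difference.
    """
    a = np.asarray(img, dtype=float)
    # Filter along columns: mean / difference of horizontal pixel pairs
    lo = (a[:, 0::2] + a[:, 1::2]) / 2
    hi = (a[:, 0::2] - a[:, 1::2]) / 2
    # Filter along rows on each intermediate result
    ll = (lo[0::2, :] + lo[1::2, :]) / 2   # coarse approximation
    lh = (lo[0::2, :] - lo[1::2, :]) / 2   # horizontal detail
    hl = (hi[0::2, :] + hi[1::2, :]) / 2   # vertical detail
    hh = (hi[0::2, :] - hi[1::2, :]) / 2   # diagonal detail
    return ll, lh, hl, hh
```

The LL band carries illumination-scale structure (what the diffusion prior guides), while the three detail bands carry the high-frequency content the detail consistency loss protects.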

23 pages, 4818 KB  
Article
Multispectral-NeRF: A Multispectral Modeling Approach Based on Neural Radiance Fields
by Hong Zhang, Fei Guo, Zihan Xie and Dizhao Yao
Appl. Sci. 2025, 15(22), 12080; https://doi.org/10.3390/app152212080 - 13 Nov 2025
Viewed by 603
Abstract
3D reconstruction technology generates three-dimensional representations of real-world objects, scenes, or environments using sensor data such as 2D images, with extensive applications in robotics, autonomous vehicles, and virtual reality systems. Traditional 3D reconstruction techniques based on 2D images typically rely on RGB spectral information. With advances in sensor technology, additional spectral bands beyond RGB have been increasingly incorporated into 3D reconstruction workflows. Existing methods that integrate these expanded spectral data often suffer from high cost, low accuracy, and poor geometric fidelity. Three-dimensional reconstruction based on NeRF can effectively address these issues, producing high-precision and high-quality reconstruction results. However, NeRF and improved variants such as NeRFacto are trained on three-band data and cannot exploit multi-band information. To address this problem, we propose Multispectral-NeRF, an enhanced neural architecture derived from NeRF that can effectively integrate multispectral information. Our technical contributions comprise three modifications: expanding hidden-layer dimensionality to accommodate 6-band spectral inputs; redesigning residual functions to optimize spectral discrepancy calculations between reconstructed and reference images; and adapting data compression modules to address the increased bit-depth requirements of multispectral imagery. Experimental results confirm that Multispectral-NeRF successfully processes multi-band spectral features while accurately preserving the original scenes’ spectral characteristics.

24 pages, 53871 KB  
Article
Hyperspectral Object Tracking via Band and Context Refinement Network
by Jingyan Zhang, Zhizhong Zheng, Kang Ni, Nan Huang, Qichao Liu and Pengfei Liu
Remote Sens. 2025, 17(22), 3689; https://doi.org/10.3390/rs17223689 - 12 Nov 2025
Viewed by 550
Abstract
The scarcity of labeled hyperspectral video samples has motivated existing methods to leverage RGB-pretrained networks; however, many existing methods of hyperspectral object tracking (HOT) select only three representative spectral bands from hyperspectral images, leading to spectral information loss and weakened target discrimination. To address this issue, we propose the Band and Context Refinement Network (BCR-Net) for HOT. Firstly, we design a band importance learning module to partition hyperspectral images into multiple false-color images for the pre-trained backbone network. Specifically, each hyperspectral band is expressed as a non-negative linear combination of other bands to form a correlation matrix. This correlation matrix is used to guide an importance ranking of the bands, enabling the grouping of bands into false-color images that supply informative spectral features for the multi-branch tracking framework. Furthermore, to exploit spectral–spatial relationships and contextual information, we design a Contextual Feature Refinement Module, which integrates multi-scale fusion and context-aware optimization to improve feature discrimination. Finally, to adaptively fuse multi-branch features according to band importance, we employ a correlation matrix-guided fusion strategy. Extensive experiments on two public hyperspectral video datasets show that BCR-Net achieves competitive performance compared with existing classical tracking methods.
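The band-ranking idea can be sketched roughly as follows. This toy version scores each band by its total absolute correlation with the other bands, which is only a stand-in for the paper's non-negative linear reconstruction; the function name and scoring rule are illustrative assumptions:

```python
import numpy as np

def band_importance(cube):
    """Rank bands of an (H, W, B) hyperspectral cube by inter-band correlation.

    A rough sketch of correlation-matrix-guided band ranking: build a
    band-by-band correlation matrix and score each band by its summed
    absolute correlation with the others. Bands with the highest scores
    come first in the returned index order.
    """
    b = cube.shape[-1]
    flat = cube.reshape(-1, b).astype(float)        # pixels x bands
    corr = np.abs(np.corrcoef(flat, rowvar=False))  # (B, B) correlation matrix
    np.fill_diagonal(corr, 0.0)                     # ignore self-correlation
    scores = corr.sum(axis=1)
    return np.argsort(scores)[::-1]                 # highest-scored bands first
```

Grouping the ranked bands into consecutive triplets would then yield the false-color images fed to the RGB-pretrained backbone.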
(This article belongs to the Special Issue SAR and Multisource Remote Sensing: Challenges and Innovations)

37 pages, 25662 KB  
Article
A Hyperspectral Remote Sensing Image Encryption Algorithm Based on a Novel Two-Dimensional Hyperchaotic Map
by Zongyue Bai, Qingzhan Zhao, Wenzhong Tian, Xuewen Wang, Jingyang Li and Yuzhen Wu
Entropy 2025, 27(11), 1117; https://doi.org/10.3390/e27111117 - 30 Oct 2025
Viewed by 438
Abstract
With the rapid advancement of hyperspectral remote sensing technology, the security of hyperspectral images (HSIs) has become a critical concern. However, traditional image encryption methods—designed primarily for grayscale or RGB images—fail to address the high dimensionality, large data volume, and spectral-domain characteristics inherent to HSIs. Existing chaotic encryption schemes often suffer from limited chaotic performance, narrow parameter ranges, and inadequate spectral protection, leaving HSIs vulnerable to spectral feature extraction and statistical attacks. To overcome these limitations, this paper proposes a novel hyperspectral image encryption algorithm based on a newly designed two-dimensional cross-coupled hyperchaotic map (2D-CSCM), which synergistically integrates Cubic, Sinusoidal, and Chebyshev maps. The 2D-CSCM exhibits superior hyperchaotic behavior, including a wider hyperchaotic parameter range, enhanced randomness, and higher complexity, as validated by Lyapunov exponents, sample entropy, and NIST tests. Building on this, a layered encryption framework is introduced: spectral-band scrambling to conceal spectral curves while preserving spatial structure, spatial pixel permutation to disrupt correlation, and a bit-level diffusion mechanism based on dynamic DNA encoding, specifically designed to secure high bit-depth digital number (DN) values (typically >8 bits). Experimental results on multiple HSI datasets demonstrate that the proposed algorithm achieves near-ideal information entropy (up to 15.8107 for 16-bit data), negligible adjacent-pixel correlation (below 0.01), and strong resistance to statistical, cropping, and differential attacks (NPCR ≈ 99.998%, UACI ≈ 33.30%). The algorithm not only ensures comprehensive encryption of both spectral and spatial information but also supports lossless decryption, offering a robust and practical solution for secure storage and transmission of hyperspectral remote sensing imagery.
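NPCR and UACI, the differential-attack metrics quoted above, have standard definitions: NPCR is the percentage of pixel positions that differ between two cipher images, and UACI is the mean absolute intensity difference normalized by the maximum DN value. A minimal sketch:

```python
import numpy as np

def npcr_uaci(c1, c2, bit_depth=16):
    """NPCR and UACI (both in percent) between two cipher images.

    bit_depth=16 matches the high bit-depth HSI data discussed above;
    for ordinary 8-bit images use bit_depth=8.
    """
    c1 = np.asarray(c1, dtype=np.int64)
    c2 = np.asarray(c2, dtype=np.int64)
    max_val = 2 ** bit_depth - 1
    npcr = 100.0 * np.mean(c1 != c2)                  # changed-pixel rate
    uaci = 100.0 * np.mean(np.abs(c1 - c2)) / max_val # unified avg. change
    return npcr, uaci
```

For a secure cipher, a one-pixel change in the plaintext should drive NPCR toward ~99.6% and UACI toward ~33.46% (the theoretical expectations for uniformly random cipher images), consistent with the values the paper reports.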
(This article belongs to the Section Signal and Data Analysis)

22 pages, 3835 KB  
Article
Phenology-Guided Wheat and Corn Identification in Xinjiang: An Improved U-Net Semantic Segmentation Model Using PCA and CBAM-ASPP
by Yang Wei, Xian Guo, Yiling Lu, Hongjiang Hu, Fei Wang, Rongrong Li and Xiaojing Li
Remote Sens. 2025, 17(21), 3563; https://doi.org/10.3390/rs17213563 - 28 Oct 2025
Viewed by 456
Abstract
Wheat and corn are two major food crops in Xinjiang. However, the spectral similarity between these crop types and the complexity of their spatial distribution have posed significant challenges to accurate crop identification. To this end, the study aimed to improve the accuracy of crop distribution identification in complex environments in three ways. First, by analysing the kNDVI and EVI time series, the optimal identification window was determined to be days 156–176—a period when wheat is in the grain-filling to milk-ripening phase and maize is in the jointing to tillering phase—during which the strongest spectral differences between the two crops occur. Second, principal component analysis (PCA) was applied to Sentinel-2 data. The top three principal components were extracted to construct the input dataset, effectively integrating visible and near-infrared band information. This approach suppressed redundancy and noise while replacing traditional RGB datasets. Finally, the Convolutional Block Attention Module (CBAM) was integrated into the U-Net model to enhance feature focusing on key crop areas. An improved Atrous Spatial Pyramid Pooling (ASPP) module based on depthwise separable convolutions was adopted to reduce the computational load while boosting multi-scale context awareness. The experimental results showed the following: (1) Wheat and corn exhibit obvious phenological differences between the 156th and 176th days of the year, which can be used as the optimal time window for identifying their spatial distributions. (2) The method proposed by this research had the best performance, with its mIoU, mPA, F1-score, and overall accuracy (OA) reaching 83.03%, 91.34%, 90.73%, and 90.91%, respectively. Compared to DeeplabV3+, PSPnet, HRnet, Segformer, and U-Net, the OA improved by 5.97%, 4.55%, 2.03%, 8.99%, and 1.5%, respectively. The recognition accuracy of the PCA dataset improved by approximately 2% compared to the RGB dataset. (3) This strategy still had high accuracy when predicting wheat and corn yields in Qitai County, Xinjiang, and had a certain degree of generalisability. In summary, the improved strategy proposed in this study holds considerable application potential for identifying the spatial distribution of wheat and corn in arid regions.
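Extracting the top three principal components from a band cube, as in the second step above, can be sketched with plain numpy (production code would typically reach for `sklearn.decomposition.PCA`; the function name here is illustrative):

```python
import numpy as np

def top_principal_components(cube, n_components=3):
    """Project an (H, W, B) multispectral cube onto its top principal components.

    Flattens the cube to a pixels-by-bands matrix, centers each band,
    and projects onto the eigenvectors of the band covariance matrix
    with the largest eigenvalues. Returns an (H, W, n_components) array.
    """
    h, w, b = cube.shape
    flat = cube.reshape(-1, b).astype(float)
    flat -= flat.mean(axis=0)                  # center each band
    cov = np.cov(flat, rowvar=False)           # (B, B) band covariance
    eigvals, eigvecs = np.linalg.eigh(cov)     # eigenvalues ascending
    top = eigvecs[:, np.argsort(eigvals)[::-1][:n_components]]
    return (flat @ top).reshape(h, w, n_components)
```

The resulting three-channel image plays the role of the RGB input expected by the segmentation network while folding in near-infrared information.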

22 pages, 4258 KB  
Article
Visible Image-Based Machine Learning for Identifying Abiotic Stress in Sugar Beet Crops
by Seyed Reza Haddadi, Masoumeh Hashemi, Richard C. Peralta and Masoud Soltani
Algorithms 2025, 18(11), 680; https://doi.org/10.3390/a18110680 - 24 Oct 2025
Viewed by 565
Abstract
Previous research has shown that the combined use of inexpensive RGB images, image processing, and machine learning (ML) can accurately identify crop stress. Four Machine Learning Image Modules (MLIMs) were developed to enable the rapid and cost-effective identification of sugar beet stresses caused by water and/or nitrogen deficiencies. RGB images representing stressed and non-stressed crops were used in the analysis. To improve robustness, data augmentation was applied, generating six variations on each image and expanding the dataset from 150 to 900 images for training and testing. Each MLIM was trained and tested using 54 combinations derived from nine canopy and RGB-based input features and six ML algorithms. The most accurate MLIM used RGB bands as inputs to a Multilayer Perceptron, achieving 96.67% accuracy for overall stress detection, and 95.93% and 94.44% for water and nitrogen stress identification, respectively. A Random Forest model, using only the green band, achieved 92.22% accuracy for stress detection while requiring only one-fourth the computation time. For specific stresses, a Random Forest (RF) model using a Scale-Invariant Feature Transform descriptor (SIFT) achieved 93.33% for water stress, while RF with RGB bands and canopy cover reached 85.56% for nitrogen stress. To address the trade-off between accuracy and computational cost, a bargaining theory-based framework was applied. This approach identified optimal MLIMs that balance performance and execution efficiency.

18 pages, 112460 KB  
Article
Gradient Boosting for the Spectral Super-Resolution of Ocean Color Sensor Data
by Brittney Slocum, Jason Jolliff, Sherwin Ladner, Adam Lawson, Mark David Lewis and Sean McCarthy
Sensors 2025, 25(20), 6389; https://doi.org/10.3390/s25206389 - 16 Oct 2025
Viewed by 924
Abstract
We present a gradient boosting framework for reconstructing hyperspectral signatures in the visible spectrum (400–700 nm) of satellite-based ocean scenes from limited multispectral inputs. Hyperspectral data is composed of many, typically greater than 100, narrow wavelength bands across the electromagnetic spectrum. While hyperspectral data can offer reflectance values at every nanometer, multispectral sensors typically provide only 3 to 11 discrete bands, undersampling the visible color space. Our approach is applied to remote sensing reflectance (Rrs) measurements from a set of ocean color sensors, including Suomi-National Polar-orbiting Partnership (SNPP) Visible Infrared Imaging Radiometer Suite (VIIRS), the Ocean and Land Colour Instrument (OLCI), Hyperspectral Imager for the Coastal Ocean (HICO), and NASA’s Plankton, Aerosol, Cloud, Ocean Ecosystem Ocean Color Instrument (PACE OCI), as well as in situ Rrs data from National Oceanic and Atmospheric Administration (NOAA) calibration and validation cruises. By leveraging these datasets, we demonstrate the feasibility of transforming low-spectral-resolution imagery into high-fidelity hyperspectral products. This capability is particularly valuable given the increasing availability of low-cost platforms equipped with RGB or multispectral imaging systems. Our results underscore the potential of hyperspectral enhancement for advancing ocean color monitoring and enabling broader access to high-resolution spectral data for scientific and environmental applications.

22 pages, 4807 KB  
Article
Adapting Gated Axial Attention for Microscopic Hyperspectral Cholangiocarcinoma Image Segmentation
by Jianxia Xue, Xiaojing Chen and Soo-Hyung Kim
Electronics 2025, 14(20), 3979; https://doi.org/10.3390/electronics14203979 - 11 Oct 2025
Viewed by 385
Abstract
Accurate segmentation of medical images is essential for clinical diagnosis and treatment planning. Hyperspectral imaging (HSI), with its rich spectral information, enables improved tissue characterization and structural localization compared with traditional grayscale or RGB imaging. However, the effective modeling of both spatial and spectral dependencies remains a significant challenge, particularly in small-scale medical datasets. In this study, we propose GSA-Net, a 3D segmentation framework that integrates Gated Spectral-Axial Attention (GSA) to capture long-range interband dependencies and enhance spectral feature discrimination. The GSA module incorporates multilayer perceptrons (MLPs) and adaptive LayerScale mechanisms to enable the fine-grained modulation of spectral attention across feature channels. We evaluated GSA-Net on a hyperspectral cholangiocarcinoma (CCA) dataset, achieving an average Intersection over Union (IoU) of 60.64 ± 14.48%, Dice coefficient of 74.44 ± 11.83%, and Hausdorff Distance of 76.82 ± 42.77 px. It outperformed state-of-the-art baselines. Further spectral analysis revealed that informative spectral bands are widely distributed rather than concentrated, and full-spectrum input consistently outperforms aggressive band selection, underscoring the importance of adaptive spectral attention for robust hyperspectral medical image segmentation.
(This article belongs to the Special Issue Image Segmentation, 2nd Edition)

28 pages, 5791 KB  
Article
Tree Health Assessment Using Mask R-CNN on UAV Multispectral Imagery over Apple Orchards
by Mohadeseh Kaviani, Brigitte Leblon, Thangarajah Akilan, Dzhamal Amishev, Armand LaRocque and Ata Haddadi
Remote Sens. 2025, 17(19), 3369; https://doi.org/10.3390/rs17193369 - 6 Oct 2025
Viewed by 1052
Abstract
Accurate tree health monitoring in orchards is essential for optimal orchard production. This study investigates the efficacy of a deep learning-based object detection single-step method for detecting tree health on multispectral UAV imagery. A modified Mask R-CNN framework is employed with four different backbones—ResNet-50, ResNet-101, ResNeXt-101, and Swin Transformer—on three image combinations: (1) RGB images, (2) 5-band multispectral images comprising RGB, Red-Edge, and Near-Infrared (NIR) bands, and (3) three principal components (3PCs) computed from the reflectance of the five spectral bands and twelve associated vegetation index images. The Mask R-CNN, having a ResNeXt-101 backbone, and applied to the 5-band multispectral images, consistently outperforms other configurations, with an F1-score of 85.68% and a mean Intersection over Union (mIoU) of 92.85%. To address the class imbalance, class weighting and focal loss were integrated into the model, yielding improvements in the detection of the minority class, i.e., the unhealthy trees. The tested method has the advantage of allowing the detection of unhealthy trees over UAV images using a single-step approach.
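Focal loss, used above to counter the healthy/unhealthy class imbalance, down-weights easy examples via a (1 − p)^γ modulating factor. A minimal binary sketch with the common α = 0.25, γ = 2 defaults (not necessarily the values used in the paper):

```python
import numpy as np

def focal_loss(p, y, alpha=0.25, gamma=2.0, eps=1e-12):
    """Binary focal loss (Lin et al.): -alpha_t * (1 - p_t)^gamma * log(p_t).

    p: predicted probability of the positive class (e.g. unhealthy tree).
    y: binary labels (1 = positive). Returns the mean loss over samples.
    """
    p = np.clip(np.asarray(p, dtype=float), eps, 1 - eps)
    y = np.asarray(y, dtype=float)
    pt = np.where(y == 1, p, 1 - p)        # probability of the true class
    a = np.where(y == 1, alpha, 1 - alpha) # class-balancing weight
    return float(np.mean(-a * (1 - pt) ** gamma * np.log(pt)))
```

Because well-classified samples (pt near 1) contribute almost nothing, the gradient is dominated by hard examples — typically the minority class, which is why the modification improves unhealthy-tree detection.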

30 pages, 14129 KB  
Article
Evaluating Two Approaches for Mapping Solar Installations to Support Sustainable Land Monitoring: Semantic Segmentation on Orthophotos vs. Multitemporal Sentinel-2 Classification
by Adolfo Lozano-Tello, Andrés Caballero-Mancera, Jorge Luceño and Pedro J. Clemente
Sustainability 2025, 17(19), 8628; https://doi.org/10.3390/su17198628 - 25 Sep 2025
Viewed by 863
Abstract
This study evaluates two approaches for detecting solar photovoltaic (PV) installations across agricultural areas, emphasizing their role in supporting sustainable energy monitoring, land management, and planning. Accurate PV mapping is essential for tracking renewable energy deployment, guiding infrastructure development, assessing land-use impacts, and informing policy decisions aimed at reducing carbon emissions and fostering climate resilience. The first approach applies deep learning-based semantic segmentation to high-resolution RGB orthophotos, using the pretrained “Solar PV Segmentation” model, which achieves an F1-score of 95.27% and an IoU of 91.04%, providing highly reliable PV identification. The second approach employs multitemporal pixel-wise spectral classification using Sentinel-2 imagery, where the best-performing neural network achieved a precision of 99.22%, a recall of 96.69%, and an overall accuracy of 98.22%. Both approaches coincided in detecting 86.67% of the identified parcels, with an average surface difference of less than 6.5 hectares per parcel. The Sentinel-2 method leverages its multispectral bands and frequent revisit rate, enabling timely detection of new or evolving installations. The proposed methodology supports the sustainable management of land resources by enabling automated, scalable, and cost-effective monitoring of solar infrastructures using open-access satellite data. This contributes directly to the goals of climate action and sustainable land-use planning and provides a replicable framework for assessing human-induced changes in land cover at regional and national scales.

19 pages, 4445 KB  
Article
Hyperspectral Imaging-Based Deep Learning Method for Detecting Quarantine Diseases in Apples
by Hang Zhang, Naibo Ye, Jingru Gong, Huajie Xue, Peihao Wang, Binbin Jiao, Liping Yin and Xi Qiao
Foods 2025, 14(18), 3246; https://doi.org/10.3390/foods14183246 - 18 Sep 2025
Viewed by 1248
Abstract
Rapid detection of quarantine diseases in apples is essential for import–export control but remains difficult because routine inspections rely on manual visual checks that limit automation at port scale. A fast, non-destructive system suitable for deployment at customs is therefore needed. In this study, three common apple quarantine pathogens were targeted using hyperspectral images acquired by a close-range hyperspectral camera and analyzed with a convolutional neural network (CNN). Symptoms of these diseases often appear similar in RGB images, making reliable differentiation difficult. Reflectance from 400 to 1000 nm was recorded to provide richer spectral detail for separating subtle disease signatures. To quantify stage-dependent differences, average reflectance curves were extracted for apples infected by each pathogen at early, middle, and late lesion stages. A CNN tailored to hyperspectral inputs, termed HSC-Resnet, was designed with an increased number of convolutional channels to accommodate the broad spectral dimension and with channel and spatial attention integrated to highlight informative bands and regions. HSC-Resnet achieved a precision of 95.51%, indicating strong potential for fast, accurate, and non-destructive detection of apple quarantine diseases in import–export management.
