Search Results (2,406)

Search Parameters:
Keywords = spatial and spectral resolution

24 pages, 22010 KB  
Article
Improving the Temporal Resolution of Land Surface Temperature Using Machine and Deep Learning Models
by Mohsen Niroomand, Parham Pahlavani, Behnaz Bigdeli and Omid Ghorbanzadeh
Geomatics 2025, 5(4), 50; https://doi.org/10.3390/geomatics5040050 - 1 Oct 2025
Abstract
Land Surface Temperature (LST) is a critical parameter for analyzing urban heat islands, surface–atmosphere interactions, and environmental management. This study enhances the temporal resolution of LST data by leveraging machine learning and deep learning models. A novel methodology was developed using Landsat 8 thermal data and Sentinel-2 multispectral imagery to predict LST at finer temporal intervals in an urban setting. Although Sentinel-2 lacks a thermal band, its high-resolution multispectral data, when integrated with Landsat 8 thermal observations, provide valuable complementary information for LST estimation. Several models were employed for LST prediction, including Random Forest Regression (RFR), Convolutional Neural Network (CNN), Long Short-Term Memory (LSTM) network, and Gated Recurrent Unit (GRU). Model performance was assessed using the coefficient of determination (R²) and Mean Absolute Error (MAE). The CNN model demonstrated the highest predictive capability, achieving an R² of 74.81% and an MAE of 1.588 °C. Feature importance analysis highlighted the role of spectral bands, spectral indices, topographic parameters, and land cover data in capturing the dynamic complexity of LST variations and directional patterns. A refined CNN model, trained with the features exhibiting the highest correlation with the reference LST, achieved an improved R² of 84.48% and an MAE of 1.19 °C. These results underscore the importance of a comprehensive analysis of the factors influencing LST, as well as the need to consider the specific characteristics of the study area. Additionally, a modified TsHARP approach was applied to enhance spatial resolution, though its accuracy remained lower than that of the CNN model. The study was conducted in Tehran, a rapidly urbanizing metropolis facing rising temperatures, heavy traffic congestion, rapid horizontal expansion, and low energy efficiency. The findings contribute to urban environmental management by providing high-temporal-resolution LST data, essential for mitigating urban heat islands and improving climate resilience.
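The skill scores quoted above (R² and MAE against reference LST) are standard regression metrics; a minimal sketch of that scoring step, with hypothetical values rather than the authors' data or code, could look like this:

```python
# Minimal sketch (hypothetical values, not the authors' code) of scoring predicted LST
# against reference Landsat 8 LST with R^2 and MAE.
import numpy as np
from sklearn.metrics import r2_score, mean_absolute_error

lst_reference = np.array([28.4, 31.2, 35.7, 30.1, 27.9, 33.5])  # reference LST, deg C
lst_predicted = np.array([27.9, 32.0, 34.8, 30.6, 28.4, 32.7])  # model output, deg C

r2 = r2_score(lst_reference, lst_predicted)               # coefficient of determination
mae = mean_absolute_error(lst_reference, lst_predicted)   # mean absolute error, deg C
print(f"R2 = {r2:.2%}, MAE = {mae:.3f} degC")
```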

9 pages, 1539 KB  
Communication
The Sensing Attack: Mechanism and Deployment in Submarine Cable Systems
by Haokun Song, Xiaoming Chen, Junshi Gao, Tianpu Yang, Jianhua Xi, Xiaoqing Zhu, Shuo Sun, Wenjing Yu, Xinyu Bai, Chao Wu and Chen Wei
Photonics 2025, 12(10), 976; https://doi.org/10.3390/photonics12100976 - 30 Sep 2025
Abstract
Submarine cable systems, serving as the critical backbone of global communications, face evolving resilience threats. This work proposes a novel sensing attack that utilizes ultra-narrow-linewidth lasers to surveil these infrastructures. First, the Narrowband Jamming Attack (NJA) is introduced as an evolution of conventional physical-layer jamming. NJA is divided into three categories according to the spectral position, and the non-overlapping class represents the proposed sensing attack. Its operational principles and the key parameters determining its efficacy are analyzed, along with its deployment strategy in submarine cable systems. Finally, the sensing capability is validated via OptiSystem simulations. Results demonstrate successful localization of vibrations within the 50–200 Hz range on a 1 km fiber, achieving a spatial resolution of 1 m, and confirm the influence of vibration parameters on sensing performance. This work reveals that the proposed sensing attack has the potential to covertly monitor environmental data, thereby posing a threat to information security in submarine cable systems.
14 pages, 3363 KB  
Article
Design for Assembly of a Confocal System Applied to Depth Profiling in Biological Tissue Using Raman Spectroscopy
by Edgar Urrieta Almeida, Lelio de la Cruz May, Olena Benavides, Magdalena Bandala Garces and Aaron Flores Gil
Technologies 2025, 13(10), 440; https://doi.org/10.3390/technologies13100440 - 30 Sep 2025
Abstract
This work presents the development of a Z-depth system for Confocal Raman Spectroscopy (CRS), which allows for the acquisition of Raman spectra both at the surface and at depth in heterogeneous samples. The proposed CRS system consists of the coupling of a commercial 785 nm bifurcated Raman probe (RPB) with a 20×/0.40 infinity plan achromatic polarizing microscope objective, a Long Working Distance (LWD) of 1.2 cm, and a 50 μm core multimode optical fiber used as a pinhole filter. With this implementation, it is possible to achieve both a high spatial resolution of approximately 16.2 μm and a spectral resolution of ∼14 cm⁻¹, determined from the FWHM of the narrow 1004 cm⁻¹ Raman band. The system is configured to operate within the 400–1800 cm⁻¹ spectral window. The implementation of a system of this nature offers a favorable cost–benefit ratio, as commercial CRS is typically found in high-cost environments such as cosmetics, pharmaceutical, and biological laboratories. The proposed system is low-cost and employs a minimal set of optical components to achieve functionality comparable to that of a confocal Raman microscope. High signal-to-noise ratio (SNR) Raman spectra (∼660.05 at 1447 cm⁻¹) can be obtained with short integration times (∼25 s) and low laser power (30–35 mW) when analyzing biological samples such as in vivo human fingernails and fingertips. This power level is significantly lower than the exposure limits established by the American National Standards Institute (ANSI) for human laser experiments. Raman spectra were recorded from the surface of both the nails and fingertips of three volunteers in order to characterize their biological samples at different depths. The measurements were performed in 50 μm steps to obtain molecular structural information from both surface and subsurface tissue layers. The proposed CRS enables the identification of differences between two closely spaced, centered, and narrow Raman bands. Additionally, broad Raman bands observed at the skin surface can be deconvolved into at least three sub-bands, which can be quantitatively characterized in terms of intensity, peak position, and bandwidth as the confocal plane advances in depth. Moreover, the CRS system enables the detection of subtle, low-intensity features that appear at the surface but disappear beyond specific depth layers.
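The ∼14 cm⁻¹ spectral resolution above is derived from the FWHM of an isolated Raman band; a minimal sketch of that FWHM estimate on a synthetic 1004 cm⁻¹ peak (not the authors' data) is:

```python
# Minimal sketch (assumed synthetic data, not the authors' code) of estimating spectral
# resolution from the FWHM of an isolated Raman band, as done for the 1004 cm^-1 line.
import numpy as np

wavenumber = np.linspace(980, 1030, 501)                 # cm^-1, hypothetical axis
peak = np.exp(-0.5 * ((wavenumber - 1004) / 6.0) ** 2)   # synthetic band profile

half_max = peak.max() / 2.0
above = wavenumber[peak >= half_max]                      # axis values above half maximum
fwhm = above.max() - above.min()                          # full width at half maximum
print(f"FWHM ~= {fwhm:.1f} cm^-1")                        # ~14 cm^-1 for sigma ~ 6
```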
19 pages, 12926 KB  
Article
Mapping Banana and Peach Palm in Diversified Landscapes in the Brazilian Atlantic Forest with Sentinel-2
by Victória Beatriz Soares, Taya Cristo Parreiras, Danielle Elis Garcia Furuya, Édson Luis Bolfe and Katia de Lima Nechet
Agriculture 2025, 15(19), 2052; https://doi.org/10.3390/agriculture15192052 - 30 Sep 2025
Abstract
Mapping banana and peach palm in heterogeneous landscapes remains challenging due to spatial heterogeneity, spectral similarities between crops and native vegetation, and persistent cloud cover. This study focused on the municipality of Jacupiranga, located within the Ribeira Valley region and surrounded by the Atlantic Forest, which is home to one of Brazil's largest remaining continuous forest areas. More than 99% of Jacupiranga's agricultural output in the 21st century came from bananas (Musa spp.) and peach palms (Bactris gasipaes), underscoring the importance of perennial crops to the local economy and traditional communities. Using a time series of vegetation indices from Sentinel-2 imagery combined with field and remote data, we applied a hierarchical classification method to map where these two crops are cultivated. The Random Forest classifier fed with 10 m resolution images enabled the detection of intricate agricultural mosaics that are typical of family farming systems and improved class separability between perennial and non-perennial crops, as well as between banana and peach palm. These results show how combining geographic information systems, data analysis, and remote sensing can improve digital agriculture, rural management, and sustainable agricultural development in socio-environmentally important areas.
(This article belongs to the Section Artificial Intelligence and Digital Agriculture)
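A hierarchical classification of the kind described can be approximated with two chained Random Forest stages; the sketch below uses synthetic vegetation-index time series and hypothetical labels, not the authors' training data:

```python
# Minimal sketch (hypothetical features/labels, not the authors' pipeline) of hierarchical
# Random Forest classification: first separate perennial from non-perennial pixels, then
# split perennials into banana vs. peach palm using vegetation-index time series.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.random((300, 24))                  # 24 time steps of an NDVI-like index per pixel
is_perennial = rng.integers(0, 2, 300)     # stage 1 labels: perennial vs. non-perennial
crop = rng.integers(0, 2, 300)             # stage 2 labels: banana (0) vs. peach palm (1)

stage1 = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, is_perennial)
mask = is_perennial == 1                   # train stage 2 only on perennial samples
stage2 = RandomForestClassifier(n_estimators=200, random_state=0).fit(X[mask], crop[mask])

# Inference: apply stage 2 only where stage 1 predicts "perennial".
pred1 = stage1.predict(X)
pred2 = np.where(pred1 == 1, stage2.predict(X), -1)   # -1 marks non-perennial pixels
```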

16 pages, 4246 KB  
Article
Hyperspectral Imaging for Non-Destructive Detection of Chemical Residues on Textiles
by Lukas Kampik, Sophie Helen Gruber, Klemens Weisleitner, Gerald Bauer, Hannes Steiner, Leo Tous, Seraphin Hubert Unterberger and Johannes Dominikus Pallua
Textiles 2025, 5(4), 42; https://doi.org/10.3390/textiles5040042 - 28 Sep 2025
Abstract
Detecting chemical residues on surfaces is critical in environmental monitoring, industrial hygiene, public health, and incident management after chemical releases. Compounds such as acrylonitrile (ACN) and tetraethylguanidine (TEG), widely used in chemical processes, can pose risks upon residual exposure. Hyperspectral imaging (HSI), a high-resolution, non-destructive method, offers a secure and effective solution to identify and spatially map chemical contaminants based on spectral signatures. In this study, we present an HSI-based framework to detect and differentiate ACN and TEG residues on textile surfaces. High-resolution spectral data were collected from three representative textiles using a hyperspectral camera operating in the short-wave infrared range. The spectral datasets were processed using standard normal variate transformation, Savitzky–Golay filtering, and principal component analysis to enhance contrast and identify material-specific features. The results not only demonstrate the effectiveness of this approach in resolving spectral differences corresponding to distinct chemical residues and concentrations but also provide a practical and scalable method for detecting chemical contaminants in consumer and industrial textile materials, supporting reliable residue assessment and holding promise for broader applications in safety-critical fields.
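The preprocessing chain named above (standard normal variate, Savitzky–Golay filtering, PCA) is straightforward to assemble; a minimal sketch on synthetic SWIR spectra, not the authors' code, is:

```python
# Minimal sketch (synthetic spectra, not the authors' code) of the preprocessing chain:
# standard normal variate (SNV), Savitzky-Golay smoothing, then PCA.
import numpy as np
from scipy.signal import savgol_filter
from sklearn.decomposition import PCA

spectra = np.random.default_rng(1).random((50, 256))       # 50 SWIR spectra, 256 bands

# SNV: per-spectrum centering and scaling to remove multiplicative scatter effects.
snv = (spectra - spectra.mean(axis=1, keepdims=True)) / spectra.std(axis=1, keepdims=True)
smoothed = savgol_filter(snv, window_length=11, polyorder=2, axis=1)   # SG filtering
scores = PCA(n_components=3).fit_transform(smoothed)        # first 3 principal components
print(scores.shape)                                          # (50, 3)
```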

34 pages, 9527 KB  
Article
High-Resolution 3D Thermal Mapping: From Dual-Sensor Calibration to Thermally Enriched Point Clouds
by Neri Edgardo Güidi, Andrea di Filippo and Salvatore Barba
Appl. Sci. 2025, 15(19), 10491; https://doi.org/10.3390/app151910491 - 28 Sep 2025
Abstract
Thermal imaging is increasingly applied in remote sensing to identify material degradation, monitor structural integrity, and support energy diagnostics. However, its adoption is limited by the low spatial resolution of thermal sensors compared to RGB cameras. This study proposes a modular pipeline to generate thermally enriched 3D point clouds by fusing RGB and thermal imagery acquired simultaneously with a dual-sensor unmanned aerial vehicle system. The methodology includes geometric calibration of both cameras, image undistortion, cross-spectral feature matching, and projection of radiometric data onto the photogrammetric model through a computed homography. Thermal values are extracted using a custom parser and assigned to 3D points based on visibility masks and interpolation strategies. Calibration achieved 81.8% chessboard detection, yielding subpixel reprojection errors. Among twelve evaluated algorithms, LightGlue retained 99% of its matches and delivered a reprojection accuracy of 18.2% at 1 px, 65.1% at 3 px, and 79% at 5 px. A case study on photovoltaic panels demonstrates the method's capability to map thermal patterns with low temperature deviation from ground-truth data. Developed entirely in Python, the workflow integrates into Agisoft Metashape or other software. The proposed approach enables cost-effective, high-resolution thermal mapping with applications in civil engineering, cultural heritage conservation, and environmental monitoring.
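The core fusion step, projecting radiometric data onto the RGB geometry through a computed homography, can be sketched with OpenCV; the matched points and image sizes below are hypothetical, not the authors' pipeline:

```python
# Minimal sketch (hypothetical matches, not the authors' pipeline) of warping a thermal
# frame onto the RGB image geometry via a homography computed from cross-spectral matches.
import numpy as np
import cv2

# Matched keypoints (x, y): pts_thermal[i] corresponds to pts_rgb[i]; assumed values here.
pts_thermal = np.float32([[10, 12], [200, 15], [205, 180], [12, 175], [100, 90]])
pts_rgb     = np.float32([[40, 50], [820, 60], [835, 700], [45, 690], [410, 360]])

H, inliers = cv2.findHomography(pts_thermal, pts_rgb, cv2.RANSAC, 3.0)  # robust fit

thermal = np.random.rand(200, 220).astype(np.float32)       # stand-in radiometric frame
warped = cv2.warpPerspective(thermal, H, (900, 750))         # thermal values in RGB geometry
```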

23 pages, 17838 KB  
Article
Integrating Multi-Temporal Sentinel-1/2 Vegetation Signatures with Machine Learning for Enhanced Soil Salinity Mapping Accuracy in Coastal Irrigation Zones: A Case Study of the Yellow River Delta
by Junyong Zhang, Tao Liu, Wenjie Feng, Lijing Han, Rui Gao, Fei Wang, Shuang Ma, Dongrui Han, Zhuoran Zhang, Shuai Yan, Jie Yang, Jianfei Wang and Meng Wang
Agronomy 2025, 15(10), 2292; https://doi.org/10.3390/agronomy15102292 - 27 Sep 2025
Abstract
Soil salinization poses a severe threat to agricultural sustainability in the Yellow River Delta, where conventional spectral indices are limited by vegetation interference and seasonal dynamics in coastal saline-alkali landscapes. To address this, we developed an inversion framework integrating spectral indices and vegetation temporal features, combining multi-temporal Sentinel-2 optical data (January 2024–March 2025), Sentinel-1 SAR data, and terrain covariates. The framework employs Savitzky–Golay (SG) filtering to extract vegetation temporal indices, including NDVI temporal extrema and principal component features, capturing salt stress response mechanisms beyond single-temporal spectral indices. Based on 119 field samples and Variable Importance in Projection (VIP) feature selection, three ensemble models (XGBoost, CatBoost, LightGBM) were constructed under two strategies: single spectral features versus fused spectral and vegetation temporal features. The key results demonstrate the following: (1) The LightGBM model with fused features achieved optimal validation accuracy (R² = 0.77, RMSE = 0.26 g/kg), outperforming single-feature models by 13% in R². (2) SHAP analysis identified vegetation-related factors as key predictors, revealing a negative correlation between peak biomass and salinity accumulation and showing that summer crop growth affects soil salinization in the following spring. (3) The fused strategy reduced overestimation in low-salinity zones, enhanced model robustness, and significantly improved spatial gradient continuity. This study confirms that vegetation phenological features effectively mitigate agricultural interference (e.g., tillage-induced signal noise) and achieve high-resolution salinity mapping in areas where traditional spectral indices fail. The multi-temporal integration framework provides a replicable methodology for monitoring coastal salinization under complex land cover conditions.
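Two ingredients of the framework, SG filtering of NDVI time series and a gradient-boosted regressor scored with R² and RMSE, can be sketched as follows (synthetic data and arbitrary hyperparameters, not the authors' configuration):

```python
# Minimal sketch (synthetic data, not the authors' code) of Savitzky-Golay smoothing of
# NDVI time series and a LightGBM salinity regressor evaluated with R^2 and RMSE.
import numpy as np
from scipy.signal import savgol_filter
from sklearn.metrics import r2_score, mean_squared_error
from lightgbm import LGBMRegressor

rng = np.random.default_rng(2)
ndvi_series = rng.random((119, 30))                         # 119 samples x 30 dates
ndvi_smooth = savgol_filter(ndvi_series, window_length=7, polyorder=2, axis=1)

# Temporal features (e.g., extrema) fused with other covariates; greatly simplified here.
X = np.column_stack([ndvi_smooth.max(axis=1), ndvi_smooth.min(axis=1), rng.random((119, 5))])
y = rng.random(119)                                         # soil salinity, g/kg (synthetic)

model = LGBMRegressor(n_estimators=300, learning_rate=0.05).fit(X[:90], y[:90])
pred = model.predict(X[90:])
print(r2_score(y[90:], pred), mean_squared_error(y[90:], pred) ** 0.5)  # R^2, RMSE
```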

26 pages, 12387 KB  
Article
Mapping for Larimichthys crocea Aquaculture Information with Multi-Source Remote Sensing Data Based on Segment Anything Model
by Xirui Xu, Ke Nie, Sanling Yuan, Wei Fan, Yanan Lu and Fei Wang
Fishes 2025, 10(10), 477; https://doi.org/10.3390/fishes10100477 - 24 Sep 2025
Abstract
Monitoring Larimichthys crocea aquaculture in a low-cost, efficient, and flexible manner with remote sensing data is crucial for the optimal management and sustainable development of the aquaculture industry and intelligent fisheries. An innovative automated framework, based on the Segment Anything Model (SAM) and multi-source high-resolution remote sensing image data, is proposed for high-precision aquaculture facility extraction, overcoming the problems of low efficiency and limited accuracy in traditional manual inspection methods. The research method includes systematic optimization of SAM segmentation parameters for different data sources and strict evaluation of model performance at multiple spatial resolutions. Additionally, the impact of different spectral band combinations on the segmentation effect is systematically analyzed. Experimental results demonstrate a significant correlation between resolution and accuracy, with UAV-derived imagery achieving exceptional segmentation accuracy (97.71%), followed by Jilin-1 (91.64%) and Sentinel-2 (72.93%) data. Notably, the NIR-Blue-Red band combination exhibited superior performance in delineating aquaculture infrastructure, suggesting its optimal utility for such applications. A robust and scalable solution for automatically extracting facilities is established, which offers significant insights for extending SAM's capabilities to broader remote sensing applications within marine resource assessment domains.
(This article belongs to the Section Fishery Facilities, Equipment, and Information Technology)
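Automatic mask generation with SAM, the core of the proposed framework, can be sketched with the segment_anything package; the checkpoint path, tile name, and parameter values below are assumptions for illustration, not the authors' tuned settings:

```python
# Minimal sketch (assumed checkpoint path and parameter values, not the authors' settings)
# of automatic mask generation with the Segment Anything Model on a remote sensing tile.
import cv2
from segment_anything import sam_model_registry, SamAutomaticMaskGenerator

sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h.pth")   # hypothetical local path
mask_generator = SamAutomaticMaskGenerator(
    sam,
    points_per_side=32,          # sampling density; one of the parameters tuned per sensor
    pred_iou_thresh=0.88,        # predicted mask quality threshold
    stability_score_thresh=0.92,
)

image = cv2.cvtColor(cv2.imread("uav_tile.png"), cv2.COLOR_BGR2RGB)  # hypothetical tile
masks = mask_generator.generate(image)      # list of dicts with 'segmentation', 'area', ...
print(len(masks), "candidate aquaculture-facility masks")
```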

28 pages, 14783 KB  
Article
HSSTN: A Hybrid Spectral–Structural Transformer Network for High-Fidelity Pansharpening
by Weijie Kang, Yuan Feng, Yao Ding, Hongbo Xiang, Xiaobo Liu and Yaoming Cai
Remote Sens. 2025, 17(19), 3271; https://doi.org/10.3390/rs17193271 - 23 Sep 2025
Abstract
Pansharpening fuses multispectral (MS) and panchromatic (PAN) remote sensing images to generate outputs with high spatial resolution and spectral fidelity. Nevertheless, conventional methods relying primarily on convolutional neural networks or unimodal fusion strategies frequently fail to bridge the sensor modality gap between MS and PAN data. Consequently, spectral distortion and spatial degradation often occur, limiting high-precision downstream applications. To address these issues, this work proposes a Hybrid Spectral–Structural Transformer Network (HSSTN) that enhances multi-level collaboration through comprehensive modelling of spectral–structural feature complementarity. Specifically, the HSSTN implements a three-tier fusion framework. First, an asymmetric dual-stream feature extractor employs a residual block with channel attention (RBCA) in the MS branch to strengthen spectral representation, while a Transformer architecture in the PAN branch extracts high-frequency spatial details, thereby reducing modality discrepancy at the input stage. Subsequently, a target-driven hierarchical fusion network utilises progressive crossmodal attention across scales, ranging from local textures to multi-scale structures, to enable efficient spectral–structural aggregation. Finally, a novel collaborative optimisation loss function preserves spectral integrity while enhancing structural details. Comprehensive experiments conducted on QuickBird, GaoFen-2, and WorldView-3 datasets demonstrate that HSSTN outperforms existing methods in both quantitative metrics and visual quality. Consequently, the resulting images exhibit sharper details and fewer spectral artefacts, showcasing significant advantages in high-fidelity remote sensing image fusion.
(This article belongs to the Special Issue Artificial Intelligence in Hyperspectral Remote Sensing Data Analysis)
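The RBCA component named above is a residual block with channel attention; the PyTorch sketch below shows a generic SE-style version of such a block, as an illustration rather than the authors' exact design:

```python
# Minimal sketch (an illustrative SE-style block, not the authors' exact RBCA) of a
# residual block with channel attention such as the one used in the MS branch.
import torch
import torch.nn as nn

class ResidualChannelAttentionBlock(nn.Module):
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )
        self.attn = nn.Sequential(                 # squeeze-and-excitation style gating
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1), nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1), nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feat = self.body(x)
        return x + feat * self.attn(feat)          # residual connection with channel weights

x = torch.randn(1, 32, 64, 64)                     # batch of MS feature maps
print(ResidualChannelAttentionBlock(32)(x).shape)  # torch.Size([1, 32, 64, 64])
```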

18 pages, 7743 KB  
Article
Improved Daytime Cloud Detection Algorithm in FY-4A’s Advanced Geostationary Radiation Imager
by Xiao Zhang, Song-Ying Zhao and Rui-Xuan Tang
Atmosphere 2025, 16(9), 1105; https://doi.org/10.3390/atmos16091105 - 20 Sep 2025
Abstract
Cloud detection is an indispensable step in satellite remote sensing of cloud properties and of objects under the influence of cloud occlusion. Nevertheless, interfering targets such as snow and haze pollution are easily misjudged as clouds by most current algorithms. Hence, a robust cloud detection algorithm is urgently needed, especially for regions with high latitudes or severe air pollution. This paper demonstrates that the Advanced Geostationary Radiation Imager (AGRI), the passive imager onboard the FY-4A satellite, is prone to misjudging the dense aerosols in haze pollution as clouds during the daytime, and constructs an algorithm based on the spectral information of AGRI's 14 bands with concise and fast calculation. This study adjusted the previously proposed cloud mask rectification algorithm of the Moderate-Resolution Imaging Spectroradiometer (MODIS), rectified the MODIS cloud detection result, and used it as the accurate cloud mask data. The algorithm was constructed based on adjusted Fisher discriminant analysis (AFDA) and spectral spatial variability (SSV) methods over four different underlying surfaces (land, desert, snow, and water) and two seasons (summer and winter). The algorithm divides the identification into two steps that screen, respectively, confident cloud clusters and broken clouds, which are not easy to recognize. In the first step, channels with obvious differences between cloudy and cloud-free areas were selected, and AFDA was utilized to build a weighted sum formula across the normalized spectral data of the selected bands. This step transforms the traditional dynamic-threshold test on multiple bands into a simple test of the calculated summation value. In the second step, SSV was used to capture broken clouds by calculating the standard deviation (STD) of spectra in every 3 × 3-pixel window to quantify the spectral homogeneity within a small scale. To assess the algorithm's spatial and temporal generalizability, two evaluations were conducted: one examining four key regions and another assessing three different moments on a single day in East China. The results showed that the algorithm has excellent accuracy across the four different underlying surfaces, is insusceptible to the main interferences such as haze and snow, and shows a strong detection capability for broken clouds. The algorithm can be widely applied to different regions and times of day with low computational complexity, indicating that a new method satisfying the requirements of fast and robust cloud detection can be achieved.
(This article belongs to the Section Atmospheric Techniques, Instruments, and Modeling)
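The SSV step described above reduces to a per-pixel standard deviation over 3 × 3 windows; a minimal sketch on a synthetic band (threshold value hypothetical) is:

```python
# Minimal sketch (synthetic reflectance band, not the authors' code) of the spectral
# spatial variability (SSV) test: the standard deviation within every 3x3-pixel window.
import numpy as np
from scipy.ndimage import generic_filter

band = np.random.default_rng(3).random((100, 100))           # one normalized AGRI band
ssv = generic_filter(band, np.std, size=3, mode="nearest")    # per-pixel 3x3 STD map

broken_cloud_candidates = ssv > 0.15   # hypothetical threshold flagging heterogeneous pixels
```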

13 pages, 2630 KB  
Article
Research on Polar-Axis Direct Solar Radiation Spectrum Measurement Method
by Jingrui Sun, Yangyang Zou, Lu Wang, Jian Zhang, Yu Zhang, Ke Zhang, Yang Su, Junjie Yang, Ran Zhang and Guoyu Zhang
Photonics 2025, 12(9), 931; https://doi.org/10.3390/photonics12090931 - 18 Sep 2025
Abstract
High-precision measurements of direct solar radiation spectra are crucial for the development of solar resources, climate change research, and agricultural applications. However, the current measurement systems all rely on a moving two-axis tracking system with a complex structure and many error transmission links. In response to the above problems, a polar-axis rotating solar direct radiation spectroscopic measurement method is proposed, and an overall architecture consisting of a rotating reflector and a spectroradiometric measurement system is constructed, which simplifies the system's structural form and enables year-round, full-latitude solar direct radiation spectroscopic measurements without requiring moving tracking. The paper focuses on the study of its optical system, optimizes the design of a polar-axis rotating solar direct radiation spectroscopy measurement optical system with a spectral range of 380–780 nm and a spectral resolution better than 2 nm, and carries out spectral reconstruction of the solar direct radiation spectra as well as the assessment of measurement accuracy. The results show that the point error distribution of the AM0 spectral curve ranges from −9.05% to 13.35%, and the area error distribution ranges from −0.04% to 0.09%; the point error distribution of the AM1.5G spectral curve ranges from −9.19% to 13.66%, and the area error distribution ranges from −0.03% to 0.11%. Both exhibit spatial and temporal uniformity exceeding 99.92%, ensuring excellent measurement performance throughout the year. The measurement method proposed in this study enhances the solar direct radiation spectral measurement system. Compared to the existing dual-axis moving tracking measurement method, the system composition is simplified, enabling direct solar radiation spectrum measurement at all latitudes throughout the year without the need for tracking, providing technical support for the development and application of new technologies for solar direct radiation measurement. It is expected to promote future theoretical research and technological breakthroughs in this field.
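The point error and area error reported above compare a reconstructed spectrum against a reference such as AM0; a minimal sketch of those two measures on synthetic curves (not the authors' data) is:

```python
# Minimal sketch (synthetic curves, not the authors' data) of per-wavelength point error
# and integrated area error of a reconstructed spectrum against a reference such as AM0.
import numpy as np
from scipy.integrate import trapezoid

wavelength = np.linspace(380, 780, 401)                      # nm
reference = 1.0 + 0.3 * np.sin(wavelength / 60.0)            # stand-in reference spectrum
reconstructed = reference * (1.0 + 0.02 * np.cos(wavelength / 25.0))  # stand-in result

point_error = (reconstructed - reference) / reference * 100.0          # percent, per point
area_error = (trapezoid(reconstructed, wavelength) / trapezoid(reference, wavelength) - 1) * 100

print(point_error.min(), point_error.max(), area_error)
```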

16 pages, 1616 KB  
Review
Decoding Molecular Network Dynamics in Cells: Advances in Multiplexed Live Imaging of Fluorescent Biosensors
by Qiaowen Chen, Yichu Xu, Jhen-Wei Wu, Jr-Ming Yang and Chuan-Hsiang Huang
Biosensors 2025, 15(9), 614; https://doi.org/10.3390/bios15090614 - 17 Sep 2025
Abstract
Genetically encoded fluorescent protein (FP)-based biosensors have revolutionized cell biology research by enabling real-time monitoring of molecular activities in live cells with exceptional spatial and temporal resolution. Multiplexed biosensing advances this capability by allowing the simultaneous tracking of multiple signaling pathways to uncover network interactions and dynamic coordination. However, challenges in spectral overlap limit broader implementation. Innovative strategies have been devised to address these challenges, including spectral separation through FP palette expansion and novel biosensor designs, temporal differentiation using photochromic or reversibly switching FPs, and spatial segregation of biosensors to specific subcellular regions or through cell barcoding techniques. Combining multiplexed biosensors with artificial intelligence-driven analysis holds great potential for uncovering cellular decision-making processes. Continued innovation in this field will deepen our understanding of molecular networks in cells, with implications for both fundamental biology and therapeutic development.

21 pages, 37484 KB  
Article
Reconstructing Hyperspectral Images from RGB Images by Multi-Scale Spectral–Spatial Sequence Learning
by Wenjing Chen, Lang Liu and Rong Gao
Entropy 2025, 27(9), 959; https://doi.org/10.3390/e27090959 - 15 Sep 2025
Abstract
With rapid advancements in transformers, the reconstruction of hyperspectral images from RGB images, also known as spectral super-resolution (SSR), has made significant breakthroughs. However, existing transformer-based methods often struggle to balance computational efficiency with long-range receptive fields. Recently, Mamba has demonstrated linear complexity in modeling long-range dependencies and shown broad applicability in vision tasks. This paper proposes a multi-scale spectral–spatial sequence learning method, named MSS-Mamba, for reconstructing hyperspectral images from RGB images. First, we introduce a continuous spectral–spatial scan (CS3) mechanism to improve cross-dimensional feature extraction of the foundational Mamba model. Second, we propose a sequence tokenization strategy that generates multi-scale-aware sequences to overcome Mamba's limitations in hierarchically learning multi-scale information. Specifically, we design the multi-scale information fusion (MIF) module, which tokenizes input sequences before feeding them into Mamba. The MIF employs a dual-branch architecture to process global and local information separately, dynamically fusing features through an adaptive router that generates weighting coefficients. This produces feature maps that contain both global contextual information and local details, ultimately reconstructing a high-fidelity hyperspectral image. Experimental results on the ARAD_1k, CAVE, and grss_dfc_2018 datasets demonstrate the performance of MSS-Mamba.
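The adaptive-router fusion idea in the MIF module, weighting a global and a local branch before combining them, can be illustrated with a generic gated fusion in PyTorch; this is a simplified stand-in, not the MIF module itself:

```python
# Minimal sketch (an illustrative gated fusion, not the MIF module itself) of an adaptive
# router that produces weighting coefficients for global- and local-branch features.
import torch
import torch.nn as nn

class AdaptiveRouterFusion(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.router = nn.Sequential(               # produces two weighting coefficients
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(2 * channels, 2), nn.Softmax(dim=1),
        )

    def forward(self, global_feat: torch.Tensor, local_feat: torch.Tensor) -> torch.Tensor:
        w = self.router(torch.cat([global_feat, local_feat], dim=1))    # (B, 2)
        w_g, w_l = w[:, :1, None, None], w[:, 1:, None, None]           # broadcastable weights
        return w_g * global_feat + w_l * local_feat                     # fused feature map

g = torch.randn(2, 64, 32, 32)                     # global-branch features
l = torch.randn(2, 64, 32, 32)                     # local-branch features
print(AdaptiveRouterFusion(64)(g, l).shape)        # torch.Size([2, 64, 32, 32])
```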

29 pages, 19475 KB  
Article
Fine-Scale Grassland Classification Using UAV-Based Multi-Sensor Image Fusion and Deep Learning
by Zhongquan Cai, Changji Wen, Lun Bao, Hongyuan Ma, Zhuoran Yan, Jiaxuan Li, Xiaohong Gao and Lingxue Yu
Remote Sens. 2025, 17(18), 3190; https://doi.org/10.3390/rs17183190 - 15 Sep 2025
Abstract
Grassland classification via remote sensing is essential for ecosystem monitoring and precision management, yet conventional satellite-based approaches are fundamentally constrained by coarse spatial resolution. To overcome this limitation, we harness high-resolution UAV multi-sensor data, integrating multi-scale image fusion with deep learning to achieve fine-scale grassland classification that satellites cannot provide. First, four categories of UAV data, including RGB, multispectral, thermal infrared, and LiDAR point cloud, were collected, and a fused image tensor consisting of 10 channels (NDVI, VCI, CHM, etc.) was constructed through orthorectification and resampling. For feature-level fusion, four deep fusion networks were designed. Among them, the MultiScale Pyramid Fusion Network, utilizing a pyramid pooling module, effectively integrated spectral and structural features, achieving optimal performance in all six image fusion evaluation metrics, including information entropy (6.84), spatial frequency (15.56), and mean gradient (12.54). Subsequently, training and validation datasets were constructed by integrating visual interpretation samples. Four backbone networks, including UNet++, DeepLabV3+, PSPNet, and FPN, were employed, and attention modules (SE, ECA, and CBAM) were introduced separately to form 12 model combinations. Results indicated that the UNet++ network combined with the SE attention module achieved the best segmentation performance on the validation set, with a mean Intersection over Union (mIoU) of 77.68%, overall accuracy (OA) of 86.98%, F1-score of 81.48%, and Kappa coefficient of 0.82. In the categories of Leymus chinensis and Puccinellia distans, producer's accuracy (PA)/user's accuracy (UA) reached 86.46%/82.30% and 82.40%/77.68%, respectively. Whole-image prediction validated the model's coherent identification capability for patch boundaries. In conclusion, this study provides a systematic approach for integrating multi-source UAV remote sensing data and intelligent grassland interpretation, offering technical support for grassland ecological monitoring and resource assessment.
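The reported segmentation metrics (OA, mIoU, Kappa) all follow from a confusion matrix; a minimal sketch with synthetic labels, not the authors' evaluation code, is:

```python
# Minimal sketch (synthetic labels, not the authors' evaluation code) of the reported
# segmentation metrics: overall accuracy, mean IoU, and Cohen's kappa from a confusion matrix.
import numpy as np
from sklearn.metrics import confusion_matrix, cohen_kappa_score

rng = np.random.default_rng(4)
y_true = rng.integers(0, 4, 10_000)          # 4 hypothetical grassland classes
y_pred = np.where(rng.random(10_000) < 0.85, y_true, rng.integers(0, 4, 10_000))

cm = confusion_matrix(y_true, y_pred)
oa = np.trace(cm) / cm.sum()                                   # overall accuracy
iou = np.diag(cm) / (cm.sum(0) + cm.sum(1) - np.diag(cm))      # per-class IoU
print(f"OA={oa:.3f}  mIoU={iou.mean():.3f}  kappa={cohen_kappa_score(y_true, y_pred):.3f}")
```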

19 pages, 4303 KB  
Article
Design and Series Product Development of a Space-Based Dyson Spectrometer for Ocean Applications
by Xinyin Jia, Siyuan Li, Xianqiang He, Zhaohui Zhang, Pan Hou, Xin Jiang and Jia Liu
Photonics 2025, 12(9), 918; https://doi.org/10.3390/photonics12090918 - 14 Sep 2025
Abstract
An advanced Dyson spectrometer is proposed that redesigns the Dyson prism to make the entire system coaxial and easy to implement for each subassembly, thus greatly facilitating optical design, optical fabrication, and system alignment. The design concept and fabrication methods, as well as the results of imaging evaluations of the proposed spectrometer, are described in detail. At present, the advanced Dyson spectrometer has been in orbit for more than a year, serving smart agriculture and marine applications. The advanced imaging spectrometer achieves high resolution in both the spectral and spatial directions and low spectral distortion at a high numerical aperture in the working waveband. On the basis of the above research, we have developed three other imaging spectrometers with different performance indicators, including a space-based instrument, an airborne instrument, and a ground-based instrument, thus verifying the maturity and versatility of the advanced Dyson spectrometer technology.