Search Results (12,072)

Search Parameters:
Keywords = remote-sensing imaging

27 pages, 19340 KB  
Article
Integrating Surface Deformation and Ecological Indicators for Mining Environment Assessment: A Novel MDECI Approach
by Lei Zhang, Qiaomei Su, Bin Zhang, Hongwen Xue, Zhengkang Zuo, Yanpeng Li and He Zheng
Remote Sens. 2026, 18(9), 1272; https://doi.org/10.3390/rs18091272 - 22 Apr 2026
Abstract
Surface subsidence induced by underground coal mining is a primary driver of ecological degradation. The traditional Remote Sensing Ecological Index (RSEI), however, struggles to capture surface deformation constraints and vegetation response lags. To address this, we developed a Mining Deformation–Ecology Coupling Index (MDECI). This index integrates Interferometric Synthetic Aperture Radar (InSAR)-monitored surface stability with multi-spectral indicators via Principal Component Analysis (PCA). We applied this method to the Datong Coalfield, China, using 231 Sentinel-1A SAR scenes and 8 Landsat images (2017–2024) to validate the effectiveness of the index. Meanwhile, we systematically analyzed non-linear response mechanisms, the Ecological Turning Point (ETP), and spatial clustering characteristics. The results demonstrate the following: (1) InSAR and MDECI effectively identified patterns of surface subsidence and ecological decline. Subsidence centers expanded to a maximum of −2085 mm, causing the mean MDECI in these areas to drop to 0.185 (<−1800 mm). This represents a 57.4% decrease relative to the regional average (0.434). (2) MDECI outperformed traditional models with a stable Average Correlation Coefficient (ACC) (0.63–0.75) and high cross-correlation coefficients with RSEI (0.906) and the Mine-specific Eco-environment Index (MSEEI) (0.931). During the 2018 drought, MDECI maintained a robust ACC of 0.628 while RSEI dropped to 0.482. (3) Multi-scale analysis revealed a unimodal MDECI response with an ETP at −100 mm. Initial ‘micro-disturbance gain’ (0.371 to 0.471) is followed by a progressive decline to a minimum of 0.185 under severe deformation. (4) Local Indicators of Spatial Association (LISA) spatial clustering characterized the distribution patterns of ecological damage and localised high-maintenance areas. High–Low damaged areas accounted for 5.09%, while High–High high-maintenance areas reached 9.00%. 
The scale of High–High areas was approximately 1.77 times that of the damaged areas. The MDECI addresses the deficiencies of traditional indices in high-disturbance areas and isolates the impact of mining on the ecology, providing a quantitative basis for risk identification and differentiated restoration. Full article
(This article belongs to the Section Remote Sensing in Geology, Geomorphology and Hydrology)
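The MDECI construction above couples InSAR-monitored surface stability with multispectral indicators via Principal Component Analysis. As a rough, generic illustration of a PCA-fused index (the function name, the min–max rescaling, and the choice of the first component are assumptions for this sketch, not the authors' implementation):

```python
import numpy as np

def coupling_index(indicators):
    """Fuse per-pixel indicators into a single index via PCA.

    indicators: (n_pixels, n_features) array, e.g. columns for InSAR
    deformation rate, greenness, wetness, dryness, and heat.
    Returns values rescaled to [0, 1].
    """
    X = np.asarray(indicators, dtype=float)
    # Standardize each indicator so no single scale dominates the PCA.
    X = (X - X.mean(axis=0)) / X.std(axis=0)
    # Eigendecomposition of the covariance matrix; np.linalg.eigh
    # returns eigenvalues in ascending order, so the last eigenvector
    # is the first principal component.
    cov = np.cov(X, rowvar=False)
    _, eigvecs = np.linalg.eigh(cov)
    pc1 = X @ eigvecs[:, -1]
    # Min-max rescale to [0, 1].
    return (pc1 - pc1.min()) / (pc1.max() - pc1.min())
```

Without the z-scoring step, indicators with larger numeric ranges (e.g. deformation in millimeters) would dominate the first component.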

20 pages, 3665 KB  
Article
SDS-Former: A Transformer-Based Method for Semantic Segmentation of Arid Land Remote Sensing Imagery
by Yujie Du, Junfu Fan, Kuan Li and Yongrui Li
Algorithms 2026, 19(5), 325; https://doi.org/10.3390/a19050325 - 22 Apr 2026
Abstract
Semantic segmentation of land use and land cover (LULC) in arid regions remains challenging due to severe class imbalance, fragmented spatial distributions, and high spectral similarity among different land cover types. These characteristics often lead to an information bottleneck in deep segmentation networks and hinder the extraction of discriminative semantic representations. To address these issues, we propose SDS-Former, a lightweight semantic segmentation network specifically designed for remote sensing imagery in arid environments. SDS-Former incorporates an SSM-inspired Lightweight Semantic Enhancement (LSE) module to strengthen contextual modeling and alleviate the loss of discriminative information in deep features. To tackle scale variations, a Dynamic Selective Feature Fusion (DSFF) module is employed in the decoder to adaptively weight and fuse high-level semantics with low-level spatial details. Furthermore, a Feature Refinement Head (FRH) is introduced to enhance boundary localization and improve the recognition of small-scale and sparsely distributed land cover objects. Extensive ablation and comparative experiments demonstrate that SDS-Former consistently outperforms representative semantic segmentation methods across multiple evaluation metrics. On the Tarim Basin dataset, the proposed network achieves a mean Intersection over Union (mIoU) of 82.51% and an F1 score of 86.47%, indicating its superior effectiveness and robustness. Qualitative results further verify that SDS-Former exhibits clear advantages in distinguishing spectrally similar land cover types and preserving the spatial continuity of ground objects in complex arid-region scenes. Full article
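Several results in this listing are reported as mean Intersection over Union (mIoU). For reference, a minimal per-class IoU averaging sketch (the function name and the choice to skip classes absent from both maps are assumptions; published benchmarks may average differently):

```python
import numpy as np

def mean_iou(pred, target, num_classes):
    """Mean Intersection over Union over classes, from flat label arrays."""
    ious = []
    for c in range(num_classes):
        inter = np.sum((pred == c) & (target == c))
        union = np.sum((pred == c) | (target == c))
        if union:  # skip classes that appear in neither map
            ious.append(inter / union)
    return float(np.mean(ious))
```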
18 pages, 1044 KB  
Article
Optical Design and Analysis of a Conical Scan-Type Slanted Off-Axis Camera
by Yiting Wang, Xi He, Zongqiang Fu, Rui Duan and Xiubin Yang
Photonics 2026, 13(4), 400; https://doi.org/10.3390/photonics13040400 - 21 Apr 2026
Abstract
Compared with the conventional push-broom imaging mode, conical scanning extends the imaging swath through rotational scanning and is suitable for high-resolution, wide-swath remote sensing. To achieve continuous full-coverage imaging, the camera must be mounted at a certain tilt angle and employ an off-axis optical system with a sufficiently large field of view (FOV). However, the tilted installation causes nonuniform irradiance and increased off-axis distortion, while wide-field off-axis imaging also introduces radiometric consistency problems in focal-plane multi-detector stitching. To address these issues, this study investigates the optical design of a tilted off-axis camera for conical-scan imaging. Under the constraints of full coverage and swath requirements, key optical parameters were jointly determined, and a lightweight wide-coverage off-axis three-mirror system was designed, optimized, and evaluated. The final system has a focal length of 1545 mm, an F-number of 8.4, and a full FOV of 23.4° × 11.7°. The modulation transfer function is greater than 0.41 at the Nyquist frequency, and the maximum distortion is less than 2.5446%. In addition, for the focal-plane optical stitching structure, the coupled effects of local structural vignetting and global geometric vignetting induced by the tilted installation were analyzed. The results show that the gray-level difference in the adjacent detector overlap regions is only 0.31–0.53 digital numbers (DN), and the full focal plane shows a smooth gray-level attenuation rate of 5.39–6.77%. These results indicate that vignetting has no significant effect on focal-plane stitching. The proposed camera is well suited for conical-scan imaging. Full article
23 pages, 19480 KB  
Article
A Multi-Spatial Scale Integration Framework of UAV Image Features and Machine Learning for Predicting Root-Zone Soil Electrical Conductivity in the Arid Oasis Cotton Fields of Xinjiang
by Chenyu Li, Xinjun Wang, Qingfu Liang, Wenli Dong, Wanzhi Zhou, Yu Huang, Rui Qi, Shenao Wang and Jiandong Sheng
Agriculture 2026, 16(8), 913; https://doi.org/10.3390/agriculture16080913 - 21 Apr 2026
Abstract
Soil salinization is one of the primary forms of land degradation in arid and semi-arid regions, severely constraining agricultural production in Xinjiang’s oases. Unmanned aerial vehicle (UAV) imagery provides an effective means for precise monitoring of soil salinization, with image spatial resolution being a key factor affecting assessment accuracy. However, traditional single-scale remote sensing monitoring methods rely solely on spectral and textural features at the leaf scale (0.1 m resolution), neglecting the contribution of coarser-scale features to soil salinity estimation: 0.5–1 m resolution corresponds to the single-row canopy scale, while 2 m corresponds to the single-membrane-covered area scale (a 6-row crop canopy). Therefore, this study developed a multi-scale UAV imagery and machine learning framework to enhance soil electrical conductivity (EC) prediction accuracy. The study focuses on oasis cotton fields in Shaya County, Xinjiang. Based on UAV multispectral imagery, we resampled the data to eight spatial resolutions: 0.1, 0.5, 1, 1.5, 2, 2.5, 5, and 10 m. For each resolution, we calculated 21 spectral indices and 48 texture features to construct a feature set. At both single and multiple spatial scales, spectral indices, texture features, and their spectral–texture fusion features were used as model inputs. Combining these with Backpropagation Neural Network (BPNN), Random Forest Regression (RFR), and Extreme Gradient Boosting (XGBoost) models, a soil EC estimation framework was developed. We compared the three feature combination schemes for estimating soil EC from single-scale UAV imagery, compared the estimation accuracy of multi-scale against single-scale image features, and determined the optimal combination of spatial scales and feature types. Results indicate that, in single-scale analysis, combining spectral and texture features yields the highest estimation accuracy for cotton field soil EC, and that multi-scale image features outperform single-scale image features. Among the combinations tested, integrating 0.5 m spectral indices (S1, EVI, DVI, NDVI, Int1, SI) with 0.1 m texture features (RE1_ent, R_cor, RE1_cor, G_hom, B_mea, R_con, NIR_con) gave the XGBoost model the optimal prediction accuracy (R2 = 0.693, RMSE = 0.515 dS/m), outperforming methods using multiple features at a single scale. This study thus developed a multi-scale image feature fusion technique for machine learning modeling, describing the image characteristics of soil EC at different geographical scales and providing a reference approach for the rapid and accurate prediction of soil EC in arid regions. Full article
(This article belongs to the Section Artificial Intelligence and Digital Agriculture)
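The resampling step above (deriving 0.5 m through 10 m datasets from 0.1 m imagery) can be approximated by block averaging. This is a generic sketch under the assumption of simple mean aggregation, which may differ from the resampling method the authors actually used:

```python
import numpy as np

def block_average(img, factor):
    """Downsample a 2-D raster by averaging non-overlapping
    factor x factor blocks, e.g. 0.1 m pixels -> 0.5 m with factor=5."""
    h, w = img.shape
    # Trim edges so the raster divides evenly into blocks.
    h_trim, w_trim = h - h % factor, w - w % factor
    blocks = img[:h_trim, :w_trim].reshape(
        h_trim // factor, factor, w_trim // factor, factor)
    # Average over the two block axes.
    return blocks.mean(axis=(1, 3))
```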

31 pages, 4260 KB  
Article
Geographical Zoning-Based Classification of Agricultural Land Use in Hilly and Mountainous Areas Using High-Resolution Remote Sensing Images
by Junyao Zhang, Xiaomei Yang, Zhihua Wang, Xiaoliang Liu, Haiyan Wu, Xiaoqiong Cai and Shifeng Fu
Remote Sens. 2026, 18(8), 1259; https://doi.org/10.3390/rs18081259 - 21 Apr 2026
Abstract
Accurately mapping agricultural land use in fragmented hilly and mountainous areas is crucial for resource management but is severely challenged by spatial heterogeneity. While high-resolution (HR) images excel at delineating fine parcel boundaries, their limited spectral and temporal information often leads to spectral confusion among diverse agricultural types. To address this limitation, this study proposes a novel spatiotemporal feature-driven geographical zoning method integrating vegetation phenology, topography, and human activity. This zoning strategy decouples the complex global classification task into relatively simple local problems, providing explicit geoscientific constraints for subsequent classification. The proposed method was validated by classifying plain open-field croplands, sloping croplands, terraces, and greenhouses in the hilly and mountainous areas of Beijing using 2 m resolution satellite images. Compared to traditional global classification methods, the proposed zoning-based method increased the overall accuracy from 84.81% to 90.81%, the Kappa coefficient from 0.74 to 0.85, and the Intersection over Union (IoU) from 77.85% to 90.85%. The advantages of geographic zoning were particularly evident in mitigating spatial heterogeneity and enhancing boundary precision. These findings indicate that integrating dynamic geographical zoning as a priori knowledge successfully bridges the gap between HR spatial details and environmental contexts, offering a robust solution for mapping fragmented agricultural landscapes. Full article
23 pages, 7993 KB  
Article
A Pyramid-Enhanced Swin Transformer for Robust Hyperspectral–Multispectral Image Fusion and Super-Resolution
by Yu Lu, Lin Hu, Jiankai Hu, Shu Gan, Xiping Yuan, Wang Li and Hailong Zhao
Remote Sens. 2026, 18(8), 1255; https://doi.org/10.3390/rs18081255 - 21 Apr 2026
Abstract
Due to the inherent limitations of both hyperspectral and multispectral imagery, balancing high spatial resolution with high spectral fidelity has become one of the fundamental challenges in remote sensing image processing. A prevailing strategy is to fuse these two types of data to reconstruct images that jointly preserve their respective advantages. However, existing reconstruction approaches still suffer from complex coupling between spatial and spectral information, and limited feature extraction capabilities. To address these issues, this study proposes PMSwinNet (Pyramid Multi-scale Swin Transformer Network), a novel architecture that integrates pyramid-based feature enhancement with Transformer mechanisms. The PMSwinNet incorporates multi-scale pyramid feature fusion and window-based self-attention. Through a progressive multi-stage design and three complementary components—feature extraction and reconstruction modules—the Transformer branch leverages window partitioning and shifting operations to capture long-range spatial dependencies and local contextual cues, while the pyramid features extract both global and local information across multiple spatial scales. In addition, a high-frequency branch is introduced, which employs lightweight convolutions to enhance edges, textures, and other high-frequency details, effectively suppressing blurring and artifacts during reconstruction. Experimental evaluations on multiple public hyperspectral datasets demonstrate that the PMSwinNet outperforms state-of-the-art methods, particularly in terms of detail preservation, spectral distortion suppression, and robustness. Full article
29 pages, 14926 KB  
Article
Semi-Supervised Remote Sensing Image Semantic Segmentation Based on Multi-Scale Consistency and Cross-Attention
by Yuan Cao, Lin Chang, Jiahao Sun, Xinyu Li, Jing Liu, Xin Li and Daofang Liu
Remote Sens. 2026, 18(8), 1256; https://doi.org/10.3390/rs18081256 - 21 Apr 2026
Abstract
Remote sensing image (RSI) semantic segmentation is challenged by high inter-class spectral similarity, significant intra-class scale variation, and limited availability of labeled data. Although semi-supervised learning has reduced the dependency on large-scale annotations, existing approaches still suffer from degraded boundary precision and incomplete geometric structures in complex remote sensing scenes. To address these issues, this paper proposes a Multi-scale Consistency and Cross-Attention Teacher–Student Network (MSCA-TSN) for semi-supervised RSI semantic segmentation. Specifically, an Adaptive Multi-scale Uncertainty Consistency module (AMUC) is introduced to model feature reliability across hierarchical levels. By leveraging Monte Carlo Dropout to estimate feature uncertainty and employing adaptive weighting for multi-scale consistency learning, AMUC effectively suppresses unreliable supervision and improves segmentation robustness under significant scale variations. Furthermore, a Cross-Teacher–Student Cross-Attention Module (CCAM) is designed to enhance cross-network feature interaction. In CCAM, student features act as queries while teacher features serve as keys and values to construct cross-attention, enabling the student network to reconstruct more discriminative feature representations and reduce confusion among visually similar land-cover categories. Extensive experiments are conducted on the LoveDA and ISPRS Potsdam benchmarks under both 5% and 10% labeling ratios. On the LoveDA dataset, MSCA-TSN achieves mIoU scores of 51.05% and 52.41% under 5% and 10% labeled data, respectively, outperforming several state-of-the-art semi-supervised methods. On the ISPRS Potsdam dataset, the proposed method further reaches 75.35% and 76.34% mIoU under the same settings. Ablation and parameter sensitivity analyses further verify the effectiveness and robustness of the proposed AMUC and CCAM modules. Full article
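In CCAM, student features act as queries while teacher features serve as keys and values. A minimal scaled dot-product cross-attention sketch of that idea follows (single head, no learned query/key/value projections; the real module almost certainly adds projections and operates on spatial feature maps):

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(student, teacher):
    """student: (n, d) queries; teacher: (m, d) keys and values.
    Returns (n, d) student features reconstructed from teacher features."""
    d = student.shape[-1]
    scores = student @ teacher.T / np.sqrt(d)   # (n, m) similarities
    weights = softmax(scores, axis=-1)          # attention over teacher tokens
    return weights @ teacher                    # weighted sum of values
```

Because each attention row sums to 1, the output is a convex combination of teacher features, which is how the student can "borrow" more discriminative representations.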
28 pages, 7089 KB  
Article
Multi-Scale Context-Aware Network Implementation for Efficient Image Semantic Segmentation
by Yi Yang and Chong Guo
Appl. Sci. 2026, 16(8), 4033; https://doi.org/10.3390/app16084033 - 21 Apr 2026
Abstract
Image semantic segmentation is essential in autonomous driving, medical imaging, and remote sensing. While convolutional neural networks (CNNs) excel at local feature extraction and spatial structure modeling, their limited receptive fields restrict the capture of long-range dependencies and global semantic consistency. Transformers provide strong global modeling through self-attention but often lack local inductive bias and show weaker generalization on small datasets. To address these limitations, this paper proposes a Multi-Scale Context-aware Network (MSC-Net) for image semantic segmentation. Under an encoder–decoder framework, MSC-Net combines a convolutional backbone with a Multi-Scale Self-Attention module to integrate the complementary strengths of CNNs and attention mechanisms. The backbone extracts local texture and structural information and can adopt architectures such as MobileNet, Xception, DRN, and ResNet, while the attention module captures long-range dependencies and multi-scale contextual information. This design improves cross-layer feature collaboration, multi-scale feature fusion, and boundary quality while maintaining computational efficiency. Experimental results show that MSC-Net achieves 38.8% mIoU and 98.4% ACC under comparable computational settings. Compared with SegFormer and DeepLabV3+, the model improves mIoU by approximately +3.0 and +3.3 percentage points, respectively, while reducing FLOPs and parameter size. Full article

21 pages, 4869 KB  
Article
Joint Adjustment Image Stabilization Method Based on Trajectories of Maritime Multi-Target Detection and Tracking
by Fangjian Liu, Yuan Li and Mi Wang
Appl. Sci. 2026, 16(8), 4029; https://doi.org/10.3390/app16084029 - 21 Apr 2026
Abstract
Existing technologies can achieve relative geometric correction and stabilization of geostationary satellite image sequences through fixed land scene matching or homonymous point adjustment. However, these methods heavily rely on fixed land areas, rendering them completely ineffective in vast ocean regions with only ship targets. Additionally, the trajectories of ship targets after processing still exhibit noticeable jitter, hindering motion information analysis. To address these issues, this paper proposes a joint image adjustment and stabilization method based on multi-target trajectories in marine environments: (1) An optimized target detection algorithm based on a multi-scale heterogeneous convolution module is introduced, which extracts background and target features through convolutions of different scales, enabling accurate detection and tracking of weak small targets in the image sequence frame by frame. (2) Curve fitting is performed on the detected positions of the same ship across multiple frames to simulate its motion trajectory under stabilized conditions. Combined with the prior assumption of uniform motion, an equal-division strategy is adopted to determine the corrected positions of the target in the image sequence. (3) The deviation correction values of multiple targets within the same frame are obtained, and based on the principle of intra-frame deviation consistency, precise image stabilization is achieved under multi-target constraints. Experiments based on Gaofen-4 satellite image sequences demonstrate that this method reduces the average position deviation of ship targets in the original images from 8.5 pixels (425 m) to 3.4 pixels (170 m), a decrease of approximately 59.41%, effectively improving the relative geometric accuracy of the image sequence and significantly eliminating target trajectory jitter. Full article
(This article belongs to the Section Earth Sciences)
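Step (2) above, fitting each ship's trajectory under a uniform-motion prior, can be illustrated with an independent linear least-squares fit of x(t) and y(t). The degree-1 fit and per-frame evaluation here are assumptions standing in for the paper's exact equal-division strategy:

```python
import numpy as np

def smooth_trajectory(positions):
    """positions: (n_frames, 2) detected (x, y) of one ship per frame.
    Fit a straight line under a uniform-motion assumption and return
    corrected positions evaluated at each frame index."""
    pos = np.asarray(positions, dtype=float)
    t = np.arange(len(pos))
    # Independent least-squares linear fits of x(t) and y(t).
    fx = np.polyfit(t, pos[:, 0], 1)
    fy = np.polyfit(t, pos[:, 1], 1)
    return np.column_stack([np.polyval(fx, t), np.polyval(fy, t)])
```

The residuals between detected and fitted positions would then play the role of the per-frame deviation correction values used in step (3).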

24 pages, 15099 KB  
Article
Weakly Supervised Oriented Object Detection in Remote Sensing via Geometry-Aware Enhancement Network
by Yufei Zhu, Jianzhi Hong and Taoyang Wang
Remote Sens. 2026, 18(8), 1253; https://doi.org/10.3390/rs18081253 - 21 Apr 2026
Abstract
In remote sensing image oriented object detection tasks, weakly supervised learning methods based on horizontal bounding boxes have attracted much attention due to their lower annotation costs compared to fully supervised methods. However, remote sensing images, characterized by complex backgrounds, exhibit a wide range of target scales and diverse geometric characteristics across target categories. Existing methods exhibit inadequate exploitation of background and angular information under weak supervision, resulting in compromised perception of dense and high-aspect-ratio targets. Neglecting the imbalance in angle estimation samples further leads to excessively low detection accuracy for few-shot categories. To address the aforementioned issues, this paper proposes a Geometry-Aware Enhancement Network (WSOOD-GAEN) for weakly supervised oriented object detection tasks. First, in the backbone network stage, a channel-space deformable attention module (DAE-ResNet) was constructed. Through deformable sampling and screening of key regions, feature extraction has both morphological adaptability to complex shapes and semantic discriminability of key features in complex backgrounds. Secondly, in the feature pyramid stage, an Angle-Guided Feature Pyramid Network (AG-FPN) is proposed. This module dynamically applies rotation transformation to the sampling offsets of deformable convolutions, thereby enhancing the feature representation of objects with different orientations and scales. Furthermore, an adaptive geometric perception loss (AGL) was designed. Based on the geometric characteristics of different categories, it automatically learns differentiated rotation and flip consistency weights, thereby improving the prediction accuracy of small sample categories. Experiments on the DOTA-v1.0, HRSC, and RSAR datasets validate our approach. 
Specifically, under the AP75 evaluation metric, the proposed method outperforms existing weakly supervised methods by 1.51%, 9.86%, and 3.28%, respectively. Full article

26 pages, 31446 KB  
Article
A Training-Free Paradigm for Data-Scarce Maritime Scene Classification Using Vision-Language Models
by Jiabao Wu, Yujie Chen, Wentao Chen, Yicheng Lai, Junjun Li, Xuhang Chen and Wangyu Wu
Sensors 2026, 26(8), 2549; https://doi.org/10.3390/s26082549 - 21 Apr 2026
Abstract
Maritime Domain Awareness (MDA) relies heavily on data acquired from high-resolution optical spaceborne sensors; however, processing this massive quantity of sensor data via traditional supervised deep learning is severely bottlenecked by its dependency on exhaustively annotated datasets. Under extreme data scarcity, conventional architectures suffer severe performance degradation, rendering them impractical for time-critical, zero-day deployments. To overcome this barrier, we propose a training-free inference paradigm that leverages the extensive pre-trained knowledge of Large Vision-Language Models (VLMs). Specifically, we introduce a Domain Knowledge-Enhanced In-Context Learning (DK-ICL) framework coupled with a Macro-Topological Chain-of-Thought (MT-CoT) strategy. This approach bridges the perspective gap between natural images and top–down optical sensor imagery by translating expert remote sensing heuristics into a strict, step-by-step reasoning pipeline. Extensive evaluations demonstrate the substantial efficacy of this framework. Armed with merely 4 visual exemplars per category as in-context triggers, our MT-CoT augmented VLMs outperform traditional models trained under identical scarcity by over 38% in F1-score. Crucially, real-world case studies confirm that this zero-gradient approach maintains robust generalization on unannotated, out-of-distribution coastal clutters, achieving performance parity with data-heavy networks trained on 50 times the data volume. By substituting massive human annotation and GPU optimization with scalable logical deduction, this paradigm establishes a resource-efficient foundation for next-generation intelligent maritime sensing networks. Full article

35 pages, 4414 KB  
Article
Superpixel-Based Deep Feature Analysis Coupled with Dense CRF for Land Use Change Detection Using High-Resolution Remote Sensing Images
by Jinqi Gong, Tie Wang, Zongchen Wang and Junyi Zhou
Remote Sens. 2026, 18(8), 1245; https://doi.org/10.3390/rs18081245 - 20 Apr 2026
Abstract
Land use change detection (LUCD) serves as a crucial technical cornerstone for natural resource management and ecological environment monitoring, playing an indispensable role in advancing the modernization of national governance capacities. Nonetheless, severe interference from radiometric variations on feature representation readily induces spurious changes and thus a high false alarm rate. Additionally, the challenge of balancing discriminative feature extraction and fine-grained contextual modeling leads to fragmented change regions and missed detection. To address these issues and eliminate the reliance on annotated samples, a novel framework is proposed for unsupervised LUCD, integrating superpixel-based deep feature analysis with a dense conditional random field (CRF). Firstly, relative radiometric correction and band-wise maximum stacking fusion are performed on the bi-temporal images. A simple non-iterative clustering (SNIC) algorithm is adopted to generate homogeneous superpixels with cross-temporal consistency. Then, a deep feature coupling mining mechanism is introduced to implement spatial–spectral feature extraction and in-depth parsing of invariant semantic information. Meanwhile, the difference confidence map based on dual features is constructed using superpixel-level discriminant vectors to enhance the separability. Finally, leveraging homogeneous units with spatial correspondence, a task-specific redesign of a global optimization model is established to achieve the precise extraction of change regions, which incorporates difference confidence, spatial adjacency relationship, and cross-temporal feature similarity into the dense CRF. The experimental results demonstrate that the proposed method achieves an average overall accuracy of over 90% across all datasets with excellent comprehensive performance, striking a well-balanced trade-off in practical applicability. 
It can effectively suppress salt-and-pepper noise, significantly improve the recall rate of change regions (maintaining at approximately 90%), and exhibit favorable superiority and robustness in complex land cover scenarios. Full article
24 pages, 4858 KB  
Article
Reconstructing Shallow River Bathymetry Through Sequence-Based Modeling Approach
by Modestas Butnorius, Timas Akelis, Matas Vaitkevičius, Dominykas Matulis, Andrius Kriščiūnas, Vytautas Akstinas and Rimantas Barauskas
Water 2026, 18(8), 975; https://doi.org/10.3390/w18080975 - 20 Apr 2026
Abstract
Hydrological monitoring is crucial for protecting aquatic ecosystems, especially downstream of hydropower plants, where water levels can change suddenly and cause the degradation of instream habitats. Many traditional methods are used to monitor water levels and river bathymetry, but most of them rely on in situ measurements. Drone-based remote sensing has received more attention in recent years, with the data in turn processed using CNNs. In this paper, we propose a new sequence-based method that uses multiple frames to expand the available context and compare it to existing methods, such as Lyzenga, Stumpf, CNN, and SfM. The best-performing models in this study are SfM and CNN, with the former being more accurate on rivers with clean riverbeds and the latter being the most consistent. The sequence-based model shows promise and even outperforms CNN, in terms of MAE, on rivers where the same location is mapped across multiple views, achieving the most accurate results across different images. This shows that utilizing multiple views to increase the available context can improve the accuracy of riverine depth estimation based on multispectral visual information. Full article
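Of the baselines compared above, the Stumpf model is the simplest to reproduce: depth is regressed linearly against the log-ratio of two bands, depth ≈ m₁ · ln(n·R_blue) / ln(n·R_green) − m₀. A minimal NumPy sketch of this baseline (not the paper's sequence-based method; function names and the constant n are illustrative):

```python
import numpy as np

def stumpf_ratio(r_blue, r_green, n=1000.0):
    """Stumpf log-ratio predictor: ln(n * R_blue) / ln(n * R_green)."""
    return np.log(n * r_blue) / np.log(n * r_green)

def fit_stumpf(r_blue, r_green, depth):
    """Least-squares fit of depth = m1 * ratio - m0 against in situ depths."""
    x = stumpf_ratio(r_blue, r_green)
    A = np.column_stack([x, -np.ones_like(x)])  # columns for m1 and m0
    (m1, m0), *_ = np.linalg.lstsq(A, depth, rcond=None)
    return m1, m0
```

The ratio form makes the predictor relatively insensitive to bottom albedo, which is one reason it remains a standard baseline for optically shallow rivers.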
19 pages, 5438 KB  
Article
Chlorophyll-a Retrieval in Turbid Inland Waters Using BC-1A Multispectral Observations: A Case Study of Taihu Lake
by Wen Jiang, Qiyun Guo, Chen Cao and Shijie Liu
Sensors 2026, 26(8), 2535; https://doi.org/10.3390/s26082535 - 20 Apr 2026
Abstract
Turbid Class II inland waters such as Taihu Lake exhibit a “spectral uplift” effect driven by suspended particulate matter (SPM) scattering and colored dissolved organic matter (CDOM) absorption, which can obscure chlorophyll-a (Chl-a) signals in the visible–red-edge region and challenge retrieval under small-sample, collinear feature settings. Using multispectral observations from the BC-1A satellite (carrying the Lightweight Hyperspectral Remote Sensing Imager, LHRSI) and synchronous satellite–ground in situ measurements acquired over Taihu Lake in late autumn, this study proposes Chl-a-oriented PCA–RF (COP-RF), a leakage-safe inversion framework integrating correlation screening, principal component analysis (PCA), and random forest (RF) regression. Candidate band-combination features are generated, and PCA is applied for orthogonal compression to mitigate collinearity before RF learning. A stratified five-fold cross-validation based on Chl-a quantile bins is adopted, with screening, standardization, and PCA fitted only on training folds. COP-RF achieves stable performance under the current dataset (R² = 0.671, RMSE = 1.80 μg/L, MAE = 1.25 μg/L). Spatial inversion shows higher Chl-a near shores and bays and lower values in the lake center, consistent with Sentinel-2 hotspot ranks. Full article
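The leakage-safe protocol described above, fitting standardization and PCA only on training folds before transforming the held-out fold, can be illustrated with a NumPy sketch. Plain random folds stand in for the paper's Chl-a quantile stratification, and the correlation screening and RF regressor are omitted; function names are illustrative.

```python
import numpy as np

def pca_fit(X, k):
    """Fit standardisation statistics and PCA axes on training data only."""
    mu, sd = X.mean(axis=0), X.std(axis=0) + 1e-12
    Z = (X - mu) / sd
    # SVD of the centred, scaled training matrix gives the principal axes.
    _, _, Vt = np.linalg.svd(Z, full_matrices=False)
    return mu, sd, Vt[:k]

def pca_transform(X, mu, sd, components):
    """Project new data using training-fold statistics only (no leakage)."""
    return ((X - mu) / sd) @ components.T

def kfold_pca_features(X, k_components, n_folds=5, seed=0):
    """Out-of-fold PCA features: each sample is transformed by a PCA
    that never saw it during fitting."""
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(len(X)), n_folds)
    out = np.empty((len(X), k_components))
    for i in range(n_folds):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(n_folds) if j != i])
        mu, sd, comps = pca_fit(X[train], k_components)
        out[test] = pca_transform(X[test], mu, sd, comps)
    return out
```

Fitting the scaler and PCA on the full dataset before splitting would let test-fold statistics leak into training, which tends to inflate cross-validated R² on small samples like this one.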
(This article belongs to the Section Remote Sensors)
17 pages, 5384 KB  
Review
Hyperspectral Sensing Enabled by Optics-Free Sensor Architectures
by Yicheng Wang, Xueyi Wang, Xintong Guo and Yining Mu
Nanomanufacturing 2026, 6(2), 8; https://doi.org/10.3390/nanomanufacturing6020008 - 20 Apr 2026
Abstract
Hyperspectral sensing allows for the capture of spatially resolved spectral data, a capability critical for applications spanning from remote sensing to biomedical diagnostics. Nevertheless, the widespread adoption of this technology is hindered by the bulk and complexity of traditional systems based on diffractive optics. To overcome these hurdles, substantial research efforts have been dedicated to system miniaturization via component scaling and computational imaging. This review outlines the technological progression of compact hyperspectral imaging, ranging from miniaturized dispersive elements and tunable filters to computational snapshot designs using optical multiplexing. Although these approaches decrease system volume, they generally treat the sensor as a passive intensity recorder requiring external encoding. Therefore, we focus here on the rising paradigm of sensor-level integration made possible by nanomanufacturing. We examine optics-free architectures where spectral discrimination is embedded directly into the pixel, distinguishing between pixel-level nanophotonic filtering and intrinsic material-based selectivity. We specifically highlight emerging platforms such as compositionally engineered and cavity-enhanced perovskites, as well as electrically tunable organic or two-dimensional (2D) material heterostructures. To conclude, this review discusses persistent challenges regarding fabrication uniformity and stability, providing an outlook on the future of scalable and fully integrated hyperspectral vision systems. Full article