Search Results (682)

Search Parameters:
Keywords = GF-6 images

22 pages, 9443 KB  
Article
A Dynamic Gaussian Modified Spectral Band Adjustment Factors Method for Radiometric Cross-Calibration of HJ-2A/HSI with ZY1-02D/AHSI
by Can Yu, Xiangyu Gao, Hang Zhao, Xiangpeng Feng, Juan Cheng, Bingliang Hu and Shuang Wang
Remote Sens. 2025, 17(24), 3988; https://doi.org/10.3390/rs17243988 - 10 Dec 2025
Viewed by 150
Abstract
The Huanjing Jianzai-2A (HJ-2A) satellite, launched in 2020 as China’s civilian operational environmental satellite, exhibits intrinsic non-uniformity in the spectral channel distribution and inconsistency in the spectral resolution of its hyperspectral imager (HSI). These spectral characteristics compromise spectral channel matching and pose challenges to traditional cross-calibration methods. To overcome these constraints, this study proposes a Dynamic Gaussian Spectral Band Adjustment Factors (DG-SBAF) method for cross-calibration that constructs a Gaussian distribution model for each spectral channel of the target sensor, dynamically matches the spectral channels of the reference sensor, and optimizes SBAF compensation weights through the Gaussian function values. Cross-calibration of HJ-2A/HSI against the ZiYuan1-02D Advanced Hyperspectral Imager (ZY1-02D/AHSI) was conducted at three distinct test sites: Dunhuang, Baotou, and the Taklamakan Desert. Analysis of the cross-calibration results across the three sites revealed mean relative deviations of 6.46% (VNIR) and 8.67% (SWIR), demonstrating superior performance over the traditional SBAF method (7.35% for VNIR, 9.49% for SWIR). Analyses of SBAF fluctuation showed that the DG-SBAF method achieved SBAF distributions approaching 1, with mean RMSE values of 0.0312 (VNIR) and 0.1086 (SWIR). Validation through spectral consistency assessment showed spectral angles of less than 5° and 7° in the VNIR bands when compared with Gaofen-5B/AHSI and Landsat-9/OLI-2, respectively, and less than 6° with GF-5B/AHSI in the SWIR bands. The proposed method effectively corrects spectral channel discrepancies in the matching process, enhances radiometric stability, and provides an effective supplementary on-orbit calibration capability. Full article
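For a concrete sense of the Gaussian channel-matching idea, the sketch below (Python/NumPy, with hypothetical band centers and FWHM values) weights reference-sensor channels by a Gaussian response centered on a target band and returns a band-matched reflectance; it is not the authors' DG-SBAF implementation.

```python
import numpy as np

def gaussian_srf(wavelengths, center, fwhm):
    # Gaussian spectral response of a target band evaluated at the given
    # wavelengths (all in nm); sigma is derived from the FWHM.
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    return np.exp(-0.5 * ((np.asarray(wavelengths) - center) / sigma) ** 2)

def band_matched_reflectance(ref_centers, ref_reflectance, target_center, target_fwhm):
    # Weight reference-sensor channels by the target band's Gaussian response
    # and return a normalized, band-matched reflectance (illustrative only).
    w = gaussian_srf(ref_centers, target_center, target_fwhm)
    w = w / w.sum()
    return float(np.dot(w, ref_reflectance))

# Toy usage with hypothetical band centers (nm) and reflectances.
rho = band_matched_reflectance([550.0, 560.0, 570.0], [0.21, 0.22, 0.24],
                               target_center=565.0, target_fwhm=20.0)
print(rho)
```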

19 pages, 6617 KB  
Article
Domain-Adaptive Segment Anything Model for Cross-Domain Water Body Segmentation in Satellite Imagery
by Lihong Yang, Pengfei Liu, Guilong Zhang, Huaici Zhao and Chunyang Zhao
J. Imaging 2025, 11(12), 437; https://doi.org/10.3390/jimaging11120437 - 9 Dec 2025
Viewed by 154
Abstract
Monitoring surface water bodies is crucial for environmental protection and resource management. Existing segmentation methods often struggle with limited generalization across different satellite domains. We propose DASAM, a domain-adaptive Segment Anything Model for cross-domain water body segmentation in satellite imagery. The core innovation of DASAM is a contrastive learning module that aligns features between source and style-augmented images, enabling robust domain generalization without requiring annotations from the target domain. Additionally, DASAM integrates a prompt-enhanced module and an encoder adapter to capture fine-grained spatial details and global context, further improving segmentation accuracy. Experiments on the China GF-2 dataset demonstrate superior performance over existing methods, while cross-domain evaluations on GLH-water and Sentinel-2 water body image datasets verify its strong generalization and robustness. These results highlight DASAM’s potential for large-scale, diverse satellite water body monitoring and accurate environmental analysis. Full article
(This article belongs to the Section Computer Vision and Pattern Recognition)
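A minimal sketch of the kind of contrastive feature alignment the abstract describes, assuming PyTorch and batch-wise features from the source and style-augmented views; DASAM's actual loss and module design may differ.

```python
import torch
import torch.nn.functional as F

def feature_alignment_loss(feat_src, feat_aug, temperature=0.1):
    # InfoNCE-style loss: each source feature is pulled toward its
    # style-augmented counterpart and pushed away from the other samples
    # in the batch. feat_src and feat_aug have shape (B, D).
    z1 = F.normalize(feat_src, dim=1)
    z2 = F.normalize(feat_aug, dim=1)
    logits = z1 @ z2.t() / temperature        # (B, B) cosine similarities
    targets = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, targets)
```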

22 pages, 2302 KB  
Article
MAF-GAN: A Multi-Attention Fusion Generative Adversarial Network for Remote Sensing Image Super-Resolution
by Zhaohe Wang, Hai Tan, Zhongwu Wang, Jinlong Ci and Haoran Zhai
Remote Sens. 2025, 17(24), 3959; https://doi.org/10.3390/rs17243959 - 7 Dec 2025
Viewed by 262
Abstract
Existing Generative Adversarial Networks (GANs) frequently yield remote sensing images with blurred fine details, distorted textures, and compromised spatial structures when applied to super-resolution (SR) tasks. To address these limitations, this study proposes a Multi-Attention Fusion Generative Adversarial Network (MAF-GAN). The generator of MAF-GAN is built on a U-Net backbone that incorporates Oriented Convolutions (OrientedConv) to enhance the extraction of directional features and textures, while a novel co-calibration mechanism—incorporating channel, spatial, gating, and spectral attention—is embedded in the encoding path and skip connections, supplemented by an adaptive weighting strategy to enable effective multi-scale feature fusion. A composite loss function is further designed to integrate adversarial loss, perceptual loss, hybrid pixel loss, total variation loss, and feature consistency loss for optimizing model performance. Extensive experiments on the GF7-SR4×-MSD dataset demonstrate that MAF-GAN achieves state-of-the-art performance, delivering a Peak Signal-to-Noise Ratio (PSNR) of 27.14 dB, Structural Similarity Index (SSIM) of 0.7206, Learned Perceptual Image Patch Similarity (LPIPS) of 0.1017, and Spectral Angle Mapper (SAM) of 1.0871, significantly outperforming mainstream models including SRGAN, ESRGAN, SwinIR, HAT, and ESatSR, and exceeding traditional interpolation methods (e.g., Bicubic) by a substantial margin. Notably, MAF-GAN maintains an excellent balance between reconstruction quality and inference efficiency, further reinforcing its advantages over competing methods. Ablation studies validate the individual contribution of each proposed component to the model’s overall performance. The method generates super-resolution remote sensing images with more natural visual perception, clearer spatial structures, and superior spectral fidelity, thus offering a reliable technical solution for high-precision remote sensing applications. Full article
(This article belongs to the Section Environmental Remote Sensing)
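As one concrete piece of the composite loss described above, the following PyTorch sketch implements a standard anisotropic total variation term; the weighting and the other loss components used by MAF-GAN are not reproduced here.

```python
import torch

def total_variation_loss(img):
    # Anisotropic total variation on a batch of images or feature maps
    # of shape (N, C, H, W); penalizes abrupt neighbouring-pixel changes.
    dh = (img[:, :, 1:, :] - img[:, :, :-1, :]).abs().mean()
    dw = (img[:, :, :, 1:] - img[:, :, :, :-1]).abs().mean()
    return dh + dw
```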

29 pages, 163937 KB  
Article
Deep Learning-Based Classification of Aquatic Vegetation Using GF-1/6 WFV and HJ-2 CCD Satellite Data
by Yifan Shao, Qian Shen, Yue Yao, Xuelei Wang, Huan Zhao, Hangyu Gao, Yuting Zhou, Haobin Zhang and Zhaoning Gong
Remote Sens. 2025, 17(23), 3817; https://doi.org/10.3390/rs17233817 - 25 Nov 2025
Viewed by 285
Abstract
The Yangtze River Basin, one of China’s most vital watersheds, sustains both ecological balance and human livelihoods through its extensive lake systems. However, since the 1980s, these lakes have experienced significant ecological degradation, particularly in terms of aquatic vegetation decline. To acquire reliable aquatic vegetation data during the peak growing season (July–September), when clear-sky conditions are scarce, we employed Chinese domestic satellite imagery—Gaofen-1/6 (GF-1/6) Wide Field of View (WFV) and Huanjing-2A/B (HJ-2A/B) Charge-Coupled Device (CCD)—with approximately one-day revisit frequency after constellation networking, 16 m spatial resolution, and excellent spectral consistency, in combination with deep learning algorithms, to monitor aquatic vegetation across the basin. Comparative experiments identified the near-infrared, red, and green bands as the most informative input features, with an optimal input size of 256 × 256. Through visual interpretation and dataset augmentation, we generated a total of 5016 labeled image pairs of this size. The U-Net++ model, equipped with an EfficientNet-B5 backbone, achieved robust performance with an mIoU of 90.16% and an mPA of 95.27% on the validation dataset. On independent test data, the model reached an mIoU of 79.10% and an mPA of 86.42%. Field-based assessment yielded an overall accuracy (OA) of 75.25%, confirming the reliability of the model. As a case study, the proposed model was applied to satellite imagery of Lake Taihu captured during the peak growing season of aquatic vegetation (July–September) from 2020 to 2025. Overall, this study introduces an automated classification approach for aquatic vegetation using 16 m resolution Chinese domestic satellite imagery and deep learning, providing a reliable framework for large-scale monitoring of aquatic vegetation across lakes in the Yangtze River Basin during their peak growth period. Full article
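A minimal model setup resembling the one described, assuming the third-party segmentation_models_pytorch package is available; the class count and training recipe here are placeholders rather than the paper's configuration.

```python
import segmentation_models_pytorch as smp

# U-Net++ with an EfficientNet-B5 encoder for 3-band (NIR, red, green)
# chips of 256 x 256 pixels; the class count is illustrative only.
model = smp.UnetPlusPlus(
    encoder_name="efficientnet-b5",
    encoder_weights="imagenet",
    in_channels=3,
    classes=4,
)
```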

23 pages, 4544 KB  
Article
ASROT: A Novel Resampling Algorithm to Balance Training Datasets for Classification of Minor Crops in High-Elevation Regions
by Wei Li, Jie Zhu, Tongjie Li, Zhiyuan Ma, Timothy A. Warner, Hengbiao Zheng, Chongya Jiang, Tao Cheng, Yongchao Tian, Yan Zhu, Weixing Cao and Xia Yao
Remote Sens. 2025, 17(23), 3814; https://doi.org/10.3390/rs17233814 - 25 Nov 2025
Viewed by 215
Abstract
Accurately mapping crop distribution is important for environmental and food security applications. The success of machine learning algorithms (MLs) applied to mapping crops is partly dependent on the acquisition of sufficient training samples. However, since minor crops typically cover only small areas within agricultural landscapes, opportunities for collecting training data for those classes are often constrained. This problem is particularly acute in high-elevation regions, where fields tend to be small and heterogeneous in shape. This often leads to imbalanced training datasets, in which the proportions of samples for each class differ greatly. To address this issue, a novel resampling algorithm, the adaptive synthetic and repeat oversampling technique (ASROT), was proposed by coupling two existing algorithms: adaptive synthetic sampling (ADASYN) and density-based spatial clustering of applications with noise (DBSCAN). We then explored the application of the proposed ASROT approach and compared it with six commonly used alternative algorithms, using 13 imbalanced datasets generated from GF-6 images of a high-elevation region. The imbalanced training datasets, as well as balanced versions produced by ASROT and the comparison algorithms, were used with two classifiers (random forest (RF) and a stacking classifier) to map crop types. The results showed a negative correlation between overall accuracy and the degree of imbalance of the datasets, illustrating that imbalance does affect model calibration for crop classification. The balanced datasets produced higher crop classification accuracy than the original imbalanced datasets for both the RF and stacking classifiers. The classification accuracy of almost all the crop classes, as well as the overall classification accuracy (OA), increased; most notably, the accuracy for minor crops (e.g., highland barley and broad beans) increased by approximately 30%. Overall, the proposed ASROT algorithm provides an effective method for balancing training datasets, simultaneously improving the classification accuracy of both major and minor crops in high-elevation regions. Full article
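ASROT's exact coupling rules are not given in the abstract; the sketch below only illustrates the general idea of pairing ADASYN oversampling with DBSCAN density-based filtering, using imbalanced-learn and scikit-learn.

```python
from imblearn.over_sampling import ADASYN
from sklearn.cluster import DBSCAN

def balance_with_noise_filtering(X, y, eps=0.5, min_samples=5, random_state=0):
    # Oversample minority classes with ADASYN, then use DBSCAN to discard
    # samples that fall in low-density regions (cluster label -1).
    X_res, y_res = ADASYN(random_state=random_state).fit_resample(X, y)
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(X_res)
    keep = labels != -1
    return X_res[keep], y_res[keep]
```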

26 pages, 5500 KB  
Article
Structure and Functional Characteristics of Soybean Protein from Different Northeast Cultivars and Their Effects on the Quality of Soymilk Gel
by Xiaoyu Xia, Chunlei Zhang, Shiyao Zhang, Tianjiao Gao, Shuping Yan, Xiuqing Zhu, Jiaxin Kang, Guixing Zhao, Sobhi F. Lamlom, Honglei Ren and Jiajun Wang
Foods 2025, 14(23), 4029; https://doi.org/10.3390/foods14234029 - 24 Nov 2025
Viewed by 452
Abstract
Soymilk gel quality hinges on soybean protein composition and structure, yet direct comparisons linking protein traits to gel properties are limited. This study compared seven Northeast Chinese soybean varieties to identify which protein characteristics best predict tofu gel quality. Protein analysis included composition (11S/7S ratio), structure, and functional properties. Gel quality was measured through yield, water retention, texture, rheology, and microstructure imaging. Results showed substantial variation among varieties: 11S/7S ratios ranged from 1.14 to 4.10, solubility from 57.50% to 69.74%, gel yield from 193.25% to 236.12%, water-holding capacity from 42.09% to 60.23%, and gel firmness from 1520 to 1889 gf. The 11S/7S ratio emerged as the strongest quality predictor, correlating with gel firmness (R = 0.92) and elasticity (R = 0.98), while solubility correlated with yield (R = 0.79) and water retention (R = 0.83). Microscopy revealed that variety HD-1, with the highest 11S/7S ratio (4.10) and solubility (69.74%), formed dense networks with small pores (20–50 μm), whereas variety HK-60 (ratio 1.14) produced coarse structures with large pores (100–200 μm). HD-1 showed the best overall performance. Varieties with 11S/7S ratios above 3.5 and solubility above 68% consistently produced high-quality gels, while ratios below 2.5 indicated poor gel formation regardless of total protein content. These findings demonstrate that protein composition matters more than protein quantity for tofu quality. The approach enables rapid variety screening and provides practical guidelines for tofu manufacturers and soybean breeders. Full article

29 pages, 8374 KB  
Article
Cross-Domain Land Surface Temperature Retrieval via Strategic Fine-Tuning-Based Transfer Learning: Application to GF5-02 VIMI Imagery
by Peyman Heidarian, Hua Li, Zelin Zhang, Yumin Tan, Feng Zhao, Biao Cao, Yongming Du and Qinhuo Liu
Remote Sens. 2025, 17(23), 3803; https://doi.org/10.3390/rs17233803 - 23 Nov 2025
Viewed by 503
Abstract
Accurate prediction of land surface temperature (LST) is critical for remote sensing applications, yet remains hindered by in situ data scarcity, limited input variables, and regional variability. To address these limitations, we introduce a three-stage strategic fine-tuning-based transfer learning (SFTL) framework that integrates a large simulated dataset (430 K samples), in situ measurements from the Heihe and Huailai regions in China, and high-resolution imagery from the GF5-02 Visible and Infrared Multispectral Imager (VIMI). The key novelty of this study is the combination of large-scale simulation, an engineered humidity-sensitive feature, and multiple parameter-efficient tuning strategies—full, head, gradual, adapter, and low-rank adaptation (LoRA)—within a unified transfer-learning framework for cross-site LST estimation. In Stage 1, pre-training with 5-fold cross-validation on the simulated dataset produced strong baseline models, including Random Forest (RF), Light Gradient Boosting Machine (LGBM), Deep Neural Network (DNN), Transformer (TrF), and Convolutional Neural Network (CNN). In Stage 2, strategic fine-tuning was conducted under two cross-regional scenarios—Heihe-to-Huailai and Huailai-to-Heihe—and model transfer for tree-based learners. Fine-tuning achieved competitive in-domain performance while materially improving cross-site transfer. When trained on Huailai and tested on Heihe, DNN-gradual attained RMSE 2.89 K (R2 ≈ 0.96); when trained on Heihe and tested on Huailai, TrF-head achieved RMSE 3.34 K (R2 ≈ 0.94). In Stage 3, sensitivity analyses confirmed stability across IQR multipliers of 1.0–1.5, with <1% RMSE variation across models and sites, indicating robustness against outliers. Additionally, application to real GF5-02 VIMI imagery demonstrated that the best SFTL configurations aligned with spatiotemporal in situ observations at both sites, capturing the expected spatial gradients. Overall, the proposed SFTL framework—anchored in cross-validation, strategic fine-tuning, and large-scale simulation—outperforms the widely used Split-Window (SW) algorithm (Huailai: RMSE = 3.64 K; Heihe: RMSE = 4.22 K) as well as direct-training Machine Learning (ML) models, underscoring their limitations in modeling complex regional variability. Full article
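Of the tuning strategies listed (full, head, gradual, adapter, LoRA), the "head" variant is the simplest to illustrate. The PyTorch sketch below freezes a pretrained backbone and leaves only an assumed `head` sub-module trainable; the attribute name and the surrounding training loop are assumptions, not the paper's code.

```python
import torch.nn as nn

def freeze_for_head_tuning(model: nn.Module, head_attr: str = "head"):
    # Freeze every parameter, then unfreeze only the prediction head
    # (the attribute name "head" is an assumption about the model class).
    for p in model.parameters():
        p.requires_grad = False
    for p in getattr(model, head_attr).parameters():
        p.requires_grad = True
    # Return the trainable parameters to hand to the optimizer.
    return [p for p in model.parameters() if p.requires_grad]
```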

23 pages, 8750 KB  
Article
Semi-BSU: A Boundary-Aware Semi-Supervised Semantic Segmentation Framework with Superpixel Refinement for Coastal Aquaculture Pond Extraction from Remote Sensing Images
by Yaocan Gan, Bo Cheng, Chunbo Li, Weilong Fu and Xiaoping Zhang
Remote Sens. 2025, 17(22), 3733; https://doi.org/10.3390/rs17223733 - 17 Nov 2025
Viewed by 470
Abstract
Accurate segmentation of coastal aquaculture ponds from high-resolution remote sensing images is critical for applications such as coastal environmental monitoring, land use mapping, and infrastructure management. Semi-supervised learning (SSL) has emerged as a promising paradigm by leveraging labeled and unlabeled data to reduce annotation costs. However, existing SSL methods often suffer from pseudo-label quality degradation, manifested as boundary adhesion and intra-class inconsistencies, which significantly affect segmentation accuracy. To address these challenges, we propose Semi-BSU, a boundary-aware semi-supervised semantic segmentation framework based on the mean teacher architecture. Semi-BSU integrates two novel components: (1) a Boundary Consistency Constraint (BCC), which employs an auxiliary boundary classifier to enhance contour accuracy in pseudo labels, and (2) a Superpixel Refinement Module (SRM), which refines pseudo labels at the superpixel level to improve intra-class consistency. Comprehensive experiments conducted on GF6 and ZY1E high-resolution remote sensing imagery, covering diverse coastal environments with complex geomorphological features, demonstrate the effectiveness of our approach. With half of the training set labeled, Semi-BSU achieves an MIOU of 0.8606, F1 score of 0.8896, and Kappa coefficient of 0.8080, outperforming state-of-the-art methods including CPS, GCT, and UniMatch by 0.3–4.9% in MIOU. The method maintains a compact computational footprint with only 1.81 M parameters and 55.71 GFLOPs. Even with only 1/8 labeled data, it yields a 3.57% MIOU improvement over the supervised baseline. The results demonstrate that combining boundary-aware learning with superpixel-based refinement offers an effective and efficient strategy for high-quality pseudo-label generation and accurate mapping of coastal aquaculture ponds in remote sensing imagery. Full article
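A generic example of superpixel-based pseudo-label refinement using SLIC from scikit-image; the paper's SRM operates inside the mean-teacher pipeline and may differ in detail.

```python
import numpy as np
from skimage.segmentation import slic

def refine_pseudo_label(image, pseudo_label, n_segments=800, compactness=10.0):
    # Majority-vote the pseudo label inside each SLIC superpixel to improve
    # intra-class consistency; image is (H, W, 3), pseudo_label is (H, W).
    segments = slic(image, n_segments=n_segments, compactness=compactness,
                    start_label=0)
    refined = pseudo_label.copy()
    for s in np.unique(segments):
        mask = segments == s
        vals, counts = np.unique(pseudo_label[mask], return_counts=True)
        refined[mask] = vals[np.argmax(counts)]
    return refined
```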

19 pages, 5630 KB  
Article
A New Method for Detecting Plastic-Mulched Land Using GF-2 Imagery
by Shixian Lu, Shuyuan Zheng, Cheng Chen, Shanshan Liu, Jian Dao, Chenwei Xu and Jianxiong Wang
Appl. Sci. 2025, 15(22), 11978; https://doi.org/10.3390/app152211978 - 11 Nov 2025
Viewed by 432
Abstract
Plastic mulch residues threaten soil fertility and contribute to microplastic pollution, creating an urgent need for accurate, rapid mapping of plastic-mulched land (PML). This study presents a novel method for detecting PML from GF-2 imagery by introducing the second component of the K-T transform as a PML-enhancement feature to compensate for the sensor’s limited spectral bands. The K-T component was fused with selected texture metrics and the original spectral bands, and an object-oriented classification framework was applied to delineate PML. Validation shows that the proposed method achieves high identification accuracy for PML and good transferability, with accuracies exceeding 90% across the four selected study areas. Moreover, the method demonstrates strong temporal stability: classification accuracies exceeded 90% for two different time periods within the same study area. Compared with methods reported in previous studies, our approach attains comparable accuracy while offering higher classification efficiency. Overall, the proposed method enables accurate PML identification from GF-2 imagery and provides a valuable reference for agricultural planning and ecological protection. Full article

24 pages, 59144 KB  
Article
EWAM: Scene-Adaptive Infrared-Visible Image Matching with Radiation-Prior Encoding and Learnable Wavelet Edge Enhancement
by Mingwei Li, Hai Tan, Haoran Zhai and Jinlong Ci
Remote Sens. 2025, 17(22), 3666; https://doi.org/10.3390/rs17223666 - 7 Nov 2025
Viewed by 632
Abstract
Infrared–visible image matching is a prerequisite for environmental monitoring, military reconnaissance, and multisource geospatial analysis. However, pronounced texture disparities, intensity drift, and complex non-linear radiometric distortions in such cross-modal pairs mean that existing frameworks such as SuperPoint + SuperGlue (SP + SG) and LoFTR cannot reliably establish correspondences. To address this issue, we propose a dual-path architecture, the Environment-Adaptive Wavelet Enhancement and Radiation Priors Aided Matcher (EWAM). EWAM incorporates two synergistic branches: (1) an Environment-Adaptive Radiation Feature Extractor, which first classifies the scene according to radiation-intensity variations and then incorporates a physical radiation model into a learnable gating mechanism for selective feature propagation; (2) a Wavelet-Transform High-Frequency Enhancement Module, which recovers blurred edge structures by boosting wavelet coefficients under directional perceptual losses. The two branches collectively increase the number of tie points (reliable correspondences) and refine their spatial localization. A coarse-to-fine matcher subsequently refines the cross-modal correspondences. We benchmarked EWAM against SIFT, AKAZE, D2-Net, SP + SG, and LoFTR on a newly compiled dataset that fuses GF-7, Landsat-8, and Five-Billion-Pixels imagery. Across desert, mountain, gobi, urban and farmland scenes, EWAM reduced the average RMSE to 1.85 pixels and outperformed the best competing method by 2.7%, 2.6%, 2.0%, 2.3% and 1.8% in accuracy, respectively. These findings demonstrate that EWAM yields a robust and scalable framework for large-scale multi-sensor remote-sensing data fusion. Full article
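The high-frequency enhancement idea can be sketched with a fixed gain on the detail coefficients of a single-level 2-D wavelet transform (PyWavelets); EWAM instead learns the enhancement under directional perceptual losses, so the code below is illustrative only.

```python
import pywt

def boost_high_frequency(img, gain=1.5, wavelet="haar"):
    # Amplify the detail (high-frequency) sub-bands of a single-level 2-D DWT
    # and reconstruct, sharpening blurred edge structures.
    # img is a 2-D NumPy array (grayscale image).
    cA, (cH, cV, cD) = pywt.dwt2(img.astype(float), wavelet)
    return pywt.idwt2((cA, (gain * cH, gain * cV, gain * cD)), wavelet)
```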

18 pages, 1644 KB  
Technical Note
Cross-Validation of Surface Reflectance Between GF5-02 AHSI and EnMAP Across Diverse Land Cover Types
by Shuhan Liu, Yujie Zhao, Xia Wang, Li Guo, Kun Shang, Ping Zhou, Bangyu Ge, Bai Xue and Jiaxing Liu
Remote Sens. 2025, 17(21), 3524; https://doi.org/10.3390/rs17213524 - 24 Oct 2025
Viewed by 543
Abstract
Multi-source hyperspectral data are increasingly applied in environmental monitoring, precision agriculture, and geological exploration, yet differences in sensor characteristics hinder interoperability. This study presents a systematic cross-validation of surface reflectance between the German EnMAP mission and the Chinese GF5-02 Advanced Hyperspectral Imager (AHSI) across four representative land cover types: minerals in the East Tianshan Mountains, tropical grasslands in Hainan Danzhou, desert in Dunhuang, and inland salt lakes in Qinghai. Using EnMAP Level-2A products as reference, we evaluated GF5-02 reflectance with spectral angle (SA), root mean squared error (RMSE), relative RMSE (RRMSE), and correlation coefficient (R). Results show strong consistency for high- and medium-reflectance surfaces (R > 0.96, SA < 0.08 rad), while water bodies exhibit larger discrepancies (R = 0.82, SA = 0.34 rad), likely due to atmospheric correction and sensor response differences. Additional ground validation in the East Tianshan region confirmed the reliability and stability of GF5-02 data. Overall, GF5-02 demonstrates high consistency with EnMAP across most land cover types, supporting quantitative applications, though further improvements are needed for low-reflectance environments. Full article
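The evaluation metrics quoted above are standard and straightforward to reproduce; a small NumPy sketch of spectral angle, RMSE, and relative RMSE follows.

```python
import numpy as np

def spectral_angle(ref, test):
    # Spectral angle (radians) between two reflectance spectra.
    ref, test = np.asarray(ref, float), np.asarray(test, float)
    cos = np.dot(ref, test) / (np.linalg.norm(ref) * np.linalg.norm(test))
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

def rmse(ref, test):
    ref, test = np.asarray(ref, float), np.asarray(test, float)
    return float(np.sqrt(np.mean((ref - test) ** 2)))

def rrmse(ref, test):
    # Relative RMSE: RMSE normalized by the mean of the reference spectrum.
    return rmse(ref, test) / float(np.mean(np.asarray(ref, float)))
```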

21 pages, 9744 KB  
Article
MsGf: A Lightweight Self-Supervised Monocular Depth Estimation Framework with Multi-Scale Feature Extraction
by Xinxing Tian, Zhilin He, Yawei Zhang, Fengkai Liu and Tianhao Gu
Sensors 2025, 25(20), 6380; https://doi.org/10.3390/s25206380 - 16 Oct 2025
Cited by 1 | Viewed by 764
Abstract
Monocular depth estimation is an essential component of computer vision that enables 3D scene understanding, with critical applications in autonomous driving and augmented reality. This paper proposes MsGf, a lightweight self-supervised framework for multi-scale feature extraction and artifact elimination in monocular depth estimation from single RGB images. The framework first designs a Cross-Dimensional Multi-scale Feature Extraction (CDMs) module, which combines parallel multi-scale convolution with sequential feature convolutions to achieve multi-scale feature extraction with minimal parameters. Additionally, a Sobel Edge Perception-Guided Filtering (SEGF) module is proposed; it uses the Sobel operator to decompose features into horizontal and vertical direction components and then generates the filter kernel through two filtering steps to effectively suppress artifacts and better capture structural and edge features. Extensive ablation and comparative experiments on the KITTI and Make3D datasets demonstrate that MsGf, with only 0.8 M parameters, achieves better performance than current state-of-the-art methods. Full article
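A sketch of Sobel-based decomposition of feature maps into horizontal and vertical gradient components (PyTorch, depthwise convolution); the SEGF module's subsequent kernel generation and filtering steps are not reproduced.

```python
import torch
import torch.nn.functional as F

def sobel_decompose(feat):
    # Depthwise convolution with fixed Sobel kernels, splitting feature maps
    # of shape (N, C, H, W) into horizontal and vertical gradient components.
    sobel_x = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]])
    sobel_y = sobel_x.t().contiguous()
    c = feat.shape[1]
    kx = sobel_x.view(1, 1, 3, 3).repeat(c, 1, 1, 1).to(feat)
    ky = sobel_y.view(1, 1, 3, 3).repeat(c, 1, 1, 1).to(feat)
    gx = F.conv2d(feat, kx, padding=1, groups=c)
    gy = F.conv2d(feat, ky, padding=1, groups=c)
    return gx, gy
```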

27 pages, 9637 KB  
Article
ConvNeXt-L-Based Recognition of Decorative Patterns in Historical Architecture: A Case Study of Macau
by Junling Zhou, Lingfeng Xie, Pia Fricker and Kuan Liu
Buildings 2025, 15(20), 3705; https://doi.org/10.3390/buildings15203705 - 14 Oct 2025
Viewed by 834
Abstract
As a well-known World Cultural Heritage Site, the Historic Centre of Macao contains historical buildings with a wealth of decorative patterns. These patterns embody the cultural aesthetics, geographical environment, cultural traditions, and other elements of specific historical periods, deeply reflecting the evolution of religious rituals and political and economic systems throughout history. Through long-term research, this article constructs a dataset of 11,807 images of local decorative patterns of historical buildings in Macau and proposes a fine-grained image classification method using the ConvNeXt-L model. ConvNeXt-L is an efficient convolutional neural network that has demonstrated excellent performance in image classification tasks in fields such as medicine and architecture, and it performs well with limited training samples, diverse image features, and complex scenes. A key advantage of this model is its structural integration of design concepts from the Transformer, which significantly enhances feature extraction and generalization. Given that the decorative patterns of Macau’s historical buildings contain rich levels of detail while the number of functional building categories is limited, ConvNeXt-L maximizes pattern recognition and classification ability while maintaining computational efficiency, providing a suitable technical path for classifying small-sample, complex images. This article builds a deep learning system based on the PyTorch 1.11 framework and compares ResNet50, EfficientNet-B7, ViT-B/16, Swin-B, RegNet-Y-16GF, and the ConvNeXt series of models. The results indicate a positive correlation between model performance and structural complexity, with ConvNeXt-L achieving the best accuracy in decorative pattern classification owing to its fusion of convolution and attention mechanisms. This study not only provides a multidimensional exploration for the protection and revitalization of Macao’s historical and cultural heritage, enriching its theoretical support and practical foundations, but also offers new research paths and methodological support for applying artificial intelligence to planning and decision-making in historical urban areas. Full article
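A typical transfer-learning setup for a classification task of this kind, assuming a recent torchvision release; the head replacement below is illustrative and not the paper's exact training configuration.

```python
import torch.nn as nn
from torchvision import models

def build_pattern_classifier(num_classes):
    # Load ImageNet-pretrained ConvNeXt-Large and replace its final linear
    # layer with one sized for the decorative-pattern classes.
    model = models.convnext_large(weights=models.ConvNeXt_Large_Weights.DEFAULT)
    in_features = model.classifier[2].in_features
    model.classifier[2] = nn.Linear(in_features, num_classes)
    return model
```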

15 pages, 8859 KB  
Article
A Hybrid Estimation Model for Graphite Nodularity of Ductile Cast Iron Based on Multi-Source Feature Extraction
by Yongjian Yang, Yanhui Liu, Yuqian He, Zengren Pan and Zhiwei Li
Modelling 2025, 6(4), 126; https://doi.org/10.3390/modelling6040126 - 13 Oct 2025
Viewed by 527
Abstract
Graphite nodularity is a key indicator for evaluating the microstructure quality of ductile iron and plays a crucial role in ensuring product quality and enhancing manufacturing efficiency. Existing research often focuses on a single type of feature and fails to utilize multi-source information in a coordinated manner; single-feature methods struggle to capture microstructures comprehensively, which limits model accuracy and robustness. This study proposes a hybrid estimation model for the graphite nodularity of ductile cast iron based on multi-source feature extraction. A comprehensive feature engineering pipeline was established, incorporating geometric, color, and texture features extracted via Hue-Saturation-Value (HSV) color space histograms, the gray level co-occurrence matrix (GLCM), Local Binary Patterns (LBP), and multi-scale Gabor filters. Dimensionality reduction was performed using Principal Component Analysis (PCA) to mitigate redundancy, and an improved watershed algorithm combined with intelligent filtering was used for accurate particle segmentation. Several machine learning algorithms, including Support Vector Regression (SVR), Multi-Layer Perceptron (MLP), Random Forest (RF), Gradient Boosting Regressor (GBR), eXtreme Gradient Boosting (XGBoost), and Categorical Boosting (CatBoost), are applied to estimate graphite nodularity from the geometric features (GFs) and the extracted multi-source features. Experimental results demonstrate that the CatBoost model trained on fused features achieves high estimation accuracy and stability for geometric parameters, with R-squared (R2) exceeding 0.98. Furthermore, introducing geometric features into the fusion set enhances model generalization and suppresses overfitting. This framework offers an efficient and robust approach for intelligent analysis of metallographic images and provides valuable support for automated quality assessment in casting production. Full article
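A small scikit-image sketch of two of the texture feature families mentioned (GLCM statistics and a uniform-LBP histogram); the full pipeline with HSV histograms, Gabor filters, and PCA is omitted, and the parameter choices are assumptions.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops, local_binary_pattern

def texture_features(gray_img):
    # GLCM contrast/homogeneity plus a uniform-LBP histogram for one 8-bit
    # grayscale image; a small subset of the features described above.
    glcm = graycomatrix(gray_img, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    contrast = graycoprops(glcm, "contrast").mean()
    homogeneity = graycoprops(glcm, "homogeneity").mean()
    lbp = local_binary_pattern(gray_img, P=8, R=1, method="uniform")
    hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
    return np.concatenate([[contrast, homogeneity], hist])
```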

19 pages, 12919 KB  
Article
Mapping Flat Peaches Using GF-1 Imagery and Overwintering Features by Comparing Pixel/Object-Based Random Forest Algorithm
by Yawen Wang, Jing Wang and Cheng Tang
Forests 2025, 16(10), 1566; https://doi.org/10.3390/f16101566 - 10 Oct 2025
Viewed by 353
Abstract
The flat peach, an important commercial crop in the 143rd Regiment of Shihezi, China, is overwintered using plastic film mulching and is cultivated to boost the local temperate rural economy. Accurate maps of the spatial distribution of flat peach plantations are crucial for the intelligent management of economic orchards. This study evaluated the performance of pixel-based and object-based random forest algorithms for mapping flat peaches using a GF-1 image acquired during the overwintering period. A total of 45 variables, including spectral bands, vegetation indices, and texture, were used as input features. To assess the importance of different features for classification accuracy, five different sets of variables (5, 15, 25, 35, and all 45 input variables) were classified using the pixel-based and object-based methods. The feature optimization results suggested that vegetation indices played a key role, and that the mean and variance of Gray-Level Co-occurrence Matrix (GLCM) texture features were important variables for distinguishing flat peach orchards. The object-based classification method was superior to the pixel-based method, with statistically significant differences. The optimal performance was achieved by the object-based method using 25 input variables, with an overall accuracy of 94.47% and a Kappa coefficient of 0.9273. Furthermore, there were no statistically significant differences between the image-derived flat peach cultivated area and the statistical yearbook data. These results indicate that high-resolution images from the overwintering period can successfully map flat peach planting areas, providing a useful reference for temperate regions with similar agricultural management. Full article
(This article belongs to the Section Forest Inventory, Modeling and Remote Sensing)
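Feature optimization of the kind described is commonly done with random forest variable importances; a generic scikit-learn sketch (not the authors' exact procedure) follows.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def rank_features(X, y, feature_names, n_estimators=500, random_state=0):
    # Fit a random forest and rank input variables by impurity-based
    # importance, mirroring a typical feature-optimization step.
    rf = RandomForestClassifier(n_estimators=n_estimators,
                                random_state=random_state)
    rf.fit(X, y)
    order = np.argsort(rf.feature_importances_)[::-1]
    return [(feature_names[i], float(rf.feature_importances_[i])) for i in order]
```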
