Search Results (3)

Search Parameters:
Keywords = SF-UNet

24 pages, 3819 KiB  
Article
SF-UNet: An Adaptive Cross-Level Residual Cascade for Forest Hyperspectral Image Classification Algorithm by Fusing SpectralFormer and U-Net
by Xinggui Xu, Xuyang Li, Xiangsuo Fan, Qi Li, Hong Li and Haotian Yu
Forests 2025, 16(5), 858; https://doi.org/10.3390/f16050858 - 20 May 2025
Viewed by 390
Abstract
Traditional deep learning algorithms struggle to effectively utilize local spectral information in forest HS images and to adequately capture subtle feature differences, often causing model confusion and misclassification. To tackle these issues, we present SF-UNet, a novel pixel-level classification network for forest HS images that integrates the strengths of SpectralFormer and U-Net. First, the HGSE module generates semicomponent spectral nesting, strengthening connections between local information elements via spectral embedding. Next, the CAM within SpectralFormer serves as an auxiliary U-Net encoder, enabling cross-level skip connections and cascading through interlayer soft residuals, which enhances feature representation via cross-regional adaptive learning. Finally, the U-Net decoder performs pixel-level classification. Experiments on forest Sentinel-2 data show that SF-UNet outperforms mainstream frameworks: while Vision Transformer reaches 88.29% classification accuracy, SF-UNet achieves 95.28%, a 6.99-percentage-point improvement. Moreover, SF-UNet excels in land cover change analysis using multi-temporal Sentinel-2 images, accurately capturing subtle land use changes and maintaining classification consistency across seasons and years. These results highlight SF-UNet's effectiveness in forest remote sensing image classification and its potential value for deep learning-based forest HS remote sensing image classification research.
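Editor's sketch: a minimal PyTorch-style illustration of the general fusion pattern this abstract describes, namely a transformer-based spectral encoder acting as an auxiliary encoder whose features are merged with convolutional skip features in a U-Net-style decoder. This is not the authors' implementation; class names such as SpectralTransformerEncoder and FusionUNet, and all hyperparameters, are hypothetical placeholders under that assumption.

```python
# Hedged sketch of the fusion pattern described in the abstract: a transformer-style
# spectral encoder used as an auxiliary encoder alongside a convolutional U-Net
# encoder, with cross-level skip connections into a shared decoder.
# Module names and hyperparameters are illustrative assumptions, not the authors' code.
import torch
import torch.nn as nn


class ConvBlock(nn.Module):
    """Two 3x3 convolutions, as in a standard U-Net stage."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.block(x)


class SpectralTransformerEncoder(nn.Module):
    """Auxiliary encoder: treats each pixel's band vector as a token
    (a stand-in for a SpectralFormer-style spectral embedding)."""
    def __init__(self, bands, dim=64, depth=2, heads=4):
        super().__init__()
        self.embed = nn.Linear(bands, dim)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)

    def forward(self, x):                      # x: (B, C, H, W)
        b, c, h, w = x.shape
        tokens = x.flatten(2).transpose(1, 2)  # (B, H*W, C) -> one token per pixel
        feat = self.encoder(self.embed(tokens))
        return feat.transpose(1, 2).reshape(b, -1, h, w)  # back to (B, dim, H, W)


class FusionUNet(nn.Module):
    """Small U-Net whose decoder receives both CNN skip features and
    auxiliary transformer features at the top level."""
    def __init__(self, bands=12, classes=10, dim=64):
        super().__init__()
        self.aux = SpectralTransformerEncoder(bands, dim)
        self.enc1 = ConvBlock(bands, dim)
        self.enc2 = ConvBlock(dim, dim * 2)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.ConvTranspose2d(dim * 2, dim, 2, stride=2)
        self.dec1 = ConvBlock(dim * 3, dim)      # upsampled + CNN skip + aux features
        self.head = nn.Conv2d(dim, classes, 1)

    def forward(self, x):
        s1 = self.enc1(x)                        # full-resolution CNN features
        s2 = self.enc2(self.pool(s1))            # downsampled features
        up = self.up(s2)
        aux = self.aux(x)                        # spectral transformer features
        fused = torch.cat([up, s1, aux], dim=1)  # cross-level fusion at the decoder
        return self.head(self.dec1(fused))       # per-pixel class logits


if __name__ == "__main__":
    model = FusionUNet(bands=12, classes=10)
    logits = model(torch.randn(2, 12, 32, 32))
    print(logits.shape)  # torch.Size([2, 10, 32, 32])
```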

16 pages, 5437 KiB  
Article
Three Dimensional Shape Reconstruction via Polarization Imaging and Deep Learning
by Xianyu Wu, Penghao Li, Xin Zhang, Jiangtao Chen and Feng Huang
Sensors 2023, 23(10), 4592; https://doi.org/10.3390/s23104592 - 9 May 2023
Cited by 17 | Viewed by 3427
Abstract
Deep-learning-based polarization 3D imaging techniques, which train networks in a data-driven manner, can estimate a target's surface normal distribution under passive lighting conditions. However, existing methods have limitations in restoring target texture details and accurately estimating surface normals: information can be lost in the fine-textured areas of the target during reconstruction, resulting in inaccurate normal estimation and reduced overall reconstruction accuracy. The proposed method extracts more comprehensive information, mitigates the loss of texture information during object reconstruction, enhances the accuracy of surface normal estimation, and facilitates more complete and precise reconstruction of objects. The proposed networks optimize the polarization representation input by using Stokes-vector-based parameters in addition to separated specular and diffuse reflection components. This approach reduces the impact of background noise, extracts more relevant polarization features of the target, and provides more accurate cues for restoring surface normals. Experiments are performed using both the DeepSfP dataset and newly collected data. The results show that the proposed model provides more accurate surface normal estimates: compared to the U-Net-based method, the mean angular error is reduced by 19%, computation time is reduced by 62%, and the model size is reduced by 11%.
(This article belongs to the Special Issue Recent Advances in Optical Imaging and 3D Display Technologies)
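Editor's sketch: a short NumPy illustration of the standard Stokes-vector polarization representation (S0, S1, S2, DoLP, AoLP) computed from four polarizer-angle captures, and of the mean angular error metric quoted in the abstract. It shows only the conventional input and evaluation formulas, not the authors' network; all array names are placeholders.

```python
# Hedged sketch (not the authors' code): standard Stokes-vector polarization features
# from intensity images at polarizer angles 0/45/90/135 degrees, plus the mean angular
# error metric commonly used to score estimated surface normals.
import numpy as np


def stokes_features(i0, i45, i90, i135, eps=1e-8):
    """Return S0, S1, S2 plus degree and angle of linear polarization (DoLP, AoLP)."""
    s0 = 0.5 * (i0 + i45 + i90 + i135)    # total intensity
    s1 = i0 - i90                          # horizontal vs. vertical component
    s2 = i45 - i135                        # diagonal component
    dolp = np.sqrt(s1**2 + s2**2) / (s0 + eps)
    aolp = 0.5 * np.arctan2(s2, s1)
    return np.stack([s0, s1, s2, dolp, aolp], axis=0)


def mean_angular_error(n_est, n_gt, eps=1e-8):
    """Mean angle (degrees) between estimated and ground-truth normals of shape (H, W, 3)."""
    n_est = n_est / (np.linalg.norm(n_est, axis=-1, keepdims=True) + eps)
    n_gt = n_gt / (np.linalg.norm(n_gt, axis=-1, keepdims=True) + eps)
    cos = np.clip(np.sum(n_est * n_gt, axis=-1), -1.0, 1.0)
    return np.degrees(np.arccos(cos)).mean()


if __name__ == "__main__":
    h, w = 64, 64
    imgs = [np.random.rand(h, w) for _ in range(4)]   # placeholder polarization captures
    feats = stokes_features(*imgs)                    # (5, H, W) input channels for a network
    normals = np.dstack([np.zeros((h, w)), np.zeros((h, w)), np.ones((h, w))])
    print(feats.shape, mean_angular_error(normals, normals))
```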

41 pages, 20239 KiB  
Article
Deep and Machine Learning Image Classification of Coastal Wetlands Using Unpiloted Aircraft System Multispectral Images and Lidar Datasets
by Ali Gonzalez-Perez, Amr Abd-Elrahman, Benjamin Wilkinson, Daniel J. Johnson and Raymond R. Carthy
Remote Sens. 2022, 14(16), 3937; https://doi.org/10.3390/rs14163937 - 13 Aug 2022
Cited by 33 | Viewed by 5407
Abstract
Recent developments in deep learning architectures create opportunities to accurately classify high-resolution unoccupied aerial system (UAS) images of natural coastal systems and mandate continuous evaluation of algorithm performance. We evaluated the performance of the U-Net and DeepLabv3 deep convolutional network architectures and two traditional machine learning techniques, support vector machine (SVM) and random forest (RF), applied to seventeen coastal land cover types in west Florida using UAS multispectral aerial imagery and canopy height models (CHMs). Twelve combinations of spectral bands and CHMs were used. Using the spectral bands, the U-Net (83.80–85.27% overall accuracy) and DeepLabv3 (75.20–83.50% overall accuracy) deep learning techniques outperformed the SVM (60.50–71.10% overall accuracy) and RF (57.40–71.00% overall accuracy) machine learning algorithms. Adding the CHM to the spectral bands slightly increased the overall accuracy of the deep learning models, while it notably improved the SVM and RF results. Similarly, using bands beyond the three visible bands, namely near-infrared and red edge, increased the performance of the machine learning classifiers but had minimal impact on the deep learning classification results. The difference in overall accuracy produced by using UAS-based lidar versus SfM point clouds as supplementary geometric information in the classification process was minimal across all classification techniques. Our results highlight the advantage of deep learning networks for classifying high-resolution UAS images of highly diverse coastal landscapes. We also found that low-cost, three-visible-band imagery produces results comparable to multispectral imagery, without a significant reduction in classification accuracy, when deep learning models are adopted.
(This article belongs to the Special Issue Remote Sensing Applications in Vegetation Classification)
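Editor's sketch: a minimal scikit-learn illustration of the feature-stacking comparison the abstract describes for the traditional classifiers, training RF and SVM on per-pixel band vectors with and without a CHM channel and scoring them by overall accuracy. The deep models and the actual Florida datasets are not reproduced here; all arrays are random placeholders, so the printed accuracies are only a demonstration of the workflow.

```python
# Hedged sketch (illustrative only): pixel-wise RF and SVM baselines on stacked
# spectral bands with and without a canopy height model (CHM), scored by overall
# accuracy, mirroring the kind of comparison described in the abstract.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_pixels, n_bands, n_classes = 5000, 5, 17           # e.g. B, G, R, red edge, NIR; 17 cover types
bands = rng.random((n_pixels, n_bands))               # placeholder reflectance values
chm = rng.random((n_pixels, 1))                       # placeholder canopy height values
labels = rng.integers(0, n_classes, n_pixels)         # placeholder reference labels

for name, features in [("bands only", bands), ("bands + CHM", np.hstack([bands, chm]))]:
    x_tr, x_te, y_tr, y_te = train_test_split(features, labels, test_size=0.3, random_state=0)
    for clf in (RandomForestClassifier(n_estimators=200, random_state=0), SVC(kernel="rbf")):
        acc = accuracy_score(y_te, clf.fit(x_tr, y_tr).predict(x_te))
        print(f"{clf.__class__.__name__:22s} {name:12s} overall accuracy = {acc:.3f}")
```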