Plants
  • Article
  • Open Access

30 December 2025

Research on Intelligent Wood Species Identification Method Based on Multimodal Texture-Dominated Features and Deep Learning Fusion

1 College of Materials and Chemical Engineering, Southwest Forestry University, Kunming 650224, China
2 College of Big Data and Intelligent Engineering, Southwest Forestry University, Kunming 650224, China
* Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Plants 2026, 15(1), 108; https://doi.org/10.3390/plants15010108
This article belongs to the Special Issue Advances in Dendrological Research: Celebrating the 40th Anniversary of the Dendrology Branch of the Chinese Society of Forestry

Abstract

To address the limitations of traditional wood species identification, namely its reliance on manual experience, slow identification speed, and insufficient robustness, this study takes hyperspectral images of cross-sections of 10 typical wood species commonly found in Puer, Yunnan, China, as its research object. It applies a range of spectral and texture feature extraction techniques and proposes an intelligent wood species identification method based on the fusion of multimodal texture-dominated features and deep learning. First, an SOC710-VP hyperspectral imager is used to collect hyperspectral data under standard laboratory lighting conditions, and a hyperspectral database of wood cross-sections is constructed through reflectance calibration. Second, in the spectral space construction stage, a comprehensive similarity matrix is built from four types of spectral similarity indicators; representative bands are selected using two Max–Min strategies, partitioned quota and coverage awareness; and multi-scale wavelet fusion is performed to generate high-resolution fused images, from which interest-point features are extracted. Third, in the texture space construction stage, three types of texture feature matrices are generated from the PCA first-principal-component map, and interest-point features are extracted. Fourth, in the complementary collaborative learning stage, the ST-former model is constructed: the weights of the trained SpectralFormer++ and TextureFormer are imported, and only the fusion weights are optimized, realizing category-adaptive spectral–texture feature fusion. Experimental results show that the overall classification accuracy of the proposed joint model reaches 90.27%, about 8% higher on average than that of the single-modal models.
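The abstract does not spell out the reflectance calibration step; a minimal sketch of the standard white/dark reference correction commonly used with hyperspectral imagers such as the SOC710-VP is shown below (array names, shapes, and the clipping choice are assumptions, not details from the paper):

```python
import numpy as np

def calibrate_reflectance(raw, white_ref, dark_ref, eps=1e-8):
    """Standard white/dark reference calibration for a hyperspectral cube.

    raw, white_ref, dark_ref: float arrays of shape (rows, cols, bands);
    white_ref is a scan of a near-100% reflectance panel and dark_ref is
    captured with the shutter closed (sensor dark current).
    Returns relative reflectance clipped to [0, 1].
    """
    reflectance = (raw - dark_ref) / (white_ref - dark_ref + eps)
    return np.clip(reflectance, 0.0, 1.0)
```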
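The two Max–Min band selection strategies (partitioned quota and coverage awareness) are not detailed in the abstract; the sketch below shows only the generic greedy Max–Min selection over a band dissimilarity matrix, with the partitioning and coverage logic omitted (function and variable names are hypothetical):

```python
import numpy as np

def max_min_band_selection(dissimilarity, k):
    """Greedy Max-Min selection of k representative bands.

    dissimilarity: (n_bands, n_bands) matrix, e.g. 1 minus a combined
    spectral similarity. Each step adds the band farthest from the
    already-selected set, spreading picks across the spectrum.
    """
    dissimilarity = np.asarray(dissimilarity, dtype=float)
    # Seed with the band that is on average most dissimilar to the rest.
    selected = [int(np.argmax(dissimilarity.mean(axis=1)))]
    while len(selected) < k:
        # Distance of every band to its nearest already-selected band.
        d_to_selected = dissimilarity[:, selected].min(axis=1)
        d_to_selected[selected] = -np.inf  # never re-pick a band
        selected.append(int(np.argmax(d_to_selected)))
    return sorted(selected)
```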
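For the texture space, texture feature matrices are generated from the PCA first-principal-component map of the hyperspectral cube; a minimal sketch of that projection follows (the use of scikit-learn and the gray-level rescaling are assumptions):

```python
import numpy as np
from sklearn.decomposition import PCA

def first_pc_map(cube):
    """Project a (rows, cols, bands) hyperspectral cube onto its first
    principal component, yielding a single-channel image from which
    texture matrices (e.g. a gray-level co-occurrence matrix) can be
    computed."""
    rows, cols, bands = cube.shape
    pixels = cube.reshape(-1, bands)
    pc1 = PCA(n_components=1).fit_transform(pixels)  # (rows*cols, 1)
    pc1_map = pc1.reshape(rows, cols)
    # Rescale to [0, 255] for texture descriptors expecting gray levels.
    span = pc1_map.max() - pc1_map.min()
    pc1_map = (pc1_map - pc1_map.min()) / (span + 1e-8)
    return (pc1_map * 255).astype(np.uint8)
```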
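The complementary collaborative learning stage freezes the two pretrained single-modal networks and optimizes only the fusion weights; a minimal PyTorch sketch of that training pattern is given below. The module names, feature shapes, and per-class sigmoid blend are assumptions for illustration, not the paper's ST-former architecture:

```python
import torch
import torch.nn as nn

class FusionHead(nn.Module):
    """Category-adaptive weighted fusion of two frozen feature extractors.

    spectral_net and texture_net stand in for the trained
    SpectralFormer++ and TextureFormer backbones (pretrained weights are
    assumed loaded); only the per-class fusion weights and the
    classifier are trainable.
    """
    def __init__(self, spectral_net, texture_net, feat_dim, n_classes):
        super().__init__()
        self.spectral_net = spectral_net
        self.texture_net = texture_net
        for p in self.spectral_net.parameters():
            p.requires_grad = False          # freeze pretrained backbone
        for p in self.texture_net.parameters():
            p.requires_grad = False
        # One learnable spectral-vs-texture mixing logit per class.
        self.fusion_logits = nn.Parameter(torch.zeros(n_classes))
        self.classifier = nn.Linear(feat_dim, n_classes)

    def forward(self, x_spec, x_tex):
        f_spec = self.spectral_net(x_spec)   # (batch, feat_dim)
        f_tex = self.texture_net(x_tex)      # (batch, feat_dim)
        alpha = torch.sigmoid(self.fusion_logits)        # (n_classes,)
        # Class-conditional blend: (batch, n_classes, feat_dim).
        fused = (alpha[:, None] * f_spec[:, None, :]
                 + (1 - alpha)[:, None] * f_tex[:, None, :])
        # Score each class c against its own fused feature vector.
        logits = (fused * self.classifier.weight).sum(-1) + self.classifier.bias
        return logits
```

At training time only `fusion_logits` and the classifier parameters receive gradients, which matches the abstract's description of importing trained single-modal weights and learning just the fusion.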
