Article

Selection of Specimen Orientations for Hyperspectral Identification of Wild and Cultivated Ophiocordyceps sinensis

1 The School of Information Engineering, Xizang Minzu University, Xianyang 712089, China
2 Aerospace Information Research Institute, Chinese Academy of Sciences, Beijing 100094, China
3 The School of Computer Science and Engineering, Institute of Disaster Prevention, Langfang 065201, China
4 NMPA Key Laboratory for Quality Control of Traditional Chinese Medicine, Tibetan Medicine, Xizang Institute for Food and Drug Control, Lhasa 850000, China
5 State Key Laboratory Breeding Base of Dao-di Herbs, National Resource Center for Chinese Materia Medica, Chinese Academy of Chinese Medical Sciences, Beijing 100700, China
* Authors to whom correspondence should be addressed.
Processes 2026, 14(3), 412; https://doi.org/10.3390/pr14030412
Submission received: 25 December 2025 / Revised: 18 January 2026 / Accepted: 21 January 2026 / Published: 24 January 2026

Abstract

Ophiocordyceps sinensis is a precious medicinal material with significant pharmacological and economic value. However, the visual similarity between its wild and cultivated forms poses a challenge for authentication. This study investigates the influence of specimen orientation on the accuracy of hyperspectral identification. Hyperspectral data were systematically acquired from four standard specimen orientations (left lateral, right lateral, dorsal, and ventral) for each sample. Random Forest (RF), Support Vector Machine (SVM), Logistic Regression (LR), and Fully Connected Neural Network (FCNN) models were trained and evaluated using both single-orientation and multi-orientation fused data. Results indicate that the LR model achieved superior and stable performance, with an average identification accuracy exceeding 98%. Crucially, for all tested models, no statistically significant difference in identification accuracy was observed across the different specimen orientations. This finding demonstrates that specimen orientation does not significantly influence identification accuracy. The conclusion was further corroborated in experiments using randomly orientation-fused datasets, in which model performance remained consistent and reliable. It is therefore concluded that precise specimen orientation control is unnecessary for the hyperspectral identification of Ophiocordyceps sinensis. This insight substantially simplifies the hardware design of dedicated identification devices by eliminating the need for complex orientation-fixing mechanisms and facilitating the standardization of operational protocols. The study provides a practical theoretical foundation for developing cost-effective, user-friendly, and widely applicable identification instruments for Ophiocordyceps sinensis and offers a reference for similar non-destructive testing applications involving anisotropic medicinal materials.

1. Introduction

Traditional Chinese Medicine plays an indispensable role in global healthcare. Ophiocordyceps sinensis, a renowned and valuable medicinal fungus, possesses unique pharmacological properties and tonic effects, including anti-aging [1], anti-inflammatory [2], and antioxidant [3] activities. Ophiocordyceps sinensis is famous as one of the “three treasures of Chinese medicine”, along with ginseng and pilose antler [4]. However, factors such as climate change and over-excavation have led to a decline in wild Ophiocordyceps sinensis resources, with the species being included in the International Union for Conservation of Nature (IUCN) Red List of Threatened Species in 2020 [5]. The limited supply, coupled with increasing market demand, has driven the development of cultivated alternatives. Industrial-scale cultivation has been successfully established, with an annual output now exceeding 30 tons [6].
Ophiocordyceps sinensis is predominantly distributed in the Tibetan Plateau region of China and typically grows at altitudes ranging from 3000 to 5000 m [7]. Its natural growth environment is characterized by low temperatures, strong ultraviolet radiation, and hypoxic conditions. Through long-term evolution, Ophiocordyceps sinensis has developed a unique life cycle that enables adaptation to these extreme alpine environments. Zhang et al. studied Ophiocordyceps sinensis in the extreme environment of the Tibetan Plateau and confirmed the evolutionary relationship between the fungus and its host insect genotypes, providing insights for artificial cultivation through the selection of suitable host insects [8]. In recent years, significant progress has been made in the artificial induction of sexual fruiting bodies of Ophiocordyceps sinensis through cultivation on artificial media or inoculation of host insect larvae, providing important material support for scientific research and industrial development [9].
Cultivated Ophiocordyceps sinensis shares the same host insect species, fungus, life cycle, infection process, and even macroscopic and microscopic morphology with its wild counterpart [10]. Nevertheless, there has been no comprehensive review detailing the differences between wild and cultivated Ophiocordyceps sinensis [11]. The substantial disparity in economic value has, however, incentivized the fraudulent sale of cultivated products as wild, creating market confusion. Their visual similarity makes a reliable distinction by consumers nearly impossible, thereby undermining market integrity.
Current methods for authenticating wild and cultivated Ophiocordyceps sinensis include protein composition analysis [12] and adenosine profiling [13]. These techniques typically require complex sample preparation and are inherently destructive, limiting their practical application. Developing a more accurate, rapid, and practical identification method is therefore imperative. Hyperspectral technology, with its multi-band, high-resolution capabilities, presents a potential solution in this context [14]. Recognized for its speed and non-destructive nature [15], it has been successfully deployed in fields ranging from geological survey [16] and environmental monitoring to medical diagnosis [17]. Within the domain of Chinese medicinal materials, it has been effectively applied to the identification of Panax ginseng, Salvia miltiorrhiza, and Radix glycyrrhizae, for purposes such as geographical origin and age determination [18,19,20,21,22].
Compared with traditional identification techniques for Ophiocordyceps sinensis, hyperspectral technology offers significant advantages due to its narrow spectral bandwidths and high spectral resolution, enabling comprehensive characterization of material properties [23]. When light interacts with different substances, specific wavelengths are selectively absorbed, reflected, or scattered, forming unique spectral signatures that can be exploited to analyze material composition and physiological status. In recent years, hyperspectral and related spectroscopic techniques have been increasingly applied across various fields. Zhang et al. employed near-infrared spectroscopy (NIRS) combined with portable instruments and model transfer algorithms to achieve rapid, non-destructive in situ identification of wild Ophiocordyceps sinensis [24], while Du et al. utilized Fourier-transform infrared photoacoustic spectroscopy (FTIR-PAS) for fast and non-destructive identification and traceability of natural specimens [25]. Beyond medicinal materials, hyperspectral imaging covering the 350–2500 nm spectral range has also been successfully applied under field conditions for rapid, non-destructive prediction and evaluation of key yield and quality parameters in crops such as alfalfa [26].
These advances demonstrate the considerable potential of hyperspectral technology in the authentication of traditional Chinese medicinal materials. However, for anisotropic materials such as Ophiocordyceps sinensis, structural differences between dorsal, ventral, and lateral aspects may introduce spectral variability during data acquisition. This practical factor has rarely been systematically examined in existing studies, and the influence of specimen orientation on hyperspectral identification accuracy remains unclear, representing an important gap between laboratory research and real-world instrument deployment.
Ophiocordyceps sinensis comprises two parts: the caterpillar or the base, which is a fungus-filled endosclerotium, and the top fruiting body or stroma [27]. The caterpillar measures 3–5 cm in length and 0.3–0.8 cm in diameter. Its surface exhibits 20–30 annulations, with those near the head being finer. Eight pairs of legs are present, with the four central pairs being most distinct. Visually, the caterpillar is not isotropic, displaying clear differentiation between its dorsal and ventral aspects. To date, no study has systematically investigated whether the selection of different specimen orientations (e.g., dorsal, ventral, and lateral) during identification influences the outcome. The four specimen orientations of Ophiocordyceps sinensis are illustrated in Figure 1.
Therefore, this study aims to utilize hyperspectral technology, machine learning and deep learning to systematically evaluate the impact of different specimen orientations on the identification accuracy of wild and cultivated Ophiocordyceps sinensis. The core objective is to determine whether an optimal specimen orientation exists, thereby providing direct, reliable data support and a theoretical foundation for the subsequent hardware design of hyperspectral discriminators and the standardization of their operational workflows.

2. Materials and Methods

2.1. Sample Preparation and Sources

A total of 1121 Ophiocordyceps sinensis samples were used in this study, categorized into two classes: 681 authenticated wild samples and 440 cultivated samples. Detailed sources and quantities are provided in Table 1.
Samples were collected at their geographic origins by personnel dispatched from the Xizang Institute for Food and Drug Control. Xizang, a provincial autonomous region of China located in the southwest of the Tibetan Plateau, is a major production area for wild Ophiocordyceps sinensis [28], and the wild samples in this study originated from five prefecture-level cities within the region. Variations in altitude, climate, and local ecology among these areas effectively encompass the natural diversity present within wild populations. All wild samples were authenticated by experts. The cultivated Ophiocordyceps sinensis samples were sourced from multiple commercial brands in several production regions, including Qinghai, Zhejiang, and Sichuan Provinces, among others. These samples represent the standardized output of commercial cultivation. Post-collection, all samples underwent only surface cleaning to remove dust and loose debris, with no further processing applied, ensuring the specimens remained undamaged. The sample collection process was conducted rigorously to ensure the conclusions drawn are based on a comprehensive and authentic dataset.

2.2. Hyperspectral Image Acquisition and Specimen Orientation Definition

Despite the evident advantages of hyperspectral technology, its engineering application faces specific operational challenges. During the development of an Ophiocordyceps sinensis identification instrument, a critical yet unresolved question emerged: what specimen orientation should the sample be placed in to achieve optimal identification accuracy? The answer directly determines whether the hardware requires a customized platform or mechanism to constrain the sample’s orientation with a specific surface facing upward, and it influences the formulation of end-user operational protocols. If spectral information varies significantly across different specimen orientations, standardizing an optimal specimen orientation would be necessary. Conversely, if no such difference exists, instrument design could be simplified, allowing arbitrary sample placement. However, this aspect has not been systematically explored.
To investigate the optimal specimen orientation for hyperspectral identification, a systematic specimen orientation data acquisition protocol was implemented. Each sample was sequentially placed on the acquisition platform in four standard specimen orientations (left lateral, right lateral, dorsal, and ventral), and its hyperspectral data were captured separately. For subsequent analysis and traceability, the acquired data were labeled using a standardized naming convention: suffixes “A”, “B”, “C”, and “D” were appended to the sample name, corresponding to left lateral, right lateral, dorsal, and ventral orientations, respectively. The workflow for hyperspectral data acquisition is illustrated in Figure 2.
Hyperspectral data were acquired using an ASD FieldSpec4 spectroradiometer. ASD is a brand of the UK-based company Malvern Panalytical. The instrument comprises three discrete detectors: VNIR (Visible and Near-Infrared, 350–1000 nm), SWIR1 (Short-Wave Infrared 1, 1000–1800 nm), and SWIR2 (Short-Wave Infrared 2, 1800–2500 nm), enabling high-precision full-spectrum acquisition. The spectroradiometer was installed within a controlled darkroom environment, configured with quartz tungsten halogen lamps, and maintained under consistent ambient conditions. The arrangement of the spectroradiometer, light source, and operational environment is illustrated in Figure 2.
Data acquisition was performed in the darkroom following this procedure: the instrument was warmed up for 30 min, followed by baseline calibration using a standard white reference panel, with reflectance defined as the radiance of the sample divided by the radiance measured from the white panel; the sample was fixed on the testing platform in the predetermined specimen orientation, maintaining consistent probe distance and illumination angle; five consecutive spectral measurements were taken for each specimen orientation, and after removing outliers, the average was computed as the effective spectral reflectance curve for that specimen orientation; this process was repeated for all samples and specimen orientations, resulting in four averaged hyperspectral reflectance curves for each Ophiocordyceps sinensis sample.
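The calibration-and-averaging step above can be sketched as follows. The function name, the array layout, and the z-score outlier rule are illustrative assumptions, since the text does not specify its outlier-removal criterion:

```python
import numpy as np

def effective_reflectance(sample_scans, white_scans, z_thresh=2.5):
    """Average repeated scans into one reflectance curve per orientation.

    sample_scans, white_scans: arrays of shape (n_scans, n_bands) holding
    raw radiance readings of the sample and the white reference panel.
    Reflectance is sample radiance divided by the white-panel radiance;
    scans whose overall level deviates strongly from the rest are dropped
    as outliers (a hypothetical z-score rule) before averaging.
    """
    reflectance = sample_scans / white_scans.mean(axis=0)  # (n_scans, n_bands)
    scan_means = reflectance.mean(axis=1)
    z = np.abs(scan_means - scan_means.mean()) / (scan_means.std() + 1e-12)
    kept = reflectance[z < z_thresh]
    return kept.mean(axis=0)  # one averaged curve for this orientation
```

Running this once per orientation yields the four averaged curves per sample described above.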
Due to significant noise observed in the 350–400 nm and 2200–2500 nm wavelength ranges, along with large amplitude fluctuations, only hyperspectral reflectance information within the 400–2200 nm range was selected for analysis. Savitzky–Golay smoothing was subsequently applied to suppress high-frequency noise. Reflectance curves for the four specimen orientations of 20 randomly selected samples are presented in Figure 3.
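A minimal sketch of this band selection and smoothing, assuming 1 nm sampling over the instrument's 350–2500 nm range; the Savitzky–Golay window length and polynomial order shown are placeholders, as the text does not report them:

```python
import numpy as np
from scipy.signal import savgol_filter

# The ASD FieldSpec4 reports one value per nanometer from 350 to 2500 nm.
wavelengths = np.arange(350, 2501)

def preprocess(curve, window_length=11, polyorder=2):
    """Keep the 400-2200 nm range, then apply Savitzky-Golay smoothing.

    window_length and polyorder are illustrative defaults, not values
    taken from the study.
    """
    mask = (wavelengths >= 400) & (wavelengths <= 2200)
    return savgol_filter(curve[mask], window_length, polyorder)
```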
The acquired data were used for subsequent comparative analysis. By training and evaluating the classification performance of models on data from each specimen orientation, the impact of specimen orientation on identification accuracy was quantitatively assessed.

2.3. Machine Learning Methods

Random Forest (RF), Support Vector Machine (SVM), and Logistic Regression (LR) were selected as classification models for distinguishing wild and cultivated Ophiocordyceps sinensis, with the ultimate goal of identifying an optimal model for potential deployment in a dedicated identification instrument. These models were chosen because they are classical, well-established machine learning algorithms widely applied in hyperspectral analysis and offer a favorable balance between computational efficiency and interpretability.
RF is a powerful and widely used ensemble learning algorithm [29]. It operates by constructing multiple decision trees and aggregating their predictions to form a robust model for classification tasks. The core principle involves training each tree on a random subset of the training data and a random subset of features. Predictions from individual trees are combined through voting or averaging to produce the final output.
SVM is fundamentally based on finding an optimal hyperplane within the data space to separate samples of different classes [30]. Its distinctiveness lies in its reliance on support vectors—the training samples closest to the hyperplane—which critically determine model performance and generalization capability. By employing kernel functions, SVM can project data into higher-dimensional spaces, enabling effective handling of nonlinear classification problems.
LR models the probability of an event occurrence based on the logistic function and is commonly employed for binary prediction [31]. The core idea involves mapping a linear combination of input features to a probability between 0 and 1. Key advantages of LR include its linear nature, which contributes to relatively fast training and prediction speeds, making it suitable for large datasets. In addition, LR provides feature coefficients, facilitating model interpretation and the identification of variables with significant influence on classification outcomes.

2.4. Deep Learning Methods

The Fully Connected Neural Network (FCNN), often referred to as the Multilayer Perceptron (MLP), is a foundational deep learning architecture. It comprises an input layer, multiple hidden layers, and an output layer, characterized by dense connections where each neuron connects to every neuron in the subsequent layer. By incorporating non-linear activation functions, FCNNs can map input data to high-dimensional feature spaces, effectively capturing complex non-linear relationships. Model training relies on the backpropagation algorithm, which iteratively optimizes weights via gradient descent to minimize prediction error.

2.5. Validation

The hyperspectral recognition system for wild and cultivated Ophiocordyceps sinensis was developed in Python 3.11, with machine learning models constructed using scikit-learn version 1.0.2. To ensure reproducibility, the random state for all machine learning models was fixed at 42. For the RF model, the number of estimators was set to 500 to enhance model stability and robustness. The SVM classifier utilized the Radial Basis Function kernel to handle non-linear relationships, with the regularization parameter C set to 1.0, kernel coefficient γ set to ‘scale’, and tolerance set to 0.001. The LR model employed the L-BFGS solver with L2 regularization. To guarantee convergence on the high-dimensional hyperspectral data, the maximum number of iterations was increased to 10,000.
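The hyperparameter settings listed above can be collected into a small helper; this is a sketch of the reported scikit-learn configuration, with `build_models` being a hypothetical name:

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

RANDOM_STATE = 42  # fixed across all models for reproducibility

def build_models():
    """Instantiate the three classifiers with the settings reported above."""
    return {
        "RF": RandomForestClassifier(n_estimators=500,
                                     random_state=RANDOM_STATE),
        "SVM": SVC(kernel="rbf", C=1.0, gamma="scale", tol=0.001,
                   random_state=RANDOM_STATE),
        "LR": LogisticRegression(solver="lbfgs", penalty="l2",
                                 max_iter=10_000,
                                 random_state=RANDOM_STATE),
    }
```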
In this study, an FCNN model was constructed with 9 hidden layers and 1 output layer. The neuron configuration for the hidden layers was [512, 1024, 512, 256, 1024, 1024, 1024, 512, 16]. This design aims to extract complex features through high-dimensional mapping, while the intermediate “bottleneck” layers facilitate effective feature dimensionality reduction. To mitigate the vanishing gradient problem and accelerate convergence, the Rectified Linear Unit (ReLU) activation function is applied to all hidden layers, and a Batch Normalization (BN) layer follows each fully connected operation. Given the substantial number of model parameters, L2 regularization and Dropout are incorporated into the hidden layers to enhance generalization ability and prevent overfitting.
The output layer consists of 2 neurons and utilizes the Softmax function to output the classification probabilities for wild and cultivated Ophiocordyceps sinensis. The model parameters are updated using the Adam optimizer, whose adaptive learning rate mechanism effectively handles sparse gradients. The initial learning rate was set to 0.001. The Cross-Entropy Loss function is selected to measure the divergence between the predicted probability distribution and the ground truth labels. Considering the sample characteristics of the hyperspectral dataset, the training batch size was set to 6 to increase the stochasticity during optimization. The model was trained for a total of 800 epochs.
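The layer dimensions described above can be illustrated with an untrained NumPy forward pass. Batch Normalization, Dropout, L2 regularization, and Adam training are deliberately omitted, so this sketch only demonstrates the architecture's shapes and the Softmax output, not the trained model:

```python
import numpy as np

HIDDEN = [512, 1024, 512, 256, 1024, 1024, 1024, 512, 16]
N_BANDS = 1801   # 400-2200 nm at 1 nm resolution
N_CLASSES = 2    # wild vs. cultivated

rng = np.random.default_rng(42)
sizes = [N_BANDS] + HIDDEN + [N_CLASSES]
# He-style initialization for ReLU layers (illustrative only).
weights = [rng.normal(0.0, np.sqrt(2.0 / m), (m, n))
           for m, n in zip(sizes, sizes[1:])]

def forward(x):
    """Forward pass: Linear -> ReLU for each hidden layer, Softmax output."""
    for w in weights[:-1]:
        x = np.maximum(x @ w, 0.0)           # ReLU
    logits = x @ weights[-1]
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)  # class probabilities
```

A batch of 6 spectra (the training batch size above) maps to a (6, 2) probability matrix whose rows sum to one.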
The appropriate partitioning of a dataset is fundamental to ensuring model validity. During this process, it is crucial to maintain both the balance and representativeness between the training and test sets. The training set must be sufficiently large to enable the model to learn the underlying patterns within the data, while the test set must also be adequately sized to provide a statistically robust performance evaluation. In related research, Zhao et al. employed an 8:2 ratio split for their dataset when using RF for the year identification of ginseng [20]. Similarly, Gámez et al. utilized a 7:3 ratio split when applying various machine learning models to discriminate quality parameters in alfalfa [26]. Therefore, both an 8:2 and a 7:3 partitioning ratio were used for the dataset, and the results obtained from the two ratios were compared to determine which is more suitable for the experiment.
The dataset of 1121 samples was randomly split into training and test sets at two different ratios: 8:2 (resulting in 897 training and 224 test samples) and 7:3 (resulting in 785 training and 336 test samples). The test set was held out from model training and used solely for independent validation. The splitting process ensured a balanced distribution of both sample classes within the test set.
To enhance result stability, the random splitting procedure described above was independently repeated 10 times. For each repetition, training and test sets were generated, and the performance of different models across different specimen orientations was evaluated.
Prior to model training, all data underwent standardization. This involved calculating the mean and variance for each feature within the training set, followed by normalizing both the training and test sets using these computed statistics to ensure consistent scaling during training and validation.
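The split-and-standardize procedure can be sketched with scikit-learn; `split_and_scale` is a hypothetical helper, and the stratified split mirrors the balanced class distribution described above. Varying `seed` reproduces the 10 independent repetitions:

```python
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

def split_and_scale(X, y, test_size=0.2, seed=0):
    """Stratified split (8:2 by default); the scaler is fitted on the
    training set only, then applied to both sets, so no test-set
    statistics leak into training."""
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=test_size, stratify=y, random_state=seed)
    scaler = StandardScaler().fit(X_tr)
    return scaler.transform(X_tr), scaler.transform(X_te), y_tr, y_te
```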
Performance evaluation primarily employed accuracy as the metric, reflecting the proportion of correctly classified samples. The final reported performance values represent the average of the 10 independent validation runs. This approach mitigates potential bias arising from a single random partition, yielding a more stable reflection of the models’ true performance.
$$\mathrm{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN}$$
where TP: True Positives (wild samples correctly predicted as wild); TN: True Negatives (cultivated samples correctly predicted as cultivated); FP: False Positives (cultivated samples incorrectly predicted as wild); FN: False Negatives (wild samples incorrectly predicted as cultivated).
$$\mathrm{Mean\;Accuracy} = \frac{1}{N}\sum_{k=1}^{N}\mathrm{Accuracy}_k$$
where $N$ is the number of independent experiments and $\mathrm{Accuracy}_k$ is the accuracy from the $k$-th experiment.
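These two metrics translate directly into code; a minimal sketch:

```python
def accuracy(tp, tn, fp, fn):
    """Accuracy = (TP + TN) / (TP + TN + FP + FN)."""
    return (tp + tn) / (tp + tn + fp + fn)

def mean_accuracy(accuracies):
    """Average accuracy over N independent train/test repetitions."""
    return sum(accuracies) / len(accuracies)
```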

3. Results

3.1. Single-Orientation Classification Results

Initially, RF, SVM, LR, and FCNN models were trained separately on hyperspectral data acquired from each of the four specimen orientations of Ophiocordyceps sinensis. This was done to systematically evaluate the performance of machine learning in discriminating wild from cultivated samples and to assess the influence of observation orientation. All results represent the mean accuracy from 10 independent random splits. The results for the dataset split at an 8:2 ratio are presented in Table 2, Table 3, Table 4 and Table 5.
The results for the dataset split at a 7:3 ratio are presented in Table 6, Table 7, Table 8 and Table 9.
A comparison was conducted between the experimental results obtained using 8:2 and 7:3 dataset splits, and the comparative results are shown in Figure 4.
Under four models and four specimen orientations, the mean identification accuracies obtained using two dataset split ratios (8:2 and 7:3) were compared for each model and orientation. The results indicate that the performance gap between the two ratios is minimal, with the difference in mean identification accuracy ranging from 0 to 1.2%. Notably, the 8:2 ratio shows a slight advantage over the 7:3 ratio. Therefore, the subsequent experiments in this study primarily adopt the 8:2 dataset split ratio.
A systematic analysis of the above experimental results for the dataset split at an 8:2 ratio was conducted to evaluate the effects of different models and specimen orientations on hyperspectral identification. The mean accuracy comparison of four models is shown in Figure 5.
The LR model demonstrated the best accuracy and most stable identification performance across all specimen orientations, achieving the highest mean accuracy (up to 99.4%), with the lowest being 99.1%. In contrast, RF, SVM and FCNN models performed comparatively lower, with maximum accuracies reaching only 89.7%, 91.8% and 82%, respectively.
Comparing model performances reveals that the LR model exhibits significantly superior identification capability compared to RF, SVM and FCNN. Its mean accuracy remains consistently above 99.1%, with minimal fluctuation across the 10 runs, indicating exceptional performance for this hyperspectral identification task.
To assess the impact of different specimen orientations on the identification of Ophiocordyceps sinensis, we performed a comprehensive comparison of the ten validation results for each of the four orientations across the four models, as shown in Figure 6.
Analyzing the effect of specimen orientation shows that for any given model, the mean identification accuracies obtained from different specimen orientations are remarkably similar. For instance, in the optimal LR model, the range (difference between the highest and lowest) of mean accuracy across the four specimen orientations is merely 0.3%. This indicates that, within the context of this hyperspectral identification framework, the specimen orientation of Ophiocordyceps sinensis is not a critical variable influencing identification accuracy.
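The text does not name the statistical test behind this comparison. One common way to compare the per-run accuracies of the four orientations would be a one-way ANOVA, sketched here with SciPy as an illustrative, not authoritative, choice:

```python
from scipy.stats import f_oneway

def orientations_differ(acc_a, acc_b, acc_c, acc_d, alpha=0.05):
    """Test whether mean accuracy differs across the four orientations.

    Each argument is the list of accuracies from the repeated random
    splits for one orientation (A-D). Returns the ANOVA p-value and a
    flag indicating significance at level alpha.
    """
    _, p = f_oneway(acc_a, acc_b, acc_c, acc_d)
    return p, p < alpha
```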

3.2. Multi-Orientation Data Fusion Results

To further investigate the influence of specimen orientation on model performance, a mixed-orientation dataset was constructed. The construction method was as follows: for each sample, one specimen orientation was randomly selected from its four available hyperspectral data records. The randomly selected data from all samples were then aggregated to form a new dataset. Ten such experiments were conducted, with a new mixed-orientation dataset constructed for each. The distribution of specimen orientation counts within the mixed-orientation datasets across the ten experiments is shown in Table 10.
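The mixed-orientation construction can be sketched as follows, assuming the four orientation curves per sample are stacked along a second array axis; `build_mixed_dataset` is a hypothetical helper:

```python
import numpy as np

def build_mixed_dataset(spectra, seed=0):
    """spectra: array of shape (n_samples, 4, n_bands) holding the four
    orientation curves (A-D) for every sample. One orientation is drawn
    at random per sample, mirroring the mixed-orientation datasets built
    for each of the ten experiments."""
    rng = np.random.default_rng(seed)
    n = spectra.shape[0]
    picks = rng.integers(0, 4, size=n)       # orientation index per sample
    return spectra[np.arange(n), picks], picks
```

Calling this with ten different seeds reproduces the ten mixed-orientation datasets whose orientation counts are tabulated in Table 10.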
In the ten mixed-orientation data experiments, the four models were retrained and evaluated to simulate model performance in scenarios where specimen orientation is uncertain or random, as might occur in practical applications. The training methodology was as follows: the four models were trained on the mixed-orientation dataset, followed by ten independent validation runs per experiment. The mean accuracy from these ten runs was taken as the performance metric for that specific experiment. This process was repeated for ten distinct experiments, and the final performance for each model under mixed-orientation conditions was calculated as the mean accuracy across these ten experiments. The results are presented in Table 11. For more detailed experimental results, please refer to Table A1, Table A2, Table A3 and Table A4 in the Appendix A.
The results show that the LR model maintains the best performance on the fused dataset, achieving a mean accuracy of 98.9%. Furthermore, its results across the ten experiments are highly concentrated, again confirming its excellent stability. The mean accuracies for RF, SVM and FCNN models on the fused data are 90.3%, 91.0% and 81.7%, respectively, still showing a clear performance gap compared to LR.
Compared to the previous results based on single-orientation training, the performance of all models on the fused data did not exhibit significant fluctuation or decline. This observation, on one hand, further substantiates the conclusion that specimen orientation does not affect identification performance. On the other hand, it indicates that, even when specimen orientation is not perfectly uniform or is random in practical instrument use, the models can maintain stable and reliable classification capability. This provides additional justification for simplifying instrument operation and hardware design.

4. Discussion

The superior performance of the relatively simple LR model over the more complex RF, SVM and FCNN models suggests that the differences between wild and cultivated Ophiocordyceps sinensis within the high-dimensional hyperspectral feature space might be effectively separable by a near-linear decision boundary. The inherent simplicity, reduced propensity for overfitting, and built-in regularization of LR may contribute to its superior generalization, particularly when handling hyperspectral data characterized by high multicollinearity.
Concurrently, this study demonstrates that the specimen orientation of Ophiocordyceps sinensis samples does not significantly influence the accuracy of their hyperspectral identification. This conclusion carries important positive implications for advancing the development of identification instruments. The spectral reflectance similarity across the four specimen orientations of a representative sample is illustrated in Figure 7.
Although the caterpillar body of Ophiocordyceps sinensis exhibits morphological anisotropy, hyperspectral data reflect the comprehensive absorption and scattering characteristics of light by the material, which encode chemical composition information. The lack of significant accuracy variation across the four specimen orientations may be attributed to the relatively uniform distribution of key chemical constituents (e.g., adenosine, polysaccharides, characteristic proteins) within the cross-section of the caterpillar body. Consequently, the spectroradiometer captures these core chemical signatures regardless of the acquisition geometry.
This finding greatly simplifies the hardware design philosophy and operational procedures for hyperspectral discriminators of Ophiocordyceps sinensis. It implies that complex mechanical positioning systems designed to constrain samples to a specific pose are unnecessary, thereby reducing manufacturing costs, system complexity, and potential failure rates. Furthermore, future standardized protocols will not require stringent directional alignment; simply placing the sample stably within the detection field will suffice. This lowers the technical threshold for operators and enhances high-throughput detection efficiency.
While strict specimen orientation selection is not required, maintaining consistent data acquisition conditions remains crucial for ensuring model generalizability and result comparability. This includes stable illumination conditions, fixed probe distance, uniform background, and standardized calibration procedures. Our study thus shifts the focus of device development and operational specification from “how to precisely position the sample” to “how to ensure consistent and standardized acquisition conditions.”
From a practical application standpoint, this research empirically demonstrates that hyperspectral-based identification of Ophiocordyceps sinensis does not require strict adherence to a specific specimen orientation. This lays a solid theoretical foundation for developing a low-cost, easy-to-operate, and efficient portable identification instrument.
Future research will focus on two main directions. First, beyond specimen orientation, the effects of factors such as production year, storage conditions, and moisture content on hyperspectral characteristics will be further investigated. In the present study, these variables were rigorously controlled, and more systematic and in-depth analyses are planned in subsequent work. Second, deep learning approaches have demonstrated considerable advantages in hyperspectral image analysis. In future studies, larger-scale datasets and increased computational resources will be employed to explore deep learning–based identification methods.

5. Conclusions

By systematically acquiring hyperspectral data from four specimen orientations of Ophiocordyceps sinensis and analyzing them with four models, this study clarifies the impact of specimen orientation on the identification accuracy of its wild and cultivated forms. The principal conclusions are as follows: Firstly, the Logistic Regression (LR) model demonstrated optimal performance for this task, achieving a mean accuracy exceeding 98% with notable stability. Secondly, for all tested models, no statistically significant difference in identification accuracy was observed between different specimen orientations, indicating that specimen orientation does not affect the outcome of hyperspectral identification for Ophiocordyceps sinensis.
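The "no statistically significant difference" claim can be checked against the reported per-run accuracies with a one-way ANOVA. The values below are transcribed from Table 4 (LR model, 8:2 split); ANOVA is one reasonable choice of test, since the paper does not state which test it used.

```python
from scipy.stats import f_oneway

# LR accuracies per orientation over ten runs (Table 4, 8:2 split)
left = [98.6, 99.5, 99.5, 99.1, 99.5, 99.1, 99.1, 100.0, 99.5, 99.5]
right = [99.5, 99.1, 99.1, 98.2, 99.5, 100.0, 98.6, 98.6, 98.6, 99.1]
dorsal = [99.5, 99.5, 99.1, 99.5, 99.1, 99.5, 99.1, 97.8, 99.5, 99.1]
ventral = [99.1, 98.6, 99.1, 100.0, 99.1, 100.0, 98.6, 99.1, 99.1, 98.6]

# One-way ANOVA across the four orientation groups
f_stat, p_value = f_oneway(left, right, dorsal, ventral)
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")
# p > 0.05: the orientation means are statistically indistinguishable
```

The between-orientation spread (roughly 0.3 percentage points) is smaller than the run-to-run spread within each orientation, which is why the test finds no significant effect.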
Based on these findings, subsequent development of a hyperspectral discriminator for Ophiocordyceps sinensis need not enforce placement of the sample in a specific specimen orientation. The instrument hardware can omit complex orientation-fixing or flipping mechanisms, employing a simple sample stage that accommodates natural specimen orientation. Corresponding user operational protocols can also be simplified, requiring only that the sample be placed within the detection area to initiate measurement.
From an engineering perspective, this study eliminates a key uncertainty in the transition from laboratory research to instrument development. It provides a robust theoretical and practical foundation for creating cost-effective, user-friendly, and high-throughput identification devices. These findings may also serve as a reference for the non-destructive testing of other morphologically anisotropic medicinal materials. Future work will focus on optimizing hardware design based on the above conclusions to ensure consistent and standardized acquisition conditions while reducing development costs and establishing a more extensive sample library to further validate the broad applicability of these results.

Author Contributions

Conceptualization, X.C. (Xingfeng Chen) and H.D.; data curation, X.C. (Xinyue Cui); formal analysis, X.C. (Xingfeng Chen); funding acquisition, H.D. and D.D.; methodology, X.C. (Xinyue Cui); validation, J.L. (Jiaguo Li) and L.Z.; supervision, X.C. (Xingfeng Chen) and T.S.; visualization, J.L. (Jun Liu); writing—original draft, H.D. and S.X.; writing—review and editing, D.D., T.S. and X.C. (Xinyue Cui). All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Science and Technology Program Project of Xizang Autonomous Region of China (Grant No. XZ202401YD0020).

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
RF: Random Forest
SVM: Support Vector Machine
LR: Logistic Regression
FCNN: Fully Connected Neural Network
IUCN: International Union for Conservation of Nature
NIRS: Near-infrared spectroscopy
FTIR-PAS: Fourier-transform infrared photoacoustic spectroscopy
MLP: Multilayer Perceptron
ReLU: Rectified Linear Unit
BN: Batch Normalization

Appendix A

Table A1. Accuracy of the RF model in each of the ten multi-orientation data fusion experiments.

| Experiment | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Run 6 | Run 7 | Run 8 | Run 9 | Run 10 | Mean Accuracy (%) |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 89.2 | 86.1 | 87.4 | 91.9 | 90.1 | 91.0 | 89.2 | 90.1 | 88.8 | 89.7 | 89.4 |
| 2 | 90.6 | 88.8 | 89.2 | 91.0 | 88.3 | 92.3 | 88.3 | 91.9 | 88.8 | 90.6 | 90.0 |
| 3 | 91.9 | 88.8 | 89.7 | 92.8 | 91.4 | 92.3 | 91.0 | 92.3 | 87.4 | 91.9 | 91.0 |
| 4 | 92.8 | 90.6 | 88.3 | 91.0 | 93.2 | 91.4 | 90.6 | 89.7 | 89.7 | 89.7 | 90.7 |
| 5 | 92.3 | 91.0 | 86.6 | 90.1 | 87.9 | 94.1 | 91.4 | 91.9 | 88.3 | 88.8 | 90.2 |
| 6 | 91.0 | 88.8 | 89.2 | 90.6 | 90.6 | 89.7 | 87.9 | 90.6 | 87.4 | 88.3 | 89.4 |
| 7 | 90.1 | 89.7 | 91.0 | 91.4 | 91.9 | 91.4 | 93.2 | 92.8 | 87.0 | 91.9 | 91.0 |
| 8 | 92.3 | 86.6 | 89.7 | 90.1 | 90.6 | 91.0 | 89.2 | 91.9 | 87.0 | 91.9 | 90.0 |
| 9 | 91.0 | 90.1 | 91.4 | 90.6 | 93.2 | 90.1 | 89.2 | 91.9 | 87.0 | 90.6 | 90.5 |
| 10 | 91.4 | 88.8 | 88.3 | 90.6 | 93.2 | 91.0 | 87.4 | 90.6 | 89.7 | 93.7 | 90.5 |
Table A2. Accuracy of the SVM model in each of the ten multi-orientation data fusion experiments.

| Experiment | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Run 6 | Run 7 | Run 8 | Run 9 | Run 10 | Mean Accuracy (%) |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 91.1 | 89.8 | 88.0 | 90.2 | 85.8 | 89.3 | 88.9 | 89.8 | 88.0 | 91.1 | 89.2 |
| 2 | 94.2 | 90.2 | 90.2 | 95.1 | 90.7 | 92.0 | 89.3 | 92.0 | 85.8 | 91.1 | 91.1 |
| 3 | 92.9 | 91.6 | 91.1 | 92.9 | 90.7 | 91.6 | 91.6 | 93.3 | 88.0 | 91.6 | 91.5 |
| 4 | 90.2 | 92.4 | 92.9 | 93.8 | 92.4 | 92.4 | 92.4 | 90.7 | 86.7 | 90.7 | 91.5 |
| 5 | 93.3 | 92.9 | 89.8 | 92.9 | 92.0 | 95.6 | 90.7 | 90.2 | 88.0 | 91.6 | 91.7 |
| 6 | 92.4 | 91.1 | 89.8 | 92.0 | 90.2 | 90.2 | 92.0 | 92.4 | 84.9 | 92.9 | 90.8 |
| 7 | 95.1 | 92.0 | 92.4 | 92.4 | 93.8 | 92.0 | 92.9 | 95.1 | 91.1 | 92.4 | 92.9 |
| 8 | 91.6 | 86.7 | 88.9 | 92.4 | 87.1 | 91.6 | 89.3 | 88.9 | 85.3 | 90.7 | 89.2 |
| 9 | 92.9 | 91.1 | 90.2 | 91.6 | 89.3 | 92.4 | 89.8 | 92.9 | 91.1 | 90.2 | 91.2 |
| 10 | 91.6 | 88.9 | 88.4 | 91.6 | 91.1 | 92.0 | 92.0 | 91.1 | 87.6 | 90.7 | 90.5 |
Table A3. Accuracy of the LR model in each of the ten multi-orientation data fusion experiments.

| Experiment | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Run 6 | Run 7 | Run 8 | Run 9 | Run 10 | Mean Accuracy (%) |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 100.0 | 99.6 | 98.2 | 99.1 | 99.1 | 99.1 | 98.2 | 97.8 | 98.7 | 99.1 | 98.9 |
| 2 | 98.7 | 99.6 | 98.7 | 100.0 | 99.6 | 99.6 | 98.7 | 98.7 | 99.1 | 97.8 | 98.7 |
| 3 | 100.0 | 99.1 | 97.8 | 99.1 | 99.1 | 99.6 | 98.7 | 98.2 | 98.2 | 99.1 | 98.9 |
| 4 | 99.1 | 99.1 | 98.2 | 98.7 | 98.7 | 100.0 | 97.3 | 97.3 | 98.2 | 99.1 | 99.0 |
| 5 | 99.1 | 98.7 | 98.7 | 99.1 | 98.7 | 99.6 | 97.3 | 97.8 | 98.7 | 99.1 | 99.1 |
| 6 | 99.6 | 99.6 | 98.2 | 99.1 | 98.7 | 99.6 | 99.1 | 98.2 | 97.8 | 98.7 | 99.0 |
| 7 | 100.0 | 99.6 | 98.2 | 99.1 | 99.6 | 98.2 | 98.7 | 98.7 | 98.7 | 99.6 | 98.8 |
| 8 | 99.1 | 99.6 | 99.1 | 99.1 | 99.1 | 99.6 | 99.1 | 99.1 | 99.6 | 98.7 | 98.6 |
| 9 | 99.1 | 99.1 | 100.0 | 98.7 | 98.7 | 100.0 | 97.8 | 99.6 | 99.1 | 98.7 | 99.2 |
| 10 | 99.6 | 98.2 | 98.7 | 98.7 | 99.1 | 100.0 | 98.2 | 98.7 | 98.2 | 98.7 | 98.8 |
Table A4. Accuracy of the FCNN model in each of the ten multi-orientation data fusion experiments.

| Experiment | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Run 6 | Run 7 | Run 8 | Run 9 | Run 10 | Mean Accuracy (%) |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 78.6 | 81.2 | 82.6 | 80.6 | 80.2 | 83.6 | 78.2 | 77.3 | 82.6 | 84.2 | 80.9 |
| 2 | 80.1 | 79.7 | 78.5 | 83.5 | 82.1 | 85.5 | 78.2 | 77.3 | 85.1 | 82.6 | 81.3 |
| 3 | 78.9 | 78.5 | 77.3 | 82.3 | 80.9 | 84.3 | 77.0 | 76.1 | 84.9 | 81.4 | 80.2 |
| 4 | 82.5 | 82.3 | 82.1 | 80.9 | 83.3 | 84.5 | 83.4 | 82.5 | 80.2 | 82.3 | 82.4 |
| 5 | 82.1 | 77.8 | 81.5 | 81.1 | 84.5 | 79.1 | 77.5 | 84.5 | 85.1 | 83.5 | 81.7 |
| 6 | 76.6 | 81.2 | 82.6 | 80.6 | 80.2 | 83.6 | 84.5 | 77.3 | 75.7 | 84.2 | 80.7 |
| 7 | 83.6 | 80.3 | 81.7 | 79.7 | 79.3 | 82.7 | 77.3 | 82.1 | 85.1 | 83.3 | 81.5 |
| 8 | 74.5 | 78.5 | 79.9 | 78.5 | 82.3 | 81.5 | 78.6 | 82.1 | 83.5 | 86.1 | 80.6 |
| 9 | 83.9 | 82.7 | 81.1 | 83.3 | 80.3 | 85.5 | 82.3 | 81.4 | 77.9 | 79.7 | 81.8 |
| 10 | 76.6 | 81.2 | 82.6 | 80.6 | 80.2 | 83.6 | 78.2 | 77.3 | 82.6 | 84.2 | 80.7 |

Figure 1. Ophiocordyceps sinensis in different specimen orientations: (a) left lateral, (b) right lateral, (c) dorsal, (d) ventral.
Figure 2. Hyperspectral data acquisition setup.
Figure 3. Hyperspectral reflectance curves for four specimen orientations of randomly selected samples: (a) left lateral, (b) right lateral, (c) dorsal, (d) ventral.
Figure 4. Model performance comparison under 8:2 and 7:3 dataset splits across specimen orientations.
Figure 5. Comparison of mean identification accuracy among four models.
Figure 6. Box plots of identification accuracy across different specimen orientations. The box indicates the interquartile range (IQR, Q3–Q1), where Q1 and Q3 denote the 25th and 75th percentiles, respectively; the central line represents the median; whiskers extend to 1.5×IQR; points beyond the whiskers are outliers.
Figure 7. Spectral reflectance similarity across four specimen orientations of a representative sample.
Table 1. The source and quantity of different varieties of Ophiocordyceps sinensis.

| Variety | Source | Quantity |
|---|---|---|
| Wild Ophiocordyceps sinensis | Qamdo City | 111 |
| | Lhasa City | 40 |
| | Nyingchi City | 136 |
| | Nagqu City | 356 |
| | Shigatse City | 38 |
| Cultivated Ophiocordyceps sinensis | Qinghai Province | 126 |
| | Zhejiang Province | 80 |
| | Sichuan Province | 40 |
| | Hubei Province | 100 |
| | Yunnan Province | 54 |
| | Henan Province | 40 |
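The class balance implied by Table 1 can be tallied directly; the counts below are transcribed from the table. The roughly 61/39 wild-to-cultivated split is worth keeping in mind when interpreting raw accuracy figures.

```python
# Sample counts transcribed from Table 1
wild = {"Qamdo City": 111, "Lhasa City": 40, "Nyingchi City": 136,
        "Nagqu City": 356, "Shigatse City": 38}
cultivated = {"Qinghai Province": 126, "Zhejiang Province": 80,
              "Sichuan Province": 40, "Hubei Province": 100,
              "Yunnan Province": 54, "Henan Province": 40}

n_wild = sum(wild.values())               # 681
n_cult = sum(cultivated.values())         # 440
print(n_wild, n_cult, n_wild + n_cult)    # 681 440 1121
print(f"wild share: {n_wild / (n_wild + n_cult):.1%}")
```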
Table 2. Identification accuracy of the RF model across four specimen orientations (8:2 dataset split).

| Orientation | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Run 6 | Run 7 | Run 8 | Run 9 | Run 10 | Mean Accuracy (%) |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Left lateral | 89.7 | 89.7 | 85.7 | 88.4 | 88.0 | 89.7 | 86.2 | 89.7 | 87.1 | 88.4 | 88.3 |
| Right lateral | 87.5 | 87.1 | 88.8 | 88.8 | 87.1 | 90.2 | 88.8 | 90.2 | 86.6 | 89.3 | 88.4 |
| Dorsal | 91.1 | 86.2 | 87.5 | 89.7 | 88.4 | 90.6 | 88.8 | 89.3 | 86.2 | 85.7 | 88.4 |
| Ventral | 91.1 | 86.6 | 91.1 | 90.2 | 91.1 | 90.2 | 90.6 | 91.1 | 88.4 | 87.1 | 89.7 |
Table 3. Identification accuracy of the SVM model across four specimen orientations (8:2 dataset split).

| Orientation | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Run 6 | Run 7 | Run 8 | Run 9 | Run 10 | Mean Accuracy (%) |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Left lateral | 92.0 | 91.1 | 88.0 | 92.0 | 93.3 | 93.3 | 91.1 | 92.4 | 91.1 | 92.4 | 91.6 |
| Right lateral | 93.7 | 91.1 | 91.1 | 94.6 | 90.2 | 92.4 | 89.7 | 93.7 | 89.7 | 91.5 | 91.8 |
| Dorsal | 95.1 | 88.4 | 92.8 | 92.0 | 85.7 | 92.0 | 91.5 | 92.8 | 88.8 | 89.3 | 90.8 |
| Ventral | 92.8 | 90.2 | 88.8 | 92.8 | 91.5 | 91.5 | 92.8 | 92.8 | 92.0 | 91.0 | 91.6 |
Table 4. Identification accuracy of the LR model across four specimen orientations (8:2 dataset split).

| Orientation | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Run 6 | Run 7 | Run 8 | Run 9 | Run 10 | Mean Accuracy (%) |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Left lateral | 98.6 | 99.5 | 99.5 | 99.1 | 99.5 | 99.1 | 99.1 | 100.0 | 99.5 | 99.5 | 99.4 |
| Right lateral | 99.5 | 99.1 | 99.1 | 98.2 | 99.5 | 100.0 | 98.6 | 98.6 | 98.6 | 99.1 | 99.1 |
| Dorsal | 99.5 | 99.5 | 99.1 | 99.5 | 99.1 | 99.5 | 99.1 | 97.8 | 99.5 | 99.1 | 99.2 |
| Ventral | 99.1 | 98.6 | 99.1 | 100.0 | 99.1 | 100.0 | 98.6 | 99.1 | 99.1 | 98.6 | 99.1 |
Table 5. Identification accuracy of the FCNN model across four specimen orientations (8:2 dataset split).

| Orientation | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Run 6 | Run 7 | Run 8 | Run 9 | Run 10 | Mean Accuracy (%) |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Left lateral | 82.6 | 78.5 | 83.5 | 82.1 | 85.5 | 78.2 | 80.1 | 79.7 | 83.5 | 86.1 | 82.0 |
| Right lateral | 81.5 | 80.1 | 82.1 | 78.2 | 83.5 | 80.1 | 76.6 | 82.6 | 85.6 | 83.1 | 81.3 |
| Dorsal | 83.5 | 81.5 | 81.1 | 79.5 | 79.1 | 77.5 | 82.1 | 77.8 | 83.5 | 80.1 | 80.6 |
| Ventral | 82.1 | 79.6 | 86.1 | 77.1 | 80.1 | 84.1 | 78.2 | 78.2 | 79.7 | 82.6 | 80.8 |
Table 6. Identification accuracy of the RF model across four specimen orientations (7:3 dataset split).

| Orientation | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Run 6 | Run 7 | Run 8 | Run 9 | Run 10 | Mean Accuracy (%) |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Left lateral | 90.8 | 88.7 | 85.5 | 88.4 | 88.4 | 88.7 | 87.8 | 87.5 | 89.0 | 89.0 | 88.4 |
| Right lateral | 86.1 | 87.2 | 86.9 | 89.6 | 86.9 | 87.5 | 89.6 | 89.3 | 89.3 | 89.6 | 88.2 |
| Dorsal | 89.6 | 86.6 | 85.8 | 87.5 | 86.9 | 88.4 | 88.1 | 88.1 | 86.1 | 89.3 | 87.7 |
| Ventral | 90.4 | 85.4 | 89.6 | 87.7 | 90.5 | 89.0 | 90.1 | 90.1 | 88.0 | 88.6 | 88.9 |
Table 7. Identification accuracy of the SVM model across four specimen orientations (7:3 dataset split).

| Orientation | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Run 6 | Run 7 | Run 8 | Run 9 | Run 10 | Mean Accuracy (%) |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Left lateral | 91.0 | 91.1 | 88.0 | 91.0 | 93.3 | 92.3 | 91.1 | 91.4 | 91.1 | 92.4 | 91.3 |
| Right lateral | 91.8 | 91.1 | 91.1 | 94.7 | 90.2 | 91.4 | 89.8 | 91.8 | 89.8 | 91.6 | 91.4 |
| Dorsal | 95.1 | 88.4 | 92.9 | 92.0 | 85.8 | 92.0 | 91.6 | 92.9 | 88.9 | 89.3 | 90.9 |
| Ventral | 92.9 | 90.2 | 88.9 | 92.9 | 91.6 | 91.6 | 92.9 | 92.9 | 92.0 | 91.1 | 91.7 |
Table 8. Identification accuracy of the LR model across four specimen orientations (7:3 dataset split).

| Orientation | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Run 6 | Run 7 | Run 8 | Run 9 | Run 10 | Mean Accuracy (%) |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Left lateral | 98.7 | 99.6 | 99.6 | 99.1 | 99.6 | 98.1 | 98.1 | 100.0 | 99.6 | 99.6 | 99.2 |
| Right lateral | 99.6 | 99.1 | 99.1 | 98.2 | 99.6 | 100.0 | 98.7 | 98.7 | 98.7 | 99.1 | 99.1 |
| Dorsal | 99.6 | 99.6 | 99.1 | 99.6 | 99.1 | 99.6 | 99.1 | 97.8 | 99.6 | 99.1 | 99.2 |
| Ventral | 99.1 | 98.7 | 98.1 | 100.0 | 98.1 | 100.0 | 98.7 | 99.1 | 99.1 | 98.7 | 99.0 |
Table 9. Identification accuracy of the FCNN model across four specimen orientations (7:3 dataset split).

| Orientation | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Run 6 | Run 7 | Run 8 | Run 9 | Run 10 | Mean Accuracy (%) |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Left lateral | 82.6 | 81.4 | 82.3 | 83.5 | 78.5 | 77.3 | 80.3 | 81.5 | 83.5 | 82.3 | 81.3 |
| Right lateral | 79.9 | 81.1 | 82.1 | 80.9 | 83.3 | 84.5 | 81.5 | 83.3 | 77.9 | 79.1 | 81.2 |
| Dorsal | 78.2 | 77.0 | 76.3 | 77.5 | 83.5 | 82.3 | 82.3 | 83.5 | 79.7 | 78.5 | 79.9 |
| Ventral | 80.1 | 78.9 | 80.9 | 82.1 | 80.1 | 82.9 | 83.9 | 83.1 | 76.6 | 77.8 | 80.6 |
Table 10. Distribution of orientation counts in mixed-orientation datasets across ten experiments.

| Orientation | Exp 1 | Exp 2 | Exp 3 | Exp 4 | Exp 5 | Exp 6 | Exp 7 | Exp 8 | Exp 9 | Exp 10 |
|---|---|---|---|---|---|---|---|---|---|---|
| Left lateral | 276 | 266 | 297 | 287 | 291 | 300 | 297 | 274 | 265 | 273 |
| Right lateral | 270 | 270 | 265 | 294 | 273 | 278 | 249 | 283 | 275 | 294 |
| Dorsal | 289 | 284 | 269 | 275 | 290 | 267 | 283 | 297 | 288 | 272 |
| Ventral | 286 | 301 | 290 | 265 | 267 | 276 | 292 | 267 | 293 | 282 |
Table 11. Mean identification accuracy for mixed-orientation data experiments.

| Model | Exp 1 | Exp 2 | Exp 3 | Exp 4 | Exp 5 | Exp 6 | Exp 7 | Exp 8 | Exp 9 | Exp 10 | Mean Accuracy (%) |
|---|---|---|---|---|---|---|---|---|---|---|---|
| RF | 89.4 | 90.0 | 91.0 | 90.7 | 90.2 | 89.4 | 91.0 | 90.0 | 90.5 | 90.5 | 90.3 |
| SVM | 89.2 | 91.1 | 91.5 | 91.5 | 91.7 | 90.8 | 92.9 | 89.2 | 91.2 | 90.5 | 91.0 |
| LR | 98.9 | 98.7 | 98.9 | 99.0 | 99.1 | 99.0 | 98.8 | 98.6 | 99.2 | 98.8 | 98.9 |
| FCNN | 80.9 | 81.3 | 80.2 | 82.4 | 81.7 | 80.7 | 81.5 | 80.6 | 81.8 | 80.7 | 81.7 |
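The model ranking reported in the paper can be recomputed from Table 11's per-experiment values (transcribed below); minor discrepancies with the printed mean column may reflect rounding or transcription in the source.

```python
# Per-experiment mean accuracies transcribed from Table 11
results = {
    "RF":   [89.4, 90.0, 91.0, 90.7, 90.2, 89.4, 91.0, 90.0, 90.5, 90.5],
    "SVM":  [89.2, 91.1, 91.5, 91.5, 91.7, 90.8, 92.9, 89.2, 91.2, 90.5],
    "LR":   [98.9, 98.7, 98.9, 99.0, 99.1, 99.0, 98.8, 98.6, 99.2, 98.8],
    "FCNN": [80.9, 81.3, 80.2, 82.4, 81.7, 80.7, 81.5, 80.6, 81.8, 80.7],
}

# Recompute each model's overall mean and rank the models
means = {m: sum(v) / len(v) for m, v in results.items()}
for m, v in sorted(means.items(), key=lambda kv: -kv[1]):
    print(f"{m:>4}: {v:.1f}")

best = max(means, key=means.get)
print("best:", best)   # prints "best: LR", consistent with the paper
```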
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
