Article

The Automated Detection of Fusarium Wilt on Phalaenopsis Using VIS-NIR and SWIR Hyperspectral Imaging

Min-Shao Shih, Kai-Chun Chang, Shao-An Chou, Tsang-Sen Liu and Yen-Chieh Ouyang *
1 Department of Electrical Engineering, National Chung Hsing University, Taichung 402, Taiwan
2 Taiwan Agricultural Research Institute, Ministry of Agriculture, Taichung 413, Taiwan
* Author to whom correspondence should be addressed.
Remote Sens. 2023, 15(17), 4174; https://doi.org/10.3390/rs15174174
Submission received: 27 June 2023 / Revised: 20 August 2023 / Accepted: 23 August 2023 / Published: 25 August 2023
(This article belongs to the Special Issue Advances in Hyperspectral Data Exploitation II)

Abstract:
Phalaenopsis, an essential flower for export, is significantly affected by fusarium wilt, which impacts its export quality. Hyperspectral imaging technology offers the potential to detect fusarium wilt on Phalaenopsis. The goal of this study was to establish an automated platform for the rapid detection of fusarium wilt on Phalaenopsis. In this research, the automatic target generation process (ATGP) method was employed to identify outliers in the hyperspectral spectrum. Subsequently, the Spectral Angle Mapper (SAM) method was utilized to detect signals similar to the outliers. To suppress background noise and extract the region of interest (ROI), the Constrained Energy Minimization (CEM) method was implemented. For ROI classification and detection, a deep neural network (DNN), a support vector machine (SVM), and a Random Forest Classifier (RFC) were employed. Model performance was evaluated using three-dimensional receiver operating characteristics (3D ROC), and the automated identification system was integrated into hyperspectrometers. The proposed system achieved an accuracy of 95.77% with a total detection time of 3380 ms ± 86.36 ms, proving to be a practical and effective tool for detecting fusarium wilt on Phalaenopsis in the industry.

1. Introduction

Phalaenopsis is one of the most popular flowers and has high economic and ornamental value. Today, countries involved in the commercial production of orchids include the United States, Britain, Japan, China, Taiwan, Thailand, Australia, and Singapore. The production of Phalaenopsis is an industry with a high technical threshold, owing to differences in optimal growth environments and technologies. However, inadequate quality control at any stage, resulting in pests or diseases, can cause severe economic losses. Fusarium is one of the major disease-causing pathogens infecting orchids and is widely distributed in soil and associated with plants worldwide [1]. Fusarium spp. are very destructive, as they can affect almost all of a plant's organs and produce a broad range of symptoms, e.g., crown and root rot, stalk rot, head and grain blight, and vascular wilt [2,3]. These species severely reduce not only the breeding rate of plants but also the quality of flowers. In addition, they can spread easily during shipping, which greatly increases the loss rate of arrivals. This disease has had a serious impact on the production and sales of Phalaenopsis, especially in the international market. It is therefore necessary to study and control fusarium wilt in Phalaenopsis.
Currently, the prevention and control of yellowing disease in Phalaenopsis orchids relies on the disinfection of cultivation substrates [4], followed by chemical or biological control [5]. However, the effectiveness of these methods is limited, and cultivation management by farmers remains the primary means of prevention. Traditional disease detection methods, such as visual inspection and pathogen culture, suffer from subjectivity and time consumption. In recent years, hyperspectral imaging has emerged as a popular remote sensing technology, and is widely applied in various fields. Hyperspectral images contain the entire electromagnetic spectrum of each pixel, typically composed of visible and non-visible spectra, thereby providing a 3D space with higher spectral resolution than traditional multispectral images. Hyperspectral imaging has also found numerous applications beyond remote sensing, particularly in food and crop safety inspection. Given the constant concern for agricultural product quality, the use of hyperspectral imaging technology for quality detection has gained popularity due to its non-destructive and real-time nature. Moreover, the rise of machine learning has made it possible to construct models using large amounts of data and enable the fast and accurate detection of diseases.
In recent years, spectral technology has been widely applied to the detection of various agricultural crop diseases. Albetis et al. [6] explored the potential of using multispectral images to differentiate between symptomatic and asymptomatic grapevines, as well as between different disease-infected grapevines. The validity of their predictions was verified by comparing the error between the predicted infection level (proportion of symptomatic pixels in grapevines) and the observed infection level (proportion of symptomatic leaves in grapevines). While their method effectively distinguished diseased plants, most of the plants already exhibited symptoms visible to the naked eye, and discrimination improved only when the infection level rose above 50%. Vélez et al. [7] employed unmanned aerial vehicles (UAVs) with multispectral cameras, along with photogrammetric and spatial analysis techniques and machine learning classification methods, for the detection of Botrytis bunch rot (BBR); however, the accuracy achieved was only around 70%. Chang et al. [8] monitored the severity of citrus greening infection in orchards using vegetation indices derived from multispectral images. Although this showed a certain level of accuracy, it still could not detect the infection before visible symptoms appeared. This limitation may be attributed to the small number of bands in multispectral images, which, even after the derivation of multiple vegetation indices, cannot fully meet the objective of early disease detection. In this study, hyperspectral imaging was therefore used with the aim of preserving all of the spectral information: compared to multispectral images, hyperspectral images provide a three-dimensional data space with much higher spectral resolution, are non-destructive, and offer the potential for early detection.
Previous studies have utilized hyperspectral imaging for detecting diseases in other crops, such as tea plants, tomatoes [9], and more [10,11]. For example, Yuan et al. [12] used hyperspectral imaging to detect anthracnose in tea plants and achieved an overall accuracy of 98% for identifying the disease at the leaf level. Zhang et al. [13] developed a method for the multi-source detection of tomato leaf mildew, with a recognition rate of 97.12%. Cen et al. [14] used a portable spectrometer to propose a tomato bacterial wilt detection model with overall accuracies of 90.7% for leaves and 92.6% for stems. Ashourloo et al. [15] evaluated the impacts of disease stages and symptoms on the spectral characteristics of plants, and their results demonstrated the reliability of using machine learning techniques instead of traditional indices. In 2015, Xie et al. [16] investigated the potential of using hyperspectral imaging for detecting different diseases on tomato leaves, employing the successive projections algorithm together with an extreme learning machine classifier model.
In this study, the application of hyperspectral detection technology to disease identification in the orchid industry was realized. Prior studies [17,18,19] have observed that the acquisition, analysis, and processing of hyperspectral image data are time-consuming, thereby limiting the real-time application of hyperspectral imaging systems in agricultural detection. Therefore, the utilization of statistical indicators, namely the mean and variance, was proposed in order to enhance classifier performance and reduce the hyperspectral image processing time. The mean and variance were chosen because they adequately reflect the inherent features present on Phalaenopsis leaves: the mean helps mitigate noise generated by non-observed target objects, while the variance captures the disparities among observed pixel values within the hyperspectral image. The application of hyperspectral technology for disease detection in this research did not depend solely on spectral differences to classify affected leaf segments; rather, statistical indicators were employed to synthesize pixel information, and machine learning methods were then applied to classify the potential extent of leaf susceptibility. By adopting this approach, the high accuracy of disease detection was retained while the processing time for hyperspectral images was reduced. Ultimately, a non-destructive detection platform utilizing hyperspectral technology was proposed, offering automated classification of infected Phalaenopsis. This study not only established a classification method for Phalaenopsis yellowing disease but also significantly reduced the computational time associated with this method, making it practically feasible. This enhancement makes the automated detection platform applicable to real-time inspections on production lines.

2. Materials and Methods

2.1. Pipeline Identification System

Our previous research and experiments were aimed at establishing an automatic pipeline recognition system. In this paper, we present a tracked mobile platform equipped with stepper motors, on top of which two hyperspectral sensors and a color camera are integrated for object detection. The platform was developed to simulate the shipment equipment and scenarios in the Phalaenopsis production field. In the future, production operators will simply need to place Phalaenopsis seedlings on the platform; the system will then detect the stem base, capture hyperspectral images, and ultimately present the disease detection results. To achieve these objectives, we developed a detection model for fusarium wilt (yellowing disease) in Phalaenopsis and established a real-time hyperspectral image processing workflow. Collaborating with spectral equipment manufacturers, we successfully realized an automated online detection platform in the final stages of our research.
The hyperspectral scanning platform used in this study is shown in Figure 1. The system includes two hyperspectral cameras covering different frequency domains: visible and near-infrared (VNIR) on the upper left and short-wave infrared (SWIR) on the right. The hyperspectral cameras are interfaced with a computer for data acquisition through line scanning. Two halogen lamps are positioned at a 45-degree angle to minimize shadowing and enhance image brightness, thereby reducing the exposure time required for spectral sampling. The entire system, except for the computer, is enclosed in a darkroom.

2.2. Samples of Phalaenopsis

Phalaenopsis plantlets were cultivated in a greenhouse at the Changhua Research Center of Royal Base Co., Ltd., in Changhua County, Taiwan, in August 2021. The 3.5-inch Phalaenopsis Sogo yukidian “V3” plant, one of the most popular varieties worldwide, was used in this study. All experimental plantlets were 3.5 inches in size, had five mature leaves, and met the specifications for flowering induction upon shipping. Additionally, expert visual inspection confirmed that no observable symptoms were present. Upon receiving the Phalaenopsis in the greenhouse, continuous imaging was conducted using a hyperspectral camera and RGB images for a period of one week, from Day 0 to Day 6. Throughout this week, the Phalaenopsis plantlets were placed in an environment with a temperature of 30 °C and humidity of 80%, providing optimal conditions for the growth and infection of the plantlets with fusarium wilt. The infected plants were labeled. Images of the Phalaenopsis at different stages of infection were captured, as shown in Figure 2.

2.3. Hyperspectrometer

In this study, two cameras were used to capture hyperspectral images of Phalaenopsis, operating in the VNIR and SWIR spectral bands, respectively. For detailed specifications of the hyperspectral cameras, please refer to Table 1. The main light source used for capturing was provided by a halogen lamp, with power supplied by a “3900e-ER” power supply unit, delivering a power of 150 W. The capturing process was facilitated using the ISUZU imaging software (version 22.1.3), and the environmental temperature and relative humidity were maintained at 25 °C and 60% during the imaging sessions.

2.4. Hyperspectral Image Calibration

Hyperspectral image calibration involves normalizing the spectral reflectance and reducing noise during sampling. This process requires using white and dark references and can be achieved using the following formula:
$$I_c = \frac{I_{raw} - I_{dark}}{I_{white} - I_{dark}}$$
where $I_c$ is the calibrated hyperspectral image, $I_{raw}$ is the original raw data, $I_{white}$ is the white reference value acquired with a Teflon bar, representing the highest reflectance, and $I_{dark}$ is the standard dark reference value obtained by covering the lens, representing the lowest reflectance.
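As a minimal sketch of this calibration, assuming the raw cube and both references are NumPy arrays of matching shape (rows × columns × bands), with a small epsilon guarding against division by zero (names are illustrative, not the authors' code):

```python
import numpy as np

def calibrate(raw, white, dark, eps=1e-9):
    """Flat-field calibration of a hyperspectral cube to relative reflectance."""
    raw, white, dark = (np.asarray(a, dtype=np.float64) for a in (raw, white, dark))
    # subtract the dark current, then normalize by the reference dynamic range
    return (raw - dark) / (white - dark + eps)
```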

2.5. Hyperspectral Image Region of Interest (ROI) Extraction

After obtaining the calibrated hyperspectral image following the steps shown in Figure 3, a simple image cropping process was conducted to remove the plastic casing of the Phalaenopsis and select the region near the center of the stem base. Chang [20] mentioned that the automatic target generation process (ATGP) is an unsupervised method that iteratively calculates the target signatures and spectral vectors of all pixels through orthogonal subspace projection (OSP) to find target signatures with the maximum discriminability in the orthogonal space. These target signatures are of particular interest to us. The advantage of the ATGP method lies in its ability to automatically capture the spectral features of targets in hyperspectral data without prior knowledge of their types or signatures. It provides valuable information for subsequent target identification and classification tasks. Since it is not possible to know the target pixels in advance in industrial line inspection, we utilized ATGP for automatic target pixel identification in the images.
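As an illustration of the iteration just described, the following NumPy sketch finds target signatures by repeated orthogonal subspace projection; it is a generic ATGP implementation under assumed array shapes, not the authors' exact code:

```python
import numpy as np

def atgp(cube, num_targets=5):
    """Automatic target generation process on a (rows, cols, bands) cube."""
    X = cube.reshape(-1, cube.shape[-1]).astype(np.float64)   # (pixels, bands)
    # initial target: the pixel with the largest spectral norm
    idx = [int(np.argmax(np.einsum("ij,ij->i", X, X)))]
    for _ in range(num_targets - 1):
        U = X[idx].T                                          # (bands, k) targets so far
        # orthogonal subspace projector P = I - U (U^T U)^-1 U^T
        P = np.eye(X.shape[1]) - U @ np.linalg.pinv(U.T @ U) @ U.T
        resid = X @ P                                         # P is symmetric
        # next target: the pixel with the largest residual in the orthogonal space
        idx.append(int(np.argmax(np.einsum("ij,ij->i", resid, resid))))
    return X[idx]                                             # target signatures
```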
Chang [21,22,23] also mentioned that when it is not possible to obtain the spectral information of all substances in the image, Constrained Energy Minimization (CEM) can be employed to treat the substances of interest as target objects, while the rest can be considered background. This allows us to obtain their information from the spectral database, although background information might not be available. However, CEM can be overly sensitive to target spectra, so we first utilized a Spectral Angle Mapper (SAM) to extract geometric features between two spectra. By doing so, we identified pixels in the image with similar characteristics to the target pixels and averaged their values to create reference spectral information for CEM. CEM is primarily designed with a finite impulse response (FIR) filter to minimize the output power and restrict the output of the desired target features to a specific gain. This helps to highlight the required target signals and suppress background signals. Finally, the target signals processed via CEM were subjected to Otsu’s method for binarization to obtain the final region of interest (ROI).
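The SAM-then-CEM chain can be sketched as follows, with `threshold_otsu` from scikit-image and an illustrative angle threshold: pixels whose spectral angle to an ATGP target is small are averaged into the desired CEM signature d, the FIR filter weights are w = R⁻¹d/(dᵀR⁻¹d) with R the sample correlation matrix, and Otsu's method binarizes the filter output. This is a sketch under assumed shapes, not the authors' implementation:

```python
import numpy as np
from skimage.filters import threshold_otsu

def sam_angles(X, target):
    """Spectral angle (radians) between each pixel spectrum and the target."""
    cos = (X @ target) / (np.linalg.norm(X, axis=1) * np.linalg.norm(target) + 1e-12)
    return np.arccos(np.clip(cos, -1.0, 1.0))

def extract_roi(cube, target, angle_thresh=0.1):
    X = cube.reshape(-1, cube.shape[-1]).astype(np.float64)  # (pixels, bands)
    # average SAM-similar pixels into a reference signature for CEM
    mask = sam_angles(X, target) < angle_thresh
    d = X[mask].mean(axis=0) if mask.any() else target       # fall back to raw target
    R_inv = np.linalg.pinv((X.T @ X) / X.shape[0])           # sample correlation matrix
    w = (R_inv @ d) / (d @ R_inv @ d)                        # CEM FIR filter weights
    cem_map = (X @ w).reshape(cube.shape[:2])
    return cem_map > threshold_otsu(cem_map)                 # binary ROI mask
```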

2.6. Statistical Indicators

The purpose of this research is to develop an automated detection platform applicable to the Phalaenopsis industry. In disease recognition, there is a significant need to reduce the processing and computation time for hyperspectral image analysis. To achieve this, simple statistical methods were employed to derive new features from the existing ones, increasing the number of features and potentially improving the accuracy of the analysis.
Because the hyperspectral images were captured at close range, the spectral characteristics of the target surface could be adequately reflected. Hence, in this study, efforts were made to minimize spectral computations while preserving the captured spectral information, while also addressing individual differences and interference caused by different target surfaces. To accomplish this, the mean and variance were utilized. For the mean, a randomly selected proportion of the pixel values in the ROI was averaged to reduce potential noise in individual pixel values. For the variance, the spectral dispersion of the pixels in the ROI was measured; a high variance indicated possible spectral changes in the target. The mean reduced spectral noise, and the variance reflected spectral consistency. Through these two measures, the characteristics of the target surface were adequately represented, and the information to be observed in the spectrum was retained.
In this research, the mean and variance were calculated in the hyperspectral images to achieve feature transformation with minimal computational time. The practical approach is as follows: 50% of the spectral reflectance data were randomly selected and used to compute the mean and variance. These statistical indicators doubled the number of features compared to the original single-pixel spectral data. For example, in the VNIR bands, after one computation, the number of features increased from 540 to 1080, as shown in Figure 4.
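A minimal sketch of this transformation, assuming the ROI pixels are stacked into a (pixels × bands) array; function and parameter names are illustrative. For a 540-band VNIR ROI, one round yields a 1080-dimensional feature vector (540 means followed by 540 variances):

```python
import numpy as np

def mean_variance_features(roi_pixels, ratio=0.5, rounds=1, rng=None):
    """roi_pixels: (num_pixels, num_bands) spectra from the extracted ROI."""
    rng = np.random.default_rng(rng)
    n = roi_pixels.shape[0]
    features = []
    for _ in range(rounds):
        # randomly sample 50% of the ROI pixels without replacement
        sample = roi_pixels[rng.choice(n, size=int(n * ratio), replace=False)]
        # per-band mean and variance, concatenated: bands -> 2 * bands features
        features.append(np.concatenate([sample.mean(axis=0), sample.var(axis=0)]))
    return np.stack(features)        # (rounds, 2 * num_bands)
```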

2.7. Data Training Models

In this study, two kinds of statistical indicators were applied to both the VNIR and SWIR hyperspectral images captured daily (Day 0–Day 6). The pixel-based data were retained alongside the mean and variance for classifier training to distinguish between healthy and diseased samples. Three machine learning methods, namely, a deep neural network (DNN) [24], a support vector machine (SVM) [25], and a Random Forest Classifier (RFC) [26], were employed for training and prediction. As the training data might not encompass all the variations present in different Phalaenopsis plants, the DNN model's interpretability on new data might be limited. To address this issue, the SVM and RFC models were incorporated to enhance classification reliability and interpretability, thus improving the capability to classify unknown samples. The training process is shown in Figure 5.

Deep Neural Network

A deep neural network (DNN) is a machine learning method. In this experiment, we use a Multilayer Perceptron (MLP), which is a type of DNN. In common RGB two-dimensional (2D) images, the convolution layer is typically used for feature extraction. However, as mentioned earlier, in our study, we use ATGP, SAM, CEM, and Otsu’s method to replace the function of feature extraction. Therefore, the DNN portion of our method mainly consists of the fully connected layer of MLP. The configuration of our fully connected layer is illustrated in Table 2.
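As a concrete reference, a Keras sketch of the Table 2 architecture is given below; this is an assumed reconstruction, not the authors' exact code. Table 2 lists ReLU on the output layer as well, but a softmax output is assumed here so that the two outputs can be read as class probabilities for the later voting step:

```python
import tensorflow as tf

def build_mlp(num_features):  # 540 for pixel-based input or 1080 for mean-variance
    return tf.keras.Sequential([
        tf.keras.Input(shape=(num_features,)),
        tf.keras.layers.Dense(700, activation="relu"),
        tf.keras.layers.Dense(500, activation="relu"),
        tf.keras.layers.Dense(300, activation="relu"),
        tf.keras.layers.Dense(50, activation="relu"),
        # Table 2 lists ReLU here; softmax is assumed for class probabilities
        tf.keras.layers.Dense(2, activation="softmax"),
    ])

model = build_mlp(1080)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```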

2.8. Model Evaluation

To evaluate the classification ability of a model with high accuracy, it is important to have an objective statistical value that is not affected by the amount of data. Therefore, we used the three-dimensional Receiver Operating Characteristic Curve Analysis (3D ROC) method to assess the classifier model [27,28,29,30,31]. The 3D ROC method can provide a comprehensive evaluation of the model’s performance by considering different cutoff values for the predicted probabilities, and can visualize the classifier’s performance in three dimensions, including its sensitivity, specificity, and accuracy. This method can also help to determine the optimal threshold value for classification and compare the performance of different classifiers.

The 3D Receiver Operating Characteristic Curve Analysis (3D ROC)

The 3D receiver operating characteristic (ROC) curve is composed of three 2D charts, namely, $(P_D, P_F)$, $(P_D, \tau)$, and $(P_F, \tau)$. ROC analysis has been widely used in signal detection. The principle of ROC is based on the Neyman–Pearson detector (NPD), which uses a continuously changing detection probability $P_D$ to generate different false alarm probability values $P_F$; the additional variable, $\tau$, is defined as the sampling point (threshold) of the detection target. The continuous change in $\tau$ generates different values of $P_D$ and $P_F$. We calculated each of the three 2D curves, and the areas under the curves (AUCs) of $(P_D, P_F)$, $(P_D, \tau)$, and $(P_F, \tau)$ are denoted $AUC_{(D,F)}$, $AUC_{(D,\tau)}$, and $AUC_{(F,\tau)}$, respectively. In 3D ROC curve analysis, further analysis methods are derived from the 2D ROC curves. We used six measures, Target Detectability (TD), Background Suppressibility (BS), TD in Background (TD-BS), Overall Detection Probability (ODP), Overall Detection (OD), and Signal-to-Noise Probability Ratio (SNPR), to evaluate the classifier models, as detailed in the list below.
  • Target Detectability (TD) indicates the effectiveness of a detector and the maximum detection probability it can achieve. Its formula is defined as follows:
    $0 \le AUC_{TD} = AUC_{(D,F)} + AUC_{(D,\tau)} \le 2$
  • Background Suppressibility (BS) describes the ability of a detector to suppress background noise, with $P_F$ as an indicator. A smaller value of $AUC_{(F,\tau)}$ indicates better background suppressibility. The formula for BS is defined as follows:
    $-1 \le AUC_{BS} = AUC_{(D,F)} - AUC_{(F,\tau)} \le 1$
  • TD in Background (TD-BS) is the first part of Joint Target Detectability with Background Suppressibility. It takes into account the probability of falsely detecting noise as a signal, which generates $P_F$; TD-BS is therefore obtained by subtracting $AUC_{(F,\tau)}$ from $AUC_{(D,\tau)}$. The formula for TD-BS is defined as follows:
    $-1 \le AUC_{TD\text{-}BS} = AUC_{(D,\tau)} - AUC_{(F,\tau)} \le 1$
  • Overall Detection Probability (ODP) represents the overall probability of correct classification for both the signal and background data. It is the second part of the Joint Target Detectability with Background Suppressibility analysis, and is derived from a classification perspective. The formula for ODP is defined as follows:
    $0 \le AUC_{ODP} = AUC_{(D,\tau)} + (1 - AUC_{(F,\tau)}) \le 2$
  • Overall Detection (OD) is a single quantitative value of detector performance that combines the AUC values generated by the three 2D ROC curves to evaluate each target detector being compared. It provides an overall assessment of the detector's ability to differentiate targets from non-targets across a range of operating conditions. The formula for calculating OD is defined as follows:
    $-1 \le AUC_{OD} = AUC_{(D,F)} + AUC_{(D,\tau)} - AUC_{(F,\tau)} \le 2$
  • Signal-to-Noise Probability Ratio (SNPR) is an effective detection measure derived from an idea similar to the widely used signal-to-noise ratio (SNR) in communication/signal processing. SNPR is defined as the ratio of the target detection AUC, $AUC_{(D,\tau)}$, to the false alarm AUC, $AUC_{(F,\tau)}$, which represents the degree of improvement in target detection while suppressing the background. The formula for SNPR is defined as follows:
    $0 \le AUC_{SNPR} = AUC_{(D,\tau)} / AUC_{(F,\tau)}$
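Given a classifier's soft decisions on labeled test pixels, the three base AUCs and the six measures above can be computed numerically. The following is a minimal sketch using trapezoidal integration over a sampled τ grid; variable names and the grid resolution are assumptions:

```python
import numpy as np

def roc_3d_metrics(y_true, scores):
    """y_true: 1 = diseased (target), 0 = healthy; scores: soft decisions in [0, 1]."""
    y_true = np.asarray(y_true)
    scores = np.asarray(scores, dtype=np.float64)
    tau = np.linspace(0.0, 1.0, 201)                    # sampled thresholds
    p_d = np.array([(scores[y_true == 1] >= t).mean() for t in tau])
    p_f = np.array([(scores[y_true == 0] >= t).mean() for t in tau])

    auc_df = np.trapz(p_d[::-1], p_f[::-1])             # area under (P_D, P_F)
    auc_dt = np.trapz(p_d, tau)                         # area under (P_D, tau)
    auc_ft = np.trapz(p_f, tau)                         # area under (P_F, tau)

    return {
        "AUC_TD": auc_df + auc_dt,                      # target detectability
        "AUC_BS": auc_df - auc_ft,                      # background suppressibility
        "AUC_TD-BS": auc_dt - auc_ft,
        "AUC_ODP": auc_dt + (1.0 - auc_ft),
        "AUC_OD": auc_df + auc_dt - auc_ft,
        "AUC_SNPR": auc_dt / max(auc_ft, 1e-12),        # guard against zero division
    }
```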

3. Results

3.1. Reflectance of Phalaenopsis

In this experiment, VNIR, SWIR, and RGB images were captured from Day 0 to Day 6. The plants were then observed and, based on the observations, labeled as either healthy or sick. The ROI was obtained using the process outlined in Figure 3, and the spectral reflectance was calculated and averaged to obtain the average spectrum. Figure 6 illustrates the average spectra of healthy and diseased plants in the VNIR and SWIR. In the VNIR, significant differences in reflectance were observed from 530 nm to 690 nm and from 740 nm to 950 nm. For the SWIR sensor, significantly different reflectance values were observed from 950 nm to 1130 nm and from 1150 nm to 1320 nm.

3.2. Training Data

In this paper, we aimed to implement classification in the pipeline system; thus, we established three classifiers trained on the original pixel-based data and three trained on the mean–variance data. The training dataset is presented in Table 3, and the test dataset is presented in Table 4.

3.3. Classification

For statistical indicators, 50% of the pixels from the ROI of Phalaenopsis were randomly selected, and their mean and variance were calculated. Training and prediction were performed on all of the training and test data using a Multilayer Perceptron (MLP) with four hidden layers, comprising 700 neurons in the first layer, 500 neurons in the second layer, 300 neurons in the third layer, and 50 neurons in the fourth layer. The output layer contained two neurons. The daily accuracy of VNIR data before and after mean–variance is presented in Table 5, while Table 6 displays the daily accuracy of SWIR data before and after mean–variance.
The daily accuracy tables demonstrate that as the severity of wilt disease increased over time, the models' accuracy also improved. From Table 5 and Table 6, it can be observed that, for both the VNIR and SWIR data, the mean–variance statistical method consistently achieved higher accuracy than the pixel-based method.
Comparing the performance of the three classification methods (DNN, SVM, and RFC) based on the results, it is evident that DNN achieved a higher accuracy than SVM and RFC in both the VNIR and SWIR data. SVM performed slightly better than RFC. DNN also showed a slight advantage over SVM and RFC in terms of precision.

3.4. The Model Evaluation Using 3D ROC

In this study, in order to ascertain the effectiveness of the models' classification performance, the quality of background signal suppression was evaluated, as high accuracy does not necessarily imply good background suppression. For this purpose, the classifiers' performance was evaluated using 3D ROC analysis, the results of which are presented in Figure 7. Three 2D plots were generated, each illustrating the area under the curve (AUC), from which six performance evaluation metrics were computed, as shown in Table 7 and Table 8. Based on the 2D ROC curve $(P_D, P_F)$, the AUC was calculated, with this area defined as the overall detection rate (DR), serving as a metric to assess the detector's effectiveness. A DR of 1 indicates optimal detector performance, while a DR of 0.5 indicates the worst performance. Table 7 and Table 8 reveal that, for both VNIR and SWIR, the DNN exhibited the best performance, whereas the RFC performed less favorably. The 2D ROC curves $(P_D, P_F)$, $(P_D, \tau)$, and $(P_F, \tau)$ were further used to evaluate each detector's target detectability (TD) and background suppressibility (BS). In this respect, the DNN yielded superior results, whereas the SVM and RFC demonstrated less pronounced differences. Additionally, in terms of $AUC_{ODP}$ and $AUC_{OD}$, minimal variations were observed among the three classifiers. $AUC_{SNPR}$ offered a comprehensive assessment of both TD and BS, providing valuable and crucial insights: Table 7 and Table 8 indicate that the DNN outperformed the other classifiers by a significant margin in terms of $AUC_{SNPR}$. Consequently, it can be concluded that the DNN classifier exhibits better background suppression and superior classification performance compared to the other two classifiers.

3.5. Automated Pipeline Identification System

Our previous research and experiments aimed to establish an automated pipeline identification system. For this purpose, in this paper, we developed three classifiers for prediction and used their calculated probabilities to vote for the final classification. To achieve this integration, we collaborated with Isuzu spectrum equipment manufacturers. Through their engineers, we connected a new Python interface that eliminated the 90 s of image calibration from the original process, as shown in Figure 8. We also accelerated the two processes of translating raw files to mat files and capturing ROI information, reducing the total processing time from 8 min to 1.5 min, as depicted in Figure 9. Moreover, we designed a new user interface and automatic ROI image capture, as illustrated in Figure 10.
The flowchart in Figure 8 primarily addresses the last three steps. In the "Translate RAW file to mat file" step, we used a Python library to read the ENVI format file from the VNIR camera directly, which significantly reduced the file reading time and eliminated the need for format conversion. In the "Capture ROI information" step, we marked the conveyor belt and placed the stem base on the marking point after positioning the potted plants, which allowed the ROI to be located within a small area around a fixed center point. During ROI calculation in the prediction part, we used parallel operations to obtain the prediction results of the three methods, and a final vote determined the answer, as sketched below. Additionally, we are integrating another spectrometer, SWIR, and plan to use segmentation methods such as Mask-RCNN to capture the ROI in the future.
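The voting step can be sketched as follows, assuming each trained model exposes two-class probabilities; the function name and inputs are hypothetical:

```python
import numpy as np

def vote(prob_dnn, prob_svm, prob_rfc):
    """Majority vote over the three classifiers' two-class probability outputs."""
    votes = [int(np.argmax(p)) for p in (prob_dnn, prob_svm, prob_rfc)]
    return int(np.bincount(votes, minlength=2).argmax())  # 0 = healthy, 1 = diseased

# example: two of three models favor class 1, so the plantlet is flagged as diseased
label = vote([0.2, 0.8], [0.6, 0.4], [0.3, 0.7])  # -> 1
```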

4. Discussion

4.1. Benefits of Statistical Indicators

In visual inspection, the main area for disease recognition was found to be the stem base rather than the leaves, because leaf yellowing can have many causes, making the leaves an unreliable region for disease detection. Figure 2d–f shows the progressive development of disease symptoms in infected plants from no visible symptoms (Day 0) to apparent symptoms (Day 6). On Day 3 (Figure 2e), despite leaf yellowing, there was little distinction between the stem bases of infected plants and healthy ones (Figure 2b). However, on Day 6 (Figure 2f), the stem bases of infected plants displayed evident yellowing, confirming the presence of the disease. It is evident from Figure 2 that disease symptoms in the stem bases of infected Phalaenopsis gradually become more pronounced over time. This observation aligns with the results obtained from the trained models presented in Table 5 and Table 6, where accuracy increases with the passage of days. As the number of days increased, the accuracy of the Day 0–6 models rose because the expansion of disease symptoms produced a larger area of disease-related features in the VNIR and SWIR images, thereby facilitating the differentiation between healthy and infected Phalaenopsis.
In Table 5, it can be observed that the accuracy of the Day 1–6 trained models was higher when mean–variance was used as the input feature, as opposed to using pixel-based features alone. This finding is attributed to the fact that stem base symptoms are not well-defined in the early stages of the disease, making it challenging to accurately delineate the region of fusarium wilt infection. Consequently, errors occur when identifying healthy or infected pixels, affecting the accuracy of models trained using pixel-based datasets. In contrast, the statistical indicator involves random pixel selection and statistical calculations, effectively reducing the impact of inaccurate labeling. The process of taking the mean also helps eliminate anomalous noise, reducing errors caused by noise in the model. On the other hand, the variance adequately explained the diversity of pixels within the ROI, which proved advantageous as a feature in the early stages of disease development. Utilizing the mean and variance values of pixels within the ROI as new features not only increases the number of features but also improves the accuracy compared to using pixel-based features alone.

4.2. The Future of Automated Pipeline Identification Systems

In Section 3.5, the integration of VNIR, SWIR, and RGB into the automated pipeline identification system was described. The user interface of the system is depicted in Figure 11a. A separate RGB window was included in the interface to facilitate the real-time monitoring of the conveyor. On the right side of the interface, the two hyperspectral imagers can be configured, classifier training models can be imported, and the file-saving path can be selected. Control buttons for the real-time detection mode are located in the lower right corner. Additionally, we incorporated the original sampling mode for collecting hyperspectral data, and the corresponding user interface is presented in Figure 11b. The upper left section allows users to choose the hyperspectral imager, while the lower section enables adjustments to be made to the camera parameters, the calibration of hyperspectral images, and the selection of the file-saving path. Similar to the real-time detection mode, control buttons for conveyor operations are available at the bottom.
Through this integrated interface, both VNIR and SWIR hyperspectral imagers can be operated simultaneously, streamlining the image acquisition process and reducing the complexity of managing two separate systems. Previously, users had to operate two software platforms on one computer to acquire hyperspectral images from both imagers. Now, this process can be completed through a unified software interface. Additionally, with a single operation, both VNIR and SWIR images can be scanned simultaneously, resulting in a 20% increase in user efficiency. In the context of industrial applications, this integration closely mirrors the actual shipment inspection process, minimizing the need for manual interventions and increasing the industry’s acceptance of the system.

5. Conclusions

This study presents a method for detecting fusarium wilt on Phalaenopsis using hyperspectral imaging and implements an automated pipeline recognition system for real-time monitoring in the industry. To achieve real-time monitoring, a simple integration method was employed, where the spectral values of each pixel in the hyperspectral images were randomly sampled at 50% and their mean and variance calculated. This process was repeated several times, and the combined mean and variance features were then fed into the classifier for training. The classifier training results showed that the accuracy and precision of mean–variance in VNIR and SWIR images were superior to those for pixel-based methods. The best accuracy achieved with the VNIR mean–variance classifier was 95.77%, and the best accuracy achieved with the SWIR mean–variance classifier was 91.72%.
After confirming the effectiveness of the mean–variance, three classifiers (DNN, SVM, and RFC) were trained. DNN achieved the best accuracy and precision in both VNIR and SWIR. In addition to evaluating the accuracy of the classifier models, the 3D ROC objective measurement method was used to assess the classifier models. The SNPR indicator demonstrated that DNN had a good background suppression effect in both spectral bands, outperforming SVM and RFC significantly. Finally, the results of the three classifiers were combined through voting in the automated pipeline recognition system. After conducting tests and engaging in discussions with the equipment manufacturer, the three methods introduced in this study were chosen for detection. The expected detection time for the three methods was within an acceptable range at 3380 ms ± 86.36 ms. The final detection results can be determined through a simple voting process.
At present, the identification and quality inspection of agricultural pests mainly rely on manual visual judgment. However, by optimizing the hyperspectral image processing workflow, the statistical indicator time can be reduced to a range suitable for real-time monitoring. The equipment currently used in the industry for fusarium wilt identification takes approximately 4 s to process a single Phalaenopsis plantlet, which is similar to the time required by the system proposed in this study. The results of this research also demonstrate the capability of hyperspectral imaging to detect the disease on Day 3, even before visible symptoms are apparent to the naked eye. This significant advancement in the application of hyperspectral imaging can provide real-time assistance in manual judgment and achieve early warning effects. In the future, band selection techniques will continue to be developed to simplify the complexity of spectral equipment and reduce the cost of equipment implementation.

Author Contributions

Conceptualization, M.-S.S. and Y.-C.O.; Methodology, M.-S.S., K.-C.C. and S.-A.C.; Software, K.-C.C. and S.-A.C.; Validation, K.-C.C. and S.-A.C.; Investigation, M.-S.S.; Resources, T.-S.L.; Writing—original draft, M.-S.S., K.-C.C. and S.-A.C.; Writing—review & editing, M.-S.S. and Y.-C.O.; Supervision, Y.-C.O.; Project administration, M.-S.S. and Y.-C.O. All authors have read and agreed to the published version of the manuscript.

Funding

The research conducted in this paper was supported by the Ministry of Agriculture, Taiwan, under grant number 110AS-8.3.2-ST-a6.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Srivastava, S.; Kadooka, C.; Uchida, J.Y. Fusarium species as pathogen on orchids. Microbiol. Res. 2018, 207, 188–195. [Google Scholar] [CrossRef] [PubMed]
  2. Chen, S.-Y.; Wu, Y.-J.; Hsieh, T.-F.; Su, J.-F.; Shen, W.-C.; Lai, Y.-H.; Lai, P.-C.; Chen, W.-H.; Chen, H.-H. Develop an efficient inoculation technique for Fusarium solani isolate 'TJP-2178-10' pathogeny assessment in Phalaenopsis orchids. Bot. Stud. 2021, 62, 4. [Google Scholar] [CrossRef] [PubMed]
  3. Kim, W.-G.; Lee, B.-D.; Kim, W.-S.; Cho, W.-D. Root Rot of Moth Orchid Caused by Fusarium spp. Plant Pathol. J. 2002, 18, 225–227. [Google Scholar] [CrossRef]
  4. Gullino, M.L.; Minuto, A.; Gilardi, G.; Garibaldi, A. Efficacy of azoxystrobin and other strobilurins against Fusarium wilts of carnation, cyclamen and Paris daisy. Crop Prot. 2002, 21, 57–61. [Google Scholar] [CrossRef]
  5. Shanavas, J. Biocontrol of fusarium wilt of vanilla (Vanilla planifolia) using combined inoculation of Trichoderma sp. and Pseudomonas sp. Int. J. Pharma Bio Sci. 2012, 3, 706–716. [Google Scholar]
  6. Albetis, J.; Jacquin, A.; Goulard, M.; Poilvé, H.; Rousseau, J.; Clenet, H.; Dedieu, G.; Duthoit, S. On the Potentiality of UAV Multispectral Imagery to Detect Flavescence dorée and Grapevine Trunk Diseases. Remote Sens. 2019, 11, 23. [Google Scholar] [CrossRef]
  7. Vélez, S.; Ariza-Sentís, M.; Valente, J. Mapping the spatial variability of Botrytis bunch rot risk in vineyards using UAV multispectral imagery. Eur. J. Agron. 2023, 142, 126691. [Google Scholar] [CrossRef]
  8. Chang, A.; Yeom, J.; Jung, J.; Landivar, J. Comparison of Canopy Shape and Vegetation Indices of Citrus Trees Derived from UAV Multispectral Images for Characterization of Citrus Greening Disease. Remote Sens. 2020, 12, 4122. [Google Scholar] [CrossRef]
  9. van Roy, J.; Wouters, N.; De Ketelaere, B.; Saeys, W. Semi-supervised learning of hyperspectral image segmentation applied to vine tomatoes and table grapes. J. Spectr. Imaging 2018, 7, a7. [Google Scholar] [CrossRef]
  10. Hu, N.; Wei, D.; Zhang, L.; Wang, J.; Xu, H.; Zhao, Y. Application of Vis-NIR hyperspectral imaging in agricultural products detection. In Proceedings of the 2017 9th International Conference on Advanced Infocomm Technology (ICAIT), Chengdu, China, 22–24 November 2017; pp. 350–355. [Google Scholar] [CrossRef]
  11. Dang, H.-Q.; Kim, I.; Cho, B.-K.; Kim, M.S. Detection of bruise damage of pear using hyperspectral imagery. In Proceedings of the 2012 12th International Conference on Control, Automation and Systems, Jeju Island, Republic of Korea, 17–21 October 2012; pp. 1258–1260. [Google Scholar]
  12. Yuan, L.; Yan, P.; Han, W.; Huang, Y.; Wang, B.; Zhang, J.; Zhang, H.; Bao, Z. Detection of anthracnose in tea plants based on hyperspectral imaging. Comput. Electron. Agric. 2019, 167, 105039. [Google Scholar] [CrossRef]
  13. Zhang, X.; Wang, Y.; Zhou, Z.; Zhang, Y.; Wang, X. Detection Method for Tomato Leaf Mildew Based on Hyperspectral Fusion Terahertz Technology. Foods 2023, 12, 535. [Google Scholar] [CrossRef] [PubMed]
  14. Cen, Y.; Huang, Y.; Hu, S.; Zhang, L.; Zhang, J. Early Detection of Bacterial Wilt in Tomato with Portable Hyperspectral Spectrometer. Remote Sens. 2022, 14, 2882. [Google Scholar] [CrossRef]
  15. Ashourloo, D.; Aghighi, H.; Matkan, A.A.; Mobasheri, M.R.; Rad, A.M. An Investigation Into Machine Learning Regression Techniques for the Leaf Rust Disease Detection Using Hyperspectral Measurement. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2016, 9, 4344–4351. [Google Scholar] [CrossRef]
  16. Detection of Early Blight and Late Blight Diseases on Tomato Leaves Using Hyperspectral Imaging. Sci. Rep. 2015, 5, 16564. Available online: https://www.nature.com/articles/srep16564 (accessed on 3 August 2023).
  17. Lowe, A.; Harrison, N.; French, A.P. Hyperspectral image analysis techniques for the detection and classification of the early onset of plant disease and stress. Plant Methods 2017, 13, 80. [Google Scholar] [CrossRef]
  18. Wan, L.; Li, H.; Li, C.; Wang, A.; Yang, Y.; Wang, P. Hyperspectral Sensing of Plant Diseases: Principle and Methods. Agronomy 2022, 12, 1451. [Google Scholar] [CrossRef]
  19. Nguyen, C.; Sagan, V.; Maimaitiyiming, M.; Maimaitijiang, M.; Bhadra, S.; Kwasniewski, M.T. Early Detection of Plant Viral Disease Using Hyperspectral Imaging and Deep Learning. Sensors 2021, 21, 742. [Google Scholar] [CrossRef]
  20. Ren, H.; Chang, C.-I. Automatic spectral target recognition in hyperspectral imagery. IEEE Trans. Aerosp. Electron. Syst. 2003, 39, 1232–1249. [Google Scholar] [CrossRef]
  21. Chang, C.-I. Target signature-constrained mixed pixel classification for hyperspectral imagery. IEEE Trans. Geosci. Remote Sens. 2002, 40, 1065–1081. [Google Scholar] [CrossRef]
  22. Liu, J.-M.; Chang, C.-I.; Chieu, B.-C.; Ren, H.; Wang, C.-M.; Lo, C.-S.; Chung, P.-C.; Yang, C.-W.; Ma, D.-J. Generalized constrained energy minimization approach to subpixel target detection for multispectral imagery. Opt. Eng. 2000, 39, 1275–1281. [Google Scholar] [CrossRef]
  23. Chen, S.-Y.; Lin, C.; Tai, C.-H.; Chuang, S.-J. Adaptive Window-Based Constrained Energy Minimization for Detection of Newly Grown Tree Leaves. Remote Sens. 2018, 10, 96. [Google Scholar] [CrossRef]
  24. Gardner, M.W.; Dorling, S.R. Artificial neural networks (the multilayer perceptron)—A review of applications in the atmospheric sciences. Atmos. Environ. 1998, 32, 2627–2636. [Google Scholar] [CrossRef]
  25. Hearst, M.A.; Dumais, S.T.; Osuna, E.; Platt, J.; Scholkopf, B. Support vector machines. IEEE Intell. Syst. Their Appl. 1998, 13, 18–28. [Google Scholar] [CrossRef]
  26. Ho, T.K. Random decision forests. In Proceedings of the 3rd International Conference on Document Analysis and Recognition, Montreal, QC, Canada, 14–16 August 1995; Volume 1, pp. 278–282. [Google Scholar] [CrossRef]
  27. Chang, C.-I. An Effective Evaluation Tool for Hyperspectral Target Detection: 3D Receiver Operating Characteristic Curve Analysis. IEEE Trans. Geosci. Remote Sens. 2021, 59, 5131–5153. [Google Scholar] [CrossRef]
  28. Chang, C.-I.; Chiang, S.-S.; Du, Q.; Ren, H.; Ifarraguerri, A. An ROC analysis for subpixel detection. In IGARSS 2001. Scanning the Present and Resolving the Future, Proceedings of the IEEE 2001 International Geoscience and Remote Sensing Symposium (Cat. No.01CH37217), Sydney, Australia, 9–13 July 2001; IEEE: Piscataway, NJ, USA, 2001; Volume 5, pp. 2355–2357. [Google Scholar] [CrossRef]
  29. Wang, L.; Chang, C.-I.; Lee, L.-C.; Wang, Y.; Xue, B.; Song, M.; Yu, C.; Li, S. Band Subset Selection for Anomaly Detection in Hyperspectral Imagery. IEEE Trans. Geosci. Remote Sens. 2017, 55, 4887–4898. [Google Scholar] [CrossRef]
  30. Wang, S.; Chang, C.-I.; Yang, S.-C.; Hsu, G.-C.; Hsu, H.-H.; Chung, P.-C.; Guo, S.-M.; Lee, S.-K. 3D ROC Analysis for Medical Imaging Diagnosis. In Proceedings of the 2005 IEEE Engineering in Medicine and Biology 27th Annual Conference, Shanghai, China, 17–18 January 2006; pp. 7545–7548. [Google Scholar] [CrossRef]
  31. Hyperspectral Imaging: Techniques for Spectral Detection and Classification | SpringerLink. Available online: https://link.springer.com/book/10.1007/978-1-4419-9170-6 (accessed on 3 August 2023).
Figure 1. Hyperspectral imaging system composed of two hyperspectral cameras and a halogen lamp mounted over a conveyor belt.
Figure 2. The RGB images show healthy and diseased Phalaenopsis. Healthy sample on (a) Day 0, (b) Day 3, and (c) Day 6. Diseased sample on (d) Day 0, (e) Day 3, and (f) Day 6.
Figure 3. The hyperspectral image region of interest (ROI) extraction process. After the acquisition and calibration of hyperspectral images, reflectance values were extracted. To enhance the image processing speed, stem base images were retained. During the ROI extraction step, target signals were initially provided via an automatic target generation process (ATGP). A Spectral Angle Mapper (SAM) was employed to identify pixels with similarity to the target signals. The Constrained Energy Minimization (CEM) technique was utilized to amplify the signal, and finally, the Otsu method was applied for the threshold, resulting in the ultimate ROI.
Figure 4. The statistical indicator process. The hyperspectral images were initially subjected to two-dimensionalization, followed by the random selection of 50% of the pixels for the calculation of the mean and variance for each band. The combination of mean and variance was employed as a new feature value. M and N represent the length and width of the hyperspectral image. L represents the number of bands/features, and S indicates the number of times random sampling took place.
Figure 5. Hyperspectral image training process. The stem base portion was extracted from the hyperspectral images, after which two sets of datasets, namely, the mean + variance and pixel base, were segregated for training. The machine learning methods employed for distinguishing healthy and diseased orchids included a deep neural network (DNN), a Support Vector Machine (SVM), and a Random Forest Classifier (RFC).
Figure 6. The reflectance of healthy and diseased Phalaenopsis for the VNIR sensor (left) and the SWIR sensor (right).
Figure 7. The results of 3D ROC analysis graphs of (PD, PF, τ) generated via SVM, DNN, and RFC using VNIR mean–variance data and SWIR mean–variance data. PD, PF, and τ correspond to detection probability, false alarm probability, and soft decision.
Figure 8. The flowchart and execution time of the old system.
Figure 9. The flowchart and execution time of the new system.
Figure 10. (a) The new user interface; (b) the automatic capture of ROI images.
Figure 11. (a) Real-time detection mode user interface. (b) Sampling mode user interface.
Table 1. Specifications of the hyperspectral cameras.

| | V10E-B1410CL (VNIR) | N17E-InGaAs (SWIR) |
| --- | --- | --- |
| Spectral Range (nm) | 400–1000 | 900–1700 |
| Spectral Channels | 616 | 512 |
| Spectral Resolution (nm) | 3 | 5 |
| Spatial Pixels | 816 | 640 |
Table 2. The neural network architecture used in the experiments.

| Layer | Number of Neurons | Activation Function |
| --- | --- | --- |
| Input Layer | 540/1080 | ReLU |
| Hidden Layer | 700 | ReLU |
| Hidden Layer | 500 | ReLU |
| Hidden Layer | 300 | ReLU |
| Hidden Layer | 50 | ReLU |
| Output Layer | 2 | ReLU |
Table 3. The training dataset of healthy and diseased samples.

| | Healthy Samples | Diseased Samples |
| --- | --- | --- |
| VNIR | 55,600 | 53,000 |
| SWIR | 56,320 | 52,992 |
Table 4. The test dataset of healthy and diseased samples.

| | Healthy Samples | Diseased Samples |
| --- | --- | --- |
| VNIR | 38,000 | 42,000 |
| SWIR | 38,656 | 36,864 |
Table 5. The daily accuracy of VNIR pixel base and mean–variance.

| Input | Features | Classifier | Day 0 | Day 1 | Day 2 | Day 3 | Day 4 | Day 5 | Day 6 | Average |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Pixel | 540 | SVM | 71.73% | 84.51% | 84.82% | 86.84% | 86.91% | 86.47% | 88.39% | 84.24% |
| | | DNN | 72.88% | 86.49% | 87.10% | 87.74% | 88.32% | 88.39% | 90.47% | 85.91% |
| | | RFC | 62.86% | 84.46% | 81.31% | 81.50% | 81.29% | 80.28% | 82.01% | 79.10% |
| Mean–Variance | 1080 | SVM | 83.97% | 84.25% | 87.43% | 89.50% | 90.30% | 90.35% | 92.24% | 88.29% |
| | | DNN | 86.51% | 87.97% | 87.22% | 87.90% | 93.00% | 93.24% | 95.77% | 90.23% |
| | | RFC | 82.50% | 82.68% | 87.01% | 89.37% | 90.04% | 91.65% | 92.49% | 87.96% |
Table 6. The daily accuracy of SWIR pixel base and mean–variance.

| Input | Features | Classifier | Day 0 | Day 1 | Day 2 | Day 3 | Day 4 | Day 5 | Day 6 | Average |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Pixel | 540 | SVM | 69.80% | 70.15% | 72.20% | 74.64% | 76.11% | 78.04% | 80.80% | 74.43% |
| | | DNN | 70.87% | 74.64% | 84.42% | 85.83% | 85.95% | 86.53% | 88.35% | 82.37% |
| | | RFC | 67.33% | 68.98% | 69.67% | 74.47% | 75.62% | 76.27% | 77.40% | 72.82% |
| Mean–Variance | 1080 | SVM | 66.36% | 66.70% | 72.98% | 75.28% | 80.72% | 81.32% | 81.94% | 75.04% |
| | | DNN | 63.11% | 77.88% | 87.29% | 88.19% | 90.04% | 91.58% | 91.72% | 84.26% |
| | | RFC | 66.13% | 66.20% | 71.31% | 74.75% | 79.25% | 79.71% | 80.37% | 73.96% |
Table 7. AUC values calculated for the detection results using statistical indicators from VNIR for the three classifiers, derived from the three 2D ROC curves.

| | AUC(D,F) | AUC(D,τ) | AUC(F,τ) | AUC_TD | AUC_BS | AUC_TD-BS | AUC_ODP | AUC_OD | AUC_SNPR |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| SVM | 0.9579 | 0.4703 | 0.3009 | 1.4281 | 0.6569 | 0.1694 | 1.1694 | 1.1272 | 1.5628 |
| DNN | 0.9674 | 0.5831 | 0.0967 | 1.5505 | 0.8708 | 0.4864 | 1.4864 | 1.4538 | 6.0312 |
| RFC | 0.9142 | 0.4875 | 0.2427 | 1.4017 | 0.6715 | 0.2448 | 1.2448 | 1.1590 | 2.0086 |

AUC: area under the curve. AUC(D,F), AUC(D,τ), and AUC(F,τ): AUC values of the 2D ROC curves of (P_D, P_F), (P_D, τ), and (P_F, τ).
Table 8. AUC values calculated for the detection results using statistical indicators from SWIR for the three classifiers, derived from the three 2D ROC curves.

| | AUC(D,F) | AUC(D,τ) | AUC(F,τ) | AUC_TD | AUC_BS | AUC_TD-BS | AUC_ODP | AUC_OD | AUC_SNPR |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| SVM | 0.9032 | 0.3817 | 0.1162 | 1.2849 | 0.7870 | 0.2654 | 1.2654 | 1.1686 | 3.2833 |
| DNN | 0.9338 | 0.5187 | 0.0772 | 1.4525 | 0.8566 | 0.4415 | 1.4415 | 1.3753 | 6.7169 |
| RFC | 0.8773 | 0.3997 | 0.1340 | 1.3960 | 0.7434 | 0.3848 | 1.3848 | 1.2621 | 3.8723 |

AUC: area under the curve. AUC(D,F), AUC(D,τ), and AUC(F,τ): AUC values of the 2D ROC curves of (P_D, P_F), (P_D, τ), and (P_F, τ).
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
