Detecting Infected Cucumber Plants with Close-Range Multispectral Imagery

This study used close-range multispectral imagery over cucumber plants inside a commercial greenhouse to detect powdery mildew caused by Podosphaera xanthii. The imagery was collected using a MicaSense® RedEdge camera at 1.5 m above the top of the plants. Image registration was performed using Speeded-Up Robust Features (SURF) with an affine geometric transformation. The image background was removed using a binary mask created with the aligned NIR band of each image, and the illumination was corrected using Cheng et al.'s algorithm. Different features were computed, including RGB composites, image reflectance values, and several vegetation indices. For each feature, a fine Gaussian Support Vector Machine (SVM) algorithm was trained and validated to classify healthy and infected pixels. The dataset used to train and validate the SVM comprised 1000 healthy and 1000 infected pixels, split 70–30% into training and validation datasets, respectively. The overall validation accuracy was 89, 73, 82, 51, and 48%, respectively, for the blue, green, red, red-edge, and NIR band images. With the RGB images, we obtained an overall validation accuracy of 89%, while the best vegetation index image was the PMVI-2 image, which produced an overall accuracy of 81%. Using the five bands together, overall accuracy dropped from 99% in the training dataset to 57% in the validation dataset. While the results of this work are promising, further research should consider increasing the number of images to achieve better training and validation datasets.


Introduction
Greenhouse production in Canada in 2019 comprised about 838 specialized commercial greenhouses covering more than 17.6 million m² (mainly in Ontario), generating more than CAD 1.1 billion in revenue, and employing more than 12,429 persons [1]. Cucumber is one of the main vegetables produced, with a harvested land area of 4.8 million m² and a production of about 240,451 metric tons, worth about CAD 485 million [1]. As with other crops, fungal diseases can affect greenhouse crops and be a significant limiting production factor [2]. Powdery mildew is one cucumber plant disease that can cause yield losses of 30–50% [3]. It is caused by Podosphaera xanthii, a biotrophic pathogen that obtains nutrients from the host without killing the host cells [4].
To control cucumber powdery mildew, approaches based on temperature and relative humidity have been proposed as early warning methods [5][6][7]. Other methods include periodic visual inspection of the plants by technicians. This approach is time-consuming, costly, and does not collect spatial information. It can also be unreliable, as leaves that look healthy can be infected. Detecting a disease once a plant wilts or collapses is too late because, in a greenhouse, diseases can spread quickly. Therefore, early disease detection is needed.

Experiment
This study was performed in a greenhouse managed by Great Lakes Greenhouses Inc. in Leamington (ON). In this facility, cucumber plants (Cucumis sativus L.) cultivar Verdon (Paramount Seeds Inc., Stuart, FL, USA) were grown in parallel rows with a width ranging from 185 to 205 cm, and grew as high as 180 cm. The height of the greenhouse ranged from 3 to 6 m and the space between plants varied from 40 to 45 cm (Figure 1). The controlled environment had a temperature between 20 and 23 °C and a relative humidity between 60 and 70%. Such environmental conditions can be favorable to disease development [39].
The cucumber plants were mature and at the fruiting stage. Infection by powdery mildew was done naturally from previously infected plants thanks to environmental conditions that favored spore germination and air circulation that favored their dispersion. The first observed powdery mildew symptoms were pale-yellow leaf spots. Then, on the adaxial side of the leaves, there was the formation of white powdery spots that quickly expanded into large blotches.

Image Acquisition
We acquired 20 multispectral images from a MicaSense® RedEdge camera (Figure 2a) mounted on a plate attached to an extension arm of a cart (Figure 2b), which had wheels to facilitate movement down the greenhouse aisles. The RedEdge camera was remotely controlled by computer. It had a horizontal field of view of 47.2° and the five bands listed in Table 1. The images were collected over healthy and diseased plants in the following greenhouse area: Range 2; House 5; Rows 25, 27, 29, and 31, from Post 1 to 3, covering about 5 × 12 m². To cover the defined area, we needed to take several images over each plant row. Based on the camera's position (1.5 m above the plants) and its vertical FOV of 35.4°, the distance between adjacent images was estimated at 86 cm using the equation of Fernández et al. [40]. The images, which had a pixel size of 0.10 cm, were collected under greenhouse daylight without the use of artificial light. Because the acquisition date was day 223 of the Julian calendar and the time of acquisition was between 11 a.m. and 2 p.m., the corresponding solar declination ranged between 14.92° and 14.96°, the solar azimuth between 145.49° and 232.76°, and the solar elevation between 54.63° and 61.0°, according to the solar position calculator of the National Oceanic and Atmospheric Administration (NOAA) (https://gml.noaa.gov/grad/solcalc/azel.html, accessed on 21 July 2021).
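The image spacing above can be checked with simple pinhole-camera geometry. The sketch below (Python; it does not reproduce the exact equation of Fernández et al. [40], and the roughly 10% overlap figure is our own inference from the stated numbers) estimates the ground footprint of one image and the overlap implied by an 86 cm spacing:

```python
import math

def ground_footprint(height_m: float, fov_deg: float) -> float:
    """Ground extent covered by one image for an ideal pinhole camera."""
    return 2.0 * height_m * math.tan(math.radians(fov_deg) / 2.0)

# Vertical footprint at 1.5 m above the canopy with a 35.4 deg vertical FOV.
footprint = ground_footprint(1.5, 35.4)
# An 86 cm spacing between adjacent images then implies the forward overlap.
overlap = 1.0 - 0.86 / footprint
print(f"footprint = {footprint:.2f} m, overlap = {overlap:.0%}")
```

With these numbers the footprint comes out near 0.96 m, so the 86 cm spacing corresponds to roughly 10% overlap between consecutive images.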

Before collecting the images, camera calibration was performed using a MicaSense reflectance white panel having an area of 225 cm² (15 cm × 15 cm). The panel was placed over the top of the canopy, 1.5 m from the camera (Figure 3). The reflectance value of the panel varied as a function of the band (Table 1).

Image Processing
The workflow used in the image processing is presented in Figure 4. All data were processed using MATLAB R2020b (MathWorks, Inc., Natick, MA, USA). The information related to the MATLAB R2020b functions and their parameters was obtained from www.mathworks.com. The collected band images were imported into the MATLAB workspace and converted from uint16 to uint8 with the im2uint8 function. Then, the first 700 columns were removed from each band image because this image region corresponds to the aisle of the greenhouse [40].
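The two preprocessing steps above (uint16-to-uint8 conversion and cropping the aisle columns) can be sketched in NumPy. This Python equivalent of the MATLAB calls is illustrative only; the array shape used below is a placeholder, not necessarily the camera's native resolution:

```python
import numpy as np

def im2uint8(img16: np.ndarray) -> np.ndarray:
    """Mimic MATLAB's im2uint8 for uint16 input: rescale 0..65535 to 0..255."""
    return (img16.astype(np.float64) / 257.0).round().astype(np.uint8)

def crop_aisle(band: np.ndarray, n_cols: int = 700) -> np.ndarray:
    """Drop the first n_cols columns (the greenhouse aisle region)."""
    return band[:, n_cols:]

band16 = np.full((960, 1280), 65535, dtype=np.uint16)  # dummy band image
band8 = crop_aisle(im2uint8(band16))
print(band8.shape, int(band8.max()))  # (960, 580) 255
```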


Image Registration
The image bands were aligned following the method detailed in Fernández et al. [40], a brief description of which is provided in this paper. The method first detects matching points using Speeded-Up Robust Features (SURF) on both the fixed (red-edge image) and the moving (all other bands) images with the detectSURFFeatures function that uses a multiscale analysis of the Hessian matrix to compute blob-like structures (SURF features) that are stored in SURF Points objects [42]. The function parameters were modified from their default values to obtain the highest possible number of blobs. Once the SURF Points objects were obtained, the extractFeatures function was used to obtain the extracted feature vectors (descriptors) and their corresponding locations on both the moving and fixed images. To match the extracted features, the matchFeatures function was used to obtain the matching features set from the moving and fixed images. The image registration process involved a geometric transformation that allowed the moving image to be transformed into the fixed image (red-edge) band space based on the matching points of the moving and fixed images. Such transformation corrected the image distortions and allowed band alignment.

The geometric transformation was done as follows. First, the M-estimator Sample Consensus (MSAC) algorithm embedded in the estimateGeometricTransform function excluded outlier matching points [43]. The MSAC algorithm is a faster variant of the Random Sample Consensus (RANSAC) algorithm [44]. Once the outliers were removed, the estimateGeometricTransform function created a 2D geometric transform object containing the geometric transformation (T) matrix that defined the geometric transformation type. In this study, we used the affine geometric transformation because it provided the best result; that is, an RMSE lower than 1 pixel with a Gaussian distribution [40]. To apply the geometric transformation to each moving image (blue, green, red, and NIR bands), we used the imwarp function with the respective T matrix. The function returned the moving image transformed into the red-edge band space. No computational load issues were observed because each method was computed separately and did not require many computational resources.
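The core of the final step, estimating an affine T matrix from matched point pairs, can be illustrated in Python/NumPy. This is a least-squares sketch only; MATLAB's estimateGeometricTransform additionally runs MSAC outlier rejection, which is omitted here:

```python
import numpy as np

def estimate_affine(src: np.ndarray, dst: np.ndarray) -> np.ndarray:
    """Least-squares affine transform T (3x3) mapping src points to dst points (Nx2)."""
    n = len(src)
    A = np.hstack([src, np.ones((n, 1))])             # rows are [x, y, 1]
    coeffs, *_ = np.linalg.lstsq(A, dst, rcond=None)  # solves A @ coeffs ~= dst
    T = np.eye(3)
    T[:2, :] = coeffs.T                               # [[a, b, tx], [c, d, ty]]
    return T

def apply_affine(T: np.ndarray, pts: np.ndarray) -> np.ndarray:
    """Apply a 3x3 affine matrix to Nx2 points in homogeneous coordinates."""
    homog = np.hstack([pts, np.ones((len(pts), 1))])
    return (T @ homog.T).T[:, :2]

# Recover a known transform from noiseless correspondences.
rng = np.random.default_rng(0)
src = rng.uniform(0, 100, size=(20, 2))
T_true = np.array([[1.02, 0.05, 3.0], [-0.04, 0.98, -2.0], [0.0, 0.0, 1.0]])
dst = apply_affine(T_true, src)
T_est = estimate_affine(src, dst)
print(np.allclose(T_est, T_true, atol=1e-6))  # True
```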

Conversion to Reflectance and Background Removal
After the image registration, digital numbers (DN) of the aligned images were converted into reflectance using Equation (1) and the coefficients of Table 1. The resulting reflectance images were then normalized between 0 and 1 to extend the image histogram into the entire available intensity range to create a higher contrast between the bright and dark structural elements and to reduce the local mean intensity [45].
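The normalization step can be sketched as a simple min-max stretch (Python/NumPy; Equation (1) and the calibration coefficients of Table 1 are not reproduced here, so only the normalization is shown, with illustrative reflectance values):

```python
import numpy as np

def normalize01(band: np.ndarray) -> np.ndarray:
    """Min-max stretch of a reflectance band to the [0, 1] intensity range."""
    lo, hi = float(band.min()), float(band.max())
    return (band - lo) / (hi - lo)

refl = np.array([[0.10, 0.25], [0.40, 0.55]])  # dummy reflectance values
print(normalize01(refl))
```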
To remove the background and reduce the presence of the wires used to support the plants in the greenhouse, we applied the imbinarize function over the normalized NIR registered image, which created a binary image from the grayscale image (in our case, the registered NIR band image) by replacing all values above a determined threshold with 1 and setting all other values to 0. We applied the function using Bradley's adaptive method [46], which does not require a user-defined global threshold because it computes a locally adaptive threshold for each pixel from the local mean intensity around the pixel's neighborhood [46]. By this procedure, we obtained a mask of the aligned NIR band image and multiplied the binary mask with the respective aligned band images (blue, green, red, red-edge, and NIR bands) to remove their background.
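A minimal NumPy sketch of Bradley-style adaptive thresholding is given below; the window size and sensitivity values are illustrative choices, not the exact defaults of MATLAB's imbinarize:

```python
import numpy as np

def bradley_mask(gray: np.ndarray, window: int = 15, t: float = 0.10) -> np.ndarray:
    """Binary mask: a pixel is foreground if above (1 - t) * its local mean.

    The local mean is computed with an integral image, as in Bradley's method.
    `window` should be odd so the neighborhood is centered on the pixel.
    """
    h, w = gray.shape
    pad = window // 2
    padded = np.pad(gray.astype(np.float64), pad, mode="edge")
    # integral image with a leading zero row/column for easy window sums
    ii = np.pad(padded.cumsum(axis=0).cumsum(axis=1), ((1, 0), (1, 0)))
    s = (ii[window:, window:] - ii[:-window, window:]
         - ii[window:, :-window] + ii[:-window, :-window])
    local_mean = s / (window * window)
    return (gray > local_mean * (1.0 - t)).astype(np.uint8)

# Bright foreground blob on a dark background is kept; the background is zeroed.
gray = np.zeros((40, 40))
gray[10:20, 10:20] = 200.0
mask = bradley_mask(gray)
masked_band = gray * mask  # analogous to multiplying the mask with each band
```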

Illumination Correction
One challenge when working with close-range imagery inside a greenhouse under natural light conditions concerns color consistency caused by illumination differences during image acquisition. Illumination correction was applied to the RGB images using various MATLAB functions based on the algorithm developed by Cheng et al. [47]. The algorithm works in the color domain by selecting bright and dark pixels using a projection distance in the color distribution and then applying a principal component analysis (PCA) to estimate the illumination direction. First, the rgb2lin function was applied to linearize each RGB-masked image because the principal component analysis assumes that the RGB values of the image are linear. The illumpca function computed the projection of all the color points in the color domain onto the direction of the mean color vector. The color points were then sorted according to their projection distances, and the function selected the color points with the largest and smallest projection distances using a threshold of 3.5%, which was found to be the best value by Cheng et al. [47]. A principal component analysis was then performed on the selected pixels to obtain the estimated illumination vector, which contained three coefficients (C1, C2, C3) corresponding to the red, green, and blue band images. The illumination vector was then used in the chromadapt function, which adjusted the color balance of the RGB image with a chromatic adaptation according to the scene illumination represented by the three PCA coefficients. Finally, to display the illumination-corrected (white-balanced) image, a gamma correction was applied using the lin2rgb function. The resulting corrected images were then used to compute the vegetation index images listed in Table 2.
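A simplified Python/NumPy sketch of this bright-and-dark-pixel selection is given below. This is our own reimplementation, not MATLAB's illumpca; the 3.5% threshold follows the text, while the white-balance step is a plain von Kries-style channel division rather than chromadapt:

```python
import numpy as np

def estimate_illuminant(rgb: np.ndarray, frac: float = 0.035) -> np.ndarray:
    """Illuminant direction from bright/dark pixels, Cheng et al.-style sketch."""
    pts = rgb.reshape(-1, 3).astype(np.float64)
    mean_dir = pts.mean(axis=0)
    mean_dir /= np.linalg.norm(mean_dir)
    # signed projection distance of each color point along the mean direction
    d = (pts - pts.mean(axis=0)) @ mean_dir
    k = max(1, int(frac * len(d)))
    order = np.argsort(d)
    sel = pts[np.concatenate([order[:k], order[-k:]])]   # darkest and brightest
    # first principal direction (PCA without centering) of the selected colors
    _, _, vt = np.linalg.svd(sel, full_matrices=False)
    illum = np.abs(vt[0])
    return illum / np.linalg.norm(illum)

def white_balance(rgb: np.ndarray, illum: np.ndarray) -> np.ndarray:
    """Von Kries-style chromatic adaptation: divide channels by the illuminant."""
    balanced = rgb.astype(np.float64) / (illum * np.sqrt(3.0))
    return np.clip(balanced, 0.0, 1.0)

# Synthetic gray scene lit by a reddish illuminant; the estimate recovers it.
rng = np.random.default_rng(1)
true_illum = np.array([0.8, 0.5, 0.33])
true_illum /= np.linalg.norm(true_illum)
img = (rng.uniform(0.05, 1.0, size=(32 * 32, 1)) * true_illum).reshape(32, 32, 3)
est = estimate_illuminant(img)
```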

Support Vector Machine Classification
To classify healthy and infected pixels from the registered bands and related vegetation indices, we used a fine Gaussian Support Vector Machine (SVM) classification algorithm. An SVM classifier was used because it can work successfully with small training datasets such as the one in this study. Indeed, SVM classifiers have a good generalization capability and classification accuracies comparable to more sophisticated machine learning algorithms [61,62]. ANN or deep learning algorithms could not be used in our case because they require large datasets for training [63][64][65]. For example, Ferentinos [66] used 87,848 leaf images, including over 1300 images of cucumber leaves infected with downy mildew (Pseudoperonospora cubensis), to train a convolutional neural network (CNN) to detect plant diseases.
From each of the 20 images, we manually selected and extracted the location and spectral information of 50 healthy and 50 infected pixels, for a total of 1000 healthy and 1000 infected pixels. For each feature, the SVM algorithm was first trained using 70% of the healthy and infected pixels (i.e., 700 pixels of each class). The trained model was then validated with the remaining 600 pixels (i.e., 300 healthy and 300 infected) of its corresponding feature. For each feature, a confusion matrix and the Sensitivity, Precision, F1 Score, and Overall Accuracy of the training and validation datasets were computed. The SVM was trained and validated using pixel information, as in Es-Saady et al. [13], Islam et al. [14], Shi et al. [15], Folch et al. [17], Kong et al. [18], Berdugo et al. [33], and Wang et al. [38].
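The training and validation protocol above can be sketched as follows (Python with scikit-learn rather than MATLAB; the reflectance distributions are synthetic stand-ins for the paper's pixel data, and the gamma value is only a generic "fine Gaussian" choice, not the paper's):

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic stand-in: 1000 "healthy" and 1000 "infected" single-band
# reflectance values; the means and spreads are illustrative only.
rng = np.random.default_rng(42)
healthy = rng.normal(0.08, 0.02, size=(1000, 1))
infected = rng.normal(0.16, 0.03, size=(1000, 1))
X = np.vstack([healthy, infected])
y = np.array([0] * 1000 + [1] * 1000)

# 70-30 split into training and validation sets, as in the paper.
X_tr, X_va, y_tr, y_va = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)

# "Fine Gaussian" SVM: an RBF kernel with a small kernel scale (large gamma).
clf = SVC(kernel="rbf", gamma=50.0).fit(X_tr, y_tr)
pred = clf.predict(X_va)

# Confusion-matrix statistics reported in the paper.
tp = int(np.sum((pred == 1) & (y_va == 1)))
fn = int(np.sum((pred == 0) & (y_va == 1)))
fp = int(np.sum((pred == 1) & (y_va == 0)))
tn = int(np.sum((pred == 0) & (y_va == 0)))
sensitivity = tp / (tp + fn)
precision = tp / (tp + fp)
f1 = 2 * precision * sensitivity / (precision + sensitivity)
accuracy = (tp + tn) / len(y_va)
print(f"sensitivity={sensitivity:.2f} precision={precision:.2f} "
      f"f1={f1:.2f} accuracy={accuracy:.2f}")
```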

Illumination Correction
The results of the illumpca function used to correct the illumination of the 20 aligned RGB masked images are presented in Table 3. For each image, the coefficients C1, C2, and C3 correspond to the coefficients computed for the red, green, and blue bands, respectively, during the PCA. For each image, the coefficient with the highest value indicates the band that had the highest illumination. The red band consistently presented the lowest illumination value, while the highest coefficient was observed for the green or blue bands (Table 3). Figure 5a shows an RGB image that had a high C3 coefficient (high illumination in the blue band), inducing a blue tint in the image, and Figure 5b shows its corrected version. The corrected images were then used to compute the vegetation index images listed in Table 2. Figure 6 compares an aligned RGB image of cucumber plants acquired from the top of the canopy and its related vegetation indices.

SVM Classification
We trained a fine Gaussian SVM classifier to sort healthy and infected pixels and first validated it with each of the five band reflectances. Figure 7 compares the reflectance values of the healthy and infected pixels for each multispectral band. Table A1 gives the SVM classification parameters obtained with the training dataset. The SVM classification parameters after validation are presented in Table 4. The best band reflectance was the blue band, leading to an overall classification accuracy of 89%, with 84% of the healthy pixels and 94% of the infected pixels being well classified. The worst band reflectance was the red-edge band, giving an overall classification accuracy of 51%, with only 45% of the healthy pixels and 59% of the infected pixels being well classified.
Table 4. Confusion matrix and related statistics when a fine Gaussian Support Vector Machine (SVM) classifier is validated over the reflectance in the blue, green, red, red-edge, and NIR bands to classify 300 healthy (H) and 300 infected (I) pixels from cucumber images collected inside a greenhouse with a MicaSense® RedEdge camera.

Table A2 presents the SVM parameters of the classifier trained on all the vegetation index images of Table 2. The corresponding SVM parameters for the validation dataset are shown in Table 5. The highest overall accuracy (89%) was observed for the RGB composite, with 92 and 86% of the infected and healthy pixels, respectively, being properly classified (Table 5). The second highest overall accuracy (81%) was observed for PMVI-2, with 85% of infected and 79% of healthy pixels being properly classified (Table 5). A major drop in overall accuracy, to 70%, was then observed in the case of NDVI, with 70% of infected and 71% of healthy pixels being correctly classified (Table 5). The worst overall accuracy (30.5%) was for the redness index, with only 14% of the infected pixels and 47% of the healthy pixels being properly classified (Table 5).

Table 5. Confusion matrix and related statistics when a fine Gaussian Support Vector Machine (SVM) classifier is validated using various spectral features computed with the MicaSense® RedEdge band reflectance images to classify 300 healthy (H) and 300 infected (I) pixels from cucumber images collected inside a greenhouse.


Discussion
In this study, we evaluated the feasibility of using multispectral images acquired at close distance over greenhouse cucumber plants to detect powdery mildew using a MicaSense® RedEdge camera attached to a mechanical extension of a mobile cart. Images were obtained at 1.5 m from the top of the canopy, but such a short distance between the camera and the plants meant that a GPS-based method for band alignment and image stitching would not work accurately [67]. In this case, the images that needed to be registered (the moving images) were registered to a reference image (the fixed image) [68]. The use of close-range imagery for plant disease detection, however, has been previously reported: Thomas et al. [69] reported a distance of 80 cm from the top of the canopy of six barley cultivars in a study of their susceptibility to powdery mildew caused by Blumeria graminis f. sp. hordei in a greenhouse environment. Under laboratory conditions, a distance of 40 cm from the plants was reported by Kong et al. [18], who acquired hyperspectral images to detect Sclerotinia sclerotiorum on oilseed rape plants, and Fahrentrapp et al. [67] used MicaSense RedEdge imagery to detect gray mold infection caused by Botrytis cinerea on tomato leaflets (Solanum lycopersicum) from a distance of 66 cm.
Another problem with these images was the presence of an image background, even though removing it is a standard preprocessing step. As already noted by Wspanialy and Moussa [10], disease detection techniques can be difficult to apply to images with a high degree of background clutter in a greenhouse setting. In our study inside a commercial greenhouse, the first 700 columns of each image were cropped to remove the white greenhouse aisle. Cropping a section of the image has also been used when there is a complex background in a real environment [70]. We also noticed that, due to the camera position, only the most exposed leaves were detected, which created dark areas inside the canopy vegetation that also presented some band misalignment. To remove the background clutter noise as well as some structural wires present in the greenhouse, we created a binary mask using Bradley's adaptive threshold method [46], which computes a threshold for each pixel using the local mean intensity around the neighborhood of the pixel. This approach is useful given the internal and external illumination differences among images, and it gave better results than a global threshold such as that used in Otsu's method [71]. The resulting binary mask had a value of 0 for the background area and 1 for the vegetation.
The binary mask was overlaid on each respective aligned band to obtain band images with their background clutter noise removed and the foreground vegetation retained. The approach we used to remove the background is simpler than the algorithm proposed by Bai et al. [72], which removes the non-green background from images acquired over single cucumber leaves with scabs. In their algorithm, a correction of abnormal points in a 3D histogram of the image is performed, followed by rapid segmentation with decreasing histogram dimensions to construct a Gaussian optimization framework that finds the optimal threshold to isolate the scabs. In addition, Bai et al. [72] worked on images of single cucumber leaves, whereas we removed the noise from images acquired over the whole plant canopy. Our approach also did not require multiple image acquisitions, unlike the method of Wspanialy and Moussa [10], who studied powdery mildew on greenhouse tomato plants. In their method, which focused on foreground leaves, two images of the plants were collected, one illuminated with a red light to increase the contrast between the foreground leaves and the background. The images were then converted to grayscale, and Otsu's method was applied to create a binary mask. Noise was removed by applying a 3 × 3 median filter to the mask, and the filtered mask was then applied to the RGB images. Wspanialy and Moussa's [10] method was also applied to images acquired at 50 cm from the plant, while in our case the images were collected at 1.5 m above the plant canopy.
One challenge when working with close-range imagery inside a greenhouse is related to the differences in illumination during the image acquisition. Indeed, we observed a bluish-purple coloration over the aligned RGB images due to high illuminance in the blue band. To achieve color consistency, we applied a simple and efficient illumination estimation method developed by Cheng et al. [47]. The method selects bright and dark pixels using a projection distance in the color distribution and then applies a principal component analysis to estimate the illumination direction. Such selection of bright and dark pixels in the image allows obtaining clusters of points with the largest color differences between these pixels.
We trained a fine Gaussian SVM to sort healthy and infected pixels using individual band reflectances, the RGB composite, several vegetation indices, and all bands together. The overall classification accuracies of the trained model were 93% for the RGB composite and 99% for all band reflectances together, but the all-band model dropped to a 57% overall accuracy in validation. The validation of the SVM model produced the highest overall accuracy with the RGB composite (89%). It was higher than those of Ashhourloo et al. [73], who applied a maximum likelihood classifier to sort healthy wheat leaves and those infected with Puccinia triticina using various vegetation index images derived from hyperspectral data: narrowband normalized difference vegetation index (83%), NDVI (81%), greenness index (77%), anthocyanin reflectance index (ARI) (75%), structural independent pigment index (73%), physiological reflectance index (71%), plant senescence reflectance index (69%), triangular vegetation index (69%), modified simple ratio (68%), normalized pigment chlorophyll ratio index (68%), and nitrogen reflectance index (67%). Our accuracy was also higher than those reported by Pineda et al. [34], who tested an artificial neural network (70%), a logistic regression analysis (73%), and an SVM (46%) to classify multicolor fluorescence images acquired over healthy and powdery-mildew-infected zucchini leaves.
However, our accuracies with both the training and validation datasets were lower than in Berdugo et al. [33] (overall accuracy of 100%), who applied a stepwise discriminant analysis to discriminate healthy cucumber leaves from those infected with powdery mildew using the following features: effective quantum yield, SPAD 502 Plus Chlorophyll Meter (Spectrum Technologies, Inc., Aurora, IL, USA) values, maximum temperature difference (MTD), NDVI, and ARI. Our overall classification accuracies were also lower than those reported by Thomas et al. [69] (94%), who applied a non-linear SVM over a combination of pseudo-RGB representations and spectral information from hyperspectral images to classify powdery mildew caused by Blumeria graminis f. sp. hordei in barley, and lower than those of Kong et al. [18], who applied a partial least squares discriminant analysis (PLS-DA) (94%), a radial basis function neural network (RBFNN) (98%), an extreme learning machine (ELM) (99%), and a support vector machine (SVM) (99%) to hyperspectral images for classifying healthy pixels and those infected with Sclerotinia sclerotiorum on oilseed rape plants.

Conclusions
Our study tested the use of ungeoreferenced close-range multispectral MicaSense RedEdge images to detect powdery mildew over greenhouse cucumber plants. The images were spatially and spectrally corrected and used to compute vegetation index images. The resulting images were entered into an SVM classifier to sort pixels as a function of their status (healthy or infected). The best overall accuracies obtained with the validation dataset were achieved with the RGB composite (89%) and the PMVI-2 image (81.83%). The present work is part of an ongoing research project to build an automatic cucumber powdery mildew detection system for commercial greenhouses. While presenting preliminary results, we introduced for the first time close-range multispectral imagery for detecting cucumber powdery mildew under real commercial conditions. The proposed methodology is based on proven methods for image registration and band alignment of close-distance imagery collected at the canopy level. At the same time, we achieved color constancy over each image by using the illumination correction method of Cheng et al. [47] and calibrated an SVM classifier to sort healthy and infected pixels.
However, further work is needed to test the classifier over a larger dataset that would allow the use of other classifiers such as ANN or deep learning algorithms. There is also a need to determine the best distance at which to collect images of individual leaves. The images were acquired with a camera mounted on a cart, and it is necessary to analyze the effect of potential cart vibrations on image quality. The analysis used commercial software (MATLAB), and there is a need to adapt the method to open-source software. Our study was able to detect infected pixels in MicaSense RedEdge imagery.

Table A2. Confusion matrix and related statistics when a fine Gaussian Support Vector Machine (SVM) classifier is trained with spectral information acquired from various spectral features computed with the MicaSense® RedEdge band images to classify 700 healthy (H) and 700 infected (I) pixels from cucumber images collected inside a greenhouse. The features are presented from the highest to the lowest overall accuracy.

Table A2 columns: Feature, Confusion Matrix, Specificity, Precision, F1 Score, Overall Accuracy (%).