Modeling and Testing of Growth Status for Chinese Cabbage and White Radish with UAV-Based RGB Imagery

Abstract: Conventional crop-monitoring methods are time-consuming and labor-intensive, necessitating new techniques that provide faster measurements and higher sampling intensity. This study reports on mathematical modeling and testing of growth status for Chinese cabbage and white radish using unmanned aerial vehicle red, green, and blue (UAV-RGB) imagery for measurement of their biophysical properties. Chinese cabbage seedlings and white radish seeds were planted at 7-10-day intervals to provide a wide range of growth rates. Remotely sensed digital imagery data were collected for test fields at approximately one-week intervals using a UAV platform equipped with an RGB digital camera flying at 2 m/s at 20 m above ground. Radiometric calibrations for the RGB band sensors were performed on every UAV flight using standard calibration panels to minimize the effect of ever-changing light conditions on the RGB images. Vegetation fractions (VFs) of crops in each region of interest from the mosaicked ortho-images were calculated as the ratio of pixels classified as crops, segmented using the Otsu threshold method and the excess green (ExG) vegetation index. Plant heights (PHs) were estimated using the structure from motion (SfM) algorithm to create 3D surface models from crop canopy data. Multiple linear regression equations consisting of three predictor variables (VF, PH, and VF × PH) and four different response variables (fresh weight, leaf length, leaf width, and leaf count) provided good fits, with coefficients of determination (R²) ranging from 0.66 to 0.90. The validation results using a dataset of crop growth obtained in a different year also showed strong linear relationships (R² > 0.76) between the developed regression models and standard methods, confirming that the models make it possible to use UAV-RGB images for quantifying spatial and temporal variability in biophysical properties of Chinese cabbage and white radish over the growing season.


Introduction
On-site monitoring of crop growth throughout the growing season plays an important role in assessing overall crop conditions, determining when to irrigate, and forecasting potential yields [1][2][3][4]. Particularly, periodic monitoring of various biophysical properties of crops grown in a field, such as biomass, leaf area index, and plant height, can help growers to effectively optimize inputs such as fertilizers and herbicides as well as to accurately estimate final yields [5][6][7]. Traditionally, crop-monitoring studies have used in-field measurements or airborne/satellite data to effectively cover wide areas. Field-based methods involving on-site sampling and laboratory analysis have disadvantages in collection of data because they are often destructive, labor-intensive, costly, and time consuming, thereby limiting the number of samples required for establishment of efficient crop growth management [8,9].
Precision agriculture is a site-specific soil and crop management system that assesses variability in soil properties (e.g., pH, organic matter, and soil nutrient levels), field properties (e.g., slope and elevation), and crop parameters (e.g., yield and biomass) using various tools including the global positioning system (GPS), geographic information systems (GIS), and remote sensing (RS). To manage crops site-specifically, it is necessary to collect information such as crop and soil conditions and weed distribution at different locations in a field. Remote sensing of crops can be more attractive than traditional methods of crop monitoring due to the ability to cover large areas rapidly and repeatedly. Remote sensing techniques from manned airborne or satellite platforms have been widely adopted for crop monitoring [3,10] since measurements are non-destructive and non-invasive and enable scalable implementation in space and time [11]. A common use of remote sensing is evaluation of crop growth status based on canopy greenness by quantifying the distribution of a vegetation index (VI) in the crop field. Various vegetation indices, including the Normalized Difference Vegetation Index (NDVI) and Excess Green (ExG), have been defined as representative reflectance values of the vegetation canopy [12].
In recent years, unmanned aerial vehicles (UAVs) have been commonly used for low-altitude, high-resolution remote sensing applications due to advantages such as versatility, light weight, and low operational costs [2,13]. In addition, UAVs offer a customizable aerial platform on which a variety of sensors can be mounted and flown to collect aerial imagery with much finer spatial and temporal resolutions than piloted aircraft or satellite remote sensing systems, despite several limitations such as relatively short flight time, lower payload, and sensitivity to weather and terrain conditions. Advancements in the accuracy, economic efficiency, and miniaturization of many technologies, including GPS receivers and computer processors, have pushed UAV systems into a cost-effective, innovative remote sensing platform [14]. In particular, multirotor UAVs have been commonly used to assess the vegetation status of crops and predict their yields because the flexibility of vertical takeoff and landing platforms carrying various image sensors makes it easy to fly over agricultural fields [15]. The acquired aerial images can help farmers evaluate crop growth status, such as canopy greenness, leaf area, and water stress, as well as various geographic conditions including crop area, digital surface models (DSMs), and depth contour lines [16,17].
Several review articles have highlighted the wide range of applications for UAVs and mounted sensors. In agriculture, which relies on optical diffuse reflectance sensing in the visible and near-infrared (NIR) ranges to study the interaction between incident light and crop surface properties, UAVs have been adopted for monitoring water status and drought stress in fruit trees using NIR band data [18]; additionally, they have been used for collecting multispectral and hyperspectral imagery for use in spectral indices [19] and even chlorophyll fluorescence [20]. Baluja et al. analyzed the relationships between various indices derived from UAV imagery for assessing the water status variability of a commercial vineyard [21]. RGB (red, green, and blue) data in the visible range were also utilized by several researchers to investigate the relationships between biophysical parameters of various crops and their UAV imagery. In a study by Torres-Sánchez et al., visible spectral indices were calculated for multi-temporal mapping of the vegetation fraction from UAV images, and an automatic object-based method was proposed to detect vegetation in herbaceous crops [15,22]. Yun et al. conducted multi-temporal monitoring of soybean vegetation fraction to evaluate crop conditions using UAV-RGB images [23]. Additionally, Bendig et al. estimated biomass of barley using crop height derived from UAV imagery [24], while Anthony et al. presented a micro-UAV system mounted with a laser scanner to measure crop heights [25]. Notably, Geipel et al. used both vegetation indices and crop height based on UAV-RGB imagery for predicting corn yields [26].
Crop growth models require use of a wide range of biophysical parameters, including biomass, leaf area index, and plant height, which are all closely related to future yield [27,28]. The biophysical properties of a crop measured at different locations in a field may further deliver vital information about specific disease situations, enabling field-specific decisions on plant protection [29]. In addition, yield maps generated using crop growth models can provide information about the spatial and temporal variability of yields in previous years [30]. However, these maps have limitations in explaining current growing conditions. To address this issue, several studies have demonstrated the feasibility of using crop growth models to predict yield using linear regression models built with additional information on crop management [31] or weather and soil attributes [32][33][34].
Since plant height is a critical indicator of crop evapotranspiration [35], crop yield [36], crop biomass [24], and crop health [25], 3D image-based plant phenotyping techniques have been utilized to obtain plant architecture, such as height, size, and shape [14,37,38]. In particular, in combination with a non-vegetation ground model, plant height can be obtained using crop surface models (CSMs) [39,40]. Bendig et al. [24,39] defined the CSM as the absolute height of crop canopies, and Geipel et al. [26] defined the CSM as the difference between a digital terrain model (DTM) and a digital surface model (DSM). Multi-temporal CSMs derived from 3D point clouds can deliver resolution at the centimeter level [39,41]. Such CSMs have been applied to various crops such as sugar beet, rice, and summer barley [24,[39][40][41]. Since light detection and ranging (LiDAR) sensors allow users to determine the distance from the sensor to target objects based on discrete-return or continuous-wave signals, LiDAR measurements have been successfully used, despite the sensor's relatively high cost, for constructing 3D canopy structures with satisfactory point densities [42][43][44]. The emergence of structure from motion (SfM)-based software has enabled efficient creation of 3D point clouds and highly detailed ortho-photos without LiDAR sensors [45,46]. SfM photogrammetry is a computer vision method that offers high-resolution 3D topographic or structural reconstruction from overlapping imagery [47]. In principle, SfM performs a bundle adjustment among UAV images based on matching features between the overlapped images to estimate the interior and exterior orientation of the onboard sensor. The first step of SfM algorithms is to extract features in each image that can be matched to corresponding features in other images for establishing the relative location and parameters of the sensor.
The key to SfM methods is the ability to calculate camera position, orientation, and scene geometry purely from the set of overlapping images provided, offering a simple processing workflow compared to alternative photogrammetry techniques [48,49]. The SfM workflow for generating 3D digital reconstructions of landscapes or scenes makes it applicable in a variety of research fields, including the modeling of urban and vegetation features. However, the SfM approach has proven more difficult with vegetation than with urban and other features because of the more complex and inconsistent structures resulting from leaf gaps, repeating structures of the same color, and random geometries. Nevertheless, satisfactory results of vegetation modeling that estimates canopy height with SfM have been reported by using colored field markers and increasing the number of acquired photographs.
Chinese cabbage (Brassica rapa subsp. pekinensis) and white radish (Raphanus sativus) are commonly cultivated in Korea because they are the main ingredients in Kimchi [50,51]. On-site monitoring of their growth status in the field using UAVs with SfM can allow the identification of spatial variation in various biophysical factors, such as canopy coverage, leaf area, and plant height, thereby helping to efficiently regulate the application of fertilizers and water as well as accurately estimate yields prior to harvest. Although previous studies have evaluated the effectiveness of the UAV system for agricultural purposes, yield estimation of Chinese cabbage and white radish using a UAV with only an RGB camera has not yet been studied.
The overall goal of this study was to develop UAV-RGB imagery-based crop growth estimation models that can quantify various biophysical parameters of Chinese cabbage and white radish over the entire growing season, as a means of assessing growth status and estimating potential yields before harvest. Specific objectives were (i) to develop regression models consisting of an RGB-based vegetation index and SfM-estimated plant height that can quantify four different biophysical parameters of Chinese cabbage and white radish crops, i.e., leaf length, leaf count, leaf area, and fresh weight, and (ii) to investigate the applicability of the developed regression models to a separate dataset of UAV-RGB images obtained in a different year for quantitative analysis of the growth status of Chinese cabbage and white radish during the growing season.

The study was conducted in separate fields for Chinese cabbage and white radish in 2015 and 2016 (denoted Field C15, W15, C16, and W16; Figure 1). The different areas in 2015 and 2016 were used to evaluate whether regression models developed with the 2015 data could be applied to data sets obtained in a different year and in different fields. Three different sets of 21-day-old Chinese cabbage seedlings and white radish seeds were planted at 7-10-day intervals (denoted A, B, and C) in each of two separate fields to provide a wide range of growth rates under conventional tillage practices with a sprinkler irrigation system. Chinese cabbage and white radish were first planted on 7 and 6 September 2015 and 5 and 2 September 2016, respectively. A split plot arrangement of treatments for each crop was used with three replications (denoted 1, 2, and 3) in a randomized complete block design. Individual sub-plot dimensions were 3 by 9 m, consisting of four 0.5 m rows. However, only three different sets of Chinese cabbage data without replication were obtained in Field C16 since the growth quality of the other data sets was inadequate for analysis due to damage resulting from inappropriate application of herbicide prior to planting.
Data obtained from Field C15 and W15 were used to build statistical models that could quantify various biophysical parameters of the cabbage and radish crops. Testing was performed using data obtained from Field C16 and W16 to investigate the predictive validity of the developed regression models to estimate the growth status of Chinese cabbage and white radish grown in different fields. Granular fertilizer was hand-applied in the furrow at planting. White plastic mulch films were used to suppress weed growth before planting.

Unmanned Aerial Vehicles Flight and Image Acquisition
The unmanned aerial vehicle (UAV) platform used in this study was a DJI F550 hexa-rotor airframe (DJI Innovations, Shenzhen, China) equipped with a Canon Powershot S110 RGB digital camera (Canon, Tokyo, Japan); it had a total weight of 1.8 kg including batteries and an additional payload capability of up to 0.6 kg. The platform could be connected to a PC (personal computer) ground station via a 433 MHz datalink to monitor the UAV's flight status and send flight path mission instructions. Details of the UAV platform specifications are described in Table 1. The UAV was set to automatically fly over the experimental field using a Pixhawk automatic flight controller (3D Robotics, Berkeley, USA) while tracking waypoints according to the pre-programmed flight path generated using the Mission Planner open source program (Ardupilot Development Team and Community). A sequence of overlapped images was collected on each flight mission to cover the entire experimental field. The flight path was designed to ensure overlapping images of at least 70% side overlap and 85% forward overlap. Pix4Dmapper Pro 3.0.17 (Pix4D SA, Lausanne, Switzerland), which allows image mosaicking, was used to generate a complete crop map of the total study area. The RGB camera used in this study acquired 12-megapixel images using a 1/1.7" CMOS sensor and a 24-120 mm zoom lens. The fields of view (FOV) of the camera were 72.3° and 57.5° in the horizontal and vertical directions, respectively. The images were acquired using a time-lapse function that took one image every two seconds. From a preliminary test to determine appropriate camera parameters for decreasing blurriness in images, the shutter speed and F-stop (aperture) were set at 1/2000 s and 4.0, respectively, with the focus distance set at infinity. The internal camera parameters, such as principal point and radial distortion, were auto-compensated by processing the bundle block adjustment in Pix4Dmapper Pro 3.0.17.
As shown in Table 2, remotely sensed digital imagery data were collected for the test fields on several dates at approximately one-week intervals during both growing seasons, beginning in late September and ending in early November, with the UAV flying at 2 m/s at 20 m above ground level (AGL). To obtain ground-truth data on the biophysical properties of each crop, plants located along a randomly chosen row were removed from the field within two days after every UAV flight. In the laboratory, fresh weights of plant samples were measured with an electronic balance, and leaf lengths and widths were measured using a 1 m ruler. In Fields C15 and W15, plants sampled in each field were used for building regression models that relate their biophysical properties to the corresponding UAV-RGB images. In Fields C16 and W16, a total of 62 Chinese cabbages and 42 white radishes were used to validate the regression models developed using the data from Fields C15 and W15. Plant growth stages were determined according to the 10 principal growth stages and 10 secondary growth stages of the "Biologische Bundesanstalt, Bundessortenamt und Chemische Industrie" (BBCH) scale [52].
For geo-referencing UAV images, similar to a previous study [53], a set of ground control points (GCPs) consisting of five 15 by 25 cm paper sheets was placed in the corners and center of the research plot in each of the four fields, i.e., Fields W15, W16, C15, and C16 (Figure 1). The GCP locations were measured with a Novatel OEM 615 virtual reference station (VRS)-based real-time kinematic global positioning system (RTK-GPS) providing sub-decimeter positioning accuracy within 2 and 5 cm in the horizontal and vertical directions, respectively.

Figure 2 shows the flow chart of the image processing and analysis steps, including image acquisition, image preprocessing, calculation of vegetation fraction and plant height, and data analysis. Pix4Dmapper Pro 3.0.17 performed both image alignment and 3D reconstruction for the imagery. To accurately geo-reference the UAV images, the GCPs measured with the RTK-GPS were imported to the Pix4D program, thereby producing geo-referenced images in a real-world coordinate system [14]; this step included both geometric calibration of the image sensor and lens distortion correction [54]. As mentioned by An et al. [38], since it was important to evaluate the generation of the mosaicked images reconstructed from 3D meshes, the geo-referencing accuracy was assessed by root mean square errors (RMSEs) of the horizontal (X and Y) and vertical (Z) coordinates at GCP locations using the five GCPs installed in each of the four fields (W15, W16, C15, and C16). After the geo-referencing process, the individual images were converted into an image orthomosaic, and a digital surface model (DSM) and digital terrain model (DTM) were generated through the SfM processing built into Pix4Dmapper. Before image analysis for calculating vegetation fraction, radiometric calibration was applied to the orthomosaic images.
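The geo-referencing accuracy check described above reduces to a per-axis RMSE over the GCPs in each field. A minimal sketch in Python; the coordinate values below are hypothetical illustrations, not the study's measurements:

```python
import numpy as np

def georef_rmse(measured, reference):
    """Per-axis (X, Y, Z) RMSE between geo-referenced image coordinates of
    GCPs and their RTK-GPS reference coordinates; both inputs are (n, 3)."""
    measured = np.asarray(measured, dtype=float)
    reference = np.asarray(reference, dtype=float)
    return np.sqrt(np.mean((measured - reference) ** 2, axis=0))

# Hypothetical coordinates (metres) for five GCPs in one field:
# corners plus center, with a constant offset standing in for geo-referencing error
ref = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0], [0.0, 10.0, 0.0],
                [10.0, 10.0, 0.0], [5.0, 5.0, 0.0]])
est = ref + np.array([0.01, -0.02, 0.03])
rmse_x, rmse_y, rmse_z = georef_rmse(est, ref)
```

With a constant offset, the per-axis RMSE simply equals the offset magnitude; real GCP residuals would of course vary from point to point.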
ExG index was then selected as a vegetation index, and the Otsu method was used to extract crop images [12,55]. Once the ExG-based crop images were segmented, vegetation fractions (VFs) were calculated to represent the area of vegetation [15]. The DSM and DTM were generated by performing a bundle adjustment based on matching features between the images, thereby calculating plant heights (PHs) [24]. Finally, crop growth estimation models were built using the two predictor variables, VF and PH.

Radiometric Calibration and Region of Interest
To minimize the effects of ever-changing light and atmospheric conditions on UAV images taken at different times, imagery radiometric calibration was conducted on every flight by placing 1.2 by 1.2 m Group 8 Technology Type 822 ground calibration panels for airborne sensors with seven gray scales (3%, 5%, 11%, 22%, 33%, 44%, and 55%) in a location within the flight path of the UAV platform ( Figure 3a). The mean reflectance values of the calibration targets for each of the R, G, and B bands in the RGB camera were determined using Equation (1). For this, as shown in Figure 3b, the standard reference reflectance spectrum of the calibration targets in the 400-800 nm range was measured with an ASD Fieldspec4 (Analytical Spectral Devices, Inc., Longmont, USA). The spectral response of the RGB camera was obtained from the sensor specification provided by the manufacturer (Figure 3c).
r_{x,k} = \int_{400}^{800} R_x(\lambda) C_k(\lambda) d\lambda / \int_{400}^{800} C_k(\lambda) d\lambda  (1)

where r_{x,k} represents the calculated mean reflectance value of a calibration target, R_x(\lambda) represents the standard reflectance spectrum of the target measured with the field spectrometer, C_k(\lambda) represents the spectral response of the image sensor, x is the calibration target, and k is one of the R, G, and B bands.
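Equation (1) is a response-weighted average of the target spectrum over each band. A minimal numerical sketch, assuming a uniform 400-800 nm wavelength grid; the flat target spectrum and the Gaussian stand-in for the camera's green-band response are illustrative assumptions, not the actual panel or sensor data:

```python
import numpy as np

def _trapezoid(y, x):
    """Trapezoidal-rule integral of y over x (avoids version-specific numpy names)."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def band_reflectance(wavelengths, target_spectrum, band_response):
    """Response-weighted mean reflectance of a calibration target for one band:
    integral of R_x(l) * C_k(l) divided by integral of C_k(l), as in Equation (1)."""
    num = _trapezoid(target_spectrum * band_response, wavelengths)
    den = _trapezoid(band_response, wavelengths)
    return num / den

wl = np.linspace(400.0, 800.0, 401)                # 1 nm grid
flat_target = np.full_like(wl, 0.22)               # e.g. the 22% grey panel, idealized flat
green_resp = np.exp(-((wl - 550.0) / 40.0) ** 2)   # hypothetical Gaussian band response
r_green = band_reflectance(wl, flat_target, green_resp)
```

For a spectrally flat target the weighted mean collapses to the panel's nominal reflectance, which is a handy sanity check; real panels have mild spectral structure that the weighting then captures.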
Assuming that the reflectance values of calibration targets were exponentially proportional to RGB band digital numbers (DNs) based on the empirical line method [23,56], the coefficients of Equation (2) were derived by fitting the DNs of the images to the reflectance spectra of the calibration targets for each of the R, G, and B bands. As a result, to combine all the UAV images obtained at different dates, the DNs of each of the RGB bands measured with the RGB camera on every UAV flight were converted into normalized reflectance values.
r_k = A_k \exp(B_k \cdot DN)  (2)

where r_k represents the reflectance values of the acquired images, DN represents the digital numbers of the images, and A_k and B_k are the coefficients of the exponential relationship.

To effectively perform a bivariate analysis between the aerial images and ground-truth data for Chinese cabbage and white radish crops having different biophysical characteristics, different regions of interest (ROIs) representing the area of each grid were used, as shown in Figure 4, i.e., 60 × 60 cm and 80 × 150 cm for Chinese cabbage and white radish, respectively. The dimensions of the ROIs were determined based on geometric characteristics of the two crops, such as maximum size and inter-row spacing. As shown in Figure 4, it was possible to extract images of individual Chinese cabbage plants because their growth pattern of standing independently at a constant spacing of 50 cm provided an individual crop grid along the transplanting rows. The growth pattern of white radish plants, which intermingled at a planting spacing of 30 cm, did not allow extraction of individual crop grids; instead, a bulk extraction was used, averaging the image values of 10 plants in two crop rows to represent the ROI.
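The empirical-line conversion of Equation (2) can be fitted by linear least squares on log-reflectance, since log r is linear in DN. A sketch under the assumption that all panel reflectances are positive; the DN values paired with the seven grey levels below are hypothetical:

```python
import numpy as np

def fit_exponential(dn, reflectance):
    """Fit r = A * exp(B * DN), as in Equation (2), by linear least squares
    on log(reflectance); returns (A, B)."""
    dn = np.asarray(dn, dtype=float)
    log_r = np.log(np.asarray(reflectance, dtype=float))
    B, ln_A = np.polyfit(dn, log_r, 1)   # slope, intercept of log-linear fit
    return np.exp(ln_A), B

def dn_to_reflectance(dn, A, B):
    """Convert raw digital numbers to normalized reflectance."""
    return A * np.exp(B * np.asarray(dn, dtype=float))

# The study's seven panel grey levels; the DNs are made-up example readings
panel_refl = np.array([0.03, 0.05, 0.11, 0.22, 0.33, 0.44, 0.55])
panel_dn = np.array([40.0, 60.0, 95.0, 130.0, 160.0, 185.0, 205.0])
A_g, B_g = fit_exponential(panel_dn, panel_refl)
```

Fitting in log space weights relative rather than absolute errors, which suits reflectances spanning an order of magnitude; a nonlinear least-squares fit directly on Equation (2) is an alternative when the darkest panels are noisy.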

Quantification of Vegetation Fraction and Plant Height
A vegetation index of ExG was used for quantifying the vegetation fractions of Chinese cabbage and white radish in each ROI because it was reported that the ExG values could effectively assess canopy variation in green crop biomass based on RGB ortho-images [15]. The ExG (Equation (3)) was calculated using the radiometrically calibrated RGB reflectance values, instead of the RGB digital numbers [12].
ExG = 2g - r - b  (3)

where r = R/(R + G + B), g = G/(R + G + B), b = B/(R + G + B), and R, G, and B represent the reflectance values of the R, G, and B bands in the original images, respectively.
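The ExG computation of Equation (3) on calibrated reflectance arrays can be sketched as follows; the guard against a zero R+G+B sum is an added implementation detail, not part of the paper's formulation:

```python
import numpy as np

def excess_green(R, G, B):
    """ExG = 2g - r - b on chromatic coordinates (Equation (3)).
    R, G, B are calibrated reflectance arrays of equal shape."""
    total = R + G + B
    total = np.where(total == 0, 1e-12, total)  # avoid division by zero on dark pixels
    r, g, b = R / total, G / total, B / total
    return 2.0 * g - r - b

# A green canopy pixel scores high; a grey soil pixel sits near zero
veg = excess_green(np.array([0.10]), np.array([0.40]), np.array([0.08]))
soil = excess_green(np.array([0.30]), np.array([0.31]), np.array([0.29]))
```

Because ExG is computed on chromatic coordinates, it responds to greenness rather than overall brightness, which is what makes the subsequent vegetation/soil thresholding workable under varying illumination.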
Crop segmentation, a process for extracting only crops from a background that includes a mixture of soil and other interfering objects in an image, is an important step performed prior to the calculation of vegetation fractions in each ROI. In this study, since plastic mulch was used to suppress weed growth along with herbicide application to the soil prior to crop planting, the main interfering objects were soil and plastic mulch. As shown in Figure 5, in a histogram analysis of UAV-RGB images in terms of ExG, it was possible to effectively separate the images of the cabbage and radish plants from the background using the Otsu method, which automatically calculates optimal threshold values by maximizing inter-class variance and minimizing intra-class variance [57]. The ExG-based RGB images segmented using the Otsu method were converted to binary images, i.e., 1 or 0, classified into two different groups. That is, if the ExG value was equal to or higher than the threshold, the pixel was recognized as a vegetation pixel of 1; otherwise, a pixel with a value of 0 was considered non-vegetation. The basic principle used in the study was the assumption that dense green vegetation produces a high value while soil has a low value, thus producing a contrast between vegetation and soil. Finally, the vegetation fraction (VF) in each ROI was calculated as the ratio of the number of pixels segmented as crop to the total number of pixels, following Equation (4) [15]. The accuracy of the Otsu method-based crop segmentation applied in the study was assessed by comparison with manually identified actual crop images.

Vegetation Fraction = (Number of pixels determined to be 1 in an ROI) / (Total number of pixels in the ROI)  (4)

where ROI represents a region of interest.

As described in previous studies [58,59], plant height in this study was defined as the shortest distance between the upper boundary of the main photosynthetic tissues of a plant and ground level. The 3D points of the DTM and the DSM were created for calculating plant height using Pix4Dmapper Pro 3.0.17 (Figure 6). That is, the DTM was defined as a model of the underlying field topography without crop features, corresponding to the state before any crop had grown on the ground, and the DSM was defined as a combined model of the underlying topography and field features such as crops, corresponding to the state with crops grown [60]. The DTM was acquired on the first UAV flight, before the crops had germinated, within 7 days after sowing, and the DSMs were acquired on each of the subsequent UAV flight dates shown in Table 2. Finally, as shown in Figure 6, plant height, representing the field features only, was calculated by subtracting the DTM from the DSM (Equation (5)):

PH = DSM - DTM  (5)

where DSM represents the model of the underlying topography with crops, and DTM represents the underlying field topography without crops.

Statistical Analysis
Multiple linear regression models were developed to quantify the growth status of Chinese cabbage and white radish from UAV-based imagery using VF and PH as predictor variables and biophysical data as response variables. Since highly significant interaction effects between VF and PH were found in a preliminary correlation analysis, an interaction term of VF × PH was added to the predictor variables, as shown in the following equation (Equation (6)):

Y = A + B \cdot X_VF + C \cdot X_PH + D \cdot (X_VF × X_PH)  (6)

where Y represents the biophysical parameters, i.e., leaf length, leaf width, leaf count, and fresh weight; X_VF represents the variable of VF; X_PH represents the variable of PH; and A, B, C, and D represent the estimates of the intercept and each of the predictor variable terms. The SAS 9.4 software (SAS, Cary, NC, USA) was used to determine the four estimates for Equation (6) by fitting the image data acquired from the UAV in terms of VF and PH to the equation. Validation of the developed regression models was conducted by comparing UAV-estimated biophysical values with actual 2016 data measured using standard methods. Finally, to investigate the ability of the UAV-RGB system to estimate spatial variations in the biophysical parameters of vegetables in a field, fresh weight maps of Chinese cabbage and white radish were generated using the ArcGIS 10.1 (Esri, Redlands, CA, USA) software.

Table 3 shows the RMSEs of the GCP coordinates in all four fields. Horizontal (X and Y coordinate) RMSEs ranged from 0.10 to 0.20 m, whereas vertical (Z coordinate) RMSEs ranged from 0.025 to 0.034 m, indicating that the geo-referencing obtained in this study provided sub-decimeter positioning accuracy. Table 3. Resulting root mean square error (RMSE) at ground control point (GCP) locations for GCP-based geo-referenced imagery for all fields (W15, W16, C15, and C16 in Figure 1).
Figure 7 shows an example of calibration curves relating the digital numbers (DNs) obtained with each of the three RGB bands to the corresponding reflectance values calculated using Equation (2), from a flight on 7 October 2016. Table 4 shows the coefficients derived by fitting the DNs of the images to the reflectance spectra of the calibration targets for each of the R, G, and B bands at all flight dates. The results indicate that the DNs could be successfully converted into reflectance spectra, showing strong exponential relationships with coefficients of determination (R²) ranging from 0.93 to 0.99, and that the reflectance data could be normalized to minimize the effect of varying sunlight conditions on UAV-RGB images.

Figure 8d,h shows crop images in red, manually segmented as Chinese cabbage and white radish, respectively, using the ENVI 5.4 software, which could be compared with the automatically segmented images (Figure 8c,g). The Otsu method-based segmented images were compared with the manually cropped images based on the crop area calculated from the number of pixels. As shown in Table 5, which reports the results for 10 samples of each of the two crops, randomly selected from the early to the late stage of growth, the segmentation errors for Chinese cabbage ranged from -8.72% to 6.01%, whereas those for white radish ranged from -14.9% to 17.1%. In the early growth stage of white radish, relatively high errors occurred because the boundary between white radish and background was blurred by the overlapping leaves characteristic of its growth (Figure 8g).

Validation of Plant Height Estimation Based on the SfM Algorithm
In this study, accuracy in 3D measurement was evaluated to validate plant height estimation based on our 3D extraction method. Figure 9 shows examples of the DSMs of Chinese cabbage (Figure 9a) and white radish (Figure 9b) and the DTM (Figure 9c) created using the SfM algorithm, indicating that it was possible to obtain 3D images similar to the actual shapes. Similar to a previous study validating the SfM method based on subtraction of the DTM from the DSM [61], as ground-truth data, the maximum standing heights of five plants in each ROI were manually measured using a 1 m ruler and then averaged for each ROI. As shown in Figure 10, which compares the plant heights of Chinese cabbage and white radish estimated using the SfM method with the ground-truth data, there were strong linear relationships between the two methods, with R² > 0.9 and regression slopes near unity, implying that the SfM method would be effective in estimating the heights of Chinese cabbage and white radish in the ranges of 10 to 48 cm and 10 to 60 cm, respectively. However, offsets of -6.59 and -1.86 cm between estimated and actual heights remained for Chinese cabbage and white radish, respectively.

Figure 11 shows changes in ExG-based VF and SfM-estimated plant heights (PHs) of Chinese cabbage (Figure 11a,b) and white radish (Figure 11c,d) obtained during the growing period, ranging from 18 to 58 days after transplanting (DAT) and from 19 to 59 days after sowing (DAS), respectively. As expected, both VFs and PHs were linearly proportional to DAT and DAS, due to increases in canopy greenness over time. In particular, until 46 DAT and 47 DAS for Chinese cabbage and white radish, respectively, the rates of change of VF and PH with respect to time were almost constant, while there were significant differences in VF and PH between the UAV images obtained at different dates.
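The validation statistics reported above (slope, offset, and R² of estimated versus measured heights) come from a simple linear regression. A sketch of that computation; the data passed in the usage example would be the paired SfM estimates and ruler measurements, which are not reproduced here:

```python
import numpy as np

def linear_validation(estimated, measured):
    """Slope, intercept, and R^2 of regressing ground-truth measurements
    on image-based estimates (e.g. SfM plant heights vs. ruler heights)."""
    x = np.asarray(estimated, dtype=float)
    y = np.asarray(measured, dtype=float)
    slope, intercept = np.polyfit(x, y, 1)
    pred = slope * x + intercept
    ss_res = np.sum((y - pred) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return slope, intercept, 1.0 - ss_res / ss_tot

# slope, intercept, r2 = linear_validation(sfm_heights, ruler_heights)
```

A slope near unity with a non-zero intercept, as reported for both crops, indicates a roughly constant bias rather than a scale error, which could in principle be corrected by subtracting the offset.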
In addition, since the biophysical parameters of Chinese cabbage and white radish, i.e., fresh weight, leaf length, leaf width, and leaf count, were highly correlated with VF and PH (Table 6), it seemed plausible that VF and PH could be used as predictor variables in linear models quantifying the growth status of the two crops. However, growth was observed to stop after approximately 46 DAT for Chinese cabbage and 47 DAS for white radish, with no significant differences in VF and PH between UAV images. Because this reflected saturation of the VF and PH, the UAV data measured at 58 DAT for Chinese cabbage and 59 DAS for white radish were not included in building the multiple linear regression models in this study.

Biophysical Parameter Modeling
Results of the SAS regression (REG) analysis used to model the growth status of the two crops based on UAV-RGB images are shown in Table 7. All of the multiple regression equations for Chinese cabbage and white radish, using three predictor variables (VF, PH, and VF × PH) and four different response variables (fresh weight, leaf length, leaf width, and leaf count), provided good fits (R² > 0.8), except for leaf width and leaf count of white radish, which were estimated with lower accuracy (R² = 0.68 and 0.76, respectively). In particular, the developed models were expected to measure the fresh weights of Chinese cabbage and white radish with an acceptable level of performance and could serve as a method to predict the potential yields of the two vegetables prior to harvesting. In addition, since the root weight of white radish is correlated with its above-ground weight [24], estimating above-ground fresh weight would also be feasible for predicting the potential yield of white radish during the growing season.

Table 7. Results of multiple linear regression equations to estimate the biophysical parameters of Chinese cabbage and white radish using their vegetation fractions (VFs) and plant heights (PHs) obtained from unmanned aerial vehicle-red, green and blue (UAV-RGB) images: Y = biophysical parameter; X_VF = vegetation fraction; X_PH = plant height; n = number of samples; R² = coefficient of determination; SE = standard error.
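A model of the form reported in Table 7, Y = β₀ + β₁·VF + β₂·PH + β₃·(VF × PH), can be fitted by ordinary least squares. The following NumPy sketch is our own illustration (not the SAS REG procedure) and returns the coefficient vector and R²:

```python
import numpy as np

def fit_growth_model(vf, ph, y):
    """Fit Y = b0 + b1*VF + b2*PH + b3*(VF*PH) by ordinary least squares
    and report the coefficient of determination R^2."""
    X = np.column_stack([np.ones_like(vf), vf, ph, vf * ph])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    yhat = X @ beta
    ss_res = np.sum((y - yhat) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return beta, 1.0 - ss_res / ss_tot

def predict_growth(beta, vf, ph):
    """Apply a fitted model to new VF and PH values."""
    return beta[0] + beta[1] * vf + beta[2] * ph + beta[3] * vf * ph
```

Fitting one such model per response variable (fresh weight, leaf length, leaf width, leaf count) reproduces the structure of the equations listed in Table 7.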


Validation of Biophysical Parameter Estimation Models
Validation of the developed growth estimation models for Chinese cabbage and white radish was conducted using a dataset of UAV images with known biophysical data obtained from the second-year experiment, conducted from September to November 2016. A total of 62 and 42 ROIs of Chinese cabbage and white radish, respectively, were used to quantify their biophysical properties during the growing season by converting the RGB images into the two indices, VF and PH, used as predictor variables of the developed regression models. Figures 12 and 13 compare the biophysical values of Chinese cabbage and white radish, respectively, determined by the developed UAV image-based prediction models with those obtained by standard methods, using simple linear regression analysis.
As shown in Figure 12a–c, the developed models performed well in measuring the leaf length, leaf width, and fresh weight of Chinese cabbage, showing strong linear relationships with slopes of 0.89 to 1.12 and coefficients of determination >0.76, even though the estimates retained offsets of 3.55 to 4.54 cm in the leaf dimensions and 178.88 g in fresh weight. However, leaf counts of Chinese cabbage were highly underestimated (by 57%) when using the UAV image-based estimation model. As shown in Figure 13, which quantifies the growth status of white radish, all estimates obtained with the developed regression models were lower than those measured with the standard methods, showing slopes of 0.44 to 0.85. In particular, the UAV method measured 29% less fresh weight than did the standard method, which uses an electronic balance.
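The validation statistics reported above (slope, offset, and R² of model predictions regressed against ground-truth measurements) can be computed with a brief sketch; the function name is illustrative:

```python
import numpy as np

def validation_stats(measured, predicted):
    """Slope, offset, and R^2 of the simple linear regression of
    model predictions on ground-truth measurements."""
    slope, offset = np.polyfit(measured, predicted, 1)
    r = np.corrcoef(measured, predicted)[0, 1]
    return slope, offset, r * r
```

A slope near unity with a small offset and high R² indicates that the model tracks the standard method over the full measurement range.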

Application to Spatial Mapping of Potential Yield
To investigate the feasibility of using the UAV method for potential yield mapping of Chinese cabbage and white radish during the growing season, the UAV-RGB images (Figures 14a and 15a) collected on 27 October 2016 were converted into fresh weight maps (Figures 14b and 15b) estimated with the developed regression models. As shown in Figures 14a and 15a, the UAV-RGB images revealed high variation in the vegetation fraction of both crops across the two test fields, implying that different fresh weights would be predicted depending on location.
Georeferenced data on individual Chinese cabbages and on portions of white radish rows extracted from each ROI were collected by sequentially locating the center points of each 60 cm × 60 cm and 80 cm × 150 cm plot, respectively, along the transplanting and planting rows on the two orthomosaic UAV-RGB images (Figures 14a and 15a) using ArcGIS 10.1. As a result, approximately 2700 and 160 center points with coordinate information, corresponding to the ROIs, were determined in order to calculate the VFs and PHs used as predictor variables for determining the fresh weights of Chinese cabbage and white radish in each ROI. Finally, maps of each crop were generated in ArcGIS 10.1 to visualize the spatial variability in fresh weight representative of each ROI, ranging from 0 to 12,000 g/m² and 0 to 4500 g/m² for Chinese cabbage and white radish, respectively. These results indicate that fresh weight maps generated from UAV-RGB images in conjunction with the developed prediction models could be used for evaluating the potential yields of the two crops prior to harvesting.
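The placement of ROI center points on a regular grid of plots, performed here in ArcGIS, can be approximated with a simple sketch; the function below is a hypothetical illustration (names and signature are ours) taking plot dimensions in meters and returning center coordinates row by row:

```python
import numpy as np

def plot_centers(x0, y0, plot_w, plot_h, n_cols, n_rows):
    """Center coordinates of an n_rows x n_cols grid of plots whose
    lower-left corner of the first plot is at (x0, y0)."""
    xs = x0 + plot_w * (np.arange(n_cols) + 0.5)
    ys = y0 + plot_h * (np.arange(n_rows) + 0.5)
    gx, gy = np.meshgrid(xs, ys)
    return np.column_stack([gx.ravel(), gy.ravel()])
```

For instance, a 2 × 2 grid of 60 cm × 60 cm plots yields centers at 0.3 m and 0.9 m along each axis; each center would then anchor one ROI for VF and PH extraction.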

Discussion
In analyzing multi-temporal UAV images, radiometric calibration is required to minimize the effects of ever-changing light and atmospheric conditions on UAV images taken at different times. Yun et al. conducted radiometric calibration based on the empirical line method [56] using color-scale calibration targets [23]; their results showed linear relationships between DNs and reflectance spectra with R² ranging from 0.85 to 0.99. In general, however, a linear relationship may not be suitable when saturation effects are observed at high DN values. In this study, the RGB band DNs were found to be exponentially proportional to the reflectance values obtained with gray-scale calibration targets, showing significant relationships with R² ranging from 0.93 to 0.99 (Table 4). Therefore, the reflectance data could be normalized to minimize the effect of varying sunlight conditions on UAV-RGB images.
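Assuming the exponential form reflectance = a·exp(b·DN), the per-band calibration curve can be fitted by linear regression on log-reflectance. This NumPy sketch is our illustration of that idea, not the study's calibration code:

```python
import numpy as np

def fit_exponential_calibration(dn, reflectance):
    """Fit reflectance = a * exp(b * DN) by least squares on
    log-reflectance; returns (a, b) for one band."""
    b, ln_a = np.polyfit(dn, np.log(reflectance), 1)
    return np.exp(ln_a), b

def dn_to_reflectance(dn, a, b):
    """Convert digital numbers to reflectance with fitted coefficients."""
    return a * np.exp(b * dn)
```

One (a, b) pair would be fitted per band and per flight date from the calibration-panel DNs, then applied pixel-wise to normalize each orthomosaic.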
Excess green (ExG) is an efficient vegetation index that can separate crops from a background containing a mixture of soil and other interfering objects in images with only RGB bands. In a study by Torres-Sánchez et al., ExG was used for multi-temporal mapping of the vegetation fraction from UAV images [15]. In our study, ExG was applied to automatic crop segmentation with the Otsu threshold (Figure 8). The segmentation results obtained using 10 randomly selected samples of each crop showed errors ranging from −8.72% to 6.01% for Chinese cabbage and from −14.9% to 17.1% for white radish (Table 5). This indicates that the Otsu threshold method based on ExG is satisfactory for segmenting crop images from a background of soil and plastic mulch with accuracies >80%. As mentioned earlier, relatively higher errors were found in the early growth stage of white radish because the boundary between white radish and background was blurred by its overlapping leaves (Figure 8g). A more robust image processing method is therefore needed to improve segmentation performance for white radishes up to 30 days old with overlapping leaves.
Plant height can be obtained using CSMs [39,40]. In previous studies, CSMs have been applied to various crops such as sugar beet, rice, and summer barley [24,39-41]. At the same time, the emergence of SfM-based software has enabled efficient creation of 3D point clouds and highly detailed orthophotos [45,46]. In this study, the SfM algorithm estimated the plant heights of Chinese cabbage and white radish with approximately 1:1 relationships and coefficients of determination (R²) >0.9 between the heights determined by the SfM method and those measured manually with a 1 m ruler (Figure 10). However, the height estimates retained offsets of −6.59 and −1.86 cm between estimated and actual heights for Chinese cabbage and white radish, respectively. As mentioned in previous studies by Bendig et al. [24,39] and Ruiz et al. [62], this might be related to an inherent error in the vertical locations measured with the RTK-GPS, which affects the accuracy of the GCPs used to correct the georeferenced data of UAV images. Another possible source of height measurement error is the use of a ruler to measure the maximum heights of the crops. Nevertheless, the overall results showed an improvement in accuracy compared to similar studies [24,39,61,63].
Several studies have applied UAV images to modeling crop growth over the growing season. Bendig et al. applied crop surface models and various vegetation indices to estimate the biomass of barley [64]. Brocks and Bareth also estimated barley biomass using plant heights from crop surface models, with R² between 0.55 and 0.79 for dry biomass [65]. Despite these efforts, yield estimation of Chinese cabbage and white radish using a UAV with an RGB camera had not yet been studied. In this study, prior to building growth estimation models for Chinese cabbage and white radish based on UAV images, time-series analysis of the VF and PH was conducted to characterize their changes over time (Figure 11). As expected from the results of previous studies [15,23,61], both VF and PH were linearly proportional to DAT and DAS due to an increase in canopy greenness over time. However, saturation phenomena for both VF and PH were observed beginning at 46 DAT for Chinese cabbage and 47 DAS for white radish, which is similar to the general growth patterns of crops [66]. Therefore, the saturated VF and PH data were not used to build the multiple linear regression models in this study. As shown in Table 6, on average, the correlation coefficients between the biophysical parameters and VF were higher than those with PH. A possible cause lies in the growth characteristics of Chinese cabbage and white radish: their PH range is narrower (below 60 cm) than those of taller crops such as maize and sorghum, which grow higher than 1 m, whereas Chinese cabbage and white radish grow relatively fast, producing a larger change in VF within about two months.
The multiple regression equations for Chinese cabbage and white radish developed in this study, using three predictor variables (VF, PH, and VF × PH) and four different response variables (fresh weight, leaf length, leaf width, and leaf count), provided good fits (R² > 0.8), except for leaf width and leaf count of white radish, which were estimated with lower accuracy (R² = 0.68 and 0.76, respectively). The validation testing showed strong linear relationships (R² > 0.76) between the developed models and standard methods, indicating that UAV-RGB images can be used to predict the biophysical properties of the two crops quantitatively. However, on average, the prediction accuracies for white radish were worse than those for Chinese cabbage. Likely causes for the lower estimates of white radish growth status are its higher irregularity and narrower leaves, which reduce the performance of the image segmentation by making it more difficult to extract white radish from the background (Table 5). An improvement in image segmentation would therefore enhance the ability of the UAV system to estimate the biophysical parameters of white radish.
In addition, an issue remained regarding the non-unity slopes and non-zero offsets found in the validation testing. For example, the leaf count of Chinese cabbage and the other biophysical parameters of white radish were highly underestimated when using the UAV image-based estimation models (Figures 12 and 13). The causes of the underestimated leaf counts are difficult to pinpoint, but they might be related to the limited spatial resolution of the RGB images obtained with the UAV flying at 20 m, which could not separate individual leaves with an acceptable level of accuracy. The slopes and offsets can be adjusted using a two-point normalization method, an algorithm that compensates for differences in slope and offset between model estimates and actual values using two known samples of different values [67]. When applying two-point normalization, it is necessary to select two known samples with the largest possible difference in growth status to maximize its effect over a wide range of data. The slope is compensated directly by comparing the actual values obtained from destructive sampling with the values predicted by the UAV-RGB system. As shown in Figure 16, the estimated leaf count can be adjusted in this way, improving the slope from 0.43 to 0.93 and the offset from 17.56 to 5.05 after two-point normalization. In addition, when accuracy was assessed using the RMSE [68], the RMSE of the leaf count decreased from 13.31 to 7.23 with the two-point normalization. Future studies will include applying the developed UAV-based estimation models, in conjunction with the two-point normalization method, to commercial fields growing Chinese cabbage and white radish.
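The two-point normalization described above can be sketched as follows: a gain and offset mapping model predictions onto ground truth are derived from two reference samples with known actual values and then applied to the remaining predictions. This is a minimal illustration under our own naming, consistent with the compensation idea in [67]:

```python
def two_point_normalize(pred, known_pred, known_actual):
    """Correct model predictions using two reference samples whose
    ground-truth values are known, removing shared slope/offset error."""
    (p1, p2) = known_pred
    (a1, a2) = known_actual
    gain = (a2 - a1) / (p2 - p1)          # slope (sensitivity) correction
    return [a1 + gain * (p - p1) for p in pred]
```

If the model systematically reports pred = 0.43 × actual + 17.56 (the slope and offset observed for leaf count before correction), normalizing with two well-separated reference samples recovers the actual values exactly for any error of this linear form.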

Conclusions
In this study, crop growth estimation models based on UAV-RGB imagery were developed and validated for quantifying various biophysical parameters of field-grown Chinese cabbage and white radish. This study differs from previous studies in (i) using the combination of an RGB-based vegetation index (VI) and SfM-estimated PH to build the growth estimation models and (ii) conducting a field experiment in a different year to investigate the applicability of the developed regression models to a separate dataset of UAV-RGB images. Analysis of the modeling and validation results indicates that the two physical parameters (VI and PH) obtained using the UAV-RGB camera can serve as viable predictor variables for quantifying the growth status of Korean Chinese cabbage and white radish, owing to the strong linear relationships between the UAV-RGB and standard methods for fresh weight, leaf length, leaf width, and leaf count. Additionally, since a UAV-RGB system makes it possible to obtain measurements at a finer spatial resolution than is feasible with sample collection and laboratory analysis, we believe this approach will be able to map crop growth status with greater accuracy than current methods. However, one drawback of this UAV-RGB system is the non-unity slopes and non-zero offsets found in the validation testing. To address this issue, a two-point normalization method, consisting of a sensitivity compensation followed by an offset adjustment using two known samples of different values, will need to be implemented in the UAV-RGB system. Future studies will include applying the developed UAV-based estimation models, in conjunction with the two-point normalization method, to commercial fields growing Chinese cabbage and white radish to confirm their suitability for estimating in-season biophysical properties.