Article

Estimating Rice SPAD Values via Multi-Sensor Data Fusion of Multispectral and RGB Cameras Using Machine Learning with a Phenotyping Robot

National Engineering and Technology Center for Information Agriculture, Key Laboratory for Crop System Analysis and Decision Making (Ministry of Agriculture and Rural Affairs), Engineering Research Center of Smart Agriculture (Ministry of Education), Jiangsu Key Laboratory for Information Agriculture, Jiangsu Collaborative Innovation Center for Modern Crop Production, Nanjing Agricultural University, Nanjing 210095, China
*
Author to whom correspondence should be addressed.
Remote Sens. 2025, 17(17), 3069; https://doi.org/10.3390/rs17173069
Submission received: 15 August 2025 / Revised: 1 September 2025 / Accepted: 1 September 2025 / Published: 3 September 2025

Abstract

Chlorophyll is crucial for crop photosynthesis and useful for monitoring crop growth and predicting yield. Its content can be indicated by SPAD meter readings. However, SPAD-based monitoring of rice is time- and labor-intensive, whereas remote sensing offers non-destructive, rapid, real-time solutions. Compared with mainstream unmanned aerial vehicles (UAVs), emerging phenotyping robots can carry multiple sensors and acquire higher-resolution data. Nevertheless, the feasibility of estimating rice SPAD using multi-sensor data obtained by phenotyping robots remains unknown, and whether the integration of machine learning algorithms can improve the accuracy of rice SPAD monitoring also requires investigation. This study used a phenotyping robot to acquire multispectral and RGB images of rice across multiple growth stages, while simultaneously collecting SPAD values. Subsequently, four machine learning algorithms—random forest, partial least squares regression, extreme gradient boosting, and boosted regression trees—were employed to construct SPAD monitoring models with different features. The random forest model combining vegetation indices, color indices, and texture features achieved the highest accuracy (R2 = 0.83, RMSE = 1.593). In summary, integrating phenotyping robot-derived multi-sensor data with machine learning enables high-precision, efficient, and non-destructive rice SPAD estimation, providing technical and theoretical support for rice phenotyping and precision cultivation.

1. Introduction

As one of the most important food crops, rice plays a crucial role in addressing food security challenges [1]. Chlorophyll content is an important indicator of photosynthesis and exerts a substantial influence on rice growth status and yield [2]. Traditional measurement methods can determine chlorophyll content accurately but are destructive, cumbersome, and incapable of real-time monitoring. In practical production, leaf SPAD values are therefore commonly used to characterize plant chlorophyll [3,4]. Consequently, the non-destructive, real-time, and high-throughput monitoring of rice SPAD values holds important implications for rice breeding phenotyping and precision cultivation management.
The Internet of Robotic Things (IoRT) integrates robotics with Internet of Things (IoT) technologies to enable seamless data collection, processing, and exchange. Characterized by enhanced intelligence, mobility, and autonomy, IoRT holds great potential in precision agriculture [5]. Phenotyping monitoring robot systems are also recognized as crucial platforms for high-throughput plant phenotyping [6]. Commonly used unmanned aerial vehicle (UAV) platforms are typically equipped with a single type of sensor, such as a spectral camera, an RGB camera, a thermal imager, or LiDAR. The information obtained by a single sensor is limited, which, to some extent, reduces the accuracy of remote sensing monitoring. Some spectral cameras are now integrated with RGB modules, but the imaging quality of these miniature industrial camera modules is not comparable to that of professional RGB cameras, which compromises the information contained in the captured color images. In contrast, phenotyping robots can be equipped with multiple sensors and are capable of acquiring phenotypic parameters at the population, individual plant, and organ levels with high precision and high throughput in large-scale fields. Compared with other phenotyping platforms, such as satellites and UAVs, they exhibit the most balanced overall performance [7].
Unmanned ground vehicle (UGV)-based phenotyping systems, which can navigate complex field environments to capture high-resolution imagery of crop canopies and monitor dynamic changes in plant growth over time, have been increasingly applied in agricultural scenarios. Fan et al. utilized an ultra-narrow wheeled robot equipped with an RGB-D camera, which identified stalks using a convolutional neural network and effectively measured stalk diameter from point clouds in maize rows [8]. The self-propelled phenotyping platform in reference [9], carrying a 2D LiDAR and a low-cost spectral reflectance sensor, produced canopy height measurements highly correlated with manual measurements, and its spectral indices (NDVI and PRI) showed good consistency with results from expensive spectrometers. Thus, phenotyping robots equipped with multiple sensors demonstrate substantial application potential for the high-throughput and high-precision monitoring of rice SPAD.
Crop canopy chlorophyll strongly affects canopy spectral reflectance, which enables the quantitative analysis of chlorophyll content from spectral information. SPAD values, measured by clamping leaves with a handheld SPAD meter, are used to characterize leaf chlorophyll content [10,11]. Ma et al. developed SPAD prediction models for maize under different irrigation levels in Inner Mongolia using plant height, leaf area index, and vegetation indices derived from UAV multispectral images [12]. Xie et al. employed UAV multispectral remote sensing combined with machine learning methods to predict the SPAD values of litchi fruits [13]. Color indices and texture indices extracted from RGB images have also been used for monitoring SPAD [14]. Liu et al. linked 21 color indices calculated from UAV-acquired RGB images to barley SPAD values, thereby constructing a barley SPAD monitoring model [15]. Moreover, with the rise of multi-source information fusion methods in recent years, integrating data from different platforms and sensors can significantly improve the accuracy of phenotypic monitoring. For example, the fusion of image texture features and spectral reflectance data has notably enhanced the estimation of crop leaf nitrogen content [16], and the fusion of spectral and texture indices derived from UAV multispectral images enables the more effective inversion of SPAD values in wheat canopies during the heading stage [17]. Mechanistic methods based on radiative transfer models have also been widely applied in crop phenotyping. Some scholars used UAV multispectral images and the PROSAIL model to remove canopy shadows, thereby improving the estimation accuracy of chlorophyll content in individual apple leaves [18]; others estimated chlorophyll content at the apple tree canopy scale using 3D radiative transfer models and UAV multispectral images [19]. In recent years, machine learning algorithms such as random forest, partial least squares regression, and extreme gradient boosting have played an important role in data modeling for crop phenotypic monitoring. Chlorophyll content estimation models have been established for wheat [20], maize [21], and cotton [22] based on support vector regression (SVR) and random forest (RF) algorithms, exhibiting better accuracy and robustness than traditional modeling methods. Therefore, it is worth investigating the combination of machine learning algorithms and multi-source data fusion to develop more accurate and efficient rice SPAD monitoring models.
To explore and evaluate the potential of multi-sensor data from phenotyping robot platforms combined with machine learning algorithms in phenotypic monitoring, this study takes multispectral and RGB cameras as examples to investigate the application potential of multi-sensor data fusion in rice SPAD monitoring and to enhance monitoring accuracy by integrating machine learning algorithms. The main contributions of this research are as follows: (1) evaluating the feasibility of estimating rice SPAD using multi-sensor data from a phenotyping robot; (2) analyzing the impacts of different input features and various machine learning algorithms on the accuracy of rice SPAD estimation; (3) constructing a high-precision and robust SPAD monitoring model by integrating multi-sensor data from phenotyping robots with machine learning algorithms. In summary, this study provides a convenient, rapid, and accurate method for rice SPAD monitoring.

2. Materials and Methods

To monitor rice SPAD values using multi-source data from multispectral and RGB images acquired by a phenotyping robot, rice nitrogen experiments were conducted at two ecological sites. We collected multispectral and RGB images at the jointing, heading, and filling stages, respectively, and concurrently measured SPAD values. Vegetation indices were extracted from the multispectral images, while color indices and texture features were derived from the RGB images. Subsequently, four machine learning algorithms—random forest regression (RF), partial least squares regression (PLSR), extreme gradient boosting (XGBoost), and boosted regression trees (BRT)—were employed to construct models for monitoring rice SPAD values through multi-source data fusion. The overall workflow of this study is illustrated in Figure 1.

2.1. Experimental Setup

The experiments conducted in this study were located in Rugao City (32°15′0.1″N, 120°45′43.8″E) and Taixing City (32°14′30.876″N, 120°14′48.408″E), Jiangsu Province. Both Rugao and Taixing feature a subtropical monsoon climate. We illustrate the layout of the experimental plots in Figure 2, and summarize additional details of the experiments in Table 1.
In the Rugao region, 36 experimental plots were established, whereas 24 plots were set up in the Taixing experiment. Both experiments incorporated nitrogen gradient treatments and different rice varieties. Specifically, the plot size in Rugao was 4 × 6 m, with three nitrogen application rates (0, 150, and 300 kg/ha) applied in a 4:2:2:2 split between the basal dressing and three topdressings. Two rice cultivars, Nangeng46 and Yongyou1540, were planted at spacings of 30 × 15 cm and 50 × 15 cm, respectively. In Taixing, the plot size was 6 × 8 m, with five nitrogen levels (0, 240, 270, 300, and 330 kg/ha) applied in the same split. Three rice cultivars (Huruang1211, Yanjing15, and Nangeng9108) were planted at spacings of 18 × 25 cm, 14 × 25 cm, and 12 × 25 cm, respectively. Other field management practices at both locations followed local high-yield cultivation protocols. This experimental design allowed a comprehensive evaluation of the interactive effects of nitrogen management, planting density, and rice variety on crop performance under subtropical monsoon climate conditions.

2.2. Data Collection

The phenotyping robot employed in this study was independently developed by our laboratory in previous research. It can operate in both paddy fields and upland environments, features adjustable wheelbases, and can carry multiple sensors [23]. Equipped with a multispectral imager and an RGB camera, the robotic platform was used to collect data during the jointing, heading, and grain-filling stages of rice growth at the two experimental sites. SPAD values were measured concurrently, as illustrated in Figure 3. Data collection was carried out between 10:00 and 13:00 on clear, windless days to ensure optimal conditions. The multispectral imager is a mosaic snapshot-type device developed in-house, capable of simultaneously capturing eight spectral bands (467, 474, 527, 566, 679, 717, 731, and 822 nm, with bandwidths ranging from 12 to 20 nm) in a single exposure. This design ensures precise spatial alignment of the multi-band images. Radiometric calibration was performed prior to each data acquisition session, and three replicate images were captured for each experimental plot. The RGB camera was a Nikon Z5 (Nikon Co., Yokohama, Japan) with a 24–70 mm F4 lens fixed at a 50 mm focal length, with settings kept constant to ensure data consistency. Both sensors were mounted on a stabilized gimbal of the phenotyping robot, positioned 50–60 cm above the crop canopy to avoid shadows and ensure uniform data collection.
The SPAD values of rice leaves were measured using a SPAD-502 chlorophyll meter (Konica Minolta Co., Tokyo, Japan). This portable handheld device enables the rapid, non-destructive assessment of relative chlorophyll content in plant leaves by measuring light transmittance at wavelengths of 650 nm and 940 nm. Measurements were taken at the same locations where spectral data and RGB images were collected. For each plot, a “five-point sampling method” was adopted, involving 10 randomly selected rice plants. Measurements were targeted at the upper, middle, and lower parts of each leaf, avoiding leaf veins. The average of these measurements was calculated to determine the leaf SPAD value for each plot.

2.3. Data Processing and Feature Extraction

2.3.1. Extraction of Vegetation Index

ENVI 5.3 (Boulder, CO, USA) was employed for image cropping, region-of-interest (ROI) selection, and reflectance calculation, and was used to extract the spectral reflectance of each band from the preprocessed multispectral images. In this study, various vegetation indices (VIs) were used as inputs for developing the SPAD inversion model. Among the eight available bands, five bands with central wavelengths of 474, 566, 679, 717, and 822 nm were selected for subsequent analyses. Based on these selected bands, commonly used VIs were calculated to serve as machine learning inputs, as listed in Table 2.
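For readers reproducing this step, the short sketch below shows how such indices can be computed once per-plot mean reflectance has been exported from ENVI. It is a minimal illustration, not the study's code: the function name, the dictionary layout, and the sample reflectance values are our own assumptions, while the band-to-wavelength assignment (B = 474 nm, G = 566 nm, R = 679 nm, RE = 717 nm, NIR = 822 nm) follows the band selection described above.

```python
def vegetation_indices(refl: dict) -> dict:
    """Compute a subset of the Table 2 indices from mean plot reflectance.

    `refl` maps band labels to mean ROI reflectance. Band assignment
    (assumed): B = 474 nm, G = 566 nm, R = 679 nm, RE = 717 nm, NIR = 822 nm.
    """
    B, G, R = refl["B"], refl["G"], refl["R"]
    RE, NIR = refl["RE"], refl["NIR"]
    return {
        "NDVI": (NIR - R) / (NIR + R),
        "RVI": NIR / R,
        "CIgreen": NIR / G - 1,
        "CIrededge": NIR / RE - 1,
        "GNDVI": (NIR - G) / (NIR + G),
        "SAVI": 1.5 * (NIR - R) / (NIR + R + 0.5),
        "EVI": 2.5 * (NIR - R) / (NIR + 6 * R - 7.5 * B + 1),
        "Int1": (G + R) / 2,
    }

# Example with illustrative reflectance values for one plot (not measured data)
print(vegetation_indices({"B": 0.03, "G": 0.08, "R": 0.05, "RE": 0.25, "NIR": 0.45}))
```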

2.3.2. Color Index Extraction

The average grayscale values were extracted from the red, green, and blue channels of each sampling point in the RGB images using ENVI 5.3. These grayscale values were normalized to obtain standardized values (r, g, and b), which were then used to calculate various color indices (CIs) as shown in Table 3. Equations (1)–(3) were used to calculate the normalized values (r, g, and b).
$$ r = \frac{R}{R + G + B} \qquad (1) $$

$$ g = \frac{G}{R + G + B} \qquad (2) $$

$$ b = \frac{B}{R + G + B} \qquad (3) $$
where R, G, and B are the DN values of the red, green, and blue wavebands, respectively. In this study, seven common CIs were used to analyze the relationship with SPAD.
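As a concrete illustration of Equations (1)–(3) and the Table 3 definitions, the sketch below normalizes the channel DN values of one sampling point and derives the seven CIs. This is a minimal reconstruction under our own naming assumptions, not the study's processing code:

```python
def color_indices(R: float, G: float, B: float) -> dict:
    """Normalize RGB DN values (Equations (1)-(3)) and compute the Table 3 CIs."""
    total = R + G + B
    r, g, b = R / total, G / total, B / total
    exg = 2 * g - r - b          # Excess Green
    exr = 1.4 * r - g            # Excess Red
    return {
        "NDI": (g - r) / (g + r),
        "ExG": exg,
        "ExR": exr,
        "ExGR": exg - exr,
        "VARI": (g - r) / (g + r - b),
        "GLI": (2 * g - r - b) / (2 * g + r + b),
        "WI": (g - b) / (r - g),
    }

# Example with illustrative mean DN values for one sampling point
print(color_indices(R=92.0, G=141.0, B=70.0))
```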

2.3.3. Extraction of Texture Features

In this study, the Gray-Level Co-Occurrence Matrix (GLCM) was employed to derive texture features. GLCM is widely recognized as a statistical method for describing the spatial correlation of grayscale levels [35], and it plays a crucial role in establishing quantitative relationships with physiological parameters. GLCM was computed using ENVI software, with four key parameters set as follows: (1) the smallest window size (rows × columns = 3 × 3) was adopted to capture detailed texture information; (2) the moving distance was set to 1 pixel; (3) among the four moving directions (0°, 45°, 90°, and 135°), the 45° direction was selected for calculation because its grayscale values are close to the average of the four directions [35]; and (4) the grayscale quantization level was set to 64, the highest of the available levels, to ensure the quality of the GLCM.
Eight texture features from the red, green, and blue bands were extracted from the GLCM, namely mean (MEA), variance (VAR), homogeneity (HOM), contrast (CON), dissimilarity (DIS), entropy (ENT), second moment (SEM), and correlation (COR). For the convenience of subsequent analyses, these texture features were labeled with prefixes such as “R-”, “G-”, and “B-” to indicate that they were extracted from the three channels based on the GLCM (for example, “B-Con” represents the contrast of the blue band).
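ENVI computes these statistics per pixel within a moving window; for readers without ENVI, a patch-level approximation with scikit-image is sketched below. It mirrors the settings above (distance 1 pixel, 45° direction, 64 gray levels) but computes one GLCM over the whole ROI rather than a 3 × 3 moving window, and the quantization and entropy conventions are our own assumptions:

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(band: np.ndarray, levels: int = 64) -> dict:
    """GLCM texture features for one 8-bit channel (distance 1 px, 45 degrees,
    64 gray levels), loosely mirroring the ENVI settings in Section 2.3.3."""
    # Quantize 0-255 grayscale into `levels` bins (assumed quantization scheme).
    q = (band.astype(np.float64) / 256.0 * levels).astype(np.uint8)
    glcm = graycomatrix(q, distances=[1], angles=[np.pi / 4],
                        levels=levels, symmetric=True, normed=True)
    p = glcm[:, :, 0, 0]                       # normalized co-occurrence matrix
    i = np.arange(levels, dtype=np.float64)
    marginal = p.sum(axis=1)
    mea = float(np.sum(i * marginal))                      # MEA
    var = float(np.sum((i - mea) ** 2 * marginal))         # VAR
    ent = float(-np.sum(p[p > 0] * np.log(p[p > 0])))      # ENT (natural log)
    feats = {"MEA": mea, "VAR": var, "ENT": ent}
    for name, prop in [("HOM", "homogeneity"), ("CON", "contrast"),
                       ("DIS", "dissimilarity"), ("SEM", "ASM"),
                       ("COR", "correlation")]:
        feats[name] = float(graycoprops(glcm, prop)[0, 0])
    return feats

# Example on a random 8-bit patch standing in for one channel ROI
rng = np.random.default_rng(0)
print(glcm_features(rng.integers(0, 256, size=(64, 64), dtype=np.uint8)))
```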

2.4. Model Construction and Evaluation

IBM SPSS Statistics 25 (IBM Corp., Armonk, NY, USA) was used to analyze the experimental rice SPAD data and explore the correlations between the three types of features and SPAD. By quantifying the degree of association among features, this analysis helped identify redundant information, optimize feature selection, and determine the features most sensitive to SPAD values. The selected features were then used as inputs for the machine learning models, providing a basis for subsequent modeling.
The chosen features were integrated into predictive models using random forest regression (RF), partial least squares regression (PLSR), extreme gradient boosting (XGBoost), and boosted regression trees (BRT), all implemented with the scikit-learn library in Python 3.8. During training, 70% of the data was used as the training set and 30% as the validation set. A 10-fold cross-validation approach was adopted, and grid search was employed to obtain the optimal model parameters.
Random Forest is an ensemble algorithm that reduces overfitting by averaging predictions from multiple decision trees. It involves preprocessing data, training trees on bootstrapped subsets with random feature selection, aggregating outputs, and optimizing via parameters like tree depth. Its random sampling of samples and features ensures high accuracy and robustness, suited for nonlinear and high-dimensional data.
Partial Least Squares Regression integrates principal component analysis and linear regression to handle multicollinearity by extracting latent variables correlated with the dependent variable. It standardizes data, extracts components (count via cross-validation), evaluates fit with residuals, and interprets relationships through loadings, simplifying models while preserving key correlations.
Extreme Gradient Boosting, an optimized gradient boosting method, uses regularization to control complexity and second-order derivatives for faster convergence. It processes data as DMatrix, tunes hyperparameters via grid search, trains trees iteratively with early stopping, and evaluates via metrics like RMSE, enhancing speed and stability.
Boosted Regression Trees iteratively trains decision trees to correct prior errors, following the “weak learner integration” principle. It prepares data, initializes weak learners, adjusts weights by residuals, optimizes via cross-validation, and explains impacts via partial dependence plots. It excels at nonlinear relationships with minimal tuning and high interpretability.
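The sketch below illustrates this training procedure for the RF case: a 70/30 split, a 10-fold cross-validated grid search, and hyperparameter ranges echoing those reported in Section 3.3 (mtry as the max_features fraction, n_estimators from 100 to 2100). The synthetic data and the exact scikit-learn settings are illustrative assumptions, not the study's code:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GridSearchCV, train_test_split

# Random data stands in for the fused feature matrix (8 VIs + 5 CIs + 10
# texture features = 23 columns) and the plot-level SPAD measurements.
rng = np.random.default_rng(42)
X = rng.normal(size=(180, 23))
y = rng.normal(loc=41.1, scale=3.5, size=180)

# 70% training / 30% validation split, as described above.
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.3,
                                                  random_state=0)

# Grid search with 10-fold cross-validation on the training set.
param_grid = {
    "max_features": [0.2, 0.4, 0.6, 0.8, 1.0],       # the "mtry" fraction
    "n_estimators": list(range(100, 2101, 200)),
}
search = GridSearchCV(RandomForestRegressor(random_state=0), param_grid,
                      cv=10, scoring="neg_root_mean_squared_error", n_jobs=-1)
search.fit(X_train, y_train)
print(search.best_params_)                 # tuned hyperparameters
print(-search.score(X_val, y_val))         # validation RMSE
```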
The coefficient of determination (R2) and root mean square error (RMSE) were used as two accuracy evaluation metrics. The equations for these metrics are as follows.
$$ R^2 = 1 - \frac{\sum_{i=1}^{n} (y_i - \hat{y}_i)^2}{\sum_{i=1}^{n} (y_i - \bar{y})^2} $$

$$ \mathrm{RMSE} = \sqrt{\frac{1}{n} \sum_{i=1}^{n} (y_i - \hat{y}_i)^2} $$

where $y_i$ and $\hat{y}_i$ represent the measured and predicted values of sample $i$, respectively; $\bar{y}$ denotes the mean of the measured values; and $n$ is the number of samples.
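In code, both metrics follow directly from these definitions; a minimal check with scikit-learn (illustrative values only):

```python
import numpy as np
from sklearn.metrics import mean_squared_error, r2_score

y_true = np.array([41.2, 39.8, 44.5, 37.1, 42.0])   # measured SPAD (illustrative)
y_pred = np.array([40.5, 40.9, 43.2, 38.0, 41.1])   # model predictions

r2 = r2_score(y_true, y_pred)
rmse = np.sqrt(mean_squared_error(y_true, y_pred))
print(f"R2 = {r2:.3f}, RMSE = {rmse:.3f}")
```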

3. Results

3.1. Statistical Analysis of SPAD

The analysis results of the SPAD values obtained in this study are presented in Table 4 and Figure 4. A total of 180 samples were collected, with values ranging from 29.34 to 47.10 and an overall coefficient of variation (CV) of 8.5%. The coefficients of variation at the jointing, heading, and filling stages were 6.7%, 6.5%, and 10.1%, respectively. Overall, the variability of the SPAD values was relatively small, indicating a relatively uniform distribution across samples. Figure 4 and Table 4 illustrate the distribution characteristics of the SPAD values of rice plants at different growth stages; this distribution captures the variability within the experimental fields and provides a suitable basis for model construction.

3.2. The Correlation of Characteristic Parameters to SPAD

A Pearson correlation analysis was performed between the SPAD values and the vegetation indices, color indices, and texture features, as well as among the features themselves. The correlations among features are shown in Figure 5, and the correlations between the features and SPAD are presented in Figure 6.
Vegetation indices showed the strongest overall correlation, followed by color indices, while texture features had the weakest correlation. The correlation coefficients of the vegetation indices ranged from 0.254 to 0.809, with NDVI showing the highest correlation. For color indices, the correlation coefficients ranged from 0.241 to 0.63, with WI showing the highest correlation. The correlation coefficients of the texture features ranged from 0.182 to 0.428, among which B_Homogeneity had the highest correlation. Finally, the eight vegetation indices, five color indices, and ten texture features with the highest correlations were selected as the model input feature set.

3.3. Evaluation of SPAD Monitoring Model Based on Machine Learning and Multi-Sensor Data

A combination of four machine learning algorithms and parameter optimization was used to establish quantitative relationships between the feature information and SPAD values of rice at three growth stages, thereby developing robust SPAD estimation models. Three machine learning models were trained using the optimal algorithm and parameters for each type of feature, and one model with optimized parameters was trained on the fusion of the three feature types. Table 5 presents the results of the constructed SPAD monitoring models for the 70% training set and the 30% validation set; all models were built using data from the three growth stages as input. Among models trained with single-type features, vegetation indices yielded relatively high accuracy (R2 = 0.67~0.78), and the combination of vegetation indices and random forest (RF) performed best, with an R2 of 0.78. Models trained with color indices exhibited moderate accuracy (R2 = 0.61~0.70), while those trained with texture features performed the poorest (R2 = 0.55~0.64). Models based on the fusion of the three feature types achieved the highest accuracy in rice SPAD estimation (R2 = 0.75~0.83), with all four machine learning algorithms outperforming their counterparts trained with single-type features. Among them, the RF algorithm achieved an R2 of 0.83 and an RMSE of 1.593; the partial least squares regression (PLSR) algorithm had an R2 of 0.75; the extreme gradient boosting (XGBoost) algorithm reached an R2 of 0.80; and the boosted regression trees (BRT) algorithm obtained an R2 of 0.78. The RF-based rice SPAD monitoring model across all growth stages demonstrated the highest predictive accuracy, characterized by a higher R2 and a lower RMSE. Across the four feature inputs, the RF model consistently performed best, while the other three algorithms showed relatively balanced performance across inputs. It is noteworthy that the XGBoost model fit the 70% training set almost perfectly, with an R2 of 1.0 in all cases, which may indicate overfitting or reflect the inherent characteristics of the algorithm.
In the process of training machine learning algorithm models, grid search and cross-validation methods were employed to determine the optimal parameters for the models, aiming to achieve the best performance. For the machine learning models trained with the fusion of the three types of features, the results of the grid search are shown in Figure 7, where the ordinate represents RMSE. The RMSE was minimized through the optimization of model parameters.
Grid search was performed for the learning rate and n_estimators of the BRT model, with the learning rate ranging from 0 to 0.30 and n_estimators ranging from 100 to 1000 at intervals of 100; the optimal values were 0.15 and 300, respectively. For the XGBoost model, grid search was conducted for n_estimators and max_depth, with search ranges of 100 to 800 and 0 to 20, respectively; the optimal values were 800 and 9. For the RF model, mtry and n_estimators were optimized via grid search, with mtry ranging from 0 to 1 and n_estimators ranging from 100 to 2100 at a step size of 200; the optimal values were 0.82 and 300, respectively. For the PLSR model, n_components was optimized within the range of 0 to 22, and the optimal number was 20.
Figure 8 presents scatter plots of the optimal validation set results for the rice SPAD estimation models across the entire growth cycle, constructed using different input features and all based on the random forest algorithm. The results indicate that the developed estimation models exhibit relatively stable inversion performance for rice SPAD (R2 = 0.64~0.83). The integration of multi-sensor data with machine learning algorithms proves to be an effective approach for improving SPAD inversion accuracy; the SPAD estimation model built by combining RF with the three feature types achieves the highest precision (R2 = 0.83). Additionally, the figure includes a fitting verification for each growth stage, revealing that SPAD inversion accuracy is relatively high during the heading stage, with R2 values ranging from 0.74 to 0.88 across the RF models with different feature inputs.
In the rice SPAD estimation model constructed using the RF algorithm with the fused input of the three feature types, Gini importance values were used to evaluate the contribution of each input feature to the model, as shown in Figure 9. Gini importance values range between 0 and 1; NDVI and Int1 have the highest and second-highest values, while G_SEM has the lowest. Among the feature types, vegetation indices contributed the most to the model, followed by color indices with moderate contributions, while texture features contributed the least. These findings highlight the value of integrating vegetation indices, color indices, and texture features to enhance the accuracy and robustness of rice SPAD monitoring models.
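For reference, Gini importance is exposed directly by scikit-learn's random forest as the impurity-based importance, normalized to sum to 1 across features. A minimal sketch with synthetic data and hypothetical feature labels (not the study's 23-feature matrix):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(180, 4))
y = 1.5 * X[:, 0] + 0.8 * X[:, 1] + rng.normal(scale=0.2, size=180)
names = ["NDVI", "Int1", "WI", "B_HOM"]    # hypothetical feature labels

rf = RandomForestRegressor(random_state=0).fit(X, y)
# feature_importances_ holds the impurity-based (Gini) importance of each feature.
for name, imp in sorted(zip(names, rf.feature_importances_), key=lambda t: -t[1]):
    print(f"{name}: {imp:.3f}")
```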
The results of the feature sensitivity analysis for the random forest model with the three features as co-inputs are shown in Figure 10, where the ordinate is R2 and the abscissa represents the input features of the model. The R2 of this optimal model is 0.83. DropScore refers to the performance metric obtained on the validation set after removing a specific feature from the training set and retraining the model; Delta is the contribution value of the feature, and a larger Delta indicates that the feature is more critical. As Figure 10 shows, the feature with the highest contribution is NDVI, with a Delta of 0.147, while the one with the lowest contribution is ExG, with a Delta of 0.073. The results in Figure 10 are generally consistent with those in Figure 9, indicating the sensitivity and variability of the different input features.
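The DropScore/Delta analysis amounts to drop-one-feature retraining. The sketch below implements one plausible reading of the definitions above, with Delta taken as the full-model validation R2 minus the DropScore; this interpretation, the synthetic data, and the feature labels are our own assumptions:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score

def drop_feature_sensitivity(X_tr, y_tr, X_val, y_val, names):
    """Retrain without each feature in turn; DropScore is the validation R2 of
    the reduced model and Delta = R2(full model) - DropScore (assumed)."""
    full = RandomForestRegressor(random_state=0).fit(X_tr, y_tr)
    base = r2_score(y_val, full.predict(X_val))
    deltas = {}
    for j, name in enumerate(names):
        keep = [k for k in range(X_tr.shape[1]) if k != j]
        reduced = RandomForestRegressor(random_state=0).fit(X_tr[:, keep], y_tr)
        deltas[name] = base - r2_score(y_val, reduced.predict(X_val[:, keep]))
    return base, deltas

# Demo on synthetic data with hypothetical feature labels
rng = np.random.default_rng(1)
X = rng.normal(size=(120, 3))
y = 2.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=120)
base, deltas = drop_feature_sensitivity(X[:84], y[:84], X[84:], y[84:],
                                        ["NDVI", "Int1", "ExG"])
print(base, deltas)
```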

4. Discussion

The introduction of phenotyping robots as data collection platforms offers numerous advantages over traditional manual sampling or fixed-equipment-based methods, including a high degree of automation, repeatability, and flexible operation in complex field environments. Compared with UAV platforms, phenotyping robots can be equipped with multiple sensors to enable phenotypic monitoring from multi-sensor data, and the data they acquire are close-range and high-resolution, providing high-quality inputs for phenotypic monitoring. The phenotyping robot described in this study carries two sensors and operates according to a pre-set program, allowing it to accurately and efficiently acquire multispectral images and RGB photos of rice at different growth stages. Multispectral images capture the reflectance of rice across different spectral bands, thereby reflecting its physiological and biochemical characteristics, while RGB images provide intuitive visual texture and color information. Fusing these two data sources describes rice canopy features from multiple dimensions, offering a reliable means for the long-term dynamic monitoring of SPAD changes during rice growth. Instead of traditional modeling methods, this study employs machine learning algorithms to construct rice SPAD monitoring models. As a data-driven approach, machine learning can overcome the limitations of traditional empirical models, thereby improving the accuracy of SPAD estimation. Four algorithms were used: random forest regression, partial least squares regression, extreme gradient boosting, and boosted regression trees. Among them, random forest regression achieved the highest accuracy for every data input, demonstrating the superiority and applicability of this algorithm. Specifically, the monitoring model built by feeding three feature types—vegetation indices from multispectral images, and color indices and texture features from RGB images—into the random forest algorithm yields an R2 of 0.83. This validates that integrating multi-sensor data from phenotyping robot platforms with machine learning algorithms is an effective and practical method for monitoring rice SPAD. For SPAD estimation, machine learning-driven methods do not require knowledge of physical mechanisms and offer strong adaptability to multi-source feature fusion, ease of implementation, and high accuracy, although their performance depends on the sample distribution. Physics-based methods have good interpretability but depend on high-precision data and expert knowledge, limiting their generalizability. Hybrid methods strike a balance but involve complex coupling designs that complicate practical application. The machine learning-based multi-source feature integration method proposed in this study is therefore well suited to practical agricultural monitoring needs.
A notable limitation of this study lies in the relatively limited sample size during model construction and validation, which may compromise the generalization ability and stability of the machine learning models to a certain extent. Although the existing samples cover combinations of different nitrogen gradients, varieties, and planting densities across the two experimental sites, the overall sample size remains insufficient to fully support complex machine learning algorithms in deeply exploring the underlying patterns in the data. This issue is particularly prominent when the data contain noise, feature dimensions are high, or there is a local imbalance in sample distribution—circumstances under which a small sample size may lead to model overfitting to the training data. Evidence of this is presented in Table 5, where the R2 values of the extreme gradient boosting model reach 1 for the training set, yet the model fails to achieve satisfactory accuracy for the test set. This study constructed SPAD monitoring models using data from the entire growth period of rice. The results of SPAD estimation models built for different rice growth stages—using three types of feature fusion and four distinct machine learning algorithms—are presented in Figure 11. As shown in the figure, the accuracy of SPAD retrieval was highest during the heading stage. Compared with the jointing and grain-filling stages, rice leaves at the heading stage exhibit peak chlorophyll content, with fully expanded leaves of uniform thickness. This minimizes interference from mesophyll structure on spectral signals, while soil background reflection interference is nearly eliminated, allowing the spectra to primarily reflect the true signals of the leaves. At the level of machine learning algorithms, the random forest regression model outperformed the other three models, with PLSR showing the poorest performance. This finding is consistent with the results reported by other scholars [36,37].
Further expanding the dataset—for instance, by incorporating experimental data from more years and diverse climatic regions, or increasing the sampling frequency throughout the growth period in existing experiments—would provide machine learning models with richer feature distribution information and broader scenario coverage. Sufficient data not only reduces the interference of random errors in model training but also enhances the model's ability to analyze different growth stages, enabling algorithms to more accurately identify the nonlinear relationships between multi-source data and SPAD values, ultimately improving model robustness and providing more reliable technical support for large-scale, high-precision SPAD estimation. In subsequent studies, broader multi-source feature fusion could be employed by integrating climatic factors and soil property data with the existing multispectral and RGB data. Such environmental and soil data can explain variations in rice physiological status from a mechanistic perspective, complementing spectral and visual features to help the model more comprehensively capture the driving factors of SPAD values. Compared with traditional machine learning methods, deep learning has stronger automatic feature extraction capabilities, making it particularly suitable for processing multi-source heterogeneous data. Through hierarchical feature learning, deep learning can uncover hidden high-order correlations within the data, further improving the prediction accuracy and generalization of SPAD estimation in complex agricultural scenarios. By combining expanded datasets, multi-source feature fusion, and deep learning algorithms, a more robust SPAD estimation model could be developed, offering more powerful technical support for crop growth monitoring and precision fertilization.

5. Conclusions

Multispectral images and RGB images of rice across multiple growth stages were acquired via a phenotyping robot. By extracting vegetation indices, color indices, and texture features from these images, and integrating multi-sensor data with machine learning algorithms, the effective monitoring of rice SPAD was achieved. Specifically, the fusion of the three feature types outperformed single-type features, and the Random Forest regression algorithm exhibited superior performance compared to the other three machine learning algorithms. The constructed SPAD monitoring model yielded an R2 of 0.83 and an RMSE of 1.593. Moreover, the heading stage achieved higher accuracy than the jointing and filling stages. In conclusion, the method proposed in this study is characterized by automation, high efficiency, and non-destructive monitoring, which can provide technical support and a theoretical foundation for rice phenotyping monitoring and precision crop cultivation management.

Author Contributions

Conceptualization, M.S. and D.Z.; methodology, M.S. and S.L.; software, M.S. and Y.Y.; validation, Y.Y. and G.Z.; formal analysis, S.L.; investigation, M.S., S.L., Y.Y. and G.Z.; writing—original draft preparation, M.S., W.C. and S.L.; writing—review and editing, Y.Z.; visualization, X.Y. and D.Z.; supervision, X.Y. and D.Z.; project administration, W.C.; funding acquisition, Y.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Key Research and Development Program of China (No. 2021YFD2000101), the Natural Science Foundation of Jiangsu Province (No. BK20241544), the Postgraduate Research & Practice Innovation Program of Jiangsu Province (No. KYCX25_0974).

Data Availability Statement

The datasets generated and/or analyzed during the current study are available from the corresponding author upon reasonable request.

Conflicts of Interest

The authors declare no conflicts of interest.

References

1. Huang, M.; Zou, Y.B. Integrating mechanization with agronomy and breeding to ensure food security in China. Field Crop. Res. 2018, 224, 22–27.
2. Zhang, L.Y.; Han, W.T.; Niu, Y.X.; Chávez, J.L.; Shao, G.M.; Zhang, H.H. Evaluating the sensitivity of water stressed maize chlorophyll and structure based on UAV derived vegetation indices. Comput. Electron. Agric. 2021, 185, 106174.
3. Huang, Y.; Ma, Q.; Wu, X.; Li, H.; Xu, K.; Ji, G.; Qian, F.; Li, L.; Huang, Q.; Long, Y.; et al. Estimation of chlorophyll content in Brassica napus based on unmanned aerial vehicle images. Oil Crop. Sci. 2022, 7, 149–155.
4. Zhang, Y.; Liang, K.; Zhu, F.; Zhong, X.; Lu, Z.; Chen, Y.; Pan, J.; Lu, C.; Huang, J.; Ye, Q.; et al. Differential Study on Estimation Models for Indica Rice Leaf SPAD Value and Nitrogen Concentration Based on Hyperspectral Monitoring. Remote Sens. 2024, 16, 4604.
5. Li, S.; Jin, J.; Afrin, M.; Ge, X.; Fu, J.; Tian, Y.-C. Mobility-as-a-Resilience-Service in Internet of Robotic Things Through Robust Multi-Agent Deep Reinforcement Learning. IEEE Internet Things J. 2025, 1.
6. Yang, W.N.; Feng, H.; Zhang, X.H.; Zhang, J.; Doonan, J.H.; Batchelor, W.D.; Xiong, L.Z.; Yan, J.B. Crop Phenomics and High-Throughput Phenotyping: Past Decades, Current Challenges, and Future Perspectives. Mol. Plant 2020, 13, 187–214.
7. Xu, R.; Li, C. A Review of High-Throughput Field Phenotyping Systems: Focusing on Ground Robots. Plant Phenomics 2022, 2022, 9760269.
8. Fan, Z.; Sun, N.; Qiu, Q.; Li, T.; Feng, Q.; Zhao, C. In Situ Measuring Stem Diameters of Maize Crops with a High-Throughput Phenotyping Robot. Remote Sens. 2022, 14, 1030.
9. Pérez-Ruiz, M.; Prior, A.; Martinez-Guanter, J.; Apolo-Apolo, O.E.; Andrade-Sanchez, P.; Egea, G. Development and evaluation of a self-propelled electric platform for high-throughput field phenotyping in wheat breeding trials. Comput. Electron. Agric. 2020, 169, 105237.
10. Zhang, R.; Yang, P.; Liu, S.; Wang, C.; Liu, J. Evaluation of the Methods for Estimating Leaf Chlorophyll Content with SPAD Chlorophyll Meters. Remote Sens. 2022, 14, 5144.
11. Dong, T.; Shang, J.; Chen, J.M.; Liu, J.; Qian, B.; Ma, B.; Morrison, M.J.; Zhang, C.; Liu, Y.; Shi, Y.; et al. Assessment of Portable Chlorophyll Meters for Measuring Crop Leaf Chlorophyll Concentration. Remote Sens. 2019, 11, 2706.
12. Ma, W.T.; Han, W.T.; Zhang, H.H.; Cui, X.; Zhai, X.D.; Zhang, L.Y.; Shao, G.M.; Niu, Y.X.; Huang, S.J. UAV multispectral remote sensing for the estimation of SPAD values at various growth stages of maize under different irrigation levels. Comput. Electron. Agric. 2024, 227, 109566.
13. Xie, J.; Wang, J.; Chen, Y.; Gao, P.; Yin, H.; Chen, S.; Sun, D.; Wang, W.; Mo, H.; Shen, J.; et al. Estimating the SPAD of Litchi in the Growth Period and Autumn Shoot Period Based on UAV Multi-Spectrum. Remote Sens. 2023, 15, 5767.
14. Reyes, J.F.; Correa, C.; Zúñiga, J. Reliability of different color spaces to estimate nitrogen SPAD values in maize. Comput. Electron. Agric. 2017, 143, 14–22.
15. Liu, Y.; Hatou, K.; Aihara, T.; Kurose, S.; Akiyama, T.; Kohno, Y.; Lu, S.; Omasa, K. A Robust Vegetation Index Based on Different UAV RGB Images to Estimate SPAD Values of Naked Barley Leaves. Remote Sens. 2021, 13, 686.
16. Zheng, H.; Ma, J.; Zhou, M.; Li, D.; Yao, X.; Cao, W.; Zhu, Y.; Cheng, T. Enhancing the Nitrogen Signals of Rice Canopies across Critical Growth Stages through the Integration of Textural and Spectral Information from Unmanned Aerial Vehicle (UAV) Multispectral Imagery. Remote Sens. 2020, 12, 957.
17. Yin, Q.; Zhang, Y.; Li, W.; Wang, J.; Wang, W.; Ahmad, I.; Zhou, G.; Huo, Z. Better Inversion of Wheat Canopy SPAD Values before Heading Stage Using Spectral and Texture Indices Based on UAV Multispectral Imagery. Remote Sens. 2023, 15, 4935.
18. Zhang, C.; Chen, Z.; Yang, G.; Xu, B.; Feng, H.; Chen, R.; Qi, N.; Zhang, W.; Zhao, D.; Cheng, J.; et al. Removal of canopy shadows improved retrieval accuracy of individual apple tree crowns LAI and chlorophyll content using UAV multispectral imagery and PROSAIL model. Comput. Electron. Agric. 2024, 221, 108959.
19. Cheng, J.; Yang, H.; Qi, J.; Sun, Z.; Han, S.; Feng, H.; Jiang, J.; Xu, W.; Li, Z.; Yang, G.; et al. Estimating canopy-scale chlorophyll content in apple orchards using a 3D radiative transfer model and UAV multispectral imagery. Comput. Electron. Agric. 2022, 202, 107401.
20. Wang, F.; Yang, M.; Ma, L.; Zhang, T.; Qin, W.; Li, W.; Zhang, Y.; Sun, Z.; Wang, Z.; Li, F.; et al. Estimation of Above-Ground Biomass of Winter Wheat Based on Consumer-Grade Multi-Spectral UAV. Remote Sens. 2022, 14, 1251.
21. Guo, Y.; Wang, H.; Wu, Z.; Wang, S.; Sun, H.; Senthilnath, J.; Wang, J.; Robin Bryant, C.; Fu, Y. Modified Red Blue Vegetation Index for Chlorophyll Estimation and Yield Prediction of Maize from Visible Images Captured by UAV. Sensors 2020, 20, 5055.
22. Xiao, Q.L.; Tang, W.T.; Zhang, C.; Zhou, L.; Feng, L.; Shen, J.X.; Yan, T.Y.; Gao, P.; He, Y.; Wu, N. Spectral Preprocessing Combined with Deep Transfer Learning to Evaluate Chlorophyll Content in Cotton Leaves. Plant Phenomics 2022, 2022.
23. Su, M.; Zhou, D.; Yun, Y.Z.; Ding, B.; Xia, P.; Yao, X.; Ni, J.; Zhu, Y.; Cao, W.X. Design and implementation of a high-throughput field phenotyping robot for acquiring multisensor data in wheat. Plant Phenomics 2025, 7, 100014.
24. Narmilan, A.; Gonzalez, F.; Salgadoe, A.S.A.; Kumarasiri, U.W.L.M.; Weerasinghe, H.A.S.; Kulasekara, B.R. Predicting Canopy Chlorophyll Content in Sugarcane Crops Using Machine Learning Algorithms and Spectral Vegetation Indices Derived from UAV Multispectral Imagery. Remote Sens. 2022, 14, 1140.
25. Gitelson, A.A.; Viña, A.; Ciganda, V.; Rundquist, D.C.; Arkebauer, T.J. Remote estimation of canopy chlorophyll content in crops. Geophys. Res. Lett. 2005, 32, L08403.
26. Cao, Q.; Miao, Y.X.; Feng, G.H.; Gao, X.W.; Li, F.; Liu, B.; Yue, S.C.; Cheng, S.S.; Ustin, S.L.; Khosla, R. Active canopy sensing of winter wheat nitrogen status: An evaluation of two sensor systems. Comput. Electron. Agric. 2015, 112, 54–67.
27. Bandyopadhyay, D.; Bhavsar, D.; Pandey, K.; Gupta, S.; Roy, A. Red Edge Index as an Indicator of Vegetation Growth and Vigor Using Hyperspectral Remote Sensing Data. Proc. Natl. Acad. Sci. India Sect. A Phys. Sci. 2017, 87, 879–888.
28. Riihimäki, H.; Luoto, M.; Heiskanen, J. Estimating fractional cover of tundra vegetation at multiple scales using unmanned aerial systems and optical satellite data. Remote Sens. Environ. 2019, 224, 119–132.
29. Gitelson, A.A.; Kaufman, Y.J.; Merzlyak, M.N. Use of a green channel in remote sensing of global vegetation from EOS-MODIS. Remote Sens. Environ. 1996, 58, 289–298.
30. Shao, G.M.; Wang, Y.J.; Han, W.T. Estimation Method of Leaf Area Index for Summer Maize Using UAV-Based Multispectral Remote Sensing. Smart Agric. 2020, 2, 118–128.
31. Woebbecke, D.M.; Meyer, G.E.; Vonbargen, K.; Mortensen, D.A. Color Indexes for Weed Identification under Various Soil, Residue, and Lighting Conditions. Trans. ASAE 1995, 38, 259–269.
32. Meyer, G.E.; Neto, J.C. Verification of color vegetation indices for automated crop imaging applications. Comput. Electron. Agric. 2008, 63, 282–293.
33. Gitelson, A.A.; Kaufman, Y.J.; Stark, R.; Rundquist, D. Novel algorithms for remote estimation of vegetation fraction. Remote Sens. Environ. 2002, 80, 76–87.
34. Louhaichi, M.; Borman, M.M.; Johnson, D.E. Spatially located platform and aerial photography for documentation of grazing impacts on wheat. Geocarto Int. 2001, 16, 65–70.
35. Haralick, R.M.; Shanmugam, K.; Dinstein, I.H. Textural Features for Image Classification. IEEE Trans. Syst. Man Cybern. 1973, SMC-3, 610–621.
36. Wang, Z.L.; Tan, X.M.; Ma, Y.M.; Liu, T.; He, L.M.; Yang, F.; Shu, C.H.; Li, L.L.; Fu, H.; Li, B.; et al. Combining canopy spectral reflectance and RGB images to estimate leaf chlorophyll content and grain yield in rice. Comput. Electron. Agric. 2024, 221, 108975.
37. Xu, X.Q.; Lu, J.S.; Zhang, N.; Yang, T.C.; He, J.Y.; Yao, X.; Cheng, T.; Zhu, Y.; Cao, W.X.; Tian, Y.C. Inversion of rice canopy chlorophyll content and leaf area index based on coupling of radiative transfer and Bayesian network models. ISPRS J. Photogramm. Remote Sens. 2019, 150, 185–196.
Figure 1. The flowchart of this study.
Figure 2. Diagram of experimental setup.
Figure 3. Data collection scenes using the phenotyping robot.
Figure 4. Boxplot of SPAD data.
Figure 5. Heatmap of correlations between features.
Figure 6. Heatmap of correlations between SPAD and features.
Figure 7. Optimization of machine learning models: (a) BRT, (b) XGBoost, (c) RF, and (d) PLSR.
Figure 8. 10-fold cross-validation results of RF models with different feature inputs: (a) vegetation index, (b) color index, (c) texture features, and (d) the integration of these three features.
Figure 9. Ranking of Gini Importance values.
Figure 10. Ranking of feature sensitivity analysis.
Figure 11. R2 and RMSE of four machine learning models with three feature fusion inputs across different growth stages.
Table 1. Experiment information.

| Experiment | Sowing Date | Sampling/Testing Date | Growth Period |
| --- | --- | --- | --- |
| 2024 in Rugao | 14 June | 4 August | Jointing Stage |
| | | 1 September | Heading Stage |
| | | 30 September | Filling Stage |
| 2024 in Taixing | 30 June | 7 August | Jointing Stage |
| | | 2 September | Heading Stage |
| | | 3 October | Filling Stage |
Table 2. Formulas for vegetation indices.

| Spectral Index | Formulation | Reference |
| --- | --- | --- |
| RVI (Ratio Vegetation Index) | NIR/R | [24] |
| CIgreen (Green Chlorophyll Index) | NIR/G − 1 | [25] |
| CIrededge (Red-Edge Chlorophyll Index) | NIR/RE − 1 | [25] |
| MDD (Modified Double Difference Index) | (NIR − RE) − (NIR − R) | [26] |
| Int1 (Intensity Index 1) | (G + R)/2 | [27] |
| Int2 (Intensity Index 2) | (G + NIR + R)/2 | [27] |
| Red-Edge NDVI (Red-Edge version of NDVI) | (NIR − RE)/(NIR + RE) | [28] |
| GARI (Green Atmospherically Resistant Vegetation Index) | NIR − [G − (B − R)] | [29] |
| SIPI (Structure Insensitive Pigment Index) | (NIR − B)/(NIR + R) | [24] |
| ARVI (Atmospherically Resistant Vegetation Index) | [NIR − (R − 2(B − R))]/[NIR + (R − 2(B − R))] | [24] |
| EVI (Enhanced Vegetation Index) | 2.5(NIR − R)/(NIR + 6R − 7.5B + 1) | [30] |
| GNDVI (Green Normalized Difference Vegetation Index) | (NIR − G)/(NIR + G) | [30] |
| NDVI (Normalized Difference Vegetation Index) | (NIR − R)/(NIR + R) | [30] |
| SAVI (Soil-Adjusted Vegetation Index) | 1.5(NIR − R)/(NIR + R + 0.5) | [30] |
| VARI (Visible Atmospherically Resistant Index) | (G − R)/(G + R − B) | [30] |
Table 3. Formulas for color indices.

| Color Index | Formulation | Reference |
| --- | --- | --- |
| NDI (Normalized Difference Index) | (g − r)/(g + r) | [31] |
| ExG (Excess Green Index) | 2g − r − b | [32] |
| ExR (Excess Red Index) | 1.4r − g | [32] |
| ExGR (Excess Green Minus Excess Red) | ExG − ExR | [32] |
| VARI (Visible Atmospherically Resistant Index) | (g − r)/(g + r − b) | [33] |
| GLI (Green Leaf Index) | (2g − r − b)/(2g + r + b) | [34] |
| WI (Woebbecke Index) | (g − b)/(r − g) | [31] |
Table 4. Summary statistics of the measured SPAD values.

| Growth Period | Sample Size | Maximum | Minimum | Mean | Standard Deviation | Variance | CV |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Three growth periods | 180 | 47.10 | 29.34 | 41.10 | 3.52 | 12.42 | 0.085 |
| Jointing | 60 | 46.87 | 35.67 | 42.63 | 2.88 | 8.34 | 0.067 |
| Heading | 60 | 47.10 | 36.81 | 41.53 | 2.73 | 7.47 | 0.065 |
| Filling | 60 | 45.90 | 29.34 | 39.13 | 3.92 | 15.36 | 0.101 |
Table 5. Construction of SPAD prediction models.

| Feature | Method | Validation R2 | Validation RMSE | Training R2 | Training RMSE |
| --- | --- | --- | --- | --- | --- |
| Vegetation Index | Random Forest Regression | 0.78 | 1.872 | 0.92 | 1.247 |
| | Partial Least Squares Regression | 0.67 | 3.080 | 0.83 | 2.107 |
| | Extreme Gradient Boosting Regression | 0.75 | 2.302 | 1 | 0.238 |
| | Boosted Regression Tree | 0.76 | 2.236 | 0.88 | 2.098 |
| Color Index | Random Forest Regression | 0.70 | 2.275 | 0.89 | 1.421 |
| | Partial Least Squares Regression | 0.61 | 3.020 | 0.72 | 2.662 |
| | Extreme Gradient Boosting Regression | 0.67 | 2.762 | 1 | 0.001 |
| | Boosted Regression Tree | 0.65 | 2.691 | 0.83 | 1.960 |
| Texture Features | Random Forest Regression | 0.64 | 2.580 | 0.91 | 1.111 |
| | Partial Least Squares Regression | 0.55 | 2.992 | 0.69 | 2.331 |
| | Extreme Gradient Boosting Regression | 0.58 | 2.870 | 1 | 0.003 |
| | Boosted Regression Tree | 0.61 | 2.632 | 0.92 | 0.636 |
| Integration of Three Features | Random Forest Regression | 0.83 | 1.593 | 0.92 | 1.013 |
| | Partial Least Squares Regression | 0.75 | 2.399 | 0.79 | 2.230 |
| | Extreme Gradient Boosting Regression | 0.80 | 1.997 | 1 | 0.000 |
| | Boosted Regression Tree | 0.78 | 2.197 | 0.88 | 1.391 |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
