Article

Estimation of Leaf Nitrogen Content in Rice Coupling Feature Fusion and Deep Learning with Multi-Sensor Images from UAV

by Xinlei Xu 1,2, Xingang Xu 1,2,*, Sizhe Xu 1, Yang Meng 1, Guijun Yang 1, Bo Xu 1, Xiaodong Yang 1, Xiaoyu Song 1, Hanyu Xue 1,2, Yuekun Song 1 and Tuo Wang 1

1 Key Laboratory of Quantitative Remote Sensing in Ministry of Agriculture and Rural Affairs, Information Technology Research Center, Beijing Academy of Agriculture and Forestry Sciences, Beijing 100097, China
2 College of Intelligent Science and Engineering, Beijing University of Agriculture, Beijing 102206, China
* Author to whom correspondence should be addressed.
Agronomy 2025, 15(12), 2915; https://doi.org/10.3390/agronomy15122915
Submission received: 17 November 2025 / Revised: 12 December 2025 / Accepted: 16 December 2025 / Published: 18 December 2025
(This article belongs to the Section Precision and Digital Agriculture)

Abstract

Assessing Leaf Nitrogen Content (LNC) is critical for evaluating crop nutritional status and monitoring growth. While Unmanned Aerial Vehicle (UAV) remote sensing has become a pivotal tool for nitrogen monitoring at the field scale, current research predominantly relies on uni-modal feature variables, so the integration of multidimensional feature information for nitrogen assessment remains largely underutilized. In this study, four types of feature variables (two kinds of spectral indices, color space parameters, and texture features from the RGB and multispectral sensors of a UAV) were extracted across three dimensions, and crop nitrogen-sensitive feature variables were selected by Gray Correlation Analysis (GCA). A fused deep neural network (DNN-F2) was then built for remote sensing monitoring of rice nitrogen and compared against five common machine learning algorithms (RF, GPR, PLSR, SVM and ANN). Experimental results indicate that the DNN-F2 model consistently outperformed the conventional machine learning algorithms across all three growth stages. Notably, the model achieved an average R2 improvement of 40%, peaking at the rice jointing stage with an R2 of 0.72, RMSE of 0.08, and NRMSE of 0.019. The study shows that fusing multidimensional feature information from UAVs with deep learning algorithms has great potential for nitrogen nutrient monitoring in rice, and can also provide technical support for fertilizer application decisions in rice fields.

1. Introduction

The theoretical basis for remote sensing of LNC lies in the strong correlation between nitrogen status and leaf optical properties. Nitrogen is a key constituent of chlorophyll; thus, nitrogen deficiency directly affects photosynthetic capacity and spectral reflectance characteristics [1]. Traditional manual field measurements of rice LNC are usually time-consuming and laborious, with potential for human error, and they typically capture crop information only at the point scale, making them poorly representative of field conditions and difficult to apply over large areas [2,3]. Recent advancements in Unmanned Aerial Vehicle (UAV) technology have demonstrated significant capacity for remote sensing applications, particularly in monitoring crop growth. Non-destructive assessment of Leaf Nitrogen Content (LNC) is pivotal for real-time crop growth diagnostics, enabling precise fertilization strategies and reducing environmental pollution. Despite its importance, current methodologies often lack the integration of multidimensional feature fusion with advanced deep learning algorithms. To bridge this gap and provide a validated tool for field application, this study develops a rice LNC monitoring model using a novel deep learning approach (DNN-F2) based on UAV imagery, and comprehensively validates its accuracy and robustness for non-destructive estimation [4,5]. Almeida et al. used a multispectral sensor carried on a UAV remote sensing platform to accurately assess the above-ground biomass and structural attributes of crops and successfully monitored the effects of restoration farming [6]. Li et al. efficiently predicted the leaf area index of rice based on UAV HD color sensor data combined with the normalized difference texture index [7]. However, these studies were mostly based on a single UAV color or multispectral sensor, and studies that consider multiple sensors simultaneously, together with the fusion of features from different dimensions, remain very limited.
Most previous studies have used a single feature variable as input to a rice prediction model, but one variable carries limited information and typically yields limited accuracy. For a single growth stage, there is a clear linear relationship between a single spectral index feature and nitrogen, but its sensitivity to LNC decreases when multiple growth stages are considered. Wang et al. combined digital and multispectral indices for accurate nitrogen management in rice and wheat with good results [8]. However, spectral index features usually suffer from saturation problems in estimating LNC, and their reflectance intensities in visible bands have limited responsiveness to changes in rice LNC [9,10]. Conversely, color space parameters capture more than reflectance intensity and chromatic detail; they also partially reflect canopy cover and aboveground biomass. Fu et al. used color space parameters extracted from UAV digital images to invert several nitrogen indices of winter wheat, and combining five color space parameters improved the corresponding prediction accuracy [11], which can alleviate the saturation problem of vegetation indices. Due to their capacity to capture variations in canopy cover and structural dynamics, texture features are extensively employed for estimating vegetation biomass and Leaf Area Index (LAI) [12,13]. However, such studies tend to use only one type of feature variable in isolation, and work that fuses multidimensional feature variables to predict rice LNC remains limited.
Studies based on traditional machine learning algorithms for crop growth monitoring have been well established [14]. Muro et al. successfully made spatial predictions of grassland biodiversity based on a random forest algorithm combined with vegetation indices [15], and Meacham-Hensold et al. used partial least squares regression and spectral information to measure crop physiological traits, avoiding data redundancy and missing information [16]. Although these models are usually less data intensive and easy to use, they tend to target only a single type of feature variable and do not capture more information from multiple dimensions for prediction. Deep learning algorithms, on the other hand, are often more complex, computationally powerful and able to process multidimensional information in parallel. Using a variety of deep neural networks, Maimaitijiang et al. attempted to fuse information from multiple dimensions such as temperature, height, texture and spectrum to predict soybean yield and showed superior performance to machine learning algorithms [17]. Sun et al. used a convolutional neural network algorithm to extract crop spatial features, weather data and surface temperature and fused them to predict crop multi-nutrient content well [18]. However, there is limited research on fused deep learning algorithms combined with multidimensional feature fusion for LNC monitoring of rice.
This study centers on developing a robust monitoring model for rice Leaf Nitrogen Content (LNC) by integrating multidimensional features from digital and multispectral UAV imagery with deep learning architectures, specifically the DNN-F2 model. A primary objective is to evaluate the efficacy of this feature fusion approach and benchmark the performance of the proposed deep learning algorithm against traditional machine learning methods.

2. Materials and Methods

2.1. Study Area and Experimental Design

Field experiments were conducted at the Demonstration Centre for Quality Agricultural Products in Ninghe District, Tianjin, China (39°26′34″ N, 117°33′13″ E). Situated between the North China Plain to the west and Bohai Bay to the southeast, the region is characterized by a typical warm temperate monsoon continental climate. As a traditional rice-cultivating area, it maintains an average annual temperature of 11.1 °C and receives approximately 2801.7 h of sunshine annually [19]. The experiment utilized the rice variety ‘Jinyuan 89’, planted across 12 plots, each with dimensions of 82 m × 56 m. Within each main plot, two sub-plots were established, resulting in a total of 24 sampling units; one leaf sample was collected from each unit (24 samples in total). Six gradient fertilizer treatments were applied: 600 kg/ha (N1), 540 kg/ha (N2), 480 kg/ha (N3), 420 kg/ha (N4), 360 kg/ha (N5), and 300 kg/ha (N6). Each treatment was replicated twice using a compound fertilizer (N-P-K ratio of 23-13-6). Figure 1 illustrates the geographical location and the specific UAV sampling zones.
The overall workflow following data acquisition is illustrated in Figure 2 and encompasses both data collection and preprocessing. The dataset comprises four distinct types of features, derived from UAV-based digital (RGB) and multispectral (MS) imagery, to comprehensively represent multiple feature variables under varying experimental treatments. In this study, six algorithms were employed to construct and systematically evaluate rice LNC prediction models: five conventional machine learning methods—Random Forest (RF), Gaussian Process Regression (GPR), Partial Least Squares Regression (PLSR), Support Vector Machine (SVM), and Artificial Neural Network (ANN)—alongside the proposed Deep Neural Network architecture (DNN-F2). Model development was based on four categories of input features and evaluated through a 10-fold cross-validated resampling strategy to ensure robustness and generalizability. The end-to-end pipeline is designed specifically for estimating rice canopy LNC from UAV-acquired RGB and MS imagery. It begins with field experiments, followed by a series of image preprocessing steps—including image selection, alignment and mosaicking, geometric correction, radiometric calibration, and initial feature extraction. To enhance the available spectral data, the original RGB imagery was converted into distinct color spaces, such as HSV and CIELAB (L*a*b*). Thereafter, complementary features—such as visual descriptors and color space parameters—are extracted independently from both RGB and MS data streams and fused into a unified, high-dimensional input feature set for modeling, as depicted in Figure 2.

2.2. Data Acquisition

2.2.1. UAV Imagery Acquisition and Pre-Processing

Image acquisition was performed using a DJI P4M quadrotor UAV (SZ DJI Technology Co., Ltd., Shenzhen, China) equipped with an integrated imaging system (Figure 3). This system comprises a 22-megapixel visible light (RGB) sensor and an array of five 2.08-megapixel monochrome sensors for multispectral data. The spectral bands cover Blue (450 ± 16 nm), Green (560 ± 16 nm), Red (650 ± 16 nm), Red Edge (730 ± 16 nm), and Near-Infrared (840 ± 26 nm). Flight missions corresponded to the rice jointing (5 July), booting (30 July), and filling (27 August) stages. To ensure data quality, all operations were conducted under stable illumination between 10:00 and 13:00. The flight parameters were standardized at an altitude of 50 m and a speed of 6 m/s, with a forward overlap of 80% and a side overlap of 70%. Radiometric calibration was carried out using reflectance reference panels prior to each flight.
Post-acquisition image processing was conducted using DJI Terra and ENVI 5.31 software to mitigate sensor-induced geometric and radiometric distortions. For geometric accuracy, images were rectified using 10 uniformly distributed Ground Control Points (GCPs). Regarding radiometric calibration, a pseudo-standard feature method was applied to convert Digital Number (DN) values into surface reflectance. This process relied on calibration data from a white reference panel, which was measured via an ASD spectrometer prior to the flights. Finally, the corrected DN values for each spectral band were synthesized to generate the complete multispectral imagery.
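To make the radiometric step concrete, the sketch below shows an empirical-line-style correction of the kind described here, assuming a single linear gain per band derived from the reference panel's known reflectance and its mean DN; the array names and the 60% panel reflectance are illustrative assumptions, not values reported by the authors.

```python
import numpy as np

def dn_to_reflectance(band_dn: np.ndarray, panel_dn_mean: float,
                      panel_reflectance: float) -> np.ndarray:
    """Convert raw DN values of one band to surface reflectance.

    Assumes a linear sensor response and a Lambertian reference panel:
    reflectance = DN * (panel_reflectance / mean panel DN).
    """
    gain = panel_reflectance / panel_dn_mean
    return np.clip(band_dn.astype(np.float32) * gain, 0.0, 1.0)

# Illustrative usage: a fictitious 100x100 NIR band and a panel with
# a known reflectance of 60% that averaged 3500 DN in the imagery.
nir_dn = np.random.randint(500, 4000, size=(100, 100))
nir_reflectance = dn_to_reflectance(nir_dn, panel_dn_mean=3500.0,
                                    panel_reflectance=0.60)
```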

2.2.2. Field Data Acquisition

Rice seedlings were transplanted on 10 May, and leaf samples were subsequently collected during three pivotal growth stages: jointing (5 July), booting (30 July), and filling (27 August). From each of the 24 sampling units across the 12 plots, three representative plants were harvested, sealed in paper bags, and transported to the laboratory. Upon arrival, leaves were separated from stems and subjected to de-enzyming at 105 °C for 30 min. Subsequently, the samples were dried at 80 °C for over 48 h until a constant weight was achieved. The dried leaves were weighed, ground, and analyzed for Leaf Nitrogen Content (LNC) using the Kjeldahl method [20]. This procedure involved three standard phases: digestion, distillation, and titration. Specifically, organic nitrogen was decomposed using concentrated sulfuric acid to form ammonium sulfate, which was then converted to ammonia under alkaline conditions. The released ammonia was distilled and titrated to calculate the final nitrogen concentration.

2.3. Feature Extraction and Analysis

The feature variables used as model inputs are critical and strongly influence the retrieval performance of the model [21]. In this study, four types of feature variables were extracted from three dimensions to provide a comprehensive estimation of rice LNC. Before assessing their predictability, Gray Relational Analysis (GRA, also referred to as gray correlation analysis) was performed between these features and measured rice LNC. GRA is a dimensionless statistical method that evaluates the degree of association between feature variables and a reference sequence: relational grades are calculated and ranked, and a higher grade indicates that a feature varies more similarly to the reference sequence. This ranking was used to select the index features most consistent with rice LNC while reducing correlation and covariance among the inputs.
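As a concrete reference, the following is a minimal NumPy sketch of the gray relational grade between one candidate feature series and measured LNC, using min-max normalization and the conventional distinguishing coefficient ρ = 0.5; both choices are standard GRA conventions and assumptions here, not details reported by the authors.

```python
import numpy as np

def gray_relational_grade(feature: np.ndarray, reference: np.ndarray,
                          rho: float = 0.5) -> float:
    """Gray relational grade of one feature series against a reference.

    Both series are min-max normalized first (GRA is dimensionless), then
    the pointwise relational coefficients are averaged into one grade.
    """
    def norm(x):
        return (x - x.min()) / (x.max() - x.min() + 1e-12)

    delta = np.abs(norm(reference) - norm(feature))
    coeff = (delta.min() + rho * delta.max()) / (delta + rho * delta.max() + 1e-12)
    return float(coeff.mean())

# Illustrative usage with fictitious values: a higher grade means the
# feature's trend tracks the LNC reference more closely.
lnc = np.array([4.5, 4.1, 3.9, 3.6, 3.3, 3.0])
exg = np.array([0.32, 0.29, 0.27, 0.24, 0.22, 0.19])
print(gray_relational_grade(exg, lnc))
```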

2.3.1. Spectral Index Feature

Plant spectral signatures are intrinsically linked to morphological and physiological variations. Spectral indices, which combine multiple bands, serve as robust indicators of these biochemical states. Based on prior literature, this study initially screened 20 candidate features—comprising ten digital (RGB) and ten multispectral indices—to estimate rice LNC (see Table 1) [22,23,24]. A key challenge in remote sensing is that direct nitrogen absorption features in the shortwave infrared (SWIR) region are frequently masked by strong water absorption, particularly in fresh vegetation [25]. However, given that nitrogen is a primary constituent of chlorophyll and exhibits a high correlation with it, chlorophyll-related indices can serve as effective proxies for nitrogen assessment [26]. Consequently, this study prioritized indices sensitive to chlorophyll content alongside standard nitrogen and soil-adjusted indices. For calculation, the mean reflectance value within each Region of Interest (ROI) was extracted to compute the respective vegetation indices.
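For illustration, here is a minimal sketch of how per-ROI mean reflectance can be turned into two of the indices used later: ExG from the RGB chromatic coordinates, and TCARI, mapping the P4M green/red/red-edge bands onto the 550/670/700 nm bands of the original formulation. The band mapping and function names are assumptions; the index formulas follow the cited literature.

```python
import numpy as np

def roi_mean(band: np.ndarray, mask: np.ndarray) -> float:
    """Mean reflectance of one band inside a boolean ROI mask."""
    return float(band[mask].mean())

def exg(r_mean: float, g_mean: float, b_mean: float) -> float:
    """Excess Green index, 2g - r - b, on normalized chromatic coordinates."""
    total = r_mean + g_mean + b_mean
    r, g, b = r_mean / total, g_mean / total, b_mean / total
    return 2 * g - r - b

def tcari(green: float, red: float, red_edge: float) -> float:
    """TCARI, treating the P4M green/red/red-edge bands as proxies for
    the 550/670/700 nm bands of the original definition."""
    return 3 * ((red_edge - red) - 0.2 * (red_edge - green) * (red_edge / red))
```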

2.3.2. Color Space Parameter Feature

While canopy color is conventionally captured in the standard RGB color space, transforming these data into alternative color models yields a diverse set of parameters. These derived metrics offer distinct layers of chromatic information, serving as robust indicators for assessing the nitrogen status of rice [11]. In ENVI software, the traditional RGB color space was transformed into the HSV and L*a*b* color spaces using the ‘Color Transform’ module, and the average pixel values of the corresponding color space parameters were extracted to reduce error and used for subsequent analysis, as shown in Table 2.
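ENVI's 'Color Transform' module has no scripting interface shown here, so as an illustrative open-source equivalent the sketch below uses scikit-image's rgb2hsv and rgb2lab (real functions in skimage.color) to derive the same parameters and average them over canopy pixels; everything beyond those two calls is an assumption.

```python
import numpy as np
from skimage.color import rgb2hsv, rgb2lab

def color_space_means(rgb: np.ndarray, mask: np.ndarray) -> dict:
    """Mean H, S, V, L*, a*, b* over the masked canopy pixels.

    `rgb` is an (H, W, 3) float array scaled to [0, 1]; `mask` is boolean.
    """
    hsv, lab = rgb2hsv(rgb), rgb2lab(rgb)
    means = {}
    for name, idx, img in [("H", 0, hsv), ("S", 1, hsv), ("V", 2, hsv),
                           ("L*", 0, lab), ("a*", 1, lab), ("b*", 2, lab)]:
        means[name] = float(img[..., idx][mask].mean())
    return means

# Illustrative usage on a random image with every pixel treated as canopy.
rgb = np.random.rand(64, 64, 3)
print(color_space_means(rgb, np.ones((64, 64), dtype=bool)))
```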

2.3.3. Texture Parameter Feature

In this research, we employed the Gray-Level Co-occurrence Matrix (GLCM), a widely adopted algorithm for texture analysis, to derive texture features for estimating the LNC of rice. The GLCM characterizes the spatial relationships between pixels of varying gray levels, thereby facilitating the extraction of texture features that quantify the image’s spatial complexity. In this study, we computed five distinct GLCM-derived metrics, Mean (ME), Variance (VA), Entropy (EN), Second Moment (SE), and Dissimilarity (DI), as detailed in Table 3 [14]. For the calculation of these textures, the window size was set to 5 × 5, which is considered to have the least impact on the texture metrics for UAV images with high spatial resolution [45].
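A minimal sketch of the five metrics for a single 5 × 5 window follows, assuming scikit-image ≥ 0.19 (where the co-occurrence function is named graycomatrix); the metrics are computed directly from the normalized matrix so the definitions stay explicit, and the 32-level quantization is an illustrative choice.

```python
import numpy as np
from skimage.feature import graycomatrix

def glcm_metrics(window: np.ndarray, levels: int = 32) -> dict:
    """ME, VA, EN, SE and DI from one quantized gray-level window."""
    # Quantize to `levels` gray levels (guard against an all-zero window).
    q = (window / max(window.max(), 1e-12) * (levels - 1)).astype(np.uint8)
    # Normalized, symmetric GLCM for distance 1, angle 0.
    p = graycomatrix(q, distances=[1], angles=[0], levels=levels,
                     symmetric=True, normed=True)[:, :, 0, 0]
    i, j = np.indices(p.shape)
    mean = (i * p).sum()
    return {
        "ME": mean,                                   # Mean
        "VA": ((i - mean) ** 2 * p).sum(),            # Variance
        "EN": -(p[p > 0] * np.log(p[p > 0])).sum(),   # Entropy
        "SE": (p ** 2).sum(),                         # Second moment
        "DI": (np.abs(i - j) * p).sum(),              # Dissimilarity
    }

print(glcm_metrics(np.random.rand(5, 5)))  # one 5x5 window, as in the paper
```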

2.4. Modeling Methods for Rice Nitrogen Estimation

To validate the efficacy of the DNN-F2 model, a comparative analysis was conducted against five traditional machine learning algorithms: RF, GPR, PLSR, SVM, and ANN. The primary objective was to identify the optimal model for rice LNC estimation and to evaluate the predictive superiority of deep learning over conventional methods. All models were implemented in Python using the Jupyter Notebook environment. Key libraries included Scikit-learn, NumPy (v1.20.1), Pandas, Matplotlib, TensorFlow, and Keras. To ensure robust performance assessment, K-fold cross-validation was employed. This method partitions the dataset into K distinct subsets; in each iteration, one subset serves as the test set while the remaining K-1 subsets form the training set. The process is repeated K times, and the final results are averaged to minimize random bias and maximize data utilization [46].
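Since the paper names scikit-learn and 10-fold cross-validation, the resampling loop can be sketched as below; K = 10 follows the text, while the estimator, feature matrix, and LNC vector are placeholders for the study's actual data.

```python
import numpy as np
from sklearn.model_selection import KFold
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score, mean_squared_error

X = np.random.rand(24, 18)   # fused feature matrix (placeholder values)
y = np.random.rand(24)       # measured LNC (placeholder values)

scores = []
for train_idx, test_idx in KFold(n_splits=10, shuffle=True,
                                 random_state=0).split(X):
    model = RandomForestRegressor(n_estimators=200, random_state=0)
    model.fit(X[train_idx], y[train_idx])
    pred = model.predict(X[test_idx])
    scores.append((r2_score(y[test_idx], pred),
                   np.sqrt(mean_squared_error(y[test_idx], pred))))

r2_mean, rmse_mean = np.mean(scores, axis=0)  # averaged over the 10 folds
```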

2.4.1. Traditional Machine Learning Algorithms

The Random Forest (RF) algorithm is widely used in agriculture, biomedicine and other fields, in part because it can assess the importance of feature parameters and reduce the multicollinearity among them [7]. RF is a nonlinear ensemble modeling technique composed of multiple decision trees. It effectively integrates Leo Breiman’s bootstrap aggregating (bagging) method [47] with the random subspace method proposed by Tin Kam Ho [48]. Fundamentally, RF mitigates the correlation issues inherent in single decision trees by introducing randomness through sample and feature selection. In this architecture, each tree functions as an independent computational unit, and the final prediction is derived by aggregating the outputs (e.g., via averaging) from the entire forest.
The Gaussian Process Regression (GPR) algorithm is a typical non-parametric machine learning regression algorithm that learns the relationship between independent variables (e.g., vegetation index parameters) and dependent variables (e.g., nitrogen indicators) by fitting a probabilistic Bayesian model [49]; for most regression problems, a covariance function serves as the kernel of the algorithm. In this study, the contribution of each feature parameter was evaluated by a marginal likelihood approach on the training set for parameter selection and optimization [50].
The Partial Least Squares Regression (PLSR) algorithm, proposed by Herman Wold and Alban in 1983 [51], combines methods such as principal component analysis and multiple linear regression to transform highly correlated feature parameters into a small number of uncorrelated components, thereby solving covariance problems; it is commonly used in studies dealing with small samples [52]. In this study, the algorithm was tuned by adjusting the principal parameter n_components.
Cortes and Vapnik introduced the Support Vector Machine (SVM) algorithm in 1995 [53]. It is a powerful machine learning algorithm that can be used not only to identify outliers, predict discrete variables and solve binary classification problems, but also to build regression models that predict continuous variables. Its core idea is to minimize a loss function with an insensitivity width ε between the predicted and true values, ensuring that the model’s predictions fall within a band of width 2ε centered on f(x) [54]. In this study, a radial basis function was selected as the kernel of the SVM model, and the model was tuned by adjusting the kernel parameter σ and the cost parameter.
Artificial Neural Networks (ANNs) are computational algorithms inspired by the parallel processing capabilities of the biological nervous system [55]. In this study, we implemented a fully connected ANN architecture consisting of three distinct layers: input, hidden, and output. The network topology was designed with three neurons in both the input and output layers, bridged by a hidden layer containing four neurons. The Log-sigmoid function was employed for activation. Regarding hyperparameters, the learning rate was fixed at 0.01, with the training process capped at a maximum of 10,000 iterations.
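Pulling the five baselines together, the sketch below shows how they might be configured in scikit-learn with the hyperparameters named above (RBF kernels for GPR and SVM, n_components for PLSR, a 4-neuron logistic hidden layer, learning rate 0.01 and 10,000-iteration cap for the ANN); any setting not stated in the text is left at a library default and is an assumption.

```python
from sklearn.ensemble import RandomForestRegressor
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF
from sklearn.cross_decomposition import PLSRegression
from sklearn.svm import SVR
from sklearn.neural_network import MLPRegressor

baselines = {
    "RF":   RandomForestRegressor(n_estimators=500, random_state=0),
    "GPR":  GaussianProcessRegressor(kernel=RBF(), normalize_y=True),
    "PLSR": PLSRegression(n_components=4),          # tuned via n_components
    "SVM":  SVR(kernel="rbf", C=1.0, epsilon=0.1),  # RBF kernel, width eps
    "ANN":  MLPRegressor(hidden_layer_sizes=(4,), activation="logistic",
                         learning_rate_init=0.01, max_iter=10_000,
                         random_state=0),
}
```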

2.4.2. Fused Deep Neural Network DNN-F2

The fused deep neural network (DNN-F2) proposed by Maimaitijiang and Sagan in 2020 is based on a framework of feed-forward deep neural networks [56] and the principle of feature-level multimodal data fusion [57]; it has cross-category learning properties and can significantly improve the learning capability for single-category features [17].
The DNN-F2 architecture employed in this study adopts a parallel multi-branch design, specifically tailored to leverage heterogeneous feature sources derived from UAV imagery. As illustrated in Figure 4, the model comprises four distinct subnetworks—referred to as a “four-seeded” structure—each dedicated to processing a specific category of input features: (1) digital vegetation indices extracted from RGB imagery, (2) color space parameters (e.g., HSV, CIELAB (L*a*b*)), (3) texture feature parameters derived from gray-level co-occurrence matrices (GLCM) or similar methods, and (4) multispectral vegetation indices computed from calibrated MS bands (e.g., NDVI, GNDVI).

Each of these subnetworks is implemented as an independent, single-peaked deep neural network (DNN): a conventional feedforward architecture with multiple hidden layers but without branching or residual connections within the subnetwork itself. This design allows each branch to learn specialized representations optimized for its respective feature type, preserving modality-specific information while avoiding premature feature mixing. After individual training, the high-level feature embeddings from all four subnetworks are concatenated through tandem (i.e., fully connected fusion) layers. These fusion layers integrate the complementary information across modalities into a unified, joint representation, which is then passed through additional dense layers to produce the final prediction of rice leaf nitrogen content (LNC). This hybrid architecture balances feature specialization and cross-modal integration, enabling the model to capture both intra-feature nuances and inter-feature synergies, thereby enhancing predictive accuracy and robustness.
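To make the "four sub-networks plus tandem fusion layers" idea concrete, here is a minimal Keras functional-API sketch (the paper lists TensorFlow and Keras among its libraries). The branch input sizes follow the GRA-selected feature counts reported in Section 3.2; the branch widths, depths, and activations are illustrative placeholders, not the authors' exact configuration.

```python
from tensorflow import keras
from tensorflow.keras import layers

def branch(n_features, name):
    """One single-peaked feed-forward sub-network for one feature category."""
    inp = keras.Input(shape=(n_features,), name=name)
    x = layers.Dense(32, activation="relu")(inp)
    x = layers.Dense(16, activation="relu")(x)
    return inp, x

# Four parallel branches: RGB indices, color space parameters,
# MS indices, and texture features, sized per the GRA selection.
specs = [(5, "rgb_vi"), (5, "color"), (4, "ms_vi"), (4, "texture")]
inputs, embeddings = zip(*[branch(n, name) for n, name in specs])

# Tandem fusion layers: concatenate the branch embeddings into a joint
# representation, then regress LNC from the fused vector.
fused = layers.Concatenate()(list(embeddings))
fused = layers.Dense(32, activation="relu")(fused)
output = layers.Dense(1, name="lnc")(fused)

model = keras.Model(inputs=list(inputs), outputs=output)
model.compile(optimizer="adam", loss="mse")
```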

2.5. Statistical Analysis

The overall workflow, encompassing data acquisition and processing, is illustrated in Figure 2. The dataset comprised four distinct feature categories derived from UAV digital and multispectral imagery, covering multiple variables across different treatments. Rice LNC prediction models were established and systematically compared using six algorithms combined with a 10-fold cross-validation resampling strategy. Model performance was assessed using three key metrics: the Coefficient of Determination (R2), Root Mean Square Error (RMSE), and Normalized Root Mean Square Error (NRMSE); a superior model fit is characterized by a higher R2 coupled with lower RMSE and NRMSE values. All statistical analyses were conducted using IBM SPSS Statistics (v26.0, 2019), according to the equations below:
$$R^2 = 1 - \frac{\sum_{i=1}^{n}\left(x_i - y_i\right)^2}{\sum_{i=1}^{n}\left(x_i - \bar{x}\right)^2}$$

$$\mathrm{RMSE} = \sqrt{\frac{\sum_{i=1}^{n}\left(y_i - x_i\right)^2}{n}}$$

$$\mathrm{NRMSE} = \frac{\mathrm{RMSE}}{\bar{x}}$$

where $x_i$ denotes the measured value of rice canopy LNC, $\bar{x}$ denotes the mean of the measured values, $y_i$ denotes the predicted value of rice canopy LNC, and $n$ denotes the number of samples in the model.
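For completeness, the three metrics translate directly into NumPy as sketched below; the function and array names are illustrative conveniences, not part of the original analysis pipeline.

```python
import numpy as np

def evaluate(measured: np.ndarray, predicted: np.ndarray) -> dict:
    """R^2, RMSE and NRMSE exactly as defined in the equations above."""
    ss_res = np.sum((measured - predicted) ** 2)
    ss_tot = np.sum((measured - measured.mean()) ** 2)
    rmse = np.sqrt(np.mean((predicted - measured) ** 2))
    return {"R2": 1 - ss_res / ss_tot,
            "RMSE": rmse,
            "NRMSE": rmse / measured.mean()}
```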

3. Results and Analysis

3.1. Descriptive Statistics of Rice LNC

The Least Significant Difference (LSD) test revealed significant disparities in rice LNC across different nitrogen treatments and growth phases, as detailed in Table 4. Quantitatively, LNC values ranged from 2.89% to 4.65%, exhibiting a consistent downward trend as the crop developed. Regarding data variability, descriptive statistics yielded Coefficients of Variation (CV) between 4.36% and 5.43% for individual stages, and 14.28% for the entire growth period. This moderate level of variability confirms the feasibility of utilizing UAV remote sensing for reliable LNC estimation [14].

3.2. Correlation Analysis of Feature Variables with Rice LNC

To identify the optimal input variables for further analysis, Gray Relational Analysis (GRA) was employed to quantify the relationship between the four categories of feature variables and rice LNC. The outcomes of this feature selection process are illustrated in Figure 5, Figure 6, Figure 7 and Figure 8; the taller a feature’s bar, the stronger its correlation with rice LNC, and the corresponding gray relational coefficients are indicated at the top of each bar. For the RGB VIs, the five indices with the highest gray relational coefficients (ExG, ExGR, MGRVI, CIVE and RGBVI) were selected; for the RGB color space parameters, the five parameters with the highest coefficients (S, L*, R, H and a*); for the MS VIs, the four indices with the highest coefficients (TCARI, RECI, TVI and MCARI); and for the MS texture parameters, the four parameters with the highest coefficients (EN, ME, VA and SE). These variables were used to construct the rice LNC monitoring model.
The Gray Relational Analysis (GRA) results indicate significant variations in the correlation between different feature dimensions and rice LNC (Figure 5, Figure 6, Figure 7 and Figure 8). Among all categories, multispectral spectral indices exhibited the strongest correlation, with coefficients ranging from 0.72 to 0.83. Specifically, TCARI achieved the highest value (0.83), followed closely by RECI and TVI (0.81), demonstrating the superior sensitivity of red-edge and NIR bands to nitrogen variations. For digital imagery, spectral indices also performed robustly, with ExG, ExGR, MGRVI, CIVE, and RGBVI all reaching a coefficient of 0.76, significantly outperforming raw color space parameters. The color space parameters (e.g., R, G, B, H, S, V) and multispectral texture features showed relatively lower correlations, with values concentrated between 0.55–0.65 and 0.58–0.66, respectively. This suggests that while structural and color information contribute to the model, mathematical indices that enhance spectral differences are more effective for LNC monitoring.

3.3. The Best Estimation Model for Rice LNC

In this section, the combination of the four feature variable types screened by gray correlation analysis was used as input to the rice LNC estimation models, and five traditional machine learning algorithms (RF, GPR, PLSR, SVM and ANN) were compared with the fused deep neural network (DNN-F2) to explore the potential of deep learning algorithms for rice LNC estimation and to select the best model for this task.
Table 5 shows that, after removing the low-correlation feature variables, the fused deep neural network (DNN-F2) model achieved higher prediction accuracy than the traditional machine learning algorithms in all three growth stages, and models generally performed best at the jointing stage. Among the traditional machine learning algorithms evaluated, Random Forest (RF) yielded the highest predictive accuracy, comparable to the Artificial Neural Network (ANN), while the Support Vector Machine (SVM) consistently performed worst across all three stages, with R2 values significantly lower than the other models and the highest errors (RMSE > 0.25). The DNN-F2 model demonstrated the highest accuracy, achieving an R2 above 0.7 at the jointing stage and around 0.65 at the booting stage, together with the lowest error rates (RMSE dropping to approximately 0.11 during the booting stage). Overall, DNN-F2 proved the most effective and robust model for this application, outperforming traditional machine learning methods such as SVM and RF, as illustrated in Figure 9 and Figure 10.
The fused deep neural network (DNN-F2) performed best across the three growth stages, achieving an average 40% improvement in prediction R2 compared with the traditional machine learning algorithms, and reached its best accuracy at the jointing stage with R2 = 0.72, RMSE = 0.08 and NRMSE = 0.019. Two factors help explain the stronger performance at jointing. First, rice plants at this stage exhibit higher Leaf Nitrogen Content (LNC) and vigorous metabolic activity, resulting in stronger spectral response signals related to chlorophyll and nitrogen [58]. Second, the dataset for this stage likely contains greater variability in nitrogen levels, which helps the deep learning model learn more robust features and reduces overfitting. However, it is crucial to address the complexity introduced by canopy structure, particularly the Leaf Area Index (LAI). While this study focuses on LNC monitoring, LAI mediates the upscaling of spectral features from the leaf level to the canopy level [59]. At the canopy scale, the spectral reflectance captured by UAVs is a mixed signal of leaf biochemistry (LNC) and canopy structure (LAI); variations in LAI alter the multiple scattering of light within the canopy and can confound the retrieval of LNC. The fused deep learning approach (DNN-F2) therefore likely outperforms traditional methods partly because its multi-dimensional feature extraction implicitly accounts for these structural non-linearities, mitigating the influence of varying LAI on the spectral signal. Furthermore, while the majority of data points were tightly clustered along the 1:1 diagonal, distinct outliers remained evident. The specific prediction outcomes for the three growth stages are illustrated in Figure 11.

3.4. Construction of Spatial Distribution Map of LNC

Comparative analysis revealed that the DNN-F2 algorithm yielded the most accurate estimation results among the tested modeling approaches. Consequently, this model was applied to generate spatial distribution maps of rice LNC, as presented in Figure 12.
Temporally, the maps illustrate a gradual decline in predicted LNC values from the jointing stage to the filling stage, a trend that is consistent with in situ measurements. Spatially, nitrogen treatments induced noticeable variations in crop growth. Plots N2–N5 exhibited minor variations in predicted LNC, likely attributable to the incremental differences in fertilization gradients; yet, the overall predictive accuracy remained robust. Conversely, growth performance in plots N1 (high nitrogen) and N6 (low nitrogen) appeared moderate, potentially driven by the extreme fertilization regimes. Quantitatively, the highest LNC predictions were observed during the jointing stage (3.98–4.57%), followed by the booting stage (3.39–3.83%), while the lowest values were recorded at the filling stage (2.89–3.35%).

4. Discussion

In this study, four types of feature variables (spectral indices based on digital and multispectral images, color space parameters, and texture parameters) were extracted in three dimensions and filtered by gray correlation analysis. Five common machine learning algorithms (RF, GPR, PLSR, SVM and ANN) were compared with a fused deep learning algorithm (DNN-F2) for rice LNC prediction, to find an optimal model for rice LNC estimation and to explore the potential of deep learning algorithms for crop nutrient monitoring.

4.1. Estimation of Rice LNC Based on Feature Fusion

As inputs to the prediction model, the feature variables strongly influence the prediction accuracy of the whole model. Most previous studies on rice LNC prediction have been based on a single feature variable, which carries limited information and yields low accuracy. In general, single spectral index features suffer from saturation problems in LNC estimation and respond mainly to reflected radiation intensity. In contrast, color space parameter features can provide a variety of information about the rice canopy, which can alleviate the saturation problem of vegetation indices. In the gray correlation analysis of color parameter features with rice LNC, the color parameters S and L* correlated best with LNC. By definition, S represents saturation and L* represents brightness (Table 2); rice plants with different nitrogen contents differ in canopy saturation and brightness, which is the main reason for this result. The Gray Relational Analysis (GRA) further revealed that multispectral indices, particularly TCARI and RECI, exhibited the strongest correlation with rice LNC, outperforming digital color parameters and texture features. This superior performance can be attributed to the strong physiological link between LNC and chlorophyll concentration [60]. Indices incorporating red-edge and near-infrared (NIR) bands are highly sensitive to chlorophyll variations and effectively mitigate saturation effects in high-biomass conditions, which visible bands alone cannot achieve [61]. Conversely, while raw digital color parameters (e.g., R, G, B) performed poorly due to susceptibility to illumination changes, constructed digital indices like ExG showed high correlation (0.76), proving that mathematical transformation can effectively suppress soil background noise [28]. Finally, since variation in nitrogen content changes the color and structure of the rice canopy, texture parameter features also hold considerable potential for quantifying crop LNC.
Previous studies usually considered only the fusion of vegetation indices from different images [14,62], without considering multiple dimensions such as spatial texture and color space, so information reflecting changes in plant structure and color was often missing from their prediction models. In this study, four types of feature variables were extracted from three dimensions and then fused by the DNN-F2 algorithm, and the prediction results improved significantly, probably because the model incorporated spectral and physiological–biochemical information from the two spectral index features, color space information from the color space parameters, and crop structural information from the texture parameters; this may be useful for other related studies. A notable limitation of this study is that potential interference from the image background was not explicitly isolated during feature extraction. Consequently, future research should prioritize investigating the impact of soil background noise on the correlation between feature variables and rice LNC, and specifically assess whether background removal techniques can enhance the accuracy of rice nitrogen status estimation.

4.2. Estimation of Rice LNC Based on Deep Learning Algorithms

Traditional machine learning algorithms are stable and widely used, but they are better suited to single-type feature variables. Most previous studies of rice LNC prediction have been based on traditional machine learning algorithms such as RF, GPR and PLSR, modeled using combinations of single-type features. In this study, the five most common methods were selected and compared with the fused deep neural network. The prediction accuracy of the fused deep neural network was higher than that of the traditional machine learning algorithms across the three growth stages, probably because it incorporates multidimensional information: spectral and physiological–biochemical information from the spectral index features, color space information from the color space parameters, and plant structural information from the texture parameters. In addition, the comparative results for R2 and RMSE across growth stages (Figure 9 and Figure 10) demonstrate the distinct advantages of the DNN-F2 model. As shown in the figures, DNN-F2 consistently achieved the highest coefficient of determination (R2) and the lowest Root Mean Square Error (RMSE) across the jointing, booting, and filling stages. Specifically, during the booting stage, traditional models like SVM and PLSR showed significant performance degradation with lower R2 and higher RMSE, whereas DNN-F2 maintained high accuracy (R2 > 0.65) and low error (RMSE = 0.11), significantly outperforming the shallow Artificial Neural Network (ANN) and Random Forest (RF). This robustness is attributed to the deep architecture’s ability to model the complex, non-linear relationships inherent in agricultural data [63,64]. Unlike traditional algorithms that struggle with feature heterogeneity, the feature fusion strategy in DNN-F2 adaptively integrates spectral, color, and textural information, allowing the model to learn hierarchical representations and enhancing generalization across the phenological stages of rice [17].
The algorithm forms a parallel structure with the four feature categories as sub-networks, trains each sub-network separately as a single-peaked DNN, and then combines the four sub-networks through tandem layers into a joint representation that better incorporates different types of feature variables across multiple dimensions. However, a slight decline in prediction accuracy was observed during the booting and filling stages. This reduction may be attributed to spectral interference arising from the growth and development of rice panicles. Consequently, future research will focus on mitigating this panicle-induced noise to further enhance model performance.

5. Conclusions

In this research, the Leaf Nitrogen Content (LNC) of the rice canopy was successfully quantified by leveraging both digital (RGB) and multispectral imagery acquired from an Unmanned Aerial Vehicle (UAV). Four different feature variables were extracted from three dimensions and filtered by gray correlation analysis, and five common machine learning algorithms were compared with a fused deep neural network, DNN-F2, with the aim of exploring the potential of feature fusion approaches and deep learning algorithms for crop growth monitoring.
By comparing the accuracy of these models, it was found that the overall accuracy of the models was significantly improved by including three dimensions of fused information on the input side, including spectral information and physiological and biochemical information from two spectral index features, color space information from color space parameter features, and crop structure change information from texture parameter features. The fused deep neural network DNN-F2 is more suitable for modeling multidimensional information due to its special architecture, and has better prediction results compared to traditional machine learning algorithms. These findings provide a valuable theoretical reference for the precise estimation of rice canopy LNC, playing a pivotal role in facilitating optimized nitrogen management strategies in rice cultivation. Practically, the implementation of this high-precision monitoring framework contributes significantly to sustainable agriculture by reducing excessive fertilizer use and environmental impact. Looking ahead, future research should focus on validating the generalizability of the DNN-F2 model across different crop varieties and integrating environmental variables to further enhance model robustness under varying field conditions.

Author Contributions

X.X. (Xinlei Xu) and X.X. (Xingang Xu) contributed to the conceptualization, writing—original draft, software, writing—review and editing, validation, and funding acquisition. S.X., Y.M. and G.Y. contributed to the investigation, data curation, formal analysis, resources, methodology, and validation. B.X., X.Y., X.S. and H.X. contributed to the software, visualization, supervision, and project administration. Y.S. and T.W. contributed to the formal analysis and validation. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Key Research and Development Program of China (No. 2023YFD2300503 and 2023YFD1701001-03) and the National Natural Science Foundation of China (No. 42371323).

Data Availability Statement

The original contributions presented in the study are included in the article, further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Blackburn, G.A. Hyperspectral remote sensing of plant pigments. J. Exp. Bot. 2007, 58, 855–867. [Google Scholar] [CrossRef] [PubMed]
  2. Li, S.; Ding, X.; Kuang, Q.; Ata-Ul-Karim, S.T.; Qiang, C. Potential of UAV-Based Active Sensing for Monitoring Rice Leaf Nitrogen Status. Front. Plant Sci. 2018, 9, 1834. [Google Scholar] [CrossRef] [PubMed]
  3. Yao, Y.; Liu, Q.; Qiang, L.; Li, X. LAI retrieval and uncertainty evaluations for typical row-planted crops at different growth stages. Remote Sens. Environ. 2008, 112, 94–106. [Google Scholar] [CrossRef]
  4. Guijun, Y.; Jiangang, L.; Chunjiang, Z.; Zhenhong, L.; Yanbo, H.; Haiyang, Y.; Bo, X.; Xiaodong, Y.; Dongmei, Z.; Xiaoyan, Z. Unmanned Aerial Vehicle Remote Sensing for Field-Based Crop Phenotyping: Current Status and Perspectives. Front. Plant Sci. 2017, 8, 1111. [Google Scholar] [CrossRef] [PubMed]
  5. Xia, Y.; Ni, W.; Yong, L.; Tao, C.; Yongchao, T.; Qi, C.; Yan, Z. Estimation of Wheat LAI at Middle to High Levels Using Unmanned Aerial Vehicle Narrowband Multispectral Imagery. Remote Sens. 2017, 9, 1304. [Google Scholar]
  6. Almeida, D.R.A.d.; Broadbent, E.N.; Ferreira, M.P.; Meli, P.; Zambrano, A.M.A.; Gorgens, E.B.; Resende, A.F.; de Almeida, C.T.; do Amaral, C.H.; Corte, A.P.D.; et al. Monitoring restored tropical forest diversity and structure through UAV-borne hyperspectral and lidar fusion. Remote Sens. Environ. 2021, 264, 112582. [Google Scholar] [CrossRef]
  7. Li, S.; Yuan, F.; Ata-Ui-Karim, S.T.; Zheng, H.; Cao, Q. Combining Color Indices and Textures of UAV-Based Digital Imagery for Rice LAI Estimation. Remote Sens. 2019, 11, 1763. [Google Scholar] [CrossRef]
  8. Wang, W.; Xia, Y.; Yao, X.F.; Tian, Y.C.; Liu, X.J.; Ni, J.; Cao, W.X.; Yan, Z. Estimating leaf nitrogen concentration with three-band vegetation indices in rice and wheat. Field Crops Res. 2012, 129, 90–98. [Google Scholar] [CrossRef]
  9. Oppelt, N.; Mauser, W. Hyperspectral monitoring of physiological parameters of wheat during a vegetation period using AVIS data. Int. J. Remote Sens. 2004, 25, 145–159. [Google Scholar] [CrossRef]
  10. He, L.; Zhang, H.; Zhang, Y.; Song, X.; Feng, W.; Kang, G.; Wang, C.; Guo, T. Estimating canopy leaf nitrogen concentration in winter wheat based on multi-angular hyperspectral remote sensing. Eur. J. Agron. 2016, 73, 170–185. [Google Scholar] [CrossRef]
  11. Fu, Y.; Yang, G.; Li, Z.; Song, X.; Li, Z.; Xu, X.; Wang, P.; Zhao, C. Winter Wheat Nitrogen Status Estimation Using UAV-Based RGB Imagery and Gaussian Processes Regression. Remote Sens. 2020, 12, 3778. [Google Scholar] [CrossRef]
  12. Peña-Barragán, J.M.; Ngugi, M.K.; Plant, R.E.; Six, J. Object-based crop identification using multiple vegetation indices, textural features and crop phenology. Remote Sens. Environ. 2011, 115, 1301–1316. [Google Scholar] [CrossRef]
  13. Farwell, L.S.; Gudex-Cross, D.; Anise, I.E.; Bosch, M.J.; Olah, A.M.; Radeloff, V.C.; Razenkova, E.; Rogova, N.; Silveira, E.M.O.; Smith, M.M.; et al. Satellite image texture captures vegetation heterogeneity and explains patterns of bird richness. Remote Sens. Environ. 2021, 253, 112175. [Google Scholar] [CrossRef]
  14. Xu, S.; Xu, X.; Blacker, C.; Gaulton, R.; Zhu, Q.; Yang, M.; Yang, G.; Zhang, J.; Yang, Y.; Yang, M.; et al. Estimation of Leaf Nitrogen Content in Rice Using Vegetation Indices and Feature Variable Optimization with Information Fusion of Multiple-Sensor Images from UAV. Remote Sens. 2023, 15, 854. [Google Scholar] [CrossRef]
  15. Muro, J.; Linstädter, A.; Magdon, P.; Wöllauer, S.; Männer, F.A.; Schwarz, L.-M.; Ghazaryan, G.; Schultz, J.; Malenovský, Z.; Dubovyk, O. Predicting plant biomass and species richness in temperate grasslands across regions, time, and land management with remote sensing and deep learning. Remote Sens. Environ. 2022, 282, 113262. [Google Scholar] [CrossRef]
  16. Meacham-Hensold, K.; Montes, C.M.; Wu, J.; Guan, K.; Fu, P.; Ainsworth, E.A.; Pederson, T.; Moore, C.E.; Brown, K.L.; Raines, C.; et al. High-throughput field phenotyping using hyperspectral reflectance and partial least squares regression (PLSR) reveals genetic modifications to photosynthetic capacity. Remote Sens. Environ. 2019, 231, 111176. [Google Scholar] [CrossRef]
  17. Maimaitijiang, M.; Sagan, V.; Sidike, P.; Hartling, S.; Esposito, F.; Fritschi, F.B. Soybean yield prediction from UAV using multimodal data fusion and deep learning. Remote Sens. Environ. 2020, 237, 111599. [Google Scholar]
  18. Sun, J.; Di, L.; Sun, Z.; Shen, Y.; Lai, Z. County-Level Soybean Yield Prediction Using Deep CNN-LSTM Model. Sensors 2019, 19, 4363. [Google Scholar]
  19. Tan, M.; Li, X.; Hui, X.; Lu, C. Urban land expansion and arable land loss in China—A case study of Beijing–Tianjin–Hebei region. Land Use Policy 2005, 22, 187–196. [Google Scholar] [CrossRef]
  20. Mckenzie, H.A.; Wallace, H.S.A. The Kjeldahl determination of Nitrogen: A critical study of digestion conditions-Temperature, Catalyst, and Oxidizing agent. Aust. J. Chem. 1954, 7, 55–70. [Google Scholar] [CrossRef]
  21. Xiao, M.; Hu, Q.; Ren, D.; Lu, A.; An, Y. Automatic selection of xrf spectral feature variables for soil heavy metal based on Fipls and Bipls. Int. J. Robot. Autom. 2022, 37, 52–59. [Google Scholar] [CrossRef]
  22. Zhang, J.; Cheng, T.; Guo, W.; Xu, X.; Ma, X. Leaf area index estimation model for UAV image hyperspectral data based on wavelength variable selection and machine learning methods. Plant Methods 2021, 17, 49. [Google Scholar] [CrossRef] [PubMed]
  23. Li, W.; Niu, Z.; Wang, C.; Huang, W.; Chen, H.; Gao, S.; Li, D.; Muhammad, S. Combined Use of Airborne LiDAR and Satellite GF-1 Data to Estimate Leaf Area Index, Height, and Aboveground Biomass of Maize During Peak Growing Season. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2015, 8, 4489–4501. [Google Scholar] [CrossRef]
  24. Maimaitijiang, M.; Ghulam, A.; Sidike, P.; Hartling, S.; Maimaitiyiming, M.; Peterson, K.; Shavers, E.; Fishman, J.; Peterson, J.; Kadam, S.; et al. Unmanned Aerial System (UAS)-based phenotyping of soybean using multi-sensor data fusion and extreme learning machine. ISPRS J. Photogramm. Remote Sens. 2017, 134, 43–58. [Google Scholar] [CrossRef]
  25. Ge, X. Leaf salt ion content estimation of halophyte plants based on geographically weighted regression model combined with hyperspectral data. Nongye Gongcheng Xuebao/Trans. Chin. Soc. Agric. Eng. 2019, 35, 115–124. [Google Scholar]
  26. Grodowitz, M.J.; Harms, N.E.; Freedman, J.E. Use of an inexpensive chlorophyll meter to predict nitrogen levels in leaf tissues of water hyacinth (Eichhornia crassipes). J. Aquat. Plant Manag. 2016, 54, 106–108. [Google Scholar]
  27. Woebbecke, D.M.; Meyer, G.E.; Bargen, K.V.; Mortensen, D.A. Color Indices for Weed Identification Under Various Soil, Residue, and Lighting Conditions. Trans. Asae 1995, 38, 259–269. [Google Scholar] [CrossRef]
  28. Meyer, G.E.; Neto, J.C. Verification of color vegetation indices for automated crop imaging applications. Comput. Electron. Agric. 2008, 63, 282–293. [Google Scholar] [CrossRef]
  29. Louhaichi, M.; Borman, M.M.; Johnson, D.E. Spatially Located Platform and Aerial Photography for Documentation of Grazing Impacts on Wheat. Geocarto Int. 2001, 16, 65–70. [Google Scholar] [CrossRef]
  30. Gitelson, A.A.; Kaufman, Y.J.; Stark, R.; Rundquist, D. Novel algorithms for remote estimation of vegetation fraction. Remote Sens. Environ. 2002, 80, 76–87. [Google Scholar] [CrossRef]
  31. Bendig, J.; Yu, K.; Aasen, H.; Bolten, A.; Bennertz, S.; Broscheit, J.; Gnyp, M.L.; Bareth, G. Combining UAV-based plant height from crop surface models, visible, and near infrared vegetation indices for biomass monitoring in barley. Int. J. Appl. Earth Obs. Geoinf. 2015, 39, 79–87. [Google Scholar] [CrossRef]
  32. Kataoka, T.; Kaneko, T.; Okamoto, H.; Hata, S. Crop growth estimation system using machine vision. In Proceedings of the 2003 IEEE/ASME International Conference on Advanced Intelligent Mechatronics (AIM 2003), Kobe, Japan, 20–24 July 2003. [Google Scholar]
  33. Shigeto, K.; Makoto, N. An Algorithm for Estimating Chlorophyll Content in Leaves Using a Video Camera. Ann. Bot. 1998, 81, 49–54. [Google Scholar] [CrossRef]
  34. Mao, W.; Wang, Y.; Wang, Y. Real-time Detection of Between-row Weeds Using Machine Vision. In Proceedings of the 2003 ASABE Annual International Meeting, Las Vegas, NV, USA, 27–30 July 2003. [Google Scholar]
  35. Guo, J.; Bai, Q.; Guo, W.; Bu, Z.; Zhang, W. Soil moisture content estimation in winter wheat planting area for multi-source sensing data using CNNR. Comput. Electron. Agric. 2022, 193, 106670. [Google Scholar] [CrossRef]
  36. Gitelson, A.A.; Kaufman, Y.J.; Merzlyak, M.N. Use of a green channel in remote sensing of global vegetation from EOS-MODIS. Remote Sens. Environ. 1996, 58, 289–298. [Google Scholar] [CrossRef]
  37. Haboudane, D.; Miller, J.R.; Pattey, E.; Zarco-Tejada, P.J.; Strachan, I.B. Hyperspectral vegetation indices and novel algorithms for predicting green LAI of crop canopies: Modeling and validation in the context of precision agriculture. Remote Sens. Environ. 2004, 90, 337–352. [Google Scholar] [CrossRef]
  38. Jordan, C.F. Derivation of Leaf-Area Index from Quality of Light on the Forest Floor. Ecology 1969, 50, 663–666. [Google Scholar] [CrossRef]
  39. Tedesco, D.; Moreira, B.R.d.A.; Barbosa Júnior, M.R.; Papa, J.P.; da Silva, R.P. Predicting on multi-target regression for the yield of sweet potato by the market class of its roots upon vegetation indices. Comput. Electron. Agric. 2021, 191, 106544. [Google Scholar] [CrossRef]
  40. Bagheri, N. Application of aerial remote sensing technology for detection of fire blight infected pear trees. Comput. Electron. Agric. 2020, 168, 105147. [Google Scholar] [CrossRef]
  41. Li, Z.; Li, Z.; Fairbairn, D.; Li, N.; Xu, B.; Feng, H.; Yang, G. Multi-LUTs method for canopy nitrogen density estimation in winter wheat by field and UAV hyperspectral. Comput. Electron. Agric. 2019, 162, 174–182. [Google Scholar] [CrossRef]
  42. Jiang, Z.; Huete, A.R.; Didan, K.; Miura, T. Development of a two-band enhanced vegetation index without a blue band. Remote Sens. Environ. 2008, 112, 3833–3845. [Google Scholar] [CrossRef]
  43. Daughtry, C.S.T.; Walthall, C.L.; Kim, M.S.; de Colstoun, E.B.; McMurtrey, J.E. Estimating Corn Leaf Chlorophyll Concentration from Leaf and Canopy Reflectance. Remote Sens. Environ. 2000, 74, 229–239. [Google Scholar] [CrossRef]
  44. Zhou, D.; Li, M.; Li, Y.; Qi, J.; Liu, K.; Cong, X.; Tian, X. Detection of ground straw coverage under conservation tillage based on deep learning. Comput. Electron. Agric. 2020, 172, 105369. [Google Scholar] [CrossRef]
  45. Manjunath, B.S.; Ma, W.Y. Texture features for browsing and retrieval of image data. IEEE Trans. Pattern Anal. Mach. Intell. 1996, 18, 837–842. [Google Scholar] [CrossRef]
  46. Bengio, Y.; Grandvalet, Y. No Unbiased Estimator of the Variance of K-Fold Cross-Validation. J. Mach. Learn. Res. 2004, 5, 1089–1105. [Google Scholar]
  47. Breiman, L. Random forests. Mach. Learn. 2001, 45, 5–32. [Google Scholar] [CrossRef]
  48. Ho, T.K. The random subspace method for constructing decision forests. IEEE Trans. Pattern Anal. Mach. Intell. 1998, 20, 832–844. [Google Scholar] [CrossRef]
  49. Milgrom, G. Bid, ask and transaction prices in a specialist market with heterogeneously informed traders. J. Financ. Econ. 1985, 14, 71–100. [Google Scholar]
  50. Verrelst, J.; Rivera, J.P.; Gitelson, A.; Delegido, J.; Moreno, J.; Camps-Valls, G. Spectral band selection for vegetation properties retrieval using Gaussian processes regression. Int. J. Appl. Earth Obs. Geoinf. 2016, 52, 554–567. [Google Scholar] [CrossRef]
  51. Martens, H.; Martens, M. Modified Jack-knife estimation of parameter uncertainty in bilinear modelling by partial least squares regression (PLSR). Food Qual. Prefer. 2000, 11, 5–16. [Google Scholar] [CrossRef]
  52. Dimitris, R.; Kuhn, M.; Johnson, K. Applied Predictive Modeling; Springer: New York, NY, USA, 2018. [Google Scholar]
  53. Joachims, T. Making large-Scale SVM Learning Practical. In Advances in Kernel Methods: Support Vector Learning; MIT Press: Cambridge, MA, USA, 1999. [Google Scholar]
  54. Santos, E.; Gomes, H.M. A Comparative Study of Polynomial Kernel SVM Applied to Appearance-Based Object Recognition. In Proceedings of the International Workshop on Pattern Recognition with Support Vector Machines, Niagara Falls, ON, Canada, 10 August 2002. [Google Scholar]
  55. Mcculloch, W.S.; Pitts, W. A Logical Calculus of the Ideas Immanent in Nervous Activity. Bull. Math. Biophys. 1943, 5, 115–133. [Google Scholar] [CrossRef]
  56. Schumacher, M.; Rossner, R.; Vach, W. Neural networks and logistic regression: Part I. Comput. Stat. Data Anal. 1996, 21, 661–682. [Google Scholar] [CrossRef]
  57. Williams, J.; Comanescu, R.; Radu, O.; Tian, L. DNN Multimodal Fusion Techniques for Predicting Video Sentiment. In Proceedings of the Grand Challenge and Workshop on Human Multimodal Language (Challenge-HML), Melbourne, Australia, 20 July 2018. [Google Scholar]
  58. Lemaire, G.; Jeuffroy, M.H.; Gastal, F. Diagnosis tool for plant and crop nitrogen status in vegetative stage. Eur. J. Agron. 2008, 28, 614–624. [Google Scholar] [CrossRef]
  59. Haboudane, D.; Miller, J.R.; Tremblay, N.; Zarco-Tejada, P.J.; Dextraze, L. Integrated narrow-band vegetation indices for prediction of crop chlorophyll content for application to precision agriculture. Remote Sens. Environ. 2002, 81, 416–426. [Google Scholar] [CrossRef]
  60. Evans, J.R. Photosynthesis and nitrogen relationships in leaves of C3 plants. Oecologia 1989, 78, 9–19. [Google Scholar] [CrossRef]
  61. Gitelson, A.A.; Viña, A.; Ciganda, V.; Rundquist, D.C.; Arkebauer, T.J. Remote estimation of canopy chlorophyll content in crops. Geophys. Res. Lett. 2005, 32, L08403. [Google Scholar] [CrossRef]
  62. Xu, X.G.; Fan, L.L.; Li, Z.H.; Meng, Y.; Feng, H.K.; Yang, H.; Xu, B. Estimating Leaf Nitrogen Content in Corn Based on Information Fusion of Multiple-Sensor Imagery from UAV. Remote Sens. 2021, 13, 340. [Google Scholar] [CrossRef]
  63. Kamilaris, A.; Prenafeta-Boldú, F.X. Deep learning in agriculture: A survey. Comput. Electron. Agric. 2018, 147, 70–90. [Google Scholar] [CrossRef]
  64. Ma, J.; Li, Y.; Eckert, S.; Nie, Y.; Ren, H.; Wu, J.; Li, X.; Yuan, Y. Deep learning for quantitative remote sensing of vegetation parameters: A review. Sci. China Earth Sci. 2019, 62, 1485–1502. [Google Scholar]
Figure 1. Overview of the study area and experimental design.
Figure 2. The overall workflow of this study.
Figure 3. Composition of the UAV system: DJI P4M UAV (top), integrated imaging sensor (bottom-right), and reflectance panel (bottom-left).
Figure 4. Schematic diagram of the fused deep neural network (DNN-F2) architecture.
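To make the fusion idea in Figure 4 concrete, the sketch below shows a generic two-branch, feature-level fusion network: each feature group is embedded by its own branch and a shared head regresses LNC from the concatenated embeddings. This is a minimal illustration only, not the paper's implementation; the layer sizes, activations, and the 20/14 feature split are placeholder assumptions.

```python
# Minimal sketch of a feature-level fusion DNN in the spirit of DNN-F2.
# All layer widths and the two-group feature split are illustrative
# assumptions; the paper's exact architecture is not reproduced here.
import torch
import torch.nn as nn

class FusionDNN(nn.Module):
    def __init__(self, n_spectral: int, n_texture_color: int, hidden: int = 64):
        super().__init__()
        # One branch per feature group (e.g., spectral indices vs.
        # color-space + texture features), each learning its own embedding.
        self.branch_a = nn.Sequential(nn.Linear(n_spectral, hidden), nn.ReLU())
        self.branch_b = nn.Sequential(nn.Linear(n_texture_color, hidden), nn.ReLU())
        # Fused head regresses LNC from the concatenated embeddings.
        self.head = nn.Sequential(
            nn.Linear(2 * hidden, hidden), nn.ReLU(), nn.Linear(hidden, 1)
        )

    def forward(self, x_spec, x_tex):
        z = torch.cat([self.branch_a(x_spec), self.branch_b(x_tex)], dim=1)
        return self.head(z)

model = FusionDNN(n_spectral=20, n_texture_color=14)
pred = model(torch.randn(8, 20), torch.randn(8, 14))  # batch of 8 plots
```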
Figure 5. Correlation analysis between spectral indices derived from digital imagery and rice leaf nitrogen content (LNC) based on gray relational degree analysis.
Figure 6. Correlation analysis between color space parameters derived from digital imagery and rice leaf nitrogen content (LNC) based on gray relational degree analysis.
Figure 7. Correlation analysis between spectral indices derived from multispectral imagery and rice leaf nitrogen content (LNC) based on gray relational degree analysis.
Figure 8. Correlation analysis between texture feature parameters derived from multispectral imagery and rice leaf nitrogen content (LNC) based on gray relational degree analysis.
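Figures 5–8 rank candidate features by their gray relational degree with measured LNC. The following numpy sketch computes Deng's gray relational degree in its standard formulation; the min-max normalization and the conventional resolution coefficient rho = 0.5 are assumptions about that standard form, and the data are illustrative.

```python
# Minimal numpy sketch of Deng's gray relational degree, the screening
# statistic behind Figures 5-8. For a single candidate sequence the min/max
# of delta are taken within that sequence; in full GCA they run over all
# candidate sequences jointly.
import numpy as np

def gray_relational_degree(reference, candidate, rho: float = 0.5) -> float:
    # Normalize both sequences to [0, 1] so scale differences do not dominate.
    norm = lambda s: (s - s.min()) / (s.max() - s.min())
    delta = np.abs(norm(np.asarray(reference, float)) - norm(np.asarray(candidate, float)))
    # Gray relational coefficient per sample, then average to one degree.
    coeff = (delta.min() + rho * delta.max()) / (delta + rho * delta.max())
    return float(coeff.mean())

lnc = np.array([4.1, 4.3, 4.6, 3.9, 4.5])        # measured LNC (illustrative)
ndvi = np.array([0.71, 0.74, 0.80, 0.66, 0.78])  # one candidate feature
print(gray_relational_degree(lnc, ndvi))
```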
Figure 9. R2 results of rice LNC prediction models across multiple growth stages.
Figure 10. RMSE results of rice LNC prediction models across multiple growth stages.
Figure 11. Prediction results of rice LNC based on DNN-F2. (a) Jointing; (b) Booting; (c) Filling.
Figure 12. Spatial distribution of rice LNC based on the DNN-F2 method.
Table 1. Spectral index features used in this study.

Type of Features | Abbreviation | Name | Formula | Ref.
RGB Spectral Index Feature | WI | Woebbecke Index | (g − b)/(r − g) | [27]
 | ExG | Excess Green Vegetation Index | 2g − r − b | [27]
 | ExGR | Excess Green Red Vegetation Index | 3g − 2.4r − b | [28]
 | GLI | Green Leaf Index | (2g − r − b)/(2g + r + b) | [29]
 | VARI | Visible Atmospherically Resistant Index | (g − r)/(g + r − b) | [30]
 | MGRVI | Modified Green Red Vegetation Index | (g² − r²)/(g² + r²) | [31]
 | CIVE | Color Index of Vegetation Extraction | 0.441r − 0.811g + 0.385b + 18.7874 | [32]
 | RGBVI | Red Green Blue Vegetation Index | (g² − b·r)/(g² + b·r) | [31]
 | IKAW | Kawashima Index | (r − b)/(r + b) | [33]
 | ExB | Excess Blue Vegetation Index | 1.4b − g | [34]
MS Spectral Index Feature | DVI | Difference Vegetation Index | Rnir − Rr | [35]
 | NDVI | Normalized Difference Vegetation Index | (Rnir − Rr)/(Rnir + Rr) | [36]
 | RVI | Ratio Vegetation Index | Rnir/Rr | [37]
 | MNLI | Modified Nonlinear Vegetation Index | 1.5(Rnir² − Rr)/(Rnir² + Rr + 0.5) | [38]
 | SAVI | Soil Adjusted Vegetation Index | 1.5(Rnir − Rr)/(Rnir + Rr + 0.5) | [39]
 | TCARI | Transformed Chlorophyll Absorption Ratio Index | 3[(Rre − Rr) − 0.2(Rre − Rg)(Rre/Rr)] | [40]
 | MCARI | Modified Chlorophyll Absorption Ratio Index | [(Rre − Rr) − 0.2(Rre − Rg)](Rre/Rr) | [41]
 | RECI | Red Edge Chlorophyll Index | (Rnir/Rre) − 1 | [42]
 | MSRI | Modified Simple Ratio Index | (Rnir/Rr − 1)/(√(Rnir/Rr) + 1) | [43]
 | TVI | Triangular Vegetation Index | 0.5[120(Rnir − Rre) − 200(Rr − Rre)] | [44]

Note: r, g, and b denote the normalized RGB channel values; Rg, Rr, Rre, and Rnir denote reflectance in the green, red, red-edge, and near-infrared multispectral bands, respectively.
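To make the Table 1 formulas concrete, the sketch below computes a few of them directly. It assumes r, g, and b are the channel values normalized by their brightness sum (chromatic coordinates), the usual convention for these RGB indices, and uses toy reflectance values for the multispectral bands.

```python
# A few of the Table 1 indices implemented directly from their formulas.
# Inputs are toy values; in practice R, G, B come from the RGB orthomosaic
# and nir/red from the calibrated multispectral reflectance bands.
import numpy as np

def rgb_indices(R, G, B):
    total = R + G + B
    r, g, b = R / total, G / total, B / total  # chromatic coordinates
    return {
        "ExG":  2 * g - r - b,
        "GLI":  (2 * g - r - b) / (2 * g + r + b),
        "VARI": (g - r) / (g + r - b),
    }

def ms_indices(nir, red):
    return {
        "NDVI": (nir - red) / (nir + red),
        "SAVI": 1.5 * (nir - red) / (nir + red + 0.5),
        "RVI":  nir / red,
    }

print(rgb_indices(np.array([90.0]), np.array([140.0]), np.array([60.0])))
print(ms_indices(np.array([0.62]), np.array([0.08])))
```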
Table 2. Color space parameter features used in this study.

Color Space | Parameter | Definition | Formula
RGB | R | Red (range 0–255, normalized here to [0, 1]) | /
 | G | Green (range 0–255, normalized here to [0, 1]) | /
 | B | Blue (range 0–255, normalized here to [0, 1]) | /
HSV | H | Hue (range 0–360) | H = p(g − b) if Cmax = r; p(b − r) + 120 if Cmax = g; p(r − g) + 240 if Cmax = b; where Cmax = max(r, g, b), Cmin = min(r, g, b), Δ = Cmax − Cmin, and p = 60/Δ
 | S | Saturation (range 0–100) | Δ/Cmax
 | V | Value (range 0 [black]–100 [white]) | Cmax
L*a*b* | L* | Lightness (range 0 [black]–100 [white]) | L* = 116(Y/Y0)^(1/3) − 16, if Y/Y0 > 0.008856; 903.3(Y/Y0) otherwise (Y = 0.213r + 0.715g + 0.072b, Y0 = 100)
 | a* | Chroma (positive values indicate red, negative values green) | a* = 500[(X/X0)^(1/3) − (Y/Y0)^(1/3)] (X = 0.412r + 0.358g + 0.180b, X0 = 95.047)
 | b* | Chroma (positive values indicate yellow, negative values blue) | b* = 200[(Y/Y0)^(1/3) − (Z/Z0)^(1/3)] (Z = 0.019r + 0.119g + 0.950b, Z0 = 108.883)
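The HSV and L* conversions in Table 2 can be transcribed almost line for line, as in the sketch below. It assumes RGB inputs already scaled to [0, 1] and correspondingly uses Y0 = 1 (the table's Y0 = 100 corresponds to a 0–100 luminance scale); the input values are illustrative.

```python
# RGB -> HSV and RGB -> L* transcribed from the Table 2 formulas.
def rgb_to_hsv(r: float, g: float, b: float):
    cmax, cmin = max(r, g, b), min(r, g, b)
    delta = cmax - cmin
    if delta == 0:
        h = 0.0                                   # achromatic: hue undefined
    elif cmax == r:
        h = (60 / delta) * (g - b) % 360          # p(g - b), wrapped to 0-360
    elif cmax == g:
        h = (60 / delta) * (b - r) + 120          # p(b - r) + 120
    else:
        h = (60 / delta) * (r - g) + 240          # p(r - g) + 240
    s = 0.0 if cmax == 0 else 100 * delta / cmax  # saturation in percent
    return h, s, 100 * cmax                       # value in percent

def rgb_to_L(r: float, g: float, b: float) -> float:
    y = 0.213 * r + 0.715 * g + 0.072 * b         # relative luminance, Y0 = 1
    return 116 * y ** (1 / 3) - 16 if y > 0.008856 else 903.3 * y

print(rgb_to_hsv(0.35, 0.55, 0.24))
print(rgb_to_L(0.35, 0.55, 0.24))
```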
Table 3. Texture feature parameters (from multispectral imagery) used in this study.

Abbreviation | Name | Formula
ME | Mean | $\mathrm{ME} = \sum_{i,j=0}^{N-1} i \, P_{i,j}$
EN | Entropy | $\mathrm{EN} = -\sum_{i,j=0}^{N-1} P_{i,j} \ln P_{i,j}$
DI | Dissimilarity | $\mathrm{DI} = \sum_{i,j=0}^{N-1} P_{i,j} \, |i - j|$
SE | Second Moment | $\mathrm{SE} = \sum_{i,j=0}^{N-1} P_{i,j}^{2}$
VA | Variance | $\mathrm{VA} = \sum_{i,j=0}^{N-1} P_{i,j} (i - \mathrm{ME})^{2}$

Note: $P_{i,j}$ denotes the (i, j) entry of the normalized gray-level co-occurrence matrix (GLCM) with N gray levels.
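Given a normalized co-occurrence matrix P, the five Table 3 statistics are one line of numpy each. The 4 × 4 matrix below is fabricated purely to exercise the formulas; in the study the GLCMs are derived from windows of the multispectral imagery.

```python
# The five Table 3 texture statistics computed from a normalized
# gray-level co-occurrence matrix P (entries of P sum to 1).
import numpy as np

def glcm_features(P: np.ndarray) -> dict:
    n = P.shape[0]
    i, j = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    mean = np.sum(i * P)
    return {
        "ME": mean,
        "EN": -np.sum(P[P > 0] * np.log(P[P > 0])),  # entropy (0 ln 0 := 0)
        "DI": np.sum(P * np.abs(i - j)),             # dissimilarity
        "SE": np.sum(P ** 2),                        # second (angular) moment
        "VA": np.sum(P * (i - mean) ** 2),           # variance about ME
    }

P = np.array([[4, 2, 1, 0], [2, 4, 1, 0], [1, 1, 6, 1], [0, 0, 1, 8]], float)
print(glcm_features(P / P.sum()))
```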
Table 4. Descriptive statistics for rice LNC at different growth stages. Reprinted with permission from Xu et al. [14].

Growth Stage | Jointing | Booting | Filling | All Stages (pooled)
Number of Samples | 24 | 24 | 24 | 72
Mean (%) | 4.34 | 3.67 | 3.13 | 3.71
Standard Deviation (%) | 0.21 | 0.16 | 0.17 | 0.53
Range (%) | 3.95–4.65 | 3.34–3.89 | 2.89–3.52 | 2.89–4.65
Coefficient of Variation (%) | 4.84 | 4.36 | 5.43 | 14.28
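For reference, the coefficient of variation in Table 4 is simply the standard deviation expressed as a percentage of the mean; the jointing-stage entry, for example, follows directly:

$$\mathrm{CV} = \frac{s}{\bar{x}} \times 100\% = \frac{0.21}{4.34} \times 100\% \approx 4.84\%.$$

The pooled column shows the largest dispersion (CV = 14.28%) largely because it mixes samples from stages with different mean LNC levels.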
Table 5. Comparison of results for rice LNC prediction models based on multiple algorithms.

Algorithm | Evaluation Indicator | Jointing | Booting | Filling
RF | R2 | 0.67 | 0.52 | 0.45
 | RMSE (%) | 0.12 | 0.18 | 0.13
GPR | R2 | 0.51 | 0.33 | 0.35
 | RMSE (%) | 0.21 | 0.19 | 0.21
PLSR | R2 | 0.50 | 0.42 | 0.41
 | RMSE (%) | 0.20 | 0.15 | 0.16
SVM | R2 | 0.37 | 0.27 | 0.25
 | RMSE (%) | 0.25 | 0.21 | 0.24
ANN | R2 | 0.64 | 0.53 | 0.56
 | RMSE (%) | 0.11 | 0.15 | 0.16
DNN-F2 | R2 | 0.72 | 0.66 | 0.64
 | RMSE (%) | 0.08 | 0.11 | 0.11
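The evaluation metrics behind Table 5 (and Figures 9 and 10) can be reproduced as follows. NRMSE is taken here as RMSE divided by the mean observed LNC, which is consistent with the jointing-stage values reported in the abstract (0.08/4.34 ≈ 0.019); the sample predictions are illustrative.

```python
# R2, RMSE, and mean-normalized NRMSE for a set of LNC predictions.
import numpy as np

def evaluate(y_true: np.ndarray, y_pred: np.ndarray) -> dict:
    resid = y_true - y_pred
    ss_res = np.sum(resid ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    rmse = np.sqrt(np.mean(resid ** 2))
    return {"R2": 1 - ss_res / ss_tot, "RMSE": rmse, "NRMSE": rmse / y_true.mean()}

y_obs = np.array([4.12, 4.35, 4.60, 3.98, 4.51, 4.28])  # illustrative LNC (%)
y_hat = np.array([4.20, 4.30, 4.52, 4.05, 4.43, 4.31])  # illustrative predictions
print(evaluate(y_obs, y_hat))
```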