1. Introduction
The integration of optical sensors and machine learning (ML) technologies has revolutionized agricultural monitoring, enabling precise, real-time insights into crop health, growth dynamics, and environmental interactions [1]. As global agriculture faces mounting challenges from climate change, population growth, and resource scarcity, these tools offer scalable solutions for enhancing productivity, sustainability, and resilience [2]. Optical sensors, ranging from multispectral and hyperspectral imagers to conventional RGB cameras, capture detailed spectral signatures that reveal subtle physiological changes in crops, such as chlorophyll content, water stress, and nutrient deficiencies [3]. When combined with ML algorithms, these data streams are transformed into actionable intelligence, supporting decisions in precision farming, yield prediction, and disease management.
The second edition of the Special Issue “Novel Applications of Optical Sensors and Machine Learning in Agricultural Monitoring” expands on the foundational work established by previous studies in the field, including related editorials that highlight the synergy between optical sensing and deep learning for smart agriculture. For instance, advancements in unmanned aerial vehicle (UAV)-based multispectral imaging and deep neural networks have demonstrated substantial improvements in production management, as evidenced by recent compilations of research emphasizing real-time detection and phenotyping. The first edition of this thematic issue highlighted early breakthroughs in applying these technologies to staple crops, whereas the present edition broadens the scope to applications across a wider range of crop types, growth stages, and environmental conditions.
Comprising 17 original research articles, this collection addresses critical gaps in current methodologies, such as handling data imbalance, enhancing model transferability, and integrating multi-sensor data for robust monitoring. The contributions in this issue span a wide array of innovations, from hyperspectral data processing for disease severity estimation to UAV-derived digital surface models for biomass prediction. They collectively underscore the evolving role of optical sensors in capturing high-resolution, multi-dimensional data, which ML models then leverage for pattern recognition and predictive analytics. Key themes include the use of radiative transfer models (RTMs) coupled with ML for parameter inversion, time-series analysis for phenological tracking, and lightweight deep learning architectures for on-field deployment [4]. These studies not only refine existing techniques but also lay the groundwork for future integrations, such as edge computing and artificial intelligence (AI)-driven automation in agricultural systems.
2. Overview of Contributions
Optical sensors provide non-invasive, high-throughput data for monitoring crop status. Multispectral sensors, deployed on UAVs or satellite platforms such as Sentinel-2, capture reflectance across visible, near-infrared, and red-edge spectral bands, enabling the derivation of vegetation indices (VIs) such as NDRE, CIRE, and NDVI [5]. These indices are sensitive to chlorophyll, leaf area index (LAI), and biomass, although saturation in dense canopies remains a challenge. Hyperspectral sensors offer much finer spectral resolution over ranges such as 400–1000 nm, revealing subtle biochemical signatures for disease detection and nutrient estimation, as demonstrated in studies using ASD spectrometers or UAV-mounted imagers. RTMs such as PROSAIL can simulate canopy spectra, supporting parameter inversion when field measurements are limited [6]. Moreover, integration with platforms such as UAVs has democratized access, enabling plot-scale phenotyping at high resolution [7].
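As a concrete illustration of how such indices are derived, the minimal Python sketch below computes NDVI, NDRE, and CIRE from per-band reflectance arrays. The formulas are the standard definitions; the band arrays, shapes, and value ranges are illustrative placeholders rather than output from any particular sensor product.

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index."""
    return (nir - red) / (nir + red + 1e-9)

def ndre(nir, red_edge):
    """Normalized Difference Red Edge index."""
    return (nir - red_edge) / (nir + red_edge + 1e-9)

def cire(nir, red_edge):
    """Chlorophyll Index Red Edge: NIR / red-edge reflectance minus 1."""
    return nir / (red_edge + 1e-9) - 1.0

# Illustrative reflectance rasters (e.g., red, red-edge, and NIR bands
# resampled to a common grid); real data would be read with a raster library.
rng = np.random.default_rng(0)
red      = rng.uniform(0.02, 0.10, (100, 100))
red_edge = rng.uniform(0.10, 0.25, (100, 100))
nir      = rng.uniform(0.30, 0.50, (100, 100))

print(ndvi(nir, red).mean(), ndre(nir, red_edge).mean(), cire(nir, red_edge).mean())
```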
ML transforms raw sensor data into predictive insights, with deep learning dominating owing to its strong feature-extraction capabilities [8]. Convolutional neural networks (CNNs), transformers, and hybrid models can handle complex tasks such as image segmentation [9], classification [10], and regression [11]. Emerging trends integrate optical sensors with AI for opto-intelligent agriculture, including edge–cloud collaborative inference and meta-learning for improved model adaptability. In addition, multi-sensor data fusion, particularly the integration of optical and synthetic aperture radar (SAR) observations, helps overcome weather-related limitations and has proven effective in crop parameter estimation [12].
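To make the fusion idea concrete, here is a hedged sketch of feature-level optical–SAR fusion for crop parameter regression: vegetation index and backscatter features are simply stacked into one design matrix. All data are synthetic stand-ins, and the random forest choice is illustrative, not the method of any specific paper in this issue.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 500
# Synthetic per-plot features standing in for real observations:
ndvi = rng.uniform(0.2, 0.9, n)       # optical vegetation index
vv   = rng.normal(-12, 2, n)          # SAR VV backscatter (dB)
vh   = rng.normal(-18, 2, n)          # SAR VH backscatter (dB)
lai  = 3.0 * ndvi + 0.05 * (vv - vh) + rng.normal(0, 0.2, n)  # toy target

X = np.column_stack([ndvi, vv, vh])   # feature-level fusion: stack modalities
X_tr, X_te, y_tr, y_te = train_test_split(X, lai, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("fused-feature R2:", r2_score(y_te, model.predict(X_te)))
```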
This editorial summarizes the 17 papers by grouping them thematically and provides an in-depth analysis of the current state and emerging trends in the field, leveraging recent advancements to forecast future research directions. The contributions can be broadly categorized into four themes (Table 1): (1) hyperspectral and multispectral data for crop parameter estimation; (2) UAV-based imagery for yield and growth monitoring; (3) disease and stress detection using spectral analysis and deep learning; and (4) phenological and structural modeling with integrated sensor data. Each paper presents novel methodologies validated through field experiments and emphasizes their practical implications for precision agriculture.
Table 1. Summary of publications in this Special Issue.
Several papers focus on leveraging hyperspectral and multispectral sensors to estimate key crop parameters such as LAI, leaf chlorophyll content (LCC), and nitrogen content. In one study, researchers developed a coupled radiative transfer model integrated with UAV hyperspectral data to estimate rice canopy LAI and LCC [13]. The model outperformed the standard PROSAIL by reducing RMSE by 0.0359 in spectral simulations and, when combined with an extreme learning machine (ELM), achieved RMSE values of 0.6357 for LAI and 6.0101 μg·cm−2 for LCC. This approach demonstrates the value of mechanistic models in enhancing ML-driven inversions. Another contribution explored multispectral UAV imaging combined with PROSAIL and ML (random forest (RF) and partial least squares regression (PLSR)) to estimate potato LAI across growth stages [14]. The hybrid model improved R2 by up to 263% over lookup-table methods, reaching a final R2 = 0.87, and underscored the role of red-edge bands in overcoming saturation. In a novel frequency-domain approach, hyperspectral data from rice were transformed using wavelet packet transform, first-order differentiation, and harmonic analysis, then fed into a deep neural network; this yielded R2 > 0.9 for single-stage chlorophyll estimation and R2 = 0.971 across periods, showcasing data compression techniques for improved stability [15]. Similarly, for winter wheat nitrogen content, multispectral features (spectral, texture, and structural) were fused in ML models, with RF achieving R2 = 0.90 across stages and varieties and highlighting variety-specific sensitivities [16]. Near-infrared spectroscopy was also applied to fertilizer detection: back-propagation neural networks with competitive adaptive reweighted sampling achieved 93% accuracy in identifying fertilizer types and concentrations, with RMSE values ranging from 1.0034 to 2.4947 [17].
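The hybrid RTM-plus-ML workflow used in several of these studies can be summarized in three steps: simulate a lookup table of spectra across the parameter space, train a regressor on the simulated pairs, and apply it to observed spectra. The sketch below follows that outline with a toy spectral simulator standing in for a real RTM such as PROSAIL; the function simulate_canopy_spectrum and all numeric ranges are illustrative assumptions, not any paper's actual model.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)

def simulate_canopy_spectrum(lai, cab, n_bands=50):
    """Toy stand-in for a radiative transfer model such as PROSAIL:
    maps (LAI, chlorophyll) to a reflectance spectrum. A real hybrid
    workflow would call an actual RTM here."""
    bands = np.linspace(0.0, 1.0, n_bands)
    # Higher chlorophyll -> deeper red absorption; higher LAI -> higher NIR plateau.
    red_peak = 0.3 * np.exp(-cab / 40.0) * np.exp(-((bands - 0.3) ** 2) / 0.01)
    nir_rise = (1.0 - np.exp(-0.5 * lai)) * (bands > 0.5) * 0.4
    return 0.05 + red_peak + nir_rise

# 1) Build a lookup table by sampling the parameter space.
lai_s = rng.uniform(0.5, 7.0, 2000)
cab_s = rng.uniform(10.0, 80.0, 2000)
lut = np.array([simulate_canopy_spectrum(l, c) for l, c in zip(lai_s, cab_s)])

# 2) Train an ML emulator of the inverse mapping (spectrum -> parameters).
inv = RandomForestRegressor(n_estimators=200, random_state=0)
inv.fit(lut, np.column_stack([lai_s, cab_s]))

# 3) Invert an "observed" spectrum (here simulated, with known truth and noise).
obs = simulate_canopy_spectrum(3.2, 45.0) + rng.normal(0, 0.005, 50)
lai_hat, cab_hat = inv.predict(obs.reshape(1, -1))[0]
print(f"estimated LAI = {lai_hat:.2f}, Cab = {cab_hat:.1f}")
```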
UAV platforms dominate several studies owing to their high-resolution capabilities for yield prediction and biomass estimation. One study clustered time-series UAV RGB and multispectral data with the k-shape algorithm to predict multi-genotype rice yield; canopy volume curves yielded R2 = 0.82 and RMSE = 315.39 kg/ha, outperforming spectral features [18]. For winter wheat biomass, UAV-derived digital surface models provided plant height, which was integrated into a back-propagation (BP) neural network via the aboveground biomass (AGB)/height ratio, boosting R2 to 0.88 and reducing RMSE by 51.72% [19]. Yield estimation in winter wheat used UAV digital images to extract color indices and texture features, fused in PLSR and RF models; fused features at the filling stage achieved R2 = 0.76 [20]. Another study refined the Cropland Data Layer using confidence intervals and image filtering, improving deep-learning segmentation accuracy by 1.5% for major crops via Sentinel-2 composites [21].
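As a rough illustration of the dynamic-process clustering idea in [18], the sketch below groups synthetic canopy growth curves by shape using the k-shape algorithm, assuming the third-party tslearn library; the logistic curves and three-group structure are fabricated for demonstration only.

```python
import numpy as np
from tslearn.clustering import KShape
from tslearn.preprocessing import TimeSeriesScalerMeanVariance

rng = np.random.default_rng(7)
t = np.linspace(0, 1, 20)  # 20 UAV acquisition dates over the season

def growth_curve(rate, midpoint):
    """Synthetic canopy-volume curve: logistic growth plus noise."""
    return 1.0 / (1.0 + np.exp(-rate * (t - midpoint))) + rng.normal(0, 0.03, t.size)

# Three hypothetical genotype groups with distinct growth dynamics.
series = np.array(
    [growth_curve(12, 0.35) for _ in range(30)] +
    [growth_curve(8, 0.50) for _ in range(30)] +
    [growth_curve(15, 0.65) for _ in range(30)]
)[:, :, np.newaxis]

# k-shape operates on z-normalized series and clusters by shape similarity.
X = TimeSeriesScalerMeanVariance().fit_transform(series)
labels = KShape(n_clusters=3, random_state=0).fit_predict(X)
print("cluster sizes:", np.bincount(labels))
```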
Disease and stress detection also features prominently. Hyperspectral imaging and ML were used to detect mechanical damage in corn seeds, where a ResNeSt_E network with joint dimensionality reduction (PCA + FA) achieved 99.0% accuracy [22]. For peanut southern blight, the synthetic minority oversampling technique (SMOTE) and fractional-order differentiation enhanced 1D-CNN performance, yielding overall accuracies of 88.81% in validation and 82.76% in testing [23]. Peach fruit segmentation under adverse conditions used a Swin Transformer backbone within Mask R-CNN, achieving AP = 60.2 overall and 40.4 for small objects, roughly doubling the performance of CNN-based approaches [24]. Crop residue coverage was segmented from smartphone images using the CCRSNet deep learning model, with mIoU = 92.73% and RMSE = 1.05–3.56% for coverage proportions [25].
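The preprocessing pipeline in [23] combines two reusable ingredients: fractional-order differentiation of spectra and SMOTE class balancing. Below is a hedged sketch of both, using a Grünwald–Letnikov approximation for the fractional derivative and the imbalanced-learn implementation of SMOTE; the spectra and labels are synthetic, and the window length and order are arbitrary choices for illustration.

```python
import numpy as np
from imblearn.over_sampling import SMOTE

def gl_fractional_diff(spectra, alpha, window=20):
    """Grünwald–Letnikov fractional-order differentiation along the band
    axis; alpha = 1 reduces to an ordinary first difference."""
    w = np.ones(window)
    for j in range(1, window):
        w[j] = w[j - 1] * (j - 1 - alpha) / j  # (-1)^j * binom(alpha, j)
    out = np.zeros_like(spectra)
    for j in range(window):
        out[:, j:] += w[j] * spectra[:, : spectra.shape[1] - j]
    return out

rng = np.random.default_rng(3)
X = rng.uniform(0.05, 0.6, (120, 200))  # synthetic spectra: 120 samples x 200 bands
y = np.array([0] * 100 + [1] * 20)      # imbalanced severity labels

X_fd = gl_fractional_diff(X, alpha=0.5)                      # fractional preprocessing
X_bal, y_bal = SMOTE(random_state=0).fit_resample(X_fd, y)   # oversample minority class
print(X_bal.shape, np.bincount(y_bal))
```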
Phenology tracking after extreme weather events used MODIS NDVI and the phenofit package, revealing end-of-season (EOS) delays (e.g., 4.97 days for cropland) after the 2021 extreme rainfall in Henan, with recovery by 2022 [26]. Anthocyanin phenotyping in purple corn employed digital cameras and color indices, with linear models achieving nRMSE < 30% across organs [27]. Corn growth parameters were estimated by integrating optical vegetation indices with SAR features into the Water Cloud Model, reducing nRMSE to 14.64% for plant height using red-edge features [28]. Phenological impacts on MODIS-derived fractional vegetation cover were also analyzed, indicating that bidirectional reflectance distribution function (BRDF) corrections enhance accuracy during stable growth phases but not at season transitions [29]. Collectively, these papers advance the field by addressing data challenges, model robustness, and practical deployment, with many achieving R2 > 0.8 and RMSE reductions of 20–50% over baselines.
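Phenological metrics such as the start and end of season (SOS/EOS) are typically extracted by fitting a double-logistic curve to an NDVI time series, the family of models implemented by tools such as the R phenofit package. The following Python analogue fits such a curve with scipy's curve_fit on a synthetic series; the parameterization and initial guesses are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def double_logistic(doy, mn, mx, sos, k1, eos, k2):
    """Double-logistic NDVI model: green-up and senescence sigmoids
    blended between background (mn) and peak (mx) NDVI."""
    green_up   = 1.0 / (1.0 + np.exp(-k1 * (doy - sos)))
    senescence = 1.0 / (1.0 + np.exp(k2 * (doy - eos)))
    return mn + (mx - mn) * (green_up + senescence - 1.0)

rng = np.random.default_rng(5)
doy = np.arange(1, 366, 8.0)  # 8-day composite dates
true = double_logistic(doy, 0.15, 0.85, 120, 0.08, 280, 0.07)
ndvi = true + rng.normal(0, 0.02, doy.size)  # noisy synthetic series

p0 = [0.1, 0.9, 110, 0.1, 270, 0.1]  # initial parameter guesses
popt, _ = curve_fit(double_logistic, doy, ndvi, p0=p0, maxfev=10000)
print(f"fitted SOS ≈ DOY {popt[2]:.0f}, EOS ≈ DOY {popt[4]:.0f}")
```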
3. Current Status and Future Perspectives
The sector built around optical sensors and ML-driven precision-farming tools is advancing rapidly, propelled by technological progress and by critical global demands arising from shifting weather patterns, workforce shortages, and the imperative to feed a population projected to reach 10.3 billion by the mid-2080s. Optical sensing technologies, including hyperspectral imaging and near-infrared spectroscopy, are increasingly deployed via UAVs, satellites, and ground-based platforms to monitor crop traits non-destructively. Machine learning, particularly deep learning variants such as CNNs and transformers, has enabled the extraction of complex features from these data, improving accuracy in tasks such as yield estimation and stress detection. Challenges remain in data volume, noise, and cost, but innovations such as lightweight sensors and edge processing are mitigating these constraints. Deep-tech domains such as AI-driven robotics are expected to further enhance climate resilience. Challenges related to ethical AI and data privacy must be carefully addressed; nevertheless, the overall trajectory points to transformative impacts on global food systems.
This second edition advances the application of optical sensors and ML in agricultural monitoring, with 17 papers offering innovative solutions validated in real-world settings. From hyperspectral chlorophyll estimation to UAV-based biomass modeling, the contributions enhance both accuracy and practicality. Looking ahead, the field is robust, with trends toward sensor fusion, automation, and sustainability promising to further bolster food security.
Conflicts of Interest
The authors declare no conflicts of interest.
References
1. LeCun, Y.; Bengio, Y.; Hinton, G. Deep Learning. Nature 2015, 521, 436–444.
2. Hultgren, A.; Carleton, T.; Delgado, M.; Gergel, D.R.; Greenstone, M.; Houser, T.; Hsiang, S.; Jina, A.; Kopp, R.E.; Malevich, S.B.; et al. Impacts of Climate Change on Global Agriculture Accounting for Adaptation. Nature 2025, 642, 644–652.
3. Umapathi, R.; Park, B.; Sonwal, S.; Rani, G.M.; Cho, Y.; Huh, Y.S. Advances in Optical-Sensing Strategies for the On-Site Detection of Pesticides in Agricultural Foods. Trends Food Sci. Technol. 2022, 119, 69–89.
4. Yu, F.; Bai, J.; Fang, J.; Guo, S.; Zhu, S.; Xu, T. Integration of a Parameter Combination Discriminator Improves the Accuracy of Chlorophyll Inversion from Spectral Imaging of Rice. Agric. Commun. 2024, 2, 100055.
5. Rouse, J.W.; Haas, R.H.; Schell, J.A.; Deering, D.W. Monitoring Vegetation Systems in the Great Plains with ERTS. In Proceedings of the Third ERTS-1 Symposium; NASA: Washington, DC, USA, 1974; Volume 1, pp. 309–317.
6. Sun, Q.; Jiao, Q.; Chen, X.; Xing, H.; Huang, W.; Zhang, B. Machine Learning Algorithms for the Retrieval of Canopy Chlorophyll Content and Leaf Area Index of Crops Using the PROSAIL-D Model with the Adjusted Average Leaf Angle. Remote Sens. 2023, 15, 2264.
7. Bongomin, O.; Lamo, J.; Guina, J.M.; Okello, C.; Ocen, G.G.; Obura, M.; Alibu, S.; Owino, C.A.; Akwero, A.; Ojok, S. UAV Image Acquisition and Processing for High-Throughput Phenotyping in Agricultural Research and Breeding Programs. Plant Phenome J. 2024, 7, e20096.
8. Zhu, W.; Rezaei, E.E.; Nouri, H.; Sun, Z.; Li, J.; Yu, D.; Siebert, S. UAV-Based Indicators of Crop Growth Are Robust for Distinct Water and Nutrient Management but Vary between Crop Development Phases. Field Crops Res. 2022, 284, 108582.
9. Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional Networks for Biomedical Image Segmentation. In Medical Image Computing and Computer-Assisted Intervention—MICCAI 2015; Springer: Berlin/Heidelberg, Germany, 2015; pp. 234–241.
10. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778.
11. Yue, J.; Li, T.; Feng, H.; Fu, Y.; Liu, Y.; Tian, J.; Yang, H.; Yang, G. Enhancing Field Soil Moisture Content Monitoring Using Laboratory-Based Soil Spectral Measurements and Radiative Transfer Models. Agric. Commun. 2024, 2, 100060.
12. Zhang, R.; Yang, Y.; Li, Z.; Li, P.; Wang, H. Optical and SAR Image Fusion: A Review of Theories, Methods, and Applications. Remote Sens. 2025, 18, 73.
13. Jin, Z.; Liu, H.; Cao, H.; Li, S.; Yu, F.; Xu, T. Hyperspectral Remote Sensing Estimation of Rice Canopy LAI and LCC by UAV Coupled RTM and Machine Learning. Agriculture 2024, 15, 11.
14. Li, S.; Lin, Y.; Zhu, P.; Jin, L.; Bian, C.; Liu, J. Combining UAV Multispectral Imaging and PROSAIL Model to Estimate LAI of Potato at Plot Scale. Agriculture 2024, 14, 2159.
15. Du, L.; Luo, S. Spectral-Frequency Conversion Derived from Hyperspectral Data Combined with Deep Learning for Estimating Chlorophyll Content in Rice. Agriculture 2024, 14, 1186.
16. Shu, M.; Wang, Z.; Guo, W.; Qiao, H.; Fu, Y.; Guo, Y.; Wang, L.; Ma, Y.; Gu, X. Effects of Variety and Growth Stage on UAV Multispectral Estimation of Plant Nitrogen Content of Winter Wheat. Agriculture 2024, 14, 1775.
17. Ma, Y.; Wu, Z.; Cheng, Y.; Chen, S.; Li, J. Rapid Detection of Fertilizer Information Based on Near-Infrared Spectroscopy and Machine Learning and the Design of a Detection Device. Agriculture 2024, 14, 1184.
18. Li, Q.; Zhao, S.; Du, L.; Luo, S. Multi-Genotype Rice Yield Prediction Based on Time-Series Remote Sensing Images and Dynamic Process Clustering. Agriculture 2024, 15, 64.
19. Guo, Y.; He, J.; Zhang, H.; Shi, Z.; Wei, P.; Jing, Y.; Yang, X.; Zhang, Y.; Wang, L.; Zheng, G. Improvement of Winter Wheat Aboveground Biomass Estimation Using Digital Surface Model Information Extracted from Unmanned-Aerial-Vehicle-Based Multispectral Images. Agriculture 2024, 14, 378.
20. Yang, F.; Liu, Y.; Yan, J.; Guo, L.; Tan, J.; Meng, X.; Xiao, Y.; Feng, H. Winter Wheat Yield Estimation with Color Index Fusion Texture Feature. Agriculture 2024, 14, 581.
21. Maleki, R.; Wu, F.; Oubara, A.; Fathollahi, L.; Yang, G. Refinement of Cropland Data Layer with Effective Confidence Layer Interval and Image Filtering. Agriculture 2024, 14, 1285.
22. Huang, H.; Liu, Y.; Zhu, S.; Feng, C.; Zhang, S.; Shi, L.; Sun, T.; Liu, C. Detection of Mechanical Damage in Corn Seeds Using Hyperspectral Imaging and the ResNeSt_E Deep Learning Network. Agriculture 2024, 14, 1780.
23. Sun, H.; Zhou, L.; Shu, M.; Zhang, J.; Feng, Z.; Feng, H.; Song, X.; Yue, J.; Guo, W. Estimation of Peanut Southern Blight Severity in Hyperspectral Data Using the Synthetic Minority Oversampling Technique and Fractional-Order Differentiation. Agriculture 2024, 14, 476.
24. Seo, D.; Lee, S.K.; Kim, J.G.; Oh, I.-S. High-Precision Peach Fruit Segmentation under Adverse Conditions Using Swin Transformer. Agriculture 2024, 14, 903.
25. Gao, G.; Zhang, S.; Shen, J.; Hu, K.; Tian, J.; Yao, Y.; Tian, Q.; Fu, Y.; Feng, H.; Liu, Y.; et al. Segmentation and Proportion Extraction of Crop, Crop Residues, and Soil Using Digital Images and Deep Learning. Agriculture 2024, 14, 2240.
26. Lin, Y.; Guo, X.; Liu, Y.; Zhou, L.; Wang, Y.; Ge, Q.; Wang, Y. Vegetation Phenology Changes and Recovery after an Extreme Rainfall Event: A Case Study in Henan Province, China. Agriculture 2024, 14, 1649.
27. Wang, Z.; Liu, Y.; Wang, K.; Wang, Y.; Wang, X.; Liu, J.; Xu, C.; Song, Y. Phenotyping the Anthocyanin Content of Various Organs in Purple Corn Using a Digital Camera. Agriculture 2024, 14, 744.
28. Wang, Y.; Wu, Z.; Luo, S.; Liu, X.; Liu, S.; Huang, X. Estimating Corn Growth Parameters by Integrating Optical and Synthetic Aperture Radar Features into the Water Cloud Model. Agriculture 2024, 14, 695.
29. Lin, Y.; Fan, T.; Wang, D.; Cai, K.; Liu, Y.; Wang, Y.; Yu, T.; Xu, N. Influence of Vegetation Phenology on the Temporal Effect of Crop Fractional Vegetation Cover Derived from Moderate-Resolution Imaging Spectroradiometer Nadir Bidirectional Reflectance Distribution Function–Adjusted Reflectance. Agriculture 2024, 14, 1759.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2026 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license.