Search Results (91)

Search Parameters:
Keywords = multimodal agricultural data

34 pages, 3231 KB  
Review
A Review of Smart Crop Technologies for Resource Constrained Environments: Leveraging Multimodal Data Fusion, Edge-to-Cloud Computing, and IoT Virtualization
by Damilola D. Olatinwo, Herman C. Myburgh, Allan De Freitas and Adnan M. Abu-Mahfouz
J. Sens. Actuator Netw. 2025, 14(5), 99; https://doi.org/10.3390/jsan14050099 - 9 Oct 2025
Viewed by 347
Abstract
Smart crop technologies offer promising solutions for enhancing agricultural productivity and sustainability, particularly in the face of global challenges such as resource scarcity and climate variability. However, their deployment in infrastructure-limited regions, especially across Africa, faces persistent barriers, including unreliable power supply, intermittent internet connectivity, and limited access to technical expertise. This study presents a PRISMA-guided systematic review of literature published between 2015 and 2025, sourced from the Scopus database including indexed content from ScienceDirect and IEEE Xplore. It focuses on key technological components including multimodal sensing, data fusion, IoT resource management, edge-cloud integration, and adaptive network design. The analysis of these references reveals a clear trend of increasing research volume and a major shift in focus from foundational unimodal sensing and cloud computing to more complex solutions involving machine learning post-2019. This review identifies critical gaps in existing research, particularly the lack of integrated frameworks for effective multimodal sensing, data fusion, and real-time decision support in low-resource agricultural contexts. To address this, we categorize multimodal sensing approaches and then provide a structured taxonomy of multimodal data fusion approaches for real-time monitoring and decision support. The review also evaluates the role of IoT virtualization as a pathway to scalable, adaptive sensing systems, and analyzes strategies for overcoming infrastructure constraints. This study contributes a comprehensive overview of smart crop technologies suited to infrastructure-limited agricultural contexts and offers strategic recommendations for deploying resilient smart agriculture solutions under connectivity and power constraints. These findings provide actionable insights for researchers, technologists, and policymakers aiming to develop sustainable and context-aware agricultural innovations in underserved regions. Full article
(This article belongs to the Special Issue Remote Sensing and IoT Application for Smart Agriculture)
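
To make the review's decision-level fusion category concrete, here is a minimal, illustrative sketch: per-modality class probabilities are averaged with reliability weights, as an edge node with intermittent sensors might do. The modality names, weights, and three-class setup are assumptions for illustration, not taken from the paper.

```python
# Decision-level fusion sketch: average per-modality class probabilities,
# weighting each modality by an assumed reliability score (e.g., sensor uptime).
import numpy as np

def fuse_decisions(prob_per_modality: dict[str, np.ndarray],
                   reliability: dict[str, float]) -> np.ndarray:
    """Weighted average of class-probability vectors from each modality."""
    weights = np.array([reliability[m] for m in prob_per_modality])
    weights = weights / weights.sum()                     # normalise weights
    stacked = np.stack(list(prob_per_modality.values()))  # shape (M, C)
    return weights @ stacked                              # fused vector (C,)

# Example: soil, weather, and camera models scoring 3 crop-stress classes.
probs = {
    "soil":    np.array([0.7, 0.2, 0.1]),
    "weather": np.array([0.5, 0.3, 0.2]),
    "camera":  np.array([0.6, 0.3, 0.1]),
}
trust = {"soil": 0.9, "weather": 0.6, "camera": 0.8}  # assumed reliabilities
print(fuse_decisions(probs, trust))
```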

21 pages, 1768 KB  
Review
Evolution of Deep Learning Approaches in UAV-Based Crop Leaf Disease Detection: A Web of Science Review
by Dorijan Radočaj, Petra Radočaj, Ivan Plaščak and Mladen Jurišić
Appl. Sci. 2025, 15(19), 10778; https://doi.org/10.3390/app151910778 - 7 Oct 2025
Viewed by 393
Abstract
The integration of unmanned aerial vehicles (UAVs) and deep learning (DL) has significantly advanced crop disease detection by enabling scalable, high-resolution, and near real-time monitoring within precision agriculture. This systematic review analyzes peer-reviewed literature indexed in the Web of Science Core Collection as articles or proceeding papers through 2024. The main selection criterion was combining “unmanned aerial vehicle*” OR “UAV” OR “drone” with “deep learning”, “agriculture” and “leaf disease” OR “crop disease”. Results show a marked surge in publications after 2019, with China, the United States, and India leading research contributions. Multirotor UAVs equipped with RGB sensors are predominantly used due to their affordability and spatial resolution, while hyperspectral imaging is gaining traction for its enhanced spectral diagnostic capability. Convolutional neural networks (CNNs), along with emerging transformer-based and hybrid models, demonstrate high detection performance, often achieving F1-scores above 95%. However, critical challenges persist, including limited annotated datasets for rare diseases, high computational costs of hyperspectral data processing, and the absence of standardized evaluation frameworks. Addressing these issues will require the development of lightweight DL architectures optimized for edge computing, improved multimodal data fusion techniques, and the creation of publicly available, annotated benchmark datasets. Advancements in these areas are vital for translating current research into practical, scalable solutions that support sustainable and data-driven agricultural practices worldwide. Full article
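
As a concrete anchor for the CNN detectors the review surveys, the sketch below defines a deliberately tiny image classifier in PyTorch. The layer sizes and four-class head are assumptions; real UAV pipelines use far deeper backbones.

```python
# Minimal CNN of the kind used for UAV leaf-disease classification (a sketch).
import torch
import torch.nn as nn

class LeafDiseaseCNN(nn.Module):
    def __init__(self, num_classes: int = 4):  # 4 disease classes, assumed
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),            # robust to input resolution
        )
        self.head = nn.Linear(32, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x).flatten(1))

logits = LeafDiseaseCNN()(torch.randn(8, 3, 224, 224))  # 8 RGB tiles
print(logits.shape)  # torch.Size([8, 4])
```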

24 pages, 3017 KB  
Article
Tree-Guided Transformer for Sensor-Based Ecological Image Feature Extraction and Multitarget Recognition in Agricultural Systems
by Yiqiang Sun, Zigang Huang, Linfeng Yang, Zihuan Wang, Mingzhuo Ruan, Jingchao Suo and Shuo Yan
Sensors 2025, 25(19), 6206; https://doi.org/10.3390/s25196206 - 7 Oct 2025
Viewed by 364
Abstract
Farmland ecosystems present complex pest–predator co-occurrence patterns, posing significant challenges for image-based multitarget recognition and ecological modeling in sensor-driven computer vision tasks. To address these issues, this study introduces a tree-guided Transformer framework enhanced with a knowledge-augmented co-attention mechanism, enabling effective feature extraction from sensor-acquired images. A hierarchical ecological taxonomy (Phylum–Family–Species) guides prompt-driven semantic reasoning, while an ecological knowledge graph enriches visual representations by embedding co-occurrence priors. A multimodal dataset containing 60 pest and predator categories with annotated images and semantic descriptions was constructed for evaluation. Experimental results demonstrate that the proposed method achieves 90.4% precision, 86.7% recall, and 88.5% F1-score in image classification, along with 82.3% hierarchical accuracy. In detection tasks, it attains 91.6% precision and 86.3% mAP@50, with 80.5% co-occurrence accuracy. For hierarchical reasoning and knowledge-enhanced tasks, F1-scores reach 88.5% and 89.7%, respectively. These results highlight the framework’s strong capability in extracting structured, semantically aligned image features under real-world sensor conditions, offering an interpretable and generalizable approach for intelligent agricultural monitoring. Full article
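
The Phylum–Family–Species hierarchy can be illustrated with a toy consistency check: roll a species prediction up the taxonomy and verify it agrees with the coarser-level predictions. The two-entry taxonomy below is a hypothetical stand-in for the paper's 60-category dataset.

```python
# Sketch of a hierarchical-consistency check over a toy taxonomy (assumed).
TAXONOMY = {
    "Aphis gossypii":            ("Arthropoda", "Aphididae"),
    "Coccinella septempunctata": ("Arthropoda", "Coccinellidae"),
}

def hierarchy_consistent(species: str, family: str, phylum: str) -> bool:
    """True when species-, family-, and phylum-level predictions agree."""
    true_phylum, true_family = TAXONOMY[species]
    return family == true_family and phylum == true_phylum

print(hierarchy_consistent("Aphis gossypii", "Aphididae", "Arthropoda"))  # True
```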

36 pages, 4484 KB  
Review
Research Progress of Deep Learning-Based Artificial Intelligence Technology in Pest and Disease Detection and Control
by Yu Wu, Li Chen, Ning Yang and Zongbao Sun
Agriculture 2025, 15(19), 2077; https://doi.org/10.3390/agriculture15192077 - 3 Oct 2025
Viewed by 362
Abstract
With the rapid advancement of artificial intelligence technology, the widespread application of deep learning in computer vision is driving the transformation of agricultural pest detection and control toward greater intelligence and precision. This paper systematically reviews the evolution of agricultural pest detection and control technologies, with a special focus on the effectiveness of deep-learning-based image recognition methods for pest identification, as well as their integrated applications in drone-based remote sensing, spectral imaging, and Internet of Things sensor systems. Through multimodal data fusion and dynamic prediction, artificial intelligence has significantly improved the response times and accuracy of pest monitoring. On the control side, the development of intelligent prediction and early-warning systems, precision pesticide-application technologies, and smart equipment has advanced the goals of eco-friendly pest management and ecological regulation. However, challenges such as high data-annotation costs, limited model generalization, and constrained computing power on edge devices remain. Moving forward, further exploration of cutting-edge approaches such as self-supervised learning, federated learning, and digital twins will be essential to build more efficient and reliable intelligent control systems, providing robust technical support for sustainable agricultural development. Full article
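
One simple, widely used building block for the dynamic prediction and early-warning systems discussed here is growing degree days (GDD), the accumulated heat units that drive insect development. A sketch follows; the base temperature and alert threshold are illustrative assumptions.

```python
# Sketch of a pest early-warning signal from growing degree days (GDD).
def growing_degree_days(t_max: float, t_min: float, t_base: float = 10.0) -> float:
    """Daily GDD: mean temperature above an assumed developmental base."""
    return max(0.0, (t_max + t_min) / 2 - t_base)

daily = [(24, 12), (27, 15), (30, 18), (26, 14)]       # (Tmax, Tmin) in deg C
gdd = sum(growing_degree_days(hi, lo) for hi, lo in daily)
print(f"accumulated GDD: {gdd:.1f}", "-> alert" if gdd > 40 else "-> ok")
```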

42 pages, 5042 KB  
Review
A Comprehensive Review of Remote Sensing and Artificial Intelligence Integration: Advances, Applications, and Challenges
by Nikolay Kazanskiy, Roman Khabibullin, Artem Nikonorov and Svetlana Khonina
Sensors 2025, 25(19), 5965; https://doi.org/10.3390/s25195965 - 25 Sep 2025
Viewed by 1336
Abstract
The integration of remote sensing (RS) and artificial intelligence (AI) has revolutionized Earth observation, enabling automated, efficient, and precise analysis of vast and complex datasets. RS techniques, leveraging satellite imagery, aerial photography, and ground-based sensors, provide critical insights into environmental monitoring, disaster response, agriculture, and urban planning. The rapid developments in AI, specifically machine learning (ML) and deep learning (DL), have significantly enhanced the processing and interpretation of RS data. AI-powered models, including convolutional neural networks (CNNs), recurrent neural networks (RNNs), and reinforcement learning (RL) algorithms, have demonstrated remarkable capabilities in feature extraction, classification, anomaly detection, and predictive modeling. This paper provides a comprehensive survey of the latest developments at the intersection of RS and AI, highlighting key methodologies, applications, and emerging challenges. While AI-driven RS offers unprecedented opportunities for automation and decision-making, issues related to model generalization, explainability, data heterogeneity, and ethical considerations remain significant hurdles. The review concludes by discussing future research directions, emphasizing the need for improved model interpretability, multimodal learning, and real-time AI deployment for global-scale applications. Full article
(This article belongs to the Section Remote Sensors)
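
Many of the surveyed RS+AI pipelines start from hand-computed spectral indices before applying learned models. The sketch below computes the classic NDVI, (NIR − Red)/(NIR + Red), and flags anomalous pixels with a simple z-score rule; the 2σ threshold and random bands are assumptions for illustration.

```python
# NDVI plus a z-score anomaly flag: a classic RS preprocessing step (sketch).
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    return (nir - red) / (nir + red + 1e-8)    # epsilon avoids division by zero

def anomaly_mask(index: np.ndarray, z_thresh: float = 2.0) -> np.ndarray:
    z = (index - index.mean()) / (index.std() + 1e-8)
    return np.abs(z) > z_thresh                # True where the index is unusual

nir = np.random.rand(64, 64)                   # placeholder NIR band
red = np.random.rand(64, 64)                   # placeholder red band
print(anomaly_mask(ndvi(nir, red)).sum(), "anomalous pixels")
```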

21 pages, 3946 KB  
Article
Research on Non-Destructive Detection Method and Model Optimization of Nitrogen in Facility Lettuce Based on THz and NIR Hyperspectral
by Yixue Zhang, Jialiang Zheng, Jingbo Zhi, Jili Guo, Jin Hu, Wei Liu, Tiezhu Li and Xiaodong Zhang
Agronomy 2025, 15(10), 2261; https://doi.org/10.3390/agronomy15102261 - 24 Sep 2025
Viewed by 304
Abstract
Considering the growing demand for modern facility agriculture, it is essential to develop non-destructive technologies for assessing lettuce nutritional status. To overcome the limitations of traditional methods, which are destructive and time-consuming, this study proposes a multimodal non-destructive nitrogen detection method for lettuce based on multi-source imaging. The approach integrates terahertz time-domain spectroscopy (THz-TDS) and near-infrared hyperspectral imaging (NIR-HSI) to achieve rapid and non-invasive nitrogen detection. Spectral imaging data of lettuce samples under different nitrogen gradients (20–150%) were simultaneously acquired using a THz-TDS system (0.2–1.2 THz) and a NIR-HSI system (1000–1600 nm), with image segmentation applied to remove background interference. During data processing, Savitzky–Golay smoothing, MSC (for THz data), and SNV (for NIR data) were employed for combined preprocessing, and sample partitioning was performed using the SPXY algorithm. Subsequently, SCARS/iPLS/IRIV algorithms were applied for THz feature selection, while RF/SPA/ICO methods were used for NIR feature screening, followed by nitrogen content prediction modeling with LS-SVM and KELM. Furthermore, small-sample learning was utilized to fuse crop feature information from the two modalities, providing a more comprehensive and effective detection strategy. The results demonstrated that the THz-based model with SCARS-selected power spectrum features and an RBF-kernel LS-SVM achieved the best predictive performance (R2 = 0.96, RMSE = 0.20), while the NIR-based model with ICO features and an RBF-kernel LS-SVM achieved the highest accuracy (R2 = 0.967, RMSE = 0.193). The fusion model, combining SCARS and ICO features, exhibited the best overall performance, with training accuracy of 96.25% and prediction accuracy of 95.94%. This dual-spectral technique leverages the complementary responses of nitrogen in molecular vibrations (THz) and organic chemical bonds (NIR), significantly enhancing model performance. To the best of our knowledge, this is the first study to realize the synergistic application of THz and NIR spectroscopy in nitrogen detection of facility-grown lettuce, providing a high-precision, non-destructive solution for rapid crop nutrition diagnosis. Full article
(This article belongs to the Special Issue Crop Nutrition Diagnosis and Efficient Production)
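
The preprocessing chain named above (Savitzky–Golay smoothing followed by SNV for the NIR data) can be sketched in a few lines with SciPy; the window length and polynomial order are assumed values, not those used in the paper.

```python
# Sketch of NIR spectral preprocessing: Savitzky-Golay smoothing, then SNV.
import numpy as np
from scipy.signal import savgol_filter

def preprocess_nir(spectra: np.ndarray) -> np.ndarray:
    """spectra: (n_samples, n_bands). Returns a smoothed, SNV-normalised copy."""
    smoothed = savgol_filter(spectra, window_length=11, polyorder=2, axis=1)
    mean = smoothed.mean(axis=1, keepdims=True)
    std = smoothed.std(axis=1, keepdims=True)
    return (smoothed - mean) / (std + 1e-8)    # Standard Normal Variate per spectrum

spectra = np.random.rand(20, 256)              # 20 samples, 256 NIR bands (toy)
print(preprocess_nir(spectra).shape)
```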

27 pages, 3625 KB  
Article
Digital Twin-Driven Sorting System for 3D Printing Farm
by Zeyan Wang, Fei Xie, Zhiyuan Wang, Yijian Liu, Qi Mao and Jun Chen
Appl. Sci. 2025, 15(18), 10222; https://doi.org/10.3390/app151810222 - 19 Sep 2025
Viewed by 470
Abstract
Modern agricultural intelligent manufacturing faces critical challenges including low automation levels, safety hazards in high-temperature processing, and insufficient production data integration. Digital twin technology and 3D printing offer promising solutions through real-time virtual–physical synchronization and customized equipment manufacturing, respectively. However, existing research exhibits significant limitations: inadequate real-time synchronization mechanisms causing delayed response, poor environmental adaptability in unstructured agricultural settings, and limited human–machine collaboration capabilities. To address these deficiencies, this study develops a digital twin-driven intelligent sorting system for 3D-printed agricultural tools, integrating an Articulated Robot Arm, 16 industrial-grade 3D printers, and the Unity3D 2024.x platform to establish a complete “printing–sorting–warehousing” digitalized production loop. Unlike existing approaches, our system achieves millisecond-level bidirectional physical–virtual synchronization, implements an adaptive grasping algorithm combining force control and thermal sensing for safe high-temperature handling, employs improved RRT-Connect path planning with ellipsoidal constraint sampling, and features AR/VR/MR-based multimodal interaction. Validation testing in real agricultural production environments demonstrates a 98.7% grasping success rate, a 99% reduction in burn accidents, and a 191% sorting efficiency improvement compared to traditional methods, providing breakthrough solutions for sustainable agricultural development and smart farming ecosystem construction. Full article
(This article belongs to the Section Additive Manufacturing Technologies)
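
The ellipsoidal constraint sampling used to improve RRT-Connect can be illustrated by the standard informed-sampling construction: draw points uniformly inside the ellipse whose foci are the start and goal and whose long axis equals the best path cost found so far. The 2D toy below is a sketch of that idea, not the paper's implementation.

```python
# Sketch of informed (ellipsoidal) sampling for biasing RRT-Connect, 2D case.
import numpy as np

def sample_informed_ellipse(start, goal, c_best):
    """Uniform sample inside the ellipse with foci start/goal, long axis c_best."""
    start, goal = np.asarray(start, float), np.asarray(goal, float)
    c_min = np.linalg.norm(goal - start)             # distance between foci
    centre = (start + goal) / 2.0
    direction = (goal - start) / c_min               # long-axis direction
    theta = np.arctan2(direction[1], direction[0])
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    radii = np.array([c_best / 2.0,
                      np.sqrt(c_best**2 - c_min**2) / 2.0])
    u = np.random.randn(2)
    u /= np.linalg.norm(u)                           # random direction
    point = u * np.sqrt(np.random.rand())            # uniform in the unit disc
    return rot @ (radii * point) + centre

print(sample_informed_ellipse([0.0, 0.0], [3.0, 4.0], c_best=6.0))
```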

20 pages, 3058 KB  
Article
An Interpretable Wheat Yield Estimation Model Using Time Series Remote Sensing Data and Considering Meteorological and Soil Influences
by Xiangquan Zeng, Dong Han, Kevin Tansey, Pengxin Wang, Mingyue Pei, Yun Li, Fanghao Li and Ying Du
Remote Sens. 2025, 17(18), 3192; https://doi.org/10.3390/rs17183192 - 15 Sep 2025
Viewed by 508
Abstract
Accurate estimation of winter wheat yield is essential for ensuring food security. Recent studies on winter wheat yield estimation based on deep learning methods rarely explore the interpretability of the model from the perspective of crop growth mechanism. In this study, a multiscale winter wheat yield estimation framework (called MultiScaleWheatNet model) was proposed, which is based on time series remote sensing data and further takes into account meteorological and soil factors that affect wheat growth. The model integrates multimodal data from different temporal and spatial scales, extracting growth characteristics specific to particular growth stages based on the growth pattern of wheat phenological phases. It focuses on enhancing model accuracy and interpretability from the perspective of crop growth mechanisms. The results showed that, compared to mainstream deep learning architectures, the MultiScaleWheatNet model had good estimation accuracy in both rain-fed and irrigated farmlands, with higher accuracy in rain-fed farmlands (R2 = 0.86, RMSE = 0.15 t·ha⁻¹). At the county scale, the accuracy of the model in estimating winter wheat yield was stable across three years (from 2021 to 2023, R2 ≥ 0.35, RMSE ≤ 0.73 t·ha⁻¹, nRMSE ≤ 20.4%). Model interpretability results showed that, taking all growth stages together, the remotely sensed indices had a relatively high contribution to wheat yield, with roughly equal contributions from meteorological and soil variables. From the perspective of the growth stages, the contribution of LAI in remote sensing factors demonstrated greater stability throughout the growth stages, particularly during the jointing, heading-filling and milky maturity stages; the combined impact of meteorological factors exhibited a discernible temporal sequence, initially dominated by water availability and subsequently transitioning to temperature and sunlight in the middle and late stages; soil factors demonstrated a close correlation with soil pH and cation exchange capacity in the early and late stages, and with organic carbon content in the middle stage. By deeply combining remote sensing, meteorological and soil data, the framework not only achieves high accuracy in winter wheat yield estimation, but also effectively interprets the dynamic influence mechanism of remote sensing data on yield from the perspective of crop growth, providing a scientific basis for precise field water and fertiliser management and agricultural decision-making. Full article
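
One common way to obtain the kind of factor contributions reported above is permutation importance; the sketch below applies it to a synthetic stand-in with three features (LAI, rainfall, soil pH). The model, data, and feature names are placeholders, not MultiScaleWheatNet.

```python
# Sketch: permutation importance as a model-agnostic contribution measure.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

X = np.random.rand(200, 3)                     # columns: [LAI, rainfall, soil_pH]
y = 2.0 * X[:, 0] + 0.5 * X[:, 1] + np.random.normal(0, 0.1, 200)  # pseudo-yield
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

imp = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in zip(["LAI", "rainfall", "soil_pH"], imp.importances_mean):
    print(f"{name}: {score:.3f}")              # LAI should dominate by design
```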

40 pages, 2568 KB  
Review
Intelligent Edge Computing and Machine Learning: A Survey of Optimization and Applications
by Sebastián A. Cajas Ordóñez, Jaydeep Samanta, Andrés L. Suárez-Cetrulo and Ricardo Simón Carbajo
Future Internet 2025, 17(9), 417; https://doi.org/10.3390/fi17090417 - 11 Sep 2025
Cited by 2 | Viewed by 1738
Abstract
Intelligent edge machine learning has emerged as a paradigm for deploying smart applications across resource-constrained devices in next-generation network infrastructures. This survey addresses the critical challenges of implementing machine learning models on edge devices within distributed network environments, including computational limitations, memory constraints, and energy-efficiency requirements for real-time intelligent inference. We provide comprehensive analysis of soft computing optimization strategies essential for intelligent edge deployment, systematically examining model compression techniques including pruning, quantization methods, knowledge distillation, and low-rank decomposition approaches. The survey explores intelligent MLOps frameworks tailored for network edge environments, addressing continuous model adaptation, monitoring under data drift, and federated learning for distributed intelligence while preserving privacy in next-generation networks. Our work covers practical applications across intelligent smart agriculture, energy management, healthcare, and industrial monitoring within network infrastructures, highlighting domain-specific challenges and emerging solutions. We analyze specialized hardware architectures, cloud offloading strategies, and distributed learning approaches that enable intelligent edge computing in heterogeneous network environments. The survey identifies critical research gaps in multimodal model deployment, streaming learning under concept drift, and integration of soft computing techniques with intelligent edge orchestration frameworks for network applications. These gaps directly manifest as open challenges in balancing computational efficiency with model robustness due to limited multimodal optimization techniques, developing sustainable intelligent edge AI systems arising from inadequate streaming learning adaptation, and creating adaptive network applications for dynamic environments resulting from insufficient soft computing integration. This comprehensive roadmap synthesizes current intelligent edge machine learning solutions with emerging soft computing approaches, providing researchers and practitioners with insights for developing next-generation intelligent edge computing systems that leverage machine learning capabilities in distributed network infrastructures. Full article
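
Of the compression techniques the survey covers, magnitude pruning is the simplest to demonstrate; the sketch below zeroes the 50% smallest weights of a toy layer using PyTorch's built-in pruning utilities. The layer size and pruning ratio are arbitrary choices for illustration.

```python
# Sketch of L1 magnitude pruning with PyTorch's pruning utilities.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

layer = nn.Linear(128, 64)
prune.l1_unstructured(layer, name="weight", amount=0.5)  # zero 50% smallest weights
sparsity = (layer.weight == 0).float().mean().item()
print(f"weight sparsity: {sparsity:.0%}")                # ~50%
prune.remove(layer, "weight")   # bake the mask in, making the pruning permanent
```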

23 pages, 7451 KB  
Article
Comparing Machine Learning and Statistical Models for Remote Sensing-Based Forest Aboveground Biomass Estimations
by Shashika Himandi Gardeye Lamahewage, Chandi Witharana, Rachel Riemann, Robert Fahey and Thomas Worthley
Forests 2025, 16(9), 1430; https://doi.org/10.3390/f16091430 - 7 Sep 2025
Viewed by 724
Abstract
Understanding the distribution of forest aboveground biomass (AGB) is pivotal for carbon monitoring. Field-based inventorying is time-consuming and costly for large-area AGB estimations. The integration of multimodal remote sensing (RS) observations with single-year, field-based Forest Inventory and Analysis (FIA) data has the potential to improve the efficiency of large-scale AGB modeling and carbon monitoring initiatives. Our main objective was to systematically compare the AGB prediction accuracies of machine learning algorithms (e.g., random forest (RF) and support vector machine (SVM)) with those of conventional statistical methods (e.g., multiple linear regression (MLR)) using multimodal RS variables as predictors. We implemented a method combining AGB estimates of actual FIA subplot locations with airborne LiDAR, National Agriculture Imagery Program (NAIP) aerial imagery, and Sentinel-2 satellite images for model training, validation, and testing. The hyperparameter-tuned RF model produced a root mean square error (RMSE) of 27.19 Mg·ha⁻¹ and an R2 of 0.41, outperforming the SVM and MLR models. Among the 28 most important explanatory variables used to build the best RF model, 68% were derived from the LiDAR height data. The hyperparameter-tuned linear SVM model exhibited an R2 of 0.10 and an RMSE of 32.17 Mg·ha⁻¹. Additionally, we developed an MLR using eight explanatory variables, which yielded an RMSE of 22.59 Mg·ha⁻¹ and an R2 of 0.22. The linear ensemble model, which was developed using the predictions of all three models, yielded an R2 of 0.79. Our results suggested that more field data are required to better generalize the ensemble model. Overall, our findings highlight the importance of variable selection methods, the hyperparameter tuning of ML algorithms, and the integration of multimodal RS data in improving large-area AGB prediction models. Full article
(This article belongs to the Special Issue Forest Inventory: The Monitoring of Biomass and Carbon Stocks)
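
The compare-with-tuning workflow described above reduces, in outline, to cross-validated grid search over each model family; the sketch below runs it on synthetic stand-ins for the LiDAR/NAIP/Sentinel-2 predictors. The grids and data are assumptions, kept small for illustration.

```python
# Sketch: hyperparameter-tuned RF vs. SVM comparison via grid search (toy data).
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.svm import SVR
from sklearn.model_selection import GridSearchCV

X = np.random.rand(300, 10)                    # stand-in for multimodal RS variables
y = X @ np.random.rand(10) + np.random.normal(0, 0.1, 300)   # pseudo-AGB target

searches = {
    "RF":  GridSearchCV(RandomForestRegressor(random_state=0),
                        {"n_estimators": [100, 300], "max_depth": [5, None]}),
    "SVM": GridSearchCV(SVR(), {"C": [1, 10], "kernel": ["linear", "rbf"]}),
}
for name, gs in searches.items():
    gs.fit(X, y)                               # 5-fold CV, R2 scoring by default
    print(name, gs.best_params_, f"CV R2={gs.best_score_:.2f}")
```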

24 pages, 2357 KB  
Article
From Vision-Only to Vision + Language: A Multimodal Framework for Few-Shot Unsound Wheat Grain Classification
by Yuan Ning, Pengtao Lv, Qinghui Zhang, Le Xiao and Caihong Wang
AI 2025, 6(9), 207; https://doi.org/10.3390/ai6090207 - 29 Aug 2025
Viewed by 758
Abstract
Precise classification of unsound wheat grains is essential for crop yields and food security, yet most existing approaches rely on vision-only models that demand large labeled datasets, which is often impractical in real-world, data-scarce settings. To address this few-shot challenge, we propose UWGC, a novel vision-language framework designed for few-shot classification of unsound wheat grains. UWGC integrates two core modules: a fine-tuning module based on Adaptive Prior Refinement (APE) and a text prompt enhancement module that incorporates Advancing Textual Prompt (ATPrompt) and the multimodal model Qwen2.5-VL. The synergy between the two modules, leveraging cross-modal semantics, enhances generalization of UWGC in low-data regimes. It is offered in two variants: UWGC-F and UWGC-T, in order to accommodate different practical needs. Across few-shot settings on a public grain dataset, UWGC-F and UWGC-T consistently outperform existing vision-only and vision-language methods, highlighting their potential for unsound wheat grain classification in real-world agriculture. Full article
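
The cache-based few-shot idea behind APE-style methods can be sketched as blending two similarity scores: query-to-support (vision cache) and query-to-text-prototype (language). Below, all embeddings are random placeholders where a real system would use a vision-language encoder, and the blend weight alpha is an assumed hyperparameter.

```python
# Sketch of few-shot classification from a support cache plus text prototypes.
import torch
import torch.nn.functional as F

def few_shot_logits(query, support, support_labels, text_protos, alpha=0.5):
    """Blend query-to-support-cache similarity with query-to-text similarity."""
    q = F.normalize(query, dim=-1)
    s = F.normalize(support, dim=-1)
    t = F.normalize(text_protos, dim=-1)
    onehot = F.one_hot(support_labels, text_protos.size(0)).float()  # (N, C)
    cache_logits = (q @ s.T) @ onehot        # vision branch: similarity cache
    text_logits = q @ t.T                    # language branch: text prototypes
    return alpha * cache_logits + (1 - alpha) * text_logits

query = torch.randn(1, 512)                  # placeholder image embedding
support = torch.randn(16, 512)               # 16 labelled few-shot examples
labels = torch.randint(0, 4, (16,))          # 4 grain classes, assumed
text_protos = torch.randn(4, 512)            # placeholder class-text embeddings
print(few_shot_logits(query, support, labels, text_protos).argmax().item())
```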

29 pages, 59556 KB  
Review
Application of Deep Learning Technology in Monitoring Plant Attribute Changes
by Shuwei Han and Haihua Wang
Sustainability 2025, 17(17), 7602; https://doi.org/10.3390/su17177602 - 22 Aug 2025
Viewed by 1930
Abstract
With the advancement of remote sensing imagery and multimodal sensing technologies, monitoring plant trait dynamics has emerged as a critical area of research in modern agriculture. Traditional approaches, which rely on handcrafted features and shallow models, struggle to effectively address the complexity inherent in high-dimensional and multisource data. In contrast, deep learning, with its end-to-end feature extraction and nonlinear modeling capabilities, has substantially improved monitoring accuracy and automation. This review summarizes recent developments in the application of deep learning methods—including CNNs, RNNs, LSTMs, Transformers, GANs, and VAEs—to tasks such as growth monitoring, yield prediction, pest and disease identification, and phenotypic analysis. It further examines prominent research themes, including multimodal data fusion, transfer learning, and model interpretability. Additionally, it discusses key challenges related to data scarcity, model generalization, and real-world deployment. Finally, the review outlines prospective directions for future research, aiming to inform the integration of deep learning with phenomics and intelligent IoT systems and to advance plant monitoring toward greater intelligence and high-throughput capabilities. Full article
(This article belongs to the Section Sustainable Agriculture)
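
The transfer-learning recipe the review highlights (freeze a pretrained backbone, retrain a small task head) looks like this in PyTorch; the five-class trait head is an assumption, and weights are left uninitialized so the sketch runs offline.

```python
# Sketch: freeze a ResNet backbone and retrain only a new plant-trait head.
import torch.nn as nn
from torchvision import models

backbone = models.resnet18(weights=None)   # use weights="IMAGENET1K_V1" if online
for p in backbone.parameters():
    p.requires_grad = False                # freeze the feature extractor
backbone.fc = nn.Linear(backbone.fc.in_features, 5)  # new 5-class head (assumed)
trainable = sum(p.numel() for p in backbone.parameters() if p.requires_grad)
print(f"trainable parameters: {trainable}")  # only the head remains trainable
```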

45 pages, 2283 KB  
Review
Agricultural Image Processing: Challenges, Advances, and Future Trends
by Xuehua Song, Letian Yan, Sihan Liu, Tong Gao, Li Han, Xiaoming Jiang, Hua Jin and Yi Zhu
Appl. Sci. 2025, 15(16), 9206; https://doi.org/10.3390/app15169206 - 21 Aug 2025
Viewed by 1337
Abstract
Agricultural image processing technology plays a critical role in enabling precise disease detection, accurate yield prediction, and various smart agriculture applications. However, its practical implementation faces key challenges, including environmental interference, data scarcity and imbalanced datasets, and the difficulty of deploying models on resource-constrained edge devices. This paper presents a systematic review of recent advances in addressing these challenges, with a focus on three core aspects: environmental robustness, data efficiency, and model deployment. The study identifies that attention mechanisms, Transformers, multi-scale feature fusion, and domain adaptation can enhance model robustness under complex conditions. Self-supervised learning, transfer learning, GAN-based data augmentation, SMOTE improvements, and Focal loss optimization effectively alleviate data limitations. Furthermore, model compression techniques such as pruning, quantization, and knowledge distillation facilitate efficient deployment. Future research should emphasize multi-modal fusion, causal reasoning, edge–cloud collaboration, and dedicated hardware acceleration. Integrating agricultural expertise with AI is essential for promoting large-scale adoption, as well as for achieving intelligent, sustainable agricultural systems. Full article
(This article belongs to the Special Issue Pattern Recognition Applications of Neural Networks and Deep Learning)
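
The Focal loss optimization mentioned above for imbalanced datasets down-weights well-classified examples via a (1 − p_t)^γ factor; a minimal sketch follows, with γ = 2 and α = 0.25 as commonly assumed defaults.

```python
# Sketch of focal loss for class-imbalanced classification.
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, gamma=2.0, alpha=0.25):
    ce = F.cross_entropy(logits, targets, reduction="none")
    pt = torch.exp(-ce)                      # probability of the true class
    return (alpha * (1 - pt) ** gamma * ce).mean()  # easy examples weigh less

logits = torch.randn(8, 3)                   # toy batch, 3 classes
targets = torch.randint(0, 3, (8,))
print(focal_loss(logits, targets))
```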

14 pages, 831 KB  
Article
Migratory Bird-Inspired Adaptive Kalman Filtering for Robust Navigation of Autonomous Agricultural Planters in Unstructured Terrains
by Zijie Zhou, Yitao Huang and Jiyu Sun
Biomimetics 2025, 10(8), 543; https://doi.org/10.3390/biomimetics10080543 - 19 Aug 2025
Viewed by 455
Abstract
This paper presents a bionic extended Kalman filter (EKF) state estimation algorithm for agricultural planters, inspired by the bionic mechanism of migratory birds navigating in complex environments, where migratory birds achieve precise localization behaviors by fusing multi-sensory information (e.g., geomagnetic field, visual landmarks, and somatosensory balance). The algorithm mimics the migratory bird’s ability to integrate multimodal information by fusing laser SLAM, inertial measurement unit (IMU), and GPS data to estimate the position, velocity, and attitude of the planter in real time. Adopting a nonlinear processing approach, the EKF effectively handles nonlinear dynamic characteristics in complex terrain, similar to the adaptive response of a biological nervous system to environmental perturbations. The algorithm demonstrates bio-inspired robustness through the derivation of the nonlinear dynamic teaching model and measurement model and is able to provide high-precision state estimation in complex environments such as mountainous or hilly terrain. Simulation results show that the algorithm significantly improves the navigation accuracy of the planter in unstructured environments. A new method of bio-inspired adaptive state estimation is provided. Full article
(This article belongs to the Special Issue Computer-Aided Biomimetics: 3rd Edition)
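
At the core of the EKF fusion described above is the predict–update cycle; the sketch below shows the measurement-update step for a linear measurement model (an EKF replaces H with the measurement Jacobian). The 1D position/velocity example and noise values are assumptions.

```python
# Sketch of the Kalman measurement update underlying EKF sensor fusion.
import numpy as np

def kf_update(x, P, z, H, R):
    """x: state mean, P: covariance, z: measurement, H/R: measurement model."""
    y = z - H @ x                              # innovation
    S = H @ P @ H.T + R                        # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)             # Kalman gain
    return x + K @ y, (np.eye(len(x)) - K @ H) @ P

x = np.array([0.0, 1.0])                       # [position, velocity]
P = np.eye(2)                                  # state covariance
H = np.array([[1.0, 0.0]])                     # GPS measures position only
R = np.array([[0.5]])                          # assumed measurement noise
x, P = kf_update(x, P, z=np.array([0.3]), H=H, R=R)
print(x)                                       # state pulled toward the fix
```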

23 pages, 1657 KB  
Article
High-Precision Pest Management Based on Multimodal Fusion and Attention-Guided Lightweight Networks
by Ziye Liu, Siqi Li, Yingqiu Yang, Xinlu Jiang, Mingtian Wang, Dongjiao Chen, Tianming Jiang and Min Dong
Insects 2025, 16(8), 850; https://doi.org/10.3390/insects16080850 - 16 Aug 2025
Viewed by 1083
Abstract
In the context of global food security and sustainable agricultural development, the efficient recognition and precise management of agricultural insect pests and their predators have become critical challenges in the domain of smart agriculture. To address the limitations of traditional models that overly rely on single-modal inputs and suffer from poor recognition stability under complex field conditions, a multimodal recognition framework has been proposed. This framework integrates RGB imagery, thermal infrared imaging, and environmental sensor data. A cross-modal attention mechanism, environment-guided modality weighting strategy, and decoupled recognition heads are incorporated to enhance the model’s robustness against small targets, intermodal variations, and environmental disturbances. Evaluated on a high-complexity multimodal field dataset, the proposed model significantly outperforms mainstream methods across four key metrics, precision, recall, F1-score, and mAP@50, achieving 91.5% precision, 89.2% recall, 90.3% F1-score, and 88.0% mAP@50. These results represent an improvement of over 6% compared to representative models such as YOLOv8 and DETR. Additional ablation studies confirm the critical contributions of key modules, particularly under challenging scenarios such as low light, strong reflections, and sensor data noise. Moreover, deployment tests conducted on the Jetson Xavier edge device demonstrate the feasibility of real-world application, with the model achieving a 25.7 FPS inference speed and a compact size of 48.3 MB, thus balancing accuracy and lightweight design. This study provides an efficient, intelligent, and scalable AI solution for pest surveillance and biological control, contributing to precision pest management in agricultural ecosystems. Full article
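
The environment-guided modality weighting strategy can be sketched as a small gating network that maps sensor readings (e.g., light, humidity) to softmax weights over RGB and thermal features; the dimensions and inputs below are illustrative assumptions, not the paper's architecture.

```python
# Sketch: environment-conditioned gating over RGB and thermal feature streams.
import torch
import torch.nn as nn

class ModalityGate(nn.Module):
    def __init__(self, env_dim: int = 2, n_modalities: int = 2):
        super().__init__()
        self.gate = nn.Sequential(nn.Linear(env_dim, 16), nn.ReLU(),
                                  nn.Linear(16, n_modalities))

    def forward(self, env, rgb_feat, thermal_feat):
        w = torch.softmax(self.gate(env), dim=-1)       # (B, 2) modality weights
        return w[:, :1] * rgb_feat + w[:, 1:] * thermal_feat

env = torch.tensor([[0.1, 0.8]])                # low light, high humidity (toy)
fused = ModalityGate()(env, torch.randn(1, 256), torch.randn(1, 256))
print(fused.shape)                              # torch.Size([1, 256])
```

In low light the gate can learn to shift weight toward the thermal stream, which is the behavior the abstract's robustness claims rest on.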
