Search Results (502)

Search Parameters:
Keywords = image-based phenology

23 pages, 3010 KB  
Article
Monitoring Maize Phenology Using Multi-Source Data by Integrating Convolutional Neural Networks and Transformers
by Yugeng Guo, Wenzhi Zeng, Haoze Zhang, Jinhan Shao, Yi Liu and Chang Ao
Remote Sens. 2026, 18(2), 356; https://doi.org/10.3390/rs18020356 - 21 Jan 2026
Viewed by 62
Abstract
Effective monitoring of maize phenology under stress conditions is crucial for optimizing agricultural management and mitigating yield losses. Crop prediction models built on Convolutional Neural Networks (CNNs) have been widely applied. However, CNNs often struggle to capture long-range temporal dependencies in phenological data, which are crucial for modeling seasonal and cyclic patterns. Transformers complement CNNs in this respect: their multi-head self-attention provides the global context modeling over extended sequences that local convolutions lack. This study proposes a synergistic framework that combines a CNN with a Transformer to achieve global-local feature synergy, yielding an innovative phenological monitoring model based on near-ground remote sensing. High-resolution imagery of maize fields was collected using unmanned aerial vehicles (UAVs) equipped with multispectral and thermal infrared cameras. By integrating these data with the CNN and Transformer architectures, the proposed model enables accurate inversion and quantitative analysis of maize phenological traits. In the experiment, a network was constructed using multispectral and thermal infrared images from maize fields, and the model was validated on the collected experimental data. The results showed that the integration of multispectral imagery and accumulated temperature achieved an accuracy of 92.9%, while the inclusion of thermal infrared imagery further improved the accuracy to 97.5%. This study highlights the potential of UAV-based remote sensing, combined with CNNs and Transformers, as a transformative approach for precision agriculture.
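The "global context" the abstract credits to multi-head attention comes down to every time step attending to every other one. A minimal single-head scaled dot-product self-attention over a sequence of per-date image features can be sketched in NumPy (illustrative only, not the authors' architecture; shapes and names are assumptions):

```python
import numpy as np

def self_attention(x):
    """Single-head scaled dot-product self-attention over a sequence.

    x: (T, d) array of T time-step features (e.g. per-date UAV embeddings).
    Every output row is a weighted mix of ALL time steps, which is the
    global temporal context a plain CNN's local kernels cannot provide.
    """
    d = x.shape[1]
    scores = x @ x.T / np.sqrt(d)                     # (T, T) pairwise similarities
    w = np.exp(scores - scores.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)                 # softmax: each row sums to 1
    return w @ x                                      # (T, d) context-mixed features

rng = np.random.default_rng(0)
seq = rng.normal(size=(10, 8))                        # 10 acquisition dates, 8 features
out = self_attention(seq)
print(out.shape)                                      # (10, 8)
```

A real Transformer adds learned query/key/value projections and multiple heads, but the all-pairs mixing above is the mechanism that distinguishes it from a convolution.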

8 pages, 2719 KB  
Proceeding Paper
Predictive Potential of Three Red-Edge Vegetation Index from Sentinel-2 Images and Machine Learning for Maize Yield Assessment
by Dorijan Radočaj, Ivan Plaščak, Željko Barač and Mladen Jurišić
Eng. Proc. 2026, 125(1), 1; https://doi.org/10.3390/engproc2026125001 - 20 Jan 2026
Viewed by 34
Abstract
This study aimed to evaluate the prediction potential of phenology metrics from two vegetation indices derived from Sentinel-2 images, the Normalized Difference Vegetation Index (NDVI) and Three Red-Edge Vegetation Index (NDVI3RE), for maize yield prediction. Ground truth maize yield samples were collected near Koška, Croatia, on 13 October 2023, using a Quantimeter yield mapping sensor on a Claas Lexion 6900 combine harvester. The phenology analysis was performed on a time series of all available Sentinel-2 images during 2023, using the Beck logistic model to determine the start of season (SOS), peak of season (POS), end of season (EOS), greenup, maturity, senescence, and dormancy. A total of fourteen covariates, including vegetation indices at phenology metrics and their occurrence dates, were used for machine learning prediction of maize yield using Random Forest (RF) and Support Vector Machine (SVM) regression. The results suggested that the SVM method based on NDVI phenology metrics produced the highest accuracy for maize yield prediction (R2 = 0.935, RMSE = 0.558 t ha−1, MAE = 0.399 t ha−1). Vegetation index values at greenup, dormancy, and POS were the most important covariates for the prediction, while the day of year (DOY) on which they occurred had only a minor effect on prediction accuracy. This suggests that, despite its limitations regarding the saturation effect, NDVI outperformed NDVI3RE for maize yield prediction when combined with phenology metrics.
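The Beck model referenced here is a double-logistic curve fitted to the annual NDVI time series; phenology metrics such as POS fall out of the fitted curve. A sketch of the curve and a POS read-off, with illustrative (not the paper's) parameter values:

```python
import numpy as np

def beck_double_logistic(t, w_ndvi, m_ndvi, s, m_s, a, m_a):
    """Double-logistic NDVI curve in the spirit of Beck et al.

    w_ndvi: winter (background) NDVI, m_ndvi: peak summer NDVI,
    s/a: inflection DOYs of green-up and senescence,
    m_s/m_a: slopes at those inflections. All values here are assumed.
    """
    rise = 1.0 / (1.0 + np.exp(-m_s * (t - s)))
    fall = 1.0 / (1.0 + np.exp(m_a * (t - a)))
    return w_ndvi + (m_ndvi - w_ndvi) * (rise + fall - 1.0)

doy = np.arange(1, 366)
ndvi = beck_double_logistic(doy, 0.2, 0.85, 140, 0.1, 270, 0.1)
pos = doy[np.argmax(ndvi)]        # peak of season: DOY of the fitted maximum
print(pos)
```

In practice the six parameters are fitted to cloud-screened observations by nonlinear least squares; SOS/EOS are then typically defined at the green-up and senescence inflection points or at fixed amplitude thresholds.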

24 pages, 43005 KB  
Article
Accurate Estimation of Spring Maize Aboveground Biomass in Arid Regions Based on Integrated UAV Remote Sensing Feature Selection
by Fengxiu Li, Yanzhao Guo, Yingjie Ma, Ning Lv, Zhijian Gao, Guodong Wang, Zhitao Zhang, Lei Shi and Chongqi Zhao
Agronomy 2026, 16(2), 219; https://doi.org/10.3390/agronomy16020219 - 16 Jan 2026
Viewed by 230
Abstract
Maize is one of the three most important crops globally, ranking behind only rice and wheat. Aboveground biomass (AGB) is a key indicator for assessing maize growth and yield potential. This study developed an efficient and stable biomass prediction model to estimate the AGB of spring maize (Zea mays L.) under subsurface drip irrigation in arid regions, based on UAV multispectral remote sensing and machine learning techniques. Focusing on typical subsurface drip-irrigated spring maize in arid Xinjiang, multispectral images and field-measured AGB data were collected from 96 sample points (selected via stratified random sampling across 24 plots) over four key phenological stages in 2024 and 2025. Sixteen vegetation indices were calculated and 40 texture features were extracted using the gray-level co-occurrence matrix method, while an integrated feature-selection strategy combining Elastic Net and Random Forest was employed to screen key predictor variables. Based on the selected features, six machine learning models were constructed: Elastic Net Regression (ENR), Gradient Boosting Decision Trees (GBDT), Gaussian Process Regression (GPR), Partial Least Squares Regression (PLSR), Random Forest (RF), and Extreme Gradient Boosting (XGB). Results showed that the fused feature set comprised four vegetation indices (GRDVI, RERVI, GRVI, NDVI) and five texture features (R_Corr, NIR_Mean, NIR_Vari, B_Mean, B_Corr), thereby retaining red-edge and visible-light texture information highly sensitive to AGB. The GPR model based on the fused features exhibited the best performance (test set R2 = 0.852, RMSE = 2890.74 kg ha−1, MAE = 1676.70 kg ha−1), demonstrating high fitting accuracy and stable predictive ability across both the training and test sets. Spatial inversions over the 2024 and 2025 growing seasons, derived from the fused-feature GPR model at the four key phenological stages, revealed pronounced spatiotemporal heterogeneity and stage-dependent dynamics of spring maize AGB: biomass accumulates rapidly from jointing to grain filling, slows thereafter, and peaks at maturity. At a constant planting density, AGB increased markedly with nitrogen inputs from N0 to N3 (420 kg N ha−1), with the high-nitrogen N3 treatment producing the greatest biomass; this captured the regulatory effect of the nitrogen gradient on maize growth, provided reliable data for variable-rate fertilization, and is highly relevant for optimizing water–fertilizer coordination in subsurface drip irrigation systems. Future research may extend this integrated feature-selection and modeling framework to monitor the growth and estimate the yield of other crops, such as rice and cotton, thereby validating its generalizability and robustness in diverse agricultural scenarios.
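The gray-level co-occurrence matrix (GLCM) behind the texture features counts how often pairs of quantized gray levels co-occur at a fixed pixel offset; statistics such as contrast are then computed on the normalized matrix. A minimal NumPy sketch (one offset, symmetric and normalized; real pipelines such as scikit-image average several offsets and angles):

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=8):
    """Symmetric, normalized gray-level co-occurrence matrix for one offset.

    img: 2D integer array already quantized to `levels` gray levels.
    """
    p = np.zeros((levels, levels))
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            i, j = img[y, x], img[y + dy, x + dx]
            p[i, j] += 1                      # count the (i, j) co-occurrence
            p[j, i] += 1                      # symmetric counterpart
    return p / p.sum()

def glcm_contrast(p):
    """Contrast: expected squared gray-level difference of co-occurring pairs."""
    i, j = np.indices(p.shape)
    return float(np.sum(p * (i - j) ** 2))

rng = np.random.default_rng(1)
patch = rng.integers(0, 8, size=(16, 16))     # e.g. a quantized NIR band patch
p = glcm(patch)
print(round(glcm_contrast(p), 3))
```

Mean, variance, and correlation features (the NIR_Mean / R_Corr style covariates above) are computed from the same matrix with the marginal statistics of `p`.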

26 pages, 4221 KB  
Article
Predicting Phenological Stages for Cherry and Apple Orchards: A Comparative Study with Meteorological and Satellite Data
by Valentin Kazandjiev, Dessislava Ganeva, Eugenia Roumenina, Georgi Jelev, Veska Georgieva, Boryana Tsenova, Petia Malasheva, Marieta Nesheva, Svetoslav Malchev, Stanislava Dimitrova and Anita Stoeva
Agronomy 2026, 16(2), 200; https://doi.org/10.3390/agronomy16020200 - 14 Jan 2026
Viewed by 287
Abstract
Fruit growing is a traditional component of Bulgarian agricultural production. According to the latest statistical data, cherries account for 10.5% of the total orchard area and apples for 7.2%, totaling 67,800 ha. This article presents the results of ground-based and satellite measurements and observations of cherry and apple orchards, along with the methods for their processing and interpretation, to define their current state and forecast their expected development. This research aims to combine the capabilities of the two approaches by improving and expanding observation and forecasting activities. Ground-based measurements and observations consider the dates of a permanent transition in air temperature above 5 °C and several cardinal phenological stages, based on the premise that a certain temperature sum (CU, GDH, GDD) must accumulate to move from one phenological stage to the next. The obtained data were statistically analyzed, and through classification with the Random Forest algorithm, the dates of bud break, flowering, and fruit ripening in cherry and apple orchards were predicted with an accuracy of −6 to +2 days. Satellite studies include creating a database of Sentinel-2 digital images across different spectral bands for the studied orchards, investigating various post-processing approaches, and deriving indicators of developmental phenostages. Ground data from the 2021–2023 experiment in Kyustendil and Plovdiv were used to determine the phases of bud bursting, flowering, and ripening from satellite images. The accuracy of the two forecasting approaches was assessed by comparing their predictions for bud swelling and bursting (BBCH 57), flowering (BBCH 65), and fruit ripening (BBCH 87/89) against the observed phenological events in the two selected orchard types, representatives of stone and pome fruit species.
(This article belongs to the Section Innovative Cropping Systems)
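The temperature-sum idea in the abstract is straightforward to make concrete: accumulate daily growing degree days (GDD) above the 5 °C base and predict a stage when a threshold sum is reached. A minimal sketch with hypothetical temperatures and threshold (the paper's CU/GDH variants differ in detail but follow the same accumulation logic):

```python
def growing_degree_days(tmin, tmax, t_base=5.0):
    """Daily GDD from min/max air temperature, truncated at the base temperature."""
    return max(0.0, (tmin + tmax) / 2.0 - t_base)

def doy_of_threshold(daily_temps, gdd_threshold, t_base=5.0):
    """First day (1-indexed) on which the accumulated GDD reaches the threshold."""
    total = 0.0
    for doy, (tmin, tmax) in enumerate(daily_temps, start=1):
        total += growing_degree_days(tmin, tmax, t_base)
        if total >= gdd_threshold:
            return doy
    return None                                   # threshold never reached

temps = [(2, 8), (4, 12), (6, 14), (8, 16), (9, 18)]  # hypothetical (tmin, tmax) per day
print(doy_of_threshold(temps, 15.0))                  # → 4
```

The Random Forest classifier mentioned above replaces the fixed threshold with a learned mapping from accumulated sums (and other predictors) to stage dates, which is how the −6 to +2 day accuracy is obtained.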

28 pages, 25509 KB  
Article
Deep Learning for Semantic Segmentation in Crops: Generalization from Opuntia spp.
by Arturo Duarte-Rangel, César Camacho-Bello, Eduardo Cornejo-Velazquez and Mireya Clavel-Maqueda
AgriEngineering 2026, 8(1), 18; https://doi.org/10.3390/agriengineering8010018 - 5 Jan 2026
Viewed by 460
Abstract
Semantic segmentation of UAV–acquired RGB orthomosaics is a key component for quantifying vegetation cover and monitoring phenology in precision agriculture. This study evaluates a representative set of CNN–based architectures (U–Net, U–Net Xception–Style, SegNet, DeepLabV3+) and Transformer–based models (Swin–UNet/Swin–Transformer, SegFormer, and Mask2Former) under a unified and reproducible protocol. We propose a transfer–and–consolidation workflow whose performance is assessed not only through region–overlap and pixel–wise discrepancy metrics, but also via boundary–sensitive criteria that are explicitly linked to orthomosaic–scale vegetation–cover estimation by pixel counting under GSD (Ground Sample Distance) control. The experimental design considers a transfer scenario between morphologically related crops: initial training on Opuntia spp. (prickly pear), direct (“zero–shot”) inference on Agave salmiana, fine–tuning using only 6.84% of the agave tessellated set as limited target–domain supervision, and a subsequent consolidation stage to obtain a multi–species model. The evaluation integrates IoU, Dice, RMSE, pixel accuracy, and computational cost (time per image), and additionally reports the BF score and HD95 to characterize contour fidelity, which is critical when area is derived from orthomosaic–scale masks. Results show that Transformer-based approaches tend to provide higher stability and improved boundary delineation on Opuntia spp., whereas transfer to Agave salmiana exhibits selective degradation that is mitigated through low–annotation–cost fine-tuning. On Opuntia spp., Mask2Former achieves the best test performance (IoU 0.897 ± 0.094; RMSE 0.146 ± 0.002) and, after consolidation, sustains the highest overlap on both crops (IoU 0.894 ± 0.004 on Opuntia and IoU 0.760 ± 0.046 on Agave), while preserving high contour fidelity (BF score 0.962 ± 0.102/0.877 ± 0.153; HD95 2.189 ± 3.447 px/8.458 ± 16.667 px for Opuntia/Agave), supporting its use for final vegetation–cover quantification. Overall, the study provides practical guidelines for architecture selection under hardware constraints, a reproducible transfer protocol, and an orthomosaic–oriented implementation that facilitates integration into agronomic and remote–sensing workflows.
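The headline IoU and Dice figures quoted above have simple definitions on binary masks: intersection over union, and twice the intersection over the sum of mask areas. A self-contained sketch (toy 8×8 masks; not the paper's evaluation code):

```python
import numpy as np

def iou_dice(pred, gt):
    """IoU and Dice for two binary masks of the same shape."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    total = pred.sum() + gt.sum()
    iou = inter / union if union else 1.0      # empty-vs-empty counts as perfect
    dice = 2 * inter / total if total else 1.0
    return float(iou), float(dice)

gt = np.zeros((8, 8), dtype=bool); gt[2:6, 2:6] = True      # 16-pixel square
pred = np.zeros((8, 8), dtype=bool); pred[3:7, 3:7] = True  # same square, shifted 1 px
print(iou_dice(pred, gt))
```

Boundary metrics such as the BF score and HD95 differ in that they compare only contour pixels (precision/recall within a tolerance, and the 95th-percentile contour-to-contour distance, respectively), which is why they catch edge errors that area-overlap metrics smooth over.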

21 pages, 5128 KB  
Article
Influence of Vegetation Phenology on Urban Microclimate and Thermal Comfort in Cold Regions: A Case Study of Beiyang Plaza, Tianjin University
by Yaolong Wang, Yueheng Tong, Yi Lei, Rong Chen and Tiantian Huang
Buildings 2026, 16(1), 115; https://doi.org/10.3390/buildings16010115 - 26 Dec 2025
Viewed by 161
Abstract
Vegetation phenology significantly influences urban microclimate and thermal comfort in cold regions, yet its quantitative impact—specifically the potential of deciduous trees to enhance winter solar access—remains underexplored. This study investigates how seasonal vegetation changes affect thermal conditions in an urban plaza. Field measurements were conducted at Beiyang Plaza, Tianjin University, during the autumn–winter transition. High-precision Sky View Factors (SVF) were extracted from panoramic images using a deep learning-based semantic segmentation model (PSPNet), validated against field observations. The Universal Thermal Climate Index (UTCI) was calculated to assess thermal stress. Results indicate that the leaf-off phase significantly increases SVF, shifting the radiative balance. Areas experiencing phenological changes exhibited a marked improvement in UTCI, effectively alleviating cold stress by maximizing solar gain. Advanced statistical models (ARIMAX and GAM) confirmed that, after controlling for background climatic variations, the positive effect of vegetation phenology on thermal comfort is statistically significant. These findings challenge the traditional focus on summer shading, highlighting the “winter-warming” potential of deciduous trees and providing quantitative evidence for climate-responsive urban design.
(This article belongs to the Section Architectural Design, Urban Science, and Real Estate)
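Once a panoramic image has been segmented into sky and non-sky pixels, the SVF reduces to a solid-angle-weighted fraction of sky. A simplified sketch, assuming an equiangular fisheye projection (the study's PSPNet pipeline and lens model are more elaborate; the weighting below is a textbook approximation, not the authors' code):

```python
import numpy as np

def sky_view_factor(sky_mask):
    """SVF from a binary sky mask of an assumed equiangular fisheye image.

    Annuli are weighted by sin(2 * zenith), accounting for the solid angle
    each ring of pixels subtends on the hemisphere.
    """
    h, w = sky_mask.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    r_max = min(cx, cy)
    yy, xx = np.indices(sky_mask.shape)
    r = np.hypot(yy - cy, xx - cx)
    zenith = (r / r_max) * (np.pi / 2)               # equiangular: radius ∝ zenith
    weight = np.where(r <= r_max, np.sin(2 * zenith), 0.0)
    return float((weight * sky_mask).sum() / weight.sum())

print(round(sky_view_factor(np.ones((101, 101))), 3))   # unobstructed sky → 1.0
```

Leaf-off conditions raise the SVF by turning canopy pixels into sky pixels, which is the mechanism behind the winter solar-access gain the abstract quantifies.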

28 pages, 10191 KB  
Article
A Novel Dataset Generation Strategy and a Multi-Period Farmland Cultivation Zones Dataset from Unmanned Aerial Vehicle Imagery
by Zirui Li, Jinping Gu, Siying Shang, Yang Zhou, Qing Luo, Mingxue Zheng, Xiaokai Li, Chengjun Lin and Xuefeng Guan
Agriculture 2026, 16(1), 32; https://doi.org/10.3390/agriculture16010032 - 22 Dec 2025
Viewed by 393
Abstract
Accurate delineation of farmland cultivation zones (FCZs) is crucial for advancing precision agriculture. However, identifying FCZs in landscapes where standardized and non-standard (fragmented) farmlands coexist remains a pressing challenge, primarily due to the lack of high-quality datasets covering such mixed patterns. To address this, we propose a novel tiling-based dataset generation method that integrates boundary probes and minimum-overlap Poisson-disk sampling (BP-MOPS). Using this strategy, we constructed a multi-temporal unmanned aerial vehicle (UAV) imagery dataset of FCZs—the multi-period farmland cultivation zones (MPFCZ) dataset—which encompasses three critical phenological stages: the dormant period (DP), the intermediate growing period (IGP), and the vigorous growing period (VGP). The source imagery was acquired over Zhouhu Village in China. The MPFCZ dataset comprises 6467 image patches (1024 × 1024 pixels), containing both standardized fields and fragmented cultivation zones typically missed by conventional methods. Both Transformer- and CNN-based models trained on MPFCZ surpassed those trained on a dataset generated by a conventional segmentation strategy. The best-performing model achieved remarkable temporal change detection accuracy (mIoU > 0.82 across the three phenological stages) and demonstrated strong cross-region generalization capability (0.8817 precision under zero-shot transfer). MPFCZ thus provides essential support for precise farmland identification in complex agricultural landscapes where standard and non-standard fields are mixed.
(This article belongs to the Section Artificial Intelligence and Digital Agriculture)
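Poisson-disk sampling, the basis of the BP-MOPS tiling, places sample points so that no two fall closer than a minimum radius, giving even coverage without grid artifacts. The simplest (inefficient but correct) variant is dart throwing; the paper's minimum-overlap tile placement builds on this idea, so the sketch below is a crude stand-in, not BP-MOPS itself:

```python
import math
import random

def dart_throwing(width, height, r, attempts=2000, seed=0):
    """Accept a random point only if it lies at least r from every accepted
    point. Bridson's algorithm does the same with a spatial grid in O(n)."""
    rng = random.Random(seed)
    pts = []
    for _ in range(attempts):
        p = (rng.uniform(0, width), rng.uniform(0, height))
        if all(math.dist(p, q) >= r for q in pts):
            pts.append(p)
    return pts

# hypothetical 1000 x 1000 px mosaic, tile centers at least 150 px apart
tiles = dart_throwing(1000, 1000, r=150)
print(len(tiles))
```

For tiling, each accepted center becomes a 1024 × 1024 crop; the minimum-distance constraint bounds how much adjacent tiles can overlap.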

21 pages, 21928 KB  
Article
HieraEdgeNet: A Multi-Scale Edge-Enhanced Framework for Automated Pollen Recognition
by Yuchong Long, Wen Sun, Ningxiao Sun, Wenxiao Wang, Chao Li and Shan Yin
Agriculture 2025, 15(23), 2518; https://doi.org/10.3390/agriculture15232518 - 4 Dec 2025
Cited by 1 | Viewed by 485
Abstract
Automated pollen recognition is a foundational tool for diverse scientific domains, including paleoclimatology, biodiversity monitoring, and agricultural science. However, conventional methods create a critical data bottleneck, limiting the temporal and spatial resolution of ecological analysis. Existing deep learning models often fail to achieve the requisite localization accuracy for microscopic pollen grains, which are characterized by their minute size, indistinct edges, and complex backgrounds. To overcome this, we introduce HieraEdgeNet, a novel object detection framework. The core principle of our architecture is to explicitly extract and hierarchically fuse multi-scale edge information with deep semantic features. This synergistic approach, combined with a computationally efficient large-kernel operator for fine-grained feature refinement, significantly enhances the model’s ability to perceive and precisely delineate object boundaries. On a large-scale dataset comprising 44,471 annotated microscopic images containing 342,706 pollen grains from 120 classes, HieraEdgeNet achieves a mean Average Precision of 0.9501 (mAP@0.5) and 0.8444 (mAP@0.5:0.95), substantially outperforming state-of-the-art models such as YOLOv12n and the Transformer-based RT-DETR family in terms of the accuracy–efficiency trade-off. This work provides a powerful computational tool for generating the high-throughput, high-fidelity data essential for modern ecological research, including tracking phenological shifts, assessing plant biodiversity, and reconstructing paleoenvironments. At the same time, we acknowledge that the current two-dimensional design cannot directly exploit volumetric Z-stack microscopy and that strong domain shifts between training data and real-world deployments may still degrade performance, which we identify as key directions for future work. By also enabling applications in precision agriculture, HieraEdgeNet contributes broadly to advancing ecosystem monitoring and sustainable food security.
(This article belongs to the Section Artificial Intelligence and Digital Agriculture)
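The mAP@0.5 figure above is the mean over classes of average precision, where each detection is a true positive if it matches a ground-truth box at IoU ≥ 0.5. Given detections already labeled TP/FP, AP is the area under the interpolated precision-recall curve; a minimal per-class sketch (toy numbers, not the paper's evaluation harness):

```python
def average_precision(detections, n_positives):
    """AP for one class. detections: (confidence, is_true_positive) pairs;
    n_positives: number of ground-truth boxes. All-point interpolation."""
    dets = sorted(detections, key=lambda d: -d[0])      # rank by confidence
    tp = fp = 0
    recalls, precisions = [], []
    for _, is_tp in dets:
        tp += int(is_tp)
        fp += int(not is_tp)
        recalls.append(tp / n_positives)
        precisions.append(tp / (tp + fp))
    # interpolate: precision at recall r = max precision at any recall >= r
    for i in range(len(precisions) - 2, -1, -1):
        precisions[i] = max(precisions[i], precisions[i + 1])
    ap, prev_r = 0.0, 0.0
    for r, p in zip(recalls, precisions):
        ap += (r - prev_r) * p
        prev_r = r
    return ap

dets = [(0.9, True), (0.8, True), (0.7, False), (0.6, True)]
print(average_precision(dets, n_positives=4))           # → 0.6875
```

mAP@0.5:0.95 repeats this at IoU thresholds from 0.50 to 0.95 in steps of 0.05 and averages, which is why it is the stricter of the two numbers reported.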

26 pages, 55777 KB  
Article
DELTA-SoyStage: A Lightweight Detection Architecture for Full-Cycle Soybean Growth Stage Monitoring
by Abdellah Lakhssassi, Yasser Salhi, Naoufal Lakhssassi, Khalid Meksem and Khaled Ahmed
Sensors 2025, 25(23), 7303; https://doi.org/10.3390/s25237303 - 1 Dec 2025
Viewed by 489
Abstract
The accurate identification of soybean growth stages is critical for optimizing agricultural interventions, where mistimed treatments can result in yield losses ranging from 2.5% to 40%. Existing deep learning approaches remain limited in scope, targeting isolated developmental phases rather than providing comprehensive phenological coverage. This paper presents a novel object detection architecture, DELTA-SoyStage, combining an EfficientNet backbone with a lightweight ChannelMapper neck and a newly proposed DELTA (Denoising Enhanced Lightweight Task Alignment) detection head for soybean growth stage classification. We introduce a dataset of 17,204 labeled RGB images spanning nine growth stages from emergence (VE) through full maturity (R8), collected under controlled greenhouse conditions with diverse imaging angles and lighting variations. DELTA-SoyStage achieves 73.9% average precision at only 24.4 GFLOPs of computational cost, demonstrating 4.2× fewer FLOPs than the best-performing baseline (DINO-Swin: 74.7% AP, 102.5 GFLOPs) with only a 0.8% accuracy difference. The lightweight DELTA head combined with the efficient ChannelMapper neck requires only 8.3 M parameters—a 43.5% reduction compared to standard architectures—while maintaining competitive accuracy. Extensive ablation studies validate key design choices, including task alignment mechanisms, multi-scale feature extraction strategies, and encoder–decoder depth configurations. The proposed model’s computational efficiency makes it suitable for deployment on resource-constrained edge devices in precision agriculture applications, enabling timely decision-making without reliance on cloud infrastructure.
(This article belongs to the Special Issue Application of Sensors Technologies in Agricultural Engineering)
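The GFLOPs and parameter counts used to compare these detectors come from summing per-layer costs. For a standard 2D convolution the arithmetic is simple; a sketch with hypothetical layer dimensions (the 2 × MACs convention for FLOPs is common but not universal, so reported numbers can differ by a factor of two between papers):

```python
def conv2d_cost(c_in, c_out, k, h_out, w_out):
    """Parameter and FLOP count of one k x k 2D convolution layer.

    params: weights (c_in * k * k per output channel) plus one bias each.
    flops:  2 * multiply-accumulates, one MAC per weight per output pixel.
    """
    params = c_in * k * k * c_out + c_out
    flops = 2 * (c_in * k * k) * c_out * h_out * w_out
    return params, flops

# hypothetical mid-network layer: 64 -> 128 channels, 3x3, 56x56 output
p, f = conv2d_cost(c_in=64, c_out=128, k=3, h_out=56, w_out=56)
print(p, f / 1e9)          # parameters and GFLOPs for this single layer
```

Summing such terms over every layer (plus the attention and FFN costs of any Transformer blocks) yields whole-model totals like the 24.4 vs. 102.5 GFLOPs contrast quoted above.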

20 pages, 9178 KB  
Article
Graph-Based Relaxation for Over-Normalization Avoidance in Reflectance Normalization of Multi-Temporal Satellite Imagery
by Gabriel Yedaya Immanuel Ryadi, Chao-Hung Lin and Bo-Yi Lin
Remote Sens. 2025, 17(23), 3877; https://doi.org/10.3390/rs17233877 - 29 Nov 2025
Viewed by 335
Abstract
Reflectance normalization is critical for minimizing temporal discrepancies and facilitating reliable multi-temporal satellite analysis. However, this process is challenged by the risks of under-normalization and over-normalization, which stem from the inherent complexities of varying atmospheric conditions, data acquisition, and environmental dynamics. Under-normalization occurs when multi-temporal variations are insufficiently corrected, resulting in temporal reflectance inconsistencies. Over-normalization arises when overly aggressive adjustments suppress meaningful variability, such as seasonal and phenological patterns, thereby compromising data integrity. Effectively addressing these challenges is essential for preserving the spatial and temporal fidelity of satellite imagery, which is crucial for applications such as environmental monitoring and long-term change analysis. This study introduces a novel graph-based relaxation for reflectance normalization aimed at addressing issues of under- and over-normalization through a two-stage structural normalization strategy: intra-normalization and inter-normalization. A graph structure represents adjacency and similarity among image instances, enabling an iterative relaxation process to adjust reflectance values. In the proposed framework, the intra-normalization stage aligns images within the same reflectance group to preserve temporally local reflectance patterns, while the inter-normalization stage harmonizes reflectance across different groups, ensuring smooth temporal transitions and maintaining essential temporal variability. Experimental results with the metrics root mean squared error (RMSE) and Structural Similarity Index Measure (SSIM) demonstrate the effectiveness of the proposed method. Specifically, the proposed method achieves around a 37% improvement in RMSE at the transition between two adjacent image groups compared with related normalization methods. Graph-based relaxation preserves seasonal dynamics, ensures smooth transitions, and improves vegetation indices, making it suitable for both short-term and long-term environmental change analysis.

24 pages, 10480 KB  
Article
Detecting Abandoned Cropland in Monsoon-Influenced Regions Using HLS Imagery and Interpretable Machine Learning
by Sinyoung Park, Sanae Kang, Byungmook Hwang and Dongwook W. Ko
Agronomy 2025, 15(12), 2702; https://doi.org/10.3390/agronomy15122702 - 24 Nov 2025
Viewed by 1990
Abstract
Abandoned cropland has been expanding due to complex socio-economic factors such as urbanization, demographic shifts, and declining agricultural profitability. As abandoned cropland simultaneously brings ecological, environmental, and social risks and benefits, quantitative monitoring is essential to assess its overall impact. Satellite image-based spatial data are suitable for identifying spectral characteristics related to crop phenology, and recent research has advanced the detection of large-scale abandoned cropland through changes in time-series spectral characteristics. However, frequent cloud cover and highly fragmented croplands, which vary across regions and climatic conditions, still pose significant challenges for satellite-based detection. This study combined Harmonized Landsat and Sentinel-2 (HLS) imagery, offering high temporal (2–3 days) and spatial (30 m) resolution, with the eXtreme Gradient Boosting (XGBoost) algorithm to capture seasonal spectral variations among rice paddies, upland fields, and abandoned croplands. An XGBoost model with a Balanced Bagging Classifier (BBC) was used to mitigate class imbalance. The model achieved an accuracy of 0.84, a Cohen's kappa of 0.71, and an F2 score of 0.84. SHapley Additive exPlanations (SHAP) analysis identified major features such as NIR (May–June), SWIR2 (January), MCARI (September), and BSI (January–April), reflecting phenological differences among cropland types. Overall, this study establishes a robust framework for large-scale cropland monitoring that can be adapted to different regional and climatic settings.
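The F2 score and Cohen's kappa reported here both derive from the confusion matrix: F2 is the F-beta score with beta = 2, weighting recall twice as heavily as precision (sensible when missing abandoned parcels is costlier than false alarms), and kappa corrects raw agreement for chance. A binary-case sketch with hypothetical counts:

```python
def f_beta(tp, fp, fn, beta=2.0):
    """F-beta score; beta = 2 weights recall twice as heavily as precision."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    b2 = beta * beta
    return (1 + b2) * precision * recall / (b2 * precision + recall)

def cohens_kappa(tp, fp, fn, tn):
    """Binary Cohen's kappa: observed agreement corrected for chance agreement."""
    n = tp + fp + fn + tn
    po = (tp + tn) / n                                        # observed agreement
    pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / (n * n)  # chance
    return (po - pe) / (1 - pe)

# hypothetical confusion-matrix counts for the "abandoned" class
print(round(f_beta(tp=80, fp=10, fn=20), 3), round(cohens_kappa(80, 10, 20, 90), 3))
```

The multiclass kappa used for a three-class map (paddy/upland/abandoned) generalizes `pe` to a sum over all classes' marginal products.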

21 pages, 7707 KB  
Article
Tomato Growth Monitoring and Phenological Analysis Using Deep Learning-Based Instance Segmentation and 3D Point Cloud Reconstruction
by Warut Timprae, Tatsuki Sagawa, Stefan Baar, Satoshi Kondo, Yoshifumi Okada, Kazuhiko Sato, Poltak Sandro Rumahorbo, Yan Lyu, Kyuki Shibuya, Yoshiki Gama, Yoshiki Hatanaka and Shinya Watanabe
Sustainability 2025, 17(22), 10120; https://doi.org/10.3390/su172210120 - 12 Nov 2025
Viewed by 642
Abstract
Accurate and nondestructive monitoring of tomato growth is essential for large-scale greenhouse production; however, it remains challenging for small-fruited cultivars such as cherry tomatoes. Traditional 2D image analysis often fails to capture precise morphological traits, limiting its usefulness in growth modeling and yield estimation. This study proposes an automated phenotyping framework that integrates deep learning-based instance segmentation with high-resolution 3D point cloud reconstruction and ellipsoid fitting to estimate fruit size and ripeness from daily video recordings. These techniques enable accurate camera pose estimation and dense geometric reconstruction (via SfM and MVS), while Nerfacto enhances surface continuity and photorealistic fidelity, resulting in highly precise and visually consistent 3D representations. The reconstructed models are then analyzed with CIELAB color analysis and logistic curve fitting to characterize growth dynamics. Applied under real greenhouse conditions, the method achieved an average size estimation error of 8.01% compared to manual caliper measurements. During summer, the maximum growth rates (gmax) of size and ripeness were 24.14% and 95.24% higher, respectively, than in winter. Seasonal analysis revealed that winter-grown tomatoes matured approximately 10 days later than summer-grown fruits, highlighting environmental influences on phenological development. By enabling precise, noninvasive tracking of size and ripeness progression, this approach provides a novel tool for smart and sustainable agriculture.
(This article belongs to the Special Issue Green Technology and Biological Approaches to Sustainable Agriculture)
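The logistic curve fitting mentioned in the abstract can be illustrated with a minimal sketch (synthetic data; parameter names and values are hypothetical, not the authors' implementation), assuming `scipy` is available:

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, L, k, t0):
    """Logistic growth curve: L = asymptote, k = growth rate, t0 = inflection day."""
    return L / (1.0 + np.exp(-k * (t - t0)))

# Synthetic daily fruit-size series (illustrative only, not the paper's data)
days = np.arange(0, 40, dtype=float)
rng = np.random.default_rng(0)
obs = logistic(days, 25.0, 0.3, 20.0) + rng.normal(0.0, 0.5, size=days.size)

# Fit the curve; the maximum growth rate of a logistic is g_max = L * k / 4, at t = t0
(L, k, t0), _ = curve_fit(logistic, days, obs, p0=[obs.max(), 0.1, days.mean()])
gmax = L * k / 4.0
```

Seasonal comparisons such as the reported gmax differences then reduce to fitting one curve per season and comparing the derived rates.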
27 pages, 5186 KB  
Article
Detailed Hierarchical Classification of Coastal Wetlands Using Multi-Source Time-Series Remote Sensing Data Based on Google Earth Engine
by Haonan Xu, Shaoliang Zhang, Huping Hou, Haoran Hu, Jinting Xiong and Jichen Wan
Remote Sens. 2025, 17(21), 3640; https://doi.org/10.3390/rs17213640 - 4 Nov 2025
Cited by 1 | Viewed by 1005
Abstract
Accurate and detailed mapping of coastal wetlands is essential for effective wetland resource management. However, owing to periodic tidal inundation, frequent cloud cover, and the spectral similarity of land cover types, reliable coastal wetland classification methods remain limited. To address these issues, we developed an integrated pixel- and object-based hierarchical classification strategy built on multi-source remote sensing data to achieve fine-grained coastal wetland classification on Google Earth Engine. Using a random forest classifier, pixel-level classification first separated coarse wetland and non-wetland types, followed by object-based classification to differentiate artificial and natural water bodies. In this process, multi-dimensional features (water level, phenology, variation, topography, geography, and geometry) were extracted from Sentinel-1/2 time-series images, topographic data, and shoreline data, fully capturing the variability and dynamics of coastal wetlands. Feature combinations were then optimized through Recursive Feature Elimination and Jeffries–Matusita analysis to preserve the model's ability to distinguish complex wetland types while improving efficiency. The classification strategy was applied to typical coastal wetlands in central Jiangsu in 2020 and generated a 10 m wetland map comprising 7 wetland types and 3 non-wetland types, with an overall accuracy of 92.50% and a Kappa coefficient of 0.915. Comparative analysis with existing datasets confirmed the reliability of this strategy, particularly in extracting intertidal mudflats, salt marshes, and artificial wetlands. This study provides a robust framework for fine-grained wetland mapping and supports the inventory and conservation of coastal wetland resources. Full article
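The Recursive Feature Elimination and Jeffries–Matusita steps described above can be sketched as follows (a univariate Gaussian JM approximation on synthetic data; illustrative only, not the authors' pipeline), assuming `scikit-learn` is available:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE

def jm_distance(a, b):
    """Jeffries-Matusita distance between two univariate Gaussian samples (range 0..2)."""
    m1, m2, v1, v2 = a.mean(), b.mean(), a.var(), b.var()
    vm = (v1 + v2) / 2.0  # pooled variance
    bh = 0.125 * (m1 - m2) ** 2 / vm + 0.5 * np.log(vm / np.sqrt(v1 * v2))
    return 2.0 * (1.0 - np.exp(-bh))  # values above ~1.8 are usually read as separable

# JM on two clearly separated synthetic classes approaches its maximum of 2
rng = np.random.default_rng(0)
sep = jm_distance(rng.normal(0.0, 1.0, 1000), rng.normal(10.0, 1.0, 1000))

# Recursive Feature Elimination driven by random forest feature importances
X, y = make_classification(n_samples=300, n_features=12, n_informative=4, random_state=0)
rf = RandomForestClassifier(n_estimators=100, random_state=0)
selector = RFE(rf, n_features_to_select=5).fit(X, y)
kept = np.flatnonzero(selector.support_)  # indices of the retained features
```

In a workflow like the one described, JM separability scores complement RFE by flagging class pairs a candidate feature set cannot distinguish.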

25 pages, 7226 KB  
Article
BudCAM: An Edge Computing Camera System for Bud Detection in Muscadine Grapevines
by Chi-En Chiang, Wei-Zhen Liang, Jingqiu Chen, Xin Qiao, Violeta Tsolova, Zonglin Yang and Joseph Oboamah
Agriculture 2025, 15(21), 2220; https://doi.org/10.3390/agriculture15212220 - 24 Oct 2025
Viewed by 639
Abstract
Bud break is a critical phenological stage in muscadine grapevines, marking the start of the growing season and the increasing need for irrigation management. Real-time bud detection enables irrigation to match muscadine grape phenology, conserving water and enhancing vine performance. This study presents BudCAM, a low-cost, solar-powered, edge-computing camera system based on a Raspberry Pi 5 and integrated with a LoRa radio board, developed for real-time bud detection. Nine BudCAMs were deployed at the Florida A&M University Center for Viticulture and Small Fruit Research from mid-February to mid-March 2024, monitoring three wine cultivars (A27, Noble, and Floriana) with three replicates each. Muscadine grape canopy images were captured every 20 min between 7:00 and 19:00, generating 2656 high-resolution (4656 × 3456 pixels) bud break images as a database for bud detection algorithm development. The dataset was divided into 70% training, 15% validation, and 15% test sets. YOLOv11 models were trained using two primary strategies: a direct single-stage detector on tiled raw images and a refined two-stage pipeline that first identifies the grapevine cordon. Extensive evaluation of multiple model configurations identified the top performers for both the single-stage (mAP@0.5 = 86.0%) and two-stage (mAP@0.5 = 85.0%) approaches. Further analysis revealed that preserving image scale via tiling was superior to alternative inference strategies such as resizing or slicing. Field evaluations conducted during the 2025 growing season demonstrated the system's effectiveness, with the two-stage model exhibiting superior robustness against environmental interference, particularly lens fogging. A time-series filter smooths the raw daily counts to reveal clear phenological trends for visualization. In its final deployment, the autonomous BudCAM system captures an image, performs on-device inference, and transmits the bud count in under three minutes, demonstrating a complete, field-ready solution for precision vineyard management. Full article
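The scale-preserving tiling strategy favored above can be sketched as a generic tiler with overlap, plus a helper that maps per-tile detections back to full-image coordinates (the detector itself is omitted; all names and parameters are illustrative, not BudCAM's code):

```python
import numpy as np

def tile_image(img, tile=1024, overlap=128):
    """Split a large image into overlapping tiles; yields (tile, x_offset, y_offset)."""
    h, w = img.shape[:2]
    step = tile - overlap
    for y in range(0, max(h - overlap, 1), step):
        for x in range(0, max(w - overlap, 1), step):
            yield img[y:y + tile, x:x + tile], x, y

def to_global(boxes, x_off, y_off):
    """Shift per-tile (x1, y1, x2, y2) boxes into full-image coordinates."""
    boxes = np.asarray(boxes, dtype=float)
    return boxes + np.array([x_off, y_off, x_off, y_off])

# A blank frame at BudCAM's capture resolution (4656 x 3456) splits into 24 tiles
img = np.zeros((3456, 4656), dtype=np.uint8)
tiles = list(tile_image(img))
```

Running the detector per tile at native resolution avoids the downsampling that makes small buds vanish when a full frame is resized to the network input size.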

20 pages, 33056 KB  
Article
Spatiotemporal Analysis of Vineyard Dynamics: UAS-Based Monitoring at the Individual Vine Scale
by Stefan Ruess, Gernot Paulus and Stefan Lang
Remote Sens. 2025, 17(19), 3354; https://doi.org/10.3390/rs17193354 - 2 Oct 2025
Viewed by 806
Abstract
The rapid and reliable acquisition of canopy-related metrics is essential for improving decision support in viticultural management, particularly when monitoring individual vines for targeted interventions. This study presents a spatially explicit workflow that integrates Uncrewed Aerial System (UAS) imagery, 3D point-cloud analysis, and Object-Based Image Analysis (OBIA) to detect and monitor individual grapevines throughout the growing season. Vines are identified directly from 3D point clouds without the need for prior training data or predefined row structures, achieving a mean Euclidean distance of 10.7 cm to the reference points. The OBIA framework segments vine vegetation based on spectral and geometric features without requiring pre-clipping or manual masking. All non-vine elements—including soil, grass, and infrastructure—are automatically excluded, and detailed canopy masks are created for each plant. Vegetation indices are computed exclusively from vine canopy objects, ensuring that soil signals and internal canopy gaps do not bias the results. This enables accurate per-vine assessment of vigour. NDRE values were calculated at three phenological stages—flowering, veraison, and harvest—and analyzed using Local Indicators of Spatial Association (LISA) to detect spatial clusters and outliers. In contrast to value-based clustering methods, LISA accounts for spatial continuity and neighborhood effects, allowing the detection of stable low-vigour zones, expanding high-vigour clusters, and early identification of isolated stressed vines. A strong correlation (R2 = 0.73) between per-vine NDRE values and actual yield demonstrates that NDRE-derived vigour reliably reflects vine productivity. The method provides a transferable, data-driven framework for site-specific vineyard management, enabling timely interventions at the individual plant level before stress propagates spatially. Full article
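The per-vine NDRE computation and the local spatial statistic underlying LISA can be sketched as follows (a univariate local Moran's I with a row-standardized binary weights matrix; illustrative only, not the authors' code):

```python
import numpy as np

def ndre(nir, red_edge):
    """Normalized Difference Red Edge index from band reflectances."""
    return (nir - red_edge) / (nir + red_edge)

def local_morans_i(values, weights):
    """Local Moran's I per observation; weights is a row-standardized n x n matrix.
    Positive I: value resembles its neighbors (cluster); negative I: spatial outlier."""
    z = values - values.mean()
    return z * (weights @ z) / (z ** 2).mean()

# Toy example: six vines along a row, first three high-vigour, last three low-vigour
vals = np.array([0.6, 0.6, 0.6, 0.2, 0.2, 0.2])
W = np.zeros((6, 6))
for i in range(6):
    for j in (i - 1, i + 1):
        if 0 <= j < 6:
            W[i, j] = 1.0
W /= W.sum(axis=1, keepdims=True)  # row-standardize neighbor weights
I = local_morans_i(vals, W)        # positive inside each cluster, ~0 at the boundary
```

Unlike value-based clustering, the weights matrix injects neighborhood structure, which is what lets LISA separate a stable low-vigour zone from an isolated stressed vine.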
