Search Results (833)

Search Parameters:
Keywords = RGB indices

21 pages, 5068 KiB  
Article
Estimating Household Green Space in Composite Residential Community Solely Using Drone Oblique Photography
by Meiqi Kang, Kaiyi Song, Xiaohan Liao and Jiayuan Lin
Remote Sens. 2025, 17(15), 2691; https://doi.org/10.3390/rs17152691 - 3 Aug 2025
Abstract
Residential green space is an important component of urban green space and one of the major indicators for evaluating the quality of a residential community. Traditional indicators such as the green space ratio only consider the relationship between green space area and the total area of the residential community, while ignoring the difference in the amount of green space enjoyed by households in high-rise and low-rise buildings. It is therefore meaningful to estimate household green space and its spatial distribution in residential communities. However, the specific green space area and household number are often difficult to obtain through ground surveys or consultation with property management units. In this study, taking a composite residential community in Chongqing, China, as the study site, we first employed a five-lens drone to capture oblique RGB images and generated the DOM (Digital Orthophoto Map). Subsequently, the green space area and distribution in the entire residential community were extracted from the DOM using the VDVI (Visible Difference Vegetation Index). The YOLACT (You Only Look At Coefficients) instance segmentation model was used to recognize balconies in the facade images of high-rise buildings and thereby determine their household numbers. Finally, the average green space per household in the entire residential community was calculated to be 67.82 m², and those in the high-rise and low-rise building zones were 51.28 m² and 300 m², respectively. Compared with the green space ratios of 65.5% and 50%, household green space more faithfully reflected the actual green space available per household in the high- and low-rise building zones.
(This article belongs to the Special Issue Application of Remote Sensing in Landscape Ecology)
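
As a rough sketch of the VDVI extraction step, the following Python snippet computes the index from an RGB orthophoto and converts the resulting mask into per-household green area; the threshold, ground sampling distance, and household count are illustrative placeholders, not values from the paper.

```python
import numpy as np

def vdvi(rgb: np.ndarray) -> np.ndarray:
    """Visible-band difference vegetation index from a float RGB array (H, W, 3)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return (2 * g - r - b) / (2 * g + r + b + 1e-9)  # epsilon avoids division by zero

# Pixels above a threshold are treated as green space; 0.05 is a placeholder,
# not the paper's calibrated cutoff.
dom = np.random.rand(1000, 1000, 3)   # stand-in for the drone orthophoto (DOM)
green_mask = vdvi(dom) > 0.05
gsd = 0.05                            # assumed ground sampling distance, m/pixel
green_area_m2 = green_mask.sum() * gsd ** 2
households = 500                      # would come from the YOLACT balcony counts
print(f"~{green_area_m2 / households:.1f} m2 of green space per household")
```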

20 pages, 4847 KiB  
Article
FCA-STNet: Spatiotemporal Growth Prediction and Phenotype Extraction from Image Sequences for Cotton Seedlings
by Yiping Wan, Bo Han, Pengyu Chu, Qiang Guo and Jingjing Zhang
Plants 2025, 14(15), 2394; https://doi.org/10.3390/plants14152394 - 2 Aug 2025
Viewed by 127
Abstract
To address the limitations of existing cotton seedling growth prediction methods in field environments, specifically poor representation of spatiotemporal features and low visual fidelity in texture rendering, this paper proposes an algorithm for predicting cotton seedling growth from images based on FCA-STNet. The model leverages historical sequences of cotton seedling RGB images to generate an image of the predicted growth at time t + 1 and extracts 37 phenotypic traits from the predicted image. A novel STNet structure is designed to enhance the representation of spatiotemporal dependencies, while an Adaptive Fine-Grained Channel Attention (FCA) module is integrated to capture both global and local feature information. This attention mechanism focuses on individual cotton plants and their textural characteristics, effectively reducing interference from common field-related challenges such as insufficient lighting, leaf fluttering, and wind disturbances. The experimental results demonstrate that the predicted images achieved an MSE of 0.0086, MAE of 0.0321, SSIM of 0.8339, and PSNR of 20.7011 on the test set, representing improvements of 2.27%, 0.31%, 4.73%, and 11.20%, respectively, over the baseline STNet. The method outperforms several mainstream spatiotemporal prediction models. Furthermore, most of the predicted phenotypic traits exhibited correlation coefficients above 0.8 with actual measurements, indicating high prediction accuracy. The proposed FCA-STNet model enables visually realistic prediction of cotton seedling growth in open-field conditions, offering a new perspective for research in growth prediction.
(This article belongs to the Special Issue Advances in Artificial Intelligence for Plant Research)
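
The image-quality metrics reported above (MSE, MAE, SSIM, PSNR) can be reproduced for any predicted/observed frame pair along these lines; this minimal sketch assumes images normalized to [0, 1] and uses scikit-image for SSIM and PSNR.

```python
import numpy as np
from skimage.metrics import structural_similarity, peak_signal_noise_ratio

def image_metrics(pred: np.ndarray, target: np.ndarray) -> dict:
    """MSE, MAE, SSIM, and PSNR between a predicted and an observed RGB frame."""
    return {
        "MSE": float(np.mean((pred - target) ** 2)),
        "MAE": float(np.mean(np.abs(pred - target))),
        "SSIM": structural_similarity(pred, target, data_range=1.0, channel_axis=-1),
        "PSNR": peak_signal_noise_ratio(target, pred, data_range=1.0),
    }

pred = np.random.rand(128, 128, 3)    # stand-ins for predicted / ground-truth frames
target = np.random.rand(128, 128, 3)
print(image_metrics(pred, target))
```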

28 pages, 4026 KiB  
Article
Multi-Trait Phenotypic Analysis and Biomass Estimation of Lettuce Cultivars Based on SFM-MVS
by Tiezhu Li, Yixue Zhang, Lian Hu, Yiqiu Zhao, Zongyao Cai, Tingting Yu and Xiaodong Zhang
Agriculture 2025, 15(15), 1662; https://doi.org/10.3390/agriculture15151662 - 1 Aug 2025
Viewed by 176
Abstract
To address the problems of traditional methods that rely on destructive sampling, the poor adaptability of fixed equipment, and the susceptibility of single-view measurements to occlusion, a non-destructive and portable device for three-dimensional phenotyping and biomass detection in lettuce was developed. Based on Structure-from-Motion Multi-View Stereo (SFM-MVS) algorithms, a high-precision three-dimensional point cloud model was reconstructed from multi-view RGB image sequences, and 12 phenotypic parameters, such as plant height and crown width, were accurately extracted. Regression analyses of plant height, crown width, and crown height yielded R² values of 0.98, 0.99, and 0.99, with RMSE values of 2.26 mm, 1.74 mm, and 1.69 mm, respectively. On this basis, four biomass prediction models were developed using Adaptive Boosting (AdaBoost), Support Vector Regression (SVR), Gradient Boosting Decision Tree (GBDT), and Random Forest Regression (RFR). The results indicated that the RFR model based on the projected convex hull area, point cloud convex hull surface area, and projected convex hull perimeter performed best, with an R² of 0.90, an RMSE of 2.63 g, and an RMSEn of 9.53%, indicating that the RFR was able to accurately estimate lettuce biomass. This research achieves three-dimensional reconstruction and accurate biomass prediction of greenhouse-grown lettuce, and provides a portable, lightweight solution for monitoring the growth of protected crops.
(This article belongs to the Section Crop Production)
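
A minimal sketch of the best-performing setup described above, a Random Forest regressor on the three convex-hull features, is shown below on synthetic data; the feature values and biomass targets are stand-ins, not the study's measurements.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Stand-in features mirroring the paper's best predictors: projected convex hull
# area, point-cloud convex hull surface area, and projected convex hull perimeter.
X = rng.uniform(size=(120, 3))
y = 30 * X[:, 0] + 10 * X[:, 1] + 5 * X[:, 2] + rng.normal(0, 1, 120)  # synthetic biomass (g)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
rfr = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
pred = rfr.predict(X_te)
rmse = mean_squared_error(y_te, pred) ** 0.5
print(f"R2={r2_score(y_te, pred):.2f}  RMSE={rmse:.2f} g  RMSEn={100 * rmse / y_te.mean():.1f}%")
```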

21 pages, 4657 KiB  
Article
A Semi-Automated RGB-Based Method for Wildlife Crop Damage Detection Using QGIS-Integrated UAV Workflow
by Sebastian Banaszek and Michał Szota
Sensors 2025, 25(15), 4734; https://doi.org/10.3390/s25154734 (registering DOI) - 31 Jul 2025
Viewed by 116
Abstract
Monitoring crop damage caused by wildlife remains a significant challenge in agricultural management, particularly in large-scale monocultures such as maize. This study presents a semi-automated process for detecting wildlife-induced damage using RGB imagery acquired from unmanned aerial vehicles (UAVs). The method is designed for non-specialist users and is fully integrated within the QGIS platform. The proposed approach involves calculating three vegetation indices, Excess Green (ExG), Green Leaf Index (GLI), and Modified Green-Red Vegetation Index (MGRVI), from a standardized orthomosaic generated from UAV-collected RGB images. Subsequently, an unsupervised k-means clustering algorithm was applied to divide the field into five vegetation vigor classes. Within each class, the 25% of pixels with the lowest average index values were preliminarily classified as damaged. A dedicated QGIS plugin enables drone data analysts (DDAs) to interactively adjust the index thresholds based on visual interpretation. The method was validated on a 50-hectare maize field, where 7 hectares of damage (15% of the area) were identified. The results indicate a high level of agreement between the automated and manual classifications, with an overall accuracy of 81%. The highest concentration of damage occurred in the "moderate" and "low" vigor zones. Final products included vigor classification maps, binary damage masks, and summary reports in HTML and DOCX formats with visualizations and statistical data. The results confirm the effectiveness and scalability of the proposed RGB-based procedure for crop damage assessment. The method offers a repeatable, cost-effective, and field-operable alternative to multispectral or AI-based approaches, making it suitable for integration with precision agriculture practices and wildlife population management.
(This article belongs to the Section Remote Sensors)
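
The core of the workflow, RGB index computation, five-class k-means clustering, and flagging the lowest 25% of pixels per class, can be sketched as follows; the index formulas are the standard published definitions, and the orthomosaic is a random stand-in.

```python
import numpy as np
from sklearn.cluster import KMeans

def rgb_indices(rgb: np.ndarray) -> np.ndarray:
    """Stack ExG, GLI, and MGRVI computed from a float RGB array (H, W, 3)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    exg = 2 * g - r - b
    gli = (2 * g - r - b) / (2 * g + r + b + 1e-9)
    mgrvi = (g ** 2 - r ** 2) / (g ** 2 + r ** 2 + 1e-9)
    return np.stack([exg, gli, mgrvi], axis=-1)

ortho = np.random.rand(400, 400, 3)                 # stand-in for the RGB orthomosaic
feats = rgb_indices(ortho).reshape(-1, 3)
vigor = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(feats)

# Within each vigor class, flag the 25% of pixels with the lowest mean index value
# as candidate damage (the threshold the plugin then lets the analyst adjust).
mean_idx = feats.mean(axis=1)
damage = np.zeros(len(feats), dtype=bool)
for k in range(5):
    members = vigor == k
    cutoff = np.quantile(mean_idx[members], 0.25)
    damage[members & (mean_idx <= cutoff)] = True
damage_mask = damage.reshape(ortho.shape[:2])
print(f"flagged {damage_mask.mean():.1%} of pixels as candidate damage")
```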

28 pages, 7240 KiB  
Article
MF-FusionNet: A Lightweight Multimodal Network for Monitoring Drought Stress in Winter Wheat Based on Remote Sensing Imagery
by Qiang Guo, Bo Han, Pengyu Chu, Yiping Wan and Jingjing Zhang
Agriculture 2025, 15(15), 1639; https://doi.org/10.3390/agriculture15151639 - 29 Jul 2025
Viewed by 231
Abstract
To improve the identification of drought-affected areas in winter wheat, this paper proposes a lightweight network called MF-FusionNet based on multimodal fusion of RGB images and vegetation indices (NDVI and EVI). A multimodal dataset covering various drought levels in winter wheat was constructed. To enable deep fusion of the modalities, a Lightweight Multimodal Fusion Block (LMFB) was designed, and a Dual-Coordinate Attention Feature Extraction module (DCAFE) was introduced to enhance semantic feature representation and improve drought region identification. To address differences in scale and semantics across network layers, a Cross-Stage Feature Fusion Strategy (CFFS) was proposed to integrate multi-level features and enhance overall performance. The effectiveness of each module was validated through ablation experiments. Compared to traditional single-modal methods, MF-FusionNet improved accuracy, recall, and F1-score by 1.35%, 1.43%, and 1.29%, respectively, reaching 96.71%, 96.71%, and 96.64%. This study provides a basis for real-time monitoring and precise irrigation management under winter wheat drought stress.
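
As an illustration of how the RGB and index modalities might be assembled before fusion (the paper's LMFB fusion block itself is not reproduced here), this sketch stacks NDVI and EVI maps with the RGB channels; the band arrays and the MODIS-style EVI coefficients are assumptions.

```python
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    return (nir - red) / (nir + red + 1e-9)

def evi(nir: np.ndarray, red: np.ndarray, blue: np.ndarray) -> np.ndarray:
    # Standard MODIS-style EVI coefficients; the paper's exact scaling is not given here.
    return 2.5 * (nir - red) / (nir + 6 * red - 7.5 * blue + 1 + 1e-9)

rgb = np.random.rand(256, 256, 3)     # stand-in RGB tile (R, G, B channel order)
nir = np.random.rand(256, 256)        # stand-in near-infrared band
v_ndvi = ndvi(nir, rgb[..., 0])
v_evi = evi(nir, rgb[..., 0], rgb[..., 2])
# Five-channel tensor: RGB plus the two index maps, ready for a fusion network.
x = np.concatenate([rgb, v_ndvi[..., None], v_evi[..., None]], axis=-1)
print(x.shape)  # (256, 256, 5)
```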

18 pages, 8446 KiB  
Article
Evaluation of Single-Shot Object Detection Models for Identifying Fanning Behavior in Honeybees at the Hive Entrance
by Tomyslav Sledevič
Agriculture 2025, 15(15), 1609; https://doi.org/10.3390/agriculture15151609 - 25 Jul 2025
Viewed by 269
Abstract
Thermoregulatory fanning behavior in honeybees is a vital indicator of colony health and environmental response. This study presents a novel dataset of 18,000 annotated video frames containing 57,597 instances capturing fanning behavior at the hive entrance across diverse conditions. Three state-of-the-art single-shot object detection models (YOLOv8, YOLO11, YOLO12) are evaluated using standard RGB input and two motion-enhanced encodings: Temporally Stacked Grayscale (TSG) and Temporally Encoded Motion (TEM). Results show that models incorporating temporal information via TSG and TEM significantly outperform RGB-only input, achieving up to 85% mAP@50 with real-time inference capability on high-performance GPUs. Deployment tests on the Jetson AGX Orin platform demonstrate feasibility for edge computing, though with accuracy–speed trade-offs in smaller models. This work advances real-time, non-invasive monitoring of hive health, with implications for precision apiculture and automated behavioral analysis.
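
A plausible reading of the Temporally Stacked Grayscale (TSG) encoding is packing three consecutive grayscale frames into the three input channels, as sketched below; the paper's exact construction may differ.

```python
import numpy as np

def temporally_stacked_grayscale(frames: np.ndarray) -> np.ndarray:
    """Pack three consecutive grayscale frames into one 3-channel input.

    `frames` is (T, H, W) in [0, 1]; output is (T - 2, H, W, 3), so each sample
    carries short-term motion cues that a single RGB frame cannot.
    """
    return np.stack([frames[:-2], frames[1:-1], frames[2:]], axis=-1)

clip = np.random.rand(10, 320, 320)   # stand-in for 10 grayscale video frames
tsg = temporally_stacked_grayscale(clip)
print(tsg.shape)  # (8, 320, 320, 3)
```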

23 pages, 3301 KiB  
Article
An Image-Based Water Turbidity Classification Scheme Using a Convolutional Neural Network
by Itzel Luviano Soto, Yajaira Concha-Sánchez and Alfredo Raya
Computation 2025, 13(8), 178; https://doi.org/10.3390/computation13080178 - 23 Jul 2025
Viewed by 261
Abstract
Given the importance of turbidity as a key indicator of water quality, this study investigates the use of a convolutional neural network (CNN) to classify water samples into five turbidity-based categories. These classes were defined using ranges inspired by Mexican environmental regulations and generated from 33 laboratory-prepared mixtures with varying concentrations of suspended clay particles. Red, green, and blue (RGB) images of each sample were captured under controlled optical conditions, and turbidity was measured using a calibrated turbidimeter. A transfer learning (TL) approach was applied using EfficientNet-B0, a deep yet computationally efficient CNN architecture. The model achieved an average accuracy of 99% across ten independent training runs, with minimal misclassifications. The use of a lightweight deep learning model, combined with a standardized image acquisition protocol, represents a novel and scalable alternative for rapid, low-cost water quality assessment in future environmental monitoring systems.
(This article belongs to the Section Computational Engineering)
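
The transfer-learning setup described, EfficientNet-B0 with a five-class head, looks roughly like this in PyTorch/torchvision; the fine-tuning schedule and data pipeline are omitted.

```python
import torch
import torch.nn as nn
from torchvision import models

# Load ImageNet weights and swap the classification head for the five
# turbidity classes; the training loop itself is a placeholder left out here.
weights = models.EfficientNet_B0_Weights.IMAGENET1K_V1
model = models.efficientnet_b0(weights=weights)
model.classifier[1] = nn.Linear(model.classifier[1].in_features, 5)

x = torch.randn(4, 3, 224, 224)   # a batch of water-sample RGB images
logits = model(x)
print(logits.shape)                # torch.Size([4, 5])
```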

13 pages, 1228 KiB  
Brief Report
Lipopolysaccharide-Activated Macrophages Suppress Cellular Senescence and Promote Rejuvenation in Human Dermal Fibroblasts
by Hiroyuki Inagawa, Chie Kohchi, Miyuki Uehiro and Gen-Ichiro Soma
Int. J. Mol. Sci. 2025, 26(15), 7061; https://doi.org/10.3390/ijms26157061 - 22 Jul 2025
Viewed by 274
Abstract
Tissue-resident macrophages are essential for skin homeostasis. This study investigated whether lipopolysaccharide (LPS)-activated macrophages affect senescence and rejuvenation in human dermal fibroblasts. Human monocytic THP-1 cells were stimulated with Pantoea agglomerans–derived LPS (1–1000 ng/mL), and culture supernatants were collected. These were applied to two NB1RGB fibroblast populations: young, actively dividing cells (Young cells) and senescent cells with high population doubling levels and reduced proliferation (Old cells). Senescence markers P16, P21, and Ki-67 were analyzed at gene and protein levels. Conditioned medium from Old cells induced senescence in Young cells, increasing P16 and P21 expression levels. This effect was suppressed by cotreatment with LPS-activated THP-1 supernatant. Old cells treated with the LPS-activated supernatant exhibited decreased P16 and P21 levels as well as increased Ki-67 expression, indicating partial rejuvenation. These effects were not observed following treatment with unstimulated THP-1 supernatants or LPS alone. Overall, these findings suggest that secretory factors from LPS-activated macrophages can suppress cellular senescence and promote human dermal fibroblast rejuvenation, highlighting the potential role of macrophage activation in regulating cellular aging and offering a promising strategy for skin aging intervention.
(This article belongs to the Special Issue Lipopolysaccharide in the Health and Disease)

22 pages, 4406 KiB  
Article
Colorectal Cancer Detection Tool Developed with Neural Networks
by Alex Ede Danku, Eva Henrietta Dulf, Alexandru George Berciu, Noemi Lorenzovici and Teodora Mocan
Appl. Sci. 2025, 15(15), 8144; https://doi.org/10.3390/app15158144 - 22 Jul 2025
Viewed by 253
Abstract
In the last two decades, there has been a considerable surge in the development of artificial intelligence. Imaging is most frequently employed in the diagnostic evaluation of patients, as it is regarded as one of the most precise methods for identifying the presence of a disease. However, one study indicates that approximately 800,000 individuals in the USA die or incur permanent disability because of misdiagnosis. The present study is based on computer-aided diagnosis of colorectal cancer. The objective is to develop a practical, low-cost, AI-based decision-support tool that integrates clinical test data (blood/stool) and, if needed, colonoscopy images, to help clinicians reduce misdiagnosis and improve early detection of colorectal cancer. Convolutional neural networks (CNNs) and artificial neural networks (ANNs) are utilized in conjunction with a graphical user interface (GUI), which caters to users lacking programming expertise. The performance of the ANN is measured using the mean squared error (MSE) metric, with an obtained value of 7.38. For the CNN, two distinct cases are considered: one with two outputs and one with three outputs. In the first case, model precision is 97.2% for RGB images and 96.7% for grayscale images; in the second, 83% for RGB and 82% for grayscale. However, using a pretrained network yielded superior performance, with 99.5% for the 2-output models and 93% for the 3-output models. The GUI is composed of two panels, one using the best ANN model and the other the best CNN model. The primary function of the tool is to assist medical personnel in reducing both the time required to make decisions and the probability of misdiagnosis.
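
A minimal sketch of the tabular ANN branch, a small multilayer perceptron regressor on clinical test features evaluated by MSE, is given below on synthetic data; the real feature set, architecture, and targets are not public here.

```python
import numpy as np
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 8))                          # stand-in blood/stool test panels
y = X @ rng.normal(size=8) + rng.normal(0, 0.5, 300)   # synthetic risk score

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
scaler = StandardScaler().fit(X_tr)
ann = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
ann.fit(scaler.transform(X_tr), y_tr)
print("MSE:", mean_squared_error(y_te, ann.predict(scaler.transform(X_te))))
```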

17 pages, 2307 KiB  
Article
DeepBiteNet: A Lightweight Ensemble Framework for Multiclass Bug Bite Classification Using Image-Based Deep Learning
by Doston Khasanov, Halimjon Khujamatov, Muksimova Shakhnoza, Mirjamol Abdullaev, Temur Toshtemirov, Shahzoda Anarova, Cheolwon Lee and Heung-Seok Jeon
Diagnostics 2025, 15(15), 1841; https://doi.org/10.3390/diagnostics15151841 - 22 Jul 2025
Viewed by 323
Abstract
Background/Objectives: The accurate identification of insect bites from images of skin is daunting due to the fine gradations among diverse bite types, variability in human skin response, and inconsistencies in image quality. Methods: In this work, we introduce DeepBiteNet, a new ensemble-based deep learning model designed to perform robust multiclass classification of insect bites from RGB images. The model aggregates three semantically diverse convolutional neural networks (DenseNet121, EfficientNet-B0, and MobileNetV3-Small) using a stacked meta-classifier that combines their predicted outcomes into a single, discriminatively strong output. This technique balances heterogeneous feature representation with suppression of individual model biases. DeepBiteNet was trained and evaluated on a hand-collected set of 1932 labeled images representing eight classes, consisting of common bites such as mosquito, flea, and tick bites, as well as unaffected skin. A domain-specific augmentation pipeline introduced realistic variability in lighting, occlusion, and skin tone, thereby boosting generalizability. Results: DeepBiteNet achieved a training accuracy of 89.7%, a validation accuracy of 85.1%, and a test accuracy of 84.6%, and surpassed fifteen benchmark CNN architectures on all key indicators: precision (0.880), recall (0.870), and F1-score (0.875). Optimized for mobile deployment with quantization and TensorFlow Lite, the model enables rapid on-device computation and eliminates reliance on cloud-based processing. Conclusions: This work shows how ensemble learning, when carefully designed and combined with realistic data augmentation, can boost the reliability and usability of automatic insect bite diagnosis. DeepBiteNet forms a promising foundation for future integration with mobile health (mHealth) solutions and may complement early diagnosis and triage in dermatologically underserved regions.
(This article belongs to the Special Issue Artificial Intelligence in Biomedical Diagnostics and Analysis 2024)
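
Stacked ensembling of the kind described can be sketched as follows: the three base CNNs' class probabilities become meta-features for a logistic-regression meta-classifier. The probability generator below is synthetic, and the in-sample score is purely illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, k = 200, 8                      # samples, bite classes
y = rng.integers(0, k, n)

def fake_probs(y: np.ndarray, skill: float) -> np.ndarray:
    """Stand-in for one base CNN's softmax output at a given accuracy level."""
    p = rng.dirichlet(np.ones(k), n)
    hit = rng.random(n) < skill
    p[hit] = 0.1 / (k - 1)         # spread a little mass over the wrong classes
    p[hit, y[hit]] = 0.9           # and concentrate the rest on the true class
    return p

# Concatenate the three base models' class probabilities as meta-features,
# then let a logistic-regression meta-classifier arbitrate among them.
meta_X = np.hstack([fake_probs(y, s) for s in (0.80, 0.78, 0.75)])
meta = LogisticRegression(max_iter=1000).fit(meta_X, y)
print("stacked accuracy (in-sample):", meta.score(meta_X, y))
```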

32 pages, 6622 KiB  
Article
Health Monitoring of Abies nebrodensis Combining UAV Remote Sensing Data, Climatological and Weather Observations, and Phytosanitary Inspections
by Lorenzo Arcidiaco, Manuela Corongiu, Gianni Della Rocca, Sara Barberini, Giovanni Emiliani, Rosario Schicchi, Peppuccio Bonomo, David Pellegrini and Roberto Danti
Forests 2025, 16(7), 1200; https://doi.org/10.3390/f16071200 - 21 Jul 2025
Viewed by 300
Abstract
Abies nebrodensis L. is a critically endangered conifer endemic to Sicily (Italy). Its residual population is confined to the Madonie mountain range under challenging climatological conditions. Despite the good adaptation shown by the relict population to the environmental conditions occurring in its habitat, Abies nebrodensis is subject to a series of threats, including climate change. Effective conservation strategies require reliable and versatile methods for monitoring its health status. Combining high-resolution remote sensing data with reanalysis of climatological datasets, this study aimed to identify correlations between vegetation indices (NDVI, GreenDVI, and EVI) and key climatological variables (temperature and precipitation) using advanced machine learning techniques. High-resolution RGB (Red, Green, Blue) and IrRG (infrared, Red, Green) maps were used to delineate tree crowns and extract statistics related to the selected vegetation indices. The results of phytosanitary inspections and multispectral analyses showed that the microclimatic conditions at the site level influence both the impact of crown disorders and tree physiology in terms of water content and photosynthetic activity. Hence, the correlation between the phytosanitary inspection results and vegetation indices suggests that multispectral techniques with drones can provide reliable indications of the health status of Abies nebrodensis trees. The findings of this study provide significant insights into the influence of environmental stress on Abies nebrodensis and offer a basis for developing new monitoring procedures that could assist in managing conservation measures.
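
The correlation analysis between crown-level vegetation indices and climate variables might look like the following pandas sketch; the per-tree table is synthetic, and the column names are assumptions made for illustration.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
# Stand-in per-tree table: crown-level index means plus site weather summaries.
df = pd.DataFrame({
    "ndvi_mean": rng.uniform(0.4, 0.9, 30),
    "green_dvi": rng.uniform(0.1, 0.5, 30),
    "evi_mean": rng.uniform(0.2, 0.7, 30),
    "t_mean_c": rng.uniform(8, 16, 30),
    "precip_mm": rng.uniform(400, 900, 30),
})
# Pearson correlations between each vegetation index and each climate variable.
print(df.corr().loc[["ndvi_mean", "green_dvi", "evi_mean"], ["t_mean_c", "precip_mm"]])
```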

26 pages, 9183 KiB  
Review
Application of Image Computing in Non-Destructive Detection of Chinese Cuisine
by Xiaowei Huang, Zexiang Li, Zhihua Li, Jiyong Shi, Ning Zhang, Zhou Qin, Liuzi Du, Tingting Shen and Roujia Zhang
Foods 2025, 14(14), 2488; https://doi.org/10.3390/foods14142488 - 16 Jul 2025
Viewed by 486
Abstract
Food quality and safety are paramount in preserving the culinary authenticity and cultural integrity of Chinese cuisine, characterized by intricate ingredient combinations, diverse cooking techniques (e.g., stir-frying, steaming, and braising), and region-specific flavor profiles. Traditional non-destructive detection methods often struggle with the unique challenges posed by Chinese dishes, including complex textural variations in staple foods (e.g., noodles, dumplings), layered seasoning compositions (e.g., soy sauce, Sichuan peppercorns), and oil-rich cooking media. This study pioneers a hyperspectral imaging framework enhanced with domain-specific deep learning algorithms (spatial–spectral convolutional networks with attention mechanisms) to address these challenges. Our approach effectively deciphers the subtle spectral fingerprints of ingredients specific to Chinese cuisine (e.g., fermented black beans, lotus root) and quantifies critical quality indicators, achieving an average classification accuracy of 97.8% across 15 major Chinese dish categories. Specifically, the model demonstrates high precision in quantifying chili oil content in Mapo Tofu, with a Mean Absolute Error (MAE) of 0.43% w/w, and in assessing freshness gradients in Cantonese dim sum (Shrimp Har Gow), with a classification accuracy of 95.2% across three distinct freshness levels. This approach leverages the detailed spectral information provided by hyperspectral imaging to automate the classification and detection of Chinese dishes, improving the accuracy of image-based food classification by more than 15 percentage points compared with traditional RGB methods and enhancing food quality and safety assessment.
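
One common building block behind "spatial–spectral networks with attention" is channel attention over the band axis of the hyperspectral cube; the following PyTorch module is a generic squeeze-and-excitation sketch, not the architecture of the reviewed systems.

```python
import torch
import torch.nn as nn

class SpectralAttention(nn.Module):
    """Squeeze-and-excitation over the band axis of a hyperspectral cube."""
    def __init__(self, bands: int, reduction: int = 8):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(bands, bands // reduction), nn.ReLU(inplace=True),
            nn.Linear(bands // reduction, bands), nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:   # x: (N, bands, H, W)
        w = self.fc(x.mean(dim=(2, 3)))                    # per-band weights from global pooling
        return x * w[:, :, None, None]                     # reweight informative wavelengths

cube = torch.randn(2, 64, 32, 32)          # a 64-band hyperspectral patch
print(SpectralAttention(64)(cube).shape)   # torch.Size([2, 64, 32, 32])
```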

20 pages, 10320 KiB  
Article
Advancing Grapevine Disease Detection Through Airborne Imaging: A Pilot Study in Emilia-Romagna (Italy)
by Virginia Strati, Matteo Albéri, Alessio Barbagli, Stefano Boncompagni, Luca Casoli, Enrico Chiarelli, Ruggero Colla, Tommaso Colonna, Nedime Irem Elek, Gabriele Galli, Fabio Gallorini, Enrico Guastaldi, Ghulam Hasnain, Nicola Lopane, Andrea Maino, Fabio Mantovani, Filippo Mantovani, Gian Lorenzo Mazzoli, Federica Migliorini, Dario Petrone, Silvio Pierini, Kassandra Giulia Cristina Raptis and Rocchina Tiso
Remote Sens. 2025, 17(14), 2465; https://doi.org/10.3390/rs17142465 - 16 Jul 2025
Viewed by 375
Abstract
Innovative applications of high-resolution airborne imaging are explored for detecting grapevine diseases. Driven by the motivation to enhance early disease detection, the method's effectiveness lies in its capacity to identify isolated cases of grapevine yellows (Flavescence dorée and Bois Noir) and trunk disease (Esca complex), which is crucial for preventing spread to unaffected areas. Conducted over a 17 ha vineyard in the Forlì municipality in Emilia-Romagna (Italy), the aerial survey utilized a photogrammetric camera capturing centimeter-resolution images of the whole area in 17 minutes. These images were then processed through an automated analysis leveraging RGB-based spectral indices (Green–Red Vegetation Index, GRVI; Green–Blue Vegetation Index, GBVI; and Blue–Red Vegetation Index, BRVI). The analysis scanned the 1.24 × 10⁹ pixels of the orthomosaic, detecting 0.4% of the vineyard area showing evidence of disease. The instance, density, and incidence maps provide insights into the spatial distribution of symptoms and facilitate precise interventions. The ground observation campaign yielded high specificity (0.96) and good sensitivity (0.56). Statistical analysis revealed a significant edge effect in symptom distribution, with higher disease occurrence near vineyard borders. This pattern, confirmed by spatial autocorrelation and non-parametric tests, likely reflects increased vector activity and environmental stress at the vineyard margins. This pilot study not only provides a reliable detection tool for grapevine diseases but also lays the groundwork for an early warning system that, if extended to larger areas, could guide on-the-ground monitoring and facilitate strategic decision-making by the authorities.
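
The RGB index screening could be sketched as below; GRVI follows its standard normalized-difference definition, while the GBVI and BRVI formulas and the decision threshold are assumptions made for illustration.

```python
import numpy as np

def norm_diff(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    return (a - b) / (a + b + 1e-9)

ortho = np.random.rand(2000, 2000, 3)   # stand-in orthomosaic tile
r, g, b = ortho[..., 0], ortho[..., 1], ortho[..., 2]
grvi = norm_diff(g, r)   # Green-Red Vegetation Index (standard definition)
gbvi = norm_diff(g, b)   # Green-Blue, assumed analogous normalized difference
brvi = norm_diff(b, r)   # Blue-Red, assumed analogous normalized difference

# Flag candidate symptomatic canopy where green dominance collapses; the
# threshold is illustrative, not the study's calibrated cutoff.
symptomatic = (grvi < 0.02) & (gbvi < 0.02)
print(f"{symptomatic.mean():.2%} of pixels flagged")
```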

32 pages, 6589 KiB  
Article
Machine Learning (AutoML)-Driven Wheat Yield Prediction for European Varieties: Enhanced Accuracy Using Multispectral UAV Data
by Krstan Kešelj, Zoran Stamenković, Marko Kostić, Vladimir Aćin, Dragana Tekić, Tihomir Novaković, Mladen Ivanišević, Aleksandar Ivezić and Nenad Magazin
Agriculture 2025, 15(14), 1534; https://doi.org/10.3390/agriculture15141534 - 16 Jul 2025
Viewed by 511
Abstract
Accurate and timely wheat yield prediction is valuable globally for enhancing agricultural planning, optimizing resource use, and supporting trade strategies. This study addresses the need for precision in yield estimation by applying machine-learning (ML) regression models to high-resolution Unmanned Aerial Vehicle (UAV) multispectral (MS) and Red-Green-Blue (RGB) imagery. The research analyzes five European wheat cultivars across 400 experimental plots created by combining 20 nitrogen, phosphorus, and potassium (NPK) fertilizer treatments. Yield variation from 1.41 to 6.42 t/ha provided diverse data that strengthened model robustness. The ML approach was automated using PyCaret, which optimized and evaluated 25 regression models based on 65 vegetation indices and yield data, resulting in 66 feature variables across 400 observations. The dataset, split into training (70%) and testing (30%) sets, was used to predict yields at three growth stages: 9 May, 20 May, and 6 June 2022. Key models achieved high accuracy, with the Support Vector Regression (SVR) model reaching R² = 0.95 on 9 May and R² = 0.91 on 6 June, and the Multi-Layer Perceptron (MLP) Regressor attaining R² = 0.94 on 20 May. The findings underscore the effectiveness of precisely measured MS indices and a rigorous experimental approach in achieving high-accuracy yield predictions. This study demonstrates how a precise experimental setup, large-scale field data, and AutoML can harness the potential of UAVs and machine learning to enhance wheat yield predictions. The main limitation of this study lies in its focus on experimental fields under specific conditions; future research could explore adaptability to diverse environments and wheat varieties for broader applicability.
(This article belongs to the Special Issue Applications of Remote Sensing in Agricultural Soil and Crop Mapping)
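
The PyCaret-driven AutoML step maps closely onto the library's regression API, roughly as follows; the feature table below is synthetic, and the 70/30 split mirrors the one reported.

```python
import numpy as np
import pandas as pd
from pycaret.regression import compare_models, predict_model, setup

rng = np.random.default_rng(0)
# Stand-in table: 65 vegetation-index features plus plot yield (t/ha).
data = pd.DataFrame(rng.uniform(size=(400, 65)),
                    columns=[f"vi_{i}" for i in range(65)])
data["yield_t_ha"] = 1.4 + 5 * data["vi_0"] + rng.normal(0, 0.3, 400)

s = setup(data, target="yield_t_ha", train_size=0.7, session_id=0)
best = compare_models()            # ranks the candidate regressors by CV metrics
print(predict_model(best).head())  # hold-out predictions for the winning model
```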

36 pages, 5913 KiB  
Article
Design and Temperature Control of a Novel Aeroponic Plant Growth Chamber
by Ali Guney and Oguzhan Cakir
Electronics 2025, 14(14), 2801; https://doi.org/10.3390/electronics14142801 - 11 Jul 2025
Viewed by 401
Abstract
It is projected that the world population will quadruple over the next century, and to meet future food demands, agricultural production will need to increase by 70%. As a result, there has been a transition from traditional farming methods to autonomous modern agriculture. One such modern technique is aeroponic farming, in which plants are grown without soil under controlled and hygienic conditions. In aeroponic farming, plants are significantly less affected by climatic conditions, infectious diseases, and biotic and abiotic stresses such as pest infestations. Additionally, this method can reduce water, nutrient, and pesticide usage by 98%, 60%, and 100%, respectively, while increasing yield by 45–75% compared to traditional farming. In this study, a three-dimensional industrial design of an innovative aeroponic plant growth chamber is presented for use by individuals, researchers, and professional growers. The proposed chamber design is modular and open to further innovation. Unlike existing chambers, it includes load cells that enable real-time monitoring of the fresh weight of the plant. Furthermore, cameras were integrated into the chamber to track plant growth and visible changes over time. Additionally, RGB power LEDs were placed on the inner ceiling of the chamber to provide an optimal lighting intensity and spectrum for the cultivated plant species. The customizable design allows users to choose the growing tray and nutrient nozzles according to the type and quantity of plants. Finally, system models were developed for temperature control of the chamber. Temperature control was implemented using a proportional-integral-derivative (PID) controller whose gain parameters were optimized with particle swarm optimization, radial movement optimization, differential evolution, and mayfly optimization algorithms. The simulation results indicate that the temperatures of the growing and feeding chambers in the cabinet reached steady state within 260 s, with an offset error of no more than 0.5 °C. This result demonstrates the accuracy of the derived model and the effectiveness of the optimized controllers.
(This article belongs to the Special Issue Intelligent and Autonomous Sensor System for Precision Agriculture)
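
A discrete PID loop on a first-order thermal model conveys the control setup; the plant constants and gains below are hand-picked placeholders, whereas the paper tunes the gains with PSO, RMO, DE, and mayfly optimization.

```python
import numpy as np

# Discrete PID on a first-order thermal model of the growing chamber.
dt, tau, gain = 1.0, 120.0, 2.0     # step (s), thermal time constant (s), heater gain
Kp, Ki, Kd = 4.0, 0.05, 8.0          # illustrative gains, not the optimized values
setpoint, temp = 24.0, 18.0          # target and initial temperature (deg C)
integral, prev_err = 0.0, setpoint - temp

for t in range(600):
    err = setpoint - temp
    integral += err * dt
    derivative = (err - prev_err) / dt
    u = np.clip(Kp * err + Ki * integral + Kd * derivative, 0.0, 10.0)  # actuator limits
    temp += dt / tau * (gain * u - (temp - 18.0))   # ambient held at 18 deg C
    prev_err = err
    if t % 120 == 0:
        print(f"t={t:4d}s  T={temp:.2f}C")
```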
