Search Results (827)

Search Parameters:
Keywords = multispectral image classification

25 pages, 17172 KB  
Article
Local Climate Zone Mapping by Integrating Hyperspectral and Multispectral Data with a Spectral–Spatial Fusion Network
by Ximing Liu, Luigi Russo, Wenbo Li, Alim Samat, Silvia Liberata Ullo and Paolo Gamba
Remote Sens. 2026, 18(5), 696; https://doi.org/10.3390/rs18050696 - 26 Feb 2026
Abstract
Local Climate Zone (LCZ) classification provides a standardized framework for characterizing urban morphology and its climatic implications. However, most existing remote sensing-based LCZ mapping methods rely on pixel-level classification and multispectral data alone, which limits their ability to capture urban scene heterogeneity and to distinguish structurally similar LCZ classes. In this paper, we propose LCZ-HMSSNet, a deep learning framework for scene-level LCZ classification that integrates PRISMA hyperspectral images with Sentinel-2 multispectral data. The proposed approach exploits both the spectral richness of hyperspectral data and the spatial context provided by multispectral observations, and incorporates a spatial–spectral feature separation mechanism to enhance the discriminability of the fused representations. Experiments conducted across six representative European cities evaluate the proposed method from multiple perspectives, including comparisons with different classification models, data contribution analysis, and structural ablation studies. The results demonstrate that the proposed method consistently outperforms MSI-only and existing LCZ classification approaches, achieving an overall accuracy (OA) of 0.988 and a Kappa of 0.985. In addition, the small-sample experiments indicate the robustness and potential of the proposed model, providing a practical reference for future LCZ mapping in data-scarce scenarios. Full article
(This article belongs to the Special Issue Geospatial Artificial Intelligence (GeoAI) in Remote Sensing)
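Several entries in these results report overall accuracy (OA) and Cohen's Kappa (e.g., OA = 0.988 and Kappa = 0.985 above). As an editorial aside, here is a minimal sketch, not taken from any of the papers, of how both metrics follow from a confusion matrix; the matrix values are made up:

```python
import numpy as np

def oa_and_kappa(cm):
    """Overall accuracy and Cohen's kappa from a (classes x classes) confusion matrix."""
    cm = np.asarray(cm, dtype=float)
    total = cm.sum()
    po = np.trace(cm) / total                        # observed agreement (OA)
    pe = (cm.sum(0) * cm.sum(1)).sum() / total ** 2  # agreement expected by chance
    return po, (po - pe) / (1.0 - pe)

# hypothetical 2-class confusion matrix (rows = reference, columns = predicted)
cm = np.array([[50, 2],
               [3, 45]])
oa, kappa = oa_and_kappa(cm)
```

Kappa discounts the chance-agreement term `pe`, which is why it is routinely reported alongside OA in land-cover mapping papers.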
23 pages, 3515 KB  
Article
Characterizing Cotton Defoliation Progress via UAV-Based Multispectral-Derived Leaf Area Index and Analysis of Influencing Factors
by Yukun Wang, Zhenwang Zhang, Chenyu Xiao, Te Zhang, Keke Yu, Chong Zhang, Qinghua Liao, Fangjun Li, Sumei Wan, Guodong Chen, Xiaoli Tian, Mingwei Du and Zhaohu Li
Remote Sens. 2026, 18(4), 609; https://doi.org/10.3390/rs18040609 - 15 Feb 2026
Viewed by 182
Abstract
Timely monitoring of cotton defoliation progress is crucial for optimizing the quality of mechanical harvesting. To accurately assess the defoliation status prior to mechanical picking, a field experiment was conducted in Hejian, Hebei Province, China, in 2022. Using a DJI P4M multispectral drone, canopy images of cotton were collected before and after defoliation at three flight altitudes: 25 m, 50 m, and 100 m. The study employed machine learning algorithms including linear regression, Support Vector Machine (SVM), Generalized Additive Model (GAM), and Random Forest (RF) to invert the Leaf Area Index (LAI). Additionally, SVM-based supervised classification was introduced to eliminate background interference from soil and open cotton bolls, while the XGBoost model and SHAP method were used to analyze the main factors influencing LAI inversion. Key findings include the following: The univariate linear relationship between EVI and LAI proved to be the most robust, with the model constructed from 100 m flight altitude data performing best (validation set: R2 = 0.921, RMSE = 0.284). The rate of LAI change showed a strong positive correlation with field-measured defoliation rate (r = 0.83–0.88), confirming its reliability as a proxy indicator for defoliation progress. Soil and open cotton bolls were identified as major negative factors affecting LAI inversion accuracy. The optimal machine learning prediction model varied with days after spraying, demonstrating significant temporal variability. This study demonstrates that high-throughput LAI inversion based on drone-derived multispectral EVI enables precise and dynamic monitoring of cotton defoliation. The approach provides farmers and field managers with an efficient, non-destructive monitoring tool. 
By delivering real-time insight into defoliation progress, it plays a pivotal role in enabling precision defoliation management, reducing excessive chemical use, optimizing the scheduling of mechanical operations, and ultimately enhancing both the sustainability and profitability of cotton production. Full article
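The abstract above fits a univariate linear model between EVI and field-measured LAI. A minimal sketch of that idea, using the standard EVI formula and entirely synthetic data (not the paper's measurements):

```python
import numpy as np

def evi(nir, red, blue):
    """Enhanced Vegetation Index from surface reflectance bands (standard coefficients)."""
    return 2.5 * (nir - red) / (nir + 6.0 * red - 7.5 * blue + 1.0)

# synthetic plot-level reflectances and a synthetic "ground-truth" LAI
rng = np.random.default_rng(0)
nir = rng.uniform(0.3, 0.6, 50)
red = rng.uniform(0.03, 0.10, 50)
blue = rng.uniform(0.02, 0.05, 50)
x = evi(nir, red, blue)
lai = 8.0 * x + 0.2 + rng.normal(0.0, 0.05, 50)  # assumed linear relation plus noise

# ordinary least squares: LAI = slope * EVI + intercept
A = np.vstack([x, np.ones_like(x)]).T
(slope, intercept), *_ = np.linalg.lstsq(A, lai, rcond=None)
```

With the fitted line, per-pixel EVI maps can be inverted to LAI maps, which is the high-throughput step the abstract describes.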
30 pages, 12006 KB  
Article
Comparison of CNN-Based Image Classification Approaches for Implementation of Low-Cost Multispectral Arcing Detection
by Elizabeth Piersall and Peter Fuhr
Sensors 2026, 26(4), 1268; https://doi.org/10.3390/s26041268 - 15 Feb 2026
Viewed by 284
Abstract
Camera-based sensing has benefited in recent years from developments in machine learning data processing methods, as well as improved data collection options such as Unmanned Aerial Vehicle (UAV)-mounted sensors. However, cost considerations, both for the initial purchase of sensors and for updates, maintenance, or potential replacement if damaged, can limit adoption of more expensive sensing options for some applications. To evaluate more affordable options with less expensive, more available, and more easily replaceable hardware, we examine machine learning-based image classification with custom datasets, utilizing deep learning-based image classification and ensemble models for sensor fusion. Using the same models for each camera to reduce technical overhead, we showed that, given a sufficiently representative training dataset, camera-based detection of electrical arcing can be successful. We also use multiple validation datasets, based on conditions expected to be of varying difficulty, to evaluate custom data. These results show that ensemble models of different data sources can mitigate risks from gaps in training data, though the system will be less redundant in those cases unless other precautions are taken. We found that with good-quality custom datasets, data fusion models can be applied without tailoring their design to the specific cameras used, allowing less specialized, more accessible equipment to serve as multispectral camera components. This approach can provide an alternative to expensive sensing equipment for applications in which lower-cost or more easily replaceable sensing equipment is desirable. Full article
(This article belongs to the Section Sensing and Imaging)
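The ensemble fusion described above can be illustrated with simple probability averaging (soft voting) across per-camera classifiers. The probability values below are hypothetical, not from the paper:

```python
import numpy as np

def soft_vote(prob_maps):
    """Average class probabilities from several camera-specific models.

    prob_maps: list of (n_samples, n_classes) arrays, one per camera.
    Returns the argmax class of the averaged probabilities.
    """
    return np.mean(prob_maps, axis=0).argmax(axis=1)

# hypothetical softmax outputs of two single-camera models for 3 frames,
# classes = (0: no arcing, 1: arcing)
visible = np.array([[0.9, 0.1], [0.4, 0.6], [0.2, 0.8]])
thermal = np.array([[0.8, 0.2], [0.7, 0.3], [0.1, 0.9]])
pred = soft_vote([visible, thermal])  # -> array([0, 0, 1])
```

Averaging lets a confident camera outvote an uncertain one, which is one way an ensemble can paper over gaps in a single sensor's training data, as the abstract notes.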
25 pages, 7216 KB  
Article
A CNN-LSTM-XGBoost Hybrid Framework for Interpretable Nitrogen Stress Classification Using Multimodal UAV Imagery
by Xiaohui Kuang, Dawei Wang, Bohan Mao, Yafeng Li, Deshan Chen, Wanna Fu, Qian Cheng, Fuyi Duan, Hao Li, Xinyue Hou and Zhen Chen
Remote Sens. 2026, 18(4), 538; https://doi.org/10.3390/rs18040538 - 7 Feb 2026
Viewed by 346
Abstract
Accurate diagnosis of nitrogen status is essential for precision fertilization in winter wheat. Single-modal or single-temporal remote sensing often fails to capture the multidimensional crop responses to nitrogen stress. In this study, we propose a hybrid CNN-LSTM-XGBoost framework for interpretable classification of wheat nitrogen stress gradients using multimodal unmanned aerial vehicle (UAV) multispectral and thermal infrared (TIR) imagery. Field experiments were conducted at the Xinxiang base in Henan Province during 2023–2024, following a randomized block design involving 10 cultivars, four nitrogen levels, and four water treatments. Multisource UAV images acquired at the jointing, heading, and filling stages were used to construct a multimodal feature set consisting of manual features (spectral bands, vegetation indices (VIs), TIR, and their interaction terms) and seven temporal statistical features. A deep learning model (CNN-LSTM) was utilized to further extract deep spatiotemporal features, and its performance was systematically compared with traditional machine learning models. The results show that multimodal feature fusion significantly enhanced classification performance. The CNN-LSTM model achieved an accuracy of 89.38% with fused multimodal features, outperforming all traditional machine learning models. Incorporating multi-temporal features improved the F1macro of the XGBoost model to 0.9131, a 9.42 percentage-point increase over using the single heading stage alone. The hybrid model (CNN-LSTM-XGBoost) achieved the highest overall performance (Accuracy = 0.9208; F1macro = 0.9212; AUCmacro = 0.9879; Kappa = 0.8944). SHAP analysis identified TIR × NDRE as the most influential indicator, reflecting the coupled physiological response of reduced chlorophyll content and increased canopy temperature under nitrogen deficiency. The proposed multimodal, multi-temporal, and interpretable framework provides a robust technical foundation for UAV-assisted precision nitrogen management. Full article
18 pages, 6437 KB  
Article
Comprehensive and Region-Specific Retinal Health Assessment Using Phasor Analysis of Multispectral Images and Machine Learning
by Armin Eskandarinasab, Laura Rey-Barroso, Francisco J. Burgos-Fernández and Meritxell Vilaseca
Sensors 2026, 26(3), 1021; https://doi.org/10.3390/s26031021 - 4 Feb 2026
Viewed by 213
Abstract
This study examines the efficacy of phasor analysis in distinguishing between healthy and diseased retinas using multispectral imaging data together with machine learning approaches. Our results demonstrate that phasor analysis of multispectral images surpasses average reflectance values in classification performance, serving as an effective dimensionality reduction technique to extract essential features, with the first harmonic yielding optimal results when paired with Z-score normalization. To compare the effectiveness of multispectral images with that of a conventional color fundus camera, we extracted three spectral bands corresponding to the red, green, and blue regions and combined them to create RGB-like images, which were then subjected to the same analysis. Our study found that phasor analysis of multispectral images provided more accurate classification results than phasor analysis of RGB-like images. An examination of different regions of interest showed that using the entire retina yields the best classification performance, likely due to the advanced stage of the diseases, which had progressed to affect the entire fundus. Our findings suggest that phasor analysis of multispectral images and machine learning are powerful tools for retinal disease classification. Full article
(This article belongs to the Special Issue Recent Trends and Advances in Biomedical Optics and Imaging)
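Spectral phasor analysis, as used above, projects each pixel's spectrum onto one Fourier harmonic, giving two coordinates (G, S) regardless of band count. A minimal numpy sketch under that standard textbook definition (the example spectrum is made up, not the paper's data):

```python
import numpy as np

def spectral_phasor(cube, harmonic=1):
    """Phasor coordinates (G, S) of a spectral cube along its last axis.

    cube: (..., n_bands) reflectance/intensity stack.
    Returns normalized cosine (G) and sine (S) projections per pixel.
    """
    n = cube.shape[-1]
    k = np.arange(n)
    total = cube.sum(axis=-1)
    g = (cube * np.cos(2 * np.pi * harmonic * k / n)).sum(axis=-1) / total
    s = (cube * np.sin(2 * np.pi * harmonic * k / n)).sum(axis=-1) / total
    return g, s

# a single made-up 6-band spectrum, shaped as a 1-pixel "image"
spectrum = np.array([[1.0, 2.0, 4.0, 2.0, 1.0, 0.5]])
g, s = spectral_phasor(spectrum)
```

Each pixel collapses to a point in the (G, S) plane, which is why the abstract can treat phasors as a dimensionality-reduction step before classification.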
20 pages, 7359 KB  
Article
Urban Land Cover Mapping Enhanced with LiDAR Canopy Height Data to Quantify Urbanisation in an Arctic City: A Case Study of the City of Tromsø, Norway, 1984–2024
by Liliia Hebryn-Baidy, Gareth Rees, Sophie Weeks and Vadym Belenok
Geomatics 2026, 6(1), 11; https://doi.org/10.3390/geomatics6010011 - 28 Jan 2026
Viewed by 243
Abstract
Intensifying urbanisation in the Arctic, particularly in spatially constrained coastal and island cities, requires reliable information on long-term land-use/land-cover (LULC) change to assess environmental impacts and support urban planning. However, multi-decadal, high-resolution LULC datasets for Arctic cities remain limited. In this study, we quantify LULC change on Tromsøya (Tromsø, Norway) from 1984 to 2024 using a Random Forest classifier applied to multispectral satellite imagery from Landsat and PlanetScope, complemented by LiDAR-derived canopy height models (CHM) and building footprints. We mapped LULC change trajectories and examined how these shifts relate to district-level population redistribution using gridded population data. The integration of a LiDAR-derived CHM was found to substantially improve the accuracy of Landsat-based LULC mapping and to represent the dominant source of classification gains, particularly for spectrally similar urban classes such as residential areas, roads, and other paved surfaces. Landsat augmented with CHM was shown to achieve practical equivalence to PlanetScope when the latter was modelled using spectral features only, supporting the feasibility of scalable and cost-effective long-term monitoring of urbanisation in Arctic cities. Based on the best-performing Landsat configuration, the proportions of artificial and green surfaces were estimated, indicating that approximately 20% of green areas were transformed into artificial classes. Spatially, population growth was concentrated in a small number of districts and broadly coincided with hotspots of green-to-artificial conversion. The workflow provides a reproducible basis for long-term, district-scale LULC monitoring in small Arctic cities where data constraints limit the consistent use of high-resolution imagery. Full article
26 pages, 4764 KB  
Article
Hybrid ConvLSTM U-Net Deep Neural Network for Land Use and Land Cover Classification from Multi-Temporal Sentinel-2 Images: Application to Yaoundé, Cameroon
by Ange Gabriel Belinga, Stéphane Cédric Tékouabou Koumetio and Mohammed El Haziti
Math. Comput. Appl. 2026, 31(1), 18; https://doi.org/10.3390/mca31010018 - 26 Jan 2026
Viewed by 258
Abstract
Accurate mapping of land use and land cover (LULC) is crucial for various applications such as urban planning, environmental management, and sustainable development, particularly in rapidly growing urban areas. African cities such as Yaoundé, Cameroon, are particularly affected by this rapid and often uncontrolled urban growth with complex spatio-temporal dynamics. Effective modeling of LULC indicators in such areas requires robust algorithms for high-resolution image segmentation and classification, as well as reliable data with broad spatio-temporal coverage. Among the most suitable data sources for such studies, Sentinel-2 image time series, thanks to their high spatial (10 m) and temporal (5 days) resolution, are a valuable source of data for this task. However, for effective LULC modeling in such dynamic areas, many challenges remain, including spectral confusion between certain classes, seasonal variability, and spatial heterogeneity. This study proposes a hybrid deep learning architecture combining U-Net and Convolutional Long Short-Term Memory (ConvLSTM) layers, allowing the spatial structures and temporal dynamics of the Sentinel-2 series to be exploited jointly. Applied to the Yaoundé region (Cameroon) over the period 2018–2025, the hybrid model significantly outperforms the U-Net and ConvLSTM models alone. It achieves a macro-average F1 score of 0.893, an accuracy of 0.912, and an average IoU of 0.811 on the test set. These segmentation performances reached up to 0.948, 0.953, and 0.910 for precision, F1-score, and IoU, respectively, on the built-up areas class. Moreover, despite its better performance, the complexity figures confirm that the hybrid model does not significantly penalize evaluation speed. These results demonstrate the relevance of jointly integrating space and time for robust LULC classification from multi-temporal satellite images. Full article
32 pages, 8079 KB  
Article
Daytime Sea Fog Detection in the South China Sea Based on Machine Learning and Physical Mechanism Using Fengyun-4B Meteorological Satellite
by Jie Zheng, Gang Wang, Wenping He, Qiang Yu, Zijing Liu, Huijiao Lin, Shuwen Li and Bin Wen
Remote Sens. 2026, 18(2), 336; https://doi.org/10.3390/rs18020336 - 19 Jan 2026
Viewed by 263
Abstract
Sea fog is a major meteorological hazard that severely disrupts maritime transportation and economic activities in the South China Sea. As China’s next-generation geostationary meteorological satellite, Fengyun-4B (FY-4B) supplies continuous observations that are well suited for sea fog monitoring, yet a satellite-specific recognition method has been lacking. A key obstacle is the radiometric inconsistency between the Advanced Geostationary Radiation Imager (AGRI) sensors on FY-4A and FY-4B, compounded by the cessation of Cloud–Aerosol Lidar with Orthogonal Polarization (CALIOP) observations, which prevents direct transfer of fog labels. To address these challenges and fill this research gap, we propose a machine learning framework that integrates cross-satellite radiometric recalibration and physical mechanism constraints for robust daytime sea fog detection. First, we innovatively apply a radiation recalibration transfer technique based on the radiative transfer model to normalize FY-4A/B radiances and, together with Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observation (CALIPSO) cloud/fog classification products and ERA5 reanalysis, construct a highly consistent joint training set of FY-4A/B for the winter-spring seasons since 2019. Secondly, to enhance the model’s physical performance, we incorporate key physical parameters related to the sea fog formation process (such as temperature inversion, near-surface humidity, and wind field characteristics) as physical constraints, and combine them with multispectral channel sensitivity and the brightness temperature (BT) standard deviation that characterizes texture smoothness, resulting in an optimized 13-dimensional feature matrix. Using this, we optimize the sea fog recognition model parameters of decision tree (DT), random forest (RF), and support vector machine (SVM) with grid search and particle swarm optimization (PSO) algorithms. 
The validation results show that the RF model outperforms others with the highest overall classification accuracy (0.91) and probability of detection (POD, 0.81) that surpasses prior FY-4A-based work for the South China Sea (POD 0.71–0.76). More importantly, this study demonstrates that the proposed FY-4B framework provides reliable technical support for operational, continuous sea fog monitoring over the South China Sea. Full article
(This article belongs to the Section Atmospheric Remote Sensing)
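One of the texture features mentioned above, the local standard deviation of brightness temperature (BT), exploits the fact that fog tops are radiometrically smoother than most cloud. A naive sliding-window sketch with hypothetical BT values; the paper's actual feature extraction is not described at this level of detail:

```python
import numpy as np

def local_std(bt, win=3):
    """Sliding-window standard deviation of a brightness-temperature field.

    Low local BT std indicates a smooth scene (fog candidate); high std
    indicates textured cloud or a cloud edge. Edge pixels use edge padding.
    """
    pad = win // 2
    padded = np.pad(bt, pad, mode="edge")
    out = np.empty_like(bt, dtype=float)
    for i in range(bt.shape[0]):
        for j in range(bt.shape[1]):
            out[i, j] = padded[i:i + win, j:j + win].std()
    return out

# hypothetical 3x3 BT patch (K): smooth fog-like top with a warm edge below
bt = np.array([[280.0, 280.2, 280.1],
               [280.1, 280.0, 280.2],
               [290.0, 289.8, 290.1]])
texture = local_std(bt)
```

In a feature matrix such as the 13-dimensional one described above, this std map would be one column alongside spectral channels and reanalysis-derived physical predictors.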
11 pages, 4436 KB  
Proceeding Paper
SRGAN-Based Deep Learning Framework for Wind Turbine Damage Detection from Sentinel-2 Imagery
by Kübra Çakır, Onur Elma and Murat Kuzlu
Eng. Proc. 2026, 122(1), 19; https://doi.org/10.3390/engproc2026122019 - 19 Jan 2026
Viewed by 198
Abstract
The operational reliability of wind turbines is critical for sustainable energy production in smart grids. This study proposes a remote monitoring approach using perceptually enhanced satellite imagery. Sentinel-2 multispectral data (10 m resolution) has been processed with a Super-Resolution Generative Adversarial Network (SRGAN) to improve visual quality to a perceptual resolution of 30 cm. Although true spatial refinement is not achieved, the sharper structural details enhance classification accuracy. The data set comprises 15,000 images—10,000 SRGAN-enhanced and 5000 augmented through rotation, zoom in, increasing brightness, noise addition, and blurring. A custom Convolutional Neural Network (CNN) has been trained to classify turbines as damaged or intact, achieving 95% accuracy, a 0.99 ROC-AUC, and a 0.95 F1 score. These results demonstrate that perceptually sharpened satellite data can effectively support automated wind turbine damage detection and predictive maintenance. The proposed framework also lays the groundwork for broader real-time and multimodal monitoring and cost-efficient applications in renewable energy systems. Full article
26 pages, 38465 KB  
Article
High-Resolution Snapshot Multispectral Imaging System for Hazardous Gas Classification and Dispersion Quantification
by Zhi Li, Hanyuan Zhang, Qiang Li, Yuxin Song, Mengyuan Chen, Shijie Liu, Dongjing Li, Chunlai Li, Jianyu Wang and Renbiao Xie
Micromachines 2026, 17(1), 112; https://doi.org/10.3390/mi17010112 - 14 Jan 2026
Viewed by 261
Abstract
Real-time monitoring of hazardous gas emissions in open environments remains a critical challenge. Conventional spectrometers and filter wheel systems acquire spectral and spatial information sequentially, which limits their ability to capture multiple gas species and dynamic dispersion patterns rapidly. A High-Resolution Snapshot Multispectral Imaging System (HRSMIS) is proposed to integrate high spatial fidelity with multispectral capability for near real-time plume visualization, gas species identification, and concentration retrieval. Operating across the 7–14 μm spectral range, the system employs a dual-path optical configuration in which a high-resolution imaging path and a multispectral snapshot path share a common telescope, allowing for the simultaneous acquisition of fine two-dimensional spatial morphology and comprehensive spectral fingerprint information. Within the multispectral path, two 5×5 microlens arrays (MLAs) combined with a corresponding narrowband filter array generate 25 distinct spectral channels, allowing concurrent detection of up to 25 gas species in a single snapshot. The high-resolution imaging path provides detailed spatial information, facilitating spatio-spectral super-resolution fusion for multispectral data without complex image registration. The HRSMIS demonstrates modulation transfer function (MTF) values of at least 0.40 in the high-resolution channel and 0.29 in the multispectral channel. Monte Carlo tolerance analysis confirms imaging stability, enabling the real-time visualization of gas plumes and the accurate quantification of dispersion dynamics and temporal concentration variations. Full article
(This article belongs to the Special Issue Gas Sensors: From Fundamental Research to Applications, 2nd Edition)
22 pages, 3834 KB  
Article
Image-Based Spatio-Temporal Graph Learning for Diffusion Forecasting in Digital Management Systems
by Chenxi Du, Zhengjie Fu, Yifan Hu, Yibin Liu, Jingwen Cao, Siyuan Liu and Yan Zhan
Electronics 2026, 15(2), 356; https://doi.org/10.3390/electronics15020356 - 13 Jan 2026
Viewed by 328
Abstract
With the widespread application of high-resolution remote sensing imagery and unmanned aerial vehicle technologies in agricultural scenarios, accurately characterizing spatial pest diffusion from multi-temporal images has become a critical issue in intelligent agricultural management. To overcome the limitations of existing machine learning approaches that focus mainly on static recognition and lack effective spatio-temporal diffusion modeling, a UAV-based pest diffusion prediction and simulation framework is proposed. Multi-temporal UAV RGB and multispectral imagery are jointly modeled using a graph-based representation of farmland parcels, while temporal modeling and environmental embedding mechanisms are incorporated to enable simultaneous prediction of diffusion intensity and propagation paths. Experiments conducted on two real agricultural regions, Bayan Nur and Tangshan, demonstrate that the proposed method consistently outperforms representative spatio-temporal baselines. Compared with ST-GCN, the proposed framework achieves approximately 17–22% reductions in MAE and MSE, together with 8–12% improvements in PMR, while maintaining robust classification performance with precision, recall, and F1-score exceeding 0.82. These results indicate that the proposed approach can provide reliable support for agricultural information systems and diffusion-aware decision generation. Full article
(This article belongs to the Special Issue Application of Machine Learning in Graphics and Images, 2nd Edition)
22 pages, 10535 KB  
Article
Morphology of Chinese Chive and Onion (Allium; Amaryllidaceae) Crop Wild Relatives: Taxonomical Relations and Implications
by Min Su Jo, Ji Eun Kim, Ye Rin Chu, Gyu Young Chung and Chae Sun Na
Plants 2026, 15(2), 192; https://doi.org/10.3390/plants15020192 - 7 Jan 2026
Viewed by 563
Abstract
The genus Allium L. includes economically significant crops such as Chinese chives (Allium tuberosum Rottler ex Spreng.) and onions (Allium cepa L.), and is utilized in diverse agricultural applications, with numerous cultivars developed to date. However, these cultivars are facing a reduction in genetic diversity, raising concerns regarding their long-term sustainability. Crop wild relatives (CWRs), which possess a wide range of genetic traits, have recently gained attention as important genetic resources and priorities for conservation. In this study, the taxonomy of Allium species distributed in Korea is assessed using morphological characteristics. Two types of morphological analyses were conducted: macro-morphological traits were examined using stereomicroscopy and multi-spectral image analyses, while micro-morphological traits were analyzed using scanning electron microscopy. We detected significant interspecific and intraspecific variation in macro-morphological traits. Among the micro-morphological features, the seed outline on the x-axis and structural patterns of the testa and periclinal walls were identified as reliable diagnostic characters for subgenus classification. Moreover, micro-morphological evidence contributed to inferences about evolutionary trends within the genus Allium. Based on phylogenetic relationships between wild and cultivated taxa, we propose an updated framework for the CWR inventory of Allium. Full article
(This article belongs to the Special Issue Integrative Taxonomy, Systematics, and Morphology of Land Plants)
21 pages, 4969 KB  
Article
Analysis of Temporal Changes in the Floating Vegetation and Algae Surface of the Water Bodies of Kis-Balaton Based on Aerial Image Classification and Meteorological Data
by Kristóf Kozma-Bognár, Angéla Anda, Ariel Tóth, Veronika Kozma-Bognár and József Berke
Geomatics 2026, 6(1), 3; https://doi.org/10.3390/geomatics6010003 - 3 Jan 2026
Viewed by 391
Abstract
Climate change and related weather extremes are increasingly having an impact on all aspects of life. The main objective of the research was to analyze the relationship between the most important meteorological elements and the image data of various water bodies of the Kis-Balaton wetland, Hungary. The primary question was which meteorological elements have a positive or negative influence on vegetational surface cover. Drones have facilitated the visual surveying and monitoring of challenging-to-reach water bodies in the area, including a lake and multiple channels. The individual channels had different flow conditions. Aerial surveys were conducted monthly, based on pre-prepared flight plans. Images captured by a Mavic 3 drone flying at an altitude of 150 m and equipped with a multispectral sensor were processed. The time-series images were aligned and assembled into orthophotos. The image details relevant to the research were segregated and classified using the Maximum Likelihood classification algorithm. The reliability of the image data used was checked by Shannon entropy and spectral fractal dimension measurements. The results of the classification were compared with the meteorological data collected by the QLC-50 automatic climate station at Keszthely. The investigations revealed that the surface cover of the examined water bodies differed between the two years but showed a kind of periodicity during the year. In those periods where photosynthetic organisms multiplied in a higher proportion in the water body, higher monthly average air temperatures and higher monthly global solar radiation sums were observed. Full article
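Shannon entropy, used above as a reliability check on the image data, can be computed directly from class (or grey-level) frequencies. A minimal sketch, not taken from the paper; the example images are made up:

```python
import numpy as np

def shannon_entropy(img):
    """Shannon entropy in bits of an integer-valued (e.g., classified) image."""
    _, counts = np.unique(img, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

flat = np.zeros((4, 4), dtype=int)       # a single class: 0 bits of entropy
mixed = np.array([[0, 1],
                  [2, 3]])               # four equally likely classes: 2 bits
```

Higher entropy means more information (or noise) per pixel, which is why it can flag image tiles whose content is too uniform or too corrupted to classify reliably.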
25 pages, 19231 KB  
Article
Mapping Olive Crops (Olea europaea L.) in the Atacama Desert (Peru): An Integration of UAV-Satellite Multispectral Images and Ensemble Machine Learning Models
by Edwin Pino-Vargas, German Huayna, Jorge Muchica-Huamantuma, Elgar Barboza, Samuel Pizarro, Bertha Vera-Barrios, Carolina Cruz-Rodriguez and Fredy Cabrera-Olivera
AgriEngineering 2026, 8(1), 9; https://doi.org/10.3390/agriengineering8010009 - 1 Jan 2026
Abstract
Spatial monitoring of olive systems in arid regions is essential for understanding agricultural expansion, water pressure, and productive sustainability. This study aimed to map coverage and estimate olive plantation density (Olea europaea L.) in the Atacama Desert, Tacna (Peru), through the integration of UAV-satellite multispectral images and machine learning algorithms (CART, Random Forest, and Gradient Tree Boosting). Forty-eight optical, radar, and topographic covariates were analyzed; 15 were selected for coverage classification and 16 for plantation density, using Pearson's correlation (|r| > 0.75). The classification maps reported an area of 23,059.87 ha (38.21%) of olive groves, followed by 5352.10 ha (8.87%) of oregano and 725.74 ha (1.20%) of orange cultivation, relative to the total study area, with an overall accuracy (OA) of 86.6% and a Kappa coefficient of 0.81. Meanwhile, the RF and GTB regression models showed R2 ≈ 0.89 and RPD > 2.8, demonstrating excellent predictive performance for estimating tree density (between 1 and 8 trees per 100 m2). Furthermore, the highest concentration of olive trees was found in the central and southern zones of the study area, associated with favorable soil and microclimatic conditions. This work constitutes the first comprehensive approach to olive mapping in southern Peru using UAV–satellite fusion, demonstrating the capability of ensemble models to improve agricultural mapping accuracy and to support water and productive management in arid ecosystems. Full article
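The correlation-based covariate selection described above (keeping covariates only when no retained pair exceeds |r| > 0.75) can be sketched with a simple greedy filter. This is a minimal illustration under that assumption, not the authors' code; the column names are hypothetical.

```python
import numpy as np

def drop_correlated(X: np.ndarray, names: list, threshold: float = 0.75) -> list:
    """Greedily keep covariates (columns of X) whose absolute Pearson
    correlation with every already-kept covariate is <= threshold."""
    r = np.abs(np.corrcoef(X, rowvar=False))
    keep = []
    for j in range(X.shape[1]):
        if all(r[j, k] <= threshold for k in keep):
            keep.append(j)
    return [names[k] for k in keep]

# Toy example: the second covariate is nearly a rescaled copy of the
# first, so it gets dropped; the third is independent and is kept.
rng = np.random.default_rng(1)
a = rng.normal(size=200)
X = np.column_stack([a, 2 * a + rng.normal(scale=0.01, size=200),
                     rng.normal(size=200)])
print(drop_correlated(X, ["ndvi", "ndvi_scaled", "slope"]))
# ['ndvi', 'slope']
```

The greedy order matters: an earlier covariate always wins over a later correlated one, so domain-preferred covariates should be listed first.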

17 pages, 44594 KB  
Article
Pansharpened WorldView-3 Imagery and Machine Learning for Detecting Mal secco Disease in a Citrus Orchard
by Adriano Palma, Antonio Tiberini, Marco Caruso, Silvia Di Silvestro and Marco Bascietto
Remote Sens. 2026, 18(1), 110; https://doi.org/10.3390/rs18010110 - 28 Dec 2025
Abstract
Mal secco disease (MSD), caused by Plenodomus tracheiphilus, poses a serious threat to Citrus limon production across the Mediterranean Basin. This study investigates the potential of high-resolution WorldView-3 imagery for detecting early-stage MSD symptoms in lemon orchards through the integration of three pansharpening algorithms (Gram–Schmidt, NNDiffuse, and Brovey) with two machine learning classifiers (Random Forest and Support Vector Machine). The Brovey-based fusion combined with Random Forest yielded the best results, achieving 80% overall accuracy, 90% precision, and 84% recall, with high spatial reliability confirmed by 10-fold cross-validation. Spectral analysis revealed that Brovey introduced the largest radiometric deviation, particularly in the NIR band, which nonetheless enhanced class separability between healthy and symptomatic crowns. These findings demonstrate that moderate spectral distortion can be tolerated, and may even be beneficial, for vegetation disease detection. The proposed workflow, which is efficient, transferable, and based solely on visible and NIR bands, offers a practical foundation for satellite-driven disease monitoring and precision management in Mediterranean citrus systems. Full article
(This article belongs to the Section Remote Sensing in Agriculture and Vegetation)
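The Brovey fusion named in the abstract can be sketched as a per-pixel ratio transform: each multispectral band is scaled by the panchromatic value divided by an intensity term. This is a generic sketch, not the authors' processing chain; a common formulation uses the mean (or sum) of the MS bands as intensity, and the `(bands, H, W)` layout and the assumption that the MS data are already resampled to the pan grid are illustrative.

```python
import numpy as np

def brovey(ms: np.ndarray, pan: np.ndarray, eps: float = 1e-9) -> np.ndarray:
    """Brovey pansharpening: scale each MS band by pan / intensity,
    where intensity is the per-pixel mean of the MS bands.
    `ms` has shape (bands, H, W); `pan` has shape (H, W)."""
    intensity = ms.mean(axis=0)
    return ms * (pan / (intensity + eps))

# Toy 1-pixel example: band ratios are preserved while the fused
# per-pixel intensity matches the panchromatic value.
ms = np.array([[[0.2]], [[0.4]], [[0.6]]])  # 3 bands, 1x1 pixel
pan = np.array([[0.9]])
fused = brovey(ms, pan)
print(fused.mean(axis=0))     # equals pan
print(fused[0] / fused[1])    # equals ms[0] / ms[1]
```

Because every band at a pixel is multiplied by the same scalar, Brovey preserves inter-band ratios but rescales radiometry, which is consistent with the radiometric deviation the study reports.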
