Search Results (56)

Search Parameters:
Keywords = RGB profiling

15 pages, 2232 KB  
Article
Image-Based Deep Learning for Brain Tumour Transcriptomics: A Benchmark of DeepInsight, Fotomics, and Saliency-Guided CNNs
by Ali Alyatimi, Vera Chung, Muhammad Atif Iqbal and Ali Anaissi
Mach. Learn. Knowl. Extr. 2025, 7(4), 119; https://doi.org/10.3390/make7040119 - 15 Oct 2025
Viewed by 383
Abstract
Classifying brain tumour transcriptomic data is crucial for precision medicine but remains challenging due to high dimensionality and the limited interpretability of conventional models. This study benchmarks three image-based deep learning approaches for transcriptomic classification: DeepInsight, Fotomics, and a novel saliency-guided convolutional neural network (CNN). DeepInsight utilises dimensionality reduction to spatially arrange gene features, while Fotomics applies Fourier transforms to encode expression patterns into structured images. The proposed method transforms each single-cell gene expression profile into an RGB image using PCA, UMAP, or t-SNE, enabling CNNs such as ResNet to learn spatially organised molecular features. Gradient-based saliency maps are employed to highlight the gene regions most influential in model predictions. Evaluation is conducted on two biologically and technologically different datasets: single-cell RNA-seq from glioblastoma (GSM3828672) and bulk microarray data from medulloblastoma (GSE85217). The results demonstrate that image-based deep learning methods, particularly those incorporating saliency guidance, provide a robust and interpretable framework for uncovering biologically meaningful patterns in complex, high-dimensional omics data. For instance, ResNet-18 achieved the highest accuracies, 97.25% on the GSE85217 dataset and 91.02% on GSM3828672, outperforming the other baseline models across multiple metrics. Full article
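The feature-to-image idea behind these methods can be sketched in a few lines. The toy example below (synthetic data, not the authors' pipeline) embeds genes in 2D with a plain SVD-based PCA, rasterises one cell's expression profile onto a pixel grid, and replicates it into the three RGB channels a CNN backbone expects:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 500))          # toy data: 100 cells x 500 genes

# 2D PCA embedding of the genes (genes as observations, cells as features)
G = X.T - X.T.mean(axis=0)               # (500, 100), column-centred
U, S, Vt = np.linalg.svd(G, full_matrices=False)
coords = U[:, :2] * S[:2]                # (500, 2) gene coordinates

# Rasterise gene coordinates onto a 32x32 pixel grid
side = 32
norm = (coords - coords.min(axis=0)) / np.ptp(coords, axis=0)
px = np.minimum((norm * (side - 1)).astype(int), side - 1)

def profile_to_image(expr):
    """Paint one cell's expression profile into an image (mean on pixel collisions)."""
    img = np.zeros((side, side))
    cnt = np.zeros((side, side))
    np.add.at(img, (px[:, 0], px[:, 1]), expr)
    np.add.at(cnt, (px[:, 0], px[:, 1]), 1)
    img = np.divide(img, cnt, out=np.zeros_like(img), where=cnt > 0)
    lo, hi = img.min(), img.max()
    gray = ((img - lo) / (hi - lo) * 255).astype(np.uint8)
    return np.stack([gray] * 3, axis=-1)  # replicate into RGB for a CNN backbone

rgb = profile_to_image(X[0])
```

The same rasterisation works unchanged if the 2D coordinates come from UMAP or t-SNE instead of PCA.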

26 pages, 9183 KB  
Review
Application of Image Computing in Non-Destructive Detection of Chinese Cuisine
by Xiaowei Huang, Zexiang Li, Zhihua Li, Jiyong Shi, Ning Zhang, Zhou Qin, Liuzi Du, Tingting Shen and Roujia Zhang
Foods 2025, 14(14), 2488; https://doi.org/10.3390/foods14142488 - 16 Jul 2025
Viewed by 1231
Abstract
Food quality and safety are paramount in preserving the culinary authenticity and cultural integrity of Chinese cuisine, characterized by intricate ingredient combinations, diverse cooking techniques (e.g., stir-frying, steaming, and braising), and region-specific flavor profiles. Traditional non-destructive detection methods often struggle with the unique challenges posed by Chinese dishes, including complex textural variations in staple foods (e.g., noodles, dumplings), layered seasoning compositions (e.g., soy sauce, Sichuan peppercorns), and oil-rich cooking media. This study pioneers a hyperspectral imaging framework enhanced with domain-specific deep learning algorithms (spatial–spectral convolutional networks with attention mechanisms) to address these challenges. Our approach effectively deciphers the subtle spectral fingerprints of Chinese-specific ingredients (e.g., fermented black beans, lotus root) and quantifies critical quality indicators, achieving an average classification accuracy of 97.8% across 15 major Chinese dish categories. Specifically, the model demonstrates high precision in quantifying chili oil content in Mapo Tofu with a Mean Absolute Error (MAE) of 0.43% w/w and assessing freshness gradients in Cantonese dim sum (Shrimp Har Gow) with a classification accuracy of 95.2% for three distinct freshness levels. This approach leverages the detailed spectral information provided by hyperspectral imaging to automate the classification and detection of Chinese dishes, significantly improving both the accuracy of image-based food classification by >15 percentage points compared to traditional RGB methods and enhancing food quality safety assessment. Full article
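The paper's spatial–spectral networks are well beyond a snippet, but the underlying notion of matching spectral fingerprints can be illustrated with a classical spectral angle mapper; the reference spectra below are invented placeholders, not measured data:

```python
import numpy as np

def spectral_angle(a, b):
    """Angle (radians) between two spectra; smaller means more similar."""
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.arccos(np.clip(cos, -1.0, 1.0))

def classify_pixel(spectrum, references):
    """Assign a pixel to the reference ingredient with the smallest angle."""
    angles = {name: spectral_angle(spectrum, ref) for name, ref in references.items()}
    return min(angles, key=angles.get)

# toy reference fingerprints over 10 hypothetical bands
refs = {
    "fermented_black_bean": np.linspace(0.2, 0.8, 10),
    "lotus_root": np.linspace(0.8, 0.2, 10),
}
label = classify_pixel(np.linspace(0.25, 0.75, 10), refs)
```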

20 pages, 8096 KB  
Article
Low-Cost Hyperspectral Imaging in Macroalgae Monitoring
by Marc C. Allentoft-Larsen, Joaquim Santos, Mihailo Azhar, Henrik C. Pedersen, Michael L. Jakobsen, Paul M. Petersen, Christian Pedersen and Hans H. Jakobsen
Sensors 2025, 25(9), 2652; https://doi.org/10.3390/s25092652 - 22 Apr 2025
Cited by 1 | Viewed by 1639
Abstract
This study presents an approach to macroalgae monitoring using a cost-effective hyperspectral imaging (HSI) system and artificial intelligence (AI). Kelp beds are vital habitats and support nutrient cycling, making ongoing monitoring crucial amid environmental changes. HSI emerges as a powerful tool in this context, due to its ability to detect pigment-characteristic fingerprints that are often missed altogether by standard RGB cameras. Still, the high costs of these systems are a barrier to large-scale deployment for in situ monitoring. Here, we showcase the development of a cost-effective HSI setup that combines a GoPro camera with a continuous linear variable spectral bandpass filter. We empirically validate the operational capabilities through the analysis of two brown macroalgae, Fucus serratus and Fucus vesiculosus, and two red macroalgae, Ceramium sp. and Vertebrata byssoides, in a controlled aquatic environment. Our HSI system successfully captured spectral information from the target species, which exhibit considerable similarity in morphology and spectral profile, making them difficult to differentiate using traditional RGB imaging. Using a one-dimensional convolutional neural network, we reached a high average classification precision, recall, and F1-score of 99.9%, 89.5%, and 94.4%, respectively, demonstrating the effectiveness of our custom low-cost HSI setup. This work paves the way to achieving large-scale and automated ecological monitoring. Full article
(This article belongs to the Section Remote Sensors)
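A linear variable bandpass filter makes the centre wavelength vary roughly linearly across the sensor, so a two-point calibration suffices to map pixel columns to wavelengths. A minimal sketch, with entirely hypothetical calibration points:

```python
import numpy as np

# Hypothetical two-point calibration: known spectral lines seen at two pixel columns
px_known = np.array([120.0, 1800.0])      # pixel columns
wl_known = np.array([450.0, 650.0])       # wavelengths in nm

slope, intercept = np.polyfit(px_known, wl_known, 1)

def column_to_wavelength(col):
    """Linear pixel-to-wavelength mapping for a linear variable filter."""
    return slope * col + intercept

wl = column_to_wavelength(960.0)
```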

17 pages, 9448 KB  
Article
Plant Height and Soil Compaction in Coffee Crops Based on LiDAR and RGB Sensors Carried by Remotely Piloted Aircraft
by Nicole Lopes Bento, Gabriel Araújo e Silva Ferraz, Lucas Santos Santana, Rafael de Oliveira Faria, Giuseppe Rossi and Gianluca Bambi
Remote Sens. 2025, 17(8), 1445; https://doi.org/10.3390/rs17081445 - 17 Apr 2025
Viewed by 1389
Abstract
Remotely Piloted Aircraft (RPA) as sensor-carrying airborne platforms for indirect measurement of plant physical parameters has been discussed in the scientific community. The utilization of RGB sensors with photogrammetric data processing based on Structure-from-Motion (SfM) and Light Detection and Ranging (LiDAR) sensors for point cloud construction are applicable in this context and can yield high-quality results. In this sense, this study aimed to compare coffee plant height data obtained from RGB/SfM and LiDAR point clouds and to estimate soil compaction through penetration resistance in a coffee plantation located in Minas Gerais, Brazil. A Matrice 300 RTK RPA equipped with a Zenmuse L1 sensor was used, with RGB data processed in PIX4D software (version 4.5.6) and LiDAR data in DJI Terra software (version V4.4.6). Canopy Height Model (CHM) analysis and cross-sectional profile, together with correlation and statistical difference studies between the height data from the two sensors, were conducted to evaluate the RGB sensor’s capability to estimate coffee plant height compared to LiDAR data considered as reference. Based on the height data obtained by the two sensors, soil compaction in the coffee plantation was estimated through soil penetration resistance. The results demonstrated that both sensors provided dense point clouds from which plant height (R2 = 0.72, R = 0.85, and RMSE = 0.44) and soil penetration resistance (R2 = 0.87, R = 0.8346, and RMSE = 0.14 m) were accurately estimated, with no statistically significant differences determined between the analyzed sensor data. It is concluded, therefore, that the use of remote sensing technologies can be employed for accurate estimation of coffee plantation heights and soil compaction, emphasizing a potential pathway for reducing laborious manual field measurements. Full article
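The Canopy Height Model referenced above is conventionally the difference between a digital surface model and a digital terrain model; a toy numeric sketch (the raster values are illustrative, not from the study):

```python
import numpy as np

# Toy rasters (metres): digital surface model and digital terrain model
dsm = np.array([[102.4, 103.1],
                [102.9, 101.8]])
dtm = np.array([[100.0, 100.2],
                [100.1, 100.3]])

chm = dsm - dtm                     # canopy height model
chm = np.clip(chm, 0, None)         # negative heights are sensor noise

plant_height = chm.max()            # e.g. tallest point in a plant crown
```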

20 pages, 11233 KB  
Article
Capturing Free Surface Dynamics of Flows over a Stepped Spillway Using a Depth Camera
by Megh Raj K C, Brian M. Crookston and Daniel B. Bung
Sensors 2025, 25(8), 2525; https://doi.org/10.3390/s25082525 - 17 Apr 2025
Cited by 1 | Viewed by 771
Abstract
Spatio-temporal measurements of turbulent free surface flows remain challenging with in situ point methods. This study explores the application of an inexpensive depth-sensing RGB-D camera, the Intel® RealSense™ D455, to capture detailed water surface measurements of a highly turbulent, self-aerated flow in the case of a stepped spillway. Ambient lighting conditions and various sensor settings, including configurations and parameters affecting data capture and quality, were assessed. A free surface profile was extracted from the 3D measurements and compared against phase detection conductivity probe (PDCP) and ultrasonic sensor (USS) measurements. Measurements in the non-aerated region were influenced by water transparency and a lack of detectable surface features, with flow depths consistently smaller than USS measurements (up to 32.5% less). Measurements in the clear water region also resulted in a “no data” region with holes in the depth map due to shiny reflections. In the aerated flow region, the camera effectively detected the dynamic water surface, with mean surface profiles close to characteristic depths measured with PDCP and within one standard deviation of the mean USS flow depths. The flow depths were within 10% of the USS depths and corresponded to depths with 80–90% air concentration levels obtained with the PDCP. Additionally, the depth camera successfully captured temporal fluctuations, allowing for the calculation of time-averaged entrapped air concentration profiles and dimensionless interface frequency distributions. This facilitated a direct comparison with PDCP and USS sensors, demonstrating that this camera sensor is a practical and cost-effective option for detecting free surfaces of high velocity, aerated, and dynamic flows in a stepped chute. Full article
(This article belongs to the Special Issue 3D Reconstruction with RGB-D Cameras and Multi-sensors)
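One common way (assumed here, not quoted from the paper) to derive a time-averaged entrapped air concentration profile from an instantaneous free-surface record is the fraction of time the surface lies below each elevation:

```python
import numpy as np

rng = np.random.default_rng(1)
eta = 0.10 + 0.02 * rng.standard_normal(2000)   # toy surface elevation record (m)

levels = np.linspace(0.0, 0.2, 41)              # elevations to evaluate (m)
air_conc = np.array([(eta < y).mean() for y in levels])  # time-averaged air fraction

h_mean = eta.mean()                  # mean flow depth
h_std = eta.std()                    # surface fluctuation magnitude
# first elevation where the air concentration reaches 90%
h90 = levels[np.searchsorted(air_conc, 0.9)]
```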

17 pages, 3844 KB  
Article
Comprehensive Characterization (Chromatography, Spectroscopy, Isotopic, and Digital Color Image) of Tequila 100% Agave Cristalino as Evidence of the Preservation of the Characteristics of Its Aging Process
by Walter M. Warren-Vega, Rocío Fonseca-Aguiñaga, Arantza Villa-González, Camila S. Gómez-Navarro and Luis A. Romero-Cano
Beverages 2025, 11(2), 42; https://doi.org/10.3390/beverages11020042 - 20 Mar 2025
Cited by 1 | Viewed by 1702
Abstract
To obtain fundamental information on Tequila 100% agave Cristalino, commercial samples of its different classes were characterized. For this purpose, 12 samples were chosen, defined as G1 (aged, n = 3, or extra-aged, n = 3) and G2 (aged-Cristalino, n = 3, or extra-aged-Cristalino, n = 3). Analytical characterization was performed on these beverages, consisting of isotope ratio mass spectrometry, gas and liquid chromatography, UV-Vis spectroscopy, and color analysis using digital image processing. The results corroborate that the chromatographic characterization (mg/100 mL A.A.)—higher alcohols (299.53 ± 46.56), methanol (212.02 ± 32.28), esters (26.02 ± 4.60), aldehydes (8.93 ± 4.61), and furfural (1.02 ± 0.56)—and the isotopic characterization—δ13CVPDB = −13.02 ± 0.35 ‰ and δ18OVSMOW = 21.31 ± 1.33 ‰—do not present statistically significant differences (p > 0.05) between groups. These techniques reinforce that the isotopic ratios indicate the ethanol in these beverages comes from the Agave tequilana Weber blue variety and is not affected by the filtration process. Based on the UV-Vis analysis, I280 and I365 were obtained, which were related to the presence of polyphenols and flavonoids—expressed as mg quercetin equivalents/L—found only in group 1. Due to the presence of flavonoids in aged beverages, the oxidation process results in the formation of an amber color, which can be measured with an RGB color model; the analysis shows a statistically significant difference (p < 0.05) between groups. It can be concluded that Tequila 100% agave Cristalino is an aged or extra-aged Tequila 100% agave without color, whose chromatographic and isotopic profiles are unaffected. Full article
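As a hedged illustration of the colour measurement, mean RGB values over an image patch can separate an amber aged sample from a colourless Cristalino one; the patches and the simple R - B index below are synthetic, not the study's metric:

```python
import numpy as np

def mean_rgb(image):
    """Average R, G, B over an image region (H x W x 3, values 0-255)."""
    return image.reshape(-1, 3).mean(axis=0)

def amber_index(rgb):
    """Toy colour score: amber liquids show high red and low blue."""
    r, g, b = rgb
    return (r - b) / 255.0

# synthetic patches: an aged (amber) sample vs a Cristalino (colourless) one
aged = np.full((10, 10, 3), (200, 150, 60), dtype=np.uint8)
cristalino = np.full((10, 10, 3), (245, 245, 240), dtype=np.uint8)

score_aged = amber_index(mean_rgb(aged))
score_cristalino = amber_index(mean_rgb(cristalino))
```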

18 pages, 6634 KB  
Article
Development and Evaluation of a Multiaxial Modular Ground Robot for Estimating Soybean Phenotypic Traits Using an RGB-Depth Sensor
by James Kemeshi, Young Chang, Pappu Kumar Yadav, Maitiniyazi Maimaitijiang and Graig Reicks
AgriEngineering 2025, 7(3), 76; https://doi.org/10.3390/agriengineering7030076 - 11 Mar 2025
Viewed by 1712
Abstract
Achieving global sustainable agriculture requires farmers worldwide to adopt smart agricultural technologies, such as autonomous ground robots. However, most ground robots are either task- or crop-specific and expensive for small-scale farmers and smallholders. Therefore, there is a need for cost-effective robotic platforms that are modular by design and can be easily adapted to varying tasks and crops. This paper describes the hardware design of a unique, low-cost multiaxial modular agricultural robot (ModagRobot), and its field evaluation for soybean phenotyping. The ModagRobot’s chassis was designed without any welded components, making it easy to adjust trackwidth, height, ground clearance, and length. For this experiment, the ModagRobot was equipped with an RGB-Depth (RGB-D) sensor and adapted to safely navigate over soybean rows to collect RGB-D images for estimating soybean phenotypic traits. RGB images were processed using the Excess Green Index to estimate the percent canopy ground coverage area. 3D point clouds generated from RGB-D images were used to estimate canopy height (CH) and the 3D Profile Index of sample plots using linear regression. Aboveground biomass (AGB) was estimated using extracted phenotypic traits. Results showed an R2, RMSE, and RRMSE of 0.786, 0.0181 m, and 2.47%, respectively, between estimated CH and measured CH. AGB estimated using all extracted traits showed an R2, RMSE, and RRMSE of 0.59, 0.0742 kg/m2, and 8.05%, respectively, compared to the measured AGB. The results demonstrate the effectiveness of the ModagRobot for in-row crop phenotyping. Full article
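The Excess Green Index used for canopy coverage is a standard formula, ExG = 2g - r - b on chromaticity-normalised channels; a small self-contained sketch with a synthetic image (the threshold value is an assumption, not the paper's):

```python
import numpy as np

def excess_green(img):
    """ExG = 2g - r - b on normalised chromatic coordinates."""
    img = img.astype(float)
    total = img.sum(axis=-1, keepdims=True)
    total[total == 0] = 1.0                 # avoid division by zero on black pixels
    r, g, b = np.moveaxis(img / total, -1, 0)
    return 2 * g - r - b

# toy image: left half vegetation-green, right half soil-brown
img = np.zeros((4, 8, 3), dtype=np.uint8)
img[:, :4] = (60, 180, 50)     # canopy
img[:, 4:] = (120, 90, 70)     # soil

exg = excess_green(img)
coverage = (exg > 0.10).mean() * 100   # percent canopy ground coverage
```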

20 pages, 3883 KB  
Article
Smartphone Biosensors for Non-Invasive Drug Monitoring in Saliva
by Atheer Awad, Lucía Rodríguez-Pombo, Paula Esteiro Simón, André Campos Álvarez, Carmen Alvarez-Lorenzo, Abdul W. Basit and Alvaro Goyanes
Biosensors 2025, 15(3), 163; https://doi.org/10.3390/bios15030163 - 4 Mar 2025
Cited by 1 | Viewed by 2998
Abstract
In recent years, biosensors have emerged as a promising solution for therapeutic drug monitoring (TDM), offering automated systems for rapid chemical analyses with minimal pre-treatment requirements. The use of saliva as a biological sample matrix offers distinct advantages, including non-invasiveness, cost-effectiveness, and reduced susceptibility to fluid intake fluctuations compared to alternative methods. The aim of this study was to explore and compare two types of low-cost biosensors, namely, the colourimetric and electrochemical methodologies, for quantifying paracetamol (acetaminophen) concentrations within artificial saliva using the MediMeter app, which has been specifically developed for this application. The research encompassed extensive optimisations and methodological refinements to ensure the results were robust and reliable. Material selection and parameter adjustments minimised external interferences, enhancing measurement accuracy. Both the colourimetric and electrochemical methods successfully determined paracetamol concentrations within the therapeutic range of 0.01–0.05 mg/mL (R2 = 0.939 for colourimetric and R2 = 0.988 for electrochemical). While both techniques offered different advantages, the electrochemical approach showed better precision (i.e., standard deviation of response = 0.1041 mg/mL) and speed (i.e., ~1 min). These findings highlight the potential use of biosensors in drug concentration determination, with the choice of technology dependent on specific application requirements. The development of an affordable, non-invasive and rapid biosensing system holds promise for remote drug concentration monitoring, reducing the need for invasive approaches and hospital visits. Future research could extend these methodologies to practical clinical applications, encouraging the use of TDM for enhanced precision, accessibility, and real-time patient-centric care. Full article
(This article belongs to the Section Biosensors and Healthcare)
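A colourimetric biosensor of this kind ultimately rests on a calibration curve relating colour response to concentration; the sketch below fits and inverts a linear calibration on invented numbers (they are not data from the study):

```python
import numpy as np

# Hypothetical calibration: colour-channel response vs paracetamol concentration
conc = np.array([0.01, 0.02, 0.03, 0.04, 0.05])       # mg/mL
response = np.array([0.11, 0.19, 0.32, 0.41, 0.50])   # e.g. normalised intensity

slope, intercept = np.polyfit(conc, response, 1)       # least-squares line
pred = slope * conc + intercept
ss_res = ((response - pred) ** 2).sum()
ss_tot = ((response - response.mean()) ** 2).sum()
r2 = 1 - ss_res / ss_tot                               # goodness of fit

def predict_concentration(measured):
    """Invert the calibration line to estimate concentration from a reading."""
    return (measured - intercept) / slope
```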

47 pages, 4501 KB  
Review
Micronutrient Biofortification in Wheat: QTLs, Candidate Genes and Molecular Mechanism
by Adnan Nasim, Junwei Hao, Faiza Tawab, Ci Jin, Jiamin Zhu, Shuang Luo and Xiaojun Nie
Int. J. Mol. Sci. 2025, 26(5), 2178; https://doi.org/10.3390/ijms26052178 - 28 Feb 2025
Cited by 2 | Viewed by 2373
Abstract
Micronutrient deficiency (hidden hunger) is one of the serious health problems globally, often due to diets dominated by staple foods. Genetic biofortification of a staple like wheat has surfaced as a promising, cost-efficient, and sustainable strategy. Significant genetic diversity exists in wheat and its wild relatives, but the nutritional profile in commercial wheat varieties has inadvertently declined over time, striving for better yield and disease resistance. Substantial efforts have been made to biofortify wheat using conventional and molecular breeding. QTL and genome-wide association studies were conducted, and some of the identified QTLs/marker-trait association (MTAs) for grain micronutrients like Fe have been exploited by MAS. The genetic mechanisms of micronutrient uptake, transport, and storage have also been investigated. Although wheat biofortified varieties are now commercially cultivated in selected regions worldwide, further improvements are needed. This review provides an overview of wheat biofortification, covering breeding efforts, nutritional evaluation methods, nutrient assimilation and bioavailability, and microbial involvement in wheat grain enrichment. Emerging technologies such as non-destructive hyperspectral imaging (HSI)/red, green, and blue (RGB) phenotyping; multi-omics integration; CRISPR-Cas9 alongside genomic selection; and microbial genetics hold promise for advancing biofortification. Full article
(This article belongs to the Special Issue Wheat Genetics and Genomics: 3rd Edition)

19 pages, 4101 KB  
Article
Development of a Method for Soil Tilth Quality Evaluation from Crumbling Roller Baskets Using Deep Machine Learning Models
by Mehari Z. Tekeste, Junxian Guo, Desale Habtezgi, Jia-Hao He and Marcin Waz
Sensors 2024, 24(11), 3379; https://doi.org/10.3390/s24113379 - 24 May 2024
Cited by 1 | Viewed by 1351
Abstract
A combination tillage with disks, rippers, and roller baskets allows the loosening of compacted soils and the crumbling of soil clods. Statistical methods for evaluating the soil tilth quality of combination tillage are limited. Light Detection and Ranging (LiDAR) data and machine learning models (Random Forest (RF), Support Vector Machine (SVM), and Neural Network (NN)) are proposed to investigate the effect of roller basket pressure settings on soil tilth quality. Soil profiles were measured using LiDAR (stop and go and on-the-go) and RGB visual images from a Completely Randomized Design (CRD) tillage experiment on clay loam soil with treatments of roller basket down, roller basket up, and no-till in three replicates. Applying the RF, SVM, and NN methods to the LiDAR data set identified the median, mean, maximum, and standard deviation as the most important features, all of which were statistically affected by the roller settings. Applying multivariate discriminatory analysis on the four statistical measures, three soil tilth classes were predicted with mean prediction rates of 77% (Roller-basket down), 64% (Roller-basket up), and 90% (No till). The LiDAR data analytics-inspired soil tilth classes correlated well with the RGB image discriminatory analysis. Soil tilth machine learning models were shown to be successful in classifying soil tilth with regard to onboard operator pressure control settings on the roller basket of the combination tillage implement. Full article
(This article belongs to the Special Issue AI, IoT and Smart Sensors for Precision Agriculture)
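The four statistics singled out by the feature-importance analysis are straightforward to compute per elevation profile; a toy sketch with synthetic rough versus finely crumbled profiles (the magnitudes are invented):

```python
import numpy as np

def tilth_features(profile):
    """Summary statistics found most discriminative for tilth classes."""
    return {
        "median": float(np.median(profile)),
        "mean": float(np.mean(profile)),
        "maximum": float(np.max(profile)),
        "std": float(np.std(profile)),
    }

rng = np.random.default_rng(2)
# toy LiDAR elevation profiles (mm about the mean surface)
rough = 12.0 * rng.standard_normal(500)     # e.g. cloddy surface, roller basket up
fine = 4.0 * rng.standard_normal(500)       # e.g. crumbled surface, roller basket down

f_rough, f_fine = tilth_features(rough), tilth_features(fine)
```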

18 pages, 10031 KB  
Article
Action Recognition of Taekwondo Unit Actions Using Action Images Constructed with Time-Warped Motion Profiles
by Junghwan Lim, Chenglong Luo, Seunghun Lee, Young Eun Song and Hoeryong Jung
Sensors 2024, 24(8), 2595; https://doi.org/10.3390/s24082595 - 18 Apr 2024
Cited by 5 | Viewed by 2541
Abstract
Taekwondo has evolved from a traditional martial art into an official Olympic sport. This study introduces a novel action recognition model tailored for Taekwondo unit actions, utilizing joint-motion data acquired via wearable inertial measurement unit (IMU) sensors. The utilization of IMU sensor-measured motion data facilitates the capture of the intricate and rapid movements characteristic of Taekwondo techniques. The model, underpinned by a conventional convolutional neural network (CNN)-based image classification framework, synthesizes action images to represent individual Taekwondo unit actions. These action images are generated by mapping joint-motion profiles onto the RGB color space, thus encapsulating the motion dynamics of a single unit action within a solitary image. To further refine the representation of rapid movements within these images, a time-warping technique was applied, adjusting motion profiles in relation to the velocity of the action. The effectiveness of the proposed model was assessed using a dataset compiled from 40 Taekwondo experts, yielding remarkable outcomes: an accuracy of 0.998, a precision of 0.983, a recall of 0.982, and an F1 score of 0.982. These results underscore this time-warping technique’s contribution to enhancing feature representation, as well as the proposed method’s scalability and effectiveness in recognizing Taekwondo unit actions. Full article
(This article belongs to the Section Wearables)
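The action-image construction can be approximated as follows: each joint's motion components are resampled to a fixed width, min-max scaled, and written into the R, G, B channels, one image row per joint. This is a simplified sketch on random data, omitting the paper's time-warping step:

```python
import numpy as np

def motion_to_rgb(profiles, width=64):
    """Encode per-joint motion profiles as rows of an RGB action image.

    profiles: (n_joints, 3, T) array of joint-motion components.
    Each component is resampled to `width` samples and min-max scaled to 0-255.
    """
    n_joints, n_ch, T = profiles.shape
    t_old = np.linspace(0, 1, T)
    t_new = np.linspace(0, 1, width)
    img = np.zeros((n_joints, width, 3), dtype=np.uint8)
    for j in range(n_joints):
        for c in range(3):
            resampled = np.interp(t_new, t_old, profiles[j, c])
            lo, hi = resampled.min(), resampled.max()
            scale = (hi - lo) or 1.0            # guard against flat profiles
            img[j, :, c] = ((resampled - lo) / scale * 255).astype(np.uint8)
    return img

rng = np.random.default_rng(3)
action = rng.standard_normal((17, 3, 120))   # 17 joints, 3 axes, 120 time steps
image = motion_to_rgb(action)
```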

15 pages, 4110 KB  
Article
Green Extraction of Natural Colorants from Food Residues: Colorimetric Characterization and Nanostructuring for Enhanced Stability
by Victoria Baggi Mendonça Lauria and Luciano Paulino Silva
Foods 2024, 13(6), 962; https://doi.org/10.3390/foods13060962 - 21 Mar 2024
Viewed by 3982
Abstract
Food residues are a promising resource for obtaining natural pigments, which may replace artificial dyes in the industry. However, their use still presents challenges due to the lack of suitable sources and the low stability of these natural compounds when exposed to environmental variations. In this scenario, the present study aims to identify different food residues (such as peels, stalks, and leaves) as potential candidates for obtaining natural colorants through eco-friendly extractions, identify the colorimetric profile of natural pigments using the RGB color model, and develop alternatives using nanotechnology (e.g., liposomes, micelles, and polymeric nanoparticles) to increase their stability. The results showed that extractive solution and residue concentration influenced the RGB color profile of the pigments. Furthermore, the external leaves of Brassica oleracea L. var. capitata f. rubra, the peels of Cucurbita maxima, Cucurbita maxima x Cucurbita moschata, and Beta vulgaris L. proved to be excellent resources for obtaining natural pigments. Finally, the use of nanotechnology proved to be a viable alternative for increasing the stability of natural colorants over storage time. Full article

25 pages, 12024 KB  
Article
Linking High-Resolution UAV-Based Remote Sensing Data to Long-Term Vegetation Sampling—A Novel Workflow to Study Slow Ecotone Dynamics
by Fabian Döweler, Johan E. S. Fransson and Martin K.-F. Bader
Remote Sens. 2024, 16(5), 840; https://doi.org/10.3390/rs16050840 - 28 Feb 2024
Cited by 1 | Viewed by 2556
Abstract
Unravelling slow ecosystem migration patterns requires a fundamental understanding of the broad-scale climatic drivers, which are further modulated by fine-scale heterogeneities just outside established ecosystem boundaries. While modern Unoccupied Aerial Vehicle (UAV) remote sensing approaches enable us to monitor local scale ecotone dynamics in unprecedented detail, they are often underutilised as a temporal snapshot of the conditions on site. In this study in the Southern Alps of New Zealand, we demonstrate how the combination of multispectral and thermal data, as well as LiDAR data (2019), supplemented by three decades (1991–2021) of treeline transect data can add great value to field monitoring campaigns by putting seedling regeneration patterns at treeline into a spatially explicit context. Orthorectification and mosaicking of RGB and multispectral imagery produced spatially extensive maps of the subalpine area (~4 ha) with low spatial offset (Craigieburn: 6.14 ± 4.03 cm; Mt Faust: 5.11 ± 2.88 cm, mean ± standard error). The seven multispectral bands enabled a highly detailed delineation of six ground cover classes at treeline. Subalpine shrubs were detected with high accuracy (up to 90%), and a clear identification of the closed forest canopy (Fuscospora cliffortioides, >95%) was achieved. Two thermal imaging flights revealed the effect of existing vegetation classes on ground-level thermal conditions. UAV LiDAR data acquisition at the Craigieburn site allowed us to model vegetation height profiles for ~6000 previously classified objects and calculate annual fine-scale variation in the local solar radiation budget (20 cm resolution). At the heart of the proposed framework, an easy-to-use extrapolation procedure was used for the vegetation monitoring datasets with minimal georeferencing effort. The proposed method can satisfy the rapidly increasing demand for high spatiotemporal resolution mapping and shed further light on current treeline recruitment bottlenecks. This low-budget framework can readily be expanded to other ecotones, allowing us to gain further insights into slow ecotone dynamics in a drastically changing climate. Full article
(This article belongs to the Section Ecological Remote Sensing)

17 pages, 5620 KB  
Article
Image Segmentation and Filtering of Anaerobic Lagoon Floating Cover in Digital Elevation Model and Orthomosaics Using Unsupervised k-Means Clustering for Scum Association Analysis
by Benjamin Steven Vien, Thomas Kuen, Louis Raymond Francis Rose and Wing Kong Chiu
Remote Sens. 2023, 15(22), 5357; https://doi.org/10.3390/rs15225357 - 14 Nov 2023
Cited by 6 | Viewed by 2177
Abstract
In various engineering applications, remote sensing images such as digital elevation models (DEMs) and orthomosaics provide a convenient means of generating 3D representations of physical assets, enabling the discovery of new insights and analyses. However, the presence of noise and artefacts, particularly unwanted natural features, poses significant challenges, and their removal requires the application of filtering techniques prior to conducting analysis. Unmanned aerial vehicle-based photogrammetry is used at Melbourne Water’s Western Treatment Plant as a cost-effective and efficient method of inspecting the floating covers on the anaerobic lagoons. The focus of interest is the elevation profile of the floating covers for these sewage-processing lagoons and its implications for sub-surface scum accumulation, which can compromise the structural integrity of the engineered assets. However, unwanted artefacts due to trapped rainwater, debris, dirt, and other irrelevant structures can significantly distort the elevation profile. In this study, a machine learning algorithm is utilised to group distinct features on the floating cover based on an image segmentation process. An unsupervised k-means clustering algorithm is employed, which operates on a stacked 4D array composed of the elevation of the DEM and the RGB channels of the associated orthomosaic. In the cluster validation process, seven cluster groups were considered optimal based on the Calinski–Harabasz criterion. Furthermore, by utilising the k-means method as a filtering technique, three clusters contain features related to the elevations associated with the floating cover membrane, collectively representing 84% of the asset, with each cluster contributing at least 19% of the asset. The artefact groups constitute less than 6% of the asset and exhibit significantly different features, colour characteristics, and statistical measurements from those of the membrane groups. 
The study found notable improvements using the k-means filtering method, including a 59.4% average reduction in outliers and a 36.3% decrease in standard deviation compared to the raw data. Additionally, employing the proposed method in the scum hardness analysis improved the correlation strength by 13.1%, removing artefacts amounting to approximately 16% of the total asset, in contrast to a 3.6% improvement with the median filtering method. This improved imaging will yield significant benefits when integrating the imagery into deep learning models for structural health monitoring and asset performance. Full article
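The segmentation step described above — k-means on a per-pixel feature vector of DEM elevation plus the orthomosaic RGB channels, with the number of clusters selected by the Calinski–Harabasz criterion — can be sketched with scikit-learn. This is a minimal illustration on synthetic data; the array sizes, value ranges, and candidate k range are assumptions, not the study's actual configuration:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import calinski_harabasz_score
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Synthetic stand-in for the stacked array: per-pixel DEM elevation (m)
# plus the RGB channels of the co-registered orthomosaic (H x W x 4).
h, w = 40, 40
dem = rng.normal(10.0, 0.5, (h, w))
rgb = rng.integers(0, 256, (h, w, 3)).astype(float)
features = StandardScaler().fit_transform(np.dstack([dem, rgb]).reshape(-1, 4))

# Pick k by the Calinski-Harabasz criterion (the study settled on k = 7).
best_k, best_score = None, -np.inf
for k in range(2, 9):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(features)
    score = calinski_harabasz_score(features, labels)
    if score > best_score:
        best_k, best_score = k, score

print("optimal k:", best_k)
```

On the real data, the clusters whose mean colour and elevation match the membrane can then be kept and the artefact clusters masked out, mirroring the filtering step described above.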
14 pages, 3545 KB  
Technical Note
Early Detection and Analysis of an Unpredicted Convective Storm over the Negev Desert
by Shilo Shiff, Amir Givati, Steve Brenner and Itamar M. Lensky
Remote Sens. 2023, 15(21), 5241; https://doi.org/10.3390/rs15215241 - 4 Nov 2023
Viewed by 2139
Abstract
On 15 September 2015, a convective storm yielded heavy rainfall that caused the strongest flash flood in the last 50 years in the South Negev Desert (Israel). None of the operational forecast models predicted the event, and thus no warning was provided. We analyzed this event using satellite, radar, and numerical weather prediction model data. We generated cloud-free climatological values on a per-pixel basis by applying Temporal Fourier Analysis to a time series of MSG geostationary satellite data. The discrepancy between the measured and climatological values was used to detect “cloud-contaminated” pixels. This simple, robust, fast, and accurate method is valuable for the early detection of convection. The first clouds were detected 30 min before they were detected by the official MSG cloud mask, 4.5 h before the radar, and 10 h before the flood reached the main road. We used the “severe storms” RGB composite and the satellite-retrieved vertical profiles of the cloud-top temperature–particle effective radius relation as indicators of the development of a severe convective storm. We also reran the model with different convective schemes, with much-improved results. Both the satellite- and model-based analyses provided early warning of a very high probability of flooding a few hours before the actual flooding occurred. Full article
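The climatology step — fitting a low-order Temporal Fourier model to each pixel's time series and flagging observations that depart strongly from it — can be sketched as follows. This is a minimal single-pixel illustration with synthetic brightness temperatures; the harmonic count, the 10 K threshold, and the data values are assumptions, not the operational settings:

```python
import numpy as np

def fourier_climatology(t, y, n_harmonics=2, period=365.0):
    """Least-squares fit of a mean plus low-order harmonics to a time series."""
    cols = [np.ones_like(t)]
    for k in range(1, n_harmonics + 1):
        cols += [np.cos(2 * np.pi * k * t / period),
                 np.sin(2 * np.pi * k * t / period)]
    A = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return A @ coef

rng = np.random.default_rng(0)
t = np.arange(365.0)
# Synthetic clear-sky brightness temperature (K) with an annual cycle
clear = 290 + 15 * np.sin(2 * np.pi * t / 365) + rng.normal(0, 0.5, t.size)
obs = clear.copy()
obs[200:205] -= 40  # cloud tops: much colder than the surface climatology

clim = fourier_climatology(t, clear)
cloudy = (clim - obs) > 10  # flag "cloud-contaminated" observations
print("cloud-contaminated days:", np.flatnonzero(cloudy))
```

Observations much colder than their clear-sky climatology are flagged as cloud-contaminated, which is what allows developing convective cloud to be picked up earlier than by a conventional cloud mask.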
(This article belongs to the Special Issue Remote Sensing of Extreme Weather Events: Monitoring and Modeling)