Search Results (63)

Search Parameters:
Keywords = RGB profiling

19 pages, 4367 KB  
Article
A Neuro-Symbolic Approach to Fall Detection via Monocular Depth Estimation
by Yinghai Xu, Bongjun Kim, In-Nea Wang and Junho Jeong
Appl. Sci. 2026, 16(4), 1895; https://doi.org/10.3390/app16041895 - 13 Feb 2026
Viewed by 138
Abstract
Falls remain a critical safety concern in surveillance settings, yet monocular RGB methods often degrade in multi-person scenes with occlusion and loss of three-dimensional cues. This study proposes a neuro-symbolic framework that restores physically interpretable depth proxies from monocular video and fuses them with skeleton-based spatio-temporal inference for robust fall detection. The pipeline estimates per-frame depth and 2D skeletons, recovers world coordinates for key joints, and derives absolute neck height and vertical descent rate for rule-based adjudication, while a neural method operates on joint trajectories; final decisions combine both streams with a logical policy and short-horizon temporal consistency. Experiments in a realistic indoor testbed with multi-person activity compare three configurations—neural, symbolic, and fused. The fused neuro-symbolic method achieved an accuracy of 0.88 and an F1 score of 0.76 on the real surveillance test set, outperforming the neural method alone (accuracy 0.81, F1 0.64) and the symbolic method alone (accuracy 0.77, F1 0.35). Gains arise from complementary error profiles: depth-derived, rule-based cues suppress spurious positives on non-fall frames, while the neural stream recovers true falls near rule boundaries. These findings indicate that integrating monocular depth proxies with interpretable rules improves reliability without additional sensors, supporting deployment in complex, multi-person surveillance environments. Full article
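The decision logic described in the abstract above (depth-derived rules plus a neural score, gated by short-horizon temporal consistency) can be sketched as follows. The thresholds, the conjunctive fusion policy, and the voting horizon here are illustrative assumptions, not the paper's reported values.

```python
from collections import deque

def symbolic_fall_rule(neck_height_m, descent_rate_ms,
                       height_thresh=0.6, descent_thresh=1.0):
    # Depth-derived rule: neck is low AND descending fast.
    # Thresholds are illustrative, not the paper's calibrated values.
    return neck_height_m < height_thresh and descent_rate_ms > descent_thresh

class FusedFallDetector:
    """Fuse the neural stream with the symbolic rule (here a conjunction),
    then require k of the last n frames to agree (temporal consistency)."""

    def __init__(self, neural_thresh=0.5, horizon=5, min_votes=3):
        self.neural_thresh = neural_thresh
        self.min_votes = min_votes
        self.votes = deque(maxlen=horizon)

    def step(self, neural_prob, neck_height_m, descent_rate_ms):
        # Per-frame decision from both streams, then a k-of-n vote.
        frame_fall = (neural_prob > self.neural_thresh
                      and symbolic_fall_rule(neck_height_m, descent_rate_ms))
        self.votes.append(frame_fall)
        return sum(self.votes) >= self.min_votes
```

A conjunction suppresses spurious positives (matching the role the paper assigns to the symbolic stream), at the cost of requiring both streams to fire; the paper's actual logical policy may differ.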

28 pages, 7334 KB  
Article
I-GhostNetV3: A Lightweight Deep Learning Framework for Vision-Sensor-Based Rice Leaf Disease Detection in Smart Agriculture
by Puyu Zhang, Rui Li, Yuxuan Liu, Guoxi Sun and Chenglin Wen
Sensors 2026, 26(3), 1025; https://doi.org/10.3390/s26031025 - 4 Feb 2026
Viewed by 284
Abstract
Accurate and timely diagnosis of rice leaf diseases is crucial for smart agriculture leveraging vision sensors. However, existing lightweight convolutional neural networks (CNNs) often struggle in complex field environments, where small lesions, cluttered backgrounds, and varying illumination complicate recognition. This paper presents I-GhostNetV3, an incrementally improved GhostNetV3-based network for RGB rice leaf disease recognition. I-GhostNetV3 introduces two modular enhancements with controlled overhead: (1) Adaptive Parallel Attention (APA), which integrates edge-guided spatial and channel cues and is selectively inserted to enhance lesion-related representations (at the cost of additional computation), and (2) Fusion Coordinate-Channel Attention (FCCA), a near-neutral SE replacement that enables efficient spatial–channel feature fusion to suppress background interference. Experiments on the Rice Leaf Bacterial and Fungal Disease (RLBF) dataset show that I-GhostNetV3 achieves 90.02% Top-1 accuracy with 1.831 million parameters and 248.694 million FLOPs, outperforming MobileNetV2 and EfficientNet-B0 under our experimental setup while remaining compact relative to the original GhostNetV3. In addition, evaluation on PlantVillage-Corn serves as a supplementary transfer sanity check; further validation on independent real-field target domains and on-device profiling will be explored in future work. These results indicate that I-GhostNetV3 is a promising efficient backbone for future edge deployment in precision agriculture. Full article

23 pages, 5736 KB  
Article
A Model for Identifying the Fermentation Degree of Tieguanyin Oolong Tea Based on RGB Image and Hyperspectral Data
by Yuyan Huang, Yongkuai Chen, Chuanhui Li, Tao Wang, Chengxu Zheng and Jian Zhao
Foods 2026, 15(2), 280; https://doi.org/10.3390/foods15020280 - 12 Jan 2026
Viewed by 243
Abstract
The fermentation process of oolong tea is a critical step in shaping its quality and flavor profile. In this study, the fermentation degree of Anxi Tieguanyin oolong tea was assessed using image and hyperspectral features. Machine learning algorithms, including Support Vector Machine (SVM), Long Short-Term Memory (LSTM), and Gated Recurrent Unit (GRU), were employed to develop models based on both single-source features and multi-source fused features. First, color and texture features were extracted from RGB images and then processed through Pearson correlation-based feature selection and Principal Component Analysis (PCA) for dimensionality reduction. For the hyperspectral data, preprocessing was conducted using Normalization (Nor) and Standard Normal Variate (SNV), followed by feature selection and dimensionality reduction with Competitive Adaptive Reweighted Sampling (CARS), Successive Projections Algorithm (SPA), and PCA. We then performed mid-level fusion on the two feature sets and selected the most relevant features using L1 regularization for the final modeling stage. Finally, SHapley Additive exPlanations (SHAP) analysis was conducted on the optimal models to reveal key features from both hyperspectral bands and image data. The results indicated that models based on single features achieved test set accuracies of 68.06% to 87.50%, while models based on data fusion achieved 77.78% to 94.44%. Specifically, the Pearson+Nor-SPA+L1+SVM fusion model achieved the highest accuracy of 94.44%. This demonstrates that data feature fusion enables a more comprehensive characterization of the fermentation process, significantly improving model accuracy. SHAP analysis revealed that the hyperspectral bands at 967, 942, 814, 784, 781, 503, 413, and 416 nm, along with the image features Hσ and H, played the most crucial roles in distinguishing tea fermentation stages. 
These findings provide a scientific basis for assessing the fermentation degree of Tieguanyin oolong tea and support the development of intelligent detection systems. Full article
(This article belongs to the Section Food Analytical Methods)
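The SNV preprocessing and mid-level fusion steps mentioned in the abstract above are straightforward to sketch: each spectrum is centered and scaled by its own mean and standard deviation (a common scatter correction), and the reduced image and spectral feature sets are concatenated. This is a generic implementation, not the authors' exact code.

```python
import numpy as np

def snv(spectra):
    # Standard Normal Variate: center and scale each spectrum by its own
    # mean and standard deviation (rows = spectra, columns = wavelengths).
    spectra = np.asarray(spectra, dtype=float)
    mean = spectra.mean(axis=1, keepdims=True)
    std = spectra.std(axis=1, keepdims=True)
    return (spectra - mean) / std

def mid_level_fusion(image_features, spectral_features):
    # Mid-level fusion as plain concatenation of the two reduced feature
    # sets; the paper then applies L1 regularization to select features.
    return np.hstack([image_features, spectral_features])
```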

25 pages, 4363 KB  
Article
Demand Response Potential Evaluation Based on Multivariate Heterogeneous Features and Stacking Mechanism
by Chong Gao, Zhiheng Xu, Ran Cheng, Junxiao Zhang, Xinghang Weng, Huahui Zhang, Tao Yu and Wencong Xiao
Energies 2026, 19(1), 194; https://doi.org/10.3390/en19010194 - 30 Dec 2025
Viewed by 245
Abstract
Accurate evaluation of demand response (DR) potential at the individual user level is critical for the effective implementation and optimization of demand response programs. However, existing data-driven methods often suffer from insufficient feature representation, limited characterization of load profile dynamics, and ineffective fusion of heterogeneous features, leading to suboptimal evaluation performance. To address these challenges, this paper proposes a novel demand response potential evaluation method based on multivariate heterogeneous features and a Stacking-based ensemble mechanism. First, multidimensional indicator features are extracted from historical electricity consumption data and external factors (e.g., weather, time-of-use pricing), capturing load shape, variability, and correlation characteristics. Second, to enrich the information space and preserve temporal dynamics, typical daily load profiles are transformed into two-dimensional image features using the Gramian Angular Difference Field (GADF), the Markov Transition Field (MTF), and an Improved Recurrence Plot (IRP), which are then fused into a single RGB image. Third, a differentiated modeling strategy is adopted: scalar indicator features are processed by classical machine learning models (Support Vector Machine, Random Forest, XGBoost), while image features are fed into a deep convolutional neural network (SE-ResNet-20). Finally, a Stacking ensemble learning framework is employed to intelligently integrate the outputs of base learners, with a Decision Tree as the meta-learner, thereby enhancing overall evaluation accuracy and robustness. Experimental results on a real-world dataset demonstrate that the proposed method achieves superior performance compared to individual models and conventional fusion approaches, effectively leveraging both structured indicators and unstructured image representations for high-precision demand response potential evaluation. Full article
(This article belongs to the Section F1: Electrical Power System)
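The GADF encoding used above maps a 1-D load profile into polar angles and takes pairwise angle differences; stacking three such planes then yields the RGB image. A minimal sketch of the GADF and the channel stacking (the MTF and IRP encodings are omitted here):

```python
import numpy as np

def gadf(x):
    # Rescale the series to [-1, 1], map to polar angles phi = arccos(x),
    # and form GADF[i, j] = sin(phi_i - phi_j).
    x = np.asarray(x, dtype=float)
    x = 2 * (x - x.min()) / (x.max() - x.min() + 1e-12) - 1
    phi = np.arccos(np.clip(x, -1.0, 1.0))
    return np.sin(phi[:, None] - phi[None, :])

def fuse_channels(r, g, b):
    # Stack three unit-scaled 2-D encodings (e.g., GADF, MTF, IRP)
    # as the R, G, B planes of a single image.
    def scale(a):
        return (a - a.min()) / (a.max() - a.min() + 1e-12)
    return np.stack([scale(r), scale(g), scale(b)], axis=-1)
```

The resulting GADF is antisymmetric with a zero diagonal, which is what preserves the temporal ordering of the profile in image form.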

16 pages, 3000 KB  
Article
Can Culture Imaging Implement Radial Growth Parameters to Disentangle Intraspecific Variability in Fomes fomentarius?
by Carolina Elena Girometta, Simone Buratti, Hajar Akridiss, Ewa Zapora, Marek Wołkowycki, Eugene Yurchenko, Daniel Skowron and Lidia Nicola
Forests 2026, 17(1), 19; https://doi.org/10.3390/f17010019 - 23 Dec 2025
Viewed by 345
Abstract
Fomes fomentarius (L.) Fr. sensu lato is a common, widespread polypore and a pathological decayer of many hosts such as poplar, beech, and birch. It is variously regarded as a single species, as a species complex, or as a single species displaying significant intraspecific variability. Limits between populations are fuzzy, and local differences have mainly been related to the current distribution of preferred hosts. The aim of this work was to test an imaging technique (RGB profiling) of culture macromorphology on Petri plates to supplement the traditional growth profiles of pure cultures and thereby point out differences between strains from different European regions, hosts, and climates. Growth rates at 24 °C and 30 °C poorly segregated strains by origin, whereas there was a marked difference at 15 °C between strains from oceanic climates and continental climates. K-means clustering of RGB profiles also marked a difference at 15 °C between Central/North European strains and the Italian strains, although this variability gradually attenuated with increasing temperature. The combined approach, including radial growth measurement and RGB profiling, successfully pointed out the intraspecific diversity in F. fomentarius, suggesting local adaptations. This study contributes to establishing a methodology to investigate the ecotype concept in polypores. Full article
(This article belongs to the Special Issue Advances in Fungal Diseases in Forests)
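One plausible reading of "RGB profiling" of plate cultures is a radial color profile: mean R, G, B as a function of distance from the colony center, with the per-strain profiles then clustered (e.g., by k-means). The binning scheme below is an illustrative assumption, not the paper's exact protocol.

```python
import numpy as np

def radial_rgb_profile(img, center, n_bins=16):
    # Mean R, G, B per radial distance bin around the colony center.
    # img: (H, W, 3) array; returns an (n_bins, 3) profile.
    h, w, _ = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    r = np.hypot(yy - center[0], xx - center[1])
    bins = np.minimum((r / (r.max() + 1e-9) * n_bins).astype(int), n_bins - 1)
    profile = np.zeros((n_bins, 3))
    for b in range(n_bins):
        mask = bins == b
        if mask.any():
            profile[b] = img[mask].mean(axis=0)
    return profile
```

Flattened profiles from many strains at a given temperature form a feature matrix suitable for k-means, which is how origin-related clusters such as those reported at 15 °C could be sought.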

21 pages, 10123 KB  
Article
Bulk Tea Shoot Detection and Profiling Method for Tea Plucking Machines Using an RGB-D Camera
by Yuyang Cai, Xurui Li, Wenyu Yi and Guangshuai Liu
Sensors 2025, 25(23), 7204; https://doi.org/10.3390/s25237204 - 25 Nov 2025
Viewed by 498
Abstract
Due to the shortage of rural labor and an increasingly aging population, promoting the mechanized plucking of bulk tea and improving plucking efficiency have become urgent problems for tea plantations. Previous bulk tea plucking machines have not fully adapted to tea plantations in hilly areas, necessitating enhancements in the performance of cutter profiling. In this paper, we present an automatic cutter profiling method based on an RGB-D camera, which utilizes the depth information of bulk tea shoots to tackle the issues mentioned above. Specifically, we use improved super-green features and the Otsu method to detect and segment the shoots from RGB images of the tea canopy taken under different lighting conditions. Furthermore, the cutting pose based on the depth value of the tea shoots can be generated as a basis for cutter profiling. Lastly, the profiling task is completed by the upper computer controlling motors to adjust the cutter pose. Field tests were conducted in the tea plantation to verify the proposed profiling method's effectiveness. The average bud and leaf integrity rate, leakage rate, loss rate, tea making rate, and qualified rate were 81.2%, 0.91%, 0.66%, and 90.4%, respectively. The results show that the developed algorithm can improve cutting pose calculation accuracy and that the harvested bulk tea shoots meet the requirements of machine plucking quality standards and subsequent processing. Full article
(This article belongs to the Section Smart Agriculture)
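The shoot segmentation step above, a greenness index followed by Otsu thresholding, can be sketched as below. This uses the standard Excess Green (ExG) index; the paper uses an improved super-green variant whose exact form is not reproduced here.

```python
import numpy as np

def excess_green(rgb):
    # ExG = 2g - r - b on chromaticity-normalized channels, a standard
    # greenness index for vegetation segmentation.
    rgb = np.asarray(rgb, dtype=float)
    s = rgb.sum(axis=-1, keepdims=True) + 1e-9
    r, g, b = np.moveaxis(rgb / s, -1, 0)
    return 2 * g - r - b

def otsu_threshold(values, n_bins=64):
    # Otsu's method: choose the threshold that maximizes the
    # between-class variance of the histogram.
    hist, edges = np.histogram(np.asarray(values).ravel(), bins=n_bins)
    p = hist / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(p)                    # class-0 weight up to each bin
    mu = np.cumsum(p * centers)          # class-0 cumulative mean mass
    mu_t = mu[-1]                        # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * w0 - mu) ** 2 / (w0 * (1 - w0))
    return centers[np.nanargmax(sigma_b)]
```

Pixels whose ExG value exceeds the Otsu threshold are kept as shoot candidates; their depth values then drive the cutting-pose computation.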

24 pages, 2454 KB  
Article
Low-Cost Eye-Tracking Fixation Analysis for Driver Monitoring Systems Using Kalman Filtering and OPTICS Clustering
by Jonas Brandstetter, Eva-Maria Knoch and Frank Gauterin
Sensors 2025, 25(22), 7028; https://doi.org/10.3390/s25227028 - 17 Nov 2025
Viewed by 845
Abstract
Driver monitoring systems benefit from fixation-related eye-tracking features, yet dedicated eye-tracking hardware is costly and difficult to integrate at scale. This study presents a practical software pipeline that extracts fixation-related features from conventional RGB video. Facial and pupil landmarks obtained with MediaPipe are denoised using a Kalman filter, fixation centers are identified with the OPTICS algorithm within a sliding window, and an affine normalization compensates for head motion and camera geometry. Fixation segments are derived from smoothed velocity profiles based on a moving average. Experiments with laptop camera recordings show that the combined Kalman and OPTICS pipeline reduces landmark jitter and yields more stable fixation centroids, while the affine normalization further improves referential pupil stability. The pipeline operates with minimal computational overhead and can be implemented as a software update in existing driver monitoring or advanced driver assistance systems. This work is a proof of concept that demonstrates feasibility in a low-cost RGB setting with a limited evaluation scope. Remaining challenges include sensitivity to lighting conditions and head motion that future work may address through near-infrared sensing, adaptive calibration, and broader validation across subjects, environments, and cameras. The extracted features are relevant for future studies on cognitive load and attention, although cognitive state inference is not validated here. Full article
(This article belongs to the Section Sensing and Imaging)
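The landmark-denoising step above can be sketched as a constant-velocity Kalman filter applied independently to each landmark coordinate. The process and measurement noise values are illustrative; the paper's state design and tuning may differ.

```python
import numpy as np

class Kalman1D:
    """Constant-velocity Kalman filter for one landmark coordinate.
    q, r, and dt are illustrative assumptions, not the paper's tuning."""

    def __init__(self, q=1e-3, r=1e-1, dt=1.0):
        self.x = np.zeros(2)                       # state: [position, velocity]
        self.P = np.eye(2)                         # state covariance
        self.F = np.array([[1.0, dt], [0.0, 1.0]]) # constant-velocity model
        self.H = np.array([[1.0, 0.0]])            # we observe position only
        self.Q = q * np.eye(2)                     # process noise
        self.R = np.array([[r]])                   # measurement noise

    def step(self, z):
        # Predict.
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        # Update with the noisy landmark measurement z.
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + (K @ (np.array([z]) - self.H @ self.x)).ravel()
        self.P = (np.eye(2) - K @ self.H) @ self.P
        return self.x[0]                           # denoised position
```

The smoothed coordinates then feed the OPTICS fixation-center step; velocity profiles for fixation segmentation can be read off the second state component.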

15 pages, 2232 KB  
Article
Image-Based Deep Learning for Brain Tumour Transcriptomics: A Benchmark of DeepInsight, Fotomics, and Saliency-Guided CNNs
by Ali Alyatimi, Vera Chung, Muhammad Atif Iqbal and Ali Anaissi
Mach. Learn. Knowl. Extr. 2025, 7(4), 119; https://doi.org/10.3390/make7040119 - 15 Oct 2025
Cited by 1 | Viewed by 1005
Abstract
Classifying brain tumour transcriptomic data is crucial for precision medicine but remains challenging due to high dimensionality and limited interpretability of conventional models. This study benchmarks three image-based deep learning approaches, DeepInsight, Fotomics, and a novel saliency-guided convolutional neural network (CNN), for transcriptomic classification. DeepInsight utilises dimensionality reduction to spatially arrange gene features, while Fotomics applies Fourier transforms to encode expression patterns into structured images. The proposed method transforms each single-cell gene expression profile into an RGB image using PCA, UMAP, or t-SNE, enabling CNNs such as ResNet to learn spatially organised molecular features. Gradient-based saliency maps are employed to highlight gene regions most influential in model predictions. Evaluation is conducted on two biologically and technologically different datasets: single-cell RNA-seq from glioblastoma GSM3828672 and bulk microarray data from medulloblastoma GSE85217. Outcomes demonstrate that image-based deep learning methods, particularly those incorporating saliency guidance, provide a robust and interpretable framework for uncovering biologically meaningful patterns in complex high-dimensional omics data. For instance, ResNet-18 achieved the highest accuracy, 97.25% on the GSE85217 dataset and 91.02% on GSM3828672, outperforming the other baseline models across multiple metrics. Full article
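The feature-to-image idea above, placing genes at 2-D coordinates learned from the data and rasterizing each expression profile into an image, can be sketched with a PCA layout. This is a single-channel illustration under simplifying assumptions; the paper builds three-channel RGB images and also uses UMAP and t-SNE layouts.

```python
import numpy as np

def gene_layout(X, size=16):
    # Pixel location for each gene from a 2-D PCA of the gene profiles:
    # each gene is a point in sample space (the columns of X).
    G = np.asarray(X, dtype=float).T            # (n_genes, n_samples)
    G = G - G.mean(axis=0)
    U, s, Vt = np.linalg.svd(G, full_matrices=False)
    xy = U[:, :2] * s[:2]                       # first two principal scores
    xy = xy - xy.min(axis=0)
    xy = xy / (xy.max(axis=0) + 1e-9) * (size - 1)
    return np.round(xy).astype(int)

def rasterize(sample, px, size=16):
    # Average the expression of genes that land on the same pixel.
    img = np.zeros((size, size))
    cnt = np.zeros((size, size))
    for (i, j), v in zip(px, sample):
        img[i, j] += v
        cnt[i, j] += 1
    return np.divide(img, cnt, out=np.zeros_like(img), where=cnt > 0)
```

A CNN such as ResNet then trains on these images, and gradient-based saliency over the pixel grid maps back to the genes placed at the highlighted locations.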

26 pages, 9183 KB  
Review
Application of Image Computing in Non-Destructive Detection of Chinese Cuisine
by Xiaowei Huang, Zexiang Li, Zhihua Li, Jiyong Shi, Ning Zhang, Zhou Qin, Liuzi Du, Tingting Shen and Roujia Zhang
Foods 2025, 14(14), 2488; https://doi.org/10.3390/foods14142488 - 16 Jul 2025
Viewed by 2362
Abstract
Food quality and safety are paramount in preserving the culinary authenticity and cultural integrity of Chinese cuisine, characterized by intricate ingredient combinations, diverse cooking techniques (e.g., stir-frying, steaming, and braising), and region-specific flavor profiles. Traditional non-destructive detection methods often struggle with the unique challenges posed by Chinese dishes, including complex textural variations in staple foods (e.g., noodles, dumplings), layered seasoning compositions (e.g., soy sauce, Sichuan peppercorns), and oil-rich cooking media. This study pioneers a hyperspectral imaging framework enhanced with domain-specific deep learning algorithms (spatial–spectral convolutional networks with attention mechanisms) to address these challenges. Our approach effectively deciphers the subtle spectral fingerprints of Chinese-specific ingredients (e.g., fermented black beans, lotus root) and quantifies critical quality indicators, achieving an average classification accuracy of 97.8% across 15 major Chinese dish categories. Specifically, the model demonstrates high precision in quantifying chili oil content in Mapo Tofu with a Mean Absolute Error (MAE) of 0.43% w/w and assessing freshness gradients in Cantonese dim sum (Shrimp Har Gow) with a classification accuracy of 95.2% for three distinct freshness levels. This approach leverages the detailed spectral information provided by hyperspectral imaging to automate the classification and detection of Chinese dishes, improving the accuracy of image-based food classification by more than 15 percentage points compared to traditional RGB methods and enhancing food quality and safety assessment. Full article

20 pages, 8096 KB  
Article
Low-Cost Hyperspectral Imaging in Macroalgae Monitoring
by Marc C. Allentoft-Larsen, Joaquim Santos, Mihailo Azhar, Henrik C. Pedersen, Michael L. Jakobsen, Paul M. Petersen, Christian Pedersen and Hans H. Jakobsen
Sensors 2025, 25(9), 2652; https://doi.org/10.3390/s25092652 - 22 Apr 2025
Cited by 2 | Viewed by 2298
Abstract
This study presents an approach to macroalgae monitoring using a cost-effective hyperspectral imaging (HSI) system and artificial intelligence (AI). Kelp beds are vital habitats and support nutrient cycling, making ongoing monitoring crucial amid environmental changes. HSI emerges as a powerful tool in this context, due to its ability to detect pigment-characteristic fingerprints that are often missed altogether by standard RGB cameras. Still, the high costs of these systems are a barrier to large-scale deployment for in situ monitoring. Here, we showcase the development of a cost-effective HSI setup that combines a GoPro camera with a continuous linear variable spectral bandpass filter. We empirically validate the operational capabilities through the analysis of two brown macroalgae, Fucus serratus and Fucus vesiculosus, and two red macroalgae, Ceramium sp. and Vertebrata byssoides, in a controlled aquatic environment. Our HSI system successfully captured spectral information from the target species, which exhibit considerable similarity in morphology and spectral profile, making them difficult to differentiate using traditional RGB imaging. Using a one-dimensional convolutional neural network, we reached a high average classification precision, recall, and F1-score of 99.9%, 89.5%, and 94.4%, respectively, demonstrating the effectiveness of our custom low-cost HSI setup. This work paves the way to achieving large-scale and automated ecological monitoring. Full article
(This article belongs to the Section Remote Sensors)

17 pages, 9448 KB  
Article
Plant Height and Soil Compaction in Coffee Crops Based on LiDAR and RGB Sensors Carried by Remotely Piloted Aircraft
by Nicole Lopes Bento, Gabriel Araújo e Silva Ferraz, Lucas Santos Santana, Rafael de Oliveira Faria, Giuseppe Rossi and Gianluca Bambi
Remote Sens. 2025, 17(8), 1445; https://doi.org/10.3390/rs17081445 - 17 Apr 2025
Cited by 1 | Viewed by 2042
Abstract
The use of Remotely Piloted Aircraft (RPA) as sensor-carrying airborne platforms for indirect measurement of plant physical parameters has been discussed in the scientific community. The utilization of RGB sensors with photogrammetric data processing based on Structure-from-Motion (SfM) and Light Detection and Ranging (LiDAR) sensors for point cloud construction are applicable in this context and can yield high-quality results. In this sense, this study aimed to compare coffee plant height data obtained from RGB/SfM and LiDAR point clouds and to estimate soil compaction through penetration resistance in a coffee plantation located in Minas Gerais, Brazil. A Matrice 300 RTK RPA equipped with a Zenmuse L1 sensor was used, with RGB data processed in PIX4D software (version 4.5.6) and LiDAR data in DJI Terra software (version V4.4.6). Canopy Height Model (CHM) analysis and cross-sectional profiles, together with correlation and statistical difference studies between the height data from the two sensors, were conducted to evaluate the RGB sensor's capability to estimate coffee plant height against the LiDAR data taken as reference. Based on the height data obtained by the two sensors, soil compaction in the coffee plantation was estimated through soil penetration resistance. The results demonstrated that both sensors provided dense point clouds from which plant height (R2 = 0.72, R = 0.85, and RMSE = 0.44) and soil penetration resistance (R2 = 0.87, R = 0.8346, and RMSE = 0.14 m) were accurately estimated, with no statistically significant differences between the analyzed sensor data. It is concluded, therefore, that remote sensing technologies can be employed for accurate estimation of coffee plantation heights and soil compaction, offering a potential pathway for reducing laborious manual field measurements. Full article
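The Canopy Height Model analysis mentioned above is, at its core, a per-pixel difference between the surface and terrain models, with agreement between sensors summarized by RMSE. A minimal sketch, assuming the point clouds have already been rasterized to aligned grids:

```python
import numpy as np

def canopy_height_model(dsm, dtm):
    # CHM = digital surface model (canopy top) minus digital terrain model
    # (ground), clipped at zero so bare soil maps to height 0.
    return np.clip(np.asarray(dsm, float) - np.asarray(dtm, float), 0.0, None)

def rmse(estimated, reference):
    # Root-mean-square error between sensor-derived and reference heights,
    # the agreement metric reported in the abstract.
    estimated = np.asarray(estimated, float)
    reference = np.asarray(reference, float)
    return float(np.sqrt(np.mean((estimated - reference) ** 2)))
```

Applying `rmse` to RGB/SfM-derived heights against LiDAR-derived heights reproduces the kind of comparison the study reports (RMSE = 0.44 for plant height).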

20 pages, 11233 KB  
Article
Capturing Free Surface Dynamics of Flows over a Stepped Spillway Using a Depth Camera
by Megh Raj K C, Brian M. Crookston and Daniel B. Bung
Sensors 2025, 25(8), 2525; https://doi.org/10.3390/s25082525 - 17 Apr 2025
Cited by 1 | Viewed by 1317
Abstract
Spatio-temporal measurements of turbulent free surface flows remain challenging with in situ point methods. This study explores the application of an inexpensive depth-sensing RGB-D camera, the Intel® RealSense™ D455, to capture detailed water surface measurements of a highly turbulent, self-aerated flow in the case of a stepped spillway. Ambient lighting conditions and various sensor settings, including configurations and parameters affecting data capture and quality, were assessed. A free surface profile was extracted from the 3D measurements and compared against phase detection conductivity probe (PDCP) and ultrasonic sensor (USS) measurements. Measurements in the non-aerated region were influenced by water transparency and a lack of detectable surface features, with flow depths consistently smaller than USS measurements (up to 32.5% less). Measurements in the clear water region also resulted in a “no data” region with holes in the depth map due to shiny reflections. In the aerated flow region, the camera effectively detected the dynamic water surface, with mean surface profiles close to characteristic depths measured with PDCP and within one standard deviation of the mean USS flow depths. The flow depths were within 10% of the USS depths and corresponded to depths with 80–90% air concentration levels obtained with the PDCP. Additionally, the depth camera successfully captured temporal fluctuations, allowing for the calculation of time-averaged entrapped air concentration profiles and dimensionless interface frequency distributions. This facilitated a direct comparison with PDCP and USS sensors, demonstrating that this camera sensor is a practical and cost-effective option for detecting free surfaces of high velocity, aerated, and dynamic flows in a stepped chute. Full article
(This article belongs to the Special Issue 3D Reconstruction with RGB-D Cameras and Multi-sensors)

17 pages, 3844 KB  
Article
Comprehensive Characterization (Chromatography, Spectroscopy, Isotopic, and Digital Color Image) of Tequila 100% Agave Cristalino as Evidence of the Preservation of the Characteristics of Its Aging Process
by Walter M. Warren-Vega, Rocío Fonseca-Aguiñaga, Arantza Villa-González, Camila S. Gómez-Navarro and Luis A. Romero-Cano
Beverages 2025, 11(2), 42; https://doi.org/10.3390/beverages11020042 - 20 Mar 2025
Cited by 1 | Viewed by 2652
Abstract
To obtain fundamental information on Tequila 100% agave Cristalino, commercial samples were characterized across its different classes. For this purpose, 12 samples were chosen, defined as: G1 (aged; n = 3, or extra-aged; n = 3) and G2 (aged-Cristalino; n = 3 or extra-aged-Cristalino; n = 3). Analytical characterization was performed on these beverages, consisting of isotope ratio mass spectrometry, gas and liquid chromatography, UV-Vis spectroscopy, and color using digital image processing. The results corroborate that the chromatographic characterization (mg/100 mL A.A.)—higher alcohols (299.53 ± 46.56), methanol (212.02 ± 32.28), esters (26.02 ± 4.60), aldehydes (8.93 ± 4.61), and furfural (1.02 ± 0.56)—and isotopic characterization—δ13CVPDB = −13.02 ± 0.35 ‰ and δ18OVSMOW = 21.31 ± 1.33 ‰—do not present statistically significant differences (p > 0.05) between groups. These techniques reinforce that the isotopic ratios indicate the ethanol of these alcoholic beverages comes from the Agave tequilana Weber blue variety and is not affected by the filtration process. Based on the UV-Vis analysis, I280 and I365 were obtained, which were related to the presence of polyphenols and flavonoids—expressed as mg quercetin equivalents/L—found only in group 1. Due to the presence of flavonoids in aged beverages, the oxidation process results in the formation of an amber color, which can be measured by an RGB color model; the analysis therefore shows a statistically significant difference (p < 0.05) between groups. It can be concluded that Tequila 100% agave Cristalino is an aged or extra-aged Tequila 100% agave without color, whose chromatographic and isotopic profile is not affected. Full article

18 pages, 6634 KB  
Article
Development and Evaluation of a Multiaxial Modular Ground Robot for Estimating Soybean Phenotypic Traits Using an RGB-Depth Sensor
by James Kemeshi, Young Chang, Pappu Kumar Yadav, Maitiniyazi Maimaitijiang and Graig Reicks
AgriEngineering 2025, 7(3), 76; https://doi.org/10.3390/agriengineering7030076 - 11 Mar 2025
Cited by 1 | Viewed by 2305
Abstract
Achieving global sustainable agriculture requires farmers worldwide to adopt smart agricultural technologies, such as autonomous ground robots. However, most ground robots are either task- or crop-specific and too expensive for small-scale farmers and smallholders. Therefore, there is a need for cost-effective robotic platforms that are modular by design and can be easily adapted to varying tasks and crops. This paper describes the hardware design of a unique, low-cost multiaxial modular agricultural robot (ModagRobot) and its field evaluation for soybean phenotyping. The ModagRobot's chassis was designed without any welded components, making it easy to adjust track width, height, ground clearance, and length. For this experiment, the ModagRobot was equipped with an RGB-Depth (RGB-D) sensor and adapted to navigate safely over soybean rows to collect RGB-D images for estimating soybean phenotypic traits. RGB images were processed using the Excess Green Index to estimate the percent canopy ground coverage area. Three-dimensional point clouds generated from the RGB-D images were used to estimate canopy height (CH) and the 3D Profile Index of sample plots using linear regression. Aboveground biomass (AGB) was estimated using the extracted phenotypic traits. Results showed an R2, RMSE, and RRMSE of 0.786, 0.0181 m, and 2.47%, respectively, between estimated and measured CH. AGB estimated using all extracted traits showed an R2, RMSE, and RRMSE of 0.59, 0.0742 kg/m2, and 8.05%, respectively, compared to the measured AGB. The results demonstrate the effectiveness of the ModagRobot for in-row crop phenotyping. Full article
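The Excess Green Index step described above (ExG = 2g − r − b on chromaticity-normalised channels, then thresholding to obtain percent canopy cover) can be sketched as follows. The 0.1 threshold and the synthetic plot image are assumed illustrative values, not the paper's calibration.

```python
import numpy as np

def percent_canopy_cover(rgb: np.ndarray, thresh: float = 0.1) -> float:
    """Percent of pixels classified as canopy via the Excess Green Index.

    Channels are chromaticity-normalised (each divided by R+G+B) before
    computing ExG = 2g - r - b; pixels above `thresh` count as canopy.
    """
    rgb = rgb.astype(float)
    total = rgb.sum(axis=-1) + 1e-9           # avoid division by zero
    r, g, b = (rgb[..., i] / total for i in range(3))
    exg = 2 * g - r - b                       # Excess Green Index
    return 100.0 * float((exg > thresh).mean())

# Synthetic plot image: left half vegetation-green, right half soil-brown.
img = np.zeros((4, 8, 3), dtype=np.uint8)
img[:, :4] = [60, 180, 60]    # green canopy
img[:, 4:] = [120, 90, 60]    # bare soil
print(percent_canopy_cover(img))  # 50.0
```

On real field imagery the threshold is often chosen automatically (e.g., Otsu's method) rather than fixed, since illumination and soil colour vary between plots.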
20 pages, 3883 KB  
Article
Smartphone Biosensors for Non-Invasive Drug Monitoring in Saliva
by Atheer Awad, Lucía Rodríguez-Pombo, Paula Esteiro Simón, André Campos Álvarez, Carmen Alvarez-Lorenzo, Abdul W. Basit and Alvaro Goyanes
Biosensors 2025, 15(3), 163; https://doi.org/10.3390/bios15030163 - 4 Mar 2025
Cited by 5 | Viewed by 4203
Abstract
In recent years, biosensors have emerged as a promising solution for therapeutic drug monitoring (TDM), offering automated systems for rapid chemical analyses with minimal pre-treatment requirements. The use of saliva as a biological sample matrix offers distinct advantages, including non-invasiveness, cost-effectiveness, and reduced susceptibility to fluid intake fluctuations compared to alternative methods. The aim of this study was to explore and compare two types of low-cost biosensors, namely, the colourimetric and electrochemical methodologies, for quantifying paracetamol (acetaminophen) concentrations within artificial saliva using the MediMeter app, which has been specifically developed for this application. The research encompassed extensive optimisations and methodological refinements to ensure the results were robust and reliable. Material selection and parameter adjustments minimised external interferences, enhancing measurement accuracy. Both the colourimetric and electrochemical methods successfully determined paracetamol concentrations within the therapeutic range of 0.01–0.05 mg/mL (R2 = 0.939 for colourimetric and R2 = 0.988 for electrochemical). While both techniques offered different advantages, the electrochemical approach showed better precision (i.e., standard deviation of response = 0.1041 mg/mL) and speed (i.e., ~1 min). These findings highlight the potential use of biosensors in drug concentration determination, with the choice of technology dependent on specific application requirements. The development of an affordable, non-invasive and rapid biosensing system holds promise for remote drug concentration monitoring, reducing the need for invasive approaches and hospital visits. Future research could extend these methodologies to practical clinical applications, encouraging the use of TDM for enhanced precision, accessibility, and real-time patient-centric care. Full article
(This article belongs to the Section Biosensors and Healthcare)