
Search Results (4,809)

Search Parameters:
Keywords = point based monitoring

14 pages, 617 KB  
Article
Integrating ESP32-Based IoT Architectures and Cloud Visualization to Foster Data Literacy in Early Engineering Education
by Jael Zambrano-Mieles, Miguel Tupac-Yupanqui, Salutar Mari-Loardo and Cristian Vidal-Silva
Computers 2026, 15(1), 51; https://doi.org/10.3390/computers15010051 - 13 Jan 2026
Abstract
This study presents the design and implementation of a full-stack IoT ecosystem based on ESP32 microcontrollers and web-based visualization dashboards to support scientific reasoning in first-year engineering students. The proposed architecture integrates a four-layer model—perception, network, service, and application—enabling students to deploy real-time environmental monitoring systems for agriculture and beekeeping. Through a sixteen-week Project-Based Learning (PBL) intervention with 91 participants, we evaluated how this technological stack influences technical proficiency. Results indicate that the transition from local code execution to cloud-based telemetry increased perceived learning confidence from μ=3.9 (Challenge phase) to μ=4.6 (Reflection phase) on a 5-point scale. Furthermore, 96% of students identified the visualization dashboards as essential Human–Computer Interfaces (HCI) for debugging, effectively bridging the gap between raw sensor data and evidence-based argumentation. These findings demonstrate that integrating open-source IoT architectures provides a scalable mechanism to cultivate data literacy in early engineering education.
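The four-layer architecture summarized above moves readings from the perception layer into a cloud service. As a minimal sketch of that hand-off (all field names here are hypothetical, not taken from the paper), a sensor reading can be packaged as a JSON telemetry payload:

```python
import json
import time

def telemetry_message(device_id, sensor, value, unit):
    """Package one perception-layer reading into a JSON payload
    for the service layer. Field names are illustrative only."""
    return json.dumps({
        "device": device_id,
        "sensor": sensor,
        "value": value,
        "unit": unit,
        "ts": int(time.time()),  # Unix timestamp for dashboard plotting
    })

# Example: a hive-temperature reading from a beekeeping deployment
msg = telemetry_message("esp32-hive-01", "temperature", 34.2, "C")
```

A dashboard backend would parse such messages and append them to a time series per device.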

19 pages, 6871 KB  
Article
A BIM-Derived Synthetic Point Cloud (SPC) Dataset for Construction Scene Component Segmentation
by Yiquan Zou, Tianxiang Liang, Wenxuan Chen, Zhixiang Ren and Yuhan Wen
Data 2026, 11(1), 16; https://doi.org/10.3390/data11010016 - 12 Jan 2026
Abstract
In intelligent construction and BIM–Reality integration applications, high-quality, large-scale construction scene point cloud data with component-level semantic annotations constitute a fundamental basis for three-dimensional semantic understanding and automated analysis. However, point clouds acquired from real construction sites commonly suffer from high labeling costs, severe occlusion, and unstable data distributions. Existing public datasets remain insufficient in terms of scale, component coverage, and annotation consistency, limiting their suitability for data-driven approaches. To address these challenges, this paper constructs and releases a BIM-derived synthetic construction scene point cloud dataset, termed the Synthetic Point Cloud (SPC), targeting component-level point cloud semantic segmentation and related research tasks. The dataset is generated from publicly available BIM models through physics-based virtual LiDAR scanning, producing multi-view and multi-density three-dimensional point clouds while automatically inheriting component-level semantic labels from BIM without any manual intervention. The SPC dataset comprises 132 virtual scanning scenes, with an overall scale of approximately 8.75×10⁹ points, covering typical construction components such as walls, columns, beams, and slabs. By systematically configuring scanning viewpoints, sampling densities, and occlusion conditions, the dataset introduces rich geometric and spatial distribution diversity. This paper presents a comprehensive description of the SPC data generation pipeline, semantic mapping strategy, virtual scanning configurations, and data organization scheme, followed by statistical analysis and technical validation in terms of point cloud scale evolution, spatial coverage characteristics, and component-wise semantic distributions. Furthermore, baseline experiments on component-level point cloud semantic segmentation are provided. The results demonstrate that models trained solely on the SPC dataset can achieve stable and engineering-meaningful component-level predictions on real construction point clouds, validating the dataset’s usability in virtual-to-real research scenarios. As a scalable and reproducible BIM-derived point cloud resource, the SPC dataset offers a unified data foundation and experimental support for research on construction scene point cloud semantic segmentation, virtual-to-real transfer learning, scan-to-BIM updating, and intelligent construction monitoring.

36 pages, 741 KB  
Review
Artificial Intelligence Algorithms for Insulin Management and Hypoglycemia Prevention in Hospitalized Patients—A Scoping Review
by Eileen R. Faulds, Melanie Natasha Rayan, Matthew Mlachak, Kathleen M. Dungan, Ted Allen and Emily Patterson
Diabetology 2026, 7(1), 19; https://doi.org/10.3390/diabetology7010019 - 12 Jan 2026
Abstract
Background: Dysglycemia remains a persistent challenge in hospital care. Despite advances in outpatient diabetes technology, inpatient insulin management largely depends on intermittent point-of-care glucose testing, static insulin dosing protocols and rule-based decision support systems. Artificial intelligence (AI) offers potential to transform this care through predictive modeling and adaptive insulin control. Methods: Following Preferred Reporting Items for Systematic Reviews and Meta-Analyses Extension for Scoping Reviews (PRISMA-ScR) guidelines, a scoping review was conducted to characterize AI algorithms for insulin dosing and glycemic management in hospitalized patients. An interdisciplinary team of clinicians and engineers reached consensus on AI definitions to ensure inclusion of machine learning, deep learning, and reinforcement learning approaches. A librarian-assisted search of five databases identified 13,768 citations. After screening and consensus review, 26 studies (2006–2025) met the inclusion criteria. Data were extracted on study design, population, AI methods, data inputs, outcomes, and implementation findings. Results: Studies included ICU (N = 13) and general ward (N = 9) patients, including patients with diabetes and stress hyperglycemia. Early randomized trials of model predictive control demonstrated improved mean glucose (5.7–6.2 mmol/L) and time in target range compared with standard care. Later machine learning models achieved strong predictive accuracy (AUROC 0.80–0.96) for glucose forecasting or hypoglycemia risk. Most algorithms used data from Medical Information Mart for Intensive Care (MIMIC) databases; few incorporated continuous glucose monitoring (CGM). Implementation and usability outcomes were seldom reported. Conclusions: Hospital AI-driven models showed strong algorithmic performance but limited clinical validation. Future co-designed, interpretable systems integrating CGM and real-time workflow testing are essential to advance safe, adaptive insulin management in hospital settings.

34 pages, 4355 KB  
Review
Thin-Film Sensors for Industry 4.0: Photonic, Functional, and Hybrid Photonic-Functional Approaches to Industrial Monitoring
by Muhammad A. Butt
Coatings 2026, 16(1), 93; https://doi.org/10.3390/coatings16010093 - 12 Jan 2026
Abstract
The transition toward Industry 4.0 requires advanced sensing platforms capable of delivering real-time, high-fidelity data under extreme industrial conditions. Thin-film sensors, leveraging both photonic and functional approaches, are emerging as key enablers of this transformation. By exploiting optical phenomena such as Fabry–Pérot interference, guided-mode resonance, plasmonics, and photonic crystal effects, thin-film photonic devices provide highly sensitive, electromagnetic interference-immune, and remotely interrogated solutions for monitoring temperature, strain, and chemical environments. Complementarily, functional thin films including oxide-based chemiresistors, nanoparticle coatings, and flexible electronic skins extend sensing capabilities to diverse industrial contexts, from hazardous gas detection to structural health monitoring. This review surveys the fundamental optical principles, material platforms, and deposition strategies that underpin thin-film sensors, emphasizing advances in nanostructured oxides, 2D materials, hybrid perovskites, and additive manufacturing methods. Application-focused sections highlight their deployment in temperature and stress monitoring, chemical leakage detection, and industrial safety. Integration into Internet of Things (IoT) networks, cyber-physical systems, and photonic integrated circuits is examined, alongside challenges related to durability, reproducibility, and packaging. Future directions point to AI-driven signal processing, flexible and printable architectures, and autonomous self-calibration. Together, these developments position thin-film sensors as foundational technologies for intelligent, resilient, and adaptive manufacturing in Industry 4.0.
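The Fabry–Pérot interference mentioned in this abstract can be made concrete with a short sketch. Assuming the textbook resonance condition 2nL = mλ for a film of refractive index n and thickness L, and ignoring interface phase shifts (an idealization, not the review's model), the resonant wavelengths falling in a sensing band are:

```python
def fp_resonance_wavelengths(n, thickness_nm, lam_min=480.0, lam_max=900.0):
    """Resonant wavelengths (nm) of an idealized Fabry-Perot thin film,
    using 2*n*L = m*lambda and ignoring reflection phase shifts."""
    opl = 2.0 * n * thickness_nm  # round-trip optical path length
    resonances = []
    m = 1
    while opl / m >= lam_min:     # stop once resonances fall below the band
        lam = opl / m
        if lam <= lam_max:
            resonances.append(round(lam, 1))
        m += 1
    return resonances
```

A shift in thickness or index (e.g., from strain or gas adsorption) shifts these resonances, which is the basic transduction mechanism such sensors interrogate.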
(This article belongs to the Section Thin Films)

25 pages, 3861 KB  
Article
Semantically Guided 3D Reconstruction and Body Weight Estimation Method for Dairy Cows
by Jinshuo Zhang, Xinzhong Wang, Hewei Meng, Junzhu Huang, Xinran Zhang, Kuizhou Zhou, Yaping Li and Huijie Peng
Agriculture 2026, 16(2), 182; https://doi.org/10.3390/agriculture16020182 - 11 Jan 2026
Abstract
To address the low efficiency and stress-inducing nature of traditional manual weighing for dairy cows, this study proposes a semantically guided 3D reconstruction and body weight estimation method for dairy cows. First, a dual-viewpoint Kinect V2 camera synchronous acquisition system captures top-view and side-view point cloud data from 150 calves and 150 lactating cows. Subsequently, the CSS-PointNet++ network model was designed. Building upon PointNet++, it incorporates Convolutional Block Attention Module (CBAM) and Attention-Weighted Hybrid Pooling Module (AHPM) to achieve precise semantic segmentation of the torso and limbs in the side-view point cloud. Based on this, point cloud registration algorithms were applied to align the dual-view point clouds. Missing parts were mirrored and completed using semantic information to achieve 3D reconstruction. Finally, a body weight estimation model was established based on volume and surface area through surface reconstruction. Experiments demonstrate that CSS-PointNet++ achieves an Overall Accuracy (OA) of 98.35% and a mean Intersection over Union (mIoU) of 95.61% in semantic segmentation tasks, representing improvements of 2.2% and 4.65% over PointNet++, respectively. In the weight estimation phase, the BP neural network (BPNN) delivers optimal performance: For the calf group, the Mean Absolute Error (MAE) was 1.8409 kg, Root Mean Square Error (RMSE) was 2.4895 kg, Mean Relative Error (MRE) was 1.49%, and Coefficient of Determination (R²) was 0.9204; for the lactating cows group, MAE was 12.5784 kg, RMSE was 14.4537 kg, MRE was 1.75%, and R² was 0.8628. This method enables 3D reconstruction and body weight estimation of cows during walking, providing an efficient and precise body weight monitoring solution for precision farming.
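The error metrics reported here (MAE, RMSE, MRE, R²) follow standard definitions; a minimal stdlib-only sketch of how they could be computed from paired true and predicted weights (not the authors' code):

```python
import math

def regression_metrics(y_true, y_pred):
    """Return (MAE, RMSE, MRE %, R^2) for paired true/predicted values."""
    n = len(y_true)
    errors = [p - t for t, p in zip(y_true, y_pred)]
    mae = sum(abs(e) for e in errors) / n
    rmse = math.sqrt(sum(e * e for e in errors) / n)
    # Mean relative error, expressed as a percentage of the true value
    mre = 100.0 * sum(abs(e) / t for t, e in zip(y_true, errors)) / n
    mean_t = sum(y_true) / n
    ss_res = sum(e * e for e in errors)            # residual sum of squares
    ss_tot = sum((t - mean_t) ** 2 for t in y_true)  # total sum of squares
    r2 = 1.0 - ss_res / ss_tot
    return mae, rmse, mre, r2
```

For example, true calf weights [100, 120, 140] kg against predictions [102, 118, 141] kg give MAE ≈ 1.67 kg and R² ≈ 0.989.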
(This article belongs to the Section Farm Animal Production)

25 pages, 1726 KB  
Article
Spatial Analysis of the Distribution of Air Pollutants Along a Selected Section of a Transport Corridor: Comparison of the Results with Stationary Measurements of the European Air Quality Index
by Agata Jaroń, Anna Borucka and Paulina Jaczewska
Appl. Sci. 2026, 16(2), 736; https://doi.org/10.3390/app16020736 - 10 Jan 2026
Abstract
Civilisational progress contributes to an increase in the number of vehicles on the road, thereby intensifying air pollutant emissions and accelerating the degradation of the natural environment. Effective protection of urban areas against air pollution enhances safeguarding against numerous allergies and diseases resulting from unplanned and unintended absorption of harmful pollutants into the human body. Sustainable urban planning requires the collaboration of multiple scientific disciplines. In this context, measurement becomes crucial, as it reveals the spatial scale of the problem and identifies existing disparities. This study uses an integrated approach of standard measurement methods and statistical and geostatistical data analysis, identifying PM1 fractions that are not included in EU air quality monitoring. The hypothesis explores how surface-based results correspond to point-based results from national air quality monitoring. The presented implications demonstrate similarities and differences between the studied measurement methods and the spatial distributions of PM10, PM2.5, and PM1 dust.
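Geostatistical mapping of the kind this abstract describes typically starts by interpolating scattered point measurements onto a surface. The paper's exact method is not stated here; as one common baseline, inverse-distance weighting (IDW) estimates a concentration at an unmeasured location:

```python
def idw(points, query, power=2.0):
    """Inverse-distance-weighted estimate at `query` from scattered
    (x, y, value) measurements. A generic baseline, not the paper's
    specific geostatistical method."""
    num = den = 0.0
    for x, y, v in points:
        d2 = (x - query[0]) ** 2 + (y - query[1]) ** 2
        if d2 == 0.0:
            return float(v)  # query coincides with a measurement point
        w = d2 ** (-power / 2.0)  # weight falls off as distance^-power
        num += w * v
        den += w
    return num / den
```

Two equidistant stations with readings 10 and 20 µg/m³ yield an estimate of 15 at the midpoint, illustrating how the surface smoothly blends point-based measurements.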
24 pages, 7954 KB  
Article
Machine Learning-Based Prediction of Maximum Stress in Observation Windows of HOV
by Dewei Li, Zhijie Wang, Zhongjun Ding and Xi An
J. Mar. Sci. Eng. 2026, 14(2), 151; https://doi.org/10.3390/jmse14020151 - 10 Jan 2026
Abstract
With advances in deep-sea exploration technologies, utilizing human-occupied vehicles (HOV) in marine science has become widespread. The observation window is a critical component, as its structural strength affects submersible safety and performance. Under load, it experiences stress concentration, deformation, cracking, and catastrophic failure. The observation window will experience different stress distributions in high-pressure environments. The maximum principal stress is the most significant phenomenon that determines the most likely failure of materials in windows of HOV. This study proposes an artificial intelligence-based method to predict the maximum principal stress of observation windows in HOV for rapid safety assessment. Samples were designed, while strain data with corresponding maximum principal stress values were collected under different loading conditions. Three machine learning algorithms—transformer–CNN-BiLSTM, CNN-LSTM, and Gaussian process regression (GP)—were employed for analysis. Results show that the transformer–CNN-BiLSTM model achieved the highest accuracy, particularly at the point exhibiting the maximum principal stress value. Evaluation metrics, including mean squared error (MSE), mean absolute error (MAE), and root squared residual (RSR), confirmed its superior performance. The proposed hybrid model incorporates a positional encoding layer to enrich input data with locational information and combines the strengths of bidirectional long short-term memory (LSTM), one-dimensional CNN, and transformer encoders. This approach effectively captures local and global stress features, offering a reliable predictive tool for health monitoring of submersible observation windows.
(This article belongs to the Section Ocean Engineering)

18 pages, 10127 KB  
Article
A Monitoring Method for Steep Slopes in Mountainous Canyon Regions Using Multi-Temporal UAV POT Technology Assisted by TLS
by Qing-Wen Wen, Zhi-Yu Li, Zhong-Hua Jiang, Hao Wu, Jia-Wen Zhou, Nan Jiang, Yu-Xiang Hu and Hai-Bo Li
Drones 2026, 10(1), 50; https://doi.org/10.3390/drones10010050 - 10 Jan 2026
Abstract
Monitoring steep slopes in mountainous canyon areas has always been a challenging problem, especially during the construction of large hydropower projects. Effective monitoring is crucial for construction safety and operational security. However, under complex terrain conditions, existing monitoring methods have significant limitations and cannot comprehensively and accurately cover steep slopes. To address the above challenges, this study proposes a multi-temporal UAV-based photogrammetric offset tracking (POT) monitoring method assisted by terrestrial laser scanning (TLS), which is primarily applicable to rocky and texture-rich steep slopes. This method utilizes TLS point cloud data to provide supplementary ground control points (TLS-GCPs) for UAV image modeling, effectively overcoming the difficulty of deploying conventional RTK ground control points (RTK-GCPs) on high and steep slopes, thereby significantly improving the accuracy of UAV-based Structure-from-Motion (SfM) models. In a case study at a hydropower station, we employed TLS-assisted UAV modeling to produce high-precision UAV images. Using POT technology, we successfully identified signs of slope deformation between January 2024 and December 2024. Comparative experiments with traditional algorithms demonstrated that in areas where RTK-GCPs cannot be deployed, this method greatly enhances UAV modeling accuracy, fully meeting the monitoring requirements for steep slopes in complex terrains.

15 pages, 1846 KB  
Article
A Temperature-Based Statistical Model for Real-Time Thermal Deformation Prediction in End-Milling of Complex Workpiece
by Mengmeng Yang, Yize Yang, Fangyuan Zhang, Tong Li, Xiyuan Qu, Wei Wang, Ren Zhang, Dezhi Ren, Feng Zhang and Koji Teramoto
Machines 2026, 14(1), 85; https://doi.org/10.3390/machines14010085 - 9 Jan 2026
Abstract
Thermally induced deformation is a major source of dimensional error in end-milling, especially under high-speed or high-load conditions. Direct measurement of workpiece deformation during machining is impractical, while temperature signals can be obtained with good stability using embedded thermocouples. This study proposes an indirect method for predicting milling-induced thermal deformation based on temperature measurements. A three-dimensional thermo-mechanical finite element model is established to simulate the transient temperature field and corresponding deformation of the workpiece during milling. The numerical model is validated using cutting experiments performed under the same boundary conditions and machining parameters. Based on the validated results, the relationship between deformation at critical machining locations and temperature responses at candidate monitoring points is analyzed. To improve applicability to complex workpieces, a statistical prediction model is developed. Temperature monitoring points are optimized, and significant temperature–deformation correlations are identified using multiple linear regression combined with information-criterion-based model selection. The final model is constructed using simulation-derived datasets and provides stable deformation prediction over the entire milling process.
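The regression step this abstract describes maps monitoring-point temperatures to deformation. As a reduced, single-predictor sketch of ordinary least squares (the authors use multiple regression with model selection; this is only an illustration of the fitting principle):

```python
def fit_linear(temps, deformations):
    """Ordinary least squares for deformation = a + b * temperature.
    A one-predictor stand-in for the paper's multiple regression."""
    n = len(temps)
    mx = sum(temps) / n
    my = sum(deformations) / n
    # Centered cross- and self-products give the slope directly
    sxy = sum((x - mx) * (y - my) for x, y in zip(temps, deformations))
    sxx = sum((x - mx) ** 2 for x in temps)
    b = sxy / sxx
    a = my - b * mx
    return a, b
```

Extending this to several monitoring points turns `sxx` into a matrix and `b` into a coefficient vector, which is where information criteria (e.g., AIC) can rank candidate predictor subsets.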
(This article belongs to the Section Advanced Manufacturing)
16 pages, 260 KB  
Commentary
COMPASS Guidelines for Conducting Welfare-Focused Research into Behaviour Modification of Animals
by Paul D. McGreevy, David J. Mellor, Rafael Freire, Kate Fenner, Katrina Merkies, Amanda Warren-Smith, Mette Uldahl, Melissa Starling, Amy Lykins, Andrew McLean, Orla Doherty, Ella Bradshaw-Wiley, Rimini Quinn, Cristina L. Wilkins, Janne Winther Christensen, Bidda Jones, Lisa Ashton, Barbara Padalino, Claire O’ Brien, Caleigh Copelin, Colleen Brady and Cathrynne Henshall
Animals 2026, 16(2), 206; https://doi.org/10.3390/ani16020206 - 9 Jan 2026
Abstract
Researchers are increasingly engaged in studies to determine and correct negative welfare consequences of animal husbandry and behaviour modification procedures, not least in response to industries’ growing need to maintain their social licence through demonstrable welfare standards that address public expectations. To ensure that welfare recommendations are scientifically credible, the studies must be rigorously designed and conducted, and the data produced must be interpreted with full regard to conceptual, methodological, and experimental design limitations. This commentary provides guidance on these matters. In addition to, and complementary with, the ARRIVE guidelines that deal with animal studies in general, there is a need for additional specific advice on the design of studies directed at procedures that alter behaviour, whether through training, handling, or restraint. The COMPASS Guidelines offer clear direction for conducting welfare-focused behaviour modification research. They stand for the following: Controls and Calibration, emphasising rigorous design, baseline measures, equipment calibration, and replicability; Objectivity and Open data, ensuring transparency, validated tools, and data accessibility; Motivation and Methods, with a focus on learning theory, behavioural science, and evidence-based application of positive reinforcers and aversive stimuli; Precautions and Protocols, embedding the precautionary principle, minimising welfare harms, listing stop criteria, and using real-time monitoring; Animal-centred Assessment, with multimodal welfare evaluation, using physiological, behavioural, functional, and objective indicators; Study ethics and Standards, noting the 3Rs (replacement, reduction, and refinement), welfare endpoints, long-term effects, industry independence, and risk–benefit analysis; and Species-relevance and Scientific rigour, facilitating cross-species applicability with real-world relevance and robust methodology. To describe these guidelines, the current article is organised into seven major sections that outline detailed, point-by-point considerations for ethical and scientifically rigorous design. It concludes with a call for continuous improvement and collaboration. A major purpose is to assist animal ethics committees when considering the design of experiments. It is also anticipated that these Guidelines will assist reviewers and editorial teams in triaging manuscripts that report studies in this context.
(This article belongs to the Section Companion Animals)
39 pages, 10760 KB  
Article
Automated Pollen Classification via Subinstance Recognition: A Comprehensive Comparison of Classical and Deep Learning Architectures
by Karol Struniawski, Aleksandra Machlanska, Agnieszka Marasek-Ciolakowska and Aleksandra Konopka
Appl. Sci. 2026, 16(2), 720; https://doi.org/10.3390/app16020720 - 9 Jan 2026
Abstract
Pollen identification is critical for melissopalynology (honey authentication), ecological monitoring, and allergen tracking, yet manual microscopic analysis remains labor-intensive, subjective, and error-prone when multiple grains overlap in realistic samples. Existing automated approaches often fail to address multi-grain scenarios or lack systematic comparison across classical and deep learning paradigms, limiting their practical deployment. This study proposes a subinstance-based classification framework combining YOLOv12n object detection for grain isolation, independent classification via classical machine learning (ML), convolutional neural networks (CNNs), or Vision Transformers (ViTs), and majority voting aggregation. Five classical classifiers with systematic feature selection, three CNN architectures (ResNet50, EfficientNet-B0, ConvNeXt-Tiny), and three ViT variants (ViT-B/16, ViT-B/32, ViT-L/16) are evaluated on four datasets (full images vs. isolated grains; raw vs. CLAHE-preprocessed) for four berry pollen species (Ribes nigrum, Ribes uva-crispa, Lonicera caerulea, and Amelanchier alnifolia). Stratified image-level splits ensure no data leakage, and explainable AI techniques (SHAP, Grad-CAM++, and gradient saliency) validate biological interpretability across all paradigms. Results demonstrate that grain isolation substantially improves classical ML performance (F1 from 0.83–0.91 on full images to 0.96–0.99 on isolated grains, +8–13 percentage points), while deep learning excels on both levels (CNNs: F1 = 1.000 on full images with CLAHE; ViTs: F1 = 0.99). At the instance level, all paradigms converge to near-perfect discrimination (F1 ≥ 0.96), indicating sufficient capture of morphological information. Majority voting aggregation provides +3–5% gains for classical methods but only +0.3–4.8% for deep models already near saturation. Explainable AI analysis confirms that models rely on biologically meaningful cues: blue channel moments and texture features for classical ML (SHAP), grain boundaries and exine ornamentation for CNNs (Grad-CAM++), and distributed attention across grain structures for ViTs (gradient saliency). Qualitative validation on 211 mixed-pollen images confirms robust generalization to realistic multi-species samples. The proposed framework (YOLOv12n + SVC/ResNet50 + majority voting) is practical for deployment in honey authentication, ecological surveys, and fine-grained biological image analysis.
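The majority-voting aggregation described in this abstract maps per-grain predictions to a single image-level label. A minimal sketch of that step (the species names are used here only as example data; the paper's aggregation may include tie-breaking details not stated in the abstract):

```python
from collections import Counter

def image_label(grain_predictions):
    """Aggregate per-grain class predictions into one image-level
    label by majority vote."""
    counts = Counter(grain_predictions)
    # most_common(1) returns the label with the highest count
    return counts.most_common(1)[0][0]
```

For an image where the detector isolated three grains classified as two Ribes nigrum and one Lonicera caerulea, the image-level label is Ribes nigrum.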
(This article belongs to the Special Issue Latest Research on Computer Vision and Image Processing)

15 pages, 10595 KB  
Article
Light Sources in Hyperspectral Imaging Simultaneously Influence Object Detection Performance and Vase Life of Cut Roses
by Yong-Tae Kim, Ji Yeong Ham and Byung-Chun In
Plants 2026, 15(2), 215; https://doi.org/10.3390/plants15020215 - 9 Jan 2026
Abstract
Hyperspectral imaging (HSI) is a noncontact camera-based technique that enables deep learning models to learn various plant conditions by detecting light reflectance under illumination. In this study, we investigated the effects of four light sources—halogen (HAL), incandescent (INC), fluorescent (FLU), and light-emitting diodes [...] Read more.
Hyperspectral imaging (HSI) is a noncontact camera-based technique that enables deep learning models to learn various plant conditions by detecting light reflectance under illumination. In this study, we investigated the effects of four light sources—halogen (HAL), incandescent (INC), fluorescent (FLU), and light-emitting diodes (LED)—on the quality of spectral images and the vase life (VL) of cut roses, which are vulnerable to abiotic stresses. Cut roses ‘All For Love’ and ‘White Beauty’ were used to compare cultivar-specific visible reflectance characteristics associated with contrasting petal pigmentation. HSI was performed at four time points, yielding 640 images per light source from 40 cut roses. The results revealed that the light source strongly affected both the image quality (mAP@0.5 60–80%) and VL (0–3 d) of cut roses. The HAL lamp produced high-quality spectral images across wavelengths (WL) ranging from 480 to 900 nm and yielded the highest object detection performance (ODP), reaching mAP@0.5 of 85% in ‘All For Love’ and 83% in ‘White Beauty’ with the YOLOv11x models. However, it increased petal temperature by 2.7–3 °C, thereby stimulating leaf transpiration and consequently shortening the VL of the flowers by 1–2.5 d. In contrast, INC produced unclear images with low spectral signals throughout the WL and consequently resulted in lower ODP, with mAP@0.5 of 74% and 69% in ‘All For Love’ and ‘White Beauty’, respectively. The INC only slightly increased petal temperature (1.2–1.3 °C) and shortened the VL by 1 d in the both cultivars. Although FLU and LED had only minor effects on petal temperature and VL, these illuminations generated transient spectral peaks in the WL range of 480–620 nm, resulting in decreased ODP (mAP@0.5 60–75%). Our results revealed that HAL provided reliable, high-quality spectral image data and high object detection accuracy, but simultaneously had negative effects on flower quality. 
Our findings suggest an alternative two-phase approach for illumination applications that uses HAL during the initial exploration of spectra corresponding to specific symptoms of interest, followed by LED for routine plant monitoring. Optimizing illumination in HSI will improve the accuracy of deep learning-based prediction and thereby contribute to the development of an automated quality sorting system that is urgently required in the cut flower industry. Full article
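The mAP@0.5 figures quoted above score a detection as correct when a predicted box overlaps an unmatched ground-truth box with IoU of at least 0.5. A minimal single-class sketch of that metric (illustrative only: the function names and the all-point precision-recall integration are our own simplification, not the paper's evaluation code):

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def ap_at_50(preds, gts):
    """AP@0.5 for one class: preds are (score, box) pairs, gts are boxes.

    Predictions are taken in descending score order; each is a true positive
    if it matches a not-yet-matched ground-truth box with IoU >= 0.5.
    """
    preds = sorted(preds, key=lambda p: -p[0])
    matched, tps = set(), []
    for score, box in preds:
        best, best_i = 0.0, None
        for i, gt in enumerate(gts):
            if i in matched:
                continue
            v = iou(box, gt)
            if v > best:
                best, best_i = v, i
        if best >= 0.5 and best_i is not None:
            matched.add(best_i)
            tps.append(1)
        else:
            tps.append(0)
    # Accumulate area under the precision-recall curve (all-point form).
    ap, tp_cum, prev_recall = 0.0, 0, 0.0
    for k, tp in enumerate(tps, start=1):
        tp_cum += tp
        recall = tp_cum / len(gts)
        precision = tp_cum / k
        ap += (recall - prev_recall) * precision
        prev_recall = recall
    return ap
```

Averaging `ap_at_50` over all classes would give the reported mAP@0.5; YOLO evaluation tooling additionally handles per-image batching and confidence sweeps, which this sketch omits.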
(This article belongs to the Special Issue Application of Optical and Imaging Systems to Plants)
38 pages, 1376 KB  
Review
Risk Assessment of Chemical Mixtures in Foods: A Comprehensive Methodological and Regulatory Review
by Rosana González Combarros, Mariano González-García, Gerardo David Blanco-Díaz, Kharla Segovia Bravo, José Luis Reino Moya and José Ignacio López-Sánchez
Foods 2026, 15(2), 244; https://doi.org/10.3390/foods15020244 - 9 Jan 2026
Abstract
Over the last 15 years, mixture risk assessment for food xenobiotics has evolved from conceptual discussions and simple screening tools, such as the Hazard Index (HI), towards operational, component-based and probabilistic frameworks embedded in major food-safety institutions. This review synthesizes methodological and regulatory advances in cumulative risk assessment for dietary “cocktails” of pesticides, contaminants and other xenobiotics, with a specific focus on food-relevant exposure scenarios. At the toxicological level, the field is now anchored in concentration/dose addition as the default model for similarly acting chemicals, supported by extensive experimental evidence that most environmental mixtures behave approximately dose-additively at low effect levels. Building on this paradigm, a portfolio of quantitative metrics has been developed to operationalize component-based mixture assessment: HI as a conservative screening anchor; Relative Potency Factors (RPF) and Toxic Equivalents (TEQ) to express doses within cumulative assessment groups; the Maximum Cumulative Ratio (MCR) to diagnose whether risk is dominated by one or several components; and the combined Margin of Exposure (MOET) as a point-of-departure-based integrator that avoids compounding uncertainty factors. Regulatory frameworks developed by EFSA, the U.S. EPA and FAO/WHO converge on tiered assessment schemes, biologically informed grouping of chemicals and dose addition as the default model for similarly acting substances, while differing in scope, data infrastructure and legal embedding. Implementation in food safety critically depends on robust exposure data streams. Total Diet Studies provide population-level, “as eaten” exposure estimates through harmonized food-list construction, home-style preparation and composite sampling, and are increasingly combined with conventional monitoring. 
In parallel, human biomonitoring quantifies internal exposure to diet-related xenobiotics such as PFAS, phthalates, bisphenols and mycotoxins, embedding mixture assessment within a dietary-exposome perspective. Across these developments, structured uncertainty analysis and decision-oriented communication have become indispensable. By integrating advances in toxicology, exposure science and regulatory practice, this review outlines a coherent, tiered and uncertainty-aware framework for assessing real-world dietary mixtures of xenobiotics, and identifies priorities for future work, including mechanistically informed and data-driven grouping strategies, expanded use of physiologically based pharmacokinetic modelling and refined mixture-sensitive indicators to support public-health decision-making. Full article
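The screening metrics named in the abstract reduce to simple arithmetic on per-component exposures. A hedged sketch (function names are illustrative; the definitions follow the standard conventions: HQ = exposure / health-based guidance value, HI = sum of HQs, MCR = HI / max HQ, and the combined MOET is the reciprocal sum of individual margins of exposure):

```python
def hazard_index(exposures, reference_values):
    """Hazard Index: sum of hazard quotients (exposure / guidance value)."""
    hqs = [e / r for e, r in zip(exposures, reference_values)]
    return sum(hqs), hqs

def max_cumulative_ratio(hqs):
    """MCR = HI / max(HQ): a value near 1 means one component dominates,
    a value near the number of components means risk is spread evenly."""
    return sum(hqs) / max(hqs)

def combined_moet(exposures, points_of_departure):
    """Combined Margin of Exposure: 1 / MOET = sum of 1 / MOE_i,
    where each MOE_i is a point of departure divided by exposure."""
    moes = [p / e for p, e in zip(points_of_departure, exposures)]
    return 1.0 / sum(1.0 / m for m in moes)
```

For example, exposures of 0.1 and 0.2 against guidance values of 1.0 give HI = 0.3 and MCR = 1.5, indicating that neither component fully dominates the mixture.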
(This article belongs to the Special Issue Research on Food Chemical Safety)
19 pages, 5060 KB  
Review
Electrochemical Biosensors for Exosome Detection: Current Advances, Challenges, and Prospects for Glaucoma Diagnosis
by María Moreno-Guzmán, Juan Pablo Hervás-Pérez, Laura Martín-Carbajo, María José Crespo Carballés and Marta Sánchez-Paniagua
Sensors 2026, 26(2), 433; https://doi.org/10.3390/s26020433 - 9 Jan 2026
Abstract
Glaucoma is a leading cause of irreversible blindness worldwide, with its asymptomatic progression highlighting the urgent need for early, minimally invasive biomarkers. Exosomes derived from the aqueous humor (AH) have emerged as promising candidates, as they carry proteins, nucleic acids, and lipids that reflect the physiological and pathological state of ocular tissues such as the trabecular meshwork and ciliary body. However, their low abundance, nanoscale size, and the limited volume of AH complicate detection and characterization. Conventional methods, including Western blotting, PCR or mass spectrometry, are labor-intensive, time-consuming, and often incompatible with microliter-scale samples. Electrochemical biosensors offer a highly sensitive, rapid, and low-volume alternative, enabling the detection of exosomal surface markers and internal cargos such as microRNAs, proteins, and lipids. Recent advances in nanomaterial-enhanced electrodes, microfluidic integration, enzyme- and nanozyme-mediated signal amplification, and ratiometric detection strategies have significantly improved sensitivity, selectivity, and multiplexing capabilities. While most studies focus on blood or serum, these platforms hold great potential for AH-derived exosome analysis, supporting early-stage glaucoma diagnosis, monitoring of disease progression, and evaluation of therapeutic responses. Continued development of miniaturized, point-of-care electrochemical biosensors could facilitate clinically viable, noninvasive exosome-based diagnostics for glaucoma. Full article
(This article belongs to the Special Issue Feature Review Papers in Biosensors Section 2025)
18 pages, 5554 KB  
Article
The Assimilation of CFOSAT Wave Heights Using Statistical Background Errors
by Leqiang Sun, Natacha Bernier, Benoit Pouliot, Patrick Timko and Lotfi Aouf
Remote Sens. 2026, 18(2), 217; https://doi.org/10.3390/rs18020217 - 9 Jan 2026
Abstract
This paper discusses the assimilation of significant wave height (Hs) observations from the China France Oceanography SATellite (CFOSAT) into the Global Deterministic Wave Prediction System developed by Environment and Climate Change Canada. We focus on the quantification of background errors in an effort to address the conventional, simplified, homogeneous assumptions made in previous studies using Optimal Interpolation (OI) to generate the Hs analysis. A map of Best Correlation Length, L, is generated to account for the inhomogeneity in the wave field. This map was calculated from pairs of Hs forecasts at grid points shifted in space and time, from which a look-up table is derived and used to infer the spatial extent of correlations within the wave field. The wave spectra are then updated from the Hs analysis using a frequency shift scheme. Results reveal significant spatial variance in the distribution of L, with notably high values located in the eastern tropical Pacific Ocean, a pattern that is expected given the persistent swells dominating this region. Experiments are conducted with spatially varying correlation lengths and with a fixed correlation length of eight grid points in the analysis step. Forecasts from these analyses are validated independently against the Global Telecommunications System buoys and the Copernicus Marine Environment Monitoring Service (CMEMS) altimetry wave height observations. It is found that the proposed statistical method generally outperforms the conventional method, with lower standard deviation and bias for both Hs and peak period forecasts. The conventional method makes more drastic corrections to Hs forecasts, but such corrections are not robust, particularly in regions with relatively short spatial correlation length scales.
Based on the analysis of the CMEMS comparison, the globally varying correlation length produces a positive increment in the Hs forecast, associated with forecast error reduction lasting up to 24 h into the forecast. Full article
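The OI analysis step described in the abstract can be illustrated with a toy 1-D update, assuming a Gaussian background-error correlation model with length scale L (a minimal sketch under stated assumptions; the operational system's covariance model, grid, and frequency-shift spectral update are not reproduced here, and all names are illustrative):

```python
import numpy as np

def oi_analysis(xb, grid, obs, obs_idx, sigma_b, sigma_o, L):
    """One optimal-interpolation update of a 1-D background field xb.

    B uses a Gaussian correlation model exp(-d^2 / (2 L^2)); observations
    are taken at grid indices obs_idx, so H simply selects those points.
    Analysis: xa = xb + B H^T (H B H^T + R)^-1 (y - H xb).
    """
    d = grid[:, None] - grid[None, :]
    B = sigma_b**2 * np.exp(-d**2 / (2.0 * L**2))   # background error covariance
    H = np.zeros((len(obs_idx), len(grid)))
    H[np.arange(len(obs_idx)), obs_idx] = 1.0       # observation operator
    R = sigma_o**2 * np.eye(len(obs_idx))           # observation error covariance
    K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)    # gain matrix
    return xb + K @ (obs - H @ xb)
```

A single accurate observation of 1.0 m placed mid-grid pulls the analysis toward 1.0 at that point, while the increment decays over roughly L grid units; with a spatially varying L map, the decay scale would differ from point to point, which is the inhomogeneity the paper targets.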
(This article belongs to the Section Ocean Remote Sensing)