Search Results (331)

Search Parameters:
Keywords = usability metrics

28 pages, 1016 KB  
Article
PA-FRIM: An Adaptive Hybrid FOX–RUN Framework with Adaptive Intensive Mutation for Multi-Metric Big Data Anonymization
by M. Faruk Şahin and Can Eyüpoğlu
Symmetry 2026, 18(5), 734; https://doi.org/10.3390/sym18050734 - 25 Apr 2026
Abstract
Background/Objectives: Privacy preservation in big data environments is an NP-hard optimization task that requires the satisfaction of k-anonymity and l-diversity constraints to ensure data utility. Methods: This study proposes a novel hybrid optimization approach, adaptive hybrid FOX–RUN Intensive Mutation (PA-FRIM), to address the privacy–utility trade-off in the anonymization process. The proposed approach integrates FOX-based global exploration with RUN-based local search using a hybrid adaptive control strategy and intensive mutation search to improve solution diversity in highly constrained solution spaces. Results: The experimental study on the Adult and Bank Marketing datasets shows that PA-FRIM exhibits stable convergence behavior compared to competing methods. The results indicate that full privacy is achieved on the Adult dataset with a violation value of 0.00, and information loss is minimized with an NIL measure of 0.5686. From the analytical utility perspective, PA-FRIM ensures data usability, even in the constrained region, achieving classification accuracies of 89.61% on the Bank Marketing dataset and 84.90% on the Adult dataset. Conclusions: By using a multi-metric evaluation strategy, PA-FRIM provides a robust optimization framework that eliminates privacy violations while maintaining high analytical performance. Full article
(This article belongs to the Special Issue Studies of Symmetry and Asymmetry in Big Data)
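The k-anonymity and l-diversity constraints at the heart of the PA-FRIM abstract can be illustrated with a minimal check: every quasi-identifier combination must be shared by at least k records, and each such equivalence class must contain at least l distinct sensitive values. This is a generic sketch, not the paper's code; the function name and toy records are illustrative.

```python
from collections import defaultdict

def satisfies_k_l(records, quasi_ids, sensitive, k, l):
    """Check k-anonymity and l-diversity for a list of dict records.

    k-anonymity: every quasi-identifier combination appears >= k times.
    l-diversity: each equivalence class has >= l distinct sensitive values.
    """
    classes = defaultdict(list)
    for rec in records:
        key = tuple(rec[q] for q in quasi_ids)
        classes[key].append(rec[sensitive])
    return all(len(vals) >= k and len(set(vals)) >= l
               for vals in classes.values())

# Toy example: generalized age bracket + zip prefix as quasi-identifiers.
data = [
    {"age": "30-39", "zip": "481**", "disease": "flu"},
    {"age": "30-39", "zip": "481**", "disease": "cold"},
    {"age": "40-49", "zip": "482**", "disease": "flu"},
    {"age": "40-49", "zip": "482**", "disease": "asthma"},
]
print(satisfies_k_l(data, ["age", "zip"], "disease", k=2, l=2))  # True
```

An anonymization optimizer like the one described would search generalizations of the quasi-identifiers until a check of this kind passes while information loss stays low.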
20 pages, 2912 KB  
Article
Leveraging Generative AI for High-Fidelity 360° Spatial Images: Methodological Validation for Use as Experimental Stimuli
by Yoojin Han and Joowon Jeong
Buildings 2026, 16(9), 1679; https://doi.org/10.3390/buildings16091679 - 24 Apr 2026
Abstract
Despite its efficiency, the structural integrity and geometric accuracy of artificial intelligence (AI)-generated imagery used in environmental psychology experiments have not been sufficiently validated. This study investigated the methodological validity and substitutability of generative AI-generated 360° images as experimental stimuli for indoor environmental research. Using a three-stage framework, we generated base panoramas with controlled structural parameters, integrated greenery via AI-based inpainting, and conducted multifaceted validation through objective quality metrics and expert assessments. Quantitative results confirmed high technical integrity, indicating that structural distortions at panoramic stitching points were effectively minimized. Furthermore, the AI-generated stimuli maintained stable visual quality across varying greenery densities. Expert evaluations confirmed that the AI-driven approach significantly outperforms conventional 3D modeling, particularly in terms of presence and realism. By achieving high usability and spatial integrity scores, we established a novel standard for employing generative AI to create high-fidelity virtual environments for architectural and psychological research. Full article
(This article belongs to the Special Issue Artificial Intelligence in Architecture and Interior Design)
17 pages, 1247 KB  
Article
Report-Level Impact of DL Assistance on Teleradiology Quality Support for Brain Metastases: Real-World Clinical Practice at a Single Tertiary Center
by Jieun Roh, Hye Jin Baek, Seung Kug Baik, Bora Chung, Kwang Ho Choi, Hwaseong Ryu and Bong Kyeong Son
Diagnostics 2026, 16(8), 1211; https://doi.org/10.3390/diagnostics16081211 - 17 Apr 2026
Viewed by 174
Abstract
Objective: Existing deep learning (DL) studies on brain metastasis have largely focused on algorithm or reader performance in controlled settings, whereas its role in routine teleradiology quality support remains unestablished. We evaluated the report-level impact of DL assistance on brain metastasis interpretation in a real-world teleradiology workflow using dual-sequence MRI. Materials and Methods: In this retrospective study, 600 patients who underwent contrast-enhanced dual-sequence brain MRI during two consecutive 3-month periods before (pre-DL, n = 286) and after (post-DL, n = 314) DL integration into the teleradiology workflow were analyzed. Ten board-certified teleradiologists interpreted all the cases with or without DL-generated overlays. Report-level diagnostic metrics were assessed against a consensus reference standard established by faculty neuroradiologists. Subsequently, exploratory case-level stratified sensitivity analyses were performed for metastasis-positive examinations based on lesion multiplicity and the largest lesion size. Teleradiologists’ perceptions were assessed using a post-interpretation survey. Results: Compared with the pre-DL group, the post-DL group showed higher sensitivity (77.7% vs. 90.8%, p < 0.001), specificity (82.3% vs. 90.8%, p = 0.002), accuracy (80.8% vs. 90.8%, p < 0.001), positive predictive value (68.2% vs. 85.7%, p < 0.001), and negative predictive value (88.3% vs. 94.2%, p = 0.011). False-positive and false-negative rates were lower after DL implementation (11.9% vs. 5.7%, p = 0.009; 7.3% vs. 3.5%, p = 0.045). Sensitivity gains were most pronounced for cases with a single metastasis (74.6% vs. 91.2%, p = 0.007) and with the largest lesion ≤ 5 mm (74.3% vs. 92.0%, p = 0.004), whereas sensitivity was similar for multiple metastases and for cases with a largest lesion > 5 mm. Survey responses suggested favorable usability and diagnostic support. 
Conclusions: In this real-world teleradiology workflow, DL implementation was associated with higher report-level diagnostic metrics and fewer false interpretations. DL assistance may help support quality control for brain metastasis interpretation, particularly in more subtle and diagnostically challenging cases, although radiologist judgment remains essential for subtle or borderline lesions. Full article
(This article belongs to the Special Issue AI-Assisted Diagnostics in Telemedicine and Digital Health)
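The report-level metrics in the teleradiology study (sensitivity, specificity, accuracy, PPV, NPV) all derive from the same four confusion-matrix counts. A quick sketch of the standard definitions; the counts below are illustrative, not the study's data.

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard report-level diagnostic metrics from confusion-matrix counts."""
    return {
        "sensitivity": tp / (tp + fn),               # true-positive rate
        "specificity": tn / (tn + fp),               # true-negative rate
        "accuracy":    (tp + tn) / (tp + fp + fn + tn),
        "ppv":         tp / (tp + fp),               # positive predictive value
        "npv":         tn / (tn + fn),               # negative predictive value
    }

# Illustrative counts (not the study's data):
m = diagnostic_metrics(tp=90, fp=10, fn=10, tn=90)
print({k: round(v, 3) for k, v in m.items()})
```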
28 pages, 4829 KB  
Article
OH-MEMA: An Integrated One Health Mixed-Effects Modeling Approach for Syndromic Surveillance
by Aseel Basheer, Parisa Masnadi Khiabani, Wolfgang Jentner, Aaron Wendelboe, Jason R. Vogel, Katrin Gaardbo Kuhn, Michael C. Wimberly, Dean Hougen and David Ebert
J. Clin. Med. 2026, 15(8), 2966; https://doi.org/10.3390/jcm15082966 - 14 Apr 2026
Viewed by 377
Abstract
Background/Objectives: Integrating heterogeneous One Health time series into transparent and usable surveillance workflows remains difficult because data preparation, modeling, and interpretation are often separated across tools. In this paper, we introduce OH-MEMA (One Health Mixed-Effects Modeling and Analytics), an interactive visual analytics framework that integrates heterogeneous One Health data streams, including human clinical outcomes, environmental factors, and wastewater surveillance data, to support syndromic surveillance and pandemic preparedness. Methods: The system enables users to upload and analyze multi-source datasets through an interactive web-based interface. The modeling component supports fixed effects for multi-source predictors, random effects for spatial, temporal, and demographic grouping variables, optional random slopes, and rolling time-series validation. Model results are visualized as time series comparing observed and predicted outcomes, with evaluation metrics including Mean Absolute Error (MAE), Root Mean Square Error (RMSE), and correlation. To support iterative exploration, the system incorporates analytic provenance through a visual model tree that records prior configurations. Results: OH-MEMA was validated through both quantitative and qualitative evaluations. Quantitatively, mixed-effects models were assessed across multiple counties and outcomes using RMSE, MAE, and correlation, demonstrating robust predictive performance. Qualitatively, expert users, including epidemiologists and disease surveillance analysts, evaluated the system using the NASA Task Load Index and open-ended interviews, indicating improved interpretability, manageable cognitive workload, and effective workflow integration. Conclusions: OH-MEMA provides an interpretable, human-in-the-loop platform for exploratory forecasting and comparative model analysis in syndromic surveillance. 
The framework effectively bridges data integration, modeling, and interpretation, supporting user-centered analytical reasoning and decision-making in One Health applications. Full article
(This article belongs to the Special Issue New Advances of Infectious Disease Epidemiology)
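The "rolling time-series validation" OH-MEMA describes is commonly implemented as rolling-origin evaluation: refit on an expanding window, score on the next block, then advance the origin. A minimal index-generating sketch under that assumption (the exact splitting scheme is not specified in the abstract).

```python
def rolling_origin_splits(n, initial, horizon):
    """Yield (train_idx, test_idx) pairs for rolling-origin evaluation:
    train on [0, t), test on [t, t + horizon), advancing t by horizon."""
    t = initial
    while t + horizon <= n:
        yield list(range(t)), list(range(t, t + horizon))
        t += horizon

for train, test in rolling_origin_splits(n=10, initial=6, horizon=2):
    print(len(train), test)
# 6 [6, 7]
# 8 [8, 9]
```

Each split would then be scored with MAE, RMSE, and correlation, as the abstract lists.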
19 pages, 4124 KB  
Article
Prediction of Maximum Usable Frequency Based on a New Hybrid Deep Learning Model
by Yuyang Li, Zhigang Zhang and Jian Shen
Electronics 2026, 15(7), 1539; https://doi.org/10.3390/electronics15071539 - 7 Apr 2026
Viewed by 275
Abstract
The reliability of high-frequency (HF) frequency selection technology depends on the prediction accuracy of the Maximum Usable Frequency of the ionospheric F2 layer (MUF-F2). To improve its short-term prediction performance, a novel hybrid deep learning prediction model is proposed, which achieves accurate modeling of the complex spatiotemporal variation patterns of MUF-F2 by integrating a feature enhancement mechanism, a dual-branch feature extraction structure, and a bidirectional temporal dependency capture network. The hybrid prediction model integrates the Channel Attention mechanism (CA), Dual-Branch Convolutional Neural Network (DCNN), and Bidirectional Long Short-Term Memory network (BiLSTM). The model is trained and validated using MUF-F2 data from 5 communication links over China during geomagnetically quiet periods and 4 during geomagnetic storm periods, with the difference in the number of links attributed to experimental constraints and the disruptive effects of geomagnetic storms. Its performance is evaluated via multiple metrics, and a comparative analysis is conducted with commonly used prediction models such as the Long Short-Term Memory (LSTM) network. Experimental results show that during geomagnetically quiet periods, the proposed model achieves lower prediction errors (Root Mean Square Error (RMSE) < 1.1 MHz, Mean Absolute Percentage Error (MAPE) < 3.8%) and a higher goodness of fit (coefficient of determination (R2) > 0.94), with the average error reduction across all links ranging from 6.2% to 46.9% compared with the baseline model. Under geomagnetic storm disturbance conditions, the model still maintains robust prediction performance, with R2 > 0.89 for all communication links, as well as RMSE < 0.6 MHz, Mean Absolute Error (MAE) < 0.4 MHz, and MAPE < 3.3%. 
The study demonstrates that the proposed CA-DCNN-BiLSTM model exhibits excellent prediction accuracy and anti-interference capability under different geomagnetic activity conditions, which can effectively improve the short-term prediction accuracy of MUF-F2 and provide more reliable technical support for HF communication frequency decision-making. Full article
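The error metrics quoted for the MUF-F2 model (RMSE, MAE, MAPE, R2) have standard definitions; a minimal sketch with toy frequency values (illustrative, not the paper's data).

```python
import math

def regression_metrics(y_true, y_pred):
    """RMSE, MAE, MAPE (%), and coefficient of determination R^2."""
    n = len(y_true)
    errs = [yt - yp for yt, yp in zip(y_true, y_pred)]
    mean = sum(y_true) / n
    ss_res = sum(e * e for e in errs)
    ss_tot = sum((yt - mean) ** 2 for yt in y_true)
    return {
        "rmse": math.sqrt(ss_res / n),
        "mae":  sum(abs(e) for e in errs) / n,
        "mape": 100 * sum(abs(e) / abs(yt) for e, yt in zip(errs, y_true)) / n,
        "r2":   1 - ss_res / ss_tot,
    }

# Toy MUF values in MHz (illustrative):
print(regression_metrics([10.0, 12.0, 14.0], [10.5, 11.5, 14.0]))
```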
19 pages, 1077 KB  
Article
Usability of a Patch-Type Ultrasound System for Non-Invasive Hemodynamic Monitoring: A Simulation Study in Anesthesiologists
by Soyeon Noh, Hyungmin Kim, Hyeonkyeong Choi and Wonseuk Jang
Healthcare 2026, 14(7), 971; https://doi.org/10.3390/healthcare14070971 - 7 Apr 2026
Viewed by 327
Abstract
Background/Objectives: Non-invasive hemodynamic monitoring technologies are being developed to support clinical decisions while reducing risks from invasive procedures. Usability evaluation is essential to assess safety and effectiveness before commercial release. This study examined the usability of a novel patch-type ultrasound-based system (CW10) designed for continuous monitoring in perioperative settings. Methods: A summative evaluation was conducted following IEC 62366-1 with 15 anesthesiologists. Potential hazards were identified via the FDA MAUDE database (Code: DQK) to inform test scenarios. Participants were stratified by clinical experience (1–<5, 5–<10, and ≥10 years) to observe potential variations in operation. In a simulated operating room, users performed 9 clinical scenarios (49 tasks). Metrics included task success rates, subjective satisfaction (5-point Likert scale), and the System Usability Scale (SUS). Results: The overall task success rate was 98.2%. No statistically significant differences were observed across groups in performance, subjective ratings, or SUS scores (p > 0.05). The mean SUS score was 78.5, corresponding to a “Good” usability level. While some use errors occurred in tasks like probe orientation, root cause analysis suggested these were likely due to negative transfer from prior device experience rather than interface complexity. Conclusions: The results suggest the system demonstrates acceptable usability and consistent operation across experience levels. Integrated automated features and the patch design may contribute to reducing inter-user variability for continuous monitoring. This study provides usability evidence that may inform the development of similar non-invasive technologies. Full article
(This article belongs to the Section Healthcare Quality, Patient Safety, and Self-care Management)
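The SUS score of 78.5 reported for the CW10 follows the standard System Usability Scale formula: odd-numbered (positively worded) items contribute (response − 1), even-numbered items contribute (5 − response), and the raw 0–40 sum is multiplied by 2.5. A sketch with an illustrative respondent (not study data).

```python
def sus_score(responses):
    """System Usability Scale score from ten 1-5 Likert responses.
    Odd-numbered items are positively worded, even-numbered negatively."""
    assert len(responses) == 10
    total = sum((r - 1) if i % 2 == 0 else (5 - r)
                for i, r in enumerate(responses))
    return total * 2.5  # scale raw 0-40 sum to 0-100

# Illustrative single respondent (not study data):
print(sus_score([4, 2, 4, 2, 5, 1, 4, 2, 5, 2]))  # 82.5
```

A mean of 78.5 across participants lands in the commonly cited "Good" adjective band, as the abstract states.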
28 pages, 1463 KB  
Systematic Review
Evaluating UX and Usability in Automotive Human–Machine Interfaces: A Systematic Review
by Marco Cescon and Margherita Peruzzini
Appl. Sci. 2026, 16(7), 3437; https://doi.org/10.3390/app16073437 - 1 Apr 2026
Viewed by 670
Abstract
Human–Machine Interfaces (HMIs) are increasingly important in vehicles and other safety-critical systems, yet approaches to their usability and User eXperience (UX) evaluation remain fragmented. This systematic literature review investigates how HMIs are empirically evaluated across domains, with a primary focus on automotive HMIs, complemented by evidence from related safety-critical domains. The review examines UX and usability evaluation methodologies, tools, standards, and technological trends reported in recent research. Peer-reviewed journal articles published between 2015 and 2025 were considered if they addressed empirical usability or UX evaluation of HMIs. Searches were conducted in Scopus and ScienceDirect databases following PRISMA guidelines. From n = 659 records initially identified, n = 82 papers were included in the final analysis. The literature was synthesized using a descriptive and narrative approach, focusing on evaluation contexts, testing methodologies, sensor-based tools, applied standards, and assessment metrics. Most papers investigated automotive HMIs, while fewer addressed aerospace, industrial, maritime, and other safety-critical applications. Simulation-based user testing emerged as the dominant evaluation approach, frequently supported by eye-tracking and physiological sensing technologies and subjective evaluation questionnaires. A more detailed analysis revealed that adherence to international standards (e.g., ISO 9241 and ISO 26262) was inconsistent. Overall, the evidence highlights substantial methodological heterogeneity, fragmented adoption of standards, and limited cross-domain comparability. While UX and usability evaluation can now benefit from continuous technological advances, the field still lacks standardized and replicable assessment protocols. Future research should prioritize stronger integration of standards, multimodal evaluation approaches, and longitudinal study designs. Full article
(This article belongs to the Special Issue Enhancing User Experience in Automation and Control Systems)
19 pages, 903 KB  
Review
Monitoring Inputs, Control Architectures, and Failure Modes in Closed-Loop Vasopressor Systems: A Comprehensive Review
by Vitor Felippe, Hiorrana Sousa Dias, Carlos Darcy Alves Bersot, Gustavo Guimaraes Torres, Bruno Wegner, Gabriel Lemos González, Gustavo Wegner and Marcos Adriano Lessa
Sensors 2026, 26(7), 2180; https://doi.org/10.3390/s26072180 - 1 Apr 2026
Viewed by 424
Abstract
Closed-loop vasopressor systems integrate real-time blood pressure monitoring with automated decision logic to support hemodynamic stability in perioperative and critical care environments. These technologies sit at the intersection of biomedical sensing, signal processing, and clinician-supervised automation: the quality, latency, and failure behavior of the monitoring input can directly shape controller performance, safety margins, and clinical usability. In this comprehensive review, we synthesize the major closed-loop vasopressor architectures reported in the literature, examine how sensor modality and signal integrity influence algorithm behavior, and summarize recurrent reliability vulnerabilities spanning sensors, control logic, and device integration. We organize the field through an end-to-end information pipeline—monitoring input, signal conditioning and quality assessment, decision and control strategy, actuation via infusion technology, and supervisory safety layers—highlighting common performance metrics used to benchmark control quality. We then discuss clinical validation patterns across settings, emphasizing practical considerations for deployment and the evidence gaps that remain most relevant to high-risk populations. Finally, we propose reporting and validation priorities for future studies, with a focus on sensor robustness, transparency of algorithm design, integration safeguards, and standardized documentation of failures and overrides. Full article
(This article belongs to the Section Biomedical Sensors)
26 pages, 2252 KB  
Review
Detection and Source Identification of Goaf Water Accumulation in Chinese Coal Mines: A Review and Evaluation
by Jianying Zhang and Wenfeng Wang
Appl. Sci. 2026, 16(7), 3370; https://doi.org/10.3390/app16073370 - 31 Mar 2026
Viewed by 252
Abstract
Water accumulation in goafs in Chinese coal mines is a major hidden hazard that can trigger water inrush accidents and may also affect aquifer integrity and regional water security. Reliable delineation of goaf water distribution and identification of water-source types are therefore essential for mine water-hazard control and groundwater protection. This paper reviews the main technical routes for goaf groundwater investigation, including geophysical prospecting, hydrogeochemical and isotopic identification, direct inspection tools, and data-driven intelligent workflows. For geophysical detection, the mechanisms, engineering applicability, and key constraints of the Transient Electromagnetic Method (TEM), Surface Nuclear Magnetic Resonance (NMR), the High-Density Resistivity Method (HDRM), and the Coherent Frequency Component (CFC) electromagnetic wave reflection coherence method are synthesized, with emphasis on interpretation boundaries and uncertainty sources under complex geological conditions. For source identification, conventional hydrochemistry, stable isotopes, and laser-induced fluorescence are summarized, and intelligent recognition models such as neural networks and support vector machines are discussed in terms of workflow positioning and practical performance limits. A unified evaluation rationale is established and a semi-quantitative method–metric matrix is constructed to compare techniques in terms of reliability, deployability, cost level, environmental adaptability, and information value, thereby clarifying their functional roles and complementarities within staged engineering workflows. The synthesis indicates that major bottlenecks include limited deep capability under strong interference, pronounced interpretational non-uniqueness caused by complex geology and irregular goaf geometries, and constrained timeliness and generalization for mixed-source identification. 
Future directions are summarized as multi-method integration with fusion-driven interpretation, intelligent and quantitative decision support with quality control, and sensor–platform advances enabling more practical three-dimensional investigation, aiming to improve the reliability and engineering usability of goaf groundwater hazard assessment. Full article
(This article belongs to the Section Earth Sciences)
18 pages, 740 KB  
Systematic Review
A Systematic Review of Wearable Assistive Technologies for Hearing Impairment: Current Landscape, User Experience, and Future Directions
by Mihai Emanuel Spiţă and Ovidiu Andrei Schipor
Appl. Syst. Innov. 2026, 9(4), 70; https://doi.org/10.3390/asi9040070 - 25 Mar 2026
Viewed by 711
Abstract
Background: Hearing impairment affects a significant portion of the global population. The development of assistive technologies, particularly wearable devices, has been pivotal in mitigating these challenges. Methods: We present a systematic literature review on wearable assistive technologies for individuals with hearing impairment, analyzing 106 scientific articles identified from diverse sources (IEEE Xplore, ACM Digital Library, and Web of Science). Our comprehensive analysis is structured around device types, body locations, user study methodologies, sensory modalities, and application domains. Results: Findings reveal a strong emphasis on auditory and visual feedback, a mix of traditional hearing aids complemented by smart wearable devices, and experimental evaluations focusing on speech comprehension and usability. Visual analysis highlights a significant anatomical shift towards body-worn and wrist-worn haptic devices. While speech accuracy is rigorously reported, user-centric metrics like comfort and battery life are frequently neglected. Conclusions: Addressing these disparities, we propose the HEAR framework (Hybrid Architectures, Engaging Experiences, Adaptive Systems, Real-world Validation). This strategic roadmap advocates for a diversification of sensory outputs, more extensive longitudinal user studies, and the development of adaptive, multi-modal solutions that seamlessly integrate into users’ everyday lives. Full article
(This article belongs to the Special Issue Autonomous Robotics and Hybrid Intelligent Systems)
22 pages, 1170 KB  
Article
Analysis of Methods for Reducing Fuel Consumption in Shipping, Taking into Account Applicable Legal Regulations
by Cezary Behrendt, Włodzimierz Kamiński and Oleh Klyus
Fuels 2026, 7(2), 19; https://doi.org/10.3390/fuels7020019 - 25 Mar 2026
Viewed by 394
Abstract
The International Maritime Organization’s (IMO) greenhouse gas (GHG) strategy aims for a 40% reduction in carbon intensity by 2030 and a 70% reduction by 2050, relative to 2008 levels. Attainment of these objectives necessitates an integrated strategy encompassing technological advancements, operational optimization, and the adoption of innovative practices to curtail fuel consumption and enhance vessel performance. The Ship Energy Efficiency Management Plan (SEEMP), mandated by MEPC 62 in 2011, establishes a systematic framework for the continual enhancement of energy efficiency. SEEMP is intrinsically associated with reductions in fuel consumption, enabling maritime organizations to systematically monitor and control energy performance via the Energy Efficiency Operational Indicator (EEOI). This metric enables operators to assess operational energy performance and implement measures such as optimized voyage planning and fuel-saving technologies. However, the effectiveness of SEEMP varies widely across companies and vessel types, often due to limited crew awareness. To enhance daily implementation, it is essential to improve crew training and streamline SEEMP documentation. Simplifying SEEMP structures within ship management companies can further facilitate usability and compliance. By focusing on these areas, the maritime industry can better align with IMO’s GHG reduction targets and promote more sustainable operations and fuel-saving technologies. Full article
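The EEOI the abstract refers to is defined in IMO guidance (MEPC.1/Circ.684) as fuel CO2 emissions divided by transport work. A minimal sketch; the carbon factor of 3.114 t CO2 per t fuel corresponds to heavy fuel oil in that guidance, and the voyage figures are invented for illustration.

```python
def eeoi(voyages, carbon_factor=3.114):
    """Energy Efficiency Operational Indicator, in g CO2 per tonne-nautical-mile:
    sum(fuel * C_F) / sum(cargo * distance).

    voyages: list of (fuel_tonnes, cargo_tonnes, distance_nm) tuples.
    carbon_factor: t CO2 per t fuel (3.114 for HFO per IMO guidance).
    """
    co2 = sum(fuel * carbon_factor for fuel, _, _ in voyages)   # tonnes CO2
    work = sum(cargo * dist for _, cargo, dist in voyages)      # tonne-nm
    return 1e6 * co2 / work  # convert tonnes CO2 to grams

# Two illustrative voyages: (fuel t, cargo t, distance nm)
print(round(eeoi([(300, 20000, 4000), (280, 18000, 3800)]), 2))  # 12.17
```

Monitoring this indicator over successive voyages is what lets operators quantify the effect of measures like optimized voyage planning.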
17 pages, 1141 KB  
Article
Rapid and Accurate ED-XRF Quantification of Trace Arsenic in Rice-Based Foods Employing ANNs to Resolve Lead Spectral Interference
by Murphy Carroll, Zili Gao and Lili He
Foods 2026, 15(7), 1130; https://doi.org/10.3390/foods15071130 - 25 Mar 2026
Viewed by 370
Abstract
Trace quantifications of arsenic (As) in foods by energy-dispersive X-ray fluorescence (ED-XRF) spectrometry are hindered by spectral overlap from lead (Pb) at characteristic emission lines. This study employed artificial neural networks (ANN) to statistically model and correct for As/Pb spectral overlap, enabling accurate As quantifications in rice-based foods. Calibration standards were prepared by pelletizing milled rice spiked with As and Pb, and validation was performed using a certified reference material, commercial rice-based foods, and Pb-spiked commercial foods. The As calibration metrics were strong (R2 = 0.92, standard error in calibration = 41.20 µg kg⁻¹). The validation assessment achieved acceptable error using the As reference material (−19.43% error) and in commercial rice-based foods containing low Pb content (6 of 11 As determinations in agreement with the reference method). Additionally, accurate predictions of As were found in the presence of significant Pb interference (absolute mean error = 14.11% in Pb-spiked commercial foods). Overall, ANN modeling for Pb exhibited poor performance during both calibration and validation. This work demonstrates the usability of an ANN to address the As/Pb overlap issue while offering insights into the strengths and weaknesses of ANNs when coupled with ED-XRF for trace elemental quantifications in foods. Full article
27 pages, 9437 KB  
Article
Real-Time Digital Twin Architecture for Immersive Industrial Automation Training
by Jessica S. Ortiz, Víctor H. Andaluz and Christian P. Carvajal
Sensors 2026, 26(7), 2023; https://doi.org/10.3390/s26072023 - 24 Mar 2026
Viewed by 610
Abstract
Industrial automation laboratories often face limitations related to restricted access to industrial equipment, safety constraints, and limited scalability for hands-on experimentation. To address these challenges, this work proposes a real-time multi-layer Digital Twin architecture integrating a physical Siemens S7-1500 PLC, an immersive Unity-based virtual environment, HMI supervision, and IoT-enabled remote monitoring within a unified communication framework. The architecture is structured into physical, digital, and integration layers, enabling modular scalability and bidirectional synchronization between the physical process and its virtual representation through Ethernet TCP/IP communication. System performance was evaluated using synchronization metrics including communication latency, jitter, deterministic timing deviation, and event synchronization accuracy. Experimental results demonstrated stable PLC–Digital Twin communication with average latencies below 15 ms and jitter below 0.5 ms, ensuring reliable real-time interaction during continuous operation. A comparative evaluation with engineering students also showed improved learning conditions, achieving high perceived usability (SUS = 86/100) and reduced cognitive workload (NASA-TLX = 34/100). These results confirm the effectiveness of the proposed architecture as a scalable platform for Industry 4.0 training environments. Full article
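The latency and jitter figures in the Digital Twin evaluation come from timing the PLC–twin round trips; a generic way to derive both from paired send/receive timestamps is sketched below. Jitter here is taken as the mean absolute deviation of latency; other definitions (e.g. RFC 3550 interarrival jitter) exist, and the abstract does not specify which was used. Sample timings are illustrative.

```python
def latency_stats(send_ts, recv_ts):
    """Mean latency and jitter (mean absolute deviation of latency)
    from paired send/receive timestamps, all in milliseconds."""
    lats = [r - s for s, r in zip(send_ts, recv_ts)]
    mean = sum(lats) / len(lats)
    jitter = sum(abs(l - mean) for l in lats) / len(lats)
    return mean, jitter

# Illustrative timestamps (ms):
send = [0.0, 100.0, 200.0, 300.0]
recv = [14.2, 114.6, 214.1, 314.3]
mean, jitter = latency_stats(send, recv)
print(round(mean, 2), round(jitter, 2))  # 14.3 0.15
```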
50 pages, 4289 KB  
Article
Study on the Validity of Volatility Trading
by Alberto Castillo and Jose Manuel Mira Mcwilliams
FinTech 2026, 5(1), 26; https://doi.org/10.3390/fintech5010026 - 20 Mar 2026
Abstract
This study examines the role of volatility mean reversion in option pricing and evaluates the performance of commonly used volatility estimators within a broad market context. Using a comprehensive dataset of end-of-day option chains for the 100 most actively traded U.S. equities from 2018 to 2023, we apply several established statistical techniques—including unit root tests, variance ratio analysis, Hurst exponent estimation, and GARCH modeling—to quantify the presence and strength of mean reversion in volatility. To assess the accuracy and practical usability of volatility metrics for option valuation, we compare realized volatility, GARCH-based forecasts, range-based estimators, and widely used implied volatility measures such as the VIX and daily implied volatility averages, benchmarking each against contract-specific implied volatility. The results indicate that more than 65% of the analyzed tickers exhibit statistically significant mean-reverting behavior, and that the 30-day average implied volatility consistently provides the most reliable predictive performance among the tested metrics, while range-based estimators perform poorly when applied to end-of-day data. Finally, backtests of six delta-neutral option strategies informed by these findings did not yield consistent profitability or statistically significant outperformance, suggesting that although volatility mean reversion is measurable, its direct application to systematic trading remains challenging. Full article
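Of the mean-reversion diagnostics listed above, the variance ratio is the simplest to sketch: for a mean-reverting series, the variance of q-period sums grows more slowly than q times the one-period variance, so VR(q) < 1. A minimal stdlib-only illustration (not the authors' implementation) follows:

```python
from statistics import variance

def variance_ratio(returns, q):
    """Lo-MacKinlay-style variance ratio using overlapping q-period sums.

    VR(q) is approximately 1 for a random walk, below 1 under mean
    reversion, and above 1 under trending behavior."""
    one_period = variance(returns)
    q_sums = [sum(returns[i:i + q]) for i in range(len(returns) - q + 1)]
    return variance(q_sums) / (q * one_period)
```

As a sanity check, a perfectly alternating series has overlapping 2-period sums that are all zero, giving VR(2) = 0, while i.i.d. noise keeps VR(2) close to 1.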
25 pages, 2220 KB  
Article
HRC Metrology: Assessment Criteria, Metrics and Methods for Human–Robot Co-Manipulation Tasks
by S. M. Mizanoor Rahman
Machines 2026, 14(3), 336; https://doi.org/10.3390/machines14030336 - 16 Mar 2026
Abstract
We developed a human–robot collaborative manipulation (co-manipulation) system in the form of a power assist robotic system (PARS), in which a human and a robot collaborated to co-manipulate an object with power assistance. In a first experiment, a human subject performed the co-manipulation task with the PARS in each trial while an expert human–robot co-manipulation researcher observed the task. We collected and analyzed the co-manipulation and observation data, reviewed the related literature, and developed an HRC (human–robot collaboration) metrology: a set of assessment criteria, metrics, and methods for evaluating human–robot collaborative manipulation tasks. The proposed metrology covers both collaborative-performance and human–robot interaction (HRI) assessment criteria. We then developed a second human–robot co-manipulation system using a robot manipulator, in which the co-manipulation task was performed in conjunction with a collaborative assembly task between robot and human co-workers. In a second experiment, we assessed the co-manipulation task for each robotic system separately using the developed metrology to verify and validate the practicality, usability, and effectiveness of the criteria, metrics, and methods. The results showed that the HRC metrology was effective and practical for assessing the co-manipulation tasks, and we discuss the strengths and limitations of the assessment criteria, metrics, and methods. 
The proposed HRC metrology can be used to assess human–robot collaborative performance and human–robot interactions in human–robot co-manipulation tasks, with potential real-world applications in industrial manipulation and manufacturing, transport, logistics, civil construction, rescue and disaster management, timber processing, etc. Full article
(This article belongs to the Special Issue Design and Control of Assistive Robots)
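Assessment metrologies of this kind are often operationalized by normalizing each criterion to a common scale and combining the scores with weights. The sketch below illustrates that generic pattern only; the metric names and weights are hypothetical and are not the paper's actual criteria:

```python
def composite_hrc_score(scores, weights):
    """Weighted composite of normalized (0-1) HRC metric scores.

    `scores` and `weights` must cover the same metric names; both the
    names and the weighting scheme are illustrative placeholders."""
    if set(scores) != set(weights):
        raise ValueError("scores and weights must cover the same metrics")
    total_weight = sum(weights.values())
    return sum(scores[m] * weights[m] for m in scores) / total_weight
```

With equal weights this reduces to a simple average of the normalized metric scores; unequal weights let performance criteria and HRI criteria be traded off explicitly.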