Search Results (1,844)

Search Parameters:
Keywords = task-based metrics

34 pages, 7638 KB  
Article
Advanced Consumer Behaviour Analysis: Integrating Eye Tracking, Machine Learning, and Facial Recognition
by José Augusto Rodrigues, António Vieira de Castro and Martín Llamas-Nistal
J. Eye Mov. Res. 2026, 19(1), 9; https://doi.org/10.3390/jemr19010009 (registering DOI) - 19 Jan 2026
Abstract
This study presents DeepVisionAnalytics, an integrated framework that combines eye tracking, OpenCV-based computer vision (CV), and machine learning (ML) to support objective analysis of consumer behaviour in visually driven tasks. Unlike conventional self-reported surveys, which are prone to cognitive bias, recall errors, and social desirability effects, the proposed approach relies on direct behavioural measurements of visual attention. The system captures gaze distribution and fixation dynamics during interaction with products or interfaces. It uses AOI-level eye tracking metrics as the sole behavioural signal to infer candidate choice under constrained experimental conditions. In parallel, OpenCV and ML perform facial analysis to estimate demographic attributes (age, gender, and ethnicity). These attributes are collected independently and linked post hoc to gaze-derived outcomes. Demographics are not used as predictive features for choice inference. Instead, they are used as contextual metadata to support stratified, segment-level interpretation. Empirical results show that gaze-based inference closely reproduces observed choice distributions in short-horizon, visually driven tasks. Demographic estimates enable meaningful post hoc segmentation without affecting the decision mechanism. Together, these results show that multimodal integration can move beyond descriptive heatmaps. The platform produces reproducible decision-support artefacts, including AOI rankings, heatmaps, and segment-level summaries, grounded in objective behavioural data. By separating the decision signal (gaze) from contextual descriptors (demographics), this work contributes a reusable end-to-end platform for marketing and UX research. It supports choice inference under constrained conditions and segment-level interpretation without demographic priors in the decision mechanism. Full article

22 pages, 5297 KB  
Article
A Space-Domain Gravity Forward Modeling Method Based on Voxel Discretization and Multiple Observation Surfaces
by Rui Zhang, Guiju Wu, Jiapei Wang, Yufei Xi, Fan Wang and Qinhong Long
Symmetry 2026, 18(1), 180; https://doi.org/10.3390/sym18010180 - 19 Jan 2026
Abstract
Geophysical forward modeling serves as a fundamental theoretical approach for characterizing subsurface structures and material properties, essentially involving the computation of gravity responses at surface or spatial observation points based on a predefined density distribution. With the rapid development of data-driven techniques such as deep learning in geophysical inversion, forward algorithms are facing increasing demands in terms of computational scale, observable types, and efficiency. To address these challenges, this study develops an efficient forward modeling method based on voxel discretization, enabling rapid calculation of gravity anomalies and radial gravity gradients on multiple observation surfaces. Leveraging the parallel computing capabilities of graphics processing units (GPUs), together with tensor acceleration, Compute Unified Device Architecture (CUDA) execution, and just-in-time (JIT) compilation strategies, the method achieves high efficiency and automation in the forward computation process. Numerical experiments conducted on several typical theoretical models demonstrate the convergence and stability of the calculated results, indicating that the proposed method significantly reduces computation time while maintaining accuracy, making it well-suited for large-scale 3D modeling and fast batch simulation tasks. This research can efficiently generate forward datasets with multi-view and multi-metric characteristics, providing solid data support and a scalable computational platform for deep-learning-based geophysical inversion studies. Full article

35 pages, 22348 KB  
Article
Performance Assessment of Portable SLAM-Based Systems for 3D Documentation of Historic Built Heritage
by Valentina Bonora and Martina Colapietro
Sensors 2026, 26(2), 657; https://doi.org/10.3390/s26020657 (registering DOI) - 18 Jan 2026
Abstract
The rapid and reliable geometric documentation of historic built heritage is a key requirement for a wide range of conservation, analysis, and risk assessment activities. In recent years, portable and wearable Simultaneous Localization and Mapping (SLAM)-based systems have emerged as efficient tools for fast 3D data acquisition, offering significant advantages in terms of operational speed, accessibility, and flexibility. This paper presents an experimental performance assessment of three portable SLAM-based mobile mapping systems applied to the 3D documentation of historic religious buildings. Two historic parish churches in the Lunigiana region (Italy) are used as case studies to evaluate the systems under real-world conditions. The analysis focuses on key performance indicators relevant to metric documentation, including georeferencing accuracy, 3D model accuracy, point cloud density and resolution, and model completeness. The results highlight the capabilities and limitations of the tested systems, showing that all instruments can efficiently capture the primary geometries of complex historic buildings, while differences emerge in terms of accuracy, data consistency, and readability of architectural details. Although the work is framed within a broader research project addressing seismic vulnerability of historic structures, this contribution specifically focuses on the experimental evaluation of SLAM-based surveying performance. The results demonstrate that portable SLAM systems provide reliable geometric datasets suitable for preliminary documentation tasks and for supporting further multidisciplinary analyses, representing a valuable resource for the rapid 3D documentation of historic built heritage. Full article

24 pages, 785 KB  
Article
Weighted Sum-Rate Maximization and Task Completion Time Minimization for Multi-Tag MIMO Symbiotic Radio Networks
by Long Suo, Dong Wang, Wenxin Zhou and Xuefei Peng
Sensors 2026, 26(2), 644; https://doi.org/10.3390/s26020644 (registering DOI) - 18 Jan 2026
Abstract
Symbiotic radio (SR) has recently emerged as a promising paradigm for enabling spectrum- and energy-efficient massive connectivity in low-power Internet-of-Things (IoT) networks. By allowing passive backscatter devices (BDs) to coexist with active primary link transmissions, SR significantly improves spectrum utilization without requiring dedicated spectrum resources. However, most existing studies on multi-tag multiple-input multiple-output (MIMO) SR systems assume homogeneous traffic demands among BDs and primarily focus on rate-based performance metrics, while neglecting system-level task completion time (TCT) optimization under heterogeneous data requirements. In this paper, we investigate a joint performance optimization framework for a multi-tag MIMO symbiotic radio network. We first formulate a weighted sum-rate (WSR) maximization problem for the secondary backscatter links. The original non-convex WSR maximization problem is transformed into an equivalent weighted minimum mean square error (WMMSE) problem and then solved by a block coordinate descent (BCD) approach, in which the transmit precoding matrix, decoding filters, and backscatter reflection coefficients are alternately optimized. Second, to address the transmission delay imbalance caused by heterogeneous data sizes among BDs, we further propose a rate-weight-adaptive TCT minimization scheme, which dynamically updates the rate weight of each BD to minimize the overall TCT. Simulation results demonstrate that the proposed framework significantly improves the WSR of the secondary system without degrading the primary link performance, and achieves substantial TCT reduction in multi-tag heterogeneous traffic scenarios, validating its effectiveness and robustness for MIMO symbiotic radio networks. Full article
22 pages, 1347 KB  
Article
Multi-Source Data Fusion for Anime Pilgrimage Recommendation: Integrating Accessibility, Seasonality, and Popularity
by Yusong Zhou and Yuanyuan Wang
Electronics 2026, 15(2), 419; https://doi.org/10.3390/electronics15020419 (registering DOI) - 18 Jan 2026
Abstract
Anime pilgrimage refers to the act of fans visiting real-world locations featured in anime works, offering visual familiarity alongside cultural depth. However, existing studies on anime tourism provide limited computational support for selecting pilgrimage sites based on contextual and experiential factors. This study proposes an intelligent recommendation framework based on multi-source data fusion that integrates three key elements: transportation accessibility, seasonal alignment between the current environment and the anime’s depicted scene, and a Cross-Platform Popularity Index (CPPI) derived from major global platforms. We evaluate each pilgrimage location using route-based accessibility analysis, season-scene discrepancy scoring, and robustly normalized popularity metrics. These factors are combined into a weighted Multi-Criteria Decision Making (MCDM) model to generate context-aware recommendations. To rigorously validate the proposed approach, a user study was conducted using a ranking task involving popular destinations in Tokyo. Participants were presented with travel conditions, spatial relationships, and popularity scores and then asked to rank their preferences. We used standard ranking-based metrics to compare system-generated rankings with participant choices. Furthermore, we conducted an ablation study to quantify the individual contribution of accessibility, seasonality, and popularity. The results demonstrate strong alignment between the model and user preferences, confirming that incorporating these three dimensions significantly enhances the reliability and satisfaction of real-world anime pilgrimage planning. Full article

23 pages, 2419 KB  
Article
Building and Validating a Coal Mine Safety Question-Answering System with a Large Language Model Through a Two-Stage Fine-Tuning Method
by Zongyu Li, Xingli Liu, Shiqun Liu, He Ma and Gang Wu
Appl. Sci. 2026, 16(2), 971; https://doi.org/10.3390/app16020971 (registering DOI) - 17 Jan 2026
Abstract
Artificial intelligence technology holds significant importance for building intelligent question-answering systems in the field of coal mine safety and enhancing safety management levels. Currently, the field lacks specialized large language models and high-quality question-answering datasets. To address this, this study proposes a two-stage fine-tuning method based on Low-Rank Adaptation (LoRA) and Group Sequence Policy Optimization (GSPO) for training a question-answering model tailored to the coal mine safety domain. The research begins by constructing a dedicated question-answering dataset from domain-specific regulatory documents. Subsequently, using Qwen2.5-7B-Instruct as the base model, the study fine-tunes the model through supervised learning with LoRA, followed by further optimization using the GSPO reinforcement learning algorithm. Experiments show that the model trained with this method exhibits significant improvements on coal mine safety-related tasks, achieving superior results on multiple automated evaluation metrics compared with baseline models of similar scale. This study validates the effectiveness of the two-stage fine-tuning method in adapting large language models (LLMs) to specific domains, providing a new technical approach for intelligent coal mine safety management. It should be noted that, owing to the lack of external data, this study relies on a self-constructed dataset and has not yet undergone external independent validation, which constitutes the main limitation of the current work. Full article
27 pages, 13508 KB  
Article
Investigating XR Pilot Training Through Gaze Behavior Analysis Using Sensor Technology
by Aleksandar Knežević, Branimir Krstić, Aleksandar Bukvić, Dalibor Petrović and Boško Rašuo
Aerospace 2026, 13(1), 97; https://doi.org/10.3390/aerospace13010097 - 16 Jan 2026
Abstract
This research aims to characterize extended reality flight trainers and to provide a detailed account of the sensors employed to collect data essential for qualitative task performance analysis, with a particular focus on gaze behavior within the extended reality environment. A comparative study was conducted to evaluate the effectiveness of an extended reality environment relative to traditional flight simulators. Eight flight instructor candidates, advanced pilots with comparable flight-hour experience, were divided into four groups based on airplane or helicopter type and cockpit configuration (analog or digital). In the traditional simulator, fixation numbers, dwell time percentages, revisit numbers, and revisit time percentages were recorded, while in the extended reality environment, the following metrics were analyzed: fixation numbers and durations, saccade numbers and durations, smooth pursuits and durations, and number of blinks. These eye-tracking parameters were evaluated alongside flight performance metrics across all trials. Each scenario involved a takeoff and initial climb task within the traffic pattern of a fixed-wing aircraft. Despite the diversity of pilot groups, no statistically significant differences were observed in either flight performance or gaze behavior metrics between the two environments. Moreover, differences identified between certain pilot groups within one scenario were consistently observed in another, indicating the sensitivity of the proposed evaluation procedure. The enhanced realism and validated effectiveness are therefore crucial for establishing standards that support the formal adoption of extended reality technologies in pilot training programs. Integrating this digital space significantly enhances the overall training experience and provides a higher level of simulation fidelity for next-generation cadet training. Full article
(This article belongs to the Special Issue New Trends in Aviation Development 2024–2025)

23 pages, 5052 KB  
Article
Exploratory Study on Hybrid Systems Performance: A First Approach to Hybrid ML Models in Breast Cancer Classification
by Francisco J. Rojas-Pérez, José R. Conde-Sánchez, Alejandra Morlett-Paredes, Fernando Moreno-Barbosa, Julio C. Ramos-Fernández, José Luna-Muñoz, Genaro Vargas-Hernández, Blanca E. Jaramillo-Loranca, Juan M. Xicotencatl-Pérez and Eucario G. Pérez-Pérez
AI 2026, 7(1), 29; https://doi.org/10.3390/ai7010029 - 15 Jan 2026
Abstract
The classification of breast cancer using machine learning techniques has become a critical tool in modern medical diagnostics. This study analyzes the performance of hybrid models that combine traditional machine learning algorithms (TMLAs) with a convolutional neural network (CNN)-based VGG16 model for feature extraction to improve accuracy in classifying eight breast cancer subtypes (BCS). The methodology consists of three steps. First, image preprocessing is performed on the BreakHis dataset at 400× magnification, which contains 1820 histopathological images classified into eight BCS. Second, the VGG16 CNN is modified to function as a feature extractor that converts images into representative vectors. These vectors constitute the training set for TMLAs such as Random Forest (RF), Support Vector Machine (SVM), K-Nearest Neighbors (KNN), and Naive Bayes (NB), leveraging VGG16’s ability to capture relevant features. Third, k-fold cross-validation is applied to evaluate model performance by averaging the metrics obtained across all folds. The results reveal that hybrid models leveraging a CNN-based VGG16 model for feature extraction, followed by TMLAs, achieve outstanding experimental accuracy. The KNN-based hybrid model stood out with a precision of 0.97, accuracy of 0.96, sensitivity of 0.96, specificity of 0.99, F1-score of 0.96, and ROC-AUC of 0.97. These findings suggest that, with an appropriate methodology, hybrid models based on TMLAs have strong potential in classification tasks, offering a balance between performance and predictive capability. Full article
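The hybrid pipeline this abstract describes (a frozen CNN used only as a feature extractor, feeding a classical classifier) can be sketched in miniature. The snippet below is an illustrative stand-in, not the authors' code: a toy statistic-pooling function plays the role of VGG16, the data are synthetic, and a small numpy k-NN replaces a library classifier.

```python
import numpy as np

def toy_features(images):
    # Stand-in for the paper's VGG16 extractor: pool each image down to
    # (mean, std) of its pixels. Purely illustrative.
    flat = images.reshape(len(images), -1)
    return np.stack([flat.mean(axis=1), flat.std(axis=1)], axis=1)

def knn_predict(train_X, train_y, test_X, k=3):
    # Classic k-nearest-neighbour majority vote in feature space.
    preds = []
    for x in test_X:
        dist = np.linalg.norm(train_X - x, axis=1)
        nearest = train_y[np.argsort(dist)[:k]]
        vals, counts = np.unique(nearest, return_counts=True)
        preds.append(vals[counts.argmax()])
    return np.array(preds)

# Synthetic "histopathology" images: class 0 darker, class 1 brighter.
rng = np.random.default_rng(0)
imgs = np.concatenate([rng.uniform(0.0, 0.4, (30, 8, 8)),
                       rng.uniform(0.6, 1.0, (30, 8, 8))])
y = np.array([0] * 30 + [1] * 30)

feats = toy_features(imgs)
pred = knn_predict(feats[::2], y[::2], feats[1::2])  # even rows train, odd test
acc = (pred == y[1::2]).mean()
```

The design point the abstract makes is the split of responsibilities: the CNN supplies a compact representation, and the TMLA does the actual classification on those vectors.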

21 pages, 5194 KB  
Article
A Typhoon Clustering Model for the Western Pacific Coast Based on Interpretable Machine Learning
by Yanhe Wang, Yinzhen Lv, Lei Zhang, Tianrun Gao, Ruiqi Feng, Yihan Zhou and Wei Zhang
Electronics 2026, 15(2), 379; https://doi.org/10.3390/electronics15020379 - 15 Jan 2026
Abstract
Typhoons are complex and destructive natural disasters whose characteristics are closely related to human activities, and their accurate categorization is of vital significance for improving disaster warning and management capabilities. This study highlights the key role of typhoon clustering in analyzing typhoon behaviors, aiming to provide reliable support for disaster prevention and control. Based on the NOAA meteorological dataset from 2003 to 2024, this study first adopts the K-means clustering algorithm to classify typhoons into seven categories, then utilizes eight machine learning models to train and validate the classification results, and introduces the Shapley additive explanations (SHAP) algorithm to enhance the interpretability of the models. The study data cover a variety of features, such as air temperature, wind speed, atmospheric pressure, and weather station observations; after a systematic preprocessing process, a feature matrix containing key variables such as typhoon intensity and moving speed is constructed. The results show that the XGBoost model outperforms the others across multiple evaluation metrics (Accuracy: 0.992, Precision: 0.989, Recall: 0.992, F1 Score: 0.990), highlighting its exceptional capability in managing complex weather classification tasks. The seven typhoon categories identified by K-means exhibit distinct feature patterns, while the SHAP analysis further reveals the effect of each feature on the classification and its potential interactions. This study not only verifies the effectiveness of K-means combined with machine learning in typhoon classification but also lays a solid scientific foundation for accurate prediction, risk assessment, and optimization of management strategies for typhoon disasters through the in-depth analysis of feature impacts. Full article
(This article belongs to the Special Issue AI-Driven Data Analytics and Mining)
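The clustering step this abstract relies on, K-means over a typhoon feature matrix, reduces to Lloyd's algorithm: alternate nearest-centroid assignment with centroid recomputation. A minimal numpy sketch, using synthetic two-feature data in place of the NOAA variables:

```python
import numpy as np

def kmeans(X, k, iters=50):
    # Plain Lloyd's algorithm with a deterministic spread-out initialization
    # (a real pipeline would use k-means++ or multiple restarts).
    centroids = X[np.linspace(0, len(X) - 1, k).astype(int)].copy()
    for _ in range(iters):
        # Distance of every point to every centroid -> nearest-centroid label.
        dist = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dist.argmin(axis=1)
        # Move each centroid to the mean of its assigned points.
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    return labels, centroids

# Two synthetic clusters standing in for rows of (intensity, moving speed).
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.1, (20, 2)),
               rng.normal(5.0, 0.1, (20, 2))])
labels, centroids = kmeans(X, k=2)
```

The cluster labels produced this way are what the study's eight supervised models are then trained to reproduce, which is what makes the SHAP attribution over input features meaningful.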

14 pages, 4400 KB  
Article
Simulator Training on Neurointerventional Skill Acquisition in Novices: A Pilot Study
by Alexander von Hessling, Tim von Wyl, Dirk Lehnick, Chloé Sieber, Justus E. Roos and Grzegorz M. Karwacki
Neurol. Int. 2026, 18(1), 16; https://doi.org/10.3390/neurolint18010016 - 14 Jan 2026
Abstract
Background: Simulation-based training may offer a useful approach to support skill acquisition in neurointerventional stroke treatment without exposing patients to procedural risks. As the global demand for thrombectomy rises, training strategies that ensure procedural competence while addressing workforce constraints are increasingly important. With this pilot study, we aim to generate a hypothesis as to whether additional exposure of trainees to mechanical thrombectomy could benefit from simulator training on top of the standard training carried out on flow models. This study was designed as an exploratory pilot investigation and was not able to provide inferential or confirmatory statistical conclusions. Methods: Six novice participants (advanced clinical-year medical students with completed anatomical and preclinical training, but without previous exposure to catheter-based interventions) performed two neurointerventional tasks, vascular access and mechanical thrombectomy (MTE), on flow models. After a baseline assessment, three participants received standard model-based training (control group), and three received additional simulator training using a high-fidelity angiography simulator (Mentice VIST G5). Performance was reassessed after four weeks using technical and clinical surrogate metrics, which were ranked and descriptively analyzed. Results: No relevant differences were observed between groups for the vascular access task. In contrast, the simulator group demonstrated a trend toward improved performance in the MTE task, with greater gains in efficiency, autonomy, and procedural safety. Conclusions: Our findings indicate a possible benefit of even brief simulator exposure for skill acquisition for complex endovascular procedures such as MTE. While conventional training may suffice for basic skills, simulation may be particularly helpful in supporting learning in more advanced tasks. Full article

21 pages, 2930 KB  
Article
Robust Model Predictive Control with a Dynamic Look-Ahead Re-Entry Strategy for Trajectory Tracking of Differential-Drive Robots
by Diego Guffanti, Moisés Filiberto Mora Murillo, Santiago Bustamante Sanchez, Javier Oswaldo Obregón Gutiérrez, Marco Alejandro Hinojosa, Alberto Brunete, Miguel Hernando and David Álvarez
Sensors 2026, 26(2), 520; https://doi.org/10.3390/s26020520 - 13 Jan 2026
Abstract
Accurate trajectory tracking remains a central challenge in differential-drive mobile robots (DDMRs), particularly when operating under real-world conditions. Model Predictive Control (MPC) provides a powerful framework for this task, but its performance degrades when the robot deviates significantly from the nominal path. To address this limitation, robust recovery mechanisms are required to ensure stable and precise tracking. This work presents an experimental validation of an MPC controller applied to a four-wheel DDMR, whose odometry is corrected by a SLAM algorithm running in ROS 2. The MPC is formulated as a quadratic program with state and input constraints on linear (v) and angular (ω) velocities, using a prediction horizon of Np=15 future states, adjusted to the computational resources of the onboard computer. A novel dynamic look-ahead re-entry strategy is proposed, which activates when the robot exits a predefined lateral error band (δ=0.05 m) and interpolates a smooth reconnection trajectory based on a forward look-ahead point, ensuring gradual convergence and avoiding abrupt re-entry actions. Accuracy was evaluated through lateral and heading errors measured via geometric projection onto the nominal path, ensuring fair comparison. From these errors, RMSE, MAE, P95, and in-band percentage were computed as quantitative metrics. The framework was tested on real hardware at 50 Hz through 5 nominal experiments and 3 perturbed experiments. Perturbations consisted of externally imposed velocity commands at specific points along the path, while configuration parameters were systematically varied across trials, including the weight R, smoothing distance Lsmooth, and activation of the re-entry strategy. In nominal conditions, the best configuration (ID 2) achieved a lateral RMSE of 0.05 m, a heading RMSE of 0.06 rad, and maintained 68.8% of the trajectory within the validation band. Under perturbations, the proposed strategy substantially improved robustness. For instance, in experiment ID 6 the robot sustained a lateral RMSE of 0.12 m and preserved 51.4% in-band, outperforming MPC without re-entry, which suffered from larger deviations and slower recoveries. The results confirm that integrating MPC with the proposed re-entry strategy enhances both accuracy and robustness in DDMR trajectory tracking. By combining predictive control with a spatially grounded recovery mechanism, the approach ensures consistent performance in challenging scenarios, underscoring its relevance for reliable mobile robot navigation in uncertain environments. Full article
(This article belongs to the Section Sensors and Robotics)
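The tracking metrics this abstract reports (lateral RMSE and the percentage of samples inside the δ = 0.05 m band) follow directly once the signed lateral error to the nominal path is available. A minimal sketch for the simplest case, a straight nominal path along y = 0, with invented robot poses in place of real odometry:

```python
import numpy as np

# Robot poses (x, y); the y-values here are made up for illustration.
traj = np.array([[0.0, 0.02], [0.5, 0.06], [1.0, 0.04], [1.5, -0.03]])

lateral_err = traj[:, 1] - 0.0             # signed lateral error to y = 0 path
delta = 0.05                               # error band from the abstract (m)

rmse = np.sqrt(np.mean(lateral_err ** 2))
in_band = np.mean(np.abs(lateral_err) <= delta) * 100  # % of samples in band
```

For a curved nominal path the lateral error would instead come from projecting each pose onto the nearest path segment, as the abstract's "geometric projection" phrasing indicates; the metric definitions are unchanged.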

27 pages, 1264 KB  
Systematic Review
Radiomics from Routine CT and PET/CT Imaging in Laryngeal Squamous Cell Carcinoma: A Systematic Review with Radiomics Quality Score Assessment
by Amar Rajgor, Terrenjit Gill, Eric Aboagye, Aileen Mill, Stephen Rushton, Boguslaw Obara and David Winston Hamilton
Cancers 2026, 18(2), 237; https://doi.org/10.3390/cancers18020237 - 13 Jan 2026
Abstract
Background/Objectives: Radiomics, the high-throughput extraction of quantitative features from medical imaging, offers a promising method for identifying laryngeal cancer imaging biomarkers. We aim to systematically review the literature on radiomics in laryngeal squamous cell carcinoma, assessing applications in tumour staging, prognosis, recurrence prediction, and treatment response evaluation. PROSPERO ID: CRD420251117983. Methods: MEDLINE and EMBASE databases were searched in May 2025. Inclusion criteria: studies published between 1 January 2010 and 31 January 2024, extracted radiomic features from CT, PET/CT, or MRI, and analysed outcomes related to diagnosis, staging, survival, recurrence, or treatment response in laryngeal cancer. Exclusion criteria: case reports, abstracts, editorials, reviews, or conference proceedings, exclusive focus on preclinical or animal models, lack of a clear radiomics methodology, or did not include imaging-based feature extraction. Results were synthesised narratively by modelling objective, alongside formal assessment of methodological quality using the Radiomics Quality Score (RQS). Results: Twenty studies met the inclusion criteria, with most using CT-based radiomics. Seven incorporated PET/CT. Radiomic models demonstrated moderate-to-high accuracy across tasks including T-staging, thyroid cartilage invasion, survival prediction, and local failure. Key predictive features included first-order entropy, skewness, and texture metrics such as size zone non-uniformity and GLCM correlation. Methodological variability, limited external validation, and small samples were frequent limitations. Conclusions: Radiomics holds strong promise as a non-invasive biomarker for laryngeal cancer. However, methodological heterogeneity identified through formal quality assessment indicates that improved standardisation, reproducibility, and multicentre validation are required before widespread clinical implementation. Full article
(This article belongs to the Section Systematic Review or Meta-Analysis in Cancer Research)

26 pages, 29009 KB  
Article
Quantifying the Relationship Between Speech Quality Metrics and Biometric Speaker Recognition Performance Under Acoustic Degradation
by Ajan Ahmed and Masudul H. Imtiaz
Signals 2026, 7(1), 7; https://doi.org/10.3390/signals7010007 - 12 Jan 2026
Viewed by 285
Abstract
Self-supervised learning (SSL) models have achieved remarkable success in speaker verification tasks, yet their robustness to real-world audio degradation remains insufficiently characterized. This study presents a comprehensive analysis of how audio quality degradation affects three prominent SSL-based speaker verification systems (WavLM, Wav2Vec2, and HuBERT) across three diverse datasets: TIMIT, CHiME-6, and Common Voice. We systematically applied 21 degradation conditions spanning noise contamination (SNR levels from 0 to 20 dB), reverberation (RT60 from 0.3 to 1.0 s), and codec compression (various bit rates), then measured both objective audio quality metrics (PESQ, STOI, SNR, SegSNR, fwSNRseg, jitter, shimmer, HNR) and speaker verification performance metrics (EER, AUC-ROC, d-prime, minDCF). At the condition level, multiple regression with all eight quality metrics explained up to 80% of the variance in minDCF for HuBERT and 78% for WavLM, but only 35% for Wav2Vec2; EER predictability was lower (69%, 67%, and 28%, respectively). PESQ was the strongest single predictor for WavLM and HuBERT, while Shimmer showed the highest single-metric correlation for Wav2Vec2; fwSNRseg yielded the top single-metric R2 for WavLM, and PESQ for HuBERT and Wav2Vec2 (with much smaller gains for Wav2Vec2). WavLM and HuBERT exhibited more predictable quality-performance relationships compared to Wav2Vec2. These findings establish quantitative relationships between measurable audio quality and speaker verification accuracy at the condition level, though substantial within-condition variability limits utterance-level prediction accuracy. Full article
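The condition-level analysis described above regresses verification performance on eight quality metrics and reports the explained variance. A minimal sketch of that kind of multiple regression is shown below with ordinary least squares on synthetic data; the 21 conditions and 8 predictors mirror the abstract, but the weights and noise level are illustrative assumptions.

```python
import numpy as np

# Illustrative condition-level regression: predict a verification metric
# (e.g. EER or minDCF) from objective quality metrics. Data are synthetic;
# only the dimensions (21 conditions, 8 metrics) follow the study design.
rng = np.random.default_rng(42)
n_conditions = 21
X = rng.normal(size=(n_conditions, 8))          # PESQ, STOI, SNR, ... stand-ins
true_w = np.array([1.5, -0.8, 0.5, 0.0, 0.3, 0.0, -0.2, 0.1])
eer = X @ true_w + rng.normal(scale=0.3, size=n_conditions)

# Ordinary least squares with an intercept column
A = np.column_stack([np.ones(n_conditions), X])
coef, *_ = np.linalg.lstsq(A, eer, rcond=None)
pred = A @ coef
ss_res = np.sum((eer - pred) ** 2)
ss_tot = np.sum((eer - eer.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot                       # fraction of variance explained
print(f"R^2 = {r2:.2f}")
```

The reported 80%/78%/35% figures correspond to this R^2 statistic computed per model; with only 21 conditions and 8 predictors, in-sample R^2 is optimistic, which is one reason the abstract cautions about utterance-level prediction.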
18 pages, 1386 KB  
Article
Long-Term and Short-Term Photovoltaic Power Generation Forecasting Using a Multi-Scale Fusion MHA-BiLSTM Model
by Mengkun Li, Letian Sun and Yitian Sun
Energies 2026, 19(2), 363; https://doi.org/10.3390/en19020363 - 12 Jan 2026
Viewed by 168
Abstract
As the proportion of photovoltaic (PV) power generation continues to increase in power systems, high-precision PV power forecasting has become a critical challenge for smart grid scheduling. Traditional forecasting methods often struggle with accuracy and error propagation, particularly when handling short-term fluctuations and long-term trends. To address these issues, this paper proposes a multi-time-scale forecasting model, MHA-BiLSTM, based on Bidirectional Long Short-Term Memory (BiLSTM) and Multi-Head Attention (MHA). The model combines the short-term dependency modeling ability of BiLSTM with the long-term trend-capturing ability of the multi-head attention mechanism, effectively addressing both short-term (within 6 h) and long-term (up to 72 h) dependencies in PV power data. Experimental results on a simulated PV dataset demonstrate that the MHA-BiLSTM model outperforms traditional models such as LSTM, BiLSTM, and Transformer on multiple evaluation metrics (e.g., MSE, RMSE, R2), showing particularly strong robustness and generalization ability in long-term forecasting tasks. The results show that MHA-BiLSTM effectively improves the accuracy of both short-term and long-term PV power predictions, providing valuable support for future microgrid scheduling, energy storage optimization, and the development of smart energy systems. Full article
(This article belongs to the Section A2: Solar Energy and Photovoltaic Systems)
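The multi-head attention component named in the model can be sketched in plain NumPy as scaled dot-product self-attention over a sequence of hidden states. This is a generic illustration of the mechanism, not the paper's implementation: the sequence length (72 hourly steps), model width, head count, and random weights standing in for learned parameters are all assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_attention(x, num_heads, rng):
    """Scaled dot-product multi-head self-attention over a sequence.

    x: (seq_len, d_model), e.g. BiLSTM hidden states per time step.
    Returns a same-shaped tensor in which every position attends to
    all others, capturing long-range (here, multi-hour) dependencies.
    """
    seq_len, d_model = x.shape
    d_head = d_model // num_heads
    # Random projections stand in for learned Q/K/V/output weights
    wq, wk, wv, wo = (rng.normal(scale=d_model ** -0.5,
                                 size=(d_model, d_model)) for _ in range(4))
    q, k, v = x @ wq, x @ wk, x @ wv
    # Split into heads: (num_heads, seq_len, d_head)
    split = lambda t: t.reshape(seq_len, num_heads, d_head).transpose(1, 0, 2)
    q, k, v = split(q), split(k), split(v)
    weights = softmax(q @ k.transpose(0, 2, 1) / np.sqrt(d_head))
    out = (weights @ v).transpose(1, 0, 2).reshape(seq_len, d_model)
    return out @ wo

rng = np.random.default_rng(1)
h = rng.normal(size=(72, 64))   # 72 hourly states, d_model = 64 (assumed)
y = multi_head_attention(h, num_heads=8, rng=rng)
print(y.shape)
```

In an MHA-BiLSTM architecture, `h` would be the BiLSTM's output sequence and the attention output would feed the forecasting head, letting the model weight distant time steps when predicting up to 72 h ahead.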
25 pages, 4608 KB  
Article
Comparison of Multi-View and Merged-View Mining Vehicle Teleoperation Systems Through Eye-Tracking
by Alireza Kamran Pishhesari, Mahdi Shahsavar, Amin Moniri-Morad and Javad Sattarvand
Mining 2026, 6(1), 3; https://doi.org/10.3390/mining6010003 - 12 Jan 2026
Viewed by 110
Abstract
While multi-view visualization systems are widely used for mining vehicle teleoperation, they often impose high cognitive load and restrict operator attention. To explore a more efficient alternative, this study evaluated a merged-view interface that integrates multiple camera perspectives into a single coherent display. In a controlled experiment, 35 participants navigated a teleoperated robot along a 50 m lab-scale path representative of an underground mine under both multi-view and merged-view conditions. Task performance and eye-tracking data—including completion time, path adherence, and speed-limit violations—were collected for comparison. The merged-view system enabled 6% faster completion times, 21% higher path adherence, and 28% fewer speed-limit violations. Eye-tracking metrics indicated more efficient and distributed attention: blink rate decreased by 29%, fixation duration shortened by 18%, saccade amplitude increased by 11%, and normalized gaze-transition entropy rose by 14%, reflecting broader and more adaptive scanning. NASA-TLX scores further showed a 27% reduction in perceived workload. Regression-based sensitivity analysis revealed that gaze entropy was the strongest predictor of efficiency in the multi-view condition, while fixation duration dominated under merged-view visualization. For path adherence, blink rate was most influential in the multi-view setup, whereas fixation duration became key in merged-view operation. Overall, the results indicated that merged-view visualization improved visual attention distribution and reduced cognitive tunneling indicators in a controlled laboratory teleoperation task, offering early-stage, interface-level insights motivated by mining-relevant teleoperation challenges. Full article
(This article belongs to the Special Issue Mine Automation and New Technologies, 2nd Edition)
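Normalized gaze-transition entropy, the metric reported to rise by 14% under the merged view, quantifies how unpredictably gaze moves between areas of interest (AOIs). The sketch below is a generic illustration of that computation on made-up fixation sequences; the AOI names and the stationary weighting are assumptions, not the study's exact protocol.

```python
import math
from collections import Counter

def normalized_gaze_transition_entropy(fixation_aois):
    """Normalised transition entropy of a sequence of fixated AOIs.

    0 -> fully predictable scanning (a cognitive-tunneling indicator),
    1 -> transitions maximally distributed across all AOIs.
    """
    aois = sorted(set(fixation_aois))
    n = len(aois)
    if n < 2:
        return 0.0
    transitions = Counter(zip(fixation_aois, fixation_aois[1:]))
    total = sum(transitions.values())
    from_counts = Counter(src for src, _ in transitions)
    h = 0.0
    for src in aois:
        p_src = from_counts[src] / total          # weight of source AOI
        out = [c for (s, _), c in transitions.items() if s == src]
        s_total = sum(out)
        for c in out:
            p = c / s_total                       # conditional transition prob.
            h -= p_src * p * math.log2(p)
    return h / math.log2(n)                       # divide by maximum entropy

# Repetitive two-AOI scanning (tunneling) vs. distributed scanning
tunnel = ["road", "speed", "road", "speed"] * 10
spread = ["road", "mirror", "speed", "map",
          "road", "map", "mirror", "speed"] * 5
e_tunnel = normalized_gaze_transition_entropy(tunnel)
e_spread = normalized_gaze_transition_entropy(spread)
print(e_tunnel, e_spread)
```

A strictly alternating scanpath has fully deterministic transitions and scores 0, while broader, less predictable scanning (as the merged-view condition produced) scores higher, consistent with the reported increase.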