Search Results (3,512)

Search Parameters:
Keywords = ground truthing

25 pages, 8679 KB  
Article
Real-Time Cardiac Arrhythmia Classification Using TinyML on Ultra-Low-Cost Microcontrollers: A Feasibility Study for Resource-Constrained Environments
by Misael Zambrano-de la Torre, Sebastian Guzman-Alfaro, Andrea Acuña-Correa, Manuel A. Soto-Murillo, Maximiliano Guzmán-Fernández, Ricardo Robles-Ortiz, Karen E. Villagrana-Bañuelos, Jose G. Arceo-Olague, Carlos H. Espino-Salinas, Ana G. Sánchez-Reyna and Erik O. Cuevas-Rodriguez
Bioengineering 2026, 13(5), 532; https://doi.org/10.3390/bioengineering13050532 - 1 May 2026
Abstract
Recent advances in edge computing and Tiny Machine Learning (TinyML) have enabled the deployment of artificial intelligence models directly on microcontrollers with extremely limited computational and memory resources. In this context, this work presents the design, implementation, and validation of a real-time cardiac arrhythmia classification system based on a quantized one-dimensional convolutional neural network (1D-CNN), deployed on an 8-bit Arduino UNO microcontroller. The proposed system integrates end-to-end processing, including ECG signal acquisition using a low-cost AD8232 analog front-end, signal preprocessing, heartbeat segmentation, classification, and real-time visualization on an OLED display. The model was trained and evaluated using the MIT-BIH Arrhythmia Database, considering a reduced three-class problem (Normal, Ventricular, and Supraventricular) to meet the constraints of ultra-low-cost hardware deployment. Under benchmark conditions, the quantized model achieved an accuracy of 97.6%, with a memory footprint below 24 KB and an average inference time of 200 ms per heartbeat, enabling real-time operation on a resource-constrained microcontroller. Real-time experiments were conducted using signals acquired from healthy volunteers to validate system functionality, although no annotated ground truth was available for these recordings, and therefore no diagnostic performance was derived from them. The results demonstrate the feasibility of deploying lightweight deep learning models on ultra-constrained embedded systems using the TinyML paradigm, implemented using TensorFlow 2.15 and TensorFlow Lite. This work should be interpreted as a proof-of-concept platform that highlights the trade-off between classification performance and hardware limitations, providing a foundation for future development of low-cost cardiac monitoring technologies in resource-limited environments. Full article
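The memory figures above come from model quantization. As a rough illustration only (not the paper's actual TensorFlow Lite pipeline), the sketch below applies symmetric per-tensor int8 quantization to a toy weight matrix and compares memory footprints; the layer shape and values are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(0.0, 0.3, size=(32, 64)).astype(np.float32)  # toy layer

# Symmetric per-tensor int8 quantization: one scale maps max |w| to 127.
scale = np.abs(weights).max() / 127.0
q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
deq = q.astype(np.float32) * scale  # dequantized weights used at inference

footprint_fp32 = weights.nbytes      # 32 * 64 * 4 bytes
footprint_int8 = q.nbytes + 4        # int8 weights + one fp32 scale
err = np.abs(weights - deq).max()    # worst-case rounding error
print(footprint_fp32, footprint_int8, err <= scale / 2)
```

A full int8 conversion also quantizes activations using a calibration dataset; this sketch only shows the weight-side arithmetic behind the roughly 4x memory reduction.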
30 pages, 2472 KB  
Article
Energy Consumption Prediction for an Electric Vehicle Using Machine Learning: A Comparative Study of Regression, Ensemble, and LSTM-Based Models
by Juan Diego Valladolid and Juan P. Ortiz
Vehicles 2026, 8(5), 99; https://doi.org/10.3390/vehicles8050099 - 1 May 2026
Abstract
Accurate energy consumption prediction is fundamental for enhancing range estimation and trip planning in battery electric vehicles (BEVs) under real-world conditions. This study develops a route-level benchmark utilizing 1 Hz data acquired via ECU/OBD-II interfaces (CAN 500 kbps) across ten diverse real-world driving routes. The input feature set comprises vehicle speed, longitudinal acceleration, estimated motor torque, road altitude, and accelerator pedal position. Ground truth energy consumption was derived from battery voltage and current, integrated via the trapezoidal rule. We performed a comparative analysis between five memoryless regressors (FNN, SVR, GPR, QRNN, and Bagged Trees) and three sequence models (LSTM, GRU, and BiLSTM) trained on 20-second temporal windows. The results indicate that the GRU model achieved the highest overall performance (mean RMSE = 0.1142 kWh, R² = 0.9545, and MAE = 0.072 kWh), while Bagged Trees emerged as the most robust static model (mean RMSE = 0.1587 kWh). Temporal models outperformed static ones on routes with high dynamic variability, whereas Bagged Trees excelled in five specific scenarios. These findings provide a controlled within-route benchmark for time-resolved cumulative energy estimation and highlight the need for chronological and cross-route validation before drawing deployment-oriented generalization claims. Full article
(This article belongs to the Special Issue Application of Machine Learning in Electric Vehicles)
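The ground-truth construction described in this abstract (integrating battery power via the trapezoidal rule) can be sketched in a few lines. The voltage and current traces below are synthetic stand-ins, not the study's CAN data:

```python
import numpy as np

# Synthetic 1 Hz traces standing in for OBD-II battery measurements.
t = np.arange(0.0, 60.0, 1.0)                # seconds, 1 Hz sampling
voltage = 350.0 + 5.0 * np.sin(t / 10.0)     # hypothetical pack voltage [V]
current = 80.0 + 40.0 * np.sin(t / 7.0)      # hypothetical pack current [A]

power_w = voltage * current                  # instantaneous power [W]
# Trapezoidal rule: average adjacent power samples times the time step.
energy_j = np.sum((power_w[1:] + power_w[:-1]) / 2.0 * np.diff(t))
energy_kwh = energy_j / 3.6e6                # 1 kWh = 3.6e6 J
print(round(energy_kwh, 3))
```

The cumulative sum of the same per-step trapezoids yields the time-resolved energy profile that the regression models are trained to predict.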

20 pages, 29170 KB  
Article
Hyperspectral Mapping of Pasture Nitrogen Content and Metabolizable Energy in New Zealand Hill Country Grasslands
by Nitin Bhatia and Maxence Plouviez
AgriEngineering 2026, 8(5), 170; https://doi.org/10.3390/agriengineering8050170 - 30 Apr 2026
Abstract
Hyperspectral airborne data combined with machine learning has proven effective for characterizing plant nutritional quality. However, terrain, viewing geometry, and illumination can distort spectral signatures, leading to biased models with limited generalizability for large-scale mapping across farms with a heterogeneous landscape. In this study, we developed a framework for mapping pasture quality using airborne hyperspectral imaging while explicitly accounting for in-field acquisition and environmental effects. Nitrogen content (N%) and metabolizable energy (ME) were used as reference indicators across four hill country farms in New Zealand with contrasting environmental and management conditions. Ground truth was obtained using standard laboratory wet chemistry methods and paired with AisaFENIX airborne hyperspectral data, resulting in 1610 spectral samples derived from 161 spatially independent ground plots. Gaussian Process Regression (GPR) and a one-dimensional convolutional neural network (1D-CNN) were trained and evaluated on an independent test dataset. Both models achieved strong predictive performance (R² > 0.8); however, GPR provided more reliable estimates through predictive uncertainty. Using a 95% confidence interval threshold to mask uncertain predictions increased overall performance (R² > 0.9) and consequently improved the reliability of the mapped outputs. This approach enables spatially explicit pasture nutrient assessment to support precision land management for carbon and nitrogen. Full article
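The uncertainty-masking step can be illustrated with a minimal Gaussian process regression written from scratch: an RBF-kernel GP on synthetic 1-D data (not the study's model or spectra), where predictions whose 95% interval is wider than a chosen threshold are simply discarded:

```python
import numpy as np

def rbf(a, b, ls=1.0):
    """Unit-variance RBF kernel between two 1-D input vectors."""
    d2 = (a[:, None] - b[None, :]) ** 2
    return np.exp(-0.5 * d2 / ls**2)

rng = np.random.default_rng(1)
X = np.sort(rng.uniform(0, 10, 40))           # training inputs (toy feature)
y = np.sin(X) + 0.1 * rng.normal(size=40)     # noisy reference measurements

Xs = np.linspace(-2, 12, 200)                 # test grid, includes extrapolation
K = rbf(X, X) + 0.1**2 * np.eye(len(X))       # kernel + observation noise
Ks = rbf(Xs, X)
mu = Ks @ np.linalg.solve(K, y)               # posterior mean
v = np.linalg.solve(K, Ks.T)
var = 1.0 - np.sum(Ks * v.T, axis=1)          # posterior variance (prior var = 1)
std = np.sqrt(np.maximum(var, 0.0))

# Mask predictions whose 95% half-width exceeds a reliability threshold.
half_width = 1.96 * std
keep = half_width < 0.5
print(round(keep.mean(), 2))
```

Points far from any training data get near-prior variance and are masked, which is the mechanism by which confidence-interval thresholding trades map coverage for reliability.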

19 pages, 2281 KB  
Article
Melt-Pool Dynamics Quantification in LPBF via Move Contrast X-Ray Imaging
by Zenghao Song, Chengcong Ma, Yuelu Chen, Ke Li, Feixiang Wang and Tiqiao Xiao
Metals 2026, 16(5), 487; https://doi.org/10.3390/met16050487 - 30 Apr 2026
Abstract
The dynamic behavior within the melt pool governs the final quality of components fabricated by laser powder bed fusion (LPBF). To address key technical challenges—rapid keyhole evolution, low absorption contrast from metal vapor, and difficulties in quantifying internal flow fields—this study introduces move contrast X-ray imaging (MCXI), a technique leveraging time-series frequency characteristics. Combined with a multi-scale Horn–Schunck global optical flow method, MCXI enables full-field quantitative extraction of the melt-pool velocity field. Experimental validation across feature points shows a relative deviation of less than 2% compared to independent manual feature-point tracking, confirming consistency with the best available experimental ground truth. Analysis reveals the keyhole tail evolution cycle comprises three distinct dynamic stages: expansion, stratification, and contraction, with its area increasing from 1329 μm² to 6508 μm² before stabilizing. For the first time, pore pinch-off events were quantitatively measured, revealing front and rear wall collision velocities of 7.98 m/s and 8.04 m/s, respectively, consistent with available high-fidelity simulations. Furthermore, analysis of the overall melt-pool momentum field demonstrates a near-equal distribution of positive and negative momentum, providing an internal self-consistency check confirming the absence of systematic directional bias in the extracted velocity field. This study enables quantitative analysis of LPBF melt-pool dynamics, providing a novel tool for process optimization and defect control. Full article

34 pages, 36077 KB  
Article
Modular Multi-Attribute Vehicle Analysis by Color, License Plate, Make and Sub-Model Using YOLO and OCR: A Benchmark Across YOLO Versions
by Cristian Japhet Islas-Yañez, Viridiana Hernández-Herrera and Moisés Márquez-Olivera
Sensors 2026, 26(9), 2785; https://doi.org/10.3390/s26092785 - 29 Apr 2026
Abstract
We present a modular multi-attribute vehicle analysis pipeline that integrates YOLO-based models and an OCR engine into a single workflow. The system detects vehicles, classifies color, recognizes make and sub-model, detects license plates, and extracts plate characters to generate a structured vehicle record. Vehicle detection is reported with standard metrics (precision, recall, and mAP@0.5), while license plate detection is reported at IoU = 0.3 to reflect the small-object nature of plates and downstream OCR usability. Among the evaluated versions, YOLOv8 provides the most balanced overall performance across modules, while maintaining real-time-equivalent throughput of approximately 18–22 FPS for the full pipeline on recorded traffic videos, depending on scene complexity. We emphasize module-level evaluation and runtime benchmarking; instance-level end-to-end identification across unique vehicles is defined as future work once track-based ground truth becomes available. Full article
(This article belongs to the Topic Deep Visual Recognition: Methods, and Applications)
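The relaxed IoU = 0.3 acceptance criterion for plate boxes is easy to make concrete. Below is a minimal intersection-over-union implementation with hypothetical boxes (not the paper's data): a detection shifted by a few pixels still counts as a match for downstream OCR:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two (x1, y1, x2, y2) axis-aligned boxes."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

gt = (100, 100, 160, 130)    # hypothetical ground-truth plate box (60x30 px)
det = (110, 105, 170, 135)   # detection offset by (10, 5) pixels
score = iou(gt, det)
print(round(score, 3), score >= 0.3)  # passes at IoU >= 0.3
```

For small objects like plates, a few pixels of localization error cost a large IoU fraction, which is why a looser threshold better reflects whether the crop remains usable for OCR.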

23 pages, 650 KB  
Article
The “Snapping Point”: Mental Health as a Credibility Technology in Portuguese News on Sexual Violence (2014–2023)
by Rita Alcaire
Soc. Sci. 2026, 15(5), 287; https://doi.org/10.3390/socsci15050287 - 29 Apr 2026
Abstract
This article examines how mental health discourse functions as a credibility technology in Portuguese news reporting on sexual violence between 2014 and 2023. Using Critical Thematic Analysis and grounded in feminist media studies and critical mental health scholarship, the article analyses a qualitative corpus of reporting-oriented news items published in Público and Observador. The dataset consists of systematically selected articles in which mental health discourse functions as a substantive explanatory frame for sexual violence. Psychiatric, psychological, therapeutic, and metaphorical registers grant, withhold, or condition believability, allocating responsibility and organising care through norms of stability, risk, and expert verification. The analysis identified eight recurring discursive clusters through which mental health language stabilises truth claims: it can legitimise institutional authority, regulate survivors’ credibility, and explain perpetration through pathologising tropes, while often displacing structural accounts of gendered violence and reproducing ableist stigma. By specifying the credibility work performed by mental health discourse, the article contributes to debates on trauma-informed, survivor-centred, and anti-ableist reporting and proposes a transferable framework for analysing the sexual violence–mental health nexus in journalism. Full article
25 pages, 42045 KB  
Article
Automated Landslide Identification from Time-Series InSAR Using Improved Hot Spot Analysis
by Xiaoxiao Yang, Jinmin Zhang, Wu Zhu, Quan Sun and Jing Li
Sensors 2026, 26(9), 2771; https://doi.org/10.3390/s26092771 - 29 Apr 2026
Abstract
To address the key limitations of traditional automated landslide detection methods—namely their reliance on large training datasets, insufficient detection accuracy, and high false positive rates—this study proposes an InSAR-based automated landslide detection approach integrating multi-weight factor coupling, referred to as an Improved Hot Spot Analysis (IHSA) method. Built upon InSAR-derived surface deformation data, the proposed method optimizes the hotspot detection model through a spatial weighting matrix that incorporates multi-feature fusion. Morphological processing is further applied to refine landslide boundaries. Validation against manually interpreted ground truth data demonstrates that the proposed method achieves a precision of 90.20%, representing an improvement of 53.61 percentage points over the conventional hotspot analysis method, while maintaining a stable recall rate of 92.00%. The extracted landslide boundaries exhibit high consistency with manual interpretation results, effectively overcoming common issues in traditional approaches such as fragmented outputs and internal voids. This study provides an efficient, training-free solution for large-scale early identification of potential landslides, offering critical methodological support and data foundations for regional landslide detection and hazard mitigation. Full article
(This article belongs to the Topic Advanced Risk Assessment in Geotechnical Engineering)
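Hot spot analysis of this kind typically builds on the Getis-Ord Gi* statistic. The sketch below implements the standard Gi* with simple binary distance-band weights on a synthetic 1-D deformation profile; the study's IHSA method adds a multi-feature spatial weighting matrix on 2-D InSAR data, which is not reproduced here:

```python
import numpy as np

def gi_star(values, coords, radius):
    """Getis-Ord Gi* z-scores with binary distance-band weights (self included)."""
    x = np.asarray(values, float)
    n = len(x)
    xbar = x.mean()
    s = np.sqrt((x**2).mean() - xbar**2)
    d = np.abs(coords[:, None] - coords[None, :])
    w = (d <= radius).astype(float)          # includes w_ii = 1 (the "star")
    wsum = w.sum(axis=1)
    num = w @ x - xbar * wsum
    den = s * np.sqrt((n * (w**2).sum(axis=1) - wsum**2) / (n - 1))
    return num / den

rng = np.random.default_rng(2)
coords = np.arange(100.0)                    # pixel positions along a profile
rates = rng.normal(0.0, 1.0, 100)            # background deformation (mm/yr)
rates[40:50] -= 8.0                          # a localized subsidence cluster
z = gi_star(rates, coords, radius=3.0)
hot = np.abs(z) > 2.58                       # ~99% confidence hot/cold spots
print(round(hot[40:50].mean(), 2), round(hot[:30].mean(), 2))
```

The planted cluster produces strongly negative z-scores (a statistically significant cold spot of subsidence), while isolated noisy pixels do not, which is the property that suppresses fragmented false positives.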
21 pages, 41291 KB  
Article
Unraveling the Spectral–Spatial Mechanisms of Mineral Identification: A Case Study on CASI Data Using SpectralFormer and Traditional Classifiers
by Huilin Yang, Kai Qin, Yuxi Hao, Ming Li, Ling Zhu, Yuechao Yang and Yingjun Zhao
Remote Sens. 2026, 18(9), 1365; https://doi.org/10.3390/rs18091365 - 29 Apr 2026
Abstract
Traditional diagnostic spectroscopy provides a physically interpretable basis for mineral identification. However, how modern classifiers balance spectral and spatial information remains insufficiently understood. This study investigates this issue using CASI airborne hyperspectral data from the Liuyuan area, China. A geologically constrained ground-truth dataset was constructed based on expert knowledge and a semi-automatic Spectral Hourglass workflow. We evaluated representative shallow machine learning methods and deep learning models, including a three-dimensional convolutional neural network (3D-CNN), Vision Transformer (ViT), and SpectralFormer. The Support Vector Machine (SVM) achieved the highest overall accuracy but showed a strong bias toward dominant background classes and failed to reliably detect rare minerals such as jarosite. Deep learning models improved class balance by incorporating broader spectral features. However, excessive spatial aggregation reduced their sensitivity to small and fragmented alteration zones. SpectralFormer models hyperspectral data as ordered spectral sequences and showed more stable performance for spectrally similar and rare minerals. Multi-scale experiments reveal a spectral-dominant discrimination mechanism. Increasing the spectral receptive field improves classification up to an optimal level. In contrast, overly large spatial patches introduce background interference and obscure diagnostic absorption features. These findings highlight the fundamental role of spectral continuity in airborne hyperspectral alteration mineral mapping and clarify the trade-offs involved in integrating spatial context. Full article
(This article belongs to the Special Issue Advanced Hyperspectral Imaging and AI for Geological Applications)

23 pages, 827 KB  
Article
A Comparative Study of Unsupervised Machine Learning and Deep Learning Techniques for Anomaly Detection in Recommender Systems
by Rodolfo Bojorque, Remigio Hurtado, Miguel Arcos-Argudo and Mauricio Ortiz
Information 2026, 17(5), 426; https://doi.org/10.3390/info17050426 - 29 Apr 2026
Abstract
Recommender systems are increasingly exposed to anomalous user behavior that can distort recommendation outcomes and compromise system reliability. In real-world settings, explicit labels identifying malicious activity are rarely available, motivating the adoption of unsupervised detection approaches. This study presents a systematic comparative analysis of classical machine learning and deep learning techniques for anomaly detection in recommender systems. Using the MovieLens 1M dataset, we construct a user-level behavioral representation based on statistical, temporal, and interaction-based features derived from explicit rating data. Three unsupervised detection models are evaluated: Isolation Forest, One-Class Support Vector Machine, and an autoencoder-based neural network. To address the absence of ground-truth labels, evaluation is conducted using a comprehensive label-free protocol, including score distribution analysis, percentile-based thresholding, ranking stability, and inter-model agreement. In addition, controlled experiments with synthetic attack profiles are conducted to assess detection performance under different adversarial strategies. Results indicate that individual models capture complementary aspects of anomalous behavior, exhibiting low to moderate agreement. An ensemble scoring strategy improves ranking stability and provides a consistent mechanism for identifying highly deviant user profiles. The findings suggest that ensemble-based unsupervised detection constitutes a practical and interpretable first-layer screening approach for recommender system monitoring under label-scarce conditions. Full article
(This article belongs to the Section Artificial Intelligence)
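The ensemble scoring idea, combining detectors whose raw scores live on incomparable scales, can be sketched with rank normalization. The three score vectors below are synthetic stand-ins for Isolation Forest, One-Class SVM, and autoencoder reconstruction-error outputs, with 20 planted deviant profiles:

```python
import numpy as np

rng = np.random.default_rng(3)
n_users = 1000
# Hypothetical anomaly scores from three detectors (higher = more anomalous),
# deliberately on different scales.
s1 = rng.normal(0, 1, n_users)
s2 = rng.exponential(1, n_users)
s3 = rng.uniform(0, 100, n_users)
for s in (s1, s2, s3):
    s[:20] += s.std() * 5                    # plant 20 jointly deviant profiles

def to_ranks(scores):
    """Map scores to [0, 1] by rank so detectors become comparable."""
    return scores.argsort().argsort() / (len(scores) - 1)

ensemble = np.mean([to_ranks(s) for s in (s1, s2, s3)], axis=0)
threshold = np.percentile(ensemble, 98)      # flag the top 2% of users
flagged = np.where(ensemble >= threshold)[0]
print(len(flagged), round(np.isin(np.arange(20), flagged).mean(), 2))
```

A user who is extreme under only one detector gets a modest ensemble score, while profiles that all detectors rank highly dominate the flagged set, which is the stability property the abstract attributes to ensemble scoring.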

32 pages, 7017 KB  
Article
Individual Tree Species Classification in a Mining Area of the Yellow River Basin Using UAV-Based LiDAR, Hyperspectral, and RGB Data
by Guo Wang, Sheng Nie, Xiaohuan Xi, Cheng Wang and Hongtao Wang
Remote Sens. 2026, 18(9), 1361; https://doi.org/10.3390/rs18091361 - 28 Apr 2026
Abstract
The Yellow River Basin contains abundant coal resources; however, its ecological environment is inherently fragile, and vegetation degradation has been further intensified by extensive mining activities. Accurate classification of individual tree species in mining-affected areas is therefore essential for assessing ecological conditions and establishing a scientific foundation for targeted restoration and sustainable management. To address this need, a machine learning framework was developed and evaluated for individual tree species classification in a coal mining area of the Yellow River Basin using integrated unmanned aerial vehicle (UAV) data. A comprehensive feature set was constructed by extracting 278 attributes per tree. These attributes included 224 spectral bands and 29 hyperspectral indices derived from hyperspectral imagery, 24 textural metrics obtained from RGB orthophotos, and one canopy height feature generated from a LiDAR-derived model. Based on ground-truth data from 1095 individual trees, seven machine learning algorithms were trained and systematically compared: Random Forest (RF), Support Vector Machine (SVM), K-Nearest Neighbors (KNN), Decision Tree (DT), Gradient Boosting (GB), Logistic Regression (LR), and XGBoost. Statistical significance testing using 5 × 5 repeated cross-validation, together with the Friedman test and post hoc Nemenyi test, and additional model stability analysis consistently identified XGBoost as the optimal classifier. On an independent test set, XGBoost achieved high accuracy (Overall Accuracy = 0.897, Kappa = 0.811) with an efficient training time of 2.36 s. Further analysis demonstrated the critical and complementary roles of hyperspectral and structural features in species discrimination. The optimized model was subsequently applied to generate a detailed wall-to-wall tree species map across the entire mining area. Overall, this study presents a statistically informed comparison of classifiers for multi-source feature-based species discrimination and delivers a practical, systematically evaluated pipeline for effective vegetation monitoring. The proposed framework provides a scientific tool for assessing and managing ecological recovery in complex mining environments, particularly within ecologically sensitive regions such as the Yellow River Basin. Full article
(This article belongs to the Special Issue Remote Sensing and Smart Forestry (Third Edition))
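The Friedman test used for classifier comparison ranks the models within each cross-validation fold and asks whether mean ranks differ more than chance allows. A from-scratch sketch on synthetic accuracies follows; the study compares seven real classifiers over 5 × 5 repeated CV, whereas every number below is fabricated for illustration:

```python
import numpy as np

def friedman_statistic(scores):
    """Friedman chi-square over an (n_folds, n_models) score matrix."""
    n, k = scores.shape
    # Rank models within each fold, 1..k (no ties with continuous scores).
    ranks = scores.argsort(axis=1).argsort(axis=1) + 1.0
    mean_ranks = ranks.mean(axis=0)
    chi2 = 12.0 * n / (k * (k + 1)) * (np.sum(mean_ranks**2) - k * (k + 1) ** 2 / 4.0)
    return chi2, mean_ranks

rng = np.random.default_rng(4)
# Hypothetical accuracies of 7 classifiers over 25 folds, with the last
# model consistently ~2 accuracy points better than the rest.
acc = rng.normal(0.85, 0.02, size=(25, 7))
acc[:, -1] += 0.02
chi2, mean_ranks = friedman_statistic(acc)
print(round(chi2, 2), int(mean_ranks.argmax()))
```

The statistic is compared against a chi-square distribution with k − 1 degrees of freedom; if it is significant, a post hoc test such as Nemenyi identifies which pairwise rank differences matter.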

15 pages, 852 KB  
Article
Validating Temporal Eye Tracking Metrics as Orthogonal Biomarkers for Aggressive Traits: A Mixed-Effects Analysis
by Omar Alvarado-Cando, Oscar Casanova-Carvajal and José-Javier Serrano-Olmedo
J. Eye Mov. Res. 2026, 19(3), 44; https://doi.org/10.3390/jemr19030044 - 28 Apr 2026
Abstract
Atypical visual attention to aversive or threatening stimuli is a clinically relevant feature of aggressive behavior. However, the developmental dissociation between sustained visual allocation and early orienting remains unclear. This study examined the temporal dynamics of visual attentional biases in a sample of 119 children and adolescents (51 males, 68 females), clinically and behaviorally categorized into aggressive and non-aggressive cohorts. Using a free-viewing paradigm with standardized emotional stimulus pairs selected from the International Affective Picture System (IAPS), eye-tracking analysis focused on first-fixation direction and dwell time. Inferential analyses were conducted using Linear Mixed-Effect Models (LMM) and Generalized Linear Mixed-Effects Models (GLMM). The linear model revealed a significant main effect of behavioral condition: individuals with aggressive traits, regardless of their stage of development, showed greater sustained visual allocation toward negative stimuli. In contrast, the GLMM for first-fixation direction identified a significant age-by-condition interaction, indicating that early orienting differences were more clearly expressed in the aggressive adolescent cohort. These findings suggest that sustained visual preference for negative content may represent a relatively stable correlate of aggressive traits, whereas early orienting differences may vary across developmental stages. Together, these two temporal eye-tracking measures may provide complementary information for future computational approaches to aggression screening. In conclusion, these two temporal oculomotor dimensions may provide a useful feature space for future machine-learning pipelines and may serve as complementary candidate markers for comparing computational predictions against clinically established ground truth in aggression screening research. Full article

26 pages, 6162 KB  
Article
TD-RCRF: A Privacy-Preserving Truth Discovery Resistant to Collusion and Reputation Fraud in Mobile Crowdsensing
by Libo Ban, Lei Wu, Wei Wu and Haipeng Peng
Mathematics 2026, 14(9), 1474; https://doi.org/10.3390/math14091474 - 27 Apr 2026
Abstract
Privacy-preserving truth discovery (PPTD) has garnered significant attention in mobile crowdsensing (MCS). However, existing research lacks sufficient privacy protection and is often vulnerable to collusion attacks among malicious participants. Moreover, incorrect data submitted by unreliable users and their weights may reduce the accuracy of truth discovery. To address these issues, this paper proposes TD-RCRF, a privacy-preserving truth discovery framework that is highly resistant to collusion and reputation fraud. The scheme employs additive secret sharing to protect sensing data, weights, intermediate results, and ground truth. To screen trustworthy users who meet reputation requirements under the non-colluding dual-server model, we propose a privacy-preserving reputation verification algorithm that combines Pedersen commitment and zero-knowledge proof to verify the validity of mobile users’ reputation values. Additionally, we propose a homomorphic strategy that converts shares between multiplication and addition and use it to design a lightweight truth discovery algorithm that further improves the accuracy of the “truth” using reputation values. Security analysis proves that TD-RCRF is privacy-preserving and secure under the non-colluding dual-server assumption. Theoretical analysis and experiments show that it is practical and efficient. Full article
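Additive secret sharing, the primitive this framework builds on, is simple to sketch: a value is split into random shares that only reconstruct modulo a prime, and sums can be computed share-wise. This toy two-server version over a Mersenne-prime field illustrates the additive homomorphism, not the paper's full protocol:

```python
import secrets

P = 2**61 - 1  # a Mersenne prime chosen for the toy field

def share(value, n_shares=2):
    """Split `value` into additive shares summing to value mod P."""
    shares = [secrets.randbelow(P) for _ in range(n_shares - 1)]
    shares.append((value - sum(shares)) % P)
    return shares

def reconstruct(shares):
    return sum(shares) % P

# Two non-colluding servers each hold one share of a sensing reading;
# either share alone is uniformly random and reveals nothing.
reading = 42_195
s1, s2 = share(reading)
assert reconstruct([s1, s2]) == reading

# Additive homomorphism: servers add readings share-wise, so an aggregate
# (e.g. a weighted numerator in truth discovery) is obtained without
# ever reconstructing any individual reading.
r2 = 60_000
t1, t2 = share(r2)
total = reconstruct([(s1 + t1) % P, (s2 + t2) % P])
print(total)
```

Multiplications of shared values need extra machinery (e.g. the share-conversion strategy the abstract mentions, or Beaver triples in other protocols); pure additive sharing only gives sums for free.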

24 pages, 453 KB  
Article
Reason2Decide-C: Adaptive Cycle-Consistent Training for Clinical Rationales
by H M Quamran Hasan, Housam Khalifa Bashier Babiker, Mi-Young Kim and Randy Goebel
Computers 2026, 15(5), 279; https://doi.org/10.3390/computers15050279 - 27 Apr 2026
Abstract
Large Language Models (LLMs) used for clinical decision support must not only make accurate predictions but also generate rationales that are consistent with, and sufficient for, those predictions. Building on Reason2Decide, a two-stage rationale-driven multi-task framework, we propose Reason2Decide-C (R2D-C, where C denotes cycle consistency), which augments Reason2Decide’s stage 2 training with confidence-adaptive scheduled sampling and cycle-consistent rationale-to-label training. In stage 1, we pretrain our model on rationale generation. In stage 2, we jointly train on label prediction and rationale generation, gradually replacing gold labels with model-predicted labels based on confidence. Simultaneously, we feed the rationale logits back into the model to recover the label, thus enforcing explanation sufficiency. We evaluate R2D-C on one proprietary triage dataset, as well as public biomedical QA and reasoning datasets. Across model sizes, R2D-C substantially improves rationale–prediction consistency (where stage 1 and stage 2 predictions agree) and sufficiency (where the rationale alone recovers the ground-truth label) over other baselines while matching or modestly improving predictive performance (F1); in several settings R2D-C surpasses 40× larger foundation models. Ablations confirm that the full combination is optimal, maximizing alignment and LLM-as-a-Judge rationale quality. These results demonstrate that confidence-adaptive scheduled sampling and cycle-consistent rationale-to-label training substantially enhance explanation alignment without sacrificing accuracy. Full article

20 pages, 976 KB  
Article
Decoupling Fairness Perception from Grading Validity in Digitally Mediated Peer Assessment: A Two-Stage fsQCA Study
by Duen-Huang Huang and Yu-Cheng Wang
Information 2026, 17(5), 411; https://doi.org/10.3390/info17050411 - 25 Apr 2026
Abstract
Artificial intelligence (AI) has become increasingly embedded in technology-enhanced learning environments, where peer assessment now serves both instructional and analytic purposes. Beyond allocating feedback and grades, it also produces data that is later interpreted through learning analytics systems. In practice, visible indicators such as students’ fairness perceptions and the degree of agreement among peer raters are often treated as signs that the assessment process is functioning effectively. However, these indicators do not necessarily correspond to grading validity. Students may regard a peer assessment process as fair even when peer-generated ratings remain weakly aligned with expert judgement. This study, therefore, examines whether the socio-technical configurations associated with high perceived fairness in a digitally mediated peer assessment environment also correspond to criterion-referenced grading validity. Data were collected from 215 undergraduate students enrolled in an Artificial Intelligence Foundations course over two consecutive semesters at a university in Taiwan, with instructor ratings serving as an external expert reference within the course context, rather than as a universal ground truth. Because anonymity conditions and semester were fully confounded in the study design, differences linked to anonymity should not be interpreted as isolated causal effects. A two-stage fuzzy-set Qualitative Comparative Analysis (fsQCA) was used. In the first stage, three equifinal configurations associated with high perceived fairness were identified. In the second stage, these configurations were examined against four grading objectivity outcomes: peer–instructor alignment, peer convergence, familiarity bias, and leniency bias. The findings show that fairness perception and grading validity are only partially aligned. 
Configurations anchored in explicit criterion transparency consistently supported both experiential legitimacy and evaluative accuracy. By contrast, one configuration was associated with high peer convergence while remaining weakly aligned with instructor standards, a pattern described here as false objectivity; this context-dependent configurational finding warrants further investigation across other settings. The study contributes to research on digitally enhanced assessment and learning analytics by showing that fairness perception, peer convergence, and grading validity should be treated as analytically distinct dimensions of assessment quality. Full article
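The sufficiency test at the heart of fsQCA can be sketched with the standard set-theoretic consistency measure, where cases have fuzzy memberships in [0, 1] for a condition configuration and an outcome. A minimal sketch with an assumed function name; the example membership scores are illustrative, not data from the study.

```python
def fs_consistency(condition, outcome):
    """Standard fsQCA consistency of 'condition is sufficient for outcome':
    sum over cases of min(X_i, Y_i), divided by the sum of X_i.
    Values near 1 indicate that condition membership rarely exceeds
    outcome membership."""
    numerator = sum(min(x, y) for x, y in zip(condition, outcome))
    denominator = sum(condition)
    return numerator / denominator if denominator else 0.0

# Three hypothetical cases: membership in a configuration vs. in
# "high perceived fairness".
scores = fs_consistency([0.8, 0.6, 0.9], [0.9, 0.7, 0.8])
```

A configuration passing this test for perceived fairness can still score poorly when the outcome is replaced by a validity measure such as peer–instructor alignment, which is the decoupling the study examines.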
(This article belongs to the Special Issue AI Technology-Enhanced Learning and Teaching)
24 pages, 8042 KB  
Article
Ship Target Detection Method Based on Feature Fusion and Bi-Level Routing Attention
by Danfeng Zuo, Liang Qi, Hao Ni, Song Song, Haifeng Li and Xinwen Wang
Symmetry 2026, 18(5), 729; https://doi.org/10.3390/sym18050729 - 24 Apr 2026
Abstract
Ship target detection is a prerequisite for achieving automated monitoring in ship detection systems. To address the challenge of accurately detecting ship targets in complex water environments, this study proposes a ship target detection method based on an improved YOLOv11 framework. To enhance the model’s ability to perceive and fuse features across multiple scales and in complex backgrounds, an Iterative Attention Feature Fusion (iAFF) module and a Biformer module are integrated at the end of the backbone network. The iAFF module iteratively optimizes multi-scale features through a two-stage attention mechanism, effectively focusing on key target regions, thereby improving the model’s detection capability for small, medium-sized, and occluded ships. The Biformer module leverages its innovative Bi-level Routing Attention (BRA) mechanism to enhance the modeling of global semantic information while reducing computational complexity, mitigating false detections caused by occlusions among ship targets, and consequently improving detection precision. This study employs the Minimum Point Distance Intersection over Union (MPDIoU) loss function, which more comprehensively measures the similarity between predicted and ground-truth bounding boxes by optimizing the distances of their key geometric points, effectively enhancing the accuracy of bounding box regression. Experimental results show that the proposed model achieved 93.96% mAP, 92.93% recall, and 94.97% precision on a self-built ship dataset, surpassing mainstream detection algorithms including YOLOv11 in multiple metrics. The model has only 2.90 M parameters, achieving a good balance between accuracy and efficiency. This provides an accurate and efficient solution for intelligent ship supervision. Full article
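The MPDIoU idea described above can be sketched as standard IoU penalized by the normalized squared distances between the top-left and bottom-right corners of the predicted and ground-truth boxes. A minimal Python sketch following the published MPDIoU formulation; variable names are illustrative and this is not the authors' implementation.

```python
def mpdiou(box_pred, box_gt, img_w, img_h):
    """MPDIoU for axis-aligned boxes given as (x1, y1, x2, y2):
    IoU minus the squared distances between corresponding corners,
    normalized by the squared image diagonal."""
    px1, py1, px2, py2 = box_pred
    gx1, gy1, gx2, gy2 = box_gt
    # Intersection rectangle and IoU
    iw = max(0.0, min(px2, gx2) - max(px1, gx1))
    ih = max(0.0, min(py2, gy2) - max(py1, gy1))
    inter = iw * ih
    union = (px2 - px1) * (py2 - py1) + (gx2 - gx1) * (gy2 - gy1) - inter
    iou = inter / union
    # Squared distances of the top-left (d1) and bottom-right (d2) corners
    d1 = (px1 - gx1) ** 2 + (py1 - gy1) ** 2
    d2 = (px2 - gx2) ** 2 + (py2 - gy2) ** 2
    norm = img_w ** 2 + img_h ** 2
    return iou - d1 / norm - d2 / norm

def mpdiou_loss(box_pred, box_gt, img_w, img_h):
    """Regression loss: perfectly aligned boxes give loss 0."""
    return 1.0 - mpdiou(box_pred, box_gt, img_w, img_h)
```

Because the corner-distance terms stay informative even when two boxes share the same IoU, the loss keeps a useful gradient for offset boxes of similar shape.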
(This article belongs to the Section Computer)