Search Results (7,906)

Search Parameters:
Keywords = biases

17 pages, 2465 KB  
Review
Interstitial Terrestrialization in Arthropoda
by Samuel J. Bolton
Diversity 2026, 18(5), 250; https://doi.org/10.3390/d18050250 - 23 Apr 2026
Abstract
It has long been hypothesized that some arthropod lineages transitioned to land by following an interstitial pathway through the spaces between sand grains. In recent years, various molecular phylogenetic analyses suggest a greater number of terrestrialization events within Arthropoda than previously hypothesized. The relative importance of an interstitial route to land is likely to have been underestimated because of biases in the fossil record and the choice of techniques used for collecting extant arthropods from sands and other types of mineral regolith (sediment with low organic content). A number of early-branching taxa are microarthropods that are common in mineral regolith, providing phyloecological evidence for an interstitial pathway onto land. Following interstitial terrestrialization, hexapods and early-branching arachnids may have remained minute and soft-bodied within mineral regolith until the Early Devonian, when organically rich soils developed on much of the land surface, resulting in increased food resources but also increased rates of predation. This led to defensive modifications and increases in surface abundance and body size, which would have all elevated the probability of fossilization. Full article
(This article belongs to the Section Phylogeny and Evolution)
29 pages, 704 KB  
Systematic Review
Reassessing Minimum Wage Impacts: What the Spanish Case Contributes to International Evidence
by Manuela Adelaida de Paz-Báñez, Celia Sánchez-López and María José Asensio-Coto
Sustainability 2026, 18(9), 4206; https://doi.org/10.3390/su18094206 - 23 Apr 2026
Abstract
Minimum wage policies have become a central instrument for promoting social and economic sustainability by ensuring sufficient income to cover basic needs and reduce inequalities. They align with recent predistribution approaches in the literature and with Target 10.4 of the United Nations 2030 Agenda. In the European context, these policies are explicitly embedded within the sustainable development and just transition agenda, where the European Union emphasises that securing fair wages is a necessary condition for inclusive, balanced and equality-enhancing growth. At the same time, the methodological debate has evolved from early time-series-based approaches to a new generation of quasi-experimental studies, which provide more rigorous and less biased evidence. Within this framework, Spain represents a relevant case due to the scale and persistence of its minimum wage reforms since 2019, yet the Spanish case has lacked a systematic synthesis comparable to those available for other advanced economies (e.g., Germany, the UK, the USA). This article offers the first systematic synthesis of empirical evidence on the effects of the minimum wage in Spain from the 1990s to 2025, following the PRISMA 2020 methodology. This process yielded a large number of articles, from which an initial selection of 249 was made. Following the full screening and eligibility assessment, 34 articles were retained. The results allow for an analysis of the current state of research on the effects of the minimum wage across multiple dimensions, especially on employment and inequality. Other aspects, such as productivity, prices, other business adjustments, administrative obstacles, and public finances, are still poorly addressed in the available literature. In any case, this synthesis is a valuable exercise in clarifying the relationship between minimum wage policies and the transformation of labour markets. Full article
(This article belongs to the Special Issue Innovation in Circular Economy and Sustainable Development)
25 pages, 4331 KB  
Article
Comparative Study of Satellite Clock Bias Prediction Models Based on Genetic Algorithm and Mind Evolutionary Algorithm-Optimized BP Neural Networks
by Hongwei Bai, Chao Liu, Yifei Shen and Zhongchen Guo
Appl. Sci. 2026, 16(9), 4130; https://doi.org/10.3390/app16094130 - 23 Apr 2026
Abstract
Satellite clock bias (SCB) is a critical error source affecting the positioning and timing accuracy of Global Navigation Satellite Systems (GNSSs). The conventional back propagation neural network (BP) model, when applied to SCB prediction, is prone to local optima and exhibits rapid error divergence. To address these limitations, this study proposes and investigates two enhanced BP models: one optimized by the genetic algorithm (GA) and another by the mind evolutionary algorithm (MEA). A comprehensive comparative analysis is conducted against the standard BP model. Experiments utilize precise clock products from the International GNSS Service (IGS), with data from six representative satellites featuring different atomic clock types (IIR, IIR-M, IIF rubidium, and cesium clocks). The models are trained on 24 h of historical data and evaluated by forecasting clock biases for 2, 6, 12, and 24 h ahead. Prediction accuracy is assessed using root mean square error (RMS), range, and mean error. The results demonstrate that optimization algorithms significantly improve the BP neural network’s performance. The genetic algorithm optimized back propagation neural network (GABP) model demonstrates comprehensive superiority, achieving the highest accuracy across all forecast horizons and satellite types. For instance, in 24 h predictions, the average RMS error of the GABP model (6.516 ns) is merely 10.9% of the standard BP model’s error. Notably, for the cesium clock on satellite G24, the GABP model’s 24 h RMS (1.600 ns) is approximately 23 times lower than that of the mind evolutionary algorithm optimized back propagation neural network (MEABP) model. The GABP model also shows strong adaptability, maintaining high precision for both rubidium and cesium clocks and exhibiting gradual error growth with extended forecast duration, indicating excellent generalization and resistance to overfitting. 
To further evaluate generalization across different seasons and time periods, additional experiments were conducted using data from February–March, June, and October 2021 on six different satellites. The results consistently show that GABP outperforms MEABP and BP across all tested conditions. While the MEABP model outperforms the standard BP, it shows limitations in long-term forecasts, particularly for cesium clocks, due to tendencies for premature convergence and sensitivity to data noise. In conclusion, the GABP model, leveraging the robust global optimization capability of the genetic algorithm, is validated as a highly effective and reliable solution for high-accuracy short- and long-term satellite clock bias prediction. Full article
(This article belongs to the Section Earth Sciences)
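As an editorial illustration: the abstract above assesses clock-bias predictions with RMS error, range, and mean error. Under the usual definitions of those three statistics (an assumption, since the paper may define "range" differently), they can be sketched as:

```python
# Sketch of the three error metrics named in the abstract, under common
# definitions: mean error (bias), RMS error, and range (max - min residual).
# Function name and the residual data below are illustrative only.
import math

def clock_bias_errors(predicted_ns, observed_ns):
    """Return (mean_error, rms, error_range) of residuals in nanoseconds."""
    residuals = [p - o for p, o in zip(predicted_ns, observed_ns)]
    mean_error = sum(residuals) / len(residuals)
    rms = math.sqrt(sum(r * r for r in residuals) / len(residuals))
    error_range = max(residuals) - min(residuals)
    return mean_error, rms, error_range

# Example with made-up residuals for a short prediction window:
pred = [1.0, 1.2, 0.9, 1.1]
obs = [0.8, 1.0, 1.0, 1.0]
me, rms, rng = clock_bias_errors(pred, obs)
```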
22 pages, 6139 KB  
Article
Global Observability Analysis of Rotational MEMS Inertial Navigation System for Land Vehicle Applications
by Wenhui Yang, Xin Zhao, Jinhao Song and Yong Li
Sensors 2026, 26(9), 2599; https://doi.org/10.3390/s26092599 - 23 Apr 2026
Abstract
In GNSS-denied environments, Micro-Electromechanical Systems–Inertial Navigation Systems (MEMS-INS) play a critical role in sustaining the autonomous operation of vehicles. However, the inherent accuracy constraints of MEMS inertial sensors often hinder their broader application. To address this limitation, this paper proposes a novel integrated navigation system based on a rotating MEMS architecture. Through a global observability analysis, the proposed method enables the effective separation of sensor biases, mitigates long-term error accumulation, and significantly enhances the estimation accuracy of both attitude and velocity. Simulation results indicate that a rotation scheme around the X-axis allows for the reliable estimation of sensor biases. Moreover, road test results reveal that when the vehicle experiences variations in angular velocity, the rotating configuration consistently outperforms its fixed counterpart, reducing heading, position, and velocity errors by 75.3%, 62.9%, and 76.0%, respectively. Full article
(This article belongs to the Section Navigation and Positioning)
28 pages, 2417 KB  
Systematic Review
Comparative Performance and Species-Specific Recovery Biases of Culture-Based Methods for Campylobacter Detection in Food Products: A Systematic Review and Meta-Analysis
by Chatruthai Meethai, Preeda Phothawon, Janet Yakubu Nale and Sueptrakool Wisessombat
Vet. Sci. 2026, 13(5), 415; https://doi.org/10.3390/vetsci13050415 - 23 Apr 2026
Abstract
Campylobacter is the primary bacterial cause of foodborne gastroenteritis globally. While international standards recommend a tiered approach for detection, emerging evidence suggests that selective protocols may introduce species-specific recovery biases. This systematic review and meta-analysis evaluated the diagnostic performance of established culture-based detection protocols across diverse food matrices. Following PRISMA 2020 guidelines, we searched multiple databases for studies reporting 2 × 2 diagnostic accuracy data through October 2024. Ten studies comprising 43 method comparisons and 4599 samples met the inclusion criteria. The overall pooled sensitivity was 95.8% (95% CI: 93.6–97.4%), and the specificity was 90.2% (95% CI: 86.8–92.9%). Even with a limited number of comparisons (n = 2), direct culture demonstrated high diagnostic sensitivity (99.1%) and significantly faster turnaround times. Crucially, selective enrichment exhibited a profound species-specific bias: C. jejuni showed 59.4 percent lower recovery than C. coli in Bolton broth, likely due to differential polymyxin B susceptibility. These findings highlight the importance of context-dependent method selection within the ISO 10272-1:2017 framework, suggesting that direct culture (Procedure C) should be prioritized for high-contamination matrices to ensure unbiased recovery of C. jejuni. Large-scale multicenter validation is warranted to confirm these exploratory findings for direct culture. Full article
(This article belongs to the Section Veterinary Microbiology, Parasitology and Immunology)
14 pages, 685 KB  
Article
Comparative Effectiveness of Bevacizumab and Olaparib Maintenance Therapy in Platinum-Sensitive Recurrent Ovarian Cancer: A Real-World Study with Exploratory Evaluation of Dose Reduction
by Shunsuke Tatsuki, Tadahiro Shoji, Ami Jo, Nanako Jonai, Yohei Chiba, Sho Sato, Eriko Takatori, Yoshitaka Kaido, Takayuki Nagasawa, Masahiro Kagabu, Takeshi Aida, Fumiharu Miura and Tsukasa Baba
Cancers 2026, 18(9), 1332; https://doi.org/10.3390/cancers18091332 - 22 Apr 2026
Abstract
Objective: To compare real-world PFS between BEV and OLA as maintenance therapy for PSROC, with an exploratory evaluation of clinical outcomes after OLA dose reduction. Methods: This retrospective multicenter study included 101 patients with first platinum-sensitive recurrent ovarian, fallopian tube, or primary peritoneal cancer who achieved a response to platinum-based chemotherapy and then received maintenance therapy. Patients were classified into three groups: BEV (n = 34), standard-dose OLA (n = 31), and dose-reduced OLA (n = 36). The primary endpoint was PFS; secondary endpoints were OS and adverse events. Survival outcomes were evaluated using Kaplan–Meier methods and Cox proportional hazards models. Results: In the primary comparison of all OLA-treated patients versus BEV, OLA was associated with longer PFS (HR 0.48, 95% CI 0.29–0.77), with median PFS of 19 months versus 16 months, respectively. OS did not differ significantly between groups (HR 0.60, 95% CI 0.34–1.05). In exploratory subgroup analyses, patients who underwent OLA dose reduction had numerically longer PFS than those who remained on the full dose; however, this comparison is vulnerable to time-dependent and selection biases and should be interpreted cautiously. Grade ≥ 3 hematologic toxicities were more frequent in the OLA groups but were clinically manageable. Conclusions: In real-world practice, OLA was associated with longer PFS than BEV in PSROC. Clinically necessary dose reduction appeared feasible without an obvious loss of benefit, although this finding requires cautious interpretation. Full article
(This article belongs to the Special Issue Novel Drugs for Treating Gynecologic Cancers: 2nd Edition)
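As an editorial illustration: the abstract above estimates survival outcomes with Kaplan–Meier methods, which multiply conditional survival probabilities at each event time. A minimal product-limit sketch with hypothetical follow-up data (not the study's analysis code):

```python
# Minimal Kaplan-Meier product-limit estimator (generic sketch). Each
# subject contributes (time, event); event=True marks progression/death,
# event=False marks censoring. S(t) = prod over event times of (1 - d_i/n_i).
def kaplan_meier(subjects):
    """Return [(t, S(t))] evaluated at each distinct event time."""
    event_times = sorted({t for t, e in subjects if e})
    curve, surv = [], 1.0
    for t in event_times:
        at_risk = sum(1 for ti, _ in subjects if ti >= t)       # n_i
        deaths = sum(1 for ti, e in subjects if ti == t and e)  # d_i
        surv *= 1.0 - deaths / at_risk
        curve.append((t, surv))
    return curve

# Hypothetical follow-up times in months (event=False means censored):
data = [(6, True), (10, True), (12, False), (19, True), (24, False)]
km = kaplan_meier(data)
```

In practice a library such as lifelines would also supply confidence intervals; the sketch shows only the estimator itself.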
16 pages, 6386 KB  
Article
Nano-Power OTA-Based Low-Pass Filter for Ultra-Low-Energy Biomedical Signal Processing
by Tomasz Kulej, Montree Kumngern and Fabian Khateb
Sensors 2026, 26(9), 2586; https://doi.org/10.3390/s26092586 - 22 Apr 2026
Abstract
This paper presents a nanowatt-scale operational transconductance amplifier (OTA) and an electronically tunable third-order low-pass filter (LPF) designed for energy-constrained biomedical signal conditioning. The circuits are implemented in a 65 nm CMOS process and verified through comprehensive schematic-level simulations. Biased in the deep subthreshold region at 1 nA, the OTA achieves a 50 dB low-frequency gain, a 225 Hz unity-gain bandwidth at 10 pF load capacitance and an input-referred noise floor of 1.55 μV/√Hz, with a total power consumption of only 1.75 nW. The integrated third-order LPF provides a wide tuning range (37–668 Hz) via bias current modulation, exhibiting excellent linearity with a THD of 0.059% and a 65.3 dB dynamic range. Monte Carlo and PVT corner analyses demonstrate the design’s theoretical robustness against process variations and environmental fluctuations. ECG signal simulations validate the circuit’s effectiveness in suppressing high-frequency artifacts while preserving morphological integrity, providing a proof-of-concept for ultra-low-power wearable healthcare architectures. Full article
(This article belongs to the Section Biomedical Sensors)
42 pages, 4479 KB  
Article
Fractional Diffusion on Graphs: Superposition of Laplacian Semigroups Incorporating Memory
by Nikita Deniskin and Ernesto Estrada
Fractal Fract. 2026, 10(4), 273; https://doi.org/10.3390/fractalfract10040273 - 21 Apr 2026
Abstract
Subdiffusion on graphs is often modeled by time-fractional diffusion equations; yet, its structural and dynamical consequences remain unclear. We show that subdiffusive transport on graphs is a memory-driven process generated by a random time change that compresses operational time, produces long-tailed waiting times, and breaks Markovianity while preserving linearity and mass conservation. While the subordination representation and complete monotonicity properties of the Mittag-Leffler function are classical, we develop a graph-based synthesis in which Mittag-Leffler dynamics admit an exact convex, mass-preserving representation as a superposition of Laplacian semigroups evaluated at rescaled times. This perspective reveals fractional diffusion as ordinary diffusion acting across multiple intrinsic time scales and enables new structural and dynamical interpretations of graphs. This framework uncovers heterogeneous, vertex-dependent memory effects and induces transport biases absent in classical diffusion, including algebraic relaxation, degree-dependent waiting times, and early-time asymmetries between sources and neighbors. These features define a subdiffusive geometry on graphs, enabling the recovery of global shortest paths, in contrast to the graph exploration of diffusive geometry, while simultaneously favoring high-degree regions. Finally, we show that time-fractional diffusion can be interpreted as a singular limit of multi-rate diffusion, in an appropriate asymptotic sense. Full article
(This article belongs to the Special Issue Fractal Analysis and Data-Driven Complex Systems)
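As an editorial illustration: the convex superposition described in the abstract above can be written, for a graph Laplacian L and Mainardi's Wright-type density M_α, via the standard scalar subordination identity applied to each Laplacian eigenvalue. This is a generic sketch of that classical representation, not the paper's notation:

```latex
% Time-fractional diffusion as a convex, mass-preserving superposition of
% ordinary heat semigroups e^{-sL} evaluated at rescaled times r t^\alpha:
u(t) \;=\; \int_0^\infty M_\alpha(r)\, e^{-r t^\alpha L}\, u_0 \,\mathrm{d}r,
\qquad
\int_0^\infty M_\alpha(r)\,\mathrm{d}r = 1, \quad M_\alpha \ge 0.
```

Mass conservation follows because each semigroup e^{-sL} conserves mass and the weights M_α form a probability density; this is the sense in which fractional diffusion acts as ordinary diffusion across multiple intrinsic time scales.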
20 pages, 5741 KB  
Article
Objective Classification of Convective Precipitation in Chengdu Terminal Area Using a Self-Organizing Map and Its Impacts on Terminal Area Operations
by Haotian Li, Haoya Liu, Lian Duan, Ran Li, Yecheng Zhang and Xiaowei Hu
Atmosphere 2026, 17(4), 421; https://doi.org/10.3390/atmos17040421 - 21 Apr 2026
Abstract
Based on hourly reanalysis data during 2010–2020, the Self-Organizing Map method is used to objectively classify convective precipitation events in the Chengdu terminal area. Combined with circulation background characteristics, the results are further grouped into three typical synoptic types. Among these three types, Type 1, characterized by a pattern with strong high pressure and abundant water vapor, yields the most intense precipitation. Type 2, a pattern with moderately strong high pressure and water vapor convergence, produces the second-highest precipitation. Type 3, associated with a low trough and weak water vapor conditions, has the weakest precipitation. Two indicators, the Weather Severity Index (WSI) and the Node Coverage Index (NCI), respectively describing the coverage extent of heavy precipitation over the terminal area and over key arrival and departure nodes, are established and calculated based on heavy precipitation samples. The results show that Type 1 exhibits the highest WSI and NCI values, indicating the greatest potential impact. Type 2 displays a lower WSI than Type 1 but retains a relatively higher NCI, suggesting a more directionally biased impact, whereas Type 3 records the lowest values for both indicators, indicating a relatively weak impact. The integration of synoptic weather classification and spatial impact indicators offers a reference for weather-impact identification and scenario-based operational assessment in terminal areas. However, some limitations remain in the current study. The weather classification is primarily based on reanalysis data, and the correspondence between the WSI/NCI and actual airport operational constraints requires further validation. Full article
(This article belongs to the Special Issue Meteorological Extreme in China)
42 pages, 10596 KB  
Systematic Review
Measurement and Modeling of Sustainable Food Choice and Purchasing Behavior: A Systematic Review of Methods and Models
by Tiago Negrão Andrade and Helena Maria André Bolini
Foods 2026, 15(8), 1442; https://doi.org/10.3390/foods15081442 - 21 Apr 2026
Abstract
Despite decades of methodological sophistication, research on sustainable food behavior remains critically limited in predicting actual purchases. This study aims to examine how methodological fragmentation across psychometric, econometric, and behavioral approaches affects the predictive validity of sustainable food choice and purchasing behavior. This integrative systematic review of 62 empirical studies across psychometric validation, discrete choice experiments (DCEs), trust and cognitive biases, and objective behavioral measurement diagnoses the structural disarticulation between these traditions as the primary cause of limited predictive validity. Findings reveal a pronounced inversion of the evidence hierarchy: while self-report studies report moderate attitude–behavior correlations (β ≈ 0.40–0.50, self-report), the only long-term study using objective scanner data demonstrates that this relationship collapses to a virtually null effect (β = 0.022), representing a 95.6% decay in predictive capacity. Psychometric instruments demonstrate strong structural validity but lack ecological validation against actual purchases. DCEs have evolved econometrically (from MNL to GMNL models), yet remain isolated from psychological theory and real-world validation. Critically, no reviewed study integrated validated scales, a DCE, and objective behavioral data within a single design. Key moderators—skepticism, halo effects, and affective heuristics—are systematically underoperationalized. To overcome this impasse, we propose Hybrid Choice Models (HCM) as the central tool to formally articulate latent attitudes, stated preferences, and observed behavior, enabling cumulative evidence to inform policy and market strategies with greater predictive accuracy. These findings indicate that predictive advances depend on integrating measurement paradigms to achieve ecologically valid and policy-relevant models of sustainable consumer behavior. Full article
28 pages, 2835 KB  
Review
Unlocking Microbial Dark Matter: A Comprehensive Review of Isolation Technologies from Traditional Culturing to Single-Cell Technologies
by Xi Sun, Xiaoxuan Zhang and Jia Zhang
Microorganisms 2026, 14(4), 933; https://doi.org/10.3390/microorganisms14040933 - 21 Apr 2026
Abstract
Microorganisms represent the Earth’s most abundant biomass and a vast reservoir of genetic diversity. However, traditional agar plate methods fail to recover the vast majority of these species, leaving a “microbial dark matter” that holds immense potential for the discovery of novel antibiotics and bioactive compounds. While conventional techniques such as selective media and enrichment culture remain foundational, they are inherently limited by community biases and the inability to support low-abundance, oligotrophic species. To address these bottlenecks, a diverse array of innovative isolation strategies has emerged. This review systematically categorizes and evaluates these methodologies, ranging from in situ cultivation to high-resolution single-cell manipulation. We first examine membrane diffusion-based cultivation (e.g., iChip), which mimics natural microenvironments to resuscitate recalcitrant microbes. Subsequently, we explore high-throughput single-cell technologies, including microfluidics for physicochemical separation, optical tweezers for precise manipulation, and fluorescence-activated cell sorting (FACS). Special attention is given to Raman-activated cell sorting (RACS) as a label-free functional screening tool and reverse genomics for targeted capture. By synthesizing the strengths and limitations of these approaches, we propose integrated workflows designed to accelerate the mining of untapped microbial resources. Full article
(This article belongs to the Section Microbial Biotechnology)
27 pages, 2500 KB  
Article
Injury Severity Prediction for Older Driver Accidents via Denoised Cascade Framework and Probability Calibration
by Yiyong Pan, Xilai Jia, Jieru Huang, Gen Li and Pengyu Xu
World Electr. Veh. J. 2026, 17(4), 219; https://doi.org/10.3390/wevj17040219 - 20 Apr 2026
Abstract
Accurately estimating the severity of crash injuries among older drivers is paramount for enhancing traffic safety, a task challenged by class imbalance and label noise. Traditional predictive paradigms often struggle to identify rare severe cases, as they tend to prioritize global accuracy, thereby compromising sensitivity to high-risk outcomes. To overcome these limitations, this study develops a Log-Loss Cleaned and Probability-Calibrated Cascade (L-CSC) framework by strategically integrating existing advanced algorithmic components for robust and reliable severity prediction. Initially, a Log-Loss-based noise filtering mechanism is implemented to purge outliers and ambiguous samples from the training data, thereby enabling higher-quality representation learning. Subsequently, a two-stage cascade architecture is designed to decouple the classification task. Stage I employs a Preliminary Screening Model, optimized via Bayesian optimization for F2-score, to specifically maximize the recall for severe and fatal cases. In Stage II, a Stacking ensemble classifier is deployed to achieve a fine-grained classification of injury levels among the cases identified in the initial screening. Finally, Isotonic Regression is employed to calibrate the output probabilities from both stages, ensuring that the resulting risk estimations are statistically sound and reliable. Empirical evaluations demonstrate that the L-CSC framework effectively balances overall performance with critical risk detection, achieving a robust Macro-F1 of 0.7296. Specifically, compared to the best-performing baseline, the recall and F1-score for the critical severe and fatal category showed relative improvements of over 82% and 62%, respectively. Ablation analyses further substantiate the vital contributions of both the data cleaning and calibration modules. 
This research demonstrates that the cascaded framework effectively mitigates the biases inherent in imbalanced datasets, providing a robust algorithmic foundation to potentially support future traffic safety interventions. Full article
(This article belongs to the Section Marketing, Promotion and Socio Economics)
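As an editorial illustration: the abstract above calibrates output probabilities with isotonic regression, which fits a non-decreasing step function to (score, outcome) pairs. A minimal pool-adjacent-violators sketch (generic, not the L-CSC implementation; the data are hypothetical):

```python
# Pool-adjacent-violators algorithm (PAVA) for isotonic calibration:
# given 0/1 outcomes already ordered by increasing classifier score,
# produce the non-decreasing least-squares fit (calibrated probabilities).
def isotonic_fit(outcomes):
    """Return the non-decreasing least-squares fit to `outcomes`."""
    # Each block holds [sum, count]; merge backwards while the running
    # block means violate the non-decreasing order constraint.
    blocks = []
    for y in outcomes:
        blocks.append([float(y), 1])
        while len(blocks) > 1 and \
                blocks[-2][0] / blocks[-2][1] > blocks[-1][0] / blocks[-1][1]:
            s, c = blocks.pop()
            blocks[-1][0] += s
            blocks[-1][1] += c
    fitted = []
    for s, c in blocks:
        fitted.extend([s / c] * c)
    return fitted

# Outcomes ordered by increasing classifier score (hypothetical):
probs = isotonic_fit([0, 1, 0, 1, 1])
```

The same fit is available as `sklearn.isotonic.IsotonicRegression` in production code; the sketch only shows why the calibrated probabilities are monotone in the score.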
21 pages, 1699 KB  
Article
Three-Way Multimodal Learning with Severely Missing Modalities
by Hanrui Wang, Yu Fang, Xin Wang and Fan Min
Information 2026, 17(4), 384; https://doi.org/10.3390/info17040384 - 19 Apr 2026
Abstract
Missing modalities remain a major obstacle to the real-world deployment of multimodal learning systems, as incomplete inputs can substantially degrade model performance. Existing methods often suffer from biased imputation under high missing rates and lack uncertainty-aware, differentiated processing. Inspired by three-way decision, a framework for handling uncertainty by adding a deferment option to acceptance and rejection, we propose three-way multimodal learning with severely missing modalities (3WML-SMMs), a novel framework that introduces a three-way decision mechanism into both missing-modality imputation and feature regularization for the first time. Specifically, 3WML-SMM treats variance not merely as a descriptive measure of uncertainty, but as a decision signal for adaptive processing. Based on this idea, the framework incorporates (1) a variance-guided three-way imputation strategy with accept–delay–reject decisions to reduce unreliable reconstruction when only a limited number of complete samples are available and (2) a dimension-wise adaptive feature enhancement module that performs fine-grained regularization according to perturbation uncertainty. Experiments on the CMU Multimodal Opinion Sentiment Intensity (CMU-MOSI) and Multimodal Internet Movie Database (MM-IMDb) datasets show that 3WML-SMM consistently outperforms representative baselines, including reconstruction-based methods, complete-input multimodal methods, and missing-modality-specific methods under severe missing-modality settings, with statistically significant improvements over the multimodal learning with severely missing modality (SMIL) baseline (p<0.05). These results demonstrate the effectiveness of the proposed framework, even in extreme settings where only 10% of the text modality is available. Full article
(This article belongs to the Section Artificial Intelligence)
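The variance-guided accept–delay–reject imputation described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: the thresholds, the attenuation factor for deferred dimensions, and the use of multiple stochastic reconstructions as the variance estimator are all assumptions made here for illustration.

```python
import numpy as np

def three_way_impute(reconstructions, low_thr=0.1, high_thr=0.5):
    """Accept / delay / reject imputation guided by per-dimension variance.

    reconstructions: array of shape (n_draws, dim) holding multiple
    stochastic reconstructions of a missing modality's feature vector.
    Returns the imputed vector and a per-dimension decision label.
    """
    mean = reconstructions.mean(axis=0)
    var = reconstructions.var(axis=0)

    imputed = np.empty_like(mean)
    decisions = np.empty(mean.shape, dtype=object)

    accept = var <= low_thr        # low uncertainty: trust the mean estimate
    reject = var >= high_thr       # high uncertainty: discard the estimate
    delay = ~accept & ~reject      # boundary region: keep an attenuated value

    imputed[accept] = mean[accept]
    imputed[delay] = 0.5 * mean[delay]   # deferred, shrunk toward zero
    imputed[reject] = 0.0                # rejected dimensions are masked out

    decisions[accept] = "accept"
    decisions[delay] = "delay"
    decisions[reject] = "reject"
    return imputed, decisions
```

For example, a dimension whose reconstructions agree exactly is accepted as-is, one with moderate disagreement is deferred with a shrunken estimate, and one with wildly varying reconstructions is zeroed out rather than trusted.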
22 pages, 2108 KB  
Review
A Short Review of Arabic Aspect-Based Sentiment Analysis: Methods, Challenges and Future Directions
by Hamza Youseef, Luis Gonzaga Baca Ruiz, David Criado Ramón and María del Carmen Pegalajar Jimenez
AI 2026, 7(4), 147; https://doi.org/10.3390/ai7040147 - 19 Apr 2026
Viewed by 207
Abstract
The need for Arabic Aspect-Based Sentiment Analysis (ABSA) has grown steadily alongside the expansion of digital content, while the linguistic complexity of Modern Standard Arabic and its diverse dialects introduces significant challenges. However, progress in the field remains constrained by methodological fragmentation, inconsistent task definitions, heterogeneous datasets, and non-standardized evaluation practices. Based on a systematic analysis of 57 studies, this work presents an analytical and interpretive review that moves beyond performance-oriented surveys to examine the methodological foundations of Arabic ABSA research. The review follows a rigorous and transparent study selection process and applies a structured analytical framework to analyze task formulations, dataset characteristics, modeling approaches, and evaluation strategies. Our findings reveal persistent challenges, including ambiguous aspect definitions, insufficiently documented annotation protocols, structural annotation biases, and limited robustness across domains and dialects. A heavy reliance on Transformer-based architectures and new Arabic foundation models can create an illusion of progress: these models are often evaluated on small, homogeneous datasets, so strong in-domain performance obscures limited cross-domain and cross-dialectal generalizability. This study concludes by outlining actionable research directions, emphasizing clearer task standardization, more rigorous annotation guidelines, unified evaluation, and broader dialectal coverage to enhance reproducibility and scalability in Arabic ABSA systems. Full article
29 pages, 2383 KB  
Article
Multi-Scale Spectral Recurrent Network Based on Random Fourier Features for Wind Speed Forecasting
by Eder Arley Leon-Gomez, Víctor Elvira, Jorge Iván Montes-Monsalve, Andrés Marino Álvarez-Meza, Alvaro Orozco-Gutierrez and German Castellanos-Dominguez
Technologies 2026, 14(4), 238; https://doi.org/10.3390/technologies14040238 - 18 Apr 2026
Viewed by 98
Abstract
Accurate wind speed forecasting is critical for reliable wind-power integration, yet it remains challenging due to the strongly non-stationary and inherently multi-scale nature of atmospheric processes. While deep learning models—such as LSTM, GRU, and Transformer architectures—achieve competitive short- and medium-term performance, they frequently suffer from spectral bias, hyperparameter sensitivity, and reduced generalization under heterogeneous operating regimes. To address these limitations, we propose a multi-scale spectral–recurrent framework, termed RFF-RNN, which integrates multi-band Random Fourier Feature (RFF) encodings with parameterizable recurrent backbones. A key innovation of our approach is the deliberate relaxation of strict shift-invariance constraints; by jointly optimizing spectral frequencies, phase biases, and bandwidth scales alongside the neural weights, the framework dynamically shapes a fully data-driven spectral embedding. To ensure robust adaptation, we employ a two-stage optimization strategy combining gradient-based inner-loop learning with outer-loop Bayesian hyperparameter tuning. Our extensive evaluations on a controlled synthetic benchmark and six geographically diverse real-world wind datasets (spanning the USA, China, and the Netherlands) demonstrate the superiority of the proposed framework. Statistical validation via the Friedman test confirms that RFF-enhanced models—particularly RFF-GRU and RFF-LSTM—systematically outperform standard recurrent networks and state-of-the-art Transformer architectures (Autoformer and FEDformer). The proposed approach yields significantly lower error metrics (MAE and RMSE) and higher explained variance (R²), while exhibiting remarkable resilience against error accumulation at extended forecasting horizons. Full article
(This article belongs to the Special Issue AI for Smart Engineering Systems)
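The Random Fourier Feature encoding that the abstract builds on can be sketched as follows. This shows only the standard fixed RFF map, not the authors' trainable RFF-RNN: in their framework the frequencies `W`, phase offsets `b`, and bandwidth would be optimized jointly with the network weights, whereas here they are sampled once; all variable names and the initialization scheme are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def rff_encode(x, W, b):
    """Map inputs x of shape (n, d_in) to Random Fourier Features (n, D).

    W holds the spectral frequencies and b the phase offsets. For large D,
    inner products of these features approximate an RBF kernel.
    """
    D = W.shape[1]
    return np.sqrt(2.0 / D) * np.cos(x @ W + b)

# Fixed initialization targeting an RBF kernel of bandwidth sigma:
# frequencies drawn from N(0, 1/sigma^2), phases uniform on [0, 2*pi).
d_in, D, sigma = 3, 64, 1.0
W = rng.normal(0.0, 1.0 / sigma, size=(d_in, D))
b = rng.uniform(0.0, 2.0 * np.pi, size=D)

x = rng.normal(size=(5, d_in))
z = rff_encode(x, W, b)  # (5, 64) spectral embedding fed to the backbone
```

In a trainable variant, `W`, `b`, and the bandwidth would simply become parameters of the model and receive gradients like any layer weights, which is the relaxation of shift-invariance the abstract describes.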