Search Results (7,901)

Search Parameters:
Keywords = biases

14 pages, 685 KB  
Article
Comparative Effectiveness of Bevacizumab and Olaparib Maintenance Therapy in Platinum-Sensitive Recurrent Ovarian Cancer: A Real-World Study with Exploratory Evaluation of Dose Reduction
by Shunsuke Tatsuki, Tadahiro Shoji, Ami Jo, Nanako Jonai, Yohei Chiba, Sho Sato, Eriko Takatori, Yoshitaka Kaido, Takayuki Nagasawa, Masahiro Kagabu, Takeshi Aida, Fumiharu Miura and Tsukasa Baba
Cancers 2026, 18(9), 1332; https://doi.org/10.3390/cancers18091332 - 22 Apr 2026
Abstract
Objective: To compare real-world PFS between BEV and OLA as maintenance therapy for PSROC, with an exploratory evaluation of clinical outcomes after OLA dose reduction. Methods: This retrospective multicenter study included 101 patients with first platinum-sensitive recurrent ovarian, fallopian tube, or primary peritoneal cancer who achieved a response to platinum-based chemotherapy and then received maintenance therapy. Patients were classified into three groups: BEV (n = 34), standard-dose OLA (n = 31), and dose-reduced OLA (n = 36). The primary endpoint was PFS; secondary endpoints were OS and adverse events. Survival outcomes were evaluated using Kaplan–Meier methods and Cox proportional hazards models. Results: In the primary comparison of all OLA-treated patients versus BEV, OLA was associated with longer PFS (HR 0.48, 95% CI 0.29–0.77), with median PFS of 19 months versus 16 months, respectively. OS did not differ significantly between groups (HR 0.60, 95% CI 0.34–1.05). In exploratory subgroup analyses, patients who underwent OLA dose reduction had numerically longer PFS than those who remained on the full dose; however, this comparison is vulnerable to time-dependent and selection biases and should be interpreted cautiously. Grade ≥ 3 hematologic toxicities were more frequent in the OLA groups but were clinically manageable. Conclusions: In real-world practice, OLA was associated with longer PFS than BEV in PSROC. Clinically necessary dose reduction appeared feasible without an obvious loss of benefit, although this finding requires cautious interpretation.
(This article belongs to the Special Issue Novel Drugs for Treating Gynecologic Cancers: 2nd Edition)
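As a hedged illustration of the survival-analysis machinery named in this abstract (Kaplan–Meier estimation and a Cox proportional hazards model), here is a minimal Python sketch on synthetic data; the lifelines package, the column names, and the toy cohort are assumptions, not the authors' code or data.

# Sketch: Kaplan-Meier curves and a Cox PH hazard ratio on synthetic data.
# Assumes the `lifelines` package; column names are illustrative only.
import numpy as np
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

rng = np.random.default_rng(0)
n = 101
df = pd.DataFrame({
    "months":   rng.exponential(18, n),          # PFS time (synthetic)
    "event":    rng.integers(0, 2, n),           # 1 = progression observed
    "olaparib": rng.integers(0, 2, n),           # 1 = OLA arm, 0 = BEV arm
})

km = KaplanMeierFitter()
for arm, sub in df.groupby("olaparib"):
    km.fit(sub["months"], sub["event"], label=f"olaparib={arm}")
    print(arm, "median PFS:", km.median_survival_time_)

cph = CoxPHFitter()
cph.fit(df, duration_col="months", event_col="event")
cph.print_summary()   # exp(coef) for `olaparib` is the hazard ratio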
16 pages, 6386 KB  
Article
Nano-Power OTA-Based Low-Pass Filter for Ultra-Low-Energy Biomedical Signal Processing
by Tomasz Kulej, Montree Kumngern and Fabian Khateb
Sensors 2026, 26(9), 2586; https://doi.org/10.3390/s26092586 - 22 Apr 2026
Abstract
This paper presents a nanowatt-scale operational transconductance amplifier (OTA) and an electronically tunable third-order low-pass filter (LPF) designed for energy-constrained biomedical signal conditioning. The circuits are implemented in a 65 nm CMOS process and verified through comprehensive schematic-level simulations. Biased in the deep subthreshold region at 1 nA, the OTA achieves a 50 dB low-frequency gain, a 225 Hz unity-gain bandwidth at 10 pF load capacitance and an input-referred noise floor of 1.55 μV/√Hz, with a total power consumption of only 1.75 nW. The integrated third-order LPF provides a wide tuning range (37–668 Hz) via bias current modulation, exhibiting excellent linearity with a THD of 0.059% and a 65.3 dB dynamic range. Monte Carlo and PVT corner analyses demonstrate the design’s theoretical robustness against process variations and environmental fluctuations. ECG signal simulations validate the circuit’s effectiveness in suppressing high-frequency artifacts while preserving morphological integrity, providing a proof-of-concept for ultra-low-power wearable healthcare architectures.
(This article belongs to the Section Biomedical Sensors)
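For readers sanity-checking the reported figures, the textbook subthreshold relations give the right order of magnitude. This is a back-of-envelope sketch assuming an ideal gate-driven differential pair with slope factor n ≈ 1.5 and thermal voltage V_T ≈ 26 mV; the actual topology is not specified here and typically has a lower effective transconductance:

g_m ≈ I_B / (n V_T) = 1 nA / (1.5 × 26 mV) ≈ 26 nS
GBW = g_m / (2π C_L) ≈ 26 nS / (2π × 10 pF) ≈ 410 Hz

The reported 225 Hz unity-gain bandwidth sits within this scale once a reduced effective g_m (common in ultra-low-voltage input stages) is accounted for.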
42 pages, 4479 KB  
Article
Fractional Diffusion on Graphs: Superposition of Laplacian Semigroups Incorporating Memory
by Nikita Deniskin and Ernesto Estrada
Fractal Fract. 2026, 10(4), 273; https://doi.org/10.3390/fractalfract10040273 - 21 Apr 2026
Abstract
Subdiffusion on graphs is often modeled by time-fractional diffusion equations; yet, its structural and dynamical consequences remain unclear. We show that subdiffusive transport on graphs is a memory-driven process generated by a random time change that compresses operational time, produces long-tailed waiting times, and breaks Markovianity while preserving linearity and mass conservation. While the subordination representation and complete monotonicity properties of the Mittag-Leffler function are classical, we develop a graph-based synthesis in which Mittag-Leffler dynamics admit an exact convex, mass-preserving representation as a superposition of Laplacian semigroups evaluated at rescaled times. This perspective reveals fractional diffusion as ordinary diffusion acting across multiple intrinsic time scales and enables new structural and dynamical interpretations of graphs. This framework uncovers heterogeneous, vertex-dependent memory effects and induces transport biases absent in classical diffusion, including algebraic relaxation, degree-dependent waiting times, and early-time asymmetries between sources and neighbors. These features define a subdiffusive geometry on graphs, enabling the recovery of global shortest paths, in contrast to the graph exploration of diffusive geometry, while simultaneously favoring high-degree regions. Finally, we show that time-fractional diffusion can be interpreted as a singular limit of multi-rate diffusion, in an appropriate asymptotic sense.
(This article belongs to the Special Issue Fractal Analysis and Data-Driven Complex Systems)
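The convex superposition this abstract describes has a classical closed form via subordination. In one standard notation (L the graph Laplacian, M_α the Mainardi/Wright function; the notation is an assumption, not the paper's), the Mittag-Leffler evolution is a mixture of heat semigroups at rescaled times:

u(t) = E_α(−t^α L) u_0 = ∫_0^∞ M_α(s) e^{−s t^α L} u_0 ds,   0 < α < 1,

with M_α(s) ≥ 0 and ∫_0^∞ M_α(s) ds = 1, so the mixture is convex and mass-preserving, matching the abstract's statement that fractional diffusion acts as ordinary diffusion across multiple intrinsic time scales.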
20 pages, 14689 KB  
Article
Objective Classification of Convective Precipitation in Chengdu Terminal Area Using a Self-Organizing Map and Its Impacts on Terminal Area Operations
by Haotian Li, Haoya Liu, Lian Duan, Ran Li, Yecheng Zhang and Xiaowei Hu
Atmosphere 2026, 17(4), 421; https://doi.org/10.3390/atmos17040421 - 21 Apr 2026
Abstract
Based on hourly reanalysis data during 2010–2020, the Self-Organizing Map method is used to objectively classify convective precipitation events in the Chengdu terminal area. Combined with circulation background characteristics, the results are further grouped into three typical synoptic types. Among these three types, Type 1, characterized by a pattern with strong high pressure and abundant water vapor, yields the most intense precipitation. Type 2, a pattern with moderately strong high pressure and water vapor convergence, produces the second-highest precipitation. Type 3, associated with a low trough and weak water vapor conditions, has the weakest precipitation. Two indicators, the Weather Severity Index (WSI) and the Node Coverage Index (NCI), respectively describing the coverage extent of heavy precipitation over the terminal area and over key arrival and departure nodes, are established and calculated based on heavy precipitation samples. The results show that Type 1 exhibits the highest WSI and NCI values, indicating the greatest potential impact. Type 2 displays a lower WSI than Type 1 but retains a relatively higher NCI, suggesting a more directionally biased impact, whereas Type 3 records the lowest values for both indicators, indicating a relatively weak impact. The integration of synoptic weather classification and spatial impact indicators offers a reference for weather-impact identification and scenario-based operational assessment in terminal areas. However, some limitations remain in the current study. The weather classification is primarily based on reanalysis data, and the correspondence between the WSI/NCI and actual airport operational constraints requires further validation.
(This article belongs to the Special Issue Meteorological Extreme in China)
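A minimal sketch of the general SOM classification step this abstract describes, using the minisom package; the grid size, toy stand-in data, and training settings are illustrative assumptions, not the authors' configuration.

# Sketch: SOM-based objective classification of circulation patterns.
# Uses the `minisom` package; data and settings are illustrative only.
import numpy as np
from minisom import MiniSom

# Toy stand-in for standardized reanalysis fields (samples x grid points)
X = np.random.default_rng(1).normal(size=(500, 64)).astype(float)

som = MiniSom(3, 1, X.shape[1], sigma=0.8, learning_rate=0.5, random_seed=1)
som.random_weights_init(X)
som.train_random(X, num_iteration=5000)

# Each sample is assigned to its best-matching node -> a synoptic "type"
labels = np.array([som.winner(x)[0] for x in X])
print(np.bincount(labels))   # sample count per node / type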
42 pages, 10596 KB  
Systematic Review
Measurement and Modeling of Sustainable Food Choice and Purchasing Behavior: A Systematic Review of Methods and Models
by Tiago Negrão Andrade and Helena Maria André Bolini
Foods 2026, 15(8), 1442; https://doi.org/10.3390/foods15081442 - 21 Apr 2026
Abstract
Despite decades of methodological sophistication, research on sustainable food behavior remains critically limited in predicting actual purchases. This study aims to examine how methodological fragmentation across psychometric, econometric, and behavioral approaches affects the predictive validity of sustainable food choice and purchasing behavior. This integrative systematic review of 62 empirical studies across psychometric validation, discrete choice experiments (DCEs), trust and cognitive biases, and objective behavioral measurement diagnoses the structural disarticulation between these traditions as the primary cause of limited predictive validity. Findings reveal a pronounced inversion of the evidence hierarchy: while self-report studies report moderate attitude–behavior correlations (β ≈ 0.40–0.50, self-report), the only long-term study using objective scanner data demonstrates that this relationship collapses to a virtually null effect (β = 0.022), representing a 95.6% decay in predictive capacity. Psychometric instruments demonstrate strong structural validity but lack ecological validation against actual purchases. DCEs have evolved econometrically (from MNL to GMNL models), yet remain isolated from psychological theory and real-world validation. Critically, no reviewed study integrated validated scales, a DCE, and objective behavioral data within a single design. Key moderators—skepticism, halo effects, and affective heuristics—are systematically underoperationalized. To overcome this impasse, we propose Hybrid Choice Models (HCM) as the central tool to formally articulate latent attitudes, stated preferences, and observed behavior, enabling cumulative evidence to inform policy and market strategies with greater predictive accuracy. These findings indicate that predictive advances depend on integrating measurement paradigms to achieve ecologically valid and policy-relevant models of sustainable consumer behavior.
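The quoted 95.6% decay follows directly from the reported coefficients, taking the upper self-report value β ≈ 0.50 as the baseline:

1 − 0.022 / 0.50 = 1 − 0.044 = 0.956,

i.e., a 95.6% reduction in predictive capacity relative to the self-report estimate.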
28 pages, 2835 KB  
Review
Unlocking Microbial Dark Matter: A Comprehensive Review of Isolation Technologies from Traditional Culturing to Single-Cell Technologies
by Xi Sun, Xiaoxuan Zhang and Jia Zhang
Microorganisms 2026, 14(4), 933; https://doi.org/10.3390/microorganisms14040933 - 21 Apr 2026
Abstract
Microorganisms represent the Earth’s most abundant biomass and a vast reservoir of genetic diversity. However, traditional agar plate methods fail to recover the vast majority of these species, leaving a “microbial dark matter” that holds immense potential for the discovery of novel antibiotics and bioactive compounds. While conventional techniques such as selective media and enrichment culture remain foundational, they are inherently limited by community biases and the inability to support low-abundance, oligotrophic species. To address these bottlenecks, a diverse array of innovative isolation strategies has emerged. This review systematically categorizes and evaluates these methodologies, ranging from in situ cultivation to high-resolution single-cell manipulation. We first examine membrane diffusion-based cultivation (e.g., iChip), which mimics natural microenvironments to resuscitate recalcitrant microbes. Subsequently, we explore high-throughput single-cell technologies, including microfluidics for physicochemical separation, optical tweezers for precise manipulation, and fluorescence-activated cell sorting (FACS). Special attention is given to Raman-activated cell sorting (RACS) as a label-free functional screening tool and reverse genomics for targeted capture. By synthesizing the strengths and limitations of these approaches, we propose integrated workflows designed to accelerate the mining of untapped microbial resources.
(This article belongs to the Section Microbial Biotechnology)
27 pages, 2500 KB  
Article
Injury Severity Prediction for Older Driver Accidents via Denoised Cascade Framework and Probability Calibration
by Yiyong Pan, Xilai Jia, Jieru Huang, Gen Li and Pengyu Xu
World Electr. Veh. J. 2026, 17(4), 219; https://doi.org/10.3390/wevj17040219 - 20 Apr 2026
Abstract
Accurately estimating the severity of crash injuries among older drivers is paramount for enhancing traffic safety, a task challenged by class imbalance and label noise. Traditional predictive paradigms often struggle to identify rare severe cases, as they tend to prioritize global accuracy, thereby compromising sensitivity to high-risk outcomes. To overcome these limitations, this study develops a Log-Loss Cleaned and Probability-Calibrated Cascade (L-CSC) framework by strategically integrating existing advanced algorithmic components for robust and reliable severity prediction. Initially, a Log-Loss-based noise filtering mechanism is implemented to purge outliers and ambiguous samples from the training data, thereby enabling higher-quality representation learning. Subsequently, a two-stage cascade architecture is designed to decouple the classification task. Stage I employs a Preliminary Screening Model, optimized via Bayesian optimization for F2-score, to specifically maximize the recall for severe and fatal cases. In Stage II, a Stacking ensemble classifier is deployed to achieve a fine-grained classification of injury levels among the cases identified in the initial screening. Finally, Isotonic Regression is employed to calibrate the output probabilities from both stages, ensuring that the resulting risk estimations are statistically sound and reliable. Empirical evaluations demonstrate that the L-CSC framework effectively balances overall performance with critical risk detection, achieving a robust Macro-F1 of 0.7296. Specifically, compared to the best-performing baseline, the recall and F1-score for the critical severe and fatal category showed relative improvements of over 82% and 62%, respectively. Ablation analyses further substantiate the vital contributions of both the data cleaning and calibration modules. This research demonstrates that the cascaded framework effectively mitigates the biases inherent in imbalanced datasets, providing a robust algorithmic foundation to potentially support future traffic safety interventions.
(This article belongs to the Section Marketing, Promotion and Socio Economics)
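A minimal scikit-learn sketch of two of the components named in this abstract — per-sample log-loss noise filtering and isotonic probability calibration — shown for the binary case; the models, the 5% threshold, and the synthetic data are illustrative assumptions, not the L-CSC implementation.

# Sketch: log-loss-based noise filtering + isotonic calibration (binary case).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.isotonic import IsotonicRegression
from sklearn.model_selection import cross_val_predict, train_test_split

X, y = make_classification(n_samples=2000, weights=[0.9], random_state=0)
X_tr, X_cal, y_tr, y_cal = train_test_split(X, y, random_state=0)

# 1) Out-of-fold probabilities give an honest per-sample log-loss.
p_oof = cross_val_predict(GradientBoostingClassifier(random_state=0),
                          X_tr, y_tr, cv=5, method="predict_proba")[:, 1]
eps = 1e-12
sample_ll = -(y_tr * np.log(p_oof + eps) + (1 - y_tr) * np.log(1 - p_oof + eps))
keep = sample_ll < np.quantile(sample_ll, 0.95)   # drop the noisiest 5%

# 2) Refit on the cleaned data, then calibrate probabilities isotonically.
clf = GradientBoostingClassifier(random_state=0).fit(X_tr[keep], y_tr[keep])
iso = IsotonicRegression(out_of_bounds="clip")
iso.fit(clf.predict_proba(X_cal)[:, 1], y_cal)
p_calibrated = iso.predict(clf.predict_proba(X_cal)[:, 1])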
21 pages, 1699 KB  
Article
Three-Way Multimodal Learning with Severely Missing Modalities
by Hanrui Wang, Yu Fang, Xin Wang and Fan Min
Information 2026, 17(4), 384; https://doi.org/10.3390/info17040384 - 19 Apr 2026
Abstract
Missing modalities remain a major obstacle to the real-world deployment of multimodal learning systems, as incomplete inputs can substantially degrade model performance. Existing methods often suffer from biased imputation under high missing rates and lack uncertainty-aware, differentiated processing. Inspired by three-way decision, a framework for handling uncertainty by adding a deferment option to acceptance and rejection, we propose three-way multimodal learning with severely missing modalities (3WML-SMM), a novel framework that introduces a three-way decision mechanism into both missing-modality imputation and feature regularization for the first time. Specifically, 3WML-SMM treats variance not merely as a descriptive measure of uncertainty, but as a decision signal for adaptive processing. Based on this idea, the framework incorporates (1) a variance-guided three-way imputation strategy with accept–delay–reject decisions to reduce unreliable reconstruction when only a limited number of complete samples are available and (2) a dimension-wise adaptive feature enhancement module that performs fine-grained regularization according to perturbation uncertainty. Experiments on the CMU Multimodal Opinion Sentiment Intensity (CMU-MOSI) and Multimodal Internet Movie Database (MM-IMDb) datasets show that 3WML-SMM consistently outperforms representative baselines, including reconstruction-based methods, complete-input multimodal methods, and missing-modality-specific methods under severe missing-modality settings, with statistically significant improvements over the multimodal learning with severely missing modality (SMIL) baseline (p < 0.05). These results demonstrate the effectiveness of the proposed framework, even in extreme settings where only 10% of the text modality is available.
(This article belongs to the Section Artificial Intelligence)
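A hedged numpy sketch of the accept–delay–reject idea this abstract describes, with the variance of an ensemble of candidate imputations as the decision signal; the thresholds and the soft down-weighting rule are illustrative assumptions, not the 3WML-SMM model.

# Sketch: variance-guided three-way decision over candidate imputations.
import numpy as np

rng = np.random.default_rng(0)
# K stochastic reconstructions of one missing-modality feature vector
candidates = rng.normal(size=(8, 16))           # (K imputations, feature dim)

var = candidates.var(axis=0)                    # per-dimension uncertainty
t_low, t_high = np.quantile(var, [0.3, 0.8])    # illustrative thresholds

mean_imp = candidates.mean(axis=0)
accept = var <= t_low                           # trust the imputation
reject = var >= t_high                          # discard: fall back to zeros
defer = ~accept & ~reject                       # delay: down-weight instead

imputed = np.where(accept, mean_imp, 0.0)
imputed = np.where(defer, 0.5 * mean_imp, imputed)  # soft use of uncertain dims
print(accept.sum(), defer.sum(), reject.sum())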
22 pages, 2108 KB  
Review
A Short Review of Arabic Aspect-Based Sentiment Analysis: Methods, Challenges and Future Directions
by Hamza Youseef, Luis Gonzaga Baca Ruiz, David Criado Ramón and María del Carmen Pegalajar Jimenez
AI 2026, 7(4), 147; https://doi.org/10.3390/ai7040147 - 19 Apr 2026
Abstract
The need for Arabic Aspect-Based Sentiment Analysis (ABSA) has grown steadily alongside the expansion of digital content, while the linguistic complexity of Modern Standard Arabic and its diverse dialects introduces significant challenges. However, progress in the field remains constrained by methodological fragmentation, inconsistent task definitions, heterogeneous datasets, and non-standardized evaluation practices. Based on a systematic analysis of 57 studies, this work presents an analytical and interpretive review that moves beyond performance-oriented surveys to examine the methodological foundations of Arabic ABSA research. The review follows a rigorous and transparent study selection process and applies a structured analytical framework to analyze task formulations, dataset characteristics, modeling approaches and evaluation strategies. Our findings reveal persistent challenges, including ambiguous aspect definitions, insufficiently documented annotation protocols, structural annotation biases, and limited robustness across domains and dialects. A heavy reliance on Transformer-based architectures and new Arabic foundation models can create an illusion of progress. Researchers often evaluate these models on small and homogeneous datasets. Consequently, strong in-domain performance obscures limited cross-domain and cross-dialectal generalizability. This study concludes by outlining actionable research directions, emphasizing clearer task standardization, more rigorous annotation guidelines, unified evaluation, and broader dialectal coverage to enhance reproducibility and scalability in Arabic ABSA systems.
29 pages, 2377 KB  
Article
Multi-Scale Spectral Recurrent Network Based on Random Fourier Features for Wind Speed Forecasting
by Eder Arley Leon-Gomez, Víctor Elvira, Jorge Iván Montes-Monsalve, Andrés Marino Álvarez-Meza, Alvaro Orozco-Gutierrez and German Castellanos-Dominguez
Technologies 2026, 14(4), 238; https://doi.org/10.3390/technologies14040238 - 18 Apr 2026
Abstract
Accurate wind speed forecasting is critical for reliable wind-power integration, yet it remains challenging due to the strongly non-stationary and inherently multi-scale nature of atmospheric processes. While deep learning models—such as LSTM, GRU, and Transformer architectures—achieve competitive short- and medium-term performance, they frequently suffer from spectral bias, hyperparameter sensitivity, and reduced generalization under heterogeneous operating regimes. To address these limitations, we propose a multi-scale spectral–recurrent framework, termed RFF-RNN, which integrates multi-band Random Fourier Feature (RFF) encodings with parameterizable recurrent backbones. A key innovation of our approach is the deliberate relaxation of strict shift-invariance constraints; by jointly optimizing spectral frequencies, phase biases, and bandwidth scales alongside the neural weights, the framework dynamically shapes a fully data-driven spectral embedding. To ensure robust adaptation, we employ a two-stage optimization strategy combining gradient-based inner-loop learning with outer-loop Bayesian hyperparameter tuning. Our extensive evaluations on a controlled synthetic benchmark and six geographically diverse real-world wind datasets (spanning the USA, China, and the Netherlands) demonstrate the superiority of the proposed framework. Statistical validation via the Friedman test confirms that RFF-enhanced models—particularly RFF-GRU and RFF-LSTM—systematically outperform standard recurrent networks and state-of-the-art Transformer architectures (Autoformer and FEDformer). The proposed approach yields significantly lower error metrics (MAE and RMSE) and higher explained variance (R²), while exhibiting remarkable resilience against error accumulation at extended forecasting horizons.
(This article belongs to the Special Issue AI for Smart Engineering Systems)
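A minimal PyTorch sketch of the core mechanism this abstract describes — a random-Fourier-feature encoding whose frequencies, phases, and bandwidth scale are trained jointly with a recurrent backbone; the single-band simplification and all dimensions are assumptions, not the paper's RFF-RNN.

# Sketch: trainable RFF encoding feeding a GRU forecaster.
import math
import torch
import torch.nn as nn

class TrainableRFF(nn.Module):
    def __init__(self, in_dim, n_features, bandwidth=1.0):
        super().__init__()
        # Frequencies, phases, and a log-bandwidth scale are all learnable,
        # relaxing strict shift-invariance as the abstract describes.
        self.W = nn.Parameter(torch.randn(in_dim, n_features))
        self.b = nn.Parameter(2 * math.pi * torch.rand(n_features))
        self.log_scale = nn.Parameter(torch.tensor(math.log(bandwidth)))

    def forward(self, x):                      # x: (batch, seq, in_dim)
        z = x @ (self.W * self.log_scale.exp()) + self.b
        return math.sqrt(2.0 / self.W.shape[1]) * torch.cos(z)

class RFFGRU(nn.Module):
    def __init__(self, in_dim=1, n_features=64, hidden=32):
        super().__init__()
        self.rff = TrainableRFF(in_dim, n_features)
        self.gru = nn.GRU(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):
        h, _ = self.gru(self.rff(x))
        return self.head(h[:, -1])             # one-step-ahead forecast

model = RFFGRU()
y_hat = model(torch.randn(8, 24, 1))           # 8 series, 24 past steps
print(y_hat.shape)                             # torch.Size([8, 1])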
33 pages, 5329 KB  
Article
Interpreting Satellite Rainfall Bias Correction Through a Rainfall–Runoff Framework in a Monsoon-Influenced River Basin: The Phetchaburi River Basin, Thailand
by Jutithep Vongphet, Thirasak Saion, Ketvara Sittichok, Songsak Puttrawutichai, Chaiyapong Thepprasit, Polpech Samanmit, Bancha Kwanyuen and Sasiwimol Khawkomol
Water 2026, 18(8), 964; https://doi.org/10.3390/w18080964 - 18 Apr 2026
Abstract
Accurate rainfall information is essential for rainfall–runoff modeling in monsoon-influenced basins, where pronounced spatial variability and limited gauge coverage introduce significant uncertainty. Satellite precipitation products provide spatially continuous estimates but are affected by systematic biases, and improvements in statistical rainfall accuracy do not necessarily translate into hydrologically consistent model forcing. This study interpreted satellite rainfall bias correction through a rainfall–runoff framework in the Phetchaburi River Basin, Thailand, using the DWCM-AgWU hydrological model. Simulations were driven by gauge observations and multiple satellite-based rainfall products (GSMaP, CMORPH, CHIRPS, and PERSIANN-CCS), with bias correction applied using Linear Scaling and Quantile Mapping under rainfall-specific calibration. Results showed that bias correction significantly modified rainfall characteristics in distinct ways. Linear Scaling primarily preserved temporal and spatial structure while adjusting rainfall magnitude, whereas Quantile Mapping improved the distributional representation of rainfall intensities. These differences propagated through hydrological processes, leading to systematic variations in runoff responses across multiple metrics, including water balance consistency, peak magnitude, and timing errors. This suggests that each method performs differently depending on the aspect of system response. Rather than identifying a universally optimal method, the findings highlight trade-offs in how rainfall correction strategies influence hydrological system response. Runoff behavior is interpreted as a process-level indicator of rainfall representation, emphasizing that hydrological consistency depends not only on rainfall accuracy but also on its interaction with model structure. These results suggest a process-oriented perspective for interpreting the role of satellite rainfall products in regulated and monsoon-affected basins.
(This article belongs to the Section Hydrology)
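A hedged numpy sketch of the two correction methods named in this abstract, Linear Scaling and empirical Quantile Mapping; the toy gamma-distributed series stand in for gauge and satellite rainfall, and the authors' rainfall-specific calibration setup is omitted.

# Sketch: Linear Scaling (LS) and empirical Quantile Mapping (QM)
# for satellite rainfall against gauge observations.
import numpy as np

rng = np.random.default_rng(0)
obs = rng.gamma(0.6, 8.0, size=3000)          # gauge rainfall (mm/day, toy)
sat = 0.7 * rng.gamma(0.5, 9.0, size=3000)    # biased satellite estimate

# Linear Scaling: preserve the field's structure, rescale the mean.
ls = sat * obs.mean() / sat.mean()

# Quantile Mapping: map each satellite value through ECDF_sat -> quantile_obs.
q = np.linspace(0, 1, 201)
p_sat = np.searchsorted(np.sort(sat), sat) / sat.size   # empirical CDF of sat
qm = np.interp(p_sat, q, np.quantile(obs, q))

print(f"mean bias  raw {sat.mean()-obs.mean():+.2f}  LS {ls.mean()-obs.mean():+.2f}")
print(f"P95        obs {np.quantile(obs, .95):.1f}  QM {np.quantile(qm, .95):.1f}")

LS matches the mean by construction but leaves the intensity distribution biased; QM matches the quantiles, which is the trade-off the abstract reports propagating into different runoff metrics.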
24 pages, 1243 KB  
Review
Bovine Spongiform Encephalopathy: An Integrated Review of Prion Mechanisms, Neuroanatomy, and Control
by Giovanna Pires Marzola, Rodrigo Paolo Flores Abuna, Lucas de Assis Ribeiro, João Paulo Ruiz Lucio de Lima Parra, Matheus Henrique Hermínio Garcia, Sandra Maria Barbalho and Maria Angélica Miglino
Vet. Sci. 2026, 13(4), 398; https://doi.org/10.3390/vetsci13040398 - 18 Apr 2026
Abstract
Bovine spongiform encephalopathy (BSE) is a fatal transmissible spongiform encephalopathy caused by the misfolding of the host prion protein (PrP), representing a unique intersection between molecular pathology, neuroanatomy, and public health regulation. Although historically framed as a single feedborne epizootic, BSE is now recognized as a spectrum of strain-defined prion disorders encompassing classical and atypical forms with distinct origins, neuroanatomical trajectories, and surveillance implications. This review integrates advances in prion biology, neurodegenerative mechanisms, and anatomical pathways of neuroinvasion to reframe BSE as a heterogeneous disease entity. We synthesize evidence on PrP^C structure, trafficking, and proteolytic processing to explain how normal cellular physiology enables strain-specific conversion to pathogenic PrP^Sc and subsequent neurotoxicity. Distinct patterns of neuroinvasion and regional vulnerability are discussed for classical versus atypical (H- and L-type) BSE, highlighting differences in lymphoid involvement, brainstem targeting, and cortical or cerebellar tropism. We further examine how these biological differences translate into diagnostic sensitivity, surveillance design, and zoonotic risk assessment. By integrating molecular strain diversity with neuroanatomical connectivity, this review underscores the limitations of obex-centered surveillance for atypical BSE and emphasizes the need for proportionate yet precautionary monitoring strategies. These considerations should be interpreted in light of surveillance-dependent detection biases, which influence the apparent distribution of BSE forms. Ultimately, BSE emerges as a critical model for understanding how protein misfolding disorders bridge cellular mechanisms, animal health, and human public health policy.
(This article belongs to the Special Issue Exploring Innovative Approaches in Veterinary Health)
43 pages, 3418 KB  
Systematic Review
IEC 61850 GOOSE: A Systematic Literature Review on the State of the Art and Current Applications
by Arthur Kniphoff da Cruz, Ana Clara Hackenhaar Kellermann, Ingridy Caroliny da Silva, Jaine Mercia Fernandes de Oliveira, Marcia Elena Jochims Kniphoff da Cruz and Lorenz Däubler
Automation 2026, 7(2), 62; https://doi.org/10.3390/automation7020062 - 17 Apr 2026
Abstract
To develop secure, fast, and interoperable smart substations, it is vital to understand the current situation and potential future directions of the technologies involved. This study presents the evolution and state of the art of the Generic Object Oriented Substation Event (GOOSE) communication protocol, defined by the International Electrotechnical Commission (IEC) 61850 standard. A Systematic Literature Review (SLR) was conducted following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) protocol. This included journal articles published from 2004 to 2025 and conference papers from 2020 to 2025, written in English within the Engineering subject area. Only studies primarily focusing on GOOSE, citing it at least ten times, and indexed in the Scopus, IEEE Xplore, and Web of Science databases were included. The quantitative analysis used SciMAT software, complemented by a qualitative analysis. Due to the bibliometric and thematic nature of this review, potential biases were considered at the review level rather than by applying a formal study-level risk-of-bias tool. The final analysis comprised 82 journal articles and 84 conference papers. The results offer a comprehensive mapping of GOOSE research evolution, identify nine main challenges and limitations from the last 22 years, and highlight current research directions. The literature reveals methodological heterogeneity, a predominance of simulation-based approaches, and limited large-scale empirical validation.
(This article belongs to the Special Issue Substation Automation, Protection and Control Based on IEC 61850)
17 pages, 6497 KB  
Article
Optimization Trade-Offs in Memristor-Based Crossbar Arrays for MAC Acceleration
by Hassen Aziza, Hanzhi Xun, Moritz Fieback, Mottaqiallah Taouil and Said Hamdioui
Electronics 2026, 15(8), 1710; https://doi.org/10.3390/electronics15081710 - 17 Apr 2026
Abstract
Vector–matrix multiplication (VMM), implemented through multiply–accumulate (MAC) operations, represents the dominant computational primitive in many artificial intelligence (AI) workloads. When executed on conventional von Neumann architectures, VMM operations suffer from significant energy consumption and latency due to the separation between memory and processing units. To overcome these limitations, crossbar arrays built from Resistive Random Access Memory (RRAM) cells have been proposed for accelerating VMM computations. In this work, we investigate the key optimization trade-offs associated with implementing RRAM-based neural networks for classification applications. A simple two-layer neural network is first defined and trained in software to generate the weight matrices and bias parameters. Next, three hardware implementation scenarios are evaluated depending on whether negative floating-point numbers are used: Positive Weights Only (PWO), Positive and Negative Weights Only (PNWO), and Positive and Negative Weights with Biases (PNWB). The different implementations are analyzed at the hardware level by examining classification accuracy, energy efficiency, latency, and area overhead. The study further incorporates important RRAM limitations, including restricted conductance range and device variability. Hardware results show that the PWO scenario offers the lowest energy consumption (189 fJ/MAC) and area overhead but results in the lowest accuracy. PNWO and PNWB significantly improve accuracy (+177% and +180%) but increase energy consumption (+63% and +87%) and area (×2 and ×2.1). Under variability effects, PWO achieves better accuracy (94.65%), followed by PNWO (93.11%) and PNWB (92.11%).
(This article belongs to the Special Issue Prospective of Semiconductor Memory Devices)
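A hedged numpy sketch of the weight-to-conductance mapping behind the PWO and PNWO scenarios described in this abstract; the conductance window, toy weight matrix, and input voltages are illustrative assumptions, not the paper's design values.

# Sketch: mapping trained weights onto RRAM conductances for analog MAC.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 8))                    # trained weights (signed)
G_MIN, G_MAX = 1e-6, 1e-4                      # device conductance window (S)

def to_conductance(w):
    """Linearly map the positive part of w into the [G_MIN, G_MAX] window."""
    w_max = np.abs(w).max()
    return G_MIN + (G_MAX - G_MIN) * np.clip(w, 0, None) / w_max

# PWO: clip negative weights (cheapest, least accurate).
G_pwo = to_conductance(W)

# PNWO: a differential device pair encodes signed weights, I = (G+ - G-) V,
# so the G_MIN offset cancels between the two columns.
G_pos, G_neg = to_conductance(W), to_conductance(-W)

V = rng.uniform(0, 0.2, size=8)                # input voltages (V)
I_pwo = G_pwo @ V                              # column currents = MAC result
I_pnwo = (G_pos - G_neg) @ V
print(I_pwo.shape, I_pnwo.shape)               # (4,) (4,)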

23 pages, 5622 KB  
Article
Principal Component-Based Spectral Standardization for Optical Spectrometers
by Qiguang Yang, Xu Liu, Wan Wu, Rajendra Bhatt, Yolanda Shea, Xiaozhen Xiong, Ming Zhao, Paul Smith, Greg Kopp and Peter Pilewskie
Remote Sens. 2026, 18(8), 1209; https://doi.org/10.3390/rs18081209 - 17 Apr 2026
Abstract
A Principal Component-Based Spectral Standardization (PCSS) method was developed to standardize hyperspectral radiance spectra onto a fixed wavelength grid. This enables the direct comparison of radiance or reflectance spectra across different spatial pixels of an imaging spectrometer or between different instruments. The method was validated using simulated Climate Absolute Radiance and Refractivity Observatory (CLARREO) Pathfinder (CPF) spectra. The PCSS approach demonstrated high accuracy: the average root-mean-square uncertainty across all CPF channels remained below 0.07%, with maximum individual-channel uncertainties under 1%. Compared to methods based on spectral interpolation, PCSS produced significantly lower biases with tighter error distributions, particularly in spectrally rich regions. Measured Hyper Spectral Imager for Climate Science (HySICS) balloon data provided further validation. PCSS successfully estimated wavelength shifts that closely matched measured data, even when utilizing approximated Jacobians, demonstrating the method’s robustness. Because it relies on a pre-computed lookup table for model parameters, PCSS bypasses the need for intensive radiative transfer calculations, making it highly computationally efficient. Beyond CPF, this method can easily be adapted for other hyperspectral sensors by substituting their respective wavelength grids and instrument line shape functions, offering a powerful tool to improve cross-calibration between different satellite sensors.
(This article belongs to the Section Atmospheric Remote Sensing)
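A hedged numpy sketch of the general principle behind a PC-based standardization: represent spectra in a principal-component basis, fit the coefficients on the instrument's native grid, and reconstruct on the fixed reference grid. The toy spectra, grids, and the plain least-squares fit are assumptions, not the CPF processing chain.

# Sketch: principal-component resampling of a spectrum onto a fixed grid.
import numpy as np

rng = np.random.default_rng(0)
ref_grid = np.linspace(400, 700, 300)                   # fixed wavelength grid (nm)
train = np.array([np.sin(ref_grid / (20 + 5 * k)) for k in range(40)])

mean = train.mean(axis=0)
_, _, Vt = np.linalg.svd(train - mean, full_matrices=False)
basis = Vt[:10]                                         # leading PCs on ref grid

# A measurement arrives on a shifted instrument grid:
inst_grid = ref_grid + 0.3                              # 0.3 nm shift (toy)
measured = np.sin(inst_grid / 30.0)

# Interpolate mean and basis to the instrument grid, solve for PC scores...
A = np.vstack([np.interp(inst_grid, ref_grid, b) for b in basis]).T
m_i = np.interp(inst_grid, ref_grid, mean)
scores, *_ = np.linalg.lstsq(A, measured - m_i, rcond=None)

# ...then reconstruct the spectrum on the fixed reference grid.
standardized = mean + scores @ basis
print(standardized.shape)                               # (300,)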
