Search Results (1,246)

Search Parameters:
Keywords = probability neural network

26 pages, 2618 KB  
Article
A Cascaded Batch Bayesian Yield Optimization Method for Analog Circuits via Deep Transfer Learning
by Ziqi Wang, Kaisheng Sun and Xiao Shi
Electronics 2026, 15(3), 516; https://doi.org/10.3390/electronics15030516 - 25 Jan 2026
Viewed by 154
Abstract
In nanometer integrated-circuit (IC) manufacturing, advanced technology scaling has intensified the effects of process variations on circuit reliability and performance. Random fluctuations in parameters such as threshold voltage, channel length, and oxide thickness further degrade design margins and increase the likelihood of functional failures. These variations often lead to rare circuit failure events, underscoring the importance of accurate yield estimation and robust design methodologies. Conventional Monte Carlo yield estimation is computationally infeasible as millions of simulations are required to capture failure events with extremely low probability. This paper presents a novel reliability-based circuit design optimization framework that leverages deep transfer learning to improve the efficiency of repeated yield analysis in optimization iterations. Based on pre-trained neural network models from prior design knowledge, we utilize model fine-tuning to accelerate importance sampling (IS) for yield estimation. To improve estimation accuracy, adversarial perturbations are introduced to calibrate uncertainty near the model decision boundary. Moreover, we propose a cascaded batch Bayesian optimization (CBBO) framework that incorporates a smart initialization strategy and a localized penalty mechanism, guiding the search process toward high-yield regions while satisfying nominal performance constraints. Experimental validation on SRAM circuits and amplifiers reveals that CBBO achieves a computational speedup of 2.02×–4.63× over state-of-the-art (SOTA) methods, without compromising accuracy and robustness. Full article
(This article belongs to the Topic Advanced Integrated Circuit Design and Application)
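The contrast between plain Monte Carlo and importance sampling that motivates this paper can be illustrated with a toy one-dimensional example. The sketch below is only a stand-in: the 4-sigma failure threshold, the standard-normal process parameter, and the mean-shifted Gaussian proposal are assumptions for illustration, not the authors' circuit model or their fine-tuned-network sampler.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-D stand-in for a circuit metric: "failure" when the process
# parameter drifts beyond 4 sigma. True failure probability is about 3.2e-5.
def fails(x):
    return x > 4.0

n = 100_000

# Plain Monte Carlo: almost no samples land in the rare failure region.
x_mc = rng.standard_normal(n)
p_mc = fails(x_mc).mean()

# Importance sampling: draw from a proposal shifted toward the failure region
# and reweight each sample by the likelihood ratio N(0,1)/N(shift,1).
shift = 4.0
x_is = rng.standard_normal(n) + shift
w = np.exp(-x_is * shift + 0.5 * shift**2)
p_is = np.mean(fails(x_is) * w)

print(f"plain MC: {p_mc:.2e}   importance sampling: {p_is:.2e}")
```

With the same sample budget, the importance-sampling estimate is close to the analytical tail probability while plain Monte Carlo typically sees zero or a handful of failures, which is the efficiency gap the paper's transfer-learned sampler targets.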

25 pages, 5754 KB  
Article
Heatmap-Assisted Reinforcement Learning Model for Solving Larger-Scale TSPs
by Guanqi Liu and Donghong Xu
Electronics 2026, 15(3), 501; https://doi.org/10.3390/electronics15030501 - 23 Jan 2026
Viewed by 139
Abstract
Deep reinforcement learning (DRL)-based algorithms for solving the Traveling Salesman Problem (TSP) have demonstrated competitive potential compared to traditional heuristic algorithms on small-scale TSP instances. However, as the problem size increases, the NP-hard nature of the TSP leads to exponential growth in the combinatorial search space, state–action space explosion, and sharply increased sample complexity, which together cause significant performance degradation for most existing DRL-based models when directly applied to large-scale instances. This research proposes a two-stage reinforcement learning framework, termed GCRL-TSP (Graph Convolutional Reinforcement Learning for the TSP), which consists of a heatmap generation stage based on a graph convolutional neural network, and a heatmap-assisted Proximal Policy Optimization (PPO) training stage, where the generated heatmaps are used as auxiliary guidance for policy optimization. First, we design a divide-and-conquer heatmap generation strategy: a graph convolutional network infers m-node sub-heatmaps, which are then merged into a global edge-probability heatmap. Second, we integrate the heatmap into PPO by augmenting the state representation and restricting the action space toward high-probability edges, improving training efficiency. On standard instances with 200/500/1000 nodes, GCRL-TSP achieves a Gap% of 4.81/4.36/13.20 (relative to Concorde) with runtimes of 36 s/1.12 min/4.65 min. Experimental results show that GCRL-TSP achieves more than twice the solving speed compared to other TSP solving algorithms, while obtaining solution quality comparable to other algorithms on TSPs ranging from 200 to 1000 nodes. Full article
(This article belongs to the Section Artificial Intelligence)
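A rough sketch of the two ideas in the pipeline, merging sub-heatmaps into a global edge-probability heatmap and using it to restrict the action space, is given below. It replaces the paper's graph convolutional scorer with a simple distance-based score so the snippet stays self-contained; the instance size, cluster size, and candidate count are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200                                  # TSP instance size (illustrative)
coords = rng.random((n, 2))

# Divide-and-conquer stand-in: score edges inside random m-node subsets and
# average the sub-heatmaps into one global edge-probability heatmap.
m, num_sub = 50, 40
heat = np.zeros((n, n))
counts = np.zeros((n, n))
for _ in range(num_sub):
    idx = rng.choice(n, size=m, replace=False)
    d = np.linalg.norm(coords[idx, None] - coords[None, idx], axis=-1)
    sub = np.exp(-d / d.mean())          # higher score = more promising edge
    heat[np.ix_(idx, idx)] += sub
    counts[np.ix_(idx, idx)] += 1
heat = np.divide(heat, counts, out=np.zeros_like(heat), where=counts > 0)

# Heatmap-assisted action restriction: from the current city, the policy only
# chooses among the k most promising unvisited neighbours.
def candidate_actions(current, visited, k=10):
    scores = heat[current].copy()
    scores[list(visited)] = -np.inf
    return np.argsort(scores)[-k:]

print(candidate_actions(0, {0})[:5])
```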

13 pages, 3858 KB  
Article
Time Series Prediction of Open Quantum System Dynamics by Transformer Neural Networks
by Zhao-Wei Wang, Lian-Ao Wu and Zhao-Ming Wang
Entropy 2026, 28(2), 133; https://doi.org/10.3390/e28020133 - 23 Jan 2026
Viewed by 144
Abstract
The dynamics of open quantum systems play a crucial role in quantum information science. However, obtaining numerically exact solutions for the Lindblad master equation is often computationally expensive. Recently, machine learning techniques have gained considerable attention for simulating open quantum system dynamics. In this paper, we propose a deep learning model based on time series prediction (TSP) to forecast the dynamical evolution of open quantum systems. We employ the positive operator-valued measure (POVM) approach to convert the density matrix of the system into a probability distribution and construct a TSP model based on Transformer neural networks. This model effectively captures the historical evolution patterns of the system and accurately predicts its future behavior. Our results show that the model achieves high-fidelity predictions of the system’s evolution trajectory in both short- and long-term scenarios, and exhibits robust generalization under varying initial states and coupling strengths. Moreover, we successfully predicted the steady-state behavior of the system, further proving the practicality and scalability of the method. Full article
(This article belongs to the Special Issue Non-Markovian Open Quantum Systems)
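The POVM step that turns a density matrix into the probability vector consumed by the sequence model can be made concrete with a qubit example. The tetrahedral (SIC) POVM and the |0><0| test state below are assumptions chosen for illustration; the paper's specific system and measurement set are not reproduced.

```python
import numpy as np

# Pauli matrices and the four tetrahedral Bloch vectors of a qubit SIC-POVM.
I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
vecs = np.array([[1, 1, 1], [1, -1, -1], [-1, 1, -1], [-1, -1, 1]]) / np.sqrt(3)
povm = [0.25 * (I2 + v[0] * sx + v[1] * sy + v[2] * sz) for v in vecs]

# Example density matrix |0><0|; any valid rho works here.
rho = np.array([[1, 0], [0, 0]], dtype=complex)

# POVM probabilities p_a = Tr(M_a rho): this probability vector is what a
# time-series model would consume in place of the raw density matrix.
probs = np.real([np.trace(M @ rho) for M in povm])
print(probs, probs.sum())   # four non-negative entries summing to 1
```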

28 pages, 905 KB  
Article
An Explainable Voting Ensemble Framework for Early-Warning Forecasting of Corporate Financial Distress
by Lersak Phothong, Anupong Sukprasert, Sutana Boonlua, Prapaporn Chubsuwan, Nattakron Seetha and Rotcharin Kunsrison
Forecasting 2026, 8(1), 10; https://doi.org/10.3390/forecast8010010 - 23 Jan 2026
Viewed by 235
Abstract
Accurate early-warning forecasting of corporate financial distress remains a critical challenge due to nonlinear financial relationships, severe data imbalance, and the high operational costs of false alarms in risk-monitoring systems. This study proposes an explainable voting ensemble framework for early-warning forecasting of corporate financial distress using lagged accounting-based financial information. The proposed framework integrates heterogeneous base learners, including Decision Tree, Neural Network, and k-Nearest Neighbors models, and is evaluated using financial statement data from 752 publicly listed firms in Thailand, comprising sixteen financial ratios across six dimensions: liquidity, operating efficiency, debt management, profitability, earnings quality, and solvency. To ensure robustness under imbalanced and rare-event conditions, the study employs feature selection, data normalization, stratified cross-validation, resampling techniques, and repeated validation procedures. Empirical results demonstrate that the proposed Voting Ensemble delivers a precision-oriented and decision-relevant forecasting profile, outperforming classical classifiers and maintaining greater early-warning reliability when benchmarked against advanced tree-based ensemble models. Probability-based evaluation further confirms the robustness and calibration stability of the proposed framework under repeated cross-validation. By adopting a forward-looking, early-warning perspective and integrating ensemble learning with explainable machine learning principles, this study offers a transparent and scalable approach to financial distress forecasting. The findings offer practical implications for auditors, investors, and regulators seeking reliable early-warning tools for corporate risk assessment, particularly in emerging market environments characterized by data imbalance and heightened uncertainty. Full article
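As a rough illustration of the heterogeneous soft-voting idea (Decision Tree, neural network, and k-NN base learners), a minimal scikit-learn sketch is shown below. The synthetic imbalanced dataset, hyperparameters, and scoring choice are assumptions and do not reproduce the paper's sixteen financial ratios or its resampling pipeline.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeClassifier

# Synthetic imbalanced stand-in for distressed vs. healthy firms.
X, y = make_classification(n_samples=800, n_features=16, weights=[0.9, 0.1], random_state=0)

ensemble = VotingClassifier(
    estimators=[
        ("tree", DecisionTreeClassifier(max_depth=5, random_state=0)),
        ("mlp", make_pipeline(StandardScaler(), MLPClassifier(max_iter=1000, random_state=0))),
        ("knn", make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=7))),
    ],
    voting="soft",   # average predicted probabilities across the base learners
)
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
print(cross_val_score(ensemble, X, y, cv=cv, scoring="average_precision").mean())
```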

23 pages, 3958 KB  
Article
Performance of the Novel Reactive Access-Barring Scheme for NB-IoT Systems Based on the Machine Learning Inference
by Anastasia Daraseliya, Eduard Sopin, Julia Kolcheva, Vyacheslav Begishev and Konstantin Samouylov
Sensors 2026, 26(2), 636; https://doi.org/10.3390/s26020636 - 17 Jan 2026
Viewed by 190
Abstract
Modern 5G+ grade low power wide area network (LPWAN) technologies such as Narrowband Internet-of-Things (NB-IoT) operate using a multi-channel slotted ALOHA algorithm at the random access phase. As a result, the random access phase in such systems is characterized by relatively low throughput and is highly sensitive to traffic fluctuations that could lead the system outside of its stable operational regime. Although theoretical results specifying the optimal transmission probability that maximizes the successful preamble transmission probability are well known, the lack of knowledge about the current offered traffic load at the base station (BS) makes maintaining the optimal throughput a challenging task. In this paper, we propose and analyze a new reactive access-barring scheme for NB-IoT systems based on machine learning (ML) techniques. Specifically, we first demonstrate that knowing the number of user equipments (UEs) experiencing a collision at the BS is sufficient to draw conclusions about the current offered traffic load. Then, we show that, by utilizing ML-based techniques, one can reliably differentiate between events in the Physical Random Access Channel (PRACH) at the BS side based only on the signal-to-noise ratio (SNR). Finally, we mathematically characterize the delay experienced under the proposed reactive access-barring technique. In our numerical results, we show that by utilizing modern machine learning approaches, such as the XGBoost classifier, one can differentiate between events on the PRACH with accuracy reaching 0.98 and then associate them with the number of UEs competing at the random access phase. Our simulation results show that the proposed approach can keep the successful preamble transmission probability approximately constant at 0.3 in overloaded conditions, whereas for conventional NB-IoT access this value is less than 0.05. The proposed scheme achieves near-optimal throughput in multi-channel ALOHA by employing dynamic traffic awareness to adjust the non-unit transmission probability. This proactive congestion control ensures a controlled and bounded delay even as the offered load approaches the system's maximum capacity. Full article
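The access-barring idea rests on the classical result that, for multi-channel slotted ALOHA with N contending UEs and K preambles, per-attempt success is maximized when the transmission probability is throttled to roughly K/N. The sketch below simulates that effect with illustrative numbers (200 UEs, 54 preambles); it is a stylized stand-in, not the paper's system model or its ML-based load estimator.

```python
import numpy as np

rng = np.random.default_rng(0)

def preamble_success_prob(n_ue, k_preambles, p_tx, trials=10_000):
    """Fraction of transmitted preambles chosen by exactly one UE (no collision)."""
    succ, attempts = 0, 0
    for _ in range(trials):
        active = int((rng.random(n_ue) < p_tx).sum())          # UEs allowed to transmit
        counts = np.bincount(rng.integers(0, k_preambles, size=active), minlength=k_preambles)
        succ += int((counts == 1).sum())
        attempts += active
    return succ / max(attempts, 1)

n_ue, k = 200, 54                # heavily overloaded PRACH, 54 preambles (illustrative)
for p in (1.0, k / n_ue):        # conventional access vs. load-aware barring p = K/N
    print(f"p_tx = {p:.2f}: successful preamble transmission probability = "
          f"{preamble_success_prob(n_ue, k, p):.3f}")
```

With everyone transmitting (p = 1) the per-attempt success probability collapses toward a few percent, while throttling to p = K/N keeps it close to the well-known e^-1 optimum, which is the regime the reactive barring scheme tries to maintain.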

34 pages, 2968 KB  
Article
Emergency Regulation Method Based on Multi-Load Aggregation in Rainstorm
by Hong Fan, Feng You and Haiyu Liao
Appl. Sci. 2026, 16(2), 952; https://doi.org/10.3390/app16020952 - 16 Jan 2026
Viewed by 137
Abstract
With the rapid development of the Internet of Things (IoT), 5G, and modern power systems, demand-side loads are becoming increasingly observable and remotely controllable, which enables demand-side flexibility to participate more actively in grid dispatch and emergency support. Under extreme rainstorm conditions, however, component failure risk rises and the availability and dispatchability of demand-side flexibility can change rapidly. This paper proposes a risk-aware emergency regulation framework that translates rainstorm information into actionable multi-load aggregation decisions for urban power systems. First, demand-side resources are quantified using four response attributes, including response speed, response capacity, maximum response duration, and response reliability, to enable a consistent characterization of heterogeneous flexibility. Second, a backpropagation (BP) neural network is trained on long-term real-world meteorological observations and corresponding reliability outcomes to estimate regional- or line-level fault probabilities from four rainstorm drivers: wind speed, rainfall intensity, lightning warning level, and ambient temperature. The inferred probabilities are mapped onto the IEEE 30-bus benchmark to identify high-risk areas or lines and define spatial priorities for emergency response. Third, guided by these risk signals, a two-level coordination model is formulated for a load aggregator (LA) to schedule building air conditioning loads, distributed photovoltaics, and electric vehicles through incentive-based participation, and the resulting optimization problem is solved using an adaptive genetic algorithm. Case studies verify that the proposed strategy can coordinate heterogeneous resources to meet emergency regulation requirements and improve the aggregator–user economic trade-off compared with single-resource participation. The proposed method provides a practical pathway for risk-informed emergency regulation under rainstorm conditions. Full article
(This article belongs to the Section Electrical, Electronics and Communications Engineering)
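The fault-probability step, a small backpropagation (MLP) network mapping the four rainstorm drivers to a failure probability, can be sketched as follows. The synthetic weather records, labels, and network size are assumptions standing in for the long-term meteorological data and reliability outcomes described in the abstract.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 2000
X = np.column_stack([
    rng.gamma(2.0, 4.0, n),      # wind speed (m/s)
    rng.gamma(2.0, 10.0, n),     # rainfall intensity (mm/h)
    rng.integers(0, 4, n),       # lightning warning level
    rng.normal(20.0, 6.0, n),    # ambient temperature (degC)
])
# Synthetic ground truth: fault risk rises with wind, rain, and lightning level.
logit = 0.08 * X[:, 0] + 0.03 * X[:, 1] + 0.5 * X[:, 2] - 6.0
y = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

model = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0),
)
model.fit(X, y)

storm = [[25.0, 60.0, 3, 24.0]]          # a single high-risk weather scenario
print(model.predict_proba(storm)[0, 1])  # estimated fault probability for that area/line
```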

21 pages, 830 KB  
Article
Predicting Breast Cancer Mortality Using SEER Data: A Comparative Analysis of L1-Logistic Regression and Neural Networks
by Mayra Cruz-Fernandez, Francisco Antonio Castillo-Velásquez, Carlos Fuentes-Silva, Omar Rodríguez-Abreo, Rafael Rojas-Galván, Marcos Avilés and Juvenal Rodríguez-Reséndiz
Technologies 2026, 14(1), 66; https://doi.org/10.3390/technologies14010066 - 15 Jan 2026
Viewed by 234
Abstract
Breast cancer remains a leading cause of mortality among women worldwide, motivating the development of transparent and reproducible risk models for clinical decision making. Using the open-access SEER Breast Cancer dataset (November 2017 release), we analyzed 4005 women diagnosed between 2006 and 2010 with infiltrating duct and lobular carcinoma (ICD-O-3 8522/3). Thirty-one clinical and demographic variables were preprocessed with one-hot encoding and z-score standardization, and the lymph node ratio was derived to characterize metastatic burden. Two supervised models, L1-regularized logistic regression and a feedforward artificial neural network, were compared under identical preprocessing, fixed 60/20/20 data splits, and stratified five-fold cross-validation. To define clinically meaningful endpoints and handle censoring, we reformulated mortality prediction as fixed-horizon classification at 3 and 5 years, and evaluated discrimination, calibration, and operating thresholds. Logistic regression demonstrated consistently strong performance, achieving test ROC-AUC values of 0.78 at 3 years and 0.75 at 5 years, with substantially superior calibration (Brier score less than or equal to 0.12, ECE less than or equal to 0.03). A structured hyperparameter search with repeated-seed evaluation identified optimal neural network architectures for each horizon, yielding test ROC-AUC values of 0.74 at 3 years and 0.73 at 5 years, but with markedly poorer calibration (ECE 0.19 to 0.23). Bootstrap analysis showed no significant AUC difference between models at 3 years, but logistic regression exhibited greater stability across folds and lower sensitivity to feature pruning. Overall, L1-regularized logistic regression provides competitive discrimination (ROC-AUC 0.75 to 0.78), markedly superior probability calibration (ECE below 0.03 versus 0.19 to 0.23 for the neural network), and approximately 40% lower cross-validation variance, supporting its use for scalable screening, risk stratification, and triage workflows on structured registry data. Full article
(This article belongs to the Section Assistive Technologies)
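The calibration metrics reported above (Brier score and ECE) are straightforward to compute directly; the sketch below shows one common formulation. The equal-width binning and the randomly generated probabilities are assumptions for illustration, not the study's SEER cohort or its models.

```python
import numpy as np

def brier_score(p, y):
    """Mean squared error between predicted probabilities and 0/1 outcomes."""
    return float(np.mean((p - y) ** 2))

def expected_calibration_error(p, y, n_bins=10):
    """Average |observed rate - mean confidence| over equal-width probability bins."""
    bins = np.minimum((p * n_bins).astype(int), n_bins - 1)
    ece = 0.0
    for b in range(n_bins):
        mask = bins == b
        if mask.any():
            ece += mask.mean() * abs(y[mask].mean() - p[mask].mean())
    return float(ece)

rng = np.random.default_rng(0)
p = rng.random(5000)                       # predicted fixed-horizon mortality probabilities
y = (rng.random(5000) < p).astype(float)   # outcomes drawn consistently with p
print(brier_score(p, y), expected_calibration_error(p, y))
```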

16 pages, 336 KB  
Article
Bayesian Neural Networks with Regularization for Sparse Zero-Inflated Data Modeling
by Sunghae Jun
Information 2026, 17(1), 81; https://doi.org/10.3390/info17010081 - 13 Jan 2026
Viewed by 228
Abstract
Zero inflation is pervasive across text mining, event log, and sensor analytics, and it often degrades the predictive performance of analytical models. Classical approaches, most notably the zero-inflated Poisson (ZIP) and zero-inflated negative binomial (ZINB) models, address excess zeros but rely on rigid parametric assumptions and fixed model structures, which can limit flexibility in high-dimensional, sparse settings. We propose a Bayesian neural network (BNN) with regularization for sparse zero-inflated data modeling. The method separately parameterizes the zero inflation probability and the count intensity under ZIP/ZINB likelihoods, while employing Bayesian regularization to induce sparsity and control overfitting. Posterior inference is performed using variational inference. We evaluate the approach through controlled simulations with varying zero ratios and a real-world dataset, and we compare it against Poisson generalized linear models, ZIP, and ZINB baselines. The present study focuses on predictive performance measured by mean squared error (MSE). Across all settings, the proposed method achieves consistently lower prediction error and improved handling of uncertainty, with ablation studies confirming the contribution of the regularization components. These results demonstrate that a regularized BNN provides a flexible and robust framework for sparse zero-inflated data analysis in information-rich environments. Full article
(This article belongs to the Special Issue Feature Papers in Information in 2024–2025)
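The core of the model is a ZIP likelihood whose zero-inflation probability and count intensity are parameterized separately (by network heads in the paper). A minimal numpy version of that likelihood is sketched below; in the BNN, pi and lam would be per-observation network outputs rather than the fixed values assumed here.

```python
import numpy as np
from scipy.special import gammaln

def zip_log_likelihood(y, pi, lam):
    """Zero-inflated Poisson log-likelihood with separately parameterized
    zero-inflation probability pi and count intensity lam."""
    y, pi, lam = np.asarray(y), np.asarray(pi), np.asarray(lam)
    # A zero can be structural (prob. pi) or a Poisson zero (prob. (1-pi)*exp(-lam)).
    log_p_zero = np.log(pi + (1.0 - pi) * np.exp(-lam))
    # Positive counts come only from the Poisson component.
    log_p_count = np.log1p(-pi) - lam + y * np.log(lam) - gammaln(y + 1.0)
    return float(np.sum(np.where(y == 0, log_p_zero, log_p_count)))

y = np.array([0, 0, 0, 0, 3, 0, 1, 0, 0, 2])   # sparse, zero-inflated counts
print(zip_log_likelihood(y, pi=0.6, lam=1.8))
```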

20 pages, 1686 KB  
Article
Spatiotemporal Graph Neural Networks for PM2.5 Concentration Forecasting
by Vongani Chabalala, Craig Rudolph, Karabo Mosala, Edward Khomotso Nkadimeng, Chuene Mosomane, Thuso Mathaha, Pallab Basu, Muhammad Ahsan Mahboob, Jude Kong, Nicola Bragazzi, Iqra Atif, Mukesh Kumar and Bruce Mellado
Air 2026, 4(1), 2; https://doi.org/10.3390/air4010002 - 13 Jan 2026
Viewed by 404
Abstract
Air pollution, particularly fine particulate matter (PM2.5), poses significant public health and environmental risks. This study explores the effectiveness of spatiotemporal graph neural networks (ST-GNNs) in forecasting PM2.5 concentrations by integrating remote-sensing hyperspectral indices with traditional meteorological and pollutant data. The model was evaluated using data from Switzerland and the Gauteng province in South Africa, with datasets spanning from January 2016 to December 2021. Key performance metrics, including root mean squared error (RMSE), mean absolute error (MAE), probability of detection (POD), critical success index (CSI), and false alarm rate (FAR), were employed to assess model accuracy. For Switzerland, the integration of spectral indices improved RMSE from 1.4660 to 1.4591, MAE from 1.1147 to 1.1053, CSI from 0.8345 to 0.8387, POD from 0.8961 to 0.8972, and reduced FAR from 0.0760 to 0.0719. In Gauteng, RMSE decreased from 6.3486 to 6.2319, MAE from 4.4891 to 4.4066, CSI from 0.9555 to 0.9560, and POD from 0.9699 to 0.9732, while FAR slightly increased from 0.0154 to 0.0181. Error analysis revealed that while the initial one-day ahead forecast without spectral indices had a marginally lower error, the dataset with spectral indices outperformed from the two-day ahead mark onwards. The error for Swiss monitoring stations stabilized over longer prediction lengths, indicating the robustness of the spectral indices for extended forecasts. The study faced limitations, including the exclusion of the Planetary Boundary Layer (PBL) height and K-index, lack of terrain data for South Africa, and significant missing data in remote sensing indices. Despite these challenges, the results demonstrate that ST-GNNs, enhanced with hyperspectral data, provide a more accurate and reliable tool for PM2.5 forecasting. Future work will focus on expanding the dataset to include additional regions and further refining the model by incorporating additional environmental variables. This approach holds promise for improving air quality management and mitigating health risks associated with air pollution. Full article
(This article belongs to the Special Issue Air Pollution Exposure and Its Impact on Human Health)
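The categorical metrics used in the evaluation (POD, FAR, CSI) come from a simple exceedance contingency table; a short sketch is given below. The 35 ug/m3 threshold and the example values are assumptions for illustration, not the study's data.

```python
import numpy as np

def event_scores(obs, pred, threshold=35.0):
    """POD, FAR and CSI for exceedances of a PM2.5 threshold (ug/m3)."""
    o = np.asarray(obs) >= threshold
    p = np.asarray(pred) >= threshold
    hits = np.sum(o & p)
    misses = np.sum(o & ~p)
    false_alarms = np.sum(~o & p)
    pod = hits / max(hits + misses, 1)            # fraction of events that were forecast
    far = false_alarms / max(hits + false_alarms, 1)  # fraction of forecasts that were wrong
    csi = hits / max(hits + misses + false_alarms, 1) # overall skill for rare events
    return pod, far, csi

obs = [12, 40, 55, 20, 38, 70, 15, 33]
pred = [14, 42, 49, 25, 30, 66, 18, 36]
print(event_scores(obs, pred))
```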

23 pages, 1141 KB  
Article
Randomized Algorithms and Neural Networks for Communication-Free Multiagent Singleton Set Cover
by Guanchu He, Colton Hill, Joshua H. Seaton and Philip N. Brown
Games 2026, 17(1), 3; https://doi.org/10.3390/g17010003 - 12 Jan 2026
Viewed by 240
Abstract
This paper considers how a system designer can program a team of autonomous agents to coordinate with one another such that each agent selects (or covers) an individual resource with the goal that all agents collectively cover the maximum number of resources. Specifically, we study how agents can formulate strategies without information about other agents’ actions so that system-level performance remains robust in the presence of communication failures. First, we use an algorithmic approach to study the scenario in which all agents lose the ability to communicate with one another, have a symmetric set of resources to choose from, and select actions independently according to a probability distribution over the resources. We show that the distribution that maximizes the expected system-level objective under this approach can be computed by solving a convex optimization problem, and we introduce a novel polynomial-time heuristic based on subset selection. Further, both of the methods are guaranteed to be within 1 - 1/e of the system’s optimal in expectation. Second, we use a learning-based approach to study how a system designer can employ neural networks to approximate optimal agent strategies in the presence of communication failures. The neural network, trained on system-level optimal outcomes obtained through brute-force enumeration, generates utility functions that enable agents to make decisions in a distributed manner. Empirical results indicate the neural network often outperforms greedy and randomized baseline algorithms. Collectively, these findings provide a broad study of optimal agent behavior and its impact on system-level performance when the information available to agents is extremely limited. Full article
(This article belongs to the Section Algorithmic and Computational Game Theory)
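For the symmetric, communication-free setting, the expected number of covered resources under independent randomized play has a closed form, which makes the 1 - 1/e guarantee easy to check numerically. The sketch below assumes equal numbers of agents and resources and a uniform distribution; it is not the paper's convex-optimization or subset-selection procedure.

```python
import numpy as np

def expected_coverage(p, n_agents):
    """Expected number of covered resources when each of n_agents independently
    picks resource r with probability p[r] (communication-free randomized play)."""
    p = np.asarray(p, dtype=float)
    return float(np.sum(1.0 - (1.0 - p) ** n_agents))

n = 20                                    # agents == resources in the symmetric case (assumed)
uniform = np.full(n, 1.0 / n)
print(expected_coverage(uniform, n) / n)  # about 0.64 of resources covered in expectation
print(1.0 - 1.0 / np.e)                   # the 1 - 1/e lower bound, about 0.632
```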

24 pages, 1788 KB  
Article
Uncertainty-Aware Machine Learning for NBA Forecasting in Digital Betting Markets
by Matteo Montrucchio, Enrico Barbierato and Alice Gatti
Information 2026, 17(1), 56; https://doi.org/10.3390/info17010056 - 8 Jan 2026
Viewed by 408
Abstract
This study introduces a fully uncertainty-aware forecasting framework for NBA games that integrates team-level performance metrics, rolling-form indicators, and spatial shot-chart embeddings. The predictive backbone is a recurrent neural network equipped with Monte Carlo dropout, yielding calibrated sequential probabilities. The model is evaluated against strong baselines including logistic regression, XGBoost, convolutional models, a GRU sequence model, and both market-only and non-market-only benchmarks. All experiments rely on strict chronological partitioning (train ≤ 2022, validation 2023, test 2024), ablation tests designed to eliminate any circularity with bookmaker odds, and cross-season robustness checks spanning 2012–2024. Predictive performance is assessed through accuracy, Brier score, log-loss, AUC, and calibration metrics (ECE/MCE), complemented by SHAP-based interpretability to verify that only pre-game information influences predictions. To quantify economic value, calibrated probabilities are fed into a frictionless betting simulator using fractional-Kelly staking, an expected-value threshold, and bootstrap-based uncertainty estimation. Empirically, the uncertainty-aware model delivers systematically better calibration than non-Bayesian baselines and benefits materially from the combination of shot-chart embeddings and recent-form features. Economic value emerges primarily in less-efficient segments of the market: The fused predictor outperforms both market-only and non-market-only variants on moneylines, while spreads and totals show limited exploitable edge, consistent with higher pricing efficiency. Sensitivity studies across Kelly multipliers, EV thresholds, odds caps, and sequence lengths confirm that the findings are robust to modelling and decision-layer perturbations. The paper contributes a reproducible, decision-focused framework linking uncertainty-aware prediction to economic outcomes, clarifying when predictive lift can be monetized in NBA markets, and outlining methodological pathways for improving robustness, calibration, and execution realism in sports forecasting. Full article
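The decision layer described above (fractional-Kelly staking gated by an expected-value threshold) is compact enough to sketch directly. The multiplier, threshold, and odds below are illustrative assumptions, not the values used in the paper's betting simulator.

```python
def stake_fraction(p_win, decimal_odds, kelly_multiplier=0.25, ev_threshold=0.02):
    """Fractional-Kelly stake (share of bankroll) for a calibrated win probability
    and bookmaker decimal odds; bet only if expected value clears the threshold.
    Parameter values here are illustrative assumptions."""
    edge = p_win * decimal_odds - 1.0          # expected value per unit staked
    if edge <= ev_threshold:
        return 0.0
    full_kelly = edge / (decimal_odds - 1.0)   # classic Kelly fraction for a binary bet
    return kelly_multiplier * full_kelly

print(stake_fraction(p_win=0.58, decimal_odds=1.85))   # about 0.021 of bankroll
```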

33 pages, 1141 KB  
Review
The Protonic Brain: Nanoscale pH Dynamics, Proton Wires, and Acid–Base Information Coding in Neural Tissue
by Valentin Titus Grigorean, Catalina-Ioana Tataru, Cosmin Pantu, Felix-Mircea Brehar, Octavian Munteanu and George Pariza
Int. J. Mol. Sci. 2026, 27(2), 560; https://doi.org/10.3390/ijms27020560 - 6 Jan 2026
Viewed by 334
Abstract
Emerging research indicates that neuronal activity is maintained by an architectural system of protons in a multi-scale fashion. Proton architecture is formed when organelles (such as mitochondria, endoplasmic reticulum, lysosomes, synaptic vesicles, etc.) are coupled together to produce dynamic energy domains. Techniques have been developed to visualize protons in neurons; recent advances include near-atomic structural imaging of organelle interfaces using cryo-tomography, nanoscale-resolution imaging of these interfaces, and proton tracking using ultra-fast spectroscopy. Results of these studies indicate that protons in neurons do not diffuse randomly throughout the cell but instead exist in organized geometric configurations. Mitochondrial cristae create oscillating proton micro-domains that are influenced by the curvature of the cristae, hydrogen bonding between molecules, and localized changes in dielectric properties, resulting in time-patterned proton signals that can be used to determine the metabolic load of the cell and the redox state of its mitochondria. These proton patterns also communicate to the rest of the cell via hydrated, aligned proton-conductive pathways at the mitochondria-endoplasmic reticulum junctions, through acidic lipid regions, and through nano-tethered contact sites between mitochondria and other organelles, which are typically spaced approximately 10–25 nm apart. Other proton architectures exist in lysosomes, endosomes, and synaptic vesicles. In each of these organelles, the V-ATPase generates steep concentration gradients across their membranes, controlling the rate of cargo removal from the lumen of the organelle, recycling receptors from the surface of the membrane, and loading neurotransmitters into the vesicles. Recent super-resolution pH mapping has indicated that populations of synaptic vesicles contain significant heterogeneity in the amount of protons they contain, thereby influencing the amount of neurotransmitter released per vesicle, the probability of vesicle release, and the degree of post-synaptic receptor protonation. Additionally, proton gradients in each organelle interact with the cytoskeleton: the protonation status of actin and microtubules influences filament stiffness, protein–protein interactions, and organelle movement, resulting in the formation of localized spatial structures that may possess some type of computational significance. At multiple scales, it appears that neurons integrate the proton micro-domains with mechanical tension fields, dielectric nanodomains, and phase-state transitions to form distributed computing elements whose behavior is determined by the integration of energy flow, organelle geometry, and the organization of soft materials. Alterations to the proton landscape in neurons (e.g., due to alterations in cristae structure, drift in luminal pH, disruption in the hydration structure of the cell, or imbalance in the protonation of cytoskeletal components) could disrupt the intracellular signaling network well before the onset of measurable electrical or biochemical pathologies. This article summarizes evidence indicating that proton–organelle interaction provides a previously unknown source of energetic substrate for neural computation. Using an integrated approach combining nanoscale proton energy, organelle interface geometry, cytoskeletal mechanics, and AI-based multiscale models, this article outlines current principles and unresolved questions related to the subject area, as well as possible new approaches to early detection of and precise intervention in pathological conditions related to altered intracellular energy flow. Full article
(This article belongs to the Special Issue Molecular Synapse: Diversity, Function and Signaling)

40 pages, 2940 KB  
Article
Hybrid GNN–LSTM Architecture for Probabilistic IoT Botnet Detection with Calibrated Risk Assessment
by Tetiana Babenko, Kateryna Kolesnikova, Yelena Bakhtiyarova, Damelya Yeskendirova, Kanibek Sansyzbay, Askar Sysoyev and Oleksandr Kruchinin
Computers 2026, 15(1), 26; https://doi.org/10.3390/computers15010026 - 5 Jan 2026
Viewed by 361
Abstract
Detecting botnets in IoT environments is difficult because most intrusion detection systems treat network events as independent observations. In practice, infections spread through device relationships and evolve through distinct temporal phases. A system that ignores either aspect will miss important patterns. This paper explores a hybrid architecture combining Graph Neural Networks with Long Short-Term Memory networks to capture both structural and temporal dynamics. The GNN component models behavioral similarity between traffic flows in feature space, while the LSTM tracks how patterns change as attacks progress. The two components are trained jointly so that relational context is preserved during temporal learning. We evaluated the approach on two datasets with different characteristics. N-BaIoT contains traffic from nine devices infected with Mirai and BASHLITE, while CICIoT2023 covers 105 devices across 33 attack types. On N-BaIoT, the model achieved 99.88% accuracy with F1 of 0.9988 and Brier score of 0.0015. Cross-validation on CICIoT2023 yielded 99.73% accuracy with Brier score of 0.0030. The low Brier scores suggest that probability outputs are reasonably well calibrated for risk-based decision making. Consistent performance across both datasets provides some evidence that the architecture generalizes beyond a single benchmark setting. Full article
(This article belongs to the Section ICT Infrastructures for Cybersecurity)
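A bare-bones PyTorch sketch of the joint structural-plus-temporal idea is shown below: a similarity-graph propagation step over flows followed by an LSTM over each flow's time window. It is a schematic stand-in, not the authors' architecture; the dimensions, cosine-similarity graph, and single-layer design are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GNNLSTMDetector(nn.Module):
    """Sketch: one similarity-graph convolution per time step, then an LSTM over the window."""
    def __init__(self, in_dim, hid_dim=64):
        super().__init__()
        self.gcn = nn.Linear(in_dim, hid_dim)        # shared weights W in A_norm @ X @ W
        self.lstm = nn.LSTM(hid_dim, hid_dim, batch_first=True)
        self.head = nn.Linear(hid_dim, 1)            # botnet probability per flow

    def forward(self, x):
        # x: (num_flows, window_len, in_dim) -- per-flow feature sequences
        # Behavioural-similarity graph over flows, built from time-averaged features.
        sim = F.cosine_similarity(x.mean(1).unsqueeze(1), x.mean(1).unsqueeze(0), dim=-1)
        adj = torch.softmax(sim, dim=-1)              # row-normalised adjacency
        h = torch.relu(self.gcn(torch.einsum("ij,jtf->itf", adj, x)))  # propagate neighbours
        _, (hn, _) = self.lstm(h)                     # temporal dynamics per flow
        return torch.sigmoid(self.head(hn[-1])).squeeze(-1)

model = GNNLSTMDetector(in_dim=10)
flows = torch.randn(32, 20, 10)                      # 32 flows, 20 time steps, 10 features
print(model(flows).shape)                            # torch.Size([32])
```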

17 pages, 1247 KB  
Article
Development of a Machine Learning-Based Prognostic Model Using Systemic Inflammation Markers in Patients Receiving Nivolumab Immunotherapy: A Real-World Cohort Study
by Ugur Ozkerim, Deniz Isik, Oguzcan Kinikoglu, Sila Oksuz, Yunus Emre Altintas, Goncagul Akdag, Sedat Yildirim, Tugba Basoglu, Heves Surmeli, Hatice Odabas and Nedim Turan
J. Pers. Med. 2026, 16(1), 8; https://doi.org/10.3390/jpm16010008 - 31 Dec 2025
Viewed by 182
Abstract
Background: Systemic inflammation is an essential factor in shaping the tumor microenvironment and influences patient response to immune checkpoint inhibitors. Although interest in inflammatory biomarkers is growing, their predictive value for response to nivolumab in clinical practice remains poorly understood. The objective of this research was to design and assess a multi-algorithm machine learning (ML) model based on routine systemic inflammation measurements to forecast treatment response to nivolumab. Methods: A retrospective real-world cohort of 177 nivolumab-treated patients was analyzed. Baseline inflammatory biomarkers, including neutrophils, lymphocytes, platelets, CRP, LDH, and albumin, were collected, and derived indices (NLR, PLR, SII) were computed. After preprocessing, five ML models (Logistic Regression, Random Forest, Gradient Boosting, Support Vector Machine, and Neural Network) were trained and tested on a 70/30 stratified split. Accuracy, AUC, precision, recall, F1-score, and Brier score were used to evaluate predictive performance. Model interpretability was analyzed using feature-importance ranking and SHAP. Results: Gradient Boosting achieved the best discriminative performance (AUC = 0.816), whereas Support Vector Machine achieved the best overall predictive profile (accuracy = 0.833; F1 = 0.909; recall = 1.00; Brier score = 0.134). CRP and LDH were the most influential predictors across all models, followed by neutrophils and platelets. SHAP analysis confirmed that high CRP and LDH strongly pushed predictions toward non-response, whereas higher lymphocyte levels modestly increased the predicted response probability. Conclusions: Machine learning models based on common systemic inflammatory markers provide useful predictive information about nivolumab response. Their discriminative ability is moderate, but the strong performance of the SVM and Gradient Boosting models highlights the potential of inflammation-based ML tools for personalized immunotherapy decisions. Combining clinical, radiomic, and molecular biomarkers in future work could further improve predictive performance and clinical utility. Full article
(This article belongs to the Section Disease Biomarkers)
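The feature-importance step can be illustrated with permutation importance, used here as a simple stand-in for the paper's ranking and SHAP analysis. The synthetic cohort below (in which CRP and LDH carry most of the signal by construction) is an assumption for illustration only.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
features = ["CRP", "LDH", "neutrophils", "lymphocytes", "platelets", "albumin"]
X = rng.normal(size=(177, len(features)))
# Synthetic labels in which CRP and LDH dominate the response signal.
y = (0.9 * X[:, 0] + 0.7 * X[:, 1] - 0.3 * X[:, 3] + rng.normal(0, 1, 177)) > 0

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)

# Drop in AUC when each feature is shuffled: a model-agnostic importance ranking.
imp = permutation_importance(model, X_te, y_te, scoring="roc_auc", n_repeats=20, random_state=0)
for name, score in sorted(zip(features, imp.importances_mean), key=lambda t: -t[1]):
    print(f"{name:12s} {score:.3f}")
```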

67 pages, 7998 KB  
Article
Neural Network Method for Detecting UDP Flood Attacks in Critical Infrastructure Microgrid Protection Systems with Law Enforcement Agencies’ Rapid Response
by Serhii Vladov, Łukasz Ścisło, Anatoliy Sachenko, Jan Krupiński, Victoria Vysotska, Maksym Korniienko, Oleh Uhrovetskyi, Vyacheslav Krykun, Kateryna Levchenko and Alina Sachenko
Energies 2026, 19(1), 209; https://doi.org/10.3390/en19010209 - 30 Dec 2025
Viewed by 360
Abstract
This article develops a hybrid neural network method for detecting UDP flooding in critical infrastructure microgrid protection systems. The method combines sequential statistics (CUSUM) and a multimodal convolutional 1D-CNN architecture with a composite scoring criterion. Input features are generated using packet-aggregated one-minute vectors with metrics for packet count, average size, source entropy, and the HHI concentration index, as well as compact sketches of top sources. To ensure forensically relevant incident recording, a greedy artefact selection policy based on the knapsack problem with a limited forensic buffer is implemented. The developed method is theoretically justified using a likelihood ratio criterion and adaptive threshold tuning, which ensures control over the false alarm probability. Experimental validation on traffic datasets demonstrated high efficiency, with an overall accuracy of 98.7%, a sensitivity of 97.4%, an average model inference time of 5.3 ms (2.5 times faster than its LSTM counterpart), a controlled FPR of 0.96%, and a reduction in asymptotic detection latency from 35 s to 12 s as attack intensity increases. Moreover, with a storage budget of 10 MB, 28 priority bins were selected (totaling 7.39 MB), preserving approximately 85% of the most informative packets for subsequent examination. The contribution of this research is a ready-to-deploy, resource-efficient detector with low latency, explainable statistical layers, and a built-in mechanism for generating a standardized evidence package to facilitate rapid law enforcement response. Full article
(This article belongs to the Special Issue Cyber Security in Microgrids and Smart Grids—2nd Edition)
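The sequential-statistics layer (CUSUM) is simple to sketch on per-minute packet counts. The nominal rate, slack, and threshold below are illustrative assumptions, not the paper's tuned parameters, and the composite 1D-CNN scoring is not reproduced.

```python
import numpy as np

def cusum_alarm(x, mu0, k, h):
    """One-sided CUSUM: accumulate positive drift above the nominal mean mu0
    (minus slack k) and alarm once the statistic exceeds threshold h."""
    s, alarms = 0.0, []
    for xt in x:
        s = max(0.0, s + xt - mu0 - k)
        alarms.append(s > h)
    return np.array(alarms)

rng = np.random.default_rng(0)
normal = rng.poisson(100, 60)              # one hour of benign per-minute packet counts
flood = rng.poisson(160, 30)               # UDP flood raises the per-minute rate
alarms = cusum_alarm(np.concatenate([normal, flood]), mu0=100, k=15, h=120)
print(int(np.argmax(alarms)) - 60, "minutes after flood onset")  # detection delay
```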
