Search Results (4,139)

Search Parameters:
Keywords = neural connectivity

19 pages, 1471 KB  
Article
Proposal for an Expanded Classification of the Superficial Musculoaponeurotic System (SMAS) in the Human Forehead, Based on Anatomical and Microscopic Study
by Yuriy L. Vasil’ev, Olesya Kytko, Elena O. Bakhrushina, Irina Smilyk, Pavel Sarygin and Dmitriy Kalinin
Life 2026, 16(5), 765; https://doi.org/10.3390/life16050765 (registering DOI) - 2 May 2026
Abstract
Introduction. The superficial musculoaponeurotic system (SMAS) is fundamental for facial soft tissue support and surgical rejuvenation. Although its morphology in the midface and neck is well characterized, the structure of its cranial extension to the forehead remains a subject of terminological uncertainty. The aim of this study was to conduct a detailed histological and immunohistochemical examination of the forehead supporting structures to characterize their morphology and propose an expanded, region-specific classification of the SMAS. Material and methods. Full-thickness tissue specimens (n = 30) were obtained from five standardized facial regions (parotid, buccal, temporal, frontal, and cervical) from 12 male and 18 female body donors (aged 25–70 years). Specimens were processed for histological analysis using hematoxylin and eosin, van Gieson staining, and Masson’s trichrome. Immunohistochemical staining for S100 protein was used to identify neural structures. Morphometric analysis was performed on digitized sections to quantify interseptal distances and the depth of superficial nerve trunks. Results. The analysis confirmed the established SMAS types (I–V) in the cheek, parotid gland, and neck, confirming the validity of the method. Two distinct, sequentially arranged structures were identified on the forehead and are proposed as new types. Type VI (neurovascular arborization) comprises discrete fan-shaped structures, each with a central collagen core surrounding a neurovascular bundle, showing positive S100 staining. These structures, spaced approximately 2.2 mm apart, function as true retaining ligaments. The type VII (fibroseptal) SMAS pattern consists of vertically oriented, purely fibrous septa (retinacula cutis) connecting the aponeurosis to the dermis, devoid of neural elements, and spaced approximately 9.2 mm apart. Importantly, the superficial nerve trunks were located at an average depth of only 1.09 mm (range: 0.57–1.97 mm) from the skin surface. Conclusion.
This study identified two novel SMAS patterns in the forehead—neurovascular arborization (type VI) and fibroseptal (type VII)—supporting the expanded functional seven-type classification of the SMAS. The extremely superficial location of the forehead nerves (average 1.1 mm) defines a critical “danger zone” for aesthetic procedures. These findings provide a refined anatomical basis for improving the precision and safety of both surgical and minimally invasive facial procedures. Full article
(This article belongs to the Section Physiology and Pathology)
20 pages, 4185 KB  
Article
A Deep Learning Method Integrating Meteorological Data for Heavy Precipitation Nowcasting in the Alps Region
by Yilin Mu, Jiahe Liu, Yang Li and Ruidong Zhang
Appl. Sci. 2026, 16(9), 4481; https://doi.org/10.3390/app16094481 (registering DOI) - 2 May 2026
Abstract
Forecasting short-term heavy precipitation is crucial for the early warning of disasters such as flash floods, landslides, and urban flooding. However, under complex topographic conditions, traditional numerical forecasts still fall short in capturing high-resolution heavy precipitation events, and conventional radar extrapolation methods struggle to accurately characterize the nonlinear evolution of weather systems during advection, deformation, and intensity adjustment processes. To address the challenge of short-term heavy rainfall forecasting in high-altitude, complex terrain, this paper proposes Nowcast with Flow-Net (Nwf-Net), a short-term precipitation forecasting framework that integrates deep learning with multi-source meteorological data. This framework consists of a Morphological Evolution Track Module (MET) and a Rainfall Intensity Correction Module (RIC) connected in series: the former combines upper-air wind fields with traditional optical flow algorithms to jointly characterize the displacement of and morphological changes in radar echoes; the latter utilizes a deep recurrent neural network to correct the intensity of forecast results, thereby enhancing the model’s ability to characterize the evolution of strong convective echoes. Experiments in the Alpine region demonstrate that Nwf-Net achieves CSI, HSS, and F1 scores of 0.392, 0.506, and 0.546, respectively, at 32 dBZ. These results outperform those of traditional numerical models and some mainstream models, indicating that Nwf-Net can accurately capture multiscale severe convective information and consistently generate precise forecasts. Full article
(This article belongs to the Section Earth Sciences)
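As an aside on the metrics quoted above: CSI, HSS, and F1 at a 32 dBZ threshold are standard contingency-table verification scores. A minimal sketch, with hits a, false alarms b, misses c, and correct negatives d from thresholded forecast/observation pairs (the variable names are generic, not from the paper):

```python
def csi(a, b, c, d):
    # Critical Success Index: hits over all forecast-or-observed events.
    return a / (a + b + c)

def hss(a, b, c, d):
    # Heidke Skill Score: accuracy improvement over random chance.
    return 2 * (a * d - b * c) / ((a + c) * (c + d) + (a + b) * (b + d))

def f1(a, b, c, d):
    # Harmonic mean of precision a/(a+b) and recall a/(a+c).
    precision, recall = a / (a + b), a / (a + c)
    return 2 * precision * recall / (precision + recall)
```

So the paper's CSI of 0.392 at 32 dBZ means roughly 39% of all pixels that were forecast or observed above the threshold were hits.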
21 pages, 8939 KB  
Article
Enhancing Battery Consistency Through Physics-Machine Learning Integration: A Calendering Process-Oriented Optimization Strategy
by Wenhao Zhu, Yankun Liao, Gang Wu and Fei Lei
Energies 2026, 19(9), 2186; https://doi.org/10.3390/en19092186 - 30 Apr 2026
Viewed by 88
Abstract
Manufacturing tolerances inevitably induce cell-to-cell inconsistencies. These inconsistent cells are connected in series and parallel to form battery packs, which affects the safety and reliability of the battery system. This study presents a novel optimization framework integrating a multi-level physical model with machine learning to improve battery consistency from the manufacturing perspective. The multi-level physical modeling approach is applied to establish the link between parameter deviations of the calendering process and battery inconsistency. Based on the multi-level physical model, the Monte Carlo method is used to describe parameter deviations and generate datasets of electrochemical properties. The coefficients of variation in battery capacity and resistance are calculated from these datasets as the consistency evaluation index. The proposed optimization approach applies machine learning to reduce the computational cost arising from the large number of Monte Carlo runs of the multi-level physical simulations. Combining the multi-level physical model with the neural network model, the multi-objective particle swarm optimization algorithm is adopted to provide the optimal calendering process parameter deviations by trading off battery consistency against manufacturing cost. Results indicate that battery consistency is improved by controlling the precision of the calendering process together with manufacturing cost. This approach can effectively provide feedback and guidance for the inverse design of the manufacturing process. Full article
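The consistency index described above, a coefficient of variation (CV) of cell capacity over Monte Carlo draws of a process parameter, can be sketched as follows. The linear "thickness to capacity" mapping and every number here are illustrative stand-ins, not the paper's multi-level physical model:

```python
import random
import statistics

def capacity_cv(nominal=50.0, sigma=0.5, n=10_000, seed=42):
    # Monte Carlo draws of a calendering parameter (a made-up coating
    # thickness, micrometers) pushed through a stand-in linear capacity
    # model; the CV of the resulting capacities is the consistency index.
    rng = random.Random(seed)
    capacities = [3.0 + 0.02 * rng.gauss(nominal, sigma) for _ in range(n)]
    return statistics.pstdev(capacities) / statistics.fmean(capacities)
```

Tightening the process precision (smaller sigma) lowers the CV, which is exactly the trade-off the optimization balances against manufacturing cost.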
33 pages, 4775 KB  
Article
Neural Network-Augmented Actuation Control System Designed for Path Tracking of Autonomous Underwater-Transportation Systems Under Sensor and Process Noise
by Faheem Ur Rehman, Syed Muhammad Tayyab, Hammad Khan, Aijun Li and Paolo Pennacchi
Actuators 2026, 15(5), 246; https://doi.org/10.3390/act15050246 - 30 Apr 2026
Viewed by 107
Abstract
Underwater-transportation systems have significant potential for both military and commercial applications. Neural Network (NN)-based control offers enhanced robustness for actuators to manage the states of autonomous underwater-transportation systems, which include Rigid-Connection Transportation Systems (RCTSs), Flexible-Connection Transportation Systems (FCTSs) and Leader–Follower-Formation Control Transportation Systems (LFFCTSs). In this study, NN-Augmented Control (NNAC) is applied to these three transportation systems to enable accurate path tracking by the actuators installed onboard, both under ideal operating conditions and in the presence of sensor and process noise. The Extended Kalman Filter (EKF) is employed to estimate the system states under noisy conditions. The results demonstrate that NNAC provides robust and adaptive control of actuators, achieving efficient trajectory tracking despite sensor and process noise disturbances. NNAC also outperformed a conventional PID controller. Among the transportation configurations under the NNAC strategy, the RCTS exhibited the highest tracking accuracy with the lowest actuator power consumption. The power consumption of actuators installed on the LFFCTS was marginally higher than that of the RCTS; however, the translational motion accuracy of the follower vehicle in the LFFCTS was the lowest due to indirect actuation control through the formation controller. Actuators in the FCTS showed the highest power consumption with comparatively low motion accuracy, attributed to the increased complexity of its dynamic positioning requirements. Full article
(This article belongs to the Special Issue Fault Diagnosis and Prognosis in Actuators)
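The PID baseline that NNAC is compared against is a standard discrete controller. A minimal sketch, with arbitrary gains and time step rather than the paper's tuning:

```python
class PID:
    # Textbook discrete PID: u = Kp*e + Ki*integral(e) + Kd*de/dt.
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative
```

Driving a simple integrator plant (x += u * dt) toward a setpoint shows the controller settling; the paper's vehicles, of course, have far richer hydrodynamics.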
34 pages, 13121 KB  
Article
Mortality Forecasting Using LSTM-CNN Model
by Ning Zhang, Jingyang Chen, Hao Chen and Jingzhen Liu
Axioms 2026, 15(5), 324; https://doi.org/10.3390/axioms15050324 - 29 Apr 2026
Viewed by 92
Abstract
Accurate mortality prediction is essential to actuarial practice as it is directly linked to insurance pricing, reserving, and the management of longevity risk. This study proposes a deep neural network (DNN) model for the mortality rates of multiple populations; it is composed of long short-term memory (LSTM) and convolutional neural network (CNN) components. As mortality trends evolve over long time horizons, and as capturing the complex dependencies among mortality rates across countries or regions with a linear model is challenging, the LSTM and CNN were applied to mortality modeling. The former can automatically learn long-term dependencies of sequential data, whereas the latter can extract local features from grid or sequential data. Formulated as a nonlinear generalization of the Lee–Carter decomposition, the model maps the log-mortality matrix log M to future log m(x,t) end-to-end and generates multi-step forecasts through dynamic recursive prediction. Then, the DNN and baseline models were used to fit mortality data of 21 countries from the Human Mortality Database (HMD), which were divided into training and test sets with the year 2000 as the split point. Extensive numerical experiments from the perspectives of accuracy, stability, and reliability of long-term forecasting revealed that DNN models yield better predictive performance, particularly the LSTM-CNN model. It combines the LSTM, CNN, and fully connected network (FCN) layers and thus exploits each deep neural network to fit nonlinear age, period, and cohort effects as well as their interactive terms to achieve better predictive performance. However, the CNN still outperformed other models for certain groups. In addition, the conclusions hold for remaining life expectancy. Full article
(This article belongs to the Special Issue Financial Mathematics and Econophysics)
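The Lee–Carter decomposition that the LSTM-CNN model generalizes writes log m(x,t) = a_x + b_x * k_t. A minimal fit on an exact rank-1 synthetic matrix (the paper fits HMD data): a_x is the row mean, and b_x, k_t come from the dominant singular pair of the centered matrix, found here by power iteration with the usual constraints sum(b_x) = 1 and sum(k_t) = 0:

```python
def lee_carter(log_m):
    # log_m: list of age rows, each a list over calendar years.
    nx, nt = len(log_m), len(log_m[0])
    a = [sum(row) / nt for row in log_m]                 # age pattern a_x
    c = [[log_m[x][t] - a[x] for t in range(nt)] for x in range(nx)]
    v = [float(t + 1) for t in range(nt)]                # initial k-direction guess
    for _ in range(100):                                 # power iteration on c^T c
        u = [sum(c[x][t] * v[t] for t in range(nt)) for x in range(nx)]
        w = [sum(c[x][t] * u[x] for x in range(nx)) for t in range(nt)]
        norm = sum(wi * wi for wi in w) ** 0.5
        v = [wi / norm for wi in w]
    u = [sum(c[x][t] * v[t] for t in range(nt)) for x in range(nx)]
    scale = sum(u)                                       # enforce sum(b_x) = 1
    b = [ux / scale for ux in u]
    k = [vt * scale for vt in v]
    return a, b, k
```

Classical Lee–Carter then forecasts by extrapolating k_t (e.g. a random walk with drift); the paper replaces this linear structure with the LSTM-CNN mapping.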
31 pages, 5607 KB  
Article
A Causality-Guided Graph Framework for National AI Competitiveness Assessment, Forecasting, and Multi-Objective Fund Allocation
by Xuexin Sun, Weizhi Zhang, Yiteng Li, Jingchuan Zhang, Xinran Wang, Jianfei Pan and Xianpeng Wang
Mathematics 2026, 14(9), 1502; https://doi.org/10.3390/math14091502 - 29 Apr 2026
Viewed by 95
Abstract
As artificial intelligence (AI) increasingly reshapes the global technological and economic landscape, understanding and forecasting national AI competitiveness has become an important yet challenging task. Unlike conventional Analytic Hierarchy Process (AHP)–Entropy-based evaluation methods and machine learning approaches that treat indicators as isolated or weakly connected features, this study proposes an integrated framework that explicitly represents inter-indicator dependencies as a structured global topology. Based on an Input–Process–Output–Environment (IPOE) system with 24 indicators for 10 major economies during 2016–2025, AHP–Entropy, XGBoost, Design of Experiments (DOE), and Bayesian networks are combined to identify dependency pathways among indicators. These structural relations are embedded into a graph neural network (GNN) for competitiveness assessment, while a Dynamic GNN-ARIMA module is developed to project future competitiveness trajectories under limited samples. Building on these projections, a multi-objective fund allocation optimization model is constructed and solved via the NSGA-II algorithm to reduce policy volatility while maintaining future AI competitiveness with a strategic investment of RMB 500 billion. Results show that the U.S. remains the clear leader, followed by China, while mid-tier economies show noticeable reshuffling. Under the Min-Variance strategy with the investment, China is projected to significantly narrow the gap with the United States, reaching a comparable level of competitiveness. Full article
(This article belongs to the Section E1: Mathematics and Computer Science)
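The AHP–Entropy step referenced above is commonly built on the entropy weight method: column-normalize the indicator matrix, compute each indicator's information entropy across economies, and weight indicators by their divergence (1 − entropy). A minimal sketch; the tiny matrix in the test is made up, whereas the paper uses 24 indicators for 10 economies:

```python
import math

def entropy_weights(matrix):
    # matrix[i][j]: positive value of indicator j for economy i.
    n, m = len(matrix), len(matrix[0])
    raw = []
    for j in range(m):
        col = [matrix[i][j] for i in range(n)]
        total = sum(col)
        p = [x / total for x in col]
        # Information entropy, normalized to [0, 1] by log(n).
        e = -sum(pi * math.log(pi) for pi in p if pi > 0) / math.log(n)
        raw.append(1 - e)                    # divergence: higher = more informative
    s = sum(raw)
    return [w / s for w in raw]
```

An indicator that is identical across all economies carries no discriminating information and receives (numerically) zero weight.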
30 pages, 8060 KB  
Article
Modeling and Optimization of Deep and Machine Learning Methods for Credit Card Fraud Risk Management
by Slavi Georgiev, Maya Markova, Vesela Mihova and Venelin Todorov
Mathematics 2026, 14(9), 1496; https://doi.org/10.3390/math14091496 - 29 Apr 2026
Viewed by 247
Abstract
As digital payment infrastructures expand, the incidence of card-not-present fraud has become a major source of operational and financial risk for banks, payment processors, and merchants. In response, financial institutions increasingly rely on data-driven decision systems, yet fraudsters continuously adapt their strategies to evade conventional rule-based controls. A promising way to strengthen risk management is to model transactional data so as to uncover non-trivial, high-dimensional patterns characteristic of fraudulent behavior and to embed these models into real-time decision pipelines. In this work, we develop and compare a suite of learning-based fraud detectors, including a convolutional neural network and several machine learning classifiers, within a unified quantitative risk-management framework. The problem is formulated as a supervised classification task within a quantitative risk management framework, where the cost of missed fraud is particularly critical. The mathematical contribution is methodological rather than architectural: we design a leakage-safe and prevalence-faithful evaluation protocol for extremely imbalanced binary classification, combine cross-validated hyperparameter optimization with risk-aligned model selection based on metrics such as recall and Matthews correlation coefficient, and quantify uncertainty by bootstrap confidence intervals and paired McNemar tests. In addition, we connect statistical evaluation with deployment-time decisioning through a decision-theoretic, cost-sensitive threshold rule, showing how institution-specific false-positive and false-negative costs determine the operating point of the classifier. Because fraudulent transactions constitute only a small proportion of the total volume, we employ resampling strategies to mitigate severe class imbalance and systematically calibrate the models via cross-validated hyperparameter optimization. 
The empirical analysis on real transaction data shows that carefully tuned deep and ensemble methods can achieve strong fraud-detection performance, while the proposed framework clarifies which performance differences are statistically meaningful and which operating points are most suitable under institution-specific false-positive and false-negative costs. Full article
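The decision-theoretic, cost-sensitive threshold rule mentioned above has a standard closed form: flag a transaction when the expected cost of ignoring it, p * c_fn, exceeds the expected cost of reviewing a legitimate one, (1 − p) * c_fp, which gives the break-even threshold c_fp / (c_fp + c_fn). The cost values below are illustrative, not the paper's institution-specific figures:

```python
def cost_threshold(c_fp, c_fn):
    # Break-even point of p * c_fn > (1 - p) * c_fp, solved for p.
    return c_fp / (c_fp + c_fn)

def decide(p, c_fp, c_fn):
    # p: model's estimated fraud probability for one transaction.
    return "fraud" if p >= cost_threshold(c_fp, c_fn) else "legitimate"
```

When missed fraud is 99 times costlier than a false alarm, even a 1% fraud probability justifies flagging, which is why calibrated probabilities matter more than raw accuracy here.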
21 pages, 548 KB  
Article
Interplay Between Vertical and Horizontal Schemes of Computation: From Bayesian Inference to Quantum Logic via Gluing Boolean Algebras
by Yukio-Pegio Gunji, Kyoko Nakamura, Kazuto Sasai, Iori Tani, Mayo Kuroki, Alessandro Chiolerio, Andrew Adamatzky and Andrei Khrennikov
Entropy 2026, 28(5), 498; https://doi.org/10.3390/e28050498 - 28 Apr 2026
Viewed by 122
Abstract
Artificial intelligence is typically formulated as an information-processing system composed of artificial neurons, where computation is understood as recursive operations connecting inputs and outputs. However, real neural systems are materially embodied and continuously reconfigured by metabolic and physical processes, suggesting that computation cannot be reduced to fixed causal structures. In this paper, we propose a theoretical framework that captures the interplay between informational and material processes as the interaction between two computational schemes: a vertical scheme, representing fixed cause–effect relations, and a horizontal scheme, representing transformations between such relations. We show that the vertical scheme corresponds to Bayesian inference, which updates probability distributions over a fixed hypothesis space, and is consistent with the free-energy minimization principle. In contrast, the horizontal scheme is formalized as inverse Bayesian inference, which modifies the hypothesis space itself by updating likelihood structures based on experienced data. We further demonstrate that the interplay between these schemes can be expressed algebraically as a process of continuously gluing Boolean algebras. This construction yields a non-distributive orthomodular lattice, i.e., quantum logic, without invoking Hilbert space formalism. In this view, quantum logic emerges not as a static logical system but as a structural consequence of dynamically reconfiguring causal contexts. This framework provides a unified perspective in which inference is understood not only as optimization within a fixed model but also as a process that generates and transforms the model itself. It offers a formal basis for describing open-ended computation and suggests a connection to approaches such as unconventional computing and Natural Born Intelligence, where computational structures evolve through interaction with material processes. 
Unlike existing approaches, this framework derives quantum-logic-like structure from the continual reconfiguration of causal contexts rather than from Hilbert-space assumptions or optimization within a fixed hypothesis space. Full article
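The two schemes can be caricatured in code: the vertical scheme is ordinary Bayesian updating over a fixed hypothesis space, while the horizontal (inverse Bayesian) scheme rewrites a hypothesis's likelihood from observed frequencies, changing the model itself. This toy is a stand-in for the paper's formal construction, not a reproduction of it:

```python
def bayes_update(prior, likelihood, datum):
    # Vertical scheme: posterior over a FIXED hypothesis space.
    # prior: {h: P(h)}; likelihood: {h: {datum: P(datum | h)}}.
    post = {h: prior[h] * likelihood[h][datum] for h in prior}
    z = sum(post.values())
    return {h: p / z for h, p in post.items()}

def inverse_bayes_update(likelihood, h, counts):
    # Horizontal scheme (toy): replace P(. | h) itself with the empirical
    # frequencies of the experienced data, altering the hypothesis space.
    total = sum(counts.values())
    likelihood[h] = {d: c / total for d, c in counts.items()}
    return likelihood
```

Alternating the two updates is the "interplay" the paper formalizes as gluing Boolean algebras of causal contexts.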
16 pages, 2446 KB  
Article
fNIRS as a Biomarker for Preoperative Assessment: Correlating Brain Activity with Clinical Evaluation for Lumbar Disc Herniation
by Chengjie Huang, Changqing Li, Zhihai Su, Qiwei Guo, Quan Wang, Tao Chen, Yuhan Wang, Zhen Yuan and Hai Lu
Bioengineering 2026, 13(5), 508; https://doi.org/10.3390/bioengineering13050508 - 28 Apr 2026
Viewed by 488
Abstract
Background: Lumbar disc herniation (LDH) is the most common etiological cause of low back pain (LBP). Objective and precise pain evaluation is of significant clinical value. Functional near-infrared spectroscopy (fNIRS), as a noninvasive neuroimaging modality, has been increasingly validated to reflect subjective pain perception through hemodynamic correlates. This study aimed to analyze fNIRS changes in patients with LDH scheduled to undergo Unilateral Biportal Endoscopy and to further explore the feasibility of fNIRS as an objective biomarker for clinical assessment of LDH. Methods: Resting-state fNIRS data were acquired from 67 preoperative LDH patients and 20 healthy controls (HC). Brain functional maps—including z-standardized fractional amplitude of low-frequency fluctuations (zfALFF) and seed-based functional connectivity (FC)—were extracted and quantified. Group-level comparisons were performed between the LDH and HC groups across four predefined regions of interest; additionally, correlation analyses were conducted between fNIRS metrics and clinical assessment scores within the LDH cohort. Results: Compared with HC, LDH patients exhibited significantly altered zfALFF in the medial prefrontal cortex (mPFC): decreased amplitude at channel CH12 (t = −2.031, p = 0.045) and increased amplitude at CH21 (t = 2.462, p = 0.016). Whole-brain FC analysis further revealed widespread changes—particularly between the parietal somatosensory cortex and prefrontal regions. Among all tested FC–clinical indicator associations, 56 reached statistical significance after FDR correction (q < 0.05). VAS_lumbar and SF-36_SF exhibited the highest number of significant connections. Conclusions: LDH patients with LBP exhibit notable alterations in prefrontal resting-state ALFF and in FC between the parietal somatosensory cortex and prefrontal cortex relative to HC.
Importantly, these neural alterations exhibit significant associations with both pain severity (VAS) and long-term health-related quality of life (SF-36), thereby strengthening their candidacy as neural correlates meriting prospective validation as objective, mechanism-informed biomarkers for clinical evaluation of lumbar disc herniation (LDH). Moreover, these findings highlight candidate neural targets for future longitudinal studies investigating early prognostic prediction and treatment response monitoring in LDH. Full article
(This article belongs to the Section Biomedical Engineering and Biomaterials)
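Seed-based functional connectivity, as used above, reduces at its core to the Pearson correlation between a seed channel's hemodynamic time series and every other channel's. A minimal sketch (the fNIRS preprocessing pipeline of filtering and motion correction is omitted, and the channel names in the test are placeholders):

```python
import math

def pearson(x, y):
    # Pearson correlation coefficient of two equal-length time series.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def seed_fc(seed_series, channels):
    # channels: {channel_name: time_series}; returns the seed's FC map.
    return {name: pearson(seed_series, ts) for name, ts in channels.items()}
```

Group comparisons then test these per-pair correlation values (usually Fisher z-transformed) between patients and controls.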
27 pages, 6783 KB  
Article
A Robust Intelligent CNN Model Enhanced with Gabor-Based Feature Extraction, SMOTE Balancing, and Adam Optimization for Multi-Grade Diabetic Retinopathy Classification
by Asri Mulyani, Muljono, Purwanto and Moch Arief Soeleman
J. Imaging 2026, 12(5), 188; https://doi.org/10.3390/jimaging12050188 - 27 Apr 2026
Viewed by 234
Abstract
Diabetic retinopathy (DR) is a leading cause of vision impairment and permanent blindness worldwide, requiring accurate and automated systems for multi-grade severity classification. However, standard Convolutional Neural Networks (CNNs) often struggle to capture fine, high-frequency microvascular patterns critical for diagnosis. This study proposes a Robust Intelligent CNN Model (RICNN) that integrates Gabor-based feature extraction with deep learning to improve DR classification. Specifically, Gabor filters are applied during preprocessing to extract orientation- and frequency-sensitive texture features, which are transformed into feature maps and concatenated with CNN feature representations at the fully connected layer (feature-level fusion). The model also incorporates the Synthetic Minority Oversampling Technique (SMOTE) for data balancing and the Adam optimizer for efficient convergence. This integration enhances sensitivity to microvascular structures such as microaneurysms and hemorrhages. The proposed RICNN was evaluated on the Messidor dataset (1200 images) across four severity levels: Mild, Moderate, Severe, and Proliferative DR. The model achieved an accuracy of 89%, a precision of 88.75%, a recall of 89%, and an F1-score of 89%, with AUCs of 97% for Severe DR and 99% for Proliferative DR. Comparative analysis confirms that the proposed texture-aware Gabor enhancement significantly outperforms LBP and Color Histogram approaches, indicating its potential for reliable clinical decision support. Full article
(This article belongs to the Section Medical Imaging)
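The SMOTE balancing step used above generates synthetic minority samples by interpolating between a minority sample and one of its minority-class neighbors: x_new = x + u * (x_neighbor − x), with u drawn uniformly from [0, 1]. A minimal sketch of that interpolation only; a full SMOTE also selects the neighbor among the k nearest, and image pipelines typically balance in feature space rather than pixel space:

```python
import random

def smote_sample(x, neighbor, rng):
    # One synthetic sample on the segment between x and its neighbor.
    u = rng.random()
    return [a + u * (b - a) for a, b in zip(x, neighbor)]
```

Every synthetic point lies on the line segment joining the two real minority samples, which is what keeps the oversampling inside the minority region.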
25 pages, 5808 KB  
Article
Identifying Principal Investors in Crowdfunding Initiatives for E-Commerce Entrepreneurship: An Integrated BTS Framework
by Lihuan Guo and Yenchun Jim Wu
J. Theor. Appl. Electron. Commer. Res. 2026, 21(5), 136; https://doi.org/10.3390/jtaer21050136 - 27 Apr 2026
Viewed by 298
Abstract
The phenomenon of followership is widely observed in the e-commerce industry. Crowdfunding, as a model of e-commerce entrepreneurship, has attracted many investors. Principal investors function as “leaders” who exert influence on follow-on (subsequent) investors. Accurately identifying principal investors in online entrepreneurial ventures and analyzing their preferences could enhance the success rate of fundraising. Grounded in the BTS (Behavior–Text–Social) framework, this study constructs a multi-dimensional model comprising 15 sub-indicators across three domains: user behavior, textual data, and social connections. A neural network is employed for training and prediction. By integrating the central and peripheral routes elicited from the Elaboration Likelihood Model (ELM), which ranks influence, principal investors are identified. The experimental results indicate that the ELM-derived ranking demonstrates the highest consistency (error = 0.15), followed by user behavior (error = 0.30), social metrics (error = 0.71), and textual features (error = 0.95). Weight analysis using SHAP highlights the relative importance of structural holes, out-degree centrality, investment times, and investment moments. Furthermore, principal investors exhibit a preference for local projects and occupy dual roles. This study provides a theoretical foundation and practical guidance for identifying principal investors, thereby improving financing performance and mitigating investment risks for follow-on investors. Full article
(This article belongs to the Section Entrepreneurship, Innovation, and Digital Business Models)
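Among the social indicators the SHAP analysis ranks highly is out-degree centrality. As a hedged sketch: for a directed investor graph, the normalized out-degree of a node is its number of outgoing edges divided by the maximum possible, n − 1. The edge list in the test is illustrative, not the paper's data:

```python
def out_degree_centrality(edges, nodes):
    # edges: (source, target) pairs of a directed graph over `nodes`.
    n = len(nodes)
    deg = {v: 0 for v in nodes}
    for src, _dst in edges:
        deg[src] += 1
    # Normalize by the maximum possible out-degree, n - 1.
    return {v: d / (n - 1) for v, d in deg.items()}
```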
15 pages, 4576 KB  
Article
Impact of Hyperparameters on Surrogate Model Performance: Calcite Dissolution Under Geological Disposal Conditions
by Gintautas Poškas, Asta Narkūnienė and Ernestas Narkūnas
Appl. Sci. 2026, 16(9), 4252; https://doi.org/10.3390/app16094252 - 27 Apr 2026
Viewed by 174
Abstract
Efficient simulation of geochemical reactions is critical for predicting the long-term chemical evolution of geological disposal repositories for radioactive waste. In large-scale reactive transport simulations, geochemical equilibrium calculations often represent a major computational bottleneck because they must be repeatedly solved for many spatial cells and time steps. This study investigates the development of machine-learning-based surrogate models that are designed to approximate geochemical equilibrium calculations and thereby significantly accelerate reactive transport simulations while reducing computational resource requirements. Calcite dissolution induced by magnesium-rich fluid inflow is used as a representative test case to evaluate the feasibility and performance of such surrogate models. Training and validation datasets were generated using the IPhreeqc C++ API, enabling the automated execution of a large number of PHREEQC equilibrium simulations across a chemically relevant parameter space. The resulting dataset captures nonlinear relationships between initial aqueous composition and outputs of interest after chemical equilibration, including aqueous species concentrations and amounts of minerals. Fully connected feed-forward neural networks were designed and implemented in TensorFlow to reproduce PHREEQC results, and the influence of key hyperparameters—such as network depth, width, activation functions, learning rate, and batch size—was systematically investigated. The results demonstrate that surrogate model accuracy and training stability are sensitive to hyperparameter selection, even for a relatively simple chemical system. Properly configured neural network architectures reproduce equilibrium geochemical responses with high accuracy and provide a computationally efficient alternative to repeated PHREEQC calculations, highlighting their potential for accelerating large-scale reactive transport modelling workflows. Full article
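The hyperparameter sweep described above (network depth, width, activation function, learning rate, batch size) can be organized as a Cartesian grid, with each configuration training one TensorFlow surrogate. The values below are placeholders, not the paper's search space:

```python
from itertools import product

def hyperparameter_grid(space):
    # Expand {name: [values]} into a list of one dict per configuration.
    keys = sorted(space)
    return [dict(zip(keys, values)) for values in product(*(space[k] for k in keys))]

search_space = {
    "depth": [2, 3, 4],
    "width": [32, 64],
    "activation": ["relu", "tanh"],
    "learning_rate": [1e-3, 1e-4],
    "batch_size": [64, 256],
}
```

Even this modest grid yields 48 training runs, which is why hyperparameter sensitivity matters when the surrogate is meant to save compute in the first place.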
26 pages, 24595 KB  
Article
Deep Learning-Driven Adaptive-Weight Kalman Filtering for Low-Cost GNSS in Challenging Environments
by Hongxin Zhang, Sizhe Shen, Longjiang Li, Jinglei Zhang, Haobo Li, Dingyi Liu, Zhe Li, Zhiqiang Zhang and Xiaoming Wang
Sensors 2026, 26(9), 2694; https://doi.org/10.3390/s26092694 - 27 Apr 2026
Abstract
The quality of Global Navigation Satellite System (GNSS) observations on smartphones is highly susceptible to multipath and non-line-of-sight (NLOS) effects in urban environments, resulting in complex and highly variable observation errors. These challenges highlight the necessity of a reliable stochastic model to ensure robust and unbiased parameter estimation. However, conventional empirical stochastic models, such as elevation-dependent or signal-to-noise ratio (SNR)-based weighting schemes, are often insufficient to capture the rapidly changing stochastic behavior of observations in dense urban environments. To overcome this limitation, an adaptive GNSS stochastic model based on a deep neural network (DNN) is developed by integrating SNR, satellite elevation angle, and post-fit pseudorange residuals, which provide a strong indicator of observation quality and environmental context. Specifically, a fully connected DNN is designed to use SNR, satellite elevation angle, and post-fit pseudorange residual as input features, representing signal strength, satellite geometry, and residual information, respectively, and to learn their nonlinear relationship with measurement uncertainty. The network output is then used to adaptively update the diagonal elements of the measurement noise covariance matrix, thereby realizing epoch-wise adaptive weighting within the Kalman filtering process. The proposed DNN-based stochastic model, together with several conventional models, was evaluated using GNSS observations collected by a low-cost u-blox ZED-F9P receiver (u-blox AG, Thalwil, Switzerland) and a Samsung Galaxy S21+ smartphone (Samsung Electronics Co., Ltd., Suwon, Republic of Korea) during vehicle experiments in dense urban canyons. The code-based single point positioning (SPP) results demonstrate that the DNN-based model consistently outperforms traditional stochastic models under both open-sky and urban conditions. 
The improvement is particularly pronounced for smartphone observations in severely obstructed environments. The proposed DNN-based model reduces the 3D RMSE from 14.25 m, 13.68 m, and 13.05 m, obtained with the elevation-, SNR-, and integrated elevation–SNR-based models, respectively, to 8.94 m, representing an improvement of approximately 35%. A similar improvement is observed for the u-blox ZED-F9P receiver, where the 3D RMSE decreases from 5.71 m, 4.69 m, and 5.15 m to 3.10 m. These results suggest the effectiveness of the proposed DNN-based stochastic model in mitigating complex observation errors and improving positioning accuracy, providing a promising solution for reliable positioning of low-cost GNSS receivers in challenging urban environments. Full article
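The adaptive-weighting step described above can be sketched as follows: a learned function maps per-satellite features (SNR, elevation, post-fit residual) to a pseudorange standard deviation, and the measurement noise covariance R is rebuilt each epoch from those predictions. Here a simple heuristic stands in for the trained DNN; the feature values and scaling constants are assumptions that only illustrate the interface.

```python
import numpy as np

def dnn_sigma(snr_dbhz, elev_deg, residual_m):
    """Hypothetical stand-in for the trained DNN: maps per-satellite
    features to a pseudorange standard deviation (metres)."""
    snr_term = 10 ** (-(snr_dbhz - 45.0) / 20.0)          # weaker signal -> larger sigma
    elev_term = 1.0 / max(np.sin(np.radians(elev_deg)), 0.1)  # low elevation -> larger sigma
    res_term = 1.0 + abs(residual_m) / 5.0                # large residual -> inflate
    return 0.5 * snr_term * elev_term * res_term

def build_R(features):
    """Epoch-wise measurement noise covariance: a diagonal matrix whose
    entries are the predicted variances, one per tracked satellite."""
    sigmas = np.array([dnn_sigma(*f) for f in features])
    return np.diag(sigmas ** 2)

# Hypothetical epoch with three satellites: (SNR dB-Hz, elevation deg, residual m).
feats = [(48.0, 65.0, 0.8),   # strong, high-elevation signal
         (35.0, 20.0, 6.2),   # likely NLOS/multipath-affected
         (42.0, 40.0, 2.1)]
R = build_R(feats)  # fed to the Kalman filter measurement update
```

The key design point is that R is no longer a fixed empirical function of elevation or SNR alone but is re-predicted every epoch, so obstructed satellites are down-weighted as soon as their features degrade.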
27 pages, 2217 KB  
Article
Speech Recognition with an fMRISNN Constrained by Human Functional Brain Networks: A Study of Enhanced MFCC-Driven Sparse Spike Encoding
by Lei Guo, Nancheng Ma, Zhuoxuan Wang and Rumeng Liu
Biomimetics 2026, 11(5), 302; https://doi.org/10.3390/biomimetics11050302 - 26 Apr 2026
Abstract
Spiking neural networks (SNNs) offer inherent advantages in processing temporal information. However, their network topologies are predominantly algorithm-generated, lacking constraints from biological brain connectivity, which limits their bio-plausibility. In our previous work, we constructed a spiking neural network (SNN) by incorporating the topological structure of functional brain networks derived from fMRI data of healthy subjects and proposed an fMRISNN model. This model was further employed as the reservoir layer of a liquid state machine (LSM) to build a speech recognition framework. In this framework, the Lyon ear model and Ben's Spiker Algorithm (BSA) were used to encode speech signals into spike sequences; however, this approach suffers from high computational cost and limited adaptability to temporal variations. To address these limitations, we propose an enhanced Mel-frequency cepstral coefficient (MFCC)-driven sparse spike encoding method. For the speech recognition task, we systematically compare the two preprocessing pipelines in terms of spike number, spike sparsity, encoding time, and downstream speech recognition performance. Experimental results show that the proposed method generates substantially fewer spikes, achieves markedly higher sparsity, and requires significantly less encoding time, while maintaining nearly the same recognition accuracy under the same LSM-based framework. These findings indicate that improved speech input representation can enhance the computational efficiency of SNN-based speech recognition without compromising recognition capability. In addition, the fMRISNN model significantly outperforms several baseline models with algorithmically generated topologies. Compared with mainstream models reported in the literature, although the deep convolutional neural network (CNN) still achieves higher absolute recognition accuracy, the fMRISNN exhibits clear advantages in terms of model parameter size and theoretical energy efficiency. Full article
(This article belongs to the Section Biological Optimisation and Management)
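One common way to turn MFCC frames into sparse spikes is delta-threshold encoding: a neuron pair per coefficient fires only when the frame-to-frame change crosses a threshold. The sketch below illustrates that general mechanism, not necessarily the paper's exact encoding scheme; the MFCC track, dimensions, and threshold are assumptions.

```python
import numpy as np

def mfcc_to_spikes(mfcc, threshold=0.5):
    """Delta-threshold sparse spike encoding: an ON channel fires when a
    coefficient rises by more than `threshold` between frames, an OFF
    channel when it falls by more than that. Most entries stay silent,
    which is the source of the sparsity."""
    delta = np.diff(mfcc, axis=0)              # temporal change per coefficient
    pos = (delta > threshold).astype(np.int8)  # ON spikes
    neg = (delta < -threshold).astype(np.int8) # OFF spikes
    return np.concatenate([pos, neg], axis=1)  # shape: (frames-1, 2 * n_coeffs)

# Hypothetical 13-dimensional MFCC trajectory over 50 frames (random
# walk standing in for a real utterance's coefficients).
rng = np.random.default_rng(1)
mfcc = np.cumsum(rng.normal(0, 0.3, (50, 13)), axis=0)

spikes = mfcc_to_spikes(mfcc)
sparsity = 1.0 - spikes.mean()  # fraction of silent neuron-timesteps
```

Compared with cochlea-model pipelines such as Lyon + BSA, an encoding of this shape needs only a diff and two comparisons per frame, which is where the reported reduction in spike count and encoding time would come from.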
32 pages, 4668 KB  
Article
Aggressive Guided Exploitation Optimized Sparse-Dual Attention Enabled Meta-Learning-Based Deep Learning Model for Quantum Error Correction
by Umesh Uttamrao Shinde, Ravi Kumar Bandaru and Amal S. Alali
Mathematics 2026, 14(9), 1459; https://doi.org/10.3390/math14091459 - 26 Apr 2026
Abstract
Quantum error-correcting codes are essential for achieving fault-tolerant quantum computing. The heavy hexagonal code is a topological code that leverages the arrangement of qubits to detect and correct errors, and it is well suited to superconducting architectures, specifically graph layouts with a limited number of connections. Existing topological error correction methods perform well but require substantial qubit overhead, scale poorly across quantum systems of different sizes, are less reliable, and adapt poorly to changing quantum noise distributions. The research therefore proposes an Ardea-guided exploit optimized sparse-dual attention enabled meta-learning-based convolutional neural network with bi-directional long short-term memory model (AGuESD-MCBiTM). The method achieves effective correction in dynamic environments by utilizing meta-learning and extracting statistical information that provides a detailed representation of the qubit patterns. The Ardea-guided exploit optimized (AGuEO) algorithm tunes the weights of MCBiTM and acquires optimal solutions with higher convergence. Moreover, the sparse-dual attention module and the meta-learning-based MCBiTM model together provide scalable, real-time identification of non-linear qubit noise fluctuations at lower computational cost. Comparatively, the proposed AGuESD-MCBiTM exhibits superior error correction on circuit 2, with a higher correlation of 0.97, accuracy of 98.93%, and R-squared value of 0.93, as well as a lower root mean square error of 1.87, mean absolute error of 1.20, bit error rate of 1.85, logical error rate of 3.82, and mean square error of 3.49. Full article
(This article belongs to the Special Issue Recent Advances in Quantum Information and Quantum Computing)
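The core task a learned decoder like the one above approximates is mapping measured syndromes to corrections. A minimal sketch of that mapping, using the 3-qubit bit-flip repetition code rather than the heavy hexagonal code (whose parity structure is far larger), shows what the network's input-output relation looks like in the simplest case; the code choice and lookup table are illustrative assumptions.

```python
import numpy as np

# Parity-check structure of the 3-qubit bit-flip repetition code:
# two stabilisers measure the parities of adjacent bits (Z1Z2, Z2Z3).
H = np.array([[1, 1, 0],
              [0, 1, 1]])

def syndrome(error):
    """Syndrome extraction: parity of the error pattern under each check."""
    return H @ error % 2

# Lookup from syndrome to the most likely single-qubit correction. A
# learned decoder (e.g. a CNN-BiLSTM) approximates exactly this mapping
# for larger codes, where an explicit table becomes infeasible.
TABLE = {
    (0, 0): np.array([0, 0, 0]),
    (1, 0): np.array([1, 0, 0]),
    (1, 1): np.array([0, 1, 0]),
    (0, 1): np.array([0, 0, 1]),
}

def decode(error):
    """Measure the syndrome, look up the correction, and apply it."""
    s = tuple(syndrome(error))
    return (error + TABLE[s]) % 2

# Every single bit-flip error is corrected back to the 000 codeword.
single_errors = [np.array([1, 0, 0]), np.array([0, 1, 0]), np.array([0, 0, 1])]
corrected = [decode(e) for e in single_errors]
```

For the heavy hexagonal code the syndrome is a time series over many stabiliser rounds, which is why the paper pairs convolutional feature extraction with a BiLSTM over the temporal dimension instead of a static lookup.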