Search Results (1,071)

Search Parameters:
Keywords = simulation network statistics

21 pages, 5403 KB  
Article
Pollution Source Identification and Parameter Sensitivity Analysis in Urban Drainage Networks Using a Coupled SWMM–Bayesian Framework
by Ronghuan Wang, Xuekai Chen, Xiaobo Liu, Guoxin Lan, Fei Dong and Jiangnan Yang
Processes 2026, 14(4), 699; https://doi.org/10.3390/pr14040699 - 19 Feb 2026
Abstract
Addressing the challenge of tracing hidden and transient cross-connections in urban drainage networks, this study develops a SWMM–Bayesian coupled model based on the PySWMM interface, using the Daming Lake area in Jinan as a case study. By employing a Markov Chain Monte Carlo (MCMC) algorithm to drive the interaction between dynamic simulation and statistical inference, the model achieves multidimensional joint posterior estimation of pollution source location (Jx), discharge intensity (M), and discharge timing (T). The results indicate the following: (1) Model accuracy: the coupled model demonstrates strong source-tracing capability, with mean absolute errors below 0.6% in single-parameter inversion; under multi-parameter joint inversion, the true values of all parameters consistently fall within the 95% confidence intervals. (2) Parameter sensitivity: the influence of MCMC step size on the uncertainty of pollution-tracing results is systematically clarified; discrete source-location estimates (Jx) are highly robust to step-size variation owing to spatial heterogeneity in hydraulic responses, whereas continuous physical parameters (M and T) depend strongly on the selected step-size scale. (3) Practical application: the impact of the spatial monitoring-network configuration on pollution-tracing performance is examined; by deploying a complementary monitoring system integrating trunk and branch pipelines, the inversion accuracy for the mass (M) and time (T) parameters is significantly improved, by 84.2% and 88.5%, respectively. Overall, the proposed pollution source tracing method for urban drainage networks effectively overcomes the multi-solution challenge in complex network inversion, providing critical technical support for refined urban water environment management. Full article
(This article belongs to the Special Issue Advances in Hydrodynamics, Pollution and Bioavailable Transfers)
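The MCMC-driven coupling this abstract describes can be illustrated with a minimal random-walk Metropolis sketch. Everything below is hypothetical: `forward_model` is a toy two-sensor stand-in for the SWMM simulation (not PySWMM), and the travel times, priors, and step sizes are invented. It only shows the general idea of sampling a joint posterior over a discrete location (Jx) and continuous mass/time (M, T).

```python
import math
import random

random.seed(0)

# Toy stand-in for the SWMM forward simulation (hypothetical, NOT PySWMM):
# two sensors whose travel times from each candidate junction differ, so the
# discrete location Jx stays identifiable alongside the continuous M and T.
TRAVEL = [(1.0, 3.0), (2.0, 2.0), (3.0, 1.0)]  # (sensor A, sensor B) lag per junction
OBS_TIMES = [3.0, 4.0, 5.0, 6.0, 7.0, 8.0]

def forward_model(jx, m, t):
    ta, tb = TRAVEL[jx]
    preds = []
    for ti in OBS_TIMES:
        preds.append(m * math.exp(-0.5 * (ti - t - ta) ** 2))  # sensor A
        preds.append(m * math.exp(-0.5 * (ti - t - tb) ** 2))  # sensor B
    return preds

TRUE_JX, TRUE_M, TRUE_T = 1, 4.0, 3.0
observed = forward_model(TRUE_JX, TRUE_M, TRUE_T)
SIGMA = 0.1  # assumed observation-noise scale

def log_post(jx, m, t):
    if not (0 <= jx < 3 and 0.0 < m < 20.0 and 0.0 < t < 12.0):
        return -math.inf  # uniform box priors
    return -0.5 * sum(((o - p) / SIGMA) ** 2
                      for o, p in zip(observed, forward_model(jx, m, t)))

# Random-walk Metropolis over the mixed discrete/continuous state (Jx, M, T).
state = (0, 1.0, 1.0)
lp = log_post(*state)
samples = []
for _ in range(20000):
    cand = (random.randrange(3),                # fresh discrete proposal for Jx
            state[1] + random.gauss(0.0, 0.5),  # step size: the key tuning knob
            state[2] + random.gauss(0.0, 0.5))
    lp_cand = log_post(*cand)
    if math.log(random.random()) < lp_cand - lp:
        state, lp = cand, lp_cand
    samples.append(state)

burn = samples[5000:]
jx_mode = max(range(3), key=[s[0] for s in burn].count)
m_mean = sum(s[1] for s in burn) / len(burn)
t_mean = sum(s[2] for s in burn) / len(burn)
print(jx_mode, round(m_mean, 2), round(t_mean, 2))
```

The discrete parameter is re-proposed uniformly at every step, which mirrors the paper's observation that location estimates are less sensitive to step size than the continuous M and T are.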

52 pages, 4958 KB  
Review
Structural Characterisation of Disordered Porous Materials Using Gas Sorption and Complementary Techniques
by Sean P. Rigby and Suleiman Mousa
Surfaces 2026, 9(1), 20; https://doi.org/10.3390/surfaces9010020 - 17 Feb 2026
Viewed by 86
Abstract
While advanced imaging techniques and ordered porous materials like MOFs have gained prominence, gas sorption remains the indispensable tool for characterizing the multiscale heterogeneity of industrially important disordered solids, such as catalysts and shales. This review examines recent developments in gas sorption methodologies specifically tailored for rigid, disordered porous media. We discuss experimental advances, including the choice of adsorbate and the utility of the overcondensation method for probing macroporosity and ensuring saturation. Furthermore, we critically evaluate theoretical approaches for determining pore size distributions (PSDs), contrasting classical methods with Density Functional Theory (DFT) and Grand Canonical Monte Carlo (GCMC) simulations. Special emphasis is placed on the impact of pore-to-pore cooperative effects, such as advanced condensation, cavitation, and pore-blocking, on the interpretation of sorption isotherms. We highlight how complementary techniques, including integrated mercury porosimetry, NMR, and computerized X-ray tomography (CXT), are essential for deconvolving these complex network effects and validating void space descriptors. We conclude that, while “brute force” molecular simulations on image-based reconstructions are progressing, “minimalist” pore network models, which incorporate cooperative mechanisms, currently offer the most empirically adequate approach. Ultimately, gas sorption remains unique in its ability to statistically characterize void spaces from Angstroms to millimeters in a single experiment. Full article
(This article belongs to the Collection Featured Articles for Surfaces)
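The classical PSD methods this review contrasts with DFT/GCMC rest on the Kelvin equation. A quick numerical check, using standard textbook constants for nitrogen at 77 K (assumed here, not taken from the review), gives the meniscus radius at a chosen relative pressure:

```python
import math

# Kelvin-equation estimate of the meniscus radius at which nitrogen capillary
# condensation occurs, as used by classical PSD methods such as BJH.
# Constants are textbook values for N2 at 77 K (assumed for illustration).
GAMMA = 8.85e-3     # N/m, surface tension of liquid N2 at 77 K
V_M = 34.7e-6       # m^3/mol, molar volume of liquid N2
R, T = 8.314, 77.0  # gas constant (J/mol/K), temperature (K)

def kelvin_radius(p_rel):
    # r_K = -2*gamma*V_m / (R*T*ln(p/p0)); valid for 0 < p/p0 < 1.
    return -2.0 * GAMMA * V_M / (R * T * math.log(p_rel))

r = kelvin_radius(0.5)
print(f"Kelvin radius at p/p0 = 0.5: {r * 1e9:.2f} nm")
```

Pore-blocking and cavitation, discussed in the review, are precisely the network effects that make this single-pore picture insufficient on its own.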

24 pages, 3006 KB  
Article
A Digital-Twin-Enabled AI-Driven Adaptive Planning Platform for Sustainable and Reliable Manufacturing
by Mingyuan Li, Chun-Ming Yang, Wei Lo and Yi-Wei Kao
Machines 2026, 14(2), 197; https://doi.org/10.3390/machines14020197 - 9 Feb 2026
Viewed by 224
Abstract
Manufacturing systems face growing demands driven by market volatility, increasingly stringent sustainability policies, and a high proportion of aging equipment, yet traditional planning structures are largely fixed and deterministic, making it difficult to jointly optimize operational stability and environmental sustainability under unpredictable conditions. This research proposes and empirically tests an artificial-intelligence-based adaptive planning platform that combines a physics-based Digital Twin (DT) with a Pareto-conditioned Multi-Objective Proximal Policy Optimization (MO-PPO) algorithm to co-optimize reliability and sustainability indicators in real time. The platform recasts manufacturing planning as a Constrained Multi-Objective Markov Decision Process (CMDP), optimizing Overall Equipment Effectiveness (OEE), energy carbon intensity, and material waste while strictly adhering to operational constraints. The study utilizes a four-layer cyber–physical architecture comprising an edge-based data acquisition layer, a high-fidelity stochastic simulation engine calibrated via Bayesian inference, a graph attention network-based state-encoding layer, and a closed-loop execution layer running on 60 s planning cycles. Across 10,000 stochastic simulation experiments and a 12-week industrial pilot deployment, statistically significant enhancements were shown: 96.8% schedule performance, 84.7% OEE, a 16.5% cut in specific energy usage (2.38 kWh/kg), a 17.1% reduction in material-waste rate (6.8%), and a 21.4% enhancement in carbon effectiveness, outperforming all baseline strategies (p = 0.001). The analysis revealed a notable synergistic correlation between waste minimization and OEE enhancement (r = −0.73), with 34.1% of the overall OEE improvement attributable to sustainability strategies. This study provides a robust framework for adaptive, resilient, and eco-friendly manufacturing processes in line with Industry 5.0 principles. Full article
(This article belongs to the Special Issue Digital Twins in Smart Manufacturing)

30 pages, 4934 KB  
Article
Green Coconut Biorefinery: RSM and ANN–GA Optimization of Coconut Water Microfiltration with Integrated Techno-Economic Analysis
by José Diogo da Rocha Viana, Moacir Jean Rodrigues, Arthur Claudio Rodrigues de Souza, Raimundo Marcelino da Silva Neto, Paulo Riceli Vasconcelos Ribeiro, José Carlos Cunha Petrus and Ana Paula Dionísio
Foods 2026, 15(4), 623; https://doi.org/10.3390/foods15040623 - 9 Feb 2026
Viewed by 278
Abstract
The coconut water market continues to expand, but industrial supply is constrained by the high perishability of fresh coconut water and the need for stabilization routes that preserve quality. This study optimized crossflow microfiltration of coconut water using a silicon carbide (SiC) ceramic membrane, selected for its high permeability, chemical/thermal robustness, and cleanability, and assessed the techno-economic feasibility of a green coconut biorefinery producing microfiltered coconut water and coconut pulp. Pressure and temperature were modeled and optimized using a face-centered design (FCD) and artificial neural networks coupled with a genetic algorithm (ANN–GA), considering permeate flux and fouling index (p < 0.05). Both approaches converged to the same operating point, and experimental validation at 75 kPa and 30 °C achieved a permeate flux of 605.32 ± 15.34 L h⁻¹ m⁻² and a fouling index of 82.79 ± 1.35% at VRR = 1. Sample-level fit statistics favored the ANN (higher R² and lower sample-level errors), whereas condition-wise grouped cross-validation (leave-one-condition-out) indicated higher predictivity and lower RMSECV for the quadratic FCD/RSM models across experimental conditions, highlighting response-dependent generalization within the investigated domain. Fouling analysis indicated concentration polarization as the main resistance contribution and a flux-decline behavior best described by the intermediate blocking mechanism. A SuperPro Designer® simulation over a 20-year project life indicated economic feasibility under baseline assumptions (internal rate of return (IRR) = 23.80%, net present value (NPV) = US$733,761, payback = 2.96 years), with profitability remaining attractive under ±10% selling-price variation. Overall, the process optimization and modeling outcomes align with the economic case, reinforcing the potential of this biorefinery concept for industrial deployment. Full article
(This article belongs to the Section Nutraceuticals, Functional Foods, and Novel Foods)

18 pages, 2627 KB  
Article
Application of Machine Learning Techniques in the Prediction of Surface Geometry
by Aneta Gądek-Moszczak, Dominik Nowakowski and Norbert Radek
Materials 2026, 19(4), 661; https://doi.org/10.3390/ma19040661 - 9 Feb 2026
Viewed by 258
Abstract
The article presents the authors' attempt to generate a digital representation of the analyzed surface layer of a WC-Co-Al2O3 coating deposited by the ESD method. The WC-Co-Al2O3 surface layer is superhard and abrasion-resistant, significantly increasing the service life of working elements. The authors aim to develop a method for generating series of digital surfaces with similar geometry parameters based on data collected through profilometric analysis. This motivates the advanced integration of machine learning (ML) techniques with classical statistical approaches for modeling and predicting stochastic processes. While traditional models such as ARMA/ARIMA and hidden Markov models (HMMs) offer mathematical rigor, they often impose assumptions of stationarity and linearity, which limits their application to complex, noisy data. This paper proposes a model for surface geometry generation based on experimental data that combines recurrent neural networks (RNNs) and Monte Carlo simulation. Additionally, the study reviews emerging methods, including generative adversarial networks (GANs) for stochastic simulation and expectation-maximization (EM) algorithms for parameter estimation. An empirical case study on WC-Co-Al2O3 surface geometries demonstrates the effectiveness of ML–stochastic hybrids in capturing both deterministic structures and random fluctuations. The findings underscore not only the benefits but also the limitations of such models, including high computational demands and interpretability challenges, while proposing future research directions toward physics-informed ML and explainable AI. Full article
(This article belongs to the Special Issue Advances in Surface Engineering: Functional Films and Coatings)
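The generate-then-check idea behind ML–stochastic surface synthesis can be caricatured without any learned model: below, an AR(1) process stands in for the trained sequence model (a deliberate simplification, not the paper's RNN), and Monte Carlo repetition yields an ensemble of digital profiles whose roughness statistic clusters around a common value.

```python
import random

random.seed(5)

# Toy stochastic surface-profile generator: an AR(1) height process stands in
# for the learned sequence model (illustrative assumption, not the paper's RNN).
PHI, SIGMA_E, N = 0.9, 0.3, 2000  # persistence, innovation scale, profile length

def generate_profile():
    z, profile = 0.0, []
    for _ in range(N):
        z = PHI * z + random.gauss(0.0, SIGMA_E)  # AR(1) height update
        profile.append(z)
    return profile

def ra(profile):
    # Ra analogue: mean absolute deviation from the mean line.
    mu = sum(profile) / len(profile)
    return sum(abs(z - mu) for z in profile) / len(profile)

# Monte Carlo ensemble: many profiles sharing the same geometry statistics.
ras = [ra(generate_profile()) for _ in range(30)]
mean_ra = sum(ras) / len(ras)
print(round(mean_ra, 3))
```

In the paper's setting, the AR(1) update would be replaced by an RNN step conditioned on profilometric training data; the Monte Carlo wrapper is unchanged.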

23 pages, 1687 KB  
Article
Machine Learning-Based Dry Gas Reservoirs Z-Factor Prediction for Sustainable Energy Transitions to Net Zero
by Progress Bougha, Foad Faraji, Parisa Khalili Nejad, Niloufar Zarei, Perk Lin Chong, Sajid Abdullah, Pengyan Guo and Lip Kean Moey
Sustainability 2026, 18(4), 1742; https://doi.org/10.3390/su18041742 - 8 Feb 2026
Viewed by 235
Abstract
Dry gas reservoirs play a pivotal transitional role in meeting net-zero targets worldwide. Accurate modelling and simulation of this energy source require fast and reliable prediction of the gas compressibility factor (Z-factor). Experimental measurements of the Z-factor are the most reliable source; however, they are expensive and time-consuming, which makes developing accurate predictive models essential. Traditional methods, such as empirical correlations and Equations of State (EoSs), often lack accuracy and computational efficiency. This study addresses these limitations by leveraging the predictive power of machine learning (ML) techniques. Three ML models were developed: an Artificial Neural Network (ANN), the Group Method of Data Handling (GMDH), and Genetic Programming (GP). These models were trained on a comprehensive dataset of 1079 samples, with pseudo-reduced pressure (Ppr) and pseudo-reduced temperature (Tpr) as inputs and experimentally measured Z-factors as outputs. The performance of the developed ML models was benchmarked against two cubic EoSs (Peng–Robinson (PR) and van der Waals (vdW)), two semi-empirical correlations (Dranchuk–Abou-Kassem (DAK) and Hall–Yarborough (HY)), and recently developed ML-based models, using the statistical metrics of Mean Squared Error (MSE), coefficient of determination (R²), and Average Absolute Relative Deviation Percentage (AARD%). The proposed ANN model reduces average prediction error by approximately 70% relative to the PR equation of state and by over 35% compared with the DAK correlation, while maintaining robust performance across the full Ppr and Tpr range of dry gas systems. Additionally, paired t-tests and Wilcoxon signed-rank tests performed on the ML results confirmed that the ANN model achieved statistically significant improvements over the other models. Moreover, two physical equations based on the white-box models GMDH and GP were proposed as functions of Ppr and Tpr for predicting the dry gas Z-factor. Sensitivity analysis of the data shows that Ppr has the dominant positive effect on the Z-factor (88%), while Tpr has a moderate effect (12%). This study presents the first unified, statistically validated comparison of ANN, GMDH, and GP models for accurate and interpretable Z-factor prediction. The developed models can serve as an alternative tool that bridges the limitations of cubic EoSs and the limited accuracy and applicability of empirical models. Full article

47 pages, 2396 KB  
Article
Adaptive Multi-Stage Hybrid Localization for RIS-Aided 6G Indoor Positioning Systems: Combining Fingerprinting and Geometric Methods with Condition-Aware Fusion
by Iacovos Ioannou, Vasos Vassiliou and Marios Raspopoulos
Sensors 2026, 26(4), 1084; https://doi.org/10.3390/s26041084 - 7 Feb 2026
Viewed by 184
Abstract
Reconfigurable intelligent surfaces (RISs) represent a paradigm shift in wireless communications, offering unprecedented control over electromagnetic wave propagation for next-generation 6G networks. This paper presents a comprehensive framework for high-precision indoor localization exploiting cooperative multi-RIS deployments. We introduce the adaptive multi-stage hybrid localization (AMSHL) algorithm, a novel approach that strategically combines fingerprinting-based and geometric time-difference-of-arrival (TDoA) methods through condition-aware adaptive fusion. The proposed framework employs a 4-RIS cooperative architecture with strategically positioned panels on room walls, enabling comprehensive spatial coverage and favorable geometric diversity. AMSHL incorporates five key innovations: (1) a hybrid fingerprint database combining received signal strength indicator (RSSI) and TDoA features for enhanced location distinctiveness; (2) a multi-stage cascaded refinement process progressing from coarse fingerprinting initialization through to iterative geometric optimization; (3) an adaptive fusion mechanism that dynamically adjusts algorithm weights based on real-time channel quality assessment including signal-to-noise ratio (SNR) and geometric dilution of precision (GDOP); (4) a robust iteratively reweighted least squares (IRLS) solver with Huber M-estimation for outlier mitigation; and (5) Bayesian regularization incorporating fingerprinting estimates as informative priors. Comprehensive Monte Carlo simulations at 3.5 GHz carrier frequency with 400 MHz bandwidth demonstrate that AMSHL achieves a median localization error of 0.661 m, root-mean-squared error (RMSE) of 1.54 m, and mean-squared error (MSE) of 2.38 m², with 87.5% probability of sub-2 m accuracy, representing a 4.9× improvement over conventional hybrid fingerprinting in median error and a 7.1× reduction in MSE (from 16.83 m² to 2.38 m²). An optional sigmoid-based fusion variant (AMSHL-S) further improves sub-2 m accuracy to 89.4% by eliminating discrete switching artifacts. Furthermore, we provide theoretical analysis including Cramér–Rao lower bound (CRLB) derivation with an empirical MSE comparison to quantify the gap between practical algorithm performance and theoretical bounds (MSE-to-CRLB ratio of approximately 4.0×10⁴), as well as a computational complexity assessment. All reported metrics have been cross-validated for internal consistency across formulas, tables, and textual descriptions; improvement factors and error statistics are verified against primary simulation outputs to ensure reproducibility. The complete simulation framework is made publicly available to facilitate reproducible research in RIS-aided positioning systems. Full article
(This article belongs to the Special Issue Indoor Localization Techniques Based on Wireless Communication)
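The IRLS-with-Huber-weights step named in innovation (4) is a standard robust-regression device. The sketch below is generic, not the paper's TDoA solver or its geometry: a toy straight-line fit with one gross outlier, showing how the reweighting bounds the outlier's influence.

```python
# Toy robust fit y ≈ a*x + b via iteratively reweighted least squares (IRLS)
# with Huber weights. Illustrative only: a 1-D line fit with one outlier,
# standing in for the multidimensional TDoA position solve.
xs = [0, 1, 2, 3, 4, 5, 6, 7]
ys = [0.1, 1.0, 2.1, 30.0, 4.2, 5.0, 5.9, 7.1]  # the 30.0 is a gross outlier

def wls(xs, ys, w):
    # Closed-form weighted least squares for the two-parameter line.
    sw = sum(w)
    sx = sum(wi * x for wi, x in zip(w, xs))
    sy = sum(wi * y for wi, y in zip(w, ys))
    sxx = sum(wi * x * x for wi, x in zip(w, xs))
    sxy = sum(wi * x * y for wi, x, y in zip(w, xs, ys))
    a = (sw * sxy - sx * sy) / (sw * sxx - sx * sx)
    b = (sy - a * sx) / sw
    return a, b

DELTA = 1.345  # standard Huber tuning constant
w = [1.0] * len(xs)
for _ in range(15):
    a, b = wls(xs, ys, w)
    resid = [y - (a * x + b) for x, y in zip(xs, ys)]
    # Huber weight: 1 inside the delta band, delta/|r| outside (bounded influence).
    w = [1.0 if abs(r) <= DELTA else DELTA / abs(r) for r in resid]

print(round(a, 2), round(b, 2), round(w[3], 3))
```

After a few iterations the outlier's weight collapses toward zero and the fit settles near the inlier trend (slope about 1), which is exactly the behavior the solver relies on for NLOS-corrupted TDoA measurements.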

27 pages, 5197 KB  
Article
Dynamic TRM Estimation with Load–Wind Uncertainty Using Rolling Window Statistical Analysis for Improved ATC
by Uchenna Emmanuel Edeh, Tek Tjing Lie and Md Apel Mahmud
Energies 2026, 19(3), 844; https://doi.org/10.3390/en19030844 - 5 Feb 2026
Viewed by 489
Abstract
The rapid integration of renewable energy sources (RES), particularly wind, together with fluctuating demand, has introduced significant uncertainty into power system operation, challenging traditional approaches for estimating Transmission Reliability Margin (TRM) and Available Transfer Capability (ATC). This paper proposes a fully adaptive TRM estimation framework that leverages rolling-window statistical analysis of net-load forecast errors to capture real-time uncertainty fluctuations. By continuously updating both the confidence factor and window length based on evolving forecast-error statistics, the method adapts to changing grid conditions. The framework is validated on the IEEE 30-bus system with 80 MW wind (42.3% penetration) and assessed for scalability on the IEEE 118-bus system (40.1% wind penetration). Comparative analysis against static TRM, fixed-confidence rolling-window, and Monte Carlo Simulation (MCS)-based methods shows that the proposed approach achieves 88.0% reliability coverage (vs. 81.8% for static TRM) while providing enhanced transfer capability for 31.5% of the operational day (7.5 h). Relative to MCS, it yields a 20.1% lower mean TRM and a 2.5% higher mean ATC, with an adaptation ratio of 18.8:1. Scalability assessment confirms preserved adaptation (12.4:1) with sub-linear computational scaling (1.82 ms to 3.61 ms for a 3.93× network size increase), enabling 1-min update intervals. Full article
(This article belongs to the Special Issue Renewable Energy System Technologies: 3rd Edition)
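As a rough illustration of the rolling-window idea, the sketch below recomputes a TRM each step from the recent forecast-error spread while a confidence factor chases a coverage target. The synthetic Gaussian errors, window length, and adaptation gains are invented for this example and are not the paper's calibration.

```python
import random
import statistics

random.seed(1)

# Synthetic net-load forecast errors in MW (toy data, not the IEEE 30-bus study).
errors = [random.gauss(0.0, 10.0) for _ in range(500)]

WINDOW, TARGET = 48, 0.90  # rolling-window length, desired reliability coverage
z = 1.5                    # adaptive confidence factor (initial guess)
covered, trm_series = [], []
for t in range(WINDOW, len(errors)):
    spread = statistics.pstdev(errors[t - WINDOW:t])
    trm = z * spread                       # TRM from rolling forecast-error spread
    trm_series.append(trm)
    covered.append(abs(errors[t]) <= trm)  # did the margin cover the realized error?
    recent = covered[-WINDOW:]
    if sum(recent) / len(recent) < TARGET:
        z += 0.05                          # under-covering: widen the margin
    else:
        z -= 0.01                          # over-covering: reclaim transfer capability

coverage = sum(covered) / len(covered)
print(round(z, 2), round(coverage, 2))
```

The asymmetric gains (widen fast, shrink slowly) reflect the usual reliability-first bias; the paper additionally adapts the window length itself, which this sketch keeps fixed.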

21 pages, 4384 KB  
Article
Fault Diagnosis and Health Monitoring Method for Semiconductor Manufacturing Equipment Based on Deep Learning and Subspace Transfer
by Peizhu Chen, Zhongze Liu, Junxi Han, Yi Dai, Zhifeng Wang and Zhuyun Chen
Machines 2026, 14(2), 176; https://doi.org/10.3390/machines14020176 - 3 Feb 2026
Viewed by 236
Abstract
Semiconductor manufacturing equipment such as vacuum pumps, wafer handling mechanisms, etching machines, and deposition systems operates for long periods in high-vacuum, high-temperature, strongly electromagnetic, and high-precision continuous production environments, and its reliability directly determines the yield and stability of the production line. During equipment operation, fault signals are often weak, noise is strong, and working conditions are variable, making it difficult for traditional methods to achieve high-precision recognition. To solve this problem, this paper proposes a fault diagnosis and health monitoring method for semiconductor manufacturing equipment based on deep learning and subspace transfer. First, considering the cyclostationary characteristics of the operating signals of key equipment, cyclic spectral analysis is used to obtain cyclic spectral coherence maps, which effectively reveal feature differences across health states. A deep fault diagnosis model based on a convolutional neural network (CNN) is then constructed to extract deep feature representations. Furthermore, subspace transfer learning is introduced, with group normalization and correlation-alignment unsupervised adaptation layers designed to automatically align and enhance the statistical characteristics of deep features between the source and target domains, effectively improving the generalization and adaptability of the model. Finally, simulation experiments on a public bearing dataset verify that the proposed method has strong feature representation ability and high classification accuracy under different working conditions and loads. Because the key components and experimental scenarios of semiconductor manufacturing equipment share similar signal characteristics, the method can be transferred directly to early fault diagnosis and health monitoring of semiconductor production line equipment, giving it substantial engineering application value. Full article
(This article belongs to the Section Machines Testing and Maintenance)

38 pages, 7167 KB  
Article
Artificial Intelligence (AI) and Monte Carlo Simulation-Based Modeling for Predicting Groundwater Pollution Indices and Nitrate-Linked Health Risks in Coastal Areas Facing Agricultural Intensification
by Hatim Sanad, Rachid Moussadek, Latifa Mouhir, Abdelmjid Zouahri, Majda Oueld Lhaj, Yassine Monsif, Khadija Manhou and Houria Dakak
Hydrology 2026, 13(2), 59; https://doi.org/10.3390/hydrology13020059 - 3 Feb 2026
Viewed by 433
Abstract
This study assesses groundwater quality and nitrate-related health risks in the Skhirat coastal aquifer (Morocco) using a multidisciplinary approach. A total of thirty groundwater wells were sampled and analyzed for physico-chemical properties, including major ions and nutrients. Multivariate statistical analyses were employed to explore contamination sources. Pollution indices such as the Groundwater Pollution Index (GPI) and Nitrate Pollution Index (NPI) were computed, and Monte Carlo simulations (MCSs) were conducted to assess nitrate-related health risks through ingestion and dermal exposure. Furthermore, Random Forest (RF), Gradient Boosting Regression (GBR), Support Vector Regression (SVR) with a radial basis function kernel, and Artificial Neural Network (ANN) models were tested for predicting groundwater pollution indices. Hydrochemical facies analysis revealed Na⁺-Cl⁻ dominance in 47% of the samples, suggesting strong marine influence, while nitrate concentrations reached up to 89.3 mg/L, exceeding World Health Organization (WHO) limits at 26.7% of the sites. Pollution indices indicated that 33.3% of samples exhibited moderate to high GPI values, with 36.7% of the samples exceeding the NPI threshold. The MCS for nitrate health risk revealed that 43% of the samples posed non-carcinogenic health risks to children (Hazard Index (HI) > 1). RF outperformed the other models in predicting GPI (R² = 0.76) and NPI (R² = 0.95). Spatial prediction maps visualized contamination hotspots aligned with intensive horticultural activity. This integrated methodology offers a robust framework for diagnosing groundwater pollution sources and predicting future risks. Full article
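Non-carcinogenic nitrate screening of this kind typically follows the USEPA hazard-quotient form HQ = (C·IR·EF·ED)/(BW·AT·RfD). The sketch below uses commonly cited default child exposure parameters and a toy lognormal concentration distribution; all numbers are assumptions for illustration, not the Skhirat data or the paper's calibration.

```python
import random

random.seed(42)

# Assumed child exposure defaults (illustrative, not the paper's values):
# ingestion rate (L/day), exposure frequency (day/yr), exposure duration (yr),
# body weight (kg), averaging time (days), nitrate oral RfD (mg/kg/day).
IR, EF, ED, BW, AT, RFD = 0.78, 365, 6, 15.0, 2190, 1.6

def hazard_quotient(conc_mg_per_l):
    # USEPA-style non-carcinogenic hazard quotient for ingestion exposure.
    return (conc_mg_per_l * IR * EF * ED) / (BW * AT * RFD)

# Monte Carlo over an uncertain nitrate concentration (toy lognormal input,
# median ~30 mg/L); the fraction with HQ > 1 estimates the at-risk share.
hq = [hazard_quotient(random.lognormvariate(3.4, 0.6)) for _ in range(50000)]
p_exceed = sum(h > 1.0 for h in hq) / len(hq)
print(round(p_exceed, 2))
```

With these defaults the quotient crosses 1 near 31 mg/L, well below the 89.3 mg/L maximum the abstract reports, which is why a sizeable fraction of sampled wells can exceed HI > 1 for children.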

23 pages, 4845 KB  
Article
Change Point Monitoring in Wireless Sensor Networks Under Heavy-Tailed Sequence Environments
by Liwen Wang, Hongbo Hu and Hao Jin
Mathematics 2026, 14(3), 523; https://doi.org/10.3390/math14030523 - 1 Feb 2026
Viewed by 215
Abstract
In heavy-tailed sequence environments, change point monitoring in wireless sensor networks faces serious challenges, including high communication overhead, particular sensitivity to sparse changes, and dependence on strict parametric assumptions. To address these limitations, a distributed robust M-estimator-based change point monitoring (DRM-CPM) method is proposed. The method combines ratio statistics with a sliding-window scheme, so online detection requires no prior knowledge of the pre- and post-change distributions. A threshold-triggered communication strategy is introduced, in which sensors exchange local statistics only when predefined thresholds are exceeded, significantly reducing energy consumption. Theoretical analysis establishes the asymptotic properties of the statistics and proves the algorithm's robustness to heavy-tailed noise and unknown parameters. Simulation results show that the algorithm outperforms existing methods in empirical size control, empirical power, and communication efficiency, particularly for sparse changes or heavy-tailed data. This framework provides a scalable solution for real-time anomaly monitoring of non-Gaussian data in industrial and environmental applications. Full article
(This article belongs to the Section D1: Probability and Statistics)
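The two-window contrast with threshold-triggered reporting can be caricatured as follows. The contaminated-Gaussian stream, window length, and threshold are invented, and a median/MAD contrast merely stands in for the paper's M-estimator-based ratio statistic; the point is only that a robust statistic survives the heavy-tailed contamination, and that "transmissions" occur only on exceedance.

```python
import random

random.seed(7)

def noisy(base):
    # Contaminated Gaussian: 5% of samples get heavy-tailed extra noise.
    x = base + random.gauss(0.0, 1.0)
    if random.random() < 0.05:
        x += random.gauss(0.0, 6.0)
    return x

# Toy stream with a mean shift of +3 at t = 300.
stream = [noisy(0.0) for _ in range(300)] + [noisy(3.0) for _ in range(300)]

def median(xs):
    s = sorted(xs)
    n = len(s)
    return s[n // 2] if n % 2 else 0.5 * (s[n // 2 - 1] + s[n // 2])

W, THRESH = 50, 2.0
alarms, transmissions = [], 0
for t in range(2 * W, len(stream)):
    ref = stream[t - 2 * W:t - W]   # reference window
    cur = stream[t - W:t]           # current window
    # Robust location contrast scaled by a MAD-type spread (M-estimator flavour).
    mad = median([abs(x - median(ref)) for x in ref]) + 1e-9
    stat = abs(median(cur) - median(ref)) / mad
    if stat > THRESH:
        transmissions += 1          # threshold-triggered: report only on exceedance
        alarms.append(t)

print(alarms[0] if alarms else None, transmissions)
```

A mean-based contrast with the same threshold would be whipsawed by the 5% heavy-tailed contamination; the median/MAD version fires shortly after the current window fills with post-change data, while staying quiet (and silent on the radio) beforehand.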

27 pages, 2073 KB  
Article
SparseMambaNet: A Novel Architecture Integrating Bi-Mamba and a Mixture of Experts for Efficient EEG-Based Lie Detection
by Hanbeot Park, Yunjeong Cho and Hunhee Kim
Appl. Sci. 2026, 16(3), 1437; https://doi.org/10.3390/app16031437 - 30 Jan 2026
Viewed by 240
Abstract
Traditional lie detection technologies, such as the polygraph and event-related potential (ERP)-based approaches, often face limitations in real-world applicability due to their sensitivity to psychological states and the complex, nonlinear nature of electroencephalogram (EEG) signals. In this study, we propose SparseMambaNet, a novel neural architecture that integrates the recently developed Bi-Mamba model with a Sparsely Activated Mixture of Experts (MoE) structure to effectively model the intricate spatio-temporal dynamics of EEG data. By leveraging the near-linear computational complexity of Mamba and the bidirectional contextual modeling of Bi-Mamba, the proposed framework efficiently processes long EEG sequences while maximizing representational power through the selective activation of expert networks tailored to diverse input characteristics. Experiments were conducted with 46 healthy subjects using a simulated criminal scenario based on the Comparison Question Technique (CQT) with monetary incentives to induce realistic psychological tension. We extracted nine statistical and neural complexity features, including Hjorth parameters, Sample Entropy, and Spectral Entropy. The results demonstrated that Sample Entropy and Hjorth parameters achieved exceptional classification performance, recording F1 scores of 0.9963 and 0.9935, respectively. Statistical analyses further revealed that the post-response “answer” interval provided significantly higher discriminative power compared to the “question” interval. Furthermore, channel-level analysis identified core neural loci for deception in the frontal and fronto-central regions, specifically at channels E54 and E63. These findings suggest that SparseMambaNet offers a highly efficient and precise solution for EEG-based lie detection, providing a robust foundation for the development of personalized brain–computer interface (BCI) systems in forensic and clinical settings. Full article
(This article belongs to the Special Issue Brain-Computer Interfaces: Development, Applications, and Challenges)
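The Hjorth parameters named in the abstract (Activity, Mobility, Complexity) are simple variance-based descriptors of a time series and can be sketched in a few lines of plain Python. The 5 Hz test sine and 250-sample window below are illustrative choices, not values taken from the study; for a pure sinusoid, Complexity should come out close to 1.

```python
import math

def variance(x):
    """Population variance of a sequence."""
    m = sum(x) / len(x)
    return sum((v - m) ** 2 for v in x) / len(x)

def hjorth(signal):
    """Hjorth Activity, Mobility, and Complexity of a 1-D signal.

    Activity   = var(x)
    Mobility   = sqrt(var(x') / var(x))
    Complexity = Mobility(x') / Mobility(x)
    where x' is the first difference of the signal.
    """
    d1 = [b - a for a, b in zip(signal, signal[1:])]
    d2 = [b - a for a, b in zip(d1, d1[1:])]
    activity = variance(signal)
    mobility = math.sqrt(variance(d1) / activity)
    complexity = math.sqrt(variance(d2) / variance(d1)) / mobility
    return activity, mobility, complexity

# Illustrative input: a 5 Hz sine over 250 samples (5 full periods).
sine = [math.sin(2 * math.pi * 5 * t / 250) for t in range(250)]
act, mob, comp = hjorth(sine)
```

Sample Entropy, the other top-performing feature, follows the same spirit (a template-matching count over embedded subsequences) but needs a few more lines.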
16 pages, 801 KB  
Article
Traffic Simulation-Based Sensitivity Analysis of Long Underground Expressways
by Choongheon Yang and Chunjoo Yoon
Appl. Sci. 2026, 16(3), 1249; https://doi.org/10.3390/app16031249 - 26 Jan 2026
Viewed by 219
Abstract
Long underground expressways have emerged as an alternative to surface highways in densely urbanized areas; however, their enclosed geometry, extended length, and steep longitudinal gradients introduce traffic-flow dynamics distinct from those of surface roads. This study investigates the combined and interaction effects of traffic volume, heavy-vehicle ratio, longitudinal gradient, lane number, and lane-changing policy on traffic performance in long underground expressways using microscopic traffic simulation. A hypothetical 20 km underground expressway network was evaluated under 72 systematically designed scenarios. Weighted average speed and throughput were analyzed using nonparametric statistics, generalized linear models with interaction terms, and machine learning-based sensitivity analysis. While traffic volume and heavy-vehicle ratio were confirmed as dominant determinants of performance, a key contribution of this study is the identification of the density-dependent role of lane-changing policies. Under moderate traffic density, permissive lane-changing improves efficiency by enabling vehicles to bypass localized disturbances caused by heavy vehicles and longitudinal gradients, thereby enhancing capacity utilization. In contrast, under high-density conditions, permissive lane-changing amplifies lane-change conflicts and shockwave propagation within the confined underground environment, accelerating traffic instability and performance breakdown. These adverse effects are further intensified by steep uphill gradients. The findings demonstrate that lane-changing policies on long underground expressways should be designed in a context-sensitive manner, balancing efficiency and stability across traffic states. Full article
(This article belongs to the Section Transportation and Future Mobility)
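As a rough illustration of how a 72-scenario full factorial and a flow-weighted average speed might be assembled, the sketch below uses hypothetical factor levels and lane observations; the actual levels and measurements used in the study are not reproduced here.

```python
import itertools

# Hypothetical factor levels producing a 72-scenario full factorial:
# 3 volumes x 3 heavy-vehicle ratios x 2 gradients x 2 lane counts x 2 policies.
volumes = [1000, 1500, 2000]            # demand, veh/h/lane
hv_ratios = [0.05, 0.10, 0.20]          # heavy-vehicle share
gradients = [0.02, 0.05]                # longitudinal grade
lane_counts = [2, 3]
policies = ["restricted", "permissive"] # lane-changing policy

scenarios = list(itertools.product(volumes, hv_ratios, gradients,
                                   lane_counts, policies))

def weighted_average_speed(lane_obs):
    """Flow-weighted mean speed over (flow_veh_per_h, speed_km_per_h) pairs."""
    total_flow = sum(q for q, _ in lane_obs)
    return sum(q * v for q, v in lane_obs) / total_flow

# One hypothetical observation: a slow uphill lane drags the network mean down.
v_bar = weighted_average_speed([(1800, 80.0), (1600, 72.0), (900, 55.0)])
```

Weighting by flow rather than averaging lane speeds directly is what makes the performance measure reflect the experience of the typical vehicle, not the typical lane.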
18 pages, 2091 KB  
Article
Computational Modelling and Clinical Validation of an Alzheimer’s-Related Network in Brain Cancer: The SKM034 Model
by Kristy Montalbo, Izabela Stasik, Christopher George Severin Smith and Emyr Yosef Bakker
Curr. Issues Mol. Biol. 2026, 48(2), 126; https://doi.org/10.3390/cimb48020126 - 23 Jan 2026
Viewed by 395
Abstract
Cancer and Alzheimer’s disease (AD) display an inverse relationship, and there is a need to further explore this interplay. One key genetic contributor to AD is SORL1, the loss of which is thought to be causally related to AD development. SORL1 also appears to be implicated in cancer. To examine SORL1 and its network, this article simulated SORL1 and its interactions via signal-flow Boolean modelling, including in silico knockouts (mirroring in vivo loss-of-function mutations). This model (SKM034) predicted a total of 29 key changes in molecular relationships following the loss of SORL1 or another highly connected protein (ERBB2). Literature validation demonstrated that 2 of these predictions were at least partially validated experimentally, whilst 27 were Potentially Novel Predictions (PNPs). Complementing the in-depth relationship analyses was signal flow analysis through the network’s structure, validated using cell line and cancer patient RNA-seq data. Correct prediction rates for these analyses reached 60% (statistically significant relative to a random model). This article demonstrates the clinical relevance of this Alzheimer’s-related network in a cancer context and, through the PNPs, provides a strong starting point for in vitro experimental validation. As with previously published models using similar methods, the model may be reanalysed in different contexts for further discoveries. Full article
(This article belongs to the Collection Bioinformatics Approaches to Biomedicine)
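A synchronous Boolean model with in silico knockouts can be sketched as follows. The three-node fragment (SORL1 repressing an ERBB2-driven signal) is a made-up toy for illustration only, not the SKM034 network or its actual regulatory logic; the point is only the mechanics of pinning a knocked-out node to False and iterating to a fixed point.

```python
def step(state, rules, knockouts=frozenset()):
    """One synchronous update; knocked-out nodes are pinned to False."""
    return {n: (False if n in knockouts else fn(state)) for n, fn in rules.items()}

def attractor(state, rules, knockouts=frozenset(), max_steps=50):
    """Iterate synchronous updates until a fixed point is reached (or give up)."""
    for _ in range(max_steps):
        nxt = step(state, rules, knockouts)
        if nxt == state:
            return state
        state = nxt
    return state

# Hypothetical three-node fragment (NOT the SKM034 network): SORL1 represses
# a downstream signal that ERBB2 drives.
rules = {
    "SORL1": lambda s: s["SORL1"],                      # input node holds its value
    "ERBB2": lambda s: s["ERBB2"],                      # input node holds its value
    "SIGNAL": lambda s: s["ERBB2"] and not s["SORL1"],  # driven by ERBB2, repressed by SORL1
}
start = {"SORL1": True, "ERBB2": True, "SIGNAL": False}
wild_type = attractor(start, rules)                      # SIGNAL stays off
sorl1_ko = attractor(start, rules, knockouts={"SORL1"})  # knockout switches SIGNAL on
```

Comparing attractors between the wild-type and knockout runs is how a model of this kind yields the "key changes in molecular relationships" that the abstract then checks against the literature.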
41 pages, 5360 KB  
Article
Jellyfish Search Algorithm-Based Optimization Framework for Techno-Economic Energy Management with Demand Side Management in AC Microgrid
by Vijithra Nedunchezhian, Muthukumar Kandasamy, Renugadevi Thangavel, Wook-Won Kim and Zong Woo Geem
Energies 2026, 19(2), 521; https://doi.org/10.3390/en19020521 - 20 Jan 2026
Viewed by 316
Abstract
The optimal allocation of Photovoltaic (PV)- and wind-based renewable energy sources and Battery Energy Storage System (BESS) capacity is an important issue for the efficient operation of a microgrid network (MGN). The unpredictability of PV and wind generation needs to be smoothed out by coherent allocation of BESS units to meet the load demand. To address these issues, this article proposes efficient Energy Management System (EMS) and Demand Side Management (DSM) approaches for the optimal allocation of PV- and wind-based renewable energy sources and BESS capacity in the MGN. The DSM model helps to modify the peak load demand based on PV and wind generation, available BESS storage, and the utility grid. Based on the Real-Time Market Energy Price (RTMEP) of utility power, the charging/discharging pattern of the BESS and the power exchange with the utility grid are scheduled adaptively. On this basis, a Jellyfish Search Algorithm (JSA)-based bi-level optimization model is developed that considers the optimal capacity allocation and power scheduling of PV and wind sources and BESS capacity to satisfy the load demand. The top-level planning model solves the optimal allocation of PV and wind sources with the aim of reducing the total power loss of the MGN. The proposed JSA-based optimization achieved a 24.04% power loss reduction (from 202.69 kW to 153.95 kW) at peak load conditions through optimal PV- and wind-based DG placement and sizing. The bottom-level model focuses on achieving the optimal operational configuration of the MGN through optimal power scheduling of PV, wind, BESS, and the utility grid with DSM-based load proportions, with the aim of minimizing the operating cost. Simulation results on the IEEE 33-node MGN demonstrate that the 20% DSM strategy attains maximum operational cost savings of €ct 3196.18 (a reduction of 2.80%) over 24 h of operation, with a 46.75% reduction in peak-hour grid dependency. Statistical analysis over 50 independent runs confirms the robustness of the JSA over Particle Swarm Optimization (PSO) and the Osprey Optimization Algorithm (OOA), with a standard deviation of only 0.00017 in the fitness function, demonstrating its superior convergence characteristics for the proposed optimization problem. Finally, based on the simulation outcomes of the considered bi-level optimization problem, it can be concluded that the proposed JSA-based approach efficiently optimizes PV- and wind-based resource allocation along with BESS capacity and helps to operate the MGN with reduced power loss and operating costs. Full article
(This article belongs to the Section A1: Smart Grids and Microgrids)
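For readers unfamiliar with the JSA, a minimal sketch of its core mechanic is given below: a time-control function c(t) switches each jellyfish between ocean-current motion (drift toward the current best) and swarm motion (passive or active local steps), with greedy replacement. The population size, iteration count, step coefficients, and the toy sphere objective are illustrative choices, not the study's settings or its power-flow objective.

```python
import random

def jsa_minimize(f, dim, lo, hi, pop=20, iters=200, seed=1):
    """Minimal Jellyfish Search sketch: ocean-current vs. swarm motion,
    switched by the time-control function c(t), with greedy replacement."""
    rng = random.Random(seed)
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop)]
    best = min(X, key=f)[:]
    for t in range(1, iters + 1):
        mean = [sum(x[d] for x in X) / pop for d in range(dim)]
        c = abs((1 - t / iters) * (2 * rng.random() - 1))  # time-control function
        for i in range(pop):
            if c >= 0.5:
                # Ocean current: drift toward the best jellyfish.
                trial = [X[i][d] + rng.random() * (best[d] - 3 * rng.random() * mean[d])
                         for d in range(dim)]
            elif rng.random() > 1 - c:
                # Passive swarm motion: small random step around the current position.
                trial = [X[i][d] + 0.1 * (hi - lo) * (rng.random() - 0.5)
                         for d in range(dim)]
            else:
                # Active swarm motion: step toward (or away from) a random neighbour.
                j = rng.randrange(pop)
                sign = 1.0 if f(X[j]) < f(X[i]) else -1.0
                trial = [X[i][d] + rng.random() * sign * (X[j][d] - X[i][d])
                         for d in range(dim)]
            trial = [min(hi, max(lo, v)) for v in trial]  # keep within bounds
            if f(trial) < f(X[i]):  # greedy replacement keeps the search monotone
                X[i] = trial
        cand = min(X, key=f)
        if f(cand) < f(best):
            best = cand[:]
    return best

# Toy objective: minimize the sphere function, optimum at the origin.
sphere = lambda x: sum(v * v for v in x)
sol = jsa_minimize(sphere, dim=3, lo=-5.0, hi=5.0)
```

In the paper's bi-level setting, f would instead wrap a load-flow evaluation of the MGN (power loss at the top level, operating cost at the bottom), which is what makes each fitness call expensive and the algorithm's convergence behaviour worth comparing against PSO and OOA.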