Search Results (12,406)

Search Parameters:
Keywords = computational tool

25 pages, 2863 KB  
Article
Interpretable Network-Level Biomarker Discovery for Alzheimer’s Stage Assessment Using Resting-State fNIRS Complexity Graphs
by Min-Kyoung Kang, Agatha Elisabet, So-Hyeon Yoo and Keum-Shik Hong
Brain Sci. 2026, 16(2), 239; https://doi.org/10.3390/brainsci16020239 - 19 Feb 2026
Abstract
Background/Objectives: This study introduces a reproducible and interpretable graph-based framework for resting-state functional near-infrared spectroscopy (fNIRS) that enables network-level biomarker discovery for Alzheimer’s disease (AD). Although resting-state fNIRS is well suited for task-free assessment, most existing approaches rely on static channel-wise features or conventional functional connectivity, limiting insight into coordinated network dynamics and reproducibility. Methods: Resting-state prefrontal fNIRS signals were represented as subject-level graphs in which edges captured coordinated fluctuations of nonlinear signal complexity across channels, computed using sliding-window analysis. Graph neural networks (GNNs) were employed as analytical tools to identify disease-stage-related network patterns. Interpretability was assessed using edge-level importance measures, and reproducibility was evaluated through fold-wise stability analysis and consensus network construction. Results: The proposed complexity–fluctuation-based graph representation consistently outperformed conventional amplitude-based functional connectivity. Statistically supported prefrontal network biomarkers distinguishing mild cognitive impairment (MCI) from healthy aging were identified, with statistically significant group differences (p = 0.001). In contrast, network patterns associated with Alzheimer’s disease were more heterogeneous and less consistently expressed. Consensus analysis revealed a subset of prefrontal connections repeatedly selected across cross-validation folds, and attention-based network patterns showed strong spatial correspondence with statistically derived biomarkers. Conclusions: This study establishes a reproducible and interpretable framework for resting-state fNIRS analysis that emphasizes coordinated complexity dynamics rather than classification accuracy. The results indicate that network-level alterations are most consistently expressed at the MCI stage, highlighting its role as a critical transitional state and supporting the potential of the proposed approach for longitudinal monitoring and clinically applicable fNIRS-based assessment of neurodegenerative disease. Full article
(This article belongs to the Special Issue Non-Invasive Neurotechnologies for Cognitive Augmentation)
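The core construction described above (per-channel nonlinear complexity computed over sliding windows, with edges capturing coordinated complexity fluctuations across channels) can be illustrated with a short sketch. The snippet below is a minimal, hypothetical illustration rather than the authors' pipeline: it uses a plain sample-entropy estimate as the complexity measure and Pearson correlation between channels' complexity time series as edge weights; the window length, step, and entropy parameters are arbitrary choices.

```python
import numpy as np

def sample_entropy(x, m=2, r_factor=0.2):
    """Sample entropy of a 1-D signal (m: embedding dimension, r: tolerance)."""
    x = np.asarray(x, dtype=float)
    r = r_factor * np.std(x)
    n = len(x)

    def count_matches(dim):
        templates = np.array([x[i:i + dim] for i in range(n - dim)])
        dist = np.max(np.abs(templates[:, None, :] - templates[None, :, :]), axis=2)
        # count unordered template pairs within tolerance, excluding self-matches
        return (np.sum(dist <= r) - len(templates)) / 2.0

    b, a = count_matches(m), count_matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.nan

def complexity_fluctuation_graph(signals, win=200, step=50):
    """signals: (channels, samples) resting-state time series.
    Returns a (channels, channels) adjacency of correlated complexity fluctuations."""
    n_ch, n_s = signals.shape
    starts = range(0, n_s - win + 1, step)
    comp = np.array([[sample_entropy(signals[c, s:s + win]) for s in starts]
                     for c in range(n_ch)])   # per-channel complexity time series
    return np.corrcoef(comp)                  # edge weights between channels

# toy usage: 8 prefrontal channels, 2000 samples of synthetic data
adj = complexity_fluctuation_graph(np.random.randn(8, 2000))
```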
49 pages, 2900 KB  
Article
Comparative Assessment of the Reliability of Non-Recoverable Subsystems of Mining Electronic Equipment Using Various Computational Methods
by Nikita V. Martyushev, Boris V. Malozyomov, Anton Y. Demin, Alexander V. Pogrebnoy, Georgy E. Kurdyumov, Viktor V. Kondratiev and Antonina I. Karlina
Mathematics 2026, 14(4), 723; https://doi.org/10.3390/math14040723 - 19 Feb 2026
Abstract
The assessment of reliability in non-repairable subsystems of mining electronic equipment represents a computationally challenging problem, particularly for complex and highly connected structures. This study presents a systematic comparative analysis of several deterministic approaches for reliability estimation, focusing on their computational efficiency, accuracy, and applicability. The investigated methods include classical boundary techniques (minimal paths and cuts), analytical decomposition based on the Bayes theorem, the logic–probabilistic method (LPM) employing triangle–star transformations, and the algorithmic Structure Convolution Method (SCM), which is based on matrix reduction of the system’s connectivity graph. The reliability problem is formally represented using graph theory, where each element is modeled as a binary variable with independent failures, which is a standard and practically justified assumption for power electronic subsystems operating without common-cause coupling. Numerical experiments were carried out on canonical benchmark topologies—bridge, tree, grid, and random connected graphs—representing different levels of structural complexity. The results demonstrate that the SCM achieves exact reliability values with up to six orders of magnitude acceleration compared to the LPM for systems containing more than 20 elements, while maintaining polynomial computational complexity. Qualitatively, the compared approaches differ in the nature of the output and practical applicability: boundary methods provide fast interval estimates suitable for preliminary screening, whereas decomposition may exhibit a systematic bias for highly connected (non-series–parallel) topologies. In contrast, the SCM consistently preserves exactness while remaining computationally tractable for medium and large sparse-to-moderately dense graphs, making it preferable for repeated recalculations in design and optimization workflows. The methods were implemented in Python 3.7 using NumPy and NetworkX, ensuring transparency and reproducibility. The findings confirm that the SCM is an efficient, scalable, and mathematically rigorous tool for reliability assessment and structural optimization of large-scale non-repairable systems. The presented methodology provides practical guidelines for selecting appropriate reliability evaluation techniques based on system complexity and computational resource constraints. Full article
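For context, the exact terminal (s–t) reliability of a small benchmark topology such as the bridge network can be obtained by brute-force enumeration of edge states, which is the kind of reference value against which boundary, decomposition, LPM, and SCM approaches are checked. The sketch below uses NetworkX (mentioned in the abstract) but is a generic reference implementation, not the authors' SCM; it is exponential in the number of edges and only practical for small graphs.

```python
import itertools
import networkx as nx

def terminal_reliability(graph, source, target, p):
    """Exact s-t reliability by enumerating all edge states (reference method,
    exponential in the number of edges; suitable only for small benchmarks)."""
    edges = list(graph.edges())
    rel = 0.0
    for states in itertools.product([0, 1], repeat=len(edges)):
        sub = nx.Graph()
        sub.add_nodes_from(graph.nodes())
        sub.add_edges_from(e for e, up in zip(edges, states) if up)
        prob = 1.0
        for e, up in zip(edges, states):
            prob *= p[e] if up else (1.0 - p[e])
        if nx.has_path(sub, source, target):
            rel += prob
    return rel

# classic 5-element bridge network between nodes 1 and 4
bridge = nx.Graph([(1, 2), (1, 3), (2, 3), (2, 4), (3, 4)])
p = {e: 0.9 for e in bridge.edges()}
print(terminal_reliability(bridge, 1, 4, p))   # ≈ 0.97848 for p = 0.9
```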
36 pages, 5121 KB  
Article
Peripheral Artery Disease (P.A.D.): Vascular Hemodynamic Simulation Using a Printed Circuit Board (PCB) Design
by Claudiu N. Lungu, Aurelia Romila, Aurel Nechita and Mihaela C. Mehedinti
Bioengineering 2026, 13(2), 241; https://doi.org/10.3390/bioengineering13020241 - 19 Feb 2026
Abstract
Background: Arterial stenosis produces nonlinear changes in vascular impedance that are challenging to investigate in real time using either benchtop flow phantoms or high-fidelity computational fluid dynamics (CFD) models. Objective: This study aimed to develop and evaluate a low-cost printed circuit board (PCB) analog capable of reproducing the hemodynamic effects of progressive arterial stenosis through an R–L–C mapping of vascular mechanics. Methods: A lumped-parameter (0D) electrical network was constructed in which voltage represented pressure, current represented flow, resistance modeled viscous losses, capacitance corresponded to vessel compliance, and inductance represented fluid inertance. A variable resistor simulated focal stenosis and was adjusted incrementally to represent progressive narrowing. Input Uin, output Uout, peak-to-peak Vpp, and mean Vavg voltages were recorded at a driving frequency of 50 Hz. Physiological correspondence was established using the canonical relationships R = 8μl/(πr⁴), L = ρl/(πr²), and C = 3πr³/(2Eh), where μ is blood viscosity, ρ is density, r is the vessel radius, l is the segment length, E is Young’s modulus, and h is wall thickness. A calibration constant was applied to convert measured voltage differences into pressure differences. Results: As simulated stenosis increased, the circuit exhibited a monotonic rise in Uout and Vpp, with a clear inflection beyond mid-range narrowing, consistent with the nonlinear growth in pressure loss predicted by fluid dynamic theory. Replicate measurements yielded stable, repeatable traces with no outliers under nominal test conditions. Qualitative trends matched those of surrogate 0D and CFD analyses, showing minimal changes for mild narrowing (≤25%) and a sharp increase in pressure loss for moderate to severe stenoses (≥50%). The PCB analog uses a simplified, lumped-parameter representation driven by a fixed-frequency sinusoidal excitation and therefore does not reproduce physiologically realistic systolic–diastolic waveforms or heart–arterial coupling. In addition, the present configuration is intended for relatively straight peripheral arterial segments and is not designed to capture the complex geometry and branching of specialized vascular beds (e.g., intracranial circulation) or strongly curved elastic vessels (e.g., the thoracic aorta). Conclusions: The PCB analog successfully reproduces the characteristic hemodynamic signatures of arterial stenosis in real time and at low cost. The model provides a valuable tool for educational and research applications, offering rapid and intuitive visualization of vascular behavior. Current accuracy reflects assumptions of Newtonian, laminar, and lumped flow; future work will refine calibration, quantify uncertainty, and benchmark results against physiological measurements and full CFD simulations. Full article
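The R–L–C mapping quoted in the abstract translates directly into code. The sketch below evaluates those relationships for an illustrative peripheral-artery-like segment; the numerical parameter values are hypothetical and only meant to show the unit bookkeeping, and the compliance is computed exactly as quoted (in many derivations the segment compliance also carries a factor of the length l).

```python
import math

def vessel_rlc(mu, rho, E, h, r, l):
    """Lumped-parameter analog of a vessel segment using the canonical
    relationships quoted in the abstract (SI units assumed)."""
    R = 8.0 * mu * l / (math.pi * r**4)       # viscous (Poiseuille) resistance
    L = rho * l / (math.pi * r**2)            # fluid inertance
    C = 3.0 * math.pi * r**3 / (2.0 * E * h)  # wall compliance as quoted (per unit length)
    return R, L, C

# illustrative, hypothetical values for a straight peripheral segment (SI units)
R, L, C = vessel_rlc(mu=3.5e-3, rho=1060.0, E=1.0e6, h=1.0e-3, r=3.0e-3, l=0.1)
print(f"R = {R:.3e} Pa*s/m^3, L = {L:.3e} Pa*s^2/m^3, C = {C:.3e} m^2/Pa (per unit length)")
```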
46 pages, 5995 KB  
Review
Advancing Label-Free Imaging Through CARS Microscopy: From Signal Formation to Biological Interpretation
by Agata Barzowska-Gogola, Emilia Staniszewska-Ślęzak, Joanna Budziaszek, Anna Górska-Ratusznik, Andrzej Baliś, Michał Łucki, Adam Sułek and Barbara Pucelik
Int. J. Mol. Sci. 2026, 27(4), 1990; https://doi.org/10.3390/ijms27041990 - 19 Feb 2026
Abstract
Label-free imaging is becoming ever more important, especially in modern molecular biophysics. This approach allows observation of biological structures and dynamics without the alteration caused by dyes or genetic labels. Coherent Anti-Stokes Raman Scattering (CARS) microscopy represents a unique method that utilizes the intrinsic vibrational signatures of biomolecules, thereby transforming the field. Fluorescence-based methods offer marked sensitivity but are prone to photobleaching and labeling artifacts and may provide inadequate biochemical detection. CARS enables chemically specific, real-time imaging of molecular structures, e.g., lipids, proteins and nucleic acids, within their natural environment. Over the past decade, advances in laser technology, detection methods, and computational analysis have turned CARS from a rare optical phenomenon into a useful tool applied in many fields, from basic research on molecular structure to practical biomedical imaging. This review presents the principles of CARS microscopy and the latest achievements in this field, highlighting its impact on molecular and cellular biophysics, as well as exploring the potential of artificial intelligence and multimodal approaches to increase its applications in precision medicine. In this context, CARS serves as both a state-of-the-art imaging technique and a means of transforming internal molecular vibrations into information useful in biology and biophysics. In this way, it combines the physical sciences with molecular biology, enabling innovative biomedical research. Full article
(This article belongs to the Collection Latest Review Papers in Molecular Biophysics)
59 pages, 12506 KB  
Article
Power System Transition Planning: A Planner-Oriented Optimization Model
by Ahmed Al-Shafei, Nima Amjady, Hamidreza Zareipour and Yankai Cao
Energies 2026, 19(4), 1070; https://doi.org/10.3390/en19041070 - 19 Feb 2026
Abstract
This paper presents a comprehensive power system transition-planning model positioned between conventional generation and transmission expansion planning (GTEP) formulations and broader macro-energy system (MES) tools. Existing planning models are typically unable to simultaneously represent detailed network constraints, adaptive long-term uncertainty, and a broad set of grid-enhancing transition technologies within a single tractable optimization framework; this work enables such integrated, scenario-based planning. The framework remains rooted in detailed electrical system modeling while expanding the decision space to include transition-relevant technologies: conventional and renewable generation, transmission, advanced flow-control devices, dynamic line rating, energy storage, and retrofit options, all within a long-term planning model under uncertainty. The contribution is the integrated representation of these options and the modeling constructs required to capture their interactions, including expressions enabling concurrent investment decisions across FACTS, dynamic line rating, and transmission expansion; network-embedded modeling of series compensation devices; a battery degradation model that avoids exogenous degradation cost proxies; and a GIS-based zoning resolution methodology balancing spatial fidelity and computational tractability. The resulting formulation is a mixed-integer multi-stage stochastic program. Analytical value is demonstrated through a detailed small-scale example based on Alberta’s power system. To overcome the prohibitive computational burden encountered when scaling the formulation to a practical test case consistent with Alberta’s long-term power system planning practices, Stochastic Dual Dynamic Programming is employed in parallel. The resulting solution demonstrates the feasibility of a subclass of highly detailed, transition-oriented electrical system planning models that are otherwise intractable under monolithic workstation-based approaches. Full article
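As a very reduced illustration of the scenario-based planning idea (not the paper's multi-stage mixed-integer formulation, its network constraints, or its SDDP decomposition), the sketch below solves a toy two-stage stochastic capacity-expansion problem with PuLP: a single investment decision is made before demand is known, and dispatch plus load shedding act as recourse in each scenario. All numbers are hypothetical.

```python
import pulp

# Hypothetical data: one candidate generation investment, two demand scenarios.
existing_mw = 800.0
scenarios = {"low": (900.0, 0.5), "high": (1200.0, 0.5)}   # (peak demand MW, probability)
inv_cost, var_cost, voll = 1000.0, 30.0, 5000.0            # toy $/MW, $/MWh, $/MWh

prob = pulp.LpProblem("toy_capacity_expansion", pulp.LpMinimize)
build = pulp.LpVariable("new_capacity_mw", lowBound=0)
gen = {s: pulp.LpVariable(f"gen_{s}", lowBound=0) for s in scenarios}
shed = {s: pulp.LpVariable(f"shed_{s}", lowBound=0) for s in scenarios}

# Expected-cost objective: investment plus probability-weighted operating/shedding cost.
prob += inv_cost * build + pulp.lpSum(
    p * (var_cost * gen[s] + voll * shed[s]) for s, (_, p) in scenarios.items()
)
for s, (demand, _) in scenarios.items():
    prob += gen[s] <= existing_mw + build        # capacity limit in scenario s
    prob += gen[s] + shed[s] == demand           # energy balance with shedding as recourse

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print(build.value(), {s: gen[s].value() for s in scenarios})
```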
12 pages, 1359 KB  
Article
89Zr-girentuximab PET/CT Enables Noninvasive Assessment of Indeterminate Renal Masses and Metastatic Clear-Cell Renal Cell Carcinoma
by Yihan Cao, Jonathan Kim, Justin Talluto, Taylor McVeigh, Michael L. Blute, Douglas M. Dahl, Keyan Salari, Pedram Heidari and Shadi A. Esfahani
Pharmaceutics 2026, 18(2), 258; https://doi.org/10.3390/pharmaceutics18020258 - 19 Feb 2026
Abstract
Background: Indeterminate renal masses (IRMs) frequently require biopsy for characterization and often lead to unnecessary surgical interventions. 89Zr-girentuximab is a positron emission tomography (PET) radiopharmaceutical targeting carbonic anhydrase IX, a biomarker overexpressed in clear-cell renal cell carcinoma (ccRCC). This real-world experience demonstrates the impact of 89Zr-girentuximab PET on the clinical management of patients with IRM and its role in differentiating primary and metastatic ccRCC from other etiologies. Methods: This prospective single-center study, part of an expanded access program (NCT06090331), investigated patients with IRM on conventional imaging who underwent 89Zr-girentuximab PET/computed tomography (PET/CT). Qualitative and quantitative PET/CT features of each lesion were assessed. Pathologic or clinical diagnosis was determined for all lesions. Referring physicians were surveyed to evaluate the impact of PET on patient management. Results: Seven male patients (age range, 57–78 years) were included; four had ccRCC (including two with metastatic disease) and three had oncocytoma (including one with Birt-Hogg-Dubé syndrome). Across all 32 lesions identified, 89Zr-girentuximab PET/CT accurately characterized each lesion based on pathologic or clinical diagnosis. 89Zr-girentuximab PET/CT identified ccRCC tumor thrombi in the inferior vena cava and renal vein branches (SUVmax 12.0–13.0), a perinephric deposit (SUVmax 36.4), and intramuscular (SUVmax 103.0), pulmonary (SUVmax 4.0–10.5), and osseous (SUVmax 10.2) metastases. 89Zr-girentuximab PET/CT enabled the diagnosis of oncocytomatosis in one patient and detected a renal lesion with positive uptake that was occult on MRI. According to referring physicians, 89Zr-girentuximab PET/CT changed clinical management in six of seven patients and improved patient care in all cases. Conclusions: 89Zr-girentuximab PET/CT provides a noninvasive tool for characterizing indeterminate renal masses and metastatic ccRCC and may improve clinical problem-solving in complex scenarios. Full article
18 pages, 1311 KB  
Article
Benchmarking edgeR and methylKit for the Detection of Differential DNA Methylation: A Methodological Evaluation
by Iraia Muñoa-Hoyos, Manu Araolaza, Irune Calzado, Mikel Albizuri and Nerea Subirán
Int. J. Mol. Sci. 2026, 27(4), 1964; https://doi.org/10.3390/ijms27041964 - 18 Feb 2026
Abstract
Despite the improvements in tool development for DNA methylation analysis, there is no consensus on the computational and statistical models used for differentially methylated cytosine (DMC) identification. This variability complicates the interpretation of findings and raises concerns about the reproducibility and biological significance of the detected changes. Here, we conducted a comparative evaluation of the edgeR and methylKit tools to assess their performance, concordance, and biological relevance in detecting DMCs following a morphine exposure model in mouse embryonic stem cells (mESCs). Both pipelines were applied to the same WGBS dataset (GEO accession number: GSE292082), and concordance was calculated at both single-base and gene levels. Although the total number of DMCs identified differed between tools, both pipelines detected a global hypomethylation pattern. Genomic distribution analysis revealed that DMCs predominantly localized to intergenic and intronic regions, as well as to open sea regions. Despite differences in sensitivity, both pipelines demonstrated moderate concordance at the DMC level (~56%) and high concordance at the gene level (~90%), identifying largely overlapping sets of differentially methylated genes (DMGs). Comparative assessments further showed that the choice of statistical metric can influence the perceived magnitude of biological effects. Sensitivity analyses indicated that threshold selection and normalization methods influence DMC detection, whereas aggregation at the gene level reduces discrepancies. Overall, our findings underscore the complementary strengths of methylKit and edgeR and highlight the importance of careful tool selection for epigenetic studies. In conclusion, we recommend integrating both pipelines to ensure a balanced interpretation of effect sizes, particularly in studies with complex experimental designs. Full article
(This article belongs to the Special Issue Benchmarking of Modeling and Informatic Methods in Molecular Sciences)
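The site-level versus gene-level concordance comparison reported above reduces to set overlaps between the two tools' significant calls. The sketch below shows one way to compute such overlaps with pandas; the tables, coordinates, and gene names are hypothetical stand-ins for exported edgeR and methylKit result files, and the overlap is expressed relative to the smaller call set (other definitions, e.g., Jaccard, are equally valid).

```python
import pandas as pd

def concordance(a, b):
    """Overlap of two sets, expressed as a fraction of the smaller set."""
    a, b = set(a), set(b)
    return len(a & b) / min(len(a), len(b)) if a and b else 0.0

# Hypothetical result tables; each row is one significant DMC with its annotated gene.
edger = pd.DataFrame({"chrom": ["chr1", "chr1", "chr2"],
                      "pos":   [10_468, 10_470, 5_021],
                      "gene":  ["Dnmt3a", "Dnmt3a", "Oct4"]})
methylkit = pd.DataFrame({"chrom": ["chr1", "chr2", "chr2"],
                          "pos":   [10_468, 5_021, 7_800],
                          "gene":  ["Dnmt3a", "Oct4", "Nanog"]})

dmc_edger = edger.apply(lambda r: (r.chrom, r.pos), axis=1)
dmc_methylkit = methylkit.apply(lambda r: (r.chrom, r.pos), axis=1)

print("DMC-level concordance:", concordance(dmc_edger, dmc_methylkit))     # site overlap
print("Gene-level concordance:", concordance(edger.gene, methylkit.gene))  # gene overlap
```

In this toy example the gene-level concordance is higher than the site-level concordance, mirroring the pattern reported in the abstract.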
35 pages, 1423 KB  
Review
Analysis of Biological Images and Quantitative Monitoring Using Deep Learning and Computer Vision
by Aaron Gálvez-Salido, Francisca Robles, Rodrigo J. Gonçalves, Roberto de la Herrán, Carmelo Ruiz Rejón and Rafael Navajas-Pérez
J. Imaging 2026, 12(2), 88; https://doi.org/10.3390/jimaging12020088 - 18 Feb 2026
Abstract
Automated biological counting is essential for scaling wildlife monitoring and biodiversity assessments, as manual processing currently limits analytical effort and scalability. This review evaluates the integration of deep learning and computer vision across diverse acquisition platforms, including camera traps, unmanned aerial vehicles (UAVs), and remote sensing. Methodological paradigms ranging from Convolutional Neural Networks (CNNs) and one-stage detectors like You Only Look Once (YOLO) to recent transformer-based architectures and hybrid models are examined. The literature shows that these methods consistently achieve high accuracy—often exceeding 95%—across various taxa, including insect pests, aquatic organisms, terrestrial vegetation, and forest ecosystems. However, persistent challenges such as object occlusion, cryptic species differentiation, and the scarcity of high-quality, labeled datasets continue to hinder fully automated workflows. We conclude that while automated counting has fundamentally increased data throughput, future advancements must focus on enhancing model generalization through self-supervised learning and improved data augmentation techniques. These developments are critical for transitioning from experimental models to robust, operational tools for global ecological monitoring and conservation efforts. Full article
12 pages, 2260 KB  
Article
PDCG: A Diffusion Model Guided by Pre-Training for Molecular Conformation Generation
by Yanchen Liu, Yameng Zheng, Amina Tariq, Xiaofei Nan, Lingbo Qu and Jinshuai Song
Chemistry 2026, 8(2), 29; https://doi.org/10.3390/chemistry8020029 - 18 Feb 2026
Abstract
Background: While machine learning has advanced molecular conformation generation, existing models often suffer from limited generalization and inaccuracies, especially for complex molecular structures. These limitations hinder their reliability in downstream applications. Methods: We propose a molecular conformation generation model that combines a molecular graph pre-training module with a diffusion model (PDCG). Feature embeddings are obtained from a pre-trained model and concatenated with the molecular graph information, and the fused features are used to generate conformations. The model was trained and evaluated on the GEOM-QM9 and GEOM-Drugs datasets. Results: PDCG significantly outperforms existing baselines. Furthermore, in downstream molecular property prediction tasks, conformations generated by PDCG yield results comparable to those derived from DFT-optimized geometries. Conclusions: Our work provides a robust and generalizable model for accurate conformation generation. PDCG offers a reliable tool for downstream computational tasks, such as the virtual screening of functional materials and drug-like molecules. Full article
(This article belongs to the Special Issue AI and Big Data in Chemistry)
23 pages, 1833 KB  
Article
MIC-SSO: A Two-Stage Hybrid Feature Selection Approach for Tabular Data
by Wei-Chang Yeh, Yunzhi Jiang, Hsin-Jung Hsu and Chia-Ling Huang
Electronics 2026, 15(4), 856; https://doi.org/10.3390/electronics15040856 - 18 Feb 2026
Abstract
High-dimensional structured datasets are common in fields such as semiconductor manufacturing, healthcare, and finance, where redundant and irrelevant features often increase computational cost and reduce predictive accuracy. Feature selection mitigates these issues by identifying a compact, informative subset of features, enhancing model efficiency, performance, and interpretability. This study proposes Maximal Information Coefficient–Simplified Swarm Optimization (MIC-SSO), a two-stage hybrid feature selection method that combines the MIC as a filter with SSO as a wrapper. In Stage 1, MIC ranks feature relevance and removes low-contribution features; in Stage 2, SSO searches for an optimal subset from the reduced feature space using a fitness function that integrates the Matthews Correlation Coefficient (MCC) and the feature reduction rate to balance accuracy and compactness. Experiments on five public datasets compare MIC-SSO with multiple hybrid, heuristic, and literature-reported methods; the results show superior predictive accuracy and stronger feature compression, and statistical tests confirm that these improvements are significant. By combining the efficiency of a filter with the precision of a wrapper, MIC-SSO offers a practical tool for high-dimensional tabular data analysis in fields such as healthcare, finance, and semiconductor manufacturing. Full article
(This article belongs to the Special Issue Feature Papers in Networks: 2025–2026 Edition)
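The two-stage idea (a filter ranking followed by a wrapper driven by a fitness that trades off MCC against feature reduction) can be sketched as below. This is an assumed reconstruction for illustration, not the authors' implementation: mutual information stands in for the MIC, a random forest with 5-fold cross-validation stands in for the wrapper's classifier, the weight alpha is arbitrary, and a random mask replaces the SSO search.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import mutual_info_classif
from sklearn.metrics import matthews_corrcoef
from sklearn.model_selection import cross_val_predict

X, y = load_breast_cancer(return_X_y=True)

# Stage 1 (filter): rank features by dependence with the label and keep the top half.
# The paper uses the Maximal Information Coefficient; mutual information is a stand-in.
scores = mutual_info_classif(X, y, random_state=0)
keep = np.argsort(scores)[::-1][: X.shape[1] // 2]

def fitness(mask, alpha=0.9):
    """Wrapper fitness for a binary mask over the reduced feature space:
    weighted sum of MCC and feature-reduction rate (alpha is an assumed weight)."""
    idx = keep[mask.astype(bool)]
    if idx.size == 0:
        return 0.0
    pred = cross_val_predict(RandomForestClassifier(n_estimators=100, random_state=0),
                             X[:, idx], y, cv=5)
    reduction = 1.0 - idx.size / X.shape[1]
    return alpha * matthews_corrcoef(y, pred) + (1.0 - alpha) * reduction

# Stage 2 (wrapper): SSO would search over masks; a random mask shows one evaluation.
rng = np.random.default_rng(0)
print(fitness(rng.integers(0, 2, size=keep.size)))
```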
28 pages, 595 KB  
Article
Reliability Improvement of a Parallel–Series System via Duplication and Reduction Strategies Under the Akshaya Distribution
by Ahmed T. Ramadan, Ahmed R. El-Saeed, Norah D. Alshahrani and Ahlam H. Tolba
Axioms 2026, 15(2), 149; https://doi.org/10.3390/axioms15020149 - 18 Feb 2026
Abstract
Parallel–series systems are fundamental in many industrial and engineering applications, yet their reliability assessment and improvement remain challenging, particularly when components exhibit non-constant failure rates. This study addresses this challenge by modeling a hybrid parallel–series system whose components follow the Akshaya lifetime distribution, a flexible model that can capture various hazard-rate shapes. For this system, we derive closed-form analytical expressions for key reliability indices, including the system reliability function, mean time to failure (MTTF), reliability equivalence factors (REFs), and δ-fractiles. To enhance system performance, four improvement strategies are formulated and analytically compared: failure-rate reduction, hot duplication, cold duplication with a perfect switch, and cold duplication with an imperfect switch. A comprehensive numerical case study validates the theoretical derivations and demonstrates the effectiveness of each strategy. The results show that cold duplication with a perfect switch yields the highest reliability gain, and the computed REFs provide a quantitative tool for balancing redundancy against component-level improvements. This work provides reliability engineers with a comprehensive analytical framework for the design and enhancement of complex parallel-series systems. Full article
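The structural part of the analysis (combining component reliabilities through parallel and series blocks and comparing a duplication strategy against the base design) can be sketched generically. The snippet below is a toy illustration: it uses an exponential survival function purely as a placeholder because the Akshaya distribution's reliability function is not reproduced in the abstract, and the hot-duplication case simply replaces one component with a two-component parallel block.

```python
import numpy as np

def series(rels):
    """Reliability of components in series: product of component reliabilities."""
    return np.prod(rels, axis=0)

def parallel(rels):
    """Reliability of components in parallel: 1 minus product of unreliabilities."""
    return 1.0 - np.prod(1.0 - np.asarray(rels), axis=0)

# Component reliability R(t); the paper uses the Akshaya lifetime distribution,
# an exponential survival function is used here purely as a placeholder.
def R(t, rate=0.05):
    return np.exp(-rate * t)

t = np.linspace(0.0, 40.0, 5)

# Hybrid system: two parallel branches, each a series of two components.
base = parallel([series([R(t), R(t)]), series([R(t), R(t)])])

# Hot duplication of one component: replace it with a two-component parallel block.
hot = parallel([series([parallel([R(t), R(t)]), R(t)]), series([R(t), R(t)])])

print(np.round(base, 4))
print(np.round(hot, 4))   # duplication raises reliability at every time point
```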
22 pages, 15805 KB  
Article
A Computational Approach for Risk Prediction to Protect Historical Buildings in Urban Excavations: Case Study of the Cervantes Theater in Segovia
by David Mencías-Carrizosa, Pablo Romero and Miguel A. Millán
Appl. Sci. 2026, 16(4), 1995; https://doi.org/10.3390/app16041995 - 17 Feb 2026
Viewed by 38
Abstract
This study presents the development of a computational tool designed to help automate decision-making in excavation and foundation construction in rock, aiming to minimize risks to adjacent historical structures in an urban context. The tool uses a graphical interface and focuses on estimating the propagation of vibrations generated by these construction processes. A working methodology has been proposed, and a computational tool has been developed to predict the feasibility and safety of specific construction techniques in different areas of study. Using field-collected data, a three-dimensional survey of adjacent buildings is conducted in a 3D CAD model, converting the continuous terrain into a discrete point mesh. This mesh enables the tracing of vibrational wave trajectories from their origin to potentially affected structures. The tool then calculates the peak particle velocities (PPV) at the foundations of these structures. By comparing these PPV values with predefined thresholds—selected from different excavation procedures with heavy equipment—excavation zones where equipment can be safely used are visually represented using a color-coded scheme. To validate the applicability of the proposed method and developed approach, the tool was tested on a case study: The Rehabilitation Project of the Cervantes Theater in Segovia, promoted by the Ministry of Transport, Mobility, and Urban Agenda. This project is currently halted due to damage sustained by adjacent buildings during the excavation process. Full article
(This article belongs to the Special Issue Non-Destructive Techniques for Heritage Conservation)
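The decision step of such a tool (predict the PPV at each foundation point and compare it with a damage threshold) can be sketched with a generic empirical attenuation law. The snippet below uses the common scaled-distance form PPV = K * (d / sqrt(Q))^(-beta), which is an assumption for illustration only; the paper's propagation model, constants, and thresholds are not reproduced here, and all numeric values are hypothetical.

```python
import numpy as np

def ppv_scaled_distance(distance_m, charge_kg, K=400.0, beta=1.6):
    """Generic empirical attenuation law PPV = K * (d / sqrt(Q))**(-beta) in mm/s.
    K and beta are site-specific constants (values here are purely illustrative)."""
    sd = distance_m / np.sqrt(charge_kg)
    return K * sd ** (-beta)

def classify_zones(distances_m, charge_kg, threshold_mm_s=10.0):
    """Flag mesh points where the predicted PPV at a protected foundation stays
    below a damage threshold (e.g., taken from a heritage-structure standard)."""
    ppv = ppv_scaled_distance(np.asarray(distances_m, dtype=float), charge_kg)
    return ppv, ppv <= threshold_mm_s

# Hypothetical grid of source-to-foundation distances (m) for a 2 kg equivalent charge.
dists = [5, 10, 20, 40, 80]
ppv, safe = classify_zones(dists, charge_kg=2.0)
for d, v, ok in zip(dists, ppv, safe):
    print(f"{d:>3} m: PPV ≈ {v:6.1f} mm/s -> {'safe' if ok else 'restricted'}")
```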
14 pages, 600 KB  
Communication
SnakeBITE: A SNAKEmake-Based Interface for Third-Generation Sequencing Data Analysis
by Andrea Bimbocci, Marta Baragli and Alberto Magi
Biomolecules 2026, 16(2), 314; https://doi.org/10.3390/biom16020314 - 16 Feb 2026
Viewed by 81
Abstract
In recent years, the use of computational pipelines for the analysis of omic data has become routine in bioinformatics, particularly with the advent of Next-Generation Sequencing (NGS) technologies. These technologies generate vast amounts of data that necessitate sophisticated analysis methods, often requiring programming skills and command-line interface proficiency. This complexity poses challenges for users from various backgrounds, including clinicians and biologists. Current solutions often involve workflow management tools and graphical user interfaces to simplify pipeline creation and execution. However, these tools predominantly cater to NGS data and are not fully adaptable to Third-Generation Sequencing (TGS) data, such as that produced by Oxford Nanopore Technologies (ONT). Here we present SnakeBITE, a modular genomic data analysis pipeline builder based on the Snakemake workflow manager, integrated with an interactive Shiny-based interface. SnakeBITE enables users to configure and execute TGS data analysis workflows locally without requiring programming expertise. The application supports the full ONT genomics data analysis pipeline, including base calling, alignment, variant calling, and annotation. Our results demonstrate SnakeBITE’s capacity to handle various stages of ONT data analysis efficiently, offering a user-friendly and highly customizable tool that bridges the gap between sophisticated data analysis and user accessibility. Full article
(This article belongs to the Section Bioinformatics and Systems Biology)
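For readers unfamiliar with Snakemake, the kind of rule-based workflow a builder like SnakeBITE assembles looks like the sketch below: a minimal, hand-written alignment-and-index fragment for ONT reads using minimap2 and samtools. The file paths, sample name, and rule names are hypothetical, and this is not code generated by SnakeBITE itself.

```
# Snakefile (sketch): one alignment step of an ONT workflow; paths are hypothetical.
rule all:
    input:
        "aligned/sample1.sorted.bam.bai"

rule align_ont:
    input:
        ref="ref/genome.fa",
        reads="fastq/sample1.fastq.gz"
    output:
        bam="aligned/sample1.sorted.bam"
    threads: 8
    shell:
        "minimap2 -ax map-ont -t {threads} {input.ref} {input.reads} | "
        "samtools sort -@ {threads} -o {output.bam} -"

rule index_bam:
    input:
        "aligned/sample1.sorted.bam"
    output:
        "aligned/sample1.sorted.bam.bai"
    shell:
        "samtools index {input}"
```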
23 pages, 809 KB  
Article
Numerical Solution of Integral Algebraic Equations with Singular Points Using the Least Squares Method
by Van Truong Vo, Denis Sidorov, Elena Chistyakova, Viktor Chistyakov and Aliona Dreglea
Mathematics 2026, 14(4), 693; https://doi.org/10.3390/math14040693 - 16 Feb 2026
Viewed by 87
Abstract
We conduct a numerical study of integral algebraic equations (IAEs) with singular points, which pose significant challenges for standard computational methods. The presence of singular points often renders classical discretization schemes unstable and inaccurate. This work explores the reformulation of such problems using a least squares framework to restore numerical stability. By recasting the singular IAE as a minimization problem, the least squares method effectively handles the non-integrability and ill-conditioning inherent in direct approaches. We provide a numerical analysis of the proposed scheme and present results from several test cases, demonstrating its superior performance in terms of the convergence rate and solution quality compared to conventional methods. Our findings establish the least squares method as a viable and effective tool for solving singular IAEs. Full article
(This article belongs to the Section C2: Dynamical Systems)
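The recasting idea (turn the discretized equation into an overdetermined linear system and minimize the residual) can be shown on a regular toy IAE: the example below couples an algebraic relation with a first-kind Volterra integral term, expands both unknowns in a low-order monomial basis, and solves the collocated system with np.linalg.lstsq. It only illustrates the least-squares reformulation and does not involve the singular points analyzed in the paper.

```python
import numpy as np

# Toy integral algebraic equation (IAE) on [0, 1]:
#   algebraic part:  x(t) + y(t) = 1 + t
#   integral part:   ∫_0^t y(s) ds = t**2 / 2
# Exact solution: x(t) = 1, y(t) = t.
deg = 3                                   # polynomial degree for x and y
t = np.linspace(0.0, 1.0, 50)             # collocation points (overdetermined system)
powers = np.arange(deg + 1)

V = t[:, None] ** powers                        # V[i, k] = t_i**k (evaluates x and y)
Vint = t[:, None] ** (powers + 1) / (powers + 1)  # ∫_0^t s**k ds = t**(k+1)/(k+1)
Z = np.zeros_like(V)

A = np.block([[V, V],                     # rows for x(t_i) + y(t_i)
              [Z, Vint]])                 # rows for ∫_0^t y(s) ds
b = np.concatenate([1.0 + t, t**2 / 2.0])

coef, *_ = np.linalg.lstsq(A, b, rcond=None)
a, c = coef[:deg + 1], coef[deg + 1:]
print("x coefficients ≈", np.round(a, 6))   # ≈ [1, 0, 0, 0]
print("y coefficients ≈", np.round(c, 6))   # ≈ [0, 1, 0, 0]
```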
48 pages, 3308 KB  
Review
From Neurons to Networks: A Holistic Review of Electroencephalography (EEG) from Neurophysiological Foundations to AI Techniques
by Christos Kalogeropoulos, Konstantinos Theofilatos and Seferina Mavroudi
Signals 2026, 7(1), 17; https://doi.org/10.3390/signals7010017 - 16 Feb 2026
Viewed by 290
Abstract
Electroencephalography (EEG) has transitioned from a subjective observational method into a data-intensive analytical field that utilises sophisticated algorithms and mathematical models. This review provides a holistic foundation by detailing the neurophysiological basis, recording techniques, and applications of EEG before providing a rigorous examination of traditional and modern analytical pillars. Statistical and Time-Series Analysis, Spectral and Time-Frequency Analysis, Spatial Analysis and Source Modelling, Connectivity and Network Analysis, and Nonlinear and Chaotic Analysis are explored. Afterwards, while acknowledging the historical role of Machine Learning (ML) and Deep Learning (DL) architectures, such as Support Vector Machines (SVMs) and Convolutional Neural Networks (CNNs), this review shifts the primary focus toward current state-of-the-art Artificial Intelligence (AI) trends. We place emphasis on the emergence of Foundation Models, including Large Language Models (LLMs) and Large Vision Models (LVMs), adapted for high-dimensional neural sequences. Finally, we explore the integration of Generative AI for data augmentation and review Explainable AI (XAI) frameworks designed to bridge the gap between “black-box” decoding and clinical interpretability. We conclude that the next generation of EEG analysis will likely converge into Neuro-Symbolic architectures, synergising the massive generative power of foundation models with the rigorous, rule-based interpretability of classical signal theory. Full article
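As a small, concrete example of the "Spectral and Time-Frequency Analysis" pillar mentioned above, the sketch below estimates a Welch power spectral density for a synthetic single-channel signal and extracts relative alpha-band (8–13 Hz) power; the sampling rate, signal, and band edges are illustrative assumptions, not taken from the review.

```python
import numpy as np
from scipy.signal import welch

fs = 250.0                                     # sampling rate (Hz), assumed
t = np.arange(0, 10, 1 / fs)
# synthetic single-channel "EEG": 10 Hz alpha rhythm plus noise (illustrative only)
x = 20e-6 * np.sin(2 * np.pi * 10 * t) + 5e-6 * np.random.randn(t.size)

f, psd = welch(x, fs=fs, nperseg=int(2 * fs))  # Welch power spectral density
df = f[1] - f[0]
alpha = (f >= 8) & (f <= 13)
alpha_power = psd[alpha].sum() * df            # band power by rectangle integration
rel_alpha = alpha_power / (psd.sum() * df)     # relative alpha power
print(f"relative alpha power ≈ {rel_alpha:.2f}")
```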