
Search Results (172)

Search Parameters:
Keywords = pipelined-forwarding

23 pages, 3153 KB  
Article
Domain-Specific Acceleration of Gravity Forward Modeling via Hardware–Software Co-Design
by Yong Yang, Daying Sun, Zhiyuan Ma and Wenhua Gu
Micromachines 2025, 16(11), 1215; https://doi.org/10.3390/mi16111215 - 25 Oct 2025
Viewed by 411
Abstract
The gravity forward modeling algorithm is a compute-intensive method widely used in scientific computing, particularly in geophysics, to predict the impact of subsurface structures on surface gravity fields. Traditional implementations rely on CPUs, where performance gains are mainly achieved through algorithmic optimization. With the rise of domain-specific architectures, the FPGA offers a promising platform for acceleration but faces challenges such as limited programmability and the high cost of implementing nonlinear functions. This work proposes an FPGA-based co-processor to accelerate gravity forward modeling. A RISC-V core is integrated with a custom instruction set targeting the key computation steps. Tasks are dynamically scheduled and executed on eight fully pipelined processing units, achieving high parallelism while retaining programmability. To address nonlinear operations, we introduce a piecewise linear approximation method optimized via stochastic gradient descent (SGD), significantly reducing resource usage and latency. The design is implemented on the AMD UltraScale+ ZCU102 FPGA (Advanced Micro Devices, Inc. (AMD), Santa Clara, CA, USA) and evaluated across several forward modeling scenarios. At 250 MHz, the system achieves up to a 179× speedup over an Intel Xeon 5218R CPU (Intel Corporation, Santa Clara, CA, USA) and improves energy efficiency by 2040×. To the best of our knowledge, this is the first FPGA-based gravity forward modeling accelerator design.
(This article belongs to the Special Issue Recent Advances in Field-Programmable Gate Array (FPGA))
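The SGD-optimized piecewise linear approximation mentioned in the abstract can be prototyped in software before committing a table to hardware. The sketch below is an illustration under assumptions, not the paper's implementation: it learns the knot y-values of a uniform piecewise linear table by SGD on squared error, using 1/√x as a stand-in nonlinear function; the segment count, learning rate, and target function are all assumptions.

```python
import math
import random

def pwl_eval(knots, ys, x):
    """Evaluate the piecewise linear table at x (uniform knot spacing)."""
    h = knots[1] - knots[0]
    seg = min(int((x - knots[0]) / h), len(ys) - 2)
    t = (x - knots[seg]) / h
    return (1 - t) * ys[seg] + t * ys[seg + 1], seg, t

def fit_pwl(f, lo, hi, segments=8, steps=50000, lr=0.1, seed=0):
    """Learn the knot y-values of a piecewise linear approximation of f
    on [lo, hi] by stochastic gradient descent on squared error."""
    rng = random.Random(seed)
    h = (hi - lo) / segments
    knots = [lo + i * h for i in range(segments + 1)]
    ys = [0.0] * (segments + 1)          # start from zero, let SGD fit
    for _ in range(steps):
        x = rng.uniform(lo, hi)
        pred, seg, t = pwl_eval(knots, ys, x)
        err = pred - f(x)
        # gradient of 0.5 * err^2 w.r.t. the two knot values bracketing x
        ys[seg] -= lr * err * (1 - t)
        ys[seg + 1] -= lr * err * t
    return knots, ys
```

In hardware, the learned `ys` table plus one multiply-add replaces a costly nonlinear evaluation; the SGD fit trades a small approximation error for that saving.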

22 pages, 1940 KB  
Article
A Comparative Study of Lightweight, Sparse Autoencoder-Based Classifiers for Edge Network Devices: An Efficiency Analysis of Feed-Forward and Deep Neural Networks
by Mi Young Jo and Hyun Jung Kim
Sensors 2025, 25(20), 6439; https://doi.org/10.3390/s25206439 - 17 Oct 2025
Viewed by 874
Abstract
This study proposes a lightweight classification framework for anomaly traffic detection in edge computing environments. Thirteen packet- and flow-level features extracted from the CIC-IDS2017 dataset were compressed into 4-dimensional latent vectors using a Sparse Autoencoder (SAE). Two classifiers were compared under the same pipeline: a Feed-Forward network (SAE-FF) and a Deep Neural Network (SAE-DNN). To ensure generalization, all experiments were conducted with 5-fold cross-validation. Performance evaluation revealed that SAE-DNN achieved superior classification performance, with an average accuracy of 99.33% and an AUC of 0.9993. The SAE-FF model, although exhibiting lower performance (average accuracy of 93.66% and AUC of 0.9758), maintained stable outcomes and offered significantly lower computational complexity (~40 FLOPs) than SAE-DNN (~8960 FLOPs). Device-level analysis confirmed that SAE-FF was the most efficient option for resource-constrained platforms such as the Raspberry Pi 4, whereas SAE-DNN achieved real-time inference on the Coral Dev Board by leveraging Edge TPU acceleration. To quantify this trade-off between accuracy and efficiency, we introduce the Edge Performance Efficiency Score (EPES), a composite metric that integrates accuracy, latency, memory usage, FLOPs, and CPU performance into a single score. EPES provides a practical, comprehensive benchmark for balancing accuracy against efficiency and for supporting device-specific model selection in practical edge deployments. These findings highlight the importance of system-aware evaluation and demonstrate that EPES can serve as a valuable guideline for efficient anomaly traffic classification in resource-limited environments.
(This article belongs to the Section Sensor Networks)
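The FLOP figures quoted in the abstract come down to multiply-accumulate counting for dense layers. The helper below shows that counting rule (2 × n_in × n_out per layer, biases and activations ignored); the layer widths in the example are hypothetical, since the abstract does not list the exact architectures.

```python
def dense_flops(sizes):
    """FLOPs for a fully connected stack given layer widths: each layer
    costs 2 * n_in * n_out (one multiply + one add per weight); biases
    and activation functions are ignored in this rough count."""
    return sum(2 * n_in * n_out for n_in, n_out in zip(sizes, sizes[1:]))

# Hypothetical heads on the 4-dimensional SAE latent vector:
shallow = dense_flops([4, 2])            # tiny feed-forward head: 16 FLOPs
deep = dense_flops([4, 32, 32, 32, 2])   # small multi-layer head: 4480 FLOPs
```

Even under such crude counting, a direct latent-to-class head is orders of magnitude cheaper than a multi-layer stack, which is the trade-off the EPES metric is designed to score.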

15 pages, 2937 KB  
Article
Denoising Degraded PCOS Ultrasound Images Using an Enhanced Denoising Diffusion Probabilistic Model
by Jincheng Peng, Zhenyu Guo, Xing Chen and Ming Zhou
Electronics 2025, 14(20), 4061; https://doi.org/10.3390/electronics14204061 - 15 Oct 2025
Viewed by 324
Abstract
Currently, diagnosis of polycystic ovary syndrome (PCOS) relies mainly on hormonal indicators and ultrasound imaging. However, ultrasound images are often affected by noise and artifacts during the imaging process, which significantly degrades image quality and increases the difficulty of diagnosis. This paper proposes a PCOS ultrasound image denoising method based on an improved denoising diffusion probabilistic model (DDPM). During the forward diffusion process of the original model, Gaussian noise is progressively added using a cosine-based scheduling strategy. In the reverse diffusion process, a conditional noise predictor is introduced and combined with the original ultrasound image information to iteratively denoise and recover a clear image. Additionally, we fine-tuned and optimized the model to better suit the requirements of PCOS ultrasound image denoising. Experimental results show that our model outperforms state-of-the-art methods in both noise suppression and structural fidelity. It delivers a fully automated PCOS ultrasound denoising pipeline whose diffusion-based restoration preserves clinically salient anatomy, improving the reliability of downstream assessments.
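The cosine-based noise schedule referenced in the abstract is a standard DDPM ingredient (Nichol and Dhariwal's formulation); a minimal sketch is below. The step count and offset `s` are the conventional defaults, assumed here rather than taken from the paper.

```python
import math

def alpha_bar(t, T, s=0.008):
    """Cumulative signal level of the cosine schedule at step t."""
    return math.cos((t / T + s) / (1 + s) * math.pi / 2) ** 2

def cosine_betas(T, s=0.008, max_beta=0.999):
    """Per-step noise variances beta_t derived from the ratio of
    consecutive alpha_bar values, clipped for numerical stability."""
    a0 = alpha_bar(0, T, s)
    betas = []
    for t in range(1, T + 1):
        ratio = (alpha_bar(t, T, s) / a0) / (alpha_bar(t - 1, T, s) / a0)
        betas.append(min(1.0 - ratio, max_beta))
    return betas
```

The forward process then adds Gaussian noise with variance `beta_t` at each step; the cosine shape keeps early steps gentle, which is why it tends to preserve fine structure better than a linear schedule.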

23 pages, 4965 KB  
Article
Direct Estimation of Electric Field Distribution in Circular ECT Sensors Using Graph Convolutional Networks
by Robert Banasiak, Zofia Stawska and Anna Fabijańska
Sensors 2025, 25(20), 6371; https://doi.org/10.3390/s25206371 - 15 Oct 2025
Viewed by 443
Abstract
The Electrical Capacitance Tomography (ECT) imaging pipeline relies on accurate estimation of electric field distributions to compute electrode capacitances and reconstruct permittivity maps. Traditional ECT forward models based on the Finite Element Method (FEM) offer high accuracy but are computationally intensive, limiting their use in real-time applications. In this proof-of-concept study, we investigate the use of Graph Convolutional Networks (GCNs) for direct, one-step prediction of the electric field distributions associated with a circular ECT sensor numerical model. The network is trained on FEM-simulated data and outputs full 2D electric field maps for all excitation patterns. To evaluate physical fidelity, we compute capacitance matrices from both the GCN-predicted and the FEM-based fields. Our results show strong agreement in both the directly predicted fields and the derived quantities, demonstrating the feasibility of replacing traditional solvers with fast, learned approximators. This approach has significant implications for future real-time ECT imaging and control applications.
(This article belongs to the Section Sensing and Imaging)

18 pages, 4201 KB  
Article
Hybrid-Mechanism Distributed Sensing Using Forward Transmission and Optical Frequency-Domain Reflectometry
by Shangwei Dai, Huajian Zhong, Xing Rao, Jun Liu, Cailing Fu, Yiping Wang and George Y. Chen
Sensors 2025, 25(19), 6229; https://doi.org/10.3390/s25196229 - 8 Oct 2025
Viewed by 464
Abstract
Fiber-optic sensing systems based on a forward transmission interferometric structure can achieve high sensitivity and a wide frequency response over long distances. However, such systems still fall short in positioning multi-point vibrations and detecting low-frequency vibrations, which limits their usefulness. To address these challenges, we study the viability of merging long-range forward-transmission distributed vibration sensing (FTDVS) with high-spatial-resolution optical frequency-domain reflectometry (OFDR), forming the first reported hybrid of these two distributed sensing methods. The probe light source is shared between the two sub-systems and provides stable linear optical frequency sweeping facilitated by high-order sideband injection locking; this is a new departure for FTDVS, which conventionally uses fixed-frequency continuous light. A nearest-neighbor signal replacement (NSR) method is proposed to address discontinuity in phase demodulation under periodic external modulation. The experimental results demonstrate that the hybrid system can localize vibration signals between 0 and 900 Hz within a sensing distance of 21 km. When the sensing distance is extended to 71 km, the FTDVS module can still function adequately for high-frequency vibration signals. This hybrid architecture offers a fresh approach to simultaneously achieving long-distance sensing and wide frequency response, making it suitable for the combined measurement of dynamic (e.g., gas leakage, pipeline excavation warning) and quasi-static (e.g., pipeline displacement) events in long-distance applications.
(This article belongs to the Special Issue Advances in Optical Fiber-Based Sensors)

24 pages, 4942 KB  
Article
ConvNet-Generated Adversarial Perturbations for Evaluating 3D Object Detection Robustness
by Temesgen Mikael Abraha, John Brandon Graham-Knight, Patricia Lasserre, Homayoun Najjaran and Yves Lucet
Sensors 2025, 25(19), 6026; https://doi.org/10.3390/s25196026 - 1 Oct 2025
Viewed by 437
Abstract
This paper presents a novel adversarial Convolutional Neural Network (ConvNet) method for generating adversarial perturbations in 3D point clouds, enabling gradient-free robustness evaluation of object detection systems at inference time. Unlike existing iterative gradient methods, our approach embeds the ConvNet directly into the detection pipeline at the voxel feature level. The ConvNet is trained to maximize detection loss while keeping perturbations within sensor error bounds through multi-component loss constraints (intensity, bias, and imbalance terms). Evaluation on a Sparsely Embedded Convolutional Detection (SECOND) detector with the KITTI dataset shows 8% overall mean Average Precision (mAP) degradation, while CenterPoint on NuScenes exhibits 24% weighted mAP reduction across 10 object classes. Analysis reveals an inverse relationship between object size and adversarial vulnerability: smaller objects (pedestrians: 13%, cyclists: 14%) are more vulnerable than larger vehicles (cars: 0.2%) on KITTI, with similar patterns on NuScenes, where barriers (68%) and pedestrians (32%) are most affected. Despite perturbations remaining within typical sensor error margins (mean L2 norm of 0.09% for KITTI, 0.05% for NuScenes, corresponding to 0.9–2.6 cm at typical urban distances), substantial detection failures occur. The key novelty is a one-time training phase in which a ConvNet learns effective adversarial perturbations; the trained network then supports gradient-free robustness evaluation during inference, requiring only a forward pass through the ConvNet (1.2–2.0 ms overhead) instead of iterative gradient computation. This makes continuous vulnerability monitoring practical for autonomous driving safety assessment.
(This article belongs to the Section Sensing and Imaging)

13 pages, 1935 KB  
Article
Enteroflow: Automated Pipeline for In Silico Characterization of Enterococcus faecium/faecalis Isolates from Short Reads
by Daniele Smedile, Elena L. Diaconu, Matteo Grelloni, Barbara Middei, Virginia Carfora, Antonio Battisti, Patricia Alba and Alessia Franco
Int. J. Mol. Sci. 2025, 26(19), 9441; https://doi.org/10.3390/ijms26199441 - 26 Sep 2025
Viewed by 489
Abstract
Antimicrobial resistance (AMR) is a critical global health challenge that affects both human and animal populations. In accordance with the One Health paradigm, AMR has been monitored in Italy since 2014 in major zoonotic pathogens and opportunistic commensal bacteria from animal production, within the framework of the EU Harmonized Monitoring Program for AMR (according to EU Decision 2013/652, repealed by EU Decision 2020/1729), conducted by the Italian National Reference Center (CRN-AR) and National Reference Laboratory (NRL-AR) for antimicrobial resistance at the “Istituto Zooprofilattico Sperimentale del Lazio e della Toscana (IZSLT)” (on behalf of the Italian Ministry of Health). Among the monitored bacterial species, the commensals Enterococcus (E.) faecium and E. faecalis have emerged as opportunistic human pathogens with increasing AMR profiles. To address this challenge, the CRN-AR and NRL-AR have developed a custom bioinformatic pipeline, named Enteroflow, which enables efficient analysis of high-throughput sequencing (HTS) data for the genomic characterization of E. faecium/faecalis isolates. A pivotal feature of this tool is the integration of Nextflow's workflow manager and Domain Specific Language (DSL), ensuring the reproducibility and scalability of genomic analyses while allowing the monitoring of processes and computational performance. The tools included in the workflow span short-read assemblers, genomic characterization tools for AMR and virulence gene detection, and plasmid replicon typing, with results combined into structured, usable reports. These developments represent a major step forward in supporting surveillance efforts and mitigation strategies for AMR in zoonotic and commensal bacteria.
(This article belongs to the Special Issue Computational Genomics and Bioinformatics in Microbiology)

34 pages, 6187 KB  
Article
An Automated Domain-Agnostic and Explainable Data Quality Assurance Framework for Energy Analytics and Beyond
by Balázs András Tolnai, Zhipeng Ma, Bo Nørregaard Jørgensen and Zheng Grace Ma
Information 2025, 16(10), 836; https://doi.org/10.3390/info16100836 - 26 Sep 2025
Viewed by 476
Abstract
Nonintrusive load monitoring (NILM) relies on high-resolution sensor data to disaggregate total building energy into end-use load components, for example, HVAC, ventilation, and appliances. On the ADRENALIN corpus, simple NaN handling with forward fill and mean substitution reduced average NMAE from 0.82 to 0.76 for the Bayesian baseline, from 0.71 to 0.64 for BI-LSTM, and from 0.59 to 0.53 for the Time–Frequency Mask (TFM) model, across nine buildings and four temporal resolutions. However, many NILM models still show degraded accuracy due to unresolved data-quality issues, especially missing values, timestamp irregularities, and sensor inconsistencies, a limitation underexplored in current benchmarks. This paper presents a fully automated data-quality assurance pipeline for time-series energy datasets. The pipeline performs multivariate profiling, statistical analysis, and threshold-based diagnostics to compute standardized quality metrics, which are aggregated into an interpretable Building Quality Score (BQS) that predicts NILM performance and supports dataset ranking and selection. Explainability is provided by SHAP and a lightweight large language model, which turns visual diagnostics into concise, actionable narratives. The study evaluates practical quality improvement through systematic handling of missing values, linking metric changes to downstream error reduction. Using random-forest surrogates, SHAP identifies missingness and timestamp irregularity as the dominant drivers of error across models. Core contributions include the definition and validation of BQS, an interpretable scoring and explanation framework for time-series quality, and an end-to-end evaluation of how quality diagnostics affect NILM performance at scale.
(This article belongs to the Special Issue Artificial Intelligence and Data Science for Smart Cities)
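The "forward fill and mean substitution" imputation credited with the NMAE improvements above is a generic two-stage technique; a minimal sketch on plain Python lists follows (the paper presumably operates on time-indexed dataframes, so the representation here is an assumption).

```python
def forward_fill(series):
    """Carry the most recent observed value forward over missing entries
    (None stands in for NaN)."""
    out, last = [], None
    for v in series:
        if v is not None:
            last = v
        out.append(last)
    return out

def mean_fill(series):
    """Replace any remaining missing entries (e.g. a leading gap that
    forward fill cannot reach) with the mean of the observed values."""
    observed = [v for v in series if v is not None]
    mean = sum(observed) / len(observed)
    return [mean if v is None else v for v in series]

def impute(series):
    """Two-stage imputation: forward fill first, then mean substitution
    for whatever gaps remain."""
    return mean_fill(forward_fill(series))
```

Forward fill respects local temporal structure, while the mean pass guarantees the output has no gaps, which is what downstream NILM models require.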

17 pages, 2969 KB  
Article
Multi-Domain CoP Feature Analysis of Functional Mobility for Parkinson’s Disease Detection Using Wearable Pressure Insoles
by Thathsara Nanayakkara, H. M. K. K. M. B. Herath, Hadi Sedigh Malekroodi, Nuwan Madusanka, Myunggi Yi and Byeong-il Lee
Sensors 2025, 25(18), 5859; https://doi.org/10.3390/s25185859 - 19 Sep 2025
Viewed by 946
Abstract
Parkinson’s disease (PD) impairs balance and gait through neuromotor dysfunction, yet conventional assessments often overlook subtle postural deficits during dynamic tasks. This study evaluated the diagnostic utility of center-of-pressure (CoP) features captured by pressure-sensing insoles during the Timed Up and Go (TUG) test. Using 39 PD and 38 control participants from the recently released open-access WearGait-PD dataset, the authors extracted 144 CoP features spanning positional, dynamic, frequency, and stochastic domains, including per-foot averages and asymmetry indices. Two scenarios were analyzed: the complete TUG and its 3 m walking segment. Model development followed a fixed protocol with a single participant-level 80/20 split; sequential forward selection with five-fold cross-validation optimized the number of features within the training set. Five classifiers were evaluated: SVM-RBF, logistic regression (LR), random forest (RF), k-nearest neighbors (k-NN), and Gaussian naïve Bayes (NB). LR performed best on the held-out test set (accuracy = 0.875, precision = 1.000, recall = 0.750, F1 = 0.857, ROC-AUC = 0.921) using a 23-feature subset. RF and SVM-RBF each achieved 0.812 accuracy. In contrast, applying the identical pipeline to the 3 m walking segment yielded lower performance (best model: k-NN, accuracy = 0.688, F1 = 0.615, ROC-AUC = 0.734), indicating that the multi-phase TUG task captures PD-related balance deficits more effectively than straight walking. All four feature families contributed to classification performance. Dynamic and frequency-domain descriptors, often appearing in both average and asymmetry form, were most consistently selected. These features provided robust magnitude indicators and offered complementary insights into reduced control complexity in PD.
(This article belongs to the Section Wearables)
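The sequential forward selection step described above follows a simple greedy template, sketched below. In the paper the score function is five-fold cross-validated classifier accuracy on the training set; the toy score in the usage example is an assumption that merely stands in for it.

```python
def sequential_forward_selection(n_features, score, k):
    """Greedy sequential forward selection: starting from the empty set,
    repeatedly add the feature whose inclusion maximizes score(subset)."""
    selected = []
    for _ in range(k):
        candidates = [f for f in range(n_features) if f not in selected]
        best = max(candidates, key=lambda f: score(selected + [f]))
        selected.append(best)
    return selected
```

With a cross-validation scorer plugged in, sweeping `k` and keeping the best-scoring subset reproduces the "optimized number of features" step in the protocol.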

20 pages, 3200 KB  
Article
Analysis of the Risk Factors for PCCP Damage via Cloud Theory
by Liwei Han, Yifan Zhang, Te Wang and Ruibin Guo
Buildings 2025, 15(18), 3363; https://doi.org/10.3390/buildings15183363 - 17 Sep 2025
Viewed by 385
Abstract
Research on prestressed concrete cylinder pipes (PCCPs) has focused primarily on their failure mechanisms, monitoring methods, and the effectiveness of repairs. However, gaps remain in the study of the damage risks associated with PCCPs. Building on existing research, this study analysed the uncertainties in the material production and manufacturing processes of PCCPs to assess their damage risk. The research employs onsite test data: the measured compressive strength of the C55 concrete core in PCCPs and the actual prestressing force applied to the prestressed steel wires. An inverse cloud generator was employed to obtain the expected value Ex, entropy En, and hyperentropy He of these characteristic quantities. These values were then combined with the forward cloud model of cloud theory to generate random parameters. By combining cloud theory with the Monte Carlo method, a risk analysis model for PCCP pipelines was established. Using internal water pressure monitoring data from the Qiliqiao Reservoir to the Xiayi Water Supply Line in the South-to-North Water Diversion Project, along with relevant PCCP pipeline data, the failure probability of the PCCP pipeline was calculated. The reliability index of this pipeline section under 0.6 MPa loading was found to be 4.49, demonstrating the reliability of the PCCP pipeline in this section of the water supply line.
(This article belongs to the Section Building Structures)
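The inverse and forward cloud generators referenced above have standard forms for the normal cloud model; a minimal sketch is below. This illustrates the generic technique, not the paper's specific risk model, and the numeric values in the test are assumptions.

```python
import math
import random
import statistics

def inverse_cloud(samples):
    """Backward cloud generator: estimate expectation Ex, entropy En,
    and hyperentropy He from observed data. En uses the mean absolute
    deviation scaled by sqrt(pi/2); He is the excess spread beyond En."""
    ex = statistics.fmean(samples)
    en = math.sqrt(math.pi / 2) * statistics.fmean(abs(x - ex) for x in samples)
    he = math.sqrt(max(statistics.variance(samples) - en * en, 0.0))
    return ex, en, he

def forward_cloud(ex, en, he, n, seed=0):
    """Forward normal cloud generator: each drop is drawn from N(Ex, En'^2),
    where the per-drop entropy En' is itself drawn from N(En, He^2)."""
    rng = random.Random(seed)
    return [rng.gauss(ex, abs(rng.gauss(en, he))) for _ in range(n)]
```

For the Monte Carlo risk step, one would fit clouds to the measured strength and prestressing data with `inverse_cloud`, then count the fraction of `forward_cloud` drops in which load exceeds resistance to estimate failure probability. The He estimate is statistically noisy when He is small relative to En.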

19 pages, 1845 KB  
Article
GPU-Accelerated PSO for High-Performance American Option Valuation
by Leon Xing Li and Ren-Raw Chen
Appl. Sci. 2025, 15(18), 9961; https://doi.org/10.3390/app15189961 - 11 Sep 2025
Viewed by 882
Abstract
Using artificial intelligence tools to evaluate financial derivatives has become increasingly popular, and PSO (particle swarm optimization) is one such tool. We present a comprehensive study of PSO for pricing American options on GPUs using OpenCL. PSO is an increasingly popular heuristic for financial parameter search; however, its high computational cost, especially for path-dependent derivatives, poses a challenge. We review PSO-based pricing and survey prior GPU acceleration efforts. We then describe our OpenCL optimization pipeline on an Apple M3 Max GPU (OpenCL 1.2 via PyOpenCL 2024.1). Starting from a NumPy baseline (36.7 s), we apply successive enhancements: an initial GPU offload (8.0 s), restructuring the forward/backward loops to minimize divergence (2.3 s → 0.95 s), kernel fusion (0.94 s), and explicit SIMD vectorization with float4 (0.25 s). The fully fused float4 kernel achieves 0.246 s, a ~150× speedup over the CPU. We analyzed all eight intermediate kernels (named by file), detailing techniques (memory coalescing, branch avoidance, etc.) and their effects on throughput. Our results exceed prior art in speed and vector efficiency, illustrating the power of combined OpenCL strategies.
(This article belongs to the Special Issue Data Structures for Graphics Processing Units (GPUs))
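The host-side algorithm being accelerated is standard PSO; a minimal CPU sketch of the canonical update loop follows (this is not the paper's OpenCL kernels, and the inertia and acceleration coefficients are conventional defaults assumed for illustration). The usage example minimizes a generic objective rather than an option-pricing loss.

```python
import random

def pso(f, dim, lo, hi, n_particles=30, iters=300,
        w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimization minimizing f over [lo, hi]^dim."""
    rng = random.Random(seed)
    xs = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vs = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in xs]            # per-particle best positions
    pval = [f(x) for x in xs]
    g = min(range(n_particles), key=pval.__getitem__)
    gbest, gval = pbest[g][:], pval[g]    # swarm-wide best
    for _ in range(iters):
        for i, x in enumerate(xs):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vs[i][d] = (w * vs[i][d]
                            + c1 * r1 * (pbest[i][d] - x[d])
                            + c2 * r2 * (gbest[d] - x[d]))
                x[d] = min(max(x[d] + vs[i][d], lo), hi)
            val = f(x)
            if val < pval[i]:             # update personal best
                pbest[i], pval[i] = x[:], val
                if val < gval:            # and global best
                    gbest, gval = x[:], val
    return gbest, gval
```

The per-particle inner loop is embarrassingly parallel, which is what makes the GPU offload, kernel fusion, and float4 vectorization described in the abstract pay off.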

13 pages, 1571 KB  
Article
CREPE (CREate Primers and Evaluate): A Computational Tool for Large-Scale Primer Design and Specificity Analysis
by Jonathan W. Pitsch, Sara A. Wirth, Nicole T. Costantino, Josh Mejia, Rose M. Doss, Ava V. A. Warren, Jack Ustanik, Xiaoxu Yang and Martin W. Breuss
Genes 2025, 16(9), 1062; https://doi.org/10.3390/genes16091062 - 10 Sep 2025
Viewed by 609
Abstract
Background/Objectives: Polymerase chain reaction (PCR) is ubiquitous in biological research labs, as it is a fast, flexible, and cost-effective technique to amplify a DNA region of interest. However, manual primer design can be an error-prone and time-consuming process, depending on the number and composition of target sites. While Primer3 has emerged as an accessible tool to solve some of these issues, additional computational pipelines are required for appropriate scaling. Moreover, it does not replace the manual confirmation of primer specificity (i.e., the assessment of off-targets). Methods: To overcome the challenges of large-scale primer design, we fused the functionality of Primer3 and In-Silico PCR (ISPCR); this integrated pipeline, CREPE (CREate Primers and Evaluate), performs primer design and specificity analysis through a custom evaluation script for any given number of target sites at scale. Results: CREPE's final output summarizes the lead forward and reverse primer pair for each target site, a measure of the likelihood of binding to off-targets, and additional information to aid decision-making. We provide this through a customized workflow for targeted amplicon sequencing (TAS) on a 150 bp paired-end Illumina platform. Experimental testing showed successful amplification for more than 90% of the primers deemed acceptable by CREPE. Conclusions: We provide CREPE, a software platform that enables parallelized primer design for PCR applications and is optimized for targeted amplicon sequencing.
(This article belongs to the Section Bioinformatics)
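The kind of per-primer screening a pipeline like this automates at scale can be illustrated with two classic heuristics. CREPE itself relies on Primer3's thermodynamic model and ISPCR for specificity; the Wallace-rule melting temperature and GC-content filter below are simpler stand-ins for illustration only, and the threshold ranges are assumptions.

```python
def gc_content(primer):
    """Fraction of G/C bases in a primer sequence."""
    p = primer.upper()
    return (p.count("G") + p.count("C")) / len(p)

def wallace_tm(primer):
    """Wallace-rule melting temperature: 2 C per A/T base, 4 C per G/C base.
    A rough heuristic, not Primer3's nearest-neighbor thermodynamics."""
    p = primer.upper()
    return 2 * (p.count("A") + p.count("T")) + 4 * (p.count("G") + p.count("C"))

def passes_basic_filters(primer, tm_range=(55, 65), gc_range=(0.4, 0.6)):
    """Toy screen: keep primers whose Tm and GC content fall in range."""
    return (tm_range[0] <= wallace_tm(primer) <= tm_range[1]
            and gc_range[0] <= gc_content(primer) <= gc_range[1])
```

Running such checks over thousands of candidate sites, plus an off-target search, is exactly the repetitive work that motivates wrapping Primer3 and ISPCR in a single scripted pipeline.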

25 pages, 3974 KB  
Article
Modular Deep-Learning Pipelines for Dental Caries Data Streams: A Twin-Cohort Proof-of-Concept
by Ștefan Lucian Burlea, Călin Gheorghe Buzea, Florin Nedeff, Diana Mirilă, Valentin Nedeff, Maricel Agop, Dragoș Ioan Rusu and Laura Elisabeta Checheriță
Dent. J. 2025, 13(9), 402; https://doi.org/10.3390/dj13090402 - 2 Sep 2025
Viewed by 717
Abstract
Background: Dental caries arise from a multifactorial interplay between microbial dysbiosis, host immune responses, and enamel degradation visible on radiographs. Deep learning excels in image-based caries detection; however, integrative analyses that combine radiographic, microbiome, and transcriptomic data remain rare because public cohorts are seldom aligned. Objective: To determine whether three independent deep-learning pipelines (radiographic segmentation, microbiome regression, and transcriptome regression) can be reproducibly implemented on non-aligned datasets, and to demonstrate the feasibility of estimating microbiome heritability in a matched twin cohort. Methods: (i) A U-Net with a ResNet-18 encoder was trained on 100 annotated panoramic radiographs to generate a continuous caries-severity score from the predicted lesion area. (ii) Feed-forward neural networks (FNNs) were trained on supragingival 16S rRNA profiles (81 samples, 750 taxa) and gingival transcriptomes (247 samples, 54,675 probes) using randomly permuted severity scores as synthetic targets to stress-test preprocessing, training, and SHAP-based interpretability. (iii) In 49 monozygotic and 50 dizygotic twin pairs (n = 198), Bray-Curtis dissimilarity quantified microbial heritability, and an FNN was trained to predict recorded TotalCaries counts. Results: The U-Net achieved IoU = 0.564 (95% CI 0.535–0.594), precision = 0.624 (95% CI 0.583–0.667), and recall = 0.877 (95% CI 0.827–0.918), and correlated with manual severity scores (r = 0.62, p < 0.01). The synthetic-target FNNs converged consistently but, as intended, showed no predictive power (R2 ≈ −0.15 microbiome; −0.18 transcriptome). Twin analysis revealed greater microbiome similarity (lower Bray-Curtis dissimilarity) in monozygotic versus dizygotic pairs (0.475 ± 0.107 vs. 0.557 ± 0.117; p = 0.0005) and a modest correlation between salivary features and caries burden (r = 0.25).
Conclusions: Modular deep-learning pipelines remain computationally robust and interpretable on non-aligned datasets; radiographic severity provides a transferable quantitative anchor. Twin-cohort findings confirm heritable patterns in the oral microbiome and outline a pathway toward future clinical translation once patient-matched multi-omics are available. This framework establishes a scalable, reproducible foundation for integrative caries research.
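The Bray-Curtis dissimilarity used in the twin analysis has a simple closed form; a minimal sketch follows, with small made-up abundance vectors in the test for illustration.

```python
def bray_curtis(u, v):
    """Bray-Curtis dissimilarity between two non-negative abundance
    vectors: sum(|u_i - v_i|) / sum(u_i + v_i). 0 means identical
    communities; 1 means no shared abundance at all."""
    num = sum(abs(a - b) for a, b in zip(u, v))
    den = sum(a + b for a, b in zip(u, v))
    return num / den
```

Computing this pairwise within monozygotic and dizygotic twin pairs and comparing the two distributions is what yields the 0.475 vs. 0.557 contrast reported above (lower dissimilarity means more similar microbiomes).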

18 pages, 1149 KB  
Article
Advanced Cryptography Using Nanoantennas in Wireless Communication
by Francisco Alves, João Paulo N. Torres, P. Mendonça dos Santos and Ricardo A. Marques Lameirinhas
Information 2025, 16(9), 720; https://doi.org/10.3390/info16090720 - 22 Aug 2025
Viewed by 521
Abstract
This work presents an end-to-end encryption–decryption framework for securing electromagnetic signals processed through a nanoantenna. The system integrates amplitude normalization, uniform quantization, and Reed–Solomon forward error correction with key establishment via ECDH and bitwise XOR encryption. Two signal types were evaluated: a synthetic Gaussian pulse and a synthetic voice waveform, representing low- and high-entropy data, respectively. For the Gaussian signal, reconstruction achieved RMSE = 11.42, MAE = 0.86, PSNR = 26.97 dB, and a Pearson correlation coefficient of 0.8887. The voice signal exhibited higher error metrics (RMSE = 15.13, MAE = 2.52, PSNR = 24.54 dB, Pearson correlation = 0.8062), yet maintained adequate fidelity. Entropy analysis indicated minimal changes between the original and reconstructed signals. Furthermore, avalanche testing confirmed strong key sensitivity, with single-bit changes in the key altering approximately 50% of the ciphertext bits. The findings indicate that the proposed pipeline ensures high reconstruction quality with lightweight encryption, rendering it suitable for environments with limited computational resources.
(This article belongs to the Section Information and Communications Technology)
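The avalanche test described above (single-bit key changes flipping ~50% of ciphertext bits) is easy to reproduce in miniature. The paper pairs an ECDH-derived key with bitwise XOR; the SHA-256 counter-mode keystream below is an assumption made so the sketch is self-contained, since a raw repeating-key XOR alone would not exhibit avalanche behavior.

```python
import hashlib

def keystream_xor(data, key):
    """XOR data against a SHA-256 counter-mode keystream (an assumed
    stand-in for the paper's key schedule; XOR makes it self-inverse)."""
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(d ^ k for d, k in zip(data, stream))

def avalanche_ratio(data, key, bit=0):
    """Fraction of ciphertext bits that flip when one key bit flips;
    values near 0.5 indicate strong key sensitivity."""
    flipped_key = bytearray(key)
    flipped_key[bit // 8] ^= 1 << (bit % 8)
    c1 = keystream_xor(data, key)
    c2 = keystream_xor(data, bytes(flipped_key))
    changed = sum(bin(a ^ b).count("1") for a, b in zip(c1, c2))
    return changed / (8 * len(data))
```

Because the same function both encrypts and decrypts, applying it twice with the same key recovers the plaintext, which keeps the scheme lightweight for constrained devices.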

25 pages, 2721 KB  
Review
Next-Generation Nucleic Acid-Based Diagnostics for Viral Pathogens: Lessons Learned from the SARS-CoV-2 Pandemic
by Amy Papaneri, Guohong Cui and Shih-Heng Chen
Microorganisms 2025, 13(8), 1905; https://doi.org/10.3390/microorganisms13081905 - 15 Aug 2025
Cited by 1 | Viewed by 1223
Abstract
The COVID-19 pandemic, caused by Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2), catalyzed unprecedented innovation in molecular diagnostics to address critical gaps in rapid pathogen detection. Over the past five years, CRISPR-based systems, isothermal amplification techniques, and portable biosensors have emerged as transformative tools for nucleic acid detection, offering improvements in speed, sensitivity, and point-of-care applicability compared to conventional PCR. While numerous reviews have cataloged the technical specifications of these platforms, a critical gap remains in understanding the strategic and economic hurdles to their real-world implementation. This review provides a forward-looking analysis of the feasibility, scalability, and economic benefits of integrating these next-generation technologies into future pandemic-response pipelines. We synthesize advances in coronavirus-specific diagnostic platforms and highlight the need for their implementation as a cost-saving measure during surges in clinical demand. We evaluate the feasibility of translating these technologies, particularly CRISPR-Cas integration with recombinase polymerase amplification (RPA), into robust first-line diagnostic pipelines for novel viral threats. By analyzing the evolution of diagnostic strategies during the COVID-19 era, we aim to provide strategic insights and new directions for developing and deploying effective detection platforms to better confront future viral pandemics.
