Search Results (960)

Search Parameters:
Keywords = pipelined architecture

17 pages, 1497 KB  
Article
SPARTA: Sparse Parallel Architecture for Real-Time Threat Analysis for Lightweight Edge Network Defense
by Shi Li, Xiyun Mi, Lin Zhang and Ye Lu
Future Internet 2026, 18(2), 88; https://doi.org/10.3390/fi18020088 (registering DOI) - 6 Feb 2026
Abstract
AI-driven network security relies increasingly on Large Language Models (LLMs) to detect sophisticated threats; however, their deployment on resource-constrained edge devices is severely hindered by immense parameter scales. While unstructured pruning offers a theoretical reduction in model size, commodity Graphics Processing Unit (GPU) architectures fail to efficiently leverage element-wise sparsity due to the mismatch between fine-grained pruning patterns and the coarse-grained parallelism of Tensor Cores, leading to latency bottlenecks that compromise real-time analysis of high-volume security telemetry. To bridge this gap, we propose SPARTA (Sparse Parallel Architecture for Real-Time Threat Analysis), an algorithm–architecture co-design framework. Specifically, we integrate a hardware-based address remapping interface to enable flexible row-offset access. This mechanism facilitates a novel graph-based column vector merging strategy that aligns sparse data with Tensor Core parallelism, complemented by a pipelined execution scheme to mask decoding latencies. Evaluations on Llama2-7B and Llama2-13B benchmarks demonstrate that SPARTA achieves an average speedup of 2.35× compared to Flash-LLM, with peak speedups reaching 5.05×. These findings indicate that hardware-aware microarchitectural adaptations can effectively mitigate the penalties of unstructured sparsity, providing a viable pathway for efficient deployment in resource-constrained edge security. Full article
(This article belongs to the Special Issue DDoS Attack Detection for Cyber–Physical Systems)
30 pages, 4048 KB  
Review
Artificial Intelligence as a Catalyst for Antimicrobial Discovery: From Predictive Models to De Novo Design
by Romaisaa Boudza, Salim Bounou, Jaume Segura-Garcia, Ismail Moukadiri and Sergi Maicas
Microorganisms 2026, 14(2), 394; https://doi.org/10.3390/microorganisms14020394 (registering DOI) - 6 Feb 2026
Abstract
Antimicrobial resistance represents one of the most critical global health challenges of the 21st century, urgently demanding innovative strategies for antimicrobial discovery. Traditional antibiotic development pipelines are slow, costly, and increasingly ineffective against multidrug-resistant pathogens. In this context, recent advances in artificial intelligence have emerged as transformative tools capable of accelerating antimicrobial discovery and expanding accessible chemical and biological space. This comprehensive review critically synthesizes recent progress in AI-driven approaches applied to the discovery and design of both small-molecule antibiotics and antimicrobial peptides. We examine how machine learning, deep learning, and generative models are being leveraged for virtual screening, activity prediction, mechanism-informed prioritization, and de novo antimicrobial design. Particular emphasis is placed on graph-based neural networks, attention-based and transformer architectures, and generative frameworks such as variational autoencoders and large language model-based generators. Across these approaches, AI has enabled the identification of structurally novel compounds, facilitated narrow-spectrum antimicrobial strategies, and improved interpretability in peptide prediction. However, significant challenges remain, including data scarcity and imbalance, limited experimental validation, and barriers to clinical translation. By integrating methodological advances with a critical analysis of the current limitations, this review highlights emerging trends and outlines future directions aimed at bridging the gap between in silico discovery and real-world therapeutic development. Full article
22 pages, 1612 KB  
Article
Lightweight 1D-CNN-Based Battery State-of-Charge Estimation and Hardware Development
by Seungbum Kang, Yoonjae Lee, Gahyeon Jang and Seongsoo Lee
Electronics 2026, 15(3), 704; https://doi.org/10.3390/electronics15030704 - 6 Feb 2026
Abstract
This paper presents the FPGA implementation and verification of a lightweight one-dimensional convolutional neural network (1D-CNN) pipeline for real-time battery state-of-charge (SoC) estimation in automotive battery management systems. The proposed model employs separable 1D convolution and global average pooling, and applies aggressive structured pruning to reduce the number of parameters from 3121 to 358, representing an 88.5% reduction, without significant accuracy loss. Using quantization-aware training (QAT), the network is trained and executed in INT8, which reduces weight storage to one-quarter of the 32-bit baseline while maintaining high estimation accuracy with a Mean Absolute Error (MAE) of 0.0172. The hardware adopts a time-multiplexed single MAC architecture with FSM control, occupying 98,410 gates under a 28 nm process. Evaluations on an FPGA testbed with representative drive-cycle inputs show that the proposed INT8 pipeline achieves performance comparable to the floating-point reference with negligible precision drop, demonstrating its suitability for in-vehicle BMS deployment. Full article
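As a shape-level illustration of the separable-convolution-plus-global-average-pooling design the abstract describes, the following PyTorch sketch builds a tiny 1D-CNN SoC regressor; the layer widths, kernel size, and input channels (voltage, current, temperature) are assumptions for illustration, not the authors' 358-parameter network.

```python
import torch
import torch.nn as nn

class TinySoCNet(nn.Module):
    """Separable 1D convolution + global average pooling, as an illustrative sketch only."""
    def __init__(self, in_ch: int = 3, hidden: int = 16):
        super().__init__()
        # Depthwise then pointwise convolution ("separable" conv) keeps the parameter count small.
        self.depthwise = nn.Conv1d(in_ch, in_ch, kernel_size=5, padding=2, groups=in_ch)
        self.pointwise = nn.Conv1d(in_ch, hidden, kernel_size=1)
        self.act = nn.ReLU()
        self.gap = nn.AdaptiveAvgPool1d(1)   # global average pooling over the time window
        self.head = nn.Linear(hidden, 1)     # regress state of charge in [0, 1]

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, samples), e.g. voltage/current/temperature windows (assumed inputs)
        x = self.act(self.pointwise(self.depthwise(x)))
        x = self.gap(x).squeeze(-1)
        return torch.sigmoid(self.head(x))

model = TinySoCNet()
print(sum(p.numel() for p in model.parameters()))   # ~100 parameters in this toy configuration
print(model(torch.randn(8, 3, 32)).shape)           # torch.Size([8, 1])
```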
23 pages, 3004 KB  
Article
Design and Analysis of FSM-Based AES Encryption on FPGA Versus MATLAB Environment
by Sunny Arief Sudiro, Fauziah Fauziah, Ragiel Hadi Prayitno, Bayu Kumoro Yakti, Sarifuddin Madenda and Michel Paindavoine
Electronics 2026, 15(3), 702; https://doi.org/10.3390/electronics15030702 - 5 Feb 2026
Abstract
The present paper compares and analyzes the design of AES-128 encryption and decryption using Finite State Machine (FSM) architecture on FPGA and MATLAB platforms. This study aims to evaluate performance disparities in terms of execution time, throughput, and hardware efficiency under identical input data and key conditions. The FSM-based AES algorithm was modeled in MATLAB for functional validation and synthesized on an Artix-7 FPGA using VHDL. The experimental results confirmed that both platforms produced identical ciphertext and plaintext outputs, verifying the correctness of the processes employed. However, the FPGA demonstrated significantly better performance in terms of execution speed. Encryption and decryption times were measured in microseconds on the FPGA, while similar operations on the MATLAB platform required hundreds of milliseconds. The FPGA implementation achieved throughput of 872.53 Mbps for encryption and 858.49 Mbps for decryption with area usage of 1263 and 1428 slices, respectively. This yields an efficiency of 0.691 and 0.601 Mbps/slice, which is considered efficient according to established benchmarks. Compared to previous MATLAB-only and FPGA pipelined implementations, the current design strikes a balance between resource usage and performance, making it ideal for lightweight cryptographic applications in embedded systems. These results provide practical insights into selecting platforms for secure, real-time data processing. Full article
(This article belongs to the Section Computer Science & Engineering)
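A quick sanity check of the reported throughput-per-area figures, using only the numbers quoted in the abstract:

```python
# Throughput-per-area check for the reported FPGA results (values taken from the abstract).
results = {
    "encryption": {"throughput_mbps": 872.53, "slices": 1263},
    "decryption": {"throughput_mbps": 858.49, "slices": 1428},
}
for op, r in results.items():
    efficiency = r["throughput_mbps"] / r["slices"]
    print(f"{op}: {efficiency:.3f} Mbps/slice")
# encryption: 0.691 Mbps/slice
# decryption: 0.601 Mbps/slice
```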
34 pages, 1043 KB  
Review
Conceptual Architecture of a Trustworthy Wind and Photovoltaic Power Forecasting System: A Systematic Review and Design
by Pavel V. Matrenin, Irina F. Iumanova and Alexandra I. Khalyasmaa
Inventions 2026, 11(1), 15; https://doi.org/10.3390/inventions11010015 - 5 Feb 2026
Abstract
Accurate and trustworthy forecasting of wind and photovoltaic power generation is essential for the reliable operation and planning of modern power systems. Although recent machine-learning-based forecasting solutions increasingly incorporate elements of trustworthy artificial intelligence, such as explainability, uncertainty quantification, robustness, drift monitoring, and machine learning operations, these components are typically introduced in a fragmented manner and remain weakly integrated at the architectural level, which limits their applicability in real operational environments. This paper presents a systematic review of 59 peer-reviewed journal articles published between 2019 and 2025, conducted in accordance with the PRISMA 2020 guidelines. The review includes studies focused on wind and photovoltaic power forecasting that report system architectures, frameworks, or end-to-end pipelines incorporating at least one trust-related attribute. The literature search was performed using Scopus, IEEE Xplore, MDPI, and ScienceDirect. Using a narrative and architectural synthesis, the review identifies six structural gaps hindering industrial deployment: the absence of semantic data models, shallow model-centric explainability, drift monitoring without governance mechanisms, lack of automated model lifecycle management, insufficient robustness to real-world data defects, and the absence of integrated end-to-end architectures. The evidence base is limited by the heterogeneity of architectural descriptions and the predominantly qualitative nature of reported implementations. Based on these findings, a high-level reference architecture for a trustworthy AI-based forecasting system is proposed. The architecture formalizes trustworthiness as a system-level property and integrates semantic, technological, and functional trust layers within a unified data and model lifecycle, supporting reproducible, interpretable, and operationally reliable forecasting for both wind and photovoltaic power plants. Full article
(This article belongs to the Special Issue Emerging Trends and Innovations in Renewable Energy)
16 pages, 1157 KB  
Article
Fine-Grained Assignment of Unknown Marine eDNA Sequences Using Neural Networks
by Sébastien Villon, Morgan Mangeas, Véronique Berteaux-Lecellier, Laurent Vigliola and Gaël Lecellier
Biology 2026, 15(3), 285; https://doi.org/10.3390/biology15030285 - 5 Feb 2026
Abstract
Environmental DNA (eDNA) metabarcoding is an innovative tool that is transforming ecological research. It offers a simple and effective method for simultaneously detecting numerous species across a wide range of environments. The method relies on assigning DNA sequences sampled from the environment to taxa, which is straightforward for species that have already been sequenced and are represented in reference databases. However, existing bioinformatics tools often fail to deliver accurate, fine-grained assignments when target species are absent from these databases. This limitation arises from handcrafted classification thresholds that do not account for nucleotide positional information. Here, we propose a deep neural architecture specifically designed to exploit both nucleotide identity and positional patterns in short TELEO sequences. Using an in-silico validation framework based on NCBI genbank sequences, we compare our approach with several state-of-the-art bioinformatics tools (Obitools, Kraken2, Lolo), as well as alternative sequence embedding methods, under controlled conditions. Our approach yields significantly higher classification accuracy at the genus and family levels, achieving average accuracies of 94.7% at the genus level and 86.5% at the family level, substantially outperforming the tested reference-based pipelines. The method remains robust with limited training data and shows improved performance when nucleotide positional information is preserved through sequence alignment. These results demonstrate the potential of AI-powered eDNA metabarcoding to complement existing taxonomic assignment tools, particularly in contexts where reference databases are incomplete or species-level resolution is not achievable, thereby supporting biodiversity monitoring and ecosystem management. Full article
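To make the idea of a position-preserving nucleotide representation concrete, here is a minimal one-hot encoding sketch for short aligned reads; the sequence length, padding, and handling of ambiguous bases are assumptions for illustration, not the authors' preprocessing.

```python
import numpy as np

NUCLEOTIDES = "ACGT"

def one_hot(seq: str, length: int = 64) -> np.ndarray:
    """Encode a read as a (length, 4) matrix; unknown bases (e.g. N) stay all-zero."""
    mat = np.zeros((length, 4), dtype=np.float32)
    for pos, base in enumerate(seq[:length].upper()):
        idx = NUCLEOTIDES.find(base)
        if idx >= 0:
            mat[pos, idx] = 1.0       # position is preserved as the row index
    return mat

x = one_hot("ACCTGGTTANCGA")
print(x.shape)   # (64, 4), ready to feed a convolutional or dense classifier
print(x[:3])     # first three positions: A, C, C
```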
10 pages, 1705 KB  
Proceeding Paper
Low-Capital Expenditure AI-Assisted Zero-Trust Control Plane for Brownfield Ethernet Environments
by Hong-Sheng Wang and Reen-Cheng Wang
Eng. Proc. 2025, 120(1), 54; https://doi.org/10.3390/engproc2025120054 - 5 Feb 2026
Abstract
We developed an AI-assisted zero-trust control system at low capital expenditure to retrofit brownfield Ethernet environments without disruptive hardware upgrades or costly software-defined networking migration. Legacy network infrastructures in small and medium-sized enterprises (SMEs) lack the flexibility and programmability required by modern zero-trust architectures, creating a persistent security gap between static Layer-1 deployments and dynamic cyber threats. The developed system addresses this gap through a modular architecture that integrates genetic-algorithm-based virtual local area network (VLAN) optimization, large language model-guided firewall rule synthesis, threat-intelligence-driven policy automation, and telemetry-triggered adaptive isolation. Network assets are enumerated and evaluated through a risk-aware clustering model to enable micro-segmentation that aligns with the principle of least privilege. Optimized segmentation outputs are translated into pfSense firewall policies through structured prompt engineering and dual-stage validation, ensuring syntactic correctness and semantic consistency. A retrieval-augmented generation pipeline connects live telemetry with historical vulnerability intelligence, enabling rapid policy adjustments and automated containment responses. The system operates as an overlay on existing managed switches, orchestrating configuration changes through standards-compliant interfaces such as simple network management protocol and network configuration protocol. Experimental evaluation in a representative SME testbed demonstrates substantial improvements in segmentation granularity, refining seven flat subnets into thirty-four purpose-specific VLANs. Compliance scores improved significantly, with the International Organization for Standardization/International Electrotechnical Commission 27001 rising from 62.3 to 94.7% and the National Institute of Standards and Technology Cybersecurity Framework alignment increasing from 58.9 to 91.2%. All 851 automatically generated firewall rules passed dual-agent validation, ensuring reliable enforcement and enhanced auditability. The results indicate that the system developed provides an operationally feasible pathway for legacy networks to achieve zero-trust segmentation with minimal cost and disruption. Future extensions will explore adaptive learning mechanisms and hybrid cloud support to further enhance scalability and contextual responsiveness. Full article
(This article belongs to the Proceedings of 8th International Conference on Knowledge Innovation and Invention)
5 pages, 398 KB  
Proceeding Paper
A Lightweight Deep Learning Framework for Robust Video Watermarking in Adversarial Environments
by Antonio Cedillo-Hernandez, Lydia Velazquez-Garcia and Manuel Cedillo-Hernandez
Eng. Proc. 2026, 123(1), 25; https://doi.org/10.3390/engproc2026123025 - 5 Feb 2026
Abstract
The widespread distribution of digital videos in social networks, streaming services, and surveillance systems has increased the risk of manipulation, unauthorized redistribution, and adversarial tampering. This paper presents a lightweight deep learning framework for robust and imperceptible video watermarking designed specifically for cybersecurity environments. Unlike heavy architectures that rely on multi-scale feature extractors or complex adversarial networks, our model introduces a compact encoder–decoder pipeline optimized for real-time watermark embedding and recovery under adversarial attacks. The proposed system leverages spatial attention and temporal redundancy to ensure robustness against distortions such as compression, additive noise, and adversarial perturbations generated via Fast Gradient Sign Method (FGSM) or recompression attacks from generative models. Experimental simulations using a reduced Kinetics-600 subset demonstrate promising results, achieving an average PSNR of 38.9 dB, SSIM of 0.967, and Bit Error Rate (BER) below 3% even under FGSM attacks. These results suggest that the proposed lightweight framework achieves a favorable trade-off between resilience, imperceptibility, and computational efficiency, making it suitable for deployment in video forensics, authentication, and secure content distribution systems. Full article
(This article belongs to the Proceedings of First Summer School on Artificial Intelligence in Cybersecurity)
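For readers unfamiliar with the metric, the Bit Error Rate reported above is simply the fraction of watermark bits flipped after an attack; a minimal sketch with a made-up payload:

```python
import numpy as np

def bit_error_rate(embedded_bits: np.ndarray, recovered_bits: np.ndarray) -> float:
    """Standard BER: fraction of mismatched bits (payload and flip pattern below are hypothetical)."""
    return float(np.mean(embedded_bits != recovered_bits))

payload = np.random.randint(0, 2, 256)                       # hypothetical 256-bit watermark
recovered = payload.copy()
recovered[np.random.choice(256, 6, replace=False)] ^= 1      # simulate 6 bit flips after an attack
print(bit_error_rate(payload, recovered))                    # 0.0234375, i.e. ~2.3% BER
```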
19 pages, 10329 KB  
Article
Design-to-Fabrication Workflows for Large-Scale Continuous FDM Grading of Biopolymer Composites
by Paul Nicholas, Gabriella Rossi, Carl Eppinger, Cameron Nelson, Konrad Sonne, Shahriar Akbari, Martin Tamke, Jan Hüls, Ryan O’Connor, Mathias Waschek and Mette Ramsgaard Thomsen
Appl. Sci. 2026, 16(3), 1569; https://doi.org/10.3390/app16031569 - 4 Feb 2026
Abstract
This paper details the development of innovative grading techniques for 3D-printed biopolymer composites that utilize locally sourced, cellulose-based fibre streams to produce architectural-scale components. It examines the design considerations, methodologies, and fabrication strategies that are necessitated by the utilisation of biopolymers for architectural applications, and which underlie key processes of designing for and with variable materials. The presented research interrogates the methodological challenges of formulating new approaches that actively engage architects and designers with the ecological implications of their design choices. It outlines new methods for material grading that enable targeted compositional variation through three interlinked contributions: a gradable recipe, a design-interfaced specification process for grading, and an infrastructure for large-scale 3D printing of biopolymer composites. The paper presents the Rhizaerial demonstrator as an implementation of these contributions. Rhizaerial is a full-scale interior ceiling vault system, whose curved components are printed as a 3D porous lattice structure that creates an interplay of light, visual transparency, and colour, while maintaining structural integrity. We detail the gradable biopolymer composite recipe, and the residual and regenerative material streams it combines. We outline the implicit modelling pipeline, which includes methods for locally specifying lattice structures for 3D printing, as well as assigning continuous grading specifications to print paths. Finally, we describe the fabrication infrastructure and tooling for robotic printing of large-scale graded biopolymer composites. Full article
69 pages, 30976 KB  
Review
Next-Gen Explainable AI (XAI) for Federated and Distributed Internet of Things Systems: A State-of-the-Art Survey
by Aristeidis Karras, Anastasios Giannaros, Natalia Amasiadi and Christos Karras
Future Internet 2026, 18(2), 83; https://doi.org/10.3390/fi18020083 - 4 Feb 2026
Abstract
Background: Explainable Artificial Intelligence (XAI) is deployed in Internet of Things (IoT) ecosystems for smart cities and precision agriculture, where opaque models can compromise trust, accountability, and regulatory compliance. Objective: This survey investigates how XAI is currently integrated into distributed and federated IoT architectures and identifies systematic gaps in evaluation under real-world resource constraints. Methods: A structured search across IEEE Xplore, ACM Digital Library, ScienceDirect, SpringerLink, and Google Scholar targeted publications related to XAI, IoT, edge/fog computing, smart cities, smart agriculture, and federated learning. Relevant peer-reviewed works were synthesized along three dimensions: deployment tier (device, edge/fog, cloud), explanation scope (local vs. global), and validation methodology. Results: The analysis reveals a persistent resource–interpretability gap: computationally intensive explainers are frequently applied on constrained edge and federated platforms without explicitly accounting for latency, memory footprint, or energy consumption. Only a minority of studies quantify privacy–utility effects or address causal attribution in sensor-rich environments, limiting the reliability of explanations in safety- and mission-critical IoT applications. Contribution: To address these shortcomings, the survey introduces a hardware-centric evaluation framework with the Computational Complexity Score (CCS), Memory Footprint Ratio (MFR), and Privacy–Utility Trade-off (PUT) metrics and proposes a hierarchical IoT–XAI reference architecture, together with the conceptual Internet of Things Interpretability Evaluation Standard (IOTIES) for cross-domain assessment. Conclusions: The findings indicate that IoT–XAI research must shift from accuracy-only reporting to lightweight, model-agnostic, and privacy-aware explanation pipelines that are explicitly budgeted for edge resources and aligned with the needs of heterogeneous stakeholders in smart city and agricultural deployments. Full article
(This article belongs to the Special Issue Human-Centric Explainability in Large-Scale IoT and AI Systems)
22 pages, 3280 KB  
Systematic Review
From IoT to AIoT: Evolving Agricultural Systems Through Intelligent Connectivity in Low-Income Countries
by Selain K. Kasereka, Alidor M. Mbayandjambe, Ibsen G. Bazie, Heriol F. Zeufack, Okurwoth V. Ocama, Esteve Hassan, Kyandoghere Kyamakya and Tasho Tashev
Future Internet 2026, 18(2), 82; https://doi.org/10.3390/fi18020082 - 3 Feb 2026
Abstract
The convergence of Artificial Intelligence and the Internet of Things has given rise to the Artificial Intelligence of Things (AIoT), which enables connected systems to operate with greater autonomy, adaptability, and contextual awareness. In agriculture, this evolution supports precision farming, improves resource allocation, and strengthens climate resilience by enhancing the capacity of farming systems to anticipate, absorb, and recover from environmental shocks. This review provides a structured synthesis of the transition from IoT-based monitoring to AIoT-driven intelligent agriculture and examines key applications such as smart irrigation, pest and disease detection, soil and crop health assessment, yield prediction, and livestock management. To ensure methodological rigor and transparency, this study follows the PRISMA 2020 guidelines for systematic literature reviews. A comprehensive search and multi-stage screening procedure was conducted across major scholarly repositories, resulting in a curated selection of studies published between 2018 and 2025. These sources were analyzed thematically to identify technological enablers, implementation barriers, and contextual factors affecting adoption particularly within low-income countries where infrastructural constraints, limited digital capacity, and economic disparities shape AIoT deployment. Building on these insights, the article proposes an AIoT architecture tailored to resource-constrained agricultural environments. The architecture integrates sensing technologies, connectivity layers, edge intelligence, data processing pipelines, and decision-support mechanisms, and is supported by governance, data stewardship, and capacity-building frameworks. By combining systematic evidence with conceptual analysis, this review offers a comprehensive perspective on the transformative potential of AIoT in advancing sustainable, inclusive, and intelligent food production systems. Full article
(This article belongs to the Special Issue Machine Learning and Internet of Things in Industry 4.0)
24 pages, 6709 KB  
Article
Machine Learning-Guided Optimization of Electrospun Fiber Morphology for Enhanced Osteoblast Growth and Bone Regeneration
by Julia Radwan-Pragłowska, Aleksander Radwan-Pragłowski, Aleksandra Kopacz, Łukasz Janus, Aleksandra Sierakowska-Byczek and Piotr Radomski
Appl. Sci. 2026, 16(3), 1535; https://doi.org/10.3390/app16031535 - 3 Feb 2026
Abstract
Optimizing nanofiber morphology is essential for promoting osteoblast elongation and supporting bone regeneration. This study aimed to develop a machine-learning framework capable of predicting optimal scaffold architectures directly from scanning electron microscopy (SEM) images and chemical composition. A four-module pipeline was implemented, combining tile-based SEM preprocessing, Cellpose-based cell morphology extraction with edge correction, ensemble machine-learning models, and an end-to-end convolutional neural network (CNN). Cellular quality was quantified using an elongation-weighted metric to emphasize morphological maturity over cell number. The analysis revealed consistent structure–function relationships across samples, with Sample_5 achieving the highest quality score at the 72 h time point. Ensemble models reached an R2 of 0.400, while the end-to-end CNN achieved an R2 of 0.750, indicating that raw SEM texture provides additional predictive information beyond handcrafted features. Feature-importance analysis identified nonlinear MgO effects and synergistic interactions between MgO and gold nanoparticles as key determinants of cell morphology. These findings demonstrate that the integrated workflow can reliably identify morphology–chemistry combinations favorable for osteoblast performance and provide a foundation for data-driven scaffold optimization. The approach supports rational design of nanofibrous biomaterials and may facilitate future development of intelligent scaffolds for bone regeneration applications. Full article
(This article belongs to the Special Issue Advanced Biomaterials: Characterization and Applications)
15 pages, 3287 KB  
Article
FPGA-Based Real-Time Measurement System for Single-Shot Carrier-Envelope Phase in High-Repetition-Rate Laser Amplification Systems
by Wenjun Shu, Pengfei Yang, Wei Wang, Xiaochen Li, Nan Wang, Zhen Yang and Xindong Liang
Appl. Sci. 2026, 16(3), 1525; https://doi.org/10.3390/app16031525 - 3 Feb 2026
Abstract
To address the issue of low closed-loop feedback bandwidth caused by the long latency of Carrier-Envelope Phase (CEP) measurement systems for amplified femtosecond laser pulses, and to meet the requirements for real-time single-shot measurement in 10 kHz repetition rate systems, this paper proposes a microsecond-level low-latency CEP measurement technique based on a Field-Programmable Gate Array (FPGA). To tackle the problem of non-uniform spectral sampling resulting from nonlinear wavelength-frequency mapping, the system implements a real-time linear interpolation algorithm for the interference spectrum. This approach effectively suppresses computational spurious peaks introduced by non-uniform sampling and significantly reduces measurement errors. Adopting a fully pipelined parallel processing architecture, the system achieves a CEP processing latency of approximately 89 μs, representing an improvement of 2–3 orders of magnitude compared to traditional Central Processing Unit (CPU)-based solutions. Hardware-in-the-loop testing, conducted by injecting a known sinusoidal phase modulation into the interference spectrum of a 10 kHz laser amplification system, demonstrates that the computational error of the proposed algorithm is less than 30 mrad. This work paves the way for achieving single-shot CEP feedback locking in high-repetition-rate laser amplification systems. Full article
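The resampling step the abstract mentions (correcting the non-uniform frequency grid before Fourier-based phase extraction) can be sketched offline in a few lines of NumPy; the wavelength range and spectrum below are placeholders, and the paper implements the equivalent logic in FPGA fabric.

```python
import numpy as np

c = 299_792_458.0                                   # speed of light, m/s
wavelength = np.linspace(480e-9, 560e-9, 2048)      # assumed spectrometer range (illustrative)
spectrum = np.random.rand(2048)                     # stand-in for a measured interference spectrum

# A spectrometer samples uniformly in wavelength, so nu = c / lambda is non-uniform,
# which distorts the FFT used to read out the fringe phase.
nu = c / wavelength                                 # non-uniform, descending frequency axis
nu_uniform = np.linspace(nu.min(), nu.max(), 2048)  # uniform frequency grid
# np.interp needs ascending sample points, so flip the wavelength-ordered arrays.
spectrum_uniform = np.interp(nu_uniform, nu[::-1], spectrum[::-1])

# The relative CEP is then taken from the phase of the dominant fringe component.
fft = np.fft.rfft(spectrum_uniform)
peak_bin = np.argmax(np.abs(fft)[1:]) + 1
cep = np.angle(fft[peak_bin])
print(cep)
```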
31 pages, 1633 KB  
Article
Foundation-Model-Driven Skin Lesion Segmentation and Classification Using SAM-Adapters and Vision Transformers
by Faisal Binzagr and Majed Hariri
Diagnostics 2026, 16(3), 468; https://doi.org/10.3390/diagnostics16030468 - 3 Feb 2026
Abstract
Background: The precise segmentation and classification of dermoscopic images remain prominent obstacles in automated skin cancer evaluation due, in part, to variability in lesions, low-contrast borders, and additional artifacts in the background. There have been recent developments in foundation models, with a particular emphasis on the Segment Anything Model (SAM); these models exhibit strong generalization potential but require domain-specific adaptation to function effectively in medical imaging. The advent of new architectures, particularly Vision Transformers (ViTs), expands the means of implementing robust lesion identification; however, their strengths are limited without spatial priors. Methods: The proposed study lays out an integrated foundation-model-based framework that utilizes SAM-Adapter fine-tuning for lesion segmentation and a ViT-based classifier that incorporates lesion-specific cropping derived from segmentation and cross-attention fusion. The SAM encoder is kept frozen while only lightweight adapters are fine-tuned to introduce skin surface-specific capacity. Segmentation priors are incorporated during the classification stage through fusion with patch-embeddings from the images, creating lesion-centric reasoning. The entire pipeline is trained using a joint multi-task approach using data from the ISIC 2018, HAM10000, and PH2 datasets. Results: From extensive experimentation, the proposed method outperforms state-of-the-art segmentation and classification methods across the datasets. On the ISIC 2018 dataset, it achieves a Dice score of 94.27% for segmentation and an accuracy of 95.88% for classification. On PH2, a Dice score of 95.62% is achieved, and for HAM10000, an accuracy of 96.37% is achieved. Several ablation analyses confirm that the SAM-Adapters, lesion-specific cropping, and cross-attention fusion all contribute substantially to performance. Paired t-tests confirm the statistical significance of these results, with improvements over strong baselines reaching p<0.01 for most comparisons, with large effect sizes. Conclusions: The results indicate that the combination of prior segmentation from foundation models, plus transformer-based classification, consistently and reliably improves the quality of lesion boundaries and diagnosis accuracy. Thus, the proposed SAM-ViT framework demonstrates robust, generalizable, and lesion-centric automated dermoscopic analysis, and represents a promising initial step towards a clinically deployable skin cancer decision-support system. Next steps will include model compression, improved pseudo-mask refinement, and evaluation on real-world multi-center clinical cohorts. Full article
(This article belongs to the Special Issue Medical Image Analysis and Machine Learning)
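The Dice scores quoted above use the standard overlap coefficient between predicted and ground-truth lesion masks; a minimal sketch with synthetic masks:

```python
import numpy as np

def dice(pred: np.ndarray, target: np.ndarray) -> float:
    """Standard Dice coefficient: 2|A∩B| / (|A| + |B|); inputs below are synthetic masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return 2.0 * intersection / (pred.sum() + target.sum())

pred = np.zeros((8, 8), dtype=int); pred[2:6, 2:6] = 1   # hypothetical predicted lesion mask
gt   = np.zeros((8, 8), dtype=int); gt[3:7, 3:7] = 1     # hypothetical ground-truth mask
print(dice(pred, gt))                                    # 0.5625 for two offset 4x4 squares
```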
16 pages, 416 KB  
Article
An Adaptive IoT-Based Forecasting Framework for Structural and Environmental Risk Detection in Tailings Dams
by Raul Rabadán-Arroyo, Ester Simó, Francesc Aguiló-Gost, Francisco Hernández-Ramírez and Xavier Masip-Bruin
Electronics 2026, 15(3), 658; https://doi.org/10.3390/electronics15030658 - 3 Feb 2026
Abstract
Tailings dams represent one of the most environmentally sensitive infrastructures in the mining industry. To address the need for continuous and accurate monitoring, this paper presents an adaptive forecasting framework that combines Internet of Things (IoT) technologies with machine learning (ML) models to detect early signs of structural and ecological risks. The proposed system architecture is modular and scalable, enabling the automated training, selection, and deployment of predictive models for multivariate sensor data. Each sensor data flow is independently analyzed using a configurable set of algorithms (including linear, convolutional, recurrent, and residual models). The framework is deployed via containers with a CI/CD pipeline and includes real-time visualization through Grafana dashboards. A use case involving tiltmeters and piezometers in an operational tailings dam shows the system's high predictive accuracy, with mean relative errors below 4% across all variables (many variables fall below 1%). These results highlight the potential of the proposed solution to improve structural and environmental safety in mining operations. Full article
(This article belongs to the Special Issue Empowering IoT with AI: AIoT for Smart and Autonomous Systems)
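The mean relative error cited above is the usual average of |forecast - observation| / |observation|; a minimal sketch with made-up piezometer readings:

```python
import numpy as np

def mean_relative_error(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Standard mean relative error; the readings below are illustrative, not project data."""
    return float(np.mean(np.abs(y_pred - y_true) / np.abs(y_true)))

piezometer_true = np.array([10.2, 10.4, 10.1, 9.9])     # hypothetical observed readings
piezometer_pred = np.array([10.3, 10.3, 10.0, 10.0])    # hypothetical one-step-ahead forecasts
print(f"{100 * mean_relative_error(piezometer_true, piezometer_pred):.2f}%")   # ~0.99%
```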