Search Results (1,965)

Search Parameters:
Keywords = modern architecture

22 pages, 313 KB  
Article
Machine Learning-Enhanced Database Cache Management: A Comprehensive Performance Analysis and Comparison of Predictive Replacement Policies
by Maryam Abbasi, Paulo Váz, José Silva, Filipe Cardoso, Filipe Sá and Pedro Martins
Appl. Sci. 2026, 16(2), 666; https://doi.org/10.3390/app16020666 - 8 Jan 2026
Abstract
The exponential growth of data-driven applications has intensified performance demands on database systems, where cache management represents a critical bottleneck. Traditional cache replacement policies such as Least Recently Used (LRU) and Least Frequently Used (LFU) rely on simple heuristics that fail to capture complex temporal and frequency patterns in modern workloads. This research presents a modular machine learning-enhanced cache management framework that leverages pattern recognition to optimize database performance through intelligent replacement decisions. Our approach integrates multiple machine learning models—Random Forest classifiers, Long Short-Term Memory (LSTM) networks, Support Vector Machines (SVM), and Gradient Boosting methods—within a modular architecture enabling seamless integration with existing database systems. The framework incorporates sophisticated feature engineering pipelines extracting temporal, frequency, and contextual characteristics from query access patterns. Comprehensive experimental evaluation across synthetic workloads, real-world production datasets, and standard benchmarks (TPC-C, TPC-H, YCSB, and LinkBench) demonstrates consistent performance improvements. Machine learning-enhanced approaches achieve 8.4% to 19.2% improvement in cache hit rates, 15.3% to 28.7% reduction in query latency, and 18.9% to 31.4% increase in system throughput compared to traditional policies and advanced adaptive methods including ARC, LIRS, Clock-Pro, TinyLFU, and LECAR. Random Forest emerges as the most practical solution, providing 18.7% performance improvement with only 3.1% computational overhead. Case study analysis across e-commerce, financial services, and content management applications demonstrates measurable business impact, including 8.3% conversion rate improvements and USD 127,000 annual revenue increases. Statistical validation (p<0.001, Cohen’s d>0.8) confirms both statistical and practical significance. Full article
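As a rough illustration of the general idea (not the authors' framework), the sketch below scores cached entries with a RandomForestClassifier by predicted re-access probability and evicts the least promising one; the feature set (recency, access count, hour of day) and the synthetic training rule are assumptions made only for this example.

```python
# Illustrative sketch only: a cache that evicts the entry with the lowest
# predicted probability of re-access, scored by a Random Forest.
# Feature choices (recency, access count, hour of day) are assumptions,
# not the feature pipeline described in the paper.
import time
import numpy as np
from sklearn.ensemble import RandomForestClassifier

class MLCache:
    def __init__(self, capacity, model):
        self.capacity = capacity
        self.model = model          # pre-trained classifier: P(re-access soon)
        self.store = {}             # key -> (value, last_access, access_count)

    def _features(self, meta, now):
        _, last_access, count = meta
        return [now - last_access, count, time.localtime(now).tm_hour]

    def get(self, key):
        now = time.time()
        if key in self.store:
            value, _, count = self.store[key]
            self.store[key] = (value, now, count + 1)
            return value
        return None

    def put(self, key, value):
        now = time.time()
        if key not in self.store and len(self.store) >= self.capacity:
            # Score every resident entry and evict the least likely re-access.
            keys = list(self.store)
            X = np.array([self._features(self.store[k], now) for k in keys])
            p_reaccess = self.model.predict_proba(X)[:, 1]
            del self.store[keys[int(np.argmin(p_reaccess))]]
        self.store[key] = (value, now, 1)

# Toy training data: label 1 = the entry was re-accessed shortly afterwards.
rng = np.random.default_rng(0)
X_train = rng.random((500, 3)) * [300.0, 50.0, 24.0]
y_train = (X_train[:, 0] < 60) & (X_train[:, 1] > 5)   # synthetic labeling rule
model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_train, y_train)
cache = MLCache(capacity=100, model=model)
```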
30 pages, 14221 KB  
Article
Integrated Control of Hybrid Thermochemical–PCM Storage for Renewable Heating and Cooling Systems in a Smart House
by Georgios Martinopoulos, Paschalis A. Gkaidatzis, Luis Jimeno, Alberto Belda González, Panteleimon Bakalis, George Meramveliotakis, Apostolos Gkountas, Nikolaos Tarsounas, Dimosthenis Ioannidis, Dimitrios Tzovaras and Nikolaos Nikolopoulos
Electronics 2026, 15(2), 279; https://doi.org/10.3390/electronics15020279 - 7 Jan 2026
Abstract
The development of integrated renewable energy and high-density thermal energy storage systems has been fueled by the need for environmentally friendly heating and cooling in buildings. In this paper, MiniStor, a hybrid thermochemical and phase-change material storage system, is presented. It is equipped with a heat pump, advanced electronics-enabled control, photovoltaic–thermal panels, and flat-plate solar collectors. To optimize energy flows, regulate charging and discharging cycles, and maintain operational stability under fluctuating solar irradiance and building loads, the system utilizes state-of-the-art power electronics, variable-frequency drives and modular multi-level converters. The hybrid storage is safely, reliably, and efficiently integrated with building HVAC requirements owing to a multi-layer control architecture that is implemented via Internet of Things and SCADA platforms that allow for real-time monitoring, predictive operation, and fault detection. Data from the MiniStor prototype demonstrate effective thermal–electrical coordination, controlled energy consumption, and high responsiveness to dynamic environmental and demand conditions. The findings highlight the vital role that digital control, modern electronics, and Internet of Things-enabled supervision play in connecting small, high-density thermal storage and renewable energy generation. This strategy demonstrates the promise of electronics-driven integration for next-generation renewable energy solutions and provides a scalable route toward intelligent, robust, and effective building energy systems. Full article
(This article belongs to the Special Issue New Insights in Power Electronics: Prospects and Challenges)

46 pages, 1244 KB  
Article
Mapping the Role of Artificial Intelligence and Machine Learning in Advancing Sustainable Banking
by Alina Georgiana Manta, Claudia Gherțescu, Roxana Maria Bădîrcea, Liviu Florin Manta, Jenica Popescu and Mihail Olaru
Sustainability 2026, 18(2), 618; https://doi.org/10.3390/su18020618 - 7 Jan 2026
Abstract
The convergence of artificial intelligence (AI), machine learning (ML), blockchain, and big data analytics is transforming the governance, sustainability, and resilience of modern banking ecosystems. This study provides a multivariate bibliometric analysis using Principal Component Analysis (PCA) of research indexed in Scopus and Web of Science to explore how decentralized digital infrastructures and AI-driven analytical capabilities contribute to sustainable financial development, transparent governance, and climate-resilient digital societies. Findings indicate a rapid increase in interdisciplinary work integrating Distributed Ledger Technology (DLT) with large-scale data processing, federated learning, privacy-preserving computation, and intelligent automation—tools that can enhance financial inclusion, regulatory integrity, and environmental risk management. Keyword network analyses reveal blockchain’s growing role in improving data provenance, security, and trust—key governance dimensions for sustainable and resilient financial systems—while AI/ML and big data analytics dominate research on predictive intelligence, ESG-related risk modeling, customer well-being analytics, and real-time decision support for sustainable finance. Comparative analyses show distinct emphases: Web of Science highlights decentralized architectures, consensus mechanisms, and smart contracts relevant to transparent financial governance, whereas Scopus emphasizes customer-centered analytics, natural language processing, and high-throughput data environments supporting inclusive and equitable financial services. Patterns of global collaboration demonstrate strong internationalization, with Europe, China, and the United States emerging as key hubs in shaping sustainable and digitally resilient banking infrastructures. By mapping intellectual, technological, and collaborative structures, this study clarifies how decentralized intelligence—enabled by the fusion of AI/ML, blockchain, and big data—supports secure, scalable, and sustainability-driven financial ecosystems. The results identify critical research pathways for strengthening financial governance, enhancing climate and social resilience, and advancing digital transformation, which contributes to more inclusive, equitable, and sustainable societies. Full article
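For readers unfamiliar with the core technique, a minimal sketch of PCA applied to a small, invented document-by-keyword count matrix is shown below; it illustrates the kind of multivariate reduction used in bibliometric mapping and does not reproduce the study's Scopus/Web of Science pipeline.

```python
# Minimal sketch of PCA on a document-by-keyword count matrix.
# The keywords and counts are invented toy data, not the study's corpus.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

keywords = ["blockchain", "machine learning", "big data", "ESG", "smart contracts"]
# Rows = publications, columns = keyword occurrence counts.
X = np.array([
    [3, 0, 1, 0, 2],
    [0, 4, 2, 1, 0],
    [1, 2, 3, 2, 0],
    [2, 0, 0, 0, 3],
    [0, 3, 2, 2, 0],
])

X_std = StandardScaler().fit_transform(X)
pca = PCA(n_components=2)
scores = pca.fit_transform(X_std)        # publication coordinates in PC space
loadings = pca.components_.T             # keyword contribution per component

print("explained variance ratio:", pca.explained_variance_ratio_)
for kw, load in zip(keywords, loadings):
    print(f"{kw:16s} PC1={load[0]:+.2f} PC2={load[1]:+.2f}")
```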
23 pages, 1096 KB  
Article
A Reinforcement Learning-Based Optimization Strategy for Noise Budget Management in Homomorphically Encrypted Deep Network Inference
by Chi Zhang, Fenhua Bai, Jinhua Wan and Yu Chen
Electronics 2026, 15(2), 275; https://doi.org/10.3390/electronics15020275 - 7 Jan 2026
Abstract
Homomorphic encryption provides a powerful cryptographic solution for privacy-preserving deep neural network inference, enabling computation on encrypted data. However, the practical application of homomorphic encryption is fundamentally constrained by the noise budget, a core component of homomorphic encryption schemes. The substantial multiplicative depth of modern deep neural networks rapidly consumes this budget, necessitating frequent, computationally expensive bootstrapping operations to refresh the noise. This bootstrapping process has emerged as the primary performance bottleneck. Current noise management strategies are predominantly static, triggering bootstrapping at pre-defined, fixed intervals. This approach is sub-optimal for deep, complex architectures, leading to excessive computational overhead and potential accuracy degradation due to cumulative precision loss. To address this challenge, we propose a Deep Network-aware Adaptive Noise-budget Management mechanism, a novel mechanism that formulates noise budget allocation as a sequential decision problem optimized via reinforcement learning. The core of the proposed mechanism comprises two components. First, we construct a layer-aware noise consumption prediction model to accurately estimate the heterogeneous computational costs and noise accumulation across different network layers. Second, we design a Deep Q-Network-driven optimization algorithm. This Deep Q-Network agent is trained to derive a globally optimal policy, dynamically determining the optimal timing and network location for executing bootstrapping operations, based on the real-time output of the noise predictor and the current network state. This approach shifts from a static, pre-defined strategy to an adaptive, globally optimized one. Experimental validation on several typical deep neural network architectures demonstrates that the proposed mechanism significantly outperforms state-of-the-art fixed strategies, markedly reducing redundant bootstrapping overhead while maintaining model performance. Full article
(This article belongs to the Special Issue Security and Privacy in Artificial Intelligence Systems)
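The paper's mechanism is a Deep Q-Network guided by a layer-aware noise predictor; the sketch below substitutes a much simpler tabular Q-learning agent on a simulated noise budget just to show the shape of the decision problem (when to pay the bootstrapping cost versus letting noise accumulate). Every constant here — budget, per-layer noise costs, bootstrapping cost, penalties — is invented for illustration.

```python
# Simplified stand-in for the paper's DQN: tabular Q-learning on a toy
# noise-budget process.  State = (layer index, remaining budget);
# actions: 0 = continue, 1 = bootstrap (refresh budget at a fixed cost).
# All constants below (budget, per-layer noise cost, costs/penalties) are invented.
import random
from collections import defaultdict

N_LAYERS, MAX_BUDGET = 10, 20
NOISE_PER_LAYER = [1, 3, 2, 4, 1, 3, 2, 4, 1, 3]   # hypothetical consumption per layer
BOOTSTRAP_COST, FAIL_PENALTY = 5.0, 50.0

def step(layer, budget, action):
    """Return (next_state, reward, done) for one layer of encrypted inference."""
    reward = 0.0
    if action == 1:                      # bootstrap: expensive but refreshes the budget
        budget, reward = MAX_BUDGET, -BOOTSTRAP_COST
    budget -= NOISE_PER_LAYER[layer]
    if budget <= 0:                      # decryption would fail: large penalty
        return None, reward - FAIL_PENALTY, True
    layer += 1
    return (layer, budget), reward, layer == N_LAYERS

Q = defaultdict(float)
alpha, gamma, eps = 0.1, 0.99, 0.1
for episode in range(5000):
    state, done = (0, MAX_BUDGET), False
    while not done:
        a = random.randrange(2) if random.random() < eps else \
            max((0, 1), key=lambda x: Q[(state, x)])
        nxt, r, done = step(*state, a)
        target = r if done else r + gamma * max(Q[(nxt, 0)], Q[(nxt, 1)])
        Q[(state, a)] += alpha * (target - Q[(state, a)])
        state = nxt if nxt is not None else state

# Greedy policy: for each visited state, bootstrap only if its Q-value is higher.
policy = {s: int(Q[(s, 1)] > Q[(s, 0)]) for (s, _) in Q}
```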
16 pages, 23583 KB  
Article
An Algorithmic Framework for Cocoa Ripeness Classification: A Comparative Analysis of Modern Deep Learning Architectures on Drone Imagery
by Thomures Momenpour and Arafat AbuMallouh
Algorithms 2026, 19(1), 55; https://doi.org/10.3390/a19010055 - 7 Jan 2026
Abstract
This study addresses the challenge of automating cocoa pod ripeness classification from drone imagery through a comprehensive and statistically rigorous investigation conducted on data collected from Ghanaian cocoa fields. We perform a direct comparison by subjecting a curated set of seven deep learning models to an identical, advanced algorithmic framework. This pipeline incorporates high-resolution (384×384) imagery, aggressive TrivialAugmentWide data augmentation, a weighted loss function with label smoothing, a unified two-stage fine-tuning strategy, and validation with Test Time Augmentation (TTA). To ensure statistical robustness, all experiments were repeated three times using different random seeds. Under these demanding experimental conditions, modern architectures demonstrated strong and consistent performance on this dataset: the Swin Transformer achieved the highest mean accuracy (79.27%±0.56%), followed closely by ConvNeXt-Base (79.21%±0.13%). In contrast, classic architectures such as ResNet-101 (55.86%±4.01%) and ResNet-50 (64.32%±0.94%) showed substantially reduced performance. A paired t-test confirmed that these differences are statistically significant (p<0.05). These results suggest that, within the evaluated setting, modern CNN- and transformer-based architectures exhibit greater robustness under challenging, statistically validated conditions, indicating their potential suitability for drone-based agricultural monitoring tasks. Full article
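One element of that pipeline, Test Time Augmentation, simply averages a model's softmax outputs over several augmented views of each image; a generic PyTorch sketch is given below. The transforms, class count, and backbone are placeholders, not the authors' exact configuration.

```python
# Generic Test-Time Augmentation (TTA): average softmax outputs over several
# augmented views of the same image.  Transforms and the model are placeholders.
import torch
import torchvision.transforms as T
from torchvision.models import resnet50

model = resnet50(num_classes=4)          # e.g. four ripeness classes (assumed)
model.eval()

tta_transforms = [
    T.Compose([T.Resize((384, 384)), T.ToTensor()]),
    T.Compose([T.Resize((384, 384)), T.RandomHorizontalFlip(p=1.0), T.ToTensor()]),
    T.Compose([T.Resize((384, 384)), T.ColorJitter(brightness=0.2), T.ToTensor()]),
]

@torch.no_grad()
def predict_tta(pil_image):
    """Return class probabilities averaged over all TTA views."""
    probs = []
    for tf in tta_transforms:
        x = tf(pil_image).unsqueeze(0)                 # 1 x C x H x W
        probs.append(torch.softmax(model(x), dim=1))
    return torch.stack(probs).mean(dim=0).squeeze(0)   # averaged probabilities
```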

18 pages, 1479 KB  
Article
Scalable MLOps Pipeline with Complexity-Driven Model Selection Using Microservices
by Oleh Pitsun and Myroslav Shymchuk
Technologies 2026, 14(1), 45; https://doi.org/10.3390/technologies14010045 - 7 Jan 2026
Abstract
The increasing complexity of integrating modern convolutional neural networks into software systems imposes significant computational demands on machine learning infrastructures. Existing MLOps systems lack mechanisms for dynamic model selection based on dataset complexity, leading to inefficient resource utilization and limited scalability under high-load conditions. This study employs convolutional neural network-based machine learning algorithms for image classification and ensemble methods for quantitative feature classification. The paper presents a self-optimizing machine learning pipeline that integrates a microservices-based architecture with a formal process for estimating image complexity and an optimization-based model selection strategy. The proposed methodology is based on designing an adaptive microservice-based ML pipeline that dynamically reconfigures its computation graph at runtime. The results confirm the effectiveness of the proposed approach for building resilient and high-performance distributed systems. The mechanism proposed in this work enables the adaptive use of modern deep learning algorithms, leading to improved result quality. A comparative analysis with existing approaches demonstrates superiority in model selection complexity, pipeline overhead, and scalability. The outcome of the proposed mechanism is an adaptive algorithm selection process based on bias-related parameters, enabling the selection of the most suitable module for data processing. Full article
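The routing idea can be illustrated with a very small sketch: estimate a complexity score for the incoming image (here, Shannon entropy of the grayscale histogram, an assumed proxy) and dispatch it to a lightweight or heavyweight model service accordingly. The proxy, threshold, and service names are placeholders, not the paper's formal complexity estimator.

```python
# Toy illustration of complexity-driven model selection: route each image to a
# light or heavy classifier based on a cheap complexity proxy (histogram entropy).
# The proxy, threshold, and service names are placeholders, not the paper's method.
import numpy as np
from PIL import Image

def image_entropy(path: str) -> float:
    """Shannon entropy of the grayscale histogram, in bits."""
    hist = np.array(Image.open(path).convert("L").histogram(), dtype=float)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def select_model(path: str, threshold: float = 6.5) -> str:
    """Return the name of the (hypothetical) microservice that should handle the image."""
    return "heavy-cnn-service" if image_entropy(path) > threshold else "light-cnn-service"

# Example: route = select_model("leaf_0001.jpg")
```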

13 pages, 1149 KB  
Article
Monitoring IoT and Robotics Data for Sustainable Agricultural Practices Using a New Edge–Fog–Cloud Architecture
by Mohamed El-Ouati, Sandro Bimonte and Nicolas Tricot
Computers 2026, 15(1), 32; https://doi.org/10.3390/computers15010032 - 7 Jan 2026
Abstract
Modern agricultural operations generate high-volume and diverse data (historical and stream) from various sources, including IoT devices, robots, and drones. This paper presents a novel smart farming architecture specifically designed to efficiently manage and process this complex data landscape. The proposed architecture comprises five distinct, interconnected layers: the Source Layer, the Ingestion Layer, the Batch Layer, the Speed Layer, and the Governance Layer. The Source Layer serves as the unified entry point, accommodating structured, spatial, and image data from sensors, drones, and ROS-equipped robots. The Ingestion Layer uses a hybrid fog/cloud architecture with Kafka for real-time streams and for batch processing of historical data. Data is then segregated for processing: the cloud-deployed Batch Layer employs a Hadoop cluster, Spark, Hive, and Drill for large-scale historical analysis, while the Speed Layer utilizes Geoflink and PostGIS for low-latency, real-time geovisualization. Finally, the Governance Layer guarantees data quality, lineage, and organization across all components using Open Metadata. This layered, hybrid approach provides a scalable and resilient framework capable of transforming raw agricultural data into timely, actionable insights, addressing the critical need for advanced data management in smart farming. Full article
(This article belongs to the Special Issue Computational Science and Its Applications 2025 (ICCSA 2025))
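As a tiny illustration of the Ingestion Layer's streaming side, the sketch below publishes sensor readings to a Kafka topic with kafka-python; the broker address, topic name, and message fields are assumptions made for the example, not configuration from the paper.

```python
# Minimal sketch of streaming ingestion into Kafka (kafka-python).
# Broker address, topic name, and message schema are assumptions for illustration.
import json
import time
from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="fog-node:9092",                      # assumed broker address
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

def publish_reading(sensor_id: str, temperature: float, soil_moisture: float):
    """Send one sensor reading to the real-time stream consumed downstream."""
    producer.send("field-sensors", {                        # assumed topic name
        "sensor_id": sensor_id,
        "temperature": temperature,
        "soil_moisture": soil_moisture,
        "timestamp": time.time(),
    })

publish_reading("plot-12-thermo", 21.4, 0.33)
producer.flush()
```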

36 pages, 2201 KB  
Article
A Stacking-Based Ensemble Model for Multiclass DDoS Detection Using Shallow and Deep Machine Learning Algorithms
by Eduardo Angulo, Leonardo Lizcano and Jose Marquez
Appl. Sci. 2026, 16(2), 578; https://doi.org/10.3390/app16020578 - 6 Jan 2026
Abstract
Distributed Denial-of-Service (DDoS) attacks remain a significant threat to the stability and reliability of modern networked systems. This study presents a hierarchical stacking ensemble that integrates multiple Shallow Machine Learning (S-ML) and Deep Machine Learning (D-ML) algorithms for multiclass DDoS detection. The proposed architecture consists of three layers: Layer Zero (base learners), Layer One (meta learners), and Layer Two (final voting). The base layer combines heterogeneous S-ML and D-ML models, tree-based, kernel-based, and neural architectures, while the meta layer employs regression and neural models trained on meta-features derived from base-layer predictions. The final decision is determined through a voting mechanism that aggregates the outputs of the meta models. Using the CIC-DDoS2019 dataset with a nine-class configuration, the model achieves an accuracy of 91.26% and macro F1-scores above 0.90 across most attack categories. Unlike many prior works that report near-perfect performance under binary or reduced-class settings, our evaluation addresses a more demanding multiclass scenario with large-scale traffic (∼8.85 M flows) and a broad feature space. The results demonstrate that the ensemble provides competitive multiclass detection performance and consistent behavior across heterogeneous attack types, supporting its applicability to high-volume network monitoring environments. Full article
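A compressed two-level analogue of that layout can be written with scikit-learn's StackingClassifier, as sketched below; the base and meta learners shown are generic stand-ins on synthetic data, not the exact S-ML/D-ML mix, the CIC-DDoS2019 features, or the third voting layer used in the paper.

```python
# Two-level stacking sketch (base learners + meta learner) with scikit-learn.
# Estimators and data are generic stand-ins; the paper's third voting layer and
# deep models are not reproduced here.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.svm import LinearSVC

X, y = make_classification(n_samples=2000, n_features=30, n_classes=4,
                           n_informative=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

base_learners = [                            # "Layer Zero": heterogeneous base models
    ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
    ("svm", LinearSVC(random_state=0)),
    ("mlp", MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)),
]
meta_learner = LogisticRegression(max_iter=1000)   # "Layer One": trained on base outputs

clf = StackingClassifier(estimators=base_learners, final_estimator=meta_learner, cv=5)
clf.fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```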

13 pages, 1222 KB  
Article
Whole-Plant Trait Integration Underpins High Leaf Biomass Productivity in a Modern Mulberry (Morus alba L.) Cultivar
by Bingjie Tu, Nan Xu, Juexian Dong and Wenhui Bao
Horticulturae 2026, 12(1), 67; https://doi.org/10.3390/horticulturae12010067 - 6 Jan 2026
Abstract
Understanding yield improvement in horticultural systems depends on elucidating how multiple plant traits operate in concert to sustain productivity. Mulberry (Morus alba L.) provides a suitable model for examining such whole-plant integration. Under cold-region field conditions, a modern high-yield cultivar (‘Nongsang 14’) was compared with a traditional cultivar (‘Lusang 1’). Measurements encompassed canopy architecture, biomass allocation between roots and shoots, leaf economic traits, and gas-exchange parameters, allowing trait coordination to be evaluated across structural and physiological dimensions. Multivariate profiling—Principal Component Analysis (PCA) and correlation networks—was used to characterise phenotypic integration. The modern cultivar’s superior productivity emerged as a coordinated “acquisitive” trait syndrome. This strategy couples a larger canopy (higher LAI) and nitrogen-rich foliage (higher LNC) with greater stomatal conductance (Gs), operating together with reduced root-to-shoot allocation. These features form a tightly connected network where structural investment and physiological upregulation are synchronised to maximise carbon gain. These findings provide a whole-plant framework for interpreting high productivity, offering guidance for breeding programmes that target trait integration rather than single-trait optimisation. Full article

32 pages, 5625 KB  
Article
Multi-Source Concurrent Renewable Energy Estimation: A Physics-Informed Spatio-Temporal CNN-LSTM Framework
by Razan Mohammed Aljohani and Amal Almansour
Sustainability 2026, 18(1), 533; https://doi.org/10.3390/su18010533 - 5 Jan 2026
Abstract
Accurate and reliable estimation of renewable energy generation is critical for modern power grid management, yet the inherent volatility and distinct physical drivers of multi-source renewables present significant modeling challenges. This paper proposes a unified deep learning framework for the concurrent estimation of power generation from solar, wind, and hydro sources. This methodology, termed nowcasting, utilizes real-time weather inputs to estimate immediate power generation. We introduce a hybrid spatio-temporal CNN-LSTM architecture that leverages a two-branch design to process both sequential weather data and static, plant-specific attributes in parallel. A key innovation of our approach is the use of a physics-informed Capacity Factor as the normalized target variable, which is customized for each energy source and notably employs a non-linear, S-shaped tanh-based power curve to model wind generation. To ensure high-fidelity spatial feature integration, a cKDTree algorithm was implemented to accurately match each power plant with its nearest corresponding weather data. To guarantee methodological rigor and prevent look-ahead bias, the model was trained and validated using a strict chronological data splitting strategy and was rigorously benchmarked against Linear Regression and XGBoost models. The framework demonstrated exceptional robustness on a large-scale dataset of over 1.5 million records spanning five European countries, achieving R-squared (R2) values of 0.9967 for solar, 0.9993 for wind, and 0.9922 for hydro. While traditional ensemble models performed competitively on linear solar data, the proposed CNN-LSTM architecture demonstrated superior performance in capturing the complex, non-linear dynamics of wind energy, confirming its superiority in capturing intricate meteorological dependencies. This study validates the significant contribution of a spatio-temporal and physics-informed framework, establishing a foundational model for real-time energy assessment and enhanced grid sustainability. Full article
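Two concrete pieces of that pipeline are easy to sketch: matching each plant to its nearest weather grid point with SciPy's cKDTree, and a tanh-shaped wind power curve for the capacity factor. The coordinates and curve constants below are invented placeholders, not the paper's calibrated values, and the Euclidean lat/lon distance is a simplification.

```python
# Sketch of two pipeline pieces: nearest-weather-point lookup via cKDTree and a
# tanh-shaped wind power curve.  Coordinates and curve constants are invented.
import numpy as np
from scipy.spatial import cKDTree

# Hypothetical weather grid points and plant locations as (lat, lon).
weather_points = np.array([[48.1, 11.5], [52.5, 13.4], [40.4, -3.7], [59.3, 18.1]])
plants = np.array([[48.3, 11.9], [40.0, -3.5]])

tree = cKDTree(weather_points)
dist, idx = tree.query(plants, k=1)          # index of nearest weather point per plant
print("nearest weather point per plant:", idx)

def wind_capacity_factor(wind_speed, v_cut_in=3.0, v_rated=12.0, v_cut_out=25.0):
    """S-shaped (tanh) approximation of a turbine power curve, clipped to [0, 1]."""
    wind_speed = np.asarray(wind_speed, dtype=float)
    mid = 0.5 * (v_cut_in + v_rated)
    width = (v_rated - v_cut_in) / 4.0
    cf = 0.5 * (1.0 + np.tanh((wind_speed - mid) / width))
    cf[(wind_speed < v_cut_in) | (wind_speed > v_cut_out)] = 0.0
    return cf

print(wind_capacity_factor([2.0, 7.5, 14.0, 30.0]))
```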

23 pages, 3943 KB  
Article
High-Rise Building Area Extraction Based on Prior-Embedded Dual-Branch Neural Network
by Qiliang Si, Liwei Li and Gang Cheng
Remote Sens. 2026, 18(1), 167; https://doi.org/10.3390/rs18010167 - 4 Jan 2026
Abstract
High-rise building areas (HRBs) play a crucial role in providing social and environmental services during the process of modern urbanization. Their large-scale, long-term spatial distribution characteristics have significant implications for fields such as urban planning and regional climate analysis. However, existing studies are largely limited to local regions and fixed-time-phase images. These studies are also influenced by differences in remote sensing image acquisition, such as regional architectural styles, lighting conditions, seasons, and sensor variations. This makes it challenging to achieve robust extraction across time and regions. To address these challenges, we propose an improved method for extracting HRBs that uses a Prior-Embedded Dual-Branch Neural Network (PEDNet). The dual-path design balances global features with local details. More importantly, we employ a window attention mechanism to introduce diverse prior information as embedded features. By integrating these features, our method becomes more robust against HRB image feature variations. We conducted extensive experiments using Sentinel-2 data from four typical cities. The results demonstrate that our method outperforms traditional models, such as FCN and U-Net, as well as more recent high-performance segmentation models, including DeepLabV3+ and BuildFormer. It effectively captures HRB features in remote sensing images, adapts to complex conditions, and provides a reliable tool for wide geographic span, cross-timestamp urban monitoring. It has practical applications for optimizing urban planning and improving the efficiency of resource management. Full article

28 pages, 2830 KB  
Review
Human Genome Safe Harbor Sites: A Comprehensive Review of Criteria, Discovery, Features, and Applications
by Amer Ahmed, Daria Di Molfetta, Giorgia Natalia Iaconisi, Antonello Caponio, Ansu Singh, Aasia Bibi, Vincenza Dolce, Luigi Palmieri, Vincenzo Coppola and Giuseppe Fiermonte
Cells 2026, 15(1), 81; https://doi.org/10.3390/cells15010081 - 4 Jan 2026
Abstract
The stable and safe integration of exogenous DNA into the genome is crucial to both genetic engineering and gene therapy. Traditional transgenesis approaches, such as those using retroviral vectors, result in random genomic integration, posing the risk of insertional mutagenesis and transcriptional dysregulation. Safe harbor sites (SHSs), genomic loci that support reliable transgene expression without compromising endogenous gene function, genomic integrity, or cellular physiology, have been identified and characterized across various model organisms. Well-established SHSs such as AAVS1, ROSA26, and CLYBL are routinely utilized for targeted transgene integration in human cells. Recent advances in genome architecture, gene regulation, and genome editing technologies are driving the discovery of novel SHSs for precise and safe genetic modification. This review aims to provide a comprehensive overview of SHSs and their applications that will guide investigators in the choice of SHS, especially when complementary sites are needed for more than one transgene integration. First, it outlines safety and functional criteria that qualify a genomic site as a safe harbor site. It then discusses the two primary strategies for identifying SHSs: i) traditional lentiviral-based random transgenesis, and ii) modern genome-wide in silico screening followed by CRISPR-based validation. This review also provides an updated catalogue of currently known SHSs in the human genome, detailing their characteristics, uses, and limitations. Additionally, it discusses the diverse applications of SHSs in basic research, gene therapy, CAR T cell-based therapy, and biotechnological production systems. Finally, it concludes by highlighting challenges in identifying universally applicable SHSs and outlines future directions for their refinement and validation across biological systems. Full article
(This article belongs to the Special Issue CRISPR-Based Genome Editing in Translational Research—Third Edition)

15 pages, 921 KB  
Article
Rethinking DeepVariant: Efficient Neural Architectures for Intelligent Variant Calling
by Anastasiia Gurianova, Anastasiia Pestruilova, Aleksandra Beliaeva, Artem Kasianov, Liudmila Mikhailova, Egor Guguchkin and Evgeny Karpulevich
Int. J. Mol. Sci. 2026, 27(1), 513; https://doi.org/10.3390/ijms27010513 - 4 Jan 2026
Abstract
DeepVariant has revolutionized the field of genetic variant identification by reframing variant detection as an image classification problem. However, despite its wide adoption in bioinformatics workflows, the tool continues to evolve mainly through the expansion of training datasets, while its core neural network architecture—Inception V3—has remained unchanged. In this study, we revisited the DeepVariant design and presented a prototype of a modernized version that supports alternative neural network backbones. As a proof of concept, we replaced the legacy Inception V3 model with a mid-sized EfficientNet model and evaluated its performance using the benchmark dataset from the Genome in a Bottle (GIAB) project. The alternative architecture demonstrated faster convergence, a twofold reduction in the number of parameters, and improved accuracy in variant identification. On the test dataset, the updated workflow achieved consistent improvements of +0.1% in SNP F1-score, enabling the detection of up to several hundred additional true variants per genome. These results show that optimizing the neural architecture alone can enhance the accuracy, robustness, and efficiency of variant calling, thereby improving the overall quality of sequencing data analysis. Full article
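The backbone swap itself is conceptually simple; the sketch below shows how an EfficientNet could stand in for an Inception V3 classifier head for DeepVariant-style three-class genotype calls in PyTorch. The input channel count and image size are assumptions for illustration, and this is not the authors' modified DeepVariant code.

```python
# Conceptual sketch of swapping the classification backbone (PyTorch/torchvision).
# Input channel count and image size are assumptions; this is not the authors'
# modified DeepVariant code.
import torch
import torch.nn as nn
from torchvision.models import efficientnet_b3

NUM_CLASSES = 3          # DeepVariant-style genotype classes: hom-ref, het, hom-alt
IN_CHANNELS = 6          # assumed number of pileup-image channels

def build_efficientnet(num_classes=NUM_CLASSES, in_channels=IN_CHANNELS):
    model = efficientnet_b3(weights=None, num_classes=num_classes)
    # Replace the stem convolution so the network accepts pileup tensors
    # with an arbitrary number of channels instead of 3-channel RGB.
    stem = model.features[0][0]
    model.features[0][0] = nn.Conv2d(
        in_channels, stem.out_channels, kernel_size=stem.kernel_size,
        stride=stem.stride, padding=stem.padding, bias=False,
    )
    return model

model = build_efficientnet()
x = torch.randn(2, IN_CHANNELS, 224, 224)   # dummy batch of pileup-like tensors
print(model(x).shape)                        # -> torch.Size([2, 3])
```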

28 pages, 8796 KB  
Article
CPU-Only Spatiotemporal Anomaly Detection in Microservice Systems via Dynamic Graph Neural Networks and LSTM
by Jiaqi Zhang and Hao Yang
Symmetry 2026, 18(1), 87; https://doi.org/10.3390/sym18010087 - 3 Jan 2026
Abstract
Microservice architecture has become a foundational component of modern distributed systems due to its modularity, scalability, and deployment flexibility. However, the increasing complexity and dynamic nature of service interactions have introduced substantial challenges in accurately detecting runtime anomalies. Existing methods often rely on multiple monitoring metrics, which introduce redundancy and noise while increasing the complexity of data collection and model design. This paper proposes a novel spatiotemporal anomaly detection framework that integrates Dynamic Graph Neural Networks (D-GNN) combined with Long Short-Term Memory (LSTM) networks to model both the structural dependencies and temporal evolution of microservice behaviors. Unlike traditional approaches, our method uses only CPU utilization as the sole monitoring metric, leveraging its high observability and strong correlation with service performance. From a symmetry perspective, normal microservice behaviors exhibit approximately symmetric spatiotemporal patterns: structurally similar services tend to share similar CPU trajectories, and recurring workload cycles induce quasi-periodic temporal symmetries in utilization signals. Runtime anomalies can therefore be interpreted as symmetry-breaking events that create localized structural and temporal asymmetries in the service graph. The proposed framework is explicitly designed to exploit such symmetry properties: the D-GNN component respects permutation symmetry on the microservice graph while embedding the evolving structural context of each service, and the LSTM module captures shift-invariant temporal trends in CPU usage to highlight asymmetric deviations over time. Experiments conducted on real-world microservice datasets demonstrate that the proposed method delivers excellent performance, achieving 98 percent accuracy and 98 percent F1-score. Compared to baseline methods such as DeepTraLog, which achieves 0.93 precision, 0.978 recall, and 0.954 F1-score, our approach performs competitively, achieving 0.980 precision, 0.980 recall, and 0.980 F1-score. Our results indicate that a single-metric, symmetry-aware spatiotemporal modeling approach can achieve competitive performance without the complexity of multi-metric inputs, providing a lightweight and robust solution for real-time anomaly detection in large-scale microservice environments. Full article
(This article belongs to the Section Computer)
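The temporal half of that design can be illustrated with a small LSTM forecaster over CPU-utilization windows that flags points whose forecast error exceeds a threshold; the graph (D-GNN) component is omitted, and the window size, hidden size, and threshold below are assumptions for the sketch.

```python
# Minimal sketch of the temporal component only: an LSTM forecasts the next CPU
# utilization value per service, and large forecast errors are flagged as anomalies.
# The graph (D-GNN) component is omitted; window size and threshold are assumptions.
import torch
import torch.nn as nn

class CPUForecaster(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                  # x: (batch, window, 1)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])    # predict the next utilization value

def detect_anomalies(model, series, window=30, threshold=0.15):
    """Return indices where |forecast - actual| exceeds the threshold."""
    model.eval()
    flagged = []
    with torch.no_grad():
        for t in range(window, len(series)):
            x = torch.tensor(series[t - window:t], dtype=torch.float32).view(1, window, 1)
            if abs(model(x).item() - series[t]) > threshold:
                flagged.append(t)
    return flagged

model = CPUForecaster()   # in practice, trained on normal-behavior CPU traces
```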

29 pages, 4094 KB  
Article
Hybrid LSTM–DNN Architecture with Low-Discrepancy Hypercube Sampling for Adaptive Forecasting and Data Reliability Control in Metallurgical Information-Control Systems
by Jasur Sevinov, Barnokhon Temerbekova, Gulnora Bekimbetova, Ulugbek Mamanazarov and Bakhodir Bekimbetov
Processes 2026, 14(1), 147; https://doi.org/10.3390/pr14010147 - 1 Jan 2026
Abstract
The study focuses on the design of an intelligent information-control system (ICS) for metallurgical production, aimed at robust forecasting of technological parameters and automatic self-adaptation under noise, anomalies, and data drift. The proposed architecture integrates a hybrid LSTM–DNN model with low-discrepancy hypercube sampling using Sobol and Halton sequences to ensure uniform coverage of operating conditions and the hyperparameter space. The processing pipeline includes preprocessing and temporal synchronization of measurements, a parameter identification module, anomaly detection and correction using an ε-threshold scheme, and a decision-making and control loop. In simulation scenarios modeling the dynamics of temperature, pressure, level, and flow (1 min sampling interval, injected anomalies, and measurement noise), the hybrid model outperformed GRU and CNN architectures: a determination coefficient of R2 > 0.92 was achieved for key indicators, MAE and RMSE improved by 7–15%, and the proportion of unreliable measurements after correction decreased to <2% (compared with 8–12% without correction). The experiments also demonstrated accelerated adaptation during regime changes. The scientific novelty lies in combining recurrent memory and deep nonlinear approximation with deterministic experimental design in the hypercube of states and hyperparameters, enabling reproducible self-adaptation of the ICS and increased noise robustness without upgrading the measurement hardware. Modern metallurgical information-control systems operate under non-stationary regimes and limited measurement reliability, which reduces the robustness of conventional forecasting and decision-support approaches. To address this issue, a hybrid LSTM–DNN architecture combined with low-discrepancy hypercube probing and anomaly-aware data correction is proposed. The proposed approach is distinguished by the integration of hybrid neural forecasting, deterministic hypercube-based adaptation, and anomaly-aware data correction within a unified information-control loop for non-stationary industrial processes. Full article
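The low-discrepancy sampling step itself can be reproduced with SciPy's quasi-Monte Carlo module; the hyperparameter ranges below (learning rate, LSTM units, dropout) are assumed for illustration rather than taken from the paper.

```python
# Sketch of low-discrepancy hyperparameter sampling with a Sobol sequence (SciPy).
# The three hyperparameter ranges are assumptions chosen for illustration.
import numpy as np
from scipy.stats import qmc

sampler = qmc.Sobol(d=3, scramble=True, seed=0)
unit_points = sampler.random_base2(m=4)        # 2**4 = 16 points in [0, 1]^3

lower = [1e-4, 16, 0.0]                        # learning rate, LSTM units, dropout
upper = [1e-2, 256, 0.5]
configs = qmc.scale(unit_points, lower, upper)

for lr, units, dropout in configs:
    print(f"lr={lr:.5f}  lstm_units={int(round(units))}  dropout={dropout:.2f}")
```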
