
Search Results (3,966)

Search Parameters:
Keywords = network scalability

21 pages, 4411 KB  
Article
A Methodology for Microcrack Detection in Plate Heat Exchanger Sheets Using Adaptive Templates and Features Value Analysis
by Zhibo Ding and Weiqi Yuan
Electronics 2026, 15(3), 605; https://doi.org/10.3390/electronics15030605 - 29 Jan 2026
Abstract
Aiming at the detection challenges caused by the diverse morphology of microcracks in plate heat exchanger sheets, this paper proposes a detection framework that integrates parameter-driven adaptive template generation, binary scale optimization, and feature value threshold segmentation using convolutional networks. First, based on the grayscale characteristics of microcracks, an adaptive template generation model driven by key parameters (width, height, and endpoint grayscale difference) is constructed, obtaining a unique solution by solving the boundary conditions of physical features. Second, to overcome the challenge of microcrack width continuity, a binary scale optimization strategy based on the critical decay ratio k* of the correlation coefficient is designed, enabling the coverage of continuous-width defects with a finite set of templates. Finally, enhanced features are fed into a convolutional network. Utilizing the bimodal characteristic of the feature value distribution, the region corresponding to the extreme values in the top 0.3% before the foreground peak is located using 3σ extreme value statistics, achieving adaptive segmentation to identify defect regions. Evaluation on the self-built microcrack dataset SUT-B1 yielded results of 83.59% recall, 80.55% precision, and an F1 score of 81.98%. This method outperforms small object detection networks, demonstrating its advantage in morphological adaptability for small-sized objects. It also surpasses receptive field optimization modules, proving the necessity of structural optimization. The proposed method demonstrates practicality and scalability in the field of industrial inspection. Full article
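The template matching described in this abstract scores candidate regions with a correlation coefficient, whose decay the paper's critical ratio k* thresholds. As an editorial illustration only (the function name and inputs below are hypothetical, not from the paper), the zero-normalized cross-correlation commonly used for such scoring can be sketched as:

```python
import math

def ncc(a, b):
    """Zero-normalized cross-correlation between two equal-length patches."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = math.sqrt(sum((x - ma) ** 2 for x in a) * sum((y - mb) ** 2 for y in b))
    return num / den
```

Scores range from -1 to 1; a template bank covering continuous widths would keep each template only while its score stays above the chosen decay threshold.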
39 pages, 1649 KB  
Review
The Network and Information Systems 2 Directive: Toward Scalable Cyber Risk Management in the Remote Patient Monitoring Domain: A Systematic Review
by Brian Mulhern, Chitra Balakrishna and Jan Collie
IoT 2026, 7(1), 14; https://doi.org/10.3390/iot7010014 - 29 Jan 2026
Abstract
Healthcare 5.0 and the Internet of Medical Things (IoMT) are emerging as a scalable model for the delivery of customised healthcare and chronic disease management, through Remote Patient Monitoring (RPM) in patient smart home environments. Large-scale RPM initiatives are being rolled out by healthcare providers (HCPs); however, the constrained nature of IoMT devices and proximity to poorly administered smart home technologies create a cyber risk for highly personalised patient data. The recent Network and Information Systems (NIS 2) directive requires HCPs to improve their cyber risk management approaches, mandating heavy penalties for non-compliance. Current research into cyber risk management in smart home-based RPM does not address scalability. This research examines scalability through the lens of the Non-adoption, Abandonment, Scale-up, Spread and Sustainability (NASSS) framework and develops a novel Scalability Index (SI), informed by a PRISMA-guided systematic literature review. Our search strategy identified 57 studies across major databases including ACM, IEEE, MDPI, Elsevier, and Springer, authored between January 2016 and March 2025 (final search 21 March 2025), which focussed on cyber security risk management in the RPM context. Studies focussing solely on healthcare institutional settings were excluded. To mitigate bias, a sample of the papers (30/57) was assessed by two other raters; the resulting Cohen’s Kappa inter-rater agreement statistic (0.8) indicated strong agreement on study selection. The results, presented in graphical and tabular format, provide evidence that most cyber risk approaches do not consider scalability from the HCP perspective. Applying the SI to the 57 studies in our review resulted in a low to medium scalability potential of most cyber risk management proposals, indicating that they would not support the requirements of NIS 2 in the RPM context. A limitation of our work is that it was not tested in a live large-scale setting. However, future research could validate the proposed SI, providing guidance for researchers and practitioners in enhancing cyber risk management of large-scale RPM initiatives. Full article
(This article belongs to the Topic Applications of IoT in Multidisciplinary Areas)
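The Cohen's Kappa statistic this review reports (0.8) corrects raw inter-rater agreement for agreement expected by chance. As an illustrative sketch only (variable names are hypothetical, not from the review), it can be computed from two raters' include/exclude decisions like this:

```python
def cohen_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters labelling the same items."""
    n = len(rater_a)
    # observed agreement: fraction of items both raters labelled identically
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # chance agreement: sum over labels of the product of marginal frequencies
    labels = set(rater_a) | set(rater_b)
    p_e = sum((rater_a.count(l) / n) * (rater_b.count(l) / n) for l in labels)
    return (p_o - p_e) / (1 - p_e)
```

Values near 1 indicate agreement well beyond chance; by common rules of thumb, 0.8 falls in the "strong agreement" band the authors cite.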
23 pages, 5814 KB  
Article
Multi-Database EEG Integration for Subject-Independent Emotion Recognition in Brain–Computer Interface Systems
by Jaydeep Panchal, Moon Inder Singh, Karmjit Singh Sandha and Mandeep Singh
Mathematics 2026, 14(3), 474; https://doi.org/10.3390/math14030474 - 29 Jan 2026
Abstract
Affective computing has emerged as a pivotal field in human–computer interaction. Recognizing human emotions through electroencephalogram (EEG) signals can advance our understanding of cognition and support healthcare. This study introduces a novel subject-independent emotion recognition framework by integrating multiple EEG emotion databases (DEAP, MAHNOB HCI-Tagging, DREAMER, AMIGOS and REFED) into a unified dataset. EEG segments were transformed into feature vectors capturing statistical, spectral, and entropy-based measures. Standardized pre-processing, analysis of variance (ANOVA) F-test feature selection, and six machine learning models were applied to the extracted features. Classification models such as Decision Tree, Discriminant Analysis, Support Vector Machine (SVM), K-Nearest Neighbor (K-NN), Naive Bayes, and Artificial Neural Networks (ANN) were considered. Experimental results demonstrate that SVM achieved the best performance for arousal classification (70.43%), while ANN achieved the highest accuracy for valence classification (68.07%), with both models exhibiting strong generalization across subjects. The results highlight the feasibility of developing biomimetic brain–computer interface (BCI) systems for objective assessment of emotional intelligence and its cognitive underpinnings, enabling scalable applications in affective computing and adaptive human–machine interaction. Full article
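The ANOVA F-test feature selection mentioned in this abstract ranks each feature by the ratio of between-class to within-class variance. As a minimal editorial sketch (a simplified pure-Python stand-in for library routines such as scikit-learn's f_classif; the data below are illustrative), the F statistic for one feature split by class looks like:

```python
def anova_f(groups):
    """One-way ANOVA F statistic for a single feature, split by class label."""
    k = len(groups)                        # number of classes
    n = sum(len(g) for g in groups)        # total samples
    grand = sum(sum(g) for g in groups) / n
    # between-class variability, weighted by class size
    ssb = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    # within-class variability
    ssw = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ssb / (k - 1)) / (ssw / (n - k))
```

Features would be ranked by F and only the top-scoring ones retained before fitting the SVM, K-NN, or ANN classifiers.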
53 pages, 3098 KB  
Article
Auditing Inferential Blind Spots: A Framework for Evaluating Forensic Coverage in Network Telemetry Architectures
by Mehrnoush Vaseghipanah, Sam Jabbehdari and Hamidreza Navidi
Network 2026, 6(1), 9; https://doi.org/10.3390/network6010009 - 29 Jan 2026
Abstract
Network operators increasingly rely on abstracted telemetry (e.g., flow records and time-aggregated statistics) to achieve scalable monitoring of high-speed networks, but this abstraction fundamentally constrains the forensic and security inferences that can be supported from network data. We present a design-time audit framework that evaluates which threat hypotheses become non-supportable as network evidence is transformed from packet-level traces to flow records and time-aggregated statistics. Our methodology examines three evidence layers (L0: packet headers, L1: IP Flow Information Export (IPFIX) flow records, L2: time-aggregated flows), computes a catalog of 13 network-forensic artifacts (e.g., destination fan-out, inter-arrival time burstiness, SYN-dominant connection patterns) at each layer, and maps artifact availability to tactic support using literature-grounded associations with MITRE Adversarial Tactics, Techniques, and Common Knowledge (ATT&CK). Applied to backbone traffic from the MAWI Day-In-The-Life (DITL) archive, the audit reveals selective inference loss: Execution becomes non-supportable at L1 (due to loss of packet-level timing artifacts), while Lateral Movement and Persistence become non-supportable at L2 (due to loss of entity-linked structural artifacts). Inference coverage decreases from 9 to 7 out of 9 evaluated ATT&CK tactics, while coverage of defensive countermeasures (MITRE D3FEND) increases at L1 (7 → 8 technique categories) then decreases at L2 (8 → 7), reflecting a shift from behavioral monitoring to flow-based controls. The framework provides network architects with a practical tool for configuring telemetry systems (e.g., IPFIX exporters, P4 pipelines) to reason about and provision the minimum forensic coverage. Full article
(This article belongs to the Special Issue Advanced Technologies in Network and Service Management, 2nd Edition)
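The layer-by-layer audit this abstract describes can be pictured as a set-containment check: a tactic remains supportable at a telemetry layer only if every artifact it requires survives that layer's abstraction. The toy tables below are illustrative simplifications invented for this sketch, not the paper's 13-artifact catalog or its literature-grounded ATT&CK mappings:

```python
# Hypothetical artifact sets per evidence layer (shrinking with abstraction)
ARTIFACTS_BY_LAYER = {
    "L0": {"pkt_timing", "syn_flags", "flow_5tuple", "fanout"},  # packet headers
    "L1": {"syn_flags", "flow_5tuple", "fanout"},  # flow records drop timing
    "L2": {"fanout"},                              # aggregation drops linkage
}
# Hypothetical artifact requirements per tactic
TACTIC_REQUIREMENTS = {
    "Execution": {"pkt_timing"},
    "Lateral Movement": {"flow_5tuple"},
    "Discovery": {"fanout"},
}

def supportable(layer):
    """Tactics whose required artifacts are all available at this layer."""
    available = ARTIFACTS_BY_LAYER[layer]
    return {t for t, req in TACTIC_REQUIREMENTS.items() if req <= available}
```

Even in this toy form, the audit reproduces the paper's qualitative pattern: timing-dependent tactics drop out at L1, linkage-dependent ones at L2.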
17 pages, 1874 KB  
Article
A Large-Kernel and Scale-Aware 2D CNN with Boundary Refinement for Multimodal Ischemic Stroke Lesion Segmentation
by Omar Ibrahim Alirr
Eng 2026, 7(2), 59; https://doi.org/10.3390/eng7020059 - 29 Jan 2026
Abstract
Accurate segmentation of ischemic stroke lesions from multimodal magnetic resonance imaging (MRI) is fundamental for quantitative assessment, treatment planning, and outcome prediction; yet, it remains challenging due to highly heterogeneous lesion morphology, low lesion–background contrast, and substantial variability across scanners and protocols. This work introduces Tri-UNetX-2D, a large-kernel and scale-aware 2D convolutional network with explicit boundary refinement for automated ischemic stroke lesion segmentation from DWI, ADC, and FLAIR MRI. The architecture is built on a compact U-shaped encoder–decoder backbone and integrates three key components: first, a Large-Kernel Inception (LKI) module that employs factorized depthwise separable convolutions and dilation to emulate very large receptive fields, enabling efficient long-range context modeling; second, a Scale-Aware Fusion (SAF) unit that learns adaptive weights to fuse encoder and decoder features, dynamically balancing coarse semantic context and fine structural detail; and third, a Boundary Refinement Head (BRH) that provides explicit contour supervision to sharpen lesion borders and reduce boundary error. Squeeze-and-Excitation (SE) attention is embedded within LKI and decoder stages to recalibrate channel responses and emphasize modality-relevant cues, such as DWI-dominant acute core and FLAIR-dominant subacute changes. On the ISLES 2022 multi-center benchmark, Tri-UNetX-2D improves Dice Similarity Coefficient from 0.78 to 0.86, reduces the 95th-percentile Hausdorff distance from 12.4 mm to 8.3 mm, and increases the lesion-wise F1-score from 0.71 to 0.81 compared with a plain 2D U-Net trained under identical conditions. These results demonstrate that the proposed framework achieves competitive performance with substantially lower complexity than typical 3D or ensemble-based models, highlighting its potential for scalable, clinically deployable stroke lesion segmentation. Full article
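The Dice Similarity Coefficient reported above (0.78 → 0.86) measures overlap between predicted and reference lesion masks: for binary masks it is twice the intersection over the sum of the mask sizes. A minimal illustrative sketch on flattened binary masks (not the paper's evaluation code):

```python
def dice(pred, truth):
    """Dice similarity for flattened binary masks (lists of 0/1)."""
    inter = sum(p * t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return 2 * inter / total if total else 1.0  # both empty: perfect agreement
```

Dice of 1.0 means perfect overlap; boundary quality, which Dice is insensitive to, is why the paper additionally reports the 95th-percentile Hausdorff distance.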
32 pages, 7289 KB  
Article
G-PFL-ID: Graph-Driven Personalized Federated Learning for Unsupervised Intrusion Detection in Non-IID IoT Systems
by Daniel Ayo Oladele, Ayokunle Ige, Olatunbosun Agbo-Ajala, Olufisayo Ekundayo, Sree Ganesh Thottempudi, Malusi Sibiya and Ernest Mnkandla
IoT 2026, 7(1), 13; https://doi.org/10.3390/iot7010013 - 29 Jan 2026
Abstract
Intrusion detection in IoT networks is challenged by data heterogeneity, label scarcity, and privacy constraints. Traditional federated learning (FL) methods often assume IID data or require supervised labels, limiting their practicality. We propose G-PFL-ID, a graph-driven personalized federated learning framework for unsupervised intrusion detection in non-IID IoT systems. Our method trains a global graph encoder (GCN or GAE) with a DeepSVDD objective under a federated regularizer (FedReg) that combines proximal and variance penalties, then personalizes local models via a lightweight fine-tuning head. We evaluate G-PFL-ID on the IoT-23 (Mirai-based captures) and N-BaIoT (device-level) datasets under realistic heterogeneity (Dirichlet-based partitioning with concentration parameters α ∈ {0.1, 0.5, …} and client counts K ∈ {10, 15, 20} for IoT-23, and natural device-based partitioning for N-BaIoT). G-PFL-ID outperforms global FL baselines and recent graph-based federated anomaly detectors, achieving up to 99.46% AUROC on IoT-23 and 97.74% AUROC on N-BaIoT. Ablation studies confirm that the proximal and variance penalties reduce inter-round drift and representation collapse, and that lightweight personalization recovers local sensitivity—especially for clients with limited data. Our work bridges graph-based anomaly detection with personalized FL for scalable, privacy-preserving IoT security. Full article
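The Dirichlet-based non-IID partitioning used in this evaluation draws, for each class, a vector of client proportions from a Dirichlet(α) distribution; smaller α concentrates a class on fewer clients, producing more skewed, less IID splits. A self-contained sketch using only the standard library (function and parameter names are illustrative; a Dirichlet sample is obtained by normalizing independent Gamma(α, 1) draws):

```python
import random
from collections import defaultdict

def dirichlet_partition(labels, num_clients, alpha, seed=0):
    """Split sample indices across clients, class by class, with proportions
    drawn from Dirichlet(alpha) (smaller alpha = more heterogeneity)."""
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for idx, label in enumerate(labels):
        by_class[label].append(idx)
    clients = [[] for _ in range(num_clients)]
    for idxs in by_class.values():
        rng.shuffle(idxs)
        # Dirichlet sample via normalized independent Gamma(alpha, 1) draws
        weights = [rng.gammavariate(alpha, 1.0) for _ in range(num_clients)]
        total = sum(weights)
        # cumulative cut points turn proportions into contiguous index ranges
        start, acc = 0, 0.0
        for client, w in zip(clients, weights):
            acc += w / total
            cut = int(acc * len(idxs))
            client.extend(idxs[start:cut])
            start = cut
        clients[-1].extend(idxs[start:])  # remainder lost to rounding
    return clients
```

Every sample lands on exactly one client, and repeating the split with α = 0.1 versus α = 0.5 shows the class skew the paper's heterogeneity settings control.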
25 pages, 2237 KB  
Article
A Generalized Cost Model for Techno-Economic Analysis in Optical Networks
by André Souza, Marco Quagliotti, Mohammad M. Hosseini, Andrea Marotta, Carlo Centofanti, Farhad Arpanaei, Arantxa Villavicencio Paz, José Manuel Rivas-Moscoso, Gianluca Gambari, Laia Nadal, Marc Ruiz, Stephen Parker and João Pedro
Photonics 2026, 13(2), 125; https://doi.org/10.3390/photonics13020125 - 29 Jan 2026
Abstract
Techno-economic analysis (TEA) plays a vital role in assessing the feasibility and scalability of emerging technologies, especially in the context of innovation and development. Central to any effective TEA is a reliable and detailed model of capital and operational costs. This paper reports the development of such a model for optical networks in the framework of the SEASON project, aimed at supporting a broad spectrum of techno-economic evaluations. The model is constructed using publicly available data and expert insights from project participants. Its generalizable design allows it to be used both within the SEASON project and as a reference for other studies. By harmonizing assumptions and cost parameters, the model fosters consistency across different analyses. It includes cost and power consumption data for a wide range of commercially available optical network components (including transceivers for point-to-multipoint communications), introduces a statistical framework for estimating values for emerging technologies, and provides a cost model for multiband-doped fiber amplifiers. To demonstrate its practical relevance, the paper applies the model to two case studies: an evaluation of how the cost of various multiband node architectures scales with network traffic in meshed topologies and a comparison of different transport solutions to carry fronthaul flows in the radio access network. Full article
(This article belongs to the Section Optical Communication and Network)
24 pages, 3822 KB  
Article
Optimising Calculation Logic in Emergency Management: A Framework for Strategic Decision-Making
by Yuqi Hang and Kexi Wang
Systems 2026, 14(2), 139; https://doi.org/10.3390/systems14020139 - 29 Jan 2026
Abstract
Rapid emergency management decision-making must be both timely and reliable; even slight delays can result in substantial human and economic losses. However, current systems and recent state-of-the-art work often use inflexible rule-based logic that cannot adapt to rapidly changing emergency conditions or dynamically optimise response allocation. As a result, our study presents the Calculation Logic Optimisation Framework (CLOF), a novel data-driven approach that enhances decision-making intelligently and strategically through learning-based predictive and multi-objective optimisation, utilising the 911 Emergency Calls data set, comprising more than half a million records from Montgomery County, Pennsylvania, USA. The CLOF examines patterns over space and time and uses optimised calculation logic to reduce response latency and increase decision reliability. The suggested framework outperforms the standard Decision Tree, Random Forest, Gradient Boosting, and XGBoost baselines, achieving 94.68% accuracy, a log-loss of 0.081, and a reliability score (R2) of 0.955. The mean response time error is reported to have been reduced by 19%, illustrating robustness to real-world uncertainty. The CLOF aims to deliver results that confirm the scalability, interpretability, and efficiency of modern EM frameworks, thereby improving safety, risk awareness, and operational quality in large-scale emergency networks. Full article
(This article belongs to the Section Artificial Intelligence and Digital Systems Engineering)
36 pages, 5209 KB  
Article
AI-Enabled System-of-Systems Decision Support: BIM-Integrated AI-LCA for Resilient and Sustainable Fiber-Reinforced Façade Design
by Mohammad Q. Al-Jamal, Ayoub Alsarhan, Qasim Aljamal, Mahmoud AlJamal, Bashar S. Khassawneh, Ahmed Al Nuaim and Abdullah Al Nuaim
Information 2026, 17(2), 126; https://doi.org/10.3390/info17020126 - 29 Jan 2026
Abstract
Sustainable and resilient communities increasingly rely on interdependent, data-driven building systems where material choices, energy performance, and lifecycle impacts must be optimized jointly. This study presents a digital-twin-ready, system-of-systems (SoS) decision-support framework that integrates BIM-enabled building energy simulation with an AI-enhanced lifecycle assessment (AI-LCA) pipeline to optimize fiber-reinforced concrete (FRC) façade systems for smart buildings. Conventional LCA is often inventory-driven and static, limiting its usefulness for SoS decision making under operational variability. To address this gap, we develop machine learning surrogate models (Random Forests, Gradient Boosting, and Artificial Neural Networks) to perform a dual prediction of façade mechanical performance and lifecycle indicators (CO2 emissions, embodied energy, and water use), enabling a rapid exploration of design alternatives. We fuse experimental FRC measurements, open environmental inventories, and BIM-linked energy simulations into a unified dataset that captures coupled material–building behavior. The models achieve high predictive performance (up to 99.2% accuracy), and feature attribution identifies the fiber type, volume fraction, and curing regime as key drivers of lifecycle outcomes. Scenario analyses show that optimized configurations reduce embodied carbon while improving energy-efficiency trajectories when propagated through BIM workflows, supporting carbon-aware and resilient façade selection. Overall, the framework enables scalable SoS optimization by providing fast, coupled predictions for façade design decisions in smart built environments. Full article
22 pages, 2585 KB  
Article
Bone-CNN: A Lightweight Deep Learning Architecture for Multi-Class Classification of Primary Bone Tumours in Radiographs
by Behnam Kiani Kalejahi, Sajid Khan and Rakhim Zakirov
Biomedicines 2026, 14(2), 299; https://doi.org/10.3390/biomedicines14020299 - 29 Jan 2026
Abstract
Background/Objectives: Accurate classification of primary bone tumors from radiographic images is essential for early diagnosis, appropriate treatment planning, and informed clinical decision-making. While deep convolutional neural networks (CNNs) have shown strong performance in medical image analysis, their high computational complexity often limits real-world clinical deployment. This study aims to develop a lightweight yet highly accurate model for multi-class bone tumor classification. Methods: We propose Bone-CNN, a computationally efficient CNN architecture specifically designed for radiograph-based classification of primary bone tumors. The model was evaluated using the publicly available Figshare Radiograph Dataset of Primary Bone Tumors, which includes nine distinct tumor classes ranging from benign to malignant lesions and originates from multiple imaging centres. Performance was assessed through extensive experiments and compared against established baseline models, including DenseNet121, EfficientNet-B0, and MobileNetV2. Results: Bone-CNN achieved a test accuracy of 96.52% and a macro-AUC of 0.9989, outperforming all baseline architectures. Both quantitative and qualitative evaluations, including confusion matrices and ROC curve analyses, demonstrated robust and reliable discrimination between challenging tumor subtypes. Conclusions: The results indicate that Bone-CNN offers an excellent balance between accuracy and computational efficiency. Its strong performance and lightweight design highlight its suitability for clinical deployment, supporting effective and scalable radiograph-based assessment of primary bone tumors. Full article
36 pages, 2575 KB  
Review
A Comprehensive Review of Metaheuristic Algorithms for Node Placement in UAV Communication Networks
by S. A. Temesheva, D. A. Turlykozhayeva, S. N. Akhtanov, N. M. Ussipov, A. A. Zhunuskanov, Wenbin Sun, Qian Xu and Mingliang Tao
Sensors 2026, 26(3), 869; https://doi.org/10.3390/s26030869 - 28 Jan 2026
Abstract
Unmanned Aerial Vehicle Communication Networks (UAVCNs) have emerged as a transformative solution to enable resilient, scalable, and infrastructure-independent wireless communication in urban and remote environments. A key challenge in UAVCNs is the optimal placement of Unmanned Aerial Vehicle (UAV) nodes to maximize coverage, connectivity, and overall network performance while minimizing latency, energy consumption, and packet loss. As this node placement problem is NP-hard, numerous meta-heuristic algorithms (MHAs) have been proposed to find near-optimal solutions efficiently. Although research in this area has produced a wide range of meta-heuristic algorithmic solutions, most existing review articles focus on MANETs with terrestrial nodes, while comprehensive reviews dedicated to node placement in UAV communication networks are relatively scarce. This article presents a critical and comprehensive review of meta-heuristic algorithms for UAVCN node placement. Beyond surveying existing methods, it systematically analyzes algorithmic strengths, vulnerabilities, and future research directions, offering actionable insights for selecting effective strategies in diverse UAVCN deployment scenarios. To demonstrate practical applicability, selected hybrid algorithms are evaluated in a reproducible Python framework using computational time and coverage metrics, highlighting their ability to optimize multiple objectives and providing guidance for future UAVCN optimization studies. Full article
(This article belongs to the Section Communications)
79 pages, 1137 KB  
Review
A Review of Artificial Intelligence Techniques for Low-Carbon Energy Integration and Optimization in Smart Grids and Smart Homes
by Omosalewa O. Olagundoye, Olusola Bamisile, Chukwuebuka Joseph Ejiyi, Oluwatoyosi Bamisile, Ting Ni and Vincent Onyango
Processes 2026, 14(3), 464; https://doi.org/10.3390/pr14030464 - 28 Jan 2026
Abstract
The growing demand for electricity in residential sectors and the global need to decarbonize power systems are accelerating the transformation toward smart and sustainable energy networks. Smart homes and smart grids, integrating renewable generation, energy storage, and intelligent control systems, represent a crucial step toward achieving energy efficiency and carbon neutrality. However, ensuring real-time optimization, interoperability, and sustainability across these distributed energy resources (DERs) remains a key challenge. This paper presents a comprehensive review of artificial intelligence (AI) applications for sustainable energy management and low-carbon technology integration in smart grids and smart homes. The review explores how AI-driven techniques, including machine learning, deep learning, and bio-inspired optimization algorithms such as particle swarm optimization (PSO), the whale optimization algorithm (WOA), and the cuckoo optimization algorithm (COA), enhance forecasting, adaptive scheduling, and real-time energy optimization. These techniques have shown significant potential in improving demand-side management, dynamic load balancing, and renewable energy utilization efficiency. Moreover, AI-based home energy management systems (HEMSs) enable predictive control and seamless coordination between grid operations and distributed generation. This review also discusses current barriers, including data heterogeneity, computational overhead, and the lack of standardized integration frameworks. Future directions highlight the need for lightweight, scalable, and explainable AI models that support decentralized decision-making in cyber-physical energy systems. Overall, this paper emphasizes the transformative role of AI in enabling sustainable, flexible, and intelligent power management across smart residential and grid-level systems, supporting global energy transition goals and contributing to the realization of carbon-neutral communities. Full article
16 pages, 519 KB  
Article
An Efficient and Automated Smart Healthcare System Using Genetic Algorithm and Two-Level Filtering Scheme
by Geetanjali Rathee, Hemraj Saini, Chaker Abdelaziz Kerrache, Ramzi Djemai and Mohamed Chahine Ghanem
Digital 2026, 6(1), 10; https://doi.org/10.3390/digital6010010 - 28 Jan 2026
Abstract
This paper proposes an efficient and automated smart healthcare communication framework that integrates a two-level filtering scheme with a multi-objective Genetic Algorithm (GA) to enhance the reliability, timeliness, and energy efficiency of Internet of Medical Things (IoMT) systems. In the first stage, physiological signals collected from heterogeneous sensors (e.g., blood pressure, glucose level, ECG, patient movement, and ambient temperature) were pre-processed using an adaptive least-mean-square (LMS) filter to suppress noise and motion artifacts, thereby improving signal quality prior to analysis. In the second stage, a GA-based optimization engine selects optimal routing paths and transmission parameters by jointly considering end-to-end delay, Signal-to-Noise Ratio (SNR), energy consumption, and packet loss ratio (PLR). The two-level filtering strategy, with LMS at the first level, ensures that only denoised, high-priority records are forwarded for further processing, enabling timely delivery and reducing the communication load on the downstream clinical network. The proposed mechanism is evaluated via extensive simulations involving 30–100 devices and multiple generations and is benchmarked against two existing smart healthcare schemes. The results demonstrate that the integrated GA and filtering approach significantly reduces end-to-end delay by 10%, as well as communication latency and energy consumption, while improving the packet delivery ratio by approximately 15%, as well as throughput, SNR, and overall Quality of Service (QoS) by up to 98%. These findings indicate that the proposed framework provides a scalable and intelligent communication backbone for early disease detection, continuous monitoring, and timely intervention in smart healthcare environments. Full article
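The adaptive LMS pre-filtering stage described above updates its filter weights in proportion to the instantaneous prediction error. A minimal single-channel sketch (the tap count and step size μ here are illustrative choices, not the paper's settings):

```python
def lms_filter(desired, reference, num_taps=4, mu=0.05):
    """Adaptive least-mean-square filter: weights chase the desired signal,
    and the returned error sequence is the filtering residual."""
    w = [0.0] * num_taps
    errors = []
    for n in range(len(desired)):
        # tap-delay line over the reference signal, most recent sample first
        x = [reference[n - k] if n >= k else 0.0 for k in range(num_taps)]
        y = sum(wi * xi for wi, xi in zip(w, x))  # filter estimate
        e = desired[n] - y                        # instantaneous error
        # gradient-descent weight update, scaled by step size mu
        w = [wi + 2 * mu * e * xi for wi, xi in zip(w, x)]
        errors.append(e)
    return errors, w
```

In the noise-cancellation configuration the paper implies, `desired` would be the noisy sensor signal and `reference` a correlated noise source, with the error sequence serving as the denoised output.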

25 pages, 5185 KB  
Review
A Review of Routing and Resource Optimization in Quantum Networks
by Md. Shazzad Hossain Shaon and Mst Shapna Akter
Electronics 2026, 15(3), 557; https://doi.org/10.3390/electronics15030557 - 28 Jan 2026
Abstract
Quantum computing is an emerging discipline that applies the principles of quantum physics to perform computations that are not feasible on conventional computers. Quantum bits, called qubits, can exist in superposition states, making them suitable for parallel processing in contrast to traditional bits. For complex challenges such as simulation, optimization, and cryptography, quantum entanglement and quantum interference offer exponential improvements. This survey focuses on recent advances in entanglement routing, quantum key distribution (QKD), and qubit management for short- and long-distance quantum communication. It examines optimization approaches such as integer programming, reinforcement learning, and collaborative methods, evaluating their efficacy in terms of throughput, scalability, and fairness. Despite these improvements, challenges remain in dynamic network adaptation, resource limits, and error correction. Addressing these difficulties necessitates hybrid quantum–classical algorithms for efficient resource allocation, hardware-aware designs to improve real-world deployment, and fault-tolerant architectures. This survey therefore suggests that future research focus on integrating quantum networks with existing classical infrastructure to improve security, dependability, and mainstream adoption. Such integration matters for applications that require secure communication, financial transactions, and critical infrastructure protection. Full article
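One recurring idea in the entanglement-routing literature surveyed here is that some routing objectives reduce to classical shortest-path problems. As a hedged toy sketch (not taken from any surveyed work; the function name and the log-fidelity edge weighting are illustrative assumptions), maximizing end-to-end fidelity modelled as a product of per-link fidelities is equivalent to running Dijkstra on edge weights `-log(fidelity)`:

```python
import heapq
import math

def best_entanglement_path(links, src, dst):
    """Toy repeater-path selection: maximize the product of per-link
    fidelities by minimizing the sum of -log(fidelity) edge weights.

    links -- dict mapping (u, v) to a link fidelity in (0, 1];
             edges are treated as undirected
    Returns (path, end_to_end_fidelity), or (None, 0.0) if unreachable.
    """
    graph = {}
    for (u, v), f in links.items():
        w = -math.log(f)
        graph.setdefault(u, []).append((v, w))
        graph.setdefault(v, []).append((u, w))
    dist, prev = {src: 0.0}, {}
    heap, visited = [(0.0, src)], set()
    while heap:
        d, u = heapq.heappop(heap)
        if u in visited:
            continue
        visited.add(u)
        if u == dst:
            break
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, math.inf):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    if dst not in dist:
        return None, 0.0
    path = [dst]
    while path[-1] != src:
        path.append(prev[path[-1]])
    path.reverse()
    return path, math.exp(-dist[dst])
```

This captures only one static objective; the integer-programming and reinforcement-learning methods discussed in the survey additionally handle contention for repeater memories and time-varying link quality, which a plain shortest-path model ignores.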

19 pages, 1037 KB  
Review
Cystic Fibrosis of the Pancreas: In Vitro Duct Models for CFTR-Targeted Translational Research
by Alessandra Ludovico, Martina Battistini and Debora Baroni
Int. J. Mol. Sci. 2026, 27(3), 1279; https://doi.org/10.3390/ijms27031279 - 27 Jan 2026
Abstract
Cystic fibrosis (CF) is caused by loss-of-function variants in the cystic fibrosis transmembrane conductance regulator (CFTR) chloride and bicarbonate channel and affects multiple organs, with pancreatic involvement showing very high penetrance. In pancreatic ducts, CFTR drives secretion of alkaline, bicarbonate-rich fluid that maintains intraductal patency, neutralises gastric acid and permits safe delivery of digestive enzymes. Selective impairment of CFTR-dependent bicarbonate transport, even in the presence of residual chloride conductance, is strongly associated with exocrine pancreatic insufficiency, recurrent pancreatitis and cystic-fibrosis-related diabetes. These clinical manifestations are captured by pharmacodynamic anchors such as faecal elastase-1, steatorrhoea, pancreatitis burden and glycaemic control, providing clinically meaningful benchmarks for CFTR-targeted therapies. In this review, we summarise the principal mechanisms underlying pancreatic pathophysiology and the current approaches to clinical management. We then examine in vitro pancreatic duct models used to evaluate small molecules and emerging therapeutics targeting CFTR. These experimental systems include native tissue, primary cultures, organoids, co-cultures and microfluidic devices, each with its own advantages and limitations. Intact micro-perfused ducts provide the physiological benchmark for studying luminal pH control and bicarbonate (HCO3−) secretion. Primary pancreatic duct epithelial cells (PDECs) and pancreatic ductal organoids (PDOs) preserve ductal identity, patient-specific genotype and key regulatory networks. Immortalised ductal cell lines grown on permeable supports enable scalable screening and structure–activity analyses. Co-culture models and organ-on-chip devices incorporate inflammatory, stromal and endocrine components together with flow and shear stress, providing system-level readouts that include duct–islet communication. Across this complementary toolkit, we prioritise bicarbonate-relevant endpoints, including luminal and intracellular pH and direct measures of HCO3− flux, to improve alignment between in vitro pharmacology and clinical pancreatic outcomes. The systematic use of complementary models should facilitate the discovery of next-generation CFTR modulators and adjunctive strategies with the greatest potential to protect both exocrine and endocrine pancreatic function in people with CF. Full article
(This article belongs to the Special Issue Molecular Mechanisms Underlying the Pathogenesis of Genetic Diseases)
