Search Results (10,249)

Search Parameters:
Keywords = computer resource

21 pages, 6506 KB  
Article
Strategic Energy Project Investment Decisions Using RoBERTa: A Framework for Efficient Infrastructure Evaluation
by Recep Özkan, Fatemeh Mostofi, Fethi Kadıoğlu, Vedat Toğan and Onur Behzat Tokdemir
Buildings 2026, 16(3), 547; https://doi.org/10.3390/buildings16030547 - 28 Jan 2026
Abstract
The task of identifying high-value projects from vast investment portfolios presents a major challenge in the construction industry, particularly within the energy sector, where decision-making carries high financial and operational stakes. This complexity is driven by both the volume and heterogeneity of project documentation, as well as the multidimensional criteria used to assess project value. Despite this, research gaps remain: large language models (LLMs) as pretrained transformer encoder models are underutilized in construction project selection, especially in domains where investment precision is paramount. Existing methodologies have largely focused on multi-criteria decision-making (MCDM) frameworks, often neglecting the potential of LLMs to automate and enhance early-phase project evaluation. However, deploying LLMs for such tasks introduces high computational demands, particularly in privacy-sensitive, enterprise-level environments. This study investigates the application of the robustly optimized BERT model (RoBERTa) for identifying high-value energy infrastructure projects. Our dual objective is to (1) leverage RoBERTa’s pre-trained language architecture to extract key information from unstructured investment texts and (2) evaluate its effectiveness in enhancing project selection accuracy. We benchmark RoBERTa against several leading LLMs: BERT, DistilBERT (a distilled variant), ALBERT (a lightweight version), and XLNet (a generalized autoregressive model). All models achieved over 98% accuracy, validating their utility in this domain. RoBERTa outperformed its counterparts with an accuracy of 99.6%. DistilBERT was fastest (1025.17 s), while RoBERTa took 2060.29 s. XLNet was slowest at 4145.49 s. In conclusion, RoBERTa can be the preferred option when maximum accuracy is required, while DistilBERT can be a viable alternative under computational or resource constraints. Full article
(This article belongs to the Section Construction Management, and Computers & Digitization)

24 pages, 37710 KB  
Article
CropHealthyNet: A Lightweight Hybrid Network for Efficient Crop Disease Detection
by Yuhang Wang, Xiaojing Gao, Jiangping Liu, Xin Pan, Xiaoling Luo and Chenbin Ma
Appl. Sci. 2026, 16(3), 1329; https://doi.org/10.3390/app16031329 - 28 Jan 2026
Abstract
Deploying high-precision deep learning models on resource-constrained edge devices remains a challenge for agricultural disease detection. This study introduces CropHealthyNet, a lightweight hybrid architecture optimized for both accuracy and computational efficiency. The architecture incorporates three key components: the ExGhostConv module, which integrates FReLU and SimAM attention for enhanced feature utilization; a Universal Position Encoding mechanism that adaptively captures spatial information to address variable lesion scales; and a MemoryEfficientTransformer employing chunked attention to mitigate global modeling memory overhead. Experiments on CDC, AGD_256, and CornLeafDisease datasets indicate that CropHealthyNet achieves a weighted average accuracy of 90.55% with 0.47 million parameters. The model outperforms several state-of-the-art lightweight architectures and achieves accuracy comparable to DenseNet121, with approximately 15 times fewer parameters. These results position CropHealthyNet as a viable solution for real-world deployment in resource-limited agricultural environments. Full article

19 pages, 3374 KB  
Article
Efficient and User Friendly 3D Simulations of Underground Excavations Using the Isogeometric Boundary Element Method
by Gernot Beer, Nicola Grillanda and Vincenzo Mallardo
Geotechnics 2026, 6(1), 11; https://doi.org/10.3390/geotechnics6010011 - 28 Jan 2026
Abstract
Using current approaches, which are almost entirely based on volume methods, 3D simulations of complex underground excavations can be cumbersome and time-consuming. This is because the rock mass, which for practical purposes is of infinite extent, has to be discretised. This leads to very large meshes, which have to be truncated at a distance assumed to be “safe”. Consequently, the demand for human and computer resources can be significant. Ascertaining the quality of the result is difficult because it depends on the fidelity of the volume mesh and the truncation distance. The aim of this paper is to present a novel approach that does not require volume discretisation. Using the isogeometric boundary element method (IGABEM), only the excavation surfaces need to be defined. The geometry of the excavations can be defined in a highly accurate and smooth manner with computer-aided design (CAD) data, eliminating the need for mesh generation. Volume effects, such as nonlinear, anisotropic, and heterogeneous ground conditions, as well as the effect of ground support, can be considered. Several examples related to real projects show that excavations of high complexity can be simulated and that highly refined results can be obtained in a mesh-free setting. Full article

19 pages, 1132 KB  
Article
A Highly Robust Approach to NFC Authentication for Privacy-Sensitive Mobile Payment Services
by Rerkchai Fooprateepsiri and U-Koj Plangprasopchoke
Informatics 2026, 13(2), 21; https://doi.org/10.3390/informatics13020021 - 28 Jan 2026
Abstract
The rapid growth of mobile payment systems has positioned Near Field Communication (NFC) as a core enabling technology. However, conventional NFC protocols primarily emphasize transmission efficiency rather than robust authentication and privacy protection, which exposes users to threats such as eavesdropping, replay, and tracking attacks. In this study, a lightweight and privacy-preserving authentication protocol is proposed for NFC-based mobile payment services. The protocol integrates anonymous authentication, replay resistance, and tracking protection while maintaining low computational overhead suitable for resource-constrained devices. A secure offline session key generation mechanism is incorporated to enhance transaction reliability without increasing system complexity. Formal security verification using the Scyther tool (version 1.1.3) confirms resistance against major attack vectors, including impersonation, man-in-the-middle, and replay attacks. Comparative performance analysis further demonstrates that the proposed scheme achieves superior efficiency and stronger security guarantees compared with existing approaches. These results indicate that the protocol provides a practical and scalable solution for secure and privacy-aware NFC mobile payment environments. Full article
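The replay resistance the abstract describes can be illustrated with a minimal nonce-based challenge–response sketch. This is not the paper's actual protocol; the shared key, payload, and parameter names below are hypothetical, and the MAC is ordinary HMAC-SHA256 from the Python standard library.

```python
import hmac, hashlib, secrets

SHARED_KEY = b"demo-shared-key"  # hypothetical pre-shared key

def make_challenge():
    """Reader issues a fresh random nonce per transaction."""
    return secrets.token_bytes(16)

def tag_response(key, nonce, payload):
    """Device authenticates the payload bound to this nonce."""
    return hmac.new(key, nonce + payload, hashlib.sha256).digest()

def verify(key, nonce, payload, tag, seen_nonces):
    """Reader rejects bad MACs and replayed nonces."""
    if nonce in seen_nonces:
        return False  # replay attempt: nonce already consumed
    seen_nonces.add(nonce)
    return hmac.compare_digest(tag_response(key, nonce, payload), tag)

seen = set()
nonce = make_challenge()
tag = tag_response(SHARED_KEY, nonce, b"pay 10 EUR")
assert verify(SHARED_KEY, nonce, b"pay 10 EUR", tag, seen)      # first use: accepted
assert not verify(SHARED_KEY, nonce, b"pay 10 EUR", tag, seen)  # replay: rejected
```

Because the MAC covers the nonce, an eavesdropper who captures one transaction cannot reuse it: the reader never accepts the same nonce twice.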
23 pages, 2136 KB  
Article
Coarse-to-Fine Contrast Maximization for Energy-Efficient Motion Estimation in Edge-Deployed Event-Based SLAM
by Kyeongpil Min, Jongin Choi and Woojoo Lee
Micromachines 2026, 17(2), 176; https://doi.org/10.3390/mi17020176 - 28 Jan 2026
Abstract
Event-based vision sensors offer microsecond temporal resolution and low power consumption, making them attractive for edge robotics and simultaneous localization and mapping (SLAM). Contrast maximization (CMAX) is a widely used direct geometric framework for rotational ego-motion estimation that aligns events by warping them and maximizing the spatial contrast of the resulting image of warped events (IWE). However, conventional CMAX is computationally inefficient because it repeatedly processes the full event set and a full-resolution IWE at every optimization iteration, including late-stage refinement, incurring both event-domain and image-domain costs. We propose coarse-to-fine contrast maximization (CCMAX), a computation-aware CMAX variant that aligns computational fidelity with the optimizer’s coarse-to-fine convergence behavior. CCMAX progressively increases IWE resolution across stages and applies coarse-grid event subsampling to remove spatially redundant events in early stages, while retaining a final full-resolution refinement. On standard event-camera benchmarks with IMU ground truth, CCMAX achieves accuracy comparable to a full-resolution baseline while reducing floating-point operations (FLOPs) by up to 42%. Energy measurements on a custom RISC-V–based edge SoC further show up to 87% lower energy consumption for the iterative CMAX pipeline. These results demonstrate an energy-efficient motion-estimation front-end suitable for real-time edge SLAM on resource- and power-constrained platforms. Full article
(This article belongs to the Topic Collection Series on Applied System Innovation)
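The coarse-to-fine idea behind CCMAX can be sketched in one dimension: warp events by a candidate velocity, accumulate an image of warped events (IWE), and score the candidate by the IWE's variance, searching first on a cheap low-resolution IWE and then refining at full resolution. All numbers below (edge velocity, resolutions, noise level) are hypothetical, and this toy omits the paper's subsampling and energy machinery.

```python
import random

def iwe_contrast(events, w, n_bins, width=100.0):
    """Warp events by candidate velocity w, accumulate an image of
    warped events (IWE) at the given resolution, return its variance."""
    bins = [0] * n_bins
    for x, t in events:
        xw = x - w * t                       # warp event back to t = 0
        b = int(xw / width * n_bins)
        if 0 <= b < n_bins:
            bins[b] += 1
    mean = sum(bins) / n_bins
    return sum((c - mean) ** 2 for c in bins) / n_bins

random.seed(0)
true_v = 20.0  # hypothetical edge velocity, pixels/s
events = [(10.0 + true_v * t + random.gauss(0, 0.2), t)
          for t in [i / 1000 for i in range(1000)]]

# Coarse stage: search a wide velocity grid on a low-resolution IWE.
coarse = max(range(0, 41, 5), key=lambda w: iwe_contrast(events, w, 25))
# Fine stage: refine around the coarse winner at full resolution.
fine = max(range(coarse - 4, coarse + 5),
           key=lambda w: iwe_contrast(events, w, 200))
# 'fine' lands near true_v: the correct warp collapses the events
# into few bins, maximizing contrast
```

The coarse stage touches a 25-bin IWE instead of a 200-bin one, which is the kind of early-stage saving the paper quantifies in FLOPs.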

79 pages, 1223 KB  
Review
A Review of Artificial Intelligence Techniques for Low-Carbon Energy Integration and Optimization in Smart Grids and Smart Homes
by Omosalewa O. Olagundoye, Olusola Bamisile, Chukwuebuka Joseph Ejiyi, Oluwatoyosi Bamisile, Ting Ni and Vincent Onyango
Processes 2026, 14(3), 464; https://doi.org/10.3390/pr14030464 - 28 Jan 2026
Abstract
The growing demand for electricity in residential sectors and the global need to decarbonize power systems are accelerating the transformation toward smart and sustainable energy networks. Smart homes and smart grids, integrating renewable generation, energy storage, and intelligent control systems, represent a crucial step toward achieving energy efficiency and carbon neutrality. However, ensuring real-time optimization, interoperability, and sustainability across these distributed energy resources (DERs) remains a key challenge. This paper presents a comprehensive review of artificial intelligence (AI) applications for sustainable energy management and low-carbon technology integration in smart grids and smart homes. The review explores how AI-driven techniques, including machine learning, deep learning, and bio-inspired optimization algorithms such as particle swarm optimization (PSO), the whale optimization algorithm (WOA), and the cuckoo optimization algorithm (COA), enhance forecasting, adaptive scheduling, and real-time energy optimization. These techniques have shown significant potential in improving demand-side management, dynamic load balancing, and renewable energy utilization efficiency. Moreover, AI-based home energy management systems (HEMSs) enable predictive control and seamless coordination between grid operations and distributed generation. This review also discusses current barriers, including data heterogeneity, computational overhead, and the lack of standardized integration frameworks. Future directions highlight the need for lightweight, scalable, and explainable AI models that support decentralized decision-making in cyber-physical energy systems. Overall, this paper emphasizes the transformative role of AI in enabling sustainable, flexible, and intelligent power management across smart residential and grid-level systems, supporting global energy transition goals and contributing to the realization of carbon-neutral communities. Full article
18 pages, 2183 KB  
Article
Uncovering miRNA–Disease Associations Through Graph Based Neural Network Representations
by Alessandro Orro
Biomedicines 2026, 14(2), 289; https://doi.org/10.3390/biomedicines14020289 - 28 Jan 2026
Abstract
Background: MicroRNAs (miRNAs) are an important class of non-coding RNAs that regulate gene expression by binding to target mRNAs and influencing cellular processes such as differentiation, proliferation, and apoptosis. Dysregulated miRNA expression has been implicated in many human diseases, including cancer, cardiovascular disease, and neurodegenerative disorders. Identifying disease-related miRNAs is therefore essential for understanding disease mechanisms and supporting biomarker discovery, but the time and cost of experimental validation are the main limitations. Methods: We present a graph-based learning framework that models the complex relationships between miRNAs, diseases, and related biological entities within a heterogeneous network. The model employs a message-passing neural architecture to learn structured embeddings from multiple node and edge types, integrating biological priors from curated resources. This network representation enables the inference of novel miRNA–disease associations, even in sparsely annotated regions of the network. The approach was trained and validated on a benchmark dataset using ten replicated experiments to ensure robustness. Results: The method achieved an average AUC–ROC of ~98%, outperforming previously reported computational approaches on the same dataset. Moreover, predictions were consistent across validation folds, and robustness analyses were conducted to evaluate stability and highlight the most important information. Conclusions: Integrating heterogeneous biological information and representing it through graph neural network representation learning offers a powerful and generalizable way to predict relevant associations, including miRNA–disease associations, and provides a robust computational framework to support biomedical discovery and translational research. Full article
(This article belongs to the Special Issue Bioinformatics Analysis of RNA for Human Health and Disease)
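The message-passing idea underlying such frameworks can be shown with a tiny pure-Python toy: each node repeatedly mixes its embedding with the mean of its neighbours' embeddings, after which connected miRNA–disease pairs score higher. The node names, features, and update rule below are hypothetical illustrations, not the paper's architecture.

```python
# toy heterogeneous graph: nodes with 2-d features, edges as adjacency lists
features = {
    "miR-21":  [1.0, 0.0],
    "miR-155": [0.8, 0.2],
    "glioma":  [0.0, 1.0],
    "HCC":     [0.1, 0.9],
}
edges = {  # hypothetical miRNA-disease and miRNA-miRNA links
    "miR-21":  ["glioma", "HCC", "miR-155"],
    "miR-155": ["glioma", "miR-21"],
    "glioma":  ["miR-21", "miR-155"],
    "HCC":     ["miR-21"],
}

def message_pass(features, edges, rounds=2, alpha=0.5):
    """Each round, every node mixes its own embedding with the mean
    of its neighbours' embeddings (a minimal GNN-style update)."""
    emb = {n: list(v) for n, v in features.items()}
    for _ in range(rounds):
        new = {}
        for n, nbrs in edges.items():
            mean = [sum(emb[m][k] for m in nbrs) / len(nbrs) for k in range(2)]
            new[n] = [alpha * emb[n][k] + (1 - alpha) * mean[k] for k in range(2)]
        emb = new
    return emb

def score(emb, a, b):
    """Dot-product link score between two node embeddings."""
    return sum(x * y for x, y in zip(emb[a], emb[b]))

emb = message_pass(features, edges)
# after smoothing, embeddings of linked miRNA-disease pairs move closer,
# so their dot-product link scores rise
```

A real model would learn the mixing weights per edge type and decode scores through a trained layer; this sketch only shows why graph structure lets sparse annotations propagate.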

17 pages, 1592 KB  
Article
An Efficient Distributed Optimization Algorithm for Cooperation of Automated Vehicles Considering Packet Loss
by Feng Gao, Fenlong Lan, Jie Ma, Jian Li and Xiaoqi Zheng
Mathematics 2026, 14(3), 454; https://doi.org/10.3390/math14030454 - 28 Jan 2026
Abstract
With the development of wireless communication technologies, the cooperation of automated vehicles (CAVs) has become a key route to improving both vehicle intelligence and traffic efficiency. In this study, a distributed optimization framework is first designed to utilize more computation resources by introducing auxiliary variables and equality constraints that separate the coupled parts of the original centralized optimization problem. Bench test results show that more resources can be used by this framework than by the centralized one, which benefits the real-time performance and scale of CAVs. However, the extra exchanges of consensus variables between nodes add considerable communication load, which easily causes packet loss. To ensure cooperative performance, a robust interactive algorithm is further designed to guarantee the convergence of the numerical optimization process in the presence of packet loss. Its global convergence is analyzed theoretically by the operator method under the assumption that the feasible domain is convex. The performance of CAVs controlled by the robust distributed optimization algorithm is validated and verified through several comparative tests in an intersection scenario. The test results show that, compared with the centralized structure, the balance of computation load among nodes is improved by at least a factor of five, and the maximum computation period is below 50 ms. Full article
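The consensus-under-packet-loss setting can be sketched with a toy distributed averaging loop: each node keeps a local copy of the shared variable, pulls it toward both its own preferred value and the copies it receives, and simply reuses the last received value when a message is dropped. This hold-last strategy and all numbers are illustrative assumptions, not the paper's robust interactive algorithm.

```python
import random

def distributed_consensus(prefs, neighbors, rounds=200, step=0.2, loss=0.3, seed=1):
    """Each node nudges its local copy toward (a) its own preferred
    value and (b) its neighbours' copies. With probability `loss` a
    message is dropped and the receiver holds the last value it got
    from that neighbour."""
    rng = random.Random(seed)
    x = dict(prefs)                        # local copies of the shared variable
    last = {i: dict(x) for i in prefs}     # last value received from each peer
    for _ in range(rounds):
        nx = {}
        for i, nbrs in neighbors.items():
            for j in nbrs:
                if rng.random() > loss:    # packet arrives
                    last[i][j] = x[j]
            avg = sum(last[i][j] for j in nbrs) / len(nbrs)
            nx[i] = x[i] + step * (prefs[i] - x[i]) + step * (avg - x[i])
        x = nx
    return x

prefs = {"v1": 2.0, "v2": 4.0, "v3": 6.0}  # hypothetical preferred crossing times (s)
neighbors = {"v1": ["v2"], "v2": ["v1", "v3"], "v3": ["v2"]}
x = distributed_consensus(prefs, neighbors)
# the local copies contract toward one another despite ~30% packet loss,
# balancing individual preferences against agreement
```

The point of the sketch is that a memory of the last received value keeps the iteration bounded and convergent even when a fixed fraction of consensus messages never arrive.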
13 pages, 2237 KB  
Article
BioClimPolar_2300 V1.0: A Mesoscale Bioclimatic Dataset for Future Climates in Arctic Regions
by Yuanbo Su, Shaomei Li, Bingyu Yang, Yan Zhang and Xiaojun Kou
Diversity 2026, 18(2), 70; https://doi.org/10.3390/d18020070 - 28 Jan 2026
Abstract
Arctic regions are warming rapidly, elevating extinction risks and accelerating ecosystem change, yet widely used bioclimatic datasets rarely represent polar-specific ecological constraints. Here we present BioClimPolar_2300 v1.0, a raster bioclimatic dataset designed for terrestrial Arctic biodiversity research under climate change. The dataset includes 33 gridded bioclimatic layers at a 10 km spatial resolution, covering seven discrete temporal intervals from 2010 to 2300 AD. In addition to conventional variables used globally, BioClimPolar_2300 incorporates three polar-relevant constraint domains: (1) polar day–night phenomena (PDNs), including degree-day metrics during polar night and polar day; (2) temperature-defined seasonal cycles (TSCs), including seasonal temperature, precipitation, aridity, and season length; (3) hot/cold stresses (HCSs), capturing indices of extreme summer heat and winter cold. Precipitation during snow-melting days (P_melting) is also included due to its relevance for species depending on subnivean habitats. Climate fields were extracted from CMIP6 models and statistically downscaled to 10 km using a change-factor approach under a polar projection. Monthly fields were linearly interpolated to derive daily grids, enabling the computation of variables that require daily inputs. Validation against observations from 30 Arctic weather stations indicates performance suitable for biodiversity applications, and two exemplar range shift case studies (one animal and one plant) illustrate biological relevance and provide practical guidance for data extraction and use. BioClimPolar_2300 fills a key gap in Arctic bioclimatic resources and supports more realistic biodiversity assessments and conservation planning through 2300. Full article
(This article belongs to the Section Biodiversity Conservation)
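The change-factor (delta) downscaling step the abstract mentions reduces to simple arithmetic: the coarse-grid change signal (model future minus model historical) is added on top of the fine-grid observed climatology. The temperatures below are invented for illustration and the real dataset applies this per variable, per month, under a polar projection.

```python
def change_factor_downscale(obs_fine, model_hist, model_future):
    """Apply the coarse-grid change signal (future minus historical)
    on top of the fine-grid observed climatology (the 'delta' method)."""
    delta = model_future - model_hist
    return [t + delta for t in obs_fine]

# hypothetical July mean temperatures (deg C) for four 10 km cells
obs_fine = [4.2, 3.8, 5.1, 4.6]
# coarse GCM grid-cell means for the same region
model_hist, model_future = 3.9, 6.4
future_fine = change_factor_downscale(obs_fine, model_hist, model_future)
# every fine cell is shifted by the coarse warming signal of +2.5 deg C,
# preserving the observed fine-scale spatial pattern
```

The method's appeal is exactly what the code shows: model bias cancels in the difference, while the fine-scale spatial structure comes entirely from observations.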

22 pages, 740 KB  
Review
Smart Lies and Sharp Eyes: Pragmatic Artificial Intelligence for Cancer Pathology: Promise, Pitfalls, and Access Pathways
by Mohamed-Amine Bani
Cancers 2026, 18(3), 421; https://doi.org/10.3390/cancers18030421 - 28 Jan 2026
Abstract
Background: Whole-slide imaging and algorithmic advances have moved computational pathology from research to routine consideration. Despite notable successes, real-world deployment remains limited by poor generalization, validation gaps, and human-factor risks, which can be amplified in resource-constrained settings. Content/Scope: This narrative review and implementation perspective summarizes clinically proximate AI capabilities in cancer pathology, including lesion detection, metastasis triage, mitosis counting, immunomarker quantification, and prediction of selected molecular alterations from routine histology. We also summarize recurring failure modes: dataset leakage, stain/batch/site shifts, misleading explanation overlays, calibration errors, and automation bias; and we distinguish applications supported by external retrospective validation, prospective reader-assistance or real-world studies, and regulatory-cleared use. We translate these evidence patterns into a practical checklist covering dataset design, external and temporal validation, robustness testing, calibration and uncertainty handling, explainability sanity checks, and workflow-safety design. Equity Focus: We propose a stepwise adoption pathway for low- and middle-income countries: prioritize narrow, high-impact use cases; match compute and storage requirements to local infrastructure; standardize pre-analytics; pool validation cohorts; and embed quality management, privacy protections, and audit trails. Conclusions: AI can already serve as a reliable second reader for selected tasks, reducing variance and freeing expert time. Safe, equitable deployment requires disciplined validation, calibrated uncertainty, and guardrails against human-factor failure. With pragmatic scoping and shared infrastructure, pathology programs can realize benefits while preserving trust and accountability. Full article
17 pages, 10981 KB  
Article
NeuroGator: A Low-Power Gating System for Asynchronous BCI Based on LFP Brain State Estimation
by Benyuan He, Chunxiu Liu, Zhimei Qi, Ning Xue and Lei Yao
Brain Sci. 2026, 16(2), 141; https://doi.org/10.3390/brainsci16020141 - 28 Jan 2026
Abstract
The continuous handling of the large volume of raw data generated by implantable brain–computer interface (BCI) devices requires substantial hardware resources and is becoming a bottleneck for implantable BCI systems, particularly power-constrained wireless systems. To overcome this bottleneck, we present NeuroGator, an asynchronous gating system for implantable BCIs based on Local Field Potentials (LFPs). Unlike a conventional continuous data decoding approach, NeuroGator uses hierarchical state classification to allocate hardware resources efficiently, reducing the data size before processing or transmission. The proposed NeuroGator operates in two stages. First, a low-power hardware silence detector filters out background noise and non-active signals, reducing the data size by approximately 69.4%. Second, a Dual-Resolution Gated Recurrent Unit (GRU) model controls the main data processing procedure on the edge side, using a first-level model to scan low-precision LFP data for potential activity and a second-level model to analyze high-precision LFP data to confirm an active state. Experiments show that NeuroGator reduces overall data throughput by 82% while maintaining an F1-score of 0.95. This architecture allows the implantable BCI system to stay in an ultra-low-power state for over 85% of its operation period. NeuroGator has been implemented in an Application-Specific Integrated Circuit (ASIC) in a standard 180 nm Complementary Metal Oxide Semiconductor (CMOS) process, occupying a silicon area of 0.006 mm² and consuming 51 nW of power. NeuroGator effectively resolves the resource-efficiency dilemma for implantable BCI devices, offering a robust paradigm for next-generation asynchronous implantable BCI systems. Full article
(This article belongs to the Special Issue Trends and Challenges in Neuroengineering)
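The first-stage silence detector can be sketched as a windowed energy threshold: windows whose mean absolute amplitude stays below a noise floor are dropped before any further processing. The trace, window length, and threshold below are invented for illustration; the paper's detector is a dedicated low-power hardware block, not this software loop.

```python
def silence_gate(samples, win=8, threshold=0.5):
    """First-stage gate: keep only windows whose mean absolute
    amplitude exceeds a noise threshold; report the data reduction."""
    kept = []
    for i in range(0, len(samples) - win + 1, win):
        w = samples[i:i + win]
        if sum(abs(s) for s in w) / win > threshold:
            kept.append(w)
    reduction = 1 - (len(kept) * win) / len(samples)
    return kept, reduction

# hypothetical LFP trace: low-amplitude background with one burst of activity
trace = [0.1, -0.2, 0.1, 0.0, -0.1, 0.2, 0.1, -0.1] * 3 \
      + [2.0, -1.5, 1.8, -2.2, 1.9, -1.7, 2.1, -1.6] \
      + [0.1, -0.1, 0.0, 0.2, -0.2, 0.1, -0.1, 0.1] * 2
kept, reduction = silence_gate(trace)
# only the burst window survives; roughly 83% of the samples are dropped
# before the downstream (and more expensive) classifier ever runs
```

Dropping silent windows this early is what lets the downstream GRU stages, and the radio, stay idle most of the time.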

47 pages, 2081 KB  
Article
A Robust ConvNeXt-Based Framework for Efficient, Generalizable, and Explainable Brain Tumor Classification on MRI
by Kirti Pant, Pijush Kanti Dutta Pramanik and Zhongming Zhao
Bioengineering 2026, 13(2), 157; https://doi.org/10.3390/bioengineering13020157 - 28 Jan 2026
Abstract
Background: Accurate and dependable brain tumor classification from magnetic resonance imaging (MRI) is essential for clinical decision support, yet remains challenging due to inter-dataset variability, heterogeneous tumor appearances, and limited generalization of many deep learning models. Existing studies often rely on single-dataset evaluation, insufficient statistical validation, or lack interpretability, which restricts their clinical reliability and real-world deployment. Methods: This study proposes a robust brain tumor classification framework based on the ConvNeXt Base architecture. The model is evaluated across three independent MRI datasets comprising four classes—glioma, meningioma, pituitary tumor, and no tumor. Performance is assessed using class-wise and aggregate metrics, including accuracy, precision, recall, F1-score, AUC, and Cohen’s Kappa. The experimental analysis is complemented by ablation studies, computational efficiency evaluation, and rigorous statistical validation using Friedman’s aligned ranks test, Holm and Wilcoxon post hoc tests, Kendall’s W, critical difference diagrams, and TOPSIS-based multi-criteria ranking. Model interpretability is examined using Grad-CAM++ and Gradient SHAP. Results: ConvNeXt Base consistently achieves near-perfect classification performance across all datasets, with accuracies exceeding 99.6% and AUC values approaching 1.0, while maintaining balanced class-wise behavior. Statistical analyses confirm that the observed performance gains over competing architectures are significant and reproducible. Efficiency results demonstrate favorable inference speed and resource usage, and explainability analyses show that predictions are driven by tumor-relevant regions. Conclusions: The results demonstrate that ConvNeXt Base provides a reliable, generalizable, and explainable solution for MRI-based brain tumor classification. Its strong diagnostic accuracy, statistical robustness, and computational efficiency support its suitability for integration into real-world clinical and diagnostic workflows. Full article
(This article belongs to the Section Biosignal Processing)

17 pages, 1622 KB  
Article
A Battery-Aware Sensor Fusion Strategy: Unifying Magnetic-Inertial Attitude and Power for Energy-Constrained Motion Systems
by Raphael Diego Comesanha e Silva, Thiago Martins, João Paulo Bedretchuk, Victor Noster Kürschner and Anderson Wedderhoff Spengler
Sensors 2026, 26(3), 856; https://doi.org/10.3390/s26030856 - 28 Jan 2026
Abstract
Extended Kalman Filters (EKFs) are widely employed for attitude estimation using Magnetic and Inertial Measurement Units (MIMUs) in battery-powered sensing systems. In such applications, energy availability influences system operation, yet battery state information is commonly treated by external supervisory mechanisms rather than being integrated into the estimation process. This work presents an EKF-based formulation in which the battery State of Charge (SOC) is explicitly included as a state variable, allowing joint estimation of attitude and energy state within a single filtering framework. SOC dynamics are modeled using a low-complexity estimator based on terminal voltage and current measurements, while attitude estimation is performed using a Simplified Extended Kalman Filter (SEKF) tailored for embedded MIMU-based applications. The proposed approach was evaluated through numerical simulations under constant and time-varying load profiles representative of low-power electronic devices. The results indicate that the inclusion of SOC estimation does not affect the attitude estimation performance of the original SEKF, while SOC estimation errors remain below 8% for the evaluated load conditions with power consumption of approximately 0.1 W, consistent with wearable and small autonomous electronic platforms. By incorporating energy state estimation directly into the filtering structure, rather than treating it as an external supervisory task, the proposed formulation offers a unified estimation approach suitable for embedded MIMU-based systems with limited computational and energy resources. Full article
(This article belongs to the Special Issue Inertial Sensing System for Motion Monitoring)
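The state-augmentation idea in this abstract can be illustrated with a minimal joint EKF. The sketch below is an assumed toy model, not the paper's SEKF: the state x = [theta, soc] pairs a one-dimensional attitude angle with battery SOC, SOC follows Coulomb counting, and the terminal voltage is a linear open-circuit-voltage model. All constants and function names are illustrative.

```python
import numpy as np

# Joint attitude/SOC EKF sketch. Assumed dynamics: theta integrates a rate
# gyro reading; soc follows Coulomb counting d(soc)/dt = -i/C. Assumed
# measurements: a direct angle observation and terminal voltage
# v = V0 + K*soc - R*i (linear OCV plus internal-resistance drop).
DT, CAP = 0.01, 360.0        # time step [s], battery capacity [A*s]
V0, K, R = 3.0, 1.2, 0.05    # illustrative OCV model and resistance
Q = np.diag([1e-5, 1e-8])    # process noise covariance
Rm = np.diag([1e-3, 1e-4])   # measurement noise covariance

def predict(x, P, omega, i_load):
    """Propagate angle with rate omega and SOC with load current i_load."""
    F = np.eye(2)                          # Jacobian of these linear dynamics
    x = np.array([x[0] + omega * DT, x[1] - i_load * DT / CAP])
    return x, F @ P @ F.T + Q

def update(x, P, z, i_load):
    """Fuse an angle measurement and a terminal-voltage measurement."""
    H = np.array([[1.0, 0.0], [0.0, K]])   # d[angle, voltage]/d[theta, soc]
    h = np.array([x[0], V0 + K * x[1] - R * i_load])
    S = H @ P @ H.T + Rm
    Kg = P @ H.T @ np.linalg.inv(S)
    x = x + Kg @ (z - h)
    return x, (np.eye(2) - Kg @ H) @ P

# Short run under a constant load: truth starts at theta=0, soc=0.9,
# while the filter's SOC is initialized far away at 0.5.
x, P = np.array([0.0, 0.5]), np.eye(2)
theta_true, soc_true, omega, i_load = 0.0, 0.9, 0.3, 0.5
for _ in range(2000):
    theta_true += omega * DT
    soc_true -= i_load * DT / CAP
    z = np.array([theta_true, V0 + K * soc_true - R * i_load])
    x, P = predict(x, P, omega, i_load)
    x, P = update(x, P, z, i_load)

print(abs(x[1] - soc_true) < 0.05)  # SOC estimate converges toward truth
```

The point of the augmentation is visible in the update step: a single gain matrix corrects both attitude and SOC, so no external battery supervisor is needed.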
25 pages, 5185 KB  
Review
A Review of Routing and Resource Optimization in Quantum Networks
by Md. Shazzad Hossain Shaon and Mst Shapna Akter
Electronics 2026, 15(3), 557; https://doi.org/10.3390/electronics15030557 - 28 Jan 2026
Abstract
Quantum computing is an emerging discipline that applies the principles of quantum physics to perform computations that are infeasible for conventional computers. Quantum bits, called qubits, can exist in superposition states, enabling a form of parallelism unavailable to classical bits. For complex challenges such as simulation, optimization, and cryptography, quantum entanglement and quantum interference can provide exponential improvements. This survey focuses on recent advances in entanglement routing, quantum key distribution (QKD), and qubit management for short- and long-distance quantum communication. It examines optimization approaches such as integer programming, reinforcement learning, and collaborative methods, evaluating their efficacy in terms of throughput, scalability, and fairness. Despite these advances, challenges remain in dynamic network adaptation, resource constraints, and error correction. Addressing them requires hybrid quantum–classical algorithms for efficient resource allocation, hardware-aware designs that ease real-world deployment, and fault-tolerant architectures. The survey therefore suggests that future research focus on integrating quantum networks with existing classical infrastructure to improve security, dependability, and mainstream adoption. Such integration is significant for applications that require secure communication, financial transactions, and critical infrastructure protection.
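One of the entanglement-routing ideas the survey covers can be sketched with a classical graph search. The example below is illustrative and not drawn from the survey: assuming end-to-end fidelity is the product of per-link fidelities along a swapping path, maximizing it is equivalent to a shortest-path search over weights -log(fidelity), so ordinary Dijkstra applies. The network, node names, and fidelity values are invented.

```python
import heapq
import math

# Fidelity-aware entanglement path selection. Assumption: fidelities
# multiply along a path of entanglement swaps, so minimizing the sum of
# -log(fidelity) maximizes the end-to-end fidelity.
def best_path(graph, src, dst):
    """graph: {node: [(neighbor, link_fidelity), ...]}. Returns (fidelity, path)."""
    pq = [(0.0, src, [src])]               # (accumulated -log fidelity, node, path)
    seen = set()
    while pq:
        cost, node, path = heapq.heappop(pq)
        if node == dst:
            return math.exp(-cost), path   # convert cost back to a fidelity
        if node in seen:
            continue
        seen.add(node)
        for nbr, fid in graph.get(node, []):
            if nbr not in seen and fid > 0:
                heapq.heappush(pq, (cost - math.log(fid), nbr, path + [nbr]))
    return 0.0, []

# A four-node repeater chain with a tempting low-fidelity shortcut A-D:
# three good hops (0.98 * 0.97 * 0.96 ~ 0.913) beat one poor direct link (0.80).
net = {
    "A": [("B", 0.98), ("D", 0.80)],
    "B": [("C", 0.97)],
    "C": [("D", 0.96)],
}
fid, path = best_path(net, "A", "D")
print(path, round(fid, 3))
```

Real proposals surveyed here layer resource constraints (qubit memories, link scheduling) on top of this basic objective, which is where the integer-programming and reinforcement-learning formulations come in.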
18 pages, 1235 KB  
Article
Induction Machine Digital Model Implementation for Fault Injection Analysis
by Javier Fuentes-Sanchez, Julio Hernandez-Perez, Jose de Jesus Rangel-Magdaleno, Sergio Rosales-Nunez and Roberto Morales-Caporal
Processes 2026, 14(3), 456; https://doi.org/10.3390/pr14030456 - 28 Jan 2026
Abstract
In recent years, the digital emulation of power systems such as induction machines has grown, driven by advances in computational resources and the processing capabilities of digital platforms. These platforms offer a versatile approach to the design, analysis, and optimization of solutions in electric machine drive research. This work presents the design of an Induction Machine (IM) digital twin, simulating its performance and behavior under failure conditions using the DQ model. The study further presents the design and real-time digital emulation of an IM incorporating a bearing-fault model. The implementation on an FPGA platform enables high-fidelity simulation and analysis of the machine’s performance under both healthy and faulty operating conditions. This approach provides a distinctive and critical tool for pre-experimental validation, enabling precise identification of key fault signatures and system responses in real time, a capability not explicitly addressed in existing studies. Quantitative results demonstrate that the digital implementation replicates the theoretical IM with a relative error below 1%. In addition, frequency-domain analysis reveals the signatures of the injected fault.
(This article belongs to the Section Process Control and Monitoring)
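The DQ model referenced in this abstract rests on the standard Clarke/Park transform, which maps three-phase machine quantities into a rotating two-axis frame. The sketch below shows the textbook amplitude-invariant form, not the paper's FPGA implementation; the function name and signal set are illustrative.

```python
import numpy as np

# Amplitude-invariant Clarke/Park transform: abc three-phase signals ->
# rotating DQ frame at electrical angle theta. This is standard machine
# theory, offered as context for the DQ model used in the article.
def abc_to_dq(a, b, c, theta):
    """Park transform of three-phase signals at electrical angle theta."""
    alpha = (2.0 / 3.0) * (a - 0.5 * b - 0.5 * c)           # Clarke, alpha axis
    beta = (2.0 / 3.0) * (np.sqrt(3) / 2.0) * (b - c)       # Clarke, beta axis
    d = alpha * np.cos(theta) + beta * np.sin(theta)        # Park rotation
    q = -alpha * np.sin(theta) + beta * np.cos(theta)
    return d, q

# A balanced sinusoidal set transformed at its own electrical angle yields
# constant DQ values; this is what makes the DQ model convenient for
# real-time emulation, since the machine equations lose their 50/60 Hz ripple.
t = np.linspace(0.0, 0.02, 200)
theta = 2 * np.pi * 50 * t                                  # 50 Hz electrical angle
ia = np.cos(theta)
ib = np.cos(theta - 2 * np.pi / 3)
ic = np.cos(theta + 2 * np.pi / 3)
d, q = abc_to_dq(ia, ib, ic, theta)
print(np.allclose(d, 1.0), np.allclose(q, 0.0, atol=1e-9))
```

A bearing fault, by contrast, injects components at characteristic non-synchronous frequencies, so it reappears as ripple on d and q, which is consistent with the frequency-domain fault signatures the article reports.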