Search Results (709)

Search Parameters:
Keywords = noisy computation

26 pages, 1413 KB  
Article
A Novel Hybrid Quantum Circuit for Integer Factorization: End-to-End Evaluation in Simulation and Real Quantum Hardware
by Jesse Van Griensven Thé, Victor Oliveira Santos and Bahram Gharabaghi
J. Cybersecur. Priv. 2026, 6(2), 71; https://doi.org/10.3390/jcp6020071 - 10 Apr 2026
Abstract
The literature indicates that the qubit requirements for factoring RSA-2048 remain on the order of 1 million under commonly assumed architectures and error-correction models, leaving a substantial gap between current resource estimates and near-term practical feasibility. Reducing this requirement to the low-thousand-qubit regime therefore remains an important open research objective. This work proposes a hybrid classical–quantum algorithm that uses a classical modular exponentiation subroutine with a Quantum Number Theoretic Transform (QNTT) circuit to increase the speed and reduce the required quantum resources relative to Shor’s algorithm for integer factorization, which underpins cryptographic systems like RSA and ECC. We evaluate multiple coprime numbers, each the product of two primes, in both simulation and on real quantum hardware, using IBM’s reference Shor implementation as the baseline. Because Shor’s algorithm and the proposed Jesse–Victor–Gharabaghi (JVG) algorithm use different register sizes for the same coprime N, the reported gate/depth reductions should be interpreted as end-to-end quantum-resource budgets for factoring the same N, rather than as per-qubit or transform-only efficiency claims. In simulation, the JVG algorithm achieved substantial practical reductions in computational resources, decreasing runtime from 174.1 s to 5.4 s, memory usage from 12.5 GB to 0.27 GB, and quantum gate counts by approximately 99%. On quantum hardware, JVG reduced the required runtime from 67.8 s to 2 s and the quantum gate counts by over 98%. We showed that the proposed algorithm can address the relevant RSA-1024 scenario, establishing that the method can be validated at large scale. Furthermore, extrapolation to RSA-2048 indicates that the JVG algorithm significantly outperforms Shor’s approach, requiring a projected quantum runtime of 29 h for ten thousand factorization runs under identical scaling assumptions.
Overall, these results support JVG as a more hardware-compatible and robust noise-tolerant substitute for Shor’s framework, offering a viable research direction toward practical quantum integer factorization on near-term Noisy Intermediate-Scale Quantum (NISQ) devices. Full article
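The abstract above builds on the standard classical reduction from order-finding to factoring that Shor-style hybrid schemes share. The sketch below illustrates only that reduction, with a brute-force classical stand-in for the quantum order-finding step; it is not the JVG implementation, and the QNTT circuit is not reproduced here.

```python
from math import gcd

def order(a, N):
    """Brute-force multiplicative order of a mod N -- a classical
    stand-in for the quantum order-finding subroutine."""
    r, x = 1, a % N
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

def factor_from_order(N, a, r):
    """Given the order r of a mod N, try to recover a nontrivial factor
    (the classical post-processing shared by Shor-style schemes)."""
    if r % 2:                      # odd order: no useful square root of 1
        return None
    x = pow(a, r // 2, N)
    if x == N - 1:                 # trivial square root of -1 mod N
        return None
    for cand in (gcd(x - 1, N), gcd(x + 1, N)):
        if 1 < cand < N:
            return cand
    return None

N, a = 15, 7                       # 15 = 3 * 5, gcd(7, 15) = 1
r = order(a, N)
print(r, factor_from_order(N, a, r))   # order 4, and a factor of 15
```

On N = 15 with base a = 7, the order is 4 and gcd(7² − 1, 15) = 3 recovers a factor.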
(This article belongs to the Section Cryptography and Cryptology)

30 pages, 1323 KB  
Article
Circular Polarization-Based Quantum Encoding for Image Transmission over Error-Prone Channels
by Udara Jayasinghe and Anil Fernando
Signals 2026, 7(2), 37; https://doi.org/10.3390/signals7020037 - 8 Apr 2026
Viewed by 169
Abstract
Quantum image transmission over noisy communication channels remains a challenge due to the fragility of quantum states and their susceptibility to channel impairments. Existing quantum encoding schemes often exhibit limited noise resilience, while advanced approaches introduce computational and implementation complexity. To address these limitations, this paper proposes a circular polarization-based quantum encoding framework for image transmission over error-prone channels. In the proposed approach, source images are compressed and source-encoded using standard image coding formats, including the joint photographic experts group (JPEG) standard and the high-efficiency image file format (HEIF), and converted into classical bitstreams. The resulting bitstreams are protected using channel coding and mapped onto quantum states via circular polarization representations, where left- and right-hand circularly polarized states encode binary information. The encoded quantum states are transmitted over noisy quantum channels to model channel impairments. At the receiver, appropriate quantum decoding and channel decoding operations are applied to recover the classical bitstream, followed by source decoding to reconstruct the image. The performance of the proposed framework is evaluated using image quality metrics, including peak signal-to-noise ratio (PSNR), structural similarity index measure (SSIM), and universal quality index (UQI). Simulation results demonstrate that the proposed circular polarization-based encoding scheme outperforms existing quantum image encoding techniques, achieving channel SNR gains of 4 dB over state-of-the-art Hadamard-based encoding and 3 dB over frequency-domain quantum encoding methods under severe noise conditions. These results indicate that circular polarization-based quantum encoding provides improved noise robustness and reconstruction fidelity for practical quantum image transmission systems. Full article
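The bit-to-polarization mapping described above can be illustrated with Jones vectors. This is a noiseless toy sketch assuming a 0 → left-hand, 1 → right-hand circular mapping; the paper's exact encoding, channel model, and decoding operators may differ.

```python
# Jones vectors for left- and right-hand circular polarization.
# Hypothetical bit mapping (0 -> LHC, 1 -> RHC), for illustration only.
LHC = (1 / 2**0.5,  1j / 2**0.5)
RHC = (1 / 2**0.5, -1j / 2**0.5)

def encode(bits):
    return [RHC if b else LHC for b in bits]

def overlap(u, v):
    """Projection probability |<u|v>|^2 between two Jones vectors."""
    inner = sum(a.conjugate() * b for a, b in zip(u, v))
    return abs(inner) ** 2

def decode(states):
    # Project each received state onto the two basis polarizations
    # and pick the more likely bit.
    return [1 if overlap(RHC, s) >= overlap(LHC, s) else 0 for s in states]

bits = [1, 0, 1, 1, 0]
print(decode(encode(bits)))        # noiseless round trip returns the input
```

Because the two circular states are orthogonal (their overlap is zero), the noiseless round trip is exact; channel noise would rotate the received states and make the projection probabilistic.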

35 pages, 11805 KB  
Article
MRTS-Boosting: A Quality-Aware Multivariate Time Series Classification Framework for Robust Rice Detection Under Cloud Contamination
by Bayu Suseno, Guilhem Brunel, Hari Wijayanto, Kusman Sadik, Farit Mochamad Afendi and Bruno Tisseyre
Remote Sens. 2026, 18(7), 1025; https://doi.org/10.3390/rs18071025 - 29 Mar 2026
Viewed by 320
Abstract
Accurate rice detection is essential for food security, sustainable agriculture, and environmental monitoring. Satellite time series observations provide scalable capabilities for rice detection; however, their application in tropical regions is challenged by persistent cloud contamination, asynchronous crop development cycles, and temporal misalignment among multisensor observations, which reduce classification reliability. This study introduces Multivariate Robust Time Series Boosting (MRTS-Boosting), a quality-aware framework for multivariate time series classification (TSC) designed to improve robustness under noisy and irregular observational conditions. The framework integrates quality-weighted feature construction, joint extraction of full-series and interval-based temporal features, and a flexible multivariate formulation that accommodates heterogeneous satellite inputs without strict temporal alignment. Performance was evaluated using synthetic datasets with controlled cloud contamination, 103 benchmark datasets from the University of California, Riverside (UCR) TSC Archive, and 3261 real-world rice field observations from Indonesia. Comparisons were conducted against representative whole-series, interval-based, shapelet-based, kernel-based, and ensemble classifiers. MRTS-Boosting achieved up to 87% accuracy under severe cloud contamination, an average rank of 2.7 on noise-augmented UCR datasets, and 93% accuracy with Cohen’s kappa of 0.76 for Indonesian rice detection, while maintaining moderate computational cost. These results demonstrate that MRTS-Boosting provides a robust, scalable, and computationally efficient framework for satellite-based rice detection. The framework remains competitive in univariate settings while benefiting from multisensor integration, indicating that performance gains arise from both methodological design and the effective use of heterogeneous data. 
MRTS-Boosting is therefore well-suited for precision agriculture applications under challenging observational conditions. Full article
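The quality-weighted feature construction idea can be illustrated in miniature: down-weight cloud-contaminated observations when summarizing a vegetation-index series. The NDVI values and quality flags below are hypothetical, and the framework's actual feature set is far richer.

```python
def quality_weighted_mean(values, quality):
    """Mean of a satellite time series in which each observation is
    down-weighted by its quality score (1.0 = clear sky, 0.0 = fully
    cloudy). One tiny piece of quality-aware feature construction."""
    wsum = sum(quality)
    if wsum == 0:
        raise ValueError("no usable observations")
    return sum(v * w for v, w in zip(values, quality)) / wsum

ndvi  = [0.61, 0.12, 0.58, 0.64, 0.09]   # hypothetical NDVI series
clear = [1.0, 0.1, 0.9, 1.0, 0.0]        # hypothetical per-date quality
print(round(quality_weighted_mean(ndvi, clear), 3))
# the cloud-contaminated dates (0.12, 0.09) barely move the estimate
```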

11 pages, 2864 KB  
Case Report
Acute Airway Crisis in Mucopolysaccharidosis VI: Management Challenges
by Assel Tulebayeva, Chaitanya Gadepalli and Maira Sharipova
Diagnostics 2026, 16(7), 1009; https://doi.org/10.3390/diagnostics16071009 - 27 Mar 2026
Viewed by 348
Abstract
Background and Clinical Significance: Mucopolysaccharidosis type VI is a rare lysosomal storage disorder due to arylsulfatase B enzyme deficiency, leading to progressive multisystem disease and a complex airway. Acute respiratory infections can precipitate airway embarrassment. A structured treatment guideline is currently lacking. We present a 7-year-old MPS VI male with respiratory distress, highlighting the challenges in management. Case Presentation: Case review focusing on clinical presentation, imaging findings, and multidisciplinary decision-making during acute deterioration. The child had been diagnosed with MPS VI at the age of 3.5 years, based on low arylsulfatase B enzyme activity and a homozygous c.275C>A p.(Thr92Lys) variant in the ARSB gene. At 7 years of age, he showed signs of dyspnoea, increased respiratory effort with bilateral crepitations, and noisy breathing. Initial management included facemask oxygen, nebulised adrenaline, corticosteroids, and bronchodilators. Computed tomography of the neck and chest showed a complex upper airway, multiple tracheal narrowing, tortuosity, and an extra loop of truncus brachiocephalicus from the arch of the aorta. Potential interventions carried substantial risks due to the abnormal airway and multisystem disease. Following extensive multidisciplinary discussion and careful consideration of the significant risks associated with invasive airway interventions, a shared decision was reached with the family to adopt a comfort-focused palliative care approach. Despite the best supportive care, the child unfortunately passed away after 3 months. The family was involved in every decision process and was fully supported. Conclusions: MPS VI is associated with complex airways and multisystem disease. Multidisciplinary decision-making with family is critical to safe and appropriate care. The rarity of the disease, lack of guidelines, complex airways, and multiple comorbidities make management challenging. Full article
(This article belongs to the Special Issue Recent Advances in Pathology 2026)

17 pages, 254 KB  
Article
Quantum Entanglement in Digital Forensics: Methodology and Experimental Findings
by Shatha Alhazmi, Khaled Elleithy and Abdelrahman Elleithy
Electronics 2026, 15(7), 1372; https://doi.org/10.3390/electronics15071372 - 26 Mar 2026
Viewed by 281
Abstract
The fast-paced progress in quantum computing introduces significant new challenges for digital forensics by undermining classical cryptographic mechanisms that protect digital evidence. Algorithms such as Shor’s and Grover’s threaten the long-term reliability of traditional hash functions, digital signatures, and encryption schemes, thereby compromising the integrity, authenticity, and confidentiality of evidence. This paper investigates how quantum entanglement can be leveraged to enhance the security of digital forensic evidence in the post-quantum era. A hybrid quantum–classical forensic framework is proposed, integrating three entanglement-based components: an entanglement-assisted quantum hashing mechanism for integrity assurance, a CHSH nonlocality-based protocol for authenticity verification, and a BBM92 quantum key distribution scheme for confidentiality and secure chain-of-custody management. All components are implemented using IBM Qiskit and evaluated with the AerSimulator under realistic Noisy Intermediate-Scale Quantum conditions. Experimental results measured using Hamming distance, CHSH S-values, and Quantum Bit Error Rate demonstrate improved tamper detection, reliable authenticity validation, and strong overall confidentiality. Full article
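The CHSH protocol used for authenticity verification checks whether the combination S = E(a,b) − E(a,b′) + E(a′,b) + E(a′,b′) exceeds the classical bound of 2. A minimal sketch with the ideal, noise-free singlet correlation (a real device would estimate each E from measurement counts, and noise would lower S toward the classical bound):

```python
from math import cos, pi, sqrt

def chsh_S(E, a, ap, b, bp):
    """CHSH combination S = E(a,b) - E(a,b') + E(a',b) + E(a',b')."""
    return abs(E(a, b) - E(a, bp) + E(ap, b) + E(ap, bp))

def singlet(ta, tb):
    # Ideal (noise-free) singlet correlation at analyzer angles ta, tb.
    return -cos(ta - tb)

# Angles that maximize the quantum violation (S = 2*sqrt(2)).
S = chsh_S(singlet, 0, pi / 2, pi / 4, 3 * pi / 4)
print(round(S, 3))                 # 2.828 > 2: violates the classical bound
```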
(This article belongs to the Special Issue Feature Papers in Networks: 2025–2026 Edition)
16 pages, 2916 KB  
Article
Deep Learning-Based Relay Selection in a Decode-and-Forward Cooperative System with Energy Harvesting and Signal Space Diversity
by Ahmed Oun, Divyessh Maheshwari and Ahmed Ammar
Electronics 2026, 15(7), 1363; https://doi.org/10.3390/electronics15071363 - 25 Mar 2026
Viewed by 357
Abstract
Deep learning techniques have been widely applied in wireless communication systems to enhance resilience and reduce computational complexity. This paper investigates both traditional and deep learning-based approaches for real-time relay selection in a cooperative communication system with multiple energy-harvesting relays and signal space diversity. The assumed relay decoding scheme is decode-and-forward (DF), with selection criteria based on successful decoding from the source, sufficient energy availability, and the best channel to the destination. The system performance is evaluated in terms of outage probability. Monte Carlo simulations are used to determine the exact outage probability of the system and to generate datasets for training machine learning models. The traditional machine learning models implemented include Decision Tree (DT), Logistic Regression (LR), K-Nearest Neighbor (KNN), and Support Vector Machines (SVMs). The deep learning-based method used is the deep neural network (DNN). Two datasets—one with six features and another with nine features—were used for training and testing. The 6-feature datasets are comparatively less random and complex than the 9-feature datasets. The results indicate that among traditional models KNN achieves the highest accuracy and is thus used as a benchmark to compare against DNN performance. For the 9-feature datasets, both KNN and DNN struggle to accurately approximate the exact outage probability, suggesting that the 9-feature datasets are too complex and noisy for effective modeling. However, on the 6-feature datasets, KNN achieves 77% accuracy, while DNN achieves a significantly higher accuracy of 99%. Due to its high accuracy, the DNN model closely approximates the exact outage probability while offering greater computational efficiency compared to the KNN model. 
These results underscore the potential of deep learning in optimizing real-time relay selection for energy-harvesting cooperative communication systems. Full article
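The Monte Carlo outage estimation described above can be sketched for the simplest case, best-of-n relay selection over Rayleigh fading. This simplified version ignores the energy-availability and source-decoding constraints of the actual system; the threshold and mean SNR values are illustrative.

```python
import random

def outage_probability(n_relays, snr_threshold, mean_snr, trials=100_000):
    """Monte Carlo outage estimate for best-of-n relay selection over
    Rayleigh fading, where the per-relay SNR is exponentially
    distributed with the given mean."""
    outages = 0
    for _ in range(trials):
        best = max(random.expovariate(1 / mean_snr) for _ in range(n_relays))
        if best < snr_threshold:
            outages += 1
    return outages / trials

random.seed(0)
p = outage_probability(n_relays=3, snr_threshold=1.0, mean_snr=5.0)
print(round(p, 4))   # analytic value: (1 - exp(-1/5))**3, about 0.006
```

Datasets of (channel realization, selected relay) pairs generated this way are what the DT/LR/KNN/SVM and DNN models in the paper are trained on.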
(This article belongs to the Special Issue Advances in Networked Systems and Communication Protocols)

24 pages, 1460 KB  
Perspective
From Sensing to Sense-Making: A Framework for On-Person Intelligence with Wearable Biosensors and Edge LLMs
by Tad T. Brunyé, Mitchell V. Petrimoulx and Julie A. Cantelon
Sensors 2026, 26(7), 2034; https://doi.org/10.3390/s26072034 - 25 Mar 2026
Viewed by 563
Abstract
Wearable biosensors increasingly stream multi-channel physiological and behavioral data outside the laboratory, yet most deployments still end in dashboards or threshold alarms that leave interpretation open to the user. In high-stakes domains, such as military, emergency response, aviation, industry, and elite sport, the constraint is rarely data availability but the cognitive effort required to convert noisy signals into timely, actionable decisions. We argue for on-person cognitive co-pilots: systems that integrate multimodal sensing, compute probabilistic state estimates on devices, synthesize those states with task and environmental context using locally hosted large language models (LLMs), and deliver recommendations through attention-appropriate cues that preserve autonomy. Enabling conditions include mature wearable sensing, edge artificial intelligence (AI) accelerators, tiny machine learning (TinyML) pipelines, privacy-preserving learning, and open-weight LLMs capable of local deployment with retrieval and guardrails. However, critical research gaps remain across layers: sensor validity under real-world conditions, uncertainty calibration and fusion under distribution shift, verification of LLM-mediated reasoning, interaction design that avoids alarm fatigue and automation bias, and governance models that protect privacy and consent in constrained settings. We propose a layered technical framework and research agenda grounded in cognitive engineering and human–automation interaction. Our core claim is that local, uncertainty-aware reasoning is an architectural prerequisite for trustworthy, low-latency augmentation in isolated, confined, and extreme environments. Full article
(This article belongs to the Special Issue Sensors in 2026)

32 pages, 8696 KB  
Article
Phase-Aware Hierarchical Reinforcement Learning with Dynamic Human–AI Authority Allocation for Mountain Search and Rescue
by Chenzhe Zhong, Bo Liu, Wei Zhu, Dongxu Dai and Yu Jiang
Drones 2026, 10(4), 229; https://doi.org/10.3390/drones10040229 - 24 Mar 2026
Viewed by 207
Abstract
Search and rescue (SAR) operations in mountainous terrain present significant challenges due to complex environments, time-critical decisions, and the need for effective human–AI collaboration. Existing approaches typically employ either fully autonomous systems that lack adaptability to varying task requirements, or fixed human–AI authority allocations that fail to leverage the distinct strengths of humans and AI across different mission phases. This paper proposes Phase-Aware Hierarchical Reinforcement Learning (PAHRL), a novel framework that dynamically allocates decision-making authority between human operators and AI agents based on identified task phases. First, we formulate the mountain SAR problem as a three-phase task structure: Wide Search (WS), Target Confirmation (TC), and Rescue Coordination (RC), and examine the consistency of this decomposition through unsupervised clustering analysis, supported by bootstrap stability (ARI = 0.983 ± 0.083) and multiple clustering metrics. Second, we design an adaptive authority mechanism with four levels (L1: Human-Led to L4: Full-Auto) that automatically adjusts human involvement based on current phase characteristics and environmental uncertainty estimates. Third, we introduce a priority-based task execution module that ensures efficient resource allocation across multiple rescue objectives while respecting authority constraints. Extensive experiments demonstrate that PAHRL outperforms baseline methods, achieving a 20.9% higher success rate compared to standard PPO (59.0% vs. 48.8%) and 66.7% improvement over heuristic approaches. PAHRL maintains 96.9% precision even under 60% noise conditions with only 0.09 false rescues per episode. Ablation studies further reveal that phase awareness serves as a critical robustness mechanism; removing phase detection causes complete mission failure under noisy conditions. 
These results indicate that phase-aware dynamic authority allocation significantly enhances both efficiency and robustness in human–AI collaborative SAR missions. While demonstrated in a proof-of-concept simulation with computational human models, validation with real operators and more complex environments remains essential before operational deployment. Full article
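The four-level authority mechanism can be illustrated with a hypothetical rule table keyed on the three phases named above. The thresholds below are invented for illustration; PAHRL learns the allocation policy rather than hard-coding it.

```python
# Hypothetical mapping from mission phase and uncertainty estimate to one
# of the four authority levels (L1: Human-Led .. L4: Full-Auto).
def authority_level(phase, uncertainty):
    if phase == "RC":    # Rescue Coordination: keep humans in the loop
        return "L1" if uncertainty > 0.3 else "L2"
    if phase == "TC":    # Target Confirmation
        return "L2" if uncertainty > 0.5 else "L3"
    if phase == "WS":    # Wide Search: safest to automate
        return "L3" if uncertainty > 0.7 else "L4"
    raise ValueError(f"unknown phase: {phase}")

print(authority_level("WS", 0.2), authority_level("RC", 0.6))
```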
(This article belongs to the Section Artificial Intelligence in Drones (AID))

24 pages, 8415 KB  
Article
UAV-Based River Velocity Estimation Using Optical Flow and FEM-Supported Multiframe RAFT Extension
by Andrius Kriščiūnas, Vytautas Akstinas, Dalia Čalnerytė, Diana Meilutytė-Lukauskienė, Karolina Gurjazkaitė, Tautvydas Fyleris and Rimantas Barauskas
Drones 2026, 10(3), 221; https://doi.org/10.3390/drones10030221 - 21 Mar 2026
Viewed by 400
Abstract
Quantifying river surface flow velocity is essential for hydrodynamic modelling, flood forecasting, and water resource management. Traditional in situ methods provide accurate point measurements but are costly and limited in spatial coverage. Unmanned aerial vehicles (UAVs) offer a flexible, non-contact alternative for high-resolution monitoring. Optical flow is a tracer-independent technique for deriving velocity fields from RGB video, making it well suited to UAV-based surveys. However, its operational use is hindered by the limited availability of annotated datasets and by instability under low-texture or noisy conditions. This study combines a Finite element method (FEM)-based physical flow model with UAV video to generate reference datasets and introduces a modified Recurrent All-Pairs Field Transforms (RAFT) architecture based on multiframe sequences. A Gated Recurrent Unit fusion module (Fuse-GRU) is incorporated prior to correlation computation, improving robustness to illumination changes and surface homogeneity while maintaining computational efficiency. The proposed model delivers stable, physically consistent velocity estimates across multiple rivers and flow conditions. Accuracy improves with higher spatial resolution and moderate temporal spacing. Compared to field measurements, the average angular difference ranged from 8 to 15°. The high error values were mainly caused by inaccuracies in the physical model and by complex river features. These findings confirm that multiframe optical flow can reproduce realistic river flow patterns with accuracy comparable to physically-based simulations, thereby supporting UAV-based hydrometric monitoring and model validation. Full article
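As intuition for displacement-based velocity estimation, a crude one-dimensional cross-correlation matcher can stand in for the learned optical flow: the shift that best aligns two intensity profiles, divided by the frame interval, gives a surface speed. This is not the paper's RAFT-based method, only a sketch of the underlying principle.

```python
def best_shift(frame_a, frame_b, max_shift):
    """Integer shift (pixels) that maximizes the cross-correlation of
    two 1-D intensity profiles; shift / frame interval gives a speed."""
    def score(shift):
        return sum(frame_a[i] * frame_b[i + shift]
                   for i in range(len(frame_a) - max_shift))
    return max(range(max_shift + 1), key=score)

a = [0, 1, 3, 1, 0, 0, 0, 0]       # texture pattern at time t
b = [0, 0, 0, 1, 3, 1, 0, 0]       # same pattern advected 2 px at t + dt
print(best_shift(a, b, max_shift=3))   # 2
```

Low-texture water surfaces make the correlation peak flat and ambiguous, which is exactly the instability the multiframe Fuse-GRU extension is designed to mitigate.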
(This article belongs to the Special Issue Drones in Hydrological Research and Management)

26 pages, 666 KB  
Article
Quantum Heuristic Approach to Vehicle Routing Problem
by Jun Suk Kim, Donghyeon Lee and Chang Wook Ahn
Mathematics 2026, 14(6), 1026; https://doi.org/10.3390/math14061026 - 18 Mar 2026
Viewed by 346
Abstract
Quantum optimization has recently drawn considerable attention as one of the possible applications of noisy intermediate-scale quantum computation, yet the problem of qubit requirement remains a major bottleneck when combinatorial optimization problems are converted into quantum circuits. This issue becomes especially critical in solving the capacitated vehicle routing problem (CVRP) with the quantum approximate optimization algorithm (QAOA), since the number of required qubits increases polynomially with respect to the number of nodes. This study investigates whether a heuristic divide-and-conquer strategy can be adapted to the quantum setting so as to improve qubit efficiency while preserving the optimization capability to a reasonable extent. The proposed method decomposes a single CVRP into multiple traveling salesman problems (TSPs) by the sweeping-based clustering method, searches for the sector configuration with the smallest angle sum by Grover’s search algorithm, and then solves each sector-wise TSP with the QAOA aided by the gravitational search algorithm. Experiments on five benchmark datasets show that the proposed approach attains feasible solutions within 3.4 to 12.7% of the reinforcement-learning baseline on the main test set. These results suggest that the proposed approach serves as a plausible quantum heuristic framework for constrained routing optimization, with the advantage of reducing the qubit burden by decomposing the original problem into smaller subproblems. Full article
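The sweeping-based clustering step, which decomposes the CVRP into sector-wise TSPs, can be sketched classically; the Grover search over sector configurations and the QAOA solver are omitted here, and the greedy capacity cut is one common variant of the sweep heuristic.

```python
from math import atan2

def sweep_clusters(customers, demand, capacity):
    """Classic sweep heuristic: order customers by polar angle around
    the depot (at the origin) and start a new sector whenever vehicle
    capacity would be exceeded. Each sector becomes an independent TSP."""
    order = sorted(customers, key=lambda p: atan2(p[1], p[0]))
    sectors, current, load = [], [], 0
    for c in order:
        if load + demand[c] > capacity:
            sectors.append(current)
            current, load = [], 0
        current.append(c)
        load += demand[c]
    if current:
        sectors.append(current)
    return sectors

pts = [(1, 0), (0, 1), (-1, 0), (0, -1), (1, 1)]
dem = {p: 1 for p in pts}
print(sweep_clusters(pts, dem, capacity=2))   # sectors of sizes 2, 2, 1
```

Each sector's TSP involves far fewer nodes than the original CVRP, which is what reduces the qubit count when the subproblems are handed to QAOA.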

24 pages, 2064 KB  
Article
Meta-Label-Corrected Knowledge Distillation for Partial Multi-Label Learning
by Jiwei Shuai, Can Xu, Haiyan Jiang and Bin Hu
Electronics 2026, 15(6), 1233; https://doi.org/10.3390/electronics15061233 - 16 Mar 2026
Viewed by 312
Abstract
Partial multi-label learning (PML) assigns each instance a candidate label set that contains all relevant labels but may also include irrelevant noisy ones, making reliable disambiguation essential. Although a small number of verified clean labels is often available in practice, existing PML methods rarely exploit such information to explicitly guide candidate-label correction. Meanwhile, directly applying knowledge distillation (KD) to PML is highly vulnerable to noisy supervision during representation learning, which can aggravate error accumulation under overlapping candidate labels. To address these issues, we propose a meta-guided distillation framework for PML that integrates teacher–student learning with nested meta-optimization. Specifically, the teacher is optimized with large-scale noisy data under the guidance of limited clean labels, so that it can learn calibrated probabilistic label semantics and generate corrected soft targets for student training. To make this meta-correction process scalable, a truncated meta-gradient approximation is further adopted to reduce computational overhead. The resulting corrected teacher outputs are then used to drive robust multi-label distillation for the student. Experiments on multiple benchmark multi-label image datasets demonstrate consistent improvements over seven representative PML methods across standard evaluation metrics. These results show that meta-guided calibration effectively reduces semantic ambiguity and mitigates noise-induced error propagation in partial multi-label learning. Full article
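The idea of correcting teacher soft targets with a small set of verified clean labels can be illustrated with a hypothetical blending rule. The paper instead learns the correction through nested meta-optimization; the `alpha` blend below is invented for illustration.

```python
def corrected_soft_targets(teacher_probs, clean_labels, alpha=0.7):
    """Pull the teacher's per-label probabilities toward verified clean
    labels where available (None = unverified). Hypothetical correction
    rule, for illustration only."""
    out = []
    for p, y in zip(teacher_probs, clean_labels):
        out.append(p if y is None else alpha * y + (1 - alpha) * p)
    return out

teacher = [0.9, 0.4, 0.2, 0.7]     # noisy candidate-label scores
clean   = [None, 1.0, 0.0, None]   # two verified labels
print(corrected_soft_targets(teacher, clean))
```

The corrected vector would then serve as the soft target for the student's distillation loss, keeping the teacher's scores wherever no clean label exists.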
(This article belongs to the Special Issue Computer Vision and Machine Learning: Real-World Applications)

20 pages, 460 KB  
Article
Training-Free Quantum Architecture Search Under Realistic Noise via Expressibility-Guided Evolution
by Seyedali Mousavi, Seyedhamidreza Mousavi, Paul Pettersson and Masoud Daneshtalab
Entropy 2026, 28(3), 330; https://doi.org/10.3390/e28030330 - 16 Mar 2026
Viewed by 347
Abstract
Designing noise-robust parameterized quantum circuits (PQCs) is a central challenge in the noisy intermediate-scale quantum (NISQ) regime. Existing quantum architecture search methods rely on training large SuperCircuits and evaluating SubCircuits under noisy execution, resulting in high computational cost and architecture assessments that depend on task-specific optimization and device noise. In this work, we propose a training-free quantum architecture search framework based on information-theoretic expressibility measures rather than performance-based estimators. We empirically show that noise-free KL-divergence-based expressibility exhibits a consistent monotonic association with noisy task loss across diverse circuit architectures and realistic hardware noise models. Leveraging this relationship, we introduce an expressibility-guided evolutionary search that requires neither SuperCircuit training nor noisy execution during the search phase. Since expressibility is evaluated independently of hardware noise, the method is inherently device-agnostic, enabling architectures to be reused across multiple quantum devices without re-running the search. Experiments using IBM-derived Qiskit noise models demonstrate that the proposed approach achieves competitive performance compared to SuperCircuit-based baselines, while substantially reducing computational cost. These results establish expressibility as an effective information-theoretic surrogate for ranking PQC architectures under realistic noise. Full article
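The KL-divergence expressibility measure compares the distribution of state fidelities sampled from a circuit ensemble against the Haar distribution, whose CDF is P(F ≤ x) = 1 − (1 − x)^(N−1) with N = 2^n. A binned sketch under the simplifying assumption that fidelity samples are already given (here drawn from the Haar law itself, so the estimate should be near zero; real PQC fidelities would come from circuit simulation):

```python
import random
from math import log

def haar_bin_probs(n_qubits, bins):
    """Haar fidelity mass per bin, from P(F <= x) = 1 - (1 - x)**(N - 1)
    with N = 2**n_qubits."""
    N = 2 ** n_qubits
    edges = [i / bins for i in range(bins + 1)]
    return [(1 - edges[i]) ** (N - 1) - (1 - edges[i + 1]) ** (N - 1)
            for i in range(bins)]

def kl_expressibility(fidelities, n_qubits, bins=20):
    """Binned KL(empirical || Haar): smaller means more expressive."""
    q = haar_bin_probs(n_qubits, bins)
    counts = [0] * bins
    for f in fidelities:
        counts[min(int(f * bins), bins - 1)] += 1
    total = len(fidelities)
    return sum((c / total) * log((c / total) / qi)
               for c, qi in zip(counts, q) if c)

# Toy fidelity samples drawn exactly from the Haar law via the inverse
# CDF, standing in for fidelities estimated from a candidate PQC.
random.seed(1)
N = 2 ** 3
samples = [1 - random.random() ** (1 / (N - 1)) for _ in range(20_000)]
print(round(kl_expressibility(samples, 3), 3))   # close to 0
```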
(This article belongs to the Section Quantum Information)

9 pages, 924 KB  
Proceeding Paper
Multi-Class Electroencephalography Motor Imagery Classification of Limb Movements Using Convolutional Neural Network
by Yean Ling Chan, Yiqi Tew, Ching Pang Goh and Choon Kit Chan
Eng. Proc. 2026, 128(1), 20; https://doi.org/10.3390/engproc2026128020 - 11 Mar 2026
Viewed by 288
Abstract
We classified essential motor actions, dorsal and plantar flexion (lower limb) and arm movement (upper limb), from electroencephalography (EEG)-based brain–computer interface (BCI) signals using a convolutional neural network (CNN). Unlike previous research that studied upper or lower limb motor imagery in isolation, we integrated both categories in a unified framework to cover a broader range of movements and applications. These motor actions are fundamental to daily activities such as walking, running, maintaining balance, lifting, reaching, and exercising. Upper limb EEG data were provided by INTI International University, whereas lower limb data were obtained from a publicly available dataset, recorded using 16-channel Emotiv and OpenBCI systems, respectively, each with distinct sampling rates and signal formats. To improve signal quality and facilitate joint model training, all signals were downsampled to 125 Hz, standardized to 16 channels, segmented using sliding windows, normalized via StandardScaler, and labelled according to action class. The processed data were used to train a CNN model configured with a kernel size of 3 and rectified linear unit activation functions. Training was terminated at epoch 11 by an early stopping strategy, yielding approximately 67% accuracy on both the training and validation sets. Although this accuracy is moderate for deep learning, it is a promising outcome for EEG-based multi-class motor imagery classification given the challenges posed by limited data availability, low inter-class feature discriminability, and the inherently noisy nature of non-invasive EEG signals. The results underscore the potential of CNN-based models for future real-time BCI applications. Future work will expand the dataset, refine the deep learning architecture, improve signal preprocessing, and integrate prosthetic devices to validate the system in practical scenarios.
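The preprocessing chain described above — downsampling to 125 Hz, sliding-window segmentation, and per-window standardization — might be sketched as follows. The helper names and window parameters are illustrative assumptions, not the authors' code:

```python
def downsample(signal, src_hz, dst_hz):
    # Naive decimation; assumes an integer rate ratio (e.g. 250 Hz -> 125 Hz)
    step = src_hz // dst_hz
    return signal[::step]

def sliding_windows(signal, win, hop):
    # Overlapping fixed-length segments suitable as CNN inputs
    return [signal[i:i + win] for i in range(0, len(signal) - win + 1, hop)]

def standardize(window):
    # Per-window z-score, mirroring what sklearn's StandardScaler computes
    n = len(window)
    mean = sum(window) / n
    std = (sum((x - mean) ** 2 for x in window) / n) ** 0.5 or 1.0
    return [(x - mean) / std for x in window]

# Example: one synthetic single-channel recording at 250 Hz
raw = [float(i % 7) for i in range(1000)]
segments = [standardize(w)
            for w in sliding_windows(downsample(raw, 250, 125), win=64, hop=32)]
```

In practice each segment would be a 16-channel array rather than a single channel, but the per-channel operations are the same.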

48 pages, 6469 KB  
Article
Adaptive Instantaneous Frequency Synchrosqueezing Transform and Enhanced AdaBoost for Power Quality Disturbance Detection
by Chencheng He, Yuyi Lu and Wenbo Wang
Symmetry 2026, 18(3), 475; https://doi.org/10.3390/sym18030475 - 10 Mar 2026
Abstract
The integration of renewable energy and power electronics has intensified the occurrence of complex power quality disturbances (PQDs), which increasingly threaten grid stability. To address the challenges of multi-class PQD identification under noisy conditions, this paper proposes a novel framework that combines an enhanced time–frequency analysis method with an optimized AdaBoost decision tree. The main contributions are three-fold: (1) We develop an instantaneous frequency adaptive Fourier synchrosqueezing transform (IFAFSST) equipped with a custom adaptive operator that aligns closely with the frequency modulation patterns in PQD signals, thereby improving time–frequency energy localization. (2) The IFAFSST outputs are decomposed into low-frequency and high-frequency components, from each of which a set of 16 discriminative features is extracted. (3) An improved AdaBoost classifier is introduced, incorporating forward feature selection and Hyperband-based hyperparameter optimization to enhance classification performance. Hyperband accelerates the optimization process by dynamically allocating computing resources and iteratively eliminating suboptimal configurations, enabling efficient determination of the optimal hyperparameters. The proposed method achieved an accuracy of 99.50% on simulated data containing 30 dB white noise and 98.30% on hardware-platform data. The framework handles 23 disturbance types — 7 single, 12 double compound, 3 triple compound, and 1 quadruple compound — and performs particularly well in composite-disturbance scenarios. With its high accuracy and practical applicability, the framework holds strong potential for smart grid monitoring and renewable energy integration.
(This article belongs to the Section Engineering and Materials)
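Hyperband's resource-allocation idea — score many configurations cheaply, discard the weak ones, and spend the freed budget on the survivors — can be sketched with its successive-halving core. The function and the toy search space below are illustrative, not the paper's implementation:

```python
def successive_halving(configs, evaluate, min_budget=1, eta=2):
    """Successive halving, the inner loop of Hyperband.

    Evaluate every configuration on a small budget, keep the best
    1/eta fraction, multiply the budget by eta, and repeat until one
    configuration survives."""
    budget = min_budget
    survivors = list(configs)
    while len(survivors) > 1:
        scored = sorted(survivors, key=lambda c: evaluate(c, budget))
        survivors = scored[:max(1, len(survivors) // eta)]  # lower loss wins
        budget *= eta  # promoted configs get more resource (epochs, data, ...)
    return survivors[0]

# Toy search: pick the hyperparameter whose stand-in "validation loss"
# is smallest; in the paper the budget would control training effort.
best = successive_halving(
    configs=[0.001, 0.01, 0.1, 1.0],
    evaluate=lambda lr, budget: abs(lr - 0.01),
)
```

Full Hyperband additionally sweeps several (number of configs, starting budget) brackets and runs this loop in each, trading breadth of exploration against per-configuration budget.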

19 pages, 1065 KB  
Article
Entropy-Based Dual-Teacher Distillation for Efficient Motor Imagery EEG Classification
by Zefeng Xu and Zhuliang Yu
Entropy 2026, 28(3), 310; https://doi.org/10.3390/e28030310 - 10 Mar 2026
Abstract
Motor imagery (MI) EEG classification is a key component of noninvasive brain–computer interfaces (BCIs) and often must satisfy strict latency constraints in online or edge deployments. Although ensembling can reliably improve MI decoding accuracy, its inference cost grows linearly with the number of ensemble members, making it impractical for low-latency applications. To address these issues, we propose an entropy-based dual-teacher distillation framework that transfers ensemble teacher knowledge to a single deployable backbone. From an information-theoretic perspective, two failure modes are common in small and noisy MI datasets: elevated predictive entropy (noisy decisions) and large fluctuation across late training epochs (unstable convergence and unreliable checkpoint selection). Thus, we introduce an exponential moving average (EMA) teacher with entropy-gated activation as a low-pass filter in parameter space to reduce the student's prediction noise. In addition, a two-stage cosine annealing schedule is employed to suppress late-stage oscillations and improve the robustness of final checkpoint selection. Experiments on two public MI benchmarks (BCI Competition IV-2a and IV-2b) with three representative backbones (EEGNet, ShallowConvNet, and ATCNet) under the subject-dependent protocol show consistent accuracy gains over the ensemble teacher and strong distillation baselines. On IV-2a, our method achieves an average accuracy of 0.7713 across the backbones, surpassing both the original models (0.7222) and the corresponding ensembles (0.7482); on IV-2b, it achieves 0.8583 versus 0.8432 (original) and 0.8529 (ensemble).
(This article belongs to the Special Issue Entropy Analysis of Electrophysiological Signals)
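A minimal sketch of the entropy-gated EMA-teacher idea, assuming a flat parameter list and natural-log entropy; the class name, decay, and gate threshold are illustrative assumptions, not the authors' implementation:

```python
import math

def predictive_entropy(probs):
    # Shannon entropy (natural log) of a softmax output
    return -sum(p * math.log(p) for p in probs if p > 0)

class EmaTeacher:
    """Exponential moving average of student weights, acting as a low-pass
    filter in parameter space. Its soft targets are consulted only when the
    student's prediction looks noisy, i.e. its entropy exceeds a gate."""

    def __init__(self, params, decay=0.99, gate=0.5):
        self.shadow = list(params)
        self.decay = decay
        self.gate = gate

    def update(self, student_params):
        # shadow <- decay * shadow + (1 - decay) * student, per parameter
        d = self.decay
        self.shadow = [d * s + (1 - d) * p
                       for s, p in zip(self.shadow, student_params)]

    def gated(self, student_probs):
        # True -> use the EMA teacher's targets for distillation this step
        return predictive_entropy(student_probs) > self.gate
```

In a training loop, `update` would be called after every optimizer step, while `gated` decides per batch whether the EMA teacher contributes to the distillation loss alongside the ensemble teacher.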
