Search Results (1,286)

Search Parameters:
Keywords = quantum algorithm

21 pages, 3656 KB  
Article
Characterization of the Physical Image Quality of a Clinical Photon-Counting Computed Tomography Scanner Across Multiple Acquisition and Reconstruction Settings
by Patrizio Barca, Luigi Masturzo, Luca De Masi, Antonio Traino, Filippo Cademartiri and Marco Giannelli
Appl. Sci. 2026, 16(3), 1322; https://doi.org/10.3390/app16031322 - 28 Jan 2026
Viewed by 62
Abstract
This phantom study presents a thorough characterization of the physical image quality of a clinical whole-body photon-counting computed tomography (PCCT) scanner. Multiple quality metrics—noise, noise power spectrum (NPS), task transfer function (TTF), and detectability index (d′)—were analyzed across a range of reconstruction algorithms (filtered back projection, FBP, and Quantum Iterative Reconstruction, QIR, with strength levels Q1–Q4) and reconstruction kernels (Br40/Br60/Br76/Br98). Both standard (STD, 0.4 mm slice thickness) and high-resolution (HR, 0.2 mm slice thickness) reconstruction modes were assessed. QIR significantly reduced image noise (60–95%) compared to FBP, particularly with sharper kernels. Spatial resolution improved with increasing QIR strength level for smoother kernels and was further enhanced using HR mode with sharp kernels. HR mode exhibited better noise performance than STD with sharper reconstructions, due to the small pixel effect. While STD mode showed higher d′ values for larger objects, HR mode outperformed it for smaller objects and sharper kernels. Compared to a conventional energy-integrating computed tomography system, the PCCT scanner showed superior d′ values under similar settings. Overall, this study highlights the complex interplay between acquisition and reconstruction parameters in determining image quality, confirms the potential of PCCT technology, and underscores the need for further clinical validation.
(This article belongs to the Special Issue Advances in Diagnostic Radiology)
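
As a rough illustration of two of the metrics named in this abstract, the sketch below estimates a 2D noise power spectrum from a stack of noise-only ROIs and evaluates a non-prewhitening detectability index from a TTF and a task function. The ROI stack, pixel size, and the Gaussian TTF/task models are synthetic placeholders, not values from the study.

```python
# Sketch (not from the paper): estimate a 2D noise power spectrum (NPS) from a
# stack of noise-only ROIs and evaluate a non-prewhitening detectability index.
# All inputs below are synthetic placeholders chosen only to make the example run.
import numpy as np

def nps_2d(rois, pixel_mm):
    """Ensemble NPS: mean of |FFT(ROI - mean)|^2, scaled by (dx*dy)/(Nx*Ny)."""
    rois = np.asarray(rois, dtype=float)
    _, nx, ny = rois.shape
    detrended = rois - rois.mean(axis=(1, 2), keepdims=True)
    spectra = np.abs(np.fft.fft2(detrended)) ** 2
    return spectra.mean(axis=0) * (pixel_mm ** 2) / (nx * ny)

def detectability_npw(ttf, task, nps):
    """Non-prewhitening observer: d'^2 = [sum (TTF*W)^2]^2 / sum[(TTF*W)^2 * NPS]."""
    signal = (ttf * task) ** 2
    return float(np.sqrt(signal.sum() ** 2 / np.sum(signal * nps)))

rng = np.random.default_rng(0)
pixel_mm = 0.4
rois = rng.normal(0.0, 10.0, size=(64, 128, 128))   # synthetic noise-only ROIs (HU)
nps = nps_2d(rois, pixel_mm)

fx = np.fft.fftfreq(128, d=pixel_mm)
fr = np.sqrt(fx[:, None] ** 2 + fx[None, :] ** 2)    # radial spatial frequency (1/mm)
ttf = np.exp(-(fr / 0.8) ** 2)                       # placeholder TTF model
task = np.exp(-(fr / 0.3) ** 2)                      # placeholder task function
print(f"d' (NPW) = {detectability_npw(ttf, task, nps):.2f}")
```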

21 pages, 659 KB  
Article
Digital Quantum Simulation of Wavepacket Correlations in a Chemical Reaction
by Shah Ishmam Mohtashim and Sabre Kais
Entropy 2026, 28(2), 144; https://doi.org/10.3390/e28020144 - 28 Jan 2026
Viewed by 209
Abstract
We present hybrid quantum–classical algorithms to compute time-dependent Møller wavepacket correlation functions via digital quantum simulation. Reactant and product channel wavepackets are encoded as qubit states, evolved under a discretized molecular Hamiltonian, and their correlation is reconstructed using both a modified Hadamard test and a multi-fidelity estimation (MFE) protocol. The method is applied to the collinear H + H2 exchange reaction on a London–Eyring–Polanyi–Sato potential energy surface. Quantum-estimated correlation functions show quantitative agreement with high-resolution classical wavepacket simulations across the full time domain, reproducing both short-time scattering peaks and long-time oscillatory dynamics. The ancilla-free MFE protocol achieves matching results with reduced circuit depth. These results provide a proof of principle that digital quantum circuits can be used to accurately calculate the wavepacket correlation functions for a benchmark chemical reaction system.
(This article belongs to the Section Quantum Information)
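
For readers unfamiliar with the primitive this abstract builds on, the sketch below simulates a textbook Hadamard test on NumPy statevectors, recovering Re⟨ψ|U|ψ⟩ (or the imaginary part) from ancilla measurement statistics. The unitary and state are arbitrary placeholders, and this is the standard same-state form of the test, not the paper's modified cross-correlation variant.

```python
# Sketch: textbook Hadamard test on NumPy statevectors. P(ancilla=0) - P(ancilla=1)
# equals Re<psi|U|psi>, or Im<psi|U|psi> if an S-dagger gate is inserted on the
# ancilla. The unitary and state below are arbitrary placeholders, not the
# paper's wavepackets or propagator.
import numpy as np

def hadamard_test(U, psi, imaginary=False):
    dim = psi.size
    state = np.kron(np.array([1.0, 0.0], dtype=complex), psi)   # |0>_anc (x) |psi>
    H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
    S_dag = np.diag([1.0, -1.0j])
    I = np.eye(dim, dtype=complex)

    state = np.kron(H, I) @ state                     # H on the ancilla
    if imaginary:
        state = np.kron(S_dag, I) @ state             # phase shift for the imaginary part
    zero = np.zeros((dim, dim), dtype=complex)
    state = np.block([[I, zero], [zero, U]]) @ state  # controlled-U (control = ancilla)
    state = np.kron(H, I) @ state                     # H on the ancilla again
    p0 = np.sum(np.abs(state[:dim]) ** 2)             # probability of ancilla outcome 0
    p1 = np.sum(np.abs(state[dim:]) ** 2)
    return p0 - p1

# Toy check against a direct overlap computation
rng = np.random.default_rng(1)
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
U, _ = np.linalg.qr(A)                                # random 2-qubit unitary
psi = rng.normal(size=4) + 1j * rng.normal(size=4)
psi /= np.linalg.norm(psi)
overlap = psi.conj() @ U @ psi
print(hadamard_test(U, psi), overlap.real)            # should agree
print(hadamard_test(U, psi, imaginary=True), overlap.imag)
```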

25 pages, 5185 KB  
Review
A Review of Routing and Resource Optimization in Quantum Networks
by Md. Shazzad Hossain Shaon and Mst Shapna Akter
Electronics 2026, 15(3), 557; https://doi.org/10.3390/electronics15030557 - 28 Jan 2026
Viewed by 67
Abstract
Quantum computing is an emerging discipline that uses the principles of quantum physics to perform calculations that are not feasible on conventional computers. Quantum bits, called qubits, can exist in superposition states, making them suitable for parallel processing, in contrast to traditional bits. For complex challenges such as proof simulation, optimization, and cryptography, quantum entanglement and quantum interference can provide exponential improvements. This survey focuses on recent advances in entanglement routing, quantum key distribution (QKD), and qubit management for short- and long-distance quantum communication. It studies optimization approaches such as integer programming, reinforcement learning, and collaborative methods, evaluating their efficacy in terms of throughput, scalability, and fairness. Despite improvements, challenges remain in dynamic network adaptation, resource limits, and error correction. Addressing these difficulties necessitates the creation of hybrid quantum–classical algorithms for efficient resource allocation, hardware-aware designs to improve real-world deployment, and fault-tolerant architectures. Therefore, this survey suggests that future research focus on integrating quantum networks with existing classical infrastructure to improve security, dependability, and mainstream acceptance. This connection has significance for applications that require secure communication, financial transactions, and critical infrastructure protection.

10 pages, 812 KB  
Proceeding Paper
Hybrid Quantum-Fuzzy Control for Intelligent Steam Heating Management in Thermal Power Plants
by Noilakhon Yakubova, Ayhan Istanbullu, Isomiddin Siddiqov and Komil Usmanov
Eng. Proc. 2025, 117(1), 33; https://doi.org/10.3390/engproc2025117033 - 26 Jan 2026
Viewed by 84
Abstract
In recent years, intelligent control of complex thermodynamic systems has gained increasing attention due to global demands for higher energy efficiency and reduced environmental impact in industrial settings. This study explores the integration of quantum control methodologies, grounded in established principles of quantum mechanics, into the automation of thermal processes in power plant operations. Specifically, it investigates a hybrid quantum-fuzzy control system for managing steam heating processes, a critical subsystem in thermal power generation. Unlike conventional control strategies that often struggle with nonlinearity, time delays, and parameter uncertainty, the proposed method incorporates quantum-inspired optimization algorithms to enhance adaptability and robustness. The quantum component, based on recognized models of coherent control and quantum interference, is utilized to refine the inference mechanisms within the fuzzy logic framework, allowing more precise handling of state transitions in multivariable environments. A simulation model was constructed using validated physical parameters of a pilot-scale steam heating unit, and the methodology was tested against baseline scenarios with conventional proportional-integral-derivative (PID) control. Experimental protocols and statistical analysis confirmed measurable improvements: up to 25% reduction in fuel usage under specific operational conditions, with an average of 1 to 2% improvement in energy efficiency. The results suggest that quantum-enhanced intelligent control offers a feasible pathway for bridging the gap between quantum theoretical models and macroscopic thermal systems, contributing to the development of more energy-resilient industrial automation solutions.
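
The fuzzy half of this hybrid can be sketched as a minimal Mamdani-style controller: triangular membership functions over a temperature-error universe, a three-rule base, and centroid defuzzification. The rule base, universes, and setpoints below are illustrative assumptions; the paper's quantum-inspired refinement of the inference mechanism is not reproduced.

```python
# Sketch: minimal single-input Mamdani fuzzy controller (triangular memberships,
# centroid defuzzification) for a steam-temperature error signal. Rule base and
# universes are illustrative assumptions, not the paper's design.
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    return np.maximum(np.minimum((x - a) / (b - a + 1e-12), (c - x) / (c - b + 1e-12)), 0.0)

def fuzzy_valve_command(error, u=np.linspace(-1.0, 1.0, 201)):
    """Map temperature error (setpoint minus measurement, deg C) to a normalized valve adjustment."""
    # fuzzify the error: negative / near-zero / positive
    mu_neg = tri(error, -20.0, -10.0, 0.0)
    mu_zero = tri(error, -5.0, 0.0, 5.0)
    mu_pos = tri(error, 0.0, 10.0, 20.0)
    # rules: negative error -> close valve, near-zero -> hold, positive -> open
    out_close = np.minimum(mu_neg, tri(u, -1.0, -0.6, -0.2))
    out_hold = np.minimum(mu_zero, tri(u, -0.3, 0.0, 0.3))
    out_open = np.minimum(mu_pos, tri(u, 0.2, 0.6, 1.0))
    aggregated = np.maximum.reduce([out_close, out_hold, out_open])
    if aggregated.sum() == 0:
        return 0.0
    return float(np.sum(u * aggregated) / np.sum(aggregated))   # centroid defuzzification

for e in (-12.0, -2.0, 0.0, 3.0, 15.0):
    print(e, round(fuzzy_valve_command(e), 3))
```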

24 pages, 1526 KB  
Article
EQARO-ECS: Efficient Quantum ARO-Based Edge Computing and SDN Routing Protocol for IoT Communication to Avoid Desertification
by Thair A. Al-Janabi, Hamed S. Al-Raweshidy and Muthana Zouri
Sensors 2026, 26(3), 824; https://doi.org/10.3390/s26030824 - 26 Jan 2026
Viewed by 160
Abstract
Desertification is the impoverishment of fertile land, caused by various factors and environmental effects such as temperature and humidity. An appropriate Internet of Things (IoT) architecture, routing algorithms based on artificial intelligence (AI), and emerging technologies are essential to monitor and avoid desertification. However, classical AI algorithms often become trapped in local optima and consume more energy. This research proposes an improved multi-objective routing protocol, namely the efficient quantum (EQ) artificial rabbit optimisation (ARO) based on edge computing (EC) and a software-defined network (SDN) concept (EQARO-ECS), which provides the best cluster table for the IoT network to avoid desertification. The proposed EQARO-ECS protocol reduces energy consumption and improves data analysis speed by deploying new technologies, such as the Cloud, SDN, EC, and quantum technique-based ARO. The protocol increases data analysis speed through the suggested iterated quantum gates within the ARO, which allow the search to escape local optima and rapidly reach the global optimum. It also helps avoid desertification through a new objective function that considers energy consumption, communication cost, and desertification parameters. The simulation results show that the proposed EQARO-ECS protocol increases accuracy and improves network lifetime by reducing energy depletion compared with other algorithms.

28 pages, 4582 KB  
Article
Quantum-Behaved Loser Reverse-Learning Differential Evolution Algorithm-Based Path Planning for Unmanned Aerial Vehicle
by Zhuoyun Chen, Xiangyin Zhang and Yao Lu
Actuators 2026, 15(2), 74; https://doi.org/10.3390/act15020074 - 26 Jan 2026
Viewed by 110
Abstract
This paper proposes the Quantum-behaved Loser Reverse-learning Differential Evolution (QLRDE) algorithm to address the inherent limitations of the standard Differential Evolution (DE) algorithm, including slow convergence and premature stagnation in local optima. QLRDE incorporates three innovations: quantum-behaved mutation strategies, inspired by quantum mechanics, that suppress premature convergence; a Loser Reverse-Learning Mechanism that enhances diversity by reconstructing inferior individuals through opposition-based learning; and an adaptive parameter adjustment mechanism that balances exploration and exploitation to improve robustness and convergence efficiency. Experimental evaluations on twelve benchmark functions confirm that QLRDE demonstrates better performance than existing algorithms in terms of search capability and convergence speed. Furthermore, QLRDE is applied to the 3D UAV path-planning problem, generating smooth B-spline flight paths while incorporating real-world constraints into the cost function. Simulation results confirm that QLRDE outperforms several competing algorithms with respect to path quality, computational efficiency, and robustness.
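
The sketch below is a generic DE/rand/1/bin loop with an opposition-based reconstruction step applied to the worst individuals each generation, loosely mirroring the loser reverse-learning idea described above. It is not the paper's QLRDE: the quantum-behaved mutation and adaptive parameter control are omitted, and F, CR, the loser count, and the Rastrigin test function are arbitrary choices.

```python
# Sketch: DE/rand/1/bin with opposition-based "reverse learning" of the worst
# individuals; a generic illustration, not the paper's QLRDE operators.
import numpy as np

def de_with_opposition(f, bounds, pop_size=30, iters=200, F=0.6, CR=0.9, losers=5, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, float).T
    dim = lo.size
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    fit = np.apply_along_axis(f, 1, pop)
    for _ in range(iters):
        for i in range(pop_size):
            a, b, c = rng.choice([j for j in range(pop_size) if j != i], 3, replace=False)
            mutant = np.clip(pop[a] + F * (pop[b] - pop[c]), lo, hi)   # DE/rand/1 mutation
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True                            # keep at least one mutant gene
            trial = np.where(cross, mutant, pop[i])
            f_trial = f(trial)
            if f_trial <= fit[i]:                                      # greedy selection
                pop[i], fit[i] = trial, f_trial
        # opposition-based reconstruction of the "losers" (worst individuals)
        for i in np.argsort(fit)[-losers:]:
            opp = lo + hi - pop[i]
            f_opp = f(opp)
            if f_opp < fit[i]:
                pop[i], fit[i] = opp, f_opp
    best = np.argmin(fit)
    return pop[best], fit[best]

# Toy usage: 10-dimensional Rastrigin benchmark
def rastrigin(x):
    return float(10 * x.size + np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x)))

x_best, f_best = de_with_opposition(rastrigin, bounds=[(-5.12, 5.12)] * 10)
print(x_best.round(3), f_best)
```
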
23 pages, 1195 KB  
Article
Deeply Pipelined NTT Accelerator with Ping-Pong Memory and LUT-Only Barrett Reduction for Post-Quantum Cryptography
by Omar S. Sonbul, Muhammad Rashid, Muhammad I. Masud, Mohammed Aman and Amar Y. Jaffar
Electronics 2026, 15(3), 513; https://doi.org/10.3390/electronics15030513 - 25 Jan 2026
Viewed by 106
Abstract
Lattice-based post-quantum cryptography relies on fast polynomial multiplication. The Number-Theoretic Transform (NTT) is the key operation that enables this acceleration. To provide high throughput and low latency while keeping the area overhead small, hardware implementations of the NTT are essential. This is particularly true for resource-constrained devices. However, existing NTT accelerators either achieve high throughput at the cost of large area overhead or provide compact designs with limited pipelining and low operating frequency. Therefore, this article presents a compact, seven-stage pipelined NTT accelerator architecture for post-quantum cryptography, using the CRYSTALS-Kyber algorithm as a case study. The CRYSTALS-Kyber algorithm is selected due to its NIST standardization, strong security guarantees, and suitability for hardware acceleration. Specifically, a unified three-stage pipelined butterfly unit is designed using a single DSP48E1 block for the required integer multiplication. In contrast, the modular reduction stage is implemented using a four-stage pipelined, lookup-table (LUT)-only Barrett reduction unit. The term “LUT-only” refers strictly to the reduction logic and not to the butterfly multiplication. Furthermore, two dual-port BRAM18 blocks are used in a ping-pong manner to hold intermediate and final coefficients. In addition, a simple finite-state machine controller is implemented, which manages all forward NTT (FNTT) and inverse NTT (INTT) stages. For validation, the proposed design is realized on a Xilinx Artix-7 FPGA. It uses only 503 LUTs, 545 flip-flops, 1 DSP48E1 block, and 2 BRAM18 blocks. The complete FNTT and INTT with final rescaling require 1029 and 1285 clock cycles, respectively. At 200 MHz, these correspond to execution times of 5.14 µs for the FNTT and 6.42 µs for the INTT.
(This article belongs to the Section Computer Science & Engineering)
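
To make the two arithmetic building blocks named above concrete, the sketch below implements Barrett reduction for the Kyber/ML-KEM prime q = 3329 and a plain iterative cyclic NTT over Z_q[x]/(x^n - 1) that uses it, checked against naive cyclic convolution. Kyber's own transform is negacyclic (over x^256 + 1) and incomplete, so this is a simplified software illustration, not the accelerator's datapath.

```python
# Sketch: Barrett reduction for q = 3329 and a cyclic radix-2 NTT built on it.
# Software illustration only; Kyber's NTT is negacyclic and incomplete.
import random

Q = 3329
MU = (1 << 24) // Q            # Barrett constant: floor(2^(2k)/q) with k = 12

def barrett_reduce(x):
    """Reduce 0 <= x < Q^2 modulo Q with one multiply, one shift, and a
    conditional subtraction."""
    t = (x * MU) >> 24         # quotient estimate
    r = x - t * Q
    while r >= Q:
        r -= Q
    return r

def ntt(a, omega):
    """Iterative Cooley-Tukey transform; len(a) must be a power of two."""
    a = list(a)
    n = len(a)
    j = 0
    for i in range(1, n):      # bit-reversal permutation
        bit = n >> 1
        while j & bit:
            j ^= bit
            bit >>= 1
        j |= bit
        if i < j:
            a[i], a[j] = a[j], a[i]
    length = 2
    while length <= n:
        w_step = pow(omega, n // length, Q)
        for start in range(0, n, length):
            w = 1
            for k in range(start, start + length // 2):
                u = a[k]
                v = barrett_reduce(a[k + length // 2] * w)
                a[k] = (u + v) % Q
                a[k + length // 2] = (u - v) % Q
                w = barrett_reduce(w * w_step)
        length <<= 1
    return a

# Usage check: pointwise products in the NTT domain give cyclic convolution.
n = 256
ZETA = 17                      # primitive 256th root of unity mod 3329 (FIPS 203)
ZETA_INV, N_INV = pow(ZETA, -1, Q), pow(n, -1, Q)
f = [random.randrange(Q) for _ in range(n)]
g = [random.randrange(Q) for _ in range(n)]
H = [barrett_reduce(x * y) for x, y in zip(ntt(f, ZETA), ntt(g, ZETA))]
h = [barrett_reduce(x * N_INV) for x in ntt(H, ZETA_INV)]     # inverse transform
naive = [0] * n
for i in range(n):
    for j in range(n):
        naive[(i + j) % n] = (naive[(i + j) % n] + f[i] * g[j]) % Q
assert h == naive
print("NTT-based product matches naive cyclic convolution")
```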

26 pages, 911 KB  
Article
Logarithmic-Size Post-Quantum Linkable Ring Signatures Based on Aggregation Operations
by Minghui Zheng, Shicheng Huang, Deju Kong, Xing Fu, Qiancheng Yao and Wenyi Hou
Entropy 2026, 28(1), 130; https://doi.org/10.3390/e28010130 - 22 Jan 2026
Viewed by 85
Abstract
Linkable ring signatures are a type of ring signature scheme that can protect the anonymity of signers while allowing the public to verify whether the same signer has signed the same message multiple times. This functionality makes linkable ring signatures suitable for applications such as cryptocurrencies and anonymous voting systems, achieving the dual goals of identity privacy protection and misuse prevention. However, existing post-quantum linkable ring signature schemes often suffer from issues such as excessive linear data growth arising from the adoption of post-quantum signature algorithms, and high circuit complexity resulting from the use of post-quantum zero-knowledge proof protocols. To address these issues, a logarithmic-size post-quantum linkable ring signature scheme based on aggregation operations is proposed. The scheme constructs a Merkle tree from ring members’ public keys via a hash algorithm to achieve logarithmic-scale signing and verification operations. Moreover, it introduces, for the first time, a post-quantum aggregate signature scheme to replace post-quantum zero-knowledge proof protocols, thereby effectively avoiding the construction of complex circuits. Scheme analysis confirms that the proposed scheme meets the correctness requirements of linkable ring signatures. In terms of security, the scheme satisfies the anonymity, unforgeability, and linkability requirements of linkable ring signatures. Moreover, the aggregation process does not leak information about the signing members, ensuring strong privacy protection. Experimental results demonstrate that, when the ring size scales to 1024 members, our scheme outperforms the existing Dilithium-based logarithmic post-quantum ring signature scheme, with nearly 98.25% lower signing time, 98.90% lower verification time, and 99.81% smaller signature size.
(This article belongs to the Special Issue Quantum Information Security)
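
The logarithmic size comes from the Merkle-tree component mentioned above; the sketch below builds such a tree over placeholder public keys with SHA3-256 and produces the O(log N) authentication path for one ring member. It illustrates only the tree, not the lattice-based aggregate signature or the linkability mechanism.

```python
# Sketch: Merkle tree over ring members' public keys with SHA3-256, plus the
# logarithmic-size authentication path for one member. Placeholder keys; the
# full linkable ring signature scheme is not reproduced.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha3_256(data).digest()

def merkle_tree(leaves):
    """Return the list of levels, from hashed leaves up to the root."""
    level = [h(pk) for pk in leaves]
    levels = [level]
    while len(level) > 1:
        if len(level) % 2:                      # duplicate last node if the level is odd
            level = level + [level[-1]]
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        levels.append(level)
    return levels

def auth_path(levels, index):
    """Sibling hashes from leaf to root: O(log N) values for N ring members."""
    path = []
    for level in levels[:-1]:
        sibling = index ^ 1
        if sibling >= len(level):               # odd level: sibling is the duplicate
            sibling = index
        path.append(level[sibling])
        index //= 2
    return path

def verify(leaf_pk, index, path, root):
    node = h(leaf_pk)
    for sibling in path:
        node = h(node + sibling) if index % 2 == 0 else h(sibling + node)
        index //= 2
    return node == root

# Toy usage with 1024 placeholder "public keys"
ring = [f"pk-{i}".encode() for i in range(1024)]
levels = merkle_tree(ring)
root = levels[-1][0]
path = auth_path(levels, 42)
print(len(path), verify(ring[42], 42, path, root))   # 10 True
```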

37 pages, 483 KB  
Review
Lattice-Based Cryptographic Accelerators for the Post-Quantum Era: Architectures, Optimizations, and Implementation Challenges
by Hua Yan, Lei Wu, Qiming Sun and Pengzhou He
Electronics 2026, 15(2), 475; https://doi.org/10.3390/electronics15020475 - 22 Jan 2026
Viewed by 163
Abstract
The imminent threat of large-scale quantum computers to modern public-key cryptographic devices has led to extensive research into post-quantum cryptography (PQC). Lattice-based schemes have proven to be the top candidate among existing PQC schemes due to their strong security guarantees, versatility, and relatively efficient operations. However, the computational cost of lattice-based algorithms—including various arithmetic operations such as the Number Theoretic Transform (NTT), polynomial multiplication, and sampling—poses considerable performance challenges in practice. This survey offers a comprehensive review of hardware acceleration for lattice-based cryptographic schemes, specifically the architectural and implementation details of the standardized algorithms in this category: CRYSTALS-Kyber, CRYSTALS-Dilithium, and FALCON (Fast Fourier Lattice-Based Compact Signatures over NTRU). It examines optimization measures at various levels, such as algorithmic optimization, arithmetic unit design, memory hierarchy management, and system integration. The paper compares the various performance measures (throughput, latency, area, and power) of Field-Programmable Gate Array (FPGA) and Application-Specific Integrated Circuit (ASIC) implementations. We also address major issues related to implementation, side-channel resistance, resource constraints within IoT (Internet of Things) devices, and the trade-offs between performance and security. Finally, we point out new research opportunities and existing challenges, with implications for hardware accelerator design in the post-quantum cryptographic environment.
41 pages, 3913 KB  
Review
Advancing Bioconjugated Quantum Dots with Click Chemistry and Artificial Intelligence to Image and Treat Glioblastoma
by Pranav Kalaga and Swapan K. Ray
Cells 2026, 15(2), 185; https://doi.org/10.3390/cells15020185 - 19 Jan 2026
Viewed by 454
Abstract
Glioblastoma (GB) is one of the most aggressive and invasive cancers. Current treatment protocols for GB include surgical resection, radiotherapy, and chemotherapy with temozolomide. However, despite these treatments, physicians still struggle to effectively image, diagnose, and treat GB. As such, patients frequently experience recurrence of GB, demanding innovative strategies for early detection and effective therapy. Bioconjugated quantum dots (QDs) have emerged as powerful nanoplatforms for precision imaging and targeted drug delivery due to their unique optical properties, tunable size, and surface versatility. Due to their extremely small size, QDs can cross the blood–brain barrier and be used for precision imaging of GB. This review explores the integration of QDs with click chemistry for robust bioconjugation, focusing on artificial intelligence (AI) to advance GB therapy, mechanistic insights into cellular uptake and signaling, and strategies for mitigating toxicity. Click chemistry enables site-specific and stable conjugation of targeting ligands, peptides, and therapeutic agents to QDs, enhancing selectivity and functionalization. Algorithms driven by AI may facilitate predictive modeling, image reconstruction, and personalized treatment planning, optimizing QD design and therapeutic outcomes. We discuss molecular mechanisms underlying interactions of QDs with GB, including receptor-mediated endocytosis and intracellular trafficking, which influence biodistribution and therapeutic efficacy. The use of QDs in photodynamic therapy, which relies on reactive oxygen species to induce apoptotic cell death in GB cells, is an innovative approach also covered in this review. Finally, this review addresses concerns associated with the toxicity of metal-based QDs and highlights how QDs can be coupled with AI to develop new precision imaging methods for detecting and treating GB through the induction of apoptosis. By converging nanotechnology and computational intelligence, bioconjugated QDs represent a transformative platform for paving a safer path to smarter and more effective clinical interventions for GB.
(This article belongs to the Special Issue Cell Death Mechanisms and Therapeutic Opportunities in Glioblastoma)

35 pages, 504 KB  
Article
Introducing a Resolvable Network-Based SAT Solver Using Monotone CNF–DNF Dualization and Resolution
by Gábor Kusper and Benedek Nagy
Mathematics 2026, 14(2), 317; https://doi.org/10.3390/math14020317 - 16 Jan 2026
Viewed by 328
Abstract
This paper is a theoretical contribution that introduces a new reasoning framework for SAT solving based on resolvable networks (RNs). RNs provide a graph-based representation of propositional satisfiability in which clauses are interpreted as directed reaches between disjoint subsets of Boolean variables (nodes). Building on this framework, we introduce a novel RN-based SAT solver, called RN-Solver, which replaces local assignment-driven branching by global reasoning over token distributions. Token distributions, interpreted as truth assignments, are generated by monotone CNF–DNF dualization applied to white (all-positive) clauses. New white clauses are derived via resolution along private-pivot chains, and the solver’s progression is governed by a taxonomy of token distributions (black-blocked, terminal, active, resolved, and non-resolved). The main results establish the soundness and completeness of the RN-Solver. Experimentally, the solver performs very well on pigeonhole formulas, where the separation between white and black clauses enables effective global reasoning. In contrast, its current implementation performs poorly on random 3-SAT instances, highlighting both practical limitations and significant opportunities for optimization and theoretical refinement. The presented RN-Solver implementation is a proof of concept that validates the underlying theory; it is not a state-of-the-art competitive solver. One promising direction is the generalization of strongly connected components from directed graphs to resolvable networks. Finally, the token-based perspective naturally suggests a connection to token-superposition Petri net models.
(This article belongs to the Special Issue Graph Theory and Applications, 3rd Edition)
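
Monotone CNF–DNF dualization, the operation the abstract applies to white (all-positive) clauses, can be sketched as clause-by-clause expansion with absorption; the minimal DNF terms are exactly the minimal hitting sets of the clause sets. The example below is a generic illustration and does not reproduce the RN-specific token machinery.

```python
# Sketch: monotone CNF -> DNF dualization by clause-by-clause expansion with
# absorption (keep only minimal terms). Clauses are all-positive variable sets,
# matching the "white clause" setting; the paper's RN machinery is not shown.

def minimize(terms):
    """Drop any term that is a superset of another kept term (absorption)."""
    kept = []
    for t in sorted(set(terms), key=len):
        if not any(k <= t for k in kept):
            kept.append(t)
    return kept

def dualize(cnf):
    """Monotone CNF (iterable of variable sets) -> minimal DNF terms (frozensets)."""
    dnf = [frozenset()]
    for clause in cnf:
        dnf = minimize(frozenset(t | {v}) for t in dnf for v in clause)
    return dnf

# Example: (a or b) and (b or c) and (a or c) -> minimal terms {a,b}, {a,c}, {b,c}
cnf = [{"a", "b"}, {"b", "c"}, {"a", "c"}]
print(dualize(cnf))
```
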
32 pages, 1325 KB  
Review
AI-Based Prediction of Gene Expression in Single-Cell and Multiscale Genomics and Transcriptomics
by Ema Andreea Pălăștea, Irina-Mihaela Matache, Eugen Radu, Octavian Henegariu and Octavian Bucur
Int. J. Mol. Sci. 2026, 27(2), 801; https://doi.org/10.3390/ijms27020801 - 13 Jan 2026
Viewed by 366
Abstract
Omics research is changing the way medicine develops new strategies for diagnosis, prevention, and treatment. With the surge of advanced machine learning models tailored for omics analysis, recent research has shown improved results and pushed progress towards personalized medicine. The dissection of multiple layers of genetic information has provided new insights into precision medicine, at the same time raising issues related to data abundance. Studies focusing on the single-cell scale have upgraded the knowledge about gene expression, revealing the heterogeneity that governs the functioning of multicellular organisms. The amount of information gathered through such sequencing techniques often exceeds the human capacity for analysis. Understanding the underlying network of gene expression regulation requires advanced computational tools that can deal with the complex analytical data provided. The recent emergence of artificial intelligence-based frameworks, together with advances in quantum algorithms, has the potential to enhance multi-omics analyses, increasing the efficiency and reliability of gene expression profile prediction. The development of more accurate computational models will significantly reduce the error rates in interpreting large datasets. By making analytical workflows faster and more precise, these innovations make it easier to integrate and interrogate multi-omics data at scale. Deep learning (DL) networks perform well in terms of recognizing complex patterns and modeling non-linear relationships that enable the inference of gene expression profiles. Applications range from direct, DNA sequence-informed predictive modeling to transcriptomic and epigenetic analysis. Quantum computing, particularly through quantum machine learning methods, is being explored as a complementary approach for predictive modeling, with potential applications to complex gene interactions in increasingly large and high-dimensional biological datasets. Together, these tools are reshaping the study of complex biological data, while ongoing innovation in this field is driving progress towards personalized medicine. Overall, the combination of high-resolution omics and advanced computational tools marks an important shift toward more precise and data-driven clinical decision-making.

15 pages, 1393 KB  
Communication
Localization of Buried Ferromagnetic Targets Using a Rotating Magnetic Sensor Array with a Joint Optimization Algorithm
by Zifan Yuan, Xingen Liu, Changping Du and Mingyao Xia
Remote Sens. 2026, 18(2), 249; https://doi.org/10.3390/rs18020249 - 13 Jan 2026
Viewed by 132
Abstract
Buried ferromagnetic targets such as unexploded ordnance superimpose an additional magnetic field on the main geomagnetic field, which manifests as a magnetic anomaly signal that can be used for localization. This paper presents an alternative scheme for localization by using a rotating magnetic sensor array and a joint optimization algorithm. Multiple magnetic sensors are integrated into an automated rotating measurement platform to achieve efficient and convenient data acquisition. To solve the target’s position coordinates, we combine quantum particle swarm optimization (QPSO) with the genetic algorithm (GA) to develop a joint optimization algorithm, which we name QPSO-GA. The proposed algorithm couples QPSO’s advantages of rapid convergence and refined local search with the GA’s advantages of global exploration and diversity preservation. Field experiments demonstrate that the proposed measurement system and algorithm achieve an average localization error of less than ten centimeters in a scenario with multiple sensors for multiple targets within a survey area of 4 m by 4 m, meeting general application requirements.
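
The sketch below shows the core QPSO position update applied to a toy dipole-localization residual: a point-dipole forward model, a small synthetic sensor grid, and a known magnetic moment. None of these inputs come from the paper, and the GA hybridization step is omitted; it only illustrates the kind of inverse problem QPSO is being used for.

```python
# Sketch: QPSO update loop on a toy magnetic-dipole localization residual.
# Dipole model, sensor layout, and known moment are illustrative assumptions;
# the paper's GA hybridization is not reproduced.
import numpy as np

def dipole_field(r_sensor, r_src, m):
    """Flux density of a point dipole with moment m at r_src, seen at r_sensor."""
    mu0 = 4e-7 * np.pi
    d = r_sensor - r_src
    dist = np.linalg.norm(d, axis=-1, keepdims=True)
    d_hat = d / dist
    proj = np.sum(m * d_hat, axis=-1, keepdims=True)
    return mu0 / (4 * np.pi * dist ** 3) * (3 * d_hat * proj - m)

def qpso(objective, lo, hi, n_particles=40, iters=300, alpha=0.75, seed=0):
    rng = np.random.default_rng(seed)
    dim = lo.size
    x = rng.uniform(lo, hi, size=(n_particles, dim))
    pbest, pbest_f = x.copy(), np.apply_along_axis(objective, 1, x)
    for _ in range(iters):
        g = pbest[np.argmin(pbest_f)]                       # global best
        mbest = pbest.mean(axis=0)                          # mean of personal bests
        phi = rng.random((n_particles, dim))
        attractor = phi * pbest + (1 - phi) * g
        u = rng.random((n_particles, dim))
        sign = np.where(rng.random((n_particles, dim)) < 0.5, -1.0, 1.0)
        x = attractor + sign * alpha * np.abs(mbest - x) * np.log(1.0 / u)
        x = np.clip(x, lo, hi)
        f = np.apply_along_axis(objective, 1, x)
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
    best = np.argmin(pbest_f)
    return pbest[best], pbest_f[best]

# Toy inverse problem: recover a buried dipole position from 3-axis readings
sensors = np.array([[sx, sy, 0.0] for sx in np.linspace(0, 4, 5) for sy in np.linspace(0, 4, 5)])
true_pos, moment = np.array([1.7, 2.3, -1.1]), np.array([0.0, 0.0, 20.0])
data = dipole_field(sensors, true_pos, moment)

def residual(pos):
    return float(np.sum((dipole_field(sensors, pos, moment) - data) ** 2))

est, err = qpso(residual, lo=np.array([0, 0, -3.0]), hi=np.array([4, 4, 0.0]))
print(est.round(3), err)
```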

21 pages, 1073 KB  
Article
Near-Optimal Decoding Algorithm for Color Codes Using Population Annealing
by Fernando Martínez-García, Francisco Revson F. Pereira and Pedro Parrado-Rodríguez
Entropy 2026, 28(1), 91; https://doi.org/10.3390/e28010091 - 12 Jan 2026
Viewed by 263
Abstract
The development and use of large-scale quantum computers rely on integrating quantum error-correcting (QEC) schemes into the quantum computing pipeline. A fundamental part of the QEC protocol is the decoding of the syndrome to identify a recovery operation with a high success rate. In this work, we implement a decoder that finds the recovery operation with the highest success probability by mapping the decoding problem to a spin system and using Population Annealing to estimate the free energy of the different error classes. We study the decoder performance on a 4.8.8 color code lattice under different noise models, including code-capacity bit-flip and depolarizing noise as well as phenomenological noise, which accounts for noisy measurements. Performance reaches near-optimal thresholds for bit-flip and depolarizing noise and the highest reported threshold for phenomenological noise. This decoding algorithm can be applied to a wide variety of stabilizer codes, including surface codes and quantum Low-Density Parity Check (qLDPC) codes.
(This article belongs to the Special Issue Coding Theory and Its Applications)
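
As a generic illustration of the Population Annealing routine named above, the sketch below estimates ln Z for a small periodic 1D Ising chain by annealing a population of spin configurations with resampling and Metropolis sweeps, then compares the result with the exact transfer-matrix value. The mapping from color-code syndromes and error classes to a spin system is not reproduced; chain length, population size, and schedule are arbitrary.

```python
# Sketch: population annealing on a periodic 1D Ising chain, accumulating a
# free-energy estimate ln Z from the resampling weights. Generic illustration
# of the method; not the paper's decoder or spin mapping.
import numpy as np

def ising_energy(spins, J=1.0):
    return -J * np.sum(spins * np.roll(spins, -1, axis=-1), axis=-1)

def population_annealing(n_spins=16, pop_size=2000, betas=np.linspace(0, 1, 21),
                         sweeps=5, J=1.0, seed=0):
    rng = np.random.default_rng(seed)
    spins = rng.choice([-1, 1], size=(pop_size, n_spins))
    log_z = n_spins * np.log(2.0)                # ln Z at beta = 0
    for b_prev, b_next in zip(betas[:-1], betas[1:]):
        e = ising_energy(spins, J)
        log_w = -(b_next - b_prev) * e           # reweighting factors
        log_z += np.log(np.mean(np.exp(log_w - log_w.max()))) + log_w.max()
        p = np.exp(log_w - log_w.max())          # resample proportional to weights
        p /= p.sum()
        spins = spins[rng.choice(pop_size, size=pop_size, p=p)].copy()
        for _ in range(sweeps):                  # Metropolis sweeps at the new beta
            sites = rng.integers(n_spins, size=(pop_size, n_spins))
            for s in range(n_spins):
                i = sites[:, s]
                rows = np.arange(pop_size)
                left = spins[rows, (i - 1) % n_spins]
                right = spins[rows, (i + 1) % n_spins]
                delta_e = 2 * J * spins[rows, i] * (left + right)
                accept = rng.random(pop_size) < np.exp(-b_next * np.clip(delta_e, 0, None))
                spins[rows[accept], i[accept]] *= -1
    return log_z

# Exact ln Z for the periodic chain: Z = (2 cosh(beta*J))^N + (2 sinh(beta*J))^N
beta, n = 1.0, 16
exact = np.log((2 * np.cosh(beta)) ** n + (2 * np.sinh(beta)) ** n)
print(population_annealing(n_spins=n), exact)
```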

17 pages, 702 KB  
Article
Machine Learning the Decoherence Property of Superconducting and Semiconductor Quantum Devices from Graph Connectivity
by Quan Fu, Jie Liu, Xin Wang and Rui Xiong
Entropy 2026, 28(1), 89; https://doi.org/10.3390/e28010089 - 12 Jan 2026
Viewed by 282
Abstract
Quantum computing faces significant challenges from decoherence and noise, which limit the practical implementation of quantum algorithms. While substantial progress has been made in improving individual qubit coherence times, the collective behavior of interconnected qubit systems remains incompletely understood. The connectivity architecture plays a crucial role in determining overall system susceptibility to environmental noise, yet systematic characterization of this relationship has been hindered by computational complexity. We develop a machine learning framework that bridges graph features with quantum device characterization to predict decoherence lifetime directly from connectivity patterns. By representing quantum architectures as connected graphs and using 14 topological features as input to supervised learning models, we achieve accurate lifetime predictions with R² > 0.96 for both superconducting and semiconductor platforms. Our analysis reveals fundamentally distinct decoherence mechanisms: superconducting qubits show high sensitivity to global connectivity measures (betweenness centrality δ1 = 0.484, spectral entropy δ1 = 0.480), while semiconductor quantum dots exhibit exceptional sensitivity to system scale (node count δ2 = 0.919, importance = 1.860). The complete failure of cross-platform model transfer (R² scores of −0.39 and −433.60) emphasizes the platform-specific nature of optimal connectivity design. Our approach enables rapid assessment of quantum architectures without expensive simulations, providing practical guidance for noise-optimized quantum processor design.
(This article belongs to the Section Quantum Information)
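
The pipeline described above (graph features in, predicted lifetime out) can be mocked up as below: a handful of networkx connectivity features feed a random-forest regressor. The graphs are random Watts-Strogatz samples and the "lifetimes" are a synthetic function of the features plus noise, so the printed R² says nothing about real devices; the feature list is also only a small subset of the paper's 14.

```python
# Sketch: graph connectivity features -> regression, with synthetic targets.
# Illustrates the shape of the pipeline only; no device data is used.
import networkx as nx
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

def spectral_entropy(g):
    """Shannon entropy of the normalized Laplacian-eigenvalue distribution."""
    eig = np.abs(np.linalg.eigvalsh(nx.laplacian_matrix(g).toarray().astype(float)))
    p = eig / eig.sum() if eig.sum() > 0 else np.ones_like(eig) / eig.size
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

def graph_features(g):
    return [
        g.number_of_nodes(),
        g.number_of_edges(),
        nx.density(g),
        np.mean(list(nx.betweenness_centrality(g).values())),
        nx.average_clustering(g),
        spectral_entropy(g),
    ]

rng = np.random.default_rng(0)
graphs = [nx.connected_watts_strogatz_graph(int(n), k=4, p=0.3, seed=int(s))
          for n, s in zip(rng.integers(8, 40, 300), rng.integers(0, 1_000_000, 300))]
X = np.array([graph_features(g) for g in graphs])
y = 100.0 / (1.0 + 0.5 * X[:, 3] * X[:, 0]) + rng.normal(0, 0.5, len(graphs))  # synthetic "lifetime"

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("R^2 on held-out graphs:", round(r2_score(y_te, model.predict(X_te)), 3))
```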