Developing and Analyzing the Defect-Based Surface Codes Using Optimization Algorithms
Round 1
Reviewer 1 Report
Comments and Suggestions for Authors

This manuscript discusses a method for optimizing the defect layout in defect-based surface-code constructions to improve qubit utilization. The optimized layout shows a density comparable to that of a patch-based surface-code construction. However, there are several questions that remain to be clarified.
One of the main challenges for this work is persuading readers that defect-based surface codes should still be considered when patch-based surface codes are widely regarded as more resource-efficient. The authors support their position with two arguments: (1) for the same code distance, defect-based surface codes can achieve lower logical error rates than patch-based surface codes, and (2) patch-based constructions require ancillary qubit patches for CNOT operations, whereas defect-based surface codes do not, which makes the latter preferable for logical gate operations when code densities are similar.
Regarding point (1), the primary evidence is Fig. 10, in which an 18x18 lattice is used to build two distance-8 logical qubits and is compared with a single distance-8 patch-based logical qubit. I do not find this comparison fair, because an 18x18 array can hold more data qubits than two distance-12 patch-based logical qubits would require. To me, this basically shows that, given more physical qubits, a defect-based construction can yield lower logical error rates than a patch-based one with the same code distance, which does not seem to be an equal comparison.
For point (2), I agree that braiding in defect-based surface codes does not require auxiliary qubits. Nevertheless, during braiding for gate operations, the limited patch size may cause the effective code distance to change. It seems that this work does not present a clear method to guarantee that braiding preserves the code distance, so the argument is also weakly supported.
In addition, the manuscript frames layout design as a graph-optimization problem, but this abstraction originates from Ref. [8] of this manuscript. Consequently, the algorithmic novelty appears to be the adaptation of genetic algorithms to solve the problem.
In conclusion, given the present narrative, I do not see a significant potential impact, especially for the surface-code quantum-error-correction community. I recommend that the authors either provide stronger evidence that defect-based surface codes can match the resource efficiency of patch-based constructions or reshape the narrative to define and highlight the work’s contribution more clearly. Until then, I cannot recommend publication of this manuscript.
Author Response
Response to Reviewer 1 Comments
1. Summary
Thank you very much for taking the time to review our manuscript. We sincerely appreciate your thoughtful and constructive feedback. Below, we provide detailed responses to each of your comments. The corresponding revisions have been incorporated into the manuscript and are highlighted in colored text in the re-submitted files. To distinguish the changes, we have used red text for modifications made in response to Reviewer 1’s comments and blue text for those made in response to Reviewer 2’s comments. Additionally, each revision in the manuscript is labeled with tags such as (RX:CY) to indicate that the change addresses Reviewer X’s Comment Y, allowing for clear mapping between the reviewers’ comments and the associated modifications.
3. Point-by-point response to Comments and Suggestions for Authors

Reviewer 1: The authors support their position with two arguments: (1) for the same code distance, defect-based surface codes can achieve lower logical error rates than patch-based surface codes, and (2) patch-based constructions require ancillary qubit patches for CNOT operations, whereas defect-based surface codes do not, which makes the latter preferable for logical gate operations when code densities are similar.

Comment 1: Regarding point (1), the primary evidence is Fig. 10, in which an 18x18 lattice is used to build two distance-8 logical qubits and is compared with a single distance-8 patch-based logical qubit. I do not find this comparison fair, because an 18x18 array can hold more data qubits than two distance-12 patch-based logical qubits would require. To me, this basically shows that, given more physical qubits, a defect-based construction can yield lower logical error rates than a patch-based one with the same code distance, which does not seem to be an equal comparison.

Response 1: We acknowledge this concern and understand the need for a fairer comparison. Our intention was to study how the code distance affects the error behavior of surface codes by utilizing optimization algorithms in a different scenario. However, we agree that directly comparing two distance-8 defect-based logical qubits to a single distance-8 patch-based logical qubit may not provide a balanced assessment of efficiency. To address your concern, we removed the 8×8 surface code without holes from the comparison. Our primary goal in Subsection 4.4 is to highlight the influence of code distance on error characteristics, rather than to make a direct resource-efficiency comparison between patch-based and defect-based approaches.

Comment 2: For point (2), I agree that braiding in defect-based surface codes does not require auxiliary qubits. Nevertheless, during braiding for gate operations, the limited patch size may cause the effective code distance to change. It seems that this work does not present a clear method to guarantee that braiding preserves the code distance, so the argument is also weakly supported.

Response 2: Thank you for highlighting this important aspect. Indeed, preserving the effective code distance during braiding operations is essential for reliable quantum computation. Logical operations between qubits are performed by extending and contracting topological defects within the lattice structure, as described by Fowler et al. [17] and Brown et al. [21]. However, the effective code distance may vary during braiding due to the dynamic repositioning of defects. Brown et al. [21] discuss strategies for maintaining code distance during such logical operations, emphasizing the need for careful control of defect paths and adequate spacing to prevent logical errors. Nevertheless, detailed methodologies for implementing logical operations while explicitly preserving code distance are not fully developed in these works, and this remains a promising direction for future research. To highlight this challenge, we have added a description of braiding and its implications for code distance in the revised manuscript.

Comment 3: In addition, the manuscript frames layout design as a graph-optimization problem, but this abstraction originates from Ref. [8] of this manuscript. Consequently, the algorithmic novelty appears to be the adaptation of genetic algorithms to solve the problem.

Response 3: You are correct that the fundamental concept of modeling defect layouts as a graph optimization problem originates from previous works, particularly Ref. [8]. Squab, the software used in this work to simulate the surface code, is also based on Ref. [8], which describes the underlying framework of defect-based surface codes. In our manuscript, we briefly introduced the concept of defect-based surface codes and how they function. Our focus is not on reinventing the defect-based approach, but rather on evaluating its limitations and exploring improved constructions that can yield a greater number of logical qubits while maintaining acceptable logical error rates. To this end, we employ genetic algorithms (GA) and simulated annealing (SA) to address the trade-off between logical qubit density and error performance, an aspect that is not thoroughly discussed in Ref. [8]. While that work defines the theoretical properties and potential of defect-based codes, it does not analyze in detail how increasing the number of logical qubits impacts the logical error rate. This is the gap our study aims to fill. To address your concern, we revised the introduction to clearly present the paper's contributions.

Comment 4: In conclusion, given the present narrative, I do not see a significant potential impact, especially for the surface-code quantum-error-correction community. I recommend that the authors either provide stronger evidence that defect-based surface codes can match the resource efficiency of patch-based constructions or reshape the narrative to define and highlight the work’s contribution more clearly. Until then, I cannot recommend publication of this manuscript.

Response 4: We thank the reviewer for highlighting this concern. The primary aim of this paper is not to compare patch-based and defect-based methods for logical qubit encoding. Instead, our focus is specifically on exploring the limitations and potential of defect-based encoding. To address the reviewer's comments, we have removed from the introduction, results, and conclusions any statements that could imply claims about encoding-efficiency comparisons between these two methods. As indicated in our conclusion, we consider that a combined approach leveraging both strategies may lead to more efficient logical encodings, and we intend to investigate this direction in our future research.
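To make the density-versus-error trade-off mentioned in Response 3 concrete, the sketch below shows one way such an objective could be scored for a candidate defect layout. This is a minimal illustration under stated assumptions, not the authors' implementation: the helpers `count_logical_qubits` and `estimate_logical_error_rate` are hypothetical stand-ins for an external surface-code simulator such as Squab, and the penalty form is only one possible choice.

```python
# Hypothetical sketch (not the authors' code): a fitness function of the kind a GA or SA
# search could use to trade off logical-qubit count against a logical-error-rate budget.
# `count_logical_qubits` and `estimate_logical_error_rate` are assumed stand-ins for a
# real simulator back end; they are not real APIs.

def fitness(layout, count_logical_qubits, estimate_logical_error_rate,
            target_error_rate=1e-3, penalty=100.0):
    """Score a candidate defect layout: maximise logical qubits under an error-rate budget."""
    k = count_logical_qubits(layout)                  # logical qubits encoded by the holes
    p_logical = estimate_logical_error_rate(layout)   # e.g. a Monte Carlo estimate
    if p_logical <= target_error_rate:
        return k                                      # feasible layouts ranked by qubit count
    # infeasible layouts are penalised in proportion to how far they exceed the budget
    return k - penalty * (p_logical - target_error_rate)


if __name__ == "__main__":
    # Toy usage with stub functions in place of a real surface-code simulator.
    layout = [(3, 3), (3, 9)]                         # e.g. hole coordinates on the lattice
    score = fitness(layout,
                    count_logical_qubits=lambda holes: len(holes),
                    estimate_logical_error_rate=lambda holes: 5e-4)
    print(score)
```

A scalar fitness of this shape can serve directly as the GA fitness or, with the sign flipped, as the SA energy, so both optimizers explore the same trade-off surface.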
Author Response File: Author Response.pdf
Reviewer 2 Report
Comments and Suggestions for Authors

The study focuses on defect-based approaches capable of encoding multiple logical qubits. The authors evaluate the maximum number of logical qubits for a given error rate using the SA and GA optimization algorithms. They study the limitations of the defect-based approach and the impact of various hole types on logical qubit encoding. The development and analysis of defect-based surface codes is relevant to distributed quantum computing based on multiple-qubit systems using imperfect hardware.

Defect-based surface codes for encoding multiple-qubit systems are potentially very promising and represent a state-of-the-art approach to quantum computing. There are very few papers concerning this topic. For example, one of these few is the work by Nagayama et al. (cited as ref. 22), who also consider the surface-code error-correction subroutine on a defective lattice. Nagayama et al. performed a simulation of randomly placed faulty devices and showed that discarding bad lattices makes the ensemble better, showing the trade-off between the cost of culling and the strength of fault tolerance of an ensemble. In this work, the authors incorporate holes into the surface-code lattice that increase the number of encoded logical qubits, which gives an efficiency gain but increases the logical error rate due to the reduced code distance. Thus, the authors propose evaluating the maximum number of logical qubits for a given error rate with an optimization algorithm. Although the optimization algorithms used (Simulated Annealing and Genetic Algorithms) are not new (refs. 36, 37), the assessment of the maximum number of logical qubits for a given error rate can be regarded as an original contribution to the field of quantum computing, which addresses a specific gap in studying the trade-off between maximizing the number of encoded logical qubits and maintaining satisfactory error correction.

Compared to other published material in the subject area, this work adds a study of new possibilities in the application of defect-based approaches. The authors have obtained a number of practically important results and reached important conclusions about the application of the defect-based method, including the revelation of the properties of partially open holes, which help encode more logical qubits than closed holes, opening a new avenue for improving the code density relative to defect-free approaches. The work per se is a valuable contribution to the quantum computing field and is very well written, so the manuscript can be published as it is. However, a few specific issues can be addressed.
- Why have the authors chosen such high temperatures (up to 7200 K) as Tmax when considering the influence of SA hyperparameters on the number of obtained logical qubits in a 2D lattice?
- For the common reader, it would be interesting to learn how the discussed codes are physically implemented, for instance, what measurements are required to detect qubit erasures or how, in principle, noise generators are constructed for quantum computers in practice?
The conclusions are consistent with the evidence and arguments presented in the
manuscript, aptly addressing the main questions of the presented work. The references
are appropriate.
Comments for author File: Comments.pdf
Author Response
Response to Reviewer 2 Comments

1. Summary

Thank you very much for taking the time to review our manuscript. We sincerely appreciate your thoughtful and constructive feedback. Below, we provide detailed responses to each of your comments. The corresponding revisions have been incorporated into the manuscript and are highlighted in colored text in the re-submitted files. To distinguish the changes, we have used red text for modifications made in response to Reviewer 1’s comments and blue text for those made in response to Reviewer 2’s comments. Additionally, each revision in the manuscript is labeled with tags such as (RX:CY) to indicate that the change addresses Reviewer X’s Comment Y, allowing for clear mapping between the reviewers’ comments and the associated modifications.
2. Questions for General Evaluation

Does the introduction provide sufficient background and include all relevant references? Yes.
Is the research design appropriate? Yes.
Are the methods adequately described? Yes.
Are the results clearly presented? Yes.
Are the conclusions supported by the results? Yes.

Response and Revisions: Thank you for your careful review of our work and for providing constructive feedback that helped improve the quality and clarity of the manuscript.
3. Point-by-point response to Comments and Suggestions for Authors

Reviewer 2: The study focuses on defect-based approaches capable of encoding multiple logical qubits. The authors evaluate the maximum number of logical qubits for a given error rate using the SA and GA optimization algorithms. They study the limitations of the defect-based approach and the impact of various hole types on logical qubit encoding. The development and analysis of defect-based surface codes is relevant to distributed quantum computing based on multiple-qubit systems using imperfect hardware. Defect-based surface codes for encoding multiple-qubit systems are potentially very promising and represent a state-of-the-art approach to quantum computing. There are very few papers concerning this topic. For example, one of these few is the work by Nagayama et al. (cited as ref. 22), who also consider the surface-code error-correction subroutine on a defective lattice. Nagayama et al. performed a simulation of randomly placed faulty devices and showed that discarding bad lattices makes the ensemble better, showing the trade-off between the cost of culling and the strength of fault tolerance of an ensemble. In this work, the authors incorporate holes into the surface-code lattice that increase the number of encoded logical qubits, which gives an efficiency gain but increases the logical error rate due to the reduced code distance. Thus, the authors propose evaluating the maximum number of logical qubits for a given error rate with an optimization algorithm. Although the optimization algorithms used (Simulated Annealing and Genetic Algorithms) are not new (refs. 36, 37), the assessment of the maximum number of logical qubits for a given error rate can be regarded as an original contribution to the field of quantum computing, which addresses a specific gap in studying the trade-off between maximizing the number of encoded logical qubits and maintaining satisfactory error correction. Compared to other published material in the subject area, this work adds a study of new possibilities in the application of defect-based approaches. The authors have obtained a number of practically important results and reached important conclusions about the application of the defect-based method, including the revelation of the properties of partially open holes, which help encode more logical qubits than closed holes, opening a new avenue for improving the code density relative to defect-free approaches. The work per se is a valuable contribution to the quantum computing field and is very well written, so the manuscript can be published as it is. However, a few specific issues can be addressed.

Comment 1: Why have the authors chosen such high temperatures (up to 7200 K) as Tmax when considering the influence of SA hyperparameters on the number of obtained logical qubits in a 2D lattice?

Response 1: We thank the reviewer for raising this important point. In Simulated Annealing (SA), parameters such as the maximal temperature and the number of cooling steps significantly impact the exploration of the solution space. The maximal temperature plays a key role in SA: the algorithm is more permissive at high temperatures, allowing it to explore a wide range of solutions, including those that may initially seem suboptimal. By selecting higher maximal temperatures and increasing the number of cooling steps, we aimed to investigate whether the Simulated Annealing algorithm could effectively explore the solution space to find lattices that encode a greater number of logical qubits. The results presented in Table 2 indicate that the maximum temperature parameter of the SA algorithm has a negligible effect on the obtained number of logical qubits. In contrast, increasing the number of annealing steps results in a marginal improvement, yielding less than one additional logical qubit on average. To address this point, we added an additional explanation in Subsection 4.3.

Comment 2: For the common reader, it would be interesting to learn how the discussed codes are physically implemented, for instance, what measurements are required to detect qubit erasures or how, in principle, noise generators are constructed for quantum computers in practice?

Response 2: We thank the reviewer for this insightful comment. We agree that a discussion of the physical implementation of qubit erasures and noise generation would be valuable for the general reader. In the current version of the manuscript, we briefly mention in Sections 3.1.3 and 5 that our simulations employ an idealized erasure-channel model, provided by Squab, which assumes perfect knowledge of erasure locations. Our work is focused on modeling and analysis rather than experimental realization, but we recognize the importance of connecting simulation assumptions to practical hardware scenarios. To address this point, we have expanded Section 5 to include a brief discussion of how erasures and noise processes are implemented and detected in physical quantum systems.

Comment 3: The conclusions are consistent with the evidence and arguments presented in the manuscript, aptly addressing the main questions of the presented work. The references are appropriate.

Response 3: We are grateful for the reviewer’s thoughtful evaluation and helpful suggestions. Once again, we sincerely thank the reviewer for their careful reading of our work and for providing constructive feedback that helped improve the quality and clarity of the manuscript. We greatly appreciate your time and effort in reviewing our submission.
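To illustrate the role of the maximal temperature and the cooling schedule discussed in Response 1, the following is a minimal simulated-annealing sketch, not the authors' implementation; `energy` and `random_neighbour` are hypothetical stand-ins for a layout cost (for example, the negative number of encoded logical qubits) and a layout perturbation, and the geometric cooling schedule is an assumption.

```python
# Illustrative sketch (not the authors' implementation) of a simulated-annealing loop:
# a high maximal temperature t_max makes the Metropolis acceptance rule permissive early
# on, while a geometric schedule cools the search towards greedy behaviour.
import math
import random


def simulated_annealing(initial, energy, random_neighbour,
                        t_max=7200.0, t_min=1.0, steps=1000):
    state, e = initial, energy(initial)
    best, best_e = state, e
    cooling = (t_min / t_max) ** (1.0 / max(steps - 1, 1))  # geometric cooling factor
    t = t_max
    for _ in range(steps):
        candidate = random_neighbour(state)
        e_new = energy(candidate)
        # At high t almost any move is accepted; as t falls, only improvements survive.
        if e_new < e or random.random() < math.exp((e - e_new) / t):
            state, e = candidate, e_new
            if e < best_e:
                best, best_e = state, e
        t *= cooling
    return best, best_e


if __name__ == "__main__":
    # Toy usage: minimise a 1D quadratic as a stand-in for a layout cost.
    best, best_e = simulated_annealing(
        initial=10.0,
        energy=lambda x: (x - 3.0) ** 2,
        random_neighbour=lambda x: x + random.uniform(-1.0, 1.0),
    )
    print(round(best, 2), round(best_e, 4))
```

With a maximal temperature as high as 7200, the acceptance probability exp((e - e_new) / t) stays close to one for many early steps, which corresponds to the permissive exploration phase described in Response 1.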
Author Response File: Author Response.pdf
Round 2
Reviewer 1 Report
Comments and Suggestions for Authors

In the revised manuscript, the authors have restructured the introduction section and removed some comparisons with patch-based surface code constructions related to efficiency in physical qubit usage. In the current version, the introduction does not indicate a comparison between the two encoding schemes, and the work largely focuses on logical qubit encoding optimization for defect-based surface code patches. Although the potential impact of this work may still be limited, as discussed in my previous review report, the contribution of the paper is now better presented.
On the other hand, in the later results sections, the authors decided to remove the curves for the patch-based surface code from Fig. 10 specifically. I believe this change removes the confusion regarding an unfair comparison between the two encoding methods. However, in the discussion section, the authors still compare the encoding efficiency to that of the patch-based surface code (I believe Ref. 7 is patch-based). To avoid giving readers the impression that the authors aim to claim the defect-based surface code is more efficient than the patch-based encoding method, I suggest comparing their optimized encoding method to previously demonstrated logical qubit encoding schemes within defect-based surface code frameworks. A simple example would be adopting the encoding method demonstrated by Fowler in his well-known paper [arXiv:1208.0928, something like Fig. 26], computing how many logical qubits can be encoded, and comparing this result to the proposed method. Alternatively, since the paper refers to Delfosse et al.’s theoretical work, the authors could also compare the obtained scaling results to the analytical scaling, demonstrating how closely their optimization approaches the optimal solution.
Overall, the current version resolves some confusion present in the previous version. However, the discussion regarding the resource efficiency of the current optimization method may still require further elaboration. I am willing to recommend this manuscript for publication after this is addressed.
Author Response
Response to Reviewer 1 Comments
1. Summary
Thank you for your constructive and insightful feedback. We have carefully revised the manuscript in response to your comments. The corresponding changes have been incorporated into the manuscript and are highlighted in colored text in the attached file. To distinguish the changes, red text is used for modifications made in response to the reviewer’s comments. Additionally, each revision in the manuscript is labeled with tags such as (Mx) to indicate specific modifications.
2. Point-by-point response to Comments and Suggestions for Authors
Comment 1:
In the revised manuscript, the authors have restructured the introduction section and removed some comparisons with patch-based surface code constructions related to efficiency in physical qubit usage. In the current version, the introduction does not indicate a comparison between the two encoding schemes, and the work largely focuses on logical qubit encoding optimization for defect-based surface code patches. Although the potential impact of this work may still be limited, as discussed in my previous review report, the contribution of the paper is now better presented.
Response 1:
Thank you for acknowledging the improvements in our revised manuscript. To further highlight our contribution, we have also modified a sentence in the Introduction section, as indicated by tag (M1) in Paragraph 7.
Comment 2:
On the other hand, in the later results sections, the authors decided to remove the curves for the patch-based surface code from Fig. 10 specifically. I believe this change removes the confusion regarding an unfair comparison between the two encoding methods. However, in the discussion section, the authors still compare the encoding efficiency to that of the patch-based surface code (I believe Ref. 7 is patch-based). To avoid giving readers the impression that the authors aim to claim the defect-based surface code is more efficient than the patch-based encoding method, I suggest comparing their optimized encoding method to previously demonstrated logical qubit encoding schemes within defect-based surface code frameworks. A simple example would be adopting the encoding method demonstrated by Fowler in his well-known paper [arXiv:1208.0928, something like Fig. 26], computing how many logical qubits can be encoded, and comparing this result to the proposed method. Alternatively, since the paper refers to Delfosse et al.’s theoretical work, the authors could also compare the obtained scaling results to the analytical scaling, demonstrating how closely their optimization approaches the optimal solution.
Response 2:
As suggested, we have modified the second paragraph of the Discussion section to clarify the comparisons made. Specifically, we removed any unintended implication that the defect-based surface code is more efficient than patch-based methods. Instead, we now compare our results with the theoretical bounds for defect-based encoding presented by Delfosse et al. [8], demonstrating that our optimization approaches these limits while also taking logical error thresholds into account (M2).
Comment 3:
Overall, the current version resolves some confusion present in the previous version. However, the discussion regarding the resource efficiency of the current optimization method may still require further elaboration. I am willing to recommend this manuscript for publication after this is addressed.
Response 3:
We hope these revisions adequately address your concerns and enhance the clarity and rigor of our manuscript. Thank you again for your valuable comments and for considering our work for publication.
Author Response File: Author Response.pdf