Article
Peer-Review Record

REO: Revisiting Erase Operation for Improving Lifetime and Performance of Modern NAND Flash-Based SSDs†

Electronics 2025, 14(4), 738; https://doi.org/10.3390/electronics14040738
by Beomjun Kim and Myungsuk Kim *
Reviewer 1:
Reviewer 2: Anonymous
Reviewer 3: Anonymous
Reviewer 4: Anonymous
Submission received: 5 January 2025 / Revised: 3 February 2025 / Accepted: 10 February 2025 / Published: 13 February 2025
(This article belongs to the Section Computer Science & Engineering)

Round 1

Reviewer 1 Report

Comments and Suggestions for Authors

This paper proposes an erase scheme called REO to improve lifetime and performance for 3D NAND flash memory. REO dynamically adjusts the erase voltage and delay based on the current cell erase characteristics. Evaluation results show that REO improves 3D NAND flash memory lifetime and reduces read latency.

This paper proposes an interesting erase scheme for 3D NAND flash memory. However, the reviewer has the following concerns.

1. What is the current erase characteristic of a cell? How can the erase characteristics of a cell be acquired? I suggest giving a detailed explanation.

2. In a practical scenario, how can the reliability margin be obtained?

3. Why does the erase operation affect the latency of processing I/O requests?

4. How is the optimal erase delay predicted, and with what accuracy?

5. How can erasure be performed at WL granularity? The hardware implementation complexity may increase.

6. This manuscript mentions that “REO also significantly enhances the average I/O performance because it can effectively eliminate read-retry procedures during read operation”. Why REO eliminates read-retry operations needs to be explained in detail.

7.   The ordinate of Figure 6 requires further explanation.

8.   For 3D NAND flash memory, why can't DPES and i-ISPE be used?

9. This paper mentions that “aggressive tEP reduction mitigates erase-induced cell stress, which can compensate for the additional errors as a long-term impact”. This needs to be explained in detail.

10.  What is the relationship between SEVA and FELP, and can the two be used in combination?

11. What is the difference between the current version and the conference version? The new contribution needs to be clarified.

Comments on the Quality of English Language

The English is good overall, but some sentences need further clarification.

Author Response

[Comment 1] What is the current erase characteristic of a cell? How can the erase characteristics of a cell be acquired? I suggest giving a detailed explanation.

[Response] As explained in Section 3.2, a NAND flash chip erases a block via multiple erase loops, each of which consists of two steps: (i) an erase-pulse (EP) step and (ii) a verify-read (VR) step. An EP step applies VERASE to the target block for a fixed amount of time tEP (e.g., 3.5 ms) that is predefined by NAND manufacturers at design time. After each EP step, a VR step checks whether all the cells in the block are sufficiently erased. To trace the current erase characteristics in our scheme, we exploit the number of remaining fail bits in the VR step. For example, when a large number of fail bits are measured in the VR step, this indicates that most flash cells are not yet sufficiently erased. Since commodity NAND flash chips already calculate fail bits during the erase operation in the conventional ISPE scheme, as explained in Section 3.2, the implementation overhead for monitoring the current erase characteristics of individual blocks is trivial.
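The erase-loop behavior described above can be sketched as follows. This is a toy model: the fail-bit decay rate, the FPASS threshold, and the voltage values are illustrative assumptions; only the 3.5 ms tEP comes from the text.

```python
T_EP = 3.5     # erase-pulse duration per loop in ms (from the text)
F_PASS = 100   # verify-read fail-bit threshold (illustrative)

class SimBlock:
    """Toy block model: each erase pulse clears ~90% of the remaining fail bits."""
    def __init__(self, fail_bits):
        self.fail_bits = fail_bits
    def apply_erase_pulse(self, v_erase, t_ep):
        self.fail_bits = int(self.fail_bits * 0.1)
    def verify_read(self):
        return self.fail_bits

def ispe_erase(block, v_start=14.0, v_step=1.0, max_loops=6):
    """Conventional ISPE: repeat (EP step, VR step) until the block passes."""
    v_erase = v_start
    for loop in range(1, max_loops + 1):
        block.apply_erase_pulse(v_erase, T_EP)   # (i) erase-pulse (EP) step
        fail_bits = block.verify_read()          # (ii) verify-read (VR) step
        if fail_bits <= F_PASS:                  # block sufficiently erased
            return loop, loop * T_EP             # loops used, total latency (ms)
        v_erase += v_step                        # raise V_ERASE for the next loop
    raise RuntimeError("erase failure: mark the block as bad")

loops, latency_ms = ispe_erase(SimBlock(fail_bits=50_000))
print(loops, latency_ms)   # 3 10.5 -- three loops of 3.5 ms each
```

The fail-bit count observed in each VR step is the signal REO reuses: it is already computed by the chip, so monitoring it adds no extra sensing work.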

[Comment 2] In a practical scenario, how can the reliability margin be obtained?

[Response] The reliability margin can be calculated as the difference between the current RBER and the ECC capability. In general, the ECC capability does not change significantly because manufacturers employ stronger ECC (e.g., BCH or LDPC) to address reliability degradation. Therefore, the reliability margin is mainly determined by the current RBER of the block. One way to determine the current RBER is offline error profiling based on comprehensive NAND device characterization. This approach is simple but cannot sufficiently account for process variation in 3D NAND flash memory, leading to less optimized results. Another approach is to measure the current RBER online, dynamically: once a read operation completes, the RBER of the target block can be monitored, and the SSD tracks each value using an additional data structure. Since the DRAM capacity of modern SSDs reaches several GBs, the space overhead is not significant (i.e., less than 1 byte per block in SSDs with 72-bit ECC capability).
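A minimal sketch of the online tracking described above, assuming a 72-bit ECC capability and one byte of worst-case error count per block; the class and method names are ours, not the paper's:

```python
ECC_CAPABILITY = 72   # correctable bits per codeword (e.g., a BCH/LDPC setting)

class MarginTracker:
    """Tracks each block's worst observed per-read error count in <= 1 byte."""
    def __init__(self):
        self.max_errors = {}   # block id -> worst corrected-error count (0..255)
    def on_read_done(self, block_id, corrected_errors):
        prev = self.max_errors.get(block_id, 0)
        self.max_errors[block_id] = min(max(prev, corrected_errors), 255)
    def reliability_margin(self, block_id):
        # margin = ECC capability minus the block's current error level
        return ECC_CAPABILITY - self.max_errors.get(block_id, 0)

tracker = MarginTracker()
tracker.on_read_done(block_id=7, corrected_errors=20)
print(tracker.reliability_margin(7))   # 52 bits of margin remain
```

Because the per-block counter fits in one byte, the whole table stays far below the capacity of the SSD's internal DRAM.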

In the revised manuscript, we added a detailed explanation of the reliability margin in footnote 8.

[Comment 3] Why does the erase operation affect the latency of processing I/O requests?

[Response] In modern flash-based SSDs, the I/O request latency can be severely affected when a read request from the host conflicts with an ongoing read, write, or erase operation that is currently being performed.

First, a read request can be delayed by ongoing write (about 1 ms) or erase (more than 3.5 ms and up to 20 ms) operations with relatively long response times. Such conflicts can occur frequently because of the unique management tasks in modern SSDs, such as garbage collection (GC) or wear-leveling. In particular, frequent GC is regarded as the root cause of delays in SSD read response time. Unlike HDDs, NAND flash memory does not support data overwriting, so free blocks whose data has been erased must be secured before performing write operations. If there are no free blocks, valid data is copied from target blocks into a new block, and then an erase operation must be performed on the target blocks to create free blocks. Therefore, GC requires a very long execution time (more than 100 ms). If a read operation conflicts with GC, the read request cannot avoid a long delay until GC completes, leading to abnormally long tail latency.

Second, the read request can also be delayed by ongoing read operations. When the previous read operation suffers from multiple read retries due to high RBER, the read request cannot be issued until the ongoing read retries are completed, which significantly degrades I/O performance.

In the revised manuscript, we added a more detailed explanation of the effect of the erase on I/O performance in Section 2.6.

[Comment 4] How is the optimal erase delay predicted, and with what accuracy?

[Response] As explained in our manuscript, we employ the number of remaining fail bits after the erase loop to estimate the optimal latency for the next erase. However, since REO can occasionally fail to predict accurately, REO’s misprediction handling should be clarified (despite the near-zero misprediction probability we observed in our real-device characterization).

If REO overestimates the required latency, there is nothing to handle; the result is merely some over-erasure.

If REO underestimates the required latency, some cells may be insufficiently erased. However, REO does *not* compromise reliability because it performs a final verification: if the number of insufficiently-erased cells exceeds the threshold FPASS, REO performs an additional erase loop while dynamically adjusting tEP according to the fail-bit count; if the fail-bit count is below FPASS, the block is sufficiently erased under the ISPE mechanism, i.e., there is nothing to handle.
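The misprediction handling above can be sketched as follows. The fail-bit-to-tEP mapping and the toy block model are invented for illustration; REO's actual prediction model is derived from real-device characterization.

```python
F_PASS = 100   # fail-bit threshold of the verify-read step (illustrative)

class SimBlock:
    """Toy model: a longer pulse removes a larger fraction of fail bits."""
    def __init__(self, fail_bits):
        self.fail_bits = fail_bits
    def apply_erase_pulse(self, t_ep):
        factor = 0.01 if t_ep >= 3.0 else 0.5
        self.fail_bits = int(self.fail_bits * factor)
    def verify_read(self):
        return self.fail_bits

def predict_tep(fail_bits):
    """FELP-style guess: fewer remaining fail bits -> shorter next erase pulse."""
    if fail_bits < 1_000:
        return 2.0
    if fail_bits < 10_000:
        return 2.5
    return 3.5   # fall back to the default tEP (ms)

def reo_erase(block, fail_bits_prev):
    """Erase with a predicted tEP; the final verify-read is never skipped."""
    t_ep, total = predict_tep(fail_bits_prev), 0.0
    while True:
        block.apply_erase_pulse(t_ep)
        total += t_ep
        fail_bits = block.verify_read()
        if fail_bits <= F_PASS:          # sufficiently erased under ISPE rules
            return total, fail_bits
        # Underestimation: run an extra loop, re-adjusting tEP to the fail count
        t_ep = predict_tep(fail_bits)

total_ms, remaining = reo_erase(SimBlock(fail_bits=40_000), fail_bits_prev=500)
print(total_ms, remaining)   # 7.5 100 -- one short mispredicted pulse, then recovery
```

The key property is that an underestimate only costs extra loops; the block never passes verification with more than FPASS fail bits.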

In the revised manuscript, we added a detailed explanation of the sensitivity analysis in Section 7.4.

[Comment 5] How can erasure be performed at WL granularity? The hardware implementation complexity may increase.

[Response] As illustrated in Figure 14, we implemented a WL-based erase scheme by selectively controlling the WL gate voltage instead of changing the erase voltage applied to a substrate. Since WL gate voltage during the erase operation can be set to the target levels using low-level test commands such as GET/SET FEATURE commands, it does not require a hardware modification, thus minimizing the increase in hardware implementation complexity. 
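A rough sketch of this per-WL control follows. The feature addresses, gate levels, and the `set_feature` interface below are placeholders, not actual ONFI parameter addresses or real command encodings.

```python
WL_GATE_FAST_MV = 500   # raised gate level for fast-erasing WLs (placeholder)
WL_GATE_SLOW_MV = 0     # default gate level (placeholder)

class ChipStub:
    """Stand-in for a NAND chip that accepts SET FEATURE-style writes."""
    def __init__(self):
        self.features = {}
    def set_feature(self, addr, value):
        self.features[addr] = value

def seva_configure(chip, fast_wls, base_addr=0xA0):
    """Raise the gate level of fast-erasing WLs to lower their effective V_ERASE."""
    for wl, is_fast in enumerate(fast_wls):
        level = WL_GATE_FAST_MV if is_fast else WL_GATE_SLOW_MV
        chip.set_feature(addr=base_addr + wl, value=level)

chip = ChipStub()
seva_configure(chip, fast_wls=[True, False, True])
print(chip.features)   # {160: 500, 161: 0, 162: 500}
```

Because the adjustment happens entirely through existing feature-setting commands, no erase-circuit changes are needed, matching the claim above.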

In the revised manuscript, we added a detailed explanation in footnote 7.

[Comment 6] This manuscript mentions that “REO also significantly enhances the average I/O performance because it can effectively eliminate read-retry procedures during read operation”. Why REO eliminates read-retry operations needs to be explained in detail.

[Response] As outlined in Section 2.5 of our manuscript, read-retry is triggered when a read operation fails to reconstruct the stored data because the RBER exceeds the ECC capability. Therefore, if the RBER is kept low, unnecessary read-retry invocations can be avoided.

Our REO aims to erase the target block with the optimal latency and voltage, thus efficiently eliminating the over-erasure problem. As shown in Section 3, flash cell wear due to high operating voltage during erase operation is a key factor in increasing the RBER value. REO adopts SEVA to dynamically reduce unnecessary high erase voltage based on WL characteristics, thereby diminishing erase-induced cell stress. Since REO can alleviate the wear-out of flash cells (i.e., the RBER increment) by optimizing the erase voltage, it can efficiently suppress the unnecessary read-retry invocations.

Figure 23 in Section 7.3 shows the distribution of the number of read-retries for REO and REO+ (REO employs only FELP, while REO+ employs both FELP and SEVA). REO+ invokes read-retry less frequently than REO because REO+ can further optimize the erase operation via SEVA.

In the revised manuscript, we replaced *eliminate* with *reduce* in the sentence.

[Comment 7] The ordinate of Figure 6 requires further explanation.

[Response] The ordinate of Figure 6 shows the percentage by which erase and write operations contribute to flash memory wear, respectively. When flash memory experiences repeated program (write) and erase operations (called P/E cycles), the high operating voltages of the erase and write operations damage flash cells. Such damage results in flash memory wear, thus increasing RBER; these are called endurance errors. The percentage of endurance errors expresses how much the erase and write operations each increase the endurance errors.

We added a detailed explanation of the ordinate of Figure 6 in the revised manuscript.

[Comment 8] For 3D NAND flash memory, why can't DPES and i-ISPE be used?

[Response] As explained in Section 3.4, it is challenging to implement DPES and i-ISPE in modern 3D NAND flash memory for two reasons. First, erasing 3D flash cells is more complex than erasing 2D flash cells due to differences in cell physics and erase mechanisms. Second, 3D NAND flash memory exhibits higher process variation across cells compared to 2D NAND flash memory. These characteristics significantly limit the effectiveness of both DPES and i-ISPE in modern NAND flash memory: (i) for DPES, securing a voltage window wide enough for the program states becomes more challenging; (ii) for i-ISPE, skipping the first erase loops makes an erase failure more likely, which, in turn, requires the next erase loop to use a higher VERASE (i.e., more erase-induced stress) compared to the conventional ISPE scheme. We quantitatively evaluated the effectiveness of DPES and i-ISPE in modern SSDs in Section 7.

[Comment 9] This paper mentions that “aggressive tEP reduction mitigates erase-induced cell stress, which can compensate for the additional errors as a long-term impact”. This needs to be explained in detail.

[Response] When we aggressively reduce tEP, bit errors just after the write operation can temporarily increase due to insufficiently erased cells; these are called initial errors. However, as P/E cycles increase (i.e., with long-term SSD usage), endurance errors become more critical in determining SSD lifetime. Therefore, although aggressive tEP reduction can cause initial errors, it benefits SSD lifetime because it effectively mitigates erase-induced cell stress. In addition, the increase in initial errors does not cause any issues thanks to the strong ECC in modern SSDs.

In the revised manuscript, we added a detailed explanation in footnote 6.

[Comment 10] What is the relationship between SEVA and FELP, and can the two be used in combination?

[Response] Our scheme, REO, consists of two techniques, FELP and SEVA. To achieve REO’s goal of improving lifetime and performance, the two techniques employ different and orthogonal approaches, so REO can be implemented by combining FELP and SEVA simultaneously. FELP is motivated by the fact that erase characteristics differ considerably between flash blocks (inter-block variability). Therefore, FELP aims to mitigate erase-induced cell stress from over-erasure by predicting the optimal erase latency. In contrast, SEVA is motivated by the fact that erase characteristics vary significantly between WLs even within a block (intra-block variability). By controlling the effective erase voltage based on WL characteristics, SEVA minimizes erase-induced cell stress. By combining SEVA and FELP, REO can efficiently improve the lifetime and performance of modern SSDs with little implementation overhead.

[Comment 11] What is the difference between the current version and the conference version? The new contribution needs to be clarified.

[Response] In the conference version, we only focused on reducing unnecessary erase latency by dynamically adjusting erase time using the FELP scheme. FELP is motivated by the fact that there are quite different erase characteristics between flash blocks. Therefore, FELP aims to mitigate erase-induced cell stress from over-erasure by predicting the optimal erase latency.

However, the current journal version adds a new technique, SEVA, which dynamically adjusts the effective erase voltage of each WL. Unlike FELP, which is motivated by inter-block variability, SEVA exploits unique intra-block variability in 3D NAND flash memory, where the erase characteristics significantly vary between WLs even in a block. During the erase operation, SEVA dynamically sets different WL gate levels for each WL based on intra-block variability. Therefore, SEVA can efficiently reduce the erase-induced cell stress of each WL, thus enhancing SSD lifetime and performance. 

Reviewer 2 Report

Comments and Suggestions for Authors

The manuscript addresses an important issue in modern NAND flash-based SSDs by proposing REO, a novel erase operation scheme. The paper is well-structured and provides a comprehensive analysis supported by real-device characterization and simulation results. The proposed method demonstrates substantial improvements in SSD lifetime and performance, which could have a significant impact on the field.

However, there are areas where further clarification and additional details could strengthen the manuscript. Below are specific comments and questions for the authors to address.

1. The paper introduces Fail-bit-count-based Erase Latency Prediction (FELP) and demonstrates its potential in predicting near-optimal erase latency. While the characterization results are compelling, could the authors clarify whether FELP's predictive accuracy is consistent across different types of workloads or manufacturing variations in NAND chips? Providing a comparison or additional data on workload-specific performance would enhance the applicability of the method.

2. The proposed REO implementation requires modifications to SSD firmware and additional data structures. Could the authors elaborate on the challenges of integrating REO into existing SSD systems, particularly in terms of backward compatibility with current hardware and firmware? Any discussion on potential bottlenecks or scalability issues would be valuable.

3. While the paper effectively demonstrates leveraging the ECC-capability margin, the potential risks of aggressive latency reduction on long-term reliability are not fully addressed. Could the authors include an analysis or simulation of scenarios where the ECC margin is insufficient due to unforeseen workload spikes or environmental factors?

4. Some technical terms, such as "ECC-capability margin" and "tBERS," might benefit from more detailed explanations for broader accessibility.

Author Response

[Comment 1] The paper introduces Fail-bit-count-based Erase Latency Prediction (FELP) and demonstrates its potential in predicting near-optimal erase latency. While the characterization results are compelling, could the authors clarify whether FELP's predictive accuracy is consistent across different types of workloads or manufacturing variations in NAND chips? Providing a comparison or additional data on workload-specific performance would enhance the applicability of the method.

[Response] As the reviewer points out, it is important to analyze the validity of our model across different types of workloads and NAND flash memories. We believe REO’s key ideas are highly applicable to other chips with different technologies and/or from different vendors for three reasons.

First, our chips well represent modern 3D NAND flash memory because most commercial chips, including SMArT/TCAT/BiCS, have similar structures and cell types, e.g., vertical channel structures, gate-all-around cell transistors, and charge-trap type flash cells, thereby sharing key device characteristics like operation mechanisms, reliability characteristics, etc.

Second, erase mechanisms have not changed significantly. For example, ISPE has been used since 2D SLC NAND flash memory. Note that REO leverages general features in erase operation commonly adopted in commercial chips, e.g., ISPE and fail-bit-based verification, without relying on device-specific or technology-specific characteristics.

Third, we confirmed REO’s effectiveness using various Samsung NAND flash chips with different technologies. We tested 2x-nm 2D MLC/TLC and 48-layer 3D MLC NAND flash chips, confirming results consistent with our manuscript’s characterization (we only showed the results for 3D TLC flash chips, the mainstream in the storage market).

 

In addition, we evaluated the effectiveness of our technique over eleven real workloads with different I/O characteristics, such as read-write ratio, request size, and arrival time. Therefore, we believe that our model works well across different types of workloads.

 

We will clarify these in Section 5.5 of the revised manuscript.

[Comment 2] The proposed REO implementation requires modifications to SSD firmware and additional data structures. Could the authors elaborate on the challenges of integrating REO into existing SSD systems, particularly in terms of backward compatibility with current hardware and firmware? Any discussion on potential bottlenecks or scalability issues would be valuable.

[Response] Our technique, REO, adds only a small data structure to the SSD firmware and requires no hardware modification, so it can be easily integrated into existing SSD systems.

 

First, the modification of NAND operations (e.g., counting fail bits or controlling WL gate voltage) can be implemented using low-level test commands such as the SET/GET FEATURE commands. The SET/GET FEATURE commands used by REO are device-level commands defined by ONFI (the Open NAND Flash Interface specification), distinct from the NVMe commands with the same names. The ONFI SET/GET FEATURE commands enable modifying/accessing a NAND flash chip’s device-internal parameters; REO-FTL uses these commands to modify/access erase-timing parameters, the fail-bit count, and per-WL gate voltages. Therefore, REO does not require a hardware modification, minimizing the increase in hardware implementation complexity.

 

Second, REO-FTL requires additional metadata, but the overhead is negligible (0.00000125% of SSD capacity) considering the typical size of internal DRAM in modern SSDs (0.1% of SSD capacity).
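The quoted overhead figure can be checked with a back-of-the-envelope calculation, assuming 1 byte of REO metadata per block and an 80 MB flash block; the block size is our assumption, while the 0.00000125% and 0.1% figures are from the text:

```python
META_BYTES_PER_BLOCK = 1            # REO metadata per block (from the text)
BLOCK_SIZE_BYTES = 80 * 1000**2     # 80 MB per flash block (our assumption)

meta_pct = META_BYTES_PER_BLOCK / BLOCK_SIZE_BYTES * 100   # % of SSD capacity
dram_pct = 0.1                      # typical internal DRAM, % of SSD capacity

print(f"{meta_pct:.8f}% vs. {dram_pct}% DRAM")   # 0.00000125% vs. 0.1% DRAM
```

Under these assumptions the metadata consumes five orders of magnitude less space than the DRAM an SSD already provisions, which is why the overhead is called negligible.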

 

In the revised manuscript, we will clarify these in footnote 7.

[Comment 3] While the paper effectively demonstrates leveraging the ECC-capability margin, the potential risks of aggressive latency reduction on long-term reliability are not fully addressed. Could the authors include an analysis or simulation of scenarios where the ECC margin is insufficient due to unforeseen workload spikes or environmental factors?

[Response] The ECC capability would affect REO’s benefits (although REO provides significant performance/lifetime improvements over the baselines **without** leveraging ECC margins): the higher the ECC capability, the larger the ECC margin and thus the greater REO’s benefits. Even though the ECC margin would not change significantly because manufacturers employ stronger ECC to address reliability degradation, a sensitivity analysis of the ECC-capability margin could help strengthen our model.

We added a sensitivity analysis to Section 7.4 of the revised manuscript.

[Comment 4] Some technical terms, such as "ECC-capability margin" and "tBERS," might benefit from more detailed explanations for broader accessibility.

[Response] As the reviewer suggests, we clarified technical terms in the revised manuscript. In addition, we added a table of acronyms in the “Appendix” section.

Reviewer 3 Report

Comments and Suggestions for Authors

This paper suggests REO, a block erasure scheme that enhances both the lifetime and performance of contemporary NAND flash-based solid-state drives (SSDs). REO dynamically adjusts erase latency and voltage, capitalizing on the inherent variability in block characteristics and the substantial reliability margin available in modern SSDs to optimize the erase process.

The paper is nice and I enjoyed reading it; however, I have several concerns:

1. The paper needs a table of acronyms to facilitate the search for their definition. Some acronyms may have more than one definition, so it is necessary to have a consistency of the terms.

2. Given the established convention of placing the related work section after the introduction, it is suggested that the authors revise the manuscript to adhere to this standard.

3. In line 166 the authors write “Since the program operation can change the bit value of a flash cell only from ‘1’ to ‘0’, all the flash cells of a page should be erased to program data on the page (erase-before-program).”. This is a very important point. Weisberg, P., & Wiseman, Y. (2009, August), “Using 4kb page size for virtual memory is obsolete”, In 2009 IEEE International Conference on Information Reuse & Integration, pp. 262-265, available online at: https://citeseerx.ist.psu.edu/document?repid=rep1&type=pdf&doi=037ad437ef4d2f13f9415fceb25bc2695865d326 suggests enlarging the page size. Enlarging the page size can lead to increased resource demands. This paper presents a novel approach that can effectively address this trade-off between performance gains and resource consumption. I would encourage the authors to cite the IEEE paper and explain that this can be incorporated as an additional motivating factor for their research.

4. Figure 1 was taken from Park, J., Azizi, R., Oliveira, G. F., Sadrosadati, M., Nadig, R., Novo, D. & Mutlu, O. (2022, October). “Flash-Cosmos: In-flash bulk bitwise operations using inherent computation capability of nand flash memory”, In 2022 55th IEEE/ACM International Symposium on Microarchitecture (MICRO), pp. 937-955. The source of the image must be acknowledged.

5. In Figure 3, what is “GSL”? An explanation is needed.

6. In equation 1, why is the constant 1 added at the end of the equation?

7. A clarification regarding the meaning and purpose of the six purple circles in column 3 of Figure 9 would be appreciated.

8. In line 437 the authors refer to Figure 13a; however, there is no Figure 13a.

9. The comparison made in Table 3 is very important; however, why did the authors put it in a table? The data would be easier to understand if it were presented in a graph.

10. A discussion of the potential limitations and future improvements to the proposed model would be beneficial.

11. The format of references should be consistent.

 

Author Response

[Comment 1] The paper needs a table of acronyms to facilitate the search for their definition. Some acronyms may have more than one definition, so it is necessary to have a consistency of the terms.

[Response] In the revised manuscript, we added the table of acronyms in the “Appendix” Section.

[Comment 2] Given the established convention of placing the related work section after the introduction, it is suggested that the authors revise the manuscript to adhere to this standard.

[Response] We agree with the reviewer’s comment. However, in our manuscript, we introduced the various existing techniques and their limitations in Section 1 and Section 3. Therefore, we re-summarized the various research on mitigating the negative impact of the erase operation and the studies on exploiting process variation in 3D NAND flash memory in the “Related Work” section.

[Comment 3] In line 166 the authors write “Since the program operation can change the bit value of a flash cell only from ‘1’ to ‘0’, all the flash cells of a page should be erased to program data on the page (erase-before-program).”. This is a very important point. Weisberg, P., & Wiseman, Y. (2009, August), “Using 4kb page size for virtual memory is obsolete”, In 2009 IEEE International Conference on Information Reuse & Integration, pp. 262-265, available online at: https://citeseerx.ist.psu.edu/document?repid=rep1&type=pdf&doi=037ad437ef4d2f13f9415fceb25bc2695865d326 suggests enlarging the page size. Enlarging the page size can lead to increased resource demands. This paper presents a novel approach that can effectively address this trade-off between performance gains and resource consumption. I would encourage the authors to cite the IEEE paper and explain that this can be incorporated as an additional motivating factor for their research.

[Response] As the reviewer suggests, our technique effectively addresses the trade-off between performance and resource overhead by exploiting various device-oriented characteristics. In the revised manuscript, we modified Section 6 to incorporate the reviewer’s comment.

[Comment 4] Figure 1 was taken from Park, J., Azizi, R., Oliveira, G. F., Sadrosadati, M., Nadig, R., Novo, D. & Mutlu, O. (2022, October). “Flash-Cosmos: In-flash bulk bitwise operations using inherent computation capability of nand flash memory”, In 2022 55th IEEE/ACM International Symposium on Microarchitecture (MICRO), pp. 937-955. The source of the image must be acknowledged.

[Response] As the reviewer requested, we added the reference to Figure 1 in the revised manuscript.

[Comment 5] In Figure 3, what is “GSL”? An explanation is needed.

[Response] In NAND flash memory, there are two select transistors at the top and bottom of a BL, which compose the source select line (SSL) and ground select line (GSL) of a block, respectively. By applying proper voltages to the SSL and GSL of a block, we can activate the block for flash operations.

We added an explanation of GSL to footnote 2 in the revised manuscript.

[Comment 6] In equation 1, why is the constant 1 added at the end of the equation?

[Response] To retrieve the stored data, at least one read operation (called the default read) is performed even when no read retry occurs. Therefore, the +1 term must be included when calculating the read latency.
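The role of the “+1” term can be illustrated with a tiny latency model; the per-sense latency value below is illustrative, not a device parameter from the paper:

```python
T_READ_MS = 0.05   # latency of one page-sensing operation in ms (illustrative)

def read_latency_ms(num_retries):
    # "+1" accounts for the default read that is always performed, retries or not
    return (num_retries + 1) * T_READ_MS

print(read_latency_ms(0))   # 0.05 -- one default read even with zero retries
print(read_latency_ms(4))   # 0.25 -- the default read plus four retries
```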

We added an explanation to footnote 3 in the revised manuscript.

[Comment 7] A clarification regarding the meaning and purpose of the six purple circles in column 3 of Figure 9 would be appreciated.

[Response] Figure 9 reveals the limitation of conventional ISPE in modern NAND flash memory. The six purple circles indicate flash blocks that require different latencies to complete the erase operation. In the conventional ISPE scheme, flash blocks denoted by purple circles experience 10.5 ms erase latency regardless of their erase characteristics. However, the flash blocks denoted by the first purple circle require only 7.5 ms to complete erasure. Similarly, the flash blocks denoted by the second purple circle require only 8.0 ms to complete erasure. This clearly shows that a significant number of blocks are over-erased under the conventional ISPE scheme, suffering from more erase-induced stress than necessary.

[Comment 8] In line 437 the authors refer to Figure 13a; however, there is no Figure 13a.

[Response] Figure 13a is a typo for Figure 13. In the revision, we have corrected all typos in our manuscript.

[Comment 9] The comparison made in Table 3 is very important; however, why did the authors put it on a table? The data would be easier to understand if it were presented in a graph.

[Response] In the revised manuscript, as the reviewer suggested, we present the evaluation results as a graph and quantitatively summarize them in a table.

[Comment 10] A discussion of the potential limitations and future improvements to the proposed model would be beneficial.

[Response] In the revised manuscript, we added a paragraph to discuss the future improvements of our proposed model in the “Conclusion” section.

“To strengthen the validity of our model, we plan to evaluate its effectiveness using advanced QLC NAND flash memory. In addition, we will confirm that our model works under various operating conditions, such as varying temperature environments. For state-of-the-art applications such as LLMs and data centers, we plan to evaluate the impact of our model on power consumption.”

[Comment 11] The format of references should be consistent.

[Response] In the revised manuscript, we have made the reference format consistent, following the policy of MDPI.

Reviewer 4 Report

Comments and Suggestions for Authors

The authors in this paper proposed a new mechanism that is composed of two schemes for improving the lifetime of modern SSD devices.

With the advancement of SSD technology, REO can be one of the solutions to the significant challenges on SSD devices in terms of lifetime.

However, I am wondering about the copyright of the submitted version, because the main sentences in the submitted paper are the same as sentences in the following two papers.

In addition, the authors have co-authored the following two papers.

1. AERO: Adaptive Erase Operation for Improving Lifetime and Performance of Modern NAND Flash-Based SSDs

2. Smart Electrical Screening Methodology for Channel Hole Defects of 3D Vertical NAND (VNAND) Flash Memory

Please answer the copyright issues.

Comments on the Quality of English Language

The quality of English is fine. 

Author Response

[Comment 1] The authors in this paper proposed a new mechanism that is composed of two schemes for improving the lifetime of modern SSD devices.

With the advancement of SSD technology, REO can be one of the solutions to the significant challenges on SSD devices in terms of lifetime.

However, I am wondering about the copyright of the submitted version, because the main sentences in the submitted paper are the same as sentences in the following two papers.

In addition, the authors have co-authored the following two papers.

  1. AERO: Adaptive Erase Operation for Improving Lifetime and Performance of Modern NAND Flash-Based SSDs
  2. Smart Electrical Screening Methodology for Channel Hole Defects of 3D Vertical NAND (VNAND) Flash Memory

Please answer the copyright issues.

[Response] Thank you for your comments.

First, since this paper is an open-access extension of the conference paper, there is no copyright conflict. Following the journal’s copyright policy, it is acceptable for a conference-paper extension to have a high duplication rate relative to the original manuscript. We have added this information below the affiliation information.

Second, we revised the “Background” section to lower the duplication rate.

Round 2

Reviewer 3 Report

Comments and Suggestions for Authors

The authors have addressed all my concerns. The revised manuscript is ready for publication.

Reviewer 4 Report

Comments and Suggestions for Authors

This paper is great. I strongly recommend this paper for publication in this journal.
