Compaction-Aware Flash Memory Remapping for Key–Value Stores
Abstract
1. Introduction
- We perform a preliminary study to quantify the prevalence of unchanged data blocks (UDBs) in real-world benchmarks.
- We propose RemapCom, an SSD remapping-based compaction method for LSM-tree-based KV stores. In RemapCom, we design a lightweight state machine to identify duplicate data produced by the compaction process.
- We design a UDB retention strategy in RemapCom to increase the ratio of UDBs so as to better exploit the benefit of UDB remapping.
- We implement RemapCom in LevelDB by providing two primitives, getLPN and remap, to support data block remapping.
- We evaluate RemapCom on real-world benchmarks, and experimental results show that it reduces write amplification by up to 53% and improves write throughput by up to 30%.
2. Background
2.1. Flash-Based SSD
2.2. Remapping-Based SSDs
2.3. Log-Structured Merge Trees
3. Motivation
3.1. Unchanged Data Block (UDB)
3.2. Prevalence of UDB
4. RemapCom: Remapped Compaction
4.1. Overview of RemapCom
4.2. Block State Determination
- Begin: A data block is in this state when RemapCom traverses the first KV item of the data block.
- InProgress: A data block is in this state when no KV changes are detected in the data block up to the current time. That is, this data block still has the potential to be a UDB.
- Changed/Unchanged: A data block is in either of these states once its final state has been determined by the state machine. If the state is Unchanged, it is a UDB. Otherwise, it is a block whose content has changed.
- block_start: This event is triggered when RemapCom traverses the first KV item of a block. Accordingly, the state of the block transitions to the Begin state in Figure 6.
- KV_reserved: This event is triggered when the KV item that RemapCom is checking in the block is to be reserved. Consequently, the item is moved to the data buffer. In Figure 7, the first two KV items, with Key 15 and Key 26, in block A are both reserved. Thus, in Figure 6, the first item switches the state from Begin to InProgress, while the second item keeps the state at InProgress (the self-loop).
- KV_dropped: This event is triggered when the KV item that RemapCom is checking in the block is to be dropped. For example, the KV item with Key 30 of block A in Figure 7 is a dropped item that will be removed during compaction. Accordingly, the state of the data block switches to Changed. This can happen from Begin, if the first KV item is a dropped item, or from InProgress otherwise. The data block can no longer become a UDB, since the Changed state cannot switch to any other state.
- block_switch: This event is triggered when the RemapCom traversal moves from one block to another. Recall that RemapCom merges two SSTables, so the traversal interleaves KV items from data blocks of both SSTables. From the point of view of a single data block, switching the traversal to a different block means that the next KV item to be traversed lies in another block, and hence the newly generated output block will contain KV items from two different blocks. Thus, the state of the block transitions to Changed. For example, a block_switch happens in Figure 7 when RemapCom traverses Key 30 in block B after traversing Key 30 in block A.
- block_end: This event is triggered when the last KV item in the data block is traversed. This means that all KV items in this block have been traversed from start to end and will be written to a new block without any change. Thus, in Figure 6, a transition from InProgress to Unchanged occurs, and a data block reaching the Unchanged state is a UDB. (A minimal code sketch of these transitions is given after the event table below.)
Event | Description |
---|---|
block_start | First KV item is traversed |
KV_reserved | KV item being traversed now is reserved data |
KV_dropped | KV item being traversed now is dropped data |
block_switch | Traversal is directed to KV item in another block |
block_end | Last KV item in block has been traversed |
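To make the transitions concrete, the following is a minimal C++ sketch of the block state machine. The identifiers (BlockState, BlockEvent, NextState) are illustrative and are not the names used in the RemapCom implementation; the sketch only encodes the transitions described above and in Figure 6.

```cpp
#include <cassert>

// Illustrative names; the actual RemapCom code may differ.
enum class BlockState { Begin, InProgress, Changed, Unchanged };
enum class BlockEvent { BlockStart, KVReserved, KVDropped, BlockSwitch, BlockEnd };

// Next state of a data block, given its current state and the event observed
// while traversing its KV items during compaction (Figure 6).
BlockState NextState(BlockState s, BlockEvent e) {
  switch (e) {
    case BlockEvent::BlockStart:
      return BlockState::Begin;       // first KV item of the block
    case BlockEvent::KVReserved:
      // A reserved item keeps the block a UDB candidate (Begin/InProgress -> InProgress);
      // once a block is Changed, it stays Changed.
      return (s == BlockState::Changed) ? BlockState::Changed : BlockState::InProgress;
    case BlockEvent::KVDropped:
    case BlockEvent::BlockSwitch:
      // A dropped item, or a switch to another block, means the output block will
      // differ from this input block: it can never become a UDB.
      return BlockState::Changed;
    case BlockEvent::BlockEnd:
      // All items traversed without change: the block is a UDB.
      return (s == BlockState::InProgress) ? BlockState::Unchanged : s;
  }
  assert(false && "unreachable");
  return s;
}
```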
4.3. Lazy Write Back
4.4. UDB Retention
4.5. Primitives and Data Block Remapping
- getLPN(fileno,offset): This primitive obtains the start LPN of a UDB. Given the SSTable's file number, fileno, and the UDB's offset within the SSTable, offset, the LPN of the UDB on the SSD is obtained by using the ioctl system call.
- remap(src_LPN,dst_LPN,length): This primitive sends the remap signal to the SSD along with the information required for remapping: src_LPN, the original start LPN of the UDB; dst_LPN, the new start LPN of the UDB; and length, the number of pages involved in the UDB. Thus, the UDB occupying LPNs src_LPN to src_LPN + length − 1 is remapped to the area between dst_LPN and dst_LPN + length − 1. There are no extra write operations; only a few mapping entries are updated. A sketch of how these two primitives could be realized is given after this list.
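As a rough illustration of how the two primitives could be realized on a Linux host with ext4, the sketch below resolves a file offset to an LPN through the standard FIEMAP ioctl and forwards a remap request to the device through a custom ioctl. The REMAP_IOCTL command number, the RemapArgs layout, and the assumption that the UDB occupies contiguous pages are hypothetical; the actual RemapCom prototype runs on an emulated SSD (FEMU) and may use a different interface. In LevelDB, the SSTable path would be derived from fileno (e.g., via TableFileName), which is elided here.

```cpp
#include <fcntl.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/fs.h>       // FS_IOC_FIEMAP
#include <linux/fiemap.h>   // struct fiemap, FIEMAP_FLAG_SYNC
#include <cstdint>
#include <stdexcept>
#include <string>

constexpr uint64_t kFlashPageSize = 4096;  // 4 KB pages, as in the flash configuration

// Hypothetical ioctl carrying a remap request to the SSD; the command number
// and argument layout are placeholders, not a real kernel API.
struct RemapArgs { uint64_t src_lpn, dst_lpn, length; };
#define REMAP_IOCTL _IOW('R', 0x01, RemapArgs)

// getLPN: map a byte offset inside an SSTable file to the start LPN of the
// data block on the device, using the standard FIEMAP ioctl on ext4. This
// assumes the file system spans the whole device and that the LPN granularity
// equals the flash page size.
uint64_t getLPN(const std::string& sstable_path, uint64_t offset) {
  int fd = open(sstable_path.c_str(), O_RDONLY);
  if (fd < 0) throw std::runtime_error("open failed");

  alignas(struct fiemap) unsigned char buf[sizeof(struct fiemap) +
                                           sizeof(struct fiemap_extent)] = {};
  auto* fm = reinterpret_cast<struct fiemap*>(buf);
  fm->fm_start = offset;
  fm->fm_length = kFlashPageSize;       // only the extent covering 'offset' is needed
  fm->fm_flags = FIEMAP_FLAG_SYNC;      // flush the file so the mapping is stable
  fm->fm_extent_count = 1;              // room for exactly one extent in 'buf'

  if (ioctl(fd, FS_IOC_FIEMAP, fm) < 0 || fm->fm_mapped_extents == 0) {
    close(fd);
    throw std::runtime_error("FIEMAP failed");
  }
  const struct fiemap_extent& ext = fm->fm_extents[0];
  uint64_t physical = ext.fe_physical + (offset - ext.fe_logical);  // byte address
  close(fd);
  return physical / kFlashPageSize;     // byte address -> logical page number
}

// remap: ask the SSD to redirect 'length' pages starting at src_lpn so that
// they also appear at dst_lpn, by updating mapping entries only (no data copy).
void remap(int dev_fd, uint64_t src_lpn, uint64_t dst_lpn, uint64_t length) {
  RemapArgs args{src_lpn, dst_lpn, length};
  if (ioctl(dev_fd, REMAP_IOCTL, &args) < 0)
    throw std::runtime_error("remap ioctl failed");
}
```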
- Obtain the logical page number corresponding to the UDB in SSD by calling getLPN.
- Send remap requests to the file system by calling remap. For the example in Figure 8a, RemapCom uses the remap command to notify the SSD that the UDB with the key range (3, 12) needs to be remapped.
- Handle remap requests in the SSD. With the information carried by the remap primitive, the SSD controller can easily perform the remapping just like Remap-SSD [28]; a sketch of this device-side handling follows this list.
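On the device side, handling the request amounts to redirecting entries in the FTL's logical-to-physical (L2P) mapping table. The sketch below is an illustrative host-level model of that step in the spirit of Remap-SSD [28]; the container, names, and the decision to release the source mapping are assumptions, and the valid-page bookkeeping a real controller must also perform for garbage collection is omitted.

```cpp
#include <cstdint>
#include <unordered_map>

using L2PTable = std::unordered_map<uint64_t, uint64_t>;  // LPN -> PPN

// Redirect 'length' mapping entries from src_lpn to dst_lpn.
// Only mapping entries change; no flash pages are read or written.
void HandleRemap(L2PTable& l2p, uint64_t src_lpn, uint64_t dst_lpn,
                 uint64_t length) {
  for (uint64_t i = 0; i < length; ++i) {
    auto it = l2p.find(src_lpn + i);
    if (it == l2p.end()) continue;    // unmapped source page: nothing to redirect
    l2p[dst_lpn + i] = it->second;    // new LPN now points to the same flash page
    l2p.erase(it);                    // release the old mapping, since the input
                                      // SSTable is discarded after compaction
  }
}
```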
4.6. Overhead Analysis
5. Evaluation
5.1. Experimental Setup
5.2. Results on Microbenchmarks
5.3. Real-World Benchmarks
- For the Load workload, RemapCom-Base increases the throughput by 3.4%, 3.9%, and 28% compared to LevelDB for the three KV sizes, respectively. For RemapCom, the improvements are 9.7%, 14.1%, and 28%, respectively.
- For the write-intensive workloads A and F, compared to LevelDB, RemapCom’s improvements are 7.6%, 12%, and 21.1% for workload A and 7.1%, 12.4%, and 13.8% for workload F, for the respective KV sizes.
- For the read-intensive workload C, we find that read performance in RemapCom-Base improves by 3.9%, 5.9%, and 13.3% for the three KV sizes, respectively. The reason is that read operations can trigger seek compaction [7], during which UDBs can also be remapped. However, compared to LevelDB, RemapCom reduces throughput by 3.8% and 3.1% for the 256 MB and 1 MB KV sizes, respectively. This is because UDB retention separates UDBs from adjacent blocks, which leads to a certain amount of read and write amplification. As mentioned in Section 4.4, the total size of the KV items within a data block may be smaller than the data block size; the internal fragmentation resulting from the UDB retention policy wastes some space, which reduces read performance.
- RemapCom also achieves an improvement in throughput for workloads with only a small portion of writes (i.e., B, D, and E). The average improvements across all KV sizes are 2.5%, 3.2%, and 2.6% for workloads B, D, and E, respectively. These results show that RemapCom can also improve overall performance for read-intensive workloads.
5.4. Sensitivity Study
6. Related Work
7. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
- Ishibe, T.; Maeda, Y.; Terada, T.; Naruse, N.; Mera, Y.; Kobayashi, E.; Nakamura, Y. Resistive switching memory performance in oxide hetero-nanocrystals with well-controlled interfaces. Sci. Technol. Adv. Mater. 2020, 21, 195–204. [Google Scholar] [CrossRef] [PubMed]
- Gupta, R.; Bouard, C.; Kammerbauer, F.; Ledesma-Martin, J.O.; Bose, A.; Kononenko, I.; Martin, S.; Usé, P.; Jakob, G.; Drouard, M.; et al. Harnessing orbital Hall effect in spin-orbit torque MRAM. Nat. Commun. 2025, 16, 130. [Google Scholar] [CrossRef] [PubMed]
- Rashidi, S.; Jalili, M.; Sarbazi-Azad, H. A survey on pcm lifetime enhancement schemes. ACM Comput. Surv. (CSUR) 2019, 52, 76. [Google Scholar] [CrossRef]
- Wu, X.; Xu, Y.; Shao, Z.; Jiang, S. LSM-trie: An LSM-tree-based Ultra-Large Key-Value Store for Small Data Items. In Proceedings of the 2015 USENIX Annual Technical Conference (USENIX ATC 15), Santa Clara, CA, USA, 8–10 July 2015; pp. 71–82. [Google Scholar]
- O’Neil, P.; Cheng, E.; Gawlick, D.; O’Neil, E. The log-structured merge-tree (LSM-tree). Acta Inform. 1996, 33, 351–385. [Google Scholar] [CrossRef]
- Lu, L.; Pillai, T.S.; Gopalakrishnan, H.; Arpaci-Dusseau, A.C.; Arpaci-Dusseau, R.H. Wisckey: Separating keys from values in ssd-conscious storage. ACM Trans. Storage (TOS) 2017, 13, 5. [Google Scholar] [CrossRef]
- Ghemawat, S.; Dean, J. LevelDB. 2016. Available online: https://github.com/google/leveldb (accessed on 1 October 2023).
- Facebook. RocksDB, a Persistent Key-Value Store for Fast Storage Environments. 2019. Available online: http://rocksdb.org/ (accessed on 12 June 2024).
- Harter, T.; Borthakur, D.; Dong, S.; Aiyer, A.S.; Tang, L.; Arpaci-Dusseau, A.C.; Arpaci-Dusseau, R.H. Analysis of HDFS under HBase: A facebook messages case study. In Proceedings of the 12th USENIX Conference on File and Storage Technologies, FAST 2014, Santa Clara, CA, USA, 17–20 February 2014; Schroeder, B., Thereska, E., Eds.; USENIX: Berkeley, CA, USA, 2014; pp. 199–212. [Google Scholar]
- Cao, Y.; Dong, Q.; Wang, D.; Liu, Y.; Zhang, P.; Yu, X.; Niu, C. TIDB: A comprehensive database of trained immunity. Database J. Biol. Databases Curation 2021, 2021, baab041. [Google Scholar] [CrossRef] [PubMed]
- Huang, K.; Jia, Z.; Shen, Z.; Shao, Z.; Chen, F. Less is More: De-amplifying I/Os for Key-value Stores with a Log-assisted LSM-tree. In Proceedings of the 37th IEEE International Conference on Data Engineering, ICDE 2021, Chania, Greece, 19–22 April 2021; pp. 612–623. [Google Scholar] [CrossRef]
- Rizvi, S.S.; Chung, T.S. Flash SSD vs HDD: High performance oriented modern embedded and multimedia storage systems. In Proceedings of the 2010 2nd International Conference on Computer Engineering and Technology, Chengdu, China, 16–18 April 2010; Volume 7, p. V7-297. [Google Scholar]
- Geier, F. The differences Between SSD and HDD Technology Regarding Forensic Investigations. Bachelor’s Thesis, Linnaeus University, Växjö, Sweden, 2015. [Google Scholar]
- Wu, S.; Lin, K.; Chang, L. KVSSD: Close integration of LSM trees and flash translation layer for write-efficient KV store. In Proceedings of the 2018 Design, Automation & Test in Europe Conference & Exhibition, DATE 2018, Dresden, Germany, 19–23 March 2018; Madsen, J., Coskun, A.K., Eds.; IEEE: Piscataway, NJ, USA, 2018; pp. 563–568. [Google Scholar] [CrossRef]
- Yao, T.; Zhang, Y.; Wan, J.; Cui, Q.; Tang, L.; Jiang, H.; Xie, C.; He, X. MatrixKV: Reducing Write Stalls and Write Amplification in LSM-tree Based KV Stores with Matrix Container in NVM. In Proceedings of the 2020 USENIX Annual Technical Conference, USENIX ATC 2020, Online, 15–17 July 2020; Gavrilovska, A., Zadok, E., Eds.; USENIX Association: Berkeley, CA, USA, 2020; pp. 17–31. [Google Scholar]
- Ni, F.; Wu, X.; Li, W.; Wang, L.; Jiang, S. Leveraging ssd’s flexible address mapping to accelerate data copy operations. In Proceedings of the 2019 IEEE 21st International Conference on High Performance Computing and Communications; IEEE 17th International Conference on Smart City; IEEE 5th International Conference on Data Science and Systems (HPCC/SmartCity/DSS), Zhangjiajie, China, 10–12 August 2019; pp. 1051–1059. [Google Scholar]
- Li, Q.; Chen, Y.; Wu, G.; Du, Y.; Ye, M.; Gan, X.; Zhang, J.; Shen, Z.; Shu, J.; Xue, C. Characterizing and Optimizing LDPC Performance on 3D NAND Flash Memories. ACM Trans. Archit. Code Optim. 2024, 21, 62. [Google Scholar] [CrossRef]
- Cooper, B.F.; Silberstein, A.; Tam, E.; Ramakrishnan, R.; Sears, R. Benchmarking cloud serving systems with YCSB. In Proceedings of the 1st ACM Symposium on Cloud Computing, SoCC 2010, Indianapolis, IN, USA, 10–11 June 2010; Hellerstein, J.M., Chaudhuri, S., Rosenblum, M., Eds.; ACM: New York, NY, USA, 2010; pp. 143–154. [Google Scholar] [CrossRef]
- Cao, Z.; Dong, S.; Vemuri, S.; Du, D.H.C. Characterizing, Modeling, and Benchmarking RocksDB Key-Value Workloads at Facebook. In Proceedings of the 18th USENIX Conference on File and Storage Technologies, FAST 2020, Santa Clara, CA, USA, 24–27 February 2020; Noh, S.H., Welch, B., Eds.; USENIX Association: Berkeley, CA, USA, 2020; pp. 209–223. [Google Scholar]
- Agrawal, N.; Prabhakaran, V.; Wobber, T.; Davis, J.D.; Manasse, M.S.; Panigrahy, R. Design Tradeoffs for SSD Performance. In Proceedings of the 2008 USENIX Annual Technical Conference, Boston, MA, USA, 22–27 June 2008; Isaacs, R., Zhou, Y., Eds.; USENIX Association: Berkeley, CA, USA, 2008; pp. 57–70. [Google Scholar]
- Gao, C.; Shi, L.; Ji, C.; Di, Y.; Wu, K.; Xue, C.J.; Sha, E.H.M. Exploiting parallelism for access conflict minimization in flash-based solid state drives. IEEE Trans. Comput.-Aided Des. Integr. Circuits Syst. 2017, 37, 168–181. [Google Scholar] [CrossRef]
- Huang, S.; Du, Y.; Fan, Y.; Ji, C. Extending SSD Lifetime via Balancing Layer Endurance in 3D NAND Flash Memory. In Proceedings of the 2024 Design, Automation & Test in Europe Conference & Exhibition (DATE), Valencia, Spain, 25–27 March 2024; pp. 1–2. [Google Scholar]
- Du, Y.; Huang, S.; Zhou, Y.; Li, Q. Towards LDPC Read Performance of 3D Flash Memories with Layer-induced Error Characteristics. ACM Trans. Des. Autom. Electron. Syst. 2023, 28, 44. [Google Scholar] [CrossRef]
- Du, Y.; Gao, Y.; Huang, S.; Li, Q. LDPC Level Prediction Towards Read Performance of High-Density Flash Memories. IEEE Trans. Comput.-Aided Des. Integr. Circuits Syst. 2023, 42, 3264–3274. [Google Scholar] [CrossRef]
- Lee, S.W.; Park, D.J.; Chung, T.S.; Lee, D.H.; Park, S.; Song, H.J. A log buffer-based flash translation layer using fully-associative sector translation. ACM Trans. Embed. Comput. Syst. (TECS) 2007, 6, 18-es. [Google Scholar] [CrossRef]
- Kang, W.; Lee, S.; Moon, B.; Oh, G.; Min, C. X-FTL: Transactional FTL for SQLite databases. In Proceedings of the ACM SIGMOD International Conference on Management of Data, SIGMOD 2013, New York, NY, USA, 22–27 June 2013; Ross, K.A., Srivastava, D., Papadias, D., Eds.; ACM: New York, NY, USA, 2013; pp. 97–108. [Google Scholar] [CrossRef]
- Wu, Q.; Zhou, Y.; Wu, F.; Wang, K.; Lv, H.; Wan, J.; Xie, C. SW-WAL: Leveraging Address Remapping of SSDs to Achieve Single-Write Write-Ahead Logging. In Proceedings of the Design, Automation & Test in Europe Conference & Exhibition, DATE 2021, Grenoble, France, 1–5 February 2021; pp. 802–807. [Google Scholar] [CrossRef]
- Zhou, Y.; Wu, Q.; Wu, F.; Jiang, H.; Zhou, J.; Xie, C. Remap-SSD: Safely and Efficiently Exploiting SSD Address Remapping to Eliminate Duplicate Writes. In Proceedings of the 19th USENIX Conference on File and Storage Technologies, FAST 2021, 23–25 February 2021; Aguilera, M.K., Yadgar, G., Eds.; USENIX Association: Berkeley, CA, USA, 2021; pp. 187–202. [Google Scholar]
- Wu, Q.; Zhou, Y.; Wu, F.; Jiang, H.; Zhou, J.; Xie, C. Understanding and Exploiting the Full Potential of SSD Address Remapping. IEEE Trans. Comput.-Aided Des. Integr. Circuits Syst. 2022, 41, 5112–5125. [Google Scholar] [CrossRef]
- Jin, Y.; Tseng, H.W.; Papakonstantinou, Y.; Swanson, S. Improving SSD lifetime with byte-addressable metadata. In Proceedings of the International Symposium on Memory Systems, Alexandria, VA, USA, 2–5 October 2017; pp. 374–384. [Google Scholar]
- Jagadish, H.V.; Narayan, P.P.S.; Seshadri, S.; Sudarshan, S.; Kanneganti, R. Incremental Organization for Data Recording and Warehousing. In Proceedings of the 23rd International Conference on Very Large Data Bases, VLDB’97, Athens, Greece, 25–29 August 1997; Jarke, M., Carey, M.J., Dittrich, K.R., Lochovsky, F.H., Loucopoulos, P., Jeusfeld, M.A., Eds.; Morgan Kaufmann: Burlington, MA, USA, 1997; pp. 16–25. [Google Scholar]
- Li, H.; Hao, M.; Tong, M.H.; Sundararaman, S.; Bjørling, M.; Gunawi, H.S. The CASE of FEMU: Cheap, Accurate, Scalable and Extensible Flash Emulator. In Proceedings of the 16th USENIX Conference on File and Storage Technologies, FAST 2018, Oakland, CA, USA, 12–15 February 2018; Agrawal, N., Rangaswami, R., Eds.; USENIX Association: Berkeley, CA, USA, 2018; pp. 83–90. [Google Scholar]
- Cai, M.; Jiang, X.; Shen, J.; Ye, B. SplitDB: Closing the Performance Gap for LSM-Tree-Based Key-Value Stores. IEEE Trans. Comput. 2023, 73, 206–220. [Google Scholar] [CrossRef]
- Duan, Z.; Yao, J.; Liu, H.; Liao, X.; Jin, H.; Zhang, Y. Revisiting Log-Structured Merging for KV Stores in Hybrid Memory Systems. In Proceedings of the 28th ACM International Conference on Architectural Support for Programming Languages and Operating Systems, Vancouver, BC, Canada, 25–29 March 2023; Volume 2, pp. 674–687. [Google Scholar]
- Kaiyrakhmet, O.; Lee, S.; Nam, B.; Noh, S.H.; Choi, Y. SLM-DB: Single-Level Key-Value Store with Persistent Memory. In Proceedings of the 17th USENIX Conference on File and Storage Technologies, FAST 2019, Boston, MA, USA, 25–28 February 2019; Merchant, A., Weatherspoon, H., Eds.; USENIX Association: Berkeley, CA, USA, 2019; pp. 191–205. [Google Scholar]
- Sun, H.; Chen, G.; Yue, Y.; Qin, X. Improving LSM-Tree Based Key-Value Stores With Fine-Grained Compaction Mechanism. IEEE Trans. Cloud Comput. 2023, 11, 3778–3796. [Google Scholar] [CrossRef]
- Chai, Y.; Chai, Y.; Wang, X.; Wei, H.; Wang, Y. Adaptive lower-level driven compaction to optimize LSM-tree key-value stores. IEEE Trans. Knowl. Data Eng. 2020, 34, 2595–2609. [Google Scholar] [CrossRef]
- Shetty, P.J.; Spillane, R.P.; Malpani, R.R.; Andrews, B.; Seyster, J.; Zadok, E. Building workload-independent storage with VT-Trees. In Proceedings of the 11th USENIX Conference on File and Storage Technologies (FAST 13), San Jose, CA, USA, 12–15 February 2013; pp. 17–30. [Google Scholar]
- Dayan, N.; Weiss, T.; Dashevsky, S.; Pan, M.; Bortnikov, E.; Twitto, M. Spooky: Granulating LSM-tree compactions correctly. Proc. VLDB Endow. 2022, 15, 3071–3084. [Google Scholar] [CrossRef]
- Thonangi, R.; Yang, J. On Log-Structured Merge for Solid-State Drives. In Proceedings of the 33rd IEEE International Conference on Data Engineering, ICDE 2017, San Diego, CA, USA, 19–22 April 2017; pp. 683–694. [Google Scholar] [CrossRef]
- Hu, Y.; Du, Y. Reducing tail latency of LSM-tree based key-value store via limited compaction. In Proceedings of the SAC ’21: The 36th ACM/SIGAPP Symposium on Applied Computing, Virtual Event, Republic of Korea, 22–26 March 2021; Hung, C., Hong, J., Bechini, A., Song, E., Eds.; ACM: New York, NY, USA, 2021; pp. 178–181. [Google Scholar] [CrossRef]
- Lee, H.; Lee, C.; Lee, S.; Kim, Y. Compaction-aware zone allocation for LSM based key-value store on ZNS SSDs. In Proceedings of the HotStorage ’22: 14th ACM Workshop on Hot Topics in Storage and File Systems, Virtual Event, 27–28 June 2022; Anwar, A., Skourtis, D., Kannan, S., Ma, X., Eds.; ACM: New York, NY, USA, 2022; pp. 93–99. [Google Scholar] [CrossRef]
- Wang, X.; Jin, P.; Hua, B.; Long, H.; Huang, W. Reducing write amplification of lsm-tree with block-grained compaction. In Proceedings of the 2022 IEEE 38th International Conference on Data Engineering (ICDE), Kuala Lumpur, Malaysia, 9–12 May 2022; pp. 3119–3131. [Google Scholar]
- Jung, J.; Shin, D. Lifetime-leveling LSM-tree compaction for ZNS SSD. In Proceedings of the 14th ACM Workshop on Hot Topics in Storage and File Systems, Virtual, 27–28 June 2022; pp. 100–105. [Google Scholar]
- Wu, D.; Liu, B.; Zhao, W.; Tong, W. Znskv: Reducing data migration in lsmt-based kv stores on zns SSDs. In Proceedings of the 2022 IEEE 40th International Conference on Computer Design (ICCD), Olympic Valley, CA, USA, 23–26 October 2022; pp. 411–414. [Google Scholar]
- Lee, S.; Lee, C.; Min, D.; Park, I.; Chung, W.; Sivasubramaniam, A.; Kim, Y. Iterator Interface Extended LSM-tree-based KVSSD for Range Queries. In Proceedings of the 16th ACM International Conference on Systems and Storage, SYSTOR 2023, Haifa, Israel, 5–7 June 2023; Moatti, Y., Biran, O., Gilad, Y., Kostic, D., Eds.; ACM: New York, NY, USA, 2023; pp. 60–70. [Google Scholar] [CrossRef]
- Chen, Y.; Yang, M.; Chang, Y.; Shih, W. Parallel-Log-Single-Compaction-Tree: Flash-Friendly Two-Level Key-Value Management in KVSSDs. In Proceedings of the 25th Asia and South Pacific Design Automation Conference, ASP-DAC 2020, Beijing, China, 13–16 January 2020; pp. 277–282. [Google Scholar] [CrossRef]
- Raju, P.; Kadekodi, R.; Chidambaram, V.; Abraham, I. PebblesDB: Building Key-Value Stores using Fragmented Log-Structured Merge Trees. In Proceedings of the 26th Symposium on Operating Systems Principles, Shanghai, China, 28–31 October 2017; ACM: New York, NY, USA, 2017; pp. 497–514. [Google Scholar] [CrossRef]
- Dai, Y.; Xu, Y.; Ganesan, A.; Alagappan, R.; Kroth, B.; Arpaci-Dusseau, A.; Arpaci-Dusseau, R. From WiscKey to Bourbon: A Learned Index for Log-Structured Merge Trees. In Proceedings of the 14th USENIX Symposium on Operating Systems Design and Implementation (OSDI 20), Online, 4–6 November 2020; pp. 155–171. [Google Scholar]
- Fan, Y.; Du, Y.; Noh, S.H. RemapCom: Optimizing Compaction Performance of LSM Trees via Data Block Remapping in SSDs. In Proceedings of the Design, Automation and Test in Europe Conference (DATE 2025), Lyon, France, 31 March–3 April 2025. [Google Scholar]
Host Configuration | |
---|---|
CPU | Intel(R) Xeon(R) Gold 6226R CPU @ 2.90 GHz |
Memory | 128 GB DRAM |
OS | Ubuntu 20.04 (kernel version 5.15.0) |
Guest Configuration | |
CPU | 4 vCPU |
Memory | 16 GB DRAM |
OS | Ubuntu 20.04 (kernel version 5.15.0) |
File system | EXT4 |
NVMe SSD | 16 GB |
Flash Memory Configuration | |
Page size | 4 KB |
Page read latency | 40 μs |
Page program latency | 200 μs |
Page erase latency | 2000 μs |
Level | 1 | 2 | 3 | 4 |
---|---|---|---|---|
RemapCom-Base | 1% | 26% | 29% | 32% |
RemapCom | 5% | 47% | 52% | 53% |
Workload | Operations and Distribution |
---|---|
Load | 100% inserts, uniform |
A | 50% updates, 50% reads, zipfian |
B | 5% updates, 95% reads, zipfian |
C | 100% reads, zipfian |
D | 5% inserts, 95% reads, latest |
E | 5% updates, 95% scans, zipfian |
F | 50% reads, 50% read-modify-writes, zipfian |