Wear Leveling in SSDs Considered Harmful: A Case for Capacity Variance †
Abstract
1. Introduction
- We evaluate representative wear-leveling algorithms and show that they exhibit high write amplification and undesirable results in modern SSD configurations (Section 3).
- We qualitatively describe the benefits of capacity variance for SSDs and discuss the necessary modifications in the system and storage management (Section 4).
- We quantitatively demonstrate that a capacity-variant interface can significantly extend the SSD’s lifetime by evaluating wear-leveling algorithms, and their absence, on both fixed-capacity and capacity-variant SSDs under a set of real I/O workloads (Section 5).
2. Motivation
2.1. Managing the SSD Lifetime
2.2. Wear-Leveling Algorithms
2.3. Wear-Leveling Behaviors
3. Performance of Wear Leveling
3.1. Experimental Setup
3.2. Evaluation of Wear Leveling Under Synthetic Workloads
3.2.1. Write Amplification
3.2.2. Wear Leveling Effectiveness
3.2.3. Small Access Footprint
3.3. Summary of Findings
- Uniform access with a small access footprint: Overall, WL evens out the erase count with low write amplification (Figure 9b).
- Skewed access with a small access footprint: WL achieves good performance but at the cost of high write amplification (Figure 9a).
- Uniform access with a large access footprint: Running wear leveling makes a negligible difference compared to not running it at all, although performance anomalies can still occur (Figure 8).
- Skewed access with a large access footprint: Wear leveling not only amplifies writes but also exhibits an anomaly by significantly accelerating the erase count of a group of blocks (Figure 6a). (Throughout, write amplification follows the conventional definition given after this list.)
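For reference, the write amplification (WA) reported above can be expressed with the conventional decomposition (a standard formulation, not quoted from the paper), where $W_{\text{host}}$ denotes host writes and $W_{\text{GC}}$ and $W_{\text{WL}}$ denote the additional flash writes induced by garbage collection and wear leveling, respectively:

$$\mathrm{WA} = \frac{W_{\text{flash}}}{W_{\text{host}}} = \frac{W_{\text{host}} + W_{\text{GC}} + W_{\text{WL}}}{W_{\text{host}}} = 1 + \frac{W_{\text{GC}} + W_{\text{WL}}}{W_{\text{host}}}$$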
4. A Capacity-Variant SSD
File System Design for Capacity Variance
- Data consistency. Maintaining data consistency is crucial when the recommended capacity drops below the currently declared capacity of a logical storage partition. Shrinking the logical capacity of a file system can be a complex procedure that may result in inconsistency and data loss if not performed carefully [33]. To spare users from unmounting and remounting the device, the logical capacity should be reduced in an online manner, and the reduction itself should complete quickly and with low overhead.
- Address space decoupling (Figure 11a). The file system internally decouples the space exported to users from the LBA space and avoids using the region near the end of the logical partition as much as possible to minimize potential data relocation overhead. However, this increases the file system cleaning overhead and fragments the file system address space. Because of the negative effects of address fragmentation [34,35,36,37], we avoid this approach despite its low upfront cost.
- Data remapping (Figure 11b). Data are relocated logically, either at the file system level or the device level, to maintain data consistency while avoiding address space fragmentation. File-system-level remapping avoids updating inodes during data relocation but introduces yet another level of indirection. Device-level remapping, on the other hand, takes advantage of the already existing SSD-internal mapping table; however, it requires a new SSD command that associates data with a new LBA without physically relocating it, as well as additional non-volatile data structures to ensure correctness [32,36,38]. We do not explore this approach due to its inherent crash-consistency concerns.
- Data relocation (Figure 11c). The file system relocates valid data within the to-be-shrunk area to a lower LBA region before reducing the capacity from the higher end of the logical partition. This approach maintains the continuity of the entire address space. Additionally, the cumulative overhead of data relocation is bounded by the original logical capacity of the device throughout its lifetime.
- Step 1: State validation. The file system performs sanity testing and checkpoints the current file system layout. If the file system checkpoint functionality is disabled or the file system is not ready to shrink (i.e., frozen or read-only), the reduction will not continue. This stage also initializes and checks new file system parameters, which ensures that the remaining free space is sufficient to accommodate data within the to-be-shrunk logical blocks.
- Step 2: Relocation. The write frontiers of each logging area are moved out of the target range, and any valid data in the to-be-shrunk blocks is relocated while restricting the available space to conform to the new file system layout. This is similar to segment cleaning or defragmentation. The block bitmap, reverse mapping, and file index are updated during this stage.
- Step 3: Metadata update. Once the to-be-shrunk area is free, the file system state is validated and the file system superblock is updated. In case of a system crash or error, the file system is rolled back to its latest consistent state. (A minimal sketch of this three-step procedure appears after this list.)
- File system cleaning. For log-structured file systems, the append-only nature requires a cleaning process to reclaim scattered, invalidated logical blocks for further logging. Reducing the size of a logical partition may cause log-structured file systems to reclaim free space more frequently when utilization stays at the same level, increasing the write amplification of the file system. To mitigate this issue, dynamic logging policies are needed to adapt LFS to capacity variance [18]. Normal logging always writes data to clean space in a strictly sequential manner and thus yields desirable performance; it should be used when utilization is relatively low. Threaded logging instead writes data into the invalidated portions of the address space, which can introduce random writes but requires no cleaning operations. As utilization increases, threaded logging should therefore be preferred to avoid high file system cleaning overhead (see the policy sketch after this list).
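To make the three-step shrink procedure above concrete, the following is a minimal sketch in C. All types and helper names (fs_state, validate_shrink, relocate, commit_new_capacity) are hypothetical stand-ins for illustration, not the authors' implementation; a real file system would update the block bitmap, reverse map, and superblock under proper checkpointing.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

typedef struct {
    uint64_t total_blocks;    /* currently declared logical capacity */
    uint64_t used_blocks;     /* blocks holding valid data           */
    bool     read_only;       /* frozen or read-only file system     */
    bool     checkpoint_ok;   /* checkpoint functionality enabled    */
} fs_state;

/* Step 1: state validation -- sanity-check the file system and the new layout. */
static bool validate_shrink(const fs_state *fs, uint64_t new_blocks)
{
    if (fs->read_only || !fs->checkpoint_ok)
        return false;                      /* not ready to shrink     */
    if (new_blocks >= fs->total_blocks)
        return false;                      /* nothing to shrink       */
    /* remaining space must absorb the valid data in the shrunk area */
    return fs->used_blocks <= new_blocks;
}

/* Step 2: relocation -- move valid data out of the range [new_blocks, total). */
static void relocate(fs_state *fs, uint64_t new_blocks)
{
    for (uint64_t b = new_blocks; b < fs->total_blocks; b++) {
        /* segment-cleaning-style copy of valid data to lower LBAs;
         * block bitmap, reverse map, and file index are updated here */
        printf("relocating block %llu\n", (unsigned long long)b);
    }
}

/* Step 3: metadata update -- commit the new layout (roll back on error). */
static void commit_new_capacity(fs_state *fs, uint64_t new_blocks)
{
    fs->total_blocks = new_blocks;         /* superblock update       */
}

int main(void)
{
    fs_state fs = { .total_blocks = 8, .used_blocks = 3,
                    .read_only = false, .checkpoint_ok = true };
    uint64_t target = 6;
    if (validate_shrink(&fs, target)) {    /* Step 1 */
        relocate(&fs, target);             /* Step 2 */
        commit_new_capacity(&fs, target);  /* Step 3 */
    }
    printf("logical capacity: %llu blocks\n",
           (unsigned long long)fs.total_blocks);
    return 0;
}
```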
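The utilization-driven choice between normal and threaded logging described in the cleaning bullet can be summarized as a simple threshold policy. This is a hedged sketch under stated assumptions (a single tunable threshold, utilization expressed as a fraction); production log-structured file systems such as F2FS use more elaborate heuristics.

```c
#include <stdio.h>

/* Hypothetical policy switch: the names and the single threshold are
 * illustrative assumptions, not the authors' implementation. */
typedef enum { NORMAL_LOGGING, THREADED_LOGGING } log_policy;

/* Low utilization: plenty of clean segments, so strictly sequential
 * (normal) logging performs best. High utilization: reuse invalidated
 * blocks (threaded logging) to avoid frequent cleaning. */
static log_policy pick_policy(double utilization, double threshold)
{
    return (utilization < threshold) ? NORMAL_LOGGING : THREADED_LOGGING;
}

int main(void)
{
    const double threshold = 0.80;   /* tunable knob (assumed value) */
    for (double u = 0.50; u <= 1.00; u += 0.25) {
        printf("utilization %.2f -> %s\n", u,
               pick_policy(u, threshold) == NORMAL_LOGGING
                   ? "normal logging" : "threaded logging");
    }
    return 0;
}
```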
5. Evaluation
- Does capacity variance extend the lifetime of the SSD? How does wear leveling (WL) interact with capacity variance? (Section 5.1)
- How sensitive is the effectiveness of capacity variance to different garbage collection (GC) policies? Do GC policies alter WL results? (Section 5.2)
- What is the file system-level overhead with capacity-variant SSDs? Is there a tunable knob for capacity variance? (Section 5.3)
- Fix_NoWL does not run any WL on a fixed-capacity SSD.
- Fix_OBP runs the OBP wear-leveling algorithm on a fixed-capacity SSD.
- Fix_DP runs the Dual-Pool (DP) wear-leveling algorithm on a fixed-capacity SSD.
- Fix_PWL runs the progressive wear leveling (PWL) algorithm on a fixed-capacity SSD.
- Var_NoWL does not run any WL on a capacity-variant SSD.
- Var_OBP runs the OBP wear-leveling algorithm on a capacity-variant SSD.
- Var_DP runs the Dual-Pool (DP) wear-leveling algorithm on a capacity-variant SSD.
- Var_PWL runs the progressive wear leveling (PWL) algorithm on a capacity-variant SSD.
5.1. Effectiveness of Capacity Variance
5.1.1. Workloads with a Relatively Small Footprint
5.1.2. Workloads with a Relatively Large Footprint
5.2. Sensitivity to Garbage Collection
5.3. Limiting File System Overhead
6. Discussion and Related Works
- Wear leveling and write amplification. There exists a large body of work on garbage collection and its associated write amplification (WA) for SSDs, from analytical approaches [24,49,50,51] to designs supported by experimental results [44,52,53]. However, surprisingly little work measures the WA caused by wear leveling (WL), and what exists often relies on back-of-the-envelope calculations to estimate the overhead and lifetime [54]. Even studies that perform a more rigorous evaluation judge the efficacy of WL by the number of writes the SSD can endure [4,12,55,56] or by the distribution of erase counts [11,57,58]; only the Dual-Pool algorithm [11] reports the overhead of WL. However, our faithful implementation of the Dual-Pool algorithm yields a different WA from the original work due to differences in system configuration and workload.
- Lifetime extension without wear leveling. Prior works such as READY [4], ZombieNAND [55], and wear unleveling [56] extend SSD lifetime without wear leveling by exploiting self-recovery, reusing expired blocks, or selectively skipping writes. READY [4] dynamically throttles writes to exploit the self-recovery effect of flash memory, and ZombieNAND [55] reuses expired flash memory blocks after switching them into SLC mode. Wear unleveling [56] identifies strong and weak pages and skips writing to the weaker ones to prolong the lifetime of a block. Unlike these block- or page-level techniques, our approach introduces a new SSD interface that flexibly adjusts the usable capacity according to workload characteristics. The capacity-variant SSD is compatible with these existing methods, and integrating them can further enhance SSD lifetime. For example, reusing expired blocks and skipping weak pages can reduce the rate of capacity shrinkage in a capacity-variant SSD.
- Zoned namespace. Zoned namespace (ZNS) [31] is a new abstraction for storage devices that has gained significant interest in the research community [59,60,61]. ZNS SSDs export large fixed-sized zones (typically in the GiB range) that can be written sequentially or reset (deleted) entirely. This approach shifts the responsibility of managing space via garbage collection from the SSD to the host-side file system. Unlike its predecessor (Open-Channel SSD [62]), ZNS does not expose the physical flash memory blocks to the host and leaves lifetime management to the underlying SSD, which allows capacity variance to be integrated with ZNS.
- File systems and workload shaping. It is widely understood that SSDs perform poorly under random writes [64,65], and file systems optimized for SSDs take a log-structured approach so that writes will be sequential [10,29,66,67,68]. Because of the log-structured nature, these file systems create data of similar lifetime, leading to a reduction in wear-leveling overhead [64]. Similarly, aligning data writes to flash memory’s page size reduces the wear on the device [69].
- Flash arrays and clusters. Most existing works at the intersection of RAID and SSDs address poor performance due to the compounded susceptibility to small random writes [70,71,72,73,74]. Of these, log-structured RAID [73,74] naturally achieves good inter- and intra-SSD wear leveling, but at the cost of another level of indirection. On the other hand, Diff-RAID [75] takes a counter-intuitive approach and intentionally skews the wear across SSDs by distributing the parity blocks unevenly to avoid correlated failures in the SSD array. While we do not advocate intentionally wearing out flash memory blocks, willful neglect of wear leveling can be beneficial in the context of capacity variance.
7. Conclusions
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
Abbreviations
| Abbreviation | Meaning |
|---|---|
| WL | Wear Leveling |
| GC | Garbage Collection |
| FS | File System |
| SSD | Solid-State Drive |
References
- Mielke, N.R.; Frickey, R.E.; Kalastirsky, I.; Quan, M.; Ustinov, D.; Vasudevan, V.J. Reliability of Solid-State Drives Based on NAND Flash Memory. Proc. IEEE 2017, 105, 1725–1750. [Google Scholar] [CrossRef]
- Cai, Y.; Ghose, S.; Haratsch, E.F.; Luo, Y.; Mutlu, O. Error Characterization, Mitigation, and Recovery in Flash-Memory-Based Solid-State Drives. Proc. IEEE 2017, 105, 1666–1704. [Google Scholar] [CrossRef]
- Open NAND Flash Interface. ONFI 5.0 Spec. 2021. Available online: http://www.onfi.org/specifications/ (accessed on 12 December 2021).
- Lee, S.; Kim, T.; Kim, K.; Kim, J. Lifetime management of flash-based SSDs using recovery-aware dynamic throttling. In Proceedings of the USENIX Conference on File and Storage Technologies (FAST), San Jose, CA, USA, 14–17 February 2012. [Google Scholar]
- Jaffer, S.; Mahdaviani, K.; Schroeder, B. Rethinking WOM Codes to Enhance the Lifetime in New SSD Generations. In Proceedings of the USENIX Workshop on Hot Topics in Storage and File Systems (HotStorage), Online, 13–14 July 2020. [Google Scholar]
- Luo, Y.; Cai, Y.; Ghose, S.; Choi, J.; Mutlu, O. WARM: Improving NAND flash memory lifetime with write-hotness aware retention management. In Proceedings of the IEEE Symposium on Mass Storage Systems and Technologies (MSST), Santa Clara, CA, USA, 30 May–5 June 2015; pp. 1–14. [Google Scholar]
- Grupp, L.M.; Davis, J.D.; Swanson, S. The bleak future of NAND flash memory. In Proceedings of the USENIX Conference on File and Storage Technologies (FAST), San Jose, CA, USA, 14–17 February 2012. [Google Scholar]
- Kim, B.S.; Choi, J.; Min, S.L. Design Tradeoffs for SSD Reliability. In Proceedings of the USENIX Conference on File and Storage Technologies (FAST), Boston, MA, USA, 25–28 February 2019. [Google Scholar]
- Jiao, Z.; Bhimani, J.; Kim, B.S. Wear leveling in SSDs considered harmful. In Proceedings of the Workshop on Hot Topics in Storage and File Systems (HotStorage), Online, 27–28 June 2022; pp. 72–78. [Google Scholar]
- Gleixner, T.; Haverkamp, F.; Bityutskiy, A. UBI—Unsorted Block Images. 2006. Available online: http://linux-mtd.infradead.org/doc/ubidesign/ubidesign.pdf (accessed on 12 December 2021).
- Chang, L. On efficient wear leveling for large-scale flash-memory storage systems. In Proceedings of the ACM Symposium on Applied Computing (SAC), Seoul, Republic of Korea, 11–15 March 2007. [Google Scholar]
- Chen, F.; Yang, M.; Chang, Y.; Kuo, T. PWL: A progressive wear leveling to minimize data migration overheads for NAND flash devices. In Proceedings of the Design, Automation & Test in Europe Conference & Exhibition, (DATE), Grenoble, France, 9–13 March 2015. [Google Scholar]
- Chiang, M.L.; Lee, P.C.; Chang, R.C. Using data clustering to improve cleaning performance for flash memory. Softw. Pract. Exp. 1999, 29, 267–290. [Google Scholar] [CrossRef]
- Murugan, M.; Du, D.H. Rejuvenator: A static wear leveling algorithm for NAND flash memory with minimized overhead. In Proceedings of the IEEE Symposium on Mass Storage Systems and Technologies (MSST), Denver, CO, USA, 23–27 May 2011. [Google Scholar]
- Prabhakaran, V.; Bairavasundaram, L.N.; Agrawal, N.; Gunawi, H.S.; Arpaci-Dusseau, A.C.; Arpaci-Dusseau, R.H. IRON file systems. In Proceedings of the ACM Symposium on Operating Systems Principles (SOSP), Brighton, UK, 23–26 October 2005. [Google Scholar]
- Gal, E.; Toledo, S. Algorithms and data structures for flash memories. ACM Comput. Surv. 2005, 37, 138–163. [Google Scholar] [CrossRef]
- The SSD Guy. Comparing Wear Figures on SSDs. 2017. Available online: https://thessdguy.com/comparing-wear-figures-on-ssds/ (accessed on 12 December 2021).
- Rosenblum, M.; Ousterhout, J.K. The Design and Implementation of a Log-Structured File System. In Proceedings of the ACM Symposium on Operating System Principles (SOSP), Pacific Grove, CA, USA, 13–16 October 1991. [Google Scholar]
- Liao, J.; Zhang, F.; Li, L.; Xiao, G. Adaptive wear leveling in Flash-Based Memory. IEEE Comput. Archit. Lett. 2015, 14, 1–4. [Google Scholar] [CrossRef]
- Chang, Y.; Hsieh, J.; Kuo, T. Endurance Enhancement of Flash-Memory Storage Systems: An Efficient Static Wear Leveling Design. In Proceedings of the Design Automation Conference (DAC), San Diego, CA, USA, 4–8 June 2007; pp. 212–217. [Google Scholar]
- Kim, H.; Lee, S. A New Flash Memory Management for Flash Storage System. In Proceedings of the International Computer Software and Applications Conference (COMPSAC), Phoenix, AZ, USA, 25–26 October 1999. [Google Scholar]
- Chang, Y.; Hsieh, J.; Kuo, T. Improving Flash Wear-Leveling by Proactively Moving Static Data. IEEE Trans. Comput. 2010, 59, 53–65. [Google Scholar] [CrossRef]
- Chen, Z.; Zhao, Y. DA-GC: A Dynamic Adjustment Garbage Collection Method Considering Wear-leveling for SSD. In Proceedings of the Great Lakes Symposium on VLSI (GLSVLSI), Beijing, China, 8–11 September 2020; pp. 475–480. [Google Scholar]
- Desnoyers, P. Analytic modeling of SSD write performance. In Proceedings of the International Systems and Storage Conference (SYSTOR), Haifa, Israel, 4–6 June 2012. [Google Scholar]
- Samsung Electronics Co. Samsung SSD Application Note for Data Centers. 2014. Available online: https://download.semiconductor.samsung.com/resources/others/Samsung_SSD_845DC_04_Over-provisioning.pdf (accessed on 12 December 2021).
- Rosenblum, M.; Ousterhout, J.K. The Design and Implementation of a Log-Structured File System. Ph.D. Thesis, University of California at Berkeley, Berkeley, CA, USA, 1992. [Google Scholar]
- Spanjer, E.; Ho, E. The Why and How of SSD Performance Benchmarking—SNIA. 2011. Available online: https://www.snia.org/sites/default/education/tutorials/2011/fall/SolidState/EstherSpanjer_The_Why_How_SSD_Performance_Benchmarking.pdf (accessed on 12 December 2021).
- NVM Express. NVM Express Base Specification 2.0. 2021. Available online: https://nvmexpress.org/developers/nvme-specification/ (accessed on 12 December 2021).
- Lee, C.; Sim, D.; Hwang, J.Y.; Cho, S. F2FS: A New File System for Flash Storage. In Proceedings of the USENIX Conference on File and Storage Technologies (FAST), Santa Clara, CA, USA, 16–19 February 2015. [Google Scholar]
- Mathur, A.; Cao, M.; Bhattacharya, S.; Dilger, A.; Tomas, A.; Vivier, L. The new ext4 filesystem: Current status and future plans. In Proceedings of the Ottawa Linux symposium, Ottawa, ON, Canada, 27–30 June 2007. [Google Scholar]
- Western Digital. Zoned Namespaces (ZNS) SSDs. 2020. Available online: https://zonedstorage.io/introduction/zns/ (accessed on 12 December 2021).
- Oh, G.; Seo, C.; Mayuram, R.; Kee, Y.; Lee, S. SHARE Interface in Flash Storage for Relational and NoSQL Databases. In Proceedings of the International Conference on Management of Data (SIGMOD), San Francisco, CA, USA, 26 June–1 July 2016; pp. 343–354. [Google Scholar]
- Kim, B.S.; Lee, E.; Lee, S.; Min, S.L. CPR for SSDs. In Proceedings of the Workshop on Hot Topics in Operating Systems (HotOS), Bertinoro, Italy, 12–15 May 2019. [Google Scholar]
- Conway, A.; Bakshi, A.; Jiao, Y.; Jannen, W.; Zhan, Y.; Yuan, J.; Bender, M.A.; Johnson, R.; Kuszmaul, B.C.; Porter, D.E.; et al. File Systems Fated for Senescence? Nonsense, Says Science! In Proceedings of the USENIX Conference on File and Storage Technologies (FAST), Santa Clara, CA, USA, 27 February–2 March 2017; pp. 45–58. [Google Scholar]
- Conway, A.; Knorr, E.; Jiao, Y.; Bender, M.A.; Jannen, W.; Johnson, R.; Porter, D.E.; Farach-Colton, M. Filesystem Aging: It’s more Usage than Fullness. In Proceedings of the Workshop on Hot Topics in Storage and File Systems (HotStorage), Renton, WA, USA, 8–9 July 2019; p. 15. [Google Scholar]
- Hahn, S.S.; Lee, S.; Ji, C.; Chang, L.; Yee, I.; Shi, L.; Xue, C.J.; Kim, J. Improving File System Performance of Mobile Storage Systems Using a Decoupled Defragmenter. In Proceedings of the USENIX Annual Technical Conference (ATC), Santa Clara, CA, USA, 12–14 July 2017; pp. 759–771. [Google Scholar]
- Ji, C.; Chang, L.; Hahn, S.S.; Lee, S.; Pan, R.; Shi, L.; Kim, J.; Xue, C.J. File Fragmentation in Mobile Devices: Measurement, Evaluation, and Treatment. IEEE Trans. Mob. Comput. 2019, 18, 2062–2076. [Google Scholar] [CrossRef]
- Zhou, Y.; Wu, Q.; Wu, F.; Jiang, H.; Zhou, J.; Xie, C. Remap-SSD: Safely and Efficiently Exploiting SSD Address Remapping to Eliminate Duplicate Writes. In Proceedings of the USENIX Conference on File and Storage Technologies (FAST), Online, 23–25 February 2021; pp. 187–202. [Google Scholar]
- Liang, S.; Qiao, Z.; Tang, S.; Hochstetler, J.; Fu, S.; Shi, W.; Chen, H. An Empirical Study of Quad-Level Cell (QLC) NAND Flash SSDs for Big Data Applications. In Proceedings of the IEEE International Conference on Big Data (Big Data), Los Angeles, CA, USA, 9–12 December 2019; pp. 3676–3685. [Google Scholar]
- Liu, S.; Zou, X. QLC NAND study and enhanced Gray coding methods for sixteen-level-based program algorithms. Microelectron. J. 2017, 66, 58–66. [Google Scholar] [CrossRef]
- Yadgar, G.; Gabel, M.; Jaffer, S.; Schroeder, B. SSD-based Workload Characteristics and Their Performance Implications. ACM Trans. Storage 2021, 17, 8:1–8:26. [Google Scholar] [CrossRef]
- Lee, C.; Kumano, T.; Matsuki, T.; Endo, H.; Fukumoto, N.; Sugawara, M. Understanding storage traffic characteristics on enterprise virtual desktop infrastructure. In Proceedings of the ACM International Systems and Storage Conference (SYSTOR), Haifa, Israel, 22–24 May 2017. [Google Scholar]
- Kavalanekar, S.; Worthington, B.L.; Zhang, Q.; Sharda, V. Characterization of storage workload traces from production Windows Servers. In Proceedings of the International Symposium on Workload Characterization (IISWC), Seattle, WA, USA, 14–16 September 2008. [Google Scholar]
- Kang, W.; Shin, D.; Yoo, S. Reinforcement Learning-Assisted Garbage Collection to Mitigate Long-Tail Latency in SSD. ACM Trans. Embed. Comput. Syst. 2017, 16, 134:1–134:20. [Google Scholar] [CrossRef]
- Kim, B.S.; Yang, H.S.; Min, S.L. AutoSSD: An Autonomic SSD Architecture. In Proceedings of the USENIX Annual Technical Conference (ATC), Boston, MA, USA, 11–13 July 2018; pp. 677–690. [Google Scholar]
- Dean, J.; Barroso, L.A. The tail at scale. Commun. ACM 2013, 56, 74–80. [Google Scholar] [CrossRef]
- Hao, M.; Soundararajan, G.; Kenchammana-Hosekote, D.R.; Chien, A.A.; Gunawi, H.S. The Tail at Store: A Revelation from Millions of Hours of Disk and SSD Deployments. In Proceedings of the USENIX Conference on File and Storage Technologies (FAST), Santa Clara, CA, USA, 22–25 February 2016; pp. 263–276. [Google Scholar]
- Yan, S.; Li, H.; Hao, M.; Tong, M.H.; Sundararaman, S.; Chien, A.A.; Gunawi, H.S. Tiny-Tail Flash: Near-Perfect Elimination of Garbage Collection Tail Latencies in NAND SSDs. In Proceedings of the USENIX Conference on File and Storage Technologies (FAST), Santa Clara, CA, USA, 27 February–2 March 2017; pp. 15–28. [Google Scholar]
- Yang, Y.; Misra, V.; Rubenstein, D. On the Optimality of Greedy Garbage Collection for SSDs. SIGMETRICS Perform. Eval. Rev. 2015, 43, 63–65. [Google Scholar] [CrossRef]
- Hu, X.; Eleftheriou, E.; Haas, R.; Iliadis, I.; Pletka, R.A. Write amplification analysis in flash-based solid state drives. In Proceedings of the Israeli Experimental Systems Conference (SYSTOR), Haifa, Israel, 4–6 May 2009; p. 10. [Google Scholar]
- Park, C.; Lee, S.; Won, Y.; Ahn, S. Practical Implication of Analytical Models for SSD Write Amplification. In Proceedings of the ACM/SPEC on International Conference on Performance Engineering (ICPE), L’Aquila, Italy, 22–26 April 2017; pp. 257–262. [Google Scholar]
- Chang, L.; Kuo, T.; Lo, S. Real-time garbage collection for flash-memory storage systems of real-time embedded systems. ACM Trans. Embed. Comput. Syst. 2004, 3, 837–863. [Google Scholar] [CrossRef]
- Zhang, Q.; Li, X.; Wang, L.; Zhang, T.; Wang, Y.; Shao, Z. Optimizing deterministic garbage collection in NAND flash storage systems. In Proceedings of the IEEE Real-Time and Embedded Technology and Applications Symposium (RTAS), Seattle, WA, USA, 13–16 April 2015; pp. 14–23. [Google Scholar]
- Zhang, T.; Zuck, A.; Porter, D.E.; Tsafrir, D. Flash Drive Lifespan *is* a Problem. In Proceedings of the Workshop on Hot Topics in Operating Systems (HotOS), Whistler, BC, Canada, 7–10 May 2017; pp. 42–49. [Google Scholar]
- Wilson, E.H.; Jung, M.; Kandemir, M.T. ZombieNAND: Resurrecting Dead NAND Flash for Improved SSD Longevity. In Proceedings of the IEEE International Symposium on Modelling, Analysis & Simulation of Computer and Telecommunication Systems (MASCOTS), Paris, France, 9–11 September 2014; pp. 229–238. [Google Scholar]
- Jimenez, X.; Novo, D.; Ienne, P. Wear unleveling: Improving NAND flash lifetime by balancing page endurance. In Proceedings of the USENIX conference on File and Storage Technologies (FAST), Santa Clara, CA, USA, 17–20 February 2014; pp. 47–59. [Google Scholar]
- Agrawal, N.; Prabhakaran, V.; Wobber, T.; Davis, J.D.; Manasse, M.S.; Panigrahy, R. Design Tradeoffs for SSD Performance. In Proceedings of the USENIX Annual Technical Conference (ATC), Boston, MA, USA, 22–27 June 2008; pp. 57–70. [Google Scholar]
- Huang, J.; Badam, A.; Caulfield, L.; Nath, S.; Sengupta, S.; Sharma, B.; Qureshi, M.K. FlashBlox: Achieving Both Performance Isolation and Uniform Lifetime for Virtualized SSDs. In Proceedings of the USENIX Conference on File and Storage Technologies (FAST), Santa Clara, CA, USA, 27 February–2 March 2017; pp. 375–390. [Google Scholar]
- Bjørling, M.; Aghayev, A.; Holmberg, H.; Ramesh, A.; Moal, D.L.; Ganger, G.R.; Amvrosiadis, G. ZNS: Avoiding the Block Interface Tax for Flash-based SSDs. In Proceedings of the USENIX Annual Technical Conference (ATC), Online, 14–16 July 2021; pp. 689–703. [Google Scholar]
- Han, K.; Gwak, H.; Shin, D.; Hwang, J. ZNS+: Advanced Zoned Namespace Interface for Supporting In-Storage Zone Compaction. In Proceedings of the USENIX Symposium on Operating Systems Design and Implementation (OSDI), Online, 14–16 July 2021; pp. 147–162. [Google Scholar]
- Stavrinos, T.; Berger, D.S.; Katz-Bassett, E.; Lloyd, W. Don’t be a blockhead: Zoned namespaces make work on conventional SSDs obsolete. In Proceedings of the Workshop on Hot Topics in Operating Systems (HotOS), Ann Arbor, MI, USA, 1–3 June 2021; pp. 144–151. [Google Scholar]
- Bjørling, M.; Gonzalez, J.; Bonnet, P. LightNVM: The Linux Open-Channel SSD Subsystem. In Proceedings of the USENIX Conference on File and Storage Technologies (FAST), Santa Clara, CA, USA, 27 February–2 March 2017; pp. 359–374. [Google Scholar]
- Le Moal, D.; Yao, T. Zonefs: Mapping POSIX File System Interface to Raw Zoned Block Device Accesses. In Proceedings of the Linux Storage and Filesystems Conference (VAULT), Santa Clara, CA, USA, 24–25 February 2020; p. 19. [Google Scholar]
- He, J.; Kannan, S.; Arpaci-Dusseau, A.C.; Arpaci-Dusseau, R.H. The Unwritten Contract of Solid State Drives. In Proceedings of the European Conference on Computer Systems (EuroSys), Belgrade, Serbia, 23–26 April 2017; pp. 127–144. [Google Scholar]
- Chen, F.; Koufaty, D.A.; Zhang, X. Understanding intrinsic characteristics and system implications of flash memory based solid state drives. In Proceedings of the International Joint Conference on Measurement and Modeling of Computer Systems (SIGMETRICS), Seattle, WA, USA, 15–19 June 2009; pp. 181–192. [Google Scholar]
- Min, C.; Kim, K.; Cho, H.; Lee, S.; Eom, Y.I. SFS: Random write considered harmful in solid state drives. In Proceedings of the USENIX conference on File and Storage Technologies (FAST), San Jose, CA, USA, 14–17 February 2012; p. 12. [Google Scholar]
- Zhang, J.; Shu, J.; Lu, Y. ParaFS: A Log-Structured File System to Exploit the Internal Parallelism of Flash Devices. In Proceedings of the USENIX Annual Technical Conference (ATC), Denver, CO, USA, 22–24 June 2016; pp. 87–100. [Google Scholar]
- One, A. Yet Another Flash File System. Available online: http://www.yaffs.net (accessed on 12 December 2021).
- Kakaraparthy, A.; Patel, J.M.; Park, K.; Kroth, B. Optimizing Databases by Learning Hidden Parameters of Solid State Drives. Proc. VLDB Endow. 2019, 13, 519–532. [Google Scholar] [CrossRef]
- Jeremic, N.; Mühl, G.; Busse, A.; Richling, J. The pitfalls of deploying solid-state drive RAIDs. In Proceedings of the Annual Haifa Experimental Systems Conference (SYSTOR), Haifa, Israel, 30 May–1 June 2011; p. 14. [Google Scholar]
- Moon, S.; Reddy, A.L.N. Does RAID Improve Lifetime of SSD Arrays? ACM Trans. Storage 2016, 12, 11:1–11:29. [Google Scholar] [CrossRef]
- Kim, Y.; Oral, S.; Shipman, G.M.; Lee, J.; Dillow, D.; Wang, F. Harmonia: A globally coordinated garbage collector for arrays of Solid-State Drives. In Proceedings of the IEEE Symposium on Mass Storage Systems and Technologies (MSST), Denver, CO, USA, 23–27 May 2011; pp. 1–12. [Google Scholar]
- Kim, J.; Lim, K.; Jung, Y.; Lee, S.; Min, C.; Noh, S.H. Alleviating Garbage Collection Interference Through Spatial Separation in All Flash Arrays. In Proceedings of the USENIX Annual Technical Conference (ATC), Renton, WA, USA, 10–12 July 2019; pp. 799–812. [Google Scholar]
- Chiueh, T.; Tsao, W.; Sun, H.; Chien, T.; Chang, A.; Chen, C. Software Orchestrated Flash Array. In Proceedings of the International Conference on Systems and Storage (SYSTOR), Haifa, Israel, 10–12 June 2014; pp. 14:1–14:11. [Google Scholar]
- Balakrishnan, M.; Kadav, A.; Prabhakaran, V.; Malkhi, D. Differential RAID: Rethinking RAID for SSD reliability. In Proceedings of the European Conference on Computer Systems (EuroSys), Paris, France, 13–16 April 2010; pp. 15–26. [Google Scholar]
| Parameter | Value | Parameter | Value |
|---|---|---|---|
| Page size | 4 KiB | Physical capacity | 284 GiB |
| Pages per block | 256 | Logical capacity | 256 GiB |
| Block size | 1 MiB | Over-provisioning | 11% |
| Block allocation | FIFO | Garbage collection | Greedy |
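For reference, the over-provisioning figure in the table above follows directly from the listed capacities, assuming the common definition of over-provisioning as the extra physical space relative to the logical capacity:

$$\mathrm{OP} = \frac{C_{\mathrm{physical}} - C_{\mathrm{logical}}}{C_{\mathrm{logical}}} = \frac{284\,\mathrm{GiB} - 256\,\mathrm{GiB}}{256\,\mathrm{GiB}} \approx 10.9\% \approx 11\%$$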
| Access Footprint | Uniform Access | Skewed Access |
|---|---|---|
| Small footprint | Effective | Write amplified |
| Large footprint | Negligible | Anomaly |
| Workload | Description | Footprint (GiB) | Avg. Write Size (KiB) | Hotness (%) | Sequentiality |
|---|---|---|---|---|---|
| YCSB-A | User session recording | 89.99 | 50.48 | 64.69/35.31 | 0.49 |
| VDI | Virtual desktop infrastructure | 255.99 | 17.99 | 64.45/35.55 | 0.14 |
| WBS | Windows build server | 56.05 | 27.82 | 60.34/39.66 | 0.02 |
| DTRS | Developer tools release | 150.63 | 31.85 | 54.20/45.80 | 0.12 |
| DAP-DS | Advertisement caching tier | 0.17 | 7.21 | 78.62/21.38 | 0.03 |
| DAP-PS | Advertisement payload | 36.06 | 97.20 | 55.02/44.98 | 0.16 |
| LM-TBE | Map service backend | 239.49 | 61.90 | 60.29/39.71 | 0.94 |
| MSN-CFS | Storage metadata | 5.58 | 12.92 | 69.28/30.72 | 0.25 |
| MSN-BEFS | Storage backend file | 31.42 | 11.62 | 70.18/29.82 | 0.03 |
| RAD-AS | Remote access authentication | 4.53 | 9.87 | 70.90/29.10 | 0.45 |
| RAD-BE | Remote access backend | 14.73 | 13.02 | 65.51/34.49 | 0.33 |