Search Results (35)

Search Parameters:
Keywords = SSDs (Solid State Drives)

21 pages, 1202 KiB  
Article
Exploiting Data Duplication to Reduce Data Migration in Garbage Collection Inside SSD
by Shiqiang Nie, Jie Niu, Chaoyun Yang, Peng Zhang, Qiong Yang, Dong Wang and Weiguo Wu
Electronics 2025, 14(9), 1873; https://doi.org/10.3390/electronics14091873 - 4 May 2025
Viewed by 707
Abstract
NAND flash memory has been widely adopted as the primary data storage medium in data centers. However, the inherent out-of-place update characteristic of NAND flash necessitates garbage collection (GC) operations on NAND flash-based solid-state drives (SSDs) to reclaim flash blocks occupied by invalid data. GC entails additional read and write operations, which can block user requests and thereby increase tail latency. Moreover, frequent GC execution tends to induce additional page writes, further reducing the lifetime of SSDs. In light of these challenges, we introduce an innovative GC scheme termed SplitGC. This scheme leverages records of data redundancy gathered during periodic read scrub operations within the SSD. By analyzing these data duplication characteristics, SplitGC enhances the victim block selection strategy. Furthermore, it bifurcates the migration of valid data pages into two phases: non-duplicate pages follow standard relocation procedures, whereas the movement of duplicate pages is scheduled during idle periods of the SSD. The experimental results show that our scheme reduces GC-induced tail latency by 8% to 83% at the 99.99th percentile and decreases the amount of valid page migration by 38% to 67% compared with existing schemes.
(This article belongs to the Section Microelectronics)
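
As a rough illustration of the split-migration idea described in the abstract, the sketch below separates a victim block's valid pages into non-duplicate pages (relocated immediately) and duplicate pages (queued for idle time); the block layout, duplication flags, and selection heuristic are assumptions, not the authors' implementation.

```python
# Hypothetical sketch of SplitGC's two-phase valid-page migration.
from collections import deque

class Block:
    def __init__(self, block_id, valid_pages):
        self.block_id = block_id
        # valid_pages: list of (page_id, is_duplicate) pairs; is_duplicate
        # would come from read-scrub-time duplication records.
        self.valid_pages = valid_pages

    def migration_cost(self):
        # Deferring duplicate pages lowers the cost charged to foreground GC.
        return sum(1 for _, dup in self.valid_pages if not dup)

def select_victim(blocks):
    # Prefer the block whose *non-duplicate* valid pages are fewest.
    return min(blocks, key=lambda b: b.migration_cost())

def split_gc(blocks, idle_queue: deque):
    victim = select_victim(blocks)
    migrated_now = []
    for page_id, is_dup in victim.valid_pages:
        if is_dup:
            idle_queue.append(page_id)      # phase 2: move when the SSD is idle
        else:
            migrated_now.append(page_id)    # phase 1: standard relocation
    return victim.block_id, migrated_now

idle = deque()
blocks = [Block(0, [(1, True), (2, False)]), Block(1, [(3, False), (4, False)])]
print(split_gc(blocks, idle), list(idle))
```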

16 pages, 971 KiB  
Article
Solid-State Drive Failure Prediction Using Anomaly Detection
by Vanja Luković, Željko Jovanović, Slađana Đurašević Pešović, Uroš Pešović and Borislav Đorđević
Electronics 2025, 14(7), 1433; https://doi.org/10.3390/electronics14071433 - 2 Apr 2025
Viewed by 1222
Abstract
Solid-State Drives (SSDs) have enabled the implementation of real-time cloud services, with a primary focus on high performance and high availability. SSD failure prediction can improve overall system availability by preventing data loss and service interruptions. SSDs employ a built-in SMART (Self-Monitoring, Analysis, and Reporting Technology) system to predict failures when certain operating parameters exceed predefined thresholds. Such univariate SMART-based models can predict only a limited set of drive failures. Research in SSD failure prediction therefore focuses on multivariate models, which exploit the complex interactions between SMART attributes that lead to drive failure in order to detect a much larger set of failures. This paper presents an anomaly detection model based on the Mahalanobis distance measure, which is used for SSD failure prediction. The model ranks the features according to their influence on failure prediction using a forward feature selection algorithm. The proposed model is tested on a publicly available Alibaba SSD dataset, where the six highest-ranked SMART features were identified. Using this subset of SMART features, our model detects 64% of failures with 81% accuracy while maintaining a high precision of 96%.
(This article belongs to the Section Computer Science & Engineering)
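
A minimal sketch of Mahalanobis-distance-based anomaly detection over a handful of SMART attributes, assuming a matrix of samples from healthy drives; the synthetic data, feature count, and threshold are illustrative, not the paper's dataset or tuning.

```python
import numpy as np

def fit_healthy_profile(healthy: np.ndarray):
    mean = healthy.mean(axis=0)
    cov = np.cov(healthy, rowvar=False)
    inv_cov = np.linalg.pinv(cov)       # pseudo-inverse tolerates collinear features
    return mean, inv_cov

def mahalanobis(x, mean, inv_cov) -> float:
    d = x - mean
    return float(np.sqrt(d @ inv_cov @ d))

# Six hypothetical top-ranked SMART features per drive sample.
rng = np.random.default_rng(0)
healthy = rng.normal(size=(500, 6))
mean, inv_cov = fit_healthy_profile(healthy)

suspect = np.array([4.0, 0.1, -3.5, 2.2, 0.0, 5.0])
THRESHOLD = 4.0                          # would be tuned on validation data
print("failure predicted:", mahalanobis(suspect, mean, inv_cov) > THRESHOLD)
```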

33 pages, 3673 KiB  
Article
REO: Revisiting Erase Operation for Improving Lifetime and Performance of Modern NAND Flash-Based SSDs
by Beomjun Kim and Myungsuk Kim
Electronics 2025, 14(4), 738; https://doi.org/10.3390/electronics14040738 - 13 Feb 2025
Viewed by 1953
Abstract
This work investigates a new erase scheme in NAND flash memory to improve the lifetime and performance of modern solid-state drives (SSDs). In NAND flash memory, an erase operation applies a high voltage (e.g., >20 V) to flash cells for a long time (e.g., >3.5 ms), which degrades cell endurance and can delay user I/O requests. While a large body of prior work has proposed various techniques to mitigate the negative impact of erase operations, no work has yet investigated how erase latency and voltage should be set to fully exploit the potential of NAND flash memory; most existing techniques use a fixed latency and voltage for every erase operation, set to cover the worst-case operating conditions. To address this, we propose Revisiting Erase Operation (REO), a new erase scheme that dynamically adjusts erase latency and voltage depending on the cells' current erase characteristics. We design REO with two key approaches. First, REO accurately predicts the near-optimal erase latency based on the number of fail bits during an erase operation. To maximize its benefits, REO aggressively yet safely reduces erase latency by leveraging the large reliability margin present in modern SSDs. Second, REO applies a near-optimal erase voltage to each WL based on its unique erase characteristics. We demonstrate the feasibility and reliability of REO using 160 real 3D NAND flash chips, showing that it enhances SSD lifetime over the conventional erase scheme by 43% without changes to existing NAND flash chips. Our system-level evaluation using eleven real-world workloads shows that an REO-enabled SSD improves average I/O performance by 12% and reduces read tail latency by 38%, on average, over a state-of-the-art technique.
(This article belongs to the Section Computer Science & Engineering)
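
The sketch below illustrates the general idea of an adaptive, incremental erase loop that stops once the fail-bit count fits within the ECC margin and raises the voltage only when needed; the chip interface, margin, and step sizes are invented for illustration and are not the REO implementation.

```python
ECC_MARGIN_BITS = 72        # hypothetical correctable-bit budget
STEP_MS = 0.5               # hypothetical erase-pulse duration per step

class MockChip:
    """Stand-in for a flash chip: fail bits shrink with accumulated erase effort."""
    def __init__(self, initial_fail_bits=500):
        self.fail_bits = initial_fail_bits
    def erase_pulse(self, block, voltage, duration_ms):
        self.fail_bits = int(self.fail_bits * 0.5)   # toy model of erase progress
    def count_fail_bits(self, block):
        return self.fail_bits

def adaptive_erase(chip, block, base_voltage=14.0):
    latency_ms, voltage = 0.0, base_voltage
    while True:
        chip.erase_pulse(block, voltage, STEP_MS)
        latency_ms += STEP_MS
        if chip.count_fail_bits(block) <= ECC_MARGIN_BITS:
            return latency_ms                        # stop early; the margin absorbs the rest
        voltage += 0.2                               # escalate only for hard-to-erase cells

print(adaptive_erase(MockChip(), block=0), "ms")
```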

22 pages, 1541 KiB  
Article
A Framework for Integrating Log-Structured Merge-Trees and Key–Value Separation in Tiered Storage
by Charles Jaranilla, Guangxun Zhao, Gunhee Choi, Sohyun Park and Jongmoo Choi
Electronics 2025, 14(3), 564; https://doi.org/10.3390/electronics14030564 - 30 Jan 2025
Viewed by 1220
Abstract
This paper presents an approach that integrates tiered storage into the Log-Structured Merge (LSM)-tree to balance Key–Value Store (KVS) performance and storage financial cost trade-offs. The implementation focuses on applying tiered storage to LSM-tree-based KVS architectures, using vertical and horizontal storage alignment strategies, or a combination of the two. Additionally, these configurations leverage key–value (KV) separation to further improve performance. Our experiments reveal that this approach reduces storage financial costs while offering trade-offs in write and read performance. For write-intensive workloads, our approach achieves competitive performance compared to a fast NVMe Solid State Drive (SSD)-only approach while storing 96% of the data on more affordable SATA SSDs. Additionally, it exhibits lookup performance comparable to BlobDB and improves range query performance by 1.8× over RocksDB on NVMe SSDs. Overall, the approach results in a 49.5% reduction in storage financial cost compared to RocksDB and BlobDB on NVMe SSDs. The integration of selective KV separation further advances these improvements, setting the stage for future research into offloading remote data in LSM-tree tiered storage systems.
(This article belongs to the Special Issue Future Trends of Artificial Intelligence (AI) and Big Data)
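
A toy placement policy in the spirit of the paper: upper LSM levels stay on a fast NVMe SSD, lower levels and large separated values go to cheaper SATA SSDs. The mount points, level cutoff, and value-separation threshold are hypothetical.

```python
NVME_PATH = "/mnt/nvme"         # hypothetical mount points
SATA_PATH = "/mnt/sata"
VALUE_SEPARATION_BYTES = 1024   # values at least this large go to a blob file
FAST_TIER_MAX_LEVEL = 1         # vertical alignment: L0-L1 on NVMe

def place_sstable(level):
    # Vertical alignment: shallow (hot) levels on the fast device.
    return NVME_PATH if level <= FAST_TIER_MAX_LEVEL else SATA_PATH

def place_value(value, level):
    """Return (kind, device) for a key-value pair written at a given level."""
    if len(value) >= VALUE_SEPARATION_BYTES:
        # KV separation: keep only key + pointer in the SSTable,
        # store the value in a blob file on the cheaper tier.
        return "blob", SATA_PATH
    return "inline", place_sstable(level)

print(place_value(b"x" * 4096, level=3))   # ('blob', '/mnt/sata')
print(place_value(b"small", level=0))      # ('inline', '/mnt/nvme')
```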

16 pages, 4458 KiB  
Article
High-Performance Garbage Collection Scheme with Low Data Transfer Overhead for NoC-Based SSDC
by Seyeon Ahn, Donghyuk Im, Donggon You and Youpyo Hong
Electronics 2024, 13(23), 4838; https://doi.org/10.3390/electronics13234838 - 7 Dec 2024
Cited by 1 | Viewed by 1143
Abstract
Solid-state drives (SSDs) have become the preferred storage solution for performance-critical applications due to their high speed, durability, and energy efficiency. However, the inherent characteristics of NAND flash memory, such as block-level erasure and data fragmentation, necessitate frequent garbage collection (GC) operations to reclaim storage space. These operations, while essential, introduce significant performance overhead, particularly in modern SSD controllers (SSDCs) that utilize network-on-chip (NoC) architectures. In such architectures, GC requires substantial data transfer over interconnects for error correction, leading to increased latency and reduced throughput. This paper presents a novel GC scheme designed to minimize latency in NoC-based SSDCs. Unlike conventional methods that unconditionally transfer data for error correction, the proposed approach selectively determines the data transfer path based on the presence of errors. By leveraging the low error probability of NAND flash memory, this scheme avoids unnecessary data traversal across the interconnect, significantly reducing GC overhead. A hardware implementation using task queues ensures efficient parallelism without disrupting other operations. The experimental results demonstrate that the proposed scheme improves SSD performance across various real-world workloads, achieving up to a 26.9% reduction in average latency and a 50.0% reduction in peak latency compared to traditional GC methods. These findings highlight the potential of optimizing data traversal paths in NoC architectures, providing a scalable solution for enhancing SSD performance for diverse applications.
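
A small sketch of the selective data-path idea: during a GC copy, only pages whose read reports ECC errors travel across the interconnect to the ECC engine, while clean pages are copied on the flash side. The controller objects and error rate are illustrative assumptions.

```python
import random

class FlashChannel:
    def read_with_ecc_status(self, page):
        # Toy model: NAND reads are error-free most of the time.
        return {"data": page, "has_errors": random.random() < 0.01}
    def local_copy(self, page, dest):
        return ("copied-on-flash-side", page, dest)

class EccEngine:
    def correct_and_write(self, data, dest):
        return ("corrected-via-interconnect", data, dest)

def gc_copy(channel, ecc, page, dest):
    result = channel.read_with_ecc_status(page)
    if result["has_errors"]:
        # Rare path: traverse the NoC to the ECC engine, then write back.
        return ecc.correct_and_write(result["data"], dest)
    # Common path: avoid interconnect traffic entirely.
    return channel.local_copy(page, dest)

random.seed(1)
print(gc_copy(FlashChannel(), EccEngine(), page=42, dest=7))
```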

15 pages, 5636 KiB  
Article
Sequentialized Virtual File System: A Virtual File System Enabling Address Sequentialization for Flash-Based Solid State Drives
by Inhwi Hwang, Sunggon Kim, Hyeonsang Eom and Yongseok Son
Computers 2024, 13(11), 284; https://doi.org/10.3390/computers13110284 - 2 Nov 2024
Viewed by 1408
Abstract
Solid-state drives (SSDs) are widely adopted in mobile devices, desktop PCs, and data centers since they offer higher throughput, lower latency, and lower power consumption to modern computing systems and applications compared with hard disk drives (HDDs). However, SSD performance can degrade depending on the I/O access pattern due to the unique characteristics of SSDs. For example, random I/O degrades SSD performance since it reduces spatial locality and induces garbage collection (GC) overhead. In this paper, we present an address reshaping scheme in the virtual file system (VFS), called sVFS, for improved performance and easy deployment. To achieve this, it first sequentializes random access patterns in the VFS layer, which is an abstract layer on top of concrete file systems. Thus, our scheme is independent of, and easily deployed on, any concrete file system, block layer configuration (e.g., RAID), and device. Second, we adopt a mapping table for managing sequentialized addresses, which guarantees correct read operations. Third, we support transaction processing for updating the mapping table to avoid sacrificing consistency. We implement our scheme at the VFS layer in Linux kernel 5.15.34. The evaluation results show that our scheme improves random write throughput by up to 27%, 36%, 34%, and 2.35× using the microbenchmark, and by 25%, 22%, 20%, and 3.51× using the macrobenchmark, compared with the existing scheme in the case of EXT4, F2FS, XFS, and BTRFS, respectively.
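
A user-space sketch of the core mechanism: random logical writes are redirected to the next sequential physical slot while a mapping table (updated through a journal stand-in) keeps reads correct. The in-memory structures are simplifications of the kernel-level design.

```python
class SequentializingLayer:
    def __init__(self):
        self.next_physical = 0        # monotonically increasing write cursor
        self.mapping = {}             # logical block -> physical block
        self.journal = []             # stand-in for transactional map updates

    def write(self, logical_block: int, data: bytes) -> int:
        physical = self.next_physical
        self.next_physical += 1
        # Record the remap transactionally before exposing it (consistency).
        self.journal.append(("remap", logical_block, physical))
        self.mapping[logical_block] = physical
        return physical               # the device sees a sequential address

    def read(self, logical_block: int) -> int:
        return self.mapping[logical_block]   # correct reads via the map

layer = SequentializingLayer()
for lba in (907, 13, 4521):          # a random-looking write pattern
    layer.write(lba, b"...")
print([layer.read(lba) for lba in (907, 13, 4521)])   # [0, 1, 2]
```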

13 pages, 354 KiB  
Article
Improving Performance of Key–Value Stores for High-Performance Storage Devices
by Sunggon Kim and Hwajung Kim
Appl. Sci. 2024, 14(17), 7538; https://doi.org/10.3390/app14177538 - 26 Aug 2024
Viewed by 1739
Abstract
Key–value stores (KV stores) are becoming popular in both academia and industry due to their high performance and simplicity in data management. Unlike traditional database systems such as relational databases, KV stores manage data as key–value pairs and do not support relationships between the data. This simplicity enables KV stores to offer higher performance. To further improve the performance of KV stores, high-performance storage devices such as solid-state drives (SSDs) and non-volatile memory express (NVMe) SSDs, intended to expedite data processing and storage, have been widely adopted. However, our studies indicate that, due to a lack of multi-thread-oriented programming, the performance of KV stores falls far below the raw performance of high-performance storage devices. In this paper, we analyze the performance of existing KV stores on high-performance storage devices. Our analysis reveals that the actual performance of KV stores is below the potential performance these storage devices could offer. According to the profiling results, we argue that this performance gap is due to the coarse-grained locking mechanisms of existing KV stores. To alleviate this issue, we propose a multi-threaded compaction operation that leverages idle threads to participate in I/O operations. Our experimental results demonstrate that our scheme can improve the performance of KV stores by up to 16% by increasing the number of threads involved in I/O operations.
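
A minimal sketch of the multi-threaded compaction idea: idle threads read sorted runs in parallel before a single merge and write-back. The file format, thread count, and merge step are illustrative, not the modified KV store's code.

```python
from concurrent.futures import ThreadPoolExecutor
import heapq, json, tempfile, os

def read_sorted_run(path):
    with open(path) as f:
        return json.load(f)           # each run: a sorted list of [key, value]

def compact(input_paths, output_path, io_threads=4):
    with ThreadPoolExecutor(max_workers=io_threads) as pool:
        runs = list(pool.map(read_sorted_run, input_paths))   # parallel reads
    merged = list(heapq.merge(*runs, key=lambda kv: kv[0]))   # CPU-side merge
    with open(output_path, "w") as f:
        json.dump(merged, f)

# Tiny usage example with temporary files.
tmp = tempfile.mkdtemp()
paths = []
for i, run in enumerate([[["a", 1], ["c", 3]], [["b", 2], ["d", 4]]]):
    p = os.path.join(tmp, f"run{i}.json")
    with open(p, "w") as f:
        json.dump(run, f)
    paths.append(p)
compact(paths, os.path.join(tmp, "out.json"))
```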

11 pages, 443 KiB  
Article
Data Placement Using a Classifier for SLC/QLC Hybrid SSDs
by Heeseong Cho and Taeseok Kim
Appl. Sci. 2024, 14(4), 1648; https://doi.org/10.3390/app14041648 - 18 Feb 2024
Viewed by 2204
Abstract
In hybrid SSDs (solid-state drives) consisting of SLC (single-level cell) and QLC (quad-level cell) flash, efficiently using the limited SLC cache space is crucial. In this paper, we present a practical data placement scheme that determines the placement location of incoming write requests using a lightweight machine-learning model. It leverages information about I/O workload characteristics and SSD status to identify, with high accuracy, cold data that does not need to be stored in the SLC cache. By strategically bypassing the SLC cache for cold data, our scheme significantly reduces unnecessary data movements between the SLC and QLC regions, improving the overall efficiency of the SSD. Through simulation-based studies using real-world workloads, we demonstrate that our scheme outperforms existing approaches by up to 44%.
(This article belongs to the Special Issue Resource Management for Emerging Computing Systems)
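
A toy version of the placement decision: a lightweight logistic-style score over a few workload and device features decides whether a write bypasses the SLC cache. The features, weights, and threshold are hypothetical; the paper trains its model on real workloads.

```python
import math

# Hypothetical learned weights for (request_size_kb, reuse_distance, cache_util).
WEIGHTS = (0.002, 0.01, 1.5)
BIAS = -2.0

def cold_probability(request_size_kb, reuse_distance, slc_cache_util):
    z = (WEIGHTS[0] * request_size_kb +
         WEIGHTS[1] * reuse_distance +
         WEIGHTS[2] * slc_cache_util + BIAS)
    return 1.0 / (1.0 + math.exp(-z))       # logistic score

def place_write(request_size_kb, reuse_distance, slc_cache_util):
    # Bypass the SLC cache for data predicted to be cold.
    if cold_probability(request_size_kb, reuse_distance, slc_cache_util) > 0.5:
        return "QLC"
    return "SLC"

print(place_write(request_size_kb=2048, reuse_distance=100000, slc_cache_util=0.9))  # QLC
print(place_write(request_size_kb=4, reuse_distance=10, slc_cache_util=0.2))         # SLC
```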

19 pages, 1494 KiB  
Article
Exploiting Data Similarity to Improve SSD Read Performance
by Shiqiang Nie, Jie Niu, Zeyu Zhang, Yingmeng Hu, Chenguang Shi and Weiguo Wu
Appl. Sci. 2023, 13(24), 13017; https://doi.org/10.3390/app132413017 - 6 Dec 2023
Viewed by 2340
Abstract
Although NAND flash-based Solid-State Drives (SSDs) have recently demonstrated a significant performance advantage over hard disks, they still suffer from non-negligible performance under-utilization because access conflicts often occur while servicing I/O requests due to resource-sharing mechanisms (e.g., several chips share one channel bus, and several planes share one data register inside a die). Many research works have been devoted to minimizing access conflict by redesigning I/O scheduling, cache replacement, and so on. These works have achieved reasonable results; however, the potential data similarity characteristics are not fully utilized in prior works to alleviate access conflict. The basic idea is that, as data duplication is common in many workloads, data with the same content from different requests can be distributed to addresses with minimized access conflict (i.e., addresses that do not share the same channel or chip), so that a logical address is mapped to more than one physical address. Therefore, the data can be read from candidate pages when the channel or chip of its original address is busy. Motivated by this idea, we propose the Data Similarity aware Flash Translation Layer (DS-FTL), which mainly includes a content-aware page allocation scheme and a multi-path read scheme. DS-FTL maximizes channel-level and chip-level parallelism and avoids read stalls induced by bus-sharing mechanisms. We also conducted a series of experiments on SSDsim, with the results demonstrating the effectiveness of our scheme. Compared with the state of the art, our scheme reduces read latency by 35.3% on average in our workloads.
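
A sketch of the content-aware mapping and multi-path read: identical content is given a second placement on a different channel, and a read picks whichever replica sits on an idle channel. The hashing, channel model, and two-copy limit are simplifications for illustration.

```python
import hashlib

NUM_CHANNELS = 4

class ContentAwareFTL:
    def __init__(self):
        self.l2p = {}            # logical page -> list of (channel, physical page)
        self.by_content = {}     # content hash -> list of (channel, physical page)
        self.next_ppn = [0] * NUM_CHANNELS

    def _allocate(self, channel):
        ppn = self.next_ppn[channel]
        self.next_ppn[channel] += 1
        return (channel, ppn)

    def write(self, lpn, data: bytes):
        digest = hashlib.sha1(data).hexdigest()
        copies = self.by_content.setdefault(digest, [])
        if not copies:
            copies.append(self._allocate(lpn % NUM_CHANNELS))
        elif len(copies) < 2:
            # Duplicate content: remember an extra placement on another channel.
            used = {ch for ch, _ in copies}
            free = next(c for c in range(NUM_CHANNELS) if c not in used)
            copies.append(self._allocate(free))
        self.l2p[lpn] = copies

    def read(self, lpn, busy_channels):
        # Multi-path read: prefer a replica whose channel is currently idle.
        for ch, ppn in self.l2p[lpn]:
            if ch not in busy_channels:
                return (ch, ppn)
        return self.l2p[lpn][0]

ftl = ContentAwareFTL()
ftl.write(0, b"same-content")
ftl.write(5, b"same-content")     # duplicate gets a second placement
print(ftl.read(5, busy_channels={0}))   # served from the idle channel
```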

14 pages, 2496 KiB  
Article
On-Demand Garbage Collection Algorithm with Prioritized Victim Blocks for SSDs
by Hyeyun Lee, Wooseok Choi and Youpyo Hong
Electronics 2023, 12(9), 2142; https://doi.org/10.3390/electronics12092142 - 7 May 2023
Viewed by 3121
Abstract
Because of their numerous benefits, solid-state drives (SSDs) are increasingly being used in a wide range of applications, including data centers, cloud computing, and high-performance computing. The growing demand for SSDs has led to continuous improvement in their technology and a reduction in their cost, making them a more accessible storage solution for a wide range of users. Garbage collection (GC) is a process that reclaims wasted storage space in the NAND flash memories used as the storage media for SSDs. However, the GC process can cause performance degradation and lifetime reduction. This paper proposes an efficient GC scheme that minimizes overhead by invoking GC operations only when necessary. Each GC operation is executed in a specific order based on the expected storage gain and the execution cost, ensuring that the storage space requirement is met while minimizing the frequency of GC invocation. This approach not only reduces the overhead due to GC but also improves the overall performance of SSDs, including latency and the write amplification factor (WAF), which is an important indicator of SSD longevity.
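
A compact sketch of the on-demand, prioritized policy: GC is invoked only when free space falls below a low-water mark, and victims are erased in order of expected storage gain per migration cost. The thresholds and gain/cost formula are illustrative, not the paper's exact model.

```python
GC_TRIGGER_FREE_BLOCKS = 8       # hypothetical low-water mark
GC_TARGET_FREE_BLOCKS = 16       # stop once this many blocks are free

def priority(block):
    invalid = block["pages_per_block"] - block["valid_pages"]
    cost = block["valid_pages"] + 1          # pages to migrate (+1 avoids div by zero)
    return invalid / cost                    # expected storage gain per unit cost

def on_demand_gc(free_blocks, candidate_blocks):
    if free_blocks >= GC_TRIGGER_FREE_BLOCKS:
        return []                            # no GC needed: avoid the overhead entirely
    victims = []
    for blk in sorted(candidate_blocks, key=priority, reverse=True):
        victims.append(blk["id"])
        free_blocks += 1
        if free_blocks >= GC_TARGET_FREE_BLOCKS:
            break
    return victims

blocks = [{"id": i, "pages_per_block": 256, "valid_pages": v}
          for i, v in enumerate([250, 10, 128, 0])]
print(on_demand_gc(free_blocks=5, candidate_blocks=blocks))   # [3, 1, 2, 0]
```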

14 pages, 1468 KiB  
Article
Parallelism-Aware Channel Partition for Read/Write Interference Mitigation in Solid-State Drives
by Hyun Jo Lim, Dongkun Shin and Tae Hee Han
Electronics 2022, 11(23), 4048; https://doi.org/10.3390/electronics11234048 - 6 Dec 2022
Cited by 1 | Viewed by 2423
Abstract
The advancement of multi-level cell technology, which enables storing multiple bits in a single NAND flash memory cell, has increased the density and affordability of solid-state drives (SSDs). However, the increased latency asymmetry between reads and writes (R/W) intensifies the severity of R/W interference, so reads may be blocked for long periods owing to the extended occupancy of flash memory resources by writes. Existing flash translation layer (FTL)-level mitigation techniques can allocate flash memory resources in a balanced manner while taking R/W interference into account; however, due to inefficient utilization of parallel flash memory resources, their performance improvement is limited. Based on the predicted access pattern and the available concurrency of flash memory resources, we propose a parallelism-aware channel partition (PACP) scheme that prevents SSD performance degradation caused by R/W interference. Moreover, an additional performance improvement is achieved by reallocating interference-vulnerable pages by leveraging garbage collection (GC) migration. The evaluation results show that, compared with the existing solution, PACP reduces the average read latency by 11.6% and the average write latency by 6.0%, with negligible storage overhead.
(This article belongs to the Section Computer Science & Engineering)
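
A rough sketch of one way a parallelism-aware channel partition could be expressed: channels are split into read-oriented and write-oriented groups from a predicted read ratio, and pages are allocated round-robin within the chosen group. The ratio-based split and hotness flag are assumptions, not the PACP algorithm itself.

```python
NUM_CHANNELS = 8

def partition_channels(predicted_read_ratio: float):
    # More channels for reads when the workload is predicted to be read-heavy.
    read_channels = max(1, min(NUM_CHANNELS - 1,
                               round(NUM_CHANNELS * predicted_read_ratio)))
    return list(range(read_channels)), list(range(read_channels, NUM_CHANNELS))

def allocate_channel(is_read_hot_page, read_group, write_group, cursor):
    # Round-robin inside the chosen group to preserve channel-level parallelism.
    group = read_group if is_read_hot_page else write_group
    channel = group[cursor % len(group)]
    return channel, cursor + 1

read_group, write_group = partition_channels(predicted_read_ratio=0.75)
cursor = 0
for hot in (True, True, False, True):
    channel, cursor = allocate_channel(hot, read_group, write_group, cursor)
    print("page ->", "read-group" if hot else "write-group", "channel", channel)
```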

14 pages, 8187 KiB  
Article
Thermal Performance of the Thin Heat Pipe for Cooling of Solid-State Drives
by Dongdong Yuan, Jiajia Chen, Yong Yang, Liyong Zhang, Songyan Liu, Huafei Jiang and Ning Qian
Metals 2022, 12(11), 1786; https://doi.org/10.3390/met12111786 - 23 Oct 2022
Cited by 3 | Viewed by 2857
Abstract
With the rapid development of information science and technology, the demand for computer data processing is increasing, resulting in rapid growth of the demand for high-power, high-performance solid-state drives (SSDs). The stable operation of SSDs plays an important role in ensuring reliable working conditions and appropriate temperatures for information technology equipment, rack servers, and related facilities. However, SSDs usually emit significant heat, imposing higher requirements on temperature and humidity control; consequently, a heat sink system for cooling is essential to maintain the proper working state of SSDs. In this paper, a new type of thin heat pipe (THP) heat sink is proposed, and its heat transfer performance and cooling effect are studied experimentally and numerically. The numerical results are compared with the experimental results, showing an error within 5%. Single and double heat pipes were investigated under different input powers (from 5 W to 50 W) and placement angles between 0° and 90°. The heat transfer performance of the new heat sink is analyzed in terms of the startup performance, the evaporator temperature, and the total thermal resistance. The results show that the new double THPs at a 90° angle offer a significant advantage in the heat transfer performance of SSDs. This research is of great significance for the design and optimization of SSD cooling systems in practical applications.
(This article belongs to the Special Issue Ultra-Thin and Micro Heat Pipe Manufacturing and Their Applications)
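
For reference, the total thermal resistance used to characterize a heat pipe heat sink is conventionally the evaporator-condenser temperature difference divided by the input heating power; a generic form of this definition (the symbols are standard, not necessarily the paper's notation) is:

```latex
% T_e: average evaporator temperature, T_c: average condenser temperature,
% Q: input heating power applied to the evaporator.
R_{\mathrm{total}} = \frac{T_{e} - T_{c}}{Q}
```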

16 pages, 521 KiB  
Article
Energy-Saving SSD Cache Management for Video Servers with Heterogeneous HDDs
by Kyungmin Kim and Minseok Song
Energies 2022, 15(10), 3633; https://doi.org/10.3390/en15103633 - 16 May 2022
Cited by 1 | Viewed by 2480
Abstract
The dynamic adaptive streaming over HTTP (DASH) technique, the most popular streaming method, requires a large number of hard disk drives (HDDs) to store multiple bitrate versions of many videos, consuming significant energy. A solid-state drive (SSD) can be used to cache popular videos, thus reducing HDD energy consumption by allowing I/O requests to be handled by the SSD, but this requires effective HDD power management due to the limited SSD bandwidth. We propose a new SSD cache management scheme to minimize the energy consumption of a video storage system with heterogeneous HDDs. We first present a technique that caches files with the aim of saving more HDD energy as a result of I/O processing on the SSD. Based on this, we propose a new HDD power management algorithm that increases the number of HDDs operated in low-power mode while reflecting the heterogeneous HDD power characteristics. For this purpose, it assigns a separate parameter value to each I/O task based on the ratio of HDD energy to bandwidth and greedily selects the I/O tasks handled by the SSD within the SSD's bandwidth limit. Simulation results show that our scheme consumes between 12% and 25% less power than alternative schemes under the same HDD configuration.
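
A greedy sketch of the selection step described in the abstract: I/O tasks are ranked by HDD energy saved per unit of SSD bandwidth consumed and admitted until the SSD bandwidth budget runs out. The task fields and numbers are illustrative.

```python
def select_tasks_for_ssd(tasks, ssd_bandwidth_mbps):
    # Each task: {"id", "hdd_energy_w" (HDD power saved if served by the SSD),
    #             "bandwidth_mbps" (SSD bandwidth the task would consume)}
    ranked = sorted(tasks,
                    key=lambda t: t["hdd_energy_w"] / t["bandwidth_mbps"],
                    reverse=True)
    chosen, used = [], 0.0
    for task in ranked:
        if used + task["bandwidth_mbps"] <= ssd_bandwidth_mbps:
            chosen.append(task["id"])
            used += task["bandwidth_mbps"]
    return chosen

tasks = [
    {"id": "video-A", "hdd_energy_w": 9.0, "bandwidth_mbps": 40},
    {"id": "video-B", "hdd_energy_w": 6.5, "bandwidth_mbps": 10},
    {"id": "video-C", "hdd_energy_w": 4.0, "bandwidth_mbps": 80},
]
print(select_tasks_for_ssd(tasks, ssd_bandwidth_mbps=60))   # ['video-B', 'video-A']
```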

16 pages, 1983 KiB  
Article
Selective Power-Loss-Protection Method for Write Buffer in ZNS SSDs
by Junseok Yang, Seokjun Lee and Sungyong Ahn
Electronics 2022, 11(7), 1086; https://doi.org/10.3390/electronics11071086 - 30 Mar 2022
Cited by 3 | Viewed by 4012
Abstract
Most SSDs (solid-state drives) use an internal DRAM (Dynamic Random Access Memory) to improve I/O performance and extend SSD lifespan by absorbing write requests. However, this volatile memory does not guarantee the persistence of buffered data in the event of sudden power-off. Therefore, highly reliable enterprise SSDs employ power-loss-protection (PLP) logic to ensure the durability of buffered data using the back-up power of capacitors. The SSD must provide enough capacitors for the PLP in proportion to the size of the volatile buffer. Meanwhile, emerging ZNS (Zoned Namespace) SSDs are attracting attention because they can support the many I/O streams that are useful in multi-tenant systems. Although ZNS SSDs, unlike conventional block-interface SSDs, do not use an internal mapping table, a large write buffer is required to support many I/O streams, because each I/O stream needs its own write buffer and the host can allocate separate zones to different I/O streams. Moreover, the larger the capacity and the more I/O streams a ZNS SSD supports, the larger the write buffer required. However, the size of the write buffer depends on the amount of capacitance, which is limited not only by the SSD's internal space but also by cost. Therefore, in this paper, we present a set of techniques that significantly reduce the amount of capacitance required in ZNS SSDs while ensuring the durability of buffered data during sudden power-off. First, we note that modern file systems and databases have their own solutions for data recovery, such as the WAL (write-ahead log) and journal. Therefore, we propose a selective power-loss-protection method that ensures durability only for the WAL or journal required for data recovery, not for the entire buffered data. Second, to minimize the time taken by the PLP, we propose a balanced flush method that temporarily writes buffered data to multiple zones to maximize parallelism and restores the data to its original location when power is restored. The proposed methods are implemented and evaluated by modifying FEMU (a QEMU-based flash emulator) and RocksDB. According to the experimental results, the proposed selective PLP reduces the amount of capacitance by 50% to 90% while retaining the reliability of ZNS SSDs. In addition, the balanced flush method reduces the PLP latency by up to 96%.
(This article belongs to the Special Issue Emerging Memory Technologies for Next-Generation Applications)
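
A small sketch of selective PLP combined with a balanced flush: on power loss, only buffer entries tagged as WAL/journal data are flushed, striped across several zones for parallelism. The buffer layout, tag, and zone interface are assumptions for illustration.

```python
class ZnsWriteBuffer:
    def __init__(self, num_flush_zones=4):
        self.entries = []                       # (zone, data, is_recovery_critical)
        self.num_flush_zones = num_flush_zones

    def append(self, zone, data, is_recovery_critical=False):
        self.entries.append((zone, data, is_recovery_critical))

    def on_power_loss(self, write_zone_fn):
        # Flush only WAL/journal entries; ordinary buffered data is allowed to be
        # lost because the host-side WAL replay can recover it.
        critical = [e for e in self.entries if e[2]]
        for i, (orig_zone, data, _) in enumerate(critical):
            temp_zone = i % self.num_flush_zones        # balanced flush: spread writes
            write_zone_fn(temp_zone, (orig_zone, data))  # original location restored later

buf = ZnsWriteBuffer()
buf.append(zone=3, data=b"row update")                              # normal data: skipped
buf.append(zone=3, data=b"wal record", is_recovery_critical=True)   # flushed on power loss
buf.on_power_loss(lambda z, payload: print("flush to zone", z, payload))
```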

14 pages, 1511 KiB  
Article
Efficient Garbage Collection Algorithm for Low Latency SSD
by Jin Ae and Youpyo Hong
Electronics 2022, 11(7), 1084; https://doi.org/10.3390/electronics11071084 - 30 Mar 2022
Cited by 2 | Viewed by 4944
Abstract
Solid-state drives (SSDs) are rapidly replacing hard disk drives (HDDs) in many applications owing to their numerous advantages, such as higher speed, lower power consumption, and smaller size. NAND flash memories, the memory devices used in SSDs, require garbage collection (GC) operations to reclaim storage space wasted by obsolete data. GC is a major source of performance degradation because it greatly increases SSD latency; the latency of read or write operations can be significantly prolonged if they are requested by users while GC operations are in progress. Reducing the frequency of GC invocation while maintaining the storage space requirement may be an ideal remedy, but a minimum number of GC operations is required to reserve storage space. The other approach is to reduce the performance overhead due to GC rather than reducing GC frequency. In this paper, following the latter approach, we propose a new GC scheme that reduces GC overhead by intelligently controlling the priorities among read/write and GC operations. The experimental results show that the proposed scheme consistently improves overall latency for various workloads.
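
A minimal sketch of priority control between user I/O and GC: GC page moves and erases proceed only in slots where no user request is pending. The queue model and step granularity are illustrative, not the proposed controller.

```python
from collections import deque

def schedule(user_queue: deque, gc_queue: deque, time_slots: int):
    timeline = []
    for _ in range(time_slots):
        if user_queue:                       # user reads/writes preempt GC steps
            timeline.append(("user", user_queue.popleft()))
        elif gc_queue:                       # GC proceeds only in idle slots
            timeline.append(("gc", gc_queue.popleft()))
        else:
            timeline.append(("idle", None))
    return timeline

users = deque(["read A", "write B"])
gc = deque(["move page 7", "move page 9", "erase block 3"])
for slot in schedule(users, gc, time_slots=6):
    print(slot)
```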