Article

Adaptive Channel-Aware Garbage Collection Control for Multi-Channel SSDs

Division of Electronics and Electrical Engineering, Dongguk University-Seoul, Seoul 04620, Republic of Korea
* Author to whom correspondence should be addressed.
Electronics 2025, 14(23), 4741; https://doi.org/10.3390/electronics14234741
Submission received: 2 November 2025 / Revised: 25 November 2025 / Accepted: 30 November 2025 / Published: 2 December 2025
(This article belongs to the Section Computer Science & Engineering)

Abstract

Solid-State Drives (SSDs) have become the dominant storage medium in performance-sensitive systems due to their high throughput, reliability, and energy efficiency. However, inherent constraints in NAND flash memory—such as out-of-place writes, block-level erase operations, and data fragmentation—necessitate frequent garbage collection (GC), which can significantly degrade user I/O performance when not properly managed. This paper presents a channel-aware GC control mechanism for multi-channel SSD architectures that limits GC concurrency based on real-time storage utilization. Unlike conventional controllers that allow GC to proceed simultaneously across all channels—often leading to complete I/O stalls—our approach adaptively throttles the number of GC-active channels to preserve user responsiveness. The control logic uses a dynamic thresholding function that increases GC aggressiveness only as the SSD approaches full capacity, allowing the system to balance space reclamation with quality-of-service guarantees. We implement the proposed mechanism in an SSD simulator and evaluate its performance under a range of real-world workloads. Experimental results show that the proposed adaptive GC control significantly improves SSD responsiveness, achieving an average latency improvement factor of 4.86× across all workloads and thereby mitigating GC-induced interference. Even when extreme outlier cases are excluded, the method maintains an average improvement of 1.55× with a standard deviation of 1.17, confirming its consistency and robustness across diverse workload patterns.

1. Introduction

Solid-State Drives (SSDs) have become the preferred storage medium in modern computing systems due to their low access latency, high bandwidth, and superior energy efficiency compared to traditional hard drives. Their mechanical simplicity and resilience against physical shocks further enhance their reliability, leading to widespread deployment across cloud servers, data centers, and consumer devices [1,2,3].
Despite these advantages, SSD performance and endurance depend heavily on how efficiently the underlying NAND flash memory is managed. NAND flash does not support in-place updates; instead, data must be written to a new location, and previously occupied blocks must be erased before reuse. This architectural constraint necessitates a process known as garbage collection (GC)—which consolidates valid pages and reclaims space from obsolete data [4,5]. However, GC introduces additional internal data movement and can substantially degrade user I/O performance, especially when invoked frequently under high-utilization conditions [6,7].
The challenge of performance degradation due to GC becomes particularly critical in multi-channel SSD architectures, which are designed to maximize throughput through internal parallelism. When GC is triggered across all channels simultaneously, the SSD may experience system-wide contention, resulting in prolonged latency and I/O stalls [8,9]. In particular, deleting large, sequentially stored files often initiates a wave of concurrent GC events because the data is striped across channels, exacerbating performance degradation [10]. Hence, managing GC activity across channels is essential to maintaining both performance stability and predictable response times.
To address these challenges, this paper proposes a Channel-Aware Garbage Collection Control mechanism that adaptively regulates GC concurrency based on the SSD’s real-time utilization level. Rather than allowing GC to proceed independently on each channel, the proposed design dynamically limits the number of GC-active channels according to the remaining storage capacity. When sufficient free space is available, GC activity is minimized to reduce interference with user requests. Conversely, as the SSD approaches full capacity, the system gradually increases GC concurrency to maintain sufficient free space. This adaptive policy ensures efficient resource utilization and mitigates the latency spikes caused by uncontrolled GC operations.
The main contributions of this work are summarized as follows:
  • We propose an adaptive channel-level GC control mechanism that dynamically regulates the number of SSD channels participating in GC based on real-time storage utilization. This enables the controller to continuously balance free-space reclamation with I/O responsiveness across varying system states.
  • Rather than relying on fixed thresholds or step-based triggers, our method introduces a graceful scaling policy that increases GC concurrency smoothly in proportion to storage pressure. This prevents abrupt transitions commonly found in traditional GC strategies, which can cause latency spikes and system-wide stalls.
  • The proposed scheme ensures latency stability by preventing excessive GC operations from blocking foreground I/O. Experimental results across diverse workloads demonstrate that our method consistently achieves lower average and maximum latency compared to conventional fixed-policy GC mechanisms.
The remainder of this paper is organized as follows: Section 2 reviews background concepts related to SSD architecture and garbage collection. Section 3 presents the proposed adaptive GC control mechanism in detail. Section 4 describes the experimental setup and results, and Section 5 discusses the conclusions.

2. Background

To motivate the design of our proposed garbage collection mechanism, it is essential to first understand the structural and operational characteristics of modern SSDs. This section provides foundational background in two parts. First, we present an overview of multi-channel SSD architecture, including its hierarchical memory organization and the role of channel striping in achieving high throughput. Then, we describe the mechanics of garbage collection in SSDs, its performance implications in multi-channel systems, and the rationale for our proposed channel-level GC control strategy.

2.1. Multi-Channel SSD Organization

To meet the ever-growing demand for high-speed, high-throughput storage, modern Solid-State Drives (SSDs) are architected with extensive internal parallelism. A central feature of this parallelism is the multi-channel architecture as shown in Figure 1, “SSD Controller Architecture” (adapted from Mishra and Somani [11]), which enables concurrent data access and transfer operations across multiple NAND flash memory chips. Each channel serves as an independent data path between the SSD controller and a group of NAND flash chips, functioning much like an individual lane on a highway. By employing multiple channels—commonly 4, 8, 16, or more in high-performance SSDs—storage devices can achieve massive internal bandwidth, essential for both consumer and enterprise workloads [12,13,14].
At a finer granularity, each NAND chip connected to a channel contains one or more dies, which are electrically independent subunits. Each die is further divided into planes, each of which contains arrays of blocks composed of multiple pages. The page is the smallest addressable unit for read and write operations, while the block is the smallest unit that can be erased. This hierarchical structure—channel → chip → die → plane → block → page—allows the SSD controller to exploit multiple layers of parallelism to serve I/O requests efficiently.
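To make this hierarchy concrete, the sketch below models it as nested C structures. The counts and field names are illustrative assumptions chosen for exposition; they are not taken from a particular controller or from the simulator configuration used later in this paper.

```c
/* Illustrative model of the hierarchy channel -> chip -> die -> plane -> block -> page.
 * All counts are example values, not a specific device geometry. */
#include <stdint.h>

#define NUM_CHANNELS        8
#define CHIPS_PER_CHANNEL   4
#define DIES_PER_CHIP       2
#define PLANES_PER_DIE      2
#define BLOCKS_PER_PLANE  512
#define PAGES_PER_BLOCK   128

typedef struct { uint8_t valid[PAGES_PER_BLOCK]; }   block_t;   /* page: smallest read/write unit  */
typedef struct { block_t blocks[BLOCKS_PER_PLANE]; } plane_t;   /* block: smallest erasable unit   */
typedef struct { plane_t planes[PLANES_PER_DIE]; }   die_t;
typedef struct { die_t   dies[DIES_PER_CHIP]; }      chip_t;
typedef struct { chip_t  chips[CHIPS_PER_CHANNEL]; } channel_t; /* one independent bus per channel */
typedef struct { channel_t channels[NUM_CHANNELS]; } ssd_t;
```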
One of the most important mechanisms enabled by multi-channel architecture is channel striping. In this design, large data such as sequential files are split into fixed-size segments and distributed across multiple channels in a round-robin or striped fashion. This means that when a large file is written or read sequentially, different portions of the file can be accessed in parallel from separate channels. For instance, in an 8-channel SSD, a 128 KB file might be divided into eight 16 KB chunks, with each chunk written to a separate channel simultaneously. This parallel access pattern significantly boosts sequential throughput and lowers read/write latency by leveraging the available hardware concurrency.
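As a minimal illustration of the striping arithmetic described above, the following sketch maps logical offsets to channels in round-robin order. The 16 KB stripe unit and the helper name channel_of are assumptions made for this example, not parameters of a real controller.

```c
/* Round-robin channel striping: a 128 KB sequential write split into 16 KB chunks
 * lands on channels 0..7 of an 8-channel SSD, so all chunks can transfer in parallel. */
#include <stdio.h>

#define NUM_CHANNELS 8
#define STRIPE_KB   16

static int channel_of(unsigned long offset_kb) {
    return (int)((offset_kb / STRIPE_KB) % NUM_CHANNELS);
}

int main(void) {
    for (unsigned long off = 0; off < 128; off += STRIPE_KB)
        printf("chunk at %3lu KB -> channel %d\n", off, channel_of(off));
    return 0;
}
```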
However, while striping improves performance, it also creates dependencies: completing a large sequential read requires coordination across all involved channels. If even one channel is delayed or blocked—due to internal operations such as garbage collection—the entire read must stall until the slowest component completes. This sensitivity underscores the need for intelligent coordination and scheduling across channels, particularly in the presence of background maintenance tasks.
The multi-channel architecture thus serves as both a source of performance potential and a scheduling challenge. Understanding this hierarchy and the interaction of user and internal tasks across channels is essential for designing efficient SSD controllers, especially in workloads involving mixed read/write patterns and background garbage collection.

2.2. Garbage Collection in SSD

In NAND flash memory, in-place data updates are not possible; instead, data must be written to a new location, and the original page is marked as invalid. Over time, these invalid pages accumulate and reduce usable capacity. To reclaim this space, SSDs employ a background process known as GC. This process consists of three major steps: (1) reading the valid pages from a selected victim block, (2) copying those pages to a new location, and (3) erasing the original block to make it available for future writes as shown in Figure 2.
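The three steps can be summarized in a short, self-contained sketch that operates on a toy in-memory block model; it is not the paper's simulator code, and the block_t layout is an assumption made purely for illustration.

```c
/* GC on one victim block: (1) read valid pages, (2) copy them to a free block,
 * (3) erase the victim so it can be rewritten. */
#include <string.h>

#define PAGES_PER_BLOCK 128
#define PAGE_SIZE      4096

typedef struct {
    char data[PAGES_PER_BLOCK][PAGE_SIZE];
    int  valid[PAGES_PER_BLOCK];    /* 1 = live page, 0 = invalid or empty */
} block_t;

/* Returns the number of valid pages migrated to free_blk. */
int garbage_collect(block_t *victim, block_t *free_blk) {
    int dst = 0;
    for (int p = 0; p < PAGES_PER_BLOCK; p++) {
        if (victim->valid[p]) {
            memcpy(free_blk->data[dst], victim->data[p], PAGE_SIZE);  /* steps 1 and 2 */
            free_blk->valid[dst++] = 1;
        }
    }
    memset(victim, 0, sizeof *victim);                                /* step 3: erase */
    return dst;
}
```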
In single-channel SSDs, GC directly contends with user-initiated read and write requests. Multi-channel SSDs, though more parallel in architecture, still experience contention at multiple levels of the NAND hierarchy—planes, dies, and most critically, channels. If a user operation targets the same storage unit currently occupied by GC activity, the user request must wait until the GC operation is completed. This is particularly problematic for large sequential reads, where data is typically striped across all channels to maximize throughput. If even one channel is blocked due to GC, the entire read operation is delayed, causing significant increases in latency and degrading overall responsiveness.
Although some modern NAND chips support limited inter-die and inter-plane concurrency, most SSD controllers enforce serialized access at the channel level to avoid complexity and ensure timing correctness. As such, once a channel is engaged in any operation—whether a user request or a GC sub-task—no other operation can be serviced on that channel until the current one finishes. This serialization makes the channel the most critical control point for performance-sensitive scheduling decisions.

3. Proposed Garbage Collection Scheme

3.1. Conventional Garbage Collection Approaches

Conventional garbage collection (GC) mechanisms in SSDs generally follow simple and widely used design principles rather than a single standardized algorithm. In most commercial and academic SSD controllers, GC is governed by static, system-wide thresholds, such as a fixed invalid-page ratio within a block or a minimum number of free blocks available in the device. When a block’s invalid-page ratio exceeds a predetermined threshold (commonly 70–80%), it is selected as a GC victim and queued for reclamation by migrating valid pages and erasing the block [1,2,3,4]. In many designs, blocks that satisfy this threshold are simply appended to a victim-block list, which the GC module services when conditions permit.
In describing these mechanisms as “conventional,” we refer to this general class of threshold-driven, non-preemptive, and opportunistic GC behaviors that have been widely adopted in SSD controllers and used as baselines in prior research. Consistent with this, our study assumes the typical industry practice in which both GC and normal read/write operations are non-preemptive: once a NAND program, erase, or read operation begins, it runs to completion. Preempting an in-flight NAND command risks partial programming, stale metadata, and increased write amplification, and is therefore avoided in real devices.
Under these assumptions, GC is scheduled opportunistically: a GC task begins on a channel only when that channel becomes idle. If a channel is busy serving host I/O, GC waits; if the channel is free, the GC module selects the earliest-enqueued victim block for cleaning. While this threshold-triggered, idle-time, non-preemptive behavior is simple and commonly used, it does not adapt to workload dynamics or per-channel utilization patterns. As a result, GC may trigger in bursty, highly parallel waves when global free space decreases rapidly. When many channels perform GC simultaneously, user I/O—especially latency-sensitive reads—can experience noticeable delays due to channel-level contention.
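A compact sketch of this conventional baseline is given below: blocks crossing a fixed invalid-page threshold join a per-channel victim list, and GC starts only when a channel is idle and then runs to completion. The data layout, helper names, and the 75% threshold are illustrative assumptions, not the code of any specific controller.

```c
/* Threshold-triggered, idle-time, non-preemptive GC as commonly implemented. */
#include <stdbool.h>

#define NUM_CHANNELS      8
#define BLOCKS_PER_CH  2048
#define PAGES_PER_BLOCK 128
#define INVALID_THRESH 0.75                          /* commonly 70-80% invalid pages */

static int  invalid_pages[NUM_CHANNELS][BLOCKS_PER_CH];
static bool queued[NUM_CHANNELS][BLOCKS_PER_CH];
static int  victim_fifo[NUM_CHANNELS][BLOCKS_PER_CH];  /* per-channel victim list */
static int  fifo_head[NUM_CHANNELS], fifo_tail[NUM_CHANNELS];
static bool channel_busy[NUM_CHANNELS];

static void erase_and_reclaim(int ch, int blk) {
    /* would migrate valid pages and issue the block erase here (see Section 2.2) */
    (void)ch; (void)blk;
}

/* Called when an overwrite or delete invalidates one page of (ch, blk). */
void on_page_invalidated(int ch, int blk) {
    if (++invalid_pages[ch][blk] >= (int)(INVALID_THRESH * PAGES_PER_BLOCK)
        && !queued[ch][blk]) {
        victim_fifo[ch][fifo_tail[ch]++] = blk;      /* append; sketch assumes no overflow */
        queued[ch][blk] = true;
    }
}

/* Called when a channel has no pending host I/O: GC runs opportunistically. */
void on_channel_idle(int ch) {
    if (channel_busy[ch] || fifo_head[ch] == fifo_tail[ch])
        return;                                      /* channel occupied or nothing to clean */
    int victim = victim_fifo[ch][fifo_head[ch]++];
    channel_busy[ch] = true;                         /* non-preemptive: runs to completion */
    erase_and_reclaim(ch, victim);
    queued[ch][victim]        = false;
    invalid_pages[ch][victim] = 0;
    channel_busy[ch]          = false;
}
```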
The motivation for our work arises from the observation that conventional approaches typically treat GC concurrency as fixed or incidental. In many SSD controllers, once the global free-space threshold is crossed, GC may activate on all channels simultaneously, without accounting for workload intensity or per-channel load. This behavior often results in system-wide stalls and long-tail latency spikes, particularly under workloads with frequent overwrites (such as journaling and logging) or large sequential deletions. This issue is further underscored by the role of channels as the highest-level shared resource between GC and host I/O: when GC occupies a channel, all user requests mapped to that channel are delayed. In addition, SSD controllers naturally schedule work on a per-channel basis, making channel-level control strongly aligned with hardware abstractions. Managing GC concurrency at the channel granularity also avoids the substantial overhead associated with coordinating dozens or even hundreds of dies or planes. Finally, many commercial SSDs serialize operations at the channel interface due to shared-bus and timing constraints (e.g., tR, tPROG, tBERS) [15,16], which further supports the practicality of modeling and controlling GC concurrency at the channel level.
To the best of our knowledge, no prior work dynamically adjusts the number of channels permitted to perform GC as a function of SSD usage, despite GC concurrency being a key contributor to GC-induced interference. This observation directly motivates the usage-adaptive GC scheduling scheme introduced in Section 3.2, where we regulate GC concurrency according to the device’s real-time utilization.

3.2. Channel-Aware GC Scheduling

Our proposed method introduces an adaptive, channel-aware garbage collection (GC) control scheme that dynamically regulates the number of GC-active channels based on real-time SSD utilization. Unlike conventional strategies that often initiate GC in large, simultaneous bursts, our design spreads GC activity more evenly over time, reducing the risk of system-wide contention and ensuring that sufficient channels remain available to serve user requests. The GC operations are primarily scheduled during idle periods, but the number of channels permitted to engage in GC is strictly limited by the adaptive function G(t).
At the core of this approach is a continuous control function G(t), defined below, which determines the allowable number of concurrent GC operations as a function of the SSD usage ratio. Rather than relying on a fixed concurrency limit, we employ a two-threshold policy defined by parameters θ1 and θ2, which mark the lower and upper bounds of storage utilization, respectively. This mechanism enables graceful scaling of GC intensity, increasing GC concurrency only as storage pressure rises, thereby maintaining a balanced trade-off between space reclamation and I/O responsiveness.
$$
G(t) =
\begin{cases}
0, & U(t) < \theta_1 \\
G_{max} \cdot \dfrac{U(t) - \theta_1}{\theta_2 - \theta_1}, & \theta_1 \le U(t) \le \theta_2 \\
G_{max}, & U(t) > \theta_2
\end{cases}
$$
where Gmax is the maximum number of concurrent GC channels allowed; U(t) = Sused(t)/Stotal ∈ [0, 1] is the fraction of SSD space used at time t; θ1 denotes the SSD usage threshold at which GC starts; θ2 denotes the SSD usage threshold at which the maximum number of concurrent GC channels is allowed; and Nch is the total number of channels.
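A direct C rendering of G(t) is shown below. Rounding the linear ramp down to a whole number of channels is our assumption for illustration; the definition above specifies the ramp itself, not a particular rounding rule.

```c
/* Number of channels allowed to run GC concurrently, given current space usage. */
#include <math.h>

int gc_channel_limit(double used_bytes, double total_bytes,
                     double theta1, double theta2, int gmax) {
    double u = used_bytes / total_bytes;     /* U(t): fraction of SSD space in use */
    if (u < theta1) return 0;                /* ample free space: no GC            */
    if (u > theta2) return gmax;             /* near exhaustion: full concurrency  */
    return (int)floor(gmax * (u - theta1) / (theta2 - theta1));   /* linear ramp   */
}
```

Rounding down keeps concurrency conservative just above θ1; a controller could equally round up or to the nearest integer to reclaim space slightly more aggressively.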
When the SSD utilization U(t) is below θ1, no garbage collection is initiated. In this region, available free space is sufficient to absorb incoming writes without incurring performance penalties, and unnecessary GC would only add overhead. When θ1 ≤ U(t) ≤ θ2, the number of active GC channels increases linearly with SSD usage, enabling graceful scaling of GC pressure as the need for space reclamation grows. Once U(t) > θ2, full GC concurrency is permitted, maximizing cleaning throughput as the system approaches storage exhaustion. This two-threshold model provides several critical advantages over fixed or step-wise policies, as follows.
In practical deployments—especially under workloads like web servers, databases (e.g., Cassandra, MySQL), or cloud storage—I/O request patterns are not smooth or uniform. They come in bursts, often alternating between heavy and light phases. If GC concurrency ramps up suddenly (e.g., from 0 to full activation when usage hits a threshold), it can collide with an incoming I/O burst, severely degrading performance. A continuous, linear ramp-up allows the system to adjust gradually in response to storage usage without disrupting ongoing I/O bursts.
GC operations share the same SSD resources—channels, buses, buffers, and so forth—as user I/O. When GC suddenly activates across many channels simultaneously, it creates a resource bottleneck for host requests, especially read operations, which may be delayed or blocked. By contrast, a smooth and predictable increase in GC concurrency supports a balanced scheduler and more stable queues, preventing these disruptive contention spikes and maintaining consistent performance for foreground I/O.

3.3. Comparison with Previous Work

A wide range of studies has explored how to improve the efficiency and performance of garbage collection (GC) in SSDs, focusing primarily on when to trigger GC, which blocks to select, or how to reduce GC interference with host I/O. In contrast, the key novelty of our work lies in treating GC channel concurrency itself as a first-class control variable and adjusting the allowed number of GC-active channels continuously according to SSD usage. Below, we compare our method with two closely related studies, SSDKeeper and CFIO, as well as additional representative GC-related research.
SSDKeeper [17] dynamically allocates SSD channels to host I/O streams to mitigate inter-stream conflicts and improve parallelism. Its control objective is to decide which host stream should use which channel based on workload characteristics such as sequentiality and access locality. Our work differs in that we do not modify host-side channel allocation or reorganize I/O streams. Instead, we regulate how many channels are permitted to perform GC concurrently, using a continuous two-threshold control function G(t) derived from storage utilization. Host I/O continues to follow the baseline mapping, and only GC concurrency is adapted. Thus, SSDKeeper and our method operate on orthogonal dimensions.
CFIO [18] is designed for Open-Channel SSDs and focuses on issuing conflict-free host I/O operations to maximize internal parallelism. CFIO restructures and reorders I/O commands to avoid channel/die/plane conflicts, improving host request throughput. Our work does not reorder or restructure host I/O. Instead, we accept the host-request stream as provided and dynamically control the parallelism of GC operations, which are internal maintenance tasks. CFIO addresses host-level conflict avoidance, whereas our method addresses GC-induced interference. CFIO does not regulate GC concurrency, and our method does not modify host I/O scheduling, making the two approaches orthogonal.
Several studies adjust trigger thresholds or develop more sophisticated victim-selection heuristics for GC. Examples include dynamic-threshold GC, hot/cold-aware victim selection (e.g., Stable Greedy), and reinforcement learning–assisted GC decision making [19,20,21]. These schemes determine when GC should occur and which block should be reclaimed. Our approach is complementary to their algorithms as well. We do not modify victim-block selection or trigger logic. Instead, we regulate how much GC may occur in parallel through a usage-based concurrency limit. These techniques can be integrated with our method because they optimize different aspects of GC behavior.
Another line of work proposes GC schedulers that can be interrupted or preempted by urgent host reads, reducing long-tail latency. Semi-preemptive GC, unified GC–I/O schedulers, and interruptible GC approaches [22,23] adjust timing by deciding when GC should pause or resume. Our control policy operates at a higher level by limiting how many channels may run GC concurrently at any time. Preemptible GC can be used within each channel, while our G(t)-based controller globally bounds GC pressure. These approaches are therefore complementary, not alternatives.
Other works accelerate GC by enhancing intra-chip parallelism, adding dedicated copy engines, or optimizing low-level flash operations [24,25]. These techniques reduce the latency of individual GC tasks but do not reason about how many GC tasks should run simultaneously.
To the best of our knowledge, no prior work proposes a continuous, usage-dependent model for GC channel concurrency nor treats the number of GC-active channels as a tunable control variable. Our method therefore explores a dimension of GC behavior not addressed in existing studies and can coexist with many of them.

4. Experiments

4.1. Experimental Setup

To evaluate our proposed SSD management scheme, we collected workload traces from real applications running on a commercial SSD. All experiments were conducted on a Samsung T5 Portable SSD (1 TB) connected to a workstation with an Intel® Core™ i7-9700K CPU, 32 GB RAM, and Red Hat Enterprise Linux 7.9 (kernel 3.10.0-1160).
For each workload, tracing was activated on the target SSD before launching the application under test. We then started the corresponding application—such as MySQL [26], Cassandra [27], MongoDB [28], RocksDB [29], SQLite [30], Dbench [31], or the Varmail workload model—and generated load using its appropriate benchmarking tool. Specifically, sysbench [32] was used to drive MySQL, YCSB [33] to exercise Cassandra, MongoDB, and RocksDB, Filebench to run the Varmail workload, and the Phoronix Test Suite to run SQLite and Dbench. As these applications executed under load, blktrace [34] recorded all resulting block-layer I/O events, and blkparse later translated these logs into text-based traces. When file-level semantics were required, ftrace [35] output was analyzed in parallel. This process ensured that each trace reflected authentic, application-generated storage behavior.
We developed an event-driven SSD simulator in C to evaluate our proposed GC-concurrency control mechanism under realistic, trace-driven workloads. The purpose of this simulator is not to model every firmware detail, but to accurately reproduce the timing, queuing, and channel-level scheduling interactions that determine how garbage collection competes with foreground I/O in a multi-channel SSD. Because commercial SSDs do not expose internal states such as page validity, channel occupancy, or GC progress, a simulation framework is required to isolate and measure the specific impact of concurrency control. Our model incorporates widely used SSD configuration parameters—including page size, block count, and NAND timing characteristics—drawn from prior studies on flash memory organization and garbage-collection behavior [4,36,37,38,39]. The key parameters used in our evaluation are summarized in Table 1.
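To illustrate what "event-driven" means in this context, the skeleton below processes timestamped events in order and lets one event schedule follow-up events. It is only a stylistic sketch under our own assumptions, not the authors' simulator; the 700 μs GC-completion delay mirrors the erase latency listed in Table 2.

```c
/* Minimal event-driven simulation loop: events are handled in timestamp order. */
#include <stdio.h>
#include <stdlib.h>

typedef enum { EV_HOST_IO, EV_GC_START, EV_GC_DONE } ev_type_t;

typedef struct event {
    double        time_us;   /* simulated time of the event       */
    ev_type_t     type;
    int           channel;   /* channel the event applies to      */
    struct event *next;      /* singly-linked list sorted by time */
} event_t;

static event_t *queue = NULL;

static void schedule(double t, ev_type_t type, int ch) {
    event_t *e = malloc(sizeof *e);
    e->time_us = t; e->type = type; e->channel = ch;
    event_t **p = &queue;
    while (*p && (*p)->time_us <= t) p = &(*p)->next;   /* keep list time-ordered */
    e->next = *p; *p = e;
}

static void run(void) {
    while (queue) {
        event_t *e = queue; queue = e->next;
        switch (e->type) {
        case EV_HOST_IO:  /* occupy the channel for tR/tPROG, update queue depth */ break;
        case EV_GC_START: schedule(e->time_us + 700.0, EV_GC_DONE, e->channel);   break;
        case EV_GC_DONE:  /* release the channel, mark the reclaimed block empty */ break;
        }
        free(e);
    }
}

int main(void) {
    schedule(0.0, EV_HOST_IO, 0);
    schedule(10.0, EV_GC_START, 1);
    run();
    printf("simulation finished\n");
    return 0;
}
```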
The parameters used in the equation for determining the maximum number of concurrent GC channels were configured as follows: θ1 = 0.5, θ2 = 0.85, Nch = 8, and Gmax = 4 (i.e., Nch/2). These threshold values were chosen based on empirical observations and design considerations. The lower threshold θ1 = 0.5 ensures that garbage collection remains completely disabled while more than half of the SSD's capacity is still available, allowing foreground I/O to proceed unimpeded during light to moderate usage. The upper threshold θ2 = 0.85 reflects the point at which GC becomes increasingly urgent and full concurrency is needed to prevent space exhaustion. Between these thresholds, GC intensity ramps up smoothly to balance cleaning efficiency with user responsiveness. Gmax = 4 was selected to ensure that at least half of the system's channels remain available to service user read/write requests at any time, thereby minimizing the risk of system-wide stalls caused by GC contention.
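For concreteness, substituting the chosen parameters into G(t) yields the following channel limits at a few representative utilization levels (rounded down to a whole number of channels, an assumption consistent with the earlier sketch):

U(t) = 0.50  →  G = 0
U(t) = 0.70  →  G = ⌊4 · (0.70 − 0.50)/(0.85 − 0.50)⌋ = ⌊2.29⌋ = 2
U(t) = 0.80  →  G = ⌊4 · (0.80 − 0.50)/(0.85 − 0.50)⌋ = ⌊3.43⌋ = 3
U(t) ≥ 0.85  →  G = Gmax = 4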
Table 2 outlines the latency parameters used in our SSD simulation model for page-level operations across different components, including SRAM and NAND flash memory. To determine SRAM access delay, we utilized timing data from a single-port, 16-bit SRAM implemented on a Xilinx Spartan-7 FPGA (model: xc7s100fgga676-1). The measurements were obtained using Xilinx Vivado 2021.2 design tools. For NAND flash memory timing, we referenced the Micron MT29F4G08ABADAH4 device model [40], which provides representative read and write delay specifications for modern flash memory.

4.2. Trace File Analysis

Table 3 summarizes the characteristics of the workload traces collected using the methodology described in Section 4.1. These traces represent a diverse set of real-world storage environments, including database systems (MySQL, Cassandra, MongoDB, RocksDB) and file-system benchmarks (SQLite, Dbench, Varmail). Each trace captures the actual block-I/O patterns generated by its corresponding application under load, including request type distributions, access locality, and temporal burstiness.
To further characterize the nature of the workloads used in our experiments, we analyzed the types of files accessed during trace execution. Table 3 summarizes the distribution of file system entities involved in each workload. For several workloads, the proportion of certain categories is extremely small, and these categories are therefore marked as “n/a”.
To better characterize the workloads used in our experiments, we analyzed user-request patterns, as summarized in Table 4. This includes the proportions of read, write, and delete operations, along with their breakdown into sequential and random access types. The results reveal distinct I/O behaviors across applications—for example, some traces are dominated by sequential writes, while others involve frequent small random reads. Table 5 complements this by detailing the request-size distribution by operation type and access pattern. Together, these statistics offer a comprehensive view of workload diversity, essential for evaluating the generality and effectiveness of our proposed GC scheduling scheme. For several workloads—including Dbench, MySQL, SQLite, and Varmail—the proportion of read requests is extremely small, as reported in Table 4. Because these traces do not contain enough read operations to produce statistically meaningful measurements, the corresponding read-related entries in Table 4 and Table 5 are marked as “n/a” (not applicable). This notation indicates that read-oriented analysis is not meaningful for those workloads and avoids misinterpretation of incomplete or unreliable values.

4.3. Experimental Results

We conducted a series of simulations using our C-based SSD simulator to evaluate the effectiveness of the proposed adaptive GC scheduling scheme. Specifically, we measured both average and maximum read/write latencies under a variety of real-world workload traces. To establish a meaningful comparison, we implemented three conventional GC strategies:
  • Conv_0_AllCh: GC is enabled for all channels from the very beginning of SSD operation (i.e., at 0% storage utilization).
  • Conv_0.7_AllCh: GC is postponed until the SSD reaches 70% storage capacity, at which point all channels may participate.
  • Conv_0.7_HalfCh: This variant follows the same 70% usage threshold but introduces a concurrency constraint, allowing at most half of the channels to perform GC operations simultaneously.
These configurations allow controlled evaluation of GC trigger timing and concurrency effects, enabling a focused and fair comparison with our proposed Adaptive GC method.
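Viewed this way, the three baselines and the proposed scheme differ only in two knobs: when GC may begin and how many channels it may occupy. The struct below is an illustrative framing under that assumption, not the simulator's actual interface.

```c
/* GC policy expressed as a trigger threshold plus a concurrency cap. */
typedef struct {
    double trigger_util;   /* GC allowed once U(t) reaches this fraction            */
    int    max_gc_ch;      /* upper bound on simultaneously GC-active channels      */
    int    adaptive;       /* 1: ramp the limit with G(t); 0: fixed cap once armed  */
} gc_policy_t;

static const gc_policy_t CONV_0_ALLCH   = { 0.00, 8, 0 };  /* GC anywhere, any time      */
static const gc_policy_t CONV_07_ALLCH  = { 0.70, 8, 0 };  /* wait for 70%, then all ch. */
static const gc_policy_t CONV_07_HALFCH = { 0.70, 4, 0 };  /* wait for 70%, half the ch. */
static const gc_policy_t PROPOSED       = { 0.50, 4, 1 };  /* theta1 = 0.5, Gmax = 4     */
```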
Table 6 summarizes the measured average and maximum latencies for all configurations. For each workload, we compare the proposed method against the best-performing configuration among the three conventional GC schemes (Conv_0_AllCh, Conv_0.7_AllCh, and Conv_0.7_HalfCh). The average improvement factor was 4.86×. When excluding two extreme outliers (Cassandra’s average read latency and RocksDB’s maximum read latency), the improvement factor still remained substantial with an average of 1.55× and a standard deviation of 1.17, highlighting the consistency and robustness of the proposed method across diverse workload characteristics.
For several workloads—including Dbench, MySQL, SQLite, and Varmail—the proportion of read requests is extremely small, as reported in Table 4. Because these traces do not contain enough read operations to produce statistically meaningful latency measurements, the corresponding read-latency entries in Table 6 are marked as “n/a” (not applicable). This notation indicates that read-latency analysis is not meaningful for those workloads and avoids misinterpretation of incomplete or unreliable values.
The proposed GC scheme delivered particularly strong performance gains for workloads such as Cassandra and RocksDB. To understand the cause, we examined each workload’s I/O behavior. Figure 3 illustrates host I/O request patterns over time, and Figure 3a highlights Cassandra’s distinctive access pattern marked by intermittent spikes in read and write activity. These idle periods provide ideal windows for GC to operate with minimal interference, which our adaptive scheduling effectively exploits.
To analyze this effect in more detail for the Cassandra workload, we examined key runtime metrics such as the host request rate, the number of GC operations executed, and the depth of pending I/O queues across the SSD system. Under the conventional GC strategy, Conv_0_AllCh, Figure 4a shows that spikes in user requests coincide with GC activity, causing sudden increases in queue depth and degraded responsiveness. This contention is particularly harmful during sequential reads, where data is striped across channels and the slowest channel dictates overall completion time. In contrast, the proposed adaptive GC method dynamically limits the number of channels engaged in GC. As illustrated in Figure 4b, this results in more evenly distributed GC activity, which smooths queue depth fluctuations and improves latency. By avoiding GC bursts and maintaining steady space reclamation, the system achieves superior responsiveness and stability.
Large sequential data is typically striped across multiple channels to maximize parallelism and throughput. While this enhances bandwidth, it also creates interdependence: a sequential read cannot be completed until all its segments are retrieved from the respective channels. When GC occupies all channels simultaneously—as is common in conventional schemes—this dependency leads to pronounced delays in sequential reads as with Cassandra workload. Our adaptive GC mechanism addresses this by throttling the number of GC-active channels based on current SSD usage. This ensures that a sufficient number of channels remain free to handle user I/O, significantly reducing the risk of system-wide stalls. As a result, sequential reads are more likely to complete without interruption, particularly under workloads with intermittent access patterns such as Cassandra. Although it remains possible that a GC-active channel may hold multiple segments of a sequential file, such worst-case overlaps are statistically rare in SSDs with high channel counts. Thus, the adaptive throttling approach not only smooths average latency but also robustly guards against extreme tail latencies, offering improved responsiveness under mixed and sequential workloads.
The MongoDB workload presented a particularly challenging case for the proposed GC scheme, showing higher latencies across all metrics compared with the best-performing conventional approaches. From the beginning of operation, MongoDB generates a continuous stream of interleaved read and write requests, with frequent delete operations overlapping write activity. Once SSD utilization exceeds roughly 70%, these deletions lead to a large number of invalid blocks and trigger intensive GC activity, as illustrated in Figure 5a. Under the Conv_0.7_AllCh configuration, this results in a sharp increase in write latency, as nearly all channels become occupied by GC tasks. However, since GC requests are prioritized ahead of user reads in this configuration, read latency remains comparatively lower despite the heavy background activity.
In contrast, the proposed adaptive GC method intentionally limits the number of channels engaged in GC at any given time. While this design successfully prevents full-channel blocking, it also means that GC extends over a longer duration. In workloads like MongoDB, where write and delete operations occur continuously, this prolonged GC overlap with user I/O increases queue depth and response times. Consequently, both read and write latencies rise as the SSD remains under sustained mixed-load pressure. Figure 5b reflects this behavior, showing how the adaptive scheme trades immediate throughput for more evenly distributed—but ultimately heavier—GC interference in continuously active workloads. This illustrates a key trade-off: by preventing all-channel GC activation, our method avoids system-wide contention, yielding significantly improved latency stability and predictability.
We also examined how the SSD’s available storage capacity evolves over time under different GC strategies, as shown in Figure 6. Figure 6a plots the number of empty blocks at each time interval, where a block is only counted as empty after its garbage collection process is fully completed. Until GC is executed, blocks selected for reclamation remain marked as occupied.
As expected, Conv_0_AllCh shows the slowest decline in storage availability because it initiates GC immediately and aggressively, reclaiming space as soon as possible. This behavior is confirmed in Figure 6b, which illustrates the number of GC operations executed. In contrast, the proposed adaptive GC scheme, like the other two threshold-based methods—Conv_0.7_AllCh and Conv_0.7_HalfCh—begins reclamation only after the SSD reaches a designated utilization threshold. However, the key distinction is in how GC proceeds after activation. Whereas Conv_0.7_AllCh and Conv_0.7_HalfCh exhibit steep drops in the number of empty blocks, followed by sudden surges—reflecting large bursts of simultaneous GC activity across many channels—the proposed scheme maintains a smooth, gradual decrease in available space. These abrupt oscillations in the conventional approaches result from initiating GC across all or half the channels simultaneously, leading to short-lived but intensive space reclamation events.
By contrast, the proposed adaptive method evenly distributes GC activity over time, preventing abrupt shifts and maintaining a stable, consistent pattern of space recovery. This not only aligns GC execution with actual storage pressure but also helps to avoid performance degradation caused by sudden bursts of background activity.
Overall, this behavior demonstrates a key advantage of the proposed scheme: it avoids inefficient, burst-driven GC, supports continuous background cleaning, and enables more predictable and stable SSD performance across varying workloads and usage levels.
Overall, these results highlight several strengths of the proposed scheme:
  • It smoothly adapts GC intensity to usage conditions, reducing both average and tail latency.
  • It avoids channel-wide stalls by limiting concurrent GC, preserving throughput for foreground tasks.
  • It proves especially beneficial under burst-driven and journal-heavy workloads, where latency spikes from conventional GC methods can be severe.
These findings suggest that dynamic, usage-aware GC scheduling is essential for maintaining SSD responsiveness across diverse workloads, and that a fixed-threshold approach may be insufficient—especially in multi-channel architectures.
To validate the parameter choices used in our GC-concurrency control model, we conducted a sensitivity analysis examining how different threshold values and GC-concurrency limits affect latency and GC activity. Table 7 and Table 8 summarize the results obtained from varying θ1, θ2, and Gmax across a broad range of configurations. These results confirm that the selected values—θ1 = 0.5, θ2 = 0.85, and Gmax = Nch/2 = 4—provide the most consistent trade-off between cleaning effectiveness and user-level responsiveness.
In addition, Table 9 presents an evaluation using a larger NAND page size (8 KB) to examine the robustness of the proposed GC-concurrency control under different flash-geometry configurations. Increasing the page size alters the granularity of write and GC operations, which reduces the overall frequency of GC events and consequently narrows the performance gap between the proposed method and the conventional schemes. While the proposed method generally maintains stable and predictable performance across workloads, the results also reveal important workload-dependent behavior. For MongoDB, the proposed approach continues to show slightly worse performance than the best conventional configuration, consistent with the observations made under the 4 KB page size. For RocksDB, the proposed scheme clearly outperformed the conventional approaches at 4 KB, but this advantage diminishes under the 8 KB geometry, and one of the conventional configurations produces lower average write latency. This shift appears to stem from the way RocksDB’s compaction bursts interact with the coarser page granularity, which reduces GC frequency and alters the balance between GC concurrency and foreground I/O.

5. Conclusions

This paper introduced an adaptive, channel-aware garbage collection (GC) control mechanism for multi-channel SSDs that dynamically adjusts the number of GC-active channels based on real-time storage utilization. Unlike conventional controllers that permit uncoordinated GC across all channels—often causing full-channel blocking and user I/O stalls—our approach throttles GC concurrency in proportion to SSD usage. This design distributes GC activity more evenly over time, maintaining availability of channels for foreground requests and ensuring responsiveness under varying workloads.
Comprehensive evaluations using real-world workloads verified the effectiveness of the proposed method. On average, it achieved a 4.86× latency improvement over conventional schemes. The gains were especially pronounced under burst-intensive workloads, where the proposed approach reduced the average read latency from 427 ms to 0.6 ms, demonstrating its ability to suppress GC-induced contention and mitigate latency spikes effectively. While the method substantially enhances latency stability and responsiveness, it also introduces a trade-off: in sustained, high-intensity mixed I/O workloads such as MongoDB, the longer duration of moderated GC activity can lead to slightly higher write latency compared with more aggressive conventional schemes. This increase reflects a deliberate compromise to prevent channel-wide stalls and maintain predictable performance under heavy load.
Further analysis using a larger NAND page size (8 KB) showed that geometric scaling can influence the relative differences between GC strategies. Although the proposed method remained consistently stable across workloads, the performance gap between our approach and the best conventional scheme narrowed for some workloads, and for RocksDB the advantage observed under the 4 KB geometry diminished under the 8 KB configuration. This indicates that page size and related geometry parameters can shift the balance between GC frequency and foreground I/O and suggests that fully optimal concurrency-control parameters may benefit from adaptive tuning as flash geometry changes.
These results underscore the importance of dynamically regulating GC concurrency—particularly in SSDs with increasing parallelism and higher channel counts. By adapting GC intensity to real-time storage pressure, the proposed method offers a scalable and practical solution for latency-stable SSD management. Future work may explore combining this mechanism with advanced GC techniques, wear-leveling strategies, and I/O schedulers, as well as extending the framework to storage-class memory systems. In addition, validating the approach within publicly available SSD simulation environments such as SSDsim [41] and FEMU [42] remains an important next step, as these platforms provide richer flash-translation-layer designs and full-stack execution contexts that can further strengthen the realism and reproducibility of the evaluation.

Author Contributions

Conceptualization, H.M. and Y.H.; methodology, H.M. and Y.H.; coding, H.M. and Y.H.; validation, H.M. and Y.H.; formal analysis, H.M. and Y.H.; investigation, H.M. and Y.H.; resources, H.M. and Y.H.; writing—original draft preparation, H.M. and Y.H.; writing—review and editing, H.M. and Y.H.; visualization, H.M. and Y.H.; supervision, Y.H.; project administration, Y.H. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Dongguk University Research Fund of 2025.

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Narayanan, D.; Thereska, E.; Donnelly, A.; Elnikety, S.; Rowstron, A. Migrating server storage to SSDs: Analysis of tradeoffs. In Proceedings of the 4th ACM European Conference on Computer Systems, Nuremberg, Germany, 1–3 April 2009; pp. 145–158. [Google Scholar]
  2. Deng, Y. What is the future of disk drives, death or rebirth? ACM Comput. Surv. 2011, 43, 1–27. [Google Scholar] [CrossRef]
  3. Micheloni, R.; Marelli, A.; Eshghi, K.; Wong, G. SSD market overview. In Inside Solid State Drives (SSDs); Springer: New York, NY, USA, 2013; pp. 1–17. [Google Scholar]
  4. Guo, J.; Hu, Y.; Mao, B.; Wu, S. Parallelism and Garbage Collection Aware I/O Scheduler with Improved SSD Performance. In Proceedings of the IEEE International Parallel and Distributed Processing Symposium (IPDPS), Orlando, FL, USA, 29 May–2 June 2017; pp. 1184–1193. [Google Scholar]
  5. Bux, W.; Iliadis, I. Performance of greedy garbage collection in flash-based solid-state drives. Perform. Eval. 2010, 67, 1172–1186. [Google Scholar] [CrossRef]
  6. High Scalability. Google: Taming the Long Latency Tail—When More Machines Equals Worse Results. Available online: http://highscalability.com/blog/2012/3/12/google-taming-the-long-latency-tail-when-more-machines-equal.html (accessed on 12 March 2012).
  7. Dean, J.; Barroso, L.A. The tail at scale. Commun. ACM 2013, 56, 74–80. [Google Scholar] [CrossRef]
  8. Ae, J.; Hong, Y. Efficient garbage collection algorithm for low-latency SSDs. Electronics 2022, 11, 1084. [Google Scholar] [CrossRef]
  9. Zhang, X.; Pei, S.; Choi, J.; Kim, B.S. Excessive SSD-Internal Parallelism Considered Harmful. In Proceedings of the 15th ACM Workshop on Hot Topics in Storage and File Systems (HotStorage ’23), Boston, MA, USA, 9–11 July 2023; pp. 65–70. [Google Scholar] [CrossRef]
  10. Yan, S.; Li, H.; Hao, M.; Tong, M.H.; Sundararaman, S.; Chien, A.A.; Gunawi, H.S. Tiny-Tail Flash: Near-Perfect Elimination of Garbage-Collection Tail Latencies in SSDs. In Proceedings of the 15th USENIX Conference on File and Storage Technologies (FAST ’17), Santa Clara, CA, USA, 27 February–2 March 2017; pp. 15–28. [Google Scholar]
  11. Mishra, M.; Somani, A.K. On-Disk Data Processing: Issues and Future Directions. arXiv 2017, arXiv:1709.02718. [Google Scholar] [CrossRef]
  12. Gao, C.; Shi, L.; Xue, C.J.; Ji, C.; Yang, J.; Zhang, Y. Plane-Level Parallelism Exploration for High-Performance SSDs. In Proceedings of the 16th IEEE International Symposium on Mass Storage Systems and Technology (MSST ’19), Los Angeles, CA, USA, 13–15 May 2019; pp. 1–11. [Google Scholar] [CrossRef]
  13. Nadig, R.; Sadrosadati, M.; Mao, H.; Mansouri Ghiasi, N.; Tavakkol, A.; Park, J.; Sarbazi-Azad, H.; Gómez Luna, J.; Mutlu, O. Venice: Improving Solid-State Drive Parallelism at Low Cost via Conflict-Free Accesses. In Proceedings of the 50th International Symposium on Computer Architecture (ISCA ’23), Orlando, FL, USA, 17–21 June 2023; pp. 65–70. [Google Scholar] [CrossRef]
  14. Hsieh, J.-W.; Lin, H.-Y.; Yang, D.-L. Multi-Channel Architecture-Based FTL for Reliable and High-Performance SSDs. IEEE Trans. Comput. 2014, 63, 3079–3091. [Google Scholar] [CrossRef]
  15. Fisher, R. Optimizing NAND Flash Performance. In Proceedings of the Flash Memory Summit (FMS 2008), Santa Clara, CA, USA, 12 August 2008; pp. 1–23. [Google Scholar]
  16. Bang, K.; Kim, D.; Park, S.; Chung, E.; Lee, H. Application-Aware Design Parameter Exploration of NAND Flash Memory. J. Semicond. Technol. Sci. 2013, 13, 291–302. [Google Scholar] [CrossRef]
  17. Liu, R.; Chen, X.; Tan, Y.; Zhang, R.; Liang, L.; Liu, D. SSDKeeper: Self-Adapting Channel Allocation to Improve the Performance of SSD Devices. In Proceedings of the IEEE International Parallel and Distributed Processing Symposium (IPDPS), New Orleans, LA, USA, 18–22 May 2020; pp. 966–975. [Google Scholar]
  18. Zhu, J.; Wang, L.; Xiao, L.; Liu, L.; Qin, G. CFIO: A Conflict-Free I/O Mechanism to Fully Exploit Internal Parallelism for Open-Channel SSDs. J. Syst. Archit. 2023, 135, 102803. [Google Scholar] [CrossRef]
  19. Qin, Y.; Feng, D.; Liu, J.; Tong, W.; Zhu, Z. DT-GC: Adaptive Garbage Collection with Dynamic Thresholds for SSDs. In Proceedings of the 2014 International Conference on Cloud Computing and Big Data (CCBD), Wuhan, China, 17–19 November 2014; pp. 182–188. [Google Scholar]
  20. Van Houdt, B. On the necessity of hot and cold data identification to reduce the write amplification in flash-based SSDs. Perform. Eval. 2014, 82, 1–14. [Google Scholar] [CrossRef]
  21. Kang, W.; Shin, D.; Yoo, S. Reinforcement learning-assisted garbage collection to mitigate long-tail latency in SSD. ACM Trans. Embed. Comput. Syst. 2017, 16, 1–20. [Google Scholar] [CrossRef]
  22. Lee, J.; Kim, Y.; Shipman, G.M.; Oral, S.; Wang, F.; Kim, J. A semi-preemptive garbage collector for solid-state drives. In Proceedings of the IEEE International Symposium on Performance Analysis of Systems and Software, Austin, TX, USA, 4–6 April 2011; pp. 12–21. [Google Scholar]
  23. Yao, Y.; Kong, X.; Xu, X.; Gu, N.; Feng, W. Uniform scheduling of interruptible garbage collection and request I/O to improve performance and wear-leveling of SSDs. J. Supercomput. 2022, 78, 9691–9710. [Google Scholar] [CrossRef]
  24. Shahidi, N.; Arjomand, M.; Jung, M.; Kandemir, M.T.; Das, C.R.; Sivasubramaniam, A. Exploring the potentials of parallel garbage collection in SSDs for enterprise storage systems. In Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis (SC), Salt Lake City, UT, USA, 13–18 November 2016; pp. 753–764. [Google Scholar]
  25. Wu, F.; Zhou, J.; Wang, S.; Du, Y.; Yang, C.; Xie, C. FastGC: Accelerate Garbage Collection via an Efficient Copyback-based Data Migration in SSDs. In Proceedings of the 55th Annual Design Automation Conference (DAC), San Francisco, CA, USA, 24–28 June 2018; pp. 1–6. [Google Scholar]
  26. MySQL 8.0 Reference Manual. Available online: https://imysql.com/wp-content/uploads/2014/10/sysbench-manual.pdf (accessed on 10 October 2025).
  27. Apache Cassandra Documentation v3.11.13. Available online: https://cassandra.apache.org/doc/3.11.13/ (accessed on 10 October 2025).
  28. MongoDB Documentation v6.0. Available online: https://www.mongodb.com/ko-kr/docs/v6.0/ (accessed on 13 October 2025).
  29. RocksDB Documentation. Available online: http://rocksdb.org/docs/getting-started.html (accessed on 11 October 2025).
  30. Gaffney, K.P.; Prammer, M.; Brasfield, L.; Hipp, D.R.; Kennedy, D.; Patel, J.M. SQLite: Past, Present, and Future. Proc. VLDB Endow. 2022, 15, 3535–3547. [Google Scholar] [CrossRef]
  31. DBENCH: Filesystem/Server Load Tool. Available online: https://dbench.samba.org/ (accessed on 18 October 2025).
  32. Sysbench Manual. Available online: https://imysql.com/wp-content/uploads/2014/10/sysbench-manual.pdf (accessed on 10 November 2025).
  33. Cooper, B.F.; Silberstein, A.; Tam, E.; Ramakrishnan, R.; Sears, R. Benchmarking Cloud Serving Systems with YCSB. In Proceedings of the 1st ACM Symposium on Cloud Computing (SOCC 10), Indianapolis, IN, USA, 10–11 June 2010. [Google Scholar]
  34. Brunelle, A.D. Block I/O Layer Tracing: Blktrace; Hewlett Packard Company: Cupertino, CA, USA, 2006; p. 57. [Google Scholar]
  35. Rostedt, S. Ftrace Linux kernel tracing. In Proceedings of the Linux Conference Japan, Tokyo, Japan, 5–6 October 2010. [Google Scholar]
  36. Chun, Y.; Han, K.; Hong, Y. High-Performance Multi-Stream Management for SSDs. Electronics 2021, 10, 486. [Google Scholar] [CrossRef]
  37. Lee, J.; Kim, Y.; Shipman, G.M.; Oral, S.; Kim, J. Preemptible I/O scheduling of garbage collection for solid state drives. IEEE Trans. Comput. Aided Des. Integr. Circuits Syst. 2013, 32, 247–260. [Google Scholar] [CrossRef]
  38. Zhang, Q.; Li, X.; Wang, L.; Zhang, T.; Wang, Y.; Shao, Z. Optimizing deterministic garbage collection in NAND flash storage systems. In Proceedings of the 21st IEEE Real-Time and Embedded Technology and Applications Symposium (RTAS), Seattle, WA, USA, 13–16 April 2015. [Google Scholar]
  39. Kishani, M.; Ahmadian, S.; Asadi, H. A modeling framework for reliability of erasure codes in SSD arrays. IEEE Trans. Comput. 2020, 69, 649–665. [Google Scholar] [CrossRef]
  40. Micron. NAND Flash Memory MT29F4G08ABADAH4 16Gb Asynchronous/Synchronous NAND Features Datasheet. Available online: https://www.micron.com/products/nand-flash (accessed on 15 July 2020).
  41. Hu, Y.; Hong, J.; Dan, F.; Lei, T.; Hao, L.; Zhang, S. Performance impact and interplay of SSD parallelism through advanced commands, allocation strategy and data granularity. In Proceedings of the ACM International Conference on Supercomputing (ICS), Tucson, AZ, USA, 31 May–4 June 2011; pp. 96–107. [Google Scholar] [CrossRef]
  42. Li, H.; Hao, M.; Tong, M.H.; Sundararaman, S.; Bjørling, M.; Gunawi, H.S. The CASE of FEMU: Cheap, accurate, scalable and extensible flash emulator. In Proceedings of the 16th USENIX Conference on File and Storage Technologies (FAST), Oakland, CA, USA, 12–15 February 2018; pp. 83–90. [Google Scholar]
Figure 1. SSD Controller Architecture (adapted from [11]).
Figure 2. Garbage collection procedure in SSD.
Figure 3. Host Request Activity Over Time.
Figure 4. Cassandra workload: (a) host request, pending request, and GC request snapshot with the Conv_0_AllCh GC scheme; (b) with the proposed scheme.
Figure 5. MongoDB workload: (a) Conv_0.7_AllCh; (b) Proposed.
Figure 6. Remaining empty blocks and number of GC operations executed for Cassandra.
Table 1. Parameters of SSD configuration.

| Features | Value |
| --- | --- |
| Number of Channels | 8 |
| Planes per Channel | 4 |
| Blocks per Plane | 512 |
| Pages per Block | 128 |
| Page Size (KB) | 4 |

Table 2. Delay of SSD components for one page read or write.

| Delay | Value |
| --- | --- |
| SRAM read (μs) | 12 |
| SRAM write (μs) | 12 |
| NAND read (μs) | 25 |
| NAND write (μs) | 200 |
| NAND erase (μs) | 700 |

Table 3. Distribution of File Types and Metadata Access Categories in Workload Traces.

| Category (%) | Cassandra | Dbench | MySQL | SQLite | MongoDB | RocksDB | Varmail |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Journal | 64.77 | 82.88 | 56.60 | 55.55 | 54.18 | 72.84 | 81.53 |
| Inode | 0.92 | n/a | 0.01 | 0.01 | n/a | n/a | 0.53 |
| Directory | 0.64 | 0.07 | 0.01 | 0.01 | n/a | n/a | 3.93 |
| Misc. meta | 0.31 | n/a | n/a | n/a | n/a | n/a | 0.03 |
| Sync.-create | 19.29 | 8.23 | n/a | 11.21 | 6.55 | 10.71 | 5.08 |
| Sync.-append | 2.66 | 8.62 | 43.38 | 33.22 | 39.25 | 12.28 | 8.64 |
| Asynchronous | 9.93 | 0.20 | n/a | n/a | n/a | 0.12 | 0.08 |

Table 4. User-requested operation and access type distribution across workloads.

| Workloads | Cassandra | Dbench | MySQL | SQLite | MongoDB | RocksDB | Varmail |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Read Ratio (%) | 77.48 | n/a | n/a | n/a | 35.32 | 49.34 | n/a |
| Write Ratio (%) | 19.15 | 93.16 | 100.00 | 91.46 | 60.46 | 48.42 | 95.45 |
| Delete Ratio (%) | 3.37 | 6.84 | n/a | 8.54 | 4.22 | 2.24 | 4.55 |

(a) Distribution of read, write, and delete operations.

| Workloads | Cassandra | Dbench | MySQL | SQLite | MongoDB | RocksDB | Varmail |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Sequential Request (%) | 56.83 | 13.21 | 9.80 | 18.66 | 18.66 | 9.90 | 9.90 |
| Random Request (%) | 43.17 | 86.79 | 90.20 | 81.34 | 81.34 | 90.10 | 90.10 |

(b) Overall distribution of sequential vs. random accesses.

| Workloads | Cassandra | Dbench | MySQL | SQLite | MongoDB | RocksDB | Varmail |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Sequential Read (%) | 63.83 | n/a | n/a | n/a | 26.77 | 10.30 | n/a |
| Random Read (%) | 36.17 | n/a | n/a | n/a | 73.23 | 89.70 | n/a |

(c) Access type breakdown for read operations.

| Workloads | Cassandra | Dbench | MySQL | SQLite | MongoDB | RocksDB | Varmail |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Sequential Write (%) | 22.96 | 7.63 | 9.80 | 11.08 | 8.27 | 6.48 | 6.79 |
| Random Write (%) | 77.04 | 92.37 | 90.20 | 88.92 | 91.73 | 93.52 | 93.21 |

(d) Access type breakdown for write operations.

| Workloads | Cassandra | Dbench | MySQL | SQLite | MongoDB | RocksDB | Varmail |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Sequential Delete (%) | 88.60 | 89.14 | n/a | 99.84 | 99.68 | 75.09 | 75.21 |
| Random Delete (%) | 11.40 | 10.86 | n/a | 0.16 | 0.32 | 24.91 | 24.79 |

(e) Access type breakdown for delete operations.

Table 5. Request size distribution by operation type and access pattern.

| Workloads | Cassandra | Dbench | MySQL | SQLite | MongoDB | RocksDB | Varmail |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Avg. Read Size (KB) | 70.39 | n/a | n/a | n/a | 8.59 | 4.80 | n/a |
| Avg. Write Size (KB) | 103.82 | 7.66 | 5.77 | 4.89 | 4.52 | 4.61 | 4.56 |
| Avg. Delete Size (KB) | 422.17 | 51.74 | n/a | 14.30 | 14.34 | 10.03 | 10.20 |
| Avg. Seq. Read Size (KB) | 108.16 | n/a | n/a | n/a | 13.03 | 12.53 | n/a |
| Avg. Seq. Write Size (KB) | 438.36 | 51.91 | 22.06 | 12.00 | 12.00 | 11.88 | 12.23 |
| Avg. Seq. Delete Size (KB) | 421.72 | 51.31 | n/a | 14.29 | 14.33 | 9.03 | 9.21 |

Table 6. Latency comparison across GC policies (γ: best conventional latency / proposed latency).

| Avg. Read Latency (ms) | Cassandra | Dbench | MySQL | SQLite | MongoDB | RocksDB | Varmail |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Conv_0_AllCh | 427 | n/a | n/a | n/a | 0.6 | 29 | n/a |
| Conv_0.7_AllCh | 16 | n/a | n/a | n/a | 0.08 | 164 | n/a |
| Conv_0.7_HalfCh | 13 | n/a | n/a | n/a | 0.06 | 163 | n/a |
| Proposed | 0.6 | n/a | n/a | n/a | 0.3 | 9 | n/a |
| γ | 21.7 | n/a | n/a | n/a | 0.27 | 3.22 | n/a |

| Avg. Write Latency (ms) | Cassandra | Dbench | MySQL | SQLite | MongoDB | RocksDB | Varmail |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Conv_0_AllCh | 22 | 8.79 | 0.64 | 0.53 | 0.5 | 1 | 1.16 |
| Conv_0.7_AllCh | 50 | 14.02 | 18.02 | 34.97 | 8.9 | 1 | 1.23 |
| Conv_0.7_HalfCh | 50 | 8.89 | 16.35 | 0.46 | 1 | 1.33 | 1.07 |
| Proposed | 21 | 6.15 | 0.44 | 0.44 | 0.4 | 0.84 | 0.91 |
| γ | 1.05 | 1.43 | 1.45 | 1.05 | 1.25 | 1.19 | 1.18 |

| Max. Read Latency (ms) | Cassandra | Dbench | MySQL | SQLite | MongoDB | RocksDB | Varmail |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Conv_0_AllCh | 5123 | n/a | n/a | n/a | 31.6 | 31,390 | n/a |
| Conv_0.7_AllCh | 2342 | n/a | n/a | n/a | 6.9 | 31,391 | n/a |
| Conv_0.7_HalfCh | 2355 | n/a | n/a | n/a | 0.77 | 21,374 | n/a |
| Proposed | 460 | n/a | n/a | n/a | 30.4 | 363 | n/a |
| γ | 5.09 | n/a | n/a | n/a | 0.03 | 58.88 | n/a |

| Max. Write Latency (ms) | Cassandra | Dbench | MySQL | SQLite | MongoDB | RocksDB | Varmail |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Conv_0_AllCh | 678 | 1020 | 99 | 35.7 | 39 | 187 | 234 |
| Conv_0.7_AllCh | 3711 | 9066 | 16,326 | 6057 | 10,884 | 187 | 2630 |
| Conv_0.7_HalfCh | 678 | 5192 | 14,519 | 651 | 8767 | 187 | 1038 |
| Proposed | 774 | 793 | 36 | 70.3 | 145 | 45 | 187 |
| γ | 0.88 | 1.29 | 2.75 | 0.51 | 0.27 | 4.16 | 1.25 |

Table 7. Latency comparison across various θ1 and θ2 values.

| Avg. Read Latency (ms) | Cassandra | Dbench | MySQL | SQLite | MongoDB | RocksDB | Varmail |
| --- | --- | --- | --- | --- | --- | --- | --- |
| θ1 = 0.4, θ2 = 0.75 | 0.6 | n/a | n/a | n/a | 0.4 | 13 | n/a |
| θ1 = 0.5, θ2 = 0.85 | 0.6 | n/a | n/a | n/a | 0.3 | 9 | n/a |
| θ1 = 0.6, θ2 = 0.95 | 0.6 | n/a | n/a | n/a | 0.3 | 162 | n/a |

| Avg. Write Latency (ms) | Cassandra | Dbench | MySQL | SQLite | MongoDB | RocksDB | Varmail |
| --- | --- | --- | --- | --- | --- | --- | --- |
| θ1 = 0.4, θ2 = 0.75 | 21 | 6.16 | 0.44 | 0.46 | 0.4 | 0.94 | 0.91 |
| θ1 = 0.5, θ2 = 0.85 | 21 | 6.15 | 0.44 | 0.44 | 0.4 | 0.84 | 0.91 |
| θ1 = 0.6, θ2 = 0.95 | 21 | 6.11 | 0.43 | 0.44 | 0.4 | 0.94 | 0.91 |

| Max. Read Latency (ms) | Cassandra | Dbench | MySQL | SQLite | MongoDB | RocksDB | Varmail |
| --- | --- | --- | --- | --- | --- | --- | --- |
| θ1 = 0.4, θ2 = 0.75 | 460 | n/a | n/a | n/a | 39.5 | 21,374 | n/a |
| θ1 = 0.5, θ2 = 0.85 | 460 | n/a | n/a | n/a | 30.4 | 363 | n/a |
| θ1 = 0.6, θ2 = 0.95 | 460 | n/a | n/a | n/a | 48.6 | 21,368 | n/a |

| Max. Write Latency (ms) | Cassandra | Dbench | MySQL | SQLite | MongoDB | RocksDB | Varmail |
| --- | --- | --- | --- | --- | --- | --- | --- |
| θ1 = 0.4, θ2 = 0.75 | 832 | 793 | 42 | 64.4 | 75 | 187 | 187 |
| θ1 = 0.5, θ2 = 0.85 | 774 | 793 | 36 | 70.3 | 145 | 45 | 187 |
| θ1 = 0.6, θ2 = 0.95 | 803 | 767 | 26 | 50.0 | 53 | 187 | 187 |

Table 8. Latency comparison across various Nch values.

| Avg. Read Latency (ms) | Cassandra | Dbench | MySQL | SQLite | MongoDB | RocksDB | Varmail |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 2 channels (Nch·1/4) | 14.4 | n/a | n/a | n/a | 0.3 | 163 | n/a |
| 4 channels (Nch·2/4) | 0.6 | n/a | n/a | n/a | 0.3 | 9 | n/a |
| 6 channels (Nch·3/4) | 0.6 | n/a | n/a | n/a | 0.3 | 163 | n/a |

| Avg. Write Latency (ms) | Cassandra | Dbench | MySQL | SQLite | MongoDB | RocksDB | Varmail |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 2 channels (Nch·1/4) | 21 | 6.12 | 0.43 | 0.44 | 0.4 | 0.94 | 0.91 |
| 4 channels (Nch·2/4) | 21 | 6.15 | 0.44 | 0.44 | 0.4 | 0.84 | 0.91 |
| 6 channels (Nch·3/4) | 21 | 6.18 | 0.44 | 0.45 | 0.4 | 0.94 | 0.91 |

| Max. Read Latency (ms) | Cassandra | Dbench | MySQL | SQLite | MongoDB | RocksDB | Varmail |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 2 channels (Nch·1/4) | 2355 | n/a | n/a | n/a | 43.9 | 21,357 | n/a |
| 4 channels (Nch·2/4) | 460 | n/a | n/a | n/a | 30.4 | 363 | n/a |
| 6 channels (Nch·3/4) | 460 | n/a | n/a | n/a | 29.3 | 30,064 | n/a |

| Max. Write Latency (ms) | Cassandra | Dbench | MySQL | SQLite | MongoDB | RocksDB | Varmail |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 2 channels (Nch·1/4) | 772 | 766 | 26 | 98.3 | 130 | 187 | 187 |
| 4 channels (Nch·2/4) | 774 | 793 | 36 | 70.3 | 145 | 45 | 187 |
| 6 channels (Nch·3/4) | 736 | 795 | 42 | 61.6 | 38.6 | 187 | 187 |

Table 9. Latency comparison across GC policies with 8K page size (γ: best conventional latency / proposed latency).

| Avg. Read Latency (ms) | Cassandra | Dbench | MySQL | SQLite | MongoDB | RocksDB | Varmail |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Conv_0_AllCh | 432 | n/a | n/a | n/a | 0.7 | 18 | n/a |
| Conv_0.7_AllCh | 0.4 | n/a | n/a | n/a | 13.8 | 161 | n/a |
| Conv_0.7_HalfCh | 0.4 | n/a | n/a | n/a | 14.5 | 5 | n/a |
| Proposed | 0.4 | n/a | n/a | n/a | 0.4 | 172 | n/a |
| γ | 1.0 | n/a | n/a | n/a | 1.75 | 0.03 | n/a |

| Avg. Write Latency (ms) | Cassandra | Dbench | MySQL | SQLite | MongoDB | RocksDB | Varmail |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Conv_0_AllCh | 23 | 11.34 | 1.16 | 1.26 | 1.0 | 2.78 | 2.90 |
| Conv_0.7_AllCh | 22 | 14.02 | 3.24 | 52.43 | 1.3 | 1.90 | 1.92 |
| Conv_0.7_HalfCh | 21 | 7.06 | 0.70 | 0.85 | 0.7 | 1.90 | 1.92 |
| Proposed | 21 | 7.15 | 0.69 | 0.41 | 0.7 | 1.90 | 1.93 |
| γ | 1.0 | 0.99 | 1.01 | 2.07 | 1.0 | 1.0 | 0.99 |

| Max. Read Latency (ms) | Cassandra | Dbench | MySQL | SQLite | MongoDB | RocksDB | Varmail |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Conv_0_AllCh | 7217 | n/a | n/a | n/a | 37.9 | 24,561 | n/a |
| Conv_0.7_AllCh | 460 | n/a | n/a | n/a | 3714.4 | 9447 | n/a |
| Conv_0.7_HalfCh | 460 | n/a | n/a | n/a | 3317.8 | 9447 | n/a |
| Proposed | 460 | n/a | n/a | n/a | 73.3 | 24,559 | n/a |
| γ | 1.0 | n/a | n/a | n/a | 0.52 | 0.38 | n/a |

| Max. Write Latency (ms) | Cassandra | Dbench | MySQL | SQLite | MongoDB | RocksDB | Varmail |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Conv_0_AllCh | 680 | 939 | 64 | 87.7 | 85 | 194 | 221 |
| Conv_0.7_AllCh | 2615 | 9066 | 2883 | 7640.7 | 3276 | 194 | 194 |
| Conv_0.7_HalfCh | 678 | 767 | 942 | 300.3 | 528 | 194 | 194 |
| Proposed | 678 | 767 | 22 | 43.6 | 163 | 194 | 194 |
| γ | 1.0 | 1.0 | 2.91 | 2.01 | 0.52 | 1.0 | 1.0 |

Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
