Article

ReZNS: Energy and Performance-Optimal Mapping Mechanism for ZNS SSD

Chanyong Lee, Sangheon Lee, Gyupin Moon, Hyunwoo Kim, Donghyeok An and Donghyun Kang

1 Department of Computer Engineering, College of IT Convergence, Gachon University, Seongnam-si 13120, Republic of Korea
2 Department of Computer Engineering, Changwon National University, Changwon-si 51140, Republic of Korea
* Authors to whom correspondence should be addressed.
Appl. Sci. 2024, 14(21), 9717; https://doi.org/10.3390/app14219717
Submission received: 25 September 2024 / Revised: 22 October 2024 / Accepted: 22 October 2024 / Published: 24 October 2024
(This article belongs to the Section Computing and Artificial Intelligence)

Abstract

Today, energy and performance efficiency have become crucial factors in modern computing environments, such as high-end mobile devices, desktops, and enterprise servers, because data volumes in cloud datacenters are increasing exponentially. Unfortunately, many researchers and engineers neglect the power consumption and internal performance overhead incurred by storage devices. In this paper, we present the renewable-zoned namespace (ReZNS), an energy- and performance-optimal mapping mechanism for emerging ZNS SSDs. Specifically, ReZNS recycles the remaining capacity of zones that are no longer used by adding a renewable concept to the mapping mechanism. We implemented a prototype of ReZNS on NVMeVirt and performed comprehensive experiments with diverse workloads, from synthetic to real-world, to quantitatively confirm the power and performance benefits. Our evaluation results show that ReZNS improves overall performance by up to 60% and reduces total power consumption by up to 3% relative to the baseline ZNS SSD. We believe ReZNS creates new opportunities to prolong the lifespan of various consumer electronics, such as TVs, AV equipment, and mobile devices, because storage devices play a crucial role in their replacement cycle.

1. Introduction

With the rapid growth of cloud datacenters and their data volumes, energy efficiency has recently gained significant attention from researchers in academia and industry [1,2,3,4,5,6,7]. Specifically, as evidence of the impact of carbon emissions on the environment and society mounts, many cloud providers (e.g., AWS, Google, and Microsoft) and manufacturers (e.g., Samsung and Huawei) have invested substantial effort in improving energy efficiency [8,9,10,11]. For example, Google announced a plan to run on carbon-free energy by 2030, and Meta is focusing on renewable energy with wind and solar projects [12,13]. Samsung Electronics is working to reduce the carbon emissions generated during the manufacturing of TV, AV, display, and mobile devices [7]. Nevertheless, these companies still struggle with energy consumption because deep learning (DL) workloads requiring high-end resources (i.e., CPU, memory, and storage devices) are dominant and incur resource contention in multi-tenant environments [14,15,16,17,18].
Meanwhile, some researchers and engineers have focused on the storage stack to isolate resource interference among clients in cloud environments. They proposed a new type of storage device, the ZNS SSD, which efficiently guarantees the I/O performance of each running application through the zoned namespace interface. ZNS SSDs have gained popularity for their proven high performance in diverse domains, as they handle all write requests in an append-only manner [19,20,21,22,23,24,25,26].
For example, ZMS was proposed as a new mobile I/O stack that adopts the zoned abstraction to enhance performance with a low write amplification factor (WAF) on mobile devices [27]. Other researchers proposed a new interface and ZNS-aware file systems for ZNS SSDs and demonstrated their potential by implementing them on a real SSD device [20]. Several recent studies on ZNS SSDs were conducted with SSD emulators, such as ConfZNS and NVMeVirt, which help in understanding how ZNS works in detail [21,28].
Unfortunately, clients in cloud environments may not use all the capacity allocated to each zone on a ZNS SSD due to the behaviors of applications and frameworks. In this case, the internal capacity of the ZNS SSD is wasted, because the capacity reserved in a zone cannot be used by other clients. Under-utilization across multiple zones on a ZNS SSD can have a negative impact on overall energy consumption and performance because free space must be frequently reclaimed with zone-reset commands. Guided by these insights, we propose ReZNS, a new mapping mechanism inside the ZNS SSD that monitors the unused space in reserved zones and dynamically allocates it to other clients. Therefore, ReZNS can provide high performance and energy efficiency, as it achieves better space utilization through collaboration among zones on the ZNS SSD. We implement ReZNS in NVMeVirt and perform comprehensive experiments with diverse workloads, such as FIO, Filebench, and the Yahoo Cloud Serving Benchmark (YCSB) [29]. In particular, we compare the efficiency of ReZNS in terms of performance and energy consumption with baseline ZNS SSDs. Extensive experimental results clearly show that ReZNS improves overall performance by up to 60% and reduces total energy consumption by up to 3% by efficiently decreasing the number of zone-reset commands inside ZNS SSDs. We believe ReZNS is a good choice for future sustainable systems that prioritize overall performance and energy efficiency. In summary, the main contributions of this paper are:
  • We perform an in-depth breakdown to understand the internal behaviors of ZNS SSDs, including the mapping mechanism and zone-reset command.
  • We design and implement ReZNS, a novel mapping mechanism for ZNS SSDs that includes the management policy for the zones that are no longer in use and the mapping policy for sharing unused capacity.
  • We evaluate and quantitatively compare the benefits of ReZNS using both synthetic and real-world workloads. ReZNS significantly reduces the number of zone-reset commands by up to 61%.
The rest of this paper is organized as follows: Section 2 describes the background to understand our work. Section 3 shows the key design of ReZNS in detail. Section 4 details the benefits of ReZNS with comprehensive evaluation results. Finally, Section 5 briefly reviews related work, and Section 6 concludes this paper.

2. Background

Nowadays, NAND flash-based solid-state drives (SSDs) play a crucial role in various environments and are very popular due to their attractive benefits compared with hard disk drives (HDDs): low power consumption, high performance, and high density [30,31,32,33,34,35]. In addition, advancements in flash density technology (e.g., MLC, TLC, and QLC) are significantly reducing the cost gap between HDDs and SSDs [36,37,38]. The internal architecture and characteristics of SSDs are well known: (1) they are essentially composed of NAND flash memory; (2) read and write operations are handled at page granularity, whereas erase operations are performed at block granularity; (3) erase-before-write requires that an entire block be erased before over-writing any page in that block; (4) the total number of erase operations is limited (the device's lifespan); and (5) performance for random write patterns is relatively poor [39,40,41,42,43]. To hide these unique features, a typical SSD includes both special software, called the flash translation layer (FTL), which maintains the mapping table between logical block addresses and physical block addresses, and a special task, called garbage collection (GC) [44,45,46].
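To make the FTL's role concrete, the following C sketch models a minimal page-level mapping table with out-of-place updates. The constants, the array-backed table, and the log-style page allocator are illustrative assumptions; a real FTL additionally tracks stale pages, runs garbage collection, performs wear leveling, and persists its metadata.

#include <stdint.h>

#define NUM_BLOCKS       1024
#define PAGES_PER_BLOCK  64
#define TOTAL_PAGES      (NUM_BLOCKS * PAGES_PER_BLOCK)
#define INVALID_PPA      UINT32_MAX

/* Page-level FTL: logical page address (LPA) -> physical page address (PPA). */
static uint32_t l2p[TOTAL_PAGES];     /* mapping table kept in DRAM          */
static uint32_t next_free_ppa;        /* next free physical page (log-style) */

void ftl_init(void)
{
    for (uint32_t lpa = 0; lpa < TOTAL_PAGES; lpa++)
        l2p[lpa] = INVALID_PPA;       /* nothing mapped yet                  */
}

/* Out-of-place update: every write lands on a fresh physical page; the old
 * page (if any) becomes stale and must later be reclaimed by GC, which is
 * omitted here. */
int ftl_write(uint32_t lpa)
{
    if (next_free_ppa >= TOTAL_PAGES)
        return -1;                    /* no free page left: GC would run     */
    l2p[lpa] = next_free_ppa++;       /* remap the logical page              */
    return 0;
}

uint32_t ftl_read(uint32_t lpa)
{
    return l2p[lpa];                  /* INVALID_PPA means "never written"   */
}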
Meanwhile, the performance of random writes in SSDs remains a critical concern. In addition, many researchers and engineers have reported that random writes can shorten the limited lifespan by triggering extra erase operations [47,48,49]. Numerous research and commercial efforts have sought to improve random-write performance. Researchers have focused on the storage stack (e.g., page cache, file system, and block I/O) and proposed new mechanisms and algorithms to enhance random-write performance. For example, F2FS is a flash-friendly file system that follows the basic rules of a log-structured file system (LFS) to handle all write requests in sequential order [50]. Other researchers have focused on the NVMe interface and the internal architecture of traditional SSDs. As a result, zoned namespace (ZNS) SSDs have recently emerged as a new type of SSD that fundamentally addresses the random-write problem at the hardware level.
Figure 1 shows the internal architecture of ZNS SSDs in detail. The zone is the basic building block of a ZNS SSD: consecutive flash blocks are grouped into equally sized zones to isolate the address space at zone granularity [19]. An application using a ZNS SSD must create its own zone before issuing write requests (we call a zone allocated to an application an “open zone” in this paper). Within an open zone, data must be placed in sequential order, and the data are later cleared by a zone-reset command issued by the host when the zone becomes obsolete. Meanwhile, ZNS SSDs never require a large mapping table (i.e., FTL), because a write pointer dedicated to each zone indicates the position of the next write request.
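The append-only contract of a zone and the role of the per-zone write pointer can be summarized in the short C sketch below. The structure, state names, and block-granularity bookkeeping are simplified assumptions rather than the NVMe ZNS command set.

#include <stdbool.h>
#include <stdint.h>

#define ZONE_CAPACITY_BLOCKS 10        /* illustrative blocks per zone */

enum zone_state { ZONE_EMPTY, ZONE_OPEN, ZONE_FULL };

struct zone {
    enum zone_state state;
    uint32_t write_ptr;                /* next block to write, relative to the zone start */
};

/* Writes are accepted only at the write pointer, i.e., in append-only order. */
bool zone_append(struct zone *z, uint32_t nblocks)
{
    if (z->state == ZONE_FULL || z->write_ptr + nblocks > ZONE_CAPACITY_BLOCKS)
        return false;                  /* would overflow the zone: rejected */
    z->state = ZONE_OPEN;
    z->write_ptr += nblocks;
    if (z->write_ptr == ZONE_CAPACITY_BLOCKS)
        z->state = ZONE_FULL;
    return true;
}

/* A zone reset erases every flash block in the zone and rewinds the write
 * pointer, even if part of the zone was never written. */
void zone_reset(struct zone *z)
{
    z->write_ptr = 0;
    z->state = ZONE_EMPTY;
}

Note that the only per-zone metadata here is the state and the write pointer; there is no per-page mapping, which is why ZNS SSDs can avoid a large FTL.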
Unfortunately, partitioning the internal space into isolated zones introduces a new challenge: wasted capacity in regions that are no longer used. For instance, assume that all available zones are open and that each zone is filled to about half of its capacity and then left idle without triggering the zone-reset command. In this case, write requests for a new zone cannot be serviced even though the ZNS SSD has sufficient free space, because there is no mechanism to dynamically reduce a zone’s capacity. Consequently, half of the total capacity of the ZNS SSD is unintentionally wasted. On the other hand, assume that the zone-reset command is issued after only a few small write requests. Since the command erases all flash blocks belonging to a zone at once, flash blocks that have never been used are unnecessarily erased. In this case, the flash blocks are gradually damaged by the repeated erase operations, and the ZNS SSD requires more power to handle them. This motivates us to design a practical mechanism for ZNS SSDs.

3. Design and Implementation

In this section, we propose the renewable-zoned namespace (ReZNS), which improves overall performance and power consumption in emerging ZNS SSDs by sharing usable capacity among open zones. In particular, ReZNS keeps the basic design of traditional ZNS SSDs and adds a renewable concept to the mapping mechanism, along with a daemon that detects obsolete zones left in an idle state. ReZNS can thereby maximize space utilization in ZNS SSDs, which are composed of equally partitioned and isolated regions called zones. Algorithm 1 shows the pseudocode of ReZNS.
Algorithm 1 Sample pseudocode of ReZNS.

Function: void reset_zone (int zoneID)
 1: if remaining capacity(zoneID) > 25% then
 2:     if the number of zones in spare-list exceeds half of the number of total zones then
 3:         ReGC()
 4:     end if
 5:     put_sparelist(zoneID)
 6: else
 7:     clear flash blocks belonging to the zone
 8: end if

Function: int zoneID create_zone (void)
 9: if spare-list is not empty then
10:     zoneID = get_sparelist()
11: else
12:     zoneID = new_zone()
13: end if
14: return zoneID

// this function is periodically triggered based on a time threshold
Daemon: void scan_zombie_zone (void)
15: zoneID = head_openzone()
16: while zoneID != NULL do
17:     if is_zombie_zone(zoneID) then
18:         put_sparelist(zoneID)
19:     end if
20:     zoneID = next_openzone()
21: end while

3.1. Overall Architecture of ReZNS

Zone-reset and Renewable-zones: Unlike a traditional ZNS SSD, ReZNS does not immediately handle a zone-reset command from the host, because the corresponding zone may be reused in the future. Of course, this behavior may have a negative effect on space utilization; ReZNS therefore employs ReGC to address the problem (we describe ReGC in more detail below). When the ZNS SSD needs to service a zone-reset command issued by the host, ReZNS first investigates whether the corresponding zone is renewable based on its remaining capacity and then categorizes it into one of two types: a normal zone (Nzone) or a renewable zone (Rzone). If the remaining capacity of the zone to be reset is more than 25%, ReZNS marks it as an Rzone and inserts it into an auxiliary list (the “spare-list”) instead of performing the zone-reset command. Consequently, ReZNS delays the costly zone-reset command for any zone whose remaining capacity exceeds 25%, which should have a positive effect on the lifespan of the ZNS SSD. Otherwise, ReZNS marks the zone as an Nzone to preserve the design philosophy of ZNS SSDs (i.e., “one workload per zone”); a zone marked as an Nzone is serviced in the same way as on a traditional ZNS SSD when the zone-reset command arrives.
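A minimal sketch of this classification step is shown below, using a simple in-memory zone model. Only the 25% threshold and the half-of-all-zones ReGC trigger come from the text; the field names, the spare-list counter, and the way an erase is modeled are assumptions.

#include <stdbool.h>
#include <stdint.h>

#define TOTAL_ZONES     640
#define BLOCKS_PER_ZONE 32
#define RENEW_THRESHOLD 25             /* percent of capacity still unwritten */

struct zone {
    uint32_t write_ptr;                /* flash blocks already written        */
    bool     in_spare_list;            /* true once kept as an Rzone          */
};

static struct zone zones[TOTAL_ZONES];
static int spare_count;

static int remaining_percent(const struct zone *z)
{
    return 100 * (BLOCKS_PER_ZONE - z->write_ptr) / BLOCKS_PER_ZONE;
}

/* Intercepts a host zone-reset command and classifies the target zone. */
void rezns_reset_zone(int zone_id)
{
    struct zone *z = &zones[zone_id];

    if (remaining_percent(z) > RENEW_THRESHOLD) {
        /* Rzone: defer the costly erase and keep the unwritten blocks
         * available for other clients. */
        if (spare_count > TOTAL_ZONES / 2) {
            /* ReGC would drain the spare-list here (sketched after the
             * ReGC paragraph below). */
        }
        z->in_spare_list = true;
        spare_count++;
    } else {
        /* Nzone: erase immediately, as a conventional ZNS SSD would. */
        z->write_ptr = 0;              /* models erasing every block at once  */
    }
}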
Recycling and Allocation: Once an Rzone is inserted into the spare-list, ReZNS re-sorts the list by remaining capacity in descending order (i.e., from high to low). This improves both the reuse probability and the space efficiency of ReZNS when a zone creation is needed. ReZNS prefers to recycle a zone from the spare-list rather than create a new zone, for two reasons. First, reusing an Rzone is better for resource management because it gives applications more opportunities to create their own zones; note that ZNS SSDs are limited by the maximum number of zones that can be physically created. Second, workloads with small writes and short lifetimes can efficiently share the remaining space of an Rzone, which improves the total usable capacity of the ZNS SSD. In summary, when a zone-creation request is received from the host, ReZNS assigns the Rzone with the largest remaining capacity in the spare-list to the corresponding zone ID. If there are no more zones in the list to allocate, ReZNS creates a new zone according to the basic rules of the traditional ZNS SSD.
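The sketch below illustrates one possible array-backed spare-list kept sorted by remaining capacity in descending order, together with the allocation path that prefers recycling over opening a new zone. The data layout and the new_zone_id callback are assumptions made for illustration.

#include <stdint.h>

#define MAX_SPARE 640

struct spare_entry {
    int      zone_id;
    uint32_t remaining_blocks;         /* free blocks behind the write pointer */
};

static struct spare_entry spare_list[MAX_SPARE];
static int spare_count;

/* Insertion keeps the list sorted from largest to smallest remaining
 * capacity, so the front entry is always the most attractive Rzone. */
void spare_list_insert(int zone_id, uint32_t remaining_blocks)
{
    int i = spare_count++;
    while (i > 0 && spare_list[i - 1].remaining_blocks < remaining_blocks) {
        spare_list[i] = spare_list[i - 1];
        i--;
    }
    spare_list[i].zone_id = zone_id;
    spare_list[i].remaining_blocks = remaining_blocks;
}

/* Zone creation prefers recycling an Rzone; only when the spare-list is
 * empty does it fall back to the normal allocation path, represented here
 * by the hypothetical new_zone_id callback. */
int rezns_create_zone(int (*new_zone_id)(void))
{
    if (spare_count > 0) {
        spare_count--;
        int id = spare_list[0].zone_id;          /* largest remaining capacity */
        for (int i = 0; i < spare_count; i++)    /* shift the rest forward     */
            spare_list[i] = spare_list[i + 1];
        return id;
    }
    return new_zone_id();
}

Keeping the list sorted means that a zone-creation request always receives the Rzone with the most free space, matching the allocation policy described above.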
Meanwhile, reusing an Rzone from the spare-list could be considered harmful to isolation, since an application’s data may be scattered across zones. However, because ReZNS prioritizes candidates by sorting them by available free space in descending order, it is unlikely that data will be severely scattered at runtime.
Zombie-zones: Triggering the zone-reset command is performed explicitly by host applications. Therefore, if an application terminates without issuing the command, one or more zones allocated to that application unintentionally remain open and idle on the ZNS SSD (we call such an open zone a “zombie zone”). To reclaim zombie zones, ReZNS employs a daemon that periodically scans open zones based on a time threshold and inserts any zone that is no longer used and has exceeded the threshold into the spare-list. Because the daemon runs as a background thread, its impact on overall performance is negligible. Note that since a low threshold triggers the daemon too often and unnecessarily inserts open zones into the spare-list, we empirically set the threshold to 1 h, which is enough to track changes in the write patterns of applications.
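A hedged sketch of the zombie-zone scan is given below. The one-hour threshold and the periodic, background invocation follow the description above, while the per-zone timestamp array and the reclaim callback are assumptions.

#include <stdbool.h>
#include <time.h>

#define TOTAL_ZONES          640
#define ZOMBIE_THRESHOLD_SEC (60 * 60)       /* 1 h, as set empirically above */

static time_t last_write_time[TOTAL_ZONES];  /* updated on every zone write   */
static bool   zone_is_open[TOTAL_ZONES];

/* Periodically invoked from a background thread; any open zone that has not
 * been written for an hour is treated as a zombie and handed to the
 * spare-list through the reclaim callback (e.g., spare_list_insert). */
void scan_zombie_zones(void (*reclaim)(int zone_id))
{
    time_t now = time(NULL);

    for (int id = 0; id < TOTAL_ZONES; id++) {
        if (zone_is_open[id] &&
            now - last_write_time[id] >= ZOMBIE_THRESHOLD_SEC) {
            zone_is_open[id] = false;
            reclaim(id);
        }
    }
}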
Renewable-zone Garbage Collection (ReGC): If too many Rzones accumulate in the spare-list and are not used for a long time, ReZNS asynchronously releases some of them from the list, because they can be harmful in terms of management costs (we call this ReGC). For example, a large spare-list increases the time spent on sorting operations and violates the design philosophy of the ZNS SSD mentioned above; data generated by one workload can be partially placed across multiple zones because ReZNS prefers to recycle zones until the spare-list becomes empty. Of course, this violation never incurs any extra overhead from garbage collection inside the ZNS SSD. ReGC is triggered whenever the number of zones in the spare-list exceeds half of the total number of zones on the ZNS SSD (see line 3 in Algorithm 1).
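One possible shape of ReGC is sketched below. The high watermark (half of all zones) comes from the text; interpreting the release of an Rzone as finally performing its deferred erase, the low watermark, and the pop_coldest/erase_zone helpers are assumptions made for this sketch.

#define TOTAL_ZONES         640
#define REGC_HIGH_WATERMARK (TOTAL_ZONES / 2)
#define REGC_LOW_WATERMARK  (TOTAL_ZONES / 4)   /* assumed drain target */

/* Releases stale Rzones once the spare-list grows past half of all zones.
 * pop_coldest() is assumed to return the Rzone least likely to be reused
 * (e.g., the tail of the sorted list), and erase_zone() performs the
 * deferred zone reset. */
void regc(int *spare_count,
          int (*pop_coldest)(void),
          void (*erase_zone)(int zone_id))
{
    if (*spare_count <= REGC_HIGH_WATERMARK)
        return;

    while (*spare_count > REGC_LOW_WATERMARK) {
        erase_zone(pop_coldest());               /* the deferred reset finally runs */
        (*spare_count)--;
    }
}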

3.2. Example

Now, we explain how ReZNS works with an example (see Figure 2). In this example, we suppose that a zone on the ZNS SSD consists of ten flash blocks, each containing hundreds of flash pages, and that ReZNS is enabled (each square in Figure 2 represents one flash block). As shown in Figure 2, three zones, Zone0, Zone1, and Zone2, were already assigned to App1, App2, and App3, respectively. After that, App1 was terminated without triggering a zone-reset command after writing enough data to fill about four flash blocks of Zone0 (Step 1). Then, App3 called the zone-reset command for Zone2 after using just three flash blocks with write operations (Step 2). In this case, ReZNS inserts Zone2 into the spare-list to reuse it in the future, as its remaining capacity is 70% (Step 3). After 30 min, App2 starts to work and consumes nine flash blocks to store its data (Step 4). Next, the daemon of ReZNS identifies Zone0, which has not been used for one hour, as a zombie zone and inserts it into the spare-list (Step 5).
Now, let us consider that App2 is going to consume nine flash blocks again (Step 6). In this case, App2 first uses the last flash block belonging to Zone1 because it is still available. After that, it tries to create a new zone because more flash blocks are needed. At this point, since the spare-list includes two Rzones, ReZNS assigns Zone2 to App2 to recycle its remaining capacity instead of creating a new zone. Note that Zone2 has a higher priority in the list because the remaining capacities of Zone2 and Zone0 are 70% and 60%, respectively. App2 then resumes its series of write operations from the first free block of Zone2 to its last block (Step 7). Meanwhile, App2 never recognizes that it is using the remaining space of Zone2 (i.e., 70%); thus, ReZNS must add three more flash blocks so that the zone consistently provides ten flash blocks. To do so, ReZNS assigns Zone0 to App2 once again when the free space in Zone2 becomes insufficient and records the remaining write data in Zone0 (Step 8).
In summary, ReZNS avoids the costly zone-reset command that would have been triggered at Step 2 and improves space efficiency by reusing eight flash blocks across Zone2 and Zone0 (Steps 7 and 8). We believe this is very meaningful in that ReZNS reduces the overall energy consumption of the storage device.

4. Evaluation

In this section, we evaluate the overall performance of ReZNS and how well ReZNS improves energy consumption by reducing the number of zone-reset commands.

4.1. Experimental Setup

For the evaluation, we used a server with a 10-core Intel® Xeon Gold 5215 @ 2.5 GHz and 40 GB of memory, running Ubuntu 20.04.6 LTS (64-bit) with the Linux 5.16 kernel. We built ReZNS on NVMeVirt [28], a software-defined NVMe SSD device that natively supports the ZNS interface like a real ZNS SSD. Note that NVMeVirt is backed by RAM, which has significantly better performance characteristics than an SSD, but this does not affect the experimental results because the performance metrics are computed from pre-defined latencies, such as read, write, and erase latencies. Thus, the closer the pre-defined latencies are to those of a real SSD, the more accurate the results will be. In the ZNS SSD configuration, we set the size of each zone to 64 MiB and applied the erase latency at zone granularity instead of block granularity. We also configured the latency and energy-consumption parameters of NVMeVirt [28,51]. The detailed NVMeVirt configuration is listed in Table 1. Finally, we mounted F2FS [50] as the default file system on NVMeVirt [28]. To confirm the effectiveness of ReZNS, we compared it with the baseline, which is the default mapping mechanism of ZNS SSDs. For a comprehensive comparison, we used FIO [52], Filebench [53], and YCSB [29] to evaluate the performance impact and energy efficiency. We believe these benchmarks are sufficient to cover common datacenter scenarios, including write-heavy and read-heavy workloads based on multi-threading.

4.2. FIO Benchmark

To clearly confirm the key features of ReZNS, we first begin with synthetic workloads from the flexible I/O (FIO) benchmark [52], configured with the libaio I/O engine, 1000 threads, a 15 MB size, a queue depth of 1, a 4 KB block size, random writes, and an fsync interval of 16/64/128. To naturally generate zone-reset commands while running the FIO benchmark, we first filled dummy data up to 50% of the total capacity and then ran the same FIO experiment three consecutive times. This setup triggers frequent zone-reset commands to reclaim free space, as the FIO benchmark quickly consumes the available free space inside the ZNS SSD.
Figure 3 shows the performance results while varying the fsync interval. As shown in Figure 3, ReZNS shows the best performance in all cases, but the performance gap strongly depends on the fsync interval; it outperforms the conventional ZNS SSD by up to 9%, 6%, and 1% for intervals of 16, 64, and 128, respectively. To understand the reason in detail, we monitored and counted the zone-reset commands on NVMeVirt [28]. Figure 4 shows that the number of zone-reset commands decreases as the fsync interval increases. In addition, the gap between ReZNS and the baseline narrows as the fsync interval increases.
This is no surprise, for two reasons. First, a long fsync interval in FIO increases the opportunities to merge consecutive I/O requests. As a result, the merged I/O requests may be perfectly aligned with the boundaries of a single zone. In this case, ReZNS works in the same way as the conventional ZNS SSD because most zones are categorized as normal zones rather than renewable zones; the spare-list of ReZNS remains empty. Second, a noticeable delay occurs when an fsync operation overlaps with a zone-reset command inside the ZNS SSD. As shown in Figure 4, the baseline performs more zone-reset commands than ReZNS as the fsync interval decreases, so it is more likely to suffer from such delays. As a result, Figure 3 shows the largest performance gap at the fsync interval of 16.

4.3. Filebench Benchmark

We further evaluate the benefits of ReZNS under the Filebench benchmark [53], one of the well-known macro-benchmarks. Before the evaluation, we pre-filled dummy data up to 50% of the total capacity of the ZNS SSD, which helps to trigger zone-reset commands, as mentioned above. We then ran the varmail and fileserver workloads for 1 min each. The varmail workload emulates the synchronous I/O activity of a mail server and uses parameters of 1000 files, 16 threads, a 1 MB I/O size, a directory width of 1,000,000, and a 16 KB append size. The fileserver workload emulates a series of asynchronous file operations, such as create, delete, read, and write, and uses parameters of 10,000 files, 50 threads, a 1 MB I/O size, a directory width of 20, and a 16 KB append size. In summary, the varmail workload frequently issues synchronous commands (i.e., fsync operations), whereas the fileserver workload rarely does.
Figure 5 shows the overall performance of the two workloads. As shown in Figure 5, ReZNS outperforms the traditional ZNS SSD by up to 60% and 13% for varmail and fileserver, respectively. This is highly meaningful in that both are write-intensive workloads. Note that the performance gap for varmail is much larger than that for fileserver. This is because varmail issues synchronous I/O operations and quickly consumes the available free space inside the ZNS SSD. In other words, the traditional ZNS SSD writes all application data sequentially and, once the number of zones reaches the maximum, performs zone-reset commands to reclaim zones even if those zones still contain available free space. In contrast, ReZNS reuses the available free space inside the zones to be reclaimed instead of performing the zone-reset commands.
To confirm our intuition, we plotted the number of zone-reset commands observed on NVMeVirt [28]. Figure 6 clearly shows that the varmail workload issues the commands much more frequently than fileserver, and it compares the number of zone-reset commands obtained from all experiments. As expected, ReZNS reduces the number of zone-reset commands by up to 29% and 20% for the varmail and fileserver workloads, respectively. This reduction comes from sharing and reusing flash blocks with free space instead of creating new zones. This advantage of ReZNS also benefits subsequent I/O operations, as the likelihood of delays caused by zone-reset commands decreases.

4.4. YCSB Benchmark

To evaluate ReZNS on real-world workloads, we additionally measured its effectiveness with the Yahoo Cloud Serving Benchmark (YCSB) [29] running on RocksDB. For a comprehensive evaluation, we used the YCSB A, B, and F workloads; A and F are write-heavy workloads, whereas B is a read-heavy workload. We also performed an additional evaluation in which the size of each zone in the ZNS SSD is configured as 1 GiB to understand the effectiveness of ReZNS in more detail.
Figure 7 and Figure 8 plot the throughput and the number of zone-reset commands for all workloads. As expected, ReZNS is the winner in all cases (see Figure 7); it improves the baseline’s performance by up to 35%, 19%, and 16% for YCSB-A, YCSB-B, and YCSB-F, respectively. As mentioned above, it is reasonable that ReZNS performs well compared with the baseline on the write-heavy workloads (i.e., YCSB-A and YCSB-F). Surprisingly, ReZNS also provides better performance on the YCSB-B workload. To understand why, we investigated the I/O requests of the workload and found that the performance gain comes from the write requests, which account for 5% of the total YCSB-B workload. This is important in that such a small amount of writes yields a 19% performance improvement. Meanwhile, Figure 7 shows different performance results even though YCSB-A and YCSB-F have the same read–write ratio of 1:1. The main reason is that the YCSB-A workload is composed of separate read and write operations, whereas YCSB-F includes read–modify–write operations, which may incur different CPU overheads.
We notice that ReZNS dramatically reduces the number of zone-reset commands by up to 52%, 61%, and 54% for the YCSB-A, YCSB-B, and YCSB-F workloads, respectively. This is a valuable result even if the reduced number of commands does not directly translate into performance (see Figure 7), because in ZNS SSDs, whose write amplification factor (WAF) is 1, the zone-reset command is the only operation that affects the lifespan. Note that intensive and frequent zone-reset commands are harmful to the lifespan of ZNS SSDs. In summary, ReZNS can extend the lifetime of the NAND flash memory by reducing the number of zone-reset commands.
Finally, Figure 9 and Figure 10 show the throughput and the number of zone-reset commands when each zone is configured with a size of 1 GiB. As shown in Figure 9 and Figure 10, ReZNS exhibits the same pattern as in the evaluations of Figure 7 and Figure 8. In other words, the benefits of ReZNS hold regardless of the zone size. The reason is that ReZNS maximizes space utilization by reusing the available free space inside the zones instead of spending time on zone-reset commands.

4.5. Energy Efficiency

Now, let us examine the energy efficiency of ReZNS. To observe the energy consumption, we counted the individual requests performed inside the ZNS SSD: read, write, and zone-reset commands. Then, we calculated the expected energy cost by multiplying the number of each type of request by the corresponding per-operation energy listed in Table 1.
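As a concrete instance of this calculation, the sketch below multiplies operation counts by the per-operation energy values in Table 1 (679 nJ per page read, 7.66 μJ per page write, and 1.38 mJ per zone reset). The request counts in main() are purely hypothetical.

#include <stdio.h>
#include <stdint.h>

/* Per-operation energy from Table 1, converted to joules. */
#define E_READ_J   679e-9      /* 679 nJ  per page read   */
#define E_WRITE_J  7.66e-6     /* 7.66 uJ per page write  */
#define E_RESET_J  1.38e-3     /* 1.38 mJ per zone reset  */

double estimated_energy_j(uint64_t reads, uint64_t writes, uint64_t resets)
{
    return reads * E_READ_J + writes * E_WRITE_J + resets * E_RESET_J;
}

int main(void)
{
    /* Hypothetical counts, for illustration only. */
    uint64_t page_reads  = 10 * 1000 * 1000;
    uint64_t page_writes = 10 * 1000 * 1000;
    uint64_t zone_resets = 2000;

    printf("estimated energy: %.3f J\n",
           estimated_energy_j(page_reads, page_writes, zone_resets));
    return 0;
}

With these illustrative counts, the zone resets contribute only a few joules while page writes dominate, which is consistent with the breakdown discussed below for Figures 12 and 13.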
Figure 11 compares the total estimated energy consumption for the eight workloads, where lower is better. As shown in Figure 11, ReZNS delivers similar or better energy costs than the baseline, except for the Filebench results. The reason for the higher energy consumption on Filebench is that the benchmark ran for a fixed time of 1 min; ReZNS therefore handles more operations than the baseline in the same amount of time because of its higher performance, as shown in Figure 5. As a result, the graph confirms the positive energy-saving effect of ReZNS. Meanwhile, ReZNS saves total power consumption by up to 3% compared with the baseline on the YCSB-F workload. This is quite encouraging in that the YCSB benchmark ran for approximately 20 min. Considering that datacenters operate continuously, we expect the energy advantage of ReZNS to keep increasing over time.
To understand why the energy gap in Figure 11 is so small, unlike the performance gap, we investigated where the energy is mainly consumed. Figure 12 and Figure 13 show the breakdown of energy consumption (in percentage) for all workloads on ReZNS and the baseline, respectively. In the figures, the portion colored in gray represents the energy consumed by zone-reset commands. As expected, most of the energy is consumed by read and write operations, whereas zone-reset drains only a few joules. Meanwhile, as shown in Figure 13, the portion consumed by zone-reset commands in the baseline is larger than that in ReZNS. To clearly understand the difference, we examine the amount of energy that comes from zone-reset commands alone (see Figure 14). Unlike the results in Figure 11, ReZNS shows the best energy efficiency in all workloads. Interestingly, Figure 14 shows that ReZNS can save power consumption by up to 64% while running YCSB-B for 20 min. The reason behind this drop is that ReZNS lets multiple applications recycle the remaining space in each zone instead of performing zone-reset commands. In summary, the more often the zone-reset command would be triggered after frequent write operations, the more energy ReZNS can save, because ReZNS absorbs many of the write operations in renewable zones and avoids triggering the energy-hungry zone-reset commands. We believe this becomes more meaningful as attention to energy efficiency inside storage devices increases.

5. Related Work

In this section, we introduce prior efforts on performance and energy optimization for datacenters [1,2,5,6] and storage devices [19,20,21,22,23,24,26,27,28].
Energy efficiency: Today, the problems caused by carbon emissions are extremely important; thus, many industries have started to mitigate them by designing their own optimization systems [1,2]. For example, TESLA [1] proposed a new cooling control system that minimizes the power consumed to cool down the internal temperature of datacenters. Some researchers focused on the trade-off between performance and carbon emissions and introduced a software-centric approach [2]. Others proposed a novel energy-aware scheduler for the Linux operating system and observed the relationship between processes and energy consumption [5]. For a sustainable future, some researchers proposed Lovelock, which hosts clusters on high-performance smart NICs for data-heavy environments [6].
Emerging ZNS SSD: The rise of datacenter technologies has brought about an innovative storage device that can meet high-performance requirements, the ZNS SSD. Many studies demonstrate the advantages of ZNS SSDs in various environments. Han et al. proposed a new ZNS interface, called ZNS+, to address the high storage-level reclaiming overhead; it optimizes the performance of ZNS SSDs by supporting in-storage zone compaction and sparse sequential overwrites [20]. Hwang et al. introduced ZMS, a new I/O stack that applies the ZNS interface to mobile environments [27]; it addresses the critical issue of write amplification in mobile storage and improves both performance and write amplification compared with the conventional block-based I/O stack. Meanwhile, some researchers introduced ConfZNS and NVMeVirt [21,28], which help in developing new policies and mechanisms inside ZNS SSDs and in observing their internal behaviors and features in detail. Fair-ZNS [23] was proposed as an I/O scheduler that handles the slowdown caused by running multiple applications simultaneously.
Zone-reset in ZNS SSD: The zone-reset command plays a critical role in ZNS SSDs; thus, many researchers have recently focused on this command and its overhead. For example, WA-ZONE [22] was recently proposed to perform wear-leveling across zones and to avoid unnecessary erase operations. Some researchers introduced a novel algorithm for the zone-reset command, called FAR, that dynamically adjusts zone resets based on the available free space in ZNS SSDs [24]. Liu et al. proposed Hi-ZNS [26], which tunes the size of a zone to fit a single SST or WAL file into the zone.
Despite these efforts, the wasted capacity inside zones and the costly zone-reset command remain open issues for ZNS SSDs. In addition, these studies omit energy efficiency and carbon emissions, even though such issues become increasingly important over time.

6. Conclusions

Today, high performance and energy efficiency are required in many scenarios, from consumer electronics to datacenters. This trend has led to the development of an emerging class of storage devices, ZNS SSDs. In this paper, we briefly explored the internal architecture and behaviors of ZNS SSDs. We also proposed an energy- and performance-optimal mapping mechanism for ZNS SSDs, called ReZNS. ReZNS adopts a renewable concept for ZNS SSDs by reusing the remaining capacity inside each zone. In addition, it was designed for application-level transparency: it requires no changes to application code, and applications are unaware of how ReZNS works. We built a prototype of ReZNS on the NVMeVirt [28] emulator and extensively evaluated it with different benchmarks from both performance and energy perspectives. Experimental results with eight workloads showed that ReZNS improves overall performance by up to 60% compared with the baseline. Moreover, it can save up to 64% of the power consumed by zone-reset commands inside ZNS SSDs. We believe that ReZNS will be widely adopted in a variety of use cases because it provides transparency, low energy consumption, and high performance. In particular, since various consumer electronics adopt NAND flash-based storage devices, such as eMMC and UFS, ReZNS creates new opportunities to prolong their lifespans. In future work, we plan to investigate cases in which data can be scattered across multiple zones, and to explore explicit scenarios for minimizing energy consumption.

Author Contributions

Conceptualization, C.L. and D.K.; methodology, C.L.; software, C.L.; validation, C.L. and D.K.; formal analysis, C.L. and D.K.; investigation, C.L. and D.K.; resources, C.L., H.K. and D.K.; data curation, C.L.; writing—original draft preparation, C.L. and D.K.; writing—review and editing, D.A. and D.K.; visualization, C.L., S.L., G.M. and D.K.; supervision, D.A. and D.K.; project administration, D.K.; funding acquisition, D.K. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Gachon University research fund of 2022 (GCU-202300690001) and the National Research Foundation of Korea (NRF) grant funded by the Korean government (MSIT) (RS-2023-00251730).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original contributions presented in this study are included in the article; further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
SSD	Solid State Disk
HDD	Hard Disk Drive
ZNS	Zoned Namespace
ReZNS	Renewable-Zoned Namespace
DL	Deep Learning
MLC	Multi-Level Cell
TLC	Triple-Level Cell
QLC	Quadruple-Level Cell
FTL	Flash Translation Layer
GC	Garbage Collection
LFS	Log-structured Filesystem
Nzone	Normal Zone
Rzone	Renewable Zone
ReGC	Renewable-zone Garbage Collection

References

  1. Geng, H.; Sun, Y.; Li, Y.; Leng, J.; Zhu, X.; Zhan, X.; Li, Y.; Zhao, F.; Liu, Y. TESLA: Thermally Safe, Load-Aware, and Energy-Efficient Cooling Control System for Data Centers. In Proceedings of the 53rd International Conference on Parallel Processing, Gotland, Sweden, 12–15 August 2024; pp. 939–949. [Google Scholar]
  2. Anderson, T.; Belay, A.; Chowdhury, M.; Cidon, A.; Zhang, I. Treehouse: A case for carbon-aware datacenter software. ACM SIGENERGY Energy Inform. Rev. 2023, 3, 64–70. [Google Scholar] [CrossRef]
  3. Eilam, T.; Bose, P.; Carloni, L.P.; Cidon, A.; Franke, H.; Kim, M.A.; Lee, E.K.; Naghshineh, M.; Parida, P.; Stein, C.S.; et al. Reducing Datacenter Compute Carbon Footprint by Harnessing the Power of Specialization: Principles, Metrics, Challenges and Opportunities. IEEE Trans. Semicond. Manuf. 2024, 1–8. [Google Scholar] [CrossRef]
  4. Bose, R.; Roy, S.; Mondal, H.; Chowdhury, D.R.; Chakraborty, S. Energy-efficient approach to lower the carbon emissions of data centers. Computing 2021, 103, 1703–1721. [Google Scholar] [CrossRef]
  5. Qiao, F.; Fang, Y.; Cidon, A. Energy-Aware Process Scheduling in Linux. In Proceedings of the 3rd Workshop on Sustainable Computer Systems (HotCarbon 2024); ACM: New York, NY, USA, 2024; pp. 1–7. [Google Scholar]
  6. Park, S.J.; Govindan, R.; Shen, K.; Culler, D.; Özcan, F.; Kim, G.W.; Levy, H. Lovelock: Towards Smart NIC-hosted Clusters. arXiv 2023, arXiv:2309.12665. [Google Scholar]
  7. SAMSUNG. TV, AV & Displays. Available online: https://www.samsung.com/global/sustainability/focus/products/tv-av-displays/ (accessed on 15 September 2024).
  8. Radovanović, A.; Koningstein, R.; Schneider, I.; Chen, B.; Duarte, A.; Roy, B.; Xiao, D.; Haridasan, M.; Hung, P.; Care, N.; et al. Carbon-aware computing for datacenters. IEEE Trans. Power Syst. 2022, 38, 1270–1280. [Google Scholar] [CrossRef]
  9. Cao, Z.; Zhou, X.; Hu, H.; Wang, Z.; Wen, Y. Toward a systematic survey for carbon neutral data centers. IEEE Commun. Surv. Tutor. 2022, 24, 895–936. [Google Scholar] [CrossRef]
  10. Acun, B.; Lee, B.; Kazhamiaka, F.; Maeng, K.; Gupta, U.; Chakkaravarthy, M.; Brooks, D.; Wu, C.J. Carbon explorer: A holistic framework for designing carbon aware datacenters. In Proceedings of the 28th ACM International Conference on Architectural Support for Programming Languages and Operating Systems, Vancouver, BC, Canada, 25–29 March 2023; Volume 2, pp. 118–132. [Google Scholar]
  11. Lyu, J.; Wang, J.; Frost, K.; Zhang, C.; Irvene, C.; Choukse, E.; Fonseca, R.; Bianchini, R.; Kazhamiaka, F.; Berger, D.S. Myths and misconceptions around reducing carbon embedded in cloud platforms. In Proceedings of the 2nd Workshop on Sustainable Computer Systems, Boston, MA, USA, 9 July 2023; pp. 1–7. [Google Scholar]
  12. 24/7 Carbon-Free Energy by 2030. Available online: https://www.google.com/about/datacenters/cleanenergy/ (accessed on 30 August 2024).
  13. Energy. Available online: https://sustainability.atmeta.com/energy/ (accessed on 30 August 2024).
  14. PyTorch. Available online: https://pytorch.org/ (accessed on 20 July 2021).
  15. Tensorflow. Available online: https://www.tensorflow.org/?hl=en (accessed on 20 July 2021).
  16. Panda, P.; Sengupta, A.; Roy, K. Energy-efficient and improved image recognition with conditional deep learning. ACM J. Emerg. Technol. Comput. Syst. (JETC) 2017, 13, 1–21. [Google Scholar] [CrossRef]
  17. Peng, Y.; Bao, Y.; Chen, Y.; Wu, C.; Guo, C. Optimus: An efficient dynamic resource scheduler for deep learning clusters. In Proceedings of the Thirteenth EuroSys Conference, Porto, Portugal, 23–26 April 2018; pp. 1–14. [Google Scholar]
  18. Menghani, G. Efficient deep learning: A survey on making deep learning models smaller, faster, and better. ACM Comput. Surv. 2023, 55, 1–37. [Google Scholar] [CrossRef]
  19. Bjørling, M.; Aghayev, A.; Holmberg, H.; Ramesh, A.; Le Moal, D.; Ganger, G.R.; Amvrosiadis, G. ZNS: Avoiding the block interface tax for flash-based SSDs. In Proceedings of the 2021 USENIX Annual Technical Conference (USENIX ATC 21), Virtual, 14–16 July 2021; pp. 689–703. [Google Scholar]
  20. Han, K.; Gwak, H.; Shin, D.; Hwang, J. ZNS+: Advanced zoned namespace interface for supporting in-storage zone compaction. In Proceedings of the 15th USENIX Symposium on Operating Systems Design and Implementation (OSDI 21), Virtual, 14–16 July 2021; pp. 147–162. [Google Scholar]
  21. Song, I.; Oh, M.; Kim, B.S.J.; Yoo, S.; Lee, J.; Choi, J. Confzns: A novel emulator for exploring design space of zns ssds. In Proceedings of the 16th ACM International Conference on Systems and Storage, Haifa, Israel, 5–7 June 2023; pp. 71–82. [Google Scholar]
  22. Long, L.; He, S.; Shen, J.; Liu, R.; Tan, Z.; Gao, C.; Liu, D.; Zhong, K.; Jiang, Y. WA-Zone: Wear-Aware Zone Management Optimization for LSM-Tree on ZNS SSDs. ACM Trans. Archit. Code Optim. 2024, 21, 1–23. [Google Scholar] [CrossRef]
  23. Liu, R.; Tan, Z.; Shen, Y.; Long, L.; Liu, D. Fair-zns: Enhancing fairness in zns ssds through self-balancing I/O scheduling. IEEE Trans. Comput.-Aided Des. Integr. Circuits Syst. 2022, 43, 2012–2022. [Google Scholar] [CrossRef]
  24. Byeon, S.; Ro, J.; Jamil, S.; Kang, J.U.; Kim, Y. A free-space adaptive runtime zone-reset algorithm for enhanced ZNS efficiency. In Proceedings of the 15th ACM Workshop on Hot Topics in Storage and File Systems, Boston, MA, USA, 9 July 2023; pp. 109–115. [Google Scholar]
  25. Huang, D.; Feng, D.; Liu, Q.; Ding, B.; Zhao, W.; Wei, X.; Tong, W. SplitZNS: Towards an efficient LSM-tree on zoned namespace SSDs. ACM Trans. Archit. Code Optim. 2023, 20, 1–26. [Google Scholar] [CrossRef]
  26. Liu, R.; Chen, J.; Chen, P.; Long, L.; Xiong, A.; Liu, D. Hi-ZNS: High Space Efficiency and Zero-Copy LSM-Tree Based Stores on ZNS SSDs. In Proceedings of the 53rd International Conference on Parallel Processing, Gotland, Sweden, 12–15 August 2024; pp. 1217–1226. [Google Scholar]
  27. Hwang, J.Y.; Kim, S.; Park, D.; Song, Y.G.; Han, J.; Choi, S.; Cho, S.; Won, Y. ZMS: Zone Abstraction for Mobile Flash Storage. In Proceedings of the 2024 USENIX Annual Technical Conference (USENIX ATC 24), Santa Clara, CA, USA, 10–12 July 2024; pp. 173–189. [Google Scholar]
  28. Kim, S.H.; Shim, J.; Lee, E.; Jeong, S.; Kang, I.; Kim, J.S. NVMeVirt: A Versatile Software-defined Virtual NVMe Device. In Proceedings of the 21st USENIX Conference on File and Storage Technologies (FAST 23), Santa Clara, CA, USA, 21–23 February 2023; pp. 379–394. [Google Scholar]
  29. Cooper, B.F.; Silberstein, A.; Tam, E.; Ramakrishnan, R.; Sears, R. Benchmarking cloud serving systems with YCSB. In Proceedings of the 1st ACM Symposium on Cloud Computing, Indianapolis, IN, USA, 10–11 June 2010; pp. 143–154. [Google Scholar]
  30. Cai, Y.; Ghose, S.; Haratsch, E.F.; Luo, Y.; Mutlu, O. Reliability issues in flash-memory-based solid-state drives: Experimental analysis, mitigation, recovery. In Inside Solid State Drives (SSDs); Springer: Singapore, 2018; pp. 233–341. [Google Scholar]
  31. Pan, Y.; Li, Y.; Zhang, H.; Chen, H.; Lin, M. GFTL: Group-level mapping in flash translation layer to provide efficient address translation for NAND flash-based SSDs. IEEE Trans. Consum. Electron. 2020, 66, 242–250. [Google Scholar] [CrossRef]
  32. Yadgar, G.; Gabel, M.; Jaffer, S.; Schroeder, B. SSD-based workload characteristics and their performance implications. ACM Trans. Storage (TOS) 2021, 17, 1–26. [Google Scholar] [CrossRef]
  33. Liu, C.Y.; Lee, Y.; Jung, M.; Kandemir, M.T.; Choi, W. Prolonging 3D NAND SSD lifetime via read latency relaxation. In Proceedings of the 26th ACM International Conference on Architectural Support for Programming Languages and Operating Systems, Virtual, 19–23 April 2021; pp. 730–742. [Google Scholar]
  34. Zhu, G.; Han, J.; Son, Y. A preliminary study: Towards parallel garbage collection for NAND flash-based SSDs. IEEE Access 2020, 8, 223574–223587. [Google Scholar] [CrossRef]
  35. Jazzar, M.; Hamad, M. Comparing hdd to ssd from a digital forensic perspective. In Proceedings of the International Conference on Intelligent Cyber-Physical Systems: ICPS 2021, Jessup, MD, USA, 18–24 September 2021; Springer: Singapore, 2022; pp. 169–181. [Google Scholar]
  36. Shi, L.; Luo, L.; Lv, Y.; Li, S.; Li, C.; Sha, E.H.M. Understanding and optimizing hybrid ssd with high-density and low-cost flash memory. In Proceedings of the 2021 IEEE 39th International Conference on Computer Design (ICCD), Storrs, CT, USA, 24–27 October 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 236–243. [Google Scholar]
  37. Takai, Y.; Fukuchi, M.; Kinoshita, R.; Matsui, C.; Takeuchi, K. Analysis on heterogeneous ssd configuration with quadruple-level cell (qlc) nand flash memory. In Proceedings of the 2019 IEEE 11th International Memory Workshop (IMW), Monterey, CA, USA, 12–15 May 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 1–4. [Google Scholar]
  38. Liang, S.; Qiao, Z.; Tang, S.; Hochstetler, J.; Fu, S.; Shi, W.; Chen, H.B. An empirical study of quad-level cell (qlc) nand flash ssds for big data applications. In Proceedings of the 2019 IEEE International Conference on Big Data (Big Data), Los Angeles, CA, USA, 9–12 December 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 3676–3685. [Google Scholar]
  39. Li, Q.; Li, H.; Zhang, K. A survey of SSD lifecycle prediction. In Proceedings of the 2019 IEEE 10th International Conference on Software Engineering and Service Science (ICSESS), Beijing, China, 18–20 October 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 195–198. [Google Scholar]
  40. Han, L.; Shen, Z.; Shao, Z.; Li, T. Optimizing RAID/SSD controllers with lifetime extension for flash-based SSD array. ACM SIGPLAN Not. 2018, 53, 44–54. [Google Scholar] [CrossRef]
  41. Zhang, Y.; Zhou, K.; Huang, P.; Wang, H.; Hu, J.; Wang, Y.; Ji, Y.; Cheng, B. A machine learning based write policy for SSD cache in cloud block storage. In Proceedings of the 2020 Design, Automation & Test in Europe Conference & Exhibition (DATE), Grenoble, France, 9–13 March 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 1279–1282. [Google Scholar]
  42. Chen, X.; Li, Y.; Zhang, T. Reducing flash memory write traffic by exploiting a few MBs of capacitor-powered write buffer inside solid-state drives (SSDs). IEEE Trans. Comput. 2018, 68, 426–439. [Google Scholar] [CrossRef]
  43. Wang, H.; Yi, X.; Huang, P.; Cheng, B.; Zhou, K. Efficient SSD caching by avoiding unnecessary writes using machine learning. In Proceedings of the 47th International Conference on Parallel Processing, Eugene, OR, USA, 13–16 August 2018; pp. 1–10. [Google Scholar]
  44. Jung, M.; Choi, W.; Kwon, M.; Srikantaiah, S.; Yoo, J.; Kandemir, M.T. Design of a host interface logic for GC-free SSDs. IEEE Trans. Comput.-Aided Des. Integr. Circuits Syst. 2019, 39, 1674–1687. [Google Scholar] [CrossRef]
  45. Garrett, T.; Yang, J.; Zhang, Y. Enabling intra-plane parallel block erase in NAND flash to alleviate the impact of garbage collection. In Proceedings of the International Symposium on Low Power Electronics and Design, Seattle, WA, USA, 23–25 July 2018; pp. 1–6. [Google Scholar]
  46. Chen, H.; Li, C.; Pan, Y.; Lyu, M.; Li, Y.; Xu, Y. HCFTL: A locality-aware page-level flash translation layer. In Proceedings of the 2019 Design, Automation & Test in Europe Conference & Exhibition (DATE), Florence, Italy, 25–29 March 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 590–593. [Google Scholar]
  47. Zhou, Y.; Wu, Q.; Wu, F.; Jiang, H.; Zhou, J.; Xie, C. Remap-SSD: Safely and Efficiently Exploiting SSD Address Remapping to Eliminate Duplicate Writes. In Proceedings of the 19th USENIX Conference on File and Storage Technologies (FAST 21), Virtual, 23–25 February 2021; pp. 187–202. [Google Scholar]
  48. Kim, S.; Han, J.; Eom, H.; Son, Y. Improving I/O performance in distributed file systems for flash-based SSDs by access pattern reshaping. Future Gener. Comput. Syst. 2021, 115, 365–373. [Google Scholar] [CrossRef]
  49. Liu, J.; Chai, Y.P.; Qin, X.; Liu, Y.H. Endurable SSD-based read cache for improving the performance of selective restore from deduplication systems. J. Comput. Sci. Technol. 2018, 33, 58–78. [Google Scholar] [CrossRef]
  50. Lee, C.; Sim, D.; Hwang, J.; Cho, S. F2FS: A new file system for flash storage. In Proceedings of the 13th USENIX Conference on File and Storage Technologies (FAST 15), Santa Clara, CA, USA, 16–19 February 2015; pp. 273–286. [Google Scholar]
  51. Li, H.L.; Yang, C.L.; Tseng, H.W. Energy-Aware Flash Memory Management in Virtual Memory System. IEEE Trans. Very Large Scale Integr. (VLSI) Syst. 2008, 16, 952–964. [Google Scholar] [CrossRef]
  52. Axboe, J. Flexible I/O Tester (fio). 2024. Available online: https://github.com/axboe/fio (accessed on 30 August 2024).
  53. Tarasov, V. Filebench: A flexible framework for file system benchmarking. ;login: USENIX Mag. 2016, 41, 6. [Google Scholar]
Figure 1. ZNS SSDs’ internal architecture.
Figure 2. An example of ReZNS.
Figure 3. Overall performance of FIO benchmark.
Figure 4. The number of zone-reset commands on the FIO benchmark.
Figure 5. Overall performance of Filebench. (a) Varmail; (b) Fileserver.
Figure 6. The number of zone-reset commands on Filebench. (a) Varmail; (b) Fileserver.
Figure 7. Overall performance of YCSB workloads. In this evaluation, each zone is configured with a size of 64 MiB.
Figure 8. The number of zone-reset commands on YCSB workloads. In this evaluation, each zone is configured with a size of 64 MiB.
Figure 9. Overall performance of YCSB workloads. In this evaluation, each zone is configured with a size of 1 GiB.
Figure 10. The number of zone-reset commands on YCSB workloads. In this evaluation, each zone is configured with a size of 1 GiB.
Figure 11. Comparison of the total energy consumption with eight workloads.
Figure 12. Energy breakdown (in percentage) of eight workloads on ReZNS.
Figure 13. Energy breakdown (in percentage) of eight workloads on the baseline.
Figure 14. Energy consumption of zone-reset commands on all workloads.
Table 1. Configurations of NVMeVirt [28,51].

Item	Specifications	Unit
Capacity	40 GiB	-
Page Size	32 KiB	-
Block Size	2 MiB	-
Zone Size	64 MiB	-
Number of Zones	640	-
Flash Blocks per Zone	32	-
Read Latency	47.2 μs	Page
Write Latency	533 μs	Page
Erase Latency	96 ms	Zone
Energy Consumption for Read	679 nJ	Page
Energy Consumption for Write	7.66 μJ	Page
Energy Consumption for Zone-reset	1.38 mJ	Zone
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
