Mitigating Self-Heating in Solid State Drives for Industrial Internet-of-Things Edge Gateways

Abstract: Data storage in the Industrial Internet-of-Things scenario presents critical aspects related to the necessity of bringing storage devices closer to the point where data are captured. Concerns about storage temperature must be considered especially when Solid State Drives (SSDs) based on 3D NAND Flash technology are part of edge gateway architectures. Indeed, self-heating effects caused by intensive storage demands combined with harsh environmental conditions call for proper handling at multiple abstraction levels to minimize severe performance slowdowns and reliability threats. In this work, with the help of an SSD co-simulation environment stimulated with a realistic Industrial Internet-of-Things (IIoT) workload, we explore a methodology orthogonal to performance throttling that can be applied in synergy with the operating system of the host. Results evidence that, by leveraging the SSD micro-architectural parameters of the queuing system, it is possible to reduce the Input/Output Operations Per Second (IOPS) penalty due to temperature protection mechanisms with minimum effort by the system. The methodology presented in this work opens further optimization tasks and algorithmic refinements for SSD and system designers, not only in the IIoT market segment, but generally in all areas where storage power consumption is a concern.


Introduction
The new industrial revolution, mainly fueled by the Internet-of-Things (IoT) paradigm, has forced many factories to deal with the issue of data storage. Indeed, an avalanche of bytes coming from sensors, robots, and cameras deployed in several places of a factory needs collection for real-time data analytics delivery [1]. Cloud storage (either on-premise or remote) is not the prime choice for this operation since it is imperative to guarantee low latency and fast responsiveness to take decisions in the manufacturing process [2]. The best place to do this is close to the data source, colloquially defined as the Edge. Gateway hardware at the edge aggregates data and stores them for local processing before sending them to the cloud [3]. In this context, Solid State Drives (SSDs) are the primary storage backbone of gateway platforms as they possess most of the features sought in the Industrial IoT (IIoT) world, namely high bandwidth and low latency [4,5].
However, the necessity of bringing storage closer to the point where data are generated poses several challenges from a reliability standpoint. Although SSDs are known to outclass traditional magnetic storage like Hard Disk Drives in terms of metrics like Annualized Failure Rate (AFR) and Mean Time Between Failures (MTBF) [6,7], they can still suffer in harsh environments (like those in IIoT), especially when it comes to elevated data storage temperatures or sharp operating temperature variations. The contributions of this work can be summarized as follows: 1. We evaluate the impact of an SSD's micro-architectural parameters in its internal queuing system on the performance of the drive. To the best of our knowledge, we carry out this activity for the first time considering the peculiarities of the IIoT scenario; 2. We provide a methodology orthogonal to state-of-the-art throttling to guard-band the self-heating of the drive, assuming a monitoring of its internal temperature. This is achieved by a proper tailoring of the internal drive command queues to reach the desired power throttling level; 3. We base all our explorations on a co-simulation framework that processes Trace-Driven Benchmarks (TPC-IoT [15]) and is able to calculate SSD quality metrics (e.g., IOPS, latency, etc.) according to the measurement results of a 3D NAND Flash chip. The adaptation of the benchmark to the IIoT scenario confers soundness to our assumptions.

Related Works
Storing the data for IoT and IIoT edge applications through SSDs is a widely adopted strategy for at-source analytics and decision making [1,2,16]. Although those scenarios may appear equivalent, there is a clear distinction between the industrial requirements and those of pure IoT [17]. In harsh environments, reliability is the prime concern, and in a storage context this feature is degraded by temperature. There is a general consensus, reported in many large-scale studies [7,8,18,19], that high temperatures may negatively affect the reliability of SSDs by aging the storage media integrated within. Therefore, the JEDEC standard for SSD testing [20] exploits this factor to accelerate failures for reliability assessments. The state-of-the-art methodology to guard-band the temperature increase, and therefore the degradation, in the drive is to devise performance throttling supervised by the SSD controller through on-board temperature sensors, as shown in the studies in [10,21].
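The temperature acceleration exploited by the JEDEC testing standard mentioned above is commonly modeled with the Arrhenius equation. The sketch below is purely illustrative and is not taken from the cited works; the activation energy value is a hypothetical figure typical for retention-related mechanisms.

```python
import math

# Illustrative Arrhenius acceleration factor between a use temperature and
# a stress temperature. Ea = 1.1 eV is a hypothetical activation energy,
# typical for retention mechanisms, not a value from this work.
def arrhenius_af(t_use_c: float, t_stress_c: float, ea_ev: float = 1.1) -> float:
    k_ev = 8.617e-5                      # Boltzmann constant in eV/K
    t_use_k = t_use_c + 273.15
    t_stress_k = t_stress_c + 273.15
    return math.exp((ea_ev / k_ev) * (1.0 / t_use_k - 1.0 / t_stress_k))

# Stressing at 70 degC instead of operating at 40 degC strongly accelerates
# temperature-activated failure mechanisms.
af = arrhenius_af(40.0, 70.0)
```

With these assumed values, a 30 °C stress margin yields an acceleration factor of a few tens, which is why elevated temperature is an effective lever for accelerated reliability testing.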
The straightforward side-effects of such throttling are a severe increase in the drive's latency and a general QoS loss perceived by the system. These studies focus on the characterization and modeling of the thermal management of the SSD at product level, although considering typical consumer applications that are far from IIoT temperature requirements.
Stand-alone 3D NAND Flash memories have specific testing procedures as well to characterize their temperature sensitivity prior to integration in SSDs [22]. An important issue regards data retention. Indeed, 3D NAND Flash technology suffers from multiple sources of retention loss caused by temperature-activated charge loss mechanisms [23–25]. Whether it is vertical or lateral charge loss through the structure of the memory architecture [12], the outcome is always the same: temperature corrupts the content of the memory cells to a point where stored data are unrecoverable. Several works in the literature discuss how to optimize the NAND Flash characteristics in the drive in order to improve the overall reliability of the system [26,27]. Most of these optimizations are at a system-level abstraction, either considering additional firmware routines to be implemented in the SSD controller, dedicated hardware in the SSD, or external accelerators directly attached to the interface fabric exploited by the drive to communicate with the host [28,29].
Another important topic to consider is about the simulation/emulation environments that allow an exploration of SSD micro-architectural parameters and storage medium characteristics for thermal management solutions development in IIoT. In [30], a disk emulation strategy dealing with real SSD through a virtual platform was proposed. Fast performance estimations are enabled by a highly abstracted description of the components in the SSD, although losing accuracy under certain workload conditions. SSD trace-driven simulation tools were proposed in [31,32] allowing SSD performance and power consumption evaluation. However, reliability is marginally considered and they still lack the possibility to evaluate micro-architectural effects on the SSD performance like commands pipelining or uncommon queuing mechanisms.
The related research topics presented so far are summarized in Table 1.

Table 1. Summary of related research topics.
- Temperature effects on SSD reliability [7,8,18,19]
- Thermal management in SSDs [10,21]
- 3D NAND Flash retention temperature sensitivity and mitigation [23–27,29]
- SSD reliability/performance simulators [30–32]

All the studies presented so far focus on specific topics without considering the picture as a whole. Most of the literature discusses only to a limited extent the intertwined relationship between SSD architecture, 3D NAND Flash memory peculiarities, and workload environmental conditions. To the best of our knowledge, this is the first work investigating the effect of temperature on the reliability and performance of SSDs embodied in edge gateways for IIoT applications. Moreover, in this work we propose general design methodologies and algorithms that are orthogonal to the state-of-the-art and can be easily included by firmware designers in existing products.

Edge Gateway and SSD Architectures
In an IIoT edge gateway system architecture (we consider the one presented by Dell in [33] without loss of generality), the SSD plays an important role. As shown in Figure 1, the sensors of a pool deployed in an industrial environment are connected to the gateway through peripheral ports like RS422/485 or CANbus. The received data packets are temporarily stored in a volatile Double Data Rate Dynamic RAM (DDR DRAM) and then processed by the Central Processing Unit (CPU). This step is supervised by the Operating System (OS), which orchestrates the data transfers and protocol management. Once the data have been interpreted, they must be transferred through an interconnect fabric (e.g., SATA, PCIe, etc.) to a storage element for future availability. The SSD is therefore the gateway component entrusted with this task. Applications dedicated to data mining, like machine learning frameworks or any other data visualization tool, rely on big stored datasets to help the manufacturing process. This could be in the form of a simple process report or by proactively altering the production steps through the remote control of actuators like robots connected to the gateway. An SSD is a complex electronic system composed of many elements, whose layout is presented in Figure 2a. Data arrive at or are retrieved from the SSD through a host interface sharing the same interconnection fabric of the host (e.g., SATA [34] or PCIe [35]). Internally, the data movement is handled by the smartness of the drive, the SSD controller [36], which manages all the reliability and performance firmware routines, sometimes with the help of an optional DRAM buffer [8]. This is where Flash Translation Layer (FTL) routines like wear leveling, garbage collection, and block management are executed [5].
Other Integrated Circuits (ICs) like temperature sensors or voltage detectors are connected to the SSD controller to help those algorithms fine-tune the drive's reliability/performance characteristics. A significant portion of the SSD board is occupied by the storage media (i.e., 3D NAND Flash), which interface with the SSD controller through a dedicated memory interface protocol (e.g., ONFI [37] or proprietary). From an architectural standpoint, an SSD is a highly hierarchical piece of architecture, as shown in Figure 2b. Besides the SSD controller, which integrates a multi-core CPU to run parallel FTL tasks, it is worth mentioning the presence of a channel controller. This hardware block handles the data organization of the memories (organized in N_c parallel communication channels) while at the same time providing a link with a multi-channel Error Correction Code (ECC) engine. The latter block is the one determining the ability of the SSD to handle data corruption and needs careful design to avoid performance flaws during the entire lifetime of the drive. A generic Low-Density Parity-Check ECC designed for SSD applications can correct hundreds of corrupted bits per 1 KByte codeword [38].

Characterization and Simulation Tools
To explore the reliability, performance, and power consumption features of SSD architectures in IIoT edge gateways, we exploit the SSDExplorer co-simulator [39]. This Computer Aided Design (CAD) tool allows a fine-grained design space exploration of a drive by allowing modifications of its micro-architectural parameters like command queues, interaction mechanisms with the host system, error recovery policies, and so on. All the simulations performed with this tool consider both the electrical and timing characteristics of the storage medium. Such information comprises reliability metrics as well (i.e., the bit error rate or fail bit count), which have been extracted from 3D NAND Flash memory samples with the test equipment and procedures presented in [40]. SSDExplorer has been conceived and designed as a tool for virtual platforms, so it can be easily plugged into virtualization environments like QEMU [41] to simulate SSDs in a fully functional machine with a working OS. In this work we set up the machine characteristics (e.g., host DRAM and number of processor cores) to be close to a representative IIoT gateway [33]. Table 2 summarizes the parameters of the host system.
To be consistent with the scenario under investigation, we exploited the TPC-IoT [15,42] benchmark in our experiments. This workload mimics the typical data ingestion and query procedures in IoT gateways and can be considered for IIoT scenarios as well. It consists of a large dataset representing sensor data coming from several electric power stations. Each record in the dataset packs the identification tag of the sensor, the timestamp of the reading, a readout value, etc., for a storage size of 1 KByte per record. The workload emulates data injection into the SSD of the gateway, on which a real-time data analytics platform can run queries in the background. For all the details about the structure of the benchmarking system and its configuration, we refer the reader to the guidelines provided in [15]. Concerning the configuration of the SSDs considered in this work, we summarize the assumed architectural parameters in Table 3. We considered different SSD sizes to evaluate the impact of parallel channels on the power consumption of the drive and on its reliability. Moreover, we assume the integration of Triple Level Cell (TLC) 3D NAND Flash technology in such storage platforms to also project this study onto future applications where larger amounts of data will require denser and more complex memory structures.

Characterizing the Power Consumption in SSDs
When an SSD is continuously accessed at full performance by a host system with data read and write requests, as in our case study, there is an increase in temperature. This is because a drive's temperature depends not only on the ambient temperature, but also on the different power sources in the SSD architecture, which translate into multiple heat sources. As shown in [21,43], the temperature of an SSD increases by several tens of degrees Celsius when passing from an idle state to a full performance state. Moreover, since the components in the SSD architecture react differently from a thermal standpoint (due to different heat transfer/radiation and thermal dissipation features), there can be up to a ±15 °C difference from chip to chip on the storage system. This is critical for 3D NAND Flash memories since their average reliability features heavily depend on temperature. Figure 3 shows the number of bit errors, normalized with respect to the ECC correction capability, retrieved in several memory chips as a function of the elapsed time when the storage temperature varies from 55 to 70 °C. Differences of up to 5.6 times in the number of errors are reported. A case like the one just described is to be avoided since different wear dynamics would be experienced by the memories (i.e., one 3D NAND Flash chip could fail prematurely), with a burden on SSD reliability. Assessing SSD power consumption is therefore a mandatory task to develop strategies that could limit the onset of thermal issues. With such a consideration, we break down the power contributors in a drive as follows: 1. The SSD controller, an Application Specific Integrated Circuit (ASIC) whose power consumption increases linearly with time according to the amount of data to process and manage; 2. The 3D NAND Flash memory sub-system, whose power contribution depends on the number of channels accessed in parallel and on the operation performed (i.e., data read, write/program, or erase); 3. The DRAM buffer used as a cache or as temporary storage for FTL-metadata structures; 4. Other ICs and passive components for power supply and temperature control.
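As a minimal numerical companion to this breakdown, the sketch below sums the four contributors to estimate the total drive power. The controller (2 W) and other-ICs (400 mW) figures follow the peak values quoted in this work; the DRAM figure is a placeholder assumption.

```python
# Sketch of the four-way power breakdown described above (values in mW).
# The controller (2 W) and other-ICs (400 mW) figures follow the peak
# values quoted in this work; the DRAM figure is a placeholder assumption.
def ssd_power_mw(nand_mw: float,
                 controller_mw: float = 2000.0,
                 dram_mw: float = 150.0,
                 other_ics_mw: float = 400.0) -> float:
    return nand_mw + controller_mw + dram_mw + other_ics_mw

# Example: 550 mW of 3D NAND activity on a single-channel drive.
total_mw = ssd_power_mw(550.0)
nand_share = 550.0 / total_mw   # fraction due to the Flash sub-system
```

Even with these placeholder values, the exercise shows how the Flash sub-system share grows with NAND activity, motivating the channel-parallelism analysis that follows.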
To investigate the contribution of the sole 3D NAND Flash memory modules to the overall power consumption, we adopted the experimental setup provided in [44]. Such a characterization system extracts the current drawn from the memory power supply, therefore providing the actual power consumption figures for the different operations of the memory. Figure 4a,b shows an example of two power traces extracted from a 3D NAND Flash chip during program and read operations. It is worth mentioning that, since we consider a TLC memory in this study, we collected different power traces for each page type (i.e., lower, central, and upper pages). Similar behaviors, although with different timings and peak values, are found. When multiple memory chips are accessed in parallel on an SSD, Kirchhoff's Current Law (KCL) holds on the power supply of the drive, so that the memory sub-system power consumption is the sum of the single power contributions, as shown in Figure 4c. We then performed a simulation of a 64 GB SSD with a single channel (i.e., a single 3D NAND Flash package) and eight outstanding commands to serve. We exploited the 3D NAND Flash power traces presented before. The TPC-IoT benchmark workload is considered in the investigation. Please note that SSDExplorer can extract the power consumption only for the memories in the drive (both Flash and DRAM). To assess the total drive power consumption, we considered the peak values reported in [21] for the SSD controller and for the other ICs, set to 2 W and 400 mW, respectively. We repeated the benchmark considering also a 512 GB drive with eight parallel memory channels and an increase in the SSD controller power up to 6 W due to the higher number of channels to be served. As observed in Figure 5, the contribution of the 3D NAND Flash memories to the SSD power consumption weighs from 19.4% to 41% of the total, scaling with the channel parallelism.
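The superposition implied by KCL can be sketched as follows; the per-chip trace samples are hypothetical, not measured data.

```python
# KCL superposition sketch: the memory sub-system power trace is the
# sample-wise sum of the per-chip traces on the shared supply.
# Trace samples (mW) below are hypothetical, not measured data.
def subsystem_trace(chip_traces):
    return [sum(samples) for samples in zip(*chip_traces)]

chip_a = [10.0, 80.0, 25.0]   # e.g., a program pulse on one chip
chip_b = [12.0, 15.0, 90.0]   # read activity on another, shifted in time
trace = subsystem_trace([chip_a, chip_b])
peak_mw = max(trace)              # peak sub-system consumption
avg_mw = sum(trace) / len(trace)  # average over the window
```

Note how the peak of the summed trace depends on whether the per-chip peaks overlap in time, which is exactly the probabilistic effect discussed for the peak power figures below.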
As a general rule, the higher the number of 3D NAND Flash chips on the SSD, the stronger their contribution will be. This strongly motivates us to find solutions that limit the 3D NAND Flash power consumption to keep the SSD temperature constrained. As a benefit, a minimal degradation of the drive's performance will be ensured while providing high reliability in environments like IIoT, where these storage requirements tighten day by day.

The Role of SSD Micro-Architecture on Power Consumption
The data flow in an SSD, from both the read and write perspectives, is regulated by the SSD controller, which allows for the servicing of a number of outstanding operations that depends on the parallelism degree of the storage medium. To maximize the throughput, the host interface of the SSD implements a queue to store a set of commands to be serviced internally by the drive. The maximum number of commands that can fill such a queue is defined as the Queue Depth (QD). This command/operation queuing concept stems from the past of HDDs, which largely exploited the SATA communication protocol [34]. Even though communication protocols have evolved, SSDs have not departed from this paradigm except for the micro-architecture of the queuing system. Besides the QD, SSDs have additional queue entities, as shown in Figure 6. The highly hierarchical architecture of a drive, with different components accessed in parallel, calls for multiple queues in the channel controller (see Figure 2b) to take advantage of the storage medium features [5]. In fact, each 3D NAND Flash chip connected to the channel controller features multiple dies (up to eight in this work) and each one retains its own queue, called the Target Command Queue (TCQ). In this way, the drive can queue commands for a die already busy with another operation. The target command queue is a fixed parameter that depends on the architecture of the firmware run by the SSD controller. However, to provide enough flexibility, there is an additional entity stored in the DRAM of the SSD, defined as the frame-buffer, which collects the maximum amount of transactions processable by all the TCQs. At first glance, we evaluated the impact of the QD on the number of input/output operations (measured in IOPS) sustained by the drive and on the latency of a 64 GB SSD-IIoT platform. To enrich this study, we compared the results of the TPC-IoT benchmark execution with those of a 4 kB mixed (i.e., 50%-50%) synthetic read/write workload run on the same gateway architecture.
The choice of the latter workload is dictated by the requirements of today's file systems exploited by the OS, which tend to align the file size to 4 kB to improve the data throughput [45]. As evidenced in Figure 7, the drive's average IOPS and latency scale with the QD as expected, ranging from 9 kIOPS at the minimum QD up to 48 kIOPS. Indeed, the higher the number of commands in the queue, the higher the amount of data to process, and so is the throughput. Latency straightforwardly increases because the higher the number of commands in the queue, the longer the service time. It is interesting to note that the synthetic 4 kB workload saturates the IOPS sustained by the drive starting from a QD value equal to 8. This is ascribed to the maximum number of parallel addressable 3D NAND Flash targets in the SSD channel, as defined in Table 3. In general, the TPC-IoT workload produces a higher IOPS figure since, at the same time, it writes/reads more transactions to/from the SSD. Of course, this does not hold for bandwidth. It is also worth noticing that an excessively high QD value may conflict with the fast responsiveness requirements of an IIoT scenario, which usually target a few milliseconds [46]. A similar trend is obtained by simulating a 512 GB SSD-IIoT platform, except that the bandwidth is eight times higher than in the 64 GB counterpart thanks to the higher number of parallel 3D NAND Flash chips accessed by the SSD channel controller. From these simulations we extracted the power consumption of the 3D NAND Flash sub-system in the drive. Two types of power figures are of interest in SSD studies [44]: the average consumption during the entire workload and the peak consumption of the 3D NAND Flash sub-system, as previously highlighted in Figure 4. Figure 8 shows both metrics as a function of the QD.
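Stepping back to the IOPS trend of Figure 7, the saturation can be captured by a toy model (not the simulator's internal one), where throughput grows with QD until the number of parallel addressable targets is reached.

```python
# Toy model (not SSDExplorer's): sustained throughput grows with QD until
# the parallel addressable NAND targets in the channel are exhausted.
# parallel_targets = 8 matches the saturation point observed in Figure 7;
# kiops_per_target is an illustrative assumption.
def sustained_kiops(qd: int, parallel_targets: int = 8,
                    kiops_per_target: float = 6.0) -> float:
    return min(qd, parallel_targets) * kiops_per_target

# Beyond QD = 8 the drive becomes target-limited and IOPS saturates.
```

The model deliberately ignores latency effects; it only conveys why raising QD past the channel parallelism buys no additional throughput while still lengthening the service time.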
Results demonstrate that the average power consumption of the drive, for both the TPC-IoT and the synthetic workload, correlates with the sustained IOPS, since it depends on the number of active 3D NAND Flash targets on the SSD during the workload. Considering the TPC-IoT workload, the average power consumption increases from 100 mW at low QD up to 550 mW at the maximum QD. Peak power consumption, instead, is a function of the probability of having multiple targets and channels in the SSD simultaneously at their power peaks (see Figure 4). For low QD values (below 8), the TPC-IoT workload generates a low probability of peak overlapping for the different targets, mainly due to the small data transfer sizes involved. Each workload saturates the peak power consumption at 704 mW. Once again, the same considerations can be derived for a larger drive like the 512 GB one considered in this study, except that its average and peak power magnitudes are eight times higher. Another SSD micro-architectural parameter that impacts power consumption is the frame-buffer size. Its role is twofold, since it constrains the amount of data moving in the 3D NAND Flash sub-system and also affects the amount of on-board DRAM allocated for the TCQ collection. Here we consider only simulation results on TPC-IoT, since a 4 kB synthetic workload would expose similar trends and would not add significance to the discussion. To better expose the relationship between the frame-buffer size and the QD, we performed the simulations with the latter parameter set to 4, 8, and 16. In Figure 9, we report the kIOPS of the drive and the power consumption of the 3D NAND Flash memory sub-system as a function of the frame-buffer size and for different QD values. Low frame-buffer sizes are associated with a low power consumption (below 250 mW on average) since the TCQs managed by the DRAM are smaller in depth, and so is the number of parallel active 3D NAND Flash targets.
DRAM power consumption decreases as well, since the amount of data allocated for the TCQs is lower, although this has a minor impact on the overall SSD power consumption and goes beyond the scope of this work. It is worth pointing out that a very low frame-buffer size could increase the probability of Head-of-Line (HoL) blocking events [5] regardless of the QD, with adverse effects on the drive's responsiveness (i.e., latency), since the SSD spends most of the time in a stall condition. In Figure 10, we demonstrate this by showing the number of HoL events detected in the drive during a snippet of 10^6 transactions of the TPC-IoT execution and the corresponding SSD latency. For this latter metric in particular, we observe an inverse dependency (from 850 µs down to 300 µs) compared with the QD-related results; therefore, care must be taken in using the frame-buffer as a parameter for power reduction.
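A minimal sketch of the HoL blocking mechanism mentioned above: commands are served strictly in order, so a head command waiting on a busy die stalls the whole queue even when later commands target idle dies. Dies, arrival times, and the unit service time are all hypothetical.

```python
# Minimal in-order service model illustrating Head-of-Line blocking.
# commands: list of (arrival_time, die_id); the unit service time is an
# assumption. Returns the number of HoL stalls observed.
def count_hol_events(commands, service_time=1.0):
    busy_until = {}   # die_id -> time at which the die becomes free
    hol = 0
    t = 0.0           # time at which the queue head is considered
    for arrival, die in commands:
        t = max(t, arrival)
        if busy_until.get(die, 0.0) > t:
            hol += 1                  # head blocks everything behind it
            t = busy_until[die]       # stall until the target die frees
        busy_until[die] = t + service_time
    return hol

# Two back-to-back commands on the same die stall once; sending the second
# command to an idle die avoids the stall entirely.
```

This captures why a frame-buffer too small to reorder or spread commands across dies makes the drive spend most of its time stalled.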

A Benchmark with Command Submission Time-Based Throttling
In SSDs, throttling is a common solution to manage thermal issues [10,21]. To assess the drive's temperature, a set of temperature sensors is exploited to measure the temperature of the most critical components in the drive, namely the 3D NAND Flash chips. When the temperature of the worst 3D NAND Flash chip (i.e., the one working at the highest temperature) exceeds a threshold, the performance of the SSD is reduced to a significant extent. This provides a time window in which the temperature can decrease, and as soon as it goes below that threshold the performance is brought back to a fully operative state. Depending on the scenario and on the severity of the application there can be multiple throttling stages [47]. The easiest way to achieve throttling is by leveraging the OS through the Command Submission Time (CST). This parameter is largely exploited in tuning IoT system QoS and the general response time in virtual environments [48]. When the temperature sensors on board the SSD report an alert situation to the host OS (e.g., through S.M.A.R.T. indicators [49]), the OS can augment the actual time taken to transfer a data read/write command from the OS to the drive. Having fewer commands to process translates into a lower utilization of the SSD controller resources and of the 3D NAND Flash sub-system, yielding a lower power consumption and temperature. We simulated the impact of throttling by varying the CST of the host OS in the IIoT edge gateway and analyzing the performance of the larger 512 GB drive. This magnifies the sustained IOPS slow-down and eases the understanding of the issue. Nevertheless, the simulation results still reflect what happens in a smaller 64 GB SSD, although on a different performance scale due to the lower number of parallel 3D NAND Flash channels. Figure 11 shows that a highly aggressive throttling with a 20 ms CST can slow down the SSD's sustained IOPS by up to 4.93 times.
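A minimal host-side rendering of the CST mechanism could look as follows; the temperature threshold and the normal-state interval are assumptions, while the 20 ms value echoes the aggressive setting above.

```python
# Host-side CST throttling sketch: when the drive reports an over-
# temperature alert (e.g., via S.M.A.R.T.), the OS stretches the command
# submission time, trading IOPS for a cool-down window. The threshold and
# the normal-state interval are illustrative assumptions.
def submission_interval_ms(temp_c: float,
                           throttle_temp_c: float = 70.0,
                           normal_cst_ms: float = 0.1,
                           throttled_cst_ms: float = 20.0) -> float:
    return throttled_cst_ms if temp_c >= throttle_temp_c else normal_cst_ms
```

A single hard threshold is the simplest case; the multi-stage throttling of [47] would map to a ladder of thresholds, each with its own CST value.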
We also evaluated the QoS considering different levels (i.e., different nines, in SSD jargon [5]) of the Cumulative Distribution Function (CDF) of the drive's latency. Increasing the CST paradoxically improves the QoS, since the probability of filling the queuing system in the SSD is lowered due to the higher time interval between commands. However, this responsiveness comes at a heavy cost in IOPS, which can be detrimental for applications that require a high amount of data processed per second, like an IIoT gateway executing real-time data analytics. We previously showed that both the QD and the frame-buffer are parameters exploitable for power consumption reduction and thermal management. Since both are user-definable parameters, generally through the OS resources that communicate with the SSD (i.e., through OS drivers), a rule-of-thumb solution for power consumption reduction, and therefore for the drive's self-heating, could be an algorithm orthogonal to the throttling one, as depicted in Figure 12. The drive starts with the maximum QD achievable by the SSD to grant the maximum sustained IOPS; then, when a certain temperature threshold (i.e., the pre-throttling temperature) sufficiently below the throttling temperature is reached, the QD is progressively reduced before the drive actually enters a throttling state. For a fine-grained tuning of the power consumption at a given QD, the frame-buffer size can be varied as well, while carefully monitoring possible HoL events that would affect latency. Of course, throttling events cannot be completely avoided by only tuning the QD and the frame-buffer size, especially if the SSD sustains a heavy workload for long time frames, but their occurrence can be delayed, with a potential benefit on the average sustained IOPS during the workload since the time window in which the SSD operates outside throttling (i.e., the time spent between the safe temperature and the throttling temperature) is widened.
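One possible rendering of the pre-throttling policy of Figure 12 is a control step that an OS driver could run per temperature sample; the thresholds, bounds, and the halving/doubling steps are assumptions, not the tuned values of this work.

```python
# Pre-throttling QD policy sketch (assumed thresholds and step sizes):
# shrink the queue depth once the pre-throttling temperature is crossed,
# recover it when the drive cools back to the safe temperature, and hold
# in between (hysteresis) to avoid oscillations.
def next_queue_depth(temp_c: float, qd: int,
                     qd_min: int = 1, qd_max: int = 32,
                     pre_throttle_c: float = 60.0,
                     safe_c: float = 50.0) -> int:
    if temp_c >= pre_throttle_c:
        return max(qd // 2, qd_min)   # progressively reduce power draw
    if temp_c <= safe_c:
        return min(qd * 2, qd_max)    # restore sustained IOPS
    return qd
```

The hysteresis band between the safe and pre-throttling temperatures keeps the QD from toggling on every sample; the frame-buffer size could be stepped with the same pattern for finer-grained control.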
We stress that with our methodology it is possible to reduce the throttling time and have a fast temperature recovery, since we can set the QD and frame-buffer size to minimum values, which is currently not feasible with state-of-the-art throttling algorithms. In Figure 13, we benchmark the state-of-the-art throttling against our proposed methodology. Our SSDExplorer simulator can infer the temperature of the 3D NAND Flash devices integrated in the drive and then simulate throttling from the power traces generated during the submission of the workload, using the thermal model and the simulation strategy provided in [50]. Currently, we do not support the modeling of the temperature of the SSD controller and of the DRAM module. The simulation results show that the sole throttling approach achieves, on a generic time window of the TPC-IoT workload, a 161.98 kIOPS average performance, whereas our proposed optimized algorithm achieves a 184.4 kIOPS average performance. The option devised to manage the fallback from throttling in our algorithm is to set the QD and frame-buffer size one step below the values reached before entering the throttling stage. This is only one of the available options, which should be explored in the future to identify the strategy leading to the best gain. We also expect that, in the case of an extremely heavy workload sustained by the SSD (i.e., more than 10 or 100 throttling events in the same time window considered in our analysis), our methodology would converge to the state-of-the-art in terms of sustained IOPS. However, the degree of flexibility provided could open unprecedented optimizations that leverage, for example, applications, the OS, drivers, etc.

Figure 13. Benchmarking the CST-based throttling with respect to our optimized algorithm (i.e., throttling + opt.). A 512 GB SSD is considered in the study with QD = 8 as a starting point for both approaches.
Ultimately, our proposed solution works well under the assumption that the ambient temperature is sufficiently far from the throttling temperature of the SSD, and given the possibility to smartly control the QD, the frame-buffer size, the temperature, and many other OS-related parameters through proper system drivers.

Conclusions
In this work, we have analyzed the self-heating effect in SSDs for IIoT edge gateway applications through the study of the power consumption of the storage medium sub-system. We considered 3D NAND Flash technology in the wake of the augmented storage demands of this application scenario. By characterizing the power requirements of the write and read operations through electrical measurements, we studied, with the help of a co-simulation environment, their impact on the overall consumption (up to 41%) in SSD architectures.
Furthermore, we explored methods of reducing the self-heating effect by acting on the SSD micro-architectural parameters of its queuing system, namely the queue depth and the frame-buffer size. Their role was thoroughly investigated by monitoring SSD sustained IOPS, latency, and power consumption.
Finally, we proposed a methodology orthogonal to command submission time-based throttling that can be implemented by the host operating system and that helps reduce performance slowdowns when the drive's temperature crosses the throttling threshold. Up to 20 kIOPS on average can be gained. This methodology could be exploited by SSD and system designers for future refinements in multiple scenarios besides IIoT.

Conflicts of Interest:
The authors declare no conflict of interest.

Abbreviations
The following abbreviations are used in this manuscript: