Article

Design and Thermal Environment Analysis of a Decentralized Cooling System with Surface-Mount Heat Pipe Exchangers on Servers in Data Centers

1 School of Mechanical Engineering, Aalto University, 02150 Espoo, Finland
2 School of Mechanical Engineering, Tongji University, Shanghai 200092, China
* Authors to whom correspondence should be addressed.
These authors contributed equally to this work.
Buildings 2022, 12(7), 1015; https://doi.org/10.3390/buildings12071015
Submission received: 9 June 2022 / Revised: 4 July 2022 / Accepted: 12 July 2022 / Published: 14 July 2022
(This article belongs to the Section Building Energy, Physics, Environment, and Systems)

Abstract

This paper proposes a decentralized cooling system that combines a traditional computer room air conditioning (CRAC) unit with server-level heat pipe exchangers for thermal environment optimization in a data center. Two cooling strategies, with heat exchangers installed above and below the servers respectively, are proposed and compared with the original CRAC system in terms of the thermal environment. The simulation results of the original data center model agree well with the on-site measurement results, which validates the model's reliability. The results show that the decentralized cooling system can effectively improve the thermal environment in data centers. To obtain the highest cooling efficiency, altogether 16 cases, in which heat pipe exchangers are installed at different locations and heights, are analyzed and compared. The results show that the thermal environment is optimal when the heat pipe exchangers are installed 0.01 m below each server. The local hotspot temperature is reduced by 6.8 °C, and the temperature distribution of the rack is the most uniform, which can effectively reduce heat accumulation in data centers.

1. Introduction

A data center (DC) is a physical space that realizes the centralized computation, transmission, exchange, and storage management of digital information [1]. The core equipment in DCs consists of computer servers, network exchange equipment, and other auxiliary operating equipment (e.g., uninterruptible power supply (UPS), cooling system, fire control system, and monitoring system) [2]. In recent years, with the rapid development of the Internet, cloud computing, and other emerging information technologies, DCs have become critical infrastructure for building and maintaining a global information network, and will also become a strategic resource for digital information storage and circulation [3]. Although DCs have entered a stage of rapid development, they also face the challenges of high operational risk and high specific energy consumption due to their high-density servers and 24 h continuous operation. DCs belong to a special building segment that consumes much more energy than similarly sized traditional office buildings [4]. Today, the worldwide electricity consumption of DCs reaches 416 TWh, which accounts for approximately 3% of total global power demand, while the CO2 emissions associated with DCs are almost equivalent to those of the entire global aviation industry [5,6]. The total energy used in DCs is even 40% higher than the total annual energy use of the UK, a country with a population of 65 million [1]. By 2020, the annual energy consumption of DCs had increased by 140 billion kWh [7], and this growth will undoubtedly continue in the future due to the rapid development of high-density, high-performance IT (Information Technology) equipment. According to the energy usage split of typical DCs, IT equipment and the cooling system account for the largest proportions, at 44% and 40%, respectively [8]. Thus, from the perspective of the HVAC (Heating, Ventilation, and Air Conditioning) industry, cooling efficiency should be improved in DCs to ensure the safe operation of IT equipment and achieve energy savings.
Due to their high reliability and low initial investment and maintenance costs, air-side cooling technologies have been widely used and regarded as the preferred cooling scheme in DCs over the past several decades [9]. Therefore, typical DCs generally adopt traditional air-cooled refrigeration technology (e.g., traditional computer room air conditioning (CRAC) units comprised of coolers, water pumps, and cooling towers) [10]. Although air-side cooling systems have been widely used in DCs, they have low cooling efficiency, which results in energy waste and an operational risk of overheating. Their low cooling efficiency and potential operational risk stem from four possible causes: (1) the high energy consumption of the traditional vapor compression system in CRACs; (2) cooling power loss in the pipeline system; (3) local overheating due to poor air distribution [11]; and (4) the mixing of hot and cold air flows due to the lack of airflow control devices [12]. As the heat load of servers continues to increase, traditional CRACs can only meet the cooling demand of servers by directly reducing the supply air temperature or increasing the supply air flow. To achieve both operational safety and high cooling performance, more advanced cooling technologies should be developed and applied in DCs, such as airflow and thermal management [13,14,15,16,17], free cooling technology [18,19,20], liquid cooling technology [21,22,23], and so on.
Among all the available cooling technologies applied in DCs, heat pipe (HP) composite cooling is a feasible and promising solution, which can enhance the cooling efficiency for the servers and reduce the required cooling demand. The HP is a new type of high-efficiency heat exchange element, which can transfer a large amount of heat through a small heat exchange area with no or only a small amount of external power. When the outdoor temperature is low enough, free cooling sources can be used to cool the DC with the HP system [11]. Since the latent heat is hundreds of times the sensible heat, compared with other cooling methods, HP cooling can carry out heat exchange under a smaller temperature gradient on the heating surface and thus increase the heat transfer rate [24,25]. In addition, HP cooling avoids direct contact between the refrigerant and the server, which reduces the risk of leakage inside the server and ensures a safer operating environment for the system and equipment [26]. At the same time, the operating frequency of the fans inside the server is reduced, so the noise generated can be kept below 45 dB [27].
So far, many scholars have applied HPs to DCs and investigated their applicability and potential for operational safety and energy saving. Based on the transformation of vapor compression air conditioning, Okazaki et al. [28,29] first proposed a refrigeration system with a natural circulation loop of refrigerant; its energy consumption was about 50% lower than that of the vapor compression system. Tian et al. [30] analyzed HP exchanger (HPE) units in DCs and found that installing HPs can save 40–50% of the energy used by CRACs. In addition, Ling et al. [31] experimentally studied the micro-channel separated HPs of a cooling device used in telecommunication stations, established a steady-state mathematical model to analyze the influence of the geometric design of the HPs and the environmental conditions on the thermal performance, and verified it with experimental data. It was found that the fin pitch and the indoor–outdoor temperature difference have a significant impact on the heat dissipation capacity of HPs. To avoid the difficulties of fluid distribution and flow control, Zhang et al. [32] combined the two independent circuits of mechanical refrigeration and thermosyphon with a three-fluid heat exchanger and proposed a new type of integrated vapor compression and thermosyphon system. The new system has three modes: mechanical refrigeration, thermosyphon free cooling, and dual mode, all of which have sufficient cooling capacity. Compared with the existing integrated system of mechanical refrigeration and thermosyphon, the new system is more reliable and more convenient to operate. In recent years, Zhou et al. [33] have developed a pump-driven loop HP system for free cooling in DCs. The research shows that the ambient temperature of DCs using this system was maintained at 18–25 °C, the average payback period of the system was about 3.9 years, and the annual energy saving rate exceeded 30%.
Previous research on HPs in DCs has mainly focused on room-level and rack-level composite cooling applications [28,29,30,31,32,33]. However, with the increasing power density of servers, the cooling power requirements of DC environments are getting higher, and room-level or rack-level composite cooling may not meet the cooling demand of high-density servers or ensure the safe operation of IT equipment. Thus, the novelty of this paper is to propose a new composite cooling system with CRACs and server-level HPEs in a DC, in which the HPEs are installed between adjacent servers. The reliability of the DC model is verified by on-site experiments. The study is divided into 16 cases, in which the HPE is installed in two different positions (above and below the servers) at 8 different installation heights. By comparing the outlet temperature at the back door of the rack and the surface temperature of the rack hotspot, this paper analyzes the influence of the different installation positions and heights of the HPEs on the thermal environment and operational safety of the DC. Finally, the optimal composite cooling system with CRACs and server-level HPEs is proposed.

2. Data Center

2.1. Description of the Studied DC Room

The studied DC, with a total area of around 300 m2, serves a university to support internet and information services. It is divided into six different functional areas, including three DC rooms (base, core, and supercomputer rooms) and other functional areas (e.g., monitoring area, uninterruptible power supply (UPS) distribution area, and hosting room). In this paper, the base DC room, covering an area of 88 m2, is selected as the research object, and the simulation model of the base room is established strictly according to its geometric and physical parameters. The base room adopts a Schneider air-cooled conditioner and an under-floor air distribution (UFAD) system with a closed cold aisle (CCA) and an open hot aisle (OHA). In the base room, the CRAC unit supplies cool air from its outlet to the plenum, and the cool air flows into the CCA through the perforated tiles. The cool air is then drawn into the racks and removes the heat generated by the servers; the heated air flows out of the racks into the OHA and finally returns to the CRAC unit. Figure 1 shows the layout of the base computer room, and the specific parameters of the base room and its equipment, together with site photos of the room, the CRAC unit, the CCA, the racks, and the servers, are summarized in Table 1. Both sides of the cold aisle adopt the “face-to-face, back-to-back” rack arrangement to separate the cold and hot air flows from each other.

2.2. Model Set-Up

In this study, the electronic thermal analysis software Icepak [34] is used to build the numerical simulation model of the base room. According to the actual area of the base room, the actual equipment model and layout, and other parameters measured by experiments, the 3D numerical model of the base room is established as shown in Figure 2.

2.2.1. Control Equation and Turbulence Model

The establishment of appropriate governing equations is the premise for solving any flow and heat transfer problem. The governing equations describe the basic physical conservation laws that govern the flow of the fluid, expressing the conservation of mass, momentum, and energy, respectively. The air velocity in the base room is low, and the airflow is treated as turbulent with constant thermophysical properties [35,36,37]. The k-ε model has the advantages of short calculation time, high stability, and reasonable prediction of large-scale turbulence; thus, it has been widely used in the simulation of airflow distribution in DCs [26,36,38,39]. Since the internal environment of the base room is relatively enclosed and the internal air flow is a large-space flow, a high-Reynolds-number turbulence model is suitable [40,41]. In this paper, the standard k-ε model is selected for the CFD numerical simulation. The governing equations of the three conservation laws are as follows [42,43]:
(a)
Mass conservation equation (continuity equation):
\frac{\partial \rho}{\partial t} + \operatorname{div}(\rho \boldsymbol{u}) = 0
When the fluid is incompressible, this formula can be simplified as follows:
\operatorname{div}(\boldsymbol{u}) = 0
(b)
Momentum conservation equation:
\frac{\partial \boldsymbol{u}}{\partial t} + \operatorname{div}(\boldsymbol{u}\boldsymbol{u}) = \operatorname{div}\left(\nu_{\mathrm{eff}}\,\operatorname{grad}\boldsymbol{u}\right) - \frac{1}{\rho}\frac{\partial p}{\partial n} + \boldsymbol{g}
(c)
Energy conservation equation:
\frac{\partial (\rho T)}{\partial t} + \operatorname{div}(\rho \boldsymbol{u} T) = \operatorname{div}\left(\frac{k}{c}\operatorname{grad} T\right) + S_{T}
where ρ is the density of the fluid, t is the time, u is the velocity vector of the fluid, ν_eff is the effective viscosity of the fluid, p is the static pressure, n is the direction vector, g is the gravitational body force, T is the static temperature of the fluid, k is the thermal conductivity, c is the specific heat of air at constant pressure, and S_T is the internal heat source of the fluid, including the part of the mechanical energy of the fluid that is converted into thermal energy by viscous dissipation.
In this paper, the k-ε model used for numerical simulation introduces the turbulent kinetic energy k and the turbulent dissipation rate ε. It contains the following two transfer equations for solving k and ε [42]:
(d)
Turbulent kinetic energy equation:
\frac{\partial (\rho k)}{\partial t} + \operatorname{div}(\rho \boldsymbol{u} k) = \operatorname{div}\left[\left(\mu + \frac{\mu_{t}}{\sigma_{k}}\right)\operatorname{grad} k\right] - \rho\varepsilon + G_{k}
(e)
Dissipation rate equation:
\frac{\partial (\rho \varepsilon)}{\partial t} + \operatorname{div}(\rho \boldsymbol{u} \varepsilon) = \operatorname{div}\left[\left(\mu + \frac{\mu_{t}}{\sigma_{\varepsilon}}\right)\operatorname{grad} \varepsilon\right] + \frac{\varepsilon}{k}\left(C_{1\varepsilon} G_{k} - C_{2\varepsilon}\rho\varepsilon\right)
where μ is the dynamic viscosity, μ_t is the turbulent viscosity, G_k is the generation term of the turbulent kinetic energy k due to the mean velocity gradient, σ_k is the Prandtl number corresponding to the turbulent kinetic energy k, σ_ε is the Prandtl number corresponding to the turbulent dissipation rate ε, and C_{1ε} and C_{2ε} are empirical constants. In this study, the fluid flow is considered to be steady. Therefore, the transient terms containing ∂/∂t in the above partial differential equations can be eliminated, which simplifies the numerical calculation.
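The closure constants of the standard k-ε model are not listed in the paper; as a reference, the commonly used default values of the standard formulation (an assumption here, not values reported in the article) are:

\mu_{t} = \rho C_{\mu}\frac{k^{2}}{\varepsilon}, \qquad C_{\mu} = 0.09,\quad C_{1\varepsilon} = 1.44,\quad C_{2\varepsilon} = 1.92,\quad \sigma_{k} = 1.0,\quad \sigma_{\varepsilon} = 1.3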
Based on existing numerical simulation studies of DCs, the following settings are made: (1) in most studies, the influence of the built-in fans of the servers and of the obstacles under the plenum on the simulation results is ignored, and this study treats them in the same way in the modeling process; (2) considering the parameter variations and flow patterns in the base room, this study adopts the Boussinesq approximation; (3) the air flow rate in the base room is low, so the air can be treated as an incompressible fluid in steady flow; (4) the effect of radiation is ignored in the simulation.
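As a brief illustration of assumption (2), the Boussinesq approximation treats the density as constant in all terms except the buoyancy term, where it varies linearly with temperature:

\rho \approx \rho_{0}\left[1 - \beta\,(T - T_{0})\right]

so the buoyancy force per unit volume becomes approximately $-\rho_{0}\,\beta\,(T - T_{0})\,\boldsymbol{g}$, where ρ_0 and T_0 are the reference density and temperature and β is the thermal expansion coefficient of air.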

2.2.2. Boundary Conditions and Mesh Generation

In this simulation, the walls, floor, and ceiling of the base room are set as adiabatic boundaries, and the air supply outlet and return air outlet are set as fan modules. Convergence of the iterative calculation is considered to be achieved when the residuals fall below the set criteria, which determine the degree of convergence of the numerical simulation. Table 2 summarizes the boundary conditions and related parameter settings.
As recommended by the CFD quality control methods of Nielsen [44], the grid independence should also be checked before the numerical simulation. Figure 3 shows the grid-independence test of the original base computer room. After applying different levels of grid refinement to the initial DC model, we found that when the number of grid cells increases from about 420,000 to 970,000, the effect of the higher grid density on the simulation results is very small. Under this circumstance, the grid is considered independent, and a balance between high accuracy and high efficiency of the numerical calculation is achieved. Therefore, a grid number of 425,081 is selected as the appropriate value. In this case, the maximum grid size in the x, y, and z directions is 0.1 m. In the subsequent simulations of this research, grid-independence tests are also carried out. In addition, the quality of the grid is checked with the quality inspection tool built into Icepak, and the mesh quality is acceptable at the selected mesh number.
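The following is a minimal sketch of a grid-independence check of the kind described above. The cell counts roughly follow the text (about 420,000 to 970,000 cells), but the intermediate count and the monitored hotspot temperatures are hypothetical placeholders, not values from the paper.

```python
# Grid-independence check: monitor one result (here a hotspot temperature)
# across successively finer meshes and compute its relative change.
mesh_cells   = [425_081, 700_000, 970_000]   # cell counts (middle value assumed)
hotspot_temp = [34.2, 34.1, 34.1]            # degC, illustrative placeholders

for i in range(1, len(mesh_cells)):
    rel_change = abs(hotspot_temp[i] - hotspot_temp[i - 1]) / hotspot_temp[i - 1] * 100
    print(f"{mesh_cells[i - 1]:,} -> {mesh_cells[i]:,} cells: "
          f"hotspot change {rel_change:.2f} %")

# The grid is treated as independent once further refinement changes the
# monitored temperature by well under 1 %, so the 425,081-cell mesh is kept.
```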

2.3. Experimental Validation of Model Reliability

In this study, the reliability and accuracy of the numerical model are validated by the experimental measurement in the actual base room. Rack B4 is selected as the object rack in this validation study due to its characteristics as follows:
Highest rated power (16.98 kW) among all 16 racks;
Only two kinds of rack-mounted servers, with powers of 0.75 kW × 2 and 0.495 kW × 2;
All the servers are 2 U and share the same external dimensions;
The servers are distributed relatively evenly, with intervals of 1 U and 2 U.
The layout and corresponding specifications of the servers in rack B4 are shown in Figure 4a. In light of a comprehensive consideration of its characteristics, rack B4 can be regarded as a high-density rack and is thus selected as the object rack in this study. During the on-site measurement in the base room, K-type thermocouples were used to measure the temperature at the test points, and an Agilent 34970A data acquisition instrument was used to record the experimental data. As shown in Figure 4b, nine K-type thermocouple test points were arranged equidistantly at the air outlet of the rear door of rack B4. The test points were placed next to the door gap in the middle of the air outlet of the rack, and the temperature of each measuring point was collected in real time with a sampling interval of 10 s.
Figure 5 compares the temperatures at the test points between the numerical and experimental results in the base room. The relative error between the experimental and simulation results at each measuring point was below 5%, which is within an acceptable limit, while the temperature of each measuring point in the simulation results is slightly higher than that in the experimental results. This proves that the modeled base room temperatures are reliable and can be used for the subsequent research.
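A minimal sketch of the validation check behind Figure 5 is given below: the relative error between measured and simulated outlet temperatures at the nine test points. The temperature values are hypothetical placeholders, not the actual measurements.

```python
# Compute the point-by-point relative error between experiment and simulation.
measured  = [26.5, 27.2, 28.9, 30.1, 31.5, 32.8, 33.4, 32.0, 30.5]   # degC, placeholders
simulated = [27.0, 27.8, 29.6, 30.9, 32.4, 33.9, 34.2, 32.9, 31.3]   # degC, placeholders

for point, (t_exp, t_sim) in enumerate(zip(measured, simulated), start=1):
    rel_error = abs(t_sim - t_exp) / t_exp * 100
    print(f"point {point}: measured {t_exp:.1f} degC, simulated {t_sim:.1f} degC, "
          f"relative error {rel_error:.1f} %")

# The model is accepted when every point stays below the 5 % limit used in the text.
```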

3. Improved Thermal Environment with Installed HPEs

As the packing density of racks continues to increase, CRACs cannot easily meet the cooling demands of these high-density DCs and may suffer from low efficiency and possible local overheating. Under this circumstance, this paper proposes a composite cooling system combining CRACs and server-level HPEs in DCs.

3.1. Description of the HPEs

The HPEs in this study are designed to fit the Dell servers used in the actual rack B4 in the base room. Because copper has excellent thermal conductivity and corrosion resistance, it is widely used in HP manufacturing [45]. According to the working temperature range of the server, a copper-water HP is selected: the shell material is pure copper, and the working fluid of the HP is pure water. The water in the evaporation section of the HP absorbs the heat emitted by the server and evaporates, and the steam then flows to the colder end of the HP, where it condenses on the wall and transfers its heat to the condensation section. These HPEs are highly reliable, and no working fluid leakage will cause safety hazards when they are installed in the rack. The evaporation section of the HPE is the section where the HP contacts the heat source; in the condensation section, the heat absorbed by the HP is dissipated through a water-cooling plate.
The size of the server is 0.8 m (L) × 0.46 m (W) × 0.09 m (H), and the HPE is sized accordingly to exactly fit the server area. The height of the heat exchanger is 0.01 m, and there is a certain air gap between the server and the HP. This arrangement can cover the cooling power requirements of the server [46]. The system realizes composite cooling that combines air cooling and HP heat dissipation in the base room. On the one hand, the heat dissipated by the operation of the server is cooled by the air conditioning, and the hot air is discharged into the hot aisle through the back door of the rack. On the other hand, the heat can be partially removed by the HPE: the absorbed heat is exported to the outside of the server through the HP, then cooled by the water-cooling plate, and finally discharged into the outside atmosphere through the cooling tower. According to the basic parameters of the HPEs and the original DC model, the HPE is initially set to be installed above each server in the rack, with a distance of 0.01 m between the exchanger and the server. The cooling method of the new system for a single server in the rack is shown in Figure 6. The inlet water temperature of the water-cooling plate is 25 °C and the outlet temperature is 30 °C. The pipe joints on the water-cooling plate are connected to the cooling water, and the condensation section of the HPEs is cooled inside the water-cooling plate [30].
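As a rough, illustrative estimate, the 25 °C inlet and 30 °C outlet water temperatures above imply the following cooling-water flow per HPE. The assumption that one HPE removes the full 0.75 kW rated power of a server is only a bounding case introduced here; in the composite system the HPE carries just part of the server heat load.

```python
# Energy balance on the cooling-water side of the HPE: Q = m_dot * cp * dT.
server_heat_load = 750.0        # W, rated power of the larger servers in rack B4
cp_water         = 4186.0       # J/(kg*K), specific heat capacity of water
delta_t          = 30.0 - 25.0  # K, water temperature rise across the cooling plate

m_dot = server_heat_load / (cp_water * delta_t)   # kg/s
print(f"required water mass flow per HPE: {m_dot * 1000:.1f} g/s "
      f"({m_dot * 3600:.0f} kg/h)")
# -> about 36 g/s (roughly 130 kg/h) per heat exchanger under these assumptions
```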
Figure 7 shows the cross-sectional schematic diagram of the numerical model of the composite cooling system. The system uses HPs inside the rack for heat transfer, but the HPEs are installed outside the server, which isolates the cooling water from the server chassis, thereby preventing cooling water leakage from causing short circuits and damage to the server and improving the reliability of the cooling system.

3.2. Comparison of Composite and Standard Cooling System

In this section, the original cooling system and the system with server-integrated heat pipe exchangers are simulated by CFD, respectively. The locations of the 9 temperature points are the same as the measured test points, which are 0.3 m, 0.5 m, 0.7 m, 0.9 m, 1.1 m, 1.3 m, 1.5 m, 1.7 m, and 1.9 m above the floor. In addition, the average temperature of the rack's back door is used to analyze the heat dissipation and cooling effect of the rack; Teva,back represents the average temperature of the back door of the rack. σT is used to represent the standard deviation of the air outlet temperature over the measurement points of the rack rear door and to analyze the uniformity of rack cooling and local hot spots. In this paper, the point of the highest temperature at the air outlet is defined as the rack hotspot, where server damage is most likely to occur [47].
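A minimal sketch of the three metrics used in this section follows: the back-door average temperature Teva,back, the standard deviation σT over the nine outlet points, and the rack hotspot (the highest outlet temperature). The sample temperatures are hypothetical, not the simulated values for rack B4.

```python
import statistics

outlet_temps = [27.1, 27.9, 29.3, 30.8, 32.5, 34.2, 33.0, 31.1, 29.4]   # degC, placeholders

t_eva_back = statistics.mean(outlet_temps)     # average back-door temperature
sigma_t    = statistics.pstdev(outlet_temps)   # population std dev -> cooling uniformity
hotspot    = max(outlet_temps)                 # point where server damage is most likely

print(f"T_eva,back = {t_eva_back:.2f} degC, sigma_T = {sigma_t:.2f} degC, "
      f"hotspot = {hotspot:.2f} degC")
```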
Figure 8 shows the profile of the heat distribution at the air outlet of rack B4 in the two systems. As shown in Figure 8a, the different server powers in the rack result in an uneven temperature distribution in the original system. The temperature of the middle and lower parts of rack B4 is the highest, especially at the bottom of the rack, where there is significant heat accumulation. As shown in Figure 8b, after installing the HPEs above the servers in the rack, the heat distribution at the air outlet of rack B4 is more uniform: the heat accumulation in the middle and lower parts is greatly reduced, and the thermal environment is improved. Figure 9 shows the exhaust air temperatures at the 9 temperature recording points for these two cooling systems. Compared with the original model, the temperature at each recording point is reduced to varying degrees in the composite cooling model. The rack hotspot is reduced from 34.16 °C to 30.19 °C, a maximum temperature drop of 3.97 °C, whereas the maximum temperature drop of the composite baffle system is 3.4 °C [19]; this significantly improves the situation of excessive local rack temperatures. It can be seen that the new cooling system has a good optimization effect on hot spots. The standard deviations of the exhaust air temperatures at the recording points in the two models are 2.80 °C and 1.48 °C, respectively, a decrease of 1.32 °C. As a result, the uniformity of the temperature distribution in the rack is greatly improved, and the thermal environment of the base room is significantly optimized.
Figure 10 shows the air flow pattern of rack B4 in the original model and in the composite cooling model (installation height 0.01 m). In the original model, the air velocity near the bottom of rack B4 is higher, which leads to less air flowing into the lower part of the rack and allows higher-temperature air from the hot aisle to return into the rack. Under this circumstance, the air distribution is uneven and heat accumulation occurs. In the composite model, the installation of the HPEs occupies part of the air flow space between the servers but also changes the flow path of the air. The instability of part of the airflow at the inlet is increased, which causes turbulence, and heat transfer is also enhanced. Compared with the T-shaped underfloor air duct network system, which improves the floor air supply to improve temperature uniformity, the optimization effect of this composite system on the overall temperature uniformity of the rack is more obvious [48].

3.3. The Effect of Installation Distance on Performance

According to the comparison and analysis in Section 3.2, the composite cooling system with HPEs (0.01 m above the servers) can effectively improve the thermal environment in the base room and eliminate the local hot spot in the rack. To further analyze the optimal design of the composite cooling system, 8 cases with different installation heights of the HPEs are simulated. Case 0 represents the original model without HPEs. Table 3 describes the 8 cases with different installation heights of the HPEs; case 3 is the case described in Section 3.2. In these cases, only two parameters are varied, namely whether the HPE is installed and the installation distance of the heat exchanger from the server, while the other simulation parameters remain constant. When the HPEs are attached directly to the top side of the servers, the heat exchangers are assumed to be in good thermal contact with the servers, and the contact thermal resistance is ignored.
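The following sketch enumerates the simulation matrix of Tables 3 and 4 plus the baseline: eight installation heights from 0 to 3.5 cm in 0.5 cm steps, applied once above the servers (cases 1–8) and once below them (cases 9–16). The dictionary structure itself is only an illustrative way of organizing the cases, not part of the paper.

```python
# Build the case matrix: case 0 is the original model without HPEs.
heights_cm = [i * 0.5 for i in range(8)]          # 0.0, 0.5, ..., 3.5 cm

cases = {0: ("no HPE", None)}
for i, h in enumerate(heights_cm, start=1):
    cases[i] = ("above", h)                        # cases 1-8
for i, h in enumerate(heights_cm, start=9):
    cases[i] = ("under", h)                        # cases 9-16

for case_id, (position, height) in cases.items():
    label = "" if height is None else f", height = {height} cm"
    print(f"case {case_id:>2}: {position}{label}")
```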
Figure 11 compares the 9 temperature recording points in case 0 and cases 1–8. When the HPEs are installed directly against the top of the servers (i.e., case 1), the composite system hardly improves the heat dissipation of the rack. In the other 7 cases, the heat accumulation in the lower part of rack B4 is alleviated to a certain extent. However, in case 2 and case 8, the heat dissipation of the upper part of the rack deteriorates, while the temperature at the air outlet of the rack in cases 3–7 is improved overall. Especially in case 3 and case 4, the temperatures at the recording points are below 31 °C, and the maximum temperature drop reaches 4.1 °C. Therefore, considering the temperature changes at the 9 recording points at the outlet of rack B4, in case 3 and case 4 (installation heights of 0.01 m and 0.015 m), the temperature distribution at the outlet of rack B4 is the most uniform and the lowest on average.
Figure 12 shows the heat distribution profiles of the air outlet in cases 0–8. There is significant heat accumulation in the lower part of rack B4 in case 0. The heat accumulation is the worst in case 1, with slight heat accumulation also in the middle and top parts of the rack. In case 2, although the heat accumulation in the lower part is eliminated, heat accumulation occurs in the upper part. In cases 3–8, the rack hotspot temperatures are reduced to varying degrees, and the heat distribution in the upper and lower parts is relatively uniform. Case 3 performs best in terms of the temperature distribution at the air outlet of rack B4, followed by case 4. In case 3, the temperature distribution of the air outlet is the most uniform, and the rack hotspot temperature is minimized to 30.3 °C.
Figure 13 shows the change in the average temperature of the air outlet in cases 0–8. In case 0, the average temperature of the back door of rack B4 is 28.4 °C. In case 1, where the HPEs are installed directly against the top of the servers, the average back-door air temperature Teva,back increases to 28.9 °C. By adjusting the installation height, we found that in case 2 the mean temperature of the back door is significantly reduced to 26.6 °C. When the installation height continues to increase from 0.5 cm to 3.5 cm, the mean temperature of the back door of rack B4 rises slightly, but it still remains lower than in case 0.
Figure 13 also presents the standard deviation of the outlet air temperature at the 9 recording points for case 0 and the improved cases 1–8. The standard deviation σT of case 8 is the lowest, and the standard deviations of all the other improved cases are also lower than that of case 0. This shows that by installing HPEs, the temperature distribution of the rack becomes more even, and local overheating of the rack can be avoided.
Considering the average temperature of the air outlet of rack B4, the temperature change at each measuring point, and the standard deviation, the schemes of cases 3 and 4 both perform well in improving the thermal environment and heat exchange uniformity of the DC. However, in case 3, the rack hotspot has the lowest temperature and the temperature distribution is more uniform than in case 4, so this solution has a better effect on optimizing the temperature in the base room. Therefore, case 3 (HPEs installed 0.01 m above the servers) gives the best overall heat exchanger performance.
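The selection logic applied above can be summarized in a short sketch: prefer the case with the lowest hotspot temperature, then the lowest standard deviation, then the lowest back-door mean. The numeric values below are illustrative placeholders (the case 3 hotspot and mean loosely follow the text; the remaining numbers are assumptions).

```python
# Rank candidate cases by (hotspot, sigma_T, mean back-door temperature).
candidates = {
    3: {"hotspot": 30.3, "sigma_t": 1.5, "mean": 26.8},   # 1.0 cm above the server
    4: {"hotspot": 30.6, "sigma_t": 1.6, "mean": 26.9},   # 1.5 cm above the server
}

best = min(candidates, key=lambda c: (candidates[c]["hotspot"],
                                      candidates[c]["sigma_t"],
                                      candidates[c]["mean"]))
print(f"selected case: {best}")   # case 3, matching the conclusion above
```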

4. Composite System with HPEs Installed under the Server

4.1. Model Establishment of the System with a Heat Exchanger Installed under the Server

Since the internal chips and other components are installed close to the bottom of the server, the heat exchange optimization effect of HPEs installed under the server should also be considered. Based on the previous analysis of the cooling effect of the system with the HPEs installed at different heights above the servers, this section explores the optimal installation when the HPEs are installed under the servers. As in the simulation in Section 3, the numerical simulation of different installation heights of the heat exchangers under the servers is also divided into 8 cases, namely cases 9–16. These 8 cases with different installation heights of the HPEs are shown in Table 4. In the numerical model, although the installation location of the heat exchanger is moved to below the server, the other parameter settings and boundary conditions are the same as before. When the HPEs are attached directly to the bottom side of the servers, the heat exchangers are assumed to be in good thermal contact with the servers, and the contact thermal resistance is also ignored.

4.2. Analysis of the Under-Server Simulation

Figure 14 shows the heat distribution profiles of the air outlet in case 0 and cases 9–16. In case 9, the heat distribution in the lower part of rack B4 is not improved, and heat accumulation is generated in the middle of the rack, which deteriorates the thermal environment. In case 10, although the heat accumulation in the lower part is eliminated, heat accumulation occurs in the upper part. Only in cases 11–16 is the temperature distribution of each part of the rack improved significantly, with a relatively uniform heat distribution in the upper and lower parts of the air outlet. From the perspective of the temperature improvement and the uniformity of the back-door air temperature of rack B4, case 11 performs better than the other solutions. In this case, the temperature distribution of the air outlet is the most uniform among the under-server cases, and the rack hotspot temperature is minimized to 28.6 °C.
The results of the temperature changes at the different points analyzed in cases 9–16 are summarized in Table 5. It can be seen from Table 5 that when the HPEs are installed directly against the bottom of the servers (i.e., case 9), the composite system hardly contributes to the heat dissipation of the rack. In the other 7 cases, the thermal environment of rack B4 is partially improved, and the heat accumulation in the lower part of the rack is also mitigated to a certain extent. Compared with the base room refrigeration system of case 0, all the temperature values at the 9 points are reduced only in case 12; the temperatures at those points are all below 29 °C, and the maximum temperature drop reaches 6.9 °C. Nevertheless, for case 11, where the HPEs are installed 0.01 m below the servers, the overall optimization performance of the composite system is better, and the air temperatures at the measuring points are reduced overall.
Figure 15 shows the variation of the average back-door air temperature in case 0 and cases 9–16. In case 9, where the HPEs are installed directly against the bottom of the servers, the average back-door air temperature Teva,back increases to 28.9 °C, which is 0.5 °C higher than in case 0. By adjusting the installation distance, the average outlet air temperature drops in the other cases; especially in case 11, the mean back-door air temperature is significantly reduced to 26.1 °C. When the installation distance increases from 1 cm to 3.5 cm, the mean back-door air temperature rises slightly, but it remains lower than in case 0.
Figure 15 also summarizes the standard deviation of the outlet air temperatures at the 9 points in the original case 0 and the improved cases 9–16. It can be seen from the figure that case 11 has the lowest standard deviation, 0.9 °C, and the standard deviations σT of all the other improved cases are also lower than that of case 0. This means that by installing HPEs, the uniformity of the rack air temperature can be improved and local overheating of the rack can be avoided. Therefore, in terms of the temperature uniformity of rack B4, case 11 (0.01 m from the bottom of the server) gives the most even temperature distribution.
Considering the average back-door air temperature, the temperature change at each measuring point at the air outlet of the rack, and the standard deviation of the 9 points, the scheme of case 11 performs best in improving the thermal environment and heat exchange uniformity of the base room. The HPE installation at 0.01 m below the servers is therefore selected as the optimal solution for the bottom-mounted system.

4.3. Comparison of Above and Below Installation

The previous sections separately analyze the effect of installing the HPEs above or below the servers and study the impact of the installation distance on the cooling effect in the base room. This section compares the two optimal schemes by comparing the temperatures at the 9 points in cases 3 and 11. The temperatures at eight of the points in case 11 are lower than those in case 3, and the average back-door air temperature of case 11 is 26.1 °C, which is 0.7 °C lower than that of case 3. In addition, the standard deviation of the rack outlet air temperature in case 11 is lower, and its temperature distribution is more uniform.
Figure 16 compares the heat distribution profiles of the air outlet of rack B4 under three different conditions (no HPEs, and HPEs installed 0.01 m above or below the servers). It can be seen from the figure that the optimization effect of case 11 is better than that of case 3. This is because the hardware inside the server is installed at the bottom of the chassis, so HPEs installed below the servers are closer to the heat-generating parts of the server than HPEs installed above them. With the HPEs installed 0.01 m below the servers, the hotspot temperature of the rack is reduced from 37.0 °C to 28.6 °C, and the maximum temperature drop among all recording points reaches 6.8 °C. The thermal environment of the base room is greatly improved, and the heat distribution at the air outlet of the rack is the most uniform. Therefore, the composite cooling system with the HPEs installed 0.01 m below the servers is the best cooling scheme in this study.

4.4. Discussion and Further Work

According to the simulation results, the new system can effectively remove the heat generated by the operation of the servers and part of the heat generated by other components in the DC, thereby significantly eliminating the local hot spots in the rack and making the temperature distribution more uniform. The thermal environment of the rack can thus be optimized to a certain extent, which improves the working performance and operational reliability of the servers, reduces the dependence on the air conditioning system, and saves refrigeration power consumption. This paper focuses on the influence of the installation position of the HPEs on the heat dissipation effect. However, factors such as the heat exchange quality, pipe resistance, and radiator shape will also affect the heat dissipation effect; these have not yet been considered and should be studied in future work. In addition, the design and manufacturing process of the HP heat exchanger needs to be further optimized and improved.

5. Conclusions

To improve the thermal environment of a large DC and increase the uniformity of the rack temperature, this study analyzes the effect of HPEs on the heat distribution and air temperature profile of DCs. By adjusting the installation distance of the HPEs relative to the servers in the rack, the optimal installation scheme of the composite system is determined. The simulation results show that the composite cooling system with HPEs installed 0.01 m below the servers has the best performance in improving the thermal environment of the base room. The main conclusions are as follows:
(1)
Installing HPEs between servers allows convective air cooling and HP cooling technology to be combined. The maximum temperature drop of the rack hotspot is 3.97 °C, and the standard deviation of the outlet temperature is reduced by 1.32 °C, which can significantly improve the temperature uniformity and reduce the risk of hot spots in the rack.
(2)
Whether the HPEs are installed above or below the servers, the composite system performs best when the installation distance is 0.01 m, with the average back-door temperature reduced to 26.8 °C and 26.1 °C, respectively.
(3)
The composite cooling system with the HPEs installed 0.01 m below the servers is the most effective way to improve the thermal environment of the base room, and it largely eliminates the heat accumulation in the rack. The hotspot temperature of the rack is reduced from 37.0 °C to 28.6 °C, and the maximum temperature drop among all recording points reaches 6.8 °C.

Author Contributions

Conceptualization, X.Z. and X.Y.; methodology, X.Z.; software, X.Y.; validation, X.Z. and X.Y.; formal analysis, X.Z. and Y.L.; resources, X.Y.; data curation, X.Z. and Y.L.; writing—original draft preparation, X.Z.; writing—review and editing, X.Y., R.K., Y.P. and Z.L.; supervision, R.K. and Y.P.; project administration, X.Y. and Y.P.; funding acquisition, Y.P. and X.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This study was supported by the National Natural Science Foundation of China (Grant No. 51978481), and ICT for Climate actions project from Academy of Finland (Grant No. 342123).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

We also want to acknowledge the Information Center in Jiangpu Campus of Nanjing Tech University, Nanjing, China for providing the on-site measurement site.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Masanet, E.; Shehabi, A.; Lei, N.; Smith, S.; Koomey, J. Recalibrating global data center energy-use estimates. Science 2020, 367, 984–986. [Google Scholar] [CrossRef] [PubMed]
  2. Dai, J.; Ohadi, M.M.; Das, D.; Pecht, M.G. Optimum Cooling of Data Centers; Springer: New York, NY, USA, 2016. [Google Scholar] [CrossRef]
  3. Dayarathna, M.; Wen, Y.; Fan, R. Data Center Energy Consumption Modeling: A Survey. IEEE Commun. Surv. Tutor. 2016, 18, 732–794. [Google Scholar] [CrossRef]
  4. Jin, C.; Bai, X.; Yang, C.; Mao, W.; Xu, X. A review of power consumption models of servers in data centers. Appl. Energy 2020, 265, 114806. [Google Scholar] [CrossRef]
  5. Gelenbe, E.; Caseau, Y. The impact of information technology on energy consumption and carbon emissions. Ubiquity 2015, 2015, 1–15. [Google Scholar] [CrossRef] [Green Version]
  6. Belkhir, L.; Elmeligi, A. Assessing ICT global emissions footprint: Trends to 2040 & recommendations. J. Clean. Prod. 2018, 177, 448–463. [Google Scholar] [CrossRef]
  7. Whitney, J.; Delforge, P. Data Center Efficiency Assessment; Natural Resources Defense Council: New York, NY, USA, 2014. [Google Scholar]
  8. Song, Z.; Zhang, X.; Eriksson, C. Data Center Energy and Cost Saving Evaluation. Energy Procedia 2015, 75, 1255–1260. [Google Scholar] [CrossRef] [Green Version]
  9. Khalaj, A.H.; Halgamuge, S.K. A Review on efficient thermal management of air- and liquid-cooled data centers: From chip to the cooling system. Appl. Energy 2017, 205, 1165–1188. [Google Scholar] [CrossRef]
  10. Gong, X.; Zhang, Z.; Gan, S.; Niu, B.; Yang, L.; Xu, H.; Gao, M. A review on evaluation metrics of thermal performance in data centers. Build. Environ. 2020, 177, 106907. [Google Scholar] [CrossRef]
  11. Ni, J.; Bai, X. A review of air conditioning energy performance in data centers. Renew. Sustain. Energy Rev. 2017, 67, 625–640. [Google Scholar] [CrossRef]
  12. Zhang, H.; Shao, S.; Xu, H.; Zou, H.; Tian, C. Free cooling of data centers: A review. Renew. Sustain. Energy Rev. 2014, 35, 171–182. [Google Scholar] [CrossRef]
  13. Rambo, J.; Joshi, Y. Convective Transport Processes in Data Centers. Numer. Heat Transf. Part A Appl. 2006, 49, 923–945. [Google Scholar] [CrossRef]
  14. Yuan, X.; Zhou, X.; Liu, J.; Wang, Y.; Kosonen, R.; Xu, X. Experimental and numerical investigation of an airflow management system in data center with lower-side terminal baffles for servers. Build. Environ. 2019, 155, 308–319. [Google Scholar] [CrossRef]
  15. Ni, J.; Jin, B.; Ning, S.; Wang, X. The Numerical Simulation of the Airflow Distribution and Energy Efficiency in Data Centers with Three Types of Aisle Layout. Sustainability 2019, 11, 4937. [Google Scholar] [CrossRef] [Green Version]
  16. Yuan, X.; Xu, X.; Liu, J.; Pan, Y.; Kosonen, R.; Gao, Y. Improvement in airflow and temperature distribution with an in-rack UFAD system at a high-density data center. Build. Environ. 2020, 168, 106495. [Google Scholar] [CrossRef]
  17. Zhou, X.; Yuan, X.; Xu, X.; Liu, J.; Kosonen, R.; Liu, C. Research on the thermal performance of rack-level composite baffle diversion system for data centre. Energy Effic. 2020, 13, 1245–1262. [Google Scholar] [CrossRef]
  18. Ahmadi, V.E.; Erden, H.S. A parametric CFD study of computer room air handling bypass in air-cooled data centers. Appl. Therm. Eng. 2020, 166, 114685. [Google Scholar] [CrossRef]
  19. Wang, J.; Zhang, Q.; Yoon, S.; Yu, Y. Reliability and availability analysis of a hybrid cooling system with water-side economizer in data center. Build. Environ. 2019, 148, 405–416. [Google Scholar] [CrossRef]
  20. Cheung, H.; Wang, S. Reliability and availability assessment and enhancement of water-cooled multi-chiller cooling systems for data centers. Reliab. Eng. Syst. Saf. 2019, 191, 106573. [Google Scholar] [CrossRef]
  21. Chen, H.; Peng, Y.-H.; Wang, Y.-L. Thermodynamic analysis of hybrid cooling system integrated with waste heat reusing and peak load shifting for data center. Energy Convers. Manag. 2019, 183, 427–439. [Google Scholar] [CrossRef]
  22. Wu, C.; Tong, W.; Kanbur, B.; Duan, F. Full-scale Two-phase Liquid Immersion Cooling Data Center System in Tropical Environment. In Proceedings of the 2019 18th IEEE Intersociety Conference on Thermal and Thermomechanical Phenomena in Electronic Systems (ITherm), Las Vegas, NV, USA, 28–31 May 2019; pp. 703–708. [Google Scholar]
  23. Kheirabadi, A.C.; Groulx, D. Experimental evaluation of a thermal contact liquid cooling system for server electronics. Appl. Therm. Eng. 2018, 129, 1010–1025. [Google Scholar] [CrossRef]
  24. Alkharabsheh, S.; Puvvadi, U.; Ramakrishnan, B.; Ghose, K.; Sammakia, B. Failure Analysis of Direct Liquid Cooling System in Data Centers. In Proceedings of the ASME 2017 International Technical Conference and Exhibition on Packaging and Integration of Electronic and Photonic Microsystems Collocated with the ASME 2017 Conference on Information Storage and Processing Systems, San Francisco, CA, USA, 29 August–1 September 2017; p. V001T02A008. [Google Scholar]
  25. Zühlsdorf, B.; Jensen, J.K.; Elmegaard, B. Heat pump working fluid selection—economic and thermodynamic comparison of criteria and boundary conditions. Int. J. Refrig. 2019, 98, 500–513. [Google Scholar] [CrossRef]
  26. Zeng, L.; Liu, X.; Zhang, Q.; Yi, J.; Li, X.; Liu, X.; Su, H. Experimental and Simulation Study of Micro-Channel Backplane Heat Pipe Air Conditioning System in Data Center. Appl. Sci. 2020, 10, 1255. [Google Scholar] [CrossRef] [Green Version]
  27. Ding, T.; He, Z.G.; Hao, T.; Li, Z. Application of separated heat pipe system in data center cooling. Appl. Therm. Eng. 2016, 109, 207–216. [Google Scholar] [CrossRef]
  28. Garimella, S.V.; Persoons, T.; Weibel, J.A.; Gektin, V. Electronics Thermal Management in Information and Communications Technologies: Challenges and Future Directions. IEEE Trans. Compon. Packag. Manuf. Technol. 2016, 7, 1191–1205. [Google Scholar] [CrossRef] [Green Version]
  29. Okazaki, T.; Sumida, Y.; Matsushita, A. Development of Vapor Compression Refrigeration Cycle with a Natural-Circulation Loop; Mitsubishi Electric Corp.: Shizuoka, Japan, 1999. [Google Scholar]
  30. Okazaki, T.; Seshimo, Y. Cooling system using natural circulation for air conditioning. Trans. Jpn. Soc. Refrig. Air Cond. Eng. 2011, 25, 239–251. [Google Scholar]
  31. Tian, H.; Li, Z.; Liu, X. Application study of data center heat pipe air conditioning system. Build Sci. 2010, 10, 141–145. (In Chinese) [Google Scholar]
  32. Ling, L.; Zhang, Q.; Yu, Y.; Wu, Y.; Liao, S. Study on thermal performance of micro-channel separate heat pipe for telecommunication stations: Experiment and simulation. Int. J. Refrig. 2015, 59, 198–209. [Google Scholar] [CrossRef]
  33. Zhang, H.; Shao, S.; Xu, H.; Zou, H.; Tian, C. Integrated system of mechanical refrigeration and thermosyphon for free cooling of data centers. Appl. Therm. Eng. 2015, 75, 185–192. [Google Scholar] [CrossRef]
  34. Zhou, F.; Li, C.; Zhu, W.; Zhou, J.; Ma, G.; Liu, Z. Energy-saving analysis of a case data center with a pump-driven loop heat pipe system in different climate regions in China. Energy Build. 2018, 169, 295–304. [Google Scholar] [CrossRef]
  35. Mulay, V. Analysis of Data Center Cooling Strategies and the Impact of the Dynamic Thermal Management on the Data Center Efficiency, Mechanical Engineering; University of Texas at Arlington: Arlington, TX, USA, 2010. [Google Scholar]
  36. Alkharabsheh, S.; Fernandes, J.; Gebrehiwot, B.; Agonafer, D.; Ghose, K.; Ortega, A.; Joshi, Y.; Sammakia, B. A Brief Overview of Recent Developments in Thermal Management in Data Centers. J. Electron. Packag. 2015, 137, 040801. [Google Scholar] [CrossRef]
  37. Song, Z. Thermal performance of a contained data center with fan-assisted perforations. Appl. Therm. Eng. 2006, 102, 1175–1184. [Google Scholar] [CrossRef]
  38. Cruz, E.; Joshi, Y. Coupled inviscid-viscous solution method for bounded domains: Application to data-center thermal management. Int. J. Heat Mass Transf. 2015, 85, 181–194. [Google Scholar] [CrossRef]
  39. Priyadumkol, J.; Kittichaikarn, C. Application of the combined air-conditioning systems for energy conservation in data center. Energy Build. 2014, 68, 580–586. [Google Scholar] [CrossRef]
  40. Almoli, A.; Thompson, A.; Kapur, N.; Summers, J.; Thompson, H.; Hannah, G. Computational fluid dynamic investigation of liquid rack cooling in data centres. Appl. Energy 2012, 89, 150–155. [Google Scholar] [CrossRef]
  41. Jin, C.; Bai, X.; Yang, C. Effects of airflow on the thermal environment and energy efficiency in raised-floor data centers: A review. Sci. Total Environ. 2019, 695, 133801. [Google Scholar] [CrossRef]
  42. Phan, L.; Hu, B.; Lin, C.-X. An evaluation of turbulence and tile models at server rack level for data centers. Build. Environ. 2019, 155, 421–435. [Google Scholar] [CrossRef]
  43. Wang, F. Computational Fluid Dynamics Analysis: The Principle and Application of CFD Software; Tsinghua University Press: Beijing, China, 2004. [Google Scholar]
  44. Zhang, K.; Zhang, X.; Li, S.; Wang, G. Numerical Study on the Thermal Environment of UFAD System with Solar Chimney for the Data Center. Energy Procedia 2014, 48, 1047–1054. [Google Scholar] [CrossRef] [Green Version]
  45. Yuan, X.; Wang, Y.; Liu, J.; Xu, X.; Yuan, X. Experimental and numerical study of airflow distribution optimisation in high-density data centre with flexible baffles. Build. Environ. 2018, 140, 128–139. [Google Scholar] [CrossRef]
  46. Li, Y.; Nielsen, P.V. CFD and ventilation research. Indoor Air 2011, 21, 442–453. [Google Scholar] [CrossRef]
  47. Wang, J.; Lin, Y.; Zhu, K.; Lin, E. Design and performance analysis of the server water-cooled heat pipe cooling system. Sci. Technol. Innov. Her. 2015, 12, 58–59. [Google Scholar]
  48. Chung, D.D.L. Materials for thermal conduction. Appl. Therm. Eng. 2001, 21, 1593–1605. [Google Scholar] [CrossRef]
Figure 1. The layout of the base computer room.
Figure 2. The 3D model of the data room.
Figure 3. The independent grid test of the original base computer room.
Figure 4. Schematic diagram of the server in the rack. (a) The layout and corresponding specifications of the servers in rack B4. (b) Arrangement and test connection diagram of rack temperature test points.
Figure 5. Comparison of measured and numerical simulated values of the outlet temperatures of rack B4.
Figure 6. The cooling method of the combined HPEs for a single server in the rack.
Figure 7. The cross-sectional diagram of the numerical model of the composite cooling system.
Figure 8. The profile of heat distribution of the back door in rack B4 with the original and new cooling systems.
Figure 9. The exhaust air temperatures of the 9 different points in the rack B4 of these two models with the original and new cooling systems.
Figure 10. The air flow pattern of the rack B4 in the original model and composite cooling model (installation height 0.01 m).
Figure 11. The temperature comparison of the nine temperature measurement points in the rack B4 without heat pipes (case 0) and with HPEs at different distances from the server (cases 1–8).
Figure 12. The heat distribution profiles of the air outlet of the rack B4 in the cases 0–8.
Figure 13. The change in the average air temperatures of the back door of the rack B4 in cases 0–8.
Figure 14. The heat distribution profiles of the air outlet of the rack B4 in case 0 and case 9–16.
Figure 15. The average air temperatures of the back door in case 0 and cases 9–16.
Figure 16. The comparison of heat distribution profiles of the air outlet of the rack B4 under three different conditions.
Table 1. Parameters of the base room and its equipment.

Base Room
  Dimension of base room: 9.2 m (L) × 8 m (W) × 4 m (H)
  Height of plenum: 0.45 m
  Height above the plenum: 3.55 m
  Air supply pattern: UFAD & direct air return
  Aisle configuration mode: CCA & OHA
  Dimension of CCA: 4.8 m (L) × 1.8 m (W) × 2.2 m (H)
  No. of perforated tiles: Altogether 24 in three rows
  Dimension of perforated tiles: 0.6 m (L) × 0.6 m (W)
  Room ambient humidity: Below 36%
  Room ambient temperature: Approximately 26 °C
CRACs
  Number and type: A Schneider air-cooled conditioner
  Dimensions of CRACs: 1.8 m (L) × 0.8 m (W) × 2.25 m (H)
  Distance between CCAs and CRACs: 1.6 m
  Height of CRACs above the plenum: 1.8 m
  Height of CRACs under the plenum: 0.45 m
  Dimensions of CRACs’ air outlet vent: 1.8 m (L) × 0.25 m (W)
  Area of each air vent of CRACs: 0.45 m2
  Air velocity from the CRACs: 5.3 m/s
  Set supply air temperature (SAT): 22 °C
  Actual operating SAT: Between 22.1 and 22.3 °C
  Total power per CRAC: 43.1 kW
  Cooling capacity per CRAC: 65.8 kW
  COP: 1.53
  Operation time: 24/24 h, 365 days/year
Racks & Servers
  Number of racks: Altogether 16 in 2 rows (Racks A & B)
  Dimensions of each rack: 1.2 m (L) × 0.6 m (W) × 2.2 m (H)
  The ratio of open area of doors: 65%
  Type of servers: 2 U
  Dimensions of servers: 0.8 m (L) × 0.46 m (W) × 0.09 m (H)
  Rated power of servers: (495 W × 2) × 2 & (750 W × 2) × 10
  Distance between neighboring servers: 0.09 m or 0.0045 m
(Site photos of the base data room, the CCA, the CRAC unit and its nameplate, and the racks and servers are included in the original table.)
Table 2. The summary of simulated boundary conditions and related parameter settings.

DC wall: Adiabatic
Raised floor of DC: Adiabatic
Ceiling of DC: Adiabatic
Air supply temperature of CRACs: 22 °C
Air supply speed of CRACs: 5.33 m/s
The opening rate of perforated tiles: 45%
The opening rate of rack door: 65%
Mesh type: Hexa unstructured
Number of simulation iterations: 1000
Convergence criterion of energy equation: 10^−6
Convergence criterion of other equations: 10^−3
Table 3. The installation heights analysed for the upper-mounted HPEs between the servers.

Case: 1, 2, 3, 4, 5, 6, 7, 8
Installation height (cm): 0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5
Above/Under server: Above (all cases)
Table 4. The installation heights analysed for the bottom-mounted HPEs between the servers.

Case: 9, 10, 11, 12, 13, 14, 15, 16
Distance from the bottom of the server (cm): 0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5
Above/Under server: Under (all cases)
Table 5. The results of temperature change at different points in cases 9–16.

Case: 9, 10, 11, 12, 13, 14, 15, 16
Test Point
1+++++++
2+++++++
3+++
4
5+
6++
7+
8
9+
Note: The symbol “−” represents that, in this case, the temperature of the test point is lower than that of the corresponding test point in case 0 of the original model, and the symbol “+” indicates that the temperature of the test point in the case is higher than that of the corresponding point in case 0.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Share and Cite

MDPI and ACS Style

Yuan, X.; Zhou, X.; Liang, Y.; Pan, Y.; Kosonen, R.; Lin, Z. Design and Thermal Environment Analysis of a Decentralized Cooling System with Surface-Mount Heat Pipe Exchangers on Servers in Data Centers. Buildings 2022, 12, 1015. https://doi.org/10.3390/buildings12071015

