Why consider hybrid for high density data center cooling?
High density data center cooling appears poised to move beyond its historic niches in high-performance computing (HPC) and cryptocurrency and gain greater adoption in traditional data centers. Businesses increasingly rely on data centers to store and process reams of complex information. A typical organization has more than a dozen data centers, which can range from a server room in the office to large, remote sites.
Keeping all that processing power at the optimal temperature — typically below 80 degrees Fahrenheit — requires a lot of resources. Data centers account for more than 3% of global energy usage, which is more than the energy needs of some nations. These facilities can also consume a considerable amount of water, with about 1.8 liters used for every 1 kilowatt-hour the center uses.
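To put those figures in perspective, here is a minimal back-of-the-envelope sketch in Python that applies the roughly 1.8 liters-per-kilowatt-hour figure to an example site. The 1 MW IT load and PUE of 1.5 are hypothetical illustration values, not measurements from any particular facility.

```python
# Back-of-the-envelope estimate of annual cooling-related water use.
# The 1.8 L/kWh figure comes from the article; the IT load and PUE
# below are hypothetical example values.

WATER_LITERS_PER_KWH = 1.8   # liters of water per kWh of facility energy
IT_LOAD_KW = 1_000           # hypothetical 1 MW IT load
PUE = 1.5                    # hypothetical power usage effectiveness
HOURS_PER_YEAR = 8_760

facility_kwh_per_year = IT_LOAD_KW * PUE * HOURS_PER_YEAR
water_liters_per_year = facility_kwh_per_year * WATER_LITERS_PER_KWH

print(f"Facility energy: {facility_kwh_per_year:,.0f} kWh/year")
print(f"Estimated water use: {water_liters_per_year / 1e6:,.1f} million liters/year")
```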
Interest in high density data center cooling is no longer just about density and “hot hardware”. The progress of liquid cooling is influenced by a confluence of factors that includes sustainability and edge deployments.
How can data center operators reduce both their operating costs and environmental footprints? A solution for many facilities is a hybrid approach to cooling that combines refrigerant-based direct expansion air cooling with water-based cooling.
Before we dive into this solution, let’s take a look at five innovations in modern high density data center cooling methods:
1. Direct-to-chip cooling
Liquid cooling is an innovation that has significantly transformed data center cooling and is one of the most popular cooling techniques used today. There are many different liquid cooling systems, with each new innovation providing greater efficiency. Direct-to-chip cooling is a more recent liquid cooling technique where a liquid coolant is delivered to a chip via tubes. The coolant absorbs the heat from the chip and removes it, cooling processors directly. Direct-to-chip cooling is considered one of the most effective and efficient methods for data center heat removal.
There are many benefits associated with direct-to-chip cooling, the first being reduced energy consumption, as liquid cooling decreases the amount of airflow needed by up to 90%. The heat-carrying capacity of liquid can also be up to 3,500 times greater than that of traditional air cooling. Another benefit is increased processing capacity, as equipment can support greater chip densities thanks to the more precisely targeted cooling. Additionally, the more effective removal of heat requires less space for equipment. Finally, since cooling is contained inside the enclosures, there is no need for in-row cooling units. This corresponds with a lower likelihood of equipment overheating, which is a major cause of data center downtime.
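The “up to 3,500 times” figure roughly matches the ratio of the volumetric heat capacities of water and air. The sketch below shows where that ratio comes from, assuming water as the coolant and standard room-temperature property values; it is an illustration, not a claim about any specific system.

```python
# Compare how much heat a given volume of water can carry versus the same
# volume of air, per degree of temperature rise.
# Volumetric heat capacity = density * specific heat.

WATER_DENSITY = 998.0         # kg/m^3
WATER_SPECIFIC_HEAT = 4186.0  # J/(kg*K)
AIR_DENSITY = 1.2             # kg/m^3
AIR_SPECIFIC_HEAT = 1005.0    # J/(kg*K)

water_volumetric = WATER_DENSITY * WATER_SPECIFIC_HEAT  # J/(m^3*K)
air_volumetric = AIR_DENSITY * AIR_SPECIFIC_HEAT        # J/(m^3*K)

print(f"Water: {water_volumetric:,.0f} J/(m^3*K)")
print(f"Air:   {air_volumetric:,.0f} J/(m^3*K)")
print(f"Ratio: {water_volumetric / air_volumetric:,.0f}x")  # roughly 3,500x
```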
2. Two-phase immersion cooling
Two-phase immersion cooling is another liquid cooling innovation, and refinements of this relatively new technology continue to be developed. Two-phase liquid immersion cooling involves submerging electronic components in a bath of dielectric heat transfer liquid. With a boiling point of around 50°C, this fluid is a much better heat conductor than air, water, or oil. The vapor formed as the liquid boils off the heat-generating components passively carries the heat away. In 2021, Microsoft tested a two-phase immersion technique, making it the first cloud provider to run two-phase immersion cooling in a production environment. The company is also experimenting with deploying data center “pods” in the ocean to move one step closer to self-sustaining data center units powered by renewable energy with a quick deployment time.
There are numerous benefits associated with two-phase immersion cooling, the first being higher efficiency and energy savings: the technology can deliver more than a 90% efficiency advantage over air cooling in data centers. Reliability also improves because components are not subject to temperature variations, and failure potential is reduced because no cooling fans are required, eliminating the degradation their vibration would otherwise cause. Computing density is higher as well, since heat sinks and cooling fans, which take up a relatively large amount of space, are no longer needed. Equipment can then be placed closer together, increasing computing power by up to 10 times within the same space.
3. Microchannel liquid cooling
Microchannel liquid cooling is an extension of direct-to-chip liquid cooling that adds cold plates targeting CPUs, GPUs, and memory modules directly. The sealed metal plates spread the heat generated in a device into small internal fluid channels. This distributes cooling across a large surface area, and the small fluid channels maximize contact between the flowing coolant and the heated surface.
This type of cooling is increasingly beneficial for data centers as it offers improved performance by increasing air-side heat transfer between the fins in the heat exchanger and the ambient air. It is reported to provide 20% to 40% greater overall heat transfer performance along with a drastic reduction in air-side pressure drop. The heat exchangers are also 10% to 30% smaller and up to 60% lighter, allowing for a smaller footprint in the data center. This cooling method also reduces costs because it requires less refrigerant and less material.
4. Calibrated vectored cooling (CVC)
Calibrated vectored cooling (CVC) was designed by IBM in 2005 for use with its blade server series and other products in close proximity to its computing equipment. CVC directs refrigerated air to the parts of high-density systems that run the hottest. This lowers device temperatures and limits the number of internal cooling fans needed to cool the equipment. CVC has greatly improved cooling efficiency by optimizing cool airflow in computers and servers. As a result, power is better allocated throughout the data center: only equipment that needs to be cooled is cooled, as opposed to cooling all equipment regardless of its temperature.
5. Rear Door Heat Exchange (RDHx)
For racks with densities of 20 kW or higher, RDHx units are low-maintenance, flexible solutions that efficiently manage heat rejection and cooling within high-performing data centers. If a single unit is unavailable due to maintenance or repair, the IT equipment in that rack can still operate efficiently because the whole room is, in effect, the cold aisle, and adjacent racks can supply enough cold air to cool the intakes of surrounding equipment.
RDHx units fall into two categories: passive and active heat exchangers. Each is a slim panel that attaches to the back of the cabinet using a customizable adaptive frame, adding up to 14 inches of cabinet depth. Neither delivers liquid directly to the server or chip, but both employ coils filled with chilled water to capitalize on water’s high thermal transfer properties and to reduce fan energy by removing heat close to the IT equipment that produces it.
Passive RDHx utilize the server fans to push the heated exhaust air across a liquid-filled coil that absorbs the heat before the air is returned to the data hall.
Active RDHx feature additional fans to pull the heated exhaust air across the liquid-filled coil that removes the heat before the air is returned to the data hall.
After the chilled water absorbs the server heat, it journeys back to the chilled water plant to be cooled down and then returned to the data hall to repeat the cycle through the rear door heat exchangers.
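As a rough illustration of the chilled-water loop described above, the sketch below estimates the water flow needed to absorb a rack’s heat at a given temperature rise across the coil. The 30 kW rack load and 10°C rise are hypothetical example values.

```python
# Estimate the chilled-water flow required by a rear door heat exchanger
# using Q = m_dot * cp * dT. Rack load and temperature rise are
# hypothetical example values, not vendor specifications.

RACK_LOAD_KW = 30.0          # heat to remove (hypothetical)
WATER_DELTA_T_C = 10.0       # water temperature rise across the coil (hypothetical)
WATER_SPECIFIC_HEAT = 4.186  # kJ/(kg*K)
WATER_DENSITY = 998.0        # kg/m^3

mass_flow_kg_s = RACK_LOAD_KW / (WATER_SPECIFIC_HEAT * WATER_DELTA_T_C)
volume_flow_l_min = mass_flow_kg_s / WATER_DENSITY * 1000 * 60

print(f"Mass flow:   {mass_flow_kg_s:.2f} kg/s")
print(f"Volume flow: {volume_flow_l_min:.1f} L/min")
```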
What is precision cooling and why is it important in data centers?
Precision cooling is air conditioning that is designed specifically for a process environment such as IT equipment in data centers and computer rooms. With precision cooling, you’re using only the energy that you need.
In the case of colocation providers, which are increasingly prevalent, precision cooling is even more critical because they’re very much trying to manage their allocation of power, especially in a multitenant type of operation.
What is the impact of increased server rack density?
As densities grow to keep up with demand, there are higher airflow requirements per kilowatt of cooling. Those higher airflows require more control of various fans, which are typically configured in an array.
Now, there are airflow requirements across an array of fans as opposed to one or two fans. Efficient control of that equipment becomes a critical piece of designing and operating your data center. This is only one part of high density data center cooling.
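To make the airflow point concrete, here is a small sketch of the standard sensible-heat relationship: required airflow scales with the load and inversely with the air temperature rise across the servers. The 15°C rise is a hypothetical example; higher rack density at the same rise means proportionally more airflow, which is why fan arrays and their controls grow with density.

```python
# Approximate the airflow required for a given IT load using
# Q = rho * cp * V_dot * dT. The air temperature rise across the
# equipment is a hypothetical example value.

AIR_DENSITY = 1.2          # kg/m^3
AIR_SPECIFIC_HEAT = 1.005  # kJ/(kg*K)
DELTA_T_C = 15.0           # air temperature rise across the IT equipment (hypothetical)

def airflow_m3_per_s(load_kw: float) -> float:
    return load_kw / (AIR_DENSITY * AIR_SPECIFIC_HEAT * DELTA_T_C)

for load in (1, 10, 40):  # kW per rack
    flow = airflow_m3_per_s(load)
    print(f"{load:>3} kW -> {flow:.2f} m^3/s ({flow * 2118.88:,.0f} CFM)")
```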
What will the effect be on a data center’s carbon footprint and water usage?
A more efficient high density data center cooling process naturally means less power consumption. Efficient technology can also reduce the amount of refrigerant required, which further lowers the environmental impact.
The industry is migrating to refrigerants with lower global warming potential, but that comes with tradeoffs in the form of higher operating pressures and potentially higher flammability. As with everything, it’s a balancing act, but refrigerant management is key.
Whether you’re talking about an air-cooled chiller or a DX CRAC [direct expansion computer room air conditioning] unit-cooled facility, refrigerant management becomes important because the more total refrigerant that you have on-site, the higher your carbon footprint is going to be.
Reducing your total refrigerant usage on-site is one piece of operating more sustainably. The other piece of it is to reduce some of your risk associated with any leakage of refrigerant through pipes. Leakage is always a risk just through routine maintenance.
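One way to see why charge size and leakage matter is to translate an annual leak into CO2-equivalent emissions using the refrigerant’s global warming potential (GWP). The charge, leak rate, and GWP below are hypothetical illustration values, not figures for any specific refrigerant or site.

```python
# Rough CO2-equivalent impact of annual refrigerant leakage:
# emissions = total charge * annual leak rate * GWP.
# All inputs are hypothetical example values.

CHARGE_KG = 100.0        # total refrigerant charge on-site (hypothetical)
ANNUAL_LEAK_RATE = 0.05  # 5% of charge lost per year (hypothetical)
GWP = 1500               # global warming potential of the refrigerant (hypothetical)

leaked_kg = CHARGE_KG * ANNUAL_LEAK_RATE
co2e_tonnes = leaked_kg * GWP / 1000

print(f"Leaked refrigerant: {leaked_kg:.1f} kg/year")
print(f"CO2-equivalent:     {co2e_tonnes:.1f} tonnes/year")
```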
Data center environmental management also concerns water consumption. The strategies you choose for your high density data center cooling infrastructure play a large part in how you achieve that, because economization is a way to reduce your carbon footprint or your water utilization.
There are lots of different economization technologies out there, but essentially there are air-side and water-side technologies. On the air side, you can introduce outside air on a cold, dry day, mix it with your recirculated indoor air and reduce your carbon footprint through reduced energy consumption.
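The mixing described above is just a weighted average of the outdoor and return air temperatures. Here is a hedged sketch with hypothetical temperatures and outside-air fractions, purely to illustrate the principle:

```python
# Mixed-air temperature when blending outside air with recirculated return
# air during air-side economization. All temperatures and the outside-air
# fractions are hypothetical example values.

def mixed_air_temp_c(t_outside: float, t_return: float, outside_fraction: float) -> float:
    return outside_fraction * t_outside + (1 - outside_fraction) * t_return

T_OUTSIDE_C = 5.0   # cold, dry day (hypothetical)
T_RETURN_C = 30.0   # hot-aisle return air (hypothetical)

for fraction in (0.25, 0.5, 0.75):
    temp = mixed_air_temp_c(T_OUTSIDE_C, T_RETURN_C, fraction)
    print(f"{fraction:.0%} outside air -> {temp:.1f} C supply")
```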
Another method is water-side economization. It introduces more infrastructure, but it can end up being highly efficient.
Why is a hybrid solution that combines air and water cooling the best approach?
Hybrid high density data center cooling can provide the best of both worlds in terms of reliability, scalability and efficiency. Air-side economization is challenging due to unpredictable outdoor conditions and limited suitable air quality hours. However, water-side economization is more predictable and easier to achieve.
Is there an example of a successful application of hybrid high density data center cooling?
One Wilshire, a 50-year-old office building in Los Angeles, was converted for use as a data center. The client did the research to understand its water and energy consumption needs.
The Data Aire solution used a central condenser water system supplying 67-degree water and a CRAC system with gForce Ultra technology providing 72-degree air.
The gForce system has two cooling coils in series: a refrigerant coil that uses the compressors to provide cooling, and a chilled water coil, referred to as the Energy Saver coil, that uses water from the cooling tower. The chilled water coils and the condensers are oversized to maximize economization hours. Most importantly, the system includes a variable speed compressor driven by a VFD, which saves energy.
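To illustrate the economizer-first idea behind a two-coil arrangement like this, here is a simplified staging sketch. It is an assumption-laden illustration, not the actual gForce Ultra control logic: the Energy Saver coil handles as much load as cooling tower conditions allow, and the variable speed DX compressor trims the remainder.

```python
# Hypothetical, simplified staging of a two-coil hybrid CRAC: the water-side
# economizer coil covers as much of the load as conditions permit, and the
# variable-speed DX compressor makes up any shortfall.
# NOT the vendor's actual control sequence; for illustration only.

def stage_cooling(load_kw: float, economizer_capacity_kw: float) -> dict:
    economizer_kw = min(load_kw, economizer_capacity_kw)
    dx_kw = load_kw - economizer_kw
    return {"economizer_kw": economizer_kw, "dx_kw": dx_kw}

# Cool day: tower water covers the full load (full economization).
print(stage_cooling(load_kw=100, economizer_capacity_kw=120))
# Warm day: the DX compressor trims the shortfall (partial economization).
print(stage_cooling(load_kw=100, economizer_capacity_kw=60))
```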
A hybrid solution such as the one at One Wilshire strikes a balance between the efficiency of evaporative water cooling and the water consumption of an open-loop system. When combined with variable speed DX technology in the CRAC units, the efficiency benefits outweigh the scarce water consumed in generating the power that serves the site.
The system operates 260 days a year with full economization and 105 days with partial economization, achieving a PUE of 1.2 at full load.
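For reference, PUE is simply total facility energy divided by IT equipment energy. A minimal sketch with hypothetical annual figures shows what a PUE of 1.2 implies:

```python
# Power usage effectiveness (PUE) = total facility energy / IT equipment energy.
# The annual energy figures below are hypothetical examples; a PUE of 1.2
# means cooling, power distribution, and other overheads add 20% on top
# of the IT load.

it_energy_kwh = 10_000_000       # hypothetical annual IT equipment energy
overhead_energy_kwh = 2_000_000  # hypothetical cooling + distribution overhead

pue = (it_energy_kwh + overhead_energy_kwh) / it_energy_kwh
print(f"PUE = {pue:.2f}")  # -> 1.20
```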
One Wilshire also is a prime example of adaptive reuse of commercial real estate, and it was made possible in part by the use of a hybrid cooling system.
Conclusion
Like the data center and cloud industries, hybrid data center cooling will likely vary by region and use case. Increasingly, entities will optimize processing for profitable and fast-growing applications by increasing watts per square foot. As those ratios begin to exceed air cooling capabilities, liquid cooling may become the only option. In the meantime, there are seemingly endless possibilities and benefits to using a hybrid cooling model.