202011-fever-cooling-pads-for-data-centers

Does anyone make “fever cooling pads” for data centers?


In this era of the IoT, we all suffer to varying degrees from power-shortage anxiety, constantly worrying about whether our "smart" phones, wristbands, door locks and other devices have enough charge.

Since these devices at the edge of the Internet of Things are usually small and can only be powered by batteries, a great deal of attention is focused on the design of their power supply systems. However, the average user probably doesn't realize that the other end of the Internet of Things, the data centers that make up the cloud, faces power problems that are just as worrying, if not more so: these facilities consume enormous amounts of energy.

Data centers: big energy consumers

As the "cloud brain" of the digital age, the data center is responsible for a large amount of data processing and computing, and behind this high-density data processing is ever-increasing power consumption. In China, for example, the growth rate of data center power consumption nationwide has exceeded 12% for eight consecutive years. By 2017, the total power consumption of data centers had already reached 120-130 billion kWh, more than the combined annual output of the Three Gorges Dam and the Gezhouba power plant. It is estimated that by 2020 this figure will reach 296.2 billion kWh, and by 2025 it will soar to 384.22 billion kWh.

The same trend is also occurring on a global scale, with some analysts predicting that by 2025, data centers will account for 33% of global energy consumption, more than any other sector. With the development of AI and other technologies and applications that demand ever more computing power, the actual growth rate of data center energy consumption is likely to be even faster than projected.

Since data centers consume so much energy, great efforts are being invested to make them more energy-efficient. For this reason, the energy consumption composition of the typical data center has been analyzed. Here are some key findings.

  1. In a traditional data center, IT equipment is the biggest consumer, accounting for about 50% of total energy consumption. This is the portion of energy used directly for data computing and processing.
     
  2. Coming in at second place, the energy consumption of the cooling system accounts for about 35%. Its main purpose is to cool down operating IT equipment and ensure that it functions normally within the specified operating temperature range.
     
  3. Next is the power distribution system, at about 10%. This is mainly the energy consumed by UPS equipment and the electrical losses incurred in the transmission and transformation stages of the distribution system.
     
  4. Lastly is the energy consumed by lighting and other complementary support systems of the data center, accounting for about 5%.


Figure 1: Energy consumption composition of traditional data centers


“Cooling down” the data center

For all of these reasons, an indicator has been developed to measure the energy utilization efficiency of data centers: PUE (Power Usage Effectiveness). It is defined as the total annual power consumption of the data center divided by the annual power consumption of its IT equipment – in other words, the ratio of the sum of items 1-4 in the energy consumption breakdown above to the value of item 1. The lower the PUE, the less power the data center consumes outside of IT equipment, and the greater the energy savings.
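The definition above can be made concrete with a quick calculation. This is only an illustrative sketch using the approximate shares quoted earlier in this article (50% IT, 35% cooling, 10% distribution, 5% support):

```python
# Illustrative PUE calculation using this article's rough energy breakdown.
# The shares are approximations, not measurements of any specific facility.
breakdown = {
    "it_equipment": 0.50,   # item 1: servers, storage, network
    "cooling": 0.35,        # item 2: cooling system
    "distribution": 0.10,   # item 3: UPS and transmission losses
    "support": 0.05,        # item 4: lighting and other support systems
}

# PUE = total facility energy / IT equipment energy
pue = sum(breakdown.values()) / breakdown["it_equipment"]
print(f"PUE = {pue:.2f}")  # → PUE = 2.00
```

Note that this textbook breakdown yields a PUE of exactly 2, consistent with the observation below that many older data centers run at a PUE above 2.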

Understandably, lowering the PUE value has become a matter of utmost concern for data center operators and industry management departments. For example, in February 2019, the Ministry of Industry and Information Technology, National Government Offices Administration and National Energy Administration of China jointly issued the "Guiding Opinions on Strengthening the Construction of Green Data Centers". It stated that "By 2022, the average energy consumption of data centers will basically have reached the advanced international level, and the power efficiency value of newly built large and super large data centers will have reached the target of 1.4 or less.” However, the fact that many old data centers built in the past have a PUE higher than 2 shows that there is still a substantial gap between the reality and the ideal.

Judging from the energy consumption composition of the data center, we can see that one of the most direct ways to significantly reduce PUE is to reduce the energy consumption of the cooling system, since it accounts for the bulk of energy consumed by non-IT equipment. To this end, many novel ideas have been proposed and drastic measures adopted. These include the use of liquid cooling (water cooling) with higher heat dissipation efficiency, or simply building the data center in areas with lower ambient temperatures – even in the Arctic and on the seabed. Other advanced approaches include utilizing AI technology to manage energy consumption. For instance, Google claimed to have established a neural network model for PUE, using machine learning-based data center energy management methods to reduce total cooling power consumption by about 40%, thereby reducing the total power consumption of the data center by approximately 15%.
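The two Google figures quoted above can be roughly reconciled using this article's earlier estimate that cooling accounts for about 35% of total consumption. This is a back-of-the-envelope sketch, not Google's actual methodology:

```python
# Rough sanity check of the quoted Google figures, using this article's
# estimate that cooling is ~35% of a traditional data center's energy.
cooling_share = 0.35        # cooling's share of total energy (Figure 1)
cooling_reduction = 0.40    # claimed ML-driven cut in cooling energy

total_savings = cooling_share * cooling_reduction
print(f"Total savings ≈ {total_savings:.0%}")  # → Total savings ≈ 14%
```

A 40% cut in a 35% slice works out to roughly 14% of the total, in line with the "approximately 15%" figure cited.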

Starting with the power management of IT equipment

All of the above approaches, whether big or small, have one thing in common: they all seek to remove the heat generated during the operation of data center IT equipment. There are, however, more fundamental measures to be taken, starting with the IT equipment itself, such as servers. Improving their energy efficiency through effective power management would minimize the heat generated by energy loss in the first place. Only when the front-end source operates more efficiently will the demand for downstream cooling and heat dissipation decrease.

Advances in semiconductor process technology are the most fundamental way to reduce the energy consumption of the ICs responsible for data computing and processing in servers. However, as is well known, Moore's Law has slowed now that process nodes have entered the nanometer scale, and in some respects progress has been slow and plagued by setbacks. Since these efforts alone will not meet data centers' current needs for energy conservation and efficiency, other means must be found.

For example, the power required by a server's main processor is converted and conditioned by peripheral power management devices. In the past, due to restrictive form factors, these devices could not be placed near the processor chip, and the longer wiring between the power management device and the processor led to additional power loss and heat generation. Improving the efficiency of these supporting power management devices, and shortening the path between them and the load, is therefore critical. Some manufacturers have already met with success.
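The wiring-loss argument follows directly from Ohm's law: conduction loss in a power trace is P = I² × R, so it grows with the square of the load current and linearly with trace resistance. The numbers below are purely hypothetical, chosen only to illustrate why placing the converter next to the load helps:

```python
# Hypothetical illustration of board-trace conduction loss (P = I^2 * R).
# The resistance values are assumptions for the sketch, not measured data.
current_a = 6.0          # load current, matching the 6 A class of parts discussed
trace_mohm_long = 2.0    # assumed resistance of a long power trace (milliohms)
trace_mohm_short = 0.5   # assumed resistance with the converter at the load point

loss_long = current_a ** 2 * trace_mohm_long / 1000    # watts
loss_short = current_a ** 2 * trace_mohm_short / 1000  # watts

print(f"long trace:  {loss_long:.3f} W")   # → long trace:  0.072 W
print(f"short trace: {loss_short:.3f} W")  # → short trace: 0.018 W
```

Cutting the trace resistance to a quarter cuts this loss to a quarter as well; multiplied across the dozens of rails in a server and the thousands of servers in a data center, such savings add up.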

One success story is TDK's newly developed μPOL™ DC-DC converter, which uses 3D packaging technology to integrate power management ICs, inductors and other components into a package of just 3.3 mm × 3.3 mm × 1.5 mm that can support 6 A of output current. Compared with similar products, the solution is half the size, while its power density is as high as 1 W/mm³.

Figure 2: μPOL™ DC-DC converter in a 3D package is smaller in size and higher in power density (Image source: TDK)


This type of miniaturized design enables the μPOL™ DC-DC converter to be as close as possible to the load point during system design, avoiding energy consumption due to lengthy wiring. Moreover, the device itself offers excellent heat dissipation and can be mounted on the back of a circuit board with poor air flow, thereby further improving design flexibility and saving the space occupied by the entire system.

Figure 3: Comparison of μPOL™ DC-DC converters with previous products (Image source: TDK)


The future world will be data-driven. The harsh reality, however, is that to achieve this we must first have enough energy to "drive" the data, which will require a series of advances in power management technology. As data centers continue to grow their share of global energy consumption, every 1% or even 0.1% gain in efficiency through better power management will serve as a "cooling pad" for the data center, improving its "health" and contributing to the energy-saving and emission-reduction commitments of the entire global community.
