The rise of AI requires so much supercomputing power that companies are turning to liquid cooling, as much as 3,000 times more effective than using air, to keep it all from overheating



The inside of a data center or computer room is not a pleasant place to be. The server fans are so loud that workers have to wear ear protection if they spend any extended time there. It's hot from the server exhaust, and the smell of hot metal wafting off the heat sinks is unpleasant.

All of this is due to the way the servers are cooled: air conditioning. Air cooling has long been the standard means of keeping servers cool, but liquid cooling is becoming an increasingly prevalent alternative for heat removal.

“It’s happening more often because of the shift in types of workloads we’ve seen in the past five years,” said Peter Rutten, research director in the infrastructure, systems, platforms and technologies group at IDC.

In fact, liquid cooling is slowly but surely becoming the standard in the modern data center, to the point of being mandatory for certain use cases. Necessity has driven adoption, along with overcoming the fear that a leak could destroy server equipment, a logical concern on the part of IT managers who were not up to speed with advances in liquid cooling.

“With AI becoming a more typical workload in data centers, especially AI training and AI analytics, AI is moving out of academic labs and into data centers because businesses are looking to AI to solve the problems they have,” Rutten said.

Lenovo jumped out in front of the liquid cooling effort with its Neptune line of liquid-cooled systems, which started in 2012. Lenovo's first big effort in liquid cooling was helping the Leibniz-Rechenzentrum (LRZ) supercomputer center in Munich, Germany, reduce its power bill.

“They were doing it purely for cost reasons. German power costs are among the highest in the world,” said Scott Tease, general manager for HPC & AI business at Lenovo Data Center Group. The result was a reduction in electrical consumption by up to 40 percent.

“So it was a huge, huge cost savings story, you know, that quickly moved to how great this was from an eco standpoint, from a sustainability standpoint,” said Tease.

Air cooling relies on dedicated units known as computer room air conditioners (CRACs). These systems can sometimes draw more electricity than the servers themselves.

Data center efficiency is measured by Power Usage Effectiveness (PUE), the ratio of total facility power to the power delivered to the IT equipment. If your facility draws a total of 100 kilowatts and 50 kW goes to cooling, your PUE is 2.0. If you draw 100 kW and use just 20 kW for cooling, your PUE is 1.25. The lower the number the better, and it can never get below 1.0.
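The arithmetic above can be sketched in a few lines. This is a minimal illustration, not part of any standard tool; the function name and the assumption that cooling is the only non-IT load are mine:

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT equipment power.

    A hypothetical perfect facility, with zero cooling or other overhead,
    would score exactly 1.0; real facilities are always above that.
    """
    if it_equipment_kw <= 0 or total_facility_kw < it_equipment_kw:
        raise ValueError("total power must be >= IT power, and IT power positive")
    return total_facility_kw / it_equipment_kw

# The two examples from the text, treating cooling as the only overhead,
# so IT load = total load minus cooling load:
print(pue(100, 100 - 50))  # 50 kW of cooling -> PUE 2.0
print(pue(100, 100 - 20))  # 20 kW of cooling -> PUE 1.25
```

The same ratio explains why the supercomputer figures below are impressive: a PUE of 1.034 means only about 3.3% of the facility's power goes to anything other than computing.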

NREL's Eagle consumes a total of 888 kilowatts and yet has a PUE of 1.034. That means virtually all of its power consumption is in the IT gear, not cooling. NYU's supercomputer, called Green, has a PUE of 1.08. At NYU's old facility, the PUE of the computer Green replaced was 2.0. "My energy cost 15 cents in Manhattan. That's 30 cents …"

Source: Business Insider


