AI workloads often require low-latency, high-bandwidth networking, resulting in dense rack layouts that overload power and thermal management systems in existing data centers.

Modern GPUs can draw more than 700 watts each, and a fully loaded AI server can consume over 10 kW. Training a large language model may require hundreds of such systems, far exceeding the 10-20 kW per rack that most data centers are designed for. Spreading the hardware across more racks to stay within those limits, however, can introduce network bottlenecks that degrade cluster performance.
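A back-of-the-envelope calculation shows why these numbers strain typical rack budgets. The sketch below uses the article's 700 W and 10 kW figures; the GPU count per server, server overhead, and servers per rack are illustrative assumptions, not figures from the article.

```python
# Rough rack power estimate for a dense AI deployment.
# Only GPU_WATTS and the ~10 kW server figure come from the article;
# everything else is an assumed, illustrative configuration.

GPU_WATTS = 700           # per the article: modern training GPU
GPUS_PER_SERVER = 8       # assumed: common 8-GPU training server
SERVER_OVERHEAD_W = 4000  # assumed: CPUs, memory, fans, NICs, etc.
SERVERS_PER_RACK = 4      # assumed dense layout

server_watts = GPU_WATTS * GPUS_PER_SERVER + SERVER_OVERHEAD_W
rack_kw = server_watts * SERVERS_PER_RACK / 1000

print(f"Per server: {server_watts / 1000:.1f} kW")  # ~9.6 kW
print(f"Per rack:   {rack_kw:.1f} kW")              # ~38.4 kW
print("Typical rack budget: 10-20 kW")
```

Even with these conservative assumptions, a single rack of AI servers lands at roughly two to four times the power density most facilities were built to supply and cool.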

Inference, the use of trained models to generate text and images, requires fewer accelerators than training, but powering dense racks and dissipating their heat efficiently remains a challenge.

Schneider Electric is urging data center operators to rethink power distribution, cooling, rack configuration, and software management to meet the demands of widespread AI adoption.

Source: Ferra
