IT Brief UK - Technology news for CIOs & IT decision-makers

HPE unveils fanless liquid cooling for AI data centres


Hewlett Packard Enterprise (HPE) has announced the development of what it describes as the industry's first fully fanless direct liquid cooling system architecture, aimed at improving the energy and cost efficiency of large-scale artificial intelligence (AI) deployments.

This new architecture, showcased by HPE, is designed to address the mounting power demands of next-generation AI systems, moving away from conventional cooling techniques and fully embracing direct liquid cooling. According to the company, this advancement reduces the cooling power necessary per server blade by 37%, with the associated benefits of lowering utility costs, decreasing carbon emissions, and minimising data centre noise from fans.

HPE has a history of innovation in direct liquid cooling, a technology now finding wider application in AI workloads due to its effectiveness in cooling high-power systems. This expertise has contributed to HPE's systems powering seven of the ten most energy-efficient supercomputers globally, as ranked on the Green500 list.

Antonio Neri, President and CEO of HPE, stated, "As organisations embrace the possibilities created by generative AI, they also must advance sustainability goals, combat escalating power requirements, and lower operational costs. The architecture we unveiled today uses only liquid cooling, yielding greater energy and cost efficiency advantages than the alternative hybrid cooling solutions on the market. HPE's expertise deploying the world's largest liquid-cooled IT environments and our market leadership spanning several decades put us in excellent position to continue to capture AI demand."

The architecture is built on four key elements: an 8-element cooling design that extends liquid cooling across components including the server blade, network fabric, and coolant distribution unit (CDU); a high-density, high-performance system; an integrated, scalable network fabric; and an open system design that allows flexibility in the choice of accelerators.

Beyond the savings in cooling power, this approach supports greater server cabinet density, reducing the required data centre floor space by 50%.

During the announcement, executives including Antonio Neri; Fidelma Russo, Executive Vice President, General Manager of Hybrid Cloud and HPE CTO; and Neil MacDonald, Executive Vice President and General Manager of Server, highlighted how HPE's portfolio provides the components needed to fulfil the opportunities presented by AI, networking, and hybrid cloud solutions.
