
NVIDIA Unveils Advances in Artificial Intelligence and Supercomputing at SC 2024

NVIDIA revealed new infrastructure, hardware, and resources for scientific research and enterprise at the International Conference for High Performance Computing, Networking, Storage, and Analysis (SC24), held November 17-22 in Atlanta. Key among these announcements was the upcoming general availability of the H200 NVL AI accelerator.

The new Hopper chip will arrive in December

NVIDIA announced at a press briefing on November 14 that platforms built with the H200 NVL PCIe GPU will be available in December 2024. Enterprise customers can consult an enterprise reference architecture for the H200 NVL. Purchase of the new enterprise-scale GPU includes a five-year subscription to the NVIDIA AI Enterprise software platform.

Dion Harris, director of accelerated computing at NVIDIA, said during the briefing that the H200 NVL is ideal for lower-power data centers (under 20 kW) and air-cooled accelerator rack designs.

The H200 NVL is intended for lower-power HPC and AI workloads. Image: NVIDIA

“Companies can tune LLMs in hours” with the new GPU, Harris said.

The H200 NVL offers a 1.5x memory increase and a 1.2x bandwidth increase compared with the NVIDIA H100 NVL, the company said.

Dell Technologies, Hewlett Packard Enterprise, Lenovo, and Supermicro will support the new PCIe GPU. It will also appear on platforms from Aivres, ASRock Rack, GIGABYTE, Inventec, MSI, Pegatron, QCT, Wistron, and Wiwynn.

SEE: Companies like Apple are working hard to build a workforce of chipmakers.

Grace Blackwell chip launch proceeding smoothly

Harris also noted that partners and suppliers now have the NVIDIA GB200 NVL4 (Grace Blackwell) chip in hand.

“The Blackwell launch is going smoothly,” he said.

Blackwell chips are expected to be sold out over the next year.

Next phase of real-time Omniverse simulations revealed

On the manufacturing front, NVIDIA launched the Omniverse Blueprint for Real-Time CAE Digital Twins, now in early access. This new reference pipeline shows how researchers and organizations can accelerate real-time simulations and visualizations, including real-time virtual wind tunnel testing.

Powered by NVIDIA NIM AI microservices, the Omniverse Blueprint for Real-Time CAE Digital Twins enables real-time execution of simulations that typically take weeks or months. The capability will be on display at SC24, where Luminary Cloud will demonstrate how it can be applied to a fluid dynamics simulation.

“We created Omniverse so that everything can have a digital twin,” said Jensen Huang, founder and CEO of NVIDIA, in a press release.

“By integrating NVIDIA Omniverse Blueprint with Ansys software, we enable our customers to tackle increasingly complex and detailed simulations faster and more accurately,” said Ajei Gopal, president and CEO of Ansys, in the same press release.

CUDA-X library updates accelerate scientific research

NVIDIA’s CUDA-X libraries help accelerate real-time simulations. These libraries are also receiving updates aimed at scientific research, including changes to CUDA-Q and the release of a new version of cuPyNumeric.

Dynamics simulation functionality will be added to CUDA-Q, NVIDIA’s development platform for building quantum computers. The goal is to run quantum simulations on practical timescales, such as an hour rather than a year. Google is working with NVIDIA to create representations of its qubits using CUDA-Q, “bringing them closer to the goal of realizing practical, large-scale quantum computing,” Harris said.
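For readers unfamiliar with the platform, CUDA-Q exposes a Python API for defining and simulating quantum kernels. The minimal sketch below shows that basic API running a two-qubit circuit on a simulator backend; it does not exercise the newly announced dynamics features, and is only an illustration of how CUDA-Q programs are written.

```python
# Minimal CUDA-Q (Python) sketch: build and sample a two-qubit Bell state on a
# simulator backend. Basic-usage illustration only, not the new dynamics APIs.
import cudaq

@cudaq.kernel
def bell():
    qubits = cudaq.qvector(2)      # allocate two qubits
    h(qubits[0])                   # put qubit 0 into superposition
    x.ctrl(qubits[0], qubits[1])   # entangle via controlled-X
    mz(qubits)                     # measure all qubits

counts = cudaq.sample(bell, shots_count=1000)
print(counts)  # expect roughly even counts of "00" and "11"
```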

NVIDIA also announced the latest version of cuPyNumeric, its accelerated computing library for scientific research. Designed for scientific environments that typically run NumPy programs on a single CPU node, cuPyNumeric allows such projects to scale to thousands of GPUs with minimal code changes. It is currently used at selected research institutions.
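As a rough illustration of the "minimal code changes" claim, the hypothetical sketch below assumes cuPyNumeric's advertised drop-in compatibility with NumPy: swapping the import line is the only change to an otherwise ordinary NumPy program. The workload itself is an invented example, not taken from NVIDIA's announcement.

```python
# Hypothetical NumPy-style workload; per NVIDIA, only the import line changes
# when moving from "import numpy as np" to cuPyNumeric.
import cupynumeric as np  # drop-in replacement for NumPy

# Jacobi-style averaging step over a large 2D grid, written with ordinary
# NumPy slicing; cuPyNumeric distributes the arrays across available GPUs.
grid = np.ones((4096, 4096))
interior = 0.25 * (grid[:-2, 1:-1] + grid[2:, 1:-1] +
                   grid[1:-1, :-2] + grid[1:-1, 2:])
print(float(interior.mean()))
```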
