
Nothing more advanced: Introducing the Nvidia GH200 Grace Hopper Superchip

Nvidia GH200 Grace Hopper Superchip-powered systems join more than 400 system configurations built on combinations of NVIDIA's latest CPU, GPU and DPU architectures, including Grace, Hopper, Ada Lovelace and BlueField. The GH200 was designed primarily to help meet the growing demand for generative AI.

Speaking at Computex, Nvidia CEO Jensen Huang announced new systems, partners and additional details about the GH200 Grace Hopper Superchip, which combines the Arm-based Grace CPU and Hopper GPU architectures using NVLink-C2C interconnect technology.

GH200 Grace Hopper Superchip features

At the heart of the GH200 Grace Hopper Superchip, which continues Nvidia's combined CPU+GPU design, the NVLink Chip-to-Chip (C2C) interconnect links the Grace CPU and Hopper GPU with a total bandwidth of up to 900 gigabytes per second (GB/s), 7 times that of the PCIe Gen5 lanes commonly used in accelerated systems. That bandwidth provides ample headroom for demanding generative AI and HPC applications.
This interconnect allows applications to use high-bandwidth CPU memory alongside GPU memory. With up to 480GB of LPDDR5X CPU memory per Grace Hopper Superchip, the GPU has direct access to 7 times more fast memory than HBM3 alone provides. Together with the NVIDIA NVLink Switch System, all GPU threads running on up to 256 NVLink-connected GPUs can access up to 150 terabytes (TB) of high-bandwidth memory. This design will be used in the new DGX supercomputer.
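As a rough sanity check of the figures above, the arithmetic can be sketched as follows. The PCIe Gen5 x16 bandwidth (~128 GB/s combined) and the per-superchip memory split (~480GB LPDDR5X + ~96GB HBM3) are assumptions not stated in the article:

```python
# Back-of-envelope check of the bandwidth and memory figures quoted above.
# Assumption: a PCIe Gen5 x16 link moves ~64 GB/s per direction (~128 GB/s combined).
nvlink_c2c_gbs = 900          # NVLink-C2C total bandwidth per superchip
pcie_gen5_x16_gbs = 128       # assumed PCIe Gen5 x16 combined bandwidth

print(f"NVLink-C2C vs PCIe Gen5 x16: ~{nvlink_c2c_gbs / pcie_gen5_x16_gbs:.0f}x")

# Assumption: ~480 GB LPDDR5X + ~96 GB HBM3 addressable per superchip.
per_chip_gb = 480 + 96
num_gpus = 256                # maximum NVLink Switch System domain
total_tb = per_chip_gb * num_gpus / 1000
print(f"Aggregate memory across {num_gpus} GPUs: ~{total_tb:.0f} TB")
```

Under these assumptions the totals land at roughly 7x PCIe Gen5 and ~147 TB of aggregate memory, in the same ballpark as the up-to-150 TB figure quoted above.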

Energy efficiency

NVIDIA's Grace CPU delivers twice the performance per watt of traditional x86-64 platforms. According to Nvidia, it is the fastest Arm data center processor in the world. The Grace CPU combines 72 Neoverse V2 Armv9 cores with up to 480GB of server-grade LPDDR5X memory with ECC. This design strikes a balance between bandwidth, energy efficiency, capacity and cost. Compared to an eight-channel DDR5 design, the Grace CPU's LPDDR5X memory subsystem provides up to 53 percent more bandwidth at one-eighth the power per gigabyte per second.
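To make the memory-subsystem comparison concrete, the arithmetic implied by the 53-percent and one-eighth figures can be sketched as follows (the ratios are taken from the claim above; everything derived from them is illustrative):

```python
# Implications of the quoted LPDDR5X vs. eight-channel DDR5 comparison:
# 1.53x the bandwidth at 1/8 the power per GB/s of bandwidth.
bandwidth_ratio = 1.53        # Grace LPDDR5X vs. 8-channel DDR5 bandwidth
power_per_gbs_ratio = 1 / 8   # power drawn per GB/s of bandwidth

# Bandwidth per watt improves by the inverse of the per-GB/s power ratio:
bandwidth_per_watt = 1 / power_per_gbs_ratio               # 8x
# Total memory power relative to DDR5 = more bandwidth x less power per GB/s:
total_power_ratio = bandwidth_ratio * power_per_gbs_ratio  # ~0.19

print(f"Bandwidth per watt: {bandwidth_per_watt:.0f}x")
print(f"Total memory-subsystem power: ~{total_power_ratio:.2f}x of DDR5")
```

In other words, if both quoted ratios hold, the Grace memory subsystem would deliver 8x the bandwidth per watt and draw roughly a fifth of the total memory power of an eight-channel DDR5 design while moving more data.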

Performance and Speed

On the GPU side of the GH200 Grace Hopper Superchip is the Hopper H100. The H100 is Nvidia’s ninth generation data center GPU, bringing huge performance leaps in large-scale AI and HPC workloads compared to the previous generation Nvidia A100 Tensor Core GPU.
Based on the new Hopper GPU architecture, the NVIDIA H100 GPU features many innovations: The new fourth-generation Tensor Cores perform matrix calculations faster than ever before across a much wider range of AI and HPC tasks. The new Transformer Engine enables the H100 to deliver up to 9x faster AI training and up to 30x faster AI inference than the previous GPU generation. The Secure Multi-Instance GPU (MIG) splits the GPU into isolated, right-sized instances to maximize quality of service (QoS) for smaller workloads.

Systems with the GH200 Grace Hopper Superchip are expected to reach the market later this year. Global hyperscalers and supercomputing centers in Europe and the US will be among the first customers with access to GH200-powered systems.
