
Nvidia plays big in AI: Introducing the GH200 Grace Hopper with HBM3e memory

At the SIGGRAPH event, Nvidia introduced the new-generation GH200 Grace Hopper Superchip platform, the world's first with HBM3e memory, aimed at artificial intelligence and high-performance computing. The new GH200 uses the same Grace CPU and the same GH100 Hopper GPU that powers the H100, currently Nvidia's most powerful and popular AI product, but offers up to three times the bandwidth and more than three times the memory capacity of the current generation.

The new-generation GH200 Grace Hopper is very ambitious

The new GH200 Grace Hopper Superchip pairs a 72-core Grace CPU equipped with 480 GB of ECC LPDDR5X memory with a GH100 compute GPU carrying 141 GB of HBM3e memory, arranged as six 24 GB stacks on a 6,144-bit memory interface. Although 144 GB is physically installed, only 141 GB is made accessible, most likely to improve manufacturing yields. Nvidia also offers the platform in a dual configuration, in which all of these figures double.
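As a rough sanity check, the headline bandwidth can be estimated from the stated interface width. The per-pin data rate used in the sketch below (~6.5 GT/s, a plausible early HBM3e figure) is an assumption, not a number from Nvidia's announcement:

# Back-of-the-envelope check of the GH200 (HBM3e) memory figures.
# Assumption: ~6.5 GT/s per pin, a plausible early HBM3e data rate
# (not stated in the announcement).

interface_width_bits = 6144   # 6,144-bit memory interface
pin_rate_gtps = 6.5           # assumed per-pin data rate in GT/s

# Total bits per second across the interface, divided by 8 for bytes.
bandwidth_gb_s = interface_width_bits * pin_rate_gtps / 8
print(f"Theoretical bandwidth: ~{bandwidth_gb_s / 1000:.1f} TB/s")  # ~5.0 TB/s

stacks, stack_capacity_gb = 6, 24
physical_gb = stacks * stack_capacity_gb   # 144 GB installed
usable_gb = 141                            # 141 GB exposed to software
print(f"Installed: {physical_gb} GB, usable: {usable_gb} GB")

# Dual configuration: two Superchips, so both figures double.
print(f"Dual config: {2 * usable_gb} GB, ~{2 * bandwidth_gb_s / 1000:.1f} TB/s")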

Nvidia’s current GH200 Grace Hopper Superchip platform comes with 96 GB of HBM3 memory and delivers just under 4 TB/s of bandwidth. The new model increases memory capacity by around 50 percent and bandwidth by more than 25 percent. These improvements let the platform run larger AI models than the original version and deliver tangible performance gains, which will matter most for training.
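Those percentages follow directly from the published capacities; the ~5 TB/s value for the new model is taken from the interface estimate above rather than quoted from Nvidia:

# Relative gains of the HBM3e model over the current HBM3 model.
old_capacity_gb, new_capacity_gb = 96, 141
old_bw_tb_s, new_bw_tb_s = 4.0, 5.0  # old model is "just under 4 TB/s"; new value assumed

print(f"Capacity:  +{(new_capacity_gb / old_capacity_gb - 1) * 100:.0f}%")  # ~47%
print(f"Bandwidth: +{(new_bw_tb_s / old_bw_tb_s - 1) * 100:.0f}%")          # ~25%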

According to Nvidia, the GH200 Grace Hopper platform with HBM3 is in production now and will be commercially available starting next month. The GH200 Grace Hopper platform with HBM3e, by contrast, is currently sampling and is expected to be available in the second quarter of 2024. No pricing was announced, but platforms of this class cost tens of thousands of dollars depending on the configuration.

Nvidia holds a near monopoly on GPUs for generative AI. Cloud providers such as AWS, Azure, and Google all use Nvidia’s H100 Tensor Core GPUs. Microsoft and Nvidia have also partnered to build new supercomputers, although Microsoft is known to want to produce its own artificial intelligence chips. Nvidia also faces competition from AMD, which plans to ramp up production of its AI GPUs in the fourth quarter of this year.
