Nvidia's new-generation GH200 Grace Hopper Superchip is an ambitious platform.
The new GH200 Grace Hopper Superchip combines a 72-core Grace CPU with 480 GB of ECC LPDDR5X memory and a GH100 compute GPU with 141 GB of HBM3E memory, arranged in six 24 GB stacks on a 6,144-bit memory interface. Although Nvidia physically installs 144 GB of memory, only 141 GB is accessible, presumably to improve yields. In the dual-chip configuration of this platform, all of these figures double.
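As a quick sanity check, the memory figures above can be tallied in a few lines of Python. This is just a sketch: the constants come straight from the numbers quoted here, and the variable names are illustrative, not any Nvidia API.

```python
# Back-of-the-envelope memory math for the GH200 figures quoted above.

HBM3E_STACKS = 6        # six HBM3E stacks per GH100 GPU
GB_PER_STACK = 24       # 24 GB per stack
HBM3E_PHYSICAL_GB = HBM3E_STACKS * GB_PER_STACK  # 144 GB installed
HBM3E_USABLE_GB = 141   # only 141 GB is exposed to software
LPDDR5X_GB = 480        # Grace CPU memory per superchip

# The dual-chip configuration doubles every figure.
dual_hbm3e = 2 * HBM3E_USABLE_GB   # 282 GB of usable HBM3E
dual_lpddr5x = 2 * LPDDR5X_GB      # 960 GB of LPDDR5X

print(HBM3E_PHYSICAL_GB, dual_hbm3e, dual_lpddr5x)  # → 144 282 960
```

The 3 GB gap between installed and usable capacity is spare capacity held in reserve rather than memory the software can address.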
According to Nvidia, the GH200 Grace Hopper platform with HBM3 is currently in production and will be commercially available starting next month, while the HBM3E version is now sampling and is expected to ship in the second quarter of 2024. Nvidia shared no pricing, but these platforms cost tens of thousands of dollars depending on the configuration.
Nvidia has a near monopoly on GPUs for generative AI. Cloud providers such as AWS, Azure, and Google all use Nvidia's H100 Tensor Core GPUs. Microsoft and Nvidia have also partnered to build new supercomputers, although Microsoft is known to be developing artificial intelligence chips of its own. Nvidia also faces competition from AMD, which is looking to ramp up production of its AI GPUs in the fourth quarter of this year.