NVIDIA, the leading name in the graphics card and GPU industry, announced its new artificial intelligence-focused chip, the HGX H200, at a company event. The GPU, a significantly upgraded version of the H100, is designed to run generative AI workloads and large language models much faster.
The NVIDIA HGX H200 improves on the previous GPU across the board: it offers 1.4 times the memory bandwidth and 1.8 times the memory capacity of its predecessor. These multiples may seem abstract on their own, but they matter a great deal for training and running AI models. According to the company's statement, the HGX H200's memory bandwidth has been increased to 4.8 terabytes per second and its memory capacity to 141 gigabytes, thanks to the new HBM3e high-bandwidth memory.
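As a quick sanity check, the quoted multiples line up with the publicly listed H100 SXM figures. The baseline numbers below (3.35 TB/s bandwidth, 80 GB of HBM3) are an assumption taken from NVIDIA's spec sheets, not from the announcement itself:

```python
# Sanity-check the generational multiples quoted in the announcement.
# Assumed H100 SXM baseline (not stated in the article): 3.35 TB/s, 80 GB HBM3.
h100_bandwidth_tbps = 3.35   # TB/s, HBM3
h100_memory_gb = 80          # GB

h200_bandwidth_tbps = 4.8    # TB/s, HBM3e
h200_memory_gb = 141         # GB

# Compute the generation-over-generation ratios.
print(round(h200_bandwidth_tbps / h100_bandwidth_tbps, 1))  # 1.4
print(round(h200_memory_gb / h100_memory_gb, 1))            # 1.8
```

Both ratios round to the 1.4x bandwidth and 1.8x capacity figures NVIDIA cites.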
The new chip works in systems built around the old one
NVIDIA’s next-generation AI GPU has a few more notable features. One is efficiency: the new chip delivers better results per watt than the H100 and can dynamically adjust GPU utilization, drawing more power during demanding tasks to raise processing performance. In addition, the chip is compatible with supercomputers already built around the H100 GPU, which makes it a significant upgrade path for existing systems.
The NVIDIA HGX H200 is not a product aimed directly at ordinary users, but one point should not be missed: AI-focused chips like this will accelerate the work of companies building AI technology, and the results will be better. Consider chatbots such as ChatGPT and Google Bard, which can sometimes produce nonsensical answers. This is exactly why the HGX H200 matters: the more capable the chips used to train AI models, the better the service ultimately offered to the end user.