In recent years, many new technologies have entered our lives to make computers more efficient. One of the clearest examples of this is laptops with RAM and processors soldered directly onto the board. Although this move had drawbacks, such as the inability to upgrade the RAM, it directly improved device performance. Now, NVIDIA and IBM are attempting something similar by connecting GPUs to SSDs to increase AI capability.
NVIDIA and IBM aim to overcome the artificial intelligence bottleneck
NVIDIA and IBM are introducing a new technology to increase the machine learning capabilities of devices. By connecting a system's GPUs directly to SSDs, they aim to accelerate the flow of data for workloads that require large amounts of storage and to prevent potential bottlenecks.
Reports on the subject indicate that the technology, called Big Accelerator Memory (BaM), aims to connect GPUs directly to high-capacity SSDs. It is said not only to prevent bottlenecks but also to raise effective processing capacity well above current levels, scaling in direct proportion to the capacity of the connected SSDs.
BaM enables GPU threads to read or write small amounts of data on demand, as determined by the computation's I/O (input/output) needs, reducing traffic amplification.
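To illustrate what "traffic amplification" means here, the sketch below compares the bytes moved when storage is read in coarse, CPU-managed blocks versus on-demand fine-grained reads issued per thread, the pattern the reports attribute to BaM. All sizes and counts are hypothetical numbers chosen for illustration, not figures from NVIDIA or IBM:

```python
# Illustrative sketch only: hypothetical numbers, not the BaM implementation.
# Compares total bus traffic for coarse block-granular reads vs. on-demand
# fine-grained reads when accesses are sparse and scattered.

BLOCK_SIZE = 4096      # assumed coarse transfer unit in bytes
ITEM_SIZE = 128        # assumed size of each item a thread actually needs
NUM_ITEMS = 1000       # assumed number of scattered accesses

# Coarse path: every item pulls in a whole block.
coarse_bytes = NUM_ITEMS * BLOCK_SIZE

# On-demand path: each thread fetches only the bytes it needs.
on_demand_bytes = NUM_ITEMS * ITEM_SIZE

amplification = coarse_bytes / on_demand_bytes
print(f"coarse transfer:    {coarse_bytes} bytes")
print(f"on-demand transfer: {on_demand_bytes} bytes")
print(f"traffic amplification factor: {amplification:.0f}x")
```

With these assumed numbers, block-granular access moves 32 times more data than the threads actually consume, which is the kind of waste fine-grained, GPU-initiated I/O is meant to avoid.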
Machine learning and artificial intelligence workloads have traditionally relied on the processor, namely the CPU, to orchestrate data access. What NVIDIA and IBM aim to do here is reduce that dependency on the CPU and bring the GPU's power to the fore. The first tests are also reported to have shown very promising results.
What do you think about this issue? Don’t forget to share your feedback with us on the SDN Forum or in the comments!