Technology giant Microsoft announced two custom-made chips for artificial intelligence workloads at the annual Microsoft Ignite conference.
The idea behind the project is to make “everything from silicon to services” available to meet the demand for AI. In addition to building chips in-house, Microsoft has ensured that everything else, such as software, server cabinets, and cooling systems, is optimized for AI workloads.
Microsoft says the Azure Maia AI Accelerator is designed specifically for the Azure hardware stack, ensuring maximum utilization of that hardware. Azure Cobalt, on the other hand, is an energy-efficient Arm-based CPU optimized for performance per watt in Microsoft's data centers.
To accommodate the new chips within existing data center infrastructure, Microsoft redesigned server cabinets and implemented liquid cooling solutions. The company will begin rolling out its new AI-focused processors to data centers early next year, initially powering Microsoft Copilot and Azure OpenAI Service.
In addition to launching dedicated silicon for AI, Microsoft is expanding partnerships with other manufacturers to give customers more options.
Microsoft has released a preview of new virtual machines powered by NVIDIA's H100 Tensor Core GPUs. In addition, the software giant plans to adopt NVIDIA's H200 Tensor Core GPU and AMD's MI300X accelerator. These additions are intended to deliver greater performance, reliability, and efficiency for mid-range and high-end AI model training and generative AI workloads.