
Nvidia declined to comment on Microsoft’s plans to stop relying on the company’s tech. Though Nvidia holds the biggest lead in AI training chips, it likely still wants to keep Microsoft as a customer. Last year, the U.S. introduced new export restrictions to keep the company from selling its A100 and H100 chips to Russia and China. Last month, Nvidia said it was opening up more cloud-based access to its H100 chips, and that Meta was also jumping on the H100 bandwagon.

Competition and costs are reportedly speeding up Athena’s development. Google, the other major tech giant trying to make a statement in the burgeoning AI industry, is also working on its own AI chips. Earlier this month, the company offered more details on its Tensor Processing Unit supercomputers, saying it had connected several thousand of the chips into a machine learning supercomputer, which it used to train its PaLM model, the foundation of its Bard AI.


Google even claimed its chips use two to six times less energy and produce roughly 20 times less CO2 than “contemporary DSAs,” or domain-specific architectures. Given just how much energy it takes to train and run these AI models, Microsoft’s new chip will need to contend with the massive environmental cost of increasingly widespread AI.

