- OpenAI is reportedly developing its first custom AI chip with Broadcom
- The chip could be manufactured as soon as 2026
- The move could help reduce the costs of running OpenAI-powered apps
OpenAI is a step closer to developing its first AI chip, according to a new report, as the number of developers building apps on its platform soars and its cloud computing costs climb with it.
The ChatGPT maker was first reported to be in discussions with several chip designers, including Broadcom, back in July. Now Reuters reports that OpenAI has settled on Broadcom as its custom silicon partner as part of a new hardware strategy, with the chip potentially landing in 2026.
Before then, it seems OpenAI will be adding AMD chips to its Microsoft Azure system, alongside the existing ones from Nvidia. The AI giant’s plans to make a ‘foundry’ – a network of chip factories – have been scaled back, according to Reuters.
The reason for these reported moves is to help reduce the ballooning costs of running AI-powered applications. OpenAI’s new chip apparently won’t be used to train generative AI models (which remains the domain of Nvidia chips), but will instead handle inference – running the trained models to respond to user requests.
During its DevDay London event today (which followed the San Francisco version on October 1), OpenAI announced some improved tools that it’s using to woo developers. The biggest one, the Realtime API, is effectively Advanced Voice Mode for app developers, and it now has five new voices with improved range and expressiveness.
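The Realtime API is WebSocket-based: a developer opens a persistent connection, configures the session (including which voice to use), and then streams events back and forth. The sketch below illustrates that flow in Python; the model name, the "coral" voice, the beta header and the event shapes are assumptions based on OpenAI’s beta documentation at the time and may have changed, so treat it as illustrative rather than canonical.

```python
# Minimal sketch of connecting to OpenAI's Realtime API over WebSocket and
# selecting one of the newer voices. Endpoint, model name, event shapes and
# the voice name are assumptions and may have changed since the beta.
import asyncio
import json
import os

import websockets  # pip install websockets

MODEL = "gpt-4o-realtime-preview-2024-10-01"  # assumed preview model name
URL = f"wss://api.openai.com/v1/realtime?model={MODEL}"
HEADERS = {
    "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
    "OpenAI-Beta": "realtime=v1",  # beta opt-in header required at launch
}

async def main() -> None:
    # Older versions of the websockets package use extra_headers=...;
    # newer ones renamed the keyword to additional_headers.
    async with websockets.connect(URL, additional_headers=HEADERS) as ws:
        # Configure the session: pick a voice (e.g. "coral", assumed to be
        # one of the new DevDay voices) and keep output to text for brevity.
        await ws.send(json.dumps({
            "type": "session.update",
            "session": {"voice": "coral", "modalities": ["text"]},
        }))

        # Ask the model to produce a response.
        await ws.send(json.dumps({
            "type": "response.create",
            "response": {"instructions": "Say hello to DevDay London."},
        }))

        # Stream server events until the response finishes.
        async for message in ws:
            event = json.loads(message)
            if event.get("type") == "response.text.delta":
                print(event.get("delta", ""), end="", flush=True)
            elif event.get("type") == "response.done":
                print()
                break

if __name__ == "__main__":
    asyncio.run(main())
```

In a real voice app the same connection would also carry audio input and output chunks; the point here is simply that voice selection and conversation flow are handled as session-level events over one long-lived socket.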
Right now, three million developers from around the world are using OpenAI’s API (application programming interface), but the problem is that many of its features are still too…