OpenAI has reportedly built its first in-house custom AI chip in collaboration with Broadcom and TSMC, as the AI giant looks to scale up its inferencing capabilities.
OpenAI’s First AI Chip Will Reportedly Target Inferencing Workloads, Expected To Debut With Cutting-Edge Performance
OpenAI is one of the few firms in the industry that has shown massive ambitions when it comes to scaling AI compute power, whether through a network of "fabrication facilities" or through developing in-house solutions that are more effective than existing counterparts. Sam Altman is known to push the boundaries of the "AI hype," and now, according to a new report by Reuters, OpenAI has developed its first AI chip in collaboration with chip designer Broadcom and none other than semiconductor giant TSMC.
The report states that the firm's newest chip targets inferencing workloads, and while the industry is currently focused on model training and enhancements, the future is widely expected to lie in scaling the inference capabilities of the LLMs already out there. Interestingly, the report reveals that OpenAI has dropped the idea of building a "network" of foundries and shifted its focus towards in-house chip design, since the latter requires fewer financial resources and less execution time.
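For context on the distinction the report draws, here is a minimal PyTorch sketch of the two workload types. It is purely illustrative, using a toy model that has nothing to do with OpenAI's actual hardware or software stack: training requires gradient computation and weight updates, while inference is forward-pass-only serving traffic, which is what an inference-focused accelerator would target.

```python
import torch
import torch.nn as nn

# Toy stand-in for an LLM layer; sizes and names are illustrative only.
model = nn.Linear(512, 512)
x = torch.randn(8, 512)

# Training workload: forward pass, loss, backward pass (gradients), weight update.
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
optimizer.zero_grad()
loss = model(x).pow(2).mean()
loss.backward()
optimizer.step()

# Inference workload: forward pass only, no gradient bookkeeping --
# the kind of serving traffic an inference-oriented chip is built for.
with torch.no_grad():
    y = model(x)
```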
OpenAI is said to be building a hybrid model of "AI compute acquisition," meaning the firm plans to expand its AI capabilities by integrating existing architectures, such as those from NVIDIA and AMD, alongside developing in-house solutions, to ensure…