OpenAI has acknowledged that it is testing Google’s in-house Tensor Processing Unit (TPU) AI chips, but said it has no active plans to deploy them at scale.
An OpenAI representative told Reuters that the lab was in early testing with the chips.
AI labs typically test a variety of processors, but deploying them at scale would be a complex engineering task.
Cloud collaboration
The Nvidia AI accelerators that OpenAI primarily uses are tied to Nvidia's CUDA software platform, a significant barrier to porting workloads to competing chips from companies such as AMD.
According to reports last month, OpenAI has begun using Google Cloud data centre infrastructure to meet growing demand. The potential use of Google's TPU chips, reported by several outlets last week, would add to an unusual collaboration between the two major AI competitors.
Google has signed up Apple, Safe Superintelligence, Cohere and other external customers for its TPUs, while reserving its most advanced TPU versions for internal use, such as developing its Gemini AI model.
Morgan Stanley analysts noted on Monday that OpenAI is now running workloads across most major cloud providers, including Google, Microsoft Azure, Oracle and CoreWeave, with Amazon Web Services being "the notable player missing from the list".
Amazon makes an in-house Trainium AI chip that competes with Google’s TPU.
Custom chips
OpenAI is developing its own in-house chip and is reportedly on track to reach the "tape-out" milestone this year, the point at which a chip design is finalised and sent for manufacturing.
By contrast, a Microsoft effort to develop a next-generation AI chip has reportedly been delayed by six months, pushing it into next year, in part because of design changes requested by OpenAI.