How Apple Collaborated with Google to Train Its AI Models
Divyang Garg
President New Technology | Sr. Solutions Architect | Data Analyst & Engineering | Cloud | IoT | Big Data | AI/ML | Reporting
At its annual developer conference, Apple CEO Tim Cook unveiled a partnership with OpenAI that will bring OpenAI's AI model to Siri, Apple's voice assistant. A technical document Apple published afterward, however, reveals that Google, a unit of Alphabet Inc., has also played a significant role in Apple's AI strategy.
Apple's engineers used the company's own framework software with a range of hardware, including its in-house graphics processing units (GPUs) and Google's tensor processing units (TPUs), specialized AI chips available only through Google Cloud. Google has been developing TPUs for roughly a decade and has published details of its fifth-generation chips, which are optimized for AI training; Google says the performance variant of that fifth generation is competitive with Nvidia's H100 AI chips. The company has also announced plans to launch a sixth generation of TPUs later this year, underscoring its ongoing investment in AI hardware.
Google has built a full ecosystem around TPUs, offering dedicated cloud computing hardware along with software platforms tailored for building AI applications and training models. Neither Apple nor Google immediately responded to requests for comment on the specifics of the collaboration or on how heavily Apple relied on Google's technology compared with hardware from other AI suppliers such as Nvidia.
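To make that software side concrete, here is a minimal, illustrative sketch using JAX, one of the open-source libraries in Google's TPU stack. This is not Apple's training code (its framework is not detailed here); the function, weight shapes, and data in the example are invented for illustration.

```python
import jax
import jax.numpy as jnp

# On a provisioned Cloud TPU VM with the jax[tpu] package installed,
# jax.devices() lists the attached TPU cores; on an ordinary machine
# it falls back to CPU, so this sketch runs anywhere.
print(jax.devices())

@jax.jit  # compile with XLA for the default backend (TPU when available)
def predict(w, x):
    # Toy stand-in for a model forward pass: one dense layer plus tanh.
    return jnp.tanh(x @ w)

w = jnp.ones((128, 128))    # invented weight matrix for the example
x = jnp.ones((8, 128))      # invented batch of inputs
print(predict(w, x).shape)  # (8, 128), computed on a TPU core if present
```

Real large-scale training shards data and parameters across many TPU chips rather than running on a single core, but the single-device flow above is the basic pattern the TPU software stack supports.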
Accessing Google's TPUs typically means purchasing cloud services through Google's cloud division, much as customers buy computing capacity from leading providers such as Amazon Web Services (AWS) or Microsoft Azure.
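As a rough illustration of that purchase path, a customer might provision a small TPU VM with Google's gcloud CLI as shown below. The resource name is a placeholder, and the zone, accelerator type, and runtime version are assumptions; actual options and pricing depend on regional availability.

```bash
# Hypothetical example: create a small v4 TPU VM (name, zone, type, and
# runtime version are placeholders; availability varies by region).
gcloud compute tpus tpu-vm create demo-tpu \
    --zone=us-central2-b \
    --accelerator-type=v4-8 \
    --version=tpu-ubuntu2204-base

# Connect to the TPU VM over SSH once it is ready.
gcloud compute tpus tpu-vm ssh demo-tpu --zone=us-central2-b
```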