WWDC24: Apple's Obligatory AI Refresh
Image credit: Apple


Apple has been prodded and poked for not having an "AI strategy". Frankly, I don't think most of the folks doing the prodding and poking know what an AI strategy would look like beyond selling massively expensive GPUs and supercomputing appliances to hyperscalers.

The reality is that Apple has had an "AI strategy" for a while. The company has been pioneering consumer AI since it introduced Siri to the world back in 2011. Since then, Apple has driven AI features across its devices and services and was the first to put a neural engine, or NPU (Neural Processing Unit), into a smartphone, a tablet, and even a smartwatch.


What could Apple possibly do to upstage what we have witnessed from Google, Microsoft, Nvidia, and OpenAI? This is what made WWDC 2024 so curious and highly anticipated, as expectations of an earth-shattering Cupertino "AI strategy" hit a fever pitch in the era of generative AI.

Apple CEO, Tim Cook, kicks off WWDC24 (Image credit: Apple)

Since ChatGPT broke out onto the world stage last year, Apple has continually mentioned its long legacy in machine learning (ML) and its intent to take a thoughtful approach to adopting generative AI. All of this seems to have fallen on deaf ears tuned only to "AI," unaware that transformers (the neural network architecture innovation that makes generative AI models possible), which Tim Cook mentioned early on, had already been deployed in some of the earliest GenAI applications to land on a smartphone, such as Apple's AutoCorrect.

Did Apple revolutionize or reinvent AI? Did they finally catch up if they were ever behind?

Apple Intelligence

The big reveal of WWDC 2024 was Apple Intelligence. More than a product, it is an "AI framework," not unlike what we have seen from Microsoft Copilot and Google Duet: a revamped chatbot framework with a vast library of models and interfaces that surface intelligent features across Apple devices and experiences.

Apple Intelligence (Image credit: Apple)

The big revelation of Apple Intelligence is that Apple developed its own LLM (Large Language Model) and diffusion model to drive the many generative AI-augmented features highlighted throughout the keynote.

Craig Federighi, SVP of Software Engineering at Apple, described Apple Intelligence as "personal intelligence." Apple Intelligence is about generative AI features that use your on-device data to contextualize and theoretically "personalize" your experiences across Apple devices, ideally with privacy and security maintained.

Key Apple Intelligence Features

Apple Intelligence boils down to three simple applications: Rewrite, Summarize, and Image Playground. These applications are enabled by the two foundation models that underlie Apple Intelligence.

  • Rewrite provides the compositional support you might expect from a ChatGPT-like chatbot: tone-based draft recommendations for things like email replies, proofreading, and hopefully a better spellchecker. Sorry, no translation; English only at launch in the Fall.
  • Summarize does exactly that. It will summarize an email, article, or webpage.
  • Image Playground generates an image in four "deepfake-resistant" styles based on a graphical or image input, which the user can further refine with text prompts.

Apple asserts that these features will run locally on device, based on its foundation LLM and diffusion models, to keep things private and trustworthy by grounding applications in your data. Apple claims to do this with its Semantic Index, which sounds like a vector and/or graph database that securely stores embeddings of your personal data and content to enable Apple Intelligence personalization. This mechanism is likely similar to Microsoft's own Semantic Index for Copilot.
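For the curious, here is a minimal sketch of what such a semantic index could look like under the hood, assuming a flat store of embedding vectors searched by cosine similarity. Apple has not published the Semantic Index design, so every type and name below is hypothetical.

```swift
import Foundation

// Hypothetical sketch of a semantic index: embeddings of personal content
// stored locally and searched by cosine similarity. Names and structure are
// illustrative; Apple has not disclosed how the Semantic Index actually works.
struct IndexedItem {
    let id: String          // e.g., an email or note identifier
    let embedding: [Float]  // vector assumed to come from an on-device embedding model
}

struct SemanticIndex {
    private var items: [IndexedItem] = []

    mutating func add(id: String, embedding: [Float]) {
        items.append(IndexedItem(id: id, embedding: embedding))
    }

    // Cosine similarity between two vectors of equal length.
    private func cosine(_ a: [Float], _ b: [Float]) -> Float {
        var dot: Float = 0, na: Float = 0, nb: Float = 0
        for i in 0..<a.count {
            dot += a[i] * b[i]
            na += a[i] * a[i]
            nb += b[i] * b[i]
        }
        return dot / (sqrt(na) * sqrt(nb) + 1e-9)
    }

    // Return the ids of the k stored items most similar to the query embedding.
    func search(query: [Float], k: Int) -> [String] {
        items
            .map { ($0.id, cosine($0.embedding, query)) }
            .sorted { $0.1 > $1.1 }
            .prefix(k)
            .map { $0.0 }
    }
}
```

A production index would add encryption at rest and approximate nearest-neighbor search, but the retrieval idea, "find the personal content most relevant to this request," is the same.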

Devices with the A17 Pro processor (with a whopping 35 TOPS of NPU compute) and M-series processors from M1 to M4 will get Apple Intelligence when it launches in the Fall. Interestingly, the 11 TOPS of the M1 appears to be the minimum NPU compute requirement to run Apple Intelligence. This contrasts with the 40 TOPS minimum threshold to qualify as a Microsoft Copilot+ PC.

Image Playground (Image credit: Apple)

The story behind OpenAI and Apple Intelligence

Another big revelation coming out of WWDC 2024 was the nature of OpenAI's ChatGPT "integration" with Apple Intelligence. Many media outlets had speculated prior to the event that Apple had to resort to a third-party model, namely OpenAI's ChatGPT, because it was "behind in AI".

Apple Intelligence is based on Apple’s own foundation models, NOT OpenAI’s models despite wide speculation to the contrary.

How is Apple Intelligence different from the widely known ChatGPT, which OpenAI famously launched to the public last year? Apple dubs models such as the GPT-4 that underlies ChatGPT "world models": models trained on much larger sets of content with the controversial ambition of becoming "AGI," or Artificial General Intelligence.

ChatGPT & Apple Intelligence (Image credit: Apple)

Apple has "integrated" ChatGPT at the UI level, with privacy safeguards implemented to ensure that users are fully aware and give consent whenever their requests are offloaded to ChatGPT.

Apple is clearly not OpenAI-exclusive, noting that it is open to "integrating" with other world model providers in the future. This agnostic approach is smart and allows Apple to ease Apple Intelligence into markets like China and the EU, where concerns about world models and AI in general are growing along with regulatory scrutiny.

The Apple Intelligence Architecture

Apple also introduced Private Cloud Compute, which amounts to a hybrid AI architecture and cloud infrastructure. Apple described features of Private Cloud Compute that look a lot like the confidential computing more commonly associated with cloud computing for banks, healthcare, and other industries that require trusted environments for critical and highly private applications.

Apple Intelligence's Confidential Computing (Image credit: Apple)

While the mechanism for when and how Apple Intelligence requests are orchestrated between device and cloud remains a mystery, Apple claims that server-side requests and processing do not store the user's personal data. This suggests a stateless implementation of Apple Intelligence services delivered on Apple's own data center software stack running on Apple Silicon-based data center infrastructure.

According to Apple, the policy for Apple Intelligence is on device first; Private Cloud Compute second, if the request requires the assistance of a larger Apple Intelligence LLM; and finally, a recommendation to offload to a third-party world model such as ChatGPT for requests that Apple Intelligence is not well suited to handle.
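A minimal sketch of that three-tier policy might look like the following. The types, capability checks, and consent hook are all assumptions on my part; Apple has not disclosed how its orchestration actually decides.

```swift
// Hypothetical sketch of the routing policy Apple described: on-device first,
// Private Cloud Compute for heavier requests, and an explicit per-request
// consent step before anything is handed to a third-party world model.
enum Destination {
    case onDevice
    case privateCloudCompute
    case thirdPartyModel  // e.g., ChatGPT, only with user consent
}

struct Request {
    let fitsOnDeviceModel: Bool    // within the on-device model's capability
    let needsWorldKnowledge: Bool  // broad knowledge beyond personal context
}

func route(_ request: Request, userConsentsToThirdParty: () -> Bool) -> Destination? {
    if request.fitsOnDeviceModel {
        return .onDevice                 // default: keep data on device
    }
    if !request.needsWorldKnowledge {
        return .privateCloudCompute      // larger Apple model, stateless server
    }
    // Third-party handoff requires explicit consent; nil means declined.
    return userConsentsToThirdParty() ? .thirdPartyModel : nil
}
```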

Apple Intelligence architecture (Image credit: Apple)

Curiously, it does not seem that Private Cloud Compute will support the Apple Intelligence Image Model. This makes sense as Apple is likely looking to minimize the much higher cost of providing vision, video, and multimodal GenAI services from their Apple Intelligence cloud.

It's also interesting to note that Apple has taken an approach with Apple Intelligence that spares it from having to pay third parties, including OpenAI, for tokens or GPUs. Apple is reported to have rented TPU instances from Google to train its Apple Intelligence foundation models. That's all OPEX, and what I think is a smart avoidance of massive capital investment in data centers given the speculative present and future of GenAI supercomputing.

Apple Intelligence for Siri and App Intents (Image credit: Apple)

While not entirely new, Apple has found a fresh role for App Intents, which is essentially an evolution of Shortcuts, born of Apple's 2017 acquisition of Workflow. This is Apple's answer to the emerging talk of agents from Microsoft, Google, and Nvidia. It turns out Apple has had a framework for agents for several years, one that will now be driven by a rejuvenated Siri.
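App Intents is a real, shipping Swift framework, and a minimal intent, the kind of app action a smarter Siri could invoke, looks something like the sketch below. The intent name and its logic are illustrative, not an Apple example.

```swift
import AppIntents

// A minimal App Intent. Apps expose actions like this one so that Siri and
// Shortcuts can discover and invoke them; Apple Intelligence is expected to
// drive these with more contextual awareness.
struct SummarizeNoteIntent: AppIntent {
    static var title: LocalizedStringResource = "Summarize Note"

    @Parameter(title: "Note Title")
    var noteTitle: String

    func perform() async throws -> some IntentResult & ProvidesDialog {
        // A real app would look up the note and produce an actual summary.
        let summary = "Summary of \(noteTitle)..."
        return .result(dialog: "\(summary)")
    }
}
```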

Siri infused with Apple Intelligence could put Apple in an interesting position of leadership in "AI." Siri has been panned for years as a laggard among virtual assistants, but a more contextually aware and conversant Siri could change the game, especially with the introduction of Type to Siri, which effectively makes Siri Apple's equivalent of Microsoft's Copilot.

Apple Intelligence Adapters (Image credit: Apple)

One esoteric Apple Intelligence announcement that stood out for me was Adapters. Thanks to the many technical briefings with Qualcomm on edge AI, I surmised that Adapters refers to LoRA, or Low-Rank Adaptation, a parameter-efficient fine-tuning method that adds a small number of trainable parameters to an AI model rather than going through the costly exercise of changing the model's parameters through full fine-tuning.

Adapters allow a foundation model to be specialized for different domains of use and expertise, precluding the need for redundant expert models. This will be important if Apple wants to functionally scale Apple Intelligence while keeping its on-device AI footprint lightweight yet powerful and extensible.
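For a rough intuition of how LoRA works mechanically, consider the toy sketch below: the frozen pretrained weight matrix W stays untouched, while a small low-rank update B·A, scaled by alpha/r, is added to the layer's output. The dimensions and function names are illustrative, not Apple's implementation.

```swift
// Toy LoRA forward pass: the adapted layer computes Wx + (alpha/r) * B(Ax).
// W is frozen (d_out x d_in); A (r x d_in) and B (d_out x r) are the small
// trainable adapter matrices, with rank r much smaller than d_in or d_out.
func matVec(_ m: [[Float]], _ v: [Float]) -> [Float] {
    m.map { row in zip(row, v).reduce(0) { $0 + $1.0 * $1.1 } }
}

func loraForward(W: [[Float]], A: [[Float]], B: [[Float]],
                 x: [Float], alpha: Float, rank: Int) -> [Float] {
    let base = matVec(W, x)               // frozen pretrained path
    let update = matVec(B, matVec(A, x))  // low-rank adapter path: B(Ax)
    let scale = alpha / Float(rank)
    return zip(base, update).map { $0 + scale * $1 }
}
```

Because only A and B are trained and stored per domain, an adapter can be a few megabytes instead of a whole new multi-gigabyte model, which is presumably why Apple chose this route for scaling on-device features.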

Speaking of lightweight and tiny, Apple Intelligence on device will support 4-bit precision (INT4). Yes, INT4 is what Nvidia's Blackwell GB200 NVL72 modular supercomputers will support, dramatically accelerating certain (not all) GenAI inference workloads in the cloud.
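To illustrate the storage arithmetic behind 4-bit precision, here is a toy sketch that maps float weights to sixteen signed levels with a per-tensor scale and packs two values per byte. Production INT4 schemes (Apple's included, presumably) use per-group scales and zero points; this only shows the basic idea.

```swift
// Toy INT4 quantization: clamp each weight to a signed 4-bit code (-8...7)
// using a per-tensor scale, then pack two 4-bit codes into each byte,
// cutting weight storage to a quarter of 16-bit precision.
func quantizeInt4(_ weights: [Float]) -> (packed: [UInt8], scale: Float) {
    let maxAbs = weights.map(abs).max() ?? 0
    let scale = maxAbs > 0 ? maxAbs / 7 : 1

    // Clamp, round, and mask one weight into a 4-bit two's-complement nibble.
    func code(_ w: Float) -> UInt8 {
        let q = Int(max(-8, min(7, (w / scale).rounded())))
        return UInt8(q & 0x0F)
    }

    var packed: [UInt8] = []
    for i in stride(from: 0, to: weights.count, by: 2) {
        let lo = code(weights[i])
        let hi = i + 1 < weights.count ? code(weights[i + 1]) : 0
        packed.append(lo | (hi << 4))
    }
    return (packed, scale)
}
```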

Conclusion

neXt Curve has long argued that generative AI will find bright spots in narrow applications that are dependable enough to be valuable to consumers and enterprises.

It seems that Apple, in their thoughtfulness, has arrived at the same conclusion.

With the announcement of Apple Intelligence, Apple has demonstrated that they are indeed not “behind in AI”. They are taking an angle with generative AI that Wall Street and the industry didn’t anticipate and struggled to comprehend.

Apple Intelligence is flipping the script on "AI". According to Apple, AI (including generative AI) will need to run on device first and work its way out toward the cloud only as necessary with privacy first.

The value for Apple users? Trust.

The most astonishing revelation coming out of WWDC 2024 was how quickly Apple was able to mobilize its platform assets to assemble what is arguably the most compelling and comprehensive chatbot/agent framework that will deliver generative AI-infused features across most if not all Apple device categories.

IF Apple Intelligence turns out to be as good as marketed, Apple will have a very powerful platform to take the Apple experience to the next level. However, WWDC 2024 was notably devoid of the kind of live demos that have frequently embarrassed Apple's peers to the tune of billions.

Unsurprisingly, Apple Intelligence will be rolled out in the Fall as a beta.


For a more in-depth neXt Curve analysis, go to www.next-curve.com.


Subscribe to the neXt Curve Insights monthly newsletter to be notified when the next newsletter is published. Go to www.next-curve.com to be added to our mailing list. You will also be notified when we publish new research notes and media content.

Follow us on LinkedIn, Twitter, and YouTube.

This material may not be copied, reproduced, or modified in whole or in part for any purpose except with express written permission or license from an authorized representative of neXt Curve. In addition to such written permission or license to copy, reproduce, or modify this document in whole or part, an acknowledgement of the authors of the document and all applicable portions of the copyright notice must be clearly referenced.

©2024 neXt Curve. All Rights Reserved


Leonard Lee

Tech Industry Advisor & Realist dedicated to making you constructively uncomfortable. Ring the bell 🔔 and subscribe to next-curve.com for the tech and industry insights that matter!
