“Apple Intelligence” Is Homing In On Generative AI, & More
Want more research from the ARK Team? Have feedback on our publications? Click here to help inform our content creation.
1. “Apple Intelligence” Is Homing In On Generative AI
By: Nick Grous
During its Worldwide Developers Conference (WWDC) last week, Apple introduced the Apple Intelligence framework, aiming to embed machine learning across Apple devices. A strategic pivot toward AI integration, Apple Intelligence should deliver more personalized and intuitive user experiences.
App Intents integrates seamlessly with Siri, effectively creating an operating-system layer that orchestrates interactions across the myriad apps in Apple's ecosystem. By leveraging App Store applications, Siri is poised to augment voice with text commands, offering a more cohesive and interconnected experience that could transform how users interact with their devices.
Unlike standalone generative AI chat apps, Apple's integrated approach leverages its robust hardware and software ecosystem, resulting in a unified AI experience that standalone apps are unlikely to match. That said, because Apple Intelligence is confined to Apple's walled garden, its users could become isolated from the more diverse AI innovations surfacing beyond it, much like AOL subscribers in the early days of the internet. Generative AI developers are innovating rapidly across the open internet with niche applications, offering users cutting-edge features and functionality that Apple's tightly controlled ecosystem might be slow to adopt.
To mitigate that risk, Apple has partnered with OpenAI to allow users to query ChatGPT directly. In the short term, Apple is likely to host a range of frontier models from providers like OpenAI and Anthropic until its own smaller models become more performant.
Interested in learning more about ARK’s early take on the future of Consumer AI? Read our new research article, “Generative AI: A New Consumer Operating System.”
2. ARK Shared Its 2029 Tesla Valuation Model On The Eve Of Tesla’s Historic Shareholder Meeting
By: Tasha Keeney, CFA, Sam Korus, & Daniel Maguire, ACA
Last Wednesday, we released ARK’s updated 2029 open-source Tesla valuation model and published an article detailing our assumptions.[1] Most importantly, we estimate that Tesla’s robotaxi business will generate ~90% of its enterprise value and earnings in 2029, as shown below. Available on GitHub,[2] our open-source model incorporates 45 independent variables that users can modify to ascertain Tesla’s expected value under different assumptions. We look forward to your feedback.
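To give a flavor of how a scenario-driven valuation model works (this is not ARK's actual GitHub model, and every number below is a placeholder), an expected value can be sketched as a probability-weighted sum over scenarios:

```python
# Illustrative only: a probability-weighted expected-value sketch in the
# spirit of an open-source valuation model. The scenario values and
# probabilities below are placeholders, NOT ARK's published assumptions
# (those live in the 45-variable model on GitHub).
scenarios = {
    "bear": {"prob": 0.25, "value_per_share": 1000.0},
    "base": {"prob": 0.50, "value_per_share": 2600.0},
    "bull": {"prob": 0.25, "value_per_share": 3800.0},
}

# Probabilities must sum to one for the weighting to be meaningful.
assert abs(sum(s["prob"] for s in scenarios.values()) - 1.0) < 1e-9

expected_value = sum(
    s["prob"] * s["value_per_share"] for s in scenarios.values()
)
print(f"Probability-weighted expected value: ${expected_value:,.0f} per share")
```

The real model replaces each placeholder with dozens of interacting inputs (fleet size, utilization, margins), but the weighting logic is the same.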
Last Thursday, Tesla held its Annual Shareholder Meeting, during which shareholders approved for a second time Elon Musk’s 2018 CEO Performance Award.[3] In our view, Elon’s leadership will be critical to Tesla as transportation evolves from human-centric to autonomous during the next three to four years. Based on his 2018 agreement, Elon will not be able to cash in on his options for five years post-exercise, securing his leadership at Tesla during a transformation that we believe will be more meaningful than the ramp of the Model 3.
During the shareholder meeting, Elon noted that the company’s Full Self-Driving (FSD) software has eliminated nearly all collisions with static objects, with each software update improving performance by up to ~10x. Recently, Tesla released FSD v12.4 to a limited number of customers, removing the steering-wheel nag and enabling limited hands-free rides.[4] We are monitoring FSD’s progress and looking forward to Tesla’s robotaxi announcement in August.[5]
Importantly, Elon noted that Optimus, Tesla’s humanoid robot, could be valued at ~$25 trillion in the long term, roughly five times the ~$5 trillion likely for its autonomous taxi platform.[6] While our updated open-source model conservatively assumes that Tesla will not sell Optimus by 2029, our longer-term research suggests that humanoid robots represent a ~$24 trillion global revenue opportunity.[7] We look forward to sharing more of our research on humanoid robots during the months and years ahead.
3. Bitcoin’s Price Is Consolidating With Healthy On-Chain Fundamentals
By: David Puell
On May 23, the U.S. Securities and Exchange Commission (SEC) approved spot Ethereum (ETH) ETF stock exchange rule change proposals (Form 19b-4), clearing the way for registration approvals (Form S-1) of spot ETH ETFs, potentially this summer.[8] Month-over-month in May, the ETH/USD and ETH/BTC price pairs increased 24.8% and 12.1%, respectively, as shown below. Notably, it was ETH’s best month relative to bitcoin since October 2022.
During May, bitcoin’s fundamentals held up well: its Long-Term-Holder Supply[9] on a 30-day basis turned positive at 96,417 bitcoin, after net outflows during the quarter ended March. Importantly, bitcoin’s price stabilized above $65,000 both during the March-quarter outflows and during May’s inflows, as shown below.
Finally, the yearly Realized Profit/Loss Ratio in the Bitcoin network closed May at 4.6, as shown below, a “neutral” reading relative to prior bull markets, when the ratio peaked at ~6.4 or higher. We invite readers to visit ARK’s The Bitcoin Monthly for more on-chain information on bitcoin’s health.
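For readers unfamiliar with the metric, here is a toy sketch of how a realized profit/loss ratio can be computed (the figures are made up, and the actual metric aggregates across the entire Bitcoin UTXO set over a trailing year):

```python
# Toy illustration of a realized profit/loss ratio. For each coin moved
# on-chain, compare the price at which it moves against the price at which
# it last moved (its "realized" cost basis). All figures below are invented
# for illustration.
spends = [
    # (cost_basis_usd, spend_price_usd, amount_btc)
    (30_000, 68_000, 10.0),   # moved at a profit
    (71_000, 65_000, 2.0),    # moved at a loss
    (20_000, 67_000, 5.0),    # moved at a profit
]

realized_profit = sum((p - c) * amt for c, p, amt in spends if p > c)
realized_loss = sum((c - p) * amt for c, p, amt in spends if p < c)
ratio = realized_profit / realized_loss  # >1 means profit-taking dominates
print(f"Realized Profit/Loss Ratio: {ratio:.2f}")
```

A rising ratio signals that coins are increasingly being spent at a profit, which historically has run hot near bull-market peaks.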
4. Do AI Models Need “Attention” and GPUs After All?
By: Brett Winton
In 2017, a paper titled "Attention Is All You Need" introduced the transformer architecture that catalyzed the large language model renaissance and spurred excitement in both the technology and financial markets.[10] While remarkable in many ways, transformers rely on a computationally expensive mechanism called “attention” to relate each token—roughly a word-sized chunk of text—to all other tokens in a passage.
A new paper published by Rui-Jie Zhu and colleagues[11] calls “attention” into question, pointing to AI performance breakthroughs that might allow small devices to host powerful models. In so doing, they could shake up the graphics processing unit (GPU)-dominated AI hardware regime.
How does “attention” work? For every token generated, the transformer model recalculates the relationship between and among all of the tokens in the body of text. Imagine reading a word in this sentence, then pausing to assess whether that word has changed the relationship between "All" and "Need" in the first sentence of the first paragraph. While seemingly wasteful cognitively, until now that process has been the best linguistic/computational approach available.
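To make the quadratic cost concrete, here is a minimal single-head attention sketch in NumPy (the weights and dimensions are illustrative, not those of any production model). Note the n × n score matrix: every one of the n tokens is scored against every other token.

```python
import numpy as np

def scaled_dot_product_attention(X, W_q, W_k, W_v):
    """Toy single-head attention: every token attends to every other token.

    X: (n_tokens, d_model) token embeddings.
    The (n x n) score matrix is what makes attention O(n^2) in
    sequence length.
    """
    Q, K, V = X @ W_q, X @ W_k, X @ W_v
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # pairwise token-to-token scores
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ V  # each output vector mixes information from all tokens

rng = np.random.default_rng(0)
n, d = 6, 4  # 6 tokens, 4-dimensional embeddings
X = rng.normal(size=(n, d))
W_q, W_k, W_v = (rng.normal(size=(d, d)) for _ in range(3))
out = scaled_dot_product_attention(X, W_q, W_k, W_v)
print(out.shape)  # (6, 4): one updated vector per token
```

Doubling the passage length quadruples the size of the score matrix, which is why long contexts are so expensive for transformers.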
In contrast, Zhu et al.’s research focuses on replacing that architecture with one that calculates sequentially the relationship between a new token and all the tokens that have preceded it. When paired with rounding model parameters to 1s, 0s, and -1s, the new approach yields dramatic performance gains.
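A rough sketch of those two ideas follows, with illustrative details of our own (the quantization scale, decay constant, and dimensions are placeholders, not the paper's exact recipe):

```python
import numpy as np

def ternarize(W):
    """Round weights to {-1, 0, +1}, scaled by the mean absolute weight.

    Illustrative only; the paper's exact quantization recipe may differ.
    """
    scale = np.mean(np.abs(W)) + 1e-8
    Wq = np.clip(np.round(W / scale), -1, 1).astype(np.int8)
    return Wq, scale

def ternary_matvec(Wq, scale, x):
    """With weights in {-1, 0, +1}, y = W @ x needs only adds and subtracts."""
    pos = (Wq == 1) @ x    # sum of inputs carrying weight +1
    neg = (Wq == -1) @ x   # sum of inputs carrying weight -1
    return scale * (pos - neg)

def recurrent_mix(tokens, Wq, scale, alpha=0.9):
    """Sequential token mixing: each step touches only the newest token and
    a running state, so cost grows linearly with sequence length, unlike
    attention's quadratic all-pairs pass. The fixed decay 'alpha' stands in
    for the paper's learned gating."""
    h = np.zeros(tokens.shape[1])
    outs = []
    for x in tokens:  # one left-to-right pass over the sequence
        h = alpha * h + ternary_matvec(Wq, scale, x)
        outs.append(h.copy())
    return np.stack(outs)

rng = np.random.default_rng(1)
W = rng.normal(size=(4, 4))
Wq, s = ternarize(W)
tokens = rng.normal(size=(6, 4))  # 6 tokens, 4-dimensional embeddings
states = recurrent_mix(tokens, Wq, s)
print(states.shape)  # one state per token, computed in O(n)
```

The key point: once weights are ternary, the dominant matrix multiplications collapse into additions and subtractions, which are far cheaper in silicon.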
The authors demonstrate that training this type of model on an Nvidia accelerator is as inexpensive computationally as training traditional transformer models, and perhaps less expensive for larger GPT-4-class models. More importantly, they show that the new models operate more rapidly and less expensively after training. Indeed, across model sizes, the architecture requires one-tenth the memory and responds five times faster, even when running on Nvidia hardware, as shown in the two charts below.
The authors also demonstrate that computational hardware custom-built for these operations could run extremely efficiently. Moreover, because current GPU architectures are designed around the parallel processing necessary to compute “attention,” this breakthrough could catch existing GPU providers off guard. Using an easily programmable chip called a field-programmable gate array (FPGA), the authors show that a 1.3-billion-parameter model could generate tokens at human reading speed while consuming the same marginal energy as the human brain. Full custom chip design and manufacturing should be able to do substantially better.
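As a back-of-envelope sanity check on that energy claim, using commonly cited figures of our own choosing (roughly 20 watts for the human brain, roughly five tokens per second for reading; these numbers are not from the paper):

```python
# Back-of-envelope energy budget per token, with our own assumed figures.
brain_power_w = 20.0        # assumption: typical estimate for brain power draw
reading_tokens_per_s = 5.0  # assumption: brisk silent-reading pace

joules_per_token = brain_power_w / reading_tokens_per_s
print(f"Energy budget: {joules_per_token:.0f} joules per token")
```

Under those assumptions, matching the brain implies a budget of only a few joules per generated token, orders of magnitude below what a data-center GPU spends today.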
While this breakthrough is still in early days, AI researchers are likely to train models using this new architecture, potentially accelerating the decline in AI costs. Soon, AI that once required the power of a data center could operate on a laptop, then on a smartphone, a smartwatch, and smart glasses.
[1] Keeney, T. et al. 2024. “ARK’s Expected Value For Tesla In 2029: $2,600 Per Share.” ARK Investment Management LLC.
[2] ARK Investment Management LLC. 2024. “ARK-Invest-Tesla-Valuation-Model.” GitHub.
[3] Tesla. 2024. “Tesla Shareholder Meeting Starts at 3:30pm CT.” X.
[4] Whole Mars Catalog. 2024. “Tesla FSD 12.4.1 drives from Santana Row to Stanford.” X.
[5] Musk, E. 2024. “Tesla Robotaxi unveil on 8/8.” X.
[6] On a market-capitalization basis.
[7] ARK Investment Management LLC. 2024. “Big Ideas 2024: Disrupting the Norm, Defining the Future.”
[8] If and when those S-1 Forms are approved, the ETH ETFs may begin trading. On June 13, SEC Chairman Gary Gensler suggested that those approvals could come this summer. See Beyoud, L. 2024. “Ether ETF Could Get Approved This Summer, SEC’s Gensler Says.” Bloomberg.
[9] LTH supply is defined as bitcoin not moved for 155 days or more.
[10] Vaswani, A. et al. 2017. “Attention is All You Need.” arXiv.
[11] Zhu, R. et al. 2024. “Scalable MatMul-free Language Modeling.” arXiv.