FOD#73: Apple Intelligence is undercooked

We discuss Apple’s unfortunate underdelivery and provide a collection of immensely interesting articles, interviews, news, and research papers. Dive in!

This Week in Turing Post:

  • Wednesday, AI 101: A new portion of your favorite ML flashcards!
  • Friday, Agentic Workflows series: The History of Agents (Who’s that JARVIS?!)


The main topic

Not that I intended the wordplay in the title of this newsletter. But when Tim Cook announced on Twitter that Apple Intelligence was here, I got excited – finally, buying the new iPhone was justified. I immediately downloaded the necessary update (iOS 18.1), joined the waitlist (which was a bit annoying), and got approved in about an hour.

I could finally try Apple Intelligence! As you can see, I’m a fan.

I wasn’t prepared for how much it sucks. Believe me, I still think Apple Intelligence could become a powerful AI promoter among the masses. But it’s embarrassing how undercooked it was on launch day.

Here’s what I tried to do with my iPhone today. Starting with the disappointments, here’s where Apple Intelligence falls short (so far!):

  • It doesn’t understand half of the commands and certainly doesn’t converse like ChatGPT.
  • It barely answers questions, usually providing links from the internet instead—often not even related to the question.
  • It struggles to understand whom I ask it to call.
  • It couldn’t write an email to the correct person.
  • I couldn’t figure out how to make the camera recognize what’s in front of it (supposedly, it should be able to do that).
  • I couldn’t find how to create a fancy new emoji (most likely it’s in the next update, but why announce it on the website then?!).

On the bright side, here are some features that worked well:

  • Finally, call recording is here. Apple Intelligence saves a call into Notes, provides a transcription, and can summarize, rewrite, and offer other text options—definitely convenient!
  • It provides “smart tools” in the Apple email app. But I don’t ever use that app since the Gmail app is much more convenient. Yes, now Apple’s app offers these new options for email summarization and reply rewriting—I might give it another try.
  • When you invoke Siri, the whole screen glows – very beautiful (though I’m not sure how useful).
  • It can summarize all notifications you choose, but so far, this feature isn’t that helpful. Not exactly “intelligent,” I would say.

AI needs an optimistic approach, but it also requires honesty. We’ve been through too many AI winters to allow ourselves to overpromise and underdeliver. And if ChatGPT was a true moment of magic, Apple Intelligence reminds me of that Friends episode where Phoebe tries to cover for Joey at a French-speaking audition, insisting he’s fluent, while he actually babbles nonsense. As she tells the director, “C'est mon petit frère. Il est un peu retardé.”

Alas.

But I still like all my Mac devices and hope that with the remaining updates, Apple Intelligence will bring at least a tiny bit of excitement.


We recommend – Learn GenAI tips from NVIDIA, Databricks, Twilio, and more

Today 10,000+ AI professionals will learn how NVIDIA, Databricks, Twilio, and more get their GenAI apps into production! Don't miss GenAI Productionize 2.0 for top best practices, including:

  • How to design an enterprise GenAI stack
  • Techniques for AI governance, evaluation, and observability
  • Proven strategies for getting GenAI apps into production

LAST CHANCE TO REGISTER


Twitter library

Speaking of diffusion models: Ideogram introduced its creative board, Canvas; Midjourney announced an external image editor, image retexturing, and next-gen AI moderation systems; Stability AI open-sourced Stable Diffusion 3.5.


Weekly recommendation from an AI practitioner

We don't usually spotlight big players here, but Anthropic’s new “computer use” feature for Claude 3.5 stands out. With this feature, Claude can view your screen, click, type, and complete tasks autonomously. While promising, limitations like API speed and costs remain hurdles. But you should try it anyway! These are surely the first steps toward highly capable AIs that turn us all into a four-armed Ganesha (the famous Hindu god widely worshipped as the remover of obstacles and the god of new beginnings).


We are reading

  • There was a huge number of blog posts dedicated to Anthropic’s “computer use”; we especially liked this one: it explores how the feature can be exploited via prompt injection to run commands and potentially turn AI-powered systems into “ZombAIs,” posing serious cybersecurity risks (by Embrace The Red).
  • This is a super interesting article – Prompting Considered Harmful – it questions our heavy reliance on prompts (which don’t work consistently – so please don’t pay for the “prompt collections”. Experiment yourself!) and pushes for more intuitive AI interfaces (by Meredith Ringel Morris).
  • Fun interview with Marc Benioff from Salesforce (by Stratechery).
  • And this article is very promising: Carbon dioxide capture from open air using covalent organic frameworks (published in Nature).
  • For all my readers from software development: State of the software engineering job market in 2024 (by Pragmatic Engineer).


News from The Usual Suspects

  • Meta drops NotebookLlama

NotebookLM from Google got a lot of attention, and now Meta wants to steal its thunder. Meta released “NotebookLlama,” an open-source workflow on GitHub, offering a complete guide for transforming PDFs into podcasts using Llama-3 models. Covering PDF processing, transcript writing, and dramatic TTS, this recipe lets the podcast-curious dive deep with customizable settings and experiment with Llama-3 models, Parler TTS, and more. And yes, community contributions are encouraged to take it further.

  • They also step on OpenAI’s toes – Meta Turns to Reuters for AI News Savvy

In a new multi-year agreement, Meta has tapped Reuters as its go-to news source for real-time updates via the Meta AI chatbot. This move, Meta’s first AI-era news deal, lets U.S. users access Reuters’ real-time reporting across Meta’s platforms. The deal provides Reuters compensation, though it's unclear if their journalism will also be used to train Meta’s language models.

  • OpenAI and Microsoft Bet $10M on Local News Innovation

OpenAI and Microsoft have teamed up with the Lenfest Institute, contributing $10 million to a pioneering AI initiative supporting local journalism. Starting with grants for five U.S. metro outlets, the partnership enables these newsrooms to experiment with AI tools like conversational archives and ad analytics, aiming to boost local news sustainability and open-source innovation across communities.

  • Hugging Face pushes boundaries with AutoTrain Advanced

With a few clicks, users can craft state-of-the-art models on Hugging Face Spaces or locally – no coding or heavy-lifting required. Plus, you only pay for what you use. Simple setup, sophisticated outcomes, HF likes to deliver.


There was also a bunch of interesting research papers last week (categorized for your convenience).
