Ep. 5 | What's the latest in AI? The Autumn Update
Hi folks, it's been a minute.
Life happened, summer happened, and then life happened some more. Welcome back to the fifth edition of my newsletter: Perplexing Tech.
Like me, you may find that the pace of innovation in technology - and AI more specifically - is quite overwhelming. For that reason, I've written this edition to give you an overview of where we are up to so far, and what we might expect in the final quarter of the year.
I'll start by giving you a primer on what's going on at OpenAI (because on any given day you could flip a coin and get 'major leader resigns due to safety concerns' or 'huge new development shipped to users' ... in the past couple of weeks we got both, twice!), then I'll run through the major updates of September, and finally touch on some of the things we might anticipate seeing before the year finishes...
In personal news, a fortnight ago I joined peers in Amsterdam for an FEC Data event, where I spoke to industry players about ethics and legislation in responsible AI, as well as major architectural research that will help us capitalise on value in CDD.
And this week, I'm speaking at Delta Capita's ESG breakfast about AI and data in ESG reporting, and then I'm joining the GenAI Risk & Governance panel at the OPTIC conference run by AFME (the Association for Financial Markets in Europe) in the city. Exciting!
First of all, what's going on at OpenAI?
... and who still works there? (clue: it rhymes with Ham Paltman)
Earlier this month, OpenAI launched the o1 model series ("strawberry", to those who have been following along). I wrote about it and my takeaways from the system card.
This is really important, so let me try and explain:
- This is a different, smarter model structure. A reminder that OpenAI uses closed-source models, so we can't know everything about what is going on behind the scenes, but we do know that this is not just a bigger or better LLM architecture.
- It introduces new components, e.g. a 'time-based component' that gives the system 'reasoning tokens' it uses to compute a better response. We don't see this as users, but it means more time is allocated to working through the ask, the next steps, and likely follow-ups (there's a small API sketch after this list showing where those hidden tokens surface).
- This is the first really good example of what we call "chain-of-thought" reasoning. It organises its 'thinking' better, and organises its recommendations better. I use, and would still use, Sonar Large 3.5 through Perplexity for text generation, text review, definitions etc., but I would absolutely now use o1 for difficult questions and - as has become my number one recommended use of AI to friends and colleagues - conversational dialogue.
- A really good outcome from the new model series is that we don't need to give highly tailored prompts. It does sensible structuring for you, whether you ask for it or not.
- I've seen some really good results for code generation and review (I'm going to post a walkthrough video soon of how I used it to build a market-scanning news tool in Python), and I've been using both o1-preview and o1-mini every day since they came out. I've not tried them via the API yet.
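If you do want to poke at o1 programmatically, here's a minimal sketch of what a call could look like via the OpenAI Python SDK. To be clear, this is illustrative rather than my market-scanning tool, and since I haven't touched the API myself yet, the usage fields around the hidden reasoning tokens are an assumption based on OpenAI's documentation rather than something I've verified:

```python
# Minimal, illustrative sketch of calling o1-preview via the OpenAI Python SDK.
# Assumes `pip install openai` and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

# o1 does its own structuring, so a plain question is usually enough.
response = client.chat.completions.create(
    model="o1-preview",  # or "o1-mini" for faster, cheaper runs
    messages=[
        {
            "role": "user",
            "content": "Review these headlines and flag anything that looks "
                       "like a market-moving regulatory story: ...",
        }
    ],
)

print(response.choices[0].message.content)

# The 'reasoning tokens' are spent behind the scenes and never shown in the
# reply. The usage breakdown below follows OpenAI's documented field names,
# which I haven't verified first-hand, hence the defensive getattr calls.
details = getattr(response.usage, "completion_tokens_details", None)
if details is not None:
    print("Hidden reasoning tokens:", getattr(details, "reasoning_tokens", "n/a"))
print("Visible completion tokens:", response.usage.completion_tokens)
```

Worth noting that, at least at launch, the o1 models reportedly ignore or reject several of the usual request options (system messages, temperature and so on), so keep the request simple and let the model do the structuring.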
In less shocking OpenAI news, you may have seen that Sam Altman told staff that the company's non-profit corporate structure will change next year. He reportedly said that the company had "outgrown" its unusual current structure. Colour me shocked!
There's a great read in The Wall Street Journal from the weekend about how complex (and, frankly, incredibly unusual) the journey away from non-profit status is, and why OpenAI has an insane challenge ahead to make it happen. The piece also considers whether Sam Altman could get a 7% equity stake in the company ...
The long-time CTO of OpenAI (and previously interim CEO during "Samgate"), Mira Murati, resigned last week. As did the Chief of Research. As did the VP of Research. Clearly, people want to understand the real reasons behind these departures, and OpenAI's reliance on the development of ChatGPT has left it in a tricky position compared to competitors who are diversifying heavily (see my section below).
Major updates in September
... and the AI status at the major tech players
OpenAI
Google
The tool (Google's NotebookLM) takes notes, PDFs, and other docs and turns them into these insane AI-generated podcasts. The functionality got an upgrade this week, and you have to try it to believe it. You can now upload a YouTube video or audio file (e.g. lectures, long in-depth sessions) and get these really vibrant study guides.
Meta
Side note: you might have seen content from me before about 'Post Digital', and how the acronym DREAM-C covers the technology that defines the next paradigm of modern society. The 'E' is 'extended reality (AR & VR)', and developments like this prove exactly why that 'E' needs to be included. Wearable tech is absolutely going to define our future.
Microsoft
"...the CMA said that it tested Amazon’s arrangements with Anthropic against thresholds for turnover and share of supply, concluding that neither was met, and therefore a further investigation will not be pursued."
In other news ...
- At Lenovo Innovation World 2024, the tech giant went big on pioneering AI devices, unveiling several AI-driven machines including the Yoga Slim 7i Aura Edition and the ThinkPad X1 Carbon Gen 13 Aura Edition. It's anticipated that by 2027, 60% of PCs shipped will be AI-capable.
- Folks got really angry that LinkedIn had been scraping their data for training before updating its terms of service, and before releasing an opt-out feature.
- The US Commerce Department has proposed mandatory reporting requirements for developers of advanced AI models, through the lens of national security. Ultimately, these would require AI developers to provide detailed reports on their development activities, cybersecurity measures, and outcomes from red-teaming efforts. The Bureau of Industry and Security (BIS) is pushing this on the basis that innovation is lowering the barrier to entry for non-experts to do harm with CBRN weapons. They're right - see my earlier post about the o1 models and the results in their system card.
- The AI coding assistant Supermaven has been busy raising cash from OpenAI and Perplexity co-founders. It has a massive context window, which makes code copiloting more effective and reduces hallucinations.
- In my last update, I told you about a major new industry group: the Ultra Accelerator Link (UALink) Promoter Group. Comprising Intel Corporation, Google, Microsoft, Meta and friends, the group was announced to facilitate development of the components that link AI accelerator chips together in data centres. What's happened since?
- I came across an incredibly interesting and well-written post by Raymond Sun this month on "AI security" versus "AI safety". It was fascinating to see a breakdown of some key points from the recently published "Artificial Intelligence Safety Governance Framework" by the China Cybersecurity Standardisation Technical Committee (catchy), and equally interesting to read about the nuance between concepts presented in English vs. Mandarin, and whether there are implications from this. Well worth a read!
What can we expect in the remainder of the year?
If you're struggling to stay up to date with developments in AI and other innovative technologies, let me do the work for you! I post weekly summaries on the feed, and this newsletter is (usually) every fortnight.
Hope you found this interesting - let me know!