This week was a little less dramatic than last week, but it was by no means boring. Here's the latest:
- Axios highlights one of the biggest fears around AI right now: not that AI will become sentient, but that AI-generated images, audio, and video will outstrip our ability to distinguish reality from fiction, letting bad actors cause real societal disruption. This is why it’s incredibly important that large companies (e.g., Microsoft) are stepping into the void to help keep AI from derailing democracy.
- More news from the UK AI Safety Summit: most stakeholders agreed to pre-deployment safety testing (which seems like a table-stakes commitment). The cartoon of the week really nails it. As the BBC asks: how close are we really to alignment on AI safety?
- Algorithms can optimize products for revenue generation. Case in point? Amazon’s algorithms used as part of Project Nessie, which the FTC claims netted Amazon $1 billion in additional revenue.
- As Alex Kantrowitz wrote in Big Technology, OpenAI isn’t invulnerable, so it has to keep innovating. What followed: OpenAI held its developer conference (“DevDay”) and unveiled configurable GPTs. It’s basically the new app store, and people are already building cool things (if you’re looking to build, here’s how to do it with Microsoft). OpenAI also announced that ChatGPT has hit 100 million weekly active users. Here’s Sam Altman’s keynote address. (And Ben Thompson’s write-up of it.) Then… OpenAI got hit by a DDoS attack (they’re back up now).
- I don’t know if xAI will be more like Tesla, SpaceX, X, or some other Musk enterprise that also faces ethical challenges, but it’s brought us Grok (which is somewhere around GPT-3.5 level) in about two months. And it might show up soon in Teslas… Elon also thinks AI will end the need for work, so there’s that.
- At least one survey says: tech experts don’t trust CEOs on AI.
- I don’t know if this is a good or bad thing: California might try to regulate AI before DC gets to it.
- OECD updated its definition of “artificial intelligence” to help inform EU AI Act deliberations.
- On the topic of AI safety, Concordia published a comprehensive look at the state of AI safety in China.
- I hadn’t noticed it so I’m glad VentureBeat flagged it: the FTC filed comments in connection with the Copyright Office’s study of the intersection of AI and copyrights. A16Z also provided feedback, noting that billions of dollars in speculative R&D investment is at risk if copyright issues remain unresolved. If you want to look through other comments (and there are a lot of them), click here.
- AI just negotiated a contract with AI. I guess it’s a combination of applied game theory + generative AI + low stakes.
- Everyone has a think tank and a view on responsible AI these days: the Silicon Valley Leadership Group issued its responsible AI principles.
- LSE researchers looked into the impact of AI use by financial firms and the implications of that use for financial stability.
- Market standard? OpenAI is now also offering to pay legal fees for copyright/IP claims aimed at customers.
- The NYT goes deep on Humane, and it’s pretty exciting stuff.
- Phil Lee posted a helpful diagram on LinkedIn to illustrate the roles played by the forthcoming EU AI Act, the AI Liability Directive, and the revised Product Liability Directive in risk mitigation and allocation of responsibilities.
- Spain’s AI and digitalization minister Carme Artigas suggests that complaints about the AI Act’s impact on start-ups are overblown.
- Kris Shrishak at the Irish Council for Civil Liberties discussed the regulatory powers that might be available to supervisory authorities under the AI Act.
- I care a great deal about AI regulation. Libertarians apparently do. You probably do, too. But many Americans evidently do not.
- The latest steps from GitHub indicate that Copilot will go beyond writing code to helping manage the full software development lifecycle (SDLC).
- The Special Competitive Studies Project and the Johns Hopkins Applied Physics Laboratory created yet another framework for evaluating and addressing ‘highly consequential’ AI risks.
- Microsoft released a course on generative AI via GitHub.
- How often do chatbots hallucinate? It depends on which chatbot you’re chatting with.
- A gambling man? DeepMind co-founder Shane Legg thinks there is a 50% chance we reach AGI in the next 5 years.
- MIT Technology Review dug into the idea of AI watermarking, as did Bloomberg.
- Creative Commons, Wikimedia Europe, and Communia Association weighed in on transparency provisions in the AI Act.
- Although students and teachers are using AI all the time, most U.S. states have issued no guidance on how schools should approach the topic.
- Fascinating: Rudy Arora outlines the impact of generative AI on other ‘frontier’ technologies (e.g., crypto, metaverse).
- The Conversation has a solid high-level piece about the race for AI governance.
- Smart. IBM is investing in a huge AI fund.
- IDC reports that companies investing in AI are seeing massive positive returns. And, as the WSJ reports, large tech companies investing in AI developers are seeing huge gains as a result.
- Amazon is reportedly training a very powerful foundation model code-named ‘Olympus.’
- Microsoft’s new LeMa model is built to mirror human problem-solving techniques.
- The 2024 U.S. elections are promising to be an AI-generated content horror-fest. Here’s how to navigate it.
- DALL·E 3 is apparently very, very good, and artists aren’t very happy about it.
- Speaking of image generation, here is how certain tools are being used to propagate false images of the war in Gaza.
- The UK is spending lots of money to build a very powerful supercomputer for AI-related purposes.
- Is Figma’s FigJam the Midjourney for design?
- When we said that AI could be useful for financial firms, we didn’t mean that insider trading should be a functionality. Or that PE firms should become even more efficient.
- If you aren’t paying attention to the tech development coming out of Africa, you’re missing some serious growth stories.
- Nvidia is reportedly taking their chips destined for China right up to the line of U.S. sanctions (which is risky!). But Baidu is reportedly buying from Huawei now.
- The SheppardMullin team provides a good primer on what to think about in the context of using generative AI with OSS.
- An experiment conducted by the UK’s Royal Society and Humane Intelligence demonstrated that red-teamers were quickly able to get past generative AI safeguards.
- Absolutely bonkers but not unexpected: SlashNext reports that, since ChatGPT came out, there has been a 1265% increase in phishing emails.
- The U.S. Department of Defense unveiled its general AI strategy.
- Makes sense: AI can optimize HVAC systems at scale.
- AI can also optimize for speed and scale in job applications but might make some mistakes along the way.
- The IAPP surveyed what is coming next in state-level AI regulation.
- I’m glad folks like Gary Marcus raise questions. Like whether Cruise is essentially the Theranos of autonomous vehicles, and whether the push for open models is actually a net positive for society (or whether it opens up the potential for catastrophic risks in the pursuit of marginal branding gains for certain AI developers).
- Mozilla called for greater alignment around openness in AI development, and Azeem Azhar discussed how open the UK government is to open AI models.
- Speaking of open models, Dell and Meta announced a plan to push for on-prem Llama 2 deployments.
- If, as various sources suggest, we’re in a ‘cold war’ over AI (horrible framing), then perhaps funding AI developers in rival countries is suboptimal? It’s going in both directions, to be clear.
- Speaking of China, Kai-Fu Lee built 01.AI into a unicorn with a top-tier LLM in just eight months.
- Google expanded their generative AI capabilities in search, and rolled it out to 120 countries.
- Interesting question: if you’re Google and facing constant pressure from other AI developers on one side and pressure to not rock the search/public relations boat on the other, what do you do? And what if generative AI is negatively impacting the veracity of search results?
- AI is coming to IVF, with positive and potentially negative implications.
- “Now and Then” was completed with a little help from AI, despite George Harrison viewing it as “$@*&ing rubbish.”
- If you’re starting to think about holiday gift-giving, here are some AI-related ideas.