It's been a busy two weeks, so I didn't get around to publishing a newsletter last week. But that just means I packed more into this edition. As always, I hope it's helpful/insightful, and feel free to share with others.
- The White House and leadership from seven major tech firms announced responsible AI commitments at the end of last week. Reportedly, the White House is hoping to do even more (and soon), since we’re obviously in the ‘early days’ of AI regulation. If you want to learn more about the direction regulations may take, Vox has you covered.
- Following that, Microsoft, Anthropic, Google, and OpenAI launched the Frontier Model Forum to coordinate on risk mitigation for powerful models. One way they’re working on that is by using AI labeling schemes (e.g., C2PA or even unicode).
- Let’s be honest: most “open” AI isn’t really open-source software, or even all that open. Seriously. If you’re interested, here’s an epic Twitter thread on open-source model performance. Meanwhile, GitHub, Hugging Face, and others are calling for more open-source protections under the EU’s proposed AI Act (joining developers of proprietary systems in lobbying).
- Let’s throw out the Turing test: ChatGPT passed it, and it’s clearly not actually intelligent.
- Lessons from Capitol Hill: if you want to get something passed, try to stick it in the big defense spending bill.
- Some are worried that Europe’s AI Act will kill AI innovation on the continent. Others are thinking that the EU’s sense of tech FOMO will ultimately help foster support for AI innovation.
- Stability AI scored a win in court. But the lawsuits keep spreading: now, Cigna is being sued for using AI to deny patient claims.
- Sam Altman is pushing Worldcoin as a solution to questions of authenticity in an age of AI. But if the solution is that unappealing, perhaps we should instead be asking why we want to create the problem in the first place.
- Big question to resolve: when AI makes up something about a person and presents it as a fact, who is responsible for the damages?
- MIT Tech Review has a list of the ways in which AI might transform American politics.
- Since AI-augmented political ads are inevitable in this election cycle, here’s a useful primer for how to distinguish AI from reality.
- Banking interns, it appears, are all in on ChatGPT. So are hedge funds. And (American) banks. Which might cause the next financial crisis, according to the SEC’s Gary Gensler.
- Finance types are unlike skeptical lawyers, most of whom aren’t using AI. Perhaps that’s because AI might crater firm profitability.
- AI could make health care so much better. For example, AI can improve breast cancer diagnosis, improve other diagnostic efforts, and help design appropriate hypertension treatment protocols. And AI can help make diagnostic errors a thing of the past. This is why companies like AWS are offering generative AI resources for health care companies. Related: Doctors probably shouldn’t use ChatGPT for patient notes.
- Generative AI might be damaging Stack Overflow, but the company is investing in its own AI tools to help developers.
- Valve is blocking developers from using generative AI unless they can demonstrate non-infringement. Meanwhile, other gaming groups are going gangbusters for generative AI.
- Protection of AI systems is apparently a hot business.
- Watch out, the glacier is moving way faster: Congress is advancing a bill calling for an 18-month study on AI accountability.
- The EEOC is warning about AI biases (and efforts to control them).
- If you are looking for a plain English explanation of how LLMs work, this might be the best one I’ve seen.
- A top policy advisor on the EU Parliament side argues for the Parliament draft of the AI Act as a mechanism for increasing competition for developers of foundational models (while also meeting civil society expectations regarding human rights).
- A few weeks ago, hundreds of business executives warned about the impact of the proposed AI Act. Hundreds of civil society advocates just responded.
- The American Chamber of Commerce issued their position on the proposed AI Act.
- European rights-holders and creators argued for appropriate transparency requirements for AI.
- The best use of AI is to get rid of boring, tedious work.
- We are seeing an unfortunate rise in the use of AI for interviewing employment candidates.
- And we’re also seeing some sketchy uses of AI to try to predict when employees might resign.
- Do AI developers have Oppenheimer moments?
- Medium is taking a stand: no AI-generated content is welcome there.
- The U.S. is leading on AI but continued success is not inevitable. Pablo Chavez published an incisive essay and analysis in Lawfare that outlines the geopolitical issues relating to AI development and argues that the U.S. needs to exert strong leadership here.
- The team at Modern Diplomacy is writing a series of essays on the impact of AI on economies and warfare. And Palantir’s CEO keeps arguing for AI-augmented weapons (why he’s the one beating that drum, I’m not entirely sure, but he’s being joined by other related Lord of the Rings-influenced companies).
- It’s still not clear whether generative AI is going to help or hurt hackers.
- What scares many about generative AI is that it’s a black box (sort of).
- Most companies want to “do something about AI” but a majority aren’t resourced to do it.
- Nathan Lambert kicks the tires on Llama 2 and points out some shortcomings.
- Very useful AI: Wayfair is offering AI to help people reimagine how their homes could look (of course it helps sell furniture on wayfair.com).
- AI training data is like a gene pool: when generative AI trains on synthetic content and then feeds its own output back in as training data, artifacts eventually get amplified through a self-consuming loop.
- Meta will reportedly be embedding various AI helper bots throughout its platforms. It’s part of a broader strategy towards AI ubiquity.
- AI hiring is highly, highly concentrated in a few cities.
- Ethan Mollick wrote a good post re: the “strange tide of generative AI.”
- Generative AI is going to create lots of opportunities for AI consultants.
- And SAP spoke with Axios on the spending needs associated with going big on AI.
- McKinsey thinks that middle managers will hold the key to unlocking the value of AI.
- Many lower-income, white-collar occupations will be disproportionately impacted by AI.
- Alexa is going to receive a generative AI reboot.
- Not quite an A-lister salary, but Netflix is offering pretty high compensation for AI product managers.
- If you want to compete for the lead in AI, you have to be prepared to spend big. And work hard: at Google, Sergey Brin is jumping back into action. See also: Intel wants to put AI in everything.
- In the more mundane world of recommendation algorithms, there has been a series of articles published recently regarding how Facebook’s models work.
- Smart by Nvidia: invest heavily to help your customers’ businesses grow (so they’ll need more chips and all).
- Fascinating: AI2 unveiled its AI2 ImpACT license program.
- Photoshop’s AI tools now let you ‘uncrop’ photos.
- Oh boy. An AI-powered ‘news’ channel will produce news clips tailored to the viewers’ political perspectives.
- Google’s Assistant is getting AI updates.
- Axios published a deeper look at Apple’s moves into the generative AI landscape.
- Microsoft and Leidos are reportedly partnering to expand AI use in the public sector.
- MIT announced “PhotoGuard” to protect images from AI edits. Here’s more on how to use it. Good news! But on the other hand, OpenAI shut down its “AI detection” tool since it was pretty ineffective. Maybe Instagram’s tool will be better?
- IBM and Hugging Face are releasing a climate change-oriented foundation model, and Microsoft is using AI to help address wildfire risks.
- An AI startup is hoping to help diesel-powered trains clean up their act.
- The AI arms race means that, evidently, Nvidia is facing insane demand for its chips.
- Fast Company argues that the AI boom is saving San Francisco.
- Michael Dempsey published a long post on how to think about R&D and capital development in the AI industry.
- Someone put together a list of the best AI-related newsletters where you can read deeper dives on all of the above topics.
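An aside on the “even unicode” labeling idea mentioned in the Frontier Model Forum item above: one low-tech version of content labeling is hiding a provenance tag in invisible zero-width characters. This is purely a toy sketch of that concept (the function names and encoding are my own assumptions, and real schemes like C2PA instead attach signed metadata, since zero-width marks are trivially stripped):

```python
# Toy sketch: label text with invisible zero-width unicode characters.
# Illustrative only -- not how C2PA or any production scheme actually works.
ZW0, ZW1 = "\u200b", "\u200c"  # zero-width space = bit 0, zero-width non-joiner = bit 1

def embed_label(text: str, label: str) -> str:
    """Append the label, encoded as zero-width characters, to the text."""
    bits = "".join(f"{byte:08b}" for byte in label.encode("utf-8"))
    return text + "".join(ZW1 if bit == "1" else ZW0 for bit in bits)

def extract_label(text: str) -> str:
    """Recover the hidden label by collecting the zero-width characters."""
    bits = "".join("1" if ch == ZW1 else "0" for ch in text if ch in (ZW0, ZW1))
    data = bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))
    return data.decode("utf-8")
```

The labeled text looks identical on screen, which is exactly why this approach is fragile: copy-paste through a plain-text filter erases the mark.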
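The “self-consuming loop” item above (models training on their own synthetic output) can be illustrated with a tiny simulation. This is my own toy analogy, not the setup from any particular paper: repeatedly fit a Gaussian to samples drawn from the previous fit, and watch the estimated spread of the distribution collapse over generations.

```python
import random
import statistics

def collapse_demo(generations: int = 500, n: int = 20, seed: int = 0) -> list[float]:
    """Simulate a model repeatedly retrained on its own synthetic output.

    Each 'generation' draws n samples from the current fitted Gaussian,
    then refits mean and standard deviation to those samples. Returns the
    fitted standard deviation at each generation.
    """
    rng = random.Random(seed)
    mu, sigma = 0.0, 1.0  # the "true" distribution we start from
    stds = [sigma]
    for _ in range(generations):
        sample = [rng.gauss(mu, sigma) for _ in range(n)]
        mu = statistics.fmean(sample)     # refit on purely synthetic data
        sigma = statistics.stdev(sample)  # small-sample noise compounds
        stds.append(sigma)
    return stds
```

With small per-generation samples, estimation noise compounds and the fitted spread drifts toward zero: the distribution narrows, which is the statistical analogue of the “artifacts get amplified” point.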