This week's newsletter highlights the ways in which everything impacts AI and in which AI impacts everything. Hopefully you all find it as interesting as I did. And, as always, feel free to share with friends, family, colleagues, and anyone else.
I'm planning to take next week off, but will be back in two weeks!
- If ChatGPT helps you write the code for your app, how is copyright applied? ZDNet asked a variety of experts for their views.
- Broken clocks are right twice a day: the Cato Institute flags that the GDPR and similar regulations may have impeded the development of AI services in the EU. It’s a fair point, and the AI Act might prove even more impactful than the GDPR in some respects, which is why many in the European tech community are concerned about the AI Act proposal.
- The pace of deliberation over the Act suggests that regulators and lawmakers are being thoughtful, but it also highlights how quickly technology outpaces regulation (that’s true in the U.S. as well, even if you take the view that U.S. leadership on AI regulation is critically important).
- If you’re going to propose a set of rules or a code of conduct for AI derived from a legislative proposal, it probably makes sense to lock down the legislation before working on the code of conduct.
- If you’re a consultant looking to make bank for the next several years, start talking about how AI is going to transform the economy (and how you can help companies navigate the changes).
- Theme of the week? People are going to throw lots of money at AI in all sorts of ways – from acquisitions to investments to R&D to talent.
- Check back on this in a year or two: Casey Newton suggests that AI is “eating itself” by starting to dominate the creation of new content.
- Meta leaned heavily into transparency and explainability for their feed-related models, ostensibly to educate consumers but also probably to get ahead of Digital Services Act compliance obligations and similar risks.
- I spoke on a PLI panel about this last week, but the “open vs. closed” AI development debate is gaining broader traction. This week it headlined one of the Axios morning newsletters! On that topic, one of the open models (from MosaicML) is reportedly outperforming GPT-3, and the announcement of their model’s performance was quickly followed by news that Databricks bought them for $1.3B (here’s why they bought them).
- The team over at ‘AI Snake Oil’ has a good point: we should probably expect to see AI-related transparency reporting from large model deployers soon.
- If you wanted to know how Mistral raised so much money when the company had barely been formed, here is their strategy memo.
- Different tech, same story: immigrants are playing an outsized role in AI development in the U.S.
- The FTC issued a set of comments regarding potential competition issues and unfair trade practices in the generative AI market. Here’s another good read on the competition concerns in the AI market.
- Congress is requiring that lawmakers and staff use enterprise AI models only.
- Here is a great paper on how to think about the external/societal impact of generative AI.
- Sobering: certain Buddhist monks think AI is helping to bring about the end of humankind. Conversely, the Vatican, in conjunction with Santa Clara University, issued a guide to AI ethics.
- And if you’re curious which experts are worried about threats from LLMs, the IEEE has a chart of who thinks what.
- Intel published a short essay on how to think about environmental impacts from AI development.
- What do Chinese bureaucrats and billionaires have in common? For one thing, a desire to compete with the U.S. in AI. That might be a bit harder if additional export restrictions on chips come into effect, or if China’s innovative AI start-ups continue to be quickly acquired.
- Speaking of chips and China, ByteDance is spending lots of money on GPUs!
- The National Artificial Intelligence Advisory Committee (NAIAC) held a series of meetings with industry leaders on the state of AI, and also reported out a set of recommendations to help with designing trustworthy AI systems.
- The UK is trying to turn itself into the European epicenter for AI. It appears to be working (as London now has the lion’s share of AI talent in Europe) and OpenAI just announced that it will be opening an office in London.
- But, to be clear, SF is the AI capital of the world, right?
- Evidently, Zuckerberg/Meta are aligned with the EU’s approach in the proposed AI Act.
- AI is powering a new wave of clickbait and major advertisers are placing ads on those spam sites.
- Tired: “I’m a software engineer.” Wired: “I’m an AI engineer.”
- There are probably other ways to do this, but New York is planning to use a supercomputer to help it figure out how to regulate AI.
- India’s Economic Times suggests relatively marginal impacts to the Indian IT workforce as a result of generative AI.
- Amazon is taking an interesting (and possibly quite smart) approach: offering AWS customers and others a choice of open models so they can use the best AI for a particular purpose.
- About 35% of Y Combinator’s latest round of startups are AI-focused.
- Typeface just joined the billion-dollar start-up club.
- AI-powered tools will save lives but cost jobs in the agricultural sector.
- The Verge published a survey that shows how people are actually using generative AI tools.
- Mission Accomplished? GitHub is saying the use of AI is now normalized for developers.
- Attention, Canadian lawyers practicing in Manitoba: there are now court rules regarding disclosure of the use of AI.
- The NIH points out that using generative AI tools to help in the peer review process generally results in violations of confidentiality.
- A Forbes article outlines something that makes a good deal of sense: most executives do not want to remove humans entirely from AI decision-making processes.
- DJs needed taste to discern what songs would land. AI can use predictive power to do the same thing.
- DeepMind is claiming their next model, Gemini, can outperform GPT models.
- The EU is setting up ‘crash test’ sandboxes for AI deployment.
- There is a growing cadre of AI skeptics who call into question the entire narrative emerging around AI development.
- Horrible but expected: without rules in place, AI will shape the 2024 U.S. election.
- Oh, come on! A recent study shows that AI-generated tweets are more likely to be believed than tweets from people.
- Some of the worst: generative AI is helping sextortion scammers.
- Wonderful! Researchers using AI to target influenza have found several promising prospective treatments.
- Safety first! AI deployment in healthcare may be more cautious than elsewhere, given the stakes. Of course, it’ll ultimately transform the way doctors work; that’s almost certainly a given at this point.
- DeepMind suggests that AI will play an outsized role in combating climate change.
- Wall Street got a warning from the OCC on the use of AI.
- BlackRock explains why AI is not the next ‘metaverse’ or ‘web3’.
- If you’re looking for a glossary for what some of the above terms (and many more AI-related terms) mean, the AIPP has a new glossary for you.