- Politico hits the nail on the head: in an era of convincing generative AI-created content, how can anyone trust what they see? More importantly, what does that mean for epistemology and how ordinary people trust media and content (e.g., the news)? It’s a sobering question and highlights how AI can be abused to erode societal values. But as Audrey Tang, Taiwan’s digital affairs minister, explains, the situation isn’t inherently dire and AI can be used to support democratic norms.
- Worth a read, if you care about the intersection of data protection and AI: the EDPS opinion letter regarding the AI Act draft.
- MEP Dragoș Tudorache gave Euronews an update on the AI Act trilogue negotiations. The thorniest parts of the AI Act are, of course, the hardest ones on which to reach agreement.
- A group of leading AI researchers published a good set of recommendations for mitigating risks presented by AI systems.
- The UN announced the creation of an Advisory Body on AI. Perhaps it’ll ultimately help lead to the equivalent of a CERN for AI?
- Making good on their promises, a group of major tech firms announced the creation of the Frontier Model Forum.
- Pablo Chavez makes the argument (and I’m quite inclined to agree with it) that the U.S. is actually making great strides in advancing AI regulations. In a separate post, he also highlighted the similarities and (often subtextual) distinctions between the G7 and Chinese AI governance principles.
- Much was made of the Foundation Model Transparency Index (FMTI) that a Stanford team released last week. As people have had time to digest it, more (lengthy) opinions on the utility and validity of the findings are circulating.
- The Senate held another hearing on AI on Tuesday, with more to come. Axios gave a rundown of the different voices and perspectives being brought to the table.
- Messaging matters. And headlines like “AI Could Spur an Economic Boom. Humans Are in the Way.” are not going to help build trust in AI developers.
- Anthropic is trying to establish a ‘constitutional’ approach to AI governance but is running into a unique challenge: what Anthropic thinks is important doesn’t line up closely with what most people find important.
- One school of thought holds that essentially all generative AI models are (or should be) unlawful, or at least in great legal peril, on IP, data protection, and other grounds. One group not in that school? Venture capitalists. (For what it’s worth, I’m of the school that Pandora’s box has been opened and isn’t likely to be closed again.)
- Ethan Mollick makes a great point: if we have multiple powerful models available to nearly everyone on the planet, how do we focus on using those tools in the most useful or “best available” manner?
- Leica is releasing a camera with content authentication built into its image processing. More devices will likely follow and, maybe at some future point, non-authentication will be a mark of suspicion (as Adobe’s GC suggests). Google also unveiled some image authentication tools for images found on the internet. (For a rough sense of how device-level signing works, see the sketch after this list.)
- Someday, maybe Monday, maybe another day: the White House will eventually issue an EO regarding AI. It promises to be a grab bag of industry rules, national security restrictions, and more, but Congress is already suggesting it doesn’t go far enough. (To be fair, regulation via the legislative process may be better than regulation via executive order, but that would require Congress to function, something Sen. Amy Klobuchar believes is possible.)
- Interesting: if there’s an AI arms race between the U.S. and China, ScaleAI is aiming to become the leader of the next version of the military-industrial complex. And I think we now know which side HuggingFace might support.
- I missed this last week: the U.S. and Singapore agreed on developing interoperable AI governance frameworks. Brookings is encouraging similar engagement with China.
- UK PM Sunak is pushing countries to force the labeling of AI that is capable of generating ‘catastrophic harms’. Of course, he might want to look at the UK government’s own use of AI, which is potentially increasing efficiency but is reportedly coming with some pretty significant negative consequences.
- On the topic of AI-created harms: Yoshua Bengio suggests that there needs to be an organization dedicated to defending humanity from AI. He also co-authored a recent paper outlining how to pair AI progress with support for democratic processes.
- But, hey, Yann LeCun keeps saying there’s no real reason to worry (yet).
- Proposals for regulations often (in democratic societies) offer an opportunity for stakeholders and the public to weigh in. But what if AI, rather than regulators, reviews the comments? Or, even worse, what if AI floods the commentary zone?
- If you’re interested in supporting organizations that are fighting for more transparent and ethical AI, then this article in ZDNet is for you.
- I guess if you run a social media platform and you don’t like the laws in a particular jurisdiction relating to content moderation, you might choose to leave the market?
- If you ask a sample of people to compare ethical advice from ChatGPT versus the guidance given by the NYT’s Ethicist, you might be surprised by the results.
- Apple is reportedly working hard to catch up in the generative AI space, but spending $1 billion a year on generative AI reportedly isn’t keeping up with how much competitors are spending.
- Korea’s Personal Information Commission has created an AI-oriented task force.
- Members of Congress say that a data privacy law is foundational to AI regulation in the U.S., but that makes me wonder if we’ll have to wait decades for AI regulation (as we’ve had to for general privacy legislation at the federal level). More here.
- Massive AI startup valuations aren’t just a Western thing: China’s Baichuan raked in $300 million in their latest round and Zhipu hit nearly $350 million.
- Cities are the labs for experiments with AI and governance.
- Kaggle published a cool report on the state of AI in 2023.
- AI can be used to develop novel proteins and drugs, but it’s not clear how well they translate to practice.
- Let’s face it, AI healthcare tools get things wrong. Sometimes, it happens as a result of hallucinations. Other times, it comes from human biases that translated into racist views in the models.
- Sobering and worth a read: Wired took a look at the millions of people training AI models for pennies a day.
- Gary Marcus takes aim at the AI hype cycle.
- If, as reported, Jon Stewart walked away from Apple TV over differences regarding AI and China, that prompts a whole slew of questions.
- Mind-blowing: it’s faster/cheaper to train robots in a virtual environment, so that’s just what Nvidia and Meta are doing.
- What’s in a name? A look at “AI”, “machine learning”, and how they’re used by different parties for different purposes in the AI value chain.
- Microsoft is investing significantly in Australia with a focus on AI, cloud computing, and information security.
- Gen Z workers have plenty of thoughts about the role of AI in the workplace.
- People talk about AI saving office workers from drudgery; robots might save warehouse workers from actual injury.
- MIT’s Lincoln Lab is figuring out ways to reduce the energy costs of AI development/deployment.
- Hoo boy, Amazon is bringing conversational AI to kids.
- A McGill team offered a critique of Canada’s Artificial Intelligence and Data Act, focused on its potential impact on prosperity.
- YELLING IN ALL CAPS is bad form on the internet but might be helpful in LLM-based programming (where ‘tone’ is important); a tiny example appears at the end of this list.
- If you train AI models with images of attractive people, is it really surprising that they will provide attractive people as outputs?
- Stack Overflow’s blog has an interesting post relating to the management of data protection when building with generative AI.
- Good thought piece: what if there were a massive open training data set for robotics purposes?
- SemiAnalysis offered up a useful overview of the current state of AI chip manufacturing export controls/restrictions.
- Scientific American looks at how nearly all of the Internet became an AI data training ground.
- Want to understand Large Multimodal Models? Chip Huyen has you covered.
- Glad this got some good press: LinkedIn’s InfoSec team found a way to use generative AI to take care of more mundane tasks.
- Bill Gates evidently thinks that generative AI has hit a plateau.
- The WSJ looks at how generative AI will quickly reshape most American consumers’ experience on the internet.
- If you work at an AI startup (or in tech, more generally), odds are that Okta is being used for identity management.
- Nature published a great state-of-play article regarding the “AI revolution” in medicine.
- Andrew Ng is offering a free course on generative AI.
- Google has reportedly been developing a powerful multi-modal tool code-named “Stubbs.”
- As tech company earnings reports come in, it looks like AI bets are starting to pay off, resulting in major market gains. But convincing customers to pay for a company’s AI features isn’t always easy as competition heats up.
- Business Insider reports that lots of job candidates are using ChatGPT (and poorly) in their job search process.
- Reddit is evidently planning to fight back against generative AI developers.
- There’s evidently good money in trying to clean up AI training data.
- Politico dug into the AI lobbying industry swarming Washington these days.
- Yep: AI might finally kill captcha.
- Checks out: there’s also good money in the “AI for insurance” market.
- Qualcomm is building chips to bring better generative AI capabilities to smartphones. It makes sense for a variety of reasons.
- The robotaxi fleet cruising SF will be a little smaller now.
- Generative AI tools are driving improvements in scam email quality.
- Yet another paper points out that “AI detection tools” don’t work.
- Just like every other tech boom, students are dropping out of school to get in on the action.
- When we look back at 2023 from the future, will we think of the explosion of AI tools as being like a ‘first flight’ moment?
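For the Leica item above, here is a minimal sketch of the idea behind device-level content authentication, assuming Python and the `cryptography` package. It is not the actual C2PA/Content Credentials format, and the key handling is deliberately simplified (real devices keep the signing key in secure hardware and use certificate chains); it only shows the core sign-then-verify concept.

```python
# Toy illustration of device-level content authentication: the capture device
# signs the image bytes plus a small provenance manifest, and anyone with the
# maker's public key can later verify that neither has been altered.
# NOTE: simplified sketch, not the real C2PA format or key-management scheme.
import hashlib
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# In a real camera, the private key would live in secure hardware.
device_key = Ed25519PrivateKey.generate()
maker_public_key = device_key.public_key()


def sign_capture(image_bytes: bytes, metadata: dict) -> dict:
    """Build and sign a minimal provenance manifest at capture time."""
    manifest = {
        "image_sha256": hashlib.sha256(image_bytes).hexdigest(),
        "metadata": metadata,
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    return {"manifest": manifest, "signature": device_key.sign(payload)}


def verify_capture(image_bytes: bytes, record: dict) -> bool:
    """Check that the image matches the manifest and the signature is valid."""
    manifest = record["manifest"]
    if hashlib.sha256(image_bytes).hexdigest() != manifest["image_sha256"]:
        return False  # the image was modified after signing
    payload = json.dumps(manifest, sort_keys=True).encode()
    try:
        maker_public_key.verify(record["signature"], payload)
        return True
    except InvalidSignature:
        return False


if __name__ == "__main__":
    photo = b"...raw image bytes..."
    record = sign_capture(photo, {"device": "example-camera", "ts": "2023-10-27"})
    print(verify_capture(photo, record))                # True
    print(verify_capture(photo + b"tampered", record))  # False
```

The real standard additionally binds the signature to a certificate chain identifying the device maker and embeds the manifest in the image file itself; the sketch skips all of that.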
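And for the all-caps item: a toy sketch of what emphasis-as-‘tone’ looks like when assembling prompts in Python. `call_llm` is a hypothetical stub standing in for whatever model client you actually use; the only point is the contrast between the two system prompts.

```python
# Toy sketch of "emphasis as tone" in prompt construction. `call_llm` is a
# hypothetical placeholder, not a real API; swap in your own client.

BASE_PROMPT = (
    "You are a coding assistant. Return only a JSON object with keys "
    "'summary' and 'code'. Do not include any prose outside the JSON."
)

# The same instruction, with the failure-prone constraint emphasized in caps.
EMPHASIZED_PROMPT = (
    "You are a coding assistant. Return ONLY a JSON object with keys "
    "'summary' and 'code'. DO NOT include ANY prose outside the JSON."
)


def call_llm(system_prompt: str, user_message: str) -> str:
    """Hypothetical stand-in for a real chat-completion call."""
    return f"[model response to system={system_prompt!r} user={user_message!r}]"


if __name__ == "__main__":
    task = "Write a function that reverses a linked list."
    for prompt in (BASE_PROMPT, EMPHASIZED_PROMPT):
        print(call_llm(prompt, task))
```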