Godfathers of AI feud over degree of AI existential risk
...And we are back with more AI-related news, updates and insight into the impact AI is making in tackling global grand challenges.
---
Packed inside
If you are enjoying this content and would like to support the work, you can get a plan here from £2/month!
__________________________________
Key Recent Developments
---
Godfathers of AI feud over whether AI is an existential risk
What: "In a series of online articles, blog posts and posts on X/LinkedIn over the past few days, AI pioneers (sometimes called “godfathers” of AI) Geoffrey Hinton, Andrew Ng, Yann LeCun and Yoshua Bengio have amped up their debate over existential risks of AI by commenting publicly on each other’s posts. The debate clearly places Hinton and Bengio on the side that is highly concerned about AI’s existential risks, or x-risks, while Ng and LeCun believe the concerns are overblown, or even a conspiracy theory Big Tech firms are using to consolidate power."
Key Takeaway: Against the backdrop of increasing pressure on regulators to act to ensure AI safety, there is an emerging and equally powerful voice calling for AI existential risks to be put in context rather than overblown. The fear is that heavy-handed regulation could shut down open research and open-access models, which would only further consolidate power among the largest research labs.
In reality, a nuanced approach of context-specific regulation is likely the answer: one that distinguishes large labs with billions at their disposal from smaller or open-source projects.
---
Biden joins the AI regulation party
What: The Biden administration has recently released an executive order on the safe, secure and trustworthy development and use of AI. "The order directs various federal agencies and departments that oversee everything from housing to health to national security to create standards and regulations for the use or oversight of AI... The order invokes the Defense Production Act to require companies to notify the federal government when training an AI model that poses a serious risk to national security or public health and safety."
---
Commission welcomes G7 leaders' agreement on Guiding Principles and a Code of Conduct on Artificial Intelligence
What: G7 leaders have reached an agreement on a voluntary code of conduct for AI, which will complement the incoming and legally binding EU AI Act. See below for a selection of the principles:
Key Takeaway: In a week that also saw the UK AI Safety Summit and the USA's recently announced executive order on AI, it's clear governments are beginning to take AI regulation very seriously. The question remains how this will affect grassroots innovation that lacks the resources to navigate the myriad of voluntary standards and mandatory laws. If the existential or societal risk is high enough, is this a price that has to be paid regardless?
__________________________________
AI Ethics and AI for good
Other interesting reads
Papers & Repos
__________________________________
Cool companies found this week
Multimodal AI
Jina AI - Supporting the deployment of multimodal AI applications with ~$37m raised.
Twelve labs - Multimodal video understanding. Extract key features from video such as action, object, text on screen, speech, and people.
__________________________________
Thanks for reading and we'll see you next week!
If you are enjoying this content and would like to support the work, you can get a plan here from £2/month!
___________________________________