Meta's AI practices

The latest developments in AI policy & regulation | Edition #108

Dear readers, I'm moving this newsletter from LinkedIn to my website, where I have more autonomy and can build a closer relationship with readers. Please re-subscribe here. Once you receive the welcome message in your inbox, you can safely unsubscribe from this LinkedIn publication. Thank you!


Hi, Luiza Jarovsky here. Welcome to the 108th edition of this newsletter on AI policy & regulation, read by 26,500+ subscribers in 135+ countries. I hope you enjoy reading it as much as I enjoy writing it.

Gift: The next cohorts of the EU AI Act Bootcamp and the Emerging Challenges in AI, Tech & Privacy Bootcamp start next month. Get 10% off using the code JUNE-10-OFF by Friday (June 21). More than 750 professionals have attended our training programs at the AI, Tech & Privacy Academy - don't miss them!


A special thanks to Usercentrics for sponsoring this week's free edition of the newsletter. Read their article:

Are all types of browser-based cookies being phased out? Or just third-party ones? Companies have relied heavily on cookies for their marketing data for a long time. But things are changing — for the better. Privacy-led marketing practices enable you to get high-quality user data and customer preferences with consent. Achieve regulatory compliance while delivering better customer experiences. Learn more in this article from Usercentrics.


Meta's AI practices

If you use Meta's products (Facebook, Instagram, etc.) and are worried about your personal data being used to train AI, read this:

Meta indeed uses posts and pictures to train its AI models. Here are a few quotes from its Generative AI Privacy Policy:

"We use information that is publicly available online and licensed information. We also use information shared on Meta’s Products and services. This information could be things like posts or photos and their captions. We do not use the content of your private messages with friends and family to train our AIs."

"When we collect public information from the internet or license data from other providers to train our models, it may include personal information. For example, if we collect a public blog post it may include the author’s name and contact information. When we do get personal information as part of this public and licensed data that we use to train our models, we don’t specifically link this data to any Meta account."

"Even if you don’t use our Products and services or have an account, we may still process information about you to develop and improve AI at Meta. For example, this could happen if you appear anywhere in an image shared on our Products or services by someone who does use them or if someone mentions information about you in posts or captions that they share on our Products and services."

Summarizing Meta's AI practices:

1- Meta trains its AI with user posts and pictures;

2- Personal information is included in the AI training dataset;

3- Non-users are also affected (e.g., if a user posts a picture of a non-user).

What can you do about that? There are opt-out forms, such as this one, but it's unclear how effective they are in practice.

Regarding EU-based users, the non-profit noyb urged 11 EU data protection authorities to “immediately stop Meta's abuse of personal data for AI.” noyb stated:

“The objection is a farce. Meta even tries to make users responsible for taking care of their privacy by directing them to an objection form (opt-out) that users are supposed to fill out if they don't want Meta to use all their data. While in theory an opt-out could be implemented in such a way that it requires only one click (like the 'unsubscribe' button in newsletters), Meta makes it extremely complicated to object, even requiring personal reasons. A technical analysis of the opt-out links even showed that Meta requires a login to view an otherwise public page. In total, Meta requires some 400 million European users to 'object', instead of asking for their consent.”

noyb's actions worked: less than 10 days after its complaints, the Irish Data Protection Authority announced that Meta had paused its plans to train its AI models using data from EU/EEA users:

“The DPC welcomes the decision by Meta to pause its plans to train its large language model using public content shared by adults on Facebook and Instagram across the EU/EEA. This decision followed intensive engagement between the DPC and Meta. The DPC, in co-operation with its fellow EU data protection authorities, will continue to engage with Meta on this issue."

Here's how Meta responded. A quote:

“We’re disappointed by the request from the Irish Data Protection Commission (DPC), our lead regulator, on behalf of the European DPAs, to delay training our large language models (LLMs) using public content shared by adults on Facebook and Instagram — particularly since we incorporated regulatory feedback and the European DPAs have been informed since March. This is a step backwards for European innovation, competition in AI development and further delays bringing the benefits of AI to people in Europe.”

And this is what noyb said about Meta's response:

“Meta worried about EU users? Just ask for opt-in consent! As with all Meta messages, there is no shortage of reframing and disingenuous claims in this one. Meta emphasises that EU/EEA users will not be able to use AI services for the time being. However, this does not seem too logical. The GDPR allows for almost anything, as long as users give (valid) opt-in consent. Meta could therefore roll out AI technology in Europe if it only bothered to ask people for their consent. But it seems that Meta is doing everything it can to never get opt-in consent for any processing.”

For the rest of the world, nothing changes, and everything posted on Facebook, Instagram, and other Meta products is fair game to train its AI models.

To learn more about the intersection of AI and data protection, join our 4-week Bootcamp on Emerging Challenges in AI, Tech & Privacy (the 8th cohort starts in July).



Great AI paper

Ian Ayres & Jack M. Balkin published the paper "The Law of AI is the Law of Risky Agents without Intentions." A must-read for everyone in AI.

Quotes:

"A recurrent problem in adapting law to artificial intelligence programs is how the law should regulate the use of entitles that lack intentions. Many areas of the law, including freedom ofspeech, copyright, and criminal law, make liability turn on whether the actor who causes harm(or creates a risk of harm) has a certain intention or mens rea. But AI agents—at least the oneswe currently have—do not have intentions in the way that humans do. If liability turns on intention, that might immunize the use of AI programs from liability." (page 1)

"The two strategies of ascribing intention and imposing standards of behavior based on an imagined intention are mirror images of each other. The first strategy says 'regardless of your intentions, the law will treat you as if you had a particular intention and regulate or penalize you accordingly.' The second strategy says 'regardless of your actual intentions, the law will measure your conduct by the standard of a hypothetical person with a particular mental state and regulate or penalize you if you do not live up to that standard.' We propose that the law regulate the use of AI programs through these two strategies. (...)" (page 3)

"The spread of AI technology will likely require changes in many different areas of the law. In this essay we’ve argued for viewing AI technology not in terms of its independent agency but in terms of the people and companies that design, deploy, offer and use the technology. To properly regulate AI, we need to keep our focus on the human beings behind it." (page 10)

Read the full paper.



Governing with AI

The OECD published the report "Governing with Artificial Intelligence - Are governments ready?" Important information:

According to the official release: "(...) This policy paper outlines the key trends and policy challenges in the development, use, and deployment of AI in and by the public sector. First, it discusses the potential benefits and specific risks associated with AI use in the public sector. Second, it looks at how AI in the public sector can be used to improve productivity, responsiveness, and accountability. Third, it provides an overview of the key policy issues and presents examples of how countries are addressing them across the OECD."

Quotes:

"(...) The responsible use of AI can improve the functioning of government administrations in several ways.

? First, the use of AI in the public sector can help governments increase productivity with more efficient internal operations and more effective public policies.

? Second, AI can help make the design and delivery of public policies and services more inclusive and responsive to the evolving needs of citizens and specific communities.

? Third, AI can strengthen the accountability of governments by enhancing their capacity for oversight and supporting independent oversight institutions."

"This potential has by no means been fully explored and exploited. More evidence is needed on use cases to better understand how to successfully develop and deploy AI initiatives, learning from successes and failures. Despite the potential benefits of AI, there are also growing concerns about the risks of a fragmented and ungoverned deployment of AI in the public sector. Such risks include the amplification of bias, the lack of transparency in system design, and breaches in data privacy and security – all of which could lead to unfair and discriminatory outcomes with profound societal implications. The public sector has a special responsibility to deploy AI in a way that minimises harm and prioritises the well-being of individuals and communities, especially when deploying AI in sensitive policy domains such as law enforcement, immigration control, welfare benefits, and fraud prevention."

Read the full report.



Dark patterns in Google's Privacy Sandbox

The non-profit noyb filed a complaint against Google over the use of dark patterns in the context of its Privacy Sandbox. A must-read for everyone in privacy.

Quotes:

"Far from any “privacy” tool, the system behind the Sandbox API still tracks a user’s web browsing history. The difference is that now the Chrome browser itself tracks user behaviour and generates a list of advertising "topics" based on the websites users visit. At launch there were almost 500 advertising categories like "Student Loans & College Financing," "Undergarments" or "Parenting" that users were associated with based on their online activity. An advertiser that has a presence on a website enabling the Sandbox API will ask the Chrome browser what topics a user belongs to, and then potentially display an advertisement accordingly." (page 2)

"Rather than making it clear that they were asking for consent to have their browser track users, Google sold the Sandbox API as a “privacy feature” to users. It is understood that this was a conscious choice to manipulate user understanding and ensuring a high consent rate, as users thought that their browser is now protecting them against tracking for advertisement." (page 3)

"Given that the burden of proof rests on Google, it follows that Google should disclose the consent rate for the Sandbox API, as well as any results from A/B testing or other methods that allow to see that Google has in fact provided the most transparent information to data subjects and has not - as alleged - used these tools to intentionally mislead data subjects." (page 11)

Read the full complaint and noyb's official release.



On-demand course: Limited-Risk AI Systems

Check out our June on-demand course: Limited-Risk AI Systems. I discuss the EU AI Act's category of limited-risk AI systems, as established in Article 50, including examples and my insights on potential weaknesses. In addition to the video lesson, you get supplementary material, a quiz, and a certificate.

Paid subscribers of this newsletter get free access to our monthly on-demand courses. If you are a paid subscriber, request your code here. Free subscribers can upgrade to paid here.

For a comprehensive program on the AI Act, register for our 4-week Bootcamp in July. It's a live online program with me - don't miss it.


Another great AI paper

Luciano Floridi published the paper "Hypersuasion - On AI’s Persuasive Power and How to Deal With It," and it's a must-read for everyone in AI.

Quotes:

"The relentless nature of AI’s hypersuasion, the magnitude of its scope, its availability, affordability, and degree of efficiency based on machine-generated content accurately tailored to individual users or consumers who spend increasing amounts of their lives onlife (both online and offline) in the infosphere overshadow its precursors, not only in terms of the depth of personalised influence but also for the potential scale of distribution and impact (Burtell and Woodside 2023). AI can and will be used, evermore commonly and successfully, to manipulate people’s views, preferences, choices, inclinations, likes and dislikes, hopes, and fears (...)"

"For the moment, one may be tempted to conclude that AI and, indeed, any persuasive technology is neutral (perhaps with the only exception of the erosion of autonomy, more on this presently). However, as I argued elsewhere (Floridi 2023), this would be a mistake. It is much better to interpret it as double-charged, in tension between evil and good uses. The forces pulling in the wrong direction may be as strong as those pulling in the right. Arguably, if some autonomy is eroded (but see below), this may be to the advantage of the individuals persuaded, their societies, or the environment (...)"

"To conclude, AI has introduced a new and potent form of persuasion (hypersuasion). Preventing, minimising, and withstanding the negative impact of hypersuasion requires a comprehensive strategy, at the individual and societal level, that includes the?protection of privacy, the development of fair competition among actors, transparent allocation of accountability, and good education and engagement. These and other factors – not highlighted in this article because they are more generally relevant, from responsible design to good social practices – require, as usual, ethical, and legal frameworks, regulatory oversight, implementation, and enforcement(...)."

Read the full paper.



AI: Model Personal Data Protection Framework

The Hong Kong Privacy Commissioner’s Office published the document “Artificial Intelligence: Model Personal Data Protection Framework,” a must-read for everyone in AI. Important info:

According to the official release: "(...) the Model Framework, which is based on general business processes, provides a set of recommendations and best practices regarding governance of AI for the protection of personal data privacy for organisations which procure, implement and use any type of AI systems."

The document covers recommended measures in the following areas:

• AI strategy and governance;

• Risk assessment and human oversight;

• Customisation of AI models and implementation and management of AI systems;

• Communication and engagement with stakeholders.

Quotes:

"A risk-based approach should be adopted in the procurement, use and management of AI systems. Comprehensive risk assessment is necessary for organisations to systematically identify, analyse and evaluate the risks, including privacy risks, involved in the process. A risk management system should be formulated, implemented, documented and maintained throughout the entire life cycle of an AI system." (page 23)

"In proportion to the level of risks involved, there should be rigorous testing and validation of the AI models to ensure that they perform as intended, and their reliability, robustness and fairness should be evaluated before deployment, especially where the models have been customised."(page 38)

"An organisation's use of AI should be transparent to stakeholders to demonstrate the organisation's adherence to the "Transparency and Interpretability" principle. Organisations should communicate and engage effectively and regularly with stakeholders, in particular internal staff, AI suppliers, individual customers and regulators. The level of transparency will vary depending on the stakeholder. Effective communication is essential to building trust." (page 47)

Read the document.



AI Governance is HIRING

Below are ten AI governance positions posted last week. Bookmark, share, and be an early applicant:

1. Credo AI (Belgium): Policy Manager - AI Governance - apply

2. Google (US): Director, Data Governance, Compliance and Security - apply

3. ByteDance (UK): Senior Counsel, AI Governance & Tech Policy - apply

4. BIP (Italy): AI Governance Specialist - apply

5. The Walt Disney Company (US): Data Governance Engineer - apply

6. Venquis (Germany): Consultant - Trustworthy AI / AI Governance - apply

7. IMDA (Singapore): Manager, AI Governance - apply

8. The Weir Group PLC (UK): Head of Data & AI Governance - apply

9. Comerica Bank (US): AI Governance Manager - apply

10. Southern Water (UK): Data and AI Governance Manager - apply

For more AI governance and privacy job opportunities, subscribe to our weekly job alert.

To upskill and land your dream AI governance job, check out our training programs in AI, tech & privacy. Good luck!



AI vs. the job market

Most people don't want to talk about it, but they should: AI is already replacing our jobs, and everyone should have a plan. Watch my 3-minute video:

You can watch all my videos, including my 1-hour talks with AI & privacy experts, on my YouTube channel. Subscribe and never miss a new video.


Are you looking for a speaker in AI, tech & privacy?

I would welcome the opportunity to:

• Give a talk at your company;

• Speak at your event;

• Coordinate a training program for your team.

Here's my short bio with links. Get in touch!


Reminder: AI training opportunities

The EU AI Act Bootcamp (4 weeks)

Tuesdays, July 16 to August 6 at 10am PT

Register here

Emerging Challenges in AI, Tech & Privacy (4 weeks)

Wednesdays, July 17 to August 7 at 10am PT

Register here

To receive our AI, Tech & Privacy Academy weekly emails with learning opportunities, subscribe to our Learning Center.

I hope to see you there!


Upskill & advance your career

Choose a paid newsletter subscription, and besides receiving this free weekly edition on AI policy & regulation, get exclusive access to:

  1. Weekly in-depth analyses of the EU AI Act;
  2. A monthly on-demand course on AI policy & regulation topics. This month's course is Limited-Risk AI Systems. (Upgrade to paid and get free access)


Thank you for reading!

If you have comments on this week's edition, reply to this email or write to me, and I'll get back to you soon.

Have a great day.

Luiza



