Privacy and AI weekly - Issue 12

This Friday on Privacy and AI weekly

Privacy

• FTC enforcement action against a VoIP provider for facilitating illegal telemarketing robocalls

• Best interests of the child self-assessment

• ICO's self-assessment risk tool

• The hybrid war in Ukraine (Microsoft report)

Artificial Intelligence

• People-centric approaches to algorithmic explainability (TTC Labs)

• Not all AI is ML: ISO TR 24372:2021



PRIVACY


Sponsor

You are probably aware of the privacy issues Google Analytics is facing. Simple Analytics is the privacy-first alternative to Google Analytics. We provide the website insights your organisation needs without ever collecting any personal data or using cookies.


As friends of this newsletter and fans of Federico’s work, we would like to give you a 25% discount if you sign up. Just let us know you found out about Simple Analytics in this newsletter.

→ Try Simple Analytics



FTC enforcement action against a VoIP provider for facilitating illegal telemarketing robocalls

FTC enforcement action against a VoIP service provider for, among other things, continuing to provide VoIP services to customers despite knowing those customers were:

• using the services to place calls to numbers on the FTC's DNC Registry;

• delivering pre-recorded messages; and

• offering spoofed caller ID services to callers involved in scams related to credit card interest rate reduction, tech support, and the COVID-19 pandemic.

* Voice over Internet Protocol (VoIP) refers to phone calls made over the Internet rather than through a regular landline or mobile network. A VoIP system takes analogue voice signals, converts them into digital signals, and then sends them as data over the broadband line. Skype, for instance, can be used to call regular landline or mobile numbers too.
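As a minimal illustration of that pipeline, the Python sketch below samples a stand-in "analogue" signal, quantises it into 16-bit digital values, and sends one 20 ms frame over the network as a UDP datagram. The destination address, sample rate and encoding are illustrative assumptions, not the internals of any particular VoIP product:

import math
import socket
import struct

DEST = ("203.0.113.10", 5004)      # placeholder host/port (documentation address)
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

SAMPLE_RATE = 8000                 # 8 kHz: the classic telephony sampling rate
FRAME_SAMPLES = 160                # 20 ms of audio per packet

# A 440 Hz tone stands in for the microphone's analogue signal.
def analogue_signal(t):
    return math.sin(2 * math.pi * 440 * t / SAMPLE_RATE)

# Digitise one frame: quantise each sample to a 16-bit signed integer.
samples = [int(32767 * analogue_signal(t)) for t in range(FRAME_SAMPLES)]
payload = struct.pack(f"<{FRAME_SAMPLES}h", *samples)

# Send the digitised frame over the Internet as a single datagram.
sock.sendto(payload, DEST)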

Access the press release here



Best interests of the child self-assessment

The Information Commissioner's Office (ICO) developed an assessment of the best interests of the child following the Children's Code. The Children's Code (or Age Appropriate Design Code) is a data protection code of practice for online services, such as apps, online games, and web and social media sites, likely to be accessed by children.

Standard 1 of the Children's Code requires organisations to treat the best interests of the child as a primary consideration when designing and developing online services likely to be accessed by a child.


To determine whether they are acting in the best interests of children, organisations must consider how their use of children's data impacts the rights children hold under the United Nations Convention on the Rights of the Child (UNCRC).

To help organisations make the assessment, the ICO created tools, templates and guidance.

Step 1: Understand rights: Understand the rights of the child under the UNCRC.

Step 2: Identify impacts: Identify potential impacts on the rights of the child in your product or service.

Step 3: Assess impacts: Consider the likelihood and scale of potential impacts on the rights of children (a toy scoring sketch follows below).

Step 4: Prioritise actions: Create an action plan for the issues highlighted in your risk assessment.
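Purely to illustrate Steps 3 and 4, here is a toy Python sketch of a likelihood-by-scale scoring matrix. The 1-5 scales and the example impacts are our own assumptions, not the ICO's templates:

# Hypothetical impacts identified in Step 2, scored in Step 3.
IMPACTS = [
    # (UNCRC-related impact, likelihood 1-5, scale 1-5)  -- invented examples
    ("Profiling nudges children toward extended use", 4, 4),
    ("Location data shared with third parties", 2, 5),
    ("Age-inappropriate content recommendations", 3, 3),
]

def risk_score(likelihood, scale):
    # Step 3: a simple likelihood x scale matrix (assumed scoring scheme)
    return likelihood * scale

# Step 4: rank the issues so the action plan tackles the worst first.
ranked = sorted(IMPACTS, key=lambda i: risk_score(i[1], i[2]), reverse=True)
for impact, likelihood, scale in ranked:
    print(f"{risk_score(likelihood, scale):>2}  {impact}")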


ICO's Self-assessment risk tool

The ICO has published a self-assessment risk tool to implement the Children's Code.

This tool has been created with medium to large private, public and third sector organisations in mind.

From the moment a young person opens an app, plays a game or loads a website, data begins to be gathered. For all the benefits the digital economy can offer children, we are not currently creating a safe space for them to learn, explore and play. The Children's code looks to change that, not by seeking to protect children from the digital world, but by protecting them within it.

This tool will help organisations conduct their own risk assessment of how both the UK General Data Protection Regulation and the Children's Code apply in the context of their digital services, and gives them practical steps to apply a proportionate and risk-based approach to ensuring children's protection and privacy.

Download the self-assessment risk tool here



The hybrid war in Ukraine (Microsoft report)

Microsoft released a report detailing the Russian cyberattacks it observed in the hybrid war against Ukraine, and it also explains the measures the company has taken to protect Ukrainian people and organizations.

Before the invasion, Microsoft observed that at least six separate Russia-aligned nation-state actors launched more than 237 operations against Ukraine.

Before the war, malicious actors:

• sought access to insights on Ukrainian defence and foreign partnerships;

• positioned themselves for third-party attacks on networks in Ukraine and partner nations;

• sought access to insights on military and humanitarian response capabilities; and

• secured access to critical infrastructure for future destruction.

After the invasion began, the report breaks down the attack activity Microsoft observed in a series of charts (see the full report below).

Access the report here (via Tudor Galos)



ARTIFICIAL INTELLIGENCE

People-centric approaches to algorithmic explainability


Providing transparency and explainability of artificial intelligence (AI) presents complex challenges across industry and society, raising questions around how to build confidence and empower people in their use of digital products.

The purpose of this report is to contribute to cross-sector efforts to address these questions. It shares the key findings of a project conducted in 2021 by TTC Labs and Open Loop in collaboration with the Singapore Infocomm Media Development Authority (IMDA) and Personal Data Protection Commission (PDPC).

Through this project, TTC Labs and Open Loop have developed a series of operational insights for bringing greater transparency to AI-powered products and for developing related public policy proposals.

These learnings are intended both for policymakers and product makers – for those developing frameworks, principles and requirements at the government level and those building and evolving apps and websites driven by AI.


You can access the full report here and the visual explainer here



Not all AI is ML: ISO TR 24372:2021

One of the criticisms most often levelled at the draft AI Act (AIA) concerns the breadth of its definition of AI. According to the Commission's proposal, AI is software that:

• is developed with one or more of the following techniques or approaches:

  • Machine learning approaches
  • Logic- and knowledge-based approaches
  • Statistical approaches, Bayesian estimation, search and optimization methods

• can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with

It is indeed a very broad definition of AI: following it, many applications and solutions that are not usually thought of as AI may be included.
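To make the breadth concrete, the minimal sketch below fits a plain least-squares trend line: a "statistical approach" that, for a human-defined objective, generates a prediction. Under the proposal's wording it arguably already ticks both boxes, even though few people would call it AI (the data points are invented for illustration):

# Ordinary least squares: a plain statistical technique that would
# arguably fall under the AIA's broad definition. Data points invented.
xs = [1, 2, 3, 4, 5]                    # e.g. month number
ys = [10.0, 12.1, 13.9, 16.2, 18.0]     # e.g. units sold

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
intercept = mean_y - slope * mean_x

# A human-defined objective (forecast month 6) and an output (a
# prediction): the two hallmarks of the Commission's definition.
print("forecast for month 6:", intercept + slope * 6)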

During the debate on the draft in the European Parliament, MEP Axel Voss suggested restricting the definition of AI to approaches based only on machine learning (see Amendment 37). This was the rationale for the amendment:

"Although the AI Act is an EU Regulation, it should use the wording developed by the OECD. Using this widely accepted definition will help the EU to better cooperate with non-EU democracies such as the USA, Canada or UK. Together, it will be easier to promote international standards based on our democratic values. The new definition for AI systems moreover creates legal certainty while providing enough flexibility by accommodating future technological developments."

But despite the breadth of the definition, its practical reach is narrower than it may seem. It is worth noticing that, apart from the approaches and the objectives/consequences that define an AI system, the AIA regulates AI systems almost exclusively according to their risks. Hence, as long as a system does not create high risks, it would remain largely unregulated.

Apart from that, ISO recently issued a Technical Report, ISO/IEC TR 24372:2021(en) Information technology — Artificial intelligence (AI) — Overview of computational approaches for AI systems, which explains the different approaches covered by the umbrella concept of AI. Among the algorithms and approaches used in AI systems, the document includes the following (two of which are sketched in the code after this list):

• knowledge engineering and representation

• logic reasoning

• machine learning

• metaheuristics
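As a hedged illustration of why not all AI is ML, the sketch below implements two of those approaches: a hand-authored rule system (knowledge engineering and logic reasoning) and a simple hill climber (a metaheuristic). Neither learns anything from data; the rules and the toy optimisation problem are invented:

import random

# 1. Knowledge engineering / logic reasoning: behaviour comes from
#    hand-authored rules, not from training data. (Rules invented.)
RULES = [
    (lambda claim: claim["amount"] > 10_000, "refer to human reviewer"),
    (lambda claim: claim["documents_missing"], "request documents"),
    (lambda claim: True, "approve"),
]

def decide(claim):
    for condition, action in RULES:
        if condition(claim):
            return action

print(decide({"amount": 12_000, "documents_missing": False}))

# 2. Metaheuristic: hill climbing to maximise a function. No training
#    data, no learned parameters, yet still "AI" under the TR's umbrella.
def hill_climb(f, x=0.0, step=0.1, iters=2000):
    for _ in range(iters):
        candidate = x + random.uniform(-step, step)
        if f(candidate) > f(x):
            x = candidate
    return x

best = hill_climb(lambda x: -(x - 3) ** 2)  # optimum at x = 3
print(f"metaheuristic found x = {best:.2f}")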


Access a preview of ISO TR 24372:2021 here

Admittedly, both practitioners and academics hotly debate this issue. However, more insightful and convincing explanations are expected from lawmakers.

The Draft Report on the AIA by the LIBE Committee seems to have left the Commission's broad definition of AI almost untouched.


Let's be patient: there is still a long way to go before the AIA is finally enacted (if it ever is).





Qubit Privacy is a boutique consultancy firm that provides data protection and AI governance services. Qubit Privacy helps your organization stay compliant with privacy regulations like the GDPR, protect itself against cyber-attacks and data breaches, and manage and assess algorithmic risks through a range of affordable professional solutions.

Federico Marengo is the founder of Qubit Privacy. He is a PhD student (in data protection and AI) and the author of "Data Protection Law in Charts. A Visual Guide to the General Data Protection Regulation", available here.

For inquiries, feedback or collaborations, please contact me at federico@qubitprivacy.com


Thanks for reading, and thanks to our sponsor, Simple Analytics.
