Nijta and AFNOR: Crafting comprehensive privacy and confidentiality standards

A major milestone for the regulation, development and deployment of Artificial Intelligence, the AI Act has come to the forefront after being published in the Official Journal of the European Union on 12 July 2024. Initially proposed in 2021, the AI Act is a comprehensive legal framework that aims to ensure that AI systems used within the EU are deployed ethically and responsibly, to foster innovation in AI, and to strengthen public trust in the use of AI. Protecting people’s personal data and securing the confidential information of organisations are key requirements for achieving these objectives.

How exactly can such an ambitious regulation be put into practice? Standards are the answer. They can be set at international and national levels to ensure the practical implementation of the principles of the AI Act.

Understanding the problem: why privacy and confidentiality standards matter

Today, data is more vulnerable than ever. Biometric data, which includes fingerprints, facial images and voiceprints, is uniquely sensitive because it is directly tied to an individual’s identity. Personal data such as name, age, location and profession, as well as all kinds of confidential organisational data such as internal documents, meetings, client feedback or financial records, need similar protection. The misuse of such data can lead to severe privacy breaches, identity theft, loss of intellectual property and competitiveness, or even security threats. To tackle these risks, the AI Act classifies AI systems into four categories according to their intended purpose and their potential impact on health, safety or fundamental rights: minimal risk, limited risk, high risk and unacceptable risk. Most of the legal uncertainty around the AI Act concerns businesses deploying high-risk systems, as the law often does not specify actionable compliance criteria or the measures needed to meet them. This is where standards come into the picture.


Nijta and AFNOR: Crafting comprehensive privacy and confidentiality standards

We, at Nijta, are extensively involved in helping organisations execute the vision of the AI Act. Nijta has joined the French standardisation association AFNOR, through which it is at the forefront of current efforts by the European standardisation body CEN-CENELEC to implement robust privacy and confidentiality standards. As concerns about the use of AI escalate, our joint work with AFNOR and CEN-CENELEC aims to set clear, enforceable guidelines to protect data and to advance our broader vision of ensuring privacy and confidentiality for all. This collaboration is focused on developing privacy and confidentiality standards that are both practical and enforceable. As part of a panel of academics and industry experts, two of our co-founders, Brij Mohan Lal Srivastava, PhD, and Emmanuel Vincent, are contributing extensively to these standardisation efforts at the national and EU levels. We are particularly involved in two working groups of CEN-CENELEC: WG3 and WG5.

Working group 3:

WG3 is developing a standard on AI datasets to facilitate compliance with Article 10 of the AI Act. In particular, Article 10(5) refers to "technical limitations on the re-use of personal data, and state-of-the-art security and privacy-preserving measures". Nijta will promote a set of possible, recommended or mandatory measures to protect personal and confidential information in data across the widest possible range of use cases. Our proposed measures include the following (a brief sketch follows the list):

  1. cutting the data into smaller pieces,
  2. detecting personal or confidential information through regular expressions and named entity recognition in text, face and body detection in images, or unique co-occurrence patterns in tabular data,
  3. replacing or adding noise to it so as to pseudonymize or anonymize the data, and
  4. shared anonymization metrics and thresholds based on the use case to ensure that the desired level of protection is attained.
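
To make measures 2 and 3 more concrete, here is a minimal sketch, assuming plain Python and regular expressions only, of how personal information in free text could be detected and replaced with pseudonymous placeholders. The patterns and the `pseudonymize` helper are illustrative assumptions rather than part of any standard; a production pipeline would combine such rules with trained named entity recognition models and image-based detectors.

```python
import re

# Illustrative patterns for two common identifier types; a real pipeline
# would add many more rules plus trained named entity recognition models.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def pseudonymize(text: str) -> str:
    """Replace detected identifiers with numbered placeholders.

    The same value always maps to the same placeholder, so the text stays
    internally consistent while direct identifiers are hidden.
    """
    mapping = {}
    for label, pattern in PATTERNS.items():
        def repl(match, label=label):
            value = match.group(0)
            if value not in mapping:
                mapping[value] = f"[{label}_{len(mapping) + 1}]"
            return mapping[value]
        text = pattern.sub(repl, text)
    return text

if __name__ == "__main__":
    sample = "Contact Jane at jane.doe@example.com or +33 6 12 34 56 78."
    print(pseudonymize(sample))
    # Prints: Contact Jane at [EMAIL_1] or [PHONE_2].
```

Consistent placeholders preserve the utility of the text, for example for analytics or model training, while hiding direct identifiers; whether the result counts as pseudonymized or anonymized then depends on whether the mapping is retained and on the agreed metrics and thresholds of measure 4.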

Working group 5:

WG5 is developing a standard on AI cybersecurity for compliance with Article 15(5) of the AI Act, which concerns the security risks raised by AI models, including "confidentiality attacks". All AI models, especially generative ones, can memorise personal or confidential information present in the training or fine-tuning data, enabling an attacker to identify the data subjects and possibly reconstruct the data. Nijta’s proposed measures to protect against such attacks include the following (a brief sketch follows the list):

  1. the concealment of personal and confidential information in the data using anonymization techniques,
  2. differentially private training to reduce the amount of personal or confidential information remaining in the model, for instance by adding noise to the gradient computed at each iteration in the case of neural networks,
  3. privacy and confidentiality filters to patch the model at inference time if necessary, and
  4. shared metrics and thresholds based on the use case.
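
As an illustration of measure 2, the following sketch shows the core of a differentially private training step for a simple logistic regression model in NumPy: each example's gradient is clipped to a fixed norm and Gaussian noise is added before the update. The hyperparameters (`clip_norm`, `noise_multiplier`, learning rate) are assumptions for the example, and the sketch deliberately omits the privacy accountant that a real system would use to track the cumulative privacy budget.

```python
import numpy as np

def dp_sgd_step(w, X, y, clip_norm=1.0, noise_multiplier=1.0, lr=0.1, rng=None):
    """One differentially private update for logistic regression.

    Per-example gradients are clipped to `clip_norm`, summed, perturbed with
    Gaussian noise of scale `noise_multiplier * clip_norm`, and averaged.
    """
    if rng is None:
        rng = np.random.default_rng(0)

    preds = 1.0 / (1.0 + np.exp(-X @ w))           # sigmoid predictions
    per_example_grads = (preds - y)[:, None] * X   # one gradient per example

    # Clip each example's gradient norm to bound its influence (sensitivity).
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    clipped = per_example_grads / np.maximum(1.0, norms / clip_norm)

    # Add Gaussian noise calibrated to the clipping norm, then average.
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=w.shape)
    noisy_mean_grad = (clipped.sum(axis=0) + noise) / len(X)

    return w - lr * noisy_mean_grad

if __name__ == "__main__":
    rng = np.random.default_rng(42)
    X = rng.normal(size=(256, 5))
    y = (X[:, 0] > 0).astype(float)        # toy labels tied to one feature
    w = np.zeros(5)
    for _ in range(200):
        w = dp_sgd_step(w, X, y, rng=rng)
    print("trained weights:", np.round(w, 2))
```

In practice, libraries such as Opacus (for PyTorch) or TensorFlow Privacy implement this clipping-and-noise mechanism together with the accounting needed to report the resulting privacy guarantee.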

WG5 is part of the CEN-CLC/JTC 13 “Cybersecurity and data protection” technical committee, which works with ENISA (the European Union Agency for Cybersecurity) on European certification schemes and collaborates with the European Commission in the context of cybersecurity-related standardisation requests.


Actively shaping privacy standards in France and the EU

Through its engagement in CEN-CENELEC’s WG3 and WG5 via AFNOR, Nijta demonstrates its commitment to safeguarding personal and confidential data and to reducing legal risk and liability costs, reflecting our dedication to a secure and private digital future for all. By working closely with key stakeholders, we help ensure that the standards will be comprehensive, forward-looking, practical and effective in addressing the risks associated with all kinds of data. Our participation is strongly motivated by the pressing need to translate international regulations into precise, actionable rules and to ensure consistent outcomes for all players in the market. We invite you to follow us on this journey and be part of this impactful movement.

