AI News, Industry Updates, and Governance

Thank you for joining this newsletter! I am Karin Tafur, an AI Legal and Ethics Consultant, Author, and International AI Speaker & Trainer. My career has focused on advancing AI and its governance.

Through this newsletter, I will provide valuable insights, industry updates, expert research, analysis, and the latest regulations to help you stay informed and navigate this critical field. I appreciate your engagement and look forward to exploring these topics together!

If you haven’t subscribed yet, please do so here. Thank you!



AI GOVERNANCE: Insights on Policies, Regulations, and Ethical Practices

AI Standards for Europe: Current Overview

The European Commission is in the process of establishing comprehensive AI standards as part of its regulatory framework, particularly in support of the Artificial Intelligence Act. These standards are designed to ensure that AI systems used in the EU are safe, ethical, and respect human rights.

  1. Who should pay attention to European AI standards?

  • Industry leaders, policymakers, compliance officers, and researchers should pay attention to European AI standards to ensure regulatory compliance.

  2. Development of AI Standards

The CEN (European Committee for Standardization) and CENELEC (European Committee for Electrotechnical Standardization) are responsible for developing these standards. In January 2023, their Joint Technical Committee (JTC 21) released a roadmap for advancing AI standardization.

Additionally, the Joint Research Centre (JRC), which offers scientific advice for EU policies, published an analysis in 2023 titled "Analysis of the Preliminary AI Standardisation Work Plan in Support of the AI Act." This report assesses existing standards, identifies gaps, and recommends new standards to promote responsible AI use. The JRC's goal is to assist policymakers and industry leaders in meeting the requirements of the AI Act.

  3. Why Is the Presumption of Conformity Important in the AI Act?

The concept of presumption of conformity is crucial in the context of the AI Act, particularly under Article 40.1. This provision establishes that if companies comply with harmonized standards, they are presumed to meet the essential requirements set by the AI legislation. This approach is designed to simplify the compliance process and encourage businesses to adopt these standards proactively.

  4. Concerns About the AI Act's Presumption of Conformity

  • Some experts warn that this presumption could allow high-risk AI systems to bypass necessary independent third-party assessments, potentially undermining public trust and safety if these systems are deployed without rigorous external evaluation.
  • The presumption of conformity may not address all legal requirements, especially if specific directives or regulations are not covered by Harmonized Standards. Companies could potentially overlook other essential compliance aspects, such as specific safety measures not included in the standards.
  • Not all products or sectors have well-defined Harmonized Standards, which can create uncertainty for manufacturers. For instance, emerging technologies may lack specific standards, leaving companies unsure of how to ensure compliance.

  5. Deadline for the Release of European AI Standards

The final publication of European AI Standards is expected by the end of 2025.

Read the full JRC 2023 analysis here




Caution: The Dangers of Pressuring DPOs in AI Compliance

  1. Who Should Read This Information and Why?

This brief report, “Is the DPO the Right Person to Be the AI Officer?”, authored by Marc Bellon, Lionel Capel, Ernst-Oliver Wilhelm, and Maria Moloney and published by the CEDPO AI and Data Working Group, is particularly valuable for:

  • Executives and decision-makers in organizations: Business leaders considering the integration of AI into their operations will find valuable information on the governance structures needed to ensure compliance with both data protection and AI regulations.
  • Regulators and Policymakers: The report provides context on the regulatory landscape surrounding AI and data protection, which is crucial for those involved in shaping policies.
  • Compliance and risk management professionals: Those involved in governance, risk, and compliance (GRC) will benefit from understanding the implications of the upcoming EU AI Act and how it intersects with data protection regulations.
  • Data Protection Officers (DPOs): The report examines the evolving role of DPOs in the context of AI compliance. It offers insights into whether DPOs are equipped to take on the additional responsibilities of an AI Officer, making it essential reading for those in this position.

  2. In-Depth Analysis

  • AI compliance and governance, and the challenge of merging roles: The authors note that assigning AI governance responsibilities to the DPO may compromise the integrity of their data protection duties, as outlined in Article 38 of the GDPR. This could result in the same individual both implementing data processing activities and monitoring compliance with them, a conflict of interest the GDPR prohibits.

  3. The Risks of Mismanaging DPO Roles in Small vs. Large Organizations

  • The report discusses the varying responsibilities of DPOs in small vs. large organizations. In smaller organizations, DPOs often wear multiple hats and may struggle to manage both data protection and AI compliance effectively.
  • In contrast, larger organizations are more likely to have separate roles for data protection and AI governance, allowing for more specialized attention to compliance and risk management. The authors emphasize that this delineation is crucial for ensuring that both data protection and AI governance are handled appropriately.

  4. Critique by the Authors

  • The authors critique the current regulatory framework's ambiguity regarding the role of the AI Officer. They point out that, unlike the DPO role, which is clearly defined in the GDPR, the AI Officer role lacks a universally accepted definition and mandate. This uncertainty can lead to confusion within organizations about who is responsible for ensuring AI compliance.
  • Additionally, the authors express concerns that solely assigning the AI Officer role to the DPO could limit the effectiveness of both positions. They argue that while DPOs have valuable insights into data protection, they may lack the strategic vision required for driving AI innovation and compliance effectively.

Read the full information here



AI Coding Assistants: Benefits, Risks, and Best Practices

This report, created by the French Cybersecurity Agency (ANSSI) and the German Federal Office for Information Security (BSI), provides recommendations on how to use AI coding assistants safely. It discusses both the benefits and risks of these tools and provides specific recommendations to avoid problems.

1. What is a Coding Assistant?

A coding assistant is an AI-powered tool that helps programmers write code more efficiently. Using large language models (LLMs), these tools understand natural language prompts and assist with tasks like generating code, explaining code, creating test cases, and translating between programming languages. Essentially, they act like smart chatbots, offering code suggestions based on user input to streamline the coding process and minimize repetitive work.

2. Opportunities with AI Coding Assistants

AI coding assistants can help developers in several ways:

  • Generating Code: They can automatically create code, which saves time and effort.
  • Understanding Code: They help developers learn about new projects by explaining existing code.
  • Testing: They can automatically generate test cases to check if the code works correctly.
  • Formatting and Documentation: They can assist with organizing code and writing documentation, making the process easier.
  • Translating Code: They can convert old code into newer programming languages, simplifying maintenance.
  • Employee Satisfaction: Using these tools can make developers happier because they reduce workload.

3. Risks of AI Coding Assistants

However, there are some important risks to consider:

  • Data Leaks: Sensitive information can be accidentally shared when developers use these tools, depending on how the providers handle data.
  • Quality Issues: The code generated by AI can vary in quality and may not always be reliable. Some code might even have security flaws.
  • Security Threats: Using AI coding assistants could create new security risks. For example, hackers might find ways to exploit weaknesses in the AI's code.
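The data-leak risk can be reduced before a prompt ever leaves the developer's machine. The sketch below is my own minimal illustration, not from the ANSSI/BSI report: it redacts likely secrets from a prompt with a few regular expressions. The patterns shown are assumptions and far from exhaustive; real deployments would use a dedicated secret scanner.

```python
import re

# Illustrative patterns that often indicate secrets; not an exhaustive list.
SECRET_PATTERNS = [
    (re.compile(r"(?i)(api[_-]?key\s*[:=]\s*)\S+"), r"\1[REDACTED]"),
    (re.compile(r"(?i)(password\s*[:=]\s*)\S+"), r"\1[REDACTED]"),
    (re.compile(r"AKIA[0-9A-Z]{16}"), "[REDACTED_AWS_KEY]"),
]

def redact(prompt: str) -> str:
    """Replace likely secrets in a prompt before sending it to an assistant."""
    for pattern, replacement in SECRET_PATTERNS:
        prompt = pattern.sub(replacement, prompt)
    return prompt

clean = redact("Fix this: api_key = 'sk-123abc' fails on login")
# The literal key no longer appears in the outgoing prompt.
assert "sk-123abc" not in clean
```

A filter like this only catches known shapes of secrets, which is why the report's broader point stands: how the provider handles submitted data still matters.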

4. Recommendations for Safe Use

To address these risks, the report suggests the following actions:

  1. Use Human Developers: AI coding assistants should not replace experienced developers. Relying too much on them can lead to security problems.
  2. Assess Risks: Before using these AI tools, companies should analyze the potential risks, including how trustworthy the providers are.
  3. Quality Control: Any increase in productivity from using AI should be matched by stronger quality checks in software development teams.
  4. Review AI-Generated Code: Developers should always check and validate the code created by AI before using it.
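The "review AI-generated code" recommendation is worth a concrete example. The sketch below is my own illustration, not taken from the report: it shows a classic flaw that assistants can reproduce, building SQL by string interpolation, which permits injection, alongside the reviewed, parameterized version.

```python
import sqlite3

def find_user_unsafe(conn, name):
    # Pattern an assistant may suggest: interpolation lets input act as SQL.
    return conn.execute(f"SELECT id FROM users WHERE name = '{name}'").fetchall()

def find_user_safe(conn, name):
    # Reviewed version: a parameterized query treats input as data, not SQL.
    return conn.execute("SELECT id FROM users WHERE name = ?", (name,)).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice'), (2, 'bob')")

payload = "x' OR '1'='1"
# The injected predicate makes the unsafe query match every row.
assert len(find_user_unsafe(conn, payload)) == 2
# The parameterized query matches no user literally named like the payload.
assert find_user_safe(conn, payload) == []
```

Both functions compile and run without warnings, which is exactly why human review, rather than "does it work?" testing alone, is needed to catch the difference.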

Read the full information here


COURT UPDATE

EU Court Blocks Meta from Targeting Ads by Sexual Orientation

The article explains a recent decision by the Court of Justice of the European Union that stops Meta (formerly Facebook) from targeting ads based on users' sexual orientation. The case involved Maximilian Schrems, a privacy activist and the founder of noyb, who publicly disclosed his sexual orientation and subsequently received targeted ads based on that information. The court ruled that while individuals may publicly disclose their sexual orientation, this does not authorize the processing of additional related data obtained from outside the platform. The decision is based on the EU's General Data Protection Regulation (GDPR).

This court decision sets a significant precedent for companies regarding data privacy and the handling of personal data in Europe.

Read the full information here and the court decision here

If you haven’t subscribed yet, please do so here. Thank you!


AI INDUSTRY UPDATES

AI IN LEGAL, FINANCE, INSURANCE, AND DIGITAL SERVICES:

Scaling Generative AI in Banking

The document, “Scaling gen AI in banking: Choosing the best operating model,” published by McKinsey & Company, discusses how banks can effectively implement generative AI. It outlines the benefits and risks of deploying generative AI and emphasizes the importance of a centralized operating model. The document provides recommendations for integration, including risk assessment, talent acquisition, and strategic alignment to maximize the potential of generative AI in financial services.

Read the full information here


State of AI in Financial Services: 2024 Trends

This report, published by NVIDIA, explores the impact of artificial intelligence in the financial services sector, focusing on trends for 2024. The document discusses how AI technologies are transforming operations, enhancing customer engagement, and optimizing risk management. It also highlights the necessity for ethical AI practices and adherence to regulatory standards, urging financial institutions to innovate responsibly while addressing challenges related to data security and privacy.

Read the full information here


DIGITAL HEALTH & MEDICAL AI

The potential for artificial intelligence to transform healthcare: perspectives from international health leaders

The article, published in npj Digital Medicine, discusses the transformative potential of artificial intelligence (AI) in healthcare, emphasizing the need for high-quality data and robust monitoring systems. It highlights risks associated with AI implementation, such as data privacy concerns and algorithmic bias. The authors recommend enhancing data quality, building supportive infrastructure, encouraging data sharing, and creating incentives to accelerate AI adoption.

Read the full information here


FDA Digital Health and Artificial Intelligence Glossary – Educational Resource

The FDA (U.S. Food and Drug Administration) is a federal agency in the United States that protects public health by regulating food, drugs, medical devices, and health-related products.

The FDA has released a glossary that provides definitions for commonly used terms in digital health, AI, and machine learning, sourced from various public organizations. This glossary is intended for educational purposes only and does not serve as regulatory guidance or enforceable requirements.

Read the full information here


AI AND ENVIRONMENT

Navigating the environmental impact of AI

The article, published by the OECD.AI Policy Observatory, examines the environmental impact of artificial intelligence (AI) technologies, outlining both their potential benefits and their challenges. It emphasizes the need for effective policies to manage AI’s carbon footprint and resource consumption. The document describes strategies for sustainable AI development and calls for collaboration among governments, industries, and researchers to mitigate negative effects while harnessing AI's capabilities for positive environmental outcomes.

Read the full information here


AI LITERACY TRAINING (Live Online Session)

Effective February 2, 2025, the European AI Act requires all AI providers (developers) and deployers (organizations that use AI, whether or not AI is their core business) across Europe to ensure their staff possess sufficient AI literacy.

FOR INDIVIDUALS:

This live online session is designed to provide expert insights that will empower you to lead on AI literacy and regulation in your organization. DATE: February. Only 15 seats available. To receive further information, contact me here.

→ Instructor: Karin Tafur. Explore my AI expertise and contributions here.


FOR TEAMS:

Contact me at: contact [at] karintafur [dot] com, with the SUBJECT LINE: AI Literacy for Teams.


AI WARNINGS & AI FOR GOOD

AI WARNING

U.S. Cybersecurity Official Warns: AI Tools Need Human Oversight for Effective Defense

Lisa Einstein, Chief AI Officer of CISA (the Cybersecurity and Infrastructure Security Agency, the U.S. government agency responsible for protecting the nation’s critical infrastructure from cyber threats), recently stressed the need for human involvement when using AI tools for cybersecurity, speaking at NVIDIA’s 2024 AI Summit in Washington, D.C. She pointed out that while AI can improve cyber defenses, it is not a complete solution and still requires strong human processes. Einstein emphasized the importance of addressing specific problems rather than forcing AI into every situation.

Read the full information here


AI FOR GOOD

AI to the rescue: how to enhance disaster early warnings with tech tools

The article, published by Nature, discusses how artificial intelligence (AI) can improve early-warning systems for natural disasters, making predictions more accurate and efficient. However, it stresses the importance of creating strong international standards to address data bias and compatibility between systems. The article also explains the need for teamwork among governments, researchers, and other stakeholders to develop best practices.

Read the full information here

If you haven’t subscribed yet, please do so here. Thank you!


INVITATION TO GUEST WRITERS! - Limited Slots


→ Are you interested in sharing your expertise? Join me as a guest writer for the AI newsletter! I look forward to featuring your insights. Contact me here if you’d like to collaborate!


AI VIDEO OF THE WEEK

A Critical Perspective on AI, Generative AI, Big Tech, and the Future of Artificial Intelligence – BBC. Watch the video here

Source and image: BBC News.

Partnership Opportunities:

Appreciating my work? Let’s connect and explore collaboration opportunities.

Sponsorship Opportunities:

→ If your company or organization is interested in exploring sponsorship options, join our brand partner waitlist here.

If your organization is interested in sponsoring AI Literacy Training for specific groups or populations, contact me here.


Thank you for reading!

If you’d like to provide feedback for improvement, feel free to connect with me here.

If you found this information insightful, I encourage you to share it with your network to enhance our collective understanding of AI news and governance. Together, we can support more informed decision-making in this challenging field!


Kind regards,

Karin Tafur | AI Legal and Ethics Consultant | Author | Lecturer | International AI Speaker & Trainer
