March 2024 Newsletter

Dear All,

Welcome to the March 2024 edition of the AI & Partners Newsletter, our tailor-made monthly update on the latest developments in, and analyses of, the proposed EU Artificial Intelligence Act (the “EU AI Act”). To begin, this month marks the start of firms’ preparations: the final text has now been published, a historic milestone following December’s monumental political approval of the EU AI Act. With work reportedly underway on finalising and signing off the tentative political agreement in a consolidated text, formal adoption by the European Parliament and the Council of the European Union is imminent. In this sense, firms’ EU AI Act implementation journeys have only just begun. This issue examines a range of matters, from final-text drafting to standards approval (specifically ISO/IEC 42001). Our objective is to stay ahead of the curve and keep you informed.

AI’s long-term potential for sustained value creation remains uncontested. Now, and in the foreseeable future, we base our services around helping firms achieve regulatory excellence for the EU AI Act. We hope you find this content beneficial.

As always, if you have any comments or recommendations for content in future editions or would like to contribute content, please let us know: [email protected].

What’s the latest with the legislative process?

After the political agreement on the EU AI Act late last year, the European Commission published its answers to the most common questions about the Act. The questions were as follows:

1) Why do we need to regulate the use of Artificial Intelligence?
2) Which risks will the new AI rules address?
3) To whom does the AI Act apply?
4) What are the risk categories?
5) How do I know whether an AI system is high-risk?
6) What are the obligations for providers of high-risk AI systems?
7) What are examples of high-risk use cases as defined in Annex III?
8) How are general-purpose AI models being regulated?
9) Why is 10^25 FLOPs an appropriate threshold for GPAI with systemic risks?
10) Is the AI Act future-proof?
11) How does the AI Act regulate biometric identification?
12) Why are particular rules needed for remote biometric identification?
13) How do the rules protect fundamental rights?
14) What is a fundamental rights impact assessment? Who has to conduct such an assessment, and when?
15) How does this regulation address racial and gender bias in AI?
16) When will the AI Act be fully applicable?
17) How will the AI Act be enforced?
18) Why is a European Artificial Intelligence Board needed and what will it do?
19) What are the tasks of the European AI Office?
20) What is the difference between the AI Board, AI Office, Advisory Forum and Scientific Panel of independent experts?
21) What are the penalties for infringement?
22) What can individuals do that are affected by a rule violation?
23) How do the voluntary codes of conduct for high-risk AI systems work?
24) How do the codes of practice for general purpose AI models work?
25) Does the AI Act contain provisions regarding environmental protection and sustainability?
26) How can the new rules support innovation?
27) Besides the AI Act, how will the EU facilitate and support innovation in AI?
28) What is the international dimension of the EU's approach?

Facial recognition controversy: Gian Volpicelli from POLITICO reported that the AI Act, initially agreed upon in early December, has undergone last-minute changes that would allow law enforcement to use facial recognition technology on recorded video without judicial approval. German MEP Svenja Hahn criticised these modifications in the final text, calling them an attack on civil rights and likening the potential misuse of biometric technology to practices in authoritarian states like China. She argues that the changes, finalised on 22 December, diverge from the original agreement, which required stricter conditions and judicial oversight for facial recognition use. Hahn highlighted concerns about retrospective (“post”) facial recognition technology, which deals with pre-existing footage, as opposed to real-time public space surveillance that would be largely outlawed. While some, including Parliament's leading negotiator Dragoș Tudorache, defend the text, others like Patrick Breyer from the German Pirate Party, and representatives from digital rights groups, echo Hahn’s criticism. EU governments will review the final text on 24 January, aiming for approval on 2 February, followed by a Parliamentary vote. Potential amendments would require additional legislative work.

What do the latest analyses state?

  • Euractiv's Théophane Hartmann reported that the French government has faced criticism over its stance in the AI Act negotiations. Allegations centre around the influence of the former digital state secretary Cédric O, who is accused of having conflicts of interest. Senator Catherine Morin-Desailly claimed that Cédric O and Mistral AI, the company in which he holds a stake and which she alleged represents American corporate interests, influenced the government's position to weaken the AI regulation. Digital Minister Jean-Noël Barrot refuted these accusations, insisting on the government's commitment to the general interest and denying that it acted as a spokesperson for private interests. He argued that fostering AI champions in Europe is crucial for protecting citizens and the creative industry. However, Barrot's stance was criticised by Pascal Rogard of the Society of Dramatic Authors and Composers for not supporting culture, the creative industry, or copyrights. The High Authority for Transparency in Public Life had barred Cédric O from lobbying or owning tech sector shares for three years, yet he invested in Mistral AI and did not fully declare his holdings. Commissioner Breton also criticised O, questioning his commitment to the public interest.
  • Javier Espinoza, EU Correspondent at the Financial Times reported that Margrethe Vestager, the EU’s competition and digital chief, defended the proposed AI Act against criticisms, including those from French President Emmanuel Macron. Vestager emphasised that the legislation would not hinder innovation and research but rather enhance it by providing clear rules for building foundational models, like those underlying generative AI products. She argued that the Act would offer predictability and legal certainty for both creators and users of these technologies, ensuring that regulatory measures do not stifle innovation. Macron had expressed concerns that the AI Act might cause European tech companies to fall behind their counterparts in the US and China. The law still needs to be ratified by member states in the coming weeks, but France, alongside Germany and Italy, is partaking in early discussions about seeking alterations or blocking the law. Vestager highlighted that regulation is crucial for fostering trust in the market, which is necessary for investment and practical use.
  • David Haber, CEO of Lakera, published a commentary on Fortune about his experience as an advisor to the EU on the AI Act. Haber states that initially, the Act focused on regulating narrow and predictive AI, addressing issues like AI in diagnostics and creditworthiness evaluations. However, the advent of generative AI presented a significant challenge, forcing policymakers to consider whether to stick to their original narrow focus or adapt to the rapidly evolving AI landscape. The EU ultimately chose a hybrid approach, where the Act remains largely true to its original intent but includes an addendum to address generative AI. The Act is still evolving, with crucial technical details pending beyond high-level pieces around transparency requirements and punitive measures. The next phase will involve incorporating industry-specific controls and integrating the Act with existing regulations.
  • The Future Society analysed how much AI Act compliance would cost for general-purpose AI (GPAI) providers. They first estimate the total investment needed to develop cutting-edge GPAI models, considering significant expenses for hardware, chips, and engineers. The compliance costs then add in internal and external risk evaluations, technical documentation, and quality management systems, with conservative assumptions like high San Francisco salaries and the need for additional staff and secondary evaluations. The findings reveal that compliance costs for GPAI models are minimal, ranging between 0.07% and 1.34% of the total capital expenditure required to build such models. This result is based on models ranging from 10^24 to 10^26 FLOPs of training computation. The analysis suggests that these costs are good value for ensuring the safety, security, and reliability of these technologies, and seen as beneficial for EU citizens and the digital economy.
  • Dutch AI supervision plans: The Dutch Data Protection Authority (AP) published its second AI and Algorithmic Risks Report, which shares a national master plan for the Netherlands aiming for effective control over AI and algorithm use by 2030, involving collaboration among companies, government, academia, and NGOs. The strategy includes annual goals and agreements and integrates regulations like the AI Act. The Act, effective from 2025, will provide oversight of foundational models and developers, addressing risks like disinformation, manipulation and discrimination. Supervisors in the Netherlands are preparing for AI Act supervision, which was politically agreed upon in December 2023. However, effective control of AI and algorithms extends beyond supervision, requiring proactive risk management and internal controls within companies and organisations for reliable and safe AI use. Aleid Wolfsen, Chair of the AP, notes that the more AI and algorithms are being used in society, the more incidents seem to occur, emphasising the need for immediate risk management, particularly as 75% of Dutch organisations plan to use AI in workforce management. He highlights the necessity of robust supervision and regulation to maintain trust in AI and protect fundamental rights.
  • Regulating foundation models: Cornelia Kutterer, Research Fellow at the Chair on the Legal and Regulatory Implications of Artificial Intelligence at MIAI Grenoble Alpes, wrote an extensive article on regulating foundation models in the AI Act. Kutterer says that the provisional agreement on general purpose AI (GPAI) and foundation models introduces a new risk category, systemic risks, expanding the existing categories in the Act. Under this agreement, all GPAI models require regular updates of technical documentation, including training and testing details, and providers must help AI system integrators understand the models' capabilities and limitations as well as comply with the regulation. They must also comply with EU copyright law, share training content summaries, and cooperate with regulatory authorities. Models posing systemic risks have additional obligations: evaluating and mitigating such risks, monitoring and reporting serious incidents, taking corrective actions, and ensuring robust cybersecurity. The agreement maintains a risk-based approach but expands it to include systemic risks, reflecting AI technology advancements. The proposal addresses open-source AI models, exempting them unless they pose systemic risks. This approach aims to balance safety concerns and the benefits of knowledge sharing within the community, navigating tensions between understanding AI model performance and mitigating potential risks.
  • Perspectives in the music industry: Daniel Tencer, Deputy Editor at Music Business Worldwide, reviewed the AI Act from the perspective of the music industry. Tencer states that the Act is a crucial piece of legislation for the music industry, particularly regarding copyright infringement and transparency in AI training. Rightsholders, including the global music industry representative IFPI, are cautiously optimistic about the Act. The Act seems to support rights holders by suggesting that using copyrighted materials for AI training requires their permission. This is subject to certain exceptions, however, notably for scientific research, introducing complexity and potential loopholes. An area of concern for the music industry is the Act's "opt-out" system, which shifts the burden to rights holders to forbid the use of their material in AI training. This contrasts with the preferred "opt-in" system where AI developers would by default need to obtain licenses beforehand. The Act indicates that a summary of data sources might be sufficient for compliance, which could be problematic given the vast amount of data in sources like Common Crawl, used in AI training. Overall, while the AI Act is seen as a positive step, entities like GEMA and Warner Music Group CEO Robert Kyncl suggest it needs further technical refinement, with some preferring stricter regulations.
  • Generative AI and watermarking: Tambiama André Madiega, Policy Analyst at the European Parliamentary Research Service, wrote a briefing on generative AI and how it is being regulated around the world. While tools like ChatGPT, GPT-4, and Midjourney facilitate content generation, they raise concerns of plagiarism, privacy issues, AI hallucination (providing false information convincingly), copyright infringement, and disinformation. The challenge of distinguishing between AI-generated and human content is a growing policy issue. Policymakers and AI practitioners are exploring ways to increase the transparency and accountability of generative AI, including content labelling, automated fact-checking, forensic analysis, and watermarking to clarify AI content's origins. The EU's AI Act imposes obligations on AI system providers and users to label AI-generated content and disclose its artificial nature, better informing user decisions. These systems must also mark synthetic content in a machine-readable format. GPAI models must meet transparency obligations, respect EU copyright law using advanced technologies, and provide detailed summaries of copyrighted content used in training. Additionally, generative AI providers must disclose AI-generated content and prevent illegal content creation, likely by employing watermarking techniques.
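The two quantitative points above, the 10^25 FLOPs systemic-risk threshold for GPAI models and The Future Society's estimate that compliance costs run between 0.07% and 1.34% of total capital expenditure, can be illustrated with a minimal arithmetic sketch. This is purely illustrative: the function names and the example figures (training compute, capex) are hypothetical, and only the threshold and percentage band come from the sources cited above.

```python
# Illustrative sketch only. The 10^25 FLOPs threshold comes from the
# Commission's Q&A; the 0.07%-1.34% band is The Future Society's estimate.
# All function names and example figures below are hypothetical.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # training-compute threshold for GPAI systemic risk


def has_systemic_risk(training_flops: float) -> bool:
    """Presumed systemic risk if training compute meets or exceeds the threshold."""
    return training_flops >= SYSTEMIC_RISK_THRESHOLD_FLOPS


def compliance_cost_range(capex_eur: float,
                          low_share: float = 0.0007,
                          high_share: float = 0.0134) -> tuple[float, float]:
    """Estimated compliance-cost band as a share of total model capex."""
    return capex_eur * low_share, capex_eur * high_share


# Example: a model trained with 3e25 FLOPs at a hypothetical EUR 100m total capex
flops, capex = 3e25, 100_000_000
print(has_systemic_risk(flops))        # True: above the 10^25 threshold
low, high = compliance_cost_range(capex)
print(f"EUR {low:,.0f} to EUR {high:,.0f}")  # EUR 70,000 to EUR 1,340,000
```

On these assumptions, compliance costs stay well under two percent of build cost even at the top of the band, which is the basis for The Future Society's "good value" conclusion.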

If you have any interaction with AI systems, such as use, marketing, development, importing, or distribution, including those categorised as high-risk, you may be within scope. Contact us to receive more information on this matter.

What is coming up next?

The Economist: 4th annual Business Innovation Summit (21 March 2024)

Unlock Business Growth with AI: Join the 4th Annual Business Innovation Summit!

Empower your leadership team at the 4th Annual Business Innovation Summit, themed "Harnessing AI: from Fear to Fortune." Explore how generative AI and emerging technologies drive transformative efficiencies and shape the future of business. Engage with 70+ influential speakers, participate in debates, and gain actionable insights. Don't miss the chance to network with senior executives across industries.

Register now to seize growth opportunities and navigate the evolving landscape. Join us on 21st March 2024 at etc.venues Bishopsgate, London, United Kingdom.

Register here: https://events.economist.com/business-innovation-summit/.

Don't miss out—secure your spot today! #BusinessInnovation #AI #Leadership

FinTech Fringe: Rise & Shine (1 February 2024)

Regulating the Future: Building Trust and Managing Risks in AI for FinTechs.

Some of the topics we'll be covering include:

  • Examining how AI can better enable decision-making by the board in UK fintech companies, emphasising the importance of ethical considerations.
  • Exploring the distinctions in how AI is applied in B2B and B2C scenarios in the UK, and the unique challenges and opportunities each presents.
  • How the use of AI in financial services can have concrete impacts on consumers and markets that may be relevant from a regulatory and ethical perspective.
  • The crucial role of data governance in amplifying and accelerating AI in UK fintech and how it contributes to risk mitigation.
  • Insights into practical methods for identifying different types of AI systems within financial services and understanding their unique risks.
  • Discussing practical means of identifying and mitigating risks associated with AI systems in the UK financial sector.
  • Cross-industry collaboration frameworks.

Access here.

teissLondon2024: The European Information Security Summit (22 February 2024)

Panel discussion on Cybersecurity and AI

Can AI tools really improve your incident response?

  • Automating your detection and triage processes using algorithms and models
  • Effectively using AI-based solutions to analyse complex data lakes in real time
  • Improving your overall organisational resilience, reducing recovery costs and minimising downtime.

Access here.

What AI & Partners can do for you

Providing a suite of professional services laser-focused on the EU AI Act

  • Providing advisory services: We provide advisory services to help our clients understand the EU AI Act and how it will impact their business. We do this by identifying areas of the business that may need to be restructured, identifying new opportunities or risks that arise from the regulation, and developing strategies to comply with the EU AI Act.
  • Implementing compliance programs: We help our clients implement compliance programs to meet the requirements of the EU AI Act. We do this by developing policies and procedures, training employees, and creating monitoring and reporting systems.
  • Conducting assessments: We conduct assessments of our clients' current compliance with the EU AI Act to identify gaps and areas for improvement. We do this by reviewing documentation, interviewing employees, and analysing data.
  • Providing technology solutions: We also provide technology solutions to help our clients comply with the EU AI Act. We do this by developing software or implementing new systems to help our clients manage data, track compliance, or automate processes.

We are also ready to engage in an open and in-depth discussion with stakeholders, including the regulator, about various aspects of our analyses.

Our Best Content Picks for 2023

Article: International Business Times | What Kind Of Artificial Intelligence Technological Development Trend Can We Expect In 2024?

As we venture into 2024, the dynamics of AI regulation are poised to undergo significant shifts, with a spotlight on the metaverse and its integration into various aspects of our lives.

Read more at > https://www.ibtimes.co.uk/what-kind-artificial-intelligence-technological-development-trend-can-we-expect-2024-1722517

Article: LEXOLOGY | Two Years of EU AI Act: What Can We Expect Moving Forward?

Following the European Commission’s (“EC”) political approval of the European Union (“EU”) artificial intelligence (“AI”) Act (the “EU AI Act”, “Act” or “Regulation”), more than two years after the initial proposal in April 2021, AI & Partners provides input on the central elements of the EU AI Act that can make it a success, alongside recommendations for how the EU AI Act can be triumphant. These are based on certain aspects of the ex-post exercise conducted by DIGITALEUROPE with regard to the General Data Protection Regulation (“GDPR”).

Read more at > https://www.lexology.com/library/detail.aspx?g=b5acba34-6175-4da0-8c17-9ae00fa6d009

Paper: AI & Partners and #RISK AI | Global AI Benchmarking Study

Report to help stakeholders understand the global AI industry.

Read more at > https://www.ai-and-partners.com/_files/ugd/2984b2_1633ca2b9d61418584c71cacb2858f73.pdf

Paper: AI & Partners and #RISK AI | Global AI Regulatory Landscape Study

Understanding the AI regulatory landscape.

Read more at > https://www.riskai.global/risk-a-i-reports

Podcast: Dr Deandra Cutaja

Dr Deandra shares her thoughts on how we are at the very origins of a new wave of technological development and innovation.

Listen in at > https://spotifyanchor-web.app.link/e/jy9NTs2ZeGb

Paper: AI & Partners and The Economist | Navigating the Future: Unleashing the Power of Generative AI at the 4th Annual Business Innovation Summit

In a world where the pace of technological advancement defines success, the 4th Annual Business Innovation Summit emerges as a beacon for leaders navigating the transformative landscape of artificial intelligence. Under the overarching theme of, "Harnessing AI: from fear to fortune," this summit promises not just to demystify the complexities surrounding generative AI but to empower leaders to leverage it for innovation and growth. In this small excerpt, we'll uncover the reasons why this event is not just another summit but a crucial opportunity for business leaders seeking to remain at the forefront of industry evolution.

Read more at > https://www.ai-and-partners.com/media

Event: IoD Finance & FinTech and AI & Partners | Demystifying the EU AI Act: Navigating the Future of AI Regulation

In this event, we explored the Act's key provisions, compliance strategies, and the broader implications for businesses and society. The event featured expert-led sessions, panel discussions, and practical insights into the AI Act's nuances.

What did the audience learn?

  • In-depth knowledge of the EU AI Act's core principles and provisions.
  • Strategies for ensuring compliance with the AI Act's requirements.
  • Real-world examples and case studies illustrating the Act's impact.
  • Insight into the ethical and legal considerations surrounding AI.
  • A forward-looking perspective on AI regulation in the EU.

Who attended?

This masterclass was ideal for:

  • Business leaders and executives operating in the EU or engaging with EU markets.
  • Policymakers and legal professionals seeking to understand and apply the AI Act.
  • AI developers and innovators interested in compliance and ethical AI.
  • Professionals responsible for AI strategy, governance, and risk management.
  • Anyone eager to grasp the evolving landscape of AI regulation in the EU.

Why did IoD members attend?

IoD members attended to:

  • Gain a comprehensive understanding of the EU AI Act.
  • Stay ahead of the regulatory curve by preparing for AI compliance.
  • Network with experts, policymakers, and peers in the AI and business sectors.
  • Learn how to leverage AI ethically and within legal boundaries.
  • Position their organisations as AI-compliant and responsible entities in the EU market.

Read more at > https://www.dhirubhai.net/company/iod-finance-and-fintech-group/

Legitimate Interest

As we will continue to communicate with UK and EU individuals on the basis of ‘legitimate interest’, no action is needed if you are happy to keep receiving marketing communications covering similar topics to those you have received previously; otherwise, please unsubscribe using the Opt Out link below. We will process your personal information in accordance with our privacy notice.

Opt Out

If, however, you do not wish to receive any marketing communications from us in the future, you can unsubscribe from our mailing list at any time. Thank you.

DISCLAIMER

All information in this message and attachments is confidential and may be legally privileged. Only intended recipients are authorized to use it. E-mail transmissions are not guaranteed to be secure or error free and sender does not accept liability for such errors or omissions. The company will not accept any liability in respect of such communication that violates our e-Mail Policy.
