Welcome to the November 2024 edition of the AI & Partners Newsletter, our customised monthly update on the latest developments and analyses of the now legally binding EU Artificial Intelligence Act (the “EU AI Act”). To start, October marked the third full month in which the EU AI Act has been legally applicable following its entry into force on 1 August 2024. Accordingly, the EU AI Act’s two-year transition period is well underway, with firms having less than six months to either de-risk or decommission any AI systems prohibited under Article 5 of the EU AI Act. After more than three years of dedicated work and research, the EU AI Act’s entry into force represents a landmark moment, both for the regulation of ‘frontier technologies’ and for driving forward a trustworthy AI ecosystem.
With the EU AI Act formally in force and the two-year mandatory compliance implementation period underway, businesses must now start their EU AI Act readiness journeys. This issue examines a range of matters from [X] to [X]. Our objective is to stay ahead of the curve and keep you informed.
AI’s long-term potential for sustained value creation remains uncontested. Now, and for the foreseeable future, we base our services around helping firms achieve regulatory excellence under the EU AI Act. We hope you find this content beneficial.
As always, if you have any comments or recommendations for content in future editions or would like to contribute content, please let us know: [email protected].
What’s the latest with the legislative process?
- AI Pact Pledge Signing. More than a hundred companies sign AI Pact pledges: The European Commission has announced that over one hundred companies, including multinational corporations and SMEs across various sectors, have initially signed the AI Pact. The Pact aims to encourage voluntary adherence to the principles of the AI Act ahead of its formal implementation and strengthen engagement between the AI Office and stakeholders. Signatories commit to at least three key actions: developing an AI governance strategy to ensure compliance with the Act, identifying high-risk AI systems, and increasing AI literacy among staff. Additionally, over half of the participants have pledged to ensure human oversight, mitigate risks, and label certain AI-generated content transparently. The Pact remains open for other companies to join and commit to both the core and additional actions until the AI Act is fully in effect.
- Pledges for general-purpose AI Code of Practice. Over 400 submissions for general-purpose AI code of practice: The European Commission has received nearly 430 responses to its consultation on the upcoming Code of Practice for general-purpose AI (GPAI), as outlined in the AI Act. These submissions will contribute to the finalisation of the Code by April 2025, with GPAI provisions set to take effect on 1 August 2025. Key topics of focus include transparency, copyright regulations, risk assessment and mitigation, and internal governance. This input will assist the AI Office in the implementation and enforcement of GPAI rules, as well as in developing guidelines for summarising training data used in GPAI models. In addition, almost a thousand organisations and individuals worldwide have expressed an interest in participating in the drafting of the first Code of Practice for GPAI. An online opening plenary is planned for 30 September.
- MEPs raise questions around appointment process. MEPs raise questions about the appointment process for general-purpose AI code of practice leaders: According to Eliza Gkritsi, Tech Editor at Euractiv, three Members of the European Parliament (MEPs) have raised concerns about the European Commission's process for appointing key positions for drafting general-purpose AI guidelines. On 24 September, the Commission responded to those interested in participating, providing limited details beyond the initial plenary scheduled for 30 September. MEPs Axel Voss, Svenja Hahn, and Kim van Sparrentak have asked questions regarding how the Commission is selecting the chairs and vice-chairs of the working groups, particularly concerning international expertise. They are seeking clarification on whether these appointments will be announced by the plenary on Monday and how the Commission plans to ensure delivery within the tight timeframe.
- Working group position appointments announced. Chair and vice-chair appointments for the working groups: The AI Office has announced the chairs and vice-chairs for four working groups tasked with developing the first General-Purpose AI Code of Practice. These experts, chosen for their diverse backgrounds in computer science, AI governance, and law, will lead the process from October 2024 to April 2025. The selection criteria included expertise, independence, geographical diversity, and gender balance. For example, the working group focused on transparency and copyright is co-chaired by experts in European copyright law and AI transparency. The four working groups will address topics such as transparency, copyright, risk assessment, mitigation measures, and internal risk management for general-purpose AI providers. The appointed chairs and vice-chairs will guide discussions, synthesize input from participants, and work toward presenting a final draft by April 2025.
- Inaugural plenary for Code of Practice on general-purpose AI (GPAI): According to tech journalist Jacob Wulff Wold from Euractiv, the European Commission conducted its inaugural plenary for the Code of Practice on general-purpose AI (GPAI) on September 30. During the session, the Commission introduced the chairs and vice-chairs of the working groups responsible for drafting the Code and welcomed close to 1,000 participants to the virtual meeting. The drafting process will include input from a range of stakeholders, workshops with GPAI providers, and discussions with the chairs and vice-chairs. The first draft is anticipated around November 3, with the final version scheduled for April 2025. Early findings from a stakeholder consultation, which attracted nearly 430 submissions, were also shared. The consultation highlighted contrasting views on data transparency and risk assessment between GPAI providers and other participants. Contributions came from various sectors, including industry, rightsholders, civil society, and academia, though the plenary itself was primarily attended by academics and independent experts.
- Draft Implementing Act for AI Scientific Panel: The European Commission is inviting public input on a draft act to establish a scientific panel of independent experts for the AI Act. This expert group will support the AI Office and national market surveillance authorities with advice on implementing and enforcing the Act. The consultation period spans from October 18 to November 15, lasting a total of four weeks. Feedback will be published on the Commission's website, provided it meets feedback guidelines, and will inform the final stages of the initiative, which outlines the panel’s setup and operational framework.
- Leadership for AI Act Monitoring: Euronews' Senior EU Policy Reporter, Cynthia Kroet, has shared that the European Parliament has appointed Michael McNamara and Brando Benifei to co-chair its AI Act monitoring group. This group will oversee the Act’s implementation, with McNamara representing the Committee on Civil Liberties, Justice and Home Affairs (LIBE) and Benifei representing the Committee on Internal Market and Consumer Protection (IMCO). Benifei previously co-led the report on the AI Act, and McNamara joined the Parliament in July following the European elections. While the Legal Affairs Committee has expressed interest in joining the group, they have yet to appoint a representative. The date for the group's first meeting remains unconfirmed, and most discussions will likely be held privately, similar to working groups for the Digital Services Act and Digital Markets Act under the new Parliament.
- Hiring Progress at the AI Office: As reported by POLITICO Tech Reporter Pieter Haeck, the European Commission has so far staffed half of the intended positions for the AI Office, with 83 staff members currently employed and 17 more expected to join soon. The office, which began operating under DG CONNECT in June, aims for a total of 140 employees. Industry insiders have raised concerns about the potential challenges posed by the staffing shortage, given the Office’s broad mandate. The AI Office includes five specialized units, one of which is dedicated to "excellence in artificial intelligence and robotics" based in Luxembourg. Currently, the AI Safety unit, a key area of focus, is led temporarily by AI Office director Lucilla Sioli, as a permanent head for the unit has not yet been appointed.
What do the latest analyses state?
- Future of the AI Pact remains uncertain: According to a report by Euractiv's Eliza Gkritsi and Jacob Wulff Wold, the future of the AI Pact is unclear following the resignation of Commissioner Thierry Breton. The Pact consists of two main components: Pillar I, which serves as a peer-to-peer network for sharing best practices, and Pillar II, which involves a series of pledges. Notable tech giants such as Microsoft, Google, Amazon, and OpenAI are among the 115 signatories, while companies like Meta, Anthropic, and Mistral are absent from the initial list. Some companies were hesitant to sign due to concerns about the prescriptive nature of the Pact and its potential interference with compliance efforts related to the AI Act. The Pact features three core commitments along with additional voluntary ones, with approximately half of the signatories committing only to the core elements. Signatories are expected to report on their implementation after twelve months, although the specific reporting requirements have yet to be clarified.
- Tech giants emphasize the importance of the code of practice: Martin Coulter, European Technology Correspondent for Reuters, reported that the enforcement of regulations for general-purpose AI, including the potential for copyright lawsuits and multi-billion dollar fines, will remain uncertain until the accompanying codes of practice are finalised. The EU has invited a diverse group of stakeholders to participate in drafting this code, and it has received an unusually large number of applications. Although the code of practice will not be legally binding, it will serve as a compliance guide for companies, and disregarding it could lead to legal challenges. Major technology firms and non-profit organisations have applied to be part of the drafting process. Industry representatives stress that it is crucial to develop the code in a way that supports ongoing innovation without becoming overly restrictive. Additionally, some stakeholders have expressed concerns that companies are making efforts to avoid transparency.
- European Parliament's study on AI liability: Euractiv's tech journalist Jacob Wulff Wold reported that the European Parliament Research Service (EPRS) has recommended expanding liability regulations to include general-purpose AI products and developing a broader legal framework for software liability. The study suggests that the liability regime should cover general-purpose AI, as well as prohibited and high-risk uses of AI as defined in the AI Act. It proposes transitioning the Artificial Intelligence Liability Directive (AILD) into a more comprehensive Software Liability Instrument. The study also examines how the AILD interacts with the AI Act and the updated Product Liability Directive. Member of the European Parliament Axel Voss indicated that the JURI committee will determine the next steps in October. To prevent market fragmentation and enhance clarity across the EU, the EPRS recommends expanding the AILD's scope. It also suggests applying strict liability to AI systems prohibited under the AI Act and considering this approach for "high-impact" systems.
- Overview of EU Standardisation Supporting the AI Act: Skadden's legal team has provided an outline of the EU's efforts toward AI standardisation under the AI Act. The European Commission has tasked CEN and CENELEC with creating European standards by April 30, 2025, aimed at ensuring the safety of AI systems in the EU market, while protecting fundamental rights and fostering innovation. The CEN-CENELEC Joint Technical Committee (JTC 21) has introduced a roadmap for AI standardisation, which has been assessed by the European Commission's Joint Research Center. The evaluation highlighted significant gaps in existing international standards and proposed the development of new standards to meet the requirements of the AI Act. Some harmonised standards, such as CEN/CLC ISO/IEC TR 24027:2023 and ISO/IEC 23894:2023, have already been adopted by JTC 21. A work programme and dashboard have been released by CEN and CENELEC to track progress on the development of further standards, although delays are expected. The completion of these standards may be pushed to late 2025, giving companies limited time to implement them before the AI Act's enforcement in August 2026.
- UK's Shift Towards Cooperation with Europe: David Matthews, writing for Science|Business, highlighted the UK's shift towards greater collaboration with the EU in science and technology, driven by the country’s new technology secretary. This marks a departure from the previous government's focus on regulatory divergence post-Brexit, which included relaxed rules on genetically engineered crops and a light-touch approach to AI regulation. The new government is expected to introduce AI legislation, although it may not be as comprehensive as the EU's AI Act. The UK seeks to cooperate closely with both the US and EU on AI matters, continuing its emphasis on AI safety through the AI Safety Institute. The planned legislation will make voluntary agreements with AI companies enforceable and establish the AI Safety Institute as an independent body. The UK aims to balance between the EU's regulatory framework and the US's executive order-driven approach, but some experts warn this strategy could leave the UK lagging in AI regulation. Additionally, the size of the EU market may pressure AI companies to prioritize compliance with EU regulations over any British frameworks.
- Risk Management Practices in Leading AI Companies: SaferAI, a French non-profit, recently published a report evaluating the risk management practices of top AI companies. The report found that Anthropic, OpenAI, Google, and DeepMind performed reasonably well in risk identification, while Meta was rated poorly for risk analysis and mitigation. Mistral and xAI were rated the lowest, with most of their risk management practices deemed "non-existent." SaferAI's CEO, Simeon Campos, underscored the urgency of robust risk management as AI capabilities continue to advance. Yoshua Bengio, who is leading a working group developing a Code of Practice with the Commission's AI Office, has expressed support for this initiative, which outlines the risk management steps required from providers of general-purpose AI to comply with the AI Act. The AI Office is currently expanding its technical team by hiring 25 specialists with expertise in computer science and engineering to address the risks associated with generative and general-purpose AI.
- Telecom Operators and the AI Act: Independent journalist Michelle Donegan, writing for TM Forum, discussed how the EU AI Act will require telecom operators to invest more in compliance efforts to meet new safety standards. While the Act may not alter their overall AI strategies, telcos will need to evaluate deployment risks more carefully. Lower-risk applications, such as customer service chatbots, will still need to meet transparency requirements, while certain high-risk use cases, particularly those involving critical infrastructure and network operations, will face additional regulatory obligations. The Act's impact extends beyond Europe, affecting telecom operators worldwide. Several major telecom companies, including Deutsche Telekom, KPN, Orange, Telefonica, Telenor, TIM Telecom Italia, and Vodafone, have already signed on to the AI Pact, a voluntary initiative to align with the AI Act's requirements early. Orange views the AI Pact as a direct channel of communication with the European Commission, while Telenor sees the Act as a vital step in establishing global standards for AI development.
- The Changing Role of Chief Privacy Officers Under the AI Act: Ron De Jesus, Field Chief Privacy Officer at Transcend, argues in The Parliament Magazine that the EU AI Act has significantly expanded the role of chief privacy officers (CPOs). CPOs are now tasked with responsibilities beyond traditional data protection, including ensuring transparency, fairness, and security in AI systems, as well as compliance with copyright laws. They must gain technical expertise in AI algorithms, machine learning models, and automated decision-making processes, while also addressing the ethical challenges associated with these technologies. This expanded role is becoming evident across sectors such as financial services, healthcare, and e-commerce. To meet these new demands, De Jesus advocates for companies to invest in training CPOs and providing the necessary resources to manage AI's legal, ethical, and technical challenges. Enterprises must also allocate more financial and human resources to support CPOs in their evolving roles.
- Compliance Assessment of LLMs Using a New Tool: As reported by Pascale Davies from Euronews, researchers from ETH Zurich, Bulgaria's Institute for Computer Science, AI, and Technology, and LatticeFlow AI have developed the “LLM Checker,” which evaluates major generative AI models for compliance with EU AI regulations. Models from Alibaba, Anthropic, OpenAI, Meta, and Mistral AI were scored on aspects like cybersecurity, environmental impact, and data governance. While most models achieved an average score of 0.75 or above, certain deficiencies were noted, particularly in discrimination and cybersecurity. OpenAI's GPT-4 Turbo, for instance, scored 0.46 on discriminatory output, while Alibaba Cloud's model scored 0.37. The European Commission has welcomed this initiative, seeing it as a foundational tool in translating the AI Act into measurable standards.
- Summary of Industry Event on General-Purpose AI Rules: CCIA Europe’s recent AI Roundtable brought together stakeholders from industry, academia, government, and civil society to discuss the Code of Practice for general-purpose AI (GPAI). Legal expert Yann Padova highlighted four main challenges: 1) aligning viewpoints from a broad group of nearly 1,000 stakeholders, 2) limited representation of GPAI providers, who constitute only 5% of the drafting process, 3) risk of discussions diverging from the Act's focus, and 4) concerns about added compliance obligations. A follow-up roundtable scheduled for later in the year will focus on AI’s intersection with privacy and data protection laws.
- Template for High-Risk AI System Instructions: The Knowledge Centre Data & Society has developed a prototype template for compliance with Article 13 of the AI Act, which pertains to high-risk AI systems. The template is designed to help providers and deployers meet transparency obligations by detailing necessary information on the system’s purpose, features, and risk profile. Deployers can review these instructions and request further clarification if needed. Providers are also advised to review any additional guidance from relevant authorities regarding Article 13.
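As a rough sketch of what such a template might capture in machine-readable form, consider the following. The field names, example values, and completeness check are hypothetical illustrations, not taken from the Knowledge Centre's prototype or from the text of Article 13 itself:

```python
from dataclasses import dataclass, field

# Hypothetical structure for the instructions-for-use information a
# provider of a high-risk AI system might assemble for deployers.
# Field names and values are illustrative assumptions only.

@dataclass
class InstructionsForUse:
    provider_identity: str
    intended_purpose: str
    capabilities: list[str] = field(default_factory=list)
    limitations: list[str] = field(default_factory=list)
    human_oversight_measures: list[str] = field(default_factory=list)

    def missing_fields(self) -> list[str]:
        """List fields still empty: a basic pre-release completeness check."""
        return [name for name, value in vars(self).items() if not value]

draft = InstructionsForUse(
    provider_identity="Example AI GmbH",  # fictional provider
    intended_purpose="CV screening support for recruiters",
    limitations=["Not validated for non-EU labour markets"],
)
print(draft.missing_fields())  # fields a deployer would ask to have filled in
```

Structuring the information this way lets a deployer see at a glance which transparency items are still outstanding before the system is put into use.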
- Standards for AI Act Compliance: The Joint Research Centre (JRC) has outlined expected standards that will support AI Act implementation. The Act, which entered into force in August 2024, requires these standards, especially for high-risk AI systems, to be effective within the next 2-3 years. European harmonised standards, which provide legal certainty for compliant AI systems, are being developed by CEN and CENELEC at the European Commission’s request. These standards are crucial to enforcing EU policies and promoting competition, especially for SMEs in the AI sector. However, drafting has encountered delays due to challenges in defining scope and reaching agreement within standardisation bodies. Progress toward consensus will need to accelerate to meet the timelines.
What is coming up next?
ADIPEC 2024 shaping the future of energy together!
Join the world's largest energy conference.
The world's biggest energy event is back, and this year's focus is on decarbonisation. Join us in Abu Dhabi this November 4-7 for @ADIPEC 2024 and be part of the conversation shaping a more sustainable energy future!
Here's what awaits you at @ADIPEC:
- High-impact dialogue with over 1,600 industry leaders
- Showcase of cutting-edge innovations from 2,200+ exhibitors
- Opportunity to connect with 184,000+ energy professionals
- Focus on practical solutions for decarbonisation across the entire energy chain
The EU and AI in Energy:
The EU #AI Act is setting the stage for the responsible development of AI in various sectors, including energy. While its main focus isn't directly on decarbonisation, it could accelerate the energy transition by promoting more efficient energy use and renewable energy technologies.
Join us and be part of the solution!
- Learn more and register today: https://www.adipec.com/
- Limited spots available! Don't miss out - register now and be a part of ADIPEC 2024!
#ADIPEC #energy #decarbonisation #innovation #future
Insurance Innovators Summit 2024
The world's most important insurance conference.
Attracting 1500+ insurance big hitters and next-gen disruptors from across the globe, Insurance Innovators Summit is the event reshaping insurance as we know it.
Rub shoulders with industry goliaths, have game-changing conversations, and be part of bringing insurance innovation to life.
The Business Show 2024
Celebrating 50 editions of entrepreneurialism, small businesses, and innovation.
Are you looking to launch a new business or scale your current one? For over 23 years, The Business Show has transformed the lives of entrepreneurs and small business owners worldwide. With over 1.2 million attendees to date, the event is back once again to help you and your business thrive.
The 50th edition of the show will unlock the secrets to business adaptation, innovation, and resilience, offering access to the products and services that can propel your business to the next level.
Attendees will also have the chance to hear from renowned keynote speakers from past events, delivering insights to inspire and educate. Plus, an exciting £100k business start-up package is up for grabs! Entrepreneurs can apply online, with select applicants invited to pitch their ideas live at the show. The winning business will walk away with a £100,000 package of resources, including £10,000 in cash – a truly unmissable opportunity.
Running alongside The Business Show are two other must-attend events: Going Global Live and Retrain Expo. Going Global Live provides essential knowledge on trade agreements, international strategies, and cultural insights, while connecting you with business owners seeking overseas expansion. Meanwhile, Retrain Expo focuses on upskilling and retraining to ensure future success in an ever-evolving industry landscape.
With more than 750 exhibitors, 200 expert-led seminars, and unmissable masterclasses covering topics from business growth to cybersecurity and marketing, visitors will leave with the tools and knowledge to excel in their industry. Ready to take the next step? Register for your free ticket today at greatbritishbusinessshow.co.uk.
World Summit AI 2024 (MENA)
The Only AI Summits in the World that matter are coming to Qatar
Fostering diversity and inclusion in MENA
World Summit AI and Intelligent Health are known for their unwavering commitment to fostering diversity and equality. We firmly believe in the democratization of AI, striving to make the transformative power of artificial intelligence accessible to all regions and communities, contributing to the collective advancement of AI on a global scale.
AI & Big Data Expo | Global 2025
Delivering AI & Big Data for a Smarter Future.
Book your conference and expo ticket to AI & Big Data Global, which is set to explore what’s new and worth attention in the #aiandbigdataexpo ecosystem. Join us in person at Olympia, London, on 5–6 February to hear from our industry-leading speakers representing companies such as Meta, BT Group, LinkedIn, Heineken, Co-op and Volvo Cars, to mention just a few!
Dive into a comprehensive exploration of AI and Big Data with 8 co-located events, each tailored to provide in-depth insights into specific aspects of these transformative technologies.
Network with over 7,000 professionals, thought leaders, and enthusiasts from around the globe. Exchange ideas, collaborate, and forge connections that will shape the future of AI and Big Data.
Gain profound knowledge from 200+ industry experts and visionaries who will share their insights, experiences, and forecasts. Be inspired by thought-provoking keynotes and panel discussions from leading international companies!
Elevate your networking experience as 56% of attendees hold director-level positions or higher. Engage with decision-makers and influencers, fostering meaningful connections for future collaborations.
Explore thematic tracks covering Enterprise AI, Machine Learning, Security, Ethical AI, Deep Learning, Data Ecosystems, NLP, and more. Tailor your experience to align with your specific interests and professional goals.
Your in-person ticket will also grant you access to the co-located events exploring IoT Tech, Unified Communications, Intelligent Automation, Data Centres, Edge Computing, Cyber Security & Cloud and Digital Transformation! Register your pass here: https://www.ai-expo.net/global/pass-types-and-prices/
Explore the AI & Big Data Expo conference agendas here: https://www.ai-expo.net/global/agenda-2025/
What AI & Partners can do for you
Providing a suite of professional services laser-focused on the EU AI Act
- Providing advisory services: We provide advisory services to help our clients understand the EU AI Act and how it will impact their business. We do this by identifying areas of the business that may need to be restructured, identifying new opportunities or risks that arise from the regulation, and developing strategies to comply with the EU AI Act.
- Implementing compliance programs: We help our clients implement compliance programs to meet the requirements of the EU AI Act. We do this by developing policies and procedures, training employees, and creating monitoring and reporting systems.
- Conducting assessments: We conduct assessments of our clients' current compliance with the EU AI Act to identify gaps and areas for improvement. We do this by reviewing documentation, interviewing employees, and analysing data.
- Providing technology solutions: We also provide technology solutions to help our clients comply with the EU AI Act. We do this by developing software or implementing new systems to help our clients manage data, track compliance, or automate processes.
We are also ready to engage in an open and in-depth discussion with stakeholders, including the regulator, about various aspects of our analyses.
Our Best Content Picks for 2024
EU AI Act: 4th Regulatory Initiatives Grid (#RIG)
The fourth edition of the #RIG is here, spotlighting initiatives tied to the EU AI Act and related legal frameworks. With the EU AI Act having entered into force on 1st August 2024, numerous initiatives have now been set in motion, such as the official sign-off of the #AI Act. Suffice it to say, it's a transformative moment in regulatory history.
The Grid encompasses initiatives led by various organizations, including the @European Commission, @European Parliament, @The Council of the European Union, and national standards bodies, such as the @BSI Group, @VDE, @ISO, and @IEEE.
It covers not only the EU #AI Act but also interrelated legal efforts like civil liability frameworks and sectoral safety legislation, including the newly revised #AI Liability Directive.
The fourth edition allows firms to benchmark progress and navigate evolving regulatory landscapes effectively.
Vital considerations for firms:
- Understanding the implications of the EU #AI Act and associated regulations.
- Staying updated on sector-specific safety legislation and civil liability frameworks.
- Collaborating with relevant stakeholders to ensure compliance and trustworthy #AI practices.
Ready to dive deeper? Explore the contents of the Regulatory Initiatives Grid to stay ahead in the evolving AI regulatory landscape.
#AIRegulation #EUAIAct #ComplianceJourney #TrustworthyAI #RIG
EU AI Act: AI for Social Outcomes
Our latest report "#AI for Social Outcomes" explores how the EU #AI Act supports trustworthy #AI ecosystems for social advancement.
1. #AI has the potential to address critical global challenges, but responsible deployment is essential to avoid risks.
2. Facilitators play a crucial role in connecting tech organizations with communities, ensuring #AI initiatives are inclusive and impactful.
3. The EU #AI Act provides a regulatory framework that emphasizes human rights, transparency, and fairness to guide #AI development for societal benefit.
Together, we can harness the power of #AI to make a positive global impact!
#EUAIAct #TrustworthyAI #SocialImpact #AIForGood #AIPartners
EU vs. US: Who will win the EU #AI Act compliance race?
Race for Compliance Has Begun!
The countdown is on! With the European Union’s Artificial Intelligence Act (the “EU AI Act”) in effect since 1 August 2024, the race to achieve compliance has officially started. Companies across the US and EU need to gear up to meet the Act’s requirements, including implementing robust risk management, governance, and compliance processes. The transition period offers some leeway, but it’s crucial to get ahead now.
Inferences Drive Future Outcomes
A recent study on GDPR compliance, sponsored by McDermott Will & Emery LLP, reveals key lessons for navigating the EU AI Act. The study, covering over 1,000 companies in both the US and EU, highlights a significant benchmark: 90% of respondents are aware of their GDPR obligations, while 10% are uncertain. This provides a valuable comparison point for understanding and preparing for AI compliance.
Difficulties Faced with Compliance
The report suggests that many companies may struggle to meet the August 2026 deadline or may not be fully aware of their compliance status. With 40% of respondents likely to achieve compliance only after the deadline and 8% uncertain of their compliance timing, the path to meeting the EU AI Act’s requirements is challenging.
A huge thanks to our invaluable corporate partners for their contributions!
@Feeney FinTech, @JC Legal, @Rialto Consultancy, @Cyber Security Unity
Massive gratitude to our invaluable individual partners for their support!
@Doug Hohulin, @Lisa Ventura MBE, @Silvia A. Meyer
Stay informed and proactive to navigate these complexities and ensure your organization is prepared for the evolving regulatory landscape.
#AIAct #Compliance #EURegulations #GDPR #RiskManagement
EU AI Act: AI Regulatory Alignment
What is #AI Regulatory Alignment?
As #AI continues to transform industries, aligning with the EU #AI Act is no longer just a regulatory requirement—it's a strategic imperative for businesses. The EU #AI Act emphasizes fundamental rights such as privacy, fairness, and accountability, ensuring #AI systems behave ethically and transparently. This means that businesses must adopt #AI regulatory alignment throughout the #AI lifecycle to remain compliant, innovative, and trustworthy.
Fundamental Rights Integration: #AI systems must safeguard human dignity, privacy, and fairness, translating these ethical principles into actionable norms.
Continuous Monitoring & Accountability: #AI requires ongoing oversight, with businesses responsible for ensuring their #AI systems align with regulatory and societal standards.
Stakeholder Collaboration: Actively engaging stakeholders—governments, communities, and users—is essential to maintaining transparency and trust.
Auditable Processes: Clear, documented processes are crucial for verifying that #AI systems uphold both ethical principles and legal obligations.
The EU #AI Act is more than just compliance—it’s about fostering responsible #AI that aligns with societal values. #AI systems should not only be technically robust but also ethically sound, operating within strict boundaries to avoid harm or discrimination. For businesses, this means enhancing trust, transparency, and accountability across all #AI deployments.
See this research report for more information > https://lnkd.in/dZBmiVzR
For help in aligning your #AI systems with the EU #AI Act, DM us on LinkedIn or contact the team at [email protected].
EU AI Act: AI for Society
How can enterprises meet the United Nations' #SDGs?
As the world moves towards the 2030 United Nations Sustainable Development Goals (#SDGs), #AI is becoming a key enabler for addressing the most pressing global challenges. Responsible #AI innovation offers the potential to accelerate progress on sustainability while meeting regulatory requirements like the European Commission's EU #AI Act.
#AI is more than a technology—it's a catalyst for sustainability. From enhancing climate action to improving social well-being, #AI solutions are driving significant impact. With six years left to meet the 2030 targets, time is of the essence, and #AI is uniquely positioned to help close the gap.
Collaboration between global leaders, policymakers, and innovators will be essential in leveraging #AI’s full potential to create a more equitable, sustainable future.
EU AI Act: Compliance in the Era of General-Purpose AI
Why does #GPAI Compliance Matter?
As general-purpose AI (GPAI) transforms industries, ensuring compliance with evolving regulations is not just a legal requirement—it's a competitive advantage. Businesses that take a holistic approach to regulatory adherence can drive innovation while minimizing risks. Here's how your firm can adopt a 3-pillar framework to stay ahead:
3 Pillars for AI Compliance Success:
1. Utilise the Past. What it means: Leverage existing compliance systems rather than reinventing the wheel. By evaluating and updating current frameworks, businesses can address the unique challenges posed by GPAI. What businesses should do:
- Audit your current compliance measures to identify gaps related to AI.
- Clarify roles across departments for AI governance to ensure accountability.
- Strengthen existing teams by investing in AI expertise for compliance and risk management.
2. Create the Present. What it means: Effective governance requires active collaboration across legal, IT, compliance, and business units. A culture of transparency and knowledge-sharing ensures robust AI oversight. What businesses should do:
- Encourage cross-functional cooperation to align AI governance with the entire organization.
- Engage top management to lead by example, promoting responsible AI practices.
- Foster interdisciplinary problem-solving, ensuring that all stakeholders contribute to AI risk management.
3. Structure the Future. What it means: To stay competitive, firms must anticipate future developments in AI and regulation. Build flexibility into your governance systems so you can adapt to new risks and opportunities as they arise. What businesses should do:
- Invest in upskilling your workforce with AI-related expertise.
- Monitor AI innovations and continuously assess emerging risks.
- Implement agile governance frameworks that evolve with both technology and regulations.
By adopting this holistic framework, your firm can not only meet today’s regulatory standards but also be ready for tomorrow’s AI-driven world.
#AICompliance #GPAI #EUAIAct #AIGovernance #FutureOfAI #Innovation
Legitimate Interest
We will continue to communicate with UK and EU individuals on the basis of ‘legitimate interest’. If you are happy to continue receiving marketing communications covering similar topics to those you have received previously, no action is needed; otherwise, please unsubscribe by following the Opt Out process below. We will process your personal information in accordance with our privacy notice.
Opt Out
If, however, you do not wish to receive any marketing communications from us in the future, you can unsubscribe from our mailing list at any time. Thank you.
DISCLAIMER
All information in this message and its attachments is confidential and may be legally privileged. Only intended recipients are authorized to use it. E-mail transmissions are not guaranteed to be secure or error free, and the sender does not accept liability for any such errors or omissions. The company will not accept any liability in respect of any communication that violates our e-mail policy.
Meta-Sealing: Establishing Trust Across the AI Lifecycle
The paper, titled "Meta-Sealing: A Revolutionary Integrity Assurance Protocol for Transparent, Tamper-Proof, and Trustworthy AI Systems", presents a comprehensive approach to ensuring the integrity of AI systems throughout their entire lifecycle. Unlike traditional verification mechanisms, Meta-Sealing utilizes cryptographic seal chains that generate verifiable, immutable records for all system actions and transformations. By integrating advanced cryptographic techniques with distributed verification, Meta-Sealing provides tamper-evident guarantees while addressing stringent regulatory requirements for AI transparency and auditability.
Research manuscript: https://doi.org/10.48550/arXiv.2411.00069
Brief: https://dev.to/mahesh_vaikri/ensuring-trustworthy-ai-with-meta-sealing-a-paradigm-shift-in-integrity-assurance-2hfl
#MetaSealing #AIIntegrity #TrustworthyAI #AIPolicy #AIForGood #AITransparency #CryptographicAI #AIAuditability #RegulatoryCompliance #EthicalAI #AIInHealthcare #FinancialAI #SustainableAI #AIGovernance #TechForTrust
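To give a flavour of the seal-chain idea described above, here is a minimal, hypothetical sketch of a hash-chained audit log: each lifecycle event is sealed with a hash that also covers the previous seal, so altering any record breaks verification. This is our own simplified illustration, not the paper's actual Meta-Sealing protocol; the function names and record fields are assumptions.

```python
import hashlib
import json

def make_seal(prev_seal_hash, action, payload):
    """Create a tamper-evident record chained to the previous seal (illustrative)."""
    record = {"action": action, "payload": payload, "prev": prev_seal_hash}
    # Canonical JSON (sorted keys) so the hash is reproducible on verification.
    encoded = json.dumps(record, sort_keys=True).encode()
    record["hash"] = hashlib.sha256(encoded).hexdigest()
    return record

def verify_chain(chain):
    """Re-derive each hash and check the back-links; True iff the chain is untampered."""
    prev = None
    for record in chain:
        body = {k: v for k, v in record.items() if k != "hash"}
        if body["prev"] != prev:
            return False
        encoded = json.dumps(body, sort_keys=True).encode()
        if hashlib.sha256(encoded).hexdigest() != record["hash"]:
            return False
        prev = record["hash"]
    return True

# Seal a few hypothetical lifecycle events, then tamper with one.
chain, prev = [], None
for action, payload in [("train", {"dataset": "v1"}),
                        ("evaluate", {"accuracy": 0.93}),
                        ("deploy", {"env": "prod"})]:
    seal = make_seal(prev, action, payload)
    chain.append(seal)
    prev = seal["hash"]

print(verify_chain(chain))              # True
chain[1]["payload"]["accuracy"] = 0.99  # silently edit a sealed record
print(verify_chain(chain))              # False
```

The key design point is that each seal commits to its predecessor, so an auditor only needs the final hash to detect any retroactive change anywhere in the history; the real protocol adds distributed verification on top of this basic chaining.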