AI and cyber readiness: how organizations can prepare for the new “galaxy” of regulations

In June 2024, several London hospitals and GP surgeries were hit by a ransomware attack that took out 90% of blood-testing capacity and forced hundreds of patients to be redirected for operations and appointments. It was the latest reminder of how vulnerable critical infrastructure is to supply chain risks. The incident not only underscores the immediate dangers of cyber threats but also serves as a stark example of the broader risks associated with increasingly interconnected and intelligent technologies, including those driven by artificial intelligence (AI).

The complexity and sophistication of AI signal a new frontier in technological progress, in which huge potential benefits to society arrive alongside risks that could be equally transformative. Increasingly, governments are introducing guidelines to encourage innovation while safeguarding individuals and society from harm.

This poses a challenge for companies trying to harness the power of AI: their C-suite and functions need to fully understand the minutiae of upcoming regulation and its implications for their business. In this article, I outline key regulations coming into application across Europe and their most critical dimensions, together with a set of actions and recommendations that will ensure organizations are well placed to respond.

Four key regulations

Europe is on the cusp of a galaxy of AI regulation. A series of devastating cyber attacks in recent years has driven national governments and other entities to prioritize preventative measures and advance a raft of regulatory reform. However, the current medley of pan-European, country-based, and sector-based regulation being imposed on Member States can be difficult to navigate. The confusing regulatory picture makes it challenging for multinational or multi-sector organizations to achieve compliance.


There are four regulatory frameworks that the C-suite and decision-makers should seek to understand, as these are likely to have the biggest impact on their organizations.

The European Union (EU) AI Act

The first is the EU AI Act, representing a world-first attempt to impose a comprehensive regulatory framework for AI. The Act is wide-reaching and works by classifying AI products according to their risk levels, and adjusting transparency requirements accordingly. To comply, organizations must carry out ongoing risk assessments, maintain high-quality datasets, log AI activity, and cooperate closely with regulatory and industry peers.

The Act covers three main tiers of risk for AI systems: prohibited, high risk, and minimal risk. Each tier has different compliance requirements. You can find a summary in this article. Organizations need to be aware of the features and intricacies of the systems they are developing in order to identify their level of risk, particularly given AI’s ability to amplify biases and distortions. This awareness must also extend to the activities of suppliers, importers, distributors, and deployers, as organizations can be held partly liable for the actions of these third parties. The consequences for organizations that fail to take these issues into consideration can be severe: fines of up to €35 million, or 7% of annual worldwide turnover, whichever is higher, can be imposed for non-compliance.
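The tiered structure lends itself to a simple internal mapping that teams can use when triaging new systems. The sketch below is purely illustrative: the tier labels are simplified and the example systems and obligations are paraphrases I have supplied, not quotations from the Act.

```python
# Illustrative sketch only: tier names, examples, and obligations are
# simplified paraphrases of the EU AI Act's structure, not legal text.
AI_ACT_TIERS = {
    "prohibited": {
        "examples": ["social scoring", "manipulative techniques"],
        "obligation": "must not be placed on the EU market",
    },
    "high_risk": {
        "examples": ["recruitment screening", "credit scoring"],
        "obligation": "risk management, data governance, logging, human oversight",
    },
    "minimal_risk": {
        "examples": ["spam filters", "video-game AI"],
        "obligation": "voluntary codes of conduct",
    },
}

def obligations_for(tier: str) -> str:
    """Return the (simplified) compliance obligation for a given risk tier."""
    try:
        return AI_ACT_TIERS[tier]["obligation"]
    except KeyError:
        raise ValueError(f"Unknown tier: {tier!r}")

print(obligations_for("high_risk"))
```

A register like this is no substitute for legal analysis, but keeping a machine-readable view of each system’s tier makes it easier to route new AI projects to the right internal review process.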

The NIS2 Directive

The NIS2 Directive is EU-wide legislation aimed at boosting cybersecurity across the Union. The original NIS Directive applied to a narrower set of operators of essential services, but the updated rules under NIS2 cover sectors ranging from transportation to banking, health, and water, and in the majority of cases, companies need to achieve compliance by October 2024.

Adherence to sector-specific guidelines is complicated by the fact that each EU Member State builds regulations based on the Directive, leading to differences in interpretation. Smaller entities and those involved in critical sectors may have difficulty navigating the new Directive. Now could be an opportune moment for organizations to exchange experiences and share knowledge to achieve compliance.

The Digital Operational Resilience Act

The Digital Operational Resilience Act (DORA) aims to strengthen the IT security of the European financial sector, including banks, insurance companies, and investment firms. The Act is intended to complement Europe’s General Data Protection Regulation (GDPR) guidelines and instill an organizational culture that goes beyond compliance to include more comprehensive risk management policies, greater engagement with cybersecurity experts, regular audits, and increased staff awareness and training.

The EU Cyber Resilience Act

The EU Cyber Resilience Act (CRA) aims to improve security for digital products, hardware, and software. The Act, due to come into force in late 2024, will protect consumers by enforcing mandatory cybersecurity requirements for retailers and manufacturers of those products.

Companies selling digital products will be required to complete cybersecurity risk assessments before they reach the market, and to improve transparency over the security of hardware and software products. To prepare, organizations should establish an open reporting architecture that can be readily updated. Rather than being considered “nice to have”, this architecture should be seen as an integral part of a secure software development lifecycle.
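To make the idea of an open, updatable reporting architecture concrete, here is a minimal sketch of the kind of vulnerability-report record such a pipeline might carry. The field names, severity scale, and triage rule are my own illustrative assumptions, not requirements drawn from the CRA's text.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical record for an internal vulnerability-reporting pipeline.
# Field names and the triage rule are illustrative, not CRA-mandated.
@dataclass
class VulnerabilityReport:
    product: str
    component: str
    description: str
    severity: str            # e.g. "low" | "medium" | "high" | "critical"
    actively_exploited: bool
    reported_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

    def needs_urgent_escalation(self) -> bool:
        # Simple triage rule: escalate anything critical or known-exploited.
        return self.actively_exploited or self.severity == "critical"

report = VulnerabilityReport(
    product="SmartLock Hub",
    component="firmware-updater",
    description="Signature check bypass in update channel",
    severity="high",
    actively_exploited=True,
)
print(report.needs_urgent_escalation())  # True
```

The point is not the specific fields but the discipline: when reports follow a single structured shape, the same records can feed internal triage, regulator notifications, and post-release monitoring without reformatting.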

How organizations should respond

Organizations need to be aware of all the policies and regulations being proposed and enacted, to avoid being forced into disruptive, last-minute overhauls of their operations. Many of the frameworks will be implemented in phases, so companies should prioritize the most imminent requirements. The various frameworks also contain overlapping elements, which means preparing for one often builds a comprehensive view of the regulatory landscape as a whole. AI itself, including GenAI chat functionality, can help organizations track regulations and navigate compliance. But AI is also a potential novel vulnerability that needs careful and continuous oversight.


Recommended actions

  • Understand the regulatory galaxy, stay updated and strategize

What does all this look like in practice? Organizations need to identify and understand the regulations they must comply with now or in the near future, and monitor developments. Legal counsel can assist with interpreting requirements, while liaising with the relevant regulatory bodies helps organizations keep abreast of expectations and deadlines. A gap assessment can help determine the current state of compliance and where to focus improvement efforts. This should include emerging risks linked to security and data privacy in AI-driven systems and processes, especially in the software and AI supply chain.

  • Establish governance, policies, risk monitoring and management

Information from the gap assessment needs to be channeled into a governance framework, which should be co-created by a multidisciplinary committee, including members from the cyber, technical, legal and domain expert functions. Together, they should be tasked with creating a set of internal policies and guidelines that are subject to regular assessments and audits. An additional set of risk mitigation strategies will ensure data governance is prioritized. AI warrants specific governance oversight, such as thorough risk assessments of AI systems for vulnerabilities.

  • Training, awareness and communication

No cybersecurity strategy can operate effectively without regular training on cybersecurity best practices. This means establishing awareness programs that keep staff and the C-suite abreast of the issues and of compliance obligations, maintaining constant and open communication about cyber practices and compliance efforts, and building a culture of cybersecurity.

  • Continuous improvement

Finally, organizations need to be committed to regulatory compliance and cybersecurity as a long-term strategy that requires continuous monitoring and improvement. That involves adopting recognized security frameworks and automating compliance processes wherever possible.
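As one illustration of automating compliance processes, the sketch below tracks when recurring controls fall due for re-review, the kind of check that can run on a schedule instead of relying on manual calendars. The control names and review intervals are invented for illustration, not drawn from any regulation.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical compliance register: control names and review intervals
# are illustrative assumptions, not taken from any regulation's text.
@dataclass
class Control:
    name: str
    last_reviewed: date
    review_interval_days: int

    def is_overdue(self, today: date) -> bool:
        # A control is overdue once its review interval has elapsed.
        return (today - self.last_reviewed).days > self.review_interval_days

controls = [
    Control("AI system risk assessment", date(2024, 1, 15), 180),
    Control("Supplier security questionnaire", date(2024, 6, 1), 365),
]

today = date(2024, 9, 1)
overdue = [c.name for c in controls if c.is_overdue(today)]
print(overdue)  # controls whose scheduled review has lapsed
```

Wired into a ticketing system, a check like this turns "continuous monitoring and improvement" from a slogan into a routine that surfaces lapsed controls automatically.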

A culture of “shifting left”

It will not be possible to respond ad hoc to each new regulatory requirement and modification. Organizations will need a culture of “shifting left”, which means embedding cyber capability into the very fabric and design of the organization. This goes beyond the prevailing “shifting right” approach, which is reactive and may result in having to make difficult, costly, and hasty organizational adjustments to meet regulatory guidelines. The more that companies can embed cybersecurity from the beginning of every product and process, the better their overall security posture.

As a rule, it is always more expensive to protect a product or service after it has been assembled, than it is to secure it from the start. Organizations therefore need to review all existing processes and see where cyber considerations can be embedded from the beginning. For new projects and procedures, cybersecurity personnel need to be included in discussions from inception. Together, these steps represent a fundamental move away from viewing regulation as a box-ticking exercise, creating a business that is proactive and places cybersecurity regulation at the heart of its operations.

Watch the on-demand webcast on how to navigate cybersecurity in the AI world, and message me if you want to discuss the issues in more depth.

The views reflected in this article are the views of the author and do not necessarily reflect the views of the global EY organization or its member firms.

Comments

Piotr Ciepiela, insightful piece on the urgent need for AI readiness in cybersecurity! As AI evolves, so do the threats it brings. Organizations need a proactive strategy to integrate AI securely, ensuring robust defenses without compromising data integrity. Preparing now isn’t just a best practice; it’s essential for resilience in the face of new, intelligent cyber threats.

Monika Ingram

Information Security Program and Portfolio Manager

Interesting view, Piotr... Embedding cybersecurity from the start is critical, but fast-paced industries often struggle with tight deadlines and legacy systems. How can companies realistically implement a "shift left" strategy without sacrificing speed to market or agility?

Mishari Fernando

MSc student in International Business and Trade | International Business Analyst | Professional Banker | Customer Service Specialist | Transformational Leader | Aspiring Purchasing Agent | Entrepreneurial Mindset | HR Specialist

Great perspective! This article highlights how crucial it is for businesses to comprehend and navigate the constantly changing landscape of cybersecurity and AI legislation. Businesses can leverage AI's disruptive promise and better prepare for the difficult problems ahead by integrating cybersecurity from the start and cultivating a culture of continuous improvement.

Simon Stirling

Chief Solutions Architect / Chief Technology Officer / Senior Director Software Engineering

Excellent read! It’s clear we’re stepping into a new galaxy of regulations, and organizations need to be fully prepared for the challenges ahead. The shift towards embedding cybersecurity right from the design phase—“shifting left”—is crucial. AI brings immense potential, but also opens up new vulnerabilities, and it’s great to see regulatory frameworks prioritizing proactive risk management. In a world where cyber threats are evolving as fast as tech itself, we need to ensure that regulations are seen as part of innovation, not a roadblock. #Cybersecurity #AIRegulation #ShiftLeft

Vitaliy Onoshko

Industry 4.0 Practice Lead (EMEIA Region) / Industrial Digital Transformation Strategist, Architect, Advisor and Tech Expert

Great perspective! Regulations are scaling up exponentially with the rise of the privacy and sustainability agendas, adding to the set of other transformation drivers that shape organizations' strategies. In order to navigate the galaxy, we definitely need a Guide (#hhgttg)! And here is where #LLMs come in handy, with their capability of making a body of knowledge tractable for human intelligence by means of conversation. So instead of studying (which is still a very important thing to do), one can simply define one's own profile and ask the body of regulations what applies, what compliance means, and then how to make it happen. So #dontpanic and employ #AI agents, which would perfectly fit the task. But remember, those things still need to be supervised, so trust but verify.

More articles by Piotr Ciepiela