On the morning of 13 March 2024 in Lake Charles, Louisiana, I had the privilege of discussing cybersecurity regulations, best practices, and minimum standards with representatives of the Cybersecurity and Infrastructure Security Agency (CISA, Region 6), Cyber Florida, and Idaho National Laboratory at the Critical Infrastructure Protection and Resilience North America Conference (CIPRNA). This is the third year in a row that I’ve supported the event. I continue to return because of the great content, great networking, and dedicated focus on protecting critical infrastructure (CI).
I try not to use slides in my presentations; however, I think the key elements of my contributions are worth sharing. I appreciate your input, questions, and feedback.
Commercial businesses are quickly deploying generative AI like ChatGPT and other AI solutions. While this can be beneficial in commercial environments, the same approach could be disastrous in a control system environment. Unplanned and unmanaged adoption of emerging technology like AI can be downright dangerous in a CI organization. However, many providers of CI technology are exploring solutions to enhance SCADA, PLC, HMI, and other components for customers in all 16 CI sectors defined by CISA. I think it is prudent to start developing an understanding of the harms that AI and cyber threats can produce, which helps drive informed decisions about what to integrate into existing infrastructure and when. I am not afraid that AI-enabled Industrial IoT will bring about Skynet. Still, there are important concerns that organizations must consider for responsible, risk-based adoption of AI technologies in CI. This is the key to good governance.
- The business judgment rule allows board directors to make mistakes, but business judgment is no substitute for the duty of care and the obligation of boards and executive management to develop adequate knowledge to support risk oversight decisions. For more than 25 years, the “Caremark test” has highlighted director liability when directors fail to implement a reporting system or adequate controls or, having implemented such systems and controls, consciously fail to provide oversight and remain informed of risks or problems requiring their attention. Under this standard, there is no excuse for ignorance or lack of oversight. The leadership of CI organizations must understand the exposure produced by AI and cyber risks. They must also take responsibility for making informed risk management decisions, providing adequate resources to reduce risk to an acceptable level, and monitoring the execution of risk management initiatives.
- A strategic plan is great, but strategic management is more important than the piece of paper that documents the strategy. Most strategic plans fail because of poor execution. Because of the risks facing CI organizations, effective strategic management is essential. The act of planning helps clarify goals and balance trade-offs, which must happen before leadership can formalize priorities. A formal process to manage strategy using the Deming Cycle (Plan-Do-Check-Act) will drive improvement in the assessment, formulation, execution, and evaluation of the objectives and initiatives defined by the strategic plan. Performance should be measured quarterly to ensure desirable outcomes are actually achieved.
- Cybersecurity and artificial intelligence are quickly rising to the top of the list of concerns for corporate directors on public and private company boards. Yet board directors and C-suite executives remain unsure about how to manage the operational, regulatory, and reputational challenges they face when cybersecurity and AI are not managed effectively. Significant uncertainty exists because of new and evolving cybersecurity rules from the SEC in the US and new artificial intelligence rules based on the AI Act in the EU. Decision makers must understand and respond properly to the key risks that cybersecurity and AI uniquely produce for each organization based on its CI sector, location(s) of operation, existing controls, and available resources. Operationally, there is no out-of-the-box solution for AI in critical infrastructure. Models and interfaces require customization for each use case adopted in an organization. This requires time, training, and expertise. It also requires planning and limits light-speed adoption of this emerging technology in a CI environment.
- Definitions are important. An “AI system” is an engineered or machine-based system that can, for a given set of objectives, generate outputs such as predictions, recommendations, or decisions influencing real or virtual environments. AI systems are designed to operate with varying levels of autonomy. Understanding how different AI systems function, and the risks related to them, is equally important. People who are familiar with the NIST definition of an “information system” benefit from applying risk management practices to defined security boundaries for individual systems and systems of systems, based on the impact of compromised confidentiality, integrity, or availability (a minimal sketch of this impact-based approach follows this list).
- The EU AI Act provides a great definition of high-risk critical infrastructure AI systems. Like GDPR, the EU AI Act will likely become one of the most influential regulations in the world. The Act classifies AI according to its risk and focuses mostly on preventing harms to people and groups. AI that poses unacceptable risk is prohibited outright (e.g., social scoring systems and manipulative AI). Critical infrastructure is considered a high-risk application of AI that requires an appropriate response to limit the significant risk of harm to the health, safety, and fundamental rights of individuals. Per the regulation, providers of high-risk AI systems must meet obligations that include risk management, data governance, technical documentation, record keeping, transparency, human oversight, and accuracy, robustness, and cybersecurity requirements (a simple illustration of the Act's risk tiers follows this list).
- Requirements from the EU AI Act and standards like the NIST AI Risk Management Framework highlight the need for good governance by both providers and users of AI systems to reduce AI-related harms that affect cybersecurity and privacy. Executive leadership must participate in conversations focused on prioritizing people and processes to manage technology in a way that maximizes operational efficiency while minimizing these risks (a sample checklist keyed to the framework's functions follows this list).
- Project Cerebellum is a good, neutral resource to help drive the production of "Safe, Secure, and Trustworthy AI."
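To make the impact-based categorization in the definitions bullet more concrete, here is a minimal Python sketch in the spirit of NIST's "high-water mark" idea, where a system boundary inherits the highest of its confidentiality, integrity, and availability impact ratings. The example system and ratings are hypothetical and are not content from the presentation.

```python
# Minimal sketch: impact-based categorization of a security boundary.
# A boundary inherits the highest (high-water mark) of its CIA impact ratings.
# The example system and ratings below are hypothetical.

IMPACT_LEVELS = {"low": 1, "moderate": 2, "high": 3}

def high_water_mark(confidentiality: str, integrity: str, availability: str) -> str:
    """Return the overall impact level for a system boundary:
    the highest of the three CIA impact ratings."""
    ratings = (confidentiality, integrity, availability)
    return max(ratings, key=lambda level: IMPACT_LEVELS[level])

# Hypothetical SCADA historian, used only for illustration.
overall = high_water_mark(confidentiality="moderate",
                          integrity="high",
                          availability="high")
print(f"Overall impact for this system boundary: {overall}")  # -> high
```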
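The Act's risk tiering can be illustrated just as simply. The rough sketch below shows how an organization might tag an internal AI use-case inventory with the Act's tiers (unacceptable, high, limited, minimal); the entries and tier assignments are hypothetical illustrations, not legal determinations.

```python
# Rough sketch: an internal AI use-case inventory tagged with EU AI Act risk tiers.
# Entries and tier assignments are hypothetical, not legal determinations.

from dataclasses import dataclass

RISK_TIERS = ("unacceptable", "high", "limited", "minimal")

@dataclass
class AIUseCase:
    name: str
    description: str
    risk_tier: str

    def __post_init__(self) -> None:
        if self.risk_tier not in RISK_TIERS:
            raise ValueError(f"Unknown risk tier: {self.risk_tier}")

inventory = [
    AIUseCase("scada-anomaly-detection",
              "Model that flags abnormal telemetry from field devices", "high"),
    AIUseCase("it-helpdesk-chatbot",
              "Generative assistant for internal support tickets", "limited"),
]

# High-risk entries trigger the heavier obligations described above.
for use_case in inventory:
    if use_case.risk_tier == "high":
        print(f"{use_case.name}: plan for risk management, documentation, "
              "logging, human oversight, and conformity assessment")
```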
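Finally, for the governance bullet, one way to turn leadership conversations into trackable work is a checklist keyed to the NIST AI RMF's four functions (Govern, Map, Measure, Manage). The activities below are illustrative prompts, not text taken from the framework.

```python
# Lightweight sketch: a governance checklist organized around the NIST AI RMF
# functions (Govern, Map, Measure, Manage). Activities are illustrative prompts.

AI_RMF_CHECKLIST = {
    "Govern":  ["Assign an accountable executive owner", "Approve an AI risk appetite"],
    "Map":     ["Inventory AI use cases and their CI context", "Identify affected stakeholders"],
    "Measure": ["Define metrics for accuracy, robustness, and security", "Schedule periodic testing"],
    "Manage":  ["Prioritize and treat identified risks", "Track incidents and corrective actions"],
}

def report_open_items(completed: set) -> None:
    """Print each checklist item with its current status."""
    for function, items in AI_RMF_CHECKLIST.items():
        for item in items:
            status = "done" if item in completed else "open"
            print(f"[{function}] {item}: {status}")

report_open_items(completed={"Assign an accountable executive owner"})
```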
All CI sectors face unique challenges. Strategic governance of cybersecurity and AI risk will allow corporate leaders to understand the impact of cybersecurity and AI on their corporate goals and objectives. What is the corporate strategy? How are resources leveraged to transform that strategy into reality? How can effective governance of cybersecurity and AI produce desired outcomes? The answers will vary, but ideal outcomes are possible. Companies across the CI landscape can use the practices and resources highlighted in this discussion to identify, understand, and plan effective responses to the uncertainty produced by evolving cybersecurity and AI requirements.
CISA Critical Infrastructure Sectors
European Union Artificial Intelligence Act (EU AI Act): https://artificialintelligenceact.eu/the-act/
NIST AI Risk Management Framework
Thrilled to see such vital discussions! As Warren Buffett says, risk comes from not knowing what you're doing. Here's to deeper understanding and safeguarding our future! #Resilience
In the spirit of fostering resilience, remember: collaboration amplifies our impact. Teams that embrace diverse perspectives build stronger defenses. Emphasizing teamwork today, inspired by the ethos of CIPRNA.
Absolutely on point! Nurturing resilience in critical infrastructure is key. Similar to what Angela Ahrendts once hinted at: innovation stems from a blend of technology, leadership, and a deep commitment to progress. Here's to pushing boundaries! #Resilience #Innovation
Excellent insight into #criticalinfrastructure (CI), Keyaan Williams. Thank you for the education!
Keyaan Williams, your presentations are always an opportunity for learning and fresh perspectives. Your focus on business and risk is backed by the experiences you share with the audience. I am off to check out Cerebellum while trying to ride this wave of AI. Safe travels, sir.