A.I. Value Preservation and the Paradox of A.I.
Sean Lyons

“The whole is greater than the sum of its parts.” - Aristotle

A.I. Value Preservation in 2024

“It was a country . . . that he and his people had known how to use and abuse, but not how to preserve.” - Wendell Berry

A.I. value preservation begins with the acknowledgement that A.I. value is determined by our stakeholders (e.g. users, developers, researchers, regulators, policy makers, investors, shareholders, and society) and that this stakeholder value is subject to the universal forces of creation, destruction, and preservation. Value destruction stems from the dangers associated with risks, threats, and hazards, and can manifest not only as an initial impact but also as subsequent collateral damage. For example, A.I. value destruction could result from the negative impact of the following:

  1. Misuse and Abuse
  2. Privacy, Criminality, and Discrimination
  3. Job Displacement and Societal Impact
  4. Autonomous Weapons
  5. Super Intelligence and the Singularity
  6. Energy Consumption and the Environment

Collectively these issues could be considered to represent a potentially existential threat to humanity (see Pre-mortem of an A.I. Scandal(s): Anticipation of Future Hazards).

Comment: Of course, should stakeholders take a step backwards and truly open their eyes, it becomes all too evident that, with human beings at the helm, these negative issues are already occurring all around us on a daily basis to varying degrees. The much-vaunted requirement for human oversight of A.I. therefore becomes somewhat muted. Are stakeholder fears concerning A.I. merely a projection of the existing flaws in humanity itself? After all, A.I. is developed by humans and trained using data based on human activity. The use of A.I. without appropriate safeguards is likely to simply accelerate humanity further along its current trajectory.

The Value Preservation Imperative

Sustainable Value = Value Creation + Value Preservation

The value preservation imperative refers to the obligation to preserve, protect, and defend stakeholder value from the possibility of value erosion, reduction, and destruction. Defending against A.I. value destruction involves anticipating a hazard in advance, taking appropriate steps to help prevent its occurrence, detecting the hazard event on a timely basis should it occur, and taking appropriate remedial action. Learning from the whole experience is also crucial in order to better anticipate future occurrences. A robust A.I. defense program needs to be proactive, comprehensive, and systematic in nature and requires a holistic and multi-disciplinary approach (see A.I. Scandal Pre-mortem: The A.I. Defense Diagnosis).

The A.I. Paradox

“Technology is dominated by two types of people: those who manage what they do not understand, and those who understand what they do not manage.” - Putt’s Law

The recent proliferation of A.I. technology clearly presents extraordinary benefits, opportunities, challenges, and threats to the corporate world and to humankind in general. In business this can be viewed in terms of the potential for exceptional rewards accompanied by equally exceptional risks. The dynamic nature of these new A.I.-related risks, threats, and hazards means that the digital age has become increasingly complicated and complex. In fact, advancements in A.I. technology and the recent Generative A.I. tsunami are leading to an exponential level of complexity which humankind is already struggling to fully comprehend. It may well be that there are limits to the level of complexity that humans can effectively manage, and that at some point technological development will simply become too complex for humans.

Comment: Some might argue that this new digital age has already pushed our level of complexity beyond these thresholds essentially resulting in our blind trust in “Black Box” technologies. The reality is that we may already have arrived at the point where the logic, rationale, and practical effectiveness of human intervention in A.I. processes are questionable and may simply be regarded as convenient window dressing or political theatre.

The paradox of A.I. is that eventually only A.I. technology will have the capability to manage A.I. technology.

Ironically, it seems increasingly likely that it is only through sophisticated A.I. technology that humans can ever hope to effectively manage this increasingly complex cyber landscape. For this to occur in as ethical, safe, and secure a manner as possible, however, enhanced levels of due diligence will be required. This will necessitate a holistic approach to A.I. defense, one which is capable of delivering the necessary levels of defense-in-width, defense-in-height, and defense-in-depth (see Holistic A.I. Defense and A.I. Defense Due Diligence).

A.I. Defense Fortification

"The only way to discover the limits of the possible is to go beyond them into the impossible." - Arthur C. Clarke

The daunting challenge of upgrading our approach to A.I. defense is, however, now becoming a realistic proposition due to the ongoing utilization of technology with varying levels of A.I. sophistication to augment and fortify defense-related activities. The following are just ten examples:

  1. Diligence: By embedding defense due diligence into the A.I. life cycle (i.e. Ideation, Design, Development, Deployment, Maintenance, and Retirement), organizations can better adhere to best practices and help ensure fairness, minimize bias, and eliminate discrimination. For example, data is generally considered to be the lifeblood of A.I., and its performance is very much dependent on the quality, quantity, and provenance of the data used throughout its life cycle. Data robustness can be improved by incorporating the critical A.I. defense components into the data management framework (e.g. Data Governance, Data Risk, Data Compliance, Data Intelligence, Data Security, Data Resilience, Data Controls, and Data Assurance).
  2. Automation: Advanced technology (including the use of A.I. bots) can be used to automate the activities of these critical A.I. defense components and to help to ensure that these activities are autonomously operating on a continuous basis and providing real-time information. Ongoing activities such as verification, validation, and testing can benefit from automation and help to increase confidence and trust in defense processes (e.g. Automated Auditing, Continuous Auditing, and Real-time Auditing etc).
  3. Specialization: Specially focused narrow A.I. (e.g. algorithms, analytics, models, and platforms etc) can be used to perform specific A.I. defense activities from cradle to grave. This can involve narrow technical solutions and can include processes such as issue identification, assessment, remediation, monitoring, and reporting (e.g. Risk Identification, Risk Assessment, Risk Response, Risk Monitoring, and Risk Reporting etc).
  4. Foresight: Forward looking and future focused technologies can be used as forecasting instruments and tools to help support the anticipation of future issues. Foresight enables the implementation of proactive measures in advance. These technologies can involve the use of predictive analytics, sensitivity analysis, scenario modelling, and scenario simulations (e.g. Resiliency Analysis, Predictive Maintenance, Crisis Modelling, and Scenario Testing etc).
  5. Interconnectivity: A.I. technology can be used to help better understand symbiotic relationships and appreciate the correlations, dependencies, and interconnectivity of activities. This can involve the extrapolation of 1st, 2nd, and 3rd order consequences in order to outline any possible cascades of contagion. This can help to create, protect, and maintain a big picture perspective (e.g. Relational Mapping, Interconnectivity Linking, and Consequence Projections).
  6. Speed: The use of technology can help to prevent potentially volatile situations from quickly escalating by accelerating reactions and speeding up response times. The timely detection of unusual, unexpected, abnormal, or suspicious activity can be critical. This can help ensure that an individual incident does not escalate to an emergency, to a crisis, to a disaster, and on to a catastrophe (e.g. Real-time Alerts, Early Warning Mechanisms, and Various Response Triggers etc).
  7. Learning: The use of self-learning technology offers the potential of continuous learning in real-time based on learning from ongoing behaviors, subtle patterns, and performance metrics. Adaptive learning capabilities can help defense activities to evolve and develop on a day-to-day basis thereby helping to amplify defense processes, enhance defense capabilities, and improve the overall defense posture (e.g. Adaptive Authentication, Adaptive Recovery, and Adaptive Controls etc).
  8. Vigilance: Technology can be used to help improve vigilance in terms of the current environment. Real-time vigilance can help to ensure early intervention and adherence to frameworks, codes, best practices, and standards thereby helping to minimize the occurrence of negative events. The quality of corporate health can be monitored using diagnostics to indicate potential compromises (e.g. anomalies, deviations, violations, or system failures etc) which can help to quickly identify new exposures, vulnerabilities, and operational gaps (e.g. Scanning Technology, Benchmarking Tools, and Exception Reporting etc).
  9. Decisions: A.I. technology can be used to enhance, augment, and support decision-making through education, training, and awareness thereby helping improve options and choices. A.I. driven personalization based on professional and personal preferences can provide tailored content and recommendations through customized updates, guidance, and assistance. A.I. can help provide the individual with the transparency required to arrive at more informed, ethical, and risk-weighted decisions (e.g. Explainable A.I. (XAI), User friendly Interfaces, and Virtual Assistants etc).
  10. Collaboration: A.I. technology can help facilitate stakeholder interactions, collaboration, cooperation, and coordination through group communication interfaces. It can facilitate group brainstorming in addition to the constant sharing of ideas and insights, and the ongoing exchange of information, intelligence, and knowledge as part of the collaboration process (e.g. Chat Platforms, Chatrooms, and Chatbots etc).
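The escalation idea in item 6 can be sketched in code. The following is a minimal, purely illustrative Python sketch of how a normalized severity score might be mapped to escalation tiers so that response triggers fire before an incident grows into a catastrophe. All thresholds, tier boundaries, and function names here are hypothetical assumptions for illustration, not part of any established framework.

```python
# Illustrative sketch: map a normalized severity score (0.0-1.0) to an
# escalation tier, so that real-time alerts can fire at the right level.
# The thresholds below are arbitrary assumptions chosen for the example.

ESCALATION_TIERS = [
    (0.2, "incident"),
    (0.4, "emergency"),
    (0.6, "crisis"),
    (0.8, "disaster"),
    (1.0, "catastrophe"),
]

def classify(severity: float) -> str:
    """Return the escalation tier for a normalized severity score."""
    for threshold, tier in ESCALATION_TIERS:
        if severity <= threshold:
            return tier
    return "catastrophe"  # scores above 1.0 are treated as worst case

def alert(severity: float) -> str:
    """Format a real-time alert message for the given severity."""
    tier = classify(severity)
    return f"[{tier.upper()}] severity={severity:.2f}"

print(alert(0.15))  # a contained, low-severity incident
print(alert(0.75))  # already well up the escalation ladder
```

In practice the severity score would come from monitoring and anomaly-detection systems, and each tier would be wired to a different response trigger; the point of the sketch is simply that early, automated classification is what keeps an incident from silently becoming a crisis.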

The Power of Cognitive A.I. Defense

“What I know is a drop, what I don't know is an ocean.” - Isaac Newton

Going forward, the proactive use of increasingly advanced and evolving A.I. technology (e.g. Machine Learning, Natural Language Processing (NLP), Robotic Process Automation (RPA), and Computer Vision etc) can help to future-proof against potential hazards. In fact, using these technologies to augment A.I. defense management offers the potential to transcend our current limitations through greater integration (consolidation, alignment, and aggregation) and optimization of defense activities. The integration of these technologies is an essential step in providing the unifying force required to develop a holistic view of A.I. defense. Such an integrated approach offers the potential to optimize limited resources by orchestrating disparate disciplines across multiple dimensions, delivering seamless workflows, and streamlining activities.

A more robust level of A.I. defense may therefore be achievable by fully embracing the power of A.I., thereby enabling organizations to better safeguard stakeholder value by proactively preserving, protecting, and defending against the dangers posed by evolving risks, vulnerabilities, and exploits. With the necessary safeguards in place, it becomes possible to harness A.I.'s transformative potential and utilize its decision-making and problem-solving capabilities to help unlock new opportunities. Ideally, holistic A.I. defense can help to facilitate the more creative environment necessary to foster imagination, ingenuity, inspiration, innovation, and ideas (creativity through the five I's).

A bright future is certainly possible, but caution and care must be exercised. Appropriate safeguards need to be put in place.

Previous Articles in this A.I. Series

  1. Pre-mortem of an A.I. Scandal(s): Anticipation of Future Hazards
  2. A.I. Scandal Pre-mortem: The A.I. Defense Diagnosis
  3. Holistic A.I. Defense and A.I. Defense Due Diligence


Valentin-Petru Mazareanu, PhD

Governance, Risk Management & Compliance Professional | CISSP | ITIL Ambassador | GRC speaker & trainer | ITIL Strategic Leader | MOR | CBCI | Prince 2 | ISO27k | BSC | MCT

6 months ago

Thank you Sean for offering your views and thoughts. Extremely valuable!
