Leadership and Artificial Intelligence
Chip Laingen, CDR, USN (Ret.), M.P.A.
Military Veteran, Business Executive, Graduate Faculty
Future projections of what Artificial Intelligence (AI) could do to, and for, humanity include obvious benefits; but more attention is being paid to the potential harm that could come of it, and rightfully so. It's an important debate, but what it reminds us about human nature itself is equally important, perhaps more so, because what the debate reveals about ourselves provides the means to ensure AI is beneficial to our future rather than destructive.
AI itself – at least in its infancy – reflects the reality of its creators. ChatGPT, for instance, has been lauded for doing everything from passing bar exams to writing PhD dissertations to penning arguably impressive poetry, music, and literature; yet it is also criticized for often just making things up. We don't want that, or we think we don't. The criticism is thus directed at the design of the algorithm, but that's not really fair. ChatGPT is supposed to mimic both the ability and the behavior of its human designers, who, after all, are wont to, yes, make things up. A lot. In that sense ChatGPT is nearly perfect, or as perfectly imperfect, as it were, as its creators.
The real danger comes not with AI as we now know it, but with its ultimate manifestation: Artificial General Intelligence, whereby AI becomes fully autonomous, taking over the human systems that control how we live. The inimitable Gene Roddenberry understood this well before our time. He created the original Star Trek television series in the 1960s.
In one particular episode, "A Taste of Armageddon," the series postulated that technology would ultimately make wars un-winnable in a sense: destructive power becomes so great, even without nukes, that engaging in warfare in a connected world is enormously counter-productive, both physically and economically (a dilemma that China currently has vis-à-vis its aspirations for Taiwan). The episode imagines a future society that understood human nature would never change, so conflict was inevitable; yet this hypothetical society also knew that warfare meant not just loss of life, but the destruction of another thing human nature cares about – the economy itself. So their version of modern warfare was to wait for diplomacy to inevitably fail, then simply run a virtual, mathematically informed war between the adversaries and let the model determine how many would have died on each side, and who the ultimate victor would be. Each side would then agree to "exterminate" that number of people in a painless way (randomly selected by lottery, of course) and go on about its business, with physical infrastructure on both sides, and thus the core economy, fully intact. Perverse, to be sure, but ultimately logical. Spock understood it; Kirk was horrified.
Therein lies a dilemma (admittedly one among many) for the designers of AI, and ultimately for Artificial General Intelligence. Pure, Vulcan-like logic is compelling; but human compassion is part of our nature too, and perhaps AI should ultimately take both into account, with an algorithm that is equally contradictory, better mirroring the nature of its designers. The more divinely inspired reader might add that a human soul is not replicable in any sense, and therefore can't be reconciled with a machine-centric algorithm at all.
The debate will continue, but likely at a rate that AI itself will outpace. It's off and running, with current manifestations learning and refining on their own. At the same time, our modern American business culture is focused on things like reacting to governmental market intrusions and the resulting economic chaos, and on self-inflicted malaise from fashionable initiatives: the once highly beneficial Diversity and Inclusion (D&I) mindset has recently added the culturally corrosive dynamic of "equity" (DEI), leading companies to embrace the antithesis of merit, with its inevitable consequence of poorer results. Businesses are also increasingly embracing Environmental, Social, and Governance (ESG) principles, an equally counter-productive force for a capitalist economy inside a constitutional republic. A "logical" AI would certainly deconstruct DEI and ESG if logic were the prime directive. If compassion at the expense of logic (and profits) were programmed in, perhaps not as much.
My insertion of DEI and ESG here is perhaps an opportune political statement on my part. But more importantly, the polarized reaction of readers underscores the dilemma that AI's designers, and ultimately AI itself, face in this short interval before Artificial General Intelligence runs away with its human-inspired prime directives – which are as yet undecided, and will always remain so, if we're being honest.
What does all of this have to do with leadership? Whatever AI becomes, humans are still part of the equation, even if we become slaves to our AI masters. Before AI defines and re-defines how we should lead humans, those inserting machine-based intelligence into our lives have to decide what humans ultimately respond to, and why. We sure as hell better get the incentives for human performance, satisfaction, and ultimately joy correctly inserted into the algorithm. If a designer were to ask me what that means, I'd tell them to be as true to logic as to soul. Examine the dilemma that Kirk and Spock faced, and know that they were both right, and both wrong. The machine had better be as conflicted as both of them were – or we're doomed.
Chip Laingen ~ 2023