The Future of AI: Valuable or Overblown?
“Senators Propose Bipartisan Framework for Laws Regulating Artificial Intelligence.” New York Times, September 2023
Hidden inside that New York Times edition, the article reported that Senators Richard Blumenthal of Connecticut and Josh Hawley of Missouri were planning to hold early Senate hearings. They had invited industry luminaries such as Brad Smith, president of Microsoft, and William Dally, chief scientist of AI chip maker Nvidia.
Senate Majority Leader Chuck Schumer also intended to hold meetings with other top executives, including Elon Musk; Satya Nadella, CEO of Microsoft; and Sam Altman, CEO of OpenAI. All these meetings could hardly come soon enough!
Such intended interactions demonstrate lawmakers’ growing concern with educating themselves about the risks and opportunities of AI so that they can develop rules for the industry. They don’t want to repeat the mistakes made with the social media companies, where privacy, safety, and security were all overlooked. We all share that same concern.
It’s all reminiscent of earlier years, when IBM developed Deep Blue and became the wunderkind after it defeated world chess champion Garry Kasparov. We heard much less about the dismantling of Deep Blue shortly afterward, when Kasparov challenged it to rematches. Then came IBM’s Watson, and many significant companies clamored for business contracts to help them handle their commercial challenges. The medical field also jumped on the bandwagon, hoping Watson would help hospitals and doctors better diagnose medical conditions and shorten the diagnostic process. Within the past year or so, many of those firms and hospital groups have walked away from Watson, disappointed with the outcomes.
All this goes to say that we should embrace the many possibilities of AI without getting too giddy. There are many examples and signs of AI’s potential advantages, but we shouldn’t be ready to ditch humankind in our rush to reap big profits. Any writer will be familiar with the usefulness of AI when drafting articles, as well as the irritations of dealing with its limitations and righteous pronouncements. Let’s also not overlook that AI’s “brainpower” is based upon human brainpower!
In a September 2023 Wall Street Journal article, “Firms Lean on Workers to Help Fact Check AI,” author Isabelle Bousquette starts with the point: “For companies deploying generative AI, the idea of ‘keeping a human in the loop’ is critical – but getting that human to fully understand that role can be a challenge.” Without proper monitoring, there is always room for AI to make errors or misjudge situations, just as humans do. Properly programmed and managed, such errors will likely be relatively infrequent. Even so, they have happened, and in critical situations they could lead to serious consequences; hence the need for human monitoring.
Many companies worry that without adequate AI safeguards, humans are prone to letting things slide. It’s too easy to look away when AI or robots consistently churn out high-standard articles or products – just sit back and enjoy the ride! More practically minded people and quality engineers will probably aim for greater caution based upon real-life experience: hence Bousquette’s recommendation to have properly qualified people monitoring your AI systems and robots.
Again, this point reinforces the possibility of future workforces operating in three tiers:
• Level 1 – Brilliant scientists, inventors, and authors providing fresh ideas, content, and guidelines for AI to operate with
• Level 2 – Savvy operators to set up AI systems and robots so they can meet expectations
• Level 3 – Human monitors, appropriately trained, to screen and review what AI and robots are doing on a daily basis, watching out for missteps and for consistent quality
(Note: The idea of replacing people by employing efficiency experts needs to be put into perspective. The financial industry’s idealistic, people-less world is still a long way off, and we shouldn’t overlook the possibility of tremendous human turbulence and backlash before that moment arrives!)
_____________________________________________________
Author Peter A. Arthur-Smith, Founding Principal of Leadership Solutions, Inc., is based in New York and is the author of Smart Decisions: Goodbye Problems, Hello Options. He has drafted a potential new publication about enlightened leadership that offers a slew of fresh leadership concepts and practical models. Feel free to follow the author at: Linkedin.com/in/peter-arthur-smith-2115722/
© 2023 PAAS-LSI – All rights reserved.