KT clear thinking to reduce the effect of Artificial Stupidity.
Pale blue dot of reason

I was lucky enough to do my PhD with Prof. Ian Pyle, who founded the Computer Science Department at York and worked at Harwell and at Aberystwyth Computer Science. He is now in his 80s, sharp as ever, and we met for lunch last year. He coined the term Artificial Stupidity, which he sees as trying to apply A.I. in situations where it does not fit or carries unacceptable risks.

In the mid 90's I worked as an Artificial Intelligence researcher for a few years while I worked out that my career calling involved stopping large organisations self-harming with middle-sized and large computers. Fast forward to early 2024: I was chatting with Dr Praboda Rajapaksha at a prospective students' Open Day at Aberystwyth University Computer Science Department. We talked about her research and how Neural Networks (LLMs) had advanced in the last 30 years, but that 90%-plus of the progress in this area was down to hardware advances. A Cray C90 delivered around 4 GigaFLOPS and could cost up to $30,000,000 in 1992. Fast forward 32 years and an NVIDIA A100 weighs in at 312 teraFLOPS at about $10,000. Not all FLOPS are equal, but I rest my case.
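For anyone who wants the ratio spelled out, here is a rough back-of-the-envelope sketch taking the figures above at face value (peak numbers, and ignoring that not all FLOPS are equal):

```python
# Back-of-the-envelope comparison using the figures quoted above.
cray_c90_flops, cray_c90_cost = 4e9, 30_000_000    # ~4 GFLOPS, ~$30M (1992)
a100_flops, a100_cost = 312e12, 10_000             # ~312 TFLOPS, ~$10k (2024)

speedup = a100_flops / cray_c90_flops                                    # raw throughput ratio
flops_per_dollar = (a100_flops / a100_cost) / (cray_c90_flops / cray_c90_cost)

print(f"Raw throughput: ~{speedup:,.0f}x")            # ~78,000x
print(f"FLOPS per dollar: ~{flops_per_dollar:,.0f}x")  # ~234,000,000x
```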

I have had a number of discussions in the last 6 months where I have been treated as a bit dim for not grasping that A.I. already has emergent intelligence and will find new ways of solving problems beyond human abilities. I would have liked to explain how an LLM really works and how that determines what an LLM is really good at, where it can exceed human speed and accuracy, and where it is a liability. However, some bubbles of misplaced confidence based on single data points from personal experience and a dab of science fiction are best not burst.

An LLM can be thought of as a very large mathematical matrix (large as in billions of cells, 175 billion in the case of ChatGPT) with a number of layers of neurons (ChatGPT has 96). Data is presented from the training set, and through repeated exposure to that training set, which pairs an input with an expected result, the values inside the matrix are adjusted. An LLM is just a statistical model (a big one) of its training set. This is great for something like medical image analysis, where there is a huge, high-quality training set with known outcomes, and it is reasonable that an LLM may do a better job than a well-trained human in classifying medical images of conditions it has been trained on.
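For the curious, here is a toy PyTorch sketch of that training loop. It is nothing like the scale of ChatGPT, but the mechanics are the same: present inputs, compare against expected results, and nudge the values inside the matrices.

```python
import torch
import torch.nn as nn

# A tiny stack of layers: each Linear layer is a matrix of adjustable values.
# Real LLMs do the same thing with billions of weights and far more layers.
model = nn.Sequential(
    nn.Linear(16, 32),
    nn.ReLU(),
    nn.Linear(32, 2),    # two output classes, e.g. "condition present / absent"
)

inputs = torch.randn(100, 16)            # stand-in training inputs
expected = torch.randint(0, 2, (100,))   # stand-in known outcomes

loss_fn = nn.CrossEntropyLoss()
optimiser = torch.optim.SGD(model.parameters(), lr=0.1)

for epoch in range(50):                  # repeated exposure to the training set
    optimiser.zero_grad()
    loss = loss_fn(model(inputs), expected)
    loss.backward()                      # how much did each weight contribute to the error?
    optimiser.step()                     # adjust the values inside the matrices
```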

Before Andrew Vermes retired from KT last year, I wrote some PyTorch which would take an object/defect statement and, based on a customer training set he had access to, the theory was that it should be able to classify object/defect statements into good and bad. I had Covid, which took me out of action for a while, and by the time I was functional again Andrew had retired. If you happen to have a good training set of object/defect statements lying around that I can train the PyTorch on, we should talk. It is quite reasonable that something like ServiceNow could classify the quality of a problem statement (a problem well specified is a problem half solved). I am less convinced that problem statements in domains outside the training set would be classified correctly. Finding out was the point of the prototype.
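The prototype itself is not reproduced here, but a minimal sketch of the general shape might look something like this. The vocabulary, features and every other detail below are made up for illustration, not taken from the prototype or the customer training set.

```python
import torch
import torch.nn as nn

# Hypothetical sketch: a bag-of-words representation of an object/defect
# statement fed to a small classifier that scores it good/bad once it has
# been trained on labelled examples.
VOCAB = ["server", "printer", "fails", "slow", "sometimes", "error", "since", "upgrade"]

def bag_of_words(statement: str) -> torch.Tensor:
    words = statement.lower().split()
    return torch.tensor([float(words.count(w)) for w in VOCAB])

classifier = nn.Sequential(nn.Linear(len(VOCAB), 2))   # weights would come from training

statement = "Printer fails intermittently since firmware upgrade"
scores = torch.softmax(classifier(bag_of_words(statement)), dim=0)
print(f"P(good problem statement) = {scores[1].item():.2f}")  # meaningless until trained
```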

There is no visibility of which elements of the training set combine to produce the value in any given cell of the matrix. It is an open challenge to explain why a particular answer is given. When ChatGPT blesses you with one of its wonderful hallucinations, it can't be tied back to a particular part or parts of its training set (an Internet brain dump).

There will be 3 types of primary job roles with A.I. :-

  1. Those who create and improve the underlying A.I. technology
  2. Those who find new uses for A.I. technology
  3. Those who clear up afterwards.

For the 5-10 years of my working life left, at whatever pace it happens, my lot will mostly be sweeping up after the fact.

A starting point for defence against the dark art of selling A.I. solutions to all problems (and non-problems):

  1. Set out the decision to be made, the objectives, weight the objectives, assess the alternatives against the objectives and, taking the best alternatives, articulate the risks of each and whether they can be accepted (a minimal weighted-scoring sketch follows this list). KT Decision Analysis enables us to reason in the open, shining a light into the dark corners where bias tends to hide and prosper. Those with a bias they hold dear, or a motivation to exploit a bias, really hate KT Decision Analysis.
  2. Before solutions involving LLMs are put in place, Potential Problem Analysis (risk management) is used to set out effective triggers (better if automatic, but not always possible) to identify hallucinations, with associated contingent actions. Preventive actions may be better dealt with as risks in the Decision Analysis (answers on a postcard ...).
  3. When something does go wrong, establish a proper causal chain with a clear understanding of root cause, contributing circumstances and breached barriers (checks that did not work as intended). KT Incident Maps are a great way of representing this.
  4. Engineer the Performance System to include A.I. components which interact with people [no, not stack ranking A.I. as part of performance management]. How does the A.I. component get feedback and adjust to consequences (re-training) for particular behaviours in particular situations? Pay attention to the white space between A.I. and people. A mismatch between behaviour and response will not always be down to a problem in the A.I. part of the Performance System. It might just be a bug to log :-)
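
To make the first point concrete, here is a minimal sketch of the weighted-scoring step in a Decision Analysis. The objectives, weights and alternatives are invented for illustration, and the full KT method is of course much more than arithmetic.

```python
# Weighted scoring of alternatives against weighted objectives, done in the open.
objectives = {"meets accuracy target": 10, "explainable decisions": 8,
              "cost to operate": 6, "time to deploy": 4}

# How well each alternative satisfies each objective, scored 1-10.
alternatives = {
    "LLM-based triage":     {"meets accuracy target": 8, "explainable decisions": 3,
                             "cost to operate": 7, "time to deploy": 8},
    "Rules + human review": {"meets accuracy target": 6, "explainable decisions": 9,
                             "cost to operate": 5, "time to deploy": 6},
}

for name, scores in alternatives.items():
    total = sum(objectives[obj] * score for obj, score in scores.items())
    print(f"{name}: weighted score {total}")

# The top score is only a candidate: the risks of each leading alternative
# still have to be articulated and accepted before the decision is made.
```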

I am as excited as anyone about the benefits A.I. will bring. This NY Times article pretty much aligns with my positive view of the technology (I try not to think beyond 5 years where any technology is concerned), but mix in the human skill for mis-application and hype. For 30 years I have lived and worked in an environment where people around me have been involved in A.I. research (at home as well; appearing on Radio 4's PM as an invited expert with zero clue that 3 million people were listening, bless).

A.I. is at the early stages of delivering its benefits (nearly 70 years on from John McCarthy's first Artificial Intelligence workshop). Like any technology breaking out of research labs, its power and pitfalls are not, yet (??), well understood by the general population and decision makers.

Time for Clear Thinking humans to keep Artificial Stupidity in its place.


Next up, let's explore why learning skills (and of course I am talking about skills like Problem Solving and Decision Making) takes time and effort, what we can learn from Army training, and the tendency in larger organisations to prioritise WHAT over HOW.




