Rise of the Machines Part I: Mind Machinery
Christopher Bramley
Executive/Leadership Coach | TEDx/Keynote Speaker | Advisor | Director @Finding Shores | Senior Leader | Director of Coaching | Complexity/Flow/Agility/Ecosystems/Learning | Author/Writer/Teacher | AASD1
Here's a field that integrates heavily with a number of the areas I talk about, and I think it's a good time to explore it. Internet of Things, Internet of Us, Automation, AI: simultaneously incredibly exciting prospects with amazing potential, yet tiresome new buzzwords used to jump on the bandwagon (about 40% of European "AI Companies" don't use AI at all).
I think it's best to split out the fields of AI and Automation here as, although they are linked in some cases, they affect us in different ways. So first, let's look at the rise of machine intelligence; watch out for Rise of the Machines Part II for a discussion of Automation. I'll also abbreviate many of the terms, as per the headings below.
Artificial Intelligence (AI) and Cognitive Computing (CC)
I first heard about true AI in business nearly 15 years ago as the next big thing, and I think we erroneously believed it was immediately poised to change the market drastically. It then apparently went quiet; AI has yet to burst into our consciousness in the way we so gleefully describe it. Certainly, in its nascent state, I don't think it had the support and understanding it required.
Fast forward to now, and most people know it's been used algorithmically for some time in social media, or that a computer beat Garry Kasparov at chess - but it now goes far deeper than that. AI can be used to find new viewpoints, crunch huge amounts of data, position things to hack into group human behaviour, even manipulate our decisions.
In fact this is a huge field, and one I am learning more about all the time. You can probably split it initially into two main fields:
- Artificial Intelligence, which looks to solve complex problems to produce a result
- Cognitive Computing, which looks to emulate solving complex problems as a human would to produce a process
So, going back to Garry Kasparov: Deep Blue was definitely AI, because it performed what was essentially a brute-force computation to solve a complex task better than a human could, but it wasn't Cognitive Computing, because it wasn't mimicking how a human would play chess at all (and it seems there are multiple different types of "mimicking").
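To make the "brute force" half of that distinction concrete, here's a minimal sketch of the kind of exhaustive game-tree search Deep Blue's approach descends from, shrunk down to noughts and crosses (real chess engines add heavy pruning and hand-tuned evaluation). Nothing in it mimics human play; it simply evaluates every possible continuation.

```python
# A toy cousin of Deep Blue's brute force: exhaustive minimax search
# over noughts and crosses. It evaluates every possible continuation
# and picks the move with the best guaranteed outcome - a result,
# with no attempt to reproduce a human's process.

WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
             (0, 3, 6), (1, 4, 7), (2, 5, 8),
             (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in WIN_LINES:
        if board[a] is not None and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Return (score, best_move): +1 means X can force a win, -1 means O can."""
    w = winner(board)
    if w is not None:
        return (1 if w == "X" else -1), None
    moves = [i for i, cell in enumerate(board) if cell is None]
    if not moves:
        return 0, None  # board full: a draw
    best_score, best_move = None, None
    for m in moves:
        board[m] = player
        score, _ = minimax(board, "O" if player == "X" else "X")
        board[m] = None
        if best_score is None or \
           (score > best_score if player == "X" else score < best_score):
            best_score, best_move = score, m
    return best_score, best_move

print(minimax([None] * 9, "X"))  # (0, 0): perfect play from both sides draws
```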
Which of these you want will be contextual. Do we want a self-driving car emulating human decisions? Or do we want it to give the best possible result as quickly as possible? Food for thought; the answer may be "both".
To further confuse things, some AI can also be CC - but both of these are already becoming buzzwords, a cargo cult ("do you do AI?"), so in some cases they're also going to be defined by industry marketing.
The influence of AI on business structure, not just process
Back to an old favourite, the Knowledge management matrix!
Now, we know about Taylorism, Process Engineering, and Systems Thinking (at least, if you read any of my articles you do!) - explanation here.
But when we look at how humans ACTUALLY work, and thus most companies, we see much of it lies firmly in Social Complexity.
AI/Machine Learning (ML) currently sits fairly solidly in the Mathematical Complexity quadrant, with a touch of Systems Thinking; essentially, the use of mathematical models and algorithms to find optimal output. Where we go wrong again here is that we often use this to predict, when it can only really simulate.
What true Artificial Neural Network Intelligence (ANNI) may eventually offer is an interesting cross-over potential of mathematical complexity and systems thinking - and with cognitive computing leveraging these, the possibility of a non-human processing unit that at least partially understands human social complexity. We're already looking at branches of AI for decision-making.
This could be very interesting - and potentially harmful, as humans are (demonstrably!) easily socially manipulated; but also because as a nonhuman, an AI capable of doing this would be out of context and therefore not bound by any human constraints. Even a CC-AI would be emulating a human, not being one, and I think it's going to be some time before that becomes fairly accurate. It's a very dispositional field.
To extend the future exploration, it's hard to tell whether a resulting endpoint intelligence would be alien - or whether, because of our insistence on modelling human minds, we would produce an intelligence with something like sociopathy or psychopathy. Even with that much understanding of humanity, a neural network might need something more to approximate a feeling of empathy, and it's hard to say how that would play out in a nonhuman intelligence.
Speaking of Neural Networks…
Artificial Neural Network Intelligence (ANNI/ANN/NN)
This is much closer to the human brain in structure, and is a subset of AI. Neural networks have been around since the 1940s, as scientists have long been fascinated by the human brain.
We have approximately 2-3 GB of storage in our heads, which is pretty poor - not even a decent single-layer DVD! But because we store and recall data not just by creating connections between neurons, but by firing them in sequence, our actual storage is estimated at about 2 PB. That's enormous - although it's not immediately accessible (we just don't work like that). It's also something we use in myriad, intuitive, individual ways to arrive at decisions.
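For a sense of where petabyte-scale estimates like this come from, here's a hedged back-of-the-envelope calculation. The neuron and synapse counts are commonly cited approximations, and the information-per-synapse figure is purely an illustrative fudge factor chosen to land near the estimate above, not a measured quantity:

```python
# Rough, illustrative arithmetic only - none of these are measured values.
neurons = 86e9             # ~86 billion neurons (a commonly cited figure)
synapses_per_neuron = 1e3  # ~1,000 connections each (order of magnitude)
bytes_per_synapse = 23     # assumed information per connection (speculative)

capacity = neurons * synapses_per_neuron * bytes_per_synapse
print(f"~{capacity / 1e15:.1f} PB")  # ~2.0 PB, in the region of the estimate above
```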
We knew computers worked totally differently - logically, via calculation - and the race has (until recently, with CC and qubits) been about how quickly we can do that sequentially. Even with CC, we just can't emulate the intuitive leaps and the individual, distributed cognitive functions that arise from human social complexity - yet, if ever.
But now we're edging into the fringes of these areas, and ANNI are being combined with incredible hardware, software, and new understanding to incrementally produce something much closer to sentient intelligence.
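As a concrete illustration of the building block involved, here's a minimal sketch of a single artificial neuron; the inputs, weights, and bias are toy values for illustration, and real networks stack millions of these units in layers:

```python
import math

# One artificial neuron: a weighted sum of inputs pushed through a
# non-linear "activation" function - a crude mathematical echo of a
# biological neuron deciding whether to fire.

def neuron(inputs, weights, bias):
    activation = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-activation))  # sigmoid: squash into (0, 1)

# Toy values: two input signals and hand-picked weights.
print(neuron([0.5, 0.9], weights=[0.8, -0.2], bias=0.1))  # ~0.58
```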
Machine Learning (ML)
Although my specialty is Human Learning, when you start talking about Machine Learning and AI there are some interesting similarities, as well as some drastic differences.
Machine Learning and AI aren't quite the same thing. AI refers to the field of artificial intelligence as a whole; Machine Learning and Deep Learning are subsets of it. They rely on algorithms for pattern-hunting and inference - although I need to spend more time understanding whether the latter is often more imputation (reasonable substitution) than actual inference.
Machine Learning can also be considered a specific, logical process which doesn't carry the human-intuition aspect that AI and neural networks, via Cognitive Computing, are seen as able to edge into. In this arena, an amusing example of Machine Learning could be:
Me: I'll test this smart AI with some basic maths! What’s 2+2?
Machine: Zero.
Me: No, it’s 4.
Machine: Yes, it's 5.
Me: No, it's 4!
Machine: It's 4.
Me: Great! What's 2+3?
Machine: 4.
Machine Learning tends to ask what everyone else is doing - but some AI may be able to decide what to do for itself by extrapolation. There is clearly a nuance.
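The dialogue above is closer to reality than it looks: many learners simply nudge an internal estimate toward each correction rather than "understanding" the answer. A minimal sketch, with entirely toy values:

```python
# A toy online learner in the spirit of the dialogue above: it nudges
# its guess toward each correction, so it is wrong, then less wrong,
# then right - and still wrong on a question it was never trained on.

guess = 0.0          # the machine's initial answer to "2 + 2"
learning_rate = 0.5  # how far to move toward each correction

for _ in range(6):
    error = 4 - guess            # the teacher says the answer is 4
    guess += learning_rate * error
    print(round(guess, 2))       # 2.0, 3.0, 3.5, 3.75, ... creeping toward 4

# Ask it "2 + 3" and it still answers ~4: it learned an answer, not addition.
```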
Deep Learning (DL)
This is an expansion of basic machine learning. Input is everything, and where traditional processing cannot utilise all of it, methods like deep learning can, because they progressively use multiple layers to extract more and more information. Human-based systems are easily saturated, distracted, and fallible, and traditional IT is really still an offshoot of this methodology, using automation and tools to make the task easier; augmenting human effort.
Deep learning needs as much data as it can get, however - which is how Big Data, and algorithms that can take in billions of users' data, can change how we work.
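Here's a minimal sketch of what those "multiple layers" look like, using NumPy; the layer sizes are arbitrary and the weights are random noise for illustration - in a real network, training (not shown) is what tunes them to extract anything useful:

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(x, n_out):
    """One layer: linearly re-combine the inputs, then apply ReLU."""
    w = rng.normal(size=(x.shape[0], n_out))  # random stand-in for learned weights
    return np.maximum(0, x @ w)

# Each layer consumes the previous layer's output, so later layers can
# represent progressively more abstract combinations of the raw input.
x = rng.normal(size=64)   # a raw 64-value input (e.g. pixel or sensor values)
h1 = layer(x, 32)         # first layer: low-level patterns
h2 = layer(h1, 16)        # second layer: combinations of patterns
out = layer(h2, 2)        # final layer: a 2-value result
print(out.shape)          # (2,)
```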
I remember, many years ago, working on a data protection deal with the largest radio telescope in the world at the time (I believe now a precursor to SKA SA) in South Africa. They needed to back up the data they pulled in, but I believe (and these figures are approximate, from memory) that they could only process 1% of the data that the array pulled in, and that's what we were looking to protect and offsite.
This was around 2008. Imagine that sheer quantity of data with today's storage and ANN/DL capabilities. Given the right patterns to look for, you don't NEED human intervention any more. I strongly suspect that when SKA SA comes online around 2027, it will be using Deep Learning and AI, if not ANN, to parse, categorise, and archive the bulk of that data for searching.
Deep learning and AI have the capability to be a game-changer for how we analyse the world and advance; if we combine that with quantum computing capabilities, we're starting to work out where the genesis of the godlike Minds of Iain M. Banks's novels could come from, if we're lucky.
Or Skynet's People Processing Services if we're not. Personally, I would prefer the former!
Big Data
You can't talk about AI without the latest buzzword, which is also AI's required input. The term "big data" refers to data that is so large, fast, or complex that it's difficult or impossible to process using traditional methods. Accessing and storing large amounts of information for analytics has been around a long time, but when a certain level of complexity and data saturation is reached, false positives become an issue. (Another danger here is Big Biased Data, so having as much data as possible may help reduce this.)
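The false-positive problem is easy to demonstrate: compare enough unrelated variables and some will look "significantly" correlated by chance alone. A small sketch on purely synthetic random data:

```python
import numpy as np

rng = np.random.default_rng(1)

# 1,000 completely random, unrelated variables with 100 observations each.
# Across ~500,000 pairwise comparisons, chance alone produces many pairs
# that look strongly correlated - the big-data false-positive trap.

data = rng.normal(size=(1000, 100))
corr = np.corrcoef(data)                    # 1000 x 1000 correlation matrix
pairs = corr[np.triu_indices(1000, k=1)]    # unique variable pairs only
print((np.abs(pairs) > 0.3).sum())          # typically 1,000+ "significant" pairs
```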
Where AI variants shine is in using this data to make decisions or produce better results, so Big Data is now on many lips too.
The supporting structures
AI requires connectivity; integration; a variety of data; the ability to comprehend rather than merely operate on statistics, failure incidences, and calculation; automation; IoT; and potentially IoU.
These are much more prevalent today, and more importantly the human supporting structure is there; AI/CC is already now proven for certain deliverables.
Without these, AI alone isn't able to affect much. It requires input, and constraints to act against; indeed, I believe this is why the initial fanfare around AI some years ago was premature. It wasn't in any way ready (AI has only recently beaten professional human players at StarCraft II, which is totally different from calculating chess moves in advance and requires nuance), and, more importantly, the surrounding structures weren't ready either - which I have long considered integral to its success.
Internet of Things (IoT)
One of the things any organism - artificial or otherwise - requires to learn and grow is feedback to stimulus; information. And in terms of AI, this is likely to amount to as much connectivity and data as possible to carry out its tasks.
IoT is therefore interesting because it offers the opportunity to both optimise our lives, and learn frightening amounts of data about us. This is already being massively misused by humans - an example being Facebook, or Amazon, using AI and algorithms. Could this be worse with a full AI entity? That depends on what its purpose is, and whether humans have access to all the results (initially at least the answer is probably "yes").
What I find fascinating is the potential here for IoT to act somewhat analogously to a Peripheral Nervous System for an ANN's CNS (the neural network and its immediate supporting structures). Facebook does this in a rudimentary way with mobile devices; Siri, Cortana, and Google AI exist; Amazon also uses Alexa and analogues.
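A minimal sketch of that nervous-system analogy: scattered "sensory" endpoints each report a fragment of the world to one central process, which fuses them into a picture no single device has. The device names and readings here are invented purely for illustration:

```python
import random
import statistics

# Toy "peripheral nervous system": independent IoT endpoints sense one
# fragment each; a central aggregator (the CNS in the analogy) fuses them.

SENSORS = ["hall_motion", "front_door_lock", "thermostat", "power_meter"]

def read_sensor(name):
    # Stand-in for a real device query; returns a fake normalised reading.
    return {"sensor": name, "value": random.random()}

readings = [read_sensor(s) for s in SENSORS]
fused = statistics.mean(r["value"] for r in readings)
print(f"{len(readings)} peripherals reporting; fused central signal = {fused:.2f}")
```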
Special mention: Internet of Us (IoU)
And now we come to something really interesting. What happens when humans integrate into this? And I mean really integrate?
Jowan Osterlund has done some fascinating, groundbreaking work which I've referred to a number of times regarding the conscious biochipping of humans with full control over the composition, sharing, and functionality of the data involved.
This has amazing potential, including ID and medical emergency information, and giving full control to the owner means it can be highly personalised. And therein may lie a weakness for us as well as a strength, as far as AI is concerned.
There's currently no way to track an inert chip like that via GPS or our contemporary navigation systems; however, AI integration could potentially chart someone's near-real-time progress through payment systems, IoT integration for building security, even medical checkups, where human agencies couldn't and wouldn't.
On the other hand, the potential for human and AI collaboration here is immense. Imagine going into Tesla for an afternoon with one of Jowan's chips implanted in your hand, and coming out with it programmed to respond to the car as you approached (assuming the fob function required no power source, which I believe it currently does). Your car would unlock because it's you.
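Here's a minimal sketch of the handshake such an unlock would need, assuming (contrary to today's passive implants, as the caveat above notes) a chip with enough power to compute: the car and chip share a secret key, and the chip proves it holds the key without ever transmitting it, via an HMAC challenge-response:

```python
import hashlib
import hmac
import os

# Challenge-response unlock sketch. Purely illustrative: a passive,
# unpowered implant couldn't compute an HMAC at all.

SHARED_KEY = os.urandom(32)  # provisioned into both the car and the chip

def chip_respond(challenge, key=SHARED_KEY):
    # The implant signs the car's challenge with the shared secret.
    return hmac.new(key, challenge, hashlib.sha256).digest()

def car_verify(challenge, response, key=SHARED_KEY):
    expected = hmac.new(key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)  # constant-time compare

challenge = os.urandom(16)  # fresh random challenge per attempt: blocks replays
print(car_verify(challenge, chip_respond(challenge)))  # True -> unlock
```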
That's fantastic, but also open to vast potential and dangerous misuse by humans, let alone AI. Cyborgs already exist, but they just aren't quite at the Neuromancer stage yet, and neither are the AIs (or the "Black Ice" firewalls - Gibson is recommended reading!).
Stories Vs Reality
I think there is definite value in reading Sci-Fi and looking at how people imagine AI, because we've already seen life imitating art as well as art imitating life - and there are so many narratives of AI, from the highly beneficial to the apocalyptic, that there is something of warning or hope across the board. This can help us take a balanced approach, perhaps - but it needs to be tempered by reality.
Our stories of AI gone awry - the deep-rooted fear of the usurpation of humanity, the AI's subsequent, usually violent, destruction at our hands, and the other myths we surround ourselves with - might not reflect well upon us should a learning ANNI come across them unprepared. We simply don't know how, or even if, any of this data would be taken in.
The Dangers of AI
AI as a tool has a number of worrying possibilities. It is developing so fast that there is a real danger we will not adapt in time; additionally, we need to balance job losses against new roles around the new tech, which is exponentially faster and more disruptive than the physical and hybrid processes that came before. If massive numbers of people lose their jobs and we don't find a solution, that is cause for real concern.
Of course a tool can be used for good as well; but AI is a dynamic tool that can potentially learn to think and change. Some very smart people, including the late Professor Stephen Hawking, have been concerned about the dangers. There are some great examples here (https://futurism.com/artificial-intelligence-experts-fear), as well as a few worrying instances recently:
Tay was a bot targeted at people aged 15-24, designed to better understand their methods of communication and learn to respond. It initially used the language patterns of a 19-year-old American girl. Tay was deliberately subverted; it lasted only 16 hours before removal.
Some users on Twitter began tweeting politically incorrect phrases, teaching it inflammatory messages revolving around common themes on the internet... as a result, [Tay] began releasing racist and sexually-charged messages in response to other Twitter users. Artificial intelligence researcher Roman Yampolskiy commented that Tay's misbehavior was understandable because it was mimicking the deliberately offensive behavior of other Twitter users, and Microsoft had not given the bot an understanding of inappropriate behavior.
Within 16 hours of its release, and after Tay had tweeted more than 96,000 times, Microsoft suspended the Twitter account for adjustments, saying that it suffered from a "coordinated attack by a subset of people" that "exploited a vulnerability in Tay."
(Source: Wikipedia)
Another AI was designed to be deliberately psychopathic. Norman - MIT's psychopathic AI - was built to highlight that it isn't necessarily the algorithms at fault, but the bias of the data fed to them.
The same method can see very different things in an image, even sick things, if trained on [a negatively biased] data set. Norman suffered from extended exposure to the darkest corners of Reddit, and represents a case study on the dangers of Artificial Intelligence gone wrong when biased data is used in machine learning algorithms.
Norman is a controlled study to highlight these dangers, and in fact there is a survey available to help Norman "learn" to fix itself - but imagine if the code were leaked, or elements of Norman were somehow used by other AI to learn. This is similar to a cerebral virus: once it escapes the lab, it's very hard to contain, so let's hope it doesn't (I'm not going to speculate on the results of any MIT robots being subjected to this!).
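Norman's point - same algorithm, different data, different "mind" - is easy to reproduce in miniature. Below, an identical (and deliberately trivial) training procedure is run twice on invented datasets; this is a toy analogy, not MIT's actual setup:

```python
from collections import Counter

# The same trivial "algorithm" - predict the most common label seen in
# training - behaves very differently depending purely on the data fed in.

def train(labelled_examples):
    counts = Counter(label for _, label in labelled_examples)
    majority = counts.most_common(1)[0][0]
    return lambda image: majority  # predicts the majority label for anything

# Two invented training sets for the same ambiguous "inkblot" inputs.
neutral = [("inkblot", "a flower"), ("inkblot", "a bird"), ("inkblot", "a flower")]
hostile = [("inkblot", "something violent")] * 3

print(train(neutral)("new inkblot"))  # "a flower"
print(train(hostile)("new inkblot"))  # "something violent"
```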
A third example - and in my opinion the most damaging - is Facebook's algorithms twinned with its data harvesting and manipulation practices. Fed by the deeply troubling, human-led Cambridge Analytica data, for example, this not only expands on the above issues but adds the wholesale manipulation and misinformation of large portions of populations around the globe. The intent may be to make money for Facebook (and it still shows few scruples where this is concerned, especially politically and with a bent towards specific leanings, which continues to alarm many), but the reality is that these algorithms display a great understanding of how to make humans do something en masse. This is now changing politics. Humans are wired to tap into fragmented narratives because of the way our mental patterns evolved; we should have had checks in place on social media before deploying these mechanisms. We don't, and that's likely deliberate where corporate profit is concerned. It's alarming enough when humans direct this - what if an AI changes to the point where it is taking this decision itself?
AI also ties directly into digital warfare and the removal of the human condition from life-and-death decisions - terrorism, drones, vital infrastructure disruption, and more, all currently directed by humans, where the errors could be as damaging as the aims. If we enable this, and AI decides to go beyond our directives, it becomes doubly problematic.
Bear in mind that in many instances of machines not working as expected, especially computers, the root cause is almost always human error - mistake, misunderstanding, or lack of foresight. We are, and will be, further complicit in the mistakes made by and of AI as we move forward, so we must take care how we step. We can't actually predict any of this, only simulate it, because an AI in the wild would be totally alien to us; it would have no recognisable humanity. And therein lies a danger - or not. It could see us as a threat, or be utterly indifferent to us; it might not understand us at all. Add to this that many people find it amusing to deliberately warp the process without care or thought for the consequences, and there is genuine cause for concern - and things will skew further still.
However, even with all of this, these examples have still been directed or influenced directly by humans. Extrapolating further, it's hard to project what AI-derived AI would be like. A lot of this depends on how we approach it. It's possible that sentient AI could be, in human terms, schizophrenic, isolated, sociopathic, psychotic, or any combination of these. It's equally possible that these terms simply don't apply to what stories love to describe as "cold, machine intelligence". Or perhaps we'll go full Futurama or Star Trek and install "emotion chips" to emulate full empathy. It's hard to say, but I think it goes without saying that we need to step sensibly and cautiously, and not simply focus on profit and convenience.
My own concern isn't so much what an innately innocent self-determining AI would do; it's what an AI would do at the behest of the creatures that created it - and who often crave power without caring about others of their species. Instill those attributes into an AI, and we have some of the worst elements of humanity, along with an alien lack of compassion.
It's a fascinating field of study and projection with a deep level of complexity, and we know only one thing for sure: whatever we do now will have unforeseen and unintended consequences. This is where Cynefin is really important; we need to make our AI development safe-to-fail, and not attempt "failsafes".
(I'll be writing an article on safe-to-fail vs failsafes another time).
Looking ahead...
Much of this is in the future; in terms of human replacement, AI currently trails automation, which has a good head start. AI is currently set either to augment human thinking or to analyse it - not to replace it completely.
Even in some of the better future stories I've read, such as Tad Williams's Otherland series, the AI capability still requires gifted human integration to be truly potent, and we're probably going to be at that level for some time (albeit in a less spectacular fashion).
So ends this exploration of AI and its linked fields. Some of it no doubt sounds far-fetched, and I have obviously read my fair share of hard and soft sci-fi as well as real-life research and study - but the truth is we simply don't know where we will end up; we can only simulate, at best. We must tread these emergent pathways cautiously.
"The real worry isn't an AI that passes the Turing Test - it's an AI that self-decides to deliberately fail it!"
I hope this has been a useful exploration of the disruption of AI and its impact on the market - keep your eyes out for Rise of the Machines Part II, where we also delve into Automation.