Part Ten of Natural Intelligence - How Artificial Intelligence could spiral downward into real stupidity

Part 10 of 14: Stealth Surveillance - The increasing capacity to monitor and predict behaviour

Are we really living in a world where unpredictability is an asset? In some senses, it would seem the answer is yes. The UK, a democracy, is reputed to have the highest density of CCTV coverage in the world, and GCHQ operates one of the most sophisticated communications surveillance capabilities anywhere. How should we view that? Let’s think beyond what might be called the “traditional” or “analogue” methods of communication to the combined intelligence about what we do, where we go, how we react to events (think “likes” and “dislikes”), what we buy, what we aspire to, our hopes and fears, our anxieties, our allegiances, our social makeup, our political views. All of these can be, and are, combined to observe you in a manner that is unparalleled in human history. That alone is a little disturbing and very “Big Brother” (not the TV show, but the Orwellian leader of the totalitarian state of Oceania in the novel Nineteen Eighty-Four). But it doesn’t end there…

It is all very well looking at what you do in the past and present and analysing it to make you accountable for your actions – crime detection and anti-terrorism uses spring to mind, and we want the protection they provide. However, even in that context, alongside the data genuinely required to meet the aims of this protective umbrella, a vast amount is collected that is inconsequential to the aim but very consequential to individuals. This is the basis of a very finely balanced debate on privacy. Think about video doorbells and dashcams and the peripheral vision (and hearing) they have. Walk down an urban residential street today, holding a conversation along the way, and it is likely to have been picked up and recorded across any number of seemingly innocent domestic doorbells. Is that a problem? After all, you are in “public”, so this is the very definition of not being in private. The point is that, in normal, old-fashioned circumstances, you would have been able to see who was around you and who had the potential to listen, unless they were being particularly furtive. Perhaps even more significant, you would expect only snippets of your conversation to be overheard (sorry for the diversion, but if you can overhear, can you therefore underhear?), since the listeners change as you walk and any context is difficult to follow.

Now, I am not suggesting the doorbells conspire in this manner, but once the data is stored, the capability certainly exists to stitch together a large number of seemingly disparate sources into a much bigger and more coordinated picture (or soundtrack). Even more so if you consider the potential for these devices and their storage to be hacked. (In my aim to educate and encourage the questioning mind: where did the term “hacking” come from? “One who gains unauthorised access to computer records” had come into use by 1975, actually pre-dating the Internet by many years. But the origin seems to stem from a slightly earlier tech slang sense of “one who works like a hack at writing and experimenting with software, one who enjoys computer programming for its own sake”, a usage that reputedly evolved at the Massachusetts Institute of Technology in the 1960s.)
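To make that stitching idea concrete, here is a deliberately toy sketch in Python. Everything in it – the device names, the shared clock, the transcripts – is a hypothetical illustration of the technique, not a description of any real doorbell product or API.

```python
# A toy sketch of "stitching" overheard fragments from separate devices
# into one timeline. All device names, timestamps and transcripts are
# invented for illustration.
from dataclasses import dataclass

@dataclass
class Fragment:
    device_id: str    # which doorbell or dashcam captured the snippet
    start: float      # seconds on some shared clock
    transcript: str   # what that device's microphone picked up

def stitch(fragments: list[Fragment]) -> str:
    """Order the snippets by time and join them into a single 'soundtrack'."""
    ordered = sorted(fragments, key=lambda f: f.start)
    return " ... ".join(f"[{f.device_id}] {f.transcript}" for f in ordered)

snippets = [
    Fragment("doorbell-42", 10.0, "meet me at the station"),
    Fragment("doorbell-07", 2.0, "about tomorrow,"),
    Fragment("dashcam-alpha", 6.5, "we should keep this quiet"),
]
print(stitch(snippets))
# [doorbell-07] about tomorrow, ... [dashcam-alpha] we should keep this quiet ... [doorbell-42] meet me at the station
```

Individually, each snippet is nearly meaningless; sorted onto a common clock, the conversation reassembles itself – which is exactly the point.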

But, back to the central point: when a large number of devices are under the same control, the ability to piece things together in a very spooky manner certainly exists. I will leave a thought hanging in the air – Siri, Alexa, Google (and others) listen all the time, we carry them around with us everywhere on our mobile phones, and we invite them into our homes…

And still it doesn’t end there…

Albertism: “Small is the number of people who see with their eyes and think with their minds.”

It is one thing for a system to take what it can see and hear and interpret it; it is quite something else if this data is being manipulated to be predictive rather than interpretive. The 2002 film “Minority Report” (well worth a watch, if only for the sceptical notes it offers a technologist: the notion that you can get through an iris-scanning door with someone else’s eye in a bag, or that person-seeking robots can be thwarted by lying in a bath of ice) introduced an interesting term: “pre-cognition”. I don’t want to spoil the whole thing for those of you who want to watch the film, but the premise is that the future can be seen, and therefore a “pre-crime” unit can intercept and prosecute on the basis of something that is going to happen rather than something that already has. What a preposterous notion, I hear you cry; surely there are so many variables that can affect every action we take (another film-related pointer here is 1998’s “Sliding Doors” – much less serious than Tom Cruise). But while we may consider this far-fetched, there are systems and algorithms making predictions about individual and collective actions and sentiment all the time, and they can, or could, have a profound influence on us.

The much-admired human rights lawyer and author Dr Susie Alegre once recounted, in a session I hosted, how a big tech company had developed and validated a system that, within a given country, could reliably predict your political leanings simply from video imaging. The way you looked and dressed and your demeanour all gave markers of your social standing, education and attitude, which were enough to predict this trait accurately. Now imagine how this could be augmented by all the other traces you leave across your digital footprint, and we have the potential for a very invasive capability. That is a complex scenario, but even in simpler forms we see it: racial profiling through video to segment parts of the community in places like China, or, closer to home, predicting a “likelihood” to present a problem at an airport, a football match or a concert. There is nothing to say that any of it is right; it works on the basis of probability. But how does probability fit with uniqueness, quirkiness or indeed the “benefit of the doubt”, which is actually the basis of many legal systems?
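To see why “it works on the basis of probability” matters, here is a minimal sketch. The markers, weights and bias are all invented for illustration – it reproduces no real company’s model, least of all the one Dr Alegre described. A logistic score turns observable markers into a probability, and nothing more.

```python
# A toy "trait predictor". Features, weights and bias are hypothetical;
# no real profiling system's model is being reproduced.
import math

WEIGHTS = {"dress_style": 1.2, "demeanour": 0.8, "setting": 0.5}  # invented
BIAS = -1.0

def predict_trait_probability(markers: dict[str, float]) -> float:
    """Combine observable markers (each scored 0..1) into a probability."""
    score = BIAS + sum(WEIGHTS[k] * markers.get(k, 0.0) for k in WEIGHTS)
    return 1 / (1 + math.exp(-score))  # logistic squash: a likelihood, not a fact

p = predict_trait_probability({"dress_style": 0.9, "demeanour": 0.7, "setting": 0.4})
print(f"Predicted probability of trait: {p:.2f}")  # about 0.70
```

Note that the output is only ever a likelihood; there is no line of code in which the “benefit of the doubt” could live.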

We talked a little earlier about privacy in the analogue sense, and noted that we could at least see who could see us. The problem now is that an increasing number of individual systems do not know who we are, so they could not tell us that our data is being accumulated even if they wanted to. Somewhere in all of this lies the ability to “super-consolidate”: to create a view of individuals, and notional connections between individuals, where the context cannot all be traced back to source and verified. Pictures and profiles of us are being created all the time that may or may not be accurate, but they can be used to influence (or indeed make) decisions and can be a source of manipulation by malevolent parties.
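Here is a minimal sketch of what “super-consolidation” could look like, assuming hypothetical sources and a made-up pseudonymous link key (here, a postcode plus a gait hash). Real systems would use far richer fingerprints, but the principle is the same: none of the records carries a name, yet together they form a profile.

```python
# Linking records that never carry a name, only quasi-identifiers.
# Sources, fields and values are all hypothetical.
from collections import defaultdict

records = [
    {"source": "doorbell",  "postcode": "XX1 2AB", "gait_hash": "g7f3", "event": "passed at 08:01"},
    {"source": "retailer",  "postcode": "XX1 2AB", "gait_hash": "g7f3", "event": "bought a newspaper"},
    {"source": "transport", "postcode": "XX1 2AB", "gait_hash": "g7f3", "event": "boarded the number 5 bus"},
]

profiles: dict[tuple[str, str], list[str]] = defaultdict(list)
for record in records:
    key = (record["postcode"], record["gait_hash"])  # a pseudonym, not an identity
    profiles[key].append(f'{record["source"]}: {record["event"]}')

for key, timeline in profiles.items():
    print(key, "->", timeline)
# One untraceable key now ties a doorbell, a purchase and a journey together,
# and no single system involved could have told you it was happening.
```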

And then we come back to the fact that AI being introduced into the public domain puts unprecedented processing capability into the hands of relatively inexperienced or socially immature people who lack critical scepticism and, sometimes, moral and ethical fortitude. Even worse, it provides a capability to accumulate connections based on interpreted rulesets that may be incomplete or carry intended or unintended bias. We have the potential to use this observation of actions to predict our very fibre and being, without any determination of accuracy or, more importantly, without empathy.

There is a pathway we need to ensure we build into this bright new future we are creating for ourselves. The key element is defining the hierarchy of who is the master and who is the slave in the relationship between man (or woman) and machine. This requires a good, old-fashioned and much-maligned degree of scepticism – we have to stop assuming things are true because “the computer says no” (go and have a look at the sketches from the series “Little Britain” – you have a lot of suggested viewing from this article).

Albertism: “A man should look for what is, and not for what he thinks should be.”

Coming Next - Part 11 of 14: Deep Thought - The implications and dangers of trying to form a replica of human thought without context

Geoff Wainwright

Impact Data Metrics Ltd

7 months ago

It’s a great article, and those linked in the comments too. Here’s my very brief take on this, and it is a simplistic, high-level view. We are quantum. Before we described processes such as quantum tunnelling, nature had got there first. Whether it is smell, vision, or how information is communicated across the myriad of biological processes we find in life, many have a quantum element to them. We trust our senses (most of the time) and my view is we have entered a period where trust is key. I am fascinated by Large Language Models, but I am also aware (from experience) that they do hallucinate. I am fascinated by the black boxes inherent in neural nets, and we spend a lot of time discussing and worrying about how AI/ML derives its outputs. As biological entities, we are full of black box processes, but we trust that they are (mostly) right, most of the time. This is, essentially, an empirical experience. If we do A then B will happen. We may not understand why, but past experience tells us it is true. So welcome to the “empirical age” of computational science. It’s interesting and is forcing us to determine how we accept or deny things to be true, and how we judge the world around us.
