Artificial Intelligence, really?
Binary: The InterWeb


I have been a curious observer and consumer of many things technology-related for some time - from 3D printers to CNC laser engravers, from AutoDesk packages to GPS trackers, AR to VR, smart watches to smart rings to smart home devices, autonomous driving to deep adversarial neural networks. All sorts of hardware and software gadgetry, really. I am by no means a computer scientist or a polymath, and I am neither an expert nor a reliable source of advice on such matters - I'm just a keen student of what is possible and where we are heading. More recently I have developed an insatiable appetite for the what, where, why and how surrounding this mysterious concept called Artificial Intelligence (AI), and the even more elusive Artificial General Intelligence (AGI).

Sources

Like any keen student, I am always up for listening to or reading about new technological frontiers from various sources. In my professional life we are constantly reminded to exercise "professional skepticism" in what we do - I find that trait especially helpful in my travels of technological curiosity.

I have attended the World Science Festival two years in a row, including as a speaker, for a broad range of lectures from astrophysics to gene-based technologies. I am a regular listener to a bunch of podcasts, including but not limited to Radiolab, StarTalk, Joe Rogan, Sam Harris, The Portal, The Hive, Deep Dive, Making a Killing, Sworn, Pivot, Lex Fridman's AI Podcast and Revisionist History. Long form (three hours plus) or short form, I don't mind.

The YouTube is also a good source of learning, especially if you have a decent BS detector, at channels like ColdFusion, Unbox Therapy & Lew Later, Marques Brownlee and Jim's Review Room, and of course the obligatory academic, corporate and government sources like NASA, DARPA, Lockheed Martin, MIT, Harvard, Oxford et al.

All of these sources, plus my professional career, melded together by my own comprehension of the content - that is how I traverse the technological landscape at its frontiers.

Truth In Labeling

Let's be honest about the name - it is a bit misleading. It asserts a capability and entangles itself with a perception of all the things we use the word "intelligence" to describe in the real world. From the village idiot to Stephen Hawking, intelligence is an incredibly broad term covering many different attributes. A clear, meticulously articulated definition is something we do not currently have. Spend a minute or two and try to define it.

Did you include information processing, abstraction, cognition, awareness, feeling, intuition, prediction, anticipation, understanding, problem identification and solving, awareness of mind and self, memory and recall, interpretation, comprehension, or articulation? What about creativity, a sense of right and wrong or moral awareness, or even the ability to suffer or flourish?

It's complex - at once infinitely broad and intensely precise. So much so that it has often become a useless term, used in unhelpful ways, or even weaponized with nefarious intent. The term is much debated and far from settled across the sciences, and has been fearlessly contested by philosophy and the broader social sciences. Even neurological medicine cannot explain or define "what is consciousness?", let alone the detailed workings of the human brain - those discoveries are yet to be made, and the maps towards them are yet to be drawn.

The lesson: healthy skepticism is required about what people mean when they use the word intelligence, especially in combination with other words, like artificial. The perception or expectation it emboldens is unlikely to match the reality.

How intelligent?

For centuries, machines have shown, to varying degrees, attributes or capabilities that generations of us have described as smart or intelligent. Each time, including now, such names and labels may have been appropriate and fitting for the time. The Pianola stormed the world in 1898 - a self-playing piano, no less. You could swap out the rolls of punched paper to play any of your favorite music, no human musician required. Come to think of it, it was the world's first autonomous piano. Was this artificial intelligence? Quite possibly - but certainly not intelligence outside the role for which it was specifically designed.

Our more recent journey from vacuum-tube-based, building- or room-sized computers to modern-day, nanometre-scale silicon chips has accelerated our capabilities and capacities in a matter of decades - a feat that, in Darwinian evolutionary terms, took our form of life millions of years to accomplish. It is no wonder we humans are left disoriented and bewildered about where we are and what we can now do. That is what I wish to articulate, based on all I have seen, read, played with, heard and watched.

Hitting the wall

As a consequence of Moore's Law having played out as it has (Google it), computer processing speed and storage capacity have reached, in Elon Musk speak, Ludicrous Levels. Here is a sense of scale, which in the coming months will no doubt be surpassed. In September 2018, Apple released details of the new processor for the iPhone XS: the A12 Bionic, built at a 7 nanometre scale (7nm). Just reflect on that for a moment.

The 7 nanometre scale means, roughly speaking, that the individual circuit features inside the silicon chip are 7 nanometres apart - that is, 7 billionths of a metre, or 7 millionths of a centimetre, or 7 thousandths of a millimetre. So 7nm is very small - but surely they will just keep getting smaller and smaller and it will all be awesome?

Well, here lies a technical limitation - silicon atoms, the lowest resolution available in the material itself, are spaced only about 0.078nm apart. So, sure, we can move further down in size, possibly six or seven more halvings, but what do we do after that? That next step, my friends, is Quantum Computing - but let's leave that for another time, because once that leap occurs, all bets are off. For now, this represents the "brick wall" within our sights.
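The remaining headroom can be put on the back of an envelope. This is just a rough sketch using the article's own two figures (7nm features, 0.078nm atomic spacing) and the simplifying assumption that each generation halves the feature scale:

```python
import math

PROCESS_NM = 7.0         # A12 Bionic feature scale, per the figures above
ATOM_SPACING_NM = 0.078  # silicon atomic spacing, per the figures above

# If each generation halves the feature scale, count how many halvings
# fit before features would be spaced as closely as the atoms themselves.
shrink_factor = PROCESS_NM / ATOM_SPACING_NM
halvings = math.log2(shrink_factor)

print(f"Shrink factor remaining: {shrink_factor:.0f}x")
print(f"Halvings remaining: {halvings:.1f}")  # roughly 6.5
```

About six and a half halvings, on these assumptions - a finite, countable runway, not an open road.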

Let's level-set again. By way of analogy, the kind of shift in potential we have made is like going from the horsepower of a single-cylinder, two-stroke lawn mower to SpaceX's latest Falcon Heavy rocket (or the NASA Apollo program's Saturn V, which, sadly, from the 1960s is still the most powerful rocket we've built to date). That is a seriously crazy shift in potential and power. In the computing world, this has unleashed literally that: old-fashioned brute force power. Our current ability to simply capture, carry, process, check, process again and deliver information is stupid fast and getting faster. Add our ability to store more and more of it to check against, and you get a sense of what we are talking about. Remember - the Cloud is NOT a thing; it's someone else's server stack somewhere else on planet Earth.

The AI Winter(s)

There is a lot of history here, dating back as far as the early 1900s. You could pick numerous points in history, but a common one is the lasting contribution of Alan Turing. Turing is often cited as among the first to publish ideas around computer science, and he is the source of the famous "Turing Test" for human-level computer intelligence. In his paper published in 1950, he starts with the words, 'I propose to consider the question, "Can machines think?"'.

Like so many technology-related themes, AI has suffered from its own hype-cycle syndrome, with peaks and troughs, feasts and famines. The more recent period, sometimes called the "AI Spring", coincides with rapid increases in compute power and storage, along with the evolution of the statistical models deployed. As more and more data has become accessible (or processable), the more accurate the statistical model, or algorithm, has become at predicting or identifying an answer, an outcome, a pattern, a sequence or the "next thing".

As previously mentioned, our processing capacity and capability has skyrocketed. The core mathematics however, have not. 

The unleashing of brute-force calculating power has not yielded advances in our ability to make truly human-level, generally intelligent machines - we just don't have the math to do that. What has happened is that we have extended the scalability of old mathematical models to bigger and bigger datasets, using more and more multivariate analyses. Like any statistic, however, the output is only as good as the information it is generated from. Similarly, a correlation is not itself causation.
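The "only as good as its input" point is easy to demonstrate with a toy sketch of my own (the data and the `fit_slope` helper here are illustrative, not from any real system): the same ordinary least-squares fit that nails a trend on representative data falls apart when fed a skewed, noisy slice of it.

```python
import random

random.seed(0)

def fit_slope(xs, ys):
    """Ordinary least-squares slope - the same old math, scaled to any dataset."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# True relationship: y = 2x, plus a little measurement noise.
xs = [x / 10 for x in range(100)]
ys = [2 * x + random.gauss(0, 0.5) for x in xs]
print(f"slope on representative data: {fit_slope(xs, ys):.2f}")  # close to 2

# Identical model, but fed only a narrow sliver of x with a biased,
# noisier measurement process - the estimate degrades accordingly.
bad_xs = xs[:10]
bad_ys = [2 * x + random.gauss(1.0, 2.0) for x in bad_xs]
print(f"slope on biased, noisy data: {fit_slope(bad_xs, bad_ys):.2f}")
```

The model never "understood" the relationship in either case; it just summarized whatever data it was handed.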

What about the Chess and Go examples...?

It is true that we have unleashed brute-force compute power against some domains of human endeavor. Examples both well known and less so - Noughts & Crosses, Chess, Jeopardy!, Go, StarCraft II, Dota 2 and others - have yielded results well exceeding expert-level humans.

To me, the most insightful outcome of these case studies comes on the occasions when the developers of these machines observe them performing tasks they cannot explain, or making moves no human has exhibited before. When this happens, from the developers' own point of view, it is as if the machine has shown a level of "independent creativity". In those moments, during or immediately after, those completely natural human feelings and associations can seem incredibly true and real. But on deeper and honest reflection, it is nothing more than the machine having processed its way through more variations than humans had been capable of before. Impressive, no doubt - but it is pure brute-force mathematics at work.
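For the smallest of those games, the brute force is plain to see. A minimal sketch (my own illustration, not any lab's actual code): a minimax search that simply enumerates every reachable Noughts & Crosses position. There is no insight anywhere in it, just exhaustion of the game tree - and under perfect play the game is always a draw.

```python
from functools import lru_cache

# The eight winning lines on a 3x3 board, as index triples.
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    """Return 'X' or 'O' if a line is complete, else None."""
    for a, b, c in LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def score(board, player):
    """+1 if `player` (to move) can force a win, -1 if forced to lose, 0 draw."""
    w = winner(board)
    if w:
        return 1 if w == player else -1
    if "." not in board:
        return 0  # board full, no winner: draw
    opponent = "O" if player == "X" else "X"
    # Try every legal move; the opponent's best reply is our worst case.
    return max(
        -score(board[:i] + player + board[i + 1:], opponent)
        for i, cell in enumerate(board)
        if cell == "."
    )

print(score("." * 9, "X"))  # perfect play from an empty board: 0 (a draw)
```

Nothing in `score` knows what a fork or a threat is; the "perfect play" falls out of checking every variation. Chess and Go resist this exact approach only because their trees are astronomically larger, which is where the statistical models and raw compute come in.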

How are we feeling?

At this point you may be feeling a number of different intuitions. The first might be: this is out of control and we can't stop it. Or: this is all fine - no need to worry, because we can always pull the plug out of the wall. Or maybe: hang on a second - as impressive as the potential of this is, we are still just talking about processing speed and capacity, nothing like the out-in-the-wild crazy world of cognition and universal randomness.

The truth is, learned people from all sides are feeling all of these intuitions and more. Some are worried, warning of an AI future in which there are more ways for it to develop badly than there are paths to a single best outcome. Some argue that the constant march of technology is towards a "dystopian future of the singularity" - the Matrix, anyone? I won't go into all the crazy scenarios people have voiced here, but for entertainment value alone I would recommend Googling "AI and the Dam", "AI and the Paper Clip" and "Pulling Balls from an Urn".

Some argue the opposite view: that we are nowhere close to AGI, and that a debate about safer or more human-centered AI is no more worthy of worry than planning how to feed a colony on Mars.

So, what do we know?

For me, there are some fundamental truths we already know that are worthy of broader understanding and acknowledgement. The first is that the machines are getting much better - better than most people ever expected - at performing the tasks they were designed for. Brute-force calculating ability brings the potential for brute force in all its forms when digitally enabled, and even subtler, ever more nuanced forces we had not thought of ourselves. Consider the way current forms of AI return your internet search results, or choose the products at the top of your Amazon home screen, or produce deep-fake voice, audio, music, text and video that is already passable as real - or "close enough" to real that Joe Average can't tell the difference.

These capabilities, deployed through the mathematics we already have, should be seen as impressive - but no more than that. Every one of the capacities and capabilities we have today, and all those on the technological frontier that governments are willing to tell us about, are based on mathematics at least 50 years old - something even the great minds at the Royal Academy, the National Academy, MIT, NASA and DARPA currently acknowledge. Machine learning is an evolution of old mathematics; deep adversarial neural networks are no different.

Each of these has been a major advance, for sure, but in reality they represent marginal, incremental steps in mathematics, which for the past 30 years has been able to ride the crest of the compute-power wave. That wave is beaching itself soon, with no promise of anything new emerging on the horizon - computers can't get smaller than individual atoms, based on current science. Period.

Trigger warning: Rant ahead 

Here lies the problem at our feet - the elephant we cannot see for looking, and the gap we dare not speak to: physics does not currently hold the knowledge to enable the next leap. By its own admission, physics has not produced an emergent roadmap for decades - don't get me started on the "dead cat on the pavement" that is String Theory. Physics does not currently have an all-encompassing theory that spans both special and general relativity AND quantum mechanics - it is not yet explained or currently explainable. Quantum entanglement or superpositions, anyone? Sorry about the rant, but the natural sciences have traded, for way too long in my opinion, on the long-dated advancements and achievements of creators and thinkers who are now long since dead.

Conclusion

All of this does NOT mean we should not worry about the deployment of very clever machines in increasingly clever ways. What it DOES mean is that we should all be wary of the illusory nature of the term, artificial intelligence, and be willing to challenge those who wish to invoke its name. 

Written by Craig Heraghty, July 2019. This article represents the views of the author as an individual and does not represent the views of PricewaterhouseCoopers, its partners or its staff.

