TECHNOLOGISM AND THE THOUGHTFUL HUMANISM – ARTIFICIAL INTELLIGENCE VS HUMAN INTELLIGENCE – A PERSPECTIVE – sudhanshu

GOOD MORNING !! SOME MORNING THOUGHTS ……

TECHNOLOGISM AND THE THOUGHTFUL HUMANISM – ARTIFICIAL INTELLIGENCE VS HUMAN INTELLIGENCE – A PERSPECTIVE?

Yesterday I read a 1953 essay in which the philosopher Isaiah Berlin popularized a Greek fable originally quoted in Erasmus’s Adagia: ‘The fox knows many things, the hedgehog one big thing.’ Like Newton in Blake’s colour print, the AI community that emerged after McCarthy’s Summer Project was a hedgehog. It became obsessed with one big idea – intelligence through logic. The cyberneticists it displaced were foxes; the breadth of ideas covered by my favourite scientist and technologist, Norbert Wiener, and his collaborators included logic, probability, feedback, neural models and information theory. But by the 1950s those foxes were caged. Their ability to implement their ideas was limited by the capability of the tools at their disposal. In the meantime, the AI hedgehog went to work, improving and extending the capabilities of the digital computer.

Yesterday I was reflecting that there’s a phenomenon known as scientific exceptionalism. Scientists like to think of themselves and their pursuit as special, but science is not exceptional in terms of exploration and execution. The early cyberneticists were the explorers, the foxes that knew many things, but the full power of the digital computer was realized by refinement and execution, by the hedgehog that knew one big thing. Each role is equally important. The refinements of the machine would in time deliver the tools that made possible the next wave of exploration. The hedgehog unleashes the fox. But the foxes are hunting for the next frontier so that the hedgehog can forge forward.

With rare exceptions, such as IBM’s Deep Blue computer defeating world chess champion Garry Kasparov in 1997, few of the original aims of AI were achieved. The classical AI community ignored the influence of Laplace’s gremlin. The logical systems it employed worked only in a purely deterministic universe in which all is known and so what follows is predictable. The failure of the AI community was driven by its inability to incorporate the variation, uncertainty and doubt that permeate the real world.

All this was also known to Wiener, who was cruelly cut off from the community he founded. His interest in probability was driven by an understanding of Laplace’s gremlin. He was inspired particularly by the American physicist J. Willard Gibbs, who built on Bernoulli’s and Laplace’s ideas to introduce the statistical ensemble that Wiener used in his work on stochastic processes. Wiener came to see the causal explanations that flow from deterministic perspectives as matters of gradation rather than absolutes.
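To make the ensemble idea concrete, here is a minimal sketch in Python (my illustration, not Wiener’s or Gibbs’s; the logistic-map dynamics, the noise level and the ensemble size are assumptions chosen purely for the example). A single deterministic run answers with false confidence; an ensemble of runs, started from slightly uncertain initial conditions, instead reports the spread of possible futures:

```python
# A toy contrast between Laplace's single deterministic prediction
# and a Gibbs-style statistical ensemble (illustrative assumptions only).
import random
import statistics

def step(x):
    """One tick of a toy chaotic system: the logistic map with r = 3.9."""
    return 3.9 * x * (1.0 - x)

def trajectory(x0, ticks=30):
    """Run the system forward from initial state x0; return the final state."""
    for _ in range(ticks):
        x0 = step(x0)
    return x0

# Laplace's demon: perfect knowledge of the initial state, one confident answer.
print("deterministic prediction:", trajectory(0.5))

# Gibbs's ensemble: admit a tiny uncertainty in the initial state, run many
# copies of the system, then reason about the resulting distribution.
random.seed(0)
ensemble = [trajectory(0.5 + random.gauss(0.0, 1e-6)) for _ in range(1000)]
print("ensemble mean:  ", statistics.mean(ensemble))
print("ensemble spread:", statistics.pstdev(ensemble))
```

Because the toy dynamics are chaotic, the ensemble’s spread is large even though every individual run is perfectly deterministic – exactly the gap between the predictions implied by Laplace’s demon and what can actually be computed.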

This notion of a statistical ensemble extends the ideas behind Bayes’s probability distributions, giving us the mathematical tools for handling Laplace’s gremlin. Wiener also understood the nature of our locked-in intelligence:

“Because I had some contact with the complicated mechanism of the nervous system, I knew that the world about us is accessible only through a nervous system, and that our information concerning it is confined to what limited information the nervous system can transmit.”

When he was only ten years old, Wiener had written his first essay, on what he called ‘The Theory of Ignorance’, reflecting even at that age:

“I was struck by the impossibility of originating a perfectly tight theory with the aid of so loose a mechanism as the human mind. And when I studied with Bertrand Russell, I could not bring myself to believe in the existence of a closed set of postulates for all logic, leaving no room for any arbitrariness in the system defined by them.”

The arbitrariness of the world emerges from the gremlin of uncertainty. Even modern computers, with their vastly superior information bandwidth, cannot store all the data or perform all the computations necessary to make the predictions implied by Laplace’s demon. The loose mechanism of the human mind has evolved to accommodate this arbitrariness when it deals with the world around it.

In a further irony, even George Boole, the father of mathematical logic, was aware of the limitations of logic as a model for thought. The full title of his 1854 book is An Investigation of the Laws of Thought on Which Are Founded the Mathematical Theories of Logic and Probabilities.

Just as the cursory reader of Laplace stops at his description of the mechanistic universe, so a cursory understanding of Boole stops at logic. George Boole was just as aware of Laplace’s gremlin as Wiener was. The book that triggered a revolution in logic which gave us Holmes, Russell, Wittgenstein and the digital computer was published a century before the Dartmouth Meeting, and even at that time Boole understood that the nature of intelligence was based as much on what we don’t know as what we know.
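To see what the logic-only reading of Boole misses, here is a small sketch (my illustration, not Boole’s notation; the independence assumption is mine). The rules of probability contain the truth tables of logic as their certain, all-or-nothing limit, while everything in between quantifies precisely the ignorance Boole was also writing about:

```python
# Probability generalizes Boolean logic: at p = 0 or p = 1 the probability
# calculus reproduces the truth tables; in between, it measures ignorance.

def p_not(pa):
    """NOT via the complement rule."""
    return 1.0 - pa

def p_and(pa, pb):
    """AND via the product rule (events assumed independent)."""
    return pa * pb

def p_or(pa, pb):
    """OR via inclusion-exclusion."""
    return pa + pb - pa * pb

# At the extremes, the rules recover Boole's logic exactly...
for a in (0.0, 1.0):
    for b in (0.0, 1.0):
        print(f"A={a} B={b}: AND={p_and(a, b)} OR={p_or(a, b)} NOT A={p_not(a)}")

# ...while in between they quantify what we don't know, which logic cannot.
print("p(A)=0.7, p(B)=0.4 -> AND:", p_and(0.7, 0.4), "OR:", p_or(0.7, 0.4))
```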

Those who became so obsessed with logic wore the model-blinkers so tightly they didn’t even notice that one of their foundational texts is just as much about probability as it is about logic. Ignorance in the real world means we often need to retain multiple strategies, resolving the challenge of uncertainty by considering different possible paths into the future. However, in the end we can only make one decision, and that is often driven by the need to react; afterthought is useful for improving processes and finding gaps in our understanding, but afterthought can’t save the lives of people who are dying in an ongoing pandemic. In the fullness of time, we may realize that our reaction was the wrong one, and so our short-term response can be inconsistent with our long-term objectives.

Somehow this inconsistency needs to be reconciled. In Newspeak, Orwell developed the notion of doublethink to describe the inconsistency of Ingsoc’s policies: the state was aware of the inconsistent ideas it imposed and would knowingly impose them. In our intelligence, these inconsistencies emerge from the timescales over which our actions are required, so in homage to Orwell we might refer to the inconsistency that arises from ignorance as doublereact. While its origins are more innocent, its effects can still be pernicious. To resolve a doublereact, an act of doublethink may be required. Like the proverbial fox that couldn’t reach the grapes and decided they must be sour, our slow-reacting self sometimes needs to retrospectively reposition its objectives to accommodate new circumstances.

The nature of our ignorance, our vulnerability to our fast-reacting self, the need to resolve the inconsistencies that emerge when decisions must be made – all these phenomena provide routes for human intelligence to be manipulated. The corporations that have delivered surveillance capitalism are controlled by individuals, and the track record of humans who have been given such unusual powers is not a good one. The individuals who control these companies are also human, and so they are prone to acts of doublethink that allow them to absolve themselves of the harms their companies do.

Konrad Lorenz’s Behind the Mirror asks the same question that struck the Pyrrhonian sceptics: can we trust our senses? Lorenz’s answer comes from evolutionary biology: the fact that we have persisted over many generations demonstrates that we can trust our senses, because they have informed us well enough about the world around us for our species to survive. But when the machine intrudes into this space, it understands our base desires through our personal data and uses that understanding to undermine our fast-reacting self. This means Lorenz’s evolutionary argument for trust in our senses is under threat. We did not evolve alongside System Zero.

The algorithm operates on data. It affects us in the digital world, and those effects bleed into the real world. Today, System Zero exploits our data so extensively that our personal data has become interlinked with our personal security.

Scientific frameworks provide a coherent landscape in which we can rely on the way that everything fits together. This is Kuhn’s notion of the paradigm. According to Kuhn, normal science proceeds within a particular paradigm and can be reduced to puzzle-solving. Paradigm shifts in science are rare, but great scientists can trigger them. Human endeavour isn’t as fortunate as science in this regard. THE WORLD IS SO COMPLEX THAT NOT EVERY CIRCUMSTANCE CAN BE MODELLED MATHEMATICALLY AND NOT EVERY QUESTION CAN BE ANSWERED. In many circumstances, over-adherence to a paradigm can do great damage. We need to rely on those around us, particularly those closest to us. The social duties this reliance imposes on us, the need to balance our concern for family with our concern for the wider community, can lead to great sacrifices and great betrayals.

Like a ship at sea, our intelligence relies on externalities to situate itself. When a ship navigates by the stars or by a known shoreline, the mariner can be confident about their location; the process of navigation can be reduced to a series of puzzles in the manner of Kuhn’s normal science. But when clouds shield the night sky and fog hides the shore, we need to orientate ourselves through the other vessels around us. Our feel for our fellow voyagers is underpinned by the extent to which we trust them, and our shared jeopardy underpins that trust. The machine is neither vulnerable in the same way as the atomic human nor infallible in the same way as mathematics. Yet within our seascape the descendants of the Perceptron have formed new cognitive monsters that fill the horizons. Let the machine work in proper and just unison with human intelligence, with the right blend and perspective.


LET US WORK THROUGH AI AND NOT LET AI WORK THROUGH US.


sudhanshu
