Machines: Do they err, learn, and innovate?


Historically, machines have been viewed as static entities, designed to operate within set parameters. But with the advent of modern technology, particularly artificial intelligence, this perspective is rapidly changing. Today, machines can err, adapt from their missteps, and even tread the path of innovation. The question isn't just about their capability but the implications of these capabilities for our future.

Do Machines Err?

In the not-so-distant past, pondering whether a machine could err or learn might have seemed peculiar. Such attributes – making mistakes and learning – were typically reserved for humans, not their creations, irrespective of how intricate those machines or algorithms were.

So, what prompts this inquiry now? Could it be the rapid strides in artificial intelligence, its ubiquitous role in our daily routines – ranging from self-driving cars to digital assistants? Or perhaps it's the almost human-like descriptions we give to AI: one AI solves a problem, another forecasts the dollar's decline, or even crafts a painting. This anthropomorphic portrayal compels us to re-evaluate their potential to err, learn, or even innovate.

From Assumed Perfection to the Possibility of Mistakes

Historically, machines were viewed as near-perfect entities. Their tasks were straightforward and mechanical, like welding, managing automated production lines, or operating a toll barrier. If a weld was flawed or a toll barrier malfunctioned, it was often chalked up to human coding errors rather than doubting the machine's assumed perfection. Phrases like "the computer didn’t want to" were commonplace, but we always intuited it was more about the developer's intentions or oversights.

Yet, as the 20th century dawned, perceptions began to shift. Science fiction writers endowed machines and robots with decision-making prowess and autonomy, far ahead of what technology permitted. Some even attributed to them consciousness, suggesting their actions could mimic human behavior. A notable example is Isaac Asimov, who established "universal" laws for robots. While not a direct comparison with the Ten Commandments, it's intriguing to observe similarities in guiding principles for autonomous machines: "Do no harm, Obey authority, and Preserve existence."

In the 1920s, predating Asimov, Karel Capek's play "R.U.R." (Rossum's Universal Robots) painted robots with almost human-like capacities for revolt. These robots, once subjugated, rose in rebellion, overpowering their human masters. Capek even cast Primus and Helena as a modern-day Adam and Eve.

Today, the once assumed infallibility of machines, which brought a certain comfort, has been replaced with scrutiny regarding their propensity to err – a quality eagerly pointed out by many. The resurgence of artificial intelligence, the swift advancements in Machine Learning across its facets, combined with the general public's limited grasp, has undoubtedly sparked apprehension.

Who hasn't chuckled at an AI confusing a muffin for a Chihuahua, or needing hours of training to tell a cat from a dog, when even a pre-verbal toddler can discern the difference? Glaring mistakes or biases, especially in tools like facial recognition or recruitment software, underscore the machine's fallibility. Yet, while certain AI systems can diagnose lung cancer on par with, if not better than, seasoned radiologists (as noted in the Journal of Radiography), we tend to hyper-focus on trivial missteps, overshadowing AI's remarkable and burgeoning contributions.

Could it be because machines are becoming more "human"? After all, "Errare Humanum est" – "To err is human." Or perhaps, in exposing their imperfections and discarding their once-vaunted perfection, we feel machines are less of a threat, their credibility undercut by their blunders? The dynamics are more intricate than they initially appear. In "How Humans Judge Machines," Cesar Hidalgo and his colleagues delve deep into the nuanced human-machine relationship.

For example, we often hold machines to a higher standard than humans when their errors result in physical harm, given that we expect machines to be inherently reliable. Conversely, we're tougher on humans than on machines when it comes to biased or unjust decisions.

The Art of Learning from Errors: Machines on the Path to Mastery

"Errare Humanum est, perseverare diabolicum" translates to "To err is human, to persist (in the mistake) is diabolical." This sentiment resonates with the statement: "I haven’t failed; I've just discovered 10,000 methods that don't succeed."

When a machine errs, is it capable of learning from that misstep and charting a strategy that inches closer to accuracy? Can it, similar to a mouse devising its path to cheese, strategize its responses for optimal outcomes? In the 1980s, pioneering minds like Richard Sutton and Andrew Barto introduced the world to reinforcement learning techniques. Here, the machine or agent refines its decision-making by continually interacting with its environment. Its ultimate objective? Crafting a strategy or ruleset that dictates actions at every juncture, all to maximize long-term rewards. The prowess of programs like AlphaGo and AlphaZero, outclassing even the most skilled human competitors, stands as a testament to this.
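The loop described above, an agent acting, erring, and refining its strategy from rewards, can be sketched with tabular Q-learning, the classic algorithm from Sutton and Barto's framework. The toy environment below (a "mouse" inching along a corridor toward cheese) and all names in it are illustrative inventions, not anything from AlphaGo or the article itself:

```python
import random

# Tabular Q-learning on a tiny 1-D corridor: the agent (a "mouse") starts
# at cell 0 and is rewarded only for reaching the "cheese" in the last cell.
# Environment, constants, and names are invented for illustration.

N_CELLS = 6          # states 0..5, cheese at state 5
ACTIONS = [-1, +1]   # move left or right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

Q = {(s, a): 0.0 for s in range(N_CELLS) for a in ACTIONS}

def step(state, action):
    """Environment dynamics: bounded move, reward 1 only at the goal."""
    nxt = min(max(state + action, 0), N_CELLS - 1)
    reward = 1.0 if nxt == N_CELLS - 1 else 0.0
    return nxt, reward, nxt == N_CELLS - 1

random.seed(0)
for episode in range(200):
    s, done = 0, False
    while not done:
        # Epsilon-greedy: mostly exploit, sometimes explore (i.e., risk a "mistake")
        if random.random() < EPSILON:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2, r, done = step(s, a)
        # The learning step: adjust the estimate using the observed outcome
        best_next = max(Q[(s2, act)] for act in ACTIONS)
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
        s = s2

# The learned "strategy or ruleset": the best action at every state
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_CELLS)}
print(policy)
```

After training, every non-goal state maps to +1 (move toward the cheese): the agent's early wrong turns are exactly what taught it the long-term value of each action.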

Human-guided Learning: Harnessing the Power of Experience

As tasks grow in intricacy, so does the challenge of defining the right approach. In such terrain, collaboration between an expert and the machine becomes invaluable, especially for tasks where a straightforward reward function does not suffice. This mentorship-driven approach is the bedrock of "learning by example." Observation and emulation are the pillars of such techniques, prominently used in robotics, where experts offer hands-on demonstrations to show the robot the nuances of a particular operation.
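In its simplest form, learning by example reduces to imitation: record the expert's (state, action) pairs, then act as the expert did in the most similar situation seen. The scenario below (a robot choosing "raise" or "lower" from a scalar sensor reading, via nearest-neighbour lookup) is an invented minimal sketch, not a real robotics API:

```python
# Minimal "learning from demonstration" sketch: an expert supplies
# (sensor reading, action) examples; the learner imitates by copying the
# action taken in the most similar demonstrated state. All data and names
# here are illustrative assumptions.

expert_demos = [
    (0.1, "raise"), (0.3, "raise"), (0.4, "raise"),
    (0.7, "lower"), (0.8, "lower"), (0.95, "lower"),
]

def imitate(state):
    """Return the expert's action from the closest demonstrated state."""
    nearest = min(expert_demos, key=lambda demo: abs(demo[0] - state))
    return nearest[1]

print(imitate(0.2))   # imitates the nearby low-reading demonstrations
print(imitate(0.9))   # imitates the nearby high-reading demonstrations
```

Real systems replace the lookup with a trained model (behavioral cloning) and richer state, but the principle is the same: no reward function is needed, only the expert's example.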

Mistakes, Learning, Prediction – But What About Innovation?

"Innovation is an idea that appears absurd until we successfully bring it to life," mused Thomas Edison.

The ability to think beyond conventional boundaries and the audacity to chase seemingly outlandish or impractical ideas — aren't these attributes quintessential to machines devoid of inherent meaning? Wouldn't this make unconventional thought processes more attainable for them? A striking experiment by Columbia University researchers underscores this notion. They tasked an AI with understanding the motion of a double pendulum solely through observation. The outcome was both puzzling and thought-provoking. While conventional physics describes that motion with four state variables, the AI posited a model based on 4.7 variables. This novel, albeit perplexing, approach to modeling prompts us to wonder: Could there exist alternate representations of the physical world, ones we've yet to fathom?

Navigating the AI Dilemma: A Threat or a New Beginning?

From machines once deemed infallible to entities capable of erring, learning, and even exhibiting a spark of creativity, the narrative around AI has shifted. There's a polarized debate: Is AI, as Yuval Noah Harari cautioned, an existential threat to human civilization? Or, echoing Yann LeCun's sentiments, is it rather a beacon of rejuvenation and evolution for humanity?

There's no clear-cut answer. However, one thing is certain: halting research, as some suggest, isn't the solution. Instead, our focus should pivot to combining technological advancements with the insights from human sciences, ensuring we create AI systems rooted in ethical principles and engendering trust. The journey might be long and filled with challenges, but it's vital we proceed with care and caution, ensuring we preserve the essence of progress without jeopardizing our core values.


More articles by Mohammed Sijelmassi