Let's talk AI, baby, all that may be
Image: iStock / Getty Images Plus, title inspired by Salt-N-Pepa

‘Prediction is difficult, especially if it’s about the future.’ (Niels Bohr? Mark Twain?)

Making predictions is key to my job and passion, and holding yourself accountable is both important and fun. Y’all may recall my previous posts on Why Facebook won’t lose 80% of its customers in 3 years (which turned out to be fully correct), my 2018 predictions for the future of market research, and my August 2022 prediction that the Metaverse is not 5-10 years away, as folks still claim, but 20+ years out (the jury is still out on that one). But 2023 is all about (generative) AI, so it came as no surprise that you voted for this topic over 3 others in my Friday poll. Moreover, I just finished reading ‘Smart Until It’s Dumb: Why artificial intelligence keeps making epic mistakes (and why the AI bubble will burst)’ by Emmanuel Maggiori, and it has been a while since I reviewed a book. So I’ll start with my predictions and then discuss this wonderful book.

1) Artificial General Intelligence won’t happen by 2029, as Elon Musk and Ray Kurzweil believe. Instead, AI will continue to improve at specific tasks and remain artificial (sometimes outright dumb) at others until at least 2050. Also, I am happy to bet that ‘the singularity’ won’t happen in our lifetimes.

2) The AI threat to jobs is overblown: even in the West, where employment will be most affected by AI, fewer than 10% of employees can be replaced by AI for at least half of their tasks, according to a recent report by Goldman Sachs. For crying out loud, even truck driving hasn’t been replaced by AI!

However, on bias I am much more positive than even generative AI itself. ChatGPT writes:

“the hype around AI often overlooks some of the challenges and limitations of the technology. For example, AI algorithms can be biased if they are trained on data that is not representative of the real world. This can lead to discrimination and injustice, particularly in areas such as hiring, where AI can be used to filter job applicants.”

Human bias in hiring has been well documented, which is why the best companies, such as Amazon, have strong mechanisms to counteract it. Why would we demand that AI hiring tools be fully unbiased, as New York City appears to do with its Local Law 144, going into effect April 15? Instead, the required audit should reveal whether AI hiring tools improve on the biased human hiring previously in place. Moreover, candidates should be informed to what extent AI was used in the employment decision. I am still reeling from being rejected for what my worst critic described as my dream job, and I am fairly confident no human there saw my application.

Just as in the good old ‘Database Models and Managerial Intuition: 50% Model + 50% Manager’, the combination of human and AI will continue to outperform AI-only work. AI is helping us with the many tasks which we humans find tedious or perform too inconsistently. I agree with the important current uses and substantial potential of AI to identify data patterns (describe what happened), predict out of sample (what will happen?), and fine-tune such models fast and at scale. However, it has not unlocked the third level of prescription (what should we do?): while it can give the decision maker a shortlist of options, we humans still feel more comfortable making the final decision ourselves. That is how product recommendations, predictive text, etc. all work nowadays, and I have seen little progress since my 2017 musings on when humans would be comfortable delegating such choices to AI (beyond music).
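To make that model-plus-manager idea concrete, here is a minimal Python sketch (my own illustration with invented numbers, not from the original paper) of why averaging two noisy, largely independent forecasts tends to beat either one alone:

```python
import numpy as np

# Toy demonstration of "50% Model + 50% Manager" (all numbers invented).
rng = np.random.default_rng(42)

true_demand = rng.normal(100, 15, size=200)                   # what actually happened
model_forecast = true_demand + rng.normal(0, 10, size=200)    # model: unbiased but noisy
manager_forecast = true_demand + rng.normal(5, 10, size=200)  # manager: slightly biased, noisy

def rmse(forecast, actual):
    """Root mean squared error of a forecast."""
    return np.sqrt(np.mean((forecast - actual) ** 2))

# The Blattberg & Hoch-style blend: equal weight on model and manager.
blend = 0.5 * model_forecast + 0.5 * manager_forecast

print(f"model only:   RMSE {rmse(model_forecast, true_demand):.1f}")
print(f"manager only: RMSE {rmse(manager_forecast, true_demand):.1f}")
print(f"50/50 blend:  RMSE {rmse(blend, true_demand):.1f}")
# Because the model's and the manager's errors are largely independent,
# averaging cancels part of each, and the blend scores the lowest RMSE.
```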

So there you have it: Artificial General Intelligence, just like the Metaverse, won’t materialize for at least 20 years; your job will incorporate specific AI tasks but not disappear; and AI will be increasingly helpful in specific tasks where human biases and fatigue are known to mess up decisions, such as medical diagnosis or hiring. Let’s now hear from Emmanuel Maggiori, PhD, a freelance software engineer specializing in machine learning and scientific computing.

Why did I like reading ‘Smart Until It’s Dumb: Why artificial intelligence keeps making epic mistakes (and why the AI bubble will burst)’? First, the ROI is high: Emmanuel doesn’t beat around the bush or expand a 1-page blog post into a 250-page book, as too many books I have reviewed do. Second, he speaks from experience: I first got intrigued by his blog post ‘I’ve been employed in tech for years, but I’ve almost never worked’ (https://emaggiori.com/employed-in-tech-for-years-but-almost-never-worked/). Third, he has a scientific mind, breaking down the assumptions behind bold claims. Finally, he explains in layman’s terms machine learning, supervised and unsupervised ML, deep learning and its Convolutional Neural Networks (CNNs), and AI.

Emmanuel tells two stories:

1) What current AI is, what it can and can’t do, and what its potential dangers are

2) AI hype: how unrealistic expectations proliferate in the AI world

We are currently in the third AI boom, fueled by machine learning. The 1960s saw the first, with the otherwise awesome Herbert Simon predicting that ‘machines will be capable, within 20 years, of doing any work a man can do’ [cue the Lord of the Rings ‘But I am no man’ reference]. After the first AI winter of the 1970s, expert systems brought the next boom in the 1980s: experts wrote down thousands of rules for a task, and then AI would imitate their work. But this was tedious and ineffective. Finally, in the 2010s, machine learning allowed computers to learn automatically by scanning large amounts of data. This AI has proven effective in many areas, such as product recommendations, image recognition and sentiment analysis, which I also use in my research and commercial implementations every day. The limitation: the template we give computers to learn from has to be restrictive to produce good results. If we simply instruct the machine to ‘learn anything it wants’, it picks up on chance correlations and fails epically, as sketched below. Successful “machine learning is narrow – each task is tackled with its own model, assumptions and data” (p. 35). Emmanuel gives the example of a daily visit to your favorite online store, where you (1) are shown a personalized list of recommended products on the homepage, (2) search for a product and are shown a list of candidates ranked by relevance, and (3) read user reviews, some of which have been translated from another language: each of these is powered by its own narrow model.
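Here is a minimal sketch (my own, on purely synthetic data) of the ‘learn anything it wants’ failure: give a model many noise features and few observations, and it will latch onto chance correlations that look great in sample and evaporate out of sample:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# 40 pure-noise features, 50 observations, and a target unrelated to any of them.
rng = np.random.default_rng(0)
n_train, n_test, n_features = 50, 1000, 40

X_train = rng.normal(size=(n_train, n_features))
y_train = rng.normal(size=n_train)
X_test = rng.normal(size=(n_test, n_features))
y_test = rng.normal(size=n_test)

model = LinearRegression().fit(X_train, y_train)
print(f"in-sample R^2:     {model.score(X_train, y_train):.2f}")  # high (~0.8): pure chance
print(f"out-of-sample R^2: {model.score(X_test, y_test):.2f}")    # near zero or negative
```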

Even deep learning, which outsmarted human players at games from chess to Go, does not come close to the human brain, as is often claimed. While our brain also has many layers, the feedback loops between the thalamus and the neocortex are not well understood. Moreover, deep learning is a black box lacking explainability, and it makes silly mistakes when images are slightly altered or when objects are put in unfamiliar situations, such as a cow on the beach instead of on the grass it is usually associated with. AI especially sucks when a task requires a mental model of the real world, such as assessing the danger in a traffic situation. Emmanuel gives the example of an umbrella flying into your path: should you swerve or let it hit your car? You and I know an umbrella is light and won’t do much damage, but how would the machine know? Humans learn fast from limited data and are able to focus despite noise: “When a parent points out a butterfly to a toddler and says “That’s a butterfly”, the toddler may learn the word right away” (p. 73), while a computer would have to browse many different butterfly images to learn which colorful bug (and not the trees, clouds or shadows…) it is supposed to label as such.
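The ‘slightly altered images’ failure has a surprisingly simple mechanism. The toy sketch below (my own, using a made-up linear classifier rather than a real deep network) shows the core idea: nudge every pixel a tiny amount against the model’s weights, and the prediction flips even though no single pixel changes noticeably:

```python
import numpy as np

# A made-up linear "image classifier": 1000 weights, one input it scores.
rng = np.random.default_rng(1)
w = rng.normal(size=1000)   # classifier weights
x = rng.normal(size=1000)   # input "pixels"

score = w @ x               # sign of the score = predicted class

# Smallest uniform per-pixel nudge that pushes the score past zero,
# applied against the current decision (an FGSM-style step for a linear model).
eps = (abs(score) + 1.0) / np.sum(np.abs(w))
x_adv = x - eps * np.sign(w) * np.sign(score)

print(f"original score:   {score:+.2f}")
print(f"perturbed score:  {w @ x_adv:+.2f}")                 # opposite sign: class flips
print(f"max pixel change: {np.max(np.abs(x_adv - x)):.3f}")  # tiny vs. pixel scale of ~1
```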

As to the AI hype, companies have now spent about $200B on self-driving cars with limited success, and daily headlines on how AI will take your job or even your life fill the news. Yet as of this writing, there are no “fully autonomous self-driving cars on the open road”. The law of accelerating returns does not guarantee artificial general intelligence (AGI), because AGI requires a lot of innovation, and AI researchers have a poor record of “predicting the rate of advances in their own field or the shapes that such advances would take” (Nick Bostrom, Superintelligence).

The five stages of AI product failure were especially enlightening:

1) Put the cart before the horse: force AI as the solution instead of first assessing whether it fits the problem

2) Errors go unnoticed: while a model bug gets magnified and therefore spotted in traditional software development, an incorrect AI model tends to show great apparent performance in sample. Emmanuel’s example involves data leakage: the outcome to be predicted (a client missing an appointment) leaks in through a predictor variable from a different source that was only recorded AFTER the missed appointment was flagged (see the sketch after this list)

3) Explosive growth: the business hires too many AI experts after an initial prototype succeeds

4) Lies: unable to deliver on the inflated expectations, the AI team decides to hide the issue

5) Silent burst: sooner or later, the organization notices the low AI bang for the buck and reduces funding until the project peters out. The problem is that such retreats are never publicly announced, which furthers the AI hype built on publicly announced success cases.
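To see how stage 2’s leakage produces such convincing numbers, here is a hedged sketch (invented data and column names, not Emmanuel’s actual case): a flag that is only set after the client misses the appointment is effectively a copy of the label, so apparent accuracy is stellar and deployed accuracy is a coin flip:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Invented appointment data. 'flagged_after' is recorded in another system
# AFTER a no-show happens, so it leaks the outcome into the features.
rng = np.random.default_rng(7)
n = 500
missed = rng.integers(0, 2, size=n)   # label: did the client miss the appointment?
age = rng.normal(size=n)              # a legitimate predictor (pure noise here)
flagged_after = missed.copy()         # the leak: derived from the outcome itself

X = np.column_stack([age, flagged_after])
model = LogisticRegression().fit(X, missed)
print(f"apparent accuracy: {model.score(X, missed):.2f}")  # ~1.00, entirely from leakage

# At prediction time the flag cannot exist yet, so deployment effectively
# feeds the model zeros there, and performance collapses to chance.
X_deploy = np.column_stack([age, np.zeros(n)])
print(f"deployed accuracy: {model.score(X_deploy, missed):.2f}")  # ~0.50
```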

Recommendations to avoid this fate are good old ‘working backwards’ and system design principles:

1) Start with understanding and formulating the client’s problem

2) After deciding AI is a good fit, build it and validate the AI’s results

3) Apply lean startup principles, which I have shown to help with Big Data bias: collect user feedback on a minimum lovable product

4) Foster the right work culture of making excellent mistakes and sharing the learning

5) Talk openly about failure, including in white papers and at conferences

Finally, Emmanuel goes fully philosophical on AI consciousness, which is a hard problem as we don’t even know where to draw that line in nature. The computational theory of mind (your brain is a computer!) would ascribe consciousness to a computer that is similar to the human brain, but then anything that runs a computer program, such as your thermostat, would be conscious. Moreover, many people, including myself, are bothered by the absence of free will in such a world view. Computers follow instructions to the letter and are thus predictable, while true randomness is an important component of the universe in quantum theory. Furthermore, our brains likely can’t simply be copied without losing our essence. Some AI researchers believe that human-level, let alone super-human, AI is impossible (Stuart Russell, Human Compatible). I am sympathetic to their views and hence believe the ‘singularity’ will never happen – or at least not in our own or our children’s lifetimes.

In sum, you have nothing to fear from AI but fear itself and the unrealistic expectations from the current hype. Yes, your life will see some impact, but you likely won’t lose your job or your life…at least not to AI.

Lynd Bacon PhD, MBA

Quantitative Cognitive Scientist, Data Science and Analytics Expert. @[email protected] on Mastodon.

Cool post. Maybe not surprisingly, questions about the reproducibility of AI developments have been cropping up for some of the same reasons as in other areas of science, like the social and medical sciences.
