Not not going to happen
Alistair Croll
Writing surprisingly useful books, running unexpectedly interesting events, and building things humans need for the future.
There’s plenty of debate over whether AI is real. Part of that is on us: if you showed a computer scientist from 50 years ago a modern algorithm’s ability to drive a car, identify images, compose text, or diagnose diseases, they’d immediately conclude that AI had arrived. These systems do incredible things, unthinkable things, things at a scale that simply wasn’t conceivable in the mainframe era.
But AI is brittle. It makes mistakes. At best, it complements human intelligence. Here's an example from the inimitable Janelle Shane of what happens when you train an AI on recipe titles and then ask it to make up some of its own.
I don't know about you, but when an algorithm tries to sell me whole chicken cookies topped with Beasy Mist, I'm not too worried about my job. Ultimately, AI isn't artificial intelligence, it's different intelligence. So do we need to act—or is AI little more than cognitive augmentation?
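Shane's recipe titles came from a neural network trained on a large corpus of real recipes. As a rough illustration of the underlying idea (a model that predicts the next character from the characters before it), here's a minimal character-level Markov chain sketch in plain Python. This is a far simpler technique than the one Shane used, and the toy corpus and function names are my own:

```python
import random
from collections import defaultdict

def build_model(titles, order=3):
    """Map each `order`-character context to the characters that follow it."""
    model = defaultdict(list)
    for title in titles:
        padded = "^" * order + title + "$"  # ^ pads the start, $ marks the end
        for i in range(len(padded) - order):
            model[padded[i:i + order]].append(padded[i + order])
    return model

def generate(model, order=3, max_len=40):
    """Sample a new title one character at a time."""
    context = "^" * order
    out = []
    while len(out) < max_len:
        nxt = random.choice(model[context])
        if nxt == "$":  # the model decided the title is finished
            break
        out.append(nxt)
        context = context[1:] + nxt
    return "".join(out)

# Tiny illustrative corpus; real training sets have tens of thousands of titles.
titles = ["Chocolate Chip Cookies", "Chicken Pot Pie",
          "Cherry Cream Pie", "Chicken Chili"]
model = build_model(titles)
print(generate(model))
```

Because the model only ever knows the last three characters, it happily splices titles together mid-word, which is exactly the kind of plausible-but-wrong output ("whole chicken cookies") that makes these generators both funny and a useful reminder of how shallow the "understanding" is.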
I think there are two reasons to act now, and decisively, to make sure machine learning is deployed wisely and aimed at the right problems.
AI is the next step up the stack
The first is that AI is a logical progression of computer trends that have been happening for decades:
- The rise of the modern, RAID-like data center (with cross-connects and redundancy, distributed across several physical locations) gave us significant, centralized computing and storage power.
- Cloud computing made these data centers elastic, so you didn’t have to buy what you didn’t need. You could "own the base and rent the spike," which made new, experimental computing workloads economically appealing.
- This, in turn, meant sharing infrastructure, and led to “bursty” workloads. With this model, anyone could analyze vast reams of information quickly. And of course, the modern, connected, mobile Internet was only too happy to provide that torrent of data for analysis.
- Once organizations had that much data stored in data lakes and object stores, they needed algorithms to crunch through it. Initially that might have been statistics, but it quickly morphed into smarter code for cleaning, labelling, and finding insights.
- The best of that code improves its own models as it processes more data. We call this machine learning.
And that’s the state of modern AI and data science. AI is inevitable because it's the next step up the stack.
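That last step of the stack, code whose internal parameters improve as it processes data, can be shown in a few lines. Here's a minimal sketch that fits a line by gradient descent in plain Python; the data and names are illustrative, not from the post:

```python
# Fit y = w*x + b by gradient descent: the model's error shrinks
# with each pass over the data -- "learning" in the ML sense.
def fit(xs, ys, lr=0.01, epochs=5000):
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        # Gradients of mean squared error with respect to w and b.
        grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
        grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Synthetic data drawn from y = 3x + 1.
xs = [0, 1, 2, 3, 4]
ys = [1, 4, 7, 10, 13]
w, b = fit(xs, ys)
print(round(w, 2), round(b, 2))  # prints: 3.0 1.0
```

Nobody told the program the slope was 3; it recovered that from the data. Scale the same loop up to millions of parameters and billions of examples, and you have the modern deep-learning systems that the cloud-and-data-lake stack made economically possible.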
Not not going to happen
The second is what I call the “not not going to happen” argument. I heard a great version of this during a panel I moderated at the 2019 APEX symposium in Ottawa. I asked Deloitte’s Shelby Austin whether she felt organizations needed to act on AI immediately, or whether they could wait. “We don’t know which of the companies deploying AI today will win,” she replied, “but we know the winner will use AI.” (I'm paraphrasing a bit here.)
This is a good Occam’s Razor for AI adoption. We don’t know for sure which AI strategy will win out, or exactly how it will be deployed successfully. But we do know—given AI's tremendous power to make sense of the torrent of data modern society generates—that AI is not not going to happen.
The automation that machine learning and data science can bring is often a nonpartisan issue: it can deliver tailored, personalized services, but it can also be used to cut costs and run balanced budgets. At the same time, everyone’s concerned about the invasions of privacy AI might bring, whether that’s analyzing public data to infer private facts, reinforcing prejudices in policing and justice, or myriad other concerns.
Diving into AI at FWD50
Next week, I'm chairing the third edition of FWD50, which has quickly become a global event on the future of digital government. We've welcomed over 30 countries to Ottawa since we launched the conference, and AI has figured prominently in the lineup since that first year. I've written up some of the (many) AI-related sessions we're running in a longer version of this on the conference website.
If you’re interested in how AI and data science change government—and you really, really should be—then FWD50 is the place to be this November. You can grab a ticket here, and become part of this critical conversation.
Comments

Podcaster | Protopian | Technologist (5 years ago): Mmmm. “Complete meat circle.” Just like mom used to replicate.
Consultant || Chair, Open Data Charter board || Smart Cities Council GTL in Data Ethics and Governance (5 years ago): I think we need to be very careful about Elon Musk :P I agree that #AI is inevitable: one of my bugbears is how loosely the term is used at the moment. Calling it weak / narrow AI would be far more accurate, and would help alleviate some of the confusion and snake-oil sales happening. And, indeed, sometimes things just shouldn’t be called AI at all!
Chief Executive Officer at Giller Investments (5 years ago): Flexible optimization is not intelligence. Many of the things currently labeled "AI" do not involve creativity and, for me, creativity is the true measure of intelligence. This is the message I took from Garry Kasparov's excellent book "Deep Thinking," about the rise of computer chess. Computers are very good at chess, better than humans in fact, but what we've learned from the process of developing chess computers is that you don't actually need "general intelligence" to be good at chess: what you need is speed and memory, things that computers do have in abundance. How many of the tasks that we currently assume require intelligence will we learn do not? Perhaps we don't need to be "intelligent" to drive a car? Will a successful autopilot have an ego? If it does, will it respond like Douglas Adams's elevators and sulk at the bottom of the parking lot when it realizes its entire existence is to drive some human around? Will it make a "treacherous turn," quite literally, and eliminate its tormentors? I think we need to ask ourselves quite carefully whether we really understand what intelligence actually is. My impression is that the answer to that question is definitely no, we don't.