The Brief on AI #2
On how not to "fall asleep behind the wheel" with AI
Here are 3 recent developments, 1 quote, my take on it all and 1 useful AI tip for you to apply or learn from.
3 recent developments
Quote of this week
“Bad AI” may perform better than “Good AI” within a human/AI collaboration. Fabrizio Dell’Acqua
What to make of all this?
The world is grappling with a big question: how do we collaborate with AI so that it makes us better, without relying on it so much that we make (fatal) mistakes? When you read EY’s press release, you will see that a large portion of their investment is geared to answering exactly that question for their employees and clients. We want AI to augment us, empower us, and make us more productive and smarter. In many cases it does just that, but in some it doesn’t. Ethan Mollick, one of the social scientists who worked on the aforementioned study, points out how difficult it is to tell the two apart.
“On some tasks AI is immensely powerful, and on others it fails completely or subtly. And, unless you use AI a lot, you won’t know which is which.” Ethan Mollick
Getting it wrong can have detrimental effects. Researcher Fabrizio Dell’Acqua calls this "falling asleep behind the wheel". To reduce the risk of this happening, it is important to think about how you collaborate with AI. The researchers shared two approaches and gave them fancy names. One approach is becoming a Centaur: you give the AI a task, and when that task is completed you take over. There is a clear division between your work and the work the AI is doing. On the other end we have Cyborgs: here your work and the AI’s work go back and forth until the task is completed, making it less clear who is doing what.
The case of self-driving cars shows how dangerous it can be to get this division of roles wrong: the consequences can be fatal, as the graph below shows. The Washington Post reported 17 fatalities and 736 crashes as "the shocking toll of Tesla’s Autopilot".
Giving the car complete control is not safe. Being on standby to take over when necessary might sound like a good solution, but it isn’t realistic either. Missy Cummings - Professor and Director of the Mason Autonomy and Robotics Center (MARC) - points out in an interview with Gary Marcus that most people will use their time as co-pilots to check their phones, take a nap or pick up something from the back seat. If something goes wrong, they will react too late to be effective.
I don’t have the solution for the catch-22 in this particular case of self-driving vehicles, but I do strongly advise making conscious decisions about how you collaborate with AI in the years to come, as it will play an increasingly important role in our professional and private lives.
Useful AI tip
This week's tip equips you to determine whether a text was written by a human or generated with AI. Try https://gptzero.me/ and https://originality.ai/
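If you would rather run this check from your own tools instead of the websites, GPTZero also offers an API. Below is a minimal Python sketch of what such a call could look like; the endpoint URL, header name and request field are assumptions based on GPTZero's public documentation, so verify them (and the exact response format) against the current docs and get your own API key before relying on it.

```python
# Minimal sketch: ask GPTZero whether a piece of text looks AI-generated.
# Assumptions: the endpoint, auth header and request field below follow
# GPTZero's public docs at the time of writing; double-check them before use.
import requests

def check_text(text: str, api_key: str) -> dict:
    """Send text to GPTZero and return its raw prediction as a dict."""
    response = requests.post(
        "https://api.gptzero.me/v2/predict/text",   # assumed endpoint
        headers={"x-api-key": api_key},             # assumed auth header
        json={"document": text},                    # assumed request field
        timeout=30,
    )
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    sample = "Paste the text you want to check here."
    result = check_text(sample, api_key="YOUR_API_KEY")
    # The response schema may change over time; print it and look for the
    # document-level probability that the text was AI-generated.
    print(result)
```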