More AI Progress…and (as always) Hype

This is a continuation of my last post, exploring developments in AI and machine learning (ML). What can it do?

Create Fake Photos

ML has moved beyond taking an existing photo and changing the season, weather, or time of day. In my last post I talked about how GANs (generative adversarial networks) can create convincing images of people who don’t exist, and it’s not just people…it’s cars, cats, and dogs, and soon almost anything.
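To make the idea concrete, here’s a minimal sketch of the GAN recipe: two networks, one generating images from random noise and one trying to tell real from fake, trained against each other. This is purely illustrative (it assumes PyTorch, uses random tensors as stand-ins for real photos, and uses tiny fully connected networks); the systems producing photorealistic faces use convolutional architectures, huge datasets, and many training tricks.

```python
# Minimal sketch of a GAN training loop (illustrative only; the "real"
# images here are random tensors standing in for an actual photo dataset).
import torch
import torch.nn as nn

latent_dim, img_dim = 64, 28 * 28  # hypothetical sizes

generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, img_dim), nn.Tanh(),          # fake image in [-1, 1]
)
discriminator = nn.Sequential(
    nn.Linear(img_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),             # probability "this is real"
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
loss_fn = nn.BCELoss()

for step in range(100):
    real = torch.rand(32, img_dim) * 2 - 1        # stand-in for real photos
    noise = torch.randn(32, latent_dim)
    fake = generator(noise)

    # Discriminator: learn to tell real images from generated ones.
    d_loss = loss_fn(discriminator(real), torch.ones(32, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(32, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator: learn to fool the discriminator.
    g_loss = loss_fn(discriminator(fake), torch.ones(32, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```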

The practical uses for this are huge in entertainment and advertising, but it also raises a concern: how will we discern true photos from fakes? Good news: people are working on that, both by using ML to spot fakes and by using blockchain to prove authenticity. Bad news: it’s not yet clear how successful these approaches will be.

Make Lifelike Avatars

Just as with photos, the same is happening to videos, with so-called deepfakes (named after the deep learning algorithms used to create them). We’re moving into an era where fake video is no longer restricted to altering existing footage (like turning a horse into a zebra); it can be created from scratch. Facebook has been working on the ability to generate avatars that look – and move – just like you. This means no need for source video…which means deepfakes are no longer limited to celebrities with lots of available footage. Soon we’ll be able to fabricate videos of anyone, doing anything. For now, these accurate avatars require a scan of your facial features and some facial measurements while you make various expressions. How long until we can generate full-action videos of someone from a few photos on social media?

Recognize Faces

Especially when you consider the pace at which facial recognition is advancing. The ability to recognize people just from a photo or video of their faces is rapidly improving, and high-quality Wi-Fi cameras are now so cheap that they’re being installed in public places everywhere. Did you know it’s being deployed at major U.S. airports? Facial recognition of passengers has been in place at some airports since 2016 (Atlanta was first). Now there’s a federal mandate to use this technology for 100% of international passengers at the top 20 U.S. airports by 2021. It’s already in use at six.

Does this concern you? A Georgetown study concluded that as of mid-2016, half of all Americans’ faces were already in a facial recognition database. Even more concerning? There are currently no laws in the U.S. that govern the use of facial recognition.

But what’s perhaps scarier is how far this has progressed in China. China has installed cameras literally everywhere and has used facial recognition to do all sorts of things, such as fine (and publicly humiliate) jaywalkers, limit toilet paper use, and catch criminals at concerts. Of course, there are benefits – if you forget your wallet, you can use your face to board the train or pay your restaurant bill.

Recognize Emotions

But it doesn’t stop there. Facial recognition is rapidly becoming emotion recognition. Advertisers would like nothing more than to know how you feel about their advertisements. Which ones work? Which ones are turn-offs? For whom?

In China, emotion recognition is being used to determine whether students are attentive or slacking off. While there are still questions about how well this technology works, in a few short years it will (at least statistically speaking) do a better job than we do ourselves.

Make Reservations

Chatbots are all the rage right now, thanks to lots of media attention and products like Google Duplex. If you haven’t heard of it before, this demo is a must-see: an AI chatbot calls a restaurant for you and makes your reservation. Google has now made this capability available in 43 states…if you have an Android phone.

This makes everyone think that chatbots are here and ready to replace humans. But they’re not; question-and-answer systems are incredibly difficult to build. How difficult? Amazon has 10,000 employees working on Alexa and the Amazon Echo. Let that sink in: ten thousand.

Chatbots require a lot of manual work (and often use a LOT less AI than people realize…most successful ones make extensive use of human-created rules). They may handle the basic stuff OK, but they are far from robust. My favorite example with Siri:

  • Me: “Will it be sunny tomorrow?”
  • Siri: “Yes, it should be sunny tomorrow.” 
  • Me: “Will it be sunny or cloudy tomorrow?”
  • Siri: “I can only answer one weather fact at a time Jeff, sorry about that.”

At least she apologizes. :-)
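That rule-based core is easier to picture with a toy example. Below is a minimal sketch of pattern-to-response matching; the patterns, responses, and fallback are all made up for illustration. Real assistants layer speech recognition, ranking models, and thousands of hand-written rules on top of something like this, which is exactly why they stay brittle at the edges.

```python
# Toy sketch of the rule-based core behind many chatbots (hypothetical
# patterns; real assistants combine many hand-written rules with ML
# components for speech, ranking, and entity extraction).
import re

RULES = [
    (re.compile(r"\bwill it be (sunny|rainy|cloudy)\b.*\btomorrow\b", re.I),
     lambda m: f"Yes, it should be {m.group(1)} tomorrow."),
    (re.compile(r"\b(reserve|reservation|book a table)\b", re.I),
     lambda m: "For how many people, and at what time?"),
]

def reply(utterance: str) -> str:
    for pattern, respond in RULES:
        match = pattern.search(utterance)
        if match:
            return respond(match)
    return "Sorry, I don't understand that yet."   # the inevitable fallback

print(reply("Will it be sunny tomorrow?"))
# Brittle, like the Siri exchange above: this matches the first weather word
# and silently ignores the "or cloudy" part of the question.
print(reply("Will it be sunny or cloudy tomorrow?"))
```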

Take Better Pictures

The cameras on our phones keep getting better and better. But with such small lenses, we’ve reached the limits of optics. So how do they keep improving? AI. The cameras in Google’s latest Pixel phones do some very impressive things. Well, not the cameras actually…but the software behind them. Their “Night Sight” feature takes a burst of multiple photos to improve dark shots. Each individual photo is grainy because of the lack of light, but each is also taken from a slightly different position, because your hand isn’t perfectly still. Google has developed a machine learning model that merges these frames, brightening the image while suppressing the noise. The result is a much higher-quality photo (check out the sliders in this article), something that until now could only be obtained with larger lenses and faster sensors.
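The core trick is easy to illustrate: average several aligned noisy frames and the random sensor noise largely cancels out while the scene remains. Here’s a toy sketch using NumPy with a synthetic “scene” and known hand-shake shifts; Night Sight’s real pipeline also has to estimate the alignment itself, merge frames robustly, and apply learned color and tone adjustments.

```python
# Toy illustration of why a burst of noisy photos beats a single one:
# averaging aligned frames keeps the scene and cancels much of the noise.
import numpy as np

rng = np.random.default_rng(0)
scene = rng.uniform(0.0, 0.1, size=(64, 64))         # a dim "true" scene

def noisy_frame(scene, shift):
    """One handheld shot: the scene, slightly shifted, plus sensor noise."""
    frame = np.roll(scene, shift, axis=(0, 1))
    return frame + rng.normal(0.0, 0.05, size=scene.shape)

shifts = [(0, 0), (1, 0), (0, 1), (-1, 1), (2, -1)]   # hand shake between shots
frames = [noisy_frame(scene, s) for s in shifts]

# "Align" each frame back (here the shift is known; real pipelines estimate it),
# then average: noise shrinks roughly with the square root of the frame count.
aligned = [np.roll(f, (-s[0], -s[1]), axis=(0, 1)) for f, s in zip(frames, shifts)]
merged = np.mean(aligned, axis=0)

print("single-frame mean noise:", np.abs(frames[0] - scene).mean())
print("merged-frame mean noise:", np.abs(merged - scene).mean())
```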

Predict Medical Outcomes

Not a fun subject for sure, but an indicator of what ML can do: predict death (an approach that will extend to other medical outcomes as well). Using pattern recognition with only a few data points, ML beats the gut feel of medical professionals, even though they have far more information at their fingertips. The bottom line: when it comes to predicting which patients will die in the next 12 months, doctors get less than 75% right. But a deep learning approach produced a model that was over 90% accurate, using only electronic health records. “The algorithm did not use the results of lab assays, pathology reports, or scan results, not to mention more holistic descriptors of individual patients, including psychological status, will to live, gait, hand strength, or many other parameters that have been associated with life span.”
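As a rough illustration of outcome prediction from structured record data, here’s a toy sketch with entirely synthetic data and made-up features (age, recent admissions, active diagnoses, days since last visit); the study in question trained deep networks on full electronic health records, a far richer setup than this.

```python
# Toy sketch of outcome prediction from structured record features.
# All data and feature names are synthetic and for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
n = 2000
# Hypothetical features: age, admissions in the past year,
# number of active diagnoses, days since last visit.
X = np.column_stack([
    rng.normal(70, 10, n),
    rng.poisson(2, n),
    rng.poisson(5, n),
    rng.exponential(60, n),
])
# Synthetic label loosely tied to the features, just so the model has signal.
risk = 0.04 * (X[:, 0] - 70) + 0.5 * X[:, 1] + 0.2 * X[:, 2] - 0.01 * X[:, 3]
y = (risk + rng.normal(0, 1, n) > 1.5).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("AUC on held-out data:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```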

…But There’s Still Plenty of AI Hype

A recent survey of tech companies in Europe found that a full 40% of AI companies were not using AI at all! Although this headline distorts things a bit (it wasn’t based on companies that said they used AI, but on companies identified by third-party analysts as using AI), it’s indicative of the confusion and the hyped-up state of the market. AI is still a very hot, and very misunderstood, field, so everyone wants to cash in on the hype. The press often benefits from the hype as well, so they fuel the fire.

No, AI won’t be able to do everything. It will not replace all human jobs. And it won’t develop superintelligence and kill us all (that’s another topic altogether). But it is fundamentally and drastically changing the world and our lives. Moreover, it’s doing that rapidly. Its implications are difficult to anticipate, and the time we have to adjust will be short.

I’ll explore more about this in future posts.
