AI, the Tool That’s Going to Sink Us or Save Us
Left to right: Annie Klomhaus, Shaarik Zafar, Giuseppe Abbamonte, Gianni Riotta

Thanks to Capital One for sponsoring this post.

There are few emerging technologies that hold as much potential for good and evil as Artificial Intelligence.

It “can be part of the problem, but also part of the cure,” said European Union Director of Media Policy Giuseppe Abbamonte during a lively SXSW panel in Austin, Texas, entitled “AI: The Silver Bullet Against Disinformation?” He sat next to newly minted Facebook Policy Manager Shaarik Zafar, whose platform was ground zero for misinformation (or fake news) during the 2016 presidential election and has struggled with it ever since.

“We were so awed by Facebook, Google, Apple, that we forget that they had no clue on social and political effect of their creation,” said panelist Princeton University Professor Gianni Riotta.

Now companies like Facebook are trying to unravel the mess with, at least in part, the help of AI.

Zafar revealed that Facebook now uses AI to take down 1 million fake accounts a day and, even though the company doesn’t “want to be the arbiter of what’s true and what’s not,” it is wading deeper into managing the spread of disinformation on the platform. In particular, Facebook is now taking a more aggressive approach to the anti-vaccination movement.

Recent measles outbreaks in the U.S. have Facebook rethinking its approach to free speech. Zafar explained that the misinformation surrounding the efficacy of vaccinations was leading to real-world harm. So the company made the decision to stop sharing anti-vaccination posts and has outright banned anti-vaccination ads.

Zafar outlined for me how Facebook uses AI to identify this and other types of misinformation. He said there are a number of signals they can look at, including users’ comments and the level of “disbelief.” “The machines can see that. It’s an important signal that machines can do at scale.” The AIs, though, are still not the final arbiters of misinformation. Questions of truthfulness identified by AI are passed along to third-party fact-checkers.

“Once they look at it, then at scale then we can reduce it at 80%,” claimed Zafar.

Misinformation is not confined to the written word. Manipulated photos and, increasingly, videos (deepfakes) are flooding the Internet, which prompted the European Union (EU) to launch the InVID program. Abbamonte said they hope to build a platform that can help newsrooms identify videos like the famously debunked clip of the Pope performing a tablecloth trick.

Abbamonte doesn’t see AI as a silver bullet, especially because it’s becoming the preferred weapon of those trying to spread misinformation. As soon as Facebook, the EU, and others start using one form of AI, those seeking to spread disinformation counter with their own equally powerful form of artificial intelligence.

What’s harder for AI to handle, for instance, are videos that contain grains of truth but are being shown out of context. For those videos, “We need people, the fact-checker,” said Abbamonte.

And, to further throw cold water on the idea of AI overcoming the waves of disinformation, Riotta reminded me that “You can generate fake news text with AI,” and then quickly read through a few news sentences written entirely by artificial intelligence.

Even as social media and nations try using AI to combat disinformation, others are looking for ways to manage and regulate AI’s growth and spread.

ElementAI Founder and CEO JF Gagne (above) explained during his SXSW talk on regulating AI that, as the CEO of an AI company, he’s obviously not looking to stop AI, but to ensure that “as we deploy and use, we build trust with users and citizens.” He explained that the way in which AI is developed creates a complex chain of accountability that can be difficult to trace. The more complex an AI system, the more of a black box it becomes. There are, Gagne noted, “algorithms with bias being applied to circumstances they do not yet fully understand.”

“The key here is to try and invent the industry and people to ask for transparency. We’re looking for a higher degree of expectation management,” he said.

Many of the same bodies that are trying to deploy AI solutions are simultaneously looking for ways to control and manage it.

Companies like Microsoft, Google, and IBM have put forth ethical frameworks, basically promising they won’t use AI for evil.

There are organizations like the Future of Life Institute trying to get ahead of the existential risks posed by the growth of AI.

There are some industry standards from organizations like the IEEE and the International Organization for Standardization.

Then there are the more than 20 countries, including the United States, Canada, and the UK, proposing their own National Strategies for managing AI.

However, on the regulation front, Gagne said there’s little happening. “There’s very little that connects the dots.”

For his part, Gagne is serving on a European Commission high-level expert group, a 52-member body that hopes to craft an overall strategy and develop a unified AI perspective. After working through use cases, they hope they can then start to regulate.

“Europe likes to regulate and sometimes it’s good and sometimes it’s bad. They’re certainly very good at it,” joked Gagne.

There’s clearly a lot to regulate when it comes to AI. Gagne says they need to focus on:

  • Transparency of purpose
  • Robustness and safety
  • Privacy and data governance
  • Human autonomy and oversight
  • Compliance and inclusion by design
  • Societal and environmental well-being

It’s a lot, but Gagne says there are already many laws on the books to handle these requirements; they’re just not being leveraged.

In the meantime, Facebook, Google, Microsoft, the EU, and others will continue deploying AI, and some will be using it to battle disinformation. I guess regulation will simply have to catch up.
