Attacking machine learning with adversarial examples

Abstract: Adversarial examples are inputs to machine learning models that an attacker has intentionally designed to cause the model to make a mistake; they're like optical illusions for machines. In this post we'll show how adversarial examples work across different mediums, and will discuss why securing systems against them can be difficult.

Authors: Ian Goodfellow, Nicolas Papernot, Sandy Huang, Yan Duan, Pieter Abbeel, Jack Clark

Read the full article at https://bit.ly/2lTj1nO
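
As a rough illustration of the kind of attack the abstract describes, here is a minimal sketch of the fast gradient sign method (FGSM), one common way to craft adversarial examples. The PyTorch framing, the `fgsm_attack` helper, and the `epsilon` value are illustrative assumptions, not code from the article:

```python
# Minimal FGSM sketch (illustrative; `model`, inputs, and epsilon are assumptions).
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.01):
    """Nudge input x in the direction that most increases the model's loss."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Take one small step along the sign of the input gradient.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0, 1).detach()  # keep pixel values in the valid range
```

Applied to a correctly classified image, a perturbation this small is usually imperceptible to a human but can flip the model's prediction.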

Very interesting, thank you. Gradient masking looks a little scary.
