Attacking machine learning with adversarial examples
Diego Marinho de Oliveira
Gen-AI Search, RecSys | ex-SEEK, AI Lead, Data Scientist Manager and ML Engineer Specialist
Abstract: Adversarial examples are inputs to machine learning models that an attacker has intentionally designed to cause the model to make a mistake; they're like optical illusions for machines. In this post we'll show how adversarial examples work across different mediums, and discuss why securing systems against them can be difficult.
Authors: Ian Goodfellow, Nicolas Papernot, Sandy Huang, Yan Duan, Pieter Abbeel, Jack Clark
Read full article at https://bit.ly/2lTj1nO
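A common way to build adversarial examples, and the one most associated with this line of work, is the fast gradient sign method (FGSM): perturb the input in the direction of the sign of the loss gradient with respect to the input. Below is a minimal, illustrative sketch on a hand-built logistic-regression "model"; the weights, input, and epsilon are invented for demonstration and are not taken from the article (real attacks use much smaller perturbations on high-dimensional inputs like images).

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy "model": fixed logistic-regression weights (illustrative values).
w = np.array([2.0, -1.0, 0.5])

# A clean input the model confidently assigns to class 1, with true label 1.
x = np.array([0.5, -0.5, 1.0])
y = 1.0

p = sigmoid(w @ x)  # model's confidence for class 1 on the clean input

# Gradient of the cross-entropy loss with respect to the INPUT
# (not the weights): dL/dx = (p - y) * w.
grad_x = (p - y) * w

# FGSM step: move the input in the direction that increases the loss.
eps = 0.75  # deliberately large so the flip is visible in 3 dimensions
x_adv = x + eps * np.sign(grad_x)

p_adv = sigmoid(w @ x_adv)  # confidence on the perturbed input

print(f"clean confidence: {p:.3f}, adversarial confidence: {p_adv:.3f}")
```

On this toy setup the clean input is classified as class 1 (confidence above 0.5) while the perturbed input is pushed below 0.5, i.e. the prediction flips even though every feature moved by at most `eps`.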
///
8 years ago: Very interesting, thank you. Gradient masking looks a little scary.