Code: https://github.com/mzweilin/EvadeML-Zoo
- Feature squeezing: reducing the color bit depth of each pixel and applying spatial smoothing.
- Framework:
![Feature-squeezing detection framework](https://img-blog.csdnimg.cn/670f7f92f63743f9932d43462b6b034a.png?x-oss-process=image/watermark,type_ZHJvaWRzYW5zZmFsbGJhY2s,shadow_50,text_Q1NETiBAQmlsbHkxOTAw,size_13,color_FFFFFF,t_70,g_se,x_16)
-
Adversarial example attacks
- L_p norm attacks:
- FGSM
- BIM
- DeepFool
- JSMA
- Carlini/Wagner attacks
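FGSM is the simplest of the attacks listed above: perturb the input by a small step in the direction of the sign of the loss gradient. A minimal sketch on a toy logistic-regression "model" with an analytic gradient (the weights, input, and epsilon are made-up; a real attack would use a DNN's backpropagated gradient):

```python
import numpy as np

rng = np.random.default_rng(0)
w, b = rng.normal(size=8), 0.1          # toy model parameters
x, y = rng.uniform(size=8), 1.0         # input in [0, 1], true label

def grad_loss_x(x):
    # d/dx of binary cross-entropy for sigmoid(w.x + b)
    p = 1.0 / (1.0 + np.exp(-(w @ x + b)))
    return (p - y) * w

eps = 0.1                               # L_inf perturbation budget
x_adv = np.clip(x + eps * np.sign(grad_loss_x(x)), 0.0, 1.0)
```

The `sign` step is what makes FGSM an L_inf-bounded attack: every pixel moves by at most `eps`.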
-
Defense:
- Adversarial training
- Gradient masking
- Feature squeezing/input transformation
-
Detecting adversarial examples
- Sample statistics: maximum mean discrepancy
- Training a detector
- Prediction inconsistency: an adversarial example crafted against one DNN model may not fool every model.
-
Color depth
![Color depth reduction](https://img-blog.csdnimg.cn/ff4b30be5a8846eca034aad5be3b3e10.png)
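Bit-depth reduction is a one-liner: quantize each pixel in [0, 1] to `2^i` levels by rounding. A minimal sketch (function name is mine):

```python
import numpy as np

def squeeze_bits(x, bits):
    """Reduce the color depth of an image in [0, 1] to 2**bits levels."""
    levels = 2 ** bits - 1
    return np.round(x * levels) / levels

img = np.linspace(0.0, 1.0, 256)       # stand-in for an 8-bit grayscale ramp
binary = squeeze_bits(img, 1)          # 1-bit: every pixel becomes 0.0 or 1.0
```

Squeezing to 1 bit collapses the input space drastically, which is why it removes many small adversarial perturbations.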
-
Spatial smoothing
- Local smoothing
![Local smoothing](https://img-blog.csdnimg.cn/626eb83d63a94b5ebee9df77b3b7cdd9.png)
- Non-local smoothing
![Non-local smoothing](https://img-blog.csdnimg.cn/b3765e47f1a44b209696074ab6f37979.png)
-
Most of this paper is a survey of adversarial attacks and defenses; the proposed method itself is simple and not particularly effective.
![Results](https://img-blog.csdnimg.cn/341735c1e6d249508c43ebe0d49417f9.png?x-oss-process=image/watermark,type_ZHJvaWRzYW5zZmFsbGJhY2s,shadow_50,text_Q1NETiBAQmlsbHkxOTAw,size_16,color_FFFFFF,t_70,g_se,x_16)
More updates: https://github.com/Billy1900/Backdoor-Learning