[AISec] Show-and-Fool: Crafting Adversarial Examples for Neural Image Captioning (Adversarial Examples Break Image Captioning Systems)


Addressing the problem of adversarial example attacks on deep learning systems, researchers from MIT, UC Davis, IBM Research, and Tencent AI Lab have published a paper on arXiv proposing a method for crafting adversarial examples against neural image captioning systems. Their experiments show that image captioning systems can be fooled with ease.

Paper:
https://arxiv.org/pdf/1712.02051.pdf
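
The paper casts targeted captioning attacks as an optimization problem over the input image: find a small perturbation that makes the model decode an attacker-chosen caption. As a rough, minimal sketch of that general idea (not the paper's actual code), the PyTorch snippet below optimizes a perturbation with a distortion penalty; `caption_model` and its teacher-forced interface are hypothetical placeholders.

```python
import torch

def targeted_caption_attack(caption_model, image, target_tokens,
                            steps=300, lr=0.01, c=1.0):
    """Minimal sketch: optimize a small perturbation delta so that the
    captioning model assigns high probability to an attacker-chosen
    caption. `caption_model(img, tokens)` is an assumed placeholder that
    returns teacher-forced per-token logits of shape (seq_len, vocab)."""
    delta = torch.zeros_like(image, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        adv = (image + delta).clamp(0.0, 1.0)       # keep a valid image
        logits = caption_model(adv, target_tokens)  # placeholder interface
        # cross-entropy of the target caption under the model
        caption_loss = torch.nn.functional.cross_entropy(logits, target_tokens)
        # trade off caption loss against perturbation size
        loss = c * caption_loss + (delta ** 2).sum()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return (image + delta).clamp(0.0, 1.0).detach()
```

The squared-L2 penalty keeps the perturbation visually small while the caption loss drives the model toward the target sentence; the constant `c` balances the two objectives.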
