Citation: Lai, J.; Huo, Y.; Hou, R.; Wang, X. A Universal Detection Method for Adversarial Examples and Fake Images. Sensors 2022, 22, 3445. https://doi.org/10.3390/s22093445
Academic Editor: Ilsun You
Received: 17 March 2022
Accepted: 7 April 2022
Published: 30 April 2022
Publisher's Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Copyright: © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Article
A Universal Detection Method for Adversarial Examples and
Fake Images
Jiewei Lai 1,2, Yantong Huo 1, Ruitao Hou 3,* and Xianmin Wang 3
1 School of Computer Science and Cyber Engineering, Guangzhou University, Guangzhou 510006, China; 1906200093@e.gzhu.edu.cn (J.L.); 1906300038@e.gzhu.edu.cn (Y.H.)
2 Pazhou Lab, Guangzhou 510330, China
3 Institute of Artificial Intelligence and Blockchain, Guangzhou University, Guangzhou 510006, China; xianmin@gzhu.edu.cn
* Correspondence: 1111906005@e.gzhu.edu.cn
Abstract:
Deep-learning technologies have shown impressive performance on many tasks in recent
years. However, using deep-learning technologies entails several serious security risks. For example, state-of-the-art deep-learning models are vulnerable to adversarial examples, which cause wrong predictions through specific subtle perturbations, and these technologies can be abused to tamper with or forge multimedia, i.e., deep forgery. In this paper, we
propose a universal detection framework for adversarial examples and fake images. We observe that the distributions of model outputs differ between normal examples and adversarial examples (or fake images), and we train a detector to learn these differences. We perform extensive experiments on the CIFAR10 and CIFAR100 datasets. The experimental results show that the proposed framework is feasible and effective in detecting adversarial examples and fake images. Moreover, the proposed framework generalizes well across different datasets and model structures.
Keywords: adversarial example; deep forgery; detection
1. Introduction
In recent years, as one of the core technologies of artificial intelligence, deep learning has attracted unprecedented attention from academia and industry [1]. Compared with traditional machine learning methods, deep learning produces results with higher accuracy, does not require complex feature engineering, and has better adaptability. Hence, deep-learning technology has been gradually applied to various fields, such as computer vision, speech recognition, natural language processing, and autonomous driving [2-5]. However, research shows that deep learning still has many problems in its security and privacy [6-9], such as adversarial examples and deep forgery [10,11].
Szegedy et al. first proposed the concept of adversarial examples [12]. The basic principle is to add specific subtle perturbations to the original data so that the model outputs erroneous results with high confidence. The discovery of adversarial examples illustrates the fragility of deep-learning models. Since then, researchers have studied adversarial examples extensively and proposed many adversarial example generation methods, such as FGSM, C&W, and DeepFool [13-16]. These methods can generate adversarial examples with extremely high success rates under different attack scenarios and targets. Moreover, it was found that adversarial examples are transferable, i.e., adversarial examples generated for one model are also effective against other similar models [17]. This aggravates the seriousness of deep-learning security problems and greatly restricts the application of deep-learning technology in military, medical, financial, and other sensitive fields [18-20].
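To make the attack principle concrete, the following minimal PyTorch sketch implements the FGSM idea: a single step along the sign of the loss gradient with respect to the input. The classifier, the input batch, and the step size epsilon are illustrative assumptions, not the settings evaluated in this paper.

import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=8 / 255):
    """One-step FGSM: nudge x along the sign of the loss gradient w.r.t. x."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step every pixel slightly in the direction that increases the loss,
    # then clip back to the valid image range [0, 1].
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

# Usage (hypothetical classifier and CIFAR-style batch):
# model.eval()
# x, y = torch.rand(8, 3, 32, 32), torch.randint(0, 10, (8,))
# x_adv = fgsm_attack(model, x, y)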
In addition to the security issues of the technology itself, deep learning also suffers from abuse problems, such as deep forgery. Deep forgery uses deep-learning algorithms [11], i.e., generative adversarial networks (GANs), to tamper with or forge original data so that observers cannot distinguish the forged content from the real content.
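As a rough illustration of the GAN machinery behind such forgeries, the toy PyTorch generator below maps random noise to a 32x32 RGB image; trained against a discriminator, such a network learns to produce increasingly realistic fake images. The layer sizes and output resolution are illustrative assumptions, not the architecture of any particular forgery method.

import torch
import torch.nn as nn

class Generator(nn.Module):
    """Toy GAN generator: maps a latent vector to a 32x32 RGB image."""
    def __init__(self, latent_dim=100):
        super().__init__()
        self.net = nn.Sequential(
            # (latent_dim, 1, 1) -> (128, 4, 4)
            nn.ConvTranspose2d(latent_dim, 128, 4, 1, 0), nn.BatchNorm2d(128), nn.ReLU(),
            # (128, 4, 4) -> (64, 8, 8)
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.ReLU(),
            # (64, 8, 8) -> (32, 16, 16)
            nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.BatchNorm2d(32), nn.ReLU(),
            # (32, 16, 16) -> (3, 32, 32), pixel values in [-1, 1]
            nn.ConvTranspose2d(32, 3, 4, 2, 1), nn.Tanh(),
        )

    def forward(self, z):
        return self.net(z)

# fake = Generator()(torch.randn(8, 100, 1, 1))  # a batch of 8 "forged" images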