Citation: Lai, J.; Huo, Y.; Hou, R.; Wang, X. A Universal Detection Method for Adversarial Examples and Fake Images. Sensors 2022, 22, 3445. https://doi.org/10.3390/s22093445

Academic Editor: Ilsun You

Received: 17 March 2022
Accepted: 7 April 2022
Published: 30 April 2022
Article
A Universal Detection Method for Adversarial Examples and Fake Images

Jiewei Lai 1,2, Yantong Huo 1, Ruitao Hou 3,* and Xianmin Wang 3

1 School of Computer Science and Cyber Engineering, Guangzhou University, Guangzhou 510006, China; 1906200093@e.gzhu.edu.cn (J.L.); 1906300038@e.gzhu.edu.cn (Y.H.)
2 Pazhou Lab, Guangzhou 510330, China
3 Institute of Artificial Intelligence and Blockchain, Guangzhou University, Guangzhou 510006, China; xianmin@gzhu.edu.cn
* Correspondence: 1111906005@e.gzhu.edu.cn
Abstract: Deep-learning technologies have shown impressive performance on many tasks in recent years. However, the use of deep-learning technologies carries multiple serious security risks. For example, state-of-the-art deep-learning models are vulnerable to adversarial examples, in which specific subtle perturbations cause a model to make wrong predictions; these technologies can also be abused to tamper with or forge multimedia, i.e., deep forgery. In this paper, we propose a universal detection framework for adversarial examples and fake images. We observe differences in the distribution of model outputs between normal examples and adversarial examples (or fake images) and train a detector to learn these differences. We perform extensive experiments on the CIFAR10 and CIFAR100 datasets. The experimental results show that the proposed framework is feasible and effective in detecting adversarial examples and fake images. Moreover, the framework generalizes well across different datasets and model structures.
Keywords: adversarial example; deep forgery; detection
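The framework summarized in the abstract reduces detection to a binary classification over the target model's output distribution. As a minimal illustration of that idea, assuming a softmax image classifier (the detector architecture, layer sizes, and names below are hypothetical sketches, not the paper's specification):

    import torch
    import torch.nn as nn

    class OutputDetector(nn.Module):
        # Hypothetical detector: a small MLP that labels a target model's
        # softmax output vector as normal (0) or adversarial/fake (1).
        def __init__(self, num_classes=10):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(num_classes, 64),
                nn.ReLU(),
                nn.Linear(64, 2),
            )

        def forward(self, probs):
            return self.net(probs)

    def detector_features(target_model, x):
        # The detector sees only the target model's output distribution.
        with torch.no_grad():
            return torch.softmax(target_model(x), dim=1)

Training would then pair these feature vectors from normal inputs (label 0) and adversarial or GAN-generated inputs (label 1) under a standard cross-entropy loss.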
1. Introduction
In recent years, as one of the core technologies of artificial intelligence, deep learning has attracted unprecedented attention from academia and industry [1]. Compared with traditional machine learning methods, deep learning produces results with higher accuracy, does not require complex feature engineering, and has better adaptability. Hence, deep-learning technology has gradually been applied to various fields, such as computer vision, speech recognition, natural language processing, and autonomous driving [2–5]. However, research shows that deep learning still faces many security and privacy problems [6–9], such as adversarial examples and deep forgery [10,11].
Szegedy et al. first proposed the concept of adversarial examples [12]. The basic principle is to add specific subtle perturbations to the original data so that the model outputs wrong results with high confidence. The discovery of adversarial examples illustrates the fragility of deep-learning models. Since then, researchers have studied adversarial examples extensively and proposed many adversarial example generation methods, such as FGSM, C&W, and DeepFool [13–16]. These methods can generate adversarial examples with extremely high success rates under different attack scenarios and targets. Moreover, it was found that adversarial examples are transferable, i.e., adversarial examples generated for one model are also effective against other similar models [17]. This aggravates the seriousness of deep-learning security problems and greatly restricts the application of deep-learning technology in military, medical, financial, and other sensitive fields [18–20].
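To make the flavor of these attacks concrete, the single-step FGSM perturbs an input along the sign of the loss gradient, x' = x + ε · sign(∇x J(θ, x, y)). A minimal PyTorch sketch, assuming a standard classifier with pixel values in [0, 1] (the function name and ε value are illustrative):

    import torch
    import torch.nn.functional as F

    def fgsm_attack(model, x, y, epsilon=8 / 255):
        # Enable gradients with respect to the input image.
        x = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x), y)
        loss.backward()
        # Step by epsilon in the direction that increases the loss,
        # then clamp back to the valid pixel range.
        x_adv = x + epsilon * x.grad.sign()
        return x_adv.clamp(0.0, 1.0).detach()

Iterative methods such as C&W and DeepFool refine this idea with optimization over many small steps, typically yielding smaller perturbations at higher cost.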
Beyond the security issues of the technology itself, deep learning also suffers from abuse problems, such as deep forgery. Deep forgery uses deep-learning algorithms [11], i.e., generative adversarial networks (GANs), to tamper with or forge original data so that observers