Citation: Xu, W.; Zhang, C.; Zhao, F.; Fang, L. A Mask-Based Adversarial Defense Scheme. Algorithms 2022, 15, 461. https://doi.org/10.3390/a15120461

Academic Editors: Krzysztof Ejsmont, Aamer Bilal Asghar, Yong Wang and Rodolfo Haber

Received: 9 November 2022
Accepted: 2 December 2022
Published: 6 December 2022
Publisher's Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Copyright: © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Article
A Mask-Based Adversarial Defense Scheme
Weizhen Xu, Chenyi Zhang *, Fangzhen Zhao and Liangda Fang
College of Information Science and Technology, Jinan University, Guangzhou 510632, China
* Correspondence: chenyi_zhang@jnu.edu.cn
Abstract: Adversarial attacks hamper the functionality and accuracy of deep neural networks (DNNs) by introducing subtle perturbations into their inputs. In this work, we propose a new mask-based adversarial defense scheme (MAD) for DNNs to mitigate the negative effects of adversarial attacks. Our method preprocesses multiple copies of a potential adversarial image by applying random masking, before the outputs of the DNN on all the randomly masked images are combined. As a result, the combined final output becomes more tolerant to minor perturbations of the original input. Compared with existing adversarial defense techniques, our method requires neither an additional denoising structure nor any change to a DNN's architectural design. We have tested this approach on a collection of DNN models for a variety of datasets, and the experimental results confirm that the proposed method can effectively improve the defense abilities of the DNNs against all of the tested adversarial attack methods. In certain scenarios, DNN models trained with MAD can improve classification accuracy by as much as 90% compared to the original models when given adversarial inputs.
Keywords: adversarial defense; adversarial attack; deep neural networks; random mask; robustness in machine learning
1. Introduction
Deep neural networks (DNNs) have achieved great success in the past decade in research areas such as image classification, natural language processing, and data analytics, with a variety of application domains including banking, financial services and insurance, IT and telecommunications, manufacturing, and healthcare [1]. However, researchers have discovered that it is possible to introduce human-imperceptible perturbations to the inputs of a DNN in order to induce incorrect or misleading outputs from the DNN at the choice of an adversary [2–4].
As of today, the existing approaches to counter adversarial attacks can be roughly divided into two categories. The reactive approach focuses on detecting adversarial inputs (e.g., [5–7]) and tries to correct them. The proactive approach, sometimes known as adversarial defense, takes steps to strengthen DNNs (e.g., [8–10]), making them more robust in withstanding perturbations on their inputs. In this paper, we follow the latter path by enhancing the robustness of the decision procedure of DNNs. Inspired by a recent paper [11] that restores missing pixels of given images via random patching, we devise a new adversarial defense scheme called mask-based adversarial defense (MAD) for the training and testing of DNNs that perform image-classification tasks. In addition, we believe that such a mechanism may also be applicable to improving DNN robustness in other scenarios.
Using this approach, we perform a series of experiments with regard to MAD. We split an image into grids of a predefined size (e.g., 4 × 4) and randomly fill each grid with a default value (e.g., black in RGB for the masked pixels) with a given probability (e.g., 75%) to generate training samples. After training, we also apply masking to images at the test phase (for the classification task), as illustrated in Figure 1. Given an (unmasked original) image, we repeat the test process a number of times with different mask patterns, and the outputs of the DNN on these masked copies are then combined to produce the final classification.
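To make the masking step concrete, the listing below gives one plausible reading of this procedure in Python with NumPy. It is an illustrative sketch, not the authors' reference implementation: the function names (random_mask, mad_predict), the interpretation of the grid size as 4 × 4-pixel blocks, and the assumption that model maps a single image to a vector of class probabilities are all ours. The 75% masking probability and the averaging of outputs over repeated masked copies follow the description above and the abstract.

    import numpy as np

    def random_mask(image, cell=4, p=0.75, fill=0.0, rng=None):
        """Tile an H x W x C image into cell x cell pixel blocks and,
        with probability p, overwrite each block with a default fill
        value (e.g., 0.0 for black), per the description in the text."""
        rng = rng or np.random.default_rng()
        h, w = image.shape[:2]
        masked = image.copy()
        for i in range(0, h, cell):
            for j in range(0, w, cell):
                if rng.random() < p:  # mask this block with probability p
                    masked[i:i + cell, j:j + cell] = fill
        return masked

    def mad_predict(model, image, n_repeats=10, rng=None):
        """Classify one image by averaging the model's class
        probabilities over several independently masked copies, so the
        combined output is less sensitive to small perturbations."""
        rng = rng or np.random.default_rng()
        outputs = [model(random_mask(image, rng=rng))
                   for _ in range(n_repeats)]
        return int(np.argmax(np.mean(outputs, axis=0)))

In this reading, the number of test-time repetitions (n_repeats above, a name we introduce for illustration) trades robustness against inference cost, since each repetition requires one additional forward pass through the network.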