Citation: Dimitriou, N.; Arandjelović, O. Sequential Normalization: Embracing Smaller Sample Sizes for Normalization. Information 2022, 13, 337. https://doi.org/10.3390/info13070337

Academic Editors: Krzysztof Ejsmont, Aamer Bilal Asghar, Yong Wang and Rodolfo Haber

Received: 24 May 2022; Accepted: 5 July 2022; Published: 12 July 2022

Publisher's Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Copyright: © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Article

Sequential Normalization: Embracing Smaller Sample Sizes for Normalization

Neofytos Dimitriou * and Ognjen Arandjelović

School of Computer Science, University of St Andrews, St Andrews KY16 9SX, UK; ognjen.arandjelovic@gmail.com

* Correspondence: neofytosd@gmail.com
Abstract: Normalization as a layer within neural networks has over the years demonstrated its effectiveness in neural network optimization across a wide range of different tasks, with one of the most successful approaches being that of batch normalization. The consensus is that better estimates of the BatchNorm normalization statistics (µ and σ²) in each mini-batch result in better optimization. In this work, we challenge this belief and experiment with a variant of BatchNorm known as GhostNorm that, despite independently normalizing groups of samples within each mini-batch, i.e., µ and σ² are computed and applied separately to each group, consistently outperforms BatchNorm. Next, we introduce sequential normalization (SeqNorm), the sequential application of the above type of normalization across two dimensions of the input, and find that models trained with SeqNorm consistently outperform models trained with BatchNorm or GhostNorm on multiple image classification data sets. Our contributions are as follows: (i) we uncover a source of regularization that is unique to GhostNorm, and not simply an extension of BatchNorm, and illustrate its effects on the loss landscape; (ii) we introduce sequential normalization (SeqNorm), a new normalization layer that improves upon the regularization effects of GhostNorm; (iii) we compare both GhostNorm and SeqNorm against BatchNorm alone as well as in combination with other regularization techniques; (iv) for both GhostNorm and SeqNorm, we train models whose performance is consistently better than our baselines, including ones with BatchNorm, on the standard image classification data sets of CIFAR-10, CIFAR-100, and ImageNet ((+0.2%, +0.7%, +0.4%) and (+0.3%, +1.7%, +1.1%) for GhostNorm and SeqNorm, respectively).
Keywords: batch normalization; ghost normalization; loss landscape; computer vision; neural networks; ImageNet; CIFAR
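To make the two layers concrete, below is a minimal PyTorch-style sketch written under our own assumptions: GhostNorm splits each mini-batch into groups ("ghost batches") and computes µ and σ² independently within each group, while SeqNorm is read here as a channel-group normalization followed by that ghost normalization. The module names, the training-mode-only treatment, and the use of nn.GroupNorm for the second (channel) dimension are illustrative choices, not taken verbatim from the paper.

import torch
import torch.nn as nn

class GhostNorm2d(nn.Module):
    """Normalizes each of `num_ghost_batches` groups of samples independently."""
    def __init__(self, num_channels, num_ghost_batches, eps=1e-5):
        super().__init__()
        self.num_ghost_batches = num_ghost_batches
        self.eps = eps
        # Learnable affine parameters, shared across all ghost batches.
        self.weight = nn.Parameter(torch.ones(1, num_channels, 1, 1))
        self.bias = nn.Parameter(torch.zeros(1, num_channels, 1, 1))

    def forward(self, x):
        n, c, h, w = x.shape
        g = self.num_ghost_batches  # assumes n is divisible by g
        x = x.view(g, n // g, c, h, w)
        # Per-channel mean and variance, computed within each ghost batch only.
        mean = x.mean(dim=(1, 3, 4), keepdim=True)
        var = x.var(dim=(1, 3, 4), keepdim=True, unbiased=False)
        x = (x - mean) / torch.sqrt(var + self.eps)
        x = x.view(n, c, h, w)
        return x * self.weight + self.bias

class SeqNorm2d(nn.Module):
    """Sequential normalization over two input dimensions: channel groups, then ghost batches."""
    def __init__(self, num_channels, num_channel_groups, num_ghost_batches):
        super().__init__()
        self.group_norm = nn.GroupNorm(num_channel_groups, num_channels)
        self.ghost_norm = GhostNorm2d(num_channels, num_ghost_batches)

    def forward(self, x):
        return self.ghost_norm(self.group_norm(x))

As a usage example, y = SeqNorm2d(num_channels=64, num_channel_groups=8, num_ghost_batches=4)(torch.randn(32, 64, 16, 16)) normalizes a mini-batch of 32 samples in groups of 8; the sketch covers training-mode statistics only and keeps no running averages for inference.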
1. Introduction
The effectiveness of batch normalization (BatchNorm), a technique first introduced by Ioffe and Szegedy [1], on neural network (NN) optimization has been demonstrated over the years on a variety of tasks, including computer vision [2–4], speech recognition [5], and others [6–8]. BatchNorm is typically embedded at each NN layer either before or after the activation function, normalizing and projecting the input features to match a Gaussian-like distribution. Consequently, the activation values of each layer maintain more stable distributions during NN training, which in turn is thought to enable faster convergence and better generalization performance [1,9,10]. Following the effectiveness of BatchNorm in NN optimization, other normalization techniques emerged [11–15], a number of which introduced normalization across a different input dimension (e.g., layer normalization [12]), while others focused on improving other aspects of BatchNorm, such as the accuracy of the batch statistics estimates [11,16,17] or the train–test discrepancy in BatchNorm use [18].
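For reference, the standard BatchNorm transformation of [1] over a mini-batch B = {x_1, ..., x_m} can be written as follows (the notation here is ours; γ and β are learnable affine parameters and ε is a small constant for numerical stability):

\mu_B = \frac{1}{m}\sum_{i=1}^{m} x_i, \qquad
\sigma_B^2 = \frac{1}{m}\sum_{i=1}^{m} (x_i - \mu_B)^2, \qquad
\hat{x}_i = \frac{x_i - \mu_B}{\sqrt{\sigma_B^2 + \epsilon}}, \qquad
y_i = \gamma \hat{x}_i + \beta.

GhostNorm applies the same transformation, but with µ_B and σ_B² computed independently over groups of samples within the mini-batch rather than over the whole mini-batch.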
Despite the wide adoption and practical success of BatchNorm, its underlying mechanism within the context of NN optimization has yet to be fully understood. Initially, Ioffe and Szegedy suggested that its benefit came from reducing the so-called internal covariate shift [1].