Citation: Maniatopoulos, A.; Alvanaki, P.; Mitianoudis, N. OptiNET—Automatic Network Topology Optimization. Information 2022, 13, 405. https://doi.org/10.3390/info13090405
Academic Editors: Krzysztof Ejsmont, Aamer Bilal Asghar, Yong Wang and Rodolfo Haber
Received: 25 July 2022
Accepted: 24 August 2022
Published: 27 August 2022
Publisher's Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Copyright: © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Article
OptiNET—Automatic Network Topology Optimization
Andreas Maniatopoulos *, Paraskevi Alvanaki and Nikolaos Mitianoudis
Electrical and Computer Engineering Department, Democritus University of Thrace, 69100 Komotini, Greece;
paraalva@ee.duth.gr (P.A.); nmitiano@ee.duth.gr (N.M.)
* Correspondence: amaniato@ee.duth.gr; Tel.: +30-25410-79572
Abstract:
The recent boom of artificial Neural Networks (NN) has shown that NN can provide viable
solutions to a variety of problems. However, their complexity and the lack of efficient interpretation of
NN architectures (commonly considered black-box techniques) have adverse effects on the optimization
of each NN architecture. One cannot simply use a generic topology and have the best performance in
every application field, since the network topology is commonly fine-tuned to the problem/dataset
in question. In this paper, we introduce a novel method of computationally assessing the complexity
of the dataset. The NN is treated as an information channel, and thus information theory is used to
estimate the optimal number of neurons for each layer, reducing the memory and computational
load, while achieving the same, if not greater, accuracy. Experiments using common datasets
confirm the theoretical findings, and the derived algorithm seems to improve the performance of the
original architecture.
Keywords: topology optimization; network optimization; pruning
1. Introduction
One of the current biggest challenges in Deep Neural Networks (DNNs) is the limited
memory bandwidth and capacity of DRAM devices that have to be used by modern systems
to store the huge amounts of weights and activations in DNNs [1]. Neural networks require
memory to store data, weight parameters and activations. Memory usage is high, especially
during training, since the activations from a forward pass must be retained until they can
be used to calculate the error gradients in the backwards pass. A typical example is the
‘ResNet-50’ network, which has approximately 26 million weight parameters and computes
approximately 16 million activations in the forward pass. Using the conventional 32-bit
floating-point format, one would require almost 170 MB.
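As a quick sanity check of this figure, the arithmetic is simply (weights + activations) × 4 bytes. The short Python sketch below is our own back-of-the-envelope calculation using only the numbers quoted above:

```python
# Rough memory estimate for ResNet-50, using the figures quoted in the text.
weights = 26e6       # ~26 million weight parameters
activations = 16e6   # ~16 million activations in one forward pass
bytes_per_value = 4  # 32-bit floating point

total_bytes = (weights + activations) * bytes_per_value
print(f"{total_bytes / 1e6:.0f} MB")  # -> 168 MB, i.e. almost 170 MB
```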
All the above clearly demonstrate an urgent need to reduce the memory requirements
in modern DNN architectures. One way to address this is to reduce computation. A simple
technique is to discard values that are relatively cheap to compute, such as the outputs of activation
functions, and re-compute them when necessary. Substantial reductions can be achieved by
discarding retained activations in sets of consecutive layers of a network and re-computing
them when they are required during the backwards pass, from the closest set of remaining
activations [2]. However, this does not appear to be the optimal way to save on memory. A
similar memory-reuse approach has been developed by researchers at Google DeepMind
with Recurrent Neural Networks (RNNs). For RNNs, re-computation has been shown to reduce memory by a factor of 20 for sequences of length 1000 with only a 0.3 performance overhead [3]. A third significant approach has been recently discovered by the Baidu Deep Speech team. Through various memory-saving techniques, they managed to obtain a 16× reduction in memory for activations, enabling them to train networks with 100 layers,
instead of the previously attainable nine layers, using the same amount of memory [4].
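The activation re-computation idea described in this paragraph is commonly known as gradient checkpointing and is exposed by most deep learning frameworks. The sketch below is our own minimal PyTorch illustration with an arbitrary toy model, not code from the cited works:

```python
import torch
import torch.nn as nn
from torch.utils.checkpoint import checkpoint_sequential

# Arbitrary toy network: 16 identical fully connected blocks (placeholder sizes).
model = nn.Sequential(*[nn.Sequential(nn.Linear(512, 512), nn.ReLU())
                        for _ in range(16)])

x = torch.randn(32, 512, requires_grad=True)

# Split the network into 4 segments: only activations at segment boundaries are
# stored; everything inside a segment is recomputed during the backward pass.
y = checkpoint_sequential(model, 4, x)
loss = y.sum()
loss.backward()  # re-computation of the discarded activations happens here
```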
The above three approaches mark a great improvement in memory handling; however,
the greatest memory hogs are the a priori non-optimised neural network topologies. Thus,