Citation: Cai, G.; Li, J.; Liu, X.; Chen, Z.; Zhang, H. Learning and Compressing: Low-Rank Matrix Factorization for Deep Neural Network Compression. Appl. Sci. 2023, 13, 2704. https://doi.org/10.3390/app13042704

Academic Editors: Phivos Mylonas, Katia Lida Kermanidis and Manolis Maragoudakis

Received: 5 January 2023
Revised: 15 February 2023
Accepted: 16 February 2023
Published: 20 February 2023

Copyright: © 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Applied Sciences | Article
Learning and Compressing: Low-Rank Matrix Factorization for
Deep Neural Network Compression
Gaoyuan Cai 1,2, Juhu Li 1,2,*, Xuanxin Liu 1,2, Zhibo Chen 1,2 and Haiyan Zhang 1,2

1 School of Information Science and Technology, Beijing Forestry University, Beijing 100083, China
2 Engineering Research Center for Forestry-Oriented Intelligent Information Processing of National Forestry and Grassland Administration, Beijing 100083, China
* Correspondence: lijuhu@bjfu.edu.cn
Abstract: Recently, the deep neural network (DNN) has become one of the most advanced and powerful methods used in classification tasks. However, the cost of DNN models is sometimes considerable due to their huge numbers of parameters. It is therefore necessary to compress these models, reducing the parameters in the weight matrices and the computational consumption while maintaining the same level of accuracy. In this paper, to deal with the compression problem, we first combine the loss function and the compression cost function into a joint function and optimize it within a single optimization framework. We then combine the CUR decomposition method with this joint optimization framework to obtain low-rank approximation matrices. Finally, we narrow the gap between the weight matrices and their low-rank approximations to compress DNN models on the image classification task. In this algorithm, we not only solve for the optimal ranks by enumeration, but also obtain the compression result with low-rank characteristics iteratively. Experiments were carried out on three public datasets under classification tasks. Comparisons with baselines and current state-of-the-art results show that our proposed low-rank joint optimization compression algorithm achieves higher accuracy and compression ratios.
Keywords: deep neural network compression; low-rank matrix factorization; truncated singular value decomposition; CUR decomposition; joint optimization; optimal rank
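As a rough illustration of the joint formulation described in the abstract (the notation below is ours, not the paper's), the idea is to minimize the task loss together with a penalty on the gap between each weight matrix and a low-rank approximation of it, for example one obtained by CUR decomposition:

\[
\min_{\{W_\ell\},\,\{r_\ell\}} \; \mathcal{L}\big(\{W_\ell\};\,\mathcal{D}\big) \;+\; \lambda \sum_{\ell} \big\lVert W_\ell - \widehat{W}_\ell \big\rVert_F^{2}, \qquad \operatorname{rank}\big(\widehat{W}_\ell\big) \le r_\ell .
\]

Here W_ℓ is the weight matrix of layer ℓ, Ŵ_ℓ its low-rank approximation, r_ℓ the (enumerated) target rank, 𝓛 the classification loss on the training data 𝒟, and λ a trade-off coefficient; all of these symbols are placeholders for the quantities defined formally later in the paper.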
1. Introduction
In recent years, DNNs have become prevalent in the machine learning area and have been applied in various fields such as computer vision (CV) [1,2], natural language processing (NLP) [3,4], and speech recognition [5,6]. However, as the accuracy demanded by real-world applications increases, DNN models need more neurons, and the growing number of layers introduces massive numbers of weight parameters, which makes the required computational resources large. Therefore, it is important to compress DNNs while preserving their accuracy, so that the compressed models can run on resource-constrained embedded devices.
Many DNN compression techniques already exist, such as pruning [7–10], weight quantization [11–14], and low-rank matrix factorization (MF) [15–18].
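To make the low-rank factorization idea concrete, the following is a minimal sketch (our illustration, not the authors' code; the layer size of 512 x 1024 and the rank 64 are arbitrary choices) that factorizes a fully connected layer's weight matrix with truncated SVD and reports the resulting parameter saving:

import numpy as np

# Illustrative fully connected layer weight matrix (shape chosen arbitrarily).
rng = np.random.default_rng(0)
W = rng.standard_normal((512, 1024)).astype(np.float32)

# Truncated SVD: keep only the top-r singular values and vectors.
r = 64
U, s, Vt = np.linalg.svd(W, full_matrices=False)
A = U[:, :r] * s[:r]          # 512 x r factor (U_r scaled by singular values)
B = Vt[:r, :]                 # r x 1024 factor
W_low_rank = A @ B            # rank-r approximation of W

# Storing A and B instead of W reduces the parameter count
# from 512*1024 to r*(512 + 1024).
ratio = W.size / (A.size + B.size)
err = np.linalg.norm(W - W_low_rank) / np.linalg.norm(W)
print(f"compression ratio: {ratio:.1f}x, relative error: {err:.3f}")

The same replacement applies to each compressible layer; the trade-off between the rank r, the approximation error, and the compression ratio is exactly what a rank-selection scheme has to balance.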
Weight pruning is proposed to reduce the weight parameters of DNN models while retaining the original precision, for example by removing neurons or by automatically learning the correct numbers of neurons and weights. However, all of these pruning criteria require a manually set sensitivity for each layer and subsequent fine-tuning of the weight parameters. Weight quantization compresses the DNN model by reducing the number of bits required to represent each weight parameter. In fact, the main difficulty of weight quantization is that low-precision weights and excessive quantization can cause the parameters of the DNN model to lose important information.
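For intuition only, a generic example of uniform 8-bit weight quantization is sketched below (this is not the specific scheme of the works cited above; the array size and bit width are arbitrary illustrative choices):

import numpy as np

rng = np.random.default_rng(0)
w = rng.standard_normal(10000).astype(np.float32)   # illustrative 32-bit weights

# Symmetric uniform quantization to signed 8-bit integers.
scale = np.abs(w).max() / 127.0
q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)  # stored at 8 bits
w_restored = q.astype(np.float32) * scale                    # de-quantized values

# Each weight now needs 8 bits instead of 32, at the cost of rounding error.
print("max absolute rounding error:", float(np.abs(w - w_restored).max()))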
The low-rank matrix factorization is to find an approximate matrix with low-rank property