Citation: Jang, S.; Liu, W.; Cho, Y. Convolutional Neural Network Model Compression Method for Software-Hardware Co-Design. Information 2022, 13, 451. https://doi.org/10.3390/info13100451

Academic Editor: Krzysztof Ejsmont

Received: 31 August 2022
Accepted: 23 September 2022
Published: 26 September 2022
Publisher's Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Copyright: © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Convolutional Neural Network Model Compression Method for Software-Hardware Co-Design

Seojin Jang 1, Wei Liu 2 and Yongbeom Cho 1,2,*

1 Department of Electrical and Electronics Engineering, Konkuk University, Seoul 05029, Korea
2 Deep ET, Seoul 05029, Korea
* Correspondence: ybcho@konkuk.ac.kr
Abstract: Owing to their high accuracy, deep convolutional neural networks (CNNs) are extensively used. However, they are characterized by high complexity, and current CNN systems require real-time performance and acceleration. A graphics processing unit (GPU) is one possible solution for improving real-time performance; however, its performance-per-watt is poor owing to its high power consumption. By contrast, field-programmable gate arrays (FPGAs) offer lower power consumption and a flexible architecture, making them more suitable for CNN implementation. In this study, we propose a method that combines the speed of CNNs with the low power and parallelism of FPGAs. This solution relies on two primary acceleration techniques: parallel processing of layer resources and pipelining within specific layers. Moreover, a new method is introduced for trading off the domain requirements of speed and design time by implementing an automatic parallel hardware-software co-design CNN using the software-defined system-on-chip tool. We evaluated the proposed method using five networks (MobileNetV1, ShuffleNetV2, SqueezeNet, ResNet-50, and VGG-16) on an FPGA platform, the ZCU102. We experimentally demonstrated that our design achieves a higher speed-up than the conventional implementation method: 2.47×, 1.93×, and 2.16× on the ZCU102 for MobileNetV1, ShuffleNetV2, and SqueezeNet, respectively.
Keywords: convolutional neural network; field-programmable gate array; hardware–software co-design
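The two acceleration techniques named in the abstract, parallel layer processing and intra-layer pipelining, can be illustrated with a simple latency model. The sketch below is not taken from the paper's design; the stage latencies and input count are hypothetical, and only the standard pipeline formula (fill time plus one initiation interval per remaining input) is assumed.

```python
# Hedged sketch: theoretical latency of a two-stage pipeline vs. purely
# sequential execution. Stage latencies (in cycles) are hypothetical.

def sequential_latency(n_inputs, stage_latencies):
    """Each input passes through every stage before the next input starts."""
    return n_inputs * sum(stage_latencies)

def pipelined_latency(n_inputs, stage_latencies):
    """Stages overlap: after the pipeline fills, one result emerges every
    max(stage latency) cycles (the pipeline's initiation interval)."""
    fill = sum(stage_latencies)   # time for the first input to drain through
    ii = max(stage_latencies)     # steady-state bottleneck stage
    return fill + (n_inputs - 1) * ii

stages = [40, 60]   # hypothetical latencies of two hardware stages, in cycles
n = 100             # number of input tiles streamed through

seq = sequential_latency(n, stages)    # 100 * (40 + 60) = 10000 cycles
pipe = pipelined_latency(n, stages)    # 100 + 99 * 60   = 6040 cycles
print(f"sequential: {seq} cycles, pipelined: {pipe} cycles "
      f"({seq / pipe:.2f}x speed-up)")
```

The model makes the design trade-off visible: pipelining speed-up is bounded by the slowest stage, which is why balancing stage latencies matters as much as adding parallel resources.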
1. Introduction

In recent years, artificial intelligence and deep learning have been extensively used to solve many real-world problems. Currently, convolutional neural networks (CNNs) are among the most advanced deep learning algorithms and are used to solve recognition problems in several scenarios. CNNs are more accurate than conventional algorithms. However, the many parameters of the convolution operation require a considerable amount of computational resources and memory access [1]. This is a computational challenge for the central processing unit (CPU), as it consumes excessive power. Instead, hardware accelerators such as graphics processing units (GPUs), field-programmable gate arrays (FPGAs), and application-specific integrated circuits (ASICs) have been used to increase the throughput of CNNs [2,3]. When CNNs are implemented in hardware, latency improves and energy consumption is reduced. GPUs are the most widely used processors and can accelerate both the training and inference of CNNs. However, GPUs consume excessive power, which is a key performance metric in modern digital systems. ASIC designs achieve high throughput and low power consumption but require more development time and cost. By contrast, FPGAs provide a large capacity of hardware resources, offering thousands of floating-point computing units at lower power consumption. Therefore, FPGA-based accelerators, like ASICs, are an efficient alternative that offers high throughput and configurability at low power consumption and a reasonable cost.
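The computational burden of convolution noted above can be made concrete by counting a layer's weights and multiply-accumulate (MAC) operations from its shape. The layer dimensions in the sketch below are hypothetical and not drawn from the networks evaluated in this paper; only the standard dense-convolution cost formula is assumed.

```python
# Hedged sketch: parameter and MAC counts for one dense 2-D convolutional
# layer (bias terms omitted). The layer shape chosen below is hypothetical.

def conv2d_cost(c_in, c_out, k, h_out, w_out):
    """Return (weights, MACs) for a k x k convolution mapping
    c_in input channels to c_out output channels at h_out x w_out."""
    params = c_out * c_in * k * k      # one k x k filter per in/out channel pair
    macs = params * h_out * w_out      # every weight fires once per output pixel
    return params, macs

# A typical mid-network layer: 128 -> 256 channels, 3x3 kernel, 28x28 output.
params, macs = conv2d_cost(128, 256, 3, 28, 28)
print(f"{params:,} weights, {macs:,} MACs")  # ~0.29 M weights, ~231 M MACs
```

Even this single mid-sized layer needs hundreds of millions of MACs per input image, which is why the memory-access and compute pressure cited in [1] motivates dedicated hardware acceleration.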
With the development of FPGA-based hardware accelerators, algorithms for improving the accuracy of CNNs are also evolving. Advances in CNN algorithms require many