Citation: Jang, S.; Liu, W.; Cho, Y. Convolutional Neural Network Model Compression Method for Software–Hardware Co-Design. Information 2022, 13, 451. https://doi.org/10.3390/info13100451
Academic Editor: Krzysztof Ejsmont
Received: 31 August 2022
Accepted: 23 September 2022
Published: 26 September 2022
Article
Convolutional Neural Network Model Compression Method for Software–Hardware Co-Design
Seojin Jang 1, Wei Liu 2 and Yongbeom Cho 1,2,*
1 Department of Electrical and Electronics Engineering, Konkuk University, Seoul 05029, Korea
2 Deep ET, Seoul 05029, Korea
* Correspondence: ybcho@konkuk.ac.kr
Abstract: Owing to their high accuracy, deep convolutional neural networks (CNNs) are extensively used. However, they are characterized by high computational complexity, and current CNN systems require real-time performance and acceleration. A graphics processing unit (GPU) is one possible way to improve real-time performance; however, its performance-to-power ratio is poor owing to its high power consumption. By contrast, field-programmable gate arrays (FPGAs) have lower power consumption and a flexible architecture, making them more suitable for CNN implementation. In this study, we propose a method that offers both the speed of CNNs and the low power consumption and parallelism of FPGAs. This solution relies on two primary acceleration techniques: parallel processing of layer resources and pipelining within specific layers. Moreover, a new method is introduced for trading off the domain requirements of speed and design time by implementing an automatic parallel hardware–software co-design CNN using the software-defined system-on-chip tool. We evaluated the proposed method using five networks (MobileNetV1, ShuffleNetV2, SqueezeNet, ResNet-50, and VGG-16) on the ZCU102 FPGA platform. We experimentally demonstrated that our design achieves a higher speed-up than the conventional implementation method: 2.47×, 1.93×, and 2.16× on the ZCU102 for MobileNetV1, ShuffleNetV2, and SqueezeNet, respectively.
Keywords: convolutional neural network; field-programmable gate array; hardware–software co-design
1. Introduction
In recent years, artificial intelligence and deep learning have been extensively used
to solve many real-world problems. Currently, convolutional neural networks (CNNs)
are one of the most advanced deep learning algorithms and are used to solve recognition
problems in several scenarios. CNNs are more accurate than conventional algorithms.
However, many parameters of the convolution operation require a considerable amount of computational resources and memory access [1]. This is a computational challenge for the central processing unit (CPU), as it consumes excessive power. Instead, hardware
accelerators such as a graphics processing unit (GPU), field-programmable gate array
(FPGA), and application-specific integrated circuit (ASIC) have been used to increase
the throughput of CNNs [2,3]. When CNNs are implemented in hardware, latency
is improved, and the energy consumption is reduced. GPUs are the most widely used processors and can accelerate both the training and inference of CNNs. However, they consume excessive power, and power consumption is a key performance metric in modern digital systems.
ASIC designs achieve high throughput and low power consumption but require more
development time and cost. By contrast, FPGAs provide abundant hardware resources, including thousands of floating-point computing units, at lower power consumption. Therefore, FPGA-based accelerators are an efficient alternative to ASICs, offering high throughput and reconfigurability at low power consumption and a reasonable cost.
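As a minimal illustration (our sketch, not the authors' implementation), the high-level synthesis (HLS) C++ fragment below shows how an FPGA exploits the two ideas this paper builds on, parallel processing and pipelining, using standard Xilinx HLS pragmas; the kernel size, channel count, and function name are hypothetical.

// Minimal sketch (assumed sizes): one output pixel of one output channel,
// computed as a multiply-accumulate over a C_IN x K x K input window.
constexpr int K = 3;     // kernel size (hypothetical)
constexpr int C_IN = 16; // input channels (hypothetical)

float conv_pixel(const float window[C_IN][K][K],
                 const float weights[C_IN][K][K]) {
    float acc = 0.0f;
    for (int c = 0; c < C_IN; ++c) {
#pragma HLS PIPELINE II=1 // pipelining: overlap successive channel iterations
        for (int i = 0; i < K; ++i) {
#pragma HLS UNROLL        // parallel processing: replicate the K*K MAC units
            for (int j = 0; j < K; ++j) {
#pragma HLS UNROLL
                acc += window[c][i][j] * weights[c][i][j];
            }
        }
    }
    return acc;
}

Here, PIPELINE overlaps successive iterations of the channel loop, while UNROLL replicates the inner multiply-accumulate hardware so that all K × K products are computed in parallel; a conventional compiler simply ignores the pragmas, so the fragment remains valid C++.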
With the development of FPGA-based hardware accelerators, algorithms for improv-
ing the accuracy of CNNs are also evolving. Advances in CNN algorithms require many