Citation: Zhou, Y.; Chang, H.; Lu, Y.; Lu, X. CDTNet: Improved Image Classification Method Using Standard, Dilated and Transposed Convolutions. Appl. Sci. 2022, 12, 5984. https://doi.org/10.3390/app12125984

Academic Editor: Byung-Gyu Kim
Received: 12 May 2022
Accepted: 10 June 2022
Published: 12 June 2022
Publisher's Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Copyright: © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
CDTNet: Improved Image Classification Method Using Standard, Dilated and Transposed Convolutions

Yuepeng Zhou 1, Huiyou Chang 1,*, Yonghe Lu 2,* and Xili Lu 3

1 School of Computer Science and Engineering, Sun Yat-sen University, Guangzhou 510006, China; zhouyp9@mail2.sysu.edu.cn
2 School of Information Management, Sun Yat-sen University, Guangzhou 510006, China
3 School of Information and Engineering, Shaoguan University, Shaoguan 512005, China; luxili521@163.com
* Correspondence: isschy@mail.sysu.edu.cn (H.C.); luyonghe@mail.sysu.edu.cn (Y.L.)
Abstract: Convolutional neural networks (CNNs) have achieved great success in image classification tasks. In a convolutional operation, a larger input area captures more context information. Stacking several convolutional layers enlarges the receptive field, but it also increases the number of parameters. Most CNN models use pooling layers to extract important features, but pooling operations cause information loss. Transposed convolution can increase the spatial size of the feature maps to recover the lost low-resolution information. In this study, we used two branches with different dilation rates to obtain features of different sizes. Dilated convolution can capture richer information, and the outputs of the two branches are concatenated as the input for the next block. The small feature maps of the top blocks are transposed to increase their spatial size and recover low-resolution prediction maps. We evaluated the model on three image classification benchmark datasets (CIFAR-10, SVHN, and FMNIST) against four state-of-the-art models, namely, VGG16, VGG19, ResNeXt, and DenseNet. The experimental results show that CDTNet achieved lower loss, higher accuracy, and faster convergence in both the training and test stages. The average test accuracy of CDTNet improved by up to 54.81% (on SVHN against VGG19) and by at least 1.28% (on FMNIST against VGG16), which demonstrates that CDTNet has better performance and strong generalization ability, as well as fewer parameters.
Keywords: CDTNet; dilated convolution; transposed convolution; feature fusion; receptive field
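To make the two-branch idea from the abstract concrete, the following is an illustrative sketch, not the authors' implementation: a toy 1-D "valid" convolution with a dilation factor, applied in two branches with different dilation rates whose outputs are concatenated. All function names here are hypothetical; a dilated kernel covers a wider input span without adding parameters.

```python
def dilated_conv1d(signal, kernel, dilation=1):
    """1-D valid convolution with a dilation (holes) factor."""
    k = len(kernel)
    span = (k - 1) * dilation + 1  # effective input span covered per output
    out = []
    for i in range(len(signal) - span + 1):
        out.append(sum(kernel[j] * signal[i + j * dilation] for j in range(k)))
    return out

def two_branch_block(signal, kernel, rate_a=1, rate_b=2):
    """Run two branches with different dilation rates and concatenate them."""
    branch_a = dilated_conv1d(signal, kernel, dilation=rate_a)
    branch_b = dilated_conv1d(signal, kernel, dilation=rate_b)
    return branch_a + branch_b  # concatenation along the feature axis

x = [1, 2, 3, 4, 5, 6]
k = [1, 0, -1]                  # the same 3-tap kernel in both branches
fused = two_branch_block(x, k)
# The dilation-2 branch covers a span of 5 inputs per output while the
# kernel still has only 3 parameters: a larger receptive field at no
# extra parameter cost.
```

The dilation-1 branch behaves like a standard convolution, so concatenating the two branches fuses fine local features with wider-context features in a single block.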
1. Introduction
Convolutional neural networks (CNNs) [1] have been widely applied in many fields, including image classification [2–6], natural language processing (NLP) [7], object detection [8–11], and speech classification [12]. Many CNN models have been developed and improved, and they have been successfully applied in medical fields [13,14], image denoising [15–17], and semantic segmentation [18–20].
The excellent performance of CNNs comes from their wider and deeper models [4]; however, these models also face an increasing memory burden [21], which limits their application in resource-constrained scenarios with high real-time requirements, such as mobile terminals and embedded systems with low hardware resources [22,23].
A CNN usually extracts features through convolutional layers and integrates them through subsampling and fully connected (FC) layers; methods based on deep features can learn the most distinguishable semantic-level features from the original input [24]. Most image classification networks [2,3,25,26] employ successive pooling operations to gradually reduce the feature resolution and extend the receptive field (RF) size, but these pooling operations cause information loss [27].
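The information loss from pooling, and the spatial-size recovery that transposed convolution provides, can be sketched in a few lines. This is hypothetical illustrative code, not from the paper: 2×2 max pooling keeps only 1 of every 4 values, and a stride-2 transposed convolution (here with a trivial all-ones 2×2 kernel, i.e. non-overlapping nearest-neighbour expansion) restores the spatial size but not the discarded values.

```python
def max_pool_2x2(grid):
    """2x2, stride-2 max pooling over a 2-D list of lists."""
    h, w = len(grid), len(grid[0])
    return [[max(grid[r][c], grid[r][c + 1], grid[r + 1][c], grid[r + 1][c + 1])
             for c in range(0, w, 2)]
            for r in range(0, h, 2)]

def transpose_upsample_2x(grid):
    """Stride-2 transposed convolution with a 2x2 all-ones kernel:
    each input value is spread over its own 2x2 output patch."""
    out = []
    for row in grid:
        expanded = [v for v in row for _ in range(2)]
        out.append(expanded)
        out.append(list(expanded))
    return out

x = [[1, 2, 3, 4],
     [5, 6, 7, 8],
     [9, 10, 11, 12],
     [13, 14, 15, 16]]
pooled = max_pool_2x2(x)                  # 4x4 -> 2x2: 12 of 16 values are lost
restored = transpose_upsample_2x(pooled)  # 2x2 -> 4x4: size recovered, detail not
```

In a real network the transposed-convolution kernel is learned rather than fixed, so the upsampled maps can approximate the lost detail instead of merely repeating values.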
In CNNs, each feature map of the output only depends on a certain area of the input; a larger input area can capture more context information [15]. Enlarging the RF can extract