Article
CDTNet: Improved Image Classification Method Using
Standard, Dilated and Transposed Convolutions
Yuepeng Zhou 1, Huiyou Chang 1,*, Yonghe Lu 2,* and Xili Lu 3

1 School of Computer Science and Engineering, Sun Yat-sen University, Guangzhou 510006, China; zhouyp9@mail2.sysu.edu.cn
2 School of Information Management, Sun Yat-sen University, Guangzhou 510006, China
3 School of Information and Engineering, Shaoguan University, Shaoguan 512005, China; luxili521@163.com
* Correspondence: isschy@mail.sysu.edu.cn (H.C.); luyonghe@mail.sysu.edu.cn (Y.L.)
Abstract: Convolutional neural networks (CNNs) have achieved great success in image classification tasks. In a convolutional operation, a larger input area can capture more context information. Stacking several convolutional layers enlarges the receptive field, but it also increases the number of parameters. Most CNN models use pooling layers to extract important features, but pooling operations cause information loss. Transposed convolution can increase the spatial size of feature maps and thereby recover information lost to downsampling. In this study, we used two branches with different dilation rates to obtain features of different sizes. Dilated convolution captures richer information, and the outputs of the two branches are concatenated as the input to the next block. The small feature maps of the top blocks are upsampled by transposed convolutions to recover low-resolution prediction maps. We evaluated the model on three image classification benchmark datasets (CIFAR-10, SVHN, and FMNIST) against four state-of-the-art models, namely, VGG16, VGG19, ResNeXt, and DenseNet. The experimental results show that CDTNet achieved lower loss, higher accuracy, and faster convergence in both the training and test stages. The average test accuracy of CDTNet increased by at most 54.81% on SVHN compared with VGG19 and by at least 1.28% on FMNIST compared with VGG16, which shows that CDTNet delivers better performance and strong generalization ability with fewer parameters.
Keywords: CDTNet; dilated convolution; transposed convolution; feature fusion; receptive field
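
To make the block design described in the abstract concrete, the following is a minimal PyTorch sketch (an illustration only, not the authors' implementation; the channel widths, kernel sizes, dilation rates of 1 and 2, and the pooling step are all assumptions):

import torch
import torch.nn as nn

class DualDilatedBlock(nn.Module):
    # Two parallel 3x3 branches with different dilation rates, concatenated
    # along the channel axis; padding = dilation keeps the spatial size fixed.
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.branch1 = nn.Conv2d(in_ch, out_ch, 3, padding=1, dilation=1)
        self.branch2 = nn.Conv2d(in_ch, out_ch, 3, padding=2, dilation=2)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(torch.cat([self.branch1(x), self.branch2(x)], dim=1))

# A transposed convolution doubles the spatial size of a small feature map,
# recovering resolution lost to earlier downsampling.
upsample = nn.ConvTranspose2d(128, 64, kernel_size=2, stride=2)

x = torch.randn(1, 3, 32, 32)        # e.g., one CIFAR-10 image
feats = DualDilatedBlock(3, 64)(x)   # -> (1, 128, 32, 32)
small = nn.MaxPool2d(4)(feats)       # -> (1, 128, 8, 8), a stand-in for top-block maps
restored = upsample(small)           # -> (1, 64, 16, 16)

Note that the concatenation doubles the channel count, which is why each branch in this sketch produces half of the block's total output channels.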
1. Introduction
Convolutional neural networks (CNNs) [1] have been widely applied in many fields, including image classification [2–6], natural language processing (NLP) [7], object detection [8–11], and speech classification [12]. Many CNN models have been developed and improved, and they have been successfully applied in medical fields [13,14], image denoising [15–17], and semantic segmentation [18–20].
The excellent performance of CNNs comes from their wider and deeper models [4]; however, these models also face an increasing memory burden [21], which limits their application in resource-constrained scenarios with strict real-time requirements, such as mobile terminals and embedded systems with limited hardware resources [22,23].
A CNN usually extracts features through convolutional layers and integrates them through subsampling and fully connected (FC) layers; methods based on deep features can learn the most discriminative semantic-level features from the original input [24]. Most image classification networks [2,3,25,26] employ successive pooling operations to gradually reduce the resolution of features and extend the receptive field (RF), but pooling operations cause information loss [27].
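
As a small numeric illustration of this trade-off (a sketch under assumed layer sizes, not taken from the paper), each 2 × 2, stride-2 pooling stage enlarges the window of the input that a later filter can see while discarding three quarters of the spatial positions:

import torch
import torch.nn as nn

pool = nn.MaxPool2d(kernel_size=2, stride=2)  # halves H and W at each stage
x = torch.randn(1, 16, 32, 32)
for k in range(1, 4):
    x = pool(x)
    # After k poolings, a single 3x3 conv on top covers a (3 * 2**k)-pixel-wide
    # window of the original input, yet only (32 / 2**k)**2 spatial positions
    # remain to carry that information.
    print(f"after {k} pooling(s): {tuple(x.shape)}, 3x3 conv RF = {3 * 2**k} px")
# after 1 pooling(s): (1, 16, 16, 16), 3x3 conv RF = 6 px
# after 2 pooling(s): (1, 16, 8, 8), 3x3 conv RF = 12 px
# after 3 pooling(s): (1, 16, 4, 4), 3x3 conv RF = 24 px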
In CNNs, each element of an output feature map depends only on a certain area of the input, and a larger input area can capture more context information [15]. Enlarging the RF can extract