Article
A Convolutional Autoencoder Topology for Classification in High-Dimensional Noisy Image Datasets

Emmanuel Pintelas 1,*, Ioannis E. Livieris 2 and Panagiotis E. Pintelas 1
Citation: Pintelas, E.; Livieris, I.E.; Pintelas, P.E. A Convolutional Autoencoder Topology for Classification in High-Dimensional Noisy Image Datasets. Sensors 2021, 21, 7731. https://doi.org/10.3390/s21227731

Academic Editors: Zahir M. Hussain and Juan M. Corchado

Received: 12 October 2021; Accepted: 18 November 2021; Published: 20 November 2021
1 Department of Mathematics, University of Patras, 26500 Patras, Greece; pintelas@math.upatras.gr
2 Core Innovation and Technology O.E., 11745 Athens, Greece; livieris@upatras.gr
* Correspondence: e.pintelas@upatras.gr
Abstract: Deep convolutional neural networks have shown remarkable performance in the image classification domain. However, Deep Learning (DL) models are vulnerable to noise and redundant information encapsulated in high-dimensional raw input images, leading to unstable and unreliable predictions. Autoencoders constitute an unsupervised dimensionality reduction technique, proven to filter out noise and redundant information and to create robust and stable feature representations. In this work, in order to address the vulnerability of DL models, we propose a convolutional autoencoder topological model that compresses and filters out noise and redundant information from the initial high-dimensional input images and then feeds this compressed output into convolutional neural networks. Our results reveal the efficiency of the proposed approach, leading to a significant performance improvement compared to Deep Learning models trained on the initial raw images.
Keywords: convolutional autoencoders; dimensionality reduction; deep learning; convolutional neural networks; computer vision; image classification
1. Introduction
Nowadays, convolutional neural networks (CNNs) have considerably flourished, mainly because they have shown notable classification performance in image classification and computer vision tasks [1,2]. However, robustness and stability are major problems to which Deep Learning (DL) models are prone, since it has been shown that they can be fooled even by a tiny amount of perturbation, exhibiting poor and unreliable performance in such cases [3,4].
Moreover, in Machine Learning (ML) image classification tasks, when dealing with high-dimensional data, which usually contain a lot of redundant information and noise, the extraction of reliable knowledge features deteriorates [5]. Extracting only the most important features compresses the initial feature space, leading to a stable and robust latent image representation [2,5,6]. Thus, it is necessary to capture only the most relevant information.
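For intuition, the sketch below illustrates how a stack of strided convolutions can map a raw image onto a much smaller latent representation. It is a minimal Keras-style example with assumed layer sizes and input resolution; it is not the topology proposed in this work.

    from tensorflow.keras import layers, Model, Input

    # Illustrative encoder: compress a 224x224x3 image (150,528 raw values)
    # into a 28x28x8 latent tensor (6,272 values), roughly a 24x reduction.
    inp = Input(shape=(224, 224, 3))
    x = layers.Conv2D(32, 3, strides=2, padding="same", activation="relu")(inp)   # 112x112x32
    x = layers.Conv2D(64, 3, strides=2, padding="same", activation="relu")(x)     # 56x56x64
    latent = layers.Conv2D(8, 3, strides=2, padding="same", activation="relu")(x) # 28x28x8
    encoder = Model(inp, latent)
    print(encoder.output_shape)  # (None, 28, 28, 8)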
Training a supervised DL model with high-dimensional, low-quality image data can lead to overfitting and/or unstable behavior, especially when the training instances are limited or unbalanced. In other words, small pixel changes can lead the model to change its predictions, which implies that it has not exploited the information in the training data and exhibits poor and inefficient performance [7]. Additionally, it is worth highlighting another significant problem: the higher the dimension of the input images, the more the network is affected by the presence of noise, even if the amount of noise is small. Taking these difficulties and constraints into consideration, the application of a preprocessing step, which attempts to reduce the noise in the image data while simultaneously reducing their dimension, is considered essential for improving the performance of the DL model.
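The sketch below outlines such a two-stage scheme: a convolutional autoencoder is first trained in an unsupervised manner to reconstruct, and thereby compress and denoise, the raw images, and a CNN classifier is then trained on the encoder's compressed output. All layer sizes, losses, and hyperparameters are illustrative assumptions rather than the exact topology proposed in this work.

    from tensorflow.keras import layers, Model, Input

    num_classes = 10  # assumed; depends on the dataset at hand

    # Stage 1: convolutional autoencoder trained unsupervised to reconstruct the inputs.
    inp = Input(shape=(224, 224, 3))
    x = layers.Conv2D(32, 3, strides=2, padding="same", activation="relu")(inp)
    x = layers.Conv2D(64, 3, strides=2, padding="same", activation="relu")(x)
    latent = layers.Conv2D(8, 3, strides=2, padding="same", activation="relu")(x)  # compressed code
    y = layers.Conv2DTranspose(64, 3, strides=2, padding="same", activation="relu")(latent)
    y = layers.Conv2DTranspose(32, 3, strides=2, padding="same", activation="relu")(y)
    rec = layers.Conv2DTranspose(3, 3, strides=2, padding="same", activation="sigmoid")(y)

    autoencoder = Model(inp, rec)
    encoder = Model(inp, latent)
    autoencoder.compile(optimizer="adam", loss="mse")
    # autoencoder.fit(x_train, x_train, epochs=50, batch_size=64)  # inputs double as targets

    # Stage 2: CNN classifier trained on the compressed, noise-filtered representation.
    z = Input(shape=encoder.output_shape[1:])  # (28, 28, 8)
    c = layers.Conv2D(64, 3, padding="same", activation="relu")(z)
    c = layers.GlobalAveragePooling2D()(c)
    out = layers.Dense(num_classes, activation="softmax")(c)

    classifier = Model(z, out)
    classifier.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
    # classifier.fit(encoder.predict(x_train), y_train, epochs=20, batch_size=64)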