Article
Classification for Breast Ultrasound Using Convolutional
Neural Network with Multiple Time-Domain Feature Maps
Hyungsuk Kim , Juyoung Park, Hakjoon Lee, Geuntae Im, Jongsoo Lee, Ki-Baek Lee * and Heung Jae Lee *
Department of Electrical Engineering, Kwangwoon University, Seoul 01897, Korea; hskim@kw.ac.kr (H.K.);
419kog@kw.ac.kr (J.P.); cpfl410@kw.ac.kr (H.L.); holsky7@kw.ac.kr (G.I.); dldnxks12@kw.ac.kr (J.L.)
* Correspondence: kblee@kw.ac.kr (K.-B.L.); hjlee@kw.ac.kr (H.J.L.)
Abstract: Ultrasound (US) imaging is widely utilized as a diagnostic screening method, and deep learning has recently drawn attention for the analysis of US images to determine the pathological status of tissues. While low image quality and poor reproducibility are common obstacles in US analysis, the small size of available datasets is an additional limitation for deep learning because it hinders generalization. In this work, a convolutional neural network (CNN) using multiple feature maps, such as entropy and phase images, in addition to the B-mode image, was proposed to classify breast US images. Although B-mode images contain both anatomical and textural information, traditional CNNs have difficulty abstracting these features automatically, especially with small datasets. In the proposed CNN framework, two distinct feature maps were obtained from a B-mode image and utilized as new inputs for training the CNN. The same feature maps can also be generated from the evaluation data and applied to the CNN separately for the final classification decision. Experimental results with 780 breast US images in three categories (benign, malignant, and normal) showed that the proposed CNN framework using multiple feature maps exhibited better performance than the traditional CNN using B-mode images alone for most deep network models.
Keywords: medical ultrasound; breast US images; deep learning; convolutional neural network; B-mode image; entropy image; phase image
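
As an illustration of the multi-feature-map idea summarized in the abstract, the sketch below shows one plausible way to derive an entropy map and a phase map from a single B-mode image and to arrange them alongside the original image as CNN inputs. It is a minimal Python sketch: the local-entropy window, the Hilbert-transform-based phase definition, the normalization, and the channel-stacking arrangement are assumptions for illustration, not the paper's exact procedure (the paper applies the feature maps to the CNN separately rather than necessarily as stacked channels).

import numpy as np
from scipy.signal import hilbert              # analytic signal, used here for an assumed phase-map definition
from skimage.filters.rank import entropy      # local (windowed) Shannon entropy
from skimage.morphology import disk
from skimage.util import img_as_ubyte

def build_feature_maps(bmode, kernel_radius=5):
    """Return the B-mode image with entropy and phase feature maps (H x W x 3)."""
    bmode = (bmode - bmode.min()) / (np.ptp(bmode) + 1e-8)      # normalize gray levels to [0, 1]
    ent = entropy(img_as_ubyte(bmode), disk(kernel_radius))     # local entropy map (window size is an assumption)
    ent = ent / (ent.max() + 1e-8)
    phase = np.angle(hilbert(bmode, axis=0))                    # instantaneous phase along the axial axis (assumed definition)
    phase = (phase + np.pi) / (2.0 * np.pi)                     # rescale phase to [0, 1]
    return np.stack([bmode, ent, phase], axis=-1)               # one possible arrangement of the three maps
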
1. Introduction
Among medical imaging modalities, ultrasound (US) is one of the most commonly utilized in clinical screening and diagnostic applications due to its safety (it uses non-ionizing radiation), portability, cost effectiveness, and real-time data acquisition and display. Despite these advantages, US imaging also has limitations, such as relatively low imaging contrast, degradation of quality caused by noise and speckle, high image variability due to the operator-dependent, hand-held nature of the data acquisition process, and poor image reproducibility across different manufacturers’ US imaging systems. Consequently, a more objective and accurate analysis of US images, called B-mode images, is important for US diagnosis and assessment, as well as for ultrasound-guided interventions and therapy.
For better analysis of US images, computer-aided diagnosis (CAD) systems using machine learning algorithms have been developed and applied to various kinds of features that are calculated and/or estimated from the B-mode images in order to classify or quantify the pathological status of the scanned tissue. In traditional CAD systems, image features including texture, contrast, pattern, morphology, and model-based parameters are first extracted from B-mode images, automatically or manually, and then selected and classified using an automatic classifier such as a support vector machine (SVM) [1] to divide the feature space. Since AlexNet [2], an early-generation convolutional neural network (CNN), won first prize in the 2012 ImageNet Large Scale Visual Recognition Challenge (ILSVRC), deep learning has garnered significant attention
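
As a point of reference for the traditional CAD pipeline described above, the following sketch shows hand-crafted features being classified with an SVM. The feature values, labels, and SVM settings are placeholders chosen for illustration only; none of them come from the paper or its references.

import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Placeholder data: 200 images described by 6 hand-crafted features
# (e.g., texture, contrast, morphology statistics), with 3 classes.
rng = np.random.default_rng(0)
X = rng.random((200, 6))
y = rng.integers(0, 3, 200)           # 0 = benign, 1 = malignant, 2 = normal (assumed encoding)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))   # RBF-kernel SVM divides the feature space
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
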