Article
Interactive Machine Learning-Based Multi-Label Segmentation
of Solid Tumors and Organs
Dimitrios Bounias 1,2,3, Ashish Singh 1,2, Spyridon Bakas 1,2,4, Sarthak Pati 1,2,4, Saima Rathore 1,2, Hamed Akbari 1,2, Michel Bilello 1,2, Benjamin A. Greenberger 1,2,5, Joseph Lombardo 5, Rhea D. Chitalia 1,2,6, Nariman Jahani 1,2,6, Aimilia Gastounioti 1,2,6, Michelle Hershman 2, Leonid Roshkovan 2, Sharyn I. Katz 2, Bardia Yousefi 1,2,6, Carolyn Lou 7,8, Amber L. Simpson 9, Richard K. G. Do 10, Russell T. Shinohara 1,7,8, Despina Kontos 1,2,6, Konstantina Nikita 3 and Christos Davatzikos 1,2,*
Citation: Bounias, D.; Singh, A.; Bakas, S.; Pati, S.; Rathore, S.; Akbari, H.; Bilello, M.; Greenberger, B.A.; Lombardo, J.; Chitalia, R.D.; et al. Interactive Machine Learning-Based Multi-Label Segmentation of Solid Tumors and Organs. Appl. Sci. 2021, 11, 7488. https://doi.org/10.3390/app11167488
Academic Editor: Keun Ho Ryu
Received: 4 July 2021
Accepted: 11 August 2021
Published: 15 August 2021
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Copyright: © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
1 Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, 3700 Hamilton Walk, Philadelphia, PA 19104, USA; dimitrios.bounias@pennmedicine.upenn.edu (D.B.); ashish.singh@pennmedicine.upenn.edu (A.S.); sbakas@upenn.edu (S.B.); patis@upenn.edu (S.P.); saima.rathore@pennmedicine.upenn.edu (S.R.); Hamed.Akbari@pennmedicine.upenn.edu (H.A.); michel.bilello@pennmedicine.upenn.edu (M.B.); Benjamin.Greenberger@jefferson.edu (B.A.G.); Rhea.Chitalia@pennmedicine.upenn.edu (R.D.C.); Nariman.Jahani@pennmedicine.upenn.edu (N.J.); Aimilia.Gastounioti@pennmedicine.upenn.edu (A.G.); Bardia.Yousefi@pennmedicine.upenn.edu (B.Y.); russell.shinohara@pennmedicine.upenn.edu (R.T.S.); Despina.Kontos@pennmedicine.upenn.edu (D.K.)
2 Department of Radiology, Perelman School of Medicine, University of Pennsylvania, 3400 Civic Center Boulevard, Philadelphia, PA 19104, USA; mlhershman@gmail.com (M.H.); Leonid.Roshkovan@pennmedicine.upenn.edu (L.R.); sharyn.katz@pennmedicine.upenn.edu (S.I.K.)
3 School of Electrical and Computer Engineering, National Technical University of Athens, 9 Iroon Polytechniou St, 15780 Athens, Greece; knikita@ece.ntua.gr
4 Department of Pathology and Laboratory Medicine, Perelman School of Medicine, University of Pennsylvania, 3400 Civic Center Boulevard, Philadelphia, PA 19104, USA
5 Department of Radiation Oncology, Sidney Kimmel Medical College & Cancer Center, Thomas Jefferson University, 233 S 10th St, Philadelphia, PA 19104, USA; Joseph.Lombardo@jefferson.edu
6 Computational Breast Imaging Group (CBIG), University of Pennsylvania, 3700 Hamilton Walk, Philadelphia, PA 19104, USA
7 Department of Biostatistics, Epidemiology, and Informatics, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA 19104, USA; louc@pennmedicine.upenn.edu
8 Penn Statistics in Imaging and Visualization Center (PennSIVE), University of Pennsylvania, 423 Guardian Drive, Philadelphia, PA 19104, USA
9 Department of Biomedical and Molecular Sciences, School of Medicine, Queen’s University, 18 Stuart Street, Kingston, ON K7L 3N6, Canada; amber.simpson@queensu.ca
10 Department of Radiology, Memorial Sloan Kettering Cancer Center, 1275 York Avenue, New York, NY 10065, USA; dok@mskcc.org
* Correspondence: Christos.Davatzikos@pennmedicine.upenn.edu
Featured Application: The proposed interactive segmentation method can be used to facilitate faster and more consistent creation of annotations for large-scale studies, enabling subsequent computational analyses. The proposed method combines the strengths of expert-based annotations and machine learning.
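The segmentation quality results reported in the abstract below are expressed as Dice similarity coefficients, the standard overlap measure between a predicted mask and a reference mask. A minimal sketch of computing it for a single label (a generic illustration, not the authors' implementation) could look like this:

```python
import numpy as np

def dice_coefficient(seg_a, seg_b, label=1):
    """Dice similarity coefficient between two label masks for one label:
    2 * |A ∩ B| / (|A| + |B|)."""
    a = (np.asarray(seg_a) == label)
    b = (np.asarray(seg_b) == label)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: define as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom

# Toy 2D example: 2 overlapping foreground voxels, |A| = 3, |B| = 2
pred = np.array([[1, 1, 0],
                 [1, 0, 0],
                 [0, 0, 0]])
truth = np.array([[1, 1, 0],
                  [0, 0, 0],
                  [0, 0, 0]])
print(dice_coefficient(pred, truth))  # 2*2/(3+2) = 0.8
```

For multi-label segmentations such as those produced here, the coefficient is typically computed per label and then averaged or reported separately, as in the per-cohort values quoted in the abstract.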
Abstract: We seek the development and evaluation of a fast, accurate, and consistent method for general-purpose segmentation, based on interactive machine learning (IML). To validate our method, we identified retrospective cohorts of 20 brain, 50 breast, and 50 lung cancer patients, as well as 20 spleen scans, with corresponding ground truth annotations. Utilizing very brief user training annotations and the adaptive geodesic distance transform, an ensemble of SVMs is trained, providing a patient-specific model applied to the whole image. Two experts segmented each cohort twice with our method and twice manually. The IML method was faster than manual annotation by 53.1% on average. We found significant (p < 0.001) overlap difference for spleen (Dice_IML/Dice_Manual = 0.91/0.87), breast tumors (Dice_IML/Dice_Manual = 0.84/0.82), and lung nodules (Dice_IML/Dice_Manual = 0.78/0.83). For intra-rater consistency, a significant (p = 0.003) difference was found for spleen (Dice_IML/Dice_Manual = 0.91/0.89). For inter-rater consistency, significant (p < 0.045) differences were found for spleen (Dice_IML/Dice_Manual = 0.91/0.87), breast