Citation: Li, H.; Liu, C.; Basu, A. Semantic Segmentation Based on Depth Background Blur. Appl. Sci. 2022, 12, 1051. https://doi.org/10.3390/app12031051
Academic Editors: Nunzio Cennamo, Yangquan Chen, Simone Morais, Subhas Mukhopadhyay, Junseop Lee and M. Jamal Deen
Received: 4 December 2021; Accepted: 17 January 2022; Published: 20 January 2022
Article
Semantic Segmentation Based on Depth Background Blur
Hao Li 1, Changjiang Liu 1,* and Anup Basu 2
1 School of Mathematics and Statistics, Sichuan University of Science and Engineering, Zigong 643000, China; stulihao@163.com
2 Department of Computing Science, University of Alberta, Edmonton, AB T6G 2H1, Canada; basu@ualberta.ca
* Correspondence: liuchangjiang@189.cn; Tel.: +86-189-9002-4310
Abstract: Deep convolutional neural networks (CNNs) are effective in image classification and are widely used in image segmentation tasks. Several neural networks have achieved high segmentation accuracy on existing semantic datasets, for instance PASCAL VOC, CamVid, and Cityscapes. However, there are almost no studies of semantic segmentation from the perspective of the dataset itself. In this paper, we analyzed the characteristics of datasets and proposed a novel experimental strategy based on bokeh to weaken the impact of irrelevant background information. This bokeh module processes each image in the inference phase by selecting an appropriate fuzzy factor σ, so that the attention of our network can focus on the categories of interest. Several networks based on fully convolutional networks (FCNs) were used to verify the effectiveness of our method. Extensive experiments demonstrate that our approach generally improves segmentation results on existing datasets, such as PASCAL VOC 2012 and CamVid.
Keywords: bokeh; fully convolutional networks; semantic segmentation
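As a rough illustration of the bokeh idea described in the abstract, the sketch below blurs the background of an image with a Gaussian kernel of standard deviation σ while keeping a depth-derived foreground mask sharp. The function name and mask format are our own assumptions for this sketch, not the authors' released code.

```python
# Illustrative sketch (not the authors' implementation): apply a depth-based
# "bokeh" by blurring background pixels with Gaussian standard deviation sigma.
import cv2
import numpy as np

def depth_bokeh(image: np.ndarray, foreground_mask: np.ndarray, sigma: float) -> np.ndarray:
    """image: H x W x 3 uint8; foreground_mask: H x W, nonzero where depth marks foreground."""
    # Blur the whole frame; OpenCV derives the kernel size from sigma when ksize=(0, 0).
    blurred = cv2.GaussianBlur(image, (0, 0), sigmaX=sigma)
    mask3 = np.repeat((foreground_mask > 0)[:, :, None], 3, axis=2)
    # Keep foreground pixels from the original image, take background pixels from the blurred copy.
    return np.where(mask3, image, blurred)
```

A larger σ suppresses more background detail, which is the "fuzzy factor" the paper tunes per image at inference time.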
1. Introduction
In recent years, an increasing number of researchers have applied convolutional neural networks (CNNs) to pixelwise, end-to-end image segmentation tasks, e.g., semantic segmentation [1–4]. Semantic segmentation can be understood as the task of segmenting each object in an image and annotating it with a distinct color. For instance, people, displays, and aircraft in the PASCAL VOC 2012 dataset are marked in pink, blue, and red, respectively. Playing a significant role in computer vision, semantic segmentation has been widely applied in fields such as autonomous driving [5], robot perception [6], augmented reality [7], and video surveillance [8].
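To make the notion of pixelwise annotation concrete, the short sketch below (our own illustration, with placeholder colors rather than the official PASCAL VOC palette) converts a label map of class indices into a color visualization.

```python
# Toy illustration of a semantic segmentation output: every pixel carries a class
# index, and a palette maps indices to display colors. Palette entries are placeholders.
import numpy as np

PALETTE = {0: (0, 0, 0),        # background
           1: (255, 192, 203),  # e.g., person -> pink
           2: (0, 0, 255),      # e.g., monitor -> blue
           3: (255, 0, 0)}      # e.g., aeroplane -> red

def colorize(label_map: np.ndarray) -> np.ndarray:
    """Convert an H x W array of class indices into an H x W x 3 RGB image."""
    rgb = np.zeros((*label_map.shape, 3), dtype=np.uint8)
    for cls, color in PALETTE.items():
        rgb[label_map == cls] = color
    return rgb
```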
Since the advent of fully convolutional networks (FCNs) [9], the conventional approach to semantic segmentation has been greatly simplified. Various end-to-end network architectures derived from the FCN have been proposed over the years, and segmentation accuracy on existing datasets is now relatively high, or even close to the attainable maximum. The DeepLab series [10–13] proposes atrous (dilated) convolution to mitigate the limited receptive field caused by reducing the amount of down-sampling; the accompanying atrous spatial pyramid pooling, which performs multi-scale feature fusion, significantly improves segmentation accuracy. Yu et al. [14] proposed the bilateral segmentation network, which better preserves the spatial information of the original image while ensuring a sufficient receptive field. From semantic segmentation to real-time semantic segmentation [14–16], and from redundant to lean network architectures, researchers have achieved better segmentation by designing and improving the network structure itself and by adopting extensive data augmentation. However, they have ignored the impact of the characteristics of the dataset itself on the segmentation results.
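For readers unfamiliar with atrous convolution, the following minimal PyTorch sketch shows how parallel dilated 3x3 convolutions enlarge the receptive field without further down-sampling. The channel sizes and dilation rates are illustrative and do not reproduce DeepLab's exact configuration.

```python
# Minimal sketch of atrous (dilated) convolution with multi-rate fusion,
# a simplified take on atrous spatial pyramid pooling (illustrative only).
import torch
import torch.nn as nn

class ASPPLite(nn.Module):
    """Parallel 3x3 convolutions with different dilation rates, fused by a 1x1 conv."""
    def __init__(self, in_ch=256, out_ch=256, rates=(1, 6, 12)):
        super().__init__()
        # padding = dilation keeps the spatial size of the feature map unchanged.
        self.branches = nn.ModuleList([
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=r, dilation=r)
            for r in rates
        ])
        self.fuse = nn.Conv2d(out_ch * len(rates), out_ch, kernel_size=1)

    def forward(self, x):
        return self.fuse(torch.cat([b(x) for b in self.branches], dim=1))

feat = torch.randn(1, 256, 33, 33)     # stand-in for a backbone feature map
print(ASPPLite()(feat).shape)          # spatial size preserved: (1, 256, 33, 33)
```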
Semantic segmentation, as a pixelwise classification task, requires the classification
of every pixel. Nevertheless, not every pixel is of interest to us. A substantial amount