Citation: Chen, S.; Han, J.; Tang, M.; Dong, R.; Kan, J. Encoder-Decoder Structure with Multiscale Receptive Field Block for Unsupervised Depth Estimation from Monocular Video. Remote Sens. 2022, 14, 2906. https://doi.org/10.3390/rs14122906
Academic Editors: M. Jamal Deen, Subhas Mukhopadhyay, Yangquan Chen, Simone Morais, Nunzio Cennamo and Junseop Lee
Received: 23 April 2022; Accepted: 14 June 2022; Published: 17 June 2022
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Copyright: © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Article
Encoder-Decoder Structure with Multiscale Receptive Field
Block for Unsupervised Depth Estimation from
Monocular Video
Songnan Chen 1, Junyu Han 2, Mengxia Tang 2, Ruifang Dong 2 and Jiangming Kan 2,*
1 School of Mathematics and Computer Science, Wuhan Polytechnic University, No. 36 Huanhu Middle Road, Dongxihu District, Wuhan 430048, China; chensongnan@whpu.edu.cn
2 School of Technology, Beijing Forestry University, No. 35 Qinghua East Road, Haidian District, Beijing 100083, China; hanjunyu0801@bjfu.edu.cn (J.H.); tangmengxia@bjfu.edu.cn (M.T.); ruifang_dong@bjfu.edu.cn (R.D.)
* Correspondence: kanjm@bjfu.edu.cn
Abstract: Monocular depth estimation is a fundamental yet challenging task in computer vision, as depth information is lost when 3D scenes are mapped to 2D images. Although deep learning-based methods have led to considerable improvements for this task on single images, most existing approaches still cannot fully overcome this limitation. Supervised learning methods model depth estimation as a regression problem and therefore require large amounts of ground truth depth data for training in real scenarios. Unsupervised learning methods treat depth estimation as the synthesis of a new disparity map, which means that rectified stereo image pairs must be used as the training dataset. To address these problems, we present an encoder-decoder based framework that infers depth maps from monocular video snippets in an unsupervised manner. First, we design an unsupervised learning scheme for the monocular depth estimation task based on the basic principles of structure from motion (SfM); it uses only adjacent video clips, rather than paired training data, as supervision. Second, our method predicts two confidence masks that improve the robustness of the depth estimation model against occlusions. Finally, we leverage the largest-scale and minimum depth loss instead of the multiscale and average loss to improve the accuracy of depth estimation. Experimental results on the benchmark KITTI dataset for depth estimation show that our method outperforms competing unsupervised methods.
Keywords: monocular depth estimation; unsupervised learning methods; structure from motion; confidence mask
1. Introduction
Depth information plays a critical role in robotics and computer vision tasks. Low-precision depth may degrade the performance of many vision systems, such as 3D reconstruction [1,2], 3D object detection [3,4], autonomous driving [5] and semantic segmentation [6,7]. In this paper, we focus primarily on the depth estimation of monocular images without relying on additional sensors. However, this is an ill-posed and inherently ambiguous problem because infinitely many 3D scenes can project to the same 2D image; the projection is therefore irreversible.
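To make this ambiguity concrete, consider a minimal pinhole-camera sketch: scaling a 3D point along its viewing ray leaves its pixel coordinates unchanged, so a single image cannot determine absolute depth. The intrinsic values below are illustrative, not taken from the paper.

```python
import numpy as np

def project(point, f=721.5, cx=609.6, cy=172.9):
    """Pinhole projection of a 3D point (X, Y, Z) to pixel coordinates (u, v)."""
    X, Y, Z = point
    return np.array([f * X / Z + cx, f * Y / Z + cy])

p = np.array([2.0, -1.0, 10.0])
# Scaling the point by any s > 0 keeps it on the same camera ray,
# so it projects to the exact same pixel: the depth Z is unrecoverable.
for s in (0.5, 2.0, 10.0):
    assert np.allclose(project(s * p), project(p))
```

Unsupervised SfM-style training sidesteps the missing ground truth by using this very projection model: predicted depth and camera motion are judged by how well they warp one video frame into its neighbor.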
To address this issue, depth sensors and related algorithms have gradually become very popular. However, current methods for depth estimation have the following disadvantages. Depth sensors based on structured light, such as the Microsoft Kinect, are easily disturbed by varying illumination conditions; moreover, their performance degrades significantly in outdoor environments, and their depth measurement range is limited [8,9]. Light detection and ranging (LiDAR) can provide accurate 3D information, making it a reliable scheme for depth perception in outdoor environments. However, sensors based on this technology