Article
TCSPANet: Two-Staged Contrastive Learning and Sub-Patch
Attention Based Network for PolSAR Image Classification
Yuanhao Cui, Fang Liu *, Xu Liu, Lingling Li and Xiaoxue Qian
Key Laboratory of Intelligent Perception and Image Understanding of Ministry of Education, Xidian University,
Xi’an 710071, China; yhcui@stu.xidian.edu.cn (Y.C.); xuliu@xidian.edu.cn (X.L.); llli@xidian.edu.cn (L.L.);
18031110272@stu.xidian.edu.cn (X.Q.)
* Correspondence: liuf63@xidian.edu.cn; Tel.: +86-136-3681-0137
Abstract: Polarimetric synthetic aperture radar (PolSAR) image classification has made great progress, but some obstacles remain. On the one hand, although a large amount of PolSAR data has been captured, most of it is not labeled with land-cover categories and therefore cannot be fully utilized. On the other hand, annotating PolSAR images relies heavily on domain knowledge and manpower, which makes pixel-level annotation even harder. To alleviate these problems, we integrate contrastive learning and the transformer to propose a novel patch-level PolSAR image classification network, the two-staged contrastive learning and sub-patch attention based network (TCSPANet). First, the two-staged contrastive learning based network (TCNet) is designed to learn representations of PolSAR images without supervision and to obtain discriminative, comparable features for actual land covers. Then, drawing on the transformer, we construct the sub-patch attention encoder (SPAE) to model the context within patch samples. To train TCSPANet, two patch-level datasets are built with unsupervised and semi-supervised methods. At prediction time, a classify-or-split algorithm is put forward to realize non-overlapping, coarse-to-fine patch-level classification. Classification results on multiple PolSAR images obtained with a single trained model show that the proposed model is superior to the compared methods.
Keywords: classification; patch-level; polarimetric synthetic aperture radar (PolSAR); sub-patch attention encoder (SPAE); transformer; two-staged contrastive learning based network (TCNet)
1. Introduction
With the rapid development of spaceborne and airborne polarimetric synthetic aperture radar (PolSAR) systems, a large amount of PolSAR data is available [1,2]. Owing to the fast progress of deep learning [3], a growing number of deep learning based methods have been introduced to PolSAR image classification [4–8]. Although these supervised deep learning methods have improved recognition accuracy to a large extent, they depend on a certain amount of data with human annotations [9]. Compared with hard-to-obtain labeled PolSAR samples, unlabeled PolSAR data has a huge advantage in quantity, yet it is rarely used effectively, which is wasteful.
As a subset of unsupervised learning, self-supervised learning avoids the extensive cost of collecting and annotating large-scale datasets [10]; it leverages the input data itself as supervision and benefits almost all types of downstream tasks [2]. Self-supervised learning approaches mainly fall into one of two classes: generative or discriminative. Discriminative approaches based on contrastive learning in the latent space have recently shown great promise. In [11], Chen et al. proposed a simple framework for contrastive learning of visual representations (SimCLR). Through instance discrimination, SimCLR mines the information hidden in unlabeled data, so as to obtain better sample representations and further improve the performance of downstream classification tasks.
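To make the contrastive objective concrete, the following minimal PyTorch sketch illustrates the NT-Xent loss popularized by SimCLR, in which two augmented views of the same unlabeled sample are pulled together in the latent space while the other samples in the batch act as negatives. It is an illustration under generic assumptions (the names nt_xent_loss, proj_a, proj_b, and temperature are ours), not the implementation used in TCNet.

    # Minimal sketch of the NT-Xent contrastive loss (SimCLR-style instance discrimination).
    import torch
    import torch.nn.functional as F

    def nt_xent_loss(proj_a: torch.Tensor, proj_b: torch.Tensor,
                     temperature: float = 0.5) -> torch.Tensor:
        """proj_a, proj_b: (N, D) projections of two augmented views of the same N samples."""
        n = proj_a.size(0)
        z = F.normalize(torch.cat([proj_a, proj_b], dim=0), dim=1)   # (2N, D) unit vectors
        sim = z @ z.t() / temperature                                 # pairwise cosine similarities
        sim.fill_diagonal_(float('-inf'))                             # exclude self-similarity
        # The positive for sample i is the other view of the same sample: i+N (or i-N).
        targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(sim.device)
        return F.cross_entropy(sim, targets)

    # Usage: loss = nt_xent_loss(encoder(aug1(x)), encoder(aug2(x))), where encoder includes a projection head.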