Article
Hyperspectral Image Classification Using Deep Genome
Graph-Based Approach
Haron Tinega 1, Enqing Chen 1,2,*, Long Ma 1, Richard M. Mariita 3 and Divinah Nyasaka 4
Citation: Tinega, H.; Chen, E.; Ma, L.; Mariita, R.M.; Nyasaka, D. Hyperspectral Image Classification Using Deep Genome Graph-Based Approach. Sensors 2021, 21, 6467. https://doi.org/10.3390/s21196467

Academic Editors: Panagiotis E. Pintelas, Sotiris Kotsiantis and Ioannis E. Livieris

Received: 19 August 2021; Accepted: 23 September 2021; Published: 28 September 2021

Publisher's Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Copyright: © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
1 School of Information Engineering, Zhengzhou University, No. 100 Science Avenue, Zhengzhou 450001, China; tinegaharon@gmail.com (H.T.); ielongma@zzu.edu.cn (L.M.)
2 Henan Xintong Intelligent IOT Co., Ltd., No. 1-303 Intersection of Ruyun Road and Meihe Road, Zhengzhou 450007, China
3 Microbial BioSolutions, 33 Greene Street, Troy, NY 12180, USA; richard.mariita@microbialbiosolutions.com
4 The Kenya Forest Service, Nairobi P.O. Box 30513-00100, Kenya; dondieki@kenyaforestservice.org
* Correspondence: ieeqchen@zzu.edu.cn; Tel.: +86-158-0380-2211
Abstract: Recently developed hybrid models that stack 3D with 2D CNNs in their structure have enjoyed high popularity due to their appealing performance in hyperspectral image classification tasks. On the other hand, biological genome graphs have demonstrated their effectiveness in enhancing the scalability and accuracy of genomic analysis. We propose an innovative deep genome graph-based network (GGBN) for hyperspectral image classification to tap the potential of hybrid models and genome graphs. The GGBN model utilizes 3D-CNNs at the bottom layers and 2D-CNNs at the top layers to process the spectral–spatial features vital to enhancing the scalability and accuracy of hyperspectral image classification. To verify the effectiveness of the GGBN model, we conducted classification experiments on the Indian Pines (IP), University of Pavia (UP), and Salinas Scene (SA) datasets. Using only 5% of the labeled data for training, GGBN achieves classification accuracies of 99.97%, 96.85%, and 99.74% on the SA, IP, and UP datasets, respectively, outperforming the compared state-of-the-art methods.
Keywords: convolutional neural networks; hyperspectral images; hyperspectral image classification; spectral–spatial features; hybrid convolution networks; genome graphs
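The hybrid design summarized in the abstract hinges on one structural step: the 3D-CNN stage outputs a feature volume with a residual spectral depth, which must be folded into the channel axis before the 2D-CNN stage can consume it. The sketch below (an illustration, not the authors' implementation; the function name and tensor sizes are assumptions) shows that reshape in numpy:

```python
import numpy as np

def spectral_to_channels(feat3d: np.ndarray) -> np.ndarray:
    """Fold the spectral depth of a 3D-CNN feature volume into channels.

    Reshapes (batch, channels, depth, height, width) into
    (batch, channels * depth, height, width), the layout expected
    by a subsequent 2D convolution layer.
    """
    b, c, d, h, w = feat3d.shape
    return feat3d.reshape(b, c * d, h, w)

# Hypothetical example: 8 3D filters applied to a 9x9 spatial patch
# with 18 spectral slices remaining after the 3D stage.
x = np.zeros((4, 8, 18, 9, 9))
y = spectral_to_channels(x)
print(y.shape)  # (4, 144, 9, 9)
```

In deep-learning frameworks the same handoff is typically a single reshape/view between the last 3D convolution and the first 2D convolution, so the 2D stage sees each spectral slice of each 3D filter as an ordinary feature channel.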
1. Introduction
Hyperspectral imaging is a combination of spectroscopy and imaging technologies.
It involves using remote sensors to acquire a hyperspectral image (HSI) over the visible,
near-infrared, and infrared wavelengths to specify the complete wavelength spectrum at
each point on the Earth's surface [1]. Several efforts toward the enhancement of smart
cameras/sensors have been made over the past decades to produce high-quality hyperspectral image data for Earth Observation (EO) [2]. The recent improvement in camera
technology that utilizes complementary metal oxide semiconductor (CMOS) technology
and multi-camera schemes has resulted in even more sophisticated smart sensors that use
innovative algorithms such as adaptive cloud correction, which makes them adaptable to
dynamic conditions with uncertain geometric changes and vibrations [3]. When the vision
system or imaging device is combined with the main image processing unit, the resulting
sensor is called the smart camera/sensor. These advancements have led to improvements
in image resolution, acquisition speed, and the capability of providing images in which
single pixels provide information from across the electromagnetic spectrum of the scene
under observation, which in turn has improved the quality and speed of hyperspectral
image processing [1]. The HSI is acquired by moving the vision system across the Earth's surface. The smart sensor raster-scans each scene in an image plane to extract unique spectral signatures, using thousands of spectral bands recorded in different wavebands,