Article
Paradox Elimination in Dempster–Shafer
Combination Rule with Novel Entropy Function:
Application in Decision-Level Multi-Sensor Fusion
Md Nazmuzzaman Khan * and Sohel Anwar
Department of Mechanical and Energy Engineering, IUPUI, Indianapolis, IN 46224, USA; soanwar@iupui.edu
* Correspondence: mdkhan@iupui.edu; Tel.: +1(317)-274-7640
Received: 10 October 2019; Accepted: 30 October 2019; Published: 5 November 2019
Abstract:
Multi-sensor data fusion technology is an important tool in building decision-making
applications. Modified Dempster–Shafer (DS) evidence theory can handle conflicting sensor inputs
and can be applied without any prior information. As a result, DS-based information fusion is
very popular in decision-making applications, but the original DS theory produces counterintuitive
results when combining highly conflicting evidence from multiple sensors. An effective algorithm
offering fusion of highly conflicting information in the spatial domain is not widely reported in the
literature. In this paper, a fusion algorithm is proposed that addresses these limitations
of the original DS framework. A novel entropy function, based on Shannon entropy, is proposed
that captures uncertainty better than Shannon and Deng entropy.
An eight-step algorithm is developed that eliminates the inherent paradoxes of classical DS
theory. Multiple examples are presented to show that the proposed method is effective in handling
conflicting information in the spatial domain. Simulation results show that the proposed algorithm achieves
a competitive convergence rate and accuracy compared to other methods presented in the literature.
Keywords:
Dempster–Shafer evidence theory (DST); uncertainty measure; novel belief entropy;
multi-sensor data fusion; decision-level sensor fusion
1. Introduction
Multi-sensor fusion is the combination of information from multiple (homogeneous
or heterogeneous) sensors in a meaningful way, so that the limitations inherent to any single
sensor or information source can be overcome. Based on the identified strengths and weaknesses of previous work, a
principled definition of information fusion is proposed in Reference [1]: “Information fusion is the
study of efficient methods for automatically or semi-automatically transforming information from
different sources and different points in time into a representation that provides effective support for
human or automated decision making.” A multi-sensor system has two distinct advantages over a
single-sensor system when used with a proper fusion algorithm:
•	A single sensor may produce faulty or erroneous results, and there is no way to correct this other
than by replacing the sensor. A multi-sensor system provides results with diverse accuracies, and
with the help of a proper fusion algorithm, faulty sensors can be detected easily.
•	A multi-sensor system receives information of a wider variety and with different characteristics,
which helps to create a more robust system that is less susceptible to interference.
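To see why a robust combination rule is needed when such evidence conflicts, consider the classical Dempster–Shafer combination rule applied to highly conflicting inputs. The short Python sketch below is purely illustrative (it is not the algorithm proposed in this paper) and assumes mass functions restricted to singleton hypotheses; it reproduces Zadeh's well-known example, in which two nearly contradictory sources force all belief onto the one hypothesis that both consider almost impossible.

```python
# Minimal sketch (not the paper's method): classical Dempster combination for
# two basic probability assignments (BPAs) whose focal elements are singletons.
def dempster_combine(m1, m2):
    """Combine two singleton-focused BPAs with Dempster's rule."""
    hypotheses = set(m1) | set(m2)
    # For singletons, only identical hypotheses intersect, so the agreeing
    # mass for h is m1(h) * m2(h); everything else is conflicting mass.
    joint = {h: m1.get(h, 0.0) * m2.get(h, 0.0) for h in hypotheses}
    conflict = 1.0 - sum(joint.values())  # total conflicting mass K
    if conflict >= 1.0:
        raise ValueError("Total conflict: Dempster's rule is undefined")
    return {h: mass / (1.0 - conflict) for h, mass in joint.items()}

# Zadeh's example: the sources almost fully disagree (A vs. C), yet the rule
# assigns all belief to B, which both sources rate at only 0.01.
m1 = {"A": 0.99, "B": 0.01}
m2 = {"C": 0.99, "B": 0.01}
print(dempster_combine(m1, m2))  # m(A) = 0.0, m(B) = 1.0, m(C) = 0.0
```

This counterintuitive allocation of belief is the kind of paradox the combination framework proposed in this paper is designed to eliminate.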
Now, to combine inputs from different sensors at the decision level to achieve correct object
classification, we need a robust decision-level sensor-fusion algorithm. As shown in Figure 1 [2], sensor
fusion can be represented at three different levels. The signal level can be explained if raw pixels