Citation: Liu, Z.; Hong, H.; Gan, Z.; Wang, J.; Chen, Y. An Improved Method for Evaluating Image Sharpness Based on Edge Information. Appl. Sci. 2022, 12, 6712. https://doi.org/10.3390/app12136712
Academic Editor: Silvia Liberata Ullo
Received: 8 June 2022; Accepted: 30 June 2022; Published: 2 July 2022
Article
An Improved Method for Evaluating Image Sharpness Based on
Edge Information
Zhaoyang Liu, Huajie Hong, Zihao Gan *, Jianhua Wang and Yaping Chen
College of Intelligence Science and Technology, National University of Defense Technology,
Changsha 410073, China; zhaoyangliunudt@163.com (Z.L.); opalqq@163.com (H.H.); wangjh20a@163.com (J.W.);
yaping_chen2021@163.com (Y.C.)
* Correspondence: ganzihaoh@sina.com
Abstract:
In order to improve the subjective and objective consistency of image sharpness evaluation
while meeting the requirement of image content irrelevance, this paper proposes an improved
sharpness evaluation method without a reference image. First, the positions of the edge points are
obtained by a Canny edge detection algorithm based on the activation mechanism. Then, the edge
direction detection algorithm based on the grayscale information of the eight neighboring pixels is
used to acquire the edge direction of each edge point. Next, the edge width at each edge point is computed to construct an edge-width histogram. Finally, based on the performance of three distance factors derived from the histogram information, the type 3 distance factor is introduced into the weighted-average edge-width model to obtain the sharpness evaluation index. The image sharpness evaluation
method proposed in this paper was tested on the LIVE database. The test results were as follows: the
Pearson linear correlation coefficient (CC) was 0.9346, the root mean square error (RMSE) was 5.78,
the mean absolute error (MAE) was 4.9383, the Spearman rank-order correlation coefficient (ROCC)
was 0.9373, and the outlier rate (OR) was 0. In addition, a comparative analysis against two other methods and a real-world shooting experiment verified the superiority and effectiveness of the proposed method.
Keywords:
image sharpness; no-reference; eight-neighborhood algorithm; edge width; distance factor
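To make the pipeline summarized in the abstract more concrete, the following minimal sketch traces the same broad steps in Python with OpenCV. It is an illustrative outline under simplifying assumptions, not the authors' implementation: the standard cv2.Canny call stands in for the activation-based Canny variant, a quantized Sobel gradient stands in for the eight-neighborhood edge-direction detection, and a plain mean of edge widths replaces the distance-factor-weighted histogram model.

```python
# Illustrative outline only: a minimal no-reference sharpness score based on
# edge widths, loosely following the pipeline in the abstract. The Canny
# thresholds, the gradient-based direction estimate, and the unweighted mean
# are assumptions for demonstration, not the paper's method.
import cv2
import numpy as np

def edge_width_at(gray, y, x, direction):
    """Walk along +/- direction from an edge point until consecutive gray
    values stop changing; return the traversed width in pixels."""
    h, w = gray.shape
    dy, dx = direction
    width = 1
    for sign in (1, -1):
        py, px = y, x
        prev = int(gray[y, x])
        while True:
            ny, nx = py + sign * dy, px + sign * dx
            if not (0 <= ny < h and 0 <= nx < w):
                break
            cur = int(gray[ny, nx])
            if abs(cur - prev) < 1:        # profile flattened: edge ramp ended
                break
            prev, py, px = cur, ny, nx
            width += 1
    return width

def sharpness_score(image_bgr):
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 100, 200)      # stand-in for the activation-based Canny
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
    widths = []
    ys, xs = np.nonzero(edges)
    for y, x in zip(ys, xs):
        # Quantize the gradient direction to one of four axes (a stand-in for
        # the eight-neighborhood edge-direction detection described above).
        angle = np.arctan2(gy[y, x], gx[y, x])
        step = int(np.round(angle / (np.pi / 4))) % 4
        direction = [(0, 1), (1, 1), (1, 0), (1, -1)][step]
        widths.append(edge_width_at(gray, y, x, direction))
    # The paper weights an edge-width histogram with a distance factor; here a
    # plain mean is used as a placeholder. Smaller values indicate sharper images.
    return float(np.mean(widths)) if widths else 0.0
```

A call such as sharpness_score(cv2.imread("photo.png")) (hypothetical file name) would return a rough average edge width, where smaller values indicate a sharper image; the paper's index refines this average with the type 3 distance factor described in the abstract.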
1. Introduction
With the significant advantages of non-contact operation, flexibility, and high integration, computer vision measurement has broad application prospects in electronic semiconductors, automotive manufacturing, food packaging, film, and other industrial fields. Image sharpness is the core index for measuring the quality of visual images; therefore, research on methods for evaluating visual image sharpness is one of the key technologies for achieving visual detection [1–3]. Moreover, as users demand ever-higher sharpness in applications such as video chat and HDTV, developing more efficient image sharpness evaluation methods has become a pressing problem.
Generally, image sharpness evaluation methods can be divided into full-reference (FR), reduced-reference (RR), and no-reference (NR) sharpness evaluation methods. Among them, the FR sharpness evaluation methods judge the degree of deviation of the measured image from a sharp reference image [4]. The RR sharpness evaluation methods evaluate the measured image by extracting only part of the information of the reference image [5]. However, in practical applications, undistorted sharp reference images are usually difficult to obtain. Therefore, the NR sharpness evaluation methods have higher research value and wider applicability. Existing NR sharpness evaluation methods are formulated either in the transform domain or in the spatial domain [6]. Transform domain-based methods [7–10] need to transform images from the spatial domain to other domains for