Citation: Boaye Belle, A.; Zhao, Y. Evidence-Based Software Engineering: A Checklist-Based Approach to Assess the Abstracts of Reviews Self-Identifying as Systematic Reviews. Appl. Sci. 2022, 12, 9017. https://doi.org/10.3390/app12189017
Academic Editor: Juan Francisco De Paz Santana
Received: 15 August 2022
Accepted: 6 September 2022
Published: 8 September 2022
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Copyright: © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Article
Evidence-Based Software Engineering: A Checklist-Based Approach to Assess the Abstracts of Reviews Self-Identifying as Systematic Reviews
Alvine Boaye Belle * and Yixi Zhao
Lassonde School of Engineering, York University, 4700 Keele St, Toronto, ON M3J 1P3, Canada
* Correspondence: alvine.belle@lassonde.yorku.ca
Abstract: A systematic review synthesizes the state of knowledge related to a clearly formulated research question and helps in understanding the correlations between exposures and outcomes. It typically relies on explicit, reproducible, and systematic methods that reduce the potential bias that may arise when conducting a review. When properly conducted, a systematic review yields reliable findings from which conclusions and decisions can be drawn. Systematic reviews are increasingly popular and serve several stakeholders, to whom they provide recommendations on how to act based on the review findings; they also help support future research prioritization. A systematic review usually has several components. The abstract is one of the most important, because it usually reflects the content of the review. It may be the only part of the review read by most readers when forming an opinion on a given topic, and it may help more motivated readers decide whether the review is worth reading. However, abstracts are sometimes poorly written and may therefore give a misleading, or even harmful, picture of the review’s contents. To assess the extent to which a review’s abstract is well constructed, we used a checklist-based approach to propose a measure that quantifies the systematicity of review abstracts, i.e., the extent to which they exhibit good reporting quality. Experiments conducted on 151 reviews published in the software engineering field showed that the abstracts of these reviews had suboptimal systematicity.
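A checklist-based measure of this kind can be illustrated with a minimal sketch. The code below assumes the measure is simply the proportion of checklist items an abstract reports; the item names and the scoring function are illustrative assumptions, not the authors' exact instrument.

```python
# Hypothetical sketch: systematicity as the fraction of checklist items an
# abstract reports. The item wordings below are illustrative placeholders
# loosely inspired by abstract-reporting guidelines, not the paper's checklist.
CHECKLIST = [
    "title identifies report as a systematic review",
    "objectives stated",
    "eligibility criteria stated",
    "information sources stated",
    "risk-of-bias assessment described",
    "synthesis of results reported",
    "limitations discussed",
    "funding reported",
]

def systematicity(reported: set, checklist=CHECKLIST) -> float:
    """Return the proportion of checklist items the abstract reports (0..1)."""
    return sum(item in reported for item in checklist) / len(checklist)

# An abstract reporting only two of the eight items scores 2/8 = 0.25.
score = systematicity({"objectives stated", "eligibility criteria stated"})
print(round(score, 2))  # 0.25
```

Under this reading, an abstract covering every item scores 1.0, and "suboptimal systematicity" corresponds to average scores well below that ceiling.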
Keywords: systematic reviews; reporting guideline adherence; PRISMA (preferred reporting items for systematic reviews and meta-analyses) statement; systematicity; evidence-based software engineering
1. Introduction
Systematic reviews are well established in the software engineering field [1]. A systematic review relies on explicit and systematic methods to collect and synthesize the findings of studies that focus on a given research question [2]. It provides high-quality, evidence-based syntheses of efficacy under real-world conditions and supports understanding the correlations between exposures and outcomes [3]. A systematic review also makes it possible to issue recommendations on how to act based on the review findings and helps support future research prioritization [4,5]. The core features of a methodologically sound systematic review include transparency, replicability, and reliance on clear eligibility criteria [6]. Systematic reviews are usually called secondary studies, and the documents they synthesize are usually called primary studies [1]. Systematic reviews are usually resource intensive, e.g., time-consuming, as they typically require that a team of systematic reviewers, i.e., the authors of the review, work over a long period of time (approximately 67.3 weeks) to complete the review [6].
Systematic reviews are crucial for various stakeholders because they allow them to make evidence-based decisions without being overwhelmed by a large volume of research [4,7–10]. These stakeholders include patients, healthcare providers, policy makers,