An Automated Tool for Upgrading Fortran Codes
Lesley Mak 1 and Pooya Taheri 2,3,*

1 Computing Science & Information Systems Department, Langara College, Vancouver, BC V5Y 2Z6, Canada
2 Mechatronic Systems Engineering Department, Simon Fraser University, Surrey, BC V3T 0A3, Canada
3 School of Energy, British Columbia Institute of Technology, Burnaby, BC V5G 3H2, Canada
* Correspondence: ptaheri3@bcit.ca
Abstract: As coding techniques age, it eventually becomes necessary to modernize vulnerable software. However, redeveloping out-of-date code can be a time-consuming task when dealing with a multitude of files. To reduce the amount of reassembly required for Fortran-based projects, in this paper we develop a prototype that automates the manual labor of refactoring individual files. ForDADT (Fortran Dynamic Autonomous Diagnostic Tool) is a Python program designed to reduce the amount of refactoring necessary when compiling Fortran files. In this paper, we demonstrate how ForDADT is used to automate the process of upgrading Fortran code, process the files, and clean up compilation errors. The developed tool automatically updates thousands of files and builds the software, finding and fixing errors using pattern-matching and data-masking algorithms. These modifications address code readability, type safety, portability, and adherence to modern programming practices.
Keywords: error analysis; Fortran; Python; refactoring; software testing
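To make the abstract's description concrete, the following is a minimal Python sketch of the kind of pattern-matching and data-masking pass described above. It is illustrative only: the substitution rules (modernizing archaic relational operators) and the literal-masking scheme are assumptions chosen for demonstration, not ForDADT's actual rules.

    import re
    from pathlib import Path

    # Illustrative upgrade rules: rewrite archaic FORTRAN 77 relational
    # operators into their modern Fortran 90+ equivalents.
    UPGRADE_RULES = [
        (re.compile(r"\.EQ\.", re.IGNORECASE), "=="),
        (re.compile(r"\.NE\.", re.IGNORECASE), "/="),
        (re.compile(r"\.LT\.", re.IGNORECASE), "<"),
        (re.compile(r"\.LE\.", re.IGNORECASE), "<="),
        (re.compile(r"\.GT\.", re.IGNORECASE), ">"),
        (re.compile(r"\.GE\.", re.IGNORECASE), ">="),
    ]

    # Naive single-quoted Fortran string literal (sufficient for a sketch).
    STRING_LITERAL = re.compile(r"'[^']*'")

    def upgrade_line(line: str) -> str:
        """Apply the upgrade rules to one source line, masking string
        literals first so the patterns never rewrite text inside them."""
        literals: list[str] = []

        def stash(match: re.Match) -> str:
            literals.append(match.group(0))
            return f"\x00{len(literals) - 1}\x00"  # opaque placeholder

        masked = STRING_LITERAL.sub(stash, line)
        for pattern, replacement in UPGRADE_RULES:
            masked = pattern.sub(replacement, masked)
        for index, literal in enumerate(literals):  # restore masked literals
            masked = masked.replace(f"\x00{index}\x00", literal)
        return masked

    def upgrade_tree(root: str) -> None:
        """Rewrite every Fortran source file under `root` in place."""
        for path in Path(root).rglob("*.f*"):
            lines = path.read_text().splitlines()
            path.write_text("\n".join(upgrade_line(l) for l in lines) + "\n")

A production tool would also need to respect fixed-form column rules and comment lines, which this sketch ignores.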
1. Introduction
Software testing identifies quality and performance issues within software. In a project environment, testing provides feedback on the software's current state and informs updates to the system's requirements. It often requires significant resources and time to deliver because of the coordination that testing involves.
Project-assembly steps such as compiling and building solutions are necessary parts of the implementation and testing phases of a software development lifecycle. Notably, studies reveal that software validation and testing may cost upwards of 50% of development resources, which indicates how manual code implementation may throttle software development [1,2]. By extension, since code verification must be performed frequently to ensure correctness, it inadvertently contributes to a gradual increase in overhead cost. Defect amplification, the cascading effect whereby each developmental step generates new errors, may be an unavoidable expense if it is left undetected. Errors may cost upwards of three times as much to fix when periodic reviews are not part of the design [3]. Clearly, this sort of software testing model is unsustainable in the current market, and it necessitates a more productive solution.
Given how resource-intensive testing can be, one approach to this dilemma is to apply automation to improve the testing environment. A variety of studies demonstrate how automated regression testing can be optimized to fit this criterion. Recent advances in unit testing focus automation on fault localization [4], selective fault coverage [5], and regression algorithms [6]. To this extent, there is an increasing trend toward automation as developers work to improve the testing quality of their software. According to a survey of Canadian software companies [7], many respondents automate only about 30% of the testing phase, and there appears to be a ceiling on the degree of automation in the testing environment. Automation is therefore relied upon to a point, but manual testing is still frequently used to cover testing exceptions.