Citation: Mak, L.; Taheri, P. An Automated Tool for Upgrading Fortran Codes. Software 2022, 1, 299–315. https://doi.org/10.3390/software1030014
Academic Editors: Sanjay Misra, Robertas Damaševičius and Bharti Suri
Received: 16 June 2022
Accepted: 8 August 2022
Published: 13 August 2022
Publisher's Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Copyright: © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Article
An Automated Tool for Upgrading Fortran Codes
Lesley Mak 1 and Pooya Taheri 2,3,*
1 Computing Science & Information Systems Department, Langara College, Vancouver, BC V5Y 2Z6, Canada
2 Mechatronic Systems Engineering Department, Simon Fraser University, Surrey, BC V3T 0A3, Canada
3 School of Energy, British Columbia Institute of Technology, Burnaby, BC V5G 3H2, Canada
* Correspondence: ptaheri3@bcit.ca
Abstract: As coding techniques age, there comes a time when vulnerable software must be modernized. However, redeveloping out-of-date code can be a time-consuming task when dealing with a multitude of files. To reduce the amount of reassembly required for Fortran-based projects, in this paper we develop a prototype that automates the manual labor of refactoring individual files. The ForDADT (Fortran Dynamic Autonomous Diagnostic Tool) project is a Python program designed to reduce the amount of refactoring necessary when compiling Fortran files. In this paper, we demonstrate how ForDADT is used to automate the process of upgrading Fortran code, processing the files, and cleaning up compilation errors. The developed tool automatically updates thousands of files and builds the software to find and fix errors using pattern-matching and data-masking algorithms. These modifications address code readability, type safety, portability, and adherence to modern programming practices.
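The pattern-matching approach described above can be illustrated with a minimal Python sketch. The rule set, function names, and specific substitutions below are illustrative assumptions, not ForDADT's actual implementation; they show only the general idea of mapping archaic Fortran constructs to modern equivalents via regular expressions.

```python
import re

# Hypothetical upgrade rules: each pair maps a legacy Fortran pattern to a
# modern replacement. These examples are assumptions for illustration only.
UPGRADE_RULES = [
    # Non-standard REAL*8 declaration -> kind-based declaration
    (re.compile(r'\bREAL\*8\b', re.IGNORECASE), 'REAL(KIND=8)'),
    # Fixed-form comment marker 'C' in column 1 -> free-form '!' comment
    (re.compile(r'^[Cc](?=\s)'), '!'),
    # Archaic relational operator .EQ. -> modern ==
    (re.compile(r'\.EQ\.', re.IGNORECASE), '=='),
]

def upgrade_line(line: str) -> str:
    """Apply every upgrade rule to a single line of Fortran source."""
    for pattern, replacement in UPGRADE_RULES:
        line = pattern.sub(replacement, line)
    return line

def upgrade_source(source: str) -> str:
    """Upgrade a whole file's contents line by line, preserving layout."""
    return '\n'.join(upgrade_line(l) for l in source.splitlines())
```

In a batch setting, a driver would apply `upgrade_source` to each file in the project tree, recompile, and inspect the remaining compiler diagnostics for patterns not yet covered by the rule set.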
Keywords: error analysis; Fortran; Python; refactoring; software testing
1. Introduction
Software testing identifies quality and performance issues within the software. In a project environment, testing provides feedback on the software's current state and informs updates to the system's requirements. It often requires significant resources or time to deliver because of the coordination that testing involves.
Project assembly tasks such as compiling and building solutions are necessary parts of the implementation and testing phases of a software development lifecycle. Notably, studies reveal that software validation and testing may cost upwards of 50% of the development resources, which indicates how manual code implementation may throttle software development [1,2]. By extension, since code verification must be performed frequently to ensure correctness, it inadvertently contributes to a gradual increase in overhead cost. Defect amplification, defined as a cascading effect in which errors generated in one developmental step multiply in later steps, may be an unavoidable expense if it is left undetected. Errors may cost upwards of three times as much when periodic reviews are not part of the design [3]. Indubitably, this sort of software testing model is unsustainable in the current market, and it necessitates a more productive solution.
Given how resource-intensive testing can be, one approach to this dilemma is to apply automation to improve the testing environment. A variety of studies demonstrate how automated regression testing can be optimized to fit this criterion. Recent advances in unit testing utilize fault localization [4], selective fault coverage [5], and regression algorithms [6] as areas of focus in automation. To this extent, there is an increasing trend toward automation as developers work to improve the testing quality of their software. According to a survey of Canadian software companies [7], many respondents automate only about 30% of the testing phase, suggesting a ceiling on the degree of automation in the testing environment. Thus, while there is a certain reliance on automation, manual testing is still frequently used to cover testing exceptions.