Citation: Saraf, T.O.Q.; Fuad, N.; Taujuddin, N.S.A.M. Framework of Meta-Heuristic Variable Length Searching for Feature Selection in High-Dimensional Data. Computers 2023, 12, 7. https://doi.org/10.3390/computers12010007

Academic Editors: Phivos Mylonas, Katia Lida Kermanidis and Manolis Maragoudakis

Received: 30 October 2022; Revised: 16 December 2022; Accepted: 17 December 2022; Published: 27 December 2022

Copyright: © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

Article
Framework of Meta-Heuristic Variable Length Searching for Feature Selection in High-Dimensional Data

Tara Othman Qadir Saraf 1,2,3,*, Norfaiza Fuad 2 and Nik Shahidah Afifi Md Taujuddin 2

1 Faculty of Computer Science and Information Technology, Universiti Tun Hussein Onn Malaysia (UTHM), Parit Raja 86400, Johor, Malaysia
2 Faculty of Electrical and Electronic Engineering, Universiti Tun Hussein Onn Malaysia (UTHM), Parit Raja 86400, Johor, Malaysia
3 Faculty of Software and Informatics Engineering, College of Engineering, Salahaddin University, Erbil 44001, Kurdistan, Iraq
* Correspondence: alsaraftara@gmail.com; Tel.: +60-0113-7065-258
Abstract: Feature selection in high-dimensional space is a combinatorial optimization problem of an NP-hard nature. Current feature selection algorithms widely use meta-heuristic searching with information theory-based criteria embedded in the fitness function to select the relevant features. However, the increase in the dimension of the solution space leads to high computational cost and a risk of premature convergence. In addition, assuming a fixed length for the optimal feature subset may produce sub-optimal solutions. Alternatively, variable length searching explores solutions of varying lengths, which yields better optimality at a lower computational load. The literature contains various meta-heuristic algorithms with variable length searching, all of which support searching in high-dimensional problems; however, their relative performance remains uncertain. In order to fill this gap, this article proposes a novel framework for comparing variants of variable length-searching meta-heuristic algorithms in the application of feature selection. For this purpose, we implemented four types of variable length meta-heuristic searching algorithms, namely VLBHO-Fitness, VLBHO-Position, variable length particle swarm optimization (VLPSO) and genetic variable length (GAVL), and compared them in terms of classification metrics. The evaluation showed the overall superiority of VLBHO over the other algorithms in terms of achieving lower fitness values when optimizing mathematical functions of the variable length type.
Keywords: feature selection; high dimensional space; meta-heuristic; solution space; variable length
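The central idea of the abstract, searching over candidate solutions whose length is itself variable, can be sketched as follows. This is a minimal, hypothetical illustration and not the authors' implementation: the encoding (a variable-length list of feature indices), the toy fitness function and the length-changing mutation are all assumptions made for the sake of the example.

```python
import random

# Illustrative only: a candidate solution is a variable-length list of
# feature indices, so the search is not tied to a fixed subset size.
N_FEATURES = 20
RELEVANT = {2, 5, 11}  # toy ground truth: only these features matter


def fitness(solution):
    """Lower is better: penalize missed relevant features, then subset size."""
    selected = set(solution)
    missed = len(RELEVANT - selected)
    return missed * 10 + len(selected)  # relevance term + parsimony term


def random_solution(rng):
    k = rng.randint(1, N_FEATURES)  # length itself is sampled
    return rng.sample(range(N_FEATURES), k)


def mutate(solution, rng):
    """Length-changing mutation: grow, shrink, or swap one index."""
    s = list(solution)
    op = rng.choice(["grow", "shrink", "swap"])
    if op == "grow" and len(s) < N_FEATURES:
        s.append(rng.choice([f for f in range(N_FEATURES) if f not in s]))
    elif op == "shrink" and len(s) > 1:
        s.pop(rng.randrange(len(s)))
    elif s:
        s[rng.randrange(len(s))] = rng.randrange(N_FEATURES)
    return s


def search(iterations=3000, seed=0):
    """Greedy (1+1) search that accepts equal-or-better candidates."""
    rng = random.Random(seed)
    best = random_solution(rng)
    for _ in range(iterations):
        cand = mutate(best, rng)
        if fitness(cand) <= fitness(best):
            best = cand
    return best


best = search()
```

Because the mutation operator can change the solution length, the search itself discovers how many features to keep, rather than having that number fixed in advance.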
1. Introduction
Feature selection has become a significant process in building most machine learning systems. Its role is to exclude non-relevant features and preserve only relevant features for the purposes of training and prediction [1]. Feature selection appears in different areas, such as pattern recognition, data mining and statistical analysis [2]. The process is regarded as important for improving prediction performance, because less relevant features are excluded, and for increasing both memory and computational efficiency when the data are high-dimensional [3].
The literature contains three main classes of methods for feature selection [4]. The first one is the wrapper [5], which measures the usefulness of features based on the performance of a classifier through repeated steps of training and cross-validation; examples include recursive feature elimination, sequential feature selection and meta-heuristic algorithms.
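As a rough illustration of the wrapper idea, the sketch below scores candidate subsets by actually fitting and evaluating a classifier inside a greedy forward-selection loop. The dataset, the nearest-centroid rule and all names here are illustrative assumptions, not the paper's code.

```python
import random

random.seed(1)


def make_data(n=200, n_features=6):
    """Toy binary data: only features 0 and 3 carry class signal."""
    X, y = [], []
    for _ in range(n):
        label = random.randint(0, 1)
        row = [random.gauss(0, 1) for _ in range(n_features)]
        row[0] += 3 * label  # informative feature
        row[3] -= 3 * label  # informative feature
        X.append(row)
        y.append(label)
    return X, y


def centroid_accuracy(X, y, features):
    """Fit a nearest-centroid classifier on `features` and score it."""
    if not features:
        return 0.0
    cents = {}
    for c in (0, 1):
        rows = [[x[f] for f in features] for x, lab in zip(X, y) if lab == c]
        cents[c] = [sum(col) / len(rows) for col in zip(*rows)]
    correct = 0
    for x, lab in zip(X, y):
        v = [x[f] for f in features]
        d = {c: sum((a - b) ** 2 for a, b in zip(v, cents[c])) for c in cents}
        correct += min(d, key=d.get) == lab
    return correct / len(y)


def forward_select(X, y, n_features):
    """Greedy forward selection: add whichever feature helps the classifier."""
    selected, best_acc = [], 0.0
    improved = True
    while improved:
        improved = False
        for f in range(n_features):
            if f in selected:
                continue
            acc = centroid_accuracy(X, y, selected + [f])
            if acc > best_acc:
                best_acc, best_f, improved = acc, f, True
        if improved:
            selected.append(best_f)
    return selected, best_acc


X, y = make_data()
feats, acc = forward_select(X, y, 6)
```

The defining trait of the wrapper class is visible here: every candidate subset is judged by the downstream classifier itself, which is accurate but costs one model evaluation per candidate.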
The second one is the filter [6], which measures the statistical properties of features and their relevance without relying on a classifier for repeated steps of training and cross-validation; examples include information gain, the chi-square test, the Fisher score, correlation and the variance threshold. It is regarded as efficient, but less accurate than the wrapper method. The third one is the embedded