Citation: Saraf, T.O.Q.; Fuad, N.; Taujuddin, N.S.A.M. Framework of Meta-Heuristic Variable Length Searching for Feature Selection in High-Dimensional Data. Computers 2023, 12, 7. https://doi.org/10.3390/computers12010007
Academic Editors: Phivos Mylonas, Katia Lida Kermanidis and Manolis Maragoudakis
Received: 30 October 2022; Revised: 16 December 2022; Accepted: 17 December 2022; Published: 27 December 2022
Copyright: © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Article
Framework of Meta-Heuristic Variable Length Searching for
Feature Selection in High-Dimensional Data
Tara Othman Qadir Saraf 1,2,3,*, Norfaiza Fuad 2 and Nik Shahidah Afifi Md Taujuddin 2
1 Faculty of Computer Science and Information Technology, Universiti Tun Hussein Onn Malaysia (UTHM), Parit Raja 86400, Johor, Malaysia
2 Faculty of Electrical and Electronic Engineering, Universiti Tun Hussein Onn Malaysia (UTHM), Parit Raja 86400, Johor, Malaysia
3 Faculty of Software and Informatics Engineering, College of Engineering, Salahaddin University, Erbil 44001, Kurdistan, Iraq
* Correspondence: alsaraftara@gmail.com; Tel.: +60-0113-7065-258
Abstract:
Feature selection in a high-dimensional space is a combinatorial optimization problem of an NP-hard nature. Current feature selection algorithms widely use meta-heuristic searching with information-theoretic criteria embedded in the fitness function to select the relevant features. However, as the dimension of the solution space grows, the computational cost increases and so does the risk of premature convergence. In addition, assuming a fixed length for the optimal number of features might yield sub-optimal solutions. Alternatively, variable length searching explores solution spaces of varying lengths, which leads to better optimality and a lighter computational load. The literature contains various meta-heuristic algorithms with variable length searching, all of which can search high-dimensional problems, but their relative performance remains uncertain. In order to fill this gap, this article proposes a novel framework for comparing various variants of variable length-searching meta-heuristic algorithms in the application of feature selection. For this purpose, we implemented four types of variable length meta-heuristic searching algorithms, namely VLBHO-Fitness, VLBHO-Position, variable length particle swarm optimization (VLPSO) and genetic variable length (GAVL), and we compared them in terms of classification metrics. The evaluation showed the overall superiority of VLBHO over the other algorithms in accomplishing lower fitness values when optimizing mathematical functions of the variable length type.
Keywords: feature selection; high dimensional space; meta-heuristic; solution space; variable length
1. Introduction
Feature selection has become a significant process in building most machine learning systems. Its role is to exclude non-relevant features and to preserve only the relevant ones for the goals of training and prediction [1]. Feature selection appears in different areas, such as pattern recognition, data mining and statistical analysis [2]. The process of feature selection is regarded as important for improving the performance of prediction, because less relevant features are excluded, and for increasing both memory and computation efficiency when the data are high-dimensional [3].
The literature contains three main classes of methods for feature selection [4]. The first one is the wrapper [5], which measures the usefulness of features based on the performance of a classifier that is repeatedly trained and cross-validated on candidate feature subsets; examples include recursive feature elimination, sequential feature selection and meta-heuristic algorithms.
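As a minimal illustration of the wrapper idea (not taken from the paper), the sketch below performs greedy sequential forward selection, scoring each candidate feature subset by the leave-one-out accuracy of a simple nearest-centroid classifier. The toy dataset and the classifier are illustrative assumptions:

```python
# Wrapper-style feature selection sketch: greedy forward selection,
# scored by leave-one-out accuracy of a nearest-centroid classifier.
# Pure Python; dataset and classifier are toy assumptions.

def nearest_centroid_accuracy(X, y, features):
    """Leave-one-out accuracy using only the chosen feature indices."""
    correct = 0
    for i in range(len(X)):
        # Build per-class centroids from all samples except the held-out one.
        sums, counts = {}, {}
        for j, (row, label) in enumerate(zip(X, y)):
            if j == i:
                continue
            acc = sums.setdefault(label, [0.0] * len(features))
            for k, f in enumerate(features):
                acc[k] += row[f]
            counts[label] = counts.get(label, 0) + 1
        centroids = {c: [s / counts[c] for s in v] for c, v in sums.items()}
        # Predict the held-out sample as the class of the closest centroid.
        pred = min(centroids, key=lambda c: sum(
            (X[i][f] - centroids[c][k]) ** 2 for k, f in enumerate(features)))
        correct += pred == y[i]
    return correct / len(X)

def forward_select(X, y, max_features):
    """Greedily add the feature that most improves the wrapper score."""
    selected, remaining, best_score = [], list(range(len(X[0]))), 0.0
    while remaining and len(selected) < max_features:
        score, f = max((nearest_centroid_accuracy(X, y, selected + [f]), f)
                       for f in remaining)
        if score <= best_score:  # stop when no feature improves the score
            break
        best_score = score
        selected.append(f)
        remaining.remove(f)
    return selected, best_score

# Toy data: feature 0 separates the classes; features 1 and 2 are noise.
X = [[0.0, 5.0, 1.0], [0.2, 1.0, 9.0], [0.1, 7.0, 4.0],
     [1.0, 6.0, 2.0], [1.2, 2.0, 8.0], [0.9, 7.0, 5.0]]
y = [0, 0, 0, 1, 1, 1]
features, score = forward_select(X, y, max_features=2)
print(features, score)  # → [0] 1.0
```

Because the classifier is inside the selection loop, each candidate subset requires a full round of training and evaluation, which is why wrappers tend to be more accurate but costlier than filters.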
The second one is the filter [6], which measures the statistical properties of features and their relevance without relying on a classifier, thereby avoiding the repeated training and cross-validation steps that wrapper-based feature selection requires; examples include information gain, the chi-square test, the Fisher score, correlation and the variance threshold. It is regarded as efficient, but it is less accurate than the wrapper method. The third one is the embedded
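To contrast with the wrapper approach, a filter can be sketched as a simple variance threshold that discards (near-)constant features using only the statistics of the data, with no classifier in the loop. This pure-Python sketch is illustrative; the data and threshold are assumptions, not from the paper:

```python
# Filter-style feature selection sketch: a variance threshold keeps only
# features whose variance exceeds a cutoff. No classifier is involved.
# Pure Python; the toy data and threshold are illustrative assumptions.

def variance(values):
    """Population variance of a list of numbers."""
    mean = sum(values) / len(values)
    return sum((v - mean) ** 2 for v in values) / len(values)

def variance_threshold(X, threshold):
    """Return indices of features whose variance exceeds the threshold."""
    kept = []
    for f in range(len(X[0])):
        column = [row[f] for row in X]
        if variance(column) > threshold:
            kept.append(f)
    return kept

# Toy data: feature 1 is constant, so the filter discards it.
X = [[2.0, 1.0, 3.5],
     [4.0, 1.0, 0.5],
     [6.0, 1.0, 2.5],
     [8.0, 1.0, 1.5]]
print(variance_threshold(X, threshold=0.1))  # → [0, 2]
```

Since each feature is scored once from its own statistics, the cost is linear in the number of features, which is the efficiency advantage of filters noted above; the price is that feature relevance is judged without reference to the classifier that will ultimately be used.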