Article
Feature Selection for Regression Based on Gamma Test Nested Monte Carlo Tree Search

Ying Li 1, Guohe Li 1,* and Lingun Guo 1,2
Citation: Li, Y.; Li, G.; Guo, L. Feature Selection for Regression Based on Gamma Test Nested Monte Carlo Tree Search. Entropy 2021, 23, 1331. https://doi.org/10.3390/e23101331

Academic Editors: Luis Hernández-Callejo, Sergio Nesmachnow and Sara Gallardo Saavedra

Received: 31 August 2021
Accepted: 7 October 2021
Published: 12 October 2021
Copyright: © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
1 Beijing Key Lab of Petroleum Data Mining, Department of Geophysics, China University of Petroleum, Beijing 102249, China; 2016315014@student.cup.edu.cn (Y.L.); 2019310406@student.cup.edu.cn (L.G.)
2 College of Software, Henan Normal University, Xinxiang 453007, China
* Correspondence: lgh102200@sina.com
Abstract: This paper investigates nested Monte Carlo tree search (NMCTS) for feature selection in regression tasks. NMCTS starts from an empty subset and uses the search results of lower-level nested simulations; at level 0, moves are made at random until the path reaches a leaf node. To adapt the search to regression tasks, the Gamma test is introduced to play the role of the reward function at the end of each simulation. Its Vratio statistic is also incorporated into the original UCB1-Tuned selection rule and into the design of the stopping conditions for the selection and simulation phases. The proposed GNMCTS method was tested on seven numeric datasets and compared with six other feature selection methods. It outperforms the vanilla MCTS framework while preserving the relevant information of the original feature space. The experimental results demonstrate that GNMCTS is a robust and effective feature selection tool that accomplishes the task within a reasonable computation budget.
Keywords: feature selection; regression; nested Monte Carlo tree search (NMCTS); filter; Gamma test; GNMCTS
1. Introduction
Feature selection is a common step in data pre-processing. It is conventionally categorized into filter, wrapper and embedded methods. A filter method generates an optimal feature subset according to a chosen evaluation function and is independent of the subsequent classifier or regressor, so it reaches a final result faster. By contrast, a wrapper method evaluates feature subsets by the results of the classifier or regressor itself; it can therefore achieve better performance on that model, but the whole process takes longer. An embedded method integrates feature selection and model training, using the learned hypothesis to select features during model optimization. When a flexible combination of models is desired, a filter method is a good choice.
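The filter approach above can be illustrated with a minimal sketch: rank features by a model-independent criterion and keep the top ones. Here absolute Pearson correlation with the target stands in for the criterion purely as an illustrative assumption; it is not the Gamma test used in this paper, and the toy data are invented for the example.

```python
# Minimal sketch of a filter-style feature selector: score each feature by its
# absolute Pearson correlation with the target, independently of any
# downstream regressor. The criterion and data are illustrative assumptions.

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

def filter_select(X, y, k):
    """Return the indices of the k features most correlated with y."""
    scores = [abs(pearson(col, y)) for col in zip(*X)]  # one score per column
    return sorted(range(len(scores)), key=lambda j: -scores[j])[:k]

# Toy data: feature 0 tracks y exactly, feature 1 behaves like noise.
X = [[1, 5], [2, 3], [3, 8], [4, 1], [5, 9]]
y = [2, 4, 6, 8, 10]
print(filter_select(X, y, 1))  # → [0]
```

A wrapper method would instead refit the regressor for every candidate subset, which is why it is slower but better tuned to the final model.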
The Monte Carlo tree search (MCTS) method has achieved state-of-the-art performance in the game domain, for example in Go [1,2], LOA, Bubble Breaker, SameGame, etc. [3]. These games can be viewed as large-scale Markov decision processes; from this perspective, MCTS can also handle online planning, route scheduling and combinatorial optimization problems. The success of AlphaGo has had a profound influence on artificial intelligence (AI) approaches. Many reinforcement learning methods have been adapted to feature selection problems with satisfactory results; in particular, several fine MCTS frameworks for feature selection have been developed [4–6]. These can be categorized as filter or wrapper methods depending on the specific framework design. On the one hand, classifier or regressor results can be returned directly as the reward; on the other hand, an evaluation value calculated from criteria such as information gain or Fisher's score can be used as the reward during iterations, in which case the process can be considered a filter method. To be specific, the tree search combines a selective strategy with a simulation strategy called rollout to obtain the