Entropy
Article
Feature Selection for Regression Based on Gamma Test Nested Monte Carlo Tree Search
Ying Li 1, Guohe Li 1,* and Lingun Guo 1,2
 
Citation: Li, Y.; Li, G.; Guo, L. Feature Selection for Regression Based on Gamma Test Nested Monte Carlo Tree Search. Entropy 2021, 23, 1331. https://doi.org/10.3390/e23101331

Academic Editors: Luis Hernández-Callejo, Sergio Nesmachnow and Sara Gallardo Saavedra

Received: 31 August 2021; Accepted: 7 October 2021; Published: 12 October 2021

Publisher's Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Copyright: © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
1 Beijing Key Lab of Petroleum Data Mining, Department of Geophysics, China University of Petroleum, Beijing 102249, China; 2016315014@student.cup.edu.cn (Y.L.); 2019310406@student.cup.edu.cn (L.G.)
2 College of Software, Henan Normal University, Xinxiang 453007, China
* Correspondence: lgh102200@sina.com
Abstract: This paper investigates nested Monte Carlo tree search (NMCTS) for feature selection on regression tasks. NMCTS starts from an empty subset and uses the search results of lower-nesting-level simulations; level 0 plays random moves until the path reaches a leaf node. To accomplish feature selection on regression tasks, the Gamma test is introduced to play the role of the reward function at the end of each simulation. The Vratio concept of the Gamma test is also combined with the original UCT-tuned1 formula and with the design of the stopping conditions in the selection and simulation phases. The proposed GNMCTS method was tested on seven numeric datasets and compared with six other feature selection methods. It performs better than the vanilla MCTS framework and preserves the relevant information in the original feature space. The experimental results demonstrate that GNMCTS is a robust and effective tool for feature selection that can accomplish the task within a reasonable computation budget.
Keywords: feature selection; regression; nested Monte Carlo tree search (NMCTS); filter; Gamma test; GNMCTS
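The abstract leans on the Gamma test as the simulation reward. A minimal sketch of that statistic, assuming the standard near-neighbour formulation (regress half mean-squared output differences on mean-squared input distances over p nearest neighbours; the function name and the demo data are illustrative, not the paper's setup), might look like:

```python
import numpy as np

def gamma_test(X, y, p=10):
    """Estimate the noise variance (Gamma) and Vratio for data (X, y).

    For k = 1..p, delta_k is the mean squared distance from each point
    to its k-th nearest neighbour in input space, and gamma_k is half
    the mean squared difference of the corresponding outputs. The
    intercept of the least-squares line gamma = A * delta + Gamma
    estimates the output noise variance; Vratio = Gamma / Var(y) is
    close to 0 for a smooth, noise-free relationship.
    """
    X = np.asarray(X, dtype=float)
    y = np.asarray(y, dtype=float)
    n = len(y)
    # Pairwise squared Euclidean distances in input space.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    np.fill_diagonal(d2, np.inf)            # exclude self-matches
    order = np.argsort(d2, axis=1)[:, :p]   # p nearest neighbours per point
    rows = np.arange(n)
    delta = np.array([d2[rows, order[:, k]].mean() for k in range(p)])
    gamma = np.array([0.5 * ((y[order[:, k]] - y) ** 2).mean() for k in range(p)])
    slope, intercept = np.polyfit(delta, gamma, 1)
    return intercept, intercept / y.var()

# Illustrative check: the same smooth target with and without added noise.
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(400, 2))
y_clean = np.sin(2 * np.pi * X[:, 0]) + X[:, 1]
y_noisy = y_clean + rng.normal(0.0, 0.3, size=400)
g_clean, v_clean = gamma_test(X, y_clean)
g_noisy, v_noisy = gamma_test(X, y_noisy)
```

In a feature-selection loop, a candidate subset would be scored by running this on the corresponding columns of X: subsets that keep the relevant features yield a lower Vratio, which can serve as the simulation reward described above.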
1. Introduction
Feature selection is a commonly used procedure in data pre-processing. It is further categorized into filter, wrapper and embedded methods. A filter method generates an optimal feature subset according to a certain evaluation function; it is independent of any subsequent classifier or regressor and can therefore obtain the final result faster. By contrast, a wrapper method evaluates feature subsets according to the results of a classifier or regressor. It can thus achieve better performance on that classifier or regressor, but the whole process takes longer. An embedded method integrates feature selection and model training, using the learned hypothesis to accomplish feature selection during model-optimized training. To achieve a more flexible model combination, the filter method is a good choice.
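The filter/wrapper contrast above can be made concrete with a toy sketch (the data, the correlation-based filter score and the least-squares wrapper score are all illustrative choices, not the paper's experimental setup):

```python
import numpy as np

# Toy data: the target depends on features 0 and 1 only; feature 2 is noise.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))
y = 2.0 * X[:, 0] - 1.0 * X[:, 1] + rng.normal(0.0, 0.1, size=200)

# Filter: score each feature independently of any learner,
# here by absolute Pearson correlation with the target.
filter_scores = [abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(3)]

# Wrapper: score a candidate subset by the learner's own performance,
# here the mean squared residual of an ordinary least-squares fit.
def wrapper_score(subset):
    Xs = X[:, subset]
    beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
    return ((y - Xs @ beta) ** 2).mean()   # lower is better
```

The filter scores are computed once per feature with no model training, which is why filter methods are fast; the wrapper score must refit the regressor for every candidate subset, which is what makes wrappers slower but better matched to the final model.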
The Monte Carlo tree search (MCTS) method has achieved state-of-the-art performance in the game domain, for example in Go [1,2], LOA, Bubble Breaker, SameGame, etc. [3]. These games can be viewed as large-scale Markov decision processes; from this perspective, MCTS can also deal with online planning, route scheduling and combinatorial optimization problems. The success of AlphaGo has had a profound influence on artificial intelligence (AI) approaches, and many reinforcement learning methods have been adapted to feature selection problems with satisfactory results. In particular, MCTS for feature selection has developed into many refined frameworks [4-6], which can be categorized as filter or wrapper methods depending on the specific framework design. On the one hand, the classifier or regressor results can be returned directly as the reward. On the other hand, an evaluation value calculated from a certain criterion, such as information gain or Fisher's score, can be used as the reward during iteration; the process can then be considered a filter method. Specifically, the tree search combines a selective strategy with a simulation strategy, called rollout, to obtain the