Citation: Liu, K.; Chen, Y.; Tang, J.; Huang, H.; Liu, L. Self-Attentive Subset Learning over a Set-Based Preference in Recommendation. Appl. Sci. 2023, 13, 1683. https://doi.org/10.3390/app13031683
Academic Editor: Giacomo Fiumara
Received: 5 January 2023
Revised: 25 January 2023
Accepted: 26 January 2023
Published: 28 January 2023
Copyright: © 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Article
Self-Attentive Subset Learning over a Set-Based Preference
in Recommendation
Kunjia Liu, Yifan Chen *, Jiuyang Tang, Hongbin Huang and Lihua Liu
Laboratory for Big Data and Decision, National University of Defense Technology, Changsha 410073, China
* Correspondence: yfchen@nudt.edu.cn
Abstract:
Recommender systems that learn user preferences from item-level feedback (feedback provided on individual items) have been extensively studied. Given the risk of privacy exposure, learning from set-level feedback (feedback provided on sets of items) has been shown to be a better alternative, since set-level feedback reveals user preferences while, to some extent, hiding the user's privacy. Because only set-level feedback is provided as a supervision signal, various methods have been investigated to build connections between set-based and item-based preferences. However, they overlook the complexity of user behavior in real-world applications. Instead, we observe that a user's set-level preference can be better modeled from a subset of the items in the original set. To this end, we tackle the problem of identifying subsets from sets of items for set-based preference learning. We propose a policy network that explicitly learns a personalized subset-selection strategy for each user. Given the complex correlations between items in the set-rating process, we introduce a self-attention module to ensure that all set members are considered during subset selection. Furthermore, we adopt the Gumbel softmax to avoid the vanishing gradients caused by binary selection during model learning. Finally, the selected items are aggregated by user-specific personalized positional weights. Empirical evaluation on real-world datasets verifies the superiority of the proposed model over the state of the art.
Keywords: recommender systems; set-based preference learning; subset selection; user behavior modeling
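To make the binary-selection relaxation mentioned above concrete, the following is a minimal sketch of a Gumbel-softmax (binary-concrete) selection step, assuming per-item selection logits produced by some policy network; the variable names and values are illustrative and are not the paper's implementation:

```python
import numpy as np

def relaxed_subset_selection(logits, tau=0.5, rng=None):
    """Gumbel-softmax (binary-concrete) relaxation of per-item selection.

    A hard 0/1 selection mask has zero gradient almost everywhere; adding
    Logistic noise to the logits and applying a temperature-scaled sigmoid
    yields soft selection variables in (0, 1) that remain differentiable
    and approach hard Bernoulli samples as tau -> 0.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    u = rng.uniform(1e-10, 1.0 - 1e-10, size=np.shape(logits))
    noise = np.log(u) - np.log(1.0 - u)  # Logistic(0, 1) noise
    return 1.0 / (1.0 + np.exp(-(np.asarray(logits) + noise) / tau))

# Hypothetical selection logits for a five-item set
logits = np.array([2.0, -1.0, 0.5, 3.0, -2.0])
mask = relaxed_subset_selection(logits, tau=0.5)
# Each entry of `mask` is a soft "selected" indicator in (0, 1)
```

Lowering `tau` pushes the soft mask toward hard 0/1 decisions at the cost of noisier gradients; this trade-off is the usual reason a temperature schedule is used during training.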
1. Introduction
Ubiquitous recommender systems provide effective solutions for improving users' experiences with online service platforms. Their main goal is to infer users' interests from their historical feedback. Among recommendation models, collaborative filtering (CF) approaches have demonstrated effective performance [1]. Most CF methods directly rely on users' item-level feedback [2,3], which raises privacy concerns, since users' preferences towards individual items are private opinions. Moreover, this privacy risk potentially inhibits users from providing feedback in the first place. Thus, avoiding direct item-level preference exposure is reasonable in recommender systems [4].
We refer to the problem of understanding a user's preferences for individual items by learning from set-level feedback as set-based preference learning (SPL). Set-based feedback reveals users' preferences and naturally provides information hiding by blurring interactions with individual items. Thus, users are more willing to provide set-based feedback than item-level preferences. However, inferring item preferences from set-based feedback is nontrivial: since only set-based preference is provided, there is no direct supervision signal for item-level rating prediction. The major challenge of SPL is therefore how to accurately infer a user's preferences for individual items from set-level feedback.
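As a minimal illustration of this setting, the sketch below uses a hypothetical dot-product scoring model (not the paper's architecture): item-level scores are predicted, aggregated into a single set-level score, and only that set score is compared against the observed set rating:

```python
import numpy as np

def set_level_loss(user_vec, item_vecs, set_rating, weights=None):
    # Item-level predictions via dot-product scores (a common CF choice)
    item_scores = item_vecs @ user_vec
    # Aggregate item scores into one set-level prediction; a uniform mean
    # is used here for simplicity, whereas the paper learns personalized
    # weights over a selected subset.
    if weights is None:
        weights = np.full(len(item_scores), 1.0 / len(item_scores))
    set_score = weights @ item_scores
    # Supervision exists only at the set level, so the loss is computed
    # on the aggregated score, not on any individual item prediction.
    return (set_score - set_rating) ** 2

u = np.array([1.0, 1.0])
items = np.array([[1.0, 0.0], [0.0, 1.0]])
loss = set_level_loss(u, items, set_rating=1.0)  # item scores [1, 1] -> set score 1
```

The gradient of this loss reaches the item-level parameters only through the aggregation step, which is why the choice of aggregation (mean, weighted, or subset-based) shapes what the model can learn about individual items.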
Since only set ratings are provided as supervision signals, the item-level predictions should be aggregated into set-based preferences before the model is updated. Although a set