Citation: Wang, J.; Yue, T.; Liu, Y.; Wang, Y.; Wang, C.; Yan, F.; You, F. Design of Proactive Interaction for In-Vehicle Robots Based on Transparency. Sensors 2022, 22, 3875. https://doi.org/10.3390/s22103875
Academic Editors: Enrico Vezzetti, Andrea Luigi Guerra, Gabriele Baronio, Domenico Speranza and Luca Ulrich
Received: 7 April 2022
Accepted: 13 May 2022
Published: 20 May 2022
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Copyright: © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Article
Design of Proactive Interaction for In-Vehicle Robots Based
on Transparency
Jianmin Wang 1,2,3, Tianyang Yue 1, Yujia Liu 1, Yuxi Wang 1, Chengji Wang 1, Fei Yan 4 and Fang You 1,*
1 Car Interaction Design Lab, College of Arts and Media, Tongji University, Shanghai 201804, China; wangjianmin@tongji.edu.cn (J.W.); michaelyue0812@tongji.edu.cn (T.Y.); liuyujia@tongji.edu.cn (Y.L.); wangyuxi@tongji.edu.cn (Y.W.); laoji@tongji.edu.cn (C.W.)
2 Shenzhen Research Institute, Sun Yat-Sen University, Shenzhen 518057, China
3 Nanchang Research Institute, Sun Yat-Sen University, Nanchang 330224, China
4 Ulm University, 89081 Ulm, Baden-Württemberg, Germany; fei.yan@uni-ulm.de
* Correspondence: youfang@tongji.edu.cn; Tel.: +86-21-6958-4745
Abstract: Based on transparency theory, this study investigates the appropriate amount of transparency information expressed by an in-vehicle robot through voice and visual channels in proactive interaction scenarios. In experiments conducted on a driving simulator, different transparency levels and combinations of information across the robot's channels were tested and evaluated, with subjective and objective data collected on users' safety, usability, trust, and emotion under driving conditions. The results show that appropriate transparency expression can improve drivers' driving control and subjective evaluation, and that drivers need different amounts of transparency information for different types of tasks.
Keywords: interaction design; transparency; proactivity; in-vehicle robots
1. Introduction
With the rapid development of intelligent vehicles, drivers' demands for more intelligent assistance from the cockpit have increased. More vehicles are equipped with virtual-image voice assistants, vehicle robots with a physical entity, and similar agents. These in-vehicle intelligent assistants enhance the intelligence level of the cockpit and can execute diverse tasks. The interaction between humans and in-vehicle robots is considered an integration of a complex social and technical system [1], which requires an advanced model to improve safety and trust in autonomous vehicles [2].
Anthropomorphism and proactivity have been widely studied for future in-vehicle robots. A study by Waytz et al. [3] showed that a more anthropomorphic cockpit increases human trust and is perceived to have more human-like mental abilities. It has also been shown that an anthropomorphic robot's voice response can increase trust, pleasure, and dominance of the situation compared to a mechanical voice response [4]. However, there are still concerns about communication barriers with such robots. The accuracy and validity of the output produced by intelligent systems can be problematic because the output is difficult for the operator to interpret [5]. Part of the reason is that humans have a limited ability to understand the proactivity of robots [6]. One study showed that people are more receptive to support provided by robots with moderate proactivity than by those with high or low proactivity [7]. Another reason is that robot interaction design does not always match human cognition.
An important condition for a robot to interact fluently with humans is that the two share a common cognitive framework [8] and form coherent mental expectations during the interaction without additional learning costs. Therefore, in order to promote a common mental model between human operators and automated systems,