Research on Multimodal Human-Machine Interface for Takeover Request of Automated Vehicles

human machine interface; takeover in driving automation; non-driving related tasks; takeover performance

Authors

  • Junfeng WANG School of Design and Innovation, Shenzhen Technology University, Shenzhen, China
  • Yue WANG School of Design and Innovation, Shenzhen Technology University, Shenzhen, China
  • Yin CUI
    cuiyin@sztu.edu.cn
    School of Design and Innovation, Shenzhen Technology University, Shenzhen, China

In Level 3 (L3) automated driving, a driver engaged in non-driving related tasks (NDRTs) can easily miss a takeover request and create safety hazards. The takeover prompt strategy has a major impact on this risk. In this paper, four multimodal takeover interfaces for automated driving are designed to address typical takeover scenarios in which the driver is under medium and high task loads. Experiments are conducted in a driving simulator, and each scheme's takeover success rate, takeover time, and takeover quality are used as evaluation criteria to study the effect of the different interfaces on the driver's takeover performance. The results show that multimodal takeover interfaces can shorten takeover time: the visual-auditory-tactile prompt yields the shortest takeover time; the visual-auditory and auditory-tactile prompts yield nearly the same takeover time, but the latter increases the vehicle's longitudinal deceleration; and the visual-tactile prompt yields the worst takeover performance. These results provide practical implications for developing suitable interfaces that remind drivers to take over automated vehicles.
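The abstract names takeover time and takeover success rate as evaluation criteria without defining how they are computed. The following is a minimal sketch, not taken from the paper, of how such metrics might be derived from simulator event logs; the field names, the 7 s time budget, and the success criterion are all assumptions for illustration.

```python
# Hypothetical sketch (not the paper's method): deriving takeover time and
# takeover success rate from simulator event logs. Field names and the
# assumed 7-second time budget are illustrative assumptions only.
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class TakeoverTrial:
    request_time_s: float             # moment the takeover request (TOR) is issued
    hands_on_time_s: Optional[float]  # first steering/brake input after the TOR; None if missed
    time_budget_s: float = 7.0        # assumed time budget before the critical event


def takeover_time(trial: TakeoverTrial) -> Optional[float]:
    """Seconds from the takeover request to the driver's first control input."""
    if trial.hands_on_time_s is None:
        return None
    return trial.hands_on_time_s - trial.request_time_s


def success_rate(trials: List[TakeoverTrial]) -> float:
    """Share of trials in which control was regained within the time budget."""
    successes = sum(
        1 for t in trials
        if (tt := takeover_time(t)) is not None and tt <= t.time_budget_s
    )
    return successes / len(trials)


if __name__ == "__main__":
    trials = [
        TakeoverTrial(request_time_s=10.0, hands_on_time_s=12.4),
        TakeoverTrial(request_time_s=10.0, hands_on_time_s=18.9),  # responded too late
        TakeoverTrial(request_time_s=10.0, hands_on_time_s=None),  # takeover request missed
    ]
    print([takeover_time(t) for t in trials])  # ≈ [2.4, 8.9, None]
    print(success_rate(trials))                # ≈ 0.33
```

Takeover quality (e.g., longitudinal deceleration or lateral stability after regaining control) would require additional vehicle-dynamics channels from the simulator and is not sketched here.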