Title: Semantic context-induced fast fusion network based driver attention prediction in complex scenarios

Authors: Jinglei Ren; Hailong Zhang; Yongjuan Zhao; Cong Lan

Addresses: College of Mechatronics Engineering, North University of China, Taiyuan, 030000, China (all authors)

Abstract: Clarifying driving intention through the visual selective attention mechanism remains a pivotal research question in the domain of advanced driver assistance systems (ADAS) and human-machine collaborative autonomous driving technology. This paper proposes a semantic context-induced fast fusion network (SCFF-Net) that segments red green blue (RGB) video frames into images with different semantic regions, and develops an attention strategy that fuses the semantic context features of the semantic images with the features of the RGB frames to exploit the complementarity among different features. A mixed model of self-attention and convolution (AC-mix) is further introduced, combining the global perception capability of self-attention with the local feature extraction capability of convolution. Experimental results on the driver attention in driving accident scenarios dataset show that the proposed SCFF-Net effectively improves the prediction accuracy of driver attention and the computing efficiency, while reducing redundant calculations.
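The mixed self-attention/convolution idea described in the abstract can be illustrated with a toy sketch. This is a hypothetical, simplified one-dimensional illustration of the AC-mix principle (blending a local convolution path with a global self-attention path via learnable mixing weights), not the authors' implementation; all function names and parameters here are assumptions for illustration only.

```python
import math

def conv1d(x, kernel):
    # Local feature extraction branch: 'same'-padded 1-D convolution.
    k = len(kernel)
    pad = k // 2
    xp = [0.0] * pad + list(x) + [0.0] * pad
    return [sum(kernel[j] * xp[i + j] for j in range(k)) for i in range(len(x))]

def self_attention(x):
    # Global perception branch: scalar single-head self-attention,
    # where scores are pairwise products and weights are softmaxed.
    n = len(x)
    out = []
    for i in range(n):
        scores = [x[i] * x[j] for j in range(n)]
        m = max(scores)
        w = [math.exp(s - m) for s in scores]
        z = sum(w)
        out.append(sum((w[j] / z) * x[j] for j in range(n)))
    return out

def acmix(x, kernel, alpha=0.5, beta=0.5):
    # AC-mix-style fusion: alpha and beta (learned in the real model)
    # blend the convolution and self-attention outputs element-wise.
    c = conv1d(x, kernel)
    a = self_attention(x)
    return [alpha * ci + beta * ai for ci, ai in zip(c, a)]
```

In the real network the two branches operate on 2-D feature maps and share their initial projections; the sketch only conveys how the two paths are computed in parallel and combined by weighted summation.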

Keywords: driver attention prediction; AC-mix; complex driving scenarios; computer vision; deep learning.

DOI: 10.1504/IJVSMT.2025.147336

International Journal of Vehicle Systems Modelling and Testing, 2025 Vol.19 No.2, pp.91 - 104

Received: 30 Aug 2024
Accepted: 28 Dec 2024

Published online: 14 Jul 2025
