Investigators

Chengbo Ai
University of Massachusetts - Amherst
Civil and Environmental Engineering

Final Report

Final Report Summary

Project

Quantifying the Impacts of Situational Visual Clutter on Driving Performance Using Video Analysis and Eye Tracking

Visual clutter and its impact on driving performance have been widely acknowledged. Visual clutter has been taxonomically categorized into three types: 1) “situational clutter,” which arises from the interaction among the driver, the vehicle, other road users, and the road infrastructure; 2) “designed clutter,” which arises from existing traffic control devices (e.g., signage, signals, and work zones); and 3) “built clutter,” which arises from other roadside and roadway objects (e.g., billboards and roadside landscapes). The impacts of both designed clutter and built clutter have been investigated using naturalistic driving measures (e.g., driving clips and vehicle status) and driving simulator measures (e.g., driving clips, vehicle status, and eye tracking measures). Situational clutter, however, remains an open question, even though it is considered to play a more lasting and profound role in shaping driver performance.
The challenges in investigating situational clutter stem from its complex composition of contributors (e.g., the vehicle, other road users, and the road infrastructure) and its dynamically changing nature (e.g., dashboard displays, traffic conditions and the appearance of surrounding vehicles, and dynamic road and roadside landscapes). Although the psychology and cognitive science communities have investigated situational visual clutter, little effort has been devoted to studying it in the driving context. The proposed study aims to address this gap.
The objective of this proposed study is threefold: 1) to develop a new situational visual clutter model that objectively quantifies the complex and dynamic driving scene based on eye tracking and video analysis; 2) to employ the developed model to quantify the impact of situational visual clutter on driving performance under an information searching scenario and a driving distraction scenario using driving simulation; and 3) to investigate the potential of employing the driving scene quantification to support other retrospective studies and data mining using existing driving simulation data.
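To illustrate what an objective, per-frame quantification of driving-scene clutter could look like, the minimal sketch below computes a simple edge-density clutter score for each frame of a driving clip, plus a gaze-weighted variant that emphasizes clutter near the driver's fixation. This is an illustrative assumption only, not the model developed in this project; the file name driving_clip.mp4, the Canny thresholds, and the Gaussian gaze weighting are hypothetical choices.

```python
# Minimal sketch of a per-frame clutter score (illustrative, not the project's model).
import cv2
import numpy as np

def edge_density_clutter(frame_bgr, low_thresh=100, high_thresh=200):
    """Return a clutter score in [0, 1]: fraction of pixels marked as Canny edges."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, low_thresh, high_thresh)
    return float(np.count_nonzero(edges)) / edges.size

def gaze_weighted_clutter(frame_bgr, gaze_xy, sigma=80.0):
    """Weight edge pixels by a Gaussian centered on the gaze point (x, y),
    so clutter near the fixation counts more than peripheral clutter."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    edges = (cv2.Canny(gray, 100, 200) > 0).astype(np.float64)
    h, w = edges.shape
    ys, xs = np.mgrid[0:h, 0:w]
    weights = np.exp(-((xs - gaze_xy[0]) ** 2 + (ys - gaze_xy[1]) ** 2) / (2 * sigma ** 2))
    return float((edges * weights).sum() / weights.sum())

if __name__ == "__main__":
    cap = cv2.VideoCapture("driving_clip.mp4")  # hypothetical simulator recording
    scores = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        scores.append(edge_density_clutter(frame))
    cap.release()
    print("mean per-frame clutter:", np.mean(scores) if scores else "no frames read")
```

A richer model would replace edge density with measures that account for color, motion, and object-level content, and would align the clutter time series with simulator outputs (speed, lane position) and eye tracking data for the scenario-based analyses described above.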

Supporting links:
Webinar
Dataset