Adaptive Feature Guidance: Modelling Visual Search with Graphical Layouts
Jussi P. P. Jokinen (a), Zhenxin Wang (b), Sayan Sarcar (b), Antti Oulasvirta (a), Xiangshi Ren (b)
(a) Department of Communications and Networking, Aalto University, Finland
(b) Center for Human-Engaged Computing (CHEC), School of Information, Kochi University of Technology, Japan
Abstract

We present a computational model of visual search on graphical layouts. It assumes that the visual system maximises expected utility when choosing where to fixate next. Three utility estimates are available for each visual search target: one from unguided perception only, and two in which perception is guided by long-term memory (of location or of a visual feature). The system is adaptive, relying more on long-term memory as its estimates improve with experience; however, it must fall back to perception-guided search if the layout changes. The model provides a tool for practitioners to evaluate how easy an item is to find for a novice or an expert, and what happens when a layout is changed. The model suggests, for example, that (1) visually homogeneous layouts are harder to learn and more vulnerable to changes, (2) visually salient elements are easier to search and more robust to changes, and (3) moving a non-salient element far from its original location is particularly damaging. The model provided a good match with human data in a study with realistic graphical layouts.
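The selection mechanism described above can be sketched in code. The following is a minimal illustrative sketch, not the paper's actual implementation: the function names, the candidate fields (`perception`, `location`, `feature`), and the linear experience-based weighting are all assumptions made here to show how a system might blend a perception-only utility estimate with two long-term-memory-guided estimates, shifting reliance towards memory as experience grows.

```python
# Hypothetical sketch of utility-maximising fixation choice.
# All names and the weighting scheme are illustrative assumptions,
# not the model's published formulation.

def expected_utility(perception, location_memory, feature_memory, experience):
    """Blend three utility estimates for one candidate fixation target.

    `experience` in [0, 1] shifts reliance from unguided perception
    towards the best long-term-memory-guided estimate as it grows.
    """
    memory = max(location_memory, feature_memory)  # best memory-based estimate
    return (1.0 - experience) * perception + experience * memory

def choose_fixation(candidates, experience):
    """Return the candidate element with the highest expected utility."""
    return max(
        candidates,
        key=lambda c: expected_utility(
            c["perception"], c["location"], c["feature"], experience
        ),
    )

# Toy layout with two elements and made-up utility estimates.
layout = [
    {"id": "File", "perception": 0.2, "location": 0.9, "feature": 0.4},
    {"id": "Edit", "perception": 0.5, "location": 0.3, "feature": 0.6},
]
print(choose_fixation(layout, experience=0.0)["id"])  # novice: perception wins -> Edit
print(choose_fixation(layout, experience=1.0)["id"])  # expert: memory wins -> File
```

Under this toy weighting, a novice (experience 0) fixates the perceptually strongest element, while an expert (experience 1) fixates the element best predicted by memory; a layout change would be modelled by resetting `experience` towards 0, forcing a fall-back to perception-guided search.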