About the Lab
In the Cognitive Science Lab, we study learning, visual
attention, and their interconnections so that we can understand how learning
changes the way we access information—both with our eyes, and via computer
interfaces—and how accessing the correct information can improve our learning.
Our methods are diverse: we use experimental studies and naturalistic datasets
of real-world tasks (e.g., video games), in combination with eye-tracking,
computational modelling of cognition, and big-data analyses.
Recent work incorporates custom human-computer interfaces and virtual reality
into our toolbox. The new generation of spatial computing tools provided by
virtual, mixed and augmented reality technologies both encourages and requires
software design that respects how humans learn and attend.
One project aims to design and test a new computer interface, called Ex Novo,
built around what we know about human cognition. The effectiveness of
human-computer interfaces (HCI) is constrained by the limits of human memory
and attention, and by that standard existing interfaces leave a lot to be
desired. The most common computer interface
allows users to select actions from lists in a menu. The graphical user
interface (GUI) minimizes memory costs, but requires visual inspection and
careful targeting to select actions, thus slowing down performance. In
compensation, typical computer interfaces allow for rapid execution of actions
via keyboard hotkeys (such as ctrl-c to copy). Some hotkey combinations are
difficult to perform, demanding awkward hand positions or visual inspection of
the keyboard, and most combinations are largely arbitrary, making the cryptic
pairings a challenge to remember.
Another important problem is that these two ways of initiating actions (menus
and hotkeys) are essentially entirely separate interfaces, and the time spent
learning one does almost nothing to help with the other.
Our Ex Novo interface (the name means *from the beginning*) unifies
the speed of hotkeys with the learnability of a GUI. Because it is a
single consistent interface, users improve with experience from the slow
visually-guided choices of the novice, to the rapid, automatic actions of an
expert. Our research investigates how speed and performance differ
between Ex Novo and traditional menu-based interfaces, and how interface
elements such as sound and visual coding might help to make interfaces easier
to learn.
Another current project investigates learning and attention in virtual reality.
VR has different costs and affordances, and our previous research suggests that
learning and attention will be affected. This project first seeks to understand
how well previous findings apply to VR.
Ongoing projects include a longitudinal study of learning and information
access among players of the online strategy game StarCraft 2.