Jordan Barnes

Hello. I’m Jordan, a PhD student working in the lab. I am primarily interested in computational cognitive models and embodied cognition. For this reason, the combination of eye-tracking experiments and modeling work that we do here strongly appeals to my intuition about how best to study the underlying mechanisms of cognition. My ResearchGate profile can be found here and my CV can be found here.


In a collaborative project with Calen Walshe, Mark Blair, and Paul Tupper, we created a dynamic neural field model of an eye that can look around and learn simple category structures. This model, which we call Tempus, includes a visual field, an attention field, a saccadic motor planning field, and neurons representing different states of knowledge of the task. Tempus appears to account for a number of interesting properties of human attentional allocation. We are currently preparing a manuscript based on the results of this work, but a recent abstract of the model can be found here.
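To give a flavor of the field dynamics behind a model like Tempus, here is a minimal sketch of a one-dimensional dynamic neural field of the Amari type, in which a localized input competes with lateral excitation and inhibition until a self-stabilized activation peak forms. Every parameter value, function name, and the stimulus itself are illustrative assumptions for this sketch, not the published model's settings.

```python
import numpy as np

def gaussian(x, center, width):
    # unnormalized Gaussian bump, used for inputs and the interaction kernel
    return np.exp(-0.5 * ((x - center) / width) ** 2)

def step(u, stim, kernel, h=-2.0, tau=10.0, dt=1.0):
    """One Euler step of tau * du/dt = -u + h + stim + conv(kernel, f(u))."""
    f = 1.0 / (1.0 + np.exp(-u))                 # sigmoidal firing rate
    interaction = np.convolve(f, kernel, mode="same")
    return u + (dt / tau) * (-u + h + stim + interaction)

n = 101
x = np.arange(n)
# local excitation minus broader inhibition (a "Mexican hat" kernel)
kernel = 3.0 * gaussian(x, n // 2, 3.0) - 1.0 * gaussian(x, n // 2, 10.0)
stim = 6.0 * gaussian(x, 30, 3.0)                # a localized visual input

u = np.full(n, -2.0)                             # field starts at resting level
for _ in range(200):
    u = step(u, stim, kernel)

# after settling, activation peaks near the stimulated location
print(int(np.argmax(u)))
```

The point of the kernel shape is that nearby units reinforce one another while distant units compete, so the field makes a decision about where to attend rather than passively mirroring its input.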


We all know that experience with the world helps us refine our expectations about the way things are. Usually, the more experience we have with a task, the more we can automate it, freeing ourselves up to do other kinds of cognitive work. Expectation learning like this is thought to shape eye movements as well, as they are our first line of defense against irrelevant information. To explore this idea, Mark Blair, Calen Walshe, Caitlyn McColeman, Ekaterina Stepanova and I have developed a reinforcement learning model of eye movements during category learning that we call RLAttn. Read more about it here.
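The core idea can be sketched as a simple reinforcement learning loop: fixation locations are chosen by a softmax over learned values, and those values are updated from feedback, so fixations drift toward the informative feature. The task, parameters, and names below are assumptions for illustration, not the RLAttn implementation.

```python
import math
import random

def softmax(values, temp=0.5):
    # convert learned values into fixation probabilities
    exps = [math.exp(v / temp) for v in values]
    total = sum(exps)
    return [e / total for e in exps]

def choose(probs, rng):
    # sample an index according to the given probabilities
    r, acc = rng.random(), 0.0
    for i, p in enumerate(probs):
        acc += p
        if r < acc:
            return i
    return len(probs) - 1

rng = random.Random(0)
values = [0.0, 0.0, 0.0]   # one value per feature location
alpha = 0.1                # learning rate
relevant = 0               # assume only feature 0 predicts the category

for _ in range(500):
    fixated = choose(softmax(values), rng)
    # reward 1 when the fixated feature supports a correct response
    reward = 1.0 if fixated == relevant else 0.0
    values[fixated] += alpha * (reward - values[fixated])

# the relevant feature should end up with the highest learned value,
# so fixations concentrate on it, mirroring learned attentional allocation
print(values.index(max(values)))
```

Early in learning the softmax spreads fixations across all features; as the value estimates sharpen, looking becomes increasingly selective, which is the signature of expectation learning described above.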