Prospective Graduate Students / Postdocs
This faculty member is currently not actively recruiting graduate students or Postdoctoral Fellows, but might consider co-supervision together with another faculty member.
Raised with six siblings on a mixed berry and dairy farm in Abbotsford, BC, I attended colleges in Kansas and Winnipeg before receiving my MA and PhD from Princeton University. My first Assistant Professor position was at Dalhousie University; I then moved to UBC, where I was eventually promoted to Full Professor and Distinguished University Scholar. I was honoured with nomination to Fellowship in the Royal Society of Canada in 2002 and was awarded the D.O. Hebb Award for Distinguished Research Contributions in 2013. My research has centred on the role of selective attention in vision, using behavioural methods in the lab and in everyday life, eye and limb tracking, and studies of neurotypical adults and children, elite athletes, musicians, and persons on the autism spectrum.
Dissertations completed in 2010 or later are listed below. Please note that there is a 6-12 month delay to add the latest dissertations.
This dissertation explores the hypothesis that cognitive engagement is an important predictor of the relationship between exercise and executive functioning. Chapter 1 introduces the background claim that exercise benefits executive functioning. This includes reviewing the relationship between exercise and improvements in executive functioning via changes in cerebral blood flow and neuroplasticity. The exercise-executive function relationship is also reviewed via the literature on exercise history, duration, intensity, and type. The review concludes by introducing the primary hypothesis of this dissertation, namely that cognitively engaging exercise should predict better executive functioning. Chapter 2 tested this hypothesis in an empirical study (N = 145) of undergraduates who self-reported their executive function use during exercise and then completed executive function tasks (i.e., flanker and backward span). Students who reported engaging in exercise that relied on inhibitory control performed better on the flanker task, and students who reported engaging in exercise that relied on cognitive flexibility performed better on the backward span task. Chapter 3 recruited an independent sample of undergraduates (N = 228) and had them complete different executive function tasks (i.e., stop-signal and trail making B). The main finding was that students who reported engaging in exercise that relied on inhibitory control had faster stop-signal reaction times and made fewer trail making errors, whereas students who reported engaging in exercise that relied on cognitive flexibility had slower stop-signal reaction times and trail making completion times. Chapter 4 recruited a more diverse sample of participants (e.g., older, with more males; N = 225) and had them complete the same executive function tasks as Chapter 2. The main finding was that the correlations now ran in the opposite direction: individuals who engaged in exercise that relied on inhibitory control performed worse on the flanker task, and individuals who engaged in exercise that relied on cognitive flexibility performed worse on the backward span task. Chapter 5 summarizes these findings and speculates that cognitively engaging exercise may predict better or worse executive functioning depending on the underlying motivation and context driving one to exercise; it also discusses the potential role of leisure activity.
Observing others is predicting others. Humans have a natural tendency to make predictions about other people’s future behavior. This predisposition lies at the basis of social cognition: others become accessible to us because we are able to simulate their internal states and, in this way, make predictions about their future behavior (Blakemore & Decety, 2001). In this thesis, I examine prediction in the social realm through three main contributions: the first is theoretical, the second methodological, and the third empirical. On the theoretical plane, I present a new framework for cooperative social interactions, the predictive joint-action model, which extends previous models of social interaction (Wolpert, Doya, & Kawato, 2003) to include the higher-level goals of joint action and planning (Vesper, Butterfill, Knoblich, & Sebanz, 2010). Action prediction is central to joint action. A recent theory proposes that awareness of someone else’s attentional states underlies our ability to predict their future actions (Graziano, 2013). In the methodological realm, I developed a procedure for investigating the role of sensitivity to others’ attention-control states in action prediction. This method offers a way to test the hypothesis that humans are sensitive to whether someone’s spatial attention was endogenously controlled (as when that person chooses to attend to a particular event) or exogenously controlled (as when attention is prompted by an external event), independent of their sensitivity to the spatial location of that person’s attentional focus. On the empirical front, I present new evidence supporting the hypothesis that social cognition involves the predictive modeling of others’ attentional states. In particular, a series of experiments showed that observers are sensitive to someone else’s attention control and that this sensitivity operates through an implicit kinematic process linked to social aptitude. In conclusion, I bring these contributions together by interpreting the empirical findings through the lens of the theoretical framework, discussing several limitations of the present work, and pointing to questions that emerge from the new findings, thereby outlining avenues for future research on social cognition.
Previous research has shown that two heads working together can outperform one working alone, but whether such benefits result from social interaction or from the statistical facilitation of independent performance is not clear. Here I apply Miller’s (1982; Ulrich, Miller & Schröter, 2007) race model inequality (RMI) to distinguish between these two possibilities. The RMI was developed to test whether response times to two signals are especially fast, compared to one signal, because the observer can detect a signal in either of two ways (i.e., separate-activation models) or because both signals contribute to a common pool of activation (i.e., coactivation models). I explored the independent versus interactive benefits of social collaboration in four experiments. In the first experiment I replicated Miller’s classic finding that coactivation underlies the faster responses to two targets than to one during simple visual search by a single individual. However, I found that two-person team performance was no faster than the performance of two independent individuals. Reasoning that dividing the cognitive load between collaborators was important to achieving collaborative performance gains, I employed a more complex enumeration visual search task in three subsequent experiments. With this task, performance by two-person teams exceeded the fastest possible performance of two independent individuals. This violated Miller’s RMI and indicated that interpersonal interaction produced the performance gains of collaborative cognition. I then linked the magnitude of these collaborative gains to features of the interpersonal interaction between team members, including verbal communication, affiliation, and non-verbal communication such as posture, gesture, and body movement. Together these experiments serve as an important proof of concept that Miller’s RMI can be applied to differentiate between the independent and interactive benefits of collaborative cognition. In addition, they demonstrate that the interactive benefits of collaborative cognition are influenced by features of the social interaction between collaborators.
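For reference, the race model inequality applied here can be stated as follows (the notation is illustrative, not taken from the dissertation):

F_{12}(t) \le F_1(t) + F_2(t) \quad \text{for all } t,

where F_{12} is the cumulative distribution of response times when both signals (or both collaborators) are available, and F_1 and F_2 are the distributions when each is available alone. Performance fast enough to violate this bound cannot be explained by a race between two independent processes, and therefore implicates coactivation or, in the collaborative case, genuine interpersonal interaction.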
I present new evidence about the relationships between learning and synaesthesia, particularly grapheme-colour synaesthesia, in which individuals experience letters and numbers as coloured. As part of the largest survey of synaesthetic tendencies ever performed, I show that second language acquisition can act as a trigger for the development of synaesthesia, such that children who learn a second language in grade school are three times more likely to develop synaesthesia than native bilinguals. I also demonstrate that previous reports of a sex bias in synaesthesia are almost certainly due to response and compliance biases, rather than to any real difference in the prevalence of synaesthesia between men and women. In a detailed examination of the influences of learning on synaesthetic experiences, I show that synaesthetic colours are influenced by knowledge about letters’ shapes, frequencies, alphabetical order, phonology, and categorical qualities. Finally, I demonstrate that synaesthesia can itself be exploited in learning. All these results are presented as supporting a developmental learning hypothesis of synaesthesia, in which synaesthesia develops, at least in part, because it is useful.
While searching for objects in a cluttered environment, observers confront two tasks: selecting where to search, and identifying the targets. Chapter 1 reviews major theories of visual search and highlights their approaches to these two functions. Although anatomical, neurological, and behavioural evidence suggests a dissociation between spatial selection and identity extraction, the issue remains highly controversial among accounts of visual search. The review shows that none of these theories has adopted a tool that can manipulate the two functions independently. A new methodology is therefore proposed in which the two functions are manipulated independently, using spatial cueing to manipulate localization and the attentional blink (AB) to manipulate identification (the AB is the impaired identification of the second of two briefly displayed sequential targets). In examining the separability of spatial selection and identity extraction, additive-factors logic is adopted: if two factors (here, spatial cueing and the AB) influence independent stages of processing, they will have additive effects on the dependent measure; conversely, whenever additivity occurs, the underlying mechanisms can be assumed to be independent. The experiments in Chapter 2 show that cueing and the AB have additive effects, confirming the hypothesis that the two functions are separable. The results are interpreted by relating them to two major parallel pathways in the visual system, the dorsal and ventral pathways. Given the characteristics of each pathway, it is plausible that spatial cues (indexing spatial selection) are processed along the dorsal pathway while identification is processed along the ventral pathway; the two functions are therefore separable because they are mediated by mechanisms that are anatomically and functionally distinct. The experiment in Chapter 3 was designed to address contrary evidence regarding the separability of location and identity processing; it shows that those earlier results were due to a procedural artefact, namely a ceiling effect. In Chapter 4, a prediction based on the interpretation of the results of the first study is tested: if cueing involves both the dorsal and ventral pathways, it should interfere with the AB. The results support this prediction. Chapter 5 discusses how these results collectively support the separability of spatial and identity processing, and outlines future directions.
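The additive-factors logic invoked here can be sketched with a simple linear model (an illustrative formulation, not taken from the dissertation):

Y(c, b) = \mu + \alpha_c + \beta_b + (\alpha\beta)_{cb},

where Y is the dependent measure (e.g., identification accuracy or response time), \alpha_c is the effect of the spatial-cueing condition, \beta_b is the effect of the attentional-blink condition (e.g., target lag), and (\alpha\beta)_{cb} is their interaction. If cueing and the AB act on separate processing stages, the interaction term should be negligible and the two effects simply add; a reliable interaction would instead point to a shared stage.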
Attention is essential to everyday life: without some selective function to guide and limit the processing of incoming information, our visual system would be overwhelmed. A description of the spatiotemporal dynamics of attention is critical to our understanding of this basic human cognitive function and is the primary goal of this dissertation. In particular, the research reported here examines two aspects of the spatiotemporal dynamics of attention: a) the rate at which the focus of attention is shrunk and expanded, along with the factors that influence this rate, and b) the factors governing whether attention is deployed as a unitary or a divided focus. The present research examines the spatiotemporal dynamics of focal attention by monitoring the pattern of accuracy that occurs when participants attempt to identify two targets embedded in simultaneously presented streams of items. By asking participants to monitor these streams simultaneously, with the spatial and temporal positions of the two targets in the streams varied incrementally, it is possible to index the extent of focal attention in both space and time. Chapter 2 develops this behavioural procedure and assesses the rate at which the focus of attention is contracted; a qualitative model is put forward and tested. Chapter 3 examines factors that modulate the temporal course of attentional narrowing in young adults, who presumably can exercise efficient control of attentional processes. In contrast, Chapter 4 examines the effect of reduced attentional control by examining the same process in older adults. The second goal of this thesis was to examine whether focal attention is deployed as a unitary or a divided focus. These two perspectives are generally viewed as mutually exclusive. The alternative hypothesis pursued in Chapter 5 is that focal attention can be deployed either as a single, unitary focus or as multiple foci, depending on the observer's mental set and on the task demands. The final chapter then combines and compares the findings across all experiments and evaluates how they fit with current theories of visual attention.
Theses completed in 2010 or later are listed below. Please note that there is a 6-12 month delay to add the latest theses.
An objective state of mind refers to a mental state in which people perceive themselves as the object of another’s observation. Previous research has shown that this state affects people’s metacognitive processes, emotional experience, and social behavior. An objective mental state often arises during everyday social interaction, but few studies have investigated how it influences social perception during an encounter. Here we examine how the perception of others’ emotions is influenced by triggering an objective state of mind. We developed an online experiment using webcams, questions, and pre-programmed conversations to manipulate participants’ mental states, and then measured their accuracy in reading the emotional expressions of people they believed they were interacting with. Three conditions were compared. In the Evaluated condition, participants were asked to classify the emotional expressions of two study assistants after being informed that one of the assistants might select them as a partner in a competitive game. In the Evaluating condition, different participants classified the emotional expressions of the same assistants, but this time believing that they would be able to select one assistant as a game partner. In the Neutral condition, the same emotion classification task was performed, but participants were given no other instructions. The results showed that participants in the Evaluated condition were significantly less accurate at classifying emotions than participants in the other two conditions. We interpret this finding as supporting the view that an objective mental state reduces the ability to read others’ emotional cues, and we discuss possible mechanisms by which this may occur, including increased stress, divided attention, and the role of latent imitation in forming empathy for others.
Electrooculography (EOG) offers several advantages over other methods for tracking human eye movements, including its low cost and its ability to monitor gaze position when the eyelids are closed. Yet EOG poses its own challenges: to determine saccadic distance and direction, the electrical potentials measured by EOG must be calibrated against physical distance, and the EOG signal is highly susceptible to noise and artifacts arising from a variety of sources (e.g., activity of the extraocular muscles). Here we describe a method for estimating a corrected EOG signal by simultaneously tracking gaze position with an industry-standard pupil-corneal reflection (PCR) system. We first compared the two measurements with the eyes open, under two conditions of full illumination and a third condition of complete darkness. Compared to the PCR signal, the EOG signal was less precise and tended to overestimate saccadic amplitude. We then harnessed the relation between the two signals in the dark condition to estimate a corrected EOG-based metric of saccade end-point amplitude in a fourth condition, in which the participants’ eyes were closed. We propose that these methods and results can be applied to human-machine interfaces that rely on EOG eye tracking, and to research on sleep, visual imagery, and other situations in which participants’ eyes are moving but closed.
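A minimal sketch of the kind of calibration described here, assuming a linear mapping between the two signals (the specific form of the model is an assumption, not stated in the abstract):

\hat{A} = \beta_0 + \beta_1 \, \Delta V_{\mathrm{EOG}},

where \Delta V_{\mathrm{EOG}} is the change in EOG potential accompanying a saccade, \hat{A} is the estimated saccade amplitude, and the coefficients \beta_0 and \beta_1 are obtained by regressing PCR-measured saccade amplitudes on the corresponding EOG potentials in the eyes-open dark condition. The fitted mapping can then be applied to EOG potentials recorded while the eyes are closed, when no PCR signal is available.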
The visual system is remarkably efficient at extracting summary statistics from the environment. Yet at any given moment the environment contains many groups of objects distributed over space, so the challenge for the visual system is to summarize multiple sets distributed across space. My thesis investigates the capacity limits and computational efficiency of ensemble perception in the context of perceiving multiple, spatially intermixed groups of objects. First, in three experiments, participants viewed an array of one to eight intermixed sets of circles. Each set contained four circles of the same color but different sizes. Participants estimated the mean size of a probed set; which set would be probed was indicated either before the onset of the array (pre-cue) or after it (post-cue). Fitting a uniform-normal mixture model to the error distribution, I found that participants could reliably estimate mean sizes for at most four sets (Experiment 1). Importantly, their performance was unlikely to be driven by a subsampling strategy (Experiment 2). Allowing longer exposure to the stimulus array did not increase this capacity, suggesting that ensemble perception is limited by an internal resource constraint rather than by the rate of information encoding (Experiment 3). Second, in two experiments, I showed that the visual system can hold up to four ensemble representations or up to four individual items (Experiment 4), and that an ensemble representation carries a level of information uncertainty (entropy) similar to that of an individual representation (Experiment 5). Taken together, these findings indicate that ensemble perception provides a compact and efficient form of information processing.
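The uniform-normal mixture model referred to above can be written as follows (the parameterization is illustrative, not taken from the thesis):

p(e) = (1 - g)\,\mathcal{N}(e; 0, \sigma^2) + g\,\mathcal{U}(e; -R, R),

where e is the error of the reported mean size, g is the probability of a random guess (modelled as uniform over the possible error range [-R, R]), and \sigma captures the precision of the ensemble estimate. Capacity can then be assessed by examining how g and \sigma change as the number of to-be-summarized sets increases.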
Experimental aesthetics, one of the oldest branches of research psychology, empirically examines which elements of an image are associated with beauty and preference. Drawing on this research, the current thesis hypothesizes that applying the painterly techniques of Impressionist-era artists can address a modern applied problem in information visualization, namely how to create effective and aesthetically pleasing depictions of data. To do so, a series of weather maps obtained from the Intergovernmental Panel on Climate Change was rendered into four data-visualization styles: industry-standard glyphs and three Impressionism-inspired styles titled interpretational complexity, indication and detail, and visual complexity. Two separate experiments were then conducted, each testing a key feature of effective data visualization: image recognition and the ability to communicate data trends, respectively. The first experiment found that visual complexity visualizations were comparable to glyphs on a new-old recognition task, and better than the interpretational complexity and indication and detail styles. The second experiment found that visual complexity visualizations were more effective than glyphs at depicting and communicating data trends to the viewer. Incidental eye-tracking data collected during both experiments suggest that the impressionist visualizations were more engaging and aesthetically pleasing than glyphs, as evidenced by a higher fixation count and greater pupil dilation. Individually, Experiments 1 and 2 demonstrate that the painterly techniques of visual complexity may be applied to create highly recognizable and communicative data visualizations. Collectively, the two experiments support the broader hypothesis that by modelling the knowledge and expertise of artists we may create aesthetically pleasing and functional depictions of data. Following these results, the thesis concludes with a discussion of future research, potential limitations, and how the present results relate to aesthetics research more broadly.
The purpose of this study was to investigate links between attention restoration theory and executive function. A series of four experiments, each using a pre- versus post-test design, examined the influence of various interventions on executive function, as assessed by a backward digit span task and Raven’s Progressive Matrices. Experiment 1 tested the influence of cognitive strategy as manipulated through task instructions. Experiment 2 tested the influence of viewing slides of nature versus urban scenes, as predicted by attention restoration theory (Berman et al., 2008). Experiment 3 repeated these procedures using more engaging 10-minute video tours of nature versus urban environments. Experiment 4 combined the successful instructional manipulations of Experiment 1 with the video manipulation of Experiment 3 to examine interactions between strategy and environment on executive function. The results showed that the nature video intervention reduced the influence of task instructions relative to the urban intervention, supporting Berman et al. (2008), who claim that exposure to nature has a restorative influence on executive function.
Motion masking refers to the finding that objects are less visible when they appear as part of an apparent motion sequence than when they appear for the same duration in isolation. Against this backdrop of generally impaired visibility, there are reports of a relative visibility benefit when a target on the motion path is spatiotemporally predictable rather than unpredictable. The present study investigates whether prediction based on the shape of the originating stimulus in the motion sequence, and postdiction based on the shape of the terminating stimulus, aid the visibility of a target in motion. In Experiment 1 these factors are examined separately for originating and terminating stimuli; in Experiment 2 they are examined in combination. The results show that the two factors influence target discriminability additively, suggesting that prediction and postdiction have independent influences on visibility. Experiment 3 examines the same display sequences with a different psychophysical task (i.e., detection) in an effort to reconcile the present findings with previous contradictory results; the upshot is that, in contrast to the results for discrimination, target detection is influenced little by these factors. Experiments 4 and 5 examine the discrimination of a fine shape detail of the target, in contrast to the crude discrimination of target orientation in Experiments 1 and 2; this design also eliminates the opportunity for decision biases to influence the results. The results show that predictable motion has a strong positive influence on target shape discrimination, to the extent that it makes a backward-masked target even more visible than when it appears in isolation. These findings are related to the empirical literature on visual masking and interpreted within the theoretical framework of object updating.
Does person perception – the impressions we form from watching others’ behavior – hold clues to the mental states of people engaged in cognitive tasks? We investigate this with a two-phase method: in Phase 1 participants search on a computer screen (Experiment 1) or in an office (Experiment 2); in Phase 2 other participants rate their video-recorded behavior. We find ratings are sensitive to stable traits (search ability), temporary states (cognitive strategy), and environment (task difficulty). We also find that the visible behaviors critical to success vary between settings (e.g., eye movements are important in search on computer screens; head movements for search in an office). Positive emotions are linked to search success in both settings. These findings demonstrate that person perception can inform cognition beyond traditional measures of performance, and as such, offer great potential for studying cognition in natural settings with measures that are both rich and relatively unobtrusive.