
Research Topics

Where is my cap?

Finding and recognizing objects in context

In daily life, we rarely encounter isolated objects. Rather, objects are embedded in a context: their visual environment. In this line of research, we investigate how observers capitalize on predictable aspects of the scene context to find and recognize visual objects. To this end, we conduct mostly fMRI, modeling, and lab-based and online behavioral studies. This research line is funded by an NWO-Veni grant.

Example papers

  • Aldegheri, G., Gayet, S., & Peelen, M. V. (2023). Scene context automatically drives predictions of object transformations. Cognition, 238, 105521.

  • Gayet, S., & Peelen, M. V. (2022). Preparatory attention incorporates contextual expectations. Current Biology, 32(3), 687-692. 

  • Gayet, S., & Peelen, M. V. (2019). Scenes modulate object processing before interacting with memory templates. Psychological Science, 30(10), 1497-1509. 

Seeing vs remembering my cap

Short-term memory meets perception

Visual working memory allows us to interact with our visual environment by keeping visual information available for imminent action. In this line of research, we investigate the bidirectional interactions between working memory and concurrent sensory perception, to uncover the functional and neural mechanisms of visual working memory for goal-directed behavior. To this end, we conduct mostly lab-based behavioral, EEG, and fMRI studies.

Example papers

  • Chota, S., Gayet, S., Kenemans, J. L., Olivers, C. N. L., & Van der Stigchel, S. (2023). A matter of availability: Sharper tuning for memorized than for perceived stimulus features. Cerebral Cortex, 33(12), 7608-7618. 

  • Sahakian, A., Gayet, S., Paffen, C. L. E., & Van der Stigchel, S. (2023). Mountains of memory in a sea of uncertainty: Sampling the external world despite useful information in visual working memory. Cognition, 234, 105381. 

  • Gayet, S., Guggenmos, M., Christophel, T. B., Haynes, J.-D., Paffen, C. L. E., Van der Stigchel, S., & Sterzer, P. (2017). Visual working memory enhances the neural response to matching visual input. Journal of Neuroscience, 37(28), 6638-6647. 

The battle under the cap

In competition for conscious access

Consciousness is a resource-costly state; consequently, not all sensory input gains conscious access. In this line of research, we investigate how the behavioral relevance of sensory input (e.g., task goals, threat, reward, expectations) determines conscious access. To this end, we conduct mostly lab-based behavioral and EEG studies. 

Example papers

  • Litwin, P., Motyka, P., & Gayet, S. (in press). Physiological arousal underlies preferential access to visual awareness of fear-conditioned (and possibly disgust-conditioned) stimuli. Emotion.

  • Gayet, S., Paffen, C. L. E., Belopolsky, A. V., Theeuwes, J., & Van der Stigchel, S. (2016). Visual input signaling threat gains preferential access to awareness in a breaking continuous flash suppression paradigm. Cognition, 149, 77-83. 

  • Gayet, S., Paffen, C. L. E., & Van der Stigchel, S. (2013). Information matching the content of visual working memory is prioritized for conscious access. Psychological Science, 24(12), 2472–2480. 

Cap or no cap?

Too good to be true

When high-impact findings seem too good to be true, they often are. Because unjustified conclusions can substantially slow down the scientific process (but also because it's fun), we set out to investigate alternative explanations for, and the replicability of, such high-impact studies. This line of research focuses mostly (but not exclusively) on the field of non-conscious processing, and is partly funded by an NWO-Replication grant.

Example papers

  • Gayet, S., Sahakian, A., Paffen, C. L. E., & Van der Stigchel, S. (2022). No evidence for social factors in the overestimation of individuals from minority groups. Proceedings of the National Academy of Sciences, 119(47), e2214740119.

  • Gayet, S., Stein, T., & Peelen, M. V. (2019). The danger of interpreting detection differences between image categories. Emotion, 19(5), 928-932.

  • Stein, T., Awad, D., Gayet, S., & Peelen, M. V. (2018). Unconscious processing of facial dominance: The role of low-level factors in access to awareness. Journal of Experimental Psychology: General, 147(11), 1-13.

Research Methods

Big-ass magnet


We use high-field (3T) and ultra-high-field (7T) fMRI in combination with advanced machine learning techniques to investigate visual and mnemonic object representations. Scanning occurs at the Spinoza Centre (Amsterdam) and the Donders Institute (Nijmegen).
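Machine learning applied to fMRI typically takes the form of multivariate pattern analysis: training a classifier to decode stimulus categories from distributed voxel patterns. The sketch below is a hypothetical illustration of that logic (using simulated data and scikit-learn, not the lab's actual pipeline):

```python
# Hypothetical sketch of multivariate decoding from fMRI-like data:
# train a linear classifier to distinguish two object categories
# from simulated voxel patterns.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_voxels = 100, 50

# Simulated voxel patterns: two categories with slightly different means
patterns = rng.normal(size=(n_trials, n_voxels))
labels = np.repeat([0, 1], n_trials // 2)
patterns[labels == 1] += 0.5  # small per-voxel category effect

# Cross-validated decoding accuracy; chance level is 50%
scores = cross_val_score(LinearSVC(), patterns, labels, cv=5)
print(round(scores.mean(), 2))
```

Above-chance cross-validated accuracy indicates that the voxel patterns carry information about the category distinction, even when no single voxel is diagnostic on its own.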

Smashing buttons


We use a variety of behavioral measures, including online and lab-based manual response tasks, psychophysical modeling, eye-tracking, and pupillometry, in our many in-house lab spaces.

An actual cap!


We use 64-channel EEG recordings in combination with advanced machine learning techniques and rapid invisible frequency tagging (RIFT) to investigate the temporal dynamics of attentional and mnemonic processes. This is done in our many in-house EEG labs. 
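The logic of frequency tagging can be illustrated with a short simulation (a hypothetical sketch, not the lab's analysis code): a stimulus flickering at a known frequency produces a spectral peak at exactly that frequency in the recorded signal, and the strength of that peak can be used to track processing of the tagged stimulus:

```python
# Hypothetical sketch of frequency tagging: recover a known tagging
# frequency from a noisy simulated recording via its spectral peak.
import numpy as np

fs = 1000          # sampling rate (Hz)
tag_freq = 60      # tagging frequency (Hz); RIFT uses high rates
t = np.arange(0, 2, 1 / fs)  # 2 s of simulated data

rng = np.random.default_rng(1)
signal = 0.5 * np.sin(2 * np.pi * tag_freq * t) + rng.normal(size=t.size)

# Power spectrum via FFT; the tagged frequency stands out from the noise
freqs = np.fft.rfftfreq(t.size, 1 / fs)
power = np.abs(np.fft.rfft(signal)) ** 2
peak_freq = freqs[np.argmax(power)]
print(peak_freq)  # → 60.0
```

In an actual experiment, power at the tagging frequency would be compared across conditions (e.g., attended vs. unattended stimuli) rather than simply detected.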

Crunching numbers

Machine learning

We use deep neural networks to capture human-like aspects of visual perception, and relate them to human behavioral and neuroimaging data. We also use traditional computational modeling to arbitrate between predefined mechanisms of human perception, behavior, and perceptual decision making.
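One common way to relate network representations to behavioral and neuroimaging data is representational similarity analysis (RSA): comparing the pairwise dissimilarity structure of two sets of condition patterns. The sketch below is a hypothetical illustration with simulated data (not the lab's pipeline), using Pearson correlation throughout for simplicity:

```python
# Hypothetical sketch of representational similarity analysis (RSA):
# compare the dissimilarity structure of "neural" and "model" patterns.
import numpy as np

def rdm(patterns):
    """Representational dissimilarity matrix:
    1 - Pearson correlation between every pair of condition patterns."""
    return 1 - np.corrcoef(patterns)

def upper(mat):
    """Flattened off-diagonal upper triangle of a square matrix."""
    return mat[np.triu_indices_from(mat, k=1)]

rng = np.random.default_rng(2)
neural = rng.normal(size=(8, 100))            # 8 conditions x 100 voxels
model = neural + 0.3 * rng.normal(size=(8, 100))  # shared structure + noise

# Correlate the two RDMs' upper triangles (Spearman is more common
# in practice; Pearson keeps this sketch dependency-free)
similarity = np.corrcoef(upper(rdm(neural)), upper(rdm(model)))[0, 1]
print(round(similarity, 2))
```

A high RDM correlation means the model represents the conditions with a similar geometry to the neural data, which is the sense in which a deep network can "capture human-like aspects" of perception.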
