Automaticity of taxonomic and functional knowledge activation during real-world visual scene processing
Ciesielski, K; Webb, A; Spotorno, S
Although vision guides our everyday actions, the role of functional (action-based) knowledge in understanding real-world visual scenes has long been neglected. Typically, research has focused on taxonomic knowledge, related to the scene’s context and the objects expected within it, and has suggested that its activation is automatic and obligatory during scene processing. We compared functional and taxonomic knowledge, examining the automaticity, obligatoriness, and time course of their activation when they are task-irrelevant. In two lexical-decision experiments (50% words, 50% pseudowords), we manipulated the relationship and Stimulus-Onset Asynchrony (SOA) between scene images and words. The words were either consistent (naming the scene image, or a highly plausible object or action) or inconsistent (naming another scene image, or an implausible object or action). None of the named objects or actions were depicted in the images. Experiment 1 used a picture-word interference paradigm, with the word or pseudoword superimposed on the scene image (0 ms SOA). Experiment 2 used a priming paradigm, with the scene image presented as a prime at 100, 200, 400, or 800 ms SOA. Responses were faster for consistent than for inconsistent words, independently of word type (scene, object, action) and only at the 0 ms or 100 ms SOA. These results show that activation of knowledge about the scene’s name and its expected objects and actions is automatic and obligatory, but can subsequently be suppressed by endogenous processes. Moreover, they do not corroborate previous suggestions of a primacy of functional over taxonomic understanding, indicating that the predictions guiding visual scene processing similarly encompass all knowledge highly associated with the scene.
Dec 1, 2021 | pp. 173–174