Using Computational Modelling to Understand Cognition in the Ventral Visual-Perirhinal Pathway

Rosemary A. Cowell (University of California, San Diego, USA), Timothy J. Bussey (University of Cambridge, UK) and Lisa M. Saksida (University of Cambridge, UK)
DOI: 10.4018/978-1-60960-021-1.ch002

Abstract

The authors present a series of studies in which computational models are used as a tool to examine the organization and function of the ventral visual-perirhinal stream in the brain. The prevailing theoretical view in this area of cognitive neuroscience holds that the object-processing pathway has a modular organization, in which visual perception and visual memory are carried out independently. They use computational simulations to demonstrate that the effects of brain damage on both visual discrimination and object recognition memory may not be due to an impairment in a specific function such as memory or perception, but are more likely due to compromised object representations in a hierarchical and continuous representational system. The authors argue that examining the nature of stimulus representations and their processing in cortex is a more fruitful approach than attempting to map cognition onto functional modules.
Introduction

Research in cognitive neuroscience seeks to understand the biological – specifically, neural – foundations of mental phenomena. Most theories in cognitive neuroscience seek to explain a particular cognitive function by specifying which parts of the brain contribute to that function and describing, at some level, the putative neural mechanisms that underlie their contribution. It seems clear that investigating a cognitive process from the standpoint of a well-specified theory can speed the acquisition of our understanding immeasurably. Well-specified theories give concrete predictions, are falsifiable, and are explicit enough to be tested and refined; they therefore lend themselves to a systematic process of development into an ever more accurate model (Popper, 1999). The process of testing and refinement narrows down the number of experiments required in an investigation, focuses the research direction, and encourages a thorough, mechanistic understanding of the cognitive and neural phenomena. When it comes to hypothesizing about both cognitive function and the neural mechanisms employed by the brain – whether those mechanisms are specified at the level of synapses, localized networks of neurons, or whole neural systems – computational models can be very useful as a method of creating a well-specified theory with clear predictions.

In this chapter, we will describe a computational modeling framework for the investigation of visual cognition in the brain. Broadly speaking, we seek to understand the brain-based cognitive processes that enable us to apprehend, discriminate and remember visual objects, and how and why those cognitive processes break down following brain damage. In particular, we focus here on the functional organization of the brain regions devoted to processing visual objects. We ask: are the brain areas that underlie object processing functionally distinct from one another, such that they can be described as ‘cognitive modules’ for individual functions such as visual memory and visual perception? Or are these cognitive functions in fact intimately linked and subserved by common neural substrates and mechanisms? As the report of our computational investigations will reveal, we favour the latter view, in which a single brain region is capable of participating in multiple cognitive functions and a single cognitive function finds its neural basis across multiple brain regions.

In addition, through a description of our proposed computational framework for understanding visual object processing, we will illustrate and advocate our modeling philosophy. In general, we adhere to the principle of Occam’s razor and we place an emphasis on the correct choice and explicit declaration of the problem space that a model attempts to address. According to our view, a theoretician should first define clearly the target data of the theory (the problem space), in order to avoid any misunderstanding about the phenomena that one should expect the model to capture, and in order to restrict the scope of the model to a reasonably sized and tractable domain. Within that problem space, the theoretician should account for the data using the simplest possible model (Occam’s razor). This is particularly applicable in cognitive neuroscience, where the aim is to understand how high-level cognitive processes emerge from systems-level or network-level processes: the explanatory power of a model is maximized when the mechanism driving the behavioural trends in the simulation data is clearly defined and observable, rather than obscured by many complex computational details operating in parallel. In addition, the model should be formulated at the appropriate level of biological organization.
