Approaches to Cognitive Science

By Eric Cruet

Cognitive science began as a collaborative endeavor of psychology, computer science, neuroscience, linguistics, and related fields in the 1950s; however, its first major institutions (a journal and a society) were not established until the late 1970s.

A key contributor to the emergence of cognitive science, psychologist George Miller, dates its birth to September 11, 1956, the second day of a Symposium on Information Theory at MIT. Computer scientists Allen Newell and Herbert Simon, linguist Noam Chomsky, and Miller himself presented work that would point each of their fields in a more cognitive direction.

In the late 1970s, human experimental psychology, theoretical linguistics, and the computer simulation of cognitive processes saw progressive elaboration and coordination. Earlier, in the mid-1950s, John McCarthy and Marvin Minsky had developed a broad-based agenda for the field they named artificial intelligence (AI), and the convergence of all of the above led to the establishment of the multi-disciplinary field we recognize today.

Today, the inclusion of network theory, complexity science, advances in imaging modalities and visualization, and the ability to process entire data sets as opposed to small samples promise to significantly change the way in which the organization and dynamics of cognitive and behavioral processes are understood. Below, we describe a mix of classic and current approaches to cognitive science.

Distributed Cognition

Distributed cognition is a branch of cognitive science that proposes that cognition and knowledge are not confined to the individual; rather, they are distributed across the objects, individuals, artifacts, and tools in the environment. Early work in distributed cognition was motivated by the fact that cognition is not only a socially (and also materially and temporally) distributed phenomenon, but one that is essentially situated in real practices [1]. The theory does not posit some new kind of cognitive process; rather, it claims that cognitive processes in general are best understood as situated in, and distributed across, concrete socio-technical contexts.

Traditional cognitive science emphasizes an internalism that marginalizes (some would argue ignores) the role of external representation and of problem solving in cooperative contexts. Traditional approaches to description and design in human-computer interaction have similarly focused on users' internal models of the technologies with which they interact. In distributed cognition, by contrast, the theoretical focus is on how cognition is distributed across people and artifacts, and on how it depends on both internal and external representations.

The Cognitive Niche

Humans have the ability to pursue abstract intellectual feats such as science, mathematics, philosophy, and law.  This is surprising, given that opportunities to exercise these talents did not exist in the hunter-gatherer societies where humans evolved.

The “cognitive niche” theory states that humans evolved to exploit a mode of survival characterized by manipulating the environment through causal reasoning and social cooperation. In addition, the psychological faculties that evolved to prosper in this niche can be co-opted for abstract domains through processes of metaphorical abstraction and productive combination, like those found in the use of human language [2].

This theory claims several advantages as an explanation of the evolution of the human mind. It incorporates facts about the cognitive, affective, and linguistic mechanisms discovered by modern scientific psychology rather than appealing to vague, prescientific black boxes like “symbolic behavior”. On this account, the cognitive adaptations comprise the “intuitive theories” of physics, biology, and psychology; the adaptations for cooperation comprise the moral emotions and the mechanisms for remembering individuals and their actions; and the linguistic adaptations comprise the combinatorial apparatus for grammar and the syntactic and phonological units that it manipulates [3].

Connectionism

Connectionism is an alternative to the computational paradigm of the von Neumann architecture that inspired classical cognitive science [4]. Originally taking its inspiration from the biological neuron and from neurological organization, it emphasizes collections of simple processing elements in place of the centrally controlled, rule-governed symbol manipulation typical of classical cognitive science. The simple processing elements in connectionism are usually capable of only rudimentary calculations (such as summation).

A connectionist network is a particular organization of such processing units into a whole. In most connectionist networks, the system is trained with a learning rule that adjusts the weights of the connections between processors until the network performs some desired input-output mapping.
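
As a concrete, minimal illustration, the sketch below trains a single threshold unit with the classic perceptron learning rule to realize a toy input-output mapping (logical OR). The task, constants, and variable names are illustrative choices, not drawn from the cited sources.

    import numpy as np

    # Toy task: learn the OR mapping with one threshold unit.
    inputs  = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    targets = np.array([0, 1, 1, 1], dtype=float)

    rng = np.random.default_rng(0)
    weights = rng.normal(scale=0.1, size=2)   # connection strengths
    bias = 0.0
    lr = 0.5                                  # learning rate

    for epoch in range(20):
        for x, t in zip(inputs, targets):
            # The unit performs only a rudimentary calculation:
            # a weighted sum followed by a threshold.
            output = 1.0 if weights @ x + bias > 0 else 0.0
            # Perceptron rule: nudge weights in proportion to the error.
            error = t - output
            weights += lr * error * x
            bias += lr * error

    print([1.0 if weights @ x + bias > 0 else 0.0 for x in inputs])
    # -> [0.0, 1.0, 1.0, 1.0]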

Connectionist networks offer many advantages as models in cognitive science [5]. However, in spite of the fact that connectionism arose as a reaction against the assumptions of classical cognitive science, the two approaches have many similarities when examined from the perspective of Marr’s tri-level hypothesis [6].

There are many forms of connectionism, but the most common forms use neural network models.

Though there are a large variety of neural network models, they almost always follow two basic principles regarding the mind:

  1. Any mental state can be described as an N-dimensional vector of numeric activation values over the neural units in a network.
  2. Memory is created by modifying the strength of the connections between neural units. The connection strengths, or “weights”, are generally represented as an N×N matrix (both principles are sketched in code after this list).
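
Both principles can be made concrete with a Hebbian (outer-product) storage rule over ±1 activation vectors, in the style of a Hopfield network. The sketch below is illustrative; the patterns and network size are arbitrary.

    import numpy as np

    N = 8  # number of neural units

    # Principle 1: mental states as N-dimensional activation vectors (+1/-1).
    pattern_a = np.array([1, -1, 1, 1, -1, -1, 1, -1])
    pattern_b = np.array([-1, 1, 1, -1, 1, -1, -1, 1])

    # Principle 2: memory as an N x N matrix of connection strengths,
    # built with a Hebbian outer-product rule (no self-connections).
    W = np.outer(pattern_a, pattern_a) + np.outer(pattern_b, pattern_b)
    np.fill_diagonal(W, 0)

    # Recall: corrupt pattern_a, then let the units settle by repeatedly
    # applying the weights.
    state = pattern_a.copy()
    state[:2] *= -1                           # flip two units to simulate noise
    for _ in range(5):
        state = np.sign(W @ state)

    print(np.array_equal(state, pattern_a))   # True: the memory is recovered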

Connectionists are in agreement that recurrent neural networks (networks whose connections can form a directed cycle) are a better model of the brain than feedforward neural networks (networks with no directed cycles). Many recurrent connectionist models also incorporate dynamical systems theory. Many researchers, such as the connectionist Paul Smolensky, have argued that connectionist models will evolve toward fully continuous, high-dimensional, non-linear dynamical systems approaches.
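
The difference is visible in a few lines of code: a feedforward pass applies the weights once, whereas a recurrent network feeds its own state back in, tracing a trajectory through state space over time (the dynamical-systems view). The update rule assumed below, state = tanh(W @ state), is one simple illustrative choice.

    import numpy as np

    rng = np.random.default_rng(1)
    N = 4
    W = rng.normal(scale=0.5, size=(N, N))   # recurrent connection weights
    x = rng.normal(size=N)                   # initial activation vector

    # Feedforward: a single pass from input to output, no cycles.
    feedforward_output = np.tanh(W @ x)

    # Recurrent: the output is fed back as the next input, so the state
    # evolves in time like a dynamical system.
    state = x.copy()
    for t in range(10):
        state = np.tanh(W @ state)           # directed cycle in action

    print(np.round(feedforward_output, 3), np.round(state, 3))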

Theoretical Neuroscience

Theoretical neuroscience is the attempt to develop mathematical and computational theories and models of the structures and processes of the brains of humans and other animals. It differs from connectionism in striving for greater biological accuracy, modeling the behavior of large numbers of realistic neurons organized into functionally significant brain areas. In recent years, computational models of the brain have become biologically richer, both in employing more realistic neurons, such as ones that spike and have chemical pathways, and in simulating the interactions among different areas of the brain, such as the hippocampus and the cortex. These models are not strictly an alternative to computational accounts in terms of logic, rules, concepts, analogies, images, and connections; rather, they complement such accounts by illustrating how mental functions can be translated to, and performed at, the neural level.
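
To give a flavor of such “more realistic” neurons, here is a minimal sketch of a leaky integrate-and-fire unit, one of the simplest spiking-neuron models used in theoretical neuroscience. All constants are illustrative.

    import numpy as np

    # Leaky integrate-and-fire: the membrane potential leaks toward rest,
    # integrates input current, and emits a spike at threshold.
    dt, tau = 0.1, 10.0                       # time step, membrane time constant (ms)
    v_rest, v_thresh, v_reset = -70.0, -55.0, -75.0   # potentials (mV)
    v = v_rest
    spike_times = []

    for step in range(1000):                  # simulate 100 ms
        t = step * dt
        current = 20.0 if 10 <= t <= 80 else 0.0      # injected current (a.u.)
        v += dt / tau * (-(v - v_rest) + current)     # leaky integration
        if v >= v_thresh:
            spike_times.append(t)             # record the spike
            v = v_reset                       # reset after firing

    print(f"{len(spike_times)} spikes at (ms): {np.round(spike_times, 1)}")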

Learning is arguably the central problem in theoretical neuroscience. It is possible that other problems, such as understanding representations, network dynamics, and circuit function, will yield once we know the details of the learning process that, together with the action of the genome, produces these phenomena.

Another tremendous challenge is “the invariance problem”. Our mental experience suggests that the brain encodes and manipulates ‘objects’ and their relationships, but there is no neural theory of how this is done. We recognize a cup, for example, regardless of its location, orientation, or size, and despite further variations such as lighting and partial occlusion. How do brain networks recognize a cup despite these complicated variations in the image data? How is the invariant part (‘cup-ness’) encoded separately from the variant part?

This is the ‘holy grail’ problem of the computer vision community, and one approach is to fortify learning algorithms with insights from the mathematics surrounding the concept of invariance. Invariance also appears in motor scenarios: cups are a class of things we can drink from (what J. J. Gibson called an affordance).
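
As a toy illustration of the problem, the sketch below treats a 1-D “scene” where translation is the only variation: correlating a template at every shift and pooling with a maximum yields the same response wherever the pattern appears (the intuition behind convolution plus pooling in vision models). This is a cartoon of the invariance problem, not a solution to it.

    import numpy as np

    template = np.array([1.0, 2.0, 1.0])      # the 'cup' pattern

    def invariant_score(scene, template):
        # Correlate the template at every position, then pool with max:
        # the variant part (location) is discarded; the invariant part
        # ('cup-ness') is kept.
        k = len(template)
        scores = [scene[i:i + k] @ template for i in range(len(scene) - k + 1)]
        return max(scores)

    scene_left  = np.array([1.0, 2.0, 1.0, 0.0, 0.0, 0.0, 0.0])
    scene_right = np.array([0.0, 0.0, 0.0, 0.0, 1.0, 2.0, 1.0])

    # Same score regardless of where the pattern sits in the scene.
    print(invariant_score(scene_left, template),
          invariant_score(scene_right, template))   # 6.0 6.0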


References:

[1] Wilson, R. A., & Keil, F. C. (Eds.). (1999). The MIT Encyclopedia of the Cognitive Sciences. Cambridge, MA: MIT Press.

[2] Pinker, S. (2010). The cognitive niche: Coevolution of intelligence, sociality, and language. Proceedings of the National Academy of Sciences, 107(Supplement 2), 8993-8999.

[3] Whiten, A., & Erdal, D. (2012). The human socio-cognitive niche and its evolutionary origins. Philosophical Transactions of the Royal Society B: Biological Sciences, 367(1599), 2119-2129.

[4] Bechtel, W., & Abrahamsen, A. A. (2002). Connectionism and the Mind: Parallel Processing, Dynamics, and Evolution in Networks (2nd ed.). Malden, MA: Blackwell.

[5] Dawson, M. R. W. (1998). Understanding Cognitive Science. Oxford, UK: Blackwell.

[6] Dawson, M. R. W. (2004). Minds and Machines: Connectionism and Psychological Modeling. Malden, MA: Blackwell.