Advances in Building the Human Brain

By Eric Cruet

“The making of a synthetic brain requires now little more than time and labour… Such a machine might be used in the distant future… to explore regions of intellectual subtlety and complexity at present beyond the human powers… How will it end? I suggest that the simple way to find out is to make the thing and see.”

Ross Ashby, Design for a Brain (1948, 382-83)

The human brain is exceedingly complex, and studying it means gathering information across a range of levels, from molecular processes to behavior. The sheer breadth of this undertaking has perhaps led to increased specialization in brain research.  One area of specialization that has gathered steam recently is the modeling of the brain on silicon.  However, even accounting for computing’s exponential growth in processing power, today’s hardware remains unimpressive next to the “specifications” of the human brain.

The average human brain packs a hundred billion or so neurons, connected by a quadrillion (10¹⁵) constantly changing synapses, into a space the size of a honeydew melon.  It consumes a measly 20 watts, about what one compact fluorescent light bulb (CFL) uses.  Replicating this awesome wetware with traditional digital circuits would require a supercomputer 1,000 times more powerful than those currently available.  It would also require a nuclear power plant to run it.
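To get a feel for how extreme those numbers are, a quick back-of-envelope calculation helps.  The short Python sketch below divides the brain's 20-watt budget by an assumed average firing rate of 1 Hz per neuron; that rate is an illustrative assumption, not a measured figure.

    # Back-of-envelope sketch of the figures above.
    # The 1 Hz average firing rate is an illustrative assumption;
    # real cortical firing rates vary widely.
    NEURONS = 1e11          # roughly a hundred billion neurons
    SYNAPSES = 1e15         # roughly a quadrillion synapses
    BRAIN_POWER_W = 20.0    # about one CFL bulb
    MEAN_RATE_HZ = 1.0      # assumed average firing rate per neuron

    synapses_per_neuron = SYNAPSES / NEURONS              # ~10,000 inputs per neuron
    synaptic_events_per_s = SYNAPSES * MEAN_RATE_HZ       # ~1e15 events per second
    joules_per_event = BRAIN_POWER_W / synaptic_events_per_s

    print(f"synapses per neuron       : {synapses_per_neuron:.0e}")
    print(f"synaptic events per second: {synaptic_events_per_s:.0e}")
    print(f"energy per synaptic event : {joules_per_event:.0e} J")   # ~2e-14 J, tens of femtojoules

With these assumptions the brain spends only a few tens of femtojoules per synaptic event, which is the figure any silicon imitation has to be measured against.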

Fortunately, the types of circuits needed to model the brain are not necessarily digital.  Several projects around the world are currently building brain models that use specialized analog circuits.  Unlike the traditional digital circuits in today’s computers, which could take weeks or even months to model a single second of brain operation, these analog circuits can duplicate brain activity as fast as, or even faster than, it really occurs, while consuming a fraction of the power.  The drawback of analog chips is that they are not very programmable.  That makes it difficult to change the model, which is a real requirement, since it is not known at the outset what level of biological detail is needed to reproduce brain behavior.

In the race to build the first low-power, large-scale digital model of the brain, the leading research effort is dubbed SpiNNaker (Spiking Neural Network Architecture), a collaboration among the following universities and industrial partners:

  • University of Manchester
  • University of Southampton
  • University of Cambridge
  • University of Sheffield
  • ARM Ltd
  • Silistix Ltd
  • Thales

The design of this machine looks a lot like a conventional parallel processor, but it significantly changes the way the chips intercommunicate.  Traditional CMOS (digital) chips were not designed with massive parallelism in mind, yet that is how our brains operate.  The logic gates in silicon chips usually connect to a relatively small number of other devices, whereas neurons in the brain receive signals from hundreds of thousands of other neurons.  In addition, neurons are always in a “ready” state and respond as soon as a signal arrives, while silicon chips rely on clocking to advance computation in discrete time steps, which consumes a lot of power.  Finally, the connections between CMOS-based processors are fixed, whereas the synapses that connect neurons are constantly in flux.
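One way to see the difference between clock-driven and event-driven operation is a toy simulation.  The sketch below is a minimal event-driven leaky integrate-and-fire neuron in Python: it does work only when a spike arrives, decaying the membrane potential lazily over the idle interval rather than on every tick of a global clock.  It illustrates the event-driven idea in general, not SpiNNaker's actual neuron code, and the parameter values are assumptions.

    import math

    # Minimal event-driven leaky integrate-and-fire (LIF) neuron.
    # Parameter values are illustrative assumptions.
    TAU_M = 20.0        # membrane time constant (ms), assumed
    V_THRESH = 1.0      # firing threshold (arbitrary units), assumed
    V_RESET = 0.0       # reset potential after a spike

    class EventDrivenNeuron:
        def __init__(self):
            self.v = V_RESET          # membrane potential
            self.last_update = 0.0    # time of the last incoming event (ms)

        def receive_spike(self, t, weight):
            """Process one incoming spike at time t with synaptic weight `weight`."""
            # Decay the potential for the whole idle interval in one step,
            # so no work is done while the neuron receives nothing.
            self.v *= math.exp(-(t - self.last_update) / TAU_M)
            self.last_update = t
            self.v += weight
            if self.v >= V_THRESH:
                self.v = V_RESET
                return True           # neuron fires; a spike packet would be routed here
            return False

    neuron = EventDrivenNeuron()
    for t, w in [(1.0, 0.4), (2.0, 0.4), (3.0, 0.4)]:
        if neuron.receive_spike(t, w):
            print(f"spike at t={t} ms")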

One way to speed things up is to use custom analog circuits that directly replicate brain operation.  Some of the chips under development can run 10,000 times faster than the corresponding part of the brain while remaining energy efficient.  But as mentioned previously, as speedy and efficient as they are, they are not very flexible.
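Put side by side, the two approaches differ by many orders of magnitude in speed.  The small sketch below turns the figures quoted above into real-time factors; the assumption of two weeks per simulated second is illustrative, since the text only says "weeks or even months".

    # Real-time factors for the two approaches described above.
    # "Two weeks per simulated second" is an illustrative assumption;
    # the 10,000x analog speed-up is the figure quoted in the text.
    SECONDS_PER_WEEK = 7 * 24 * 3600

    digital_slowdown = (2 * SECONDS_PER_WEEK) / 1.0   # wall-clock seconds per brain-second
    analog_speedup = 10_000

    print(f"digital software model : ~{digital_slowdown:.0e}x slower than real time")
    print(f"accelerated analog chip: {analog_speedup:,}x faster than real time")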

The basic building block of the SpiNNaker machine is a multicore System-on-Chip: a Globally Asynchronous Locally Synchronous (GALS) system with 18 ARM968 processor nodes residing in synchronous islands, surrounded by a lightweight, packet-switched asynchronous communications infrastructure.
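That asynchronous fabric carries small spike packets between cores, and each chip's router decides where to copy them.  The toy Python sketch below shows the general key-and-mask, multicast style of routing; the field widths, table format, and destination names are simplifying assumptions, not the actual SpiNNaker router specification.

    # Toy sketch of multicast spike routing between cores and chip links.
    # Each entry is (key, mask, destinations): a packet whose routing key
    # satisfies (packet_key & mask) == key is copied to every destination.
    ROUTING_TABLE = [
        (0x0000_0000, 0xFFFF_0000, ["core_1", "core_2"]),   # source neurons 0x0000xxxx
        (0x0001_0000, 0xFFFF_0000, ["link_east"]),          # source neurons 0x0001xxxx
    ]

    def route(packet_key):
        """Return every destination a spike packet should be copied to."""
        dests = []
        for key, mask, targets in ROUTING_TABLE:
            if (packet_key & mask) == key:
                dests.extend(targets)
        return dests

    print(route(0x0000_00AB))   # -> ['core_1', 'core_2']
    print(route(0x0001_0042))   # -> ['link_east']

Because the table matches on the identity of the neuron that fired rather than on an explicit list of targets, one small packet can fan out to many cores, which is what the huge biological fan-out demands.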

Each SpiNNaker chip actually contains two silicon dies: the SpiNNaker die itself and a 128 MByte SDRAM (Synchronous Dynamic Random Access Memory) die, which is physically mounted on top of the SpiNNaker die and stitch-bonded to it.
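That 128 MByte of stacked SDRAM is mostly there to hold synaptic data, and a rough budget shows why so much is needed.  The sketch below assumes 4 bytes of storage per synapse and 1,000 modeled neurons per core; both figures are illustrative assumptions, not project specifications.

    # Rough memory budget for the 128 MByte of stacked SDRAM.
    # Bytes-per-synapse and neurons-per-core are illustrative assumptions.
    SDRAM_BYTES = 128 * 1024 * 1024
    BYTES_PER_SYNAPSE = 4             # assumed
    CORES_PER_CHIP = 18
    NEURONS_PER_CORE = 1_000          # assumed

    synapses_per_chip = SDRAM_BYTES // BYTES_PER_SYNAPSE
    synapses_per_neuron = synapses_per_chip // (CORES_PER_CHIP * NEURONS_PER_CORE)

    print(f"synapses storable per chip   : {synapses_per_chip:,}")      # ~33.5 million
    print(f"incoming synapses per neuron : {synapses_per_neuron:,}")    # ~1,800 with these assumptions

With these assumptions each modeled neuron gets on the order of a couple of thousand inputs, still well short of the biological fan-in mentioned earlier, which is why memory sits so close to the processors in this design.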

The micro-architecture assumes that processors are ‘free’: the real cost of computing is energy. This is why the design uses energy-efficient ARM9 embedded processors and Mobile DDR (Double Data Rate) SDRAM, in both cases sacrificing some performance for greatly enhanced power efficiency.  These are the same kinds of chips found in today’s mobile electronics.
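The same "energy first" reasoning scales up to a whole machine.  The sketch below estimates the power budget of a machine built from many such chips; the one-million-core target and the roughly one watt per chip are ballpark assumptions used for illustration, not official specifications.

    # Ballpark power budget for a large machine built from these chips.
    # Core-count target and watts-per-chip are illustrative assumptions.
    CORES_PER_CHIP = 18
    TARGET_CORES = 1_000_000      # assumed scale of a full machine
    WATTS_PER_CHIP = 1.0          # assumed budget for ARM cores plus stacked SDRAM

    chips = TARGET_CORES / CORES_PER_CHIP
    total_kw = chips * WATTS_PER_CHIP / 1000.0

    print(f"chips needed : {chips:,.0f}")        # ~55,556
    print(f"power budget : ~{total_kw:.0f} kW")  # tens of kilowatts

Tens of kilowatts is still a long way from the brain's 20 watts, but it is a very different proposition from the nuclear-plant scenario of a conventional supercomputer.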

Although great strides are being made in developing a “digital” brain, simply “building” a brain from the bottom up by replicating its parts, connections, and organization fails to capture its essential function: complex behavior. Just as engineers can only construct cars and computers because they know how they work, we will only be able to construct a brain once we know how it works, that is, once we understand the biological and computational operations carried out in individual brain areas and how those operations are implemented at the level of neural networks.

 

References:

http://apt.cs.man.ac.uk/projects/SpiNNaker/project/
Eliasmith, C., Stewart, T. C., Choo, X., Bekolay, T., DeWolf, T., Tang, C., & Rasmussen, D. (2012). A large-scale model of the functioning brain. Science, 338(6111), 1202-1205.
Pickering, A. (2010). The cybernetic brain: sketches of another future. University of Chicago Press.
Price, D., Jarman, A. P., Mason, J. O., & Kind, P. C. (2011). Building brains: An introduction to neural development. Wiley.