An ambitious project to model the cerebral cortex in silicon is under way at Stanford. The man-made brain could help scientists understand how the most recently evolved part of our brain performs its complex computational feats, allowing us to understand language, recognize faces, and schedule the day. It could also lead to new neural prosthetics.

So we're ready to start making silicon brains, right?
"Brains do things in technically and conceptually novel ways--they can solve rather effortlessly issues which we cannot yet resolve with the largest and most modern digital machines," says Rodney Douglas, a professor at the Institute of Neuroinformatics, in Zurich. "One of the ways to explore this is to develop hardware that goes in the same direction."
Neurons communicate with a series of electrical pulses; chemical signals transiently change the electrical properties of individual cells, which in turn trigger an electrical change in the next neuron in the circuit. In the 1980s, Carver Mead, a pioneer in microelectronics at the California Institute of Technology, realized that the same transistors used to build computer chips could be used to build circuits that mimicked the electrical properties of neurons. Since then, scientists and engineers have been using these transistor-based neurons to build more-complicated neural circuits, modeling the retina, the cochlea (the part of the inner ear that translates sound waves into neural signals), and the hippocampus (a part of the brain crucial for memory). They call the process neuromorphing.
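The electrical behaviour these transistor circuits mimic can be sketched in software with a standard leaky integrate-and-fire neuron: the membrane voltage leaks toward rest, integrates input current, and fires a spike when it crosses threshold. This is a minimal illustrative model, not Mead's or Boahen's actual circuit; all parameter values below are assumptions chosen for plausibility.

```python
# Minimal leaky integrate-and-fire neuron: a software sketch of the
# membrane dynamics that subthreshold transistor circuits implement
# in analogue hardware. All parameters are illustrative.

def simulate_lif(input_current, dt=1e-4, tau=0.02, r_m=1e7,
                 v_rest=-0.070, v_thresh=-0.050, v_reset=-0.070):
    """Return spike times for a current-driven LIF neuron."""
    v = v_rest
    spikes = []
    for step, i_in in enumerate(input_current):
        # Voltage leaks toward rest and integrates the input current.
        dv = (-(v - v_rest) + r_m * i_in) / tau
        v += dv * dt
        if v >= v_thresh:          # threshold crossing -> emit a spike
            spikes.append(step * dt)
            v = v_reset            # reset after the pulse
    return spikes

# A constant 3 nA input for 100 ms produces a regular spike train.
spike_times = simulate_lif([3e-9] * 1000)
print(len(spike_times), "spikes in 100 ms")
```

The point of the analogue-silicon approach is that a handful of transistors computes these same dynamics continuously and in parallel, rather than stepping through them one multiply at a time.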
Now Kwabena Boahen, a neuroengineer at Stanford University, is planning the most ambitious neuromorphic project to date: creating a silicon model of the cortex. The first-generation design will be composed of a circuit board with 16 chips, each containing a 256-by-256 array of silicon neurons. Groups of neurons can be set to have different electrical properties, mimicking different types of cells in the cortex. Engineers can also program specific connections between the cells to model the architecture in different parts of the cortex.
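The programming model the article describes, distinct electrical properties per cell group plus explicit wiring between groups, might look something like the following sketch. The group names, fields, and numbers here are hypothetical, not Boahen's actual interface; only the 256-by-256 array size comes from the article.

```python
# Hypothetical configuration for one chip: groups of silicon neurons
# with distinct electrical properties, plus programmed connections
# between groups. Field names and values are illustrative assumptions.

GRID = 256  # each chip is a 256 x 256 array of silicon neurons

chip = {
    "groups": {
        # two hypothetical cell types with different time constants
        "excitatory": {"rows": range(0, 192), "tau_ms": 20.0},
        "inhibitory": {"rows": range(192, GRID), "tau_ms": 10.0},
    },
    "connections": [
        # (source group, target group, connection probability, weight)
        ("excitatory", "excitatory", 0.05, +0.5),
        ("excitatory", "inhibitory", 0.10, +0.8),
        ("inhibitory", "excitatory", 0.20, -1.0),
    ],
}

def count_neurons(chip):
    return sum(len(g["rows"]) * GRID for g in chip["groups"].values())

print(count_neurons(chip))  # 65536 silicon neurons on this one chip
```

With 16 such chips per board, the first-generation design would hold over a million silicon neurons, which is what makes "exploring different connectivity patterns" across cortical areas plausible at all.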
We have now invented bricks. We have yet to invent architecture.
"We want to be able to explore different ideas, different connectivity patterns, different operations in these areas," says Boahen. "It's not really possible to explore that right now." Boahen ultimately plans to build chips that other scientists can buy and use to test their own theories of how the cortex operates. That new knowledge can then be built into the next generation of chips.

Regular readers of this blog will realise just how much we don't know about how the brain works. The idea of this piece of equipment is that scientists can try out various hypotheses about how bits of the brain function, and see how close to reality their models are. Then refine, and try again. Continue as needed.
Unfortunately, it's not clear from the article whether any of the really complex stuff is being modelled. You see, the architecture merely describes electrical and data connectivity. As one commenter put it:
When a transistor models a neuron, it models parts of its electrical properties and perhaps also its wiring pattern. But it seldom takes into account all of its intricate chemical and physical functionality. The temporal aspects stemming from depletion of neurotransmitter, enhancement of synapses, gene expression leading to additional receptors and the like are most often not inherent in the silicon models presently suggested, and I would question whether they are part of Boahen's, but will appreciate it if that is the case.

Exactly. We know enough about the electrical signals to know that they are both amplitude- and frequency-modulated; at least simple(!) pain signals are.
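The commenter's point about transmitter depletion can be made concrete with a standard short-term depression model (after Tsodyks and Markram): each spike consumes a fraction of the available transmitter resource, which then recovers exponentially between spikes. This is a textbook sketch with illustrative parameters, not anything claimed for Boahen's chips.

```python
import math

# Short-term synaptic depression: each spike uses up a fraction of
# the transmitter "resource", which recovers with time constant
# tau_rec between spikes. Parameters are illustrative.

def depressing_synapse(spike_times, tau_rec=0.5, use=0.4):
    """Return the effective strength of each spike at a depressing synapse."""
    resource = 1.0   # fraction of transmitter currently available
    last_t = 0.0
    strengths = []
    for t in spike_times:
        # resource recovers toward 1.0 during the inter-spike interval
        resource = 1.0 - (1.0 - resource) * math.exp(-(t - last_t) / tau_rec)
        strengths.append(use * resource)   # this spike's efficacy
        resource -= use * resource         # depletion caused by the spike
        last_t = t
    return strengths

# A rapid burst: each successive spike is weaker than the last.
eff = depressing_synapse([0.00, 0.01, 0.02, 0.03])
print(eff)
```

A silicon neuron that models only membrane voltage and wiring misses exactly this kind of history-dependent behaviour, which is the commenter's complaint in a nutshell.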
I can see this technology being useful for testing various hypotheses, and even for modelling simple neurological structures in isolation, such as retinas. But unless there is significant computational capability built into the connections, not just the neurons, it will not be a particularly good model of what happens in any biological brain. "Not particularly good" may just be enough to be useful, though. We shall see.