I first blogged about Dr Potter's experiments with animats, robots controlled by cultured neurons, some time ago.
From Wired:
The first generation of animats performed simple tasks. The virtual mouse tended to move in one direction (right). A dish-brain-controlled robot did manage to stay away from a moving target -- impressive-sounding perhaps but not particularly complicated. A robotic arm holding a set of pens and attached to a clump of neurons created art -- albeit in the eye of the beholder.
Researchers have found that lab-grown neuron cultures tend to fire in bizarrely synchronized, dishwide waves, eerily echoing the neural patterns seen during Alzheimer's disease.
"It's possible that this is a state of arrested development," Potter said, "or that the networks are asleep because they're missing the parts (humans) use to wake up. It's (also) possible that the networks are in some sort of epileptic state."
The repeated firing may have wiped the animats' memories, Potter said. His group has since learned to reduce the bursts with electric stimuli, which act as a massage to ease the dish-brain's stress.
While he's quick to disavow any comparisons to Dr. Frankenstein, Potter admits the clumps have a certain amount of awareness.
"Since our cultured networks are so interconnected, they have some sense of what is going on in themselves," he said. "We can also feed their activity back to them, to mediate their 'sense of self.'"
The next phase of animats will likely have an even keener sense of self.
"In the next wave, we hope to sequence behaviors," Potter said. "The sensory input resulting from one behavior will trigger the next appropriate behavior." In other words, he hopes the animats will learn.
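What Potter describes amounts to a closed sensorimotor loop: the animat acts, the sensory consequences of that action are fed back to the network, and the network's response selects the next action. A minimal sketch, assuming a toy stand-in for the culture (all names and the trivial "network" below are my own illustration, not Potter's actual setup):

```python
def fake_network(stimulus):
    """Stand-in for the cultured network: maps a stimulus pattern to a
    response. A real dish-brain would be stimulated through electrodes
    and its spike pattern read out instead."""
    return sum(stimulus) % 3  # toy readout: 0, 1 or 2

BEHAVIOURS = ["move_forward", "turn_left", "turn_right"]

def sense(behaviour):
    """Toy sensory consequence of performing a behaviour."""
    return [len(behaviour), behaviour.count("_")]

def run_animat(start, steps):
    """Sequence behaviours: sensory input from one behaviour
    triggers the next, as Potter describes."""
    trace = [start]
    behaviour = start
    for _ in range(steps):
        stimulus = sense(behaviour)        # sensory input from acting
        response = fake_network(stimulus)  # feed it back to the network
        behaviour = BEHAVIOURS[response]   # response selects next action
        trace.append(behaviour)
    return trace

print(run_animat("move_forward", 4))
```

The point of the sketch is only the loop structure; learning would mean the network's stimulus-to-response mapping changes with experience rather than staying fixed as it does here.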
Time to quote from another article of mine, back in October of the same year.
The problem of Animal Rights becomes acute and immediate when we consider the experimentation currently underway with Hybots. It can be persuasively argued that experiments with primitive organisms like lampreys (Gugliotta 2001) and spiny lobsters (Aguilera 1999) do not involve "thinking creatures" as such. The fact that some of the neural processing can be replaced by an absurdly simple inorganic equivalent is strong evidence of this. A lamprey or a spiny lobster, despite being organic, may in fact be no more than a self-directing robot.

The situation described by Graham-Rowe (2001) is less clear: only a few thousand neurons are used, taken from rat foetuses rather than the fully-developed animal, yet it is this very plasticity and higher level of development that leads one to suspect that the result may "think" in an animal fashion rather than merely be a robot with organic parts. Should such a Hybot be able to navigate a maze, then very troubling ethical issues arise regarding cruelty. We can plausibly avoid the issue when dealing with a non-organic artificial intelligence with the same external behaviour, but we know rats think. And the situation regarding fully inorganic artificial intelligence is not as clear-cut as it once was, given the experimentation with cyborgs and prosthetic brain parts.

There is potential for suffering on a scale undreamt-of, and for very much longer than a normal lifespan. Call it Hell on Earth. Conversely, there is the possibility that we might fully understand the nature of thought, and resolve the issues of how we should treat animals. We may even be able to augment ourselves to become, if not Gods, perhaps a little wiser as well as more intelligent. Call it Heaven on Earth.
As I said, troubling. With the pace of development, the time to think about ethical issues raised by this experimentation is now, not ten years hence.