Friday, 12 June 2009

Duck!

From Scientific American:
When an Iraqi reporter threw his shoe at President Bush, University of Washington neurologists were delighted. But not because of politics. The fling was just real-world evidence of a theory they were testing. As the shoe flew, Bush ducked while Iraqi Prime Minister Maliki, who was standing right beside him, barely flinched. The reason, the researchers say, is that we have a dual vision system. Our brains "see" things well before our eyes do. Their report is in the June 11th issue of the journal Current Biology.

The scientists contend that Bush ducked because his brain’s action pathway categorized the trajectory of the shoe as a threat well before his perception pathway began to track its flight. Meanwhile, Maliki realized the shoe wasn’t headed his way and didn’t take evasive action.
Such "Reflex Actions" are common when harmful stimuli are applied to peripheral nerves in extremities. When we touch a hot plate, the heat and sensor signals from the nerves in the area of contact journey to the spinal column, where they trigger a "flinch" signal to the muscles, long before the signal travels up the spinal column to the brain. The reaction is literally thoughtless.

This minimises damage from obvious sources of harm. It's a short-circuit, a product of evolution: a threat that is simple and utterly unambiguous gets dealt with simply, in the minimum possible time, to limit the damage done.

This is a similar effect, but with one important difference: the optic nerves are already directly connected to the brain, which minimises the delay. But even that's not enough. There is unconscious processing within the brain, quite complicated processing, that recognises incoming threats and triggers a rather more complex response than a mere flinch: one involving many muscles, the act of ducking, long before the conscious mind is aware of what is happening.
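For the programmers: the arrangement is something like this little Python sketch (the Trajectory type and all the names are my own invention, not anyone's actual model) - a fast, crude action pathway that can trigger the duck immediately, and a slower perception pathway that only catches up afterwards.

```python
# A rough sketch (all names invented) of the two-pathway idea: a fast
# "action" path that reacts before the slow "perception" path has
# finished working out what the incoming object actually is.

from dataclasses import dataclass

@dataclass
class Trajectory:
    heading_at_me: bool   # crude geometry, available almost immediately
    object_label: str     # detailed identification, available much later

def action_pathway(t):
    # Fast and crude: on a collision course? Duck first, ask questions later.
    return "duck!" if t.heading_at_me else None

def perception_pathway(t):
    # Slow and rich: full recognition and conscious awareness.
    return f"that was {t.object_label}"

def see(t):
    reflex = action_pathway(t)        # available in milliseconds
    if reflex:
        print(reflex)                 # ducking is already under way...
    print(perception_pathway(t))      # ...by the time this arrives

see(Trajectory(heading_at_me=True, object_label="a size 10 shoe"))   # ducks
see(Trajectory(heading_at_me=False, object_label="a size 10 shoe"))  # doesn't
```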

Rather more of our processing power than we like to think is, well, unthinking. Disconnected from conscious thought. Consciousness appears to be a higher-level product of the mind. Moreover, when learning how to walk, or to drive a car, we change the brain in ways that form complex subroutines to handle quite intricate tasks. All the conscious mind has to do is think about the desired result, and the trained reflexes cause the various muscles to contract (moderated cybernetically, with feedback from the nerves and eyes) so that the desired result is accomplished without conscious thought.
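In software terms, that's a feedback loop. Here's a toy Python sketch (names, gain, and numbers all invented for illustration): the "conscious mind" supplies only the goal, and the loop does the rest.

```python
# A sketch of a "trained subroutine" as a simple proportional feedback loop:
# the conscious mind supplies only the goal; the loop nudges the "muscles"
# until the sensed result matches it, with no thought required per step.

def trained_reflex(goal, sense, actuate, gain=0.5, steps=50):
    # Act on the error between where we are and where we want to be.
    for _ in range(steps):
        error = goal - sense()
        actuate(gain * error)

# Toy usage: move a "hand" towards a position the conscious mind chose.
position = [0.0]
def nudge(delta):
    position[0] += delta

trained_reflex(goal=10.0, sense=lambda: position[0], actuate=nudge)
print(round(position[0], 3))   # very close to 10.0, with no "thinking" per step
```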

You could say that when it comes to ducking, consciousness is not all it's quacked up to be.

This reticulation of processing is common in many military systems. Well, common in the ones I've had anything to do with architecting, anyway. You distribute the processing as close to the effectors and sensors as you can. Thus a sensor such as a radar will have an internal track extractor - but it will also report the raw data up the "chain of command", so that more subtle effects can be detected by combining multiple sensors, though this takes longer.
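Something like this Python sketch (all names invented, and vastly simplified compared to any real track extractor): the radar does its own fast, local processing, and also reports the raw plots upward for the slower multi-sensor work.

```python
# A minimal sketch (invented names) of pushing processing out to the sensor:
# the radar extracts tracks locally for fast reaction, but also forwards the
# raw plots up the "chain of command" for slower multi-sensor fusion.

class Radar:
    def __init__(self, name, command_link):
        self.name = name
        self.command_link = command_link   # whatever carries data upward

    def extract_tracks(self, raw_plots):
        # Stand-in for a real track extractor, which would correlate plots
        # across sweeps and filter clutter. Here: keep only strong returns.
        return [p for p in raw_plots if p["strength"] > 0.5]

    def on_sweep(self, raw_plots):
        tracks = self.extract_tracks(raw_plots)            # fast, local
        self.command_link(self.name, raw_plots, tracks)    # slower, upward
        return tracks                                      # usable immediately

# Toy usage: the upward link is just a print here.
radar = Radar("radar-1",
              lambda name, plots, tracks: print(
                  f"{name}: {len(plots)} plots, {len(tracks)} tracks sent up"))
local_tracks = radar.on_sweep([{"strength": 0.9}, {"strength": 0.2}])
```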

Once the "higher brain functions" in the Combat System, which are a composite of automation, manual evaluation, and computer-aided manual evaluation, decide on what to do - "shoot at target X with weapon Y using data from sensor Z" - then commands are given to form a "firing chain", an association of weapon and tracker (and thus target). The "higher brain functions" then have no further role to play, other than to receive reports on how the engagement is progressing. The rest is reflex.

One great advantage you have in making such systems is that you can have backups - so if the primary "higher consciousness" is damaged, you still have some capability through local control. The system is ductile, bending rather than breaking, degrading slowly rather than failing all at once. There's little actual redundancy, though - when things are going well, every bit of processing is used, with no wastage, and you get the best possible performance. The idea is that even when things are going badly, the system should still work 'well enough'.
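The fallback itself is conceptually simple. A sketch, with invented names, of preferring the central system while it answers and dropping back to local control when it doesn't:

```python
# A sketch of the "ductile" behaviour: when the central combat system is
# lost, sensor/weapon pairs drop back to local control - less capable, but
# still in the fight - rather than stopping dead.

def pick_controller(central, local):
    # Prefer the full "higher consciousness" while it answers...
    if central is not None and central.is_alive():
        return central
    # ...otherwise degrade gracefully to the local controller.
    return local

class LocalControl:
    def is_alive(self):
        return True
    def decide(self, tracks):
        # Crude local doctrine: engage the closest track. The central system
        # would weigh many sensors and many weapons before choosing.
        return min(tracks, key=lambda t: t["range"]) if tracks else None

# Toy usage: the central node has been knocked out (None), so local control runs.
controller = pick_controller(central=None, local=LocalControl())
print(controller.decide([{"id": 1, "range": 12.0}, {"id": 2, "range": 5.5}]))
```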

Before the second Terminator movie, I called the two ways of making reliable systems "The Battleship and the Blob". In the first case, you armour-plate the software so that threats can't penetrate. You hope. But eventually a threat will be too great for the armour to withstand, and the system will be damaged badly, possibly fatally. In the second case, even a minor threat will cause some damage. But you can fill it full of holes and, like The Blob, it keeps on coming. Even a threat big enough to shatter any armour plate in existence will just blow great chunks off it; it will still advance, inexorably, just diminished a bit. The difference between Terminator I and Terminator II.

When it comes to data processing in safety-critical systems, the "blob" approach is similar to the way that higher animals work. A case of Art imitating Life.

2 comments:

Anonymous said...

Music!

Anonymous said...

Zoe, you have the most interesting job in the world.