Saturday, April 5, 2008

a good analogy for trying to figure out how the brain works

imagine finding a TV screen on the ground, and not knowing what it did or how. how could one figure out how it works? one strategy would be to look at each pixel in isolation, see how it responds to various inputs, and then try to determine the "pixel-code", mapping the inputs to statistical regularities of the outputs. this might work. alternately, one could start with the simplest TV one could find that shares the same essential properties, and begin exploring the circuit underlying the behavior (ie, the mechanisms). upon developing the underlying operational principles, ie, the functions that resistors, capacitors, etc. perform, one could then scale up to increasingly complex systems. eventually, one could build up to something like a jumbotron, but it would be ill-advised to study a jumbotron without first understanding a 4" black and white TV.
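(to make the first strategy concrete, here is a toy python sketch of what "determining the pixel-code" might look like; the pretend pixel, the stimulus set, and all the numbers are invented purely for illustration.)

```python
# a toy version of the "pixel-code" strategy: present lots of inputs, record
# one pixel's output on each trial, and summarize the input-output mapping
# statistically. nothing here is a real device; it just illustrates the idea.
import random

def pixel_output(stimulus):
    """stand-in for the unknown device: some hidden rule plus a bit of noise."""
    brightness = {"dark scene": 0.1, "sunset": 0.6, "ski slope": 0.9}[stimulus]
    return brightness + random.gauss(0, 0.05)

stimuli = ["dark scene", "sunset", "ski slope"]
trials = {s: [pixel_output(s) for _ in range(100)] for s in stimuli}

# the "pixel-code": one statistical summary (here, the mean response) per input
pixel_code = {s: sum(v) / len(v) for s, v in trials.items()}
print(pixel_code)
```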

i think a similar argument applies to neuroscience. we could just start sticking electrodes in the brains of primates and humans, and hope that we can figure things out. or we could start with a much simpler system, and try to unravel the basic governing principles at work. this analogy breaks down, however, in a number of places.

first, i think the things that are especially cool about brains are things that humans definitely do, and other animals do to a lesser degree. as the brain becomes less complex, the megacool properties become less pronounced. for instance, here is something especially cool that we do: you tell me the meaning of a word, and i then understand it, possibly forever. it is not clear what the homolog of that is in the animal kingdom. fortunately, other supercool attributes of human cognition do seem to have homologs, for instance, our ability to recognize objects. this is a megahard problem computationally. somehow, however, pretty much all animals have figured out how to do it. it is a necessary condition for behavior, at least at a very coarse level (ie, determining whether objects are predators or prey). so, this fear may be mitigated by studying properties that are conserved evolutionarily.

second, analog circuit elements are relatively simple as compared with neurons. this fear assumes that the fundamental (ie, "atomic") unit of neural computation is a neuron. so, one way to mitigate this fear is to postulate that the fundamental unit is something much simpler, ie, a synapse. while synapses are still much more complicated than analog circuit elements (eg, modeling a synapse "accurately" probably requires several states or dimensions, whereas analog circuit elements only require one), they are certainly closer, and it probably doesn't make much sense to postulate anything more atomic than a synapse. on this view, neurons become somewhat like integrated circuits, and the brain becomes the whole circuit board.
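(to make the "several states or dimensions" claim concrete, here is a rough python sketch comparing a resistor, which is fully described by one number, with a crude short-term-depression synapse model, loosely in the spirit of Tsodyks-Markram-style resource models; the parameter values are made up for illustration, not fit to anything.)

```python
import math

def resistor_response(current, R=100.0):
    """ohm's law: the element is fully described by a single number, R."""
    return current * R  # voltage

def synapse_responses(spike_times, U=0.5, tau_rec=0.8):
    """response amplitude to each spike in a train, with short-term depression.

    x is the fraction of transmitter resources currently available: it is
    partially consumed by each spike and recovers toward 1 with time constant
    tau_rec, so the same input (a spike) produces a different output depending
    on the synapse's recent history.
    """
    x = 1.0
    last_t = None
    amps = []
    for t in spike_times:
        if last_t is not None:
            x = 1.0 - (1.0 - x) * math.exp(-(t - last_t) / tau_rec)  # recovery
        release = U * x       # amplitude of this response
        amps.append(release)
        x -= release          # resources consumed by the spike
        last_t = t
    return amps

print(resistor_response(0.01))                         # same input -> same output, always
print(synapse_responses([0.0, 0.05, 0.1, 0.15, 2.0]))  # amplitudes depress, then recover
```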

third, one could argue that brains are much more general devices than TVs. but maybe that is not true. input to brains comes in several possible forms: visual, auditory, etc. similarly, input to TVs comes in several possible forms: tuners, cable, DVDs, etc. the output of brains also has only a few possibilities: speech, body language, movements, etc. similarly, TVs output only audio and visual signals. but brains seem to have something that TVs don't: internal states. ok, technically, TVs have two internal states: on and off. even if one postulates that different channels correspond to different internal states, the number of possible internal states for a TV pales in comparison to those of a brain, which are innumerable.

so, let's switch the analogy from a TV to a computer. a computer (with all the appropriate dressings like an OS, programs, etc.) has many possible internal states. one may think of an internal state as follows: for a particular input, the output differs depending on the internal state. so, for a computer, when running one program, a particular keystroke may lead to saving a document, whereas in another program, the exact same keystroke will lead to sending the document. in that sense, each program may be considered to correspond to a different internal state. and yet, computers are still insufficient, as the number of internal states for a computer is discrete and finite (eg, roughly one per program). however, in brains, the number of internal states may not be finite, and is certainly not discrete. for instance, if i am in a relatively good mood and i get hit by a car, it is not all that bothersome; whereas if i were in a bad mood, it might be infuriating.
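(here is a toy python sketch of what i mean by an internal state: the same input maps to different outputs depending on the state, which is discrete for the computer and continuous for the "mood" variable; the program names and numbers are invented for illustration.)

```python
def computer_response(keystroke, active_program):
    """discrete internal states: roughly one per program."""
    if keystroke == "ctrl+s":
        if active_program == "text editor":
            return "document saved"
        if active_program == "mail client":
            return "message sent"
    return "nothing happens"

def brain_response(event, mood):
    """a continuous internal state: mood can be any value in [0, 1],
    and the response to the same event varies smoothly with it."""
    annoyance = (1.0 - mood) * 10.0  # worse mood -> stronger reaction
    return f"{event}: annoyance level {annoyance:.1f} out of 10"

print(computer_response("ctrl+s", "text editor"))    # document saved
print(computer_response("ctrl+s", "mail client"))    # message sent
print(brain_response("got hit by a car", mood=0.9))  # barely bothered
print(brain_response("got hit by a car", mood=0.1))  # infuriated
```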

thus it seems as if even analogizing with the most sophisticated devices that humans understand (ie, computers) is insufficient, as the human brain - at the level of internal states - is incomparably more complex. nonetheless, this seems like the best analogy that we can come up with, so we must work from there.
