“No, intelligence does not come from a special kind of spirit or matter or energy but from a different commodity, information. Information is a correlation between two things that is produced by a lawful process (as opposed to coming about by sheer chance). We say that the rings of a stump carry information about the age of the tree because their number correlates with the tree’s age (the older the tree, the more rings it has), and the correlation is not a coincidence but is caused by the way trees grow. Correlation is a mathematical and logical concept; it is not defined in terms of the stuff that the correlated entities are made of.
Information is nothing special; it is found wherever causes leave effects. What is special is information processing.”
How the Mind Works, Steven Pinker (italics in original)
Information provides a middle way of understanding human intelligence. It relies neither on a ghost in the machine nor on a special form of fantasy matter that secretes mind. Information theory recognizes that all information needs to be embodied in some configuration of matter, which removes the ghost that has haunted Western philosophy, religion, and science for centuries. It also recognizes that a particular configuration of matter is able to act as a symbol; its physical representation carries meaning, that ethereal spook, by representing a true correlation between real states of affairs.
A symbol can stand for something, like the age of a tree, yet it also has additional physical characteristics, e.g., it absorbs water and reflects light. Now here is the trick, the key piece of the explanatory power of the information theory of mind: imagine we build a machine that is sensitive to the physical characteristics of the symbol. A clever arrangement of light sensors, levers, maybe a magnet or two, and a pen could produce a contraption that makes a mark for every tree ring it encounters. It “reads” the arrangement of matter in one place and “writes” the results into another chunk of matter. Cause and effect unfold in this chain of events, executed by a dumb machine.
In the special second step we decide to interpret the output in terms of the input: we count the pen marks and interpret them as the age of the tree stump. Subtle, isn’t it? Those marks were not directly caused by the growth of any tree, yet they carry an informational correlation. Take the same contraption and scan another nearby, smaller stump. Again marks are made. If we now compare the first and second set of marks we discover the age the original tree was when the second, smaller tree sprouted. Our contraption is a kind of rational machine, capable of drawing true conclusions from true premises.
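The two steps can be sketched in a few lines of code (the function name and the tree ages are my own illustrative assumptions): the machine blindly copies rings into marks, and only our interpretation of the tallies turns them into a conclusion about age.

```python
# Hypothetical sketch of the ring-scanning contraption: the machine knows
# nothing about trees; it only copies one physical pattern (rings) into
# another (pen marks). The interpretation as "age" is supplied by us.

def scan_stump(rings):
    """Make one pen mark per ring encountered -- pure cause and effect."""
    return ["|" for _ in rings]

old_stump = ["ring"] * 50    # assumed: a 50-year-old tree
young_stump = ["ring"] * 20  # assumed: a 20-year-old tree nearby

old_marks = scan_stump(old_stump)
young_marks = scan_stump(young_stump)

# Step two: we interpret the marks. Comparing the two tallies yields a
# true conclusion that neither stump "contains" directly.
age_when_planted = len(old_marks) - len(young_marks)
print(age_when_planted)  # age of the old tree when the young one sprouted
```

The marks themselves are just ink; the difference between the two tallies is a new piece of information that no single scan produced.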
No need for special matter or energy, nothing but the correct arrangement of parts, none of which are overtly rational or intelligent in themselves. The symbol unites the ability of carrying information with the ability to cause things to happen according to that information. In this case the rings correlate with the age of the tree and they trigger the beam of the light scanner. In the case of a neuron the action potential carries information, in that it is either firing or not, and the physical side might be that it terminates in a skeletal muscle cell and causes behavior.
When the output also contains information we have an information processor. In the case of our contraption the marks embody the age of the tree. Alan Turing designed, as a thought experiment, a machine that could produce correspondences between inputs and outputs. He argued that any algorithm can be carried out on one of these Turing machines, a claim now known as the Church-Turing thesis. These rational machines are of course ubiquitous today, now that we have computers everywhere. It is not at all hard for us to understand that it is possible to build a machine that can take input symbols, operate on them, and produce output symbols that “mean” something to us.
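To make the idea concrete, here is a minimal sketch of a Turing-style machine (the rule format and the sample “successor” program are my own illustrative choices, not Turing’s original notation): a tape, a head, and a table of rules of the form (state, symbol) → (new symbol, move, new state).

```python
# A minimal Turing-machine sketch. The tape is sparse; unwritten cells
# read as the blank symbol "_".

def run(tape, rules, state="start", halt="halt"):
    cells = dict(enumerate(tape))
    head = 0
    while state != halt:
        symbol = cells.get(head, "_")
        new_symbol, move, state = rules[(state, symbol)]
        cells[head] = new_symbol
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells)).strip("_")

# Example program: unary successor -- scan right past the 1s, append one more.
successor = {
    ("start", "1"): ("1", "R", "start"),
    ("start", "_"): ("1", "R", "halt"),
}

print(run("111", successor))  # "1111": the machine has "added one"
```

Nothing in the loop knows it is doing arithmetic; the correspondence between input and output is what carries the meaning.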
This pedestrian demonstration of the power of symbols we find in our computing devices is worth contemplating in light of the metaphysical puzzles that seem to obscure our ability to understand consciousness. How can the immaterial interact with the material? One camp insists only the material world is real and consciousness is a kind of illusion. The other camp insists only mind can be primarily real since it is what we most centrally experience and cannot conceivably interact with anything that is not mind. The symbol stands with a foot in both camps.
It is not surprising, for anyone familiar with the history of ideas, that it was not long before the computer metaphor was being applied to the workings of the brain. In Rome, where aqueducts represented cutting-edge technology, the mind and body were full of fluid-like humors. When electricity and steam were the cutting edge of engineering it was easy for Freud and others to liken the workings of body and mind to pressures building up, where good people had to keep a lid on it or else they might blow their top. With computers as the latest technology, their metaphors were applied in turn, likely with no more staying power than their intellectual predecessors.
The computational theory of mind does not postulate that the human brain works like a computer (a mistake numerous early researchers made) but that the human brain is capable of performing computations. The computations being referred to are not just mathematical operations such as addition, subtraction, differentiation, and integration but also logical and comparison operations such as greater-than and less-than.
How can a brain do it? The brain is mostly a vastly complex collection of neurons whose defining characteristic is their ability to make connections with other cells, including other neurons. These connections are made through synapses, using neurotransmitters. This is a subject worth exploring in detail for contemplatives, but for today’s purpose suffice it to say that the neuron is capable of making a basic choice between firing or not firing a signal to its partner neuron. This is somewhat similar to the ones and zeros of computer science, where an element in an electronic circuit can be either on, represented as 1, or off, represented as 0.
In one common arrangement of neuronal connections there are a number of input neurons feeding their signals into a single target neuron. The target neuron is capable of summing these many inputs and only if their input strength reaches a particular level will that target neuron fire its signal in turn. The target signal will then become one input for the next target neuron in the chain in the same way.
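That summing-to-threshold arrangement can be sketched as follows (the weights and threshold are illustrative assumptions; real synaptic strengths are continuous and far more varied):

```python
# A sketch of the "integrate and fire" arrangement: the target neuron
# sums its weighted inputs and fires only if the total reaches a threshold.

def neuron(inputs, weights, threshold):
    """Fire (1) if the weighted sum of the inputs reaches the threshold."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# Three input neurons feeding one target; only enough combined signal fires it.
print(neuron([1, 1, 0], [0.6, 0.6, 0.6], 1.0))  # 1: summed strength 1.2
print(neuron([1, 0, 0], [0.6, 0.6, 0.6], 1.0))  # 0: 0.6 falls short

# The target's output can in turn become one input for the next neuron in the chain.
```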
It is said that the target neuron integrates the synaptic signals. This is like the summation that underlies integration in calculus; animal nervous systems embody an integration capability.
To understand the power of connections in this context it might help to take a moment to look at the simplified logic circuits of a computer’s CPU. The basic neuron toolbox consists of signals that either fire or not, determined by the types of input signals they receive. These functional building blocks are also sufficient to etch rational symbol manipulation into silicon.
The design of the computer’s CPU has taught us about the ability of electrical circuits to act as logical operators. In both computer science and logic these operators include AND, OR, and NOT (the XOR and NAND gates need not concern us here). A truth table is used to illustrate the results of these operators, where 1 stands for on and 0 for off. The logic circuits acquire their ability to act only through the integration of their input signals.
AND: If and only if both input signal A is 1 and input signal B is 1, then the output is 1. In words: if A and B are both on, then signal on; if only A is on, only B is on, or neither is on, then signal off. AND gates can have more than two input signals.
OR: If input signal A is 1 or input signal B is 1, or both, then the output is 1. In words: if A is on, B is on, or both are on, then signal on; only when neither is on does it signal off. (The exclusive reading, “A or B but not both,” is the XOR gate set aside earlier.) OR gates can have more than two input signals.
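The operators can be written out directly, truth table and all (a sketch using the standard inclusive reading of OR, which fires when either input or both are on):

```python
# AND, OR, and NOT as simple functions on 1 (on) and 0 (off).
def AND(a, b): return 1 if a == 1 and b == 1 else 0
def OR(a, b):  return 1 if a == 1 or b == 1 else 0   # inclusive: both on -> on
def NOT(a):    return 0 if a == 1 else 1

# Print the truth table for the two-input gates.
for a in (0, 1):
    for b in (0, 1):
        print(f"A={a} B={b}  AND={AND(a, b)}  OR={OR(a, b)}")
```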
This is probably elementary to many, but it illustrates a very important point. These circuits are so simple they can be wired up on a breadboard in a few minutes, yet they implement logical operations. Let that sink in. These wires, configured in these ways, are capable of some form of “thinking” or “intelligence.” More accurately, they are capable of properly manipulating information symbols, with “properly” defined as the way human logic proceeds in valid thinking.
Nervous systems implement something along the lines of these circuits. Using the building blocks of logic, they can effectively implement if-then-else logic to control behavior.
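A sketch of how that might look (the thresholds and behaviors are hypothetical, chosen only to show the shape of the idea): the same summing unit acts as an AND gate or an OR gate depending only on where its threshold sits, and such gates can drive if-then-else control of behavior.

```python
# The logic lives in the arrangement, not in any special matter: one
# threshold unit becomes different gates as the threshold moves.

def fires(inputs, threshold):
    return 1 if sum(inputs) >= threshold else 0

def AND_neuron(a, b): return fires([a, b], threshold=2)  # needs both inputs
def OR_neuron(a, b):  return fires([a, b], threshold=1)  # either will do

# Hypothetical if-then-else control of behavior built from such units:
def behave(sees_food, is_hungry):
    if AND_neuron(sees_food, is_hungry):  # if both conditions fire...
        return "approach"                 # ...then one behavior
    else:
        return "keep scanning"            # ...else another

print(behave(1, 1))  # approach
print(behave(1, 0))  # keep scanning
```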
Evolution favored nervous systems throughout the animal kingdom because they enhance the repertoire of possible responses to the environment. Remember Pavlov’s dog, which Skinner took as the paradigm of all learning? Pavlov deprived the dog of food and then rang a bell each time he fed it. Eventually the dog would salivate as soon as it heard the bell, whether or not food was forthcoming. These facts were explained as an association having been formed between the auditory signal of the bell and the anticipated relief of hunger. What more could be expected from a ‘dumb animal’?
Instead of the fixed set of responses posited by the older, behaviorist theory of associations between stimulus and response – the definition of crazy is to do the same thing over and over and expect a different outcome – the new paradigm recognizes the same associational behaviors as “multivariate, nonstationary time series analysis (predicting events will occur, based on their history of occurrences).” (Pinker 1997) It is not that we humans are lucky enough to have souls while animals are somehow animated bricks. The mind-matter symbol manipulations are shared by all sentient beings; the differences lie in the number of connections available and how they are structured, not in a difference of kind.
The new model finds a place for the beliefs and desires the behaviorists tossed out. When we watch an insect or an eagle it is not hard to describe their behavior in terms of belief and desire: the eagle wanted to eat and believed the field it was scanning would turn up a meal. The behaviorists objected to the use of belief and desire in a so-called scientific psychology because they could not be seen or measured. Before an understanding of symbols it was not clear how meaning, the contents of beliefs and desires, could be the cause or result of anything. Now we can understand how a chain of symbols might work. The physical properties of a symbol cause processing whose output is another symbol, with further processing of its physical properties, until the target is a muscle cell and behavior results. Or run the chain the other way: a sensory input arrives in the nervous system, transduced from its original physical form – light, sound, heat, pressure, frequency – and starts the symbol chain that might end in a thought, ‘my, the sunset is gorgeous tonight.’
Patterns matter. The same alphabet can be found in every book written in English. It is the pattern of letters and the words they make that turns one book into Shakespeare and another into Newton. It is the same thing in the nervous system where the pattern of how the neurons are connected creates an enormous space of potential information processing procedures. “Minute differences in the details of the connections may cause similarly looking brain patches to implement very different programs.” (Pinker 1997)
The rule for nervous system connections seems to be: use it or lose it. Connections that are used frequently grow stronger while connections used rarely atrophy. We now know that the brain is highly plastic: it is constantly creating and destroying connections as well as adjusting the electrochemical signals the neurons exchange. On one level this is what our contemplative training is all about. Strengthening our skills in meditation is literally reworking our minds.