So Our Minds Work By Exclusion? Not so fast!


Exclusion is inextricable from formal logic, and is obviously thoroughly embedded in human language. But we should be extremely cautious about leaping to the conclusion that the human mind must somehow work by exclusion. There's a great deal of research showing that the human mind is, for want of a better term, very strongly "associative" rather than based on exclusionary or formally logical principles, as will be discussed a bit later. It would also be a mistake to assume that the brain somehow "works by language" or "thinks in words", with collections of neurons performing ratiocination in English, or in some cruder or more fundamental but similar "language" that English can be mapped onto.

Language is indeed one thing human brains can produce and understand, but there's a lot going on inside them, and the portion of the brain involved in producing and understanding language may be relatively small. Language is closely associated with the brain processes and interactions with others that we are conscious of, but that's rather understandable - those parts of our brains dedicated to interactions between individuals, to how we present and explain ourselves, are obviously going to find language useful in those tasks. But the brain evolved before human language did, and there's no reason to leap to the conclusion that some form of language must somehow be the way all regions of the brain communicate with each other, or that language is just the electrical signals between neurons writ large in some way or other. There's every reason to suppose that this isn't the case.

In a way, such hypotheses are reminiscent of crudely "homuncular" views about how we see: as if we see the real world through our eyes, as if those eyes were windows that our brains could somehow peer through and see the real world just as it is. Instead, our brains construct the world we see from sense data. According to the physics of "string theory", should that pan out, our brains may even be leaving out most of the dimensions and complexity of an eleven-dimensional world, since those aren't relevant to daily life, and modeling merely four of its dimensions (including time) for the sake of simplicity and efficiency. Our brains' neurons don't "talk" to each other with words, and there's no reason to suppose that language is the richest possible way to communicate, either - it's only a slim pipeline over which considerable information can pass between individuals, quite slowly. Not the sort of process or mechanism you'd want to build a quick-witted brain from. Language developed from the crude alarm and food calls that are common to anthropoids, and we can easily over-estimate its power and complexity - in any case, neurons aren't really small humans, or anything like that, that talk in subject-predicate form.

Of course, historically, humans have been very tempted to explain and understand the mind by close analogy to the most complex mechanical device they knew of. In medieval times, that was the clock, a marvel of complexity and precision at the time. Whenever the process of human thought was discussed, it was discussed in terms of clockwork, as Daniel Boorstin has detailed in his book The Discoverers. Now, it's the computer that's the tempting analogy. Computers and exclusion - the binary of yes and no - go together very well, and hardly by coincidence, as was said in the page "Why Talk About Meaning as Exclusion?" It's predictably tempting to look at brains as computers because computers come closer to imitating brain functions than previous machinery could. But brains just don't work that way. If they did, the field of artificial intelligence would long ago have met the expectations that researchers in the fifties thought would be achieved within a decade or two. Turing himself estimated that as soon as machines could be built with at least 20 Megs of memory, they would have at least human intelligence - even at the glacial processing speeds of the time. But it turned out that for computers, higher-order calculus is a much easier nut to crack than apparently simpler tasks such as visual pattern recognition.

Now we have some good psychological research to show that however our brains work, it isn't the way our logic machines work. In fact, purely logical operations (and arithmetic) might better be ranked amongst the sorts of information processing that brains are worst at, perhaps in part precisely because we don't look for disconfirmation, as we would naturally be expected to do if brains were exclusion devices. Positive association seems to exert a much more powerful force on the human mind, even when it conflicts with logic, or has even been contradicted.

Some of the most recent such research is discussed in popular form in the ABCNews article, "Knowing Better: Behavioral Puzzles in Business and Diplomacy" by John Allen Paulos, April 3, 2003.

This article discusses "the already classic book Judgment Under Uncertainty" by psychologists Amos Tversky and Daniel Kahneman. Kahneman received the 2002 Nobel prize in economics for this kind of work on human irrationality in decision making and logic. The authors don't show we're completely hopeless, just that our thinking deviates from that of logic machines in some very predictable ways - which goes some way toward showing that we just aren't logic machines to begin with.

To quote just one relevant section from the ABCNews article:

"A famous study by psychologist Peter Wason neatly illustrates how we tend to look only for confirmation of our ideas, seldom for disconfirmation. Wason presented subjects with four cards having the symbols A, D, 3, and 7 on one side and told them that each card had a number on one side and a letter on the other. He then asked which of the four cards needed to be turned over in order to establish the rule: Any card with an A on one side has a 3 on the other. Which cards would you turn over? (The answer is below.)

Answer: Most subjects picked the A and 3 cards. The correct answer is the A and 7 cards."
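The logic of Wason's selection task can be sketched in a few lines of code - a hypothetical illustration, not from the article: a card needs to be turned over only if something on its hidden side could falsify the rule "any card with an A on one side has a 3 on the other."

```python
def can_falsify(visible, hidden_options):
    """Return True if some hidden value on the other side would violate
    the rule: any card with an A on one side has a 3 on the other."""
    for hidden in hidden_options:
        # Work out which face is the letter and which is the number.
        if visible.isalpha():
            letter, number = visible, hidden
        else:
            letter, number = hidden, visible
        # The rule is violated only by an A paired with a non-3.
        if letter == "A" and number != "3":
            return True
    return False

# Hypothetical sets of values the hidden sides might carry.
letters = ["A", "B", "C", "D"]
numbers = ["1", "3", "7", "9"]

cards = {"A": numbers, "D": numbers, "3": letters, "7": letters}
must_turn = [face for face, opts in cards.items() if can_falsify(face, opts)]
print(must_turn)  # ['A', '7']
```

The D card can never violate the rule, and neither can the 3 card - a non-A letter on its back is perfectly consistent with "A implies 3". Only the A and 7 cards can expose a counter-example, which is exactly the disconfirming evidence most subjects fail to look for.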

Other research has shown that learning formal or abstract logical principles may be useless or even counter-productive in helping us to think in logically correct ways about situations we encounter in life - whereas education in, say, medicine, which presents logical puzzles clothed in familiar material guises, does help us to make better and more logical decisions about new situations.

But that doesn't mean that learning a smattering about how exclusionary logic works wouldn't help us in many situations where logic turns out to be more important than mere pattern recognition. Knowing how our language works at a more primitive level can help us to understand the limits of language and logic, and not overestimate the importance of the information we have, for instance. And it may cause us to pay more attention not just to "positive" information, but also to important negative information whose significance we might otherwise overlook or neglect.

That isn't to say that exclusion, or elimination, has nothing to do with how the brain learns or grows - it may be that at the finest level, elimination of connections between neurons (or limits on them) is in fact critical, both in learning and, in more primitive beings, in shaping brains adapted to their environments over many generations. Neural network research is examining such methods of training networks. But the net result of neural activity is a set of cognitive processes that so far don't seem to be engaged in anything very similar to formal logic or even human language: so far as we can tell, they aren't in any very obvious way creating significance by exclusion.
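The idea of elimination at the finest level can be illustrated with a toy sketch - a hypothetical example of my own, not a claim about how real brains or any particular research program work. Here, "eliminating connections" means zeroing out the weakest weights in an artificial neural network layer, a crude analogue of synaptic pruning:

```python
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(size=(4, 4))  # one small layer's connection strengths

def prune(w, fraction=0.5):
    """Eliminate (zero out) the smallest-magnitude fraction of connections,
    keeping only the strongest ones."""
    threshold = np.quantile(np.abs(w), fraction)
    return np.where(np.abs(w) >= threshold, w, 0.0)

pruned = prune(weights)
print(np.count_nonzero(pruned), "of", weights.size, "connections survive")
```

Even in this toy form, the point stands: the pruning shapes which associations the network can express, but the surviving network computes by graded, associative activation - nothing in it resembles predicate logic or words.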

  • NEXT: More on Buddhist philosophy of logic in this context.

  • Back to: A discussion of meaning as exclusion.

  • Back to: An interactive view of deduction...

  • Back to: The sixteen things you can now say...

  • Back to the Interactive Exclusion Diagram (home page).