I notice that there are still a lot of people who assume that the brain is some type of general computer that can be modeled with a Turing machine. This cannot simply be assumed; it must be argued for, if not proven. Just because a problem can be solved by a Turing machine, and that problem is also solved by the brain, does not mean that the brain solves it by a Turing-machine-like method. Nor is it clear (at least to me) that problems that are not framed as symbols of some sort, and manipulations of those symbols, can all be solved by a Turing machine. An identity between a Turing machine and a brain has yet to be shown.
We have no reason to believe that brains are digital, that they deal in discrete states analogous to 1s and 0s, true and false, or yes and no. Everything points to the brain dealing with continuous values rather than discrete ones. Nor do we have reason to believe that the brain uses algorithms, pre-formulated, step-wise methods of processing. In fact, the physiology of brain cells is too slow for step-wise methods to handle most tasks. Everything points to massive overlapping loops, covering much of the brain at any instant, which settle quickly into momentarily stable configurations. We should forget about the brain being a Turing-type computer. We can entertain analogue computers or control systems, but not general digital computers.
This should not be a surprise. Take the idea of a general computer. Why would our brains have general facilities rather than being ‘purpose built’? They are the product of evolution; they evolved as part of a slowly evolving body in slowly changing niches. What we need the brain to do, it is likely to do very well; what we don’t need it to do, it will not be able to do. It is very good at pattern recognition and very poor at complex mental mathematics. (We use calculators, but not pattern recognizers.) We can push the envelope in certain new directions, but only by using facilities evolved for something else.
How did it come to be that so many people accepted the computer metaphor in its most literal form? I recently ran across a historical perspective in a discussion of amodal symbols: symbols that are not grounded in any mode such as the senses, actions, situations, or emotions; the sort of symbols a Turing-type machine might use, ungrounded and disembodied symbols, so to speak. Such symbols are very problematic from the biological point of view. The brain is, after all, a biological organ, and our cognition is a biological function.
From Barsalou, L. (2008). Grounded cognition. Annual Review of Psychology, 59, 617–645.
“Perhaps surprisingly, grounded cognition has been the dominant view of cognition for most of recorded history. Nearly all prescientific views of the human mind going back to ancient philosophers (e.g., Epicurus 341–270 B.C.E.) assumed that modal representations and imagery represent knowledge analogous to current simulation views. Even nativists, such as Kant (1787/1965) and Reid (1785/1969), frequently discussed modal images in knowledge (among other constructs).
In the early twentieth century, behaviorists attacked late nineteenth-century studies of introspection, banishing imagery from much of psychology for not being sufficiently scientific, along with other cognitive constructs. When cognitive constructs reemerged during the Cognitive Revolution of the mid-twentieth century, imagery was not among them, probably for two reasons. First, the new cognitivists remembered Watson’s attacks on imagery and wanted to avoid the same criticisms. Second, they were enthralled with new forms of representation inspired by major developments in logic, linguistics, statistics, and computer science. As a result, theories of knowledge adopted a wide variety of amodal representations, including feature lists, semantic networks, and frames.
When early findings for mental imagery were reported in the 1960s, the new cognitivists dismissed and discredited them. Nevertheless, the behavioral and neural evidence for imagery eventually became so overwhelming that imagery is now accepted as a basic cognitive mechanism. …amodal symbols were adopted largely because they provided elegant and powerful formalisms for representing knowledge, because they captured important intuitions about the symbolic character of cognition, and because they could be implemented in artificial intelligence.”