A first bit of knowledge

Deric Bownds (here) has a post on the ideas of Ullman et al. in a recent PNAS paper, From simple innate biases to complex visual concepts. Here is the paper’s abstract:

Early in development, infants learn to solve visual problems that are highly challenging for current computational methods. We present a model that deals with two fundamental problems in which the gap between computational difficulty and infant learning is particularly striking: learning to recognize hands and learning to recognize gaze direction. The model is shown a stream of natural videos and learns without any supervision to detect human hands by appearance and by context, as well as direction of gaze, in complex natural scenes. The algorithm is guided by an empirically motivated innate mechanism—the detection of “mover” events in dynamic images, which are the events of a moving image region causing a stationary region to move or change after contact. Mover events provide an internal teaching signal, which is shown to be more effective than alternative cues and sufficient for the efficient acquisition of hand and gaze representations. The implications go beyond the specific tasks, by showing how domain-specific “proto concepts” can guide the system to acquire meaningful concepts, which are significant to the observer but statistically inconspicuous in the sensory input.

 

This research seems to illuminate the problem of newborn learning – how much knowledge of the world is innate and how much is learned? For example, do we need an inherited language module to learn language? In the case of hands and gaze, babies seem to need only a very simple concept/motivation to begin their learning – the concept of a ‘mover’ and the motivation to follow movers. A very small input of innate knowledge can start a baby off in learning, if it is the right little bit.
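To make the ‘mover’ idea concrete, here is a minimal sketch of how such an event might be flagged in a video stream. It is my own toy illustration, not Ullman et al.’s algorithm: the frame format, the thresholds, and the crude one-pixel ‘contact’ test are all assumptions.

```python
# Toy "mover" detector: a previously stationary patch starts changing
# right after contact with an already-moving patch. Only a sketch of the
# idea described in the abstract, not the paper's method.
import numpy as np

def moving_mask(prev, curr, thresh=0.1):
    """Pixels whose intensity changed by more than `thresh` between frames."""
    return np.abs(curr - prev) > thresh

def mover_events(frames, thresh=0.1):
    """Return frame indices where pixels that were still in the previous
    step begin to change while adjacent to pixels that were already moving."""
    events = []
    for t in range(2, len(frames)):
        was_moving = moving_mask(frames[t - 2], frames[t - 1], thresh)
        now_moving = moving_mask(frames[t - 1], frames[t], thresh)
        newly_moving = now_moving & ~was_moving
        # Grow the old moving mask by one pixel in each direction as a
        # crude stand-in for "contact" between the two regions.
        contact = np.zeros_like(was_moving)
        contact[1:, :] |= was_moving[:-1, :]
        contact[:-1, :] |= was_moving[1:, :]
        contact[:, 1:] |= was_moving[:, :-1]
        contact[:, :-1] |= was_moving[:, 1:]
        if np.any(newly_moving & contact):
            events.append(t)
    return events
```

In the paper such events serve as an internal teaching signal for learning hand and gaze detectors; here they are merely flagged.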

 

Not seeing the trees for the forest

Here is an interesting abstract. It is from a paper by Poljac, de-Wit, and Wagemans in the Journal of Cognition, Perceptual wholes can reduce the conscious accessibility of their parts.

Humans can rapidly extract object and category information from an image despite surprising limitations in detecting changes to the individual parts of that image. In this article we provide evidence that the construction of a perceptual whole, or Gestalt, reduces awareness of changes to the parts of this object. This result suggests that the rapid extraction of a perceptual Gestalt, and the inaccessibility of the parts that make up that Gestalt, may in fact reflect two sides of the same coin whereby human vision provides only the most useful level of abstraction to conscious awareness.

This is exactly what is expected of a process that gives awareness of an integrated model of the world – not awareness of the perceptions that were used to create the model.

Transparency

Thomas Metzinger wrote an outline of his book, ‘Being No One’, which puts his theory of consciousness in a very brief, compact form. He puts forward a list of constraints that any system must meet to be conscious. The first three constraints give a simple form of consciousness, which he then elaborates with further constraints. Anything less than the first three will not produce conscious experience.

Constraint 1 is globality: all the contents of consciousness are found in the form of a single globally available integrated world-model. Constraint 2 is presence: the experience of consciousness is as a present ‘now’ in a flow of time. Our experience is being now in the world. Constraint 3 is transparency: consciousness does not include awareness of the process that created it; the mechanism is transparent.

“the most elementary form of conscious experience conceivable: The presence of a world. The phenomenal presence of a world is the activation of a coherent, global model of reality (Constraint 1) within a virtual window of presence (Constraint 2), a model that cannot be recognized as a model by the system generating it within itself (Constraint 3)… Phenomenal selfhood … it is a function realized by a lack of information. We do not experience the contents of our self-consciousness as the contents of a representational process, but simply as ourselves, living in the world right now.”

It is the transparency that makes the experience seem direct when it is in fact very indirect. It does not feel as though the ‘world’ is constructed, that the ‘now’ is constructed, or that the ‘self’ is constructed.

Here is a philosopher who has successfully turned his back on introspection as a source of knowledge about the nature of consciousness. I cannot state too strongly how satisfying this theory is to me. It fits so well with what I think about consciousness, and I will return to it in future posts.

It is worth reading this precis but be warned – this is a German philosopher writing in English and condensing an extremely long book. There are some hum-dingers of sentences.

Note: the original link was wrong and has been corrected. Also, there is an alternative link in a comment from a helpful reader – thanks.

Thomas Metzinger (2005). Precis of ‘Being No One’. Psyche: An Interdisciplinary Journal of Research on Consciousness, 11 (5). http://www.theassc.org/files/assc/2608.pdf

Space perception is hard-wired

Science Daily has a report on an investigation of the animal sense of direction (here). R. Langston found that baby rats have a space map before they can see or navigate outside the nest.

The research team implanted miniature sensors in rat pups before their eyes had opened (and thus before they were mobile). That enabled the researchers to record neural activity when the rat pups left the nest for the first time to explore a new environment.

The researchers were not only able to see that the rats had working navigational neurons right from the beginning, but they were also able to see the order in which the cells matured.

The first to mature were head direction cells. These neurons are exactly what they sound like — they tell the animal which direction it is heading, and are thought to enable an internal inertia-based navigation system, like a compass. “These cells were almost adult-like right from the beginning,” Langston says.

The next cells to mature were the place cells, which are found in the hippocampus. These cells represent a specific place in the environment, and in addition provide contextual information — perhaps even a memory — that might be associated with the place. Last to mature were grid cells, which provide the brain with a geometric coordinate system that enables the animal to figure out exactly where it is in space and how far it has travelled. Grid cells essentially anchor the other cell types to the outside world so that the animal can reliably reproduce the mental map that was made last time it was there.

It has been assumed by many, for a long time, that our 3D space perception is hard-wired rather than gained from experience of space. This and similar research seem to confirm that assumption.
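As an aside on what a ‘geometric coordinate system’ could mean computationally, here is a minimal sketch – entirely my own toy, not the study’s model – of a 1D position encoded as phases relative to several grid spacings, with a brute-force decoder. Real grid cells tile 2D space with hexagonal lattices; the spacings and ranges below are illustrative assumptions.

```python
# Toy grid-like code: several coarse periodic codes jointly pin down one place.
import numpy as np

spacings = np.array([3.0, 4.0, 5.0])           # toy grid-module periods

def encode(position):
    """Phase of the position within each grid module (0..1)."""
    return (position % spacings) / spacings

def decode(phases, max_position=50.0, step=0.01):
    """Find the position whose phases best match the code."""
    candidates = np.arange(0.0, max_position, step)
    codes = (candidates[:, None] % spacings) / spacings
    d = np.abs(codes - phases)
    d = np.minimum(d, 1.0 - d)                 # circular distance between phases
    return candidates[np.argmin(d.sum(axis=1))]

code = encode(17.3)
print(decode(code))                            # close to 17.3
```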

No voters

There is an interesting post by J. Lehrer in Frontal Cortex (here). He examines the metaphor of consciousness being the result of a ‘vote’.

Like Crick and Koch, I believe our head holds a raucous parliament of cells that endlessly debate what sensations and feelings should become conscious. These neurons are distributed all across the brain, and their firing unfolds over time. This means that we are not a place: we are a process. As the influential philosopher Daniel Dennett wrote, our mind is made up “of multiple channels in which specialist circuits try, in parallel pandemoniums, to do their various things, creating Multiple Drafts as they go.” What we call reality is merely the final draft. (Of course, the very next moment requires a whole new manuscript.)

And yet, and yet… There is the problem of the election. If this blink of conscious perception is a vote, then where is the voter? We can disguise the mystery with euphemisms (top-down attention, executive control, etc.) but the mystery still exists, as mysterious as ever. We deny the ghost, but still rely on models, metaphors and analogies in which the ghost controls the machine.

The problem, as I see it, is that the election is not the right metaphor for the mechanism. If things are visualized in a sequential way, it is difficult to lose the ghost. First, imagine two areas of neurons organized as similar maps – like the map of the retina in the thalamus and the one or more maps of the retina in the cortex – and enlarge the number of maps to cover all the things that might be in the content of consciousness. This we suspect exists. Then imagine that the neurons in one area communicate with those at similar map positions in the other area, and vice versa, giving feedback loops. This also we suspect exists, between the thalamus and the cortex and between separate areas of the cortex. Further, imagine that these parallel loops between two versions of the same map type are a bit sloppy, so that there is a good deal of overlap. Now we have a massive set of parallel, overlapping feedback loops. This resembles not a digital computer but an enormous analogue computer. When the input to such a network changes, there would be a short period of instability and then it would settle down to a stable state. This would be the ‘best fit scenario’, the ‘lowest energy configuration’, the ‘consistent perception’, and so on. As a general idea, this could be thought of as an ‘election’ without the need for ‘voters’.
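To make the ‘settling’ idea concrete, here is a minimal sketch of an attractor network of the Hopfield type, one standard way of making a massively parallel feedback network relax into a low-energy stable state. It is my own illustration, not a model from Lehrer, Crick and Koch, or Dennett; the network size, the stored patterns and the update schedule are all assumptions.

```python
# Toy attractor network: a noisy, ambiguous input settles into one of the
# stored stable states without any unit "counting votes".
import numpy as np

rng = np.random.default_rng(0)

n = 64                                        # number of units (toy size)
patterns = rng.choice([-1, 1], size=(3, n))   # three stored "interpretations"

# Hebbian weights: each stored pattern becomes a low-energy stable state.
W = (patterns.T @ patterns) / n
np.fill_diagonal(W, 0)

def settle(state, steps=500):
    """Update one randomly chosen unit at a time until things stop changing."""
    state = state.copy()
    for _ in range(steps):
        i = rng.integers(n)
        state[i] = 1 if W[i] @ state >= 0 else -1
    return state

# An ambiguous input: a corrupted copy of stored pattern 0.
noisy = patterns[0].copy()
flipped = rng.choice(n, size=10, replace=False)
noisy[flipped] *= -1

final = settle(noisy)
print("overlap with stored pattern 0:", (final @ patterns[0]) / n)  # typically near 1.0
```

The point of the sketch is only the shape of the dynamics: the stable state simply emerges from the loops, with no voter anywhere in the machinery.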

More Fuster theory

Fuster’s theory is so interesting that I am posting on it again (see previous post). R. Cabeza gives a good overview of the cognit theory and points to its weaknesses. If you want to look at Cabeza’s original review, it can be found (here), down the list, under “(2004) Networks of the Brain Unite”.

To characterize the cognitive structure of cortical networks, Fuster introduces the term cognit. At the cognitive level, a cognit is an item of knowledge, and at the neural level, an assembly of neurons and their connections. Cognitive functions consist of information transactions within and between cognits. While it is often assumed that networks underlying cognitive functions (e.g., attention network, memory network), involve different brain regions, Fuster proposes that the same cognits are part of different networks, and hence, that “it is the cognits and their networks that have topographical specificity, not the functions that use them”. This idea agrees with a point we and others have made regarding functional neuroimaging data: the same brain regions can be activated by different cognitive functions.

Fuster explains almost every cognitive phenomenon in terms of cognits, allowing him to cover so much cognitive neuroscience in so little space. “Perception is the activation through the senses of a posterior cortical network, a perceptual cognit…”. Perceptual categories are mediated by high-level cognits that “pool attributes from widely dispersed cognits at lower levels.” Action involves high-level executive cognits feeding into motor cognits. Stored memories are cortical networks just like cognits. “To retrieve a memory is to reactivate the network that represents it”. Priming is the “preactivation of a memory network”. Attentional control involves enhancing the activation of some cognits and suppressing the activation of others. Top-down attention is “feedback from higher cognits upon lower ones”. Working memory is sustained activity of a cognit recently activated for executive function. “Linguistic representations essentially consist of cognits, and linguistic operations such as syntax, comprehension, reading, and writing consist of neural transactions within and between cognits”. Reasoning is the “matching of incoming temporal patterns to those patterns inherent in subnetworks that represent specific long-term facts”. Creativity is the creation of new cognits out of old ones, and consciousness is the activation of a cortical network beyond a certain threshold.
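As a rough illustration of how much work this single device does, here is a minimal sketch – my own toy, not code from Fuster or Cabeza – of cognits as overlapping sets of neurons, with retrieval as reactivation, priming as pre-activation, and attention as enhancement or suppression. The cognit names, neuron ids and gains are made-up assumptions.

```python
# Toy cognits: named, freely overlapping sets of neuron ids.
from collections import defaultdict

cognits = {
    "cup":       {1, 2, 3, 4},
    "coffee":    {3, 4, 5, 6},        # shares neurons with "cup"
    "grasp-cup": {4, 7, 8},           # an executive cognit sharing neuron 4
}

activation = defaultdict(float)        # neuron id -> activation level

def activate(name, gain=1.0):
    """Retrieval/priming: (re)activate every neuron in a cognit."""
    for neuron in cognits[name]:
        activation[neuron] += gain

def attend(enhance, suppress, gain=0.5):
    """Attentional control: boost some cognits, damp others."""
    for name in enhance:
        activate(name, +gain)
    for name in suppress:
        activate(name, -gain)

activate("coffee")                     # perceiving coffee...
attend(enhance=["cup"], suppress=["grasp-cup"])
# Neuron 4 now carries input from several overlapping cognits at once.
print(sorted(activation.items()))
```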

Using a single explanatory device for all cognitive functions yields a cognitive neuroscience model that is both elegant and parsimonious. The downside is that Fuster’s model cannot easily account for cognitive processes with clear cortical modules, such as face processing. … Another concept allowing Fuster to integrate different cognitive neuroscience domains is the perception-action cycle. This cycle is the extension to cortical processes of the basic biological mechanism in which sensory stimuli determine motor reactions that change the environment, leading to new stimuli, and so on. In Fuster’s view, this cybernetic cycle has been expanded in humans to include speech, reasoning, and executive processes, but remains the main causal mechanism of behavior and neural function. The complexity of human cortical function can be dramatically simplified by assuming that there are two main functional regions of the cortex, sensory (occipital, temporal, and parietal lobes) and motor (frontal lobes). Sensory signals and behavioral responses are separated in time, and hence, bridging time is one function of the perception-action cycle, particularly the frontal lobes.

Although most cognitive neuroscientists would agree with the perception-action cycle in principle, few would take it as far in accounting for higher-order cognitive functions. Fuster subdivides cognitive processes into perceptual and executive forms. Perceptual memory is memory acquired through the senses, including semantic memory, whereas executive memory is memory for sequences of behavior. Perceptual attention includes top-down modulation of sensory processing, whereas executive attention corresponds to what are usually called executive control processes. In the language domain, lexical information flows from the posterior perceptual cortex to the frontal executive cortex, where it is integrated into speech or writing. The development of intelligence in children is also described as a progressive expansion of the perception-action cycle. The perception-action cycle provides a wonderful unifying principle, but leads to implications that not every cognitive neuroscientist would accept. … Although some implications of the perception-action cycle may be controversial, the most unorthodox aspect of Fuster’s model is probably his unitarian memory view. Most researchers in cognitive neuroscience of memory assume the existence of multiple memory systems; some assume five systems (working, episodic, semantic, and procedural memory, and priming), some assume three (working, declarative, and nondeclarative memory), but almost everybody assumes at least two systems (working and long-term memory). … Fuster can disregard the explicit/implicit distinction because he does not believe that consciousness determines the neural correlates of memory: “In memory retrieval, the degree of conscious awareness may differ greatly, but conscious awareness per se defines neither the network nor the process of its reactivation”. More generally, Fuster does not seem to believe that consciousness plays a causal role in cognition. As he says in the final page, “consciousness is an epiphenomenon of activity in a shifting neural substrate”.

Fuster’s theory of cognits

J. Fuster has put forward a new model of thought based on the idea of memory networks he calls cognits. Here is the abstract from a recent paper, Cortex and memory: emergence of a new paradigm.

Converging evidence from humans and nonhuman primates is obliging us to abandon conventional models in favor of a radically different, distributed-network paradigm of cortical memory. Central to the new paradigm is the concept of memory network or cognit – that is, a memory or an item of knowledge defined by a pattern of connections between neuron populations associated by experience. Cognits are hierarchically organized in terms of semantic abstraction and complexity. Complex cognits link neurons in noncontiguous cortical areas of prefrontal and posterior association cortex. Cognits overlap and interconnect profusely, even across hierarchical levels (heterarchically), whereby a neuron can be part of many memory networks and thus many memories or items of knowledge.

And an abstract from a somewhat earlier paper, The cognit: a network model of cortical representation.

The prevalent concept in modular models is that there are discrete cortical domains dedicated more or less exclusively to such cognitive functions as visual discrimination, language, spatial attention, face recognition, motor programming, memory retrieval, and working memory. Most of these models have failed or languished for lack of conclusive evidence. In their stead, network models are emerging as more suitable and productive alternatives. Network models are predicated on the basic tenet that cognitive representations consist of widely distributed networks of cortical neurons. Cognitive functions, namely perception, attention, memory, language, and intelligence, consist of neural transactions within and between these networks. The present model postulates that memory and knowledge are represented by distributed, interactive, and overlapping networks of neurons in association cortex. Such networks, named cognits, constitute the basic units of memory or knowledge. The association cortex of posterior (post-rolandic) regions contains perceptual cognits: cognitive networks made of neurons associated by information acquired through the senses. Conversely, frontal association cortex contains executive cognits, made of neurons associated by information related to action. In both posterior and frontal cortex, cognits are hierarchically organized. At the bottom of that organization – that is, in parasensory and premotor cortex – cognits are small and relatively simple, representing simple percepts or motor acts. At the top of the organization – in temporo-parietal and prefrontal cortex – cognits are wider and represent complex and abstract information of perceptual or executive character. Posterior and frontal networks are associated by long reciprocal cortico-cortical connections. These connections support the dynamics of the perception-action cycle in sequential behavior, speech, and reasoning.

This work caught my attention in a posting to the blog The Quantum Lobe Chronicles by W. Lu. (here)

Although the modular modeling of the brain has utterly failed due to a lack of conclusive evidence, many neuroscientists continue to maintain this antiquated view… but why? Put quite simply, there was nothing better. However, thanks to Fuster, a new paradigm is emerging…
Introducing the cognit network model. It postulates that memory and knowledge are represented by interactive, distributed, and overlapping networks of neurons in association cortices.
The posterior-post-rolandic association cortex contains perceptual cognits and the frontal association cortex contains executive cognits. The prefrontal and posterior association cortices are linked by complex cognits in a hierarchical order. The parasensory and premotor cortex, found at the bottom of the hierarchy, contain relatively simple and small cognits which represent motor acts or simple percepts. At the top of the hierarchy is the temporo–parietal and prefrontal cortex containing larger cognits representing complex and abstract information of perception and executive control. The long reciprocal cortico–cortical connections between the posterior and frontal networks support sequential behavior, speech, and reasoning.
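The perception-action cycle mentioned in both excerpts can also be put in schematic form. The sketch below is my own illustration, not code from Fuster or Lu: a posterior ‘perceptual’ hierarchy feeds a frontal ‘executive’ hierarchy, whose output changes a toy environment and so produces the next stimulus. The level names and the string-building are purely illustrative assumptions.

```python
# Toy perception-action cycle: posterior hierarchy -> frontal hierarchy ->
# environment -> new stimulus, repeated for a few turns.

def perceive(stimulus):
    """Posterior hierarchy: simple percepts feed more abstract ones."""
    percept = f"percept({stimulus})"           # parasensory level
    category = f"category({percept})"          # temporo-parietal level
    return category

def act(category, goal):
    """Frontal hierarchy: abstract plans feed concrete motor acts."""
    plan = f"plan({category}, goal={goal})"    # prefrontal level
    action = f"motor({plan})"                  # premotor level
    return action

def environment(action, step):
    """The action changes the world, producing the next stimulus."""
    return f"stimulus_{step}_after_{action}"

stimulus, goal = "stimulus_0", "reach"
for step in range(1, 4):                       # a few turns of the cycle
    category = perceive(stimulus)              # posterior cognits activate
    action = act(category, goal)               # executive cognits respond
    stimulus = environment(action, step)       # the world closes the loop
    print(step, stimulus)
```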

Morsella 1

I keep an eye on the Less Wrong site because it often prods me into different ways of looking at things. Recently I ran across a reference there to a P. Watts post on a paper by E. Morsella, ‘The Function of Phenomenal States: Supramodular Interaction Theory’. (here)

Morsella gives an interesting list of consciousness theories.

… contemporary findings in fields as diverse as cognitive psychology, social psychology, and neuropsychology have demonstrated that, contrary to what our subjective experience leads us to believe, many of our complex behaviors and mental processes can occur without the guidance of phenomenal processing. That is, they can occur automatically, determined by causes far removed from our awareness. … It seems that the processes that once served as the sine qua non of choice and free will – goal pursuit, judgment, and social behavior – can occur without conscious processes, raising again the thorny question, What is consciousness for?…

Regarding the function of these states, many hypotheses and conjectures have been offered. For example, Block (1995) claimed that consciousness serves a rational and non-reflexive role, guiding action in a non-guessing manner; and Baars (1988, 2002) has pioneered the ambitious conscious access model, in which phenomenal states integrate distributed neural processes. Others have stated that phenomenal states play a role in voluntary behavior (Shepherd 1994), language (Banks 1995, Carlson 1994, Macphail 1998), theory of mind (Stuss & Anderson 2004), the formation of self (Greenwald & Pratkanis 1984), cognitive homeostasis (Damasio 1999), the assessment and monitoring of mental functions (Reisberg 2001), semantic processing (Kouider & Dupoux 2004), the meaningful interpretation of situations (Roser & Gazzaniga 2004), and simulations of behaviour and perception (Hesslow 2002).

A recurring idea in recent theories is that phenomenal states somehow integrate neural activities and information-processing structures that would otherwise be independent… This notion, here referred to as the integration consensus, has now resurfaced in diverse areas of research… Many of these theories speak of a central information exchange, where dominant information is distributed globally… Regarding the integration consensus, a critical issue remaining pertains to which kinds of dissemination require phenomenal states and which kinds do not.

Morsella’s theory is an elaboration of integration theories. Why and under what circumstances is integration required?

…the difference between the two kinds of processes (conscious and unconscious) cannot simply be one of controllability, for reflexes are controlled, sometimes in highly sophisticated and dynamic ways. In addition, the difference cannot simply be one of complexity because reflexive processes can be highly complex but unconscious… Faced with these difficulties, perhaps it is then fair to conclude that conscious processes, unlike reflexes, are consciously controlled, but this obviously provides nothing more than a circular explanation for why the two kinds of processes are different.

… I propose that the difference between conscious and unconscious processes lies in the kinds of information that have to be taken into account in order to produce adaptive behavior. Whenever the most adaptive response entails considering certain different kinds of information, phenomenal states are called into play… I review the task demands of some representative conscious and unconscious conflicts…

More on his theory in the next post.

Awareness of the internal

There is obviously a difference between the access that I have to my own body and the access that I have to other people’s bodies. When I perform an action, I get privileged access to the sensory consequences of that action from proprioception, from the nerves in the interior of the body. But I pay very little attention to these signals from my insides.

The brain clearly uses this information; it is very important for making our actions smooth, accurate, fast and so on, but it does not seem to be very important for our consciousness. In essence, we do not need to be continually aware of our posture, muscle tone, stomach movements or the pressure of an arm against a table, and so hours can go by without conscious awareness of any internal ‘feelings’ of our body. There is even a homeostatic system that registers things like blood sugar level, body temperature, blood oxygen levels and other important indications of our body’s biochemical health. We do not call this a ‘sense’ because it simply does not enter our awareness at all, ever. It is important, though, as is felt in the extreme panic and all-consuming drives that result from something like lack of air.

Touch is different: any change in touch that is even in the tiniest way unexpected comes into conscious awareness. In this it resembles sight and hearing. The result of this difference between external and internal senses is that the spotlight of consciousness is much more likely to fall on the world outside our skins than on the events inside the skin. Touch is usually about the outside, not the inside. We have a model of the world and of ourselves in it, but as far as consciousness is concerned, our own bodies are just sketched in, with far less detail than is available.

What is the exception to this ignoring of the internal? Why, pain of course – the body’s alarm bell saying ‘pay attention to me’. If I want to get pain out of my consciousness without drugs, it takes real skill and effort, a sort of self-hypnosis. And with pain come all the little internal ‘feelings’ from places near the source of the pain.

Although the brain has a good deal of information about the rest of the body, and presumably uses this information in predictive modeling of the body in the world, this internal part of the predictive model is not often an important part of our conscious awareness. It can be otherwise, of course: an athlete, a dancer or a tight-rope walker probably has a lot more awareness of their internal workings than the rest of us – because it has become important.

3D revisited

A comment to the post Time and Space started me thinking about whether our perception is more 2D and less 3D than we realize.

The following excerpt is slightly off the subject of the comment, but still interesting in this regard. It is from a Scientific American interview by Lehrer of Sue Barry, who learned to see stereoscopically after 40 years of seeing two-dimensionally. (here)

LEHRER: What was it like to see the world in 3-D? Could you describe your first reactions?

BARRY: Many people tell me that the world looks about the same to them whether they look with one eye or with two. They don’t think stereovision is all that important. What they don’t realize is that their brain is using a lifetime of past visual experiences to fill in the missing stereo information. Seeing in 3-D provides a fundamentally different way of seeing and interpreting the world than seeing with one eye. When I began to see in stereo, it came as an enormous surprise and a great gift.

For the first time, I could see the volumes of space between different tree branches, and I liked immersing myself in those inviting pockets of space. As I walk about, leaves, pine needles, and flowers – even light fixtures and ceiling pipes – seem to float on a medium more substantial than air. Snow no longer appears to fall in one plane slightly in front of me. Now, the snowflakes envelop me, floating by in layers and layers of depth. It’s been seven years since I gained stereovision, but ordinary views like these still fill me with a deep sense of wonder and joy.

Her description implies that although she had not automatically perceived three dimensions in her life, she definitely thought in 3D. She did not have to learn how to use this extra ingredient in her perception although it was definitely novel and surprising.

Another recent research report in ScienceDaily gives indications of where and how 3D calculations are done. (here)

“They found, surprisingly, that 3-D motion processing occurs in an area in the brain—located just behind the left and right ears—long thought to only be responsible for processing two-dimensional motion (up, down, left and right). This area, known simply as MT+, and its underlying neuron circuitry are so well studied that most scientists had concluded that 3-D motion must be processed elsewhere. Until now…

For the study, Huk and his colleagues had people watch 3-D visualizations while lying motionless for one or two hours in an MRI scanner fitted with a customized stereovision projection system…The fMRI scans revealed that the MT+ area had intense neural activity when participants perceived objects (in this case, small dots) moving toward and away from their eyes. Colorized images of participants’ brains show the MT+ area awash in bright blue…

The tests also revealed how the MT+ area processes 3-D motion: it simultaneously encodes two types of cues coming from moving objects… There is a mismatch between what the left and right eyes see. This is called binocular disparity… For a moving object, the brain calculates the change in this mismatch over time. Simultaneously, an object speeding directly toward the eyes will move across the left eye’s retina from right to left and the right eye’s retina from left to right. “The brain is using both of these ways to add 3-D motion up,” says Huk. “It’s seeing a change in position over time, and it’s seeing opposite motions falling on the two retinas.”
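To spell out the two cues in the excerpt, here is a minimal sketch – my own illustration rather than the study’s analysis – for one dot tracked in both eyes across two frames: it computes the change in binocular disparity over time and checks whether the two half-images are drifting in opposite directions. The coordinates, units and sign conventions are assumptions chosen only for the example.

```python
# Two toy cues for motion in depth: change of disparity over time, and
# opposite drift of the left- and right-eye images.

def motion_in_depth_cues(left_t0, left_t1, right_t0, right_t1, dt=1.0):
    # Cue 1: change of binocular disparity (left minus right position) over time.
    disparity_t0 = left_t0 - right_t0
    disparity_t1 = left_t1 - right_t1
    disparity_rate = (disparity_t1 - disparity_t0) / dt

    # Cue 2: inter-ocular velocity difference -- for an object heading
    # straight at the eyes, the two half-images drift in opposite directions.
    v_left = (left_t1 - left_t0) / dt
    v_right = (right_t1 - right_t0) / dt
    opposite_motion = v_left * v_right < 0

    return disparity_rate, (v_left, v_right), opposite_motion

# A dot approaching the viewer: the two half-images drift apart, so the
# disparity grows while the two velocities have opposite signs.
print(motion_in_depth_cues(left_t0=0.0, left_t1=0.5,
                           right_t0=0.0, right_t1=-0.5))
```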