Archive for January 2010

Prediction as intent


A report in Science, Movement Intention after Parietal Cortex Stimulation in Humans, by M. Desmurget and others, has the following summary:

Parietal and premotor cortex regions are serious contenders for bringing motor intentions and motor responses into awareness. We used electrical stimulation in seven patients undergoing awake brain surgery. Stimulating the right inferior parietal regions triggered a strong intention and desire to move the contralateral hand, arm, or foot, whereas stimulating the left inferior parietal region provoked the intention to move the lips and to talk. When stimulation intensity was increased in parietal areas, participants believed they had really performed these movements, although no electromyographic activity was detected. Stimulation of the premotor region triggered overt mouth and contralateral limb movements. Yet, patients firmly denied that they had moved. Conscious intention and motor awareness thus arise from increased parietal activity before movement execution.

So the parietal region is involved in the conscious experience of intention and desire to move (i.e. the will to move) and in the conscious experience of having moved. It is not involved in the movement itself. On the other hand, the premotor region is involved in the movement's execution but not in the conscious experience of the movement.

The key here may be that the construction of conscious experience is a projection in time of what will be happening later, at the time of the experience. The construction process would therefore need access to motor programs as they are being created (or even considered), so as to predict and project the sensory effects of the action before it has occurred.
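A crude way to picture this dissociation (a toy sketch of my own, not anything proposed in the paper) is to treat the felt experience of moving as the predicted sensory consequence of an intention, computed separately from the machinery that actually executes the movement:

```python
# Toy illustration of the dissociation described above (my own sketch, not the
# authors' model): the felt experience of moving is taken to be the *predicted*
# sensory outcome built from a motor intention, computed separately from the
# execution of the movement itself.

def parietal_stage(intention):
    """Build the predicted sensory consequence of an intended movement."""
    return f"felt experience of {intention}" if intention else None

def premotor_stage(motor_program):
    """Execute the movement (stands in here for actual muscle/EMG activity)."""
    return f"EMG activity for {motor_program}" if motor_program else None

def trial(intention=None, motor_program=None):
    experience = parietal_stage(intention)    # what the subject reports feeling
    movement = premotor_stage(motor_program)  # what the body actually does
    return experience, movement

# 'Stimulating' the parietal stage: prediction without execution,
# so the subject feels they moved although nothing happened.
print(trial(intention="lift left hand"))
# ('felt experience of lift left hand', None)

# 'Stimulating' the premotor stage: execution without the prediction,
# so the body moves but the subject denies having moved.
print(trial(motor_program="lift left hand"))
# (None, 'EMG activity for lift left hand')
```

The two cases mirror the two findings in the abstract: parietal stimulation produced the intention and the belief of having moved with no EMG activity, while premotor stimulation produced movement that the patients denied making.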

List of constraints


Human Nature Review has a review by M. Ghin of a book by T. Metzinger, Being No One: The Self-Model Theory of Subjectivity. (here) In it there is a list of constraints 'which help us to judge whether a given representational state is also a conscious state', which I find interesting.

  1. Global availability – an item that is in consciousness is integrated into an overall world-model.

  2. Presentationality – consciousness is experienced as being in the now.

  3. Convolved holism – objects in consciousness are made up of other objects in a hierarchy.

  4. Dynamicity – experience is constantly changing, a flow of events.

  5. Perspectivalness – we are the point of view for conscious experience.

  6. Transparency – we do not see the construction of the conscious experience but have the illusion of direct contact with the world.

  7. Offline activation – there can be consciousness without sensory input (daydreams, hallucinations etc.).

  8. Representation of intensities – we can experience levels of intensity of qualia.

  9. Homogeneity – qualia are not mixtures of two other qualia.

  10. Adaptivity – consciousness has features that can be evolved.

It sounds interesting. I will have to follow up on this, especially those constraints that we have not touched on much: convolved holism, representation of intensities and homogeneity.

A bit of working memory


ScienceDaily had an article on the research of B. Strowbridge and P. Larimer. (here) Their 'first' was to create stimulus-specific sustained activity patterns in brain circuits maintained in vitro, using pieces of rodent hippocampus – memory in a petri dish.

Mossy cells are unusual because they maintain much of their normal activity even when kept alive in thin brain slices. The spontaneous electrical activity found in mossy cells was critical to their discovery of memory traces in this brain region.

When stimulating electrodes were inserted in the hippocampal brain slice the spontaneous activity in the mossy cells remembered which electrode had been activated. The memory in vitro lasted about 10 seconds, about as long as many types of working memories studied in people.

“This is the first time anyone has stored information in spontaneously active pieces of mammalian brain tissue. It is probably not a coincidence that we were able to show this memory effect in the hippocampus, the brain region most associated with human memory,” said Strowbridge.

The scientists measured the frequency of synaptic inputs onto the mossy cells to determine whether or not the hippocampus had retained memory…They also found the brain circuit that enabled the hippocampus to remember which input pathway had been activated. The memory effect occurred because of a rare type of brain cell called semilunar granule cells, described in 1893 by the father of neuroscience, Ramón y Cajal. The semilunar granule cells have an unusual form of persistent activity, allowing them to maintain memory and connect to the mossy cells.

Working memory is intimately involved with consciousness.

Ignition of consciousness


ScienceDaily has a report (here) on research by R. Malach, L. Fisch and I. Fried published in Neuron. They found an 'ignition' of intense neural activity associated with consciously seeing an image. They use a very powerful method (not available to everyone). Epileptic patients who have electrodes implanted in their brains in preparation for surgery are asked to volunteer for tests on perceptual awareness.

The subjects looked at a computer screen, which briefly presented a ‘target’ image… followed by a ‘mask’ … at different time intervals after the target image had been presented. This allowed the experimenter to control the visibility of the images — the patients sometimes recognized the targets and sometimes failed to do so. By comparing the electrode recordings to the patients’ reports of whether they had correctly recognized the image or not, the scientists were able to pinpoint when, where and what was happening in the brain as transitions in perceptual awareness took place.

Malach: 'We found that there was a rapid burst of neural activity occurring in the high-order visual centers of the brain (centers that are sensitive to entire images of objects, such as faces) whenever patients had correctly recognized the target image.' The scientists also found that the transition from not seeing to seeing happens abruptly. Fisch: 'When the mask was presented too soon after the target image, it "killed" the visual input signals, resulting in the patients being unable to recognize the object. The patients suddenly became consciously aware of the target image at a clear threshold, suggesting that the brain needs a specific amount of time to process the input signals in order for conscious perceptual awareness to be "ignited".'

This study is the first of its kind to uncover strong evidence linking ‘ignition’ of bursts of neural activity to perceptual awareness in humans. More questions remain: Is this the sole mechanism involved in the transition to perceptual awareness? To what extent is it a local phenomenon?

The big C


Here is an interesting take on consciousness. It is the C in an A-to-Z by P. Long in My Brain on My Mind. (here)

Consciousness, according to neuroscientists Francis Crick and Christof Koch, is “attention times working memory.” “Working memory” being the type of memory that holds online whatever you are attending to right now. Add to “attention times working memory” a third element of consciousness—the sense of self, the sense of “I” as distinct from the object of perception. If I am conscious of something, I “know” it. I am “aware” of it. As neurobiologist António Damásio puts it in The Feeling of What Happens, “Consciousness goes beyond being awake and attentive: it requires an inner sense of the self in the act of knowing.” (It also requires the neurotransmitter acetylcholine.)

There is another theory of consciousness, the quantum physics theory of consciousness, in which quarks, a fundamental particle, have protoconsciousness. This theory is said to have an aggregation problem—how would zillions of protoconscious particles make a conscious being? It puts consciousness outside life forms and into moonrocks and spoons. I will leave that theory right here.

In dreamless sleep, we are not conscious. Under anesthesia, we are not conscious. Walking down the street in a daze, we are barely conscious. Consciousness may involve what neuroscientist Jean-Pierre Changeux postulates is a “global workspace”—a metaphorical space of thought, feeling, and attention. He thinks it’s created by the firing of batches of neurons originating in the brain stem whose extra-long axons fan up and down the brain and back and forth through both hemispheres, connecting reciprocally with neurons in the thalamus (sensory relay station) and in the cerebral cortex. These neurons are focusing attention, receiving sensory news and assessing it, repressing the irrelevant, reactivating long-term memory circuits, and, by comparing the new and the known, registering a felt sense of “satisfaction” or “truth,” which is brought home by a surge of the reward system (mainly dopamine).

Crick and Koch propose, rather, that the part of our gray matter necessary for consciousness is the claustrum, a structure flat as a sheet located deep in the brain on both sides. Looked at face-on, it is shaped a bit like the United States. This claustrum maintains busy connections to most other parts of the brain (necessary for any conductor role). It also has a type of neuron internal to itself, able to rise up with others of its kind and fire synchronously. This may be the claustrum’s way of creating coherence out of the informational cacophony passing through. For consciousness feels coherent. Never mind that your brain at this moment is processing a zillion different data bits.

Gerald Edelman’s (global) theory of consciousness sees it resulting from neuronal activity all over the brain. Edelman (along with Changeux and others) applies the theory of evolution to populations of neurons. Beginning early in an individual’s development, neurons firing and connecting with other neurons form shifting populations as they interact with input from the environment. The brain’s reward system mediates which populations survive as the fittest. Edelman’s theory speaks to the fact that no two brains are exactly alike; even identical twins do not have identical brains.

How, in Edelman’s scheme, does consciousness achieve its coherence? By the recirculation of parallel signals. If you are a neuron, you receive a signal, say from a light wave, then relay it to the next neuron via an electrical pulse. Imagine a Fourth of July fireworks, a starburst in the night sky. Different groups of neurons register the light, the shape, the boom. After receiving their respective signals, populations of neurons pass them back and forth to other populations of neurons. What emerges is one glorious starburst.

I myself do not have a theory of consciousness. Still, I am a conscious (occasionally) being. My sense of myself, my sense of an “I,” has some sort of neuronal correlate. I am conscious (aware) of the fact that I am teaching a writing seminar (observed object with neuronal correlate) on the literary form known as the abecedarium (observed object with neuronal correlate). I am conscious (aware) that I will be submitting my own abecedarium—this one—to the brainy writers in the class. Because I can imagine the future, because I can plan ahead (thanks in part to my frontal lobes), I feel apprehensive. How crazy! To imagine I could comprehend the Homo sapiens brain, the most complex object in the known universe, within the 26 compartments of an abecedarium.

I will try. I will color the cones and rods and convoluted lobes printed in black outline in my anatomy coloring book. I will teach my neurons to know themselves. As I write this, I picture our class seated around our big table. I can picture the face of each writer at the table. To each face I can attach a name. This is proof that, as of today, I have dodged dementia.

A comment


There is a site called Less Wrong that I visit (here) because occasionally there is an outstanding post there. I do not comment on the posts as a rule, because it is something of a boys' club of AI guys and I don't feel that I belong. But last week there was a post that got me a little worked up and I commented. My efforts lost me some karma, but never mind; I didn't know I had been playing the Less Wrong game. Here is the comment:

“The local worldview reduces everything to some combination of physics, mathematics, and computer science, with the exact combination depending on the person. I think it is manifestly the case that this does not work for consciousness.”

No, it doesn't work, because you have left out BIOLOGY. You cannot just jump from physics and algorithms to how brains function.

Here is the outline of a possible path:

  1. We know that consciousness has an important function because it consumes a great deal of energy – that’s how evolution works.

  2. Animals move – therefore they must have a model of where they are, where they are going etc. – like the old Swedish joke, 'I can't yump when I got no place to stood'.

  3. To make a model, animals need to sense the environment and translate the info into elements of the model (perception).

  4. In order to use the model to plan and monitor motor action, they have to also model themselves – so the model is of the animal-in-the-world – the tree is not the real tree in reality but the modeled tree, and the me in the model is not the real me in reality but the modeled me.

  5. In order to make a good model that is useful, it would have to be a unified global model of the animal in the world – all the parts of the model have to be brought together in order to create the best-fit scenario and in order for various functions to use the information.

  6. In order to make a good model that could be used to plan and evaluate actions, it would have to model the needs of the animal such as goals, motivations, emotions etc. – the model has to have a theory of mind for the animal – so my thoughts in the model are not my real thoughts in reality but the modeled mind. When we introspect we are aware of our model of ourselves but not of ourselves in reality. Definitions can be a problem here – do we use the word 'mind' for cognition or for awareness? For we have trouble if we confuse these two things.

  7. To make the model more useful it should be predictive, to overcome the time it takes to construct the model – so if 'now' is t, then the model would be created from the information the brain has at t-x and used to predict what reality will be after a duration x, where x is the time it takes to construct the model – this allows errors in motor actions to be monitored and corrected, because the sensory data coming in does not match the model prediction – even the 'now' is a modeled now and not the now in reality (see the sketch after this comment).

  8. So the biological criteria for a good model are unity, speed, accuracy and predictive power. The elements used to create the model must be easily manipulated in order to achieve these goals and must also be capable of being stored as memories, imagined, communicated etc. The qualia of the model will be anything and everything that is biologically possible and makes a good model. We have the data that the sense organs can measure and some effective ways of representing that information in the model.

So the question “Why red?” can be answered with “Why not – it works.” And the question “Where is the red?” can be answered by “In the structural elements of the model”. If someone has a better way to model the frequency of light, I have never heard of it.

If you cannot envisage this modeling as a sequential computer program, that is because it isn't one. It is a massively parallel assembly of overlapping feedback loops that involve most of the cortex, the thalamus, the basal ganglia and even points in the brain stem. It has more in common with analogue computers than digital ones.
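To make point 7 of the comment concrete, here is a minimal sketch of the idea in Python. Everything in it (the numbers, the toy physics, the names) is my own illustrative assumption, not anything drawn from the research discussed on this blog: sensory data arrives with a lag x, the brain projects that stale data forward with the current motor plan to build its model of 'now', and any later mismatch with the data that actually arrives is an error signal for correcting the movement.

```python
# Minimal sketch of point 7 above: the model of 'now' is built from sensory
# data that is x seconds old, pushed forward by the motor plan; mismatch with
# the data that eventually arrives is a correction signal. All numbers and the
# toy physics are invented for illustration.

DT = 0.05                    # simulation step (seconds)
LAG = 0.2                    # x: time needed to construct the model (seconds)
LAG_STEPS = int(LAG / DT)    # the lag measured in steps

def forward_model(stale_position, planned_velocity, steps):
    """Project a stale sensory sample forward to 'now' using the motor plan."""
    return stale_position + planned_velocity * DT * steps

position = 0.0               # the real state of the world
planned_velocity = 1.0       # what the motor program intends
sensory_buffer = [position] * LAG_STEPS   # sensory data reaches the model late

for t in range(100):
    # The world moves; a perturbation at step 50 makes the plan go wrong.
    actual_velocity = planned_velocity if t < 50 else 0.5
    position += actual_velocity * DT

    # The newest sample available to the model is LAG seconds old.
    sensory_buffer.append(position)
    stale_sample = sensory_buffer.pop(0)

    # The modeled 'now': stale data projected forward by the motor plan.
    modeled_now = forward_model(stale_sample, planned_velocity, LAG_STEPS)

    # When the real data for this moment arrives, any mismatch is an error
    # signal (skip the first few steps while the buffer fills with real data).
    prediction_error = position - modeled_now
    if t >= LAG_STEPS and abs(prediction_error) > 0.06:
        print(f"step {t}: prediction error {prediction_error:+.3f} "
              "-> movement needs correcting")
```

While the plan matches the world, the modeled 'now' tracks reality exactly despite the lag; once the perturbation hits, the prediction error grows and flags the need for correction, which is the monitoring role described in point 7.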

The cortex is not the hub


An item in the Scientific American (here), Reviving Consciousness in Injured Brains by C. Koch, describes the effects of deep-brain stimulation. It is a reminder not to confuse the content of consciousness with its functional container.

Most scholars concerned with the material basis of consciousness are cortical chauvinists. They focus on the two cortical hemispheres that crown the brain. It is here that perception, action, memory, thought and consciousness are said to have their seat.

There is no question that the great specificity of any one conscious perceptual experience… is mediated by coalitions of synchronized cortical nerve cells and their associated targets in the satellites of the cortex, thalamus, amygdala, claustrum and basal ganglia. Groups of cortical neurons are the elements that construct the content of each particular rich and vivid experience. Yet content can be provided only if the basic infrastructure to represent and process this content is intact. And it is here that the less glamorous regions of the brain, down in the catacombs, come in… injury to large chunks of cortical tissue, particularly of the so-called silent frontal lobes, can lead to a loss of specific conscious content but without any massive impairment in the victim’s behavior. … But destruction of tissue the size of a sugar cube in the brain stem and in parts of the thalamus, especially if they occur simultaneously on the left and right sides, may leave the patient comatose, stuporous or otherwise unable to function… can cause consciousness to flee permanently…

pioneers are finding innovative ways to help. Their technology of choice is deep-brain stimulation (DBS). The method has been much in the public eye as a way to ameliorate the symptoms of Parkinson’s disease. Electrodes are implanted into a region just below the thalamus, the quail-egg-shaped structure in the center of the brain. When the electric current is turned on, the rigor and tremors of this movement disorder disappear instantly…Over the past 15 years neurosurgeon Takamitsu Yamamoto and his colleagues at the Nihon University School of Medicine in Tokyo stimulated parts of the intralaminar nuclei (ILN) of the thalamus in vegetative state and minimum conscious state patients. These regions were targeted because they are involved in producing arousal and in controlling widespread activity throughout the cortex. Indeed, according to the late neurosurgeon Joseph Bogen of the University of Southern California, the ILN is the one structure absolutely essential to consciousness.

The deep-brain stimulation is helpful to some patients, but it is early days. The research does show (again) that the cortex does not work without control from older parts of the brain.

The content of consciousness

J. Hoffman in the New York Times writes in "Taking Mental Snapshots to Plumb Our Inner Selves" about the work of R. Hurlburt (here), who is attempting to document the contents of consciousness. The method is to fit a person with a random beeper and instructions to record everything they are aware of when the beeper sounds. The people are later interviewed about each recorded moment of consciousness.

After hundreds of introspective interviews, Dr. Hurlburt still hesitates to generalize from his findings. But he has observed that the basic makeup of inner life varies substantially from person to person.

"My research says that there are a lot of people who don't ever naturally form images, and then there are other people who form very florid, high-fidelity, Technicolor, moving images," he said. Some people have inner lives dominated by speech, body sensations or emotions, he said, and yet others by "unsymbolized thinking" that can take the form of wordless questions like, "Should I have the ham sandwich or the roast beef?"

In a 2006 book, “Exploring Inner Experience,” Dr. Hurlburt suggests that these differences may be linked to personality and behavior. Inner speakers tend to be more confident, for example, and those who think in pictures tend to have trouble empathizing with others.

Many feel that this is not a very objective experiment. How do we know that people can or do report their conscious awareness in an isolated moment with accuracy, nothing added and nothing missed?

It may be that turning introspection into a science is as impractical as “trying to turn up the gas quickly enough to see how the darkness looks,” as William James wrote in 1890.

But Dr. Hurlburt remains hopeful. Maybe, he said, “it is possible with our modern technology to take a flash picture in the dark.”

Brain stem involvement in attention


ScienceDaily has an item on a paper in the December issue of Nature Neuroscience by researchers at the Salk Institute. (here)

The work connects part of the brain stem that controls eye movements with control of mental attention. Mental attention is closely linked with consciousness.

Like a spotlight that illuminates an otherwise dark scene, attention brings to mind specific details of our environment while shutting others out. …

“Our ability to survive in the world depends critically on our ability to respond to relevant pieces of information and ignore others,” explains graduate student and first author Lee Lovejoy, who conducted the study together with Richard Krauzlis, Ph.D., an associate professor in the Salk’s Systems Neurobiology Laboratory. “Our work shows that the superior colliculus is involved in the selection of things we will respond to, either by looking at them or by thinking about them.”

As we focus on specific details in our environment, we usually shift our gaze along with our attention. “We often look directly at attended objects and the superior colliculus is a major component of the motor circuits that control how we orient our eyes and head toward something seen or heard,” says Krauzlis.

But humans and other primates are particularly adept at looking at one thing while paying attention to another. As social beings, they very often have to process visual information without looking directly at each other, which could be interpreted as a threat. This requires the ability to attend covertly.

It had been known that the superior colliculus plays a role in deciding how to orient the eyes and head to interesting objects in the environment. But it was not clear whether it also had a say in covert attention.

In their current study, the Salk researchers specifically asked whether the superior colliculus is necessary for covert attention. To tease out the superior colliculus’ role in covert attention, they designed a motion discrimination task that distinguished between control of gaze and control of attention.

The superior colliculus contains a topographic map of the visual space around us, just as conventional maps mirror geographical areas. Lovejoy and Krauzlis exploited this property to temporarily inactivate the part of the superior colliculus corresponding to the location of the cued stimulus on the computer screen. No longer aware of the relevant information right in front of them, the subjects instead based all of their decisions about the stimulus' movement on irrelevant information found elsewhere on the screen.

“The result is very similar to what happens in patients with neglect syndrome,” explains Lovejoy, “Up to a half of acute right-hemisphere stroke patients demonstrate signs of spatial neglect, failing to be aware of objects or people to their left in extra-personal space.”

“Our results show that deciding what to attend to and what to ignore is not just accomplished with the neocortex and thalamus, but also depends on phylogenetically older structures in the brainstem,” says Krauzlis. “Understanding how these newer and older parts of the circuit interact may be crucial for understanding what goes wrong in disorders of attention.”

More Fuster theory


Fuster's theory is so interesting that I am posting on it again (see previous post). R. Cabeza gives a good overview of the cognit theory and points to its weaknesses. If you want to look at the original review by Cabeza, (here) is where it can be found, down the list, under "(2004) Networks of the Brain Unite".

To characterize the cognitive structure of cortical networks, Fuster introduces the term cognit. At the cognitive level, a cognit is an item of knowledge, and at the neural level, an assembly of neurons and their connections. Cognitive functions consist of information transactions within and between cognits. While it is often assumed that networks underlying cognitive functions (e.g., attention network, memory network) involve different brain regions, Fuster proposes that the same cognits are part of different networks, and hence, that "it is the cognits and their networks that have topographical specificity, not the functions that use them". This idea agrees with a point we and others have made regarding functional neuroimaging data: the same brain regions can be activated by different cognitive functions.

Fuster explains almost every cognitive phenomenon in terms of cognits, allowing him to cover so much cognitive neuroscience in so little space. "Perception is the activation through the senses of a posterior cortical network, a perceptual cognit…". Perceptual categories are mediated by high-level cognits that "pool attributes from widely dispersed cognits at lower levels." Action involves high-level executive cognits feeding into motor cognits. Stored memories are cortical networks just like cognits. "To retrieve a memory is to reactivate the network that represents it". Priming is the "preactivation of a memory network". Attentional control involves enhancing the activation of some cognits and suppressing the activation of others. Top-down attention is "feedback from higher cognits upon lower ones". Working memory is sustained activity of a cognit recently activated for executive function. "Linguistic representations essentially consist of cognits, and linguistic operations such as syntax, comprehension, reading, and writing consist of neural transactions within and between cognits". Reasoning is the "matching of incoming temporal patterns to those patterns inherent in subnetworks that represent specific long-term facts". Creativity is the creation of new cognits out of old ones, and consciousness is the activation of a cortical network beyond a certain threshold.

Using a single explanatory device for all cognitive functions yields a cognitive neuroscience model that is both elegant and parsimonious. The downside is that Fuster's model cannot easily account for cognitive processes with clear cortical modules, such as face processing. … Another concept allowing Fuster to integrate different cognitive neuroscience domains is the perception-action cycle. This cycle is the extension to cortical processes of the basic biological mechanism in which sensory stimuli determine motor reactions that change the environment, leading to new stimuli, and so on. In Fuster's view, this cybernetic cycle has been expanded in humans to include speech, reasoning, and executive processes, but remains the main causal mechanism of behavior and neural function. The complexity of human cortical function can be dramatically simplified by assuming that there are two main functional regions of the cortex, sensory (occipital, temporal, and parietal lobes) and motor (frontal lobes). Sensory signals and behavioral responses are separated in time, and hence, bridging time is one function of the perception-action cycle, particularly the frontal lobes.

Although most cognitive neuroscientists would agree with the perception-action cycle in principle, few would take it as far when accounting for higher-order cognitive functions. Fuster subdivides cognitive processes into perceptual and executive forms. Perceptual memory is memory acquired through the senses, including semantic memory, whereas executive memory is memory for sequences of behavior. Perceptual attention includes top-down modulation of sensory processing, whereas executive attention corresponds to what are usually called executive control processes. In the language domain, lexical information flows from the posterior perceptual cortex to the frontal executive cortex, where it's integrated into speech or writing. The development of intelligence in children is also described as progressive expansion of the perception-action cycle. The perception-action cycle provides a wonderful unifying principle, but leads to implications that not every cognitive neuroscientist would accept. … Although some implications of the perception-action cycle may be controversial, the most unorthodox aspect of Fuster's model is probably his unitarian memory view. Most researchers in cognitive neuroscience of memory assume the existence of multiple memory systems; some assume five systems (working, episodic, semantic, and procedural memory, and priming), some assume three (working, declarative, and nondeclarative memory), but almost everybody assumes at least two systems (working and long-term memory). … Fuster can disregard the explicit/implicit distinction because he does not believe that consciousness determines the neural correlates of memory: "In memory retrieval, the degree of conscious awareness may differ greatly, but conscious awareness per se defines neither the network nor the process of its reactivation". More generally, Fuster does not seem to believe that consciousness plays a causal role in cognition. As he says in the final page, "consciousness is an epiphenomenon of activity in a shifting neural substrate".
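To make the cognit idea a little more concrete, here is a toy sketch of my own (not Fuster's or Cabeza's formalism; the names, weights and threshold are invented): a cognit is a node, knowledge is the weighted links between nodes, retrieval is reactivation of the linked network, and 'conscious' in this toy simply means activation above a threshold.

```python
# Toy illustration of the cognit idea quoted above (entirely my own sketch):
# a cognit is a named node, knowledge is the weighted links between cognits,
# retrieval is reactivation of a linked network, and "conscious" here just
# means activation above an arbitrary threshold.

from collections import defaultdict

class CognitNetwork:
    def __init__(self, threshold=1.0):
        self.links = defaultdict(dict)        # cognit -> {linked cognit: weight}
        self.activation = defaultdict(float)  # current activation per cognit
        self.threshold = threshold

    def associate(self, a, b, weight=0.8):
        """Learning: strengthen the connection between two cognits."""
        self.links[a][b] = weight
        self.links[b][a] = weight

    def perceive(self, cognit, strength=1.5):
        """Perception: activate a cognit through the 'senses' and let the
        activation spread one step to the cognits linked to it (retrieval)."""
        self.activation[cognit] += strength
        for neighbour, w in self.links[cognit].items():
            self.activation[neighbour] += strength * w

    def conscious_contents(self):
        """Cognits whose activation has crossed the threshold."""
        return [c for c, a in self.activation.items() if a >= self.threshold]

net = CognitNetwork()
net.associate("grandmother", "kitchen")
net.associate("kitchen", "smell of bread")

net.perceive("grandmother")
print(net.conscious_contents())
# e.g. ['grandmother', 'kitchen']: 'smell of bread' is linked only indirectly
# and stays below threshold until more of the network is reactivated.
```

The point of the toy is only to show why one device can stand in for so many functions: perception, retrieval, priming and 'conscious' content are all just patterns of activation over the same network of cognits, which is exactly the economy (and the limitation) Cabeza notes.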
