real time fMRI feedback

Now that fMRI results can be presented to subjects as nearly real-time feedback, it is possible to directly observe the effect of subjective mental experience on neural activity (or vice versa). This is another way to look at consciousness. Kalina Christoff and her group have a recent paper (see citation) on feedback regulation of the rostrolateral prefrontal cortex (RLPFC), an area associated with meta-cognitive awareness and the control of introspection. Here is the abstract:

Recent real-time fMRI (rt-fMRI) training studies have demonstrated that subjects can achieve improved control over localized brain regions by using real-time feedback about the level of fMRI signal in these regions. It has remained unknown, however, whether subjects can gain control over anterior prefrontal cortex (PFC) regions that support some of the most complex forms of human thought. In this study, we used rt-fMRI training to examine whether subjects can learn to regulate the rostrolateral prefrontal cortex (RLPFC), or the lateral part of the anterior PFC, by using a meta-cognitive awareness strategy. We show that individuals can achieve improved regulation over the level of fMRI signal in their RLPFC by turning attention towards or away from their own thoughts. The ability to achieve improved modulation was contingent on observing veridical real-time feedback about the level of RLPFC activity during training; a sham-feedback control group demonstrated no improvement in modulation ability and neither did control subjects who received no rt-fMRI feedback but underwent otherwise identical training. Prior to training, meta-cognitive awareness was associated with recruitment of anterior PFC subregions, including both RLPFC and medial PFC, as well as a number of other midline and posterior cortical regions. Following training, however, regulation improvement was specific to RLPFC and was not observed in other frontal, midline, or parietal cortical regions. These results demonstrate the feasibility of acquiring control over high-level prefrontal regions through rt-fMRI training and offer a novel view into the correspondence between observable neuroscientific measures and highly subjective mental states.

Previous studies have suggested that the RLPFC’s role is to monitor, coordinate, integrate and evaluate the products of higher-level stages of cognitive processing. Subjects used observation of their own thoughts to increase RLPFC activity, and external sensory and bodily sensations to decrease it. With real-time feedback they significantly increased their ability to control the level of activity in the RLPFC. Controls who attempted the same regulation without feedback, or with sham feedback, did not achieve significant changes. However, trained meditators achieve similar (though not identical) effects.

While previous rt-fMRI feedback training studies have shown that subjects could learn to regulate activation in the sensorimotor cortex by imagining hand movements, the insula by recalling personal affectively charged events, the anterior cingulate by attending to and away from the painful properties of a stimulus, and the inferior frontal gyrus through the use of various strategies involving sub-vocal speech, here we show that subjects can use an abstract mental process such as metacognitive awareness of one’s own thoughts to regulate activation levels in one of the highest-order cortical association regions.

This is a very powerful tool. It is a fairly probable guess that what the subjects are doing is steering attention – the focus of conscious attention. We will hear much more from this experimental setup in the future and it will shine a light on consciousness.


McCaig, R., Dixon, M., Keramatian, K., Liu, I., & Christoff, K. (2011). Improved modulation of rostrolateral prefrontal cortex using real-time fMRI training and meta-cognitive awareness. NeuroImage, 55(3), 1298-1305. DOI: 10.1016/j.neuroimage.2010.12.016

when there is no time to think

Jonah Lehrer writes about what it takes to be a quarterback (here).

First, it is not easy:

The ball is snapped. The quarterback drops back, immediately surrounded by a chorus of grunts and groans, the sounds of linemen colliding. The play has just begun, but the pocket is already collapsing around him. He must focus his eyes downfield on his receivers and know where they’re going while also reading the defense. Is that cornerback blitzing or dropping back? When will the safety leave the middle? The QB has fewer than three seconds to make sense of this mess. If he hesitates, even for a split second, he’ll get sacked. No other team sport is so dependent on the judgment of a single player…

Teams use tests of cognitive skill (the Wonderlic) to judge potential quarterbacks, but many of the most successful quarterbacks had low scores. The wrong thing is being measured: there isn’t time in the pocket to use the type of cognition the Wonderlic measures.

So how, then, do they make their decisions? Turns out, every pass play is a pure demonstration of human feeling. Scientists have in recent years discovered that emotions, which are often dismissed as primitive and unreliable, can in fact reflect a vast amount of information processing. In many instances, our feelings are capable of responding to things we’re not even aware of, noticing details we don’t register on a conscious level. … “QBs are tested on every single pass play,” Hasselbeck says. “To be good at the position, you’ve got to know the answer before you even understand the question. You’ve got to be able to glance at a defense and recognize what’s going on. And you’ve got to be able to do that when the left tackle gets beat and you’re running away from a big lineman. That ability might not depend on real IQ, but it sure takes a lot of football IQ.”

And getting this football expertise:

“There is virtually no evidence that expertise is due to genetic or innate factors,” Ericsson says. “Rather, it strongly suggests that expertise requires huge amounts of effort and practice.” This is because it takes time to train our feelings, to embed those useful patterns into the brain. Before a quarterback can find the open man, parsing the defense in a glance, he must spend years studying cornerbacks and crossing routes. It looks easy only because he’s worked so hard.

What it takes to do anything complex really well is disciplined practice on very specific skills for enormous amounts of time – diligence, grit, dedication. This appears to be true of any athletic sport and any performance art.

What does all that practice do? I believe one thing it does is eliminate the need for conscious activity. Thought that has to pass stepwise through consciousness and working memory is slow and limited. Thought that results from having made all the necessary connections automatic is fast, smooth and accurate. Hours of practice are how this transfer is achieved.

fresh look at mirror neurons

A review paper by A. Casile, V. Caggiano and P. Ferrari, titled The Mirror Neuron System: A Fresh View, was published in The Neuroscientist, April 2011. Here is the abstract:

Mirror neurons are a class of visuomotor neurons in the monkey premotor and parietal cortices that discharge during the execution and observation of goal-directed motor acts. They are deemed to be at the basis of primates’ social abilities. In this review, the authors provide a fresh view about two still open questions about mirror neurons. The first question is their possible functional role. By reviewing recent neurophysiological data, the authors suggest that mirror neurons might represent a flexible system that encodes observed actions in terms of several behaviorally relevant features. The second question concerns the possible developmental mechanisms responsible for their initial emergence. To provide a possible answer to this question, the authors review two different aspects of sensorimotor development: facial and hand movements, respectively. The authors suggest that possibly two different “mirror” systems might underlie the development of action understanding and imitative abilities in the two cases. More specifically, a possibly prewired system already present at birth but shaped by the social environment might underlie the early development of facial imitative abilities. On the contrary, an experience-dependent system might subserve perception-action couplings in the case of hand movements. The development of this latter system might be critically dependent on the observation of own movements.

This fits with the idea that I have been favouring, that mirror neurons are activated by action concepts. They may be concepts similar to other concepts such as words, objects and the like. It is also interesting that we may be born with some and learn others through experience.

What change blindness says about memory

In change blindness, some part of a scene is changed and the change is not noticed by the observer. This can happen whenever the change does not appear as a transient on a stable retinal image. It can happen when there is a mask (a blank screen too fast to see), a blink, an eye movement, a change in point of view, an interruption in an action and so on: anything that disrupts the continuity of the retinal image.

There was some question of whether change blindness could happen with objects at the center of attention. Surely, if you were engaged in a conversation with someone, they could not be replaced with another person without the change being noticed. But they can be, as Simons and Levin showed in their paper (see citation).

What does this say about our memories? Simons and Levin say:

If we constantly noticed such changes, they would likely detract from our ability to focus on other, more important aspects of our visual world. Change detection as a method relies on the tendency of our visual system to assume an unchanging world. The fact that we do not expect one person to be replaced by another during an interaction may contribute to our inability to detect such changes. … Taken together, these experiments show that even substantial changes to the objects with which we are directly interacting will often go unnoticed. Our visual system does not automatically compare the features of a visual scene from one instant to the next in order to form a continuous representation; we do not form a detailed visual representation of our world. Instead, our abstract expectations about a situation allow us to focus on a small subset of the available information that we can use to check for consistency from one instant to the next.

In effect we delude ourselves as to the completeness of our immediate memory. We remember the unexpected if we notice it, but there is no guarantee that we will notice.


Simons, D.J., & Levin, D.T. (1998). Failure to detect changes to people during a real-world interaction. Psychonomic Bulletin and Review, 5, 644-649.

Synaesthesia of concepts

We think of synaesthesia as an unusual sensory effect – the senses getting ‘mixed up’. But it may be more accurate to think of it as a ‘mix up’ in the binding of qualia to concepts. D. Nikolic, U. Jurgens, N. Rothen, B. Meier and A. Mroczko published a paper in Cortex, Swimming-style synesthesia (2011), that shows a clear conceptual component. I have not found free access to the paper, but here is the abstract:

The traditional and predominant understanding of synesthesia is that a sensory input in one modality (inducer) elicits sensory experiences in another modality (concurrent). Recent evidence suggests an important role of semantic representations of inducers. We report here the cases of two synesthetes, experienced swimmers, for whom each swimming style evokes another synesthetic color. Importantly, synesthesia is evoked also in the absence of direct sensory stimulation, i.e. the proprioceptive inputs during swimming. To evoke synesthetic colors, it is sufficient to evoke the concept of a given swimming style e.g., by showing a photograph of a swimming person. A color-consistency test and a Stroop-type test indicated that the synesthesia is genuine. These findings imply that synesthetic inducers do not operate at a sensory level but instead, at the semantic level at which concepts are evoked. Hence, the inducers are not defined by the modality-dependent sensations but by the “ideas” activated by these sensations.

It would be interesting to find out if this effect is operating just at a semantic level or whether, as I suspect, it acts at a more general conceptual level. Can it happen with concepts that do not have associated name-words?

synaesthesia reversed by hypnosis

Terhune, Cardena and Lindgren published a paper, Disruption of synaesthesia by posthypnotic suggestion: an ERP study, which was discussed by Vaughan Bell in the Mind Hacks blog and in the Guardian newspaper (here).

Abstract:

This study examined whether the behavioral and electrophysiological correlates of synaesthetic response conflict could be disrupted by posthypnotic suggestion. We recorded event-related brain potentials while a highly suggestible face-color synaesthete and matched controls viewed congruently and incongruently colored faces in a color-naming task. The synaesthete, but not the controls, displayed slower response times, and greater P1 and sustained N400 ERP components over frontal-midline electrodes for incongruent than congruent faces. The behavioral and N400 markers of response conflict, but not the P1, were abolished following a posthypnotic suggestion for the termination of the participant’s synaesthesia and reinstated following the cancellation of the suggestion. These findings demonstrate that the conscious experience of synaesthesia can be temporarily abolished by cognitive control.

In his discussion, Bell points out that this is very unexpected: “it is equally new to science because no one had suspected that synaesthesia could be reversed.” The synaesthetic effect was measured by the additional time it took to identify targets coloured differently from the colour the synaesthete experiences for them, and by the neurological signs of conflict between the two colours (the Stroop effect). Hypnosis can reverse this by eliminating the synaesthetic colour. How?

This trait (hypnotisability) is usually described as “suggestibility” but it is nothing to do with gullibility or being easily led. People susceptible to hypnosis are not more naive, trusting or credulous than anyone else, but they do have the capacity to allow seemingly involuntary changes to their mind and body. The key phrase here is that they “have the capacity to allow” because hypnosis cannot be used to force someone against their will…

When a suggestion is successful, the experience of it seeming to “happen on its own” is key and this is exactly what neuroscientists have been working with – by suggesting temporary changes to the mind that we wouldn’t necessarily be able to trigger on our own. In the case of the two experiments that managed to temporarily “switch off” the Stroop effect in highly hypnotisable people, the suggestion was that the words appeared as “meaningless symbols”. This avoided a clash between the colour and the word because the text suddenly appeared to be gibberish…

Neuroscientists Amir Raz and Jason Buhle suggest hypnosis is really when we allow suggestions to take over from our normally self-directed control of attention that deals with mental self-management, allowing science an exciting tool to “get under the hood” of the conscious mind.

If you find hypnosis intriguing then you will find Bell’s article very interesting, as I did.

How is the world represented without vision?

Vision is so important to humans that it is difficult to imagine how we can produce a conscious model of the world without it. And what is done with the third of the cortex that is involved in vision when it is idle? Kupers and others (see citation) have compared fMRI scans of congenitally blind, once-sighted blind, sighted, and blindfolded sighted individuals.

How do individuals who never had any visual experience since birth form a conscious representation of a world that they have never seen? How do their brains behave? What happens to vision-devoted brain structures in individuals who are born deprived of sight or who lose vision at different ages? To what extent is visual experience truly necessary for the brain to develop its functional architecture? What does the study of blind individuals teach us about the functional organization of the sighted brain in physiological conditions?

It is known that the cortex has a capacity for plasticity and reorganization when input from a sense is lost; other senses will use the spare cortical areas. Studies have shown changes in the grey matter, the white matter under it, and cell metabolism during this reorganization. In the blind, the occipital (visual) cortex becomes involved in other senses and in a variety of cognitive functions, including lexical, semantic and phonological processing, attention, verbal memory and working memory.

A part of the cortex (the extrastriate ventrotemporal cortex) is concerned with recognizing objects, a function that is very important to acquiring knowledge of the external world. Different categories of object give specific activity patterns in this region, termed object-form topology. This processing relies heavily on vision. But blindfolded people who recognize an object by feel show very similar patterns to those that occur when they use sight. The patterns are supramodal – they do not depend on any particular sense.

The findings in the congenitally blind subjects are important also because they indicate that the development of topographically organized, category-related representations in the extrastriate visual cortex does not require visual experience. Experience with objects acquired through other sensory modalities appears to be sufficient to support the development of these patterns. Thus, at least to some extent, the visual cortex does not require vision to develop its functional architecture that makes it possible to acquire knowledge of the external world.

So the ventral ‘what’ pathway can process without vision. What about the dorsal ‘where’ pathway? Is spatial processing possible without vision? Yes: the dorsal pathway can use senses other than sight and does not require visual experience to develop. We process motion per se, whichever sense it arrives through.

Both optic and tactile motion provide information about object form, position, orientation, consistency and movement, and also about the position and movement of the self in the environment.

And when they looked at mirror neurons, they found the same condition. Vision is not necessary for the development of a functional efficient mirror neuron system. This suggests that abstract representation of actions is also not tied to any particular sense.

The main hypothesis that we have put forward here is that the development of consciousness in the absence of vision is made possible through the supramodal nature of functional cortical organization. The more abstract representation of the concepts of objects, space, motion, gestures, and actions – in one term, awareness of the external world – is associated with regional brain activation patterns that are essentially similar in sighted and congenitally blind individuals. The morphological and/or functional differences that exist between the sighted and the blind brain are the consequence of the cross-modal plastic reorganization that mostly affects that part of the cortex that is not multimodal in nature.

What about the experience that results from this reorganization in the blind? It appears that the type of qualia is connected to the source of the input, not the region that processes it.

The results of these TMS studies constitute the first direct demonstration that the subjective experience of activity in the visual cortex after sensory remapping is tactile, not visual. These findings provide new insights into the long-established scientific debate on cortical dominance or deference. What is the experience of a subject in whom areas of cortex receive input from sensory sources not normally projecting to those areas? Our studies suggest that the qualitative character of the subject’s experience is not determined by the area of cortex that is active (cortical dominance), but by the source of input to it (cortical deference). Our results are in line with evidence that sensory cortical areas receive input from multiple sensory modalities early in development.


Kupers, R., Pietrini, P., Ricciardi, E., & Ptito, M. (2011). The Nature of Consciousness in the Visually Deprived Brain. Frontiers in Psychology, 2. DOI: 10.3389/fpsyg.2011.00019

keeping attention on the danger

Here is the abstract of a paper by A. Shackman, J. Maxwell, B. McMenamin, L. Greischar and R. Davidson in The Journal of Neuroscience, January 2011, Stress Potentiates Early and Attenuates Late Stages of Visual Processing. Unfortunately, the whole paper is not freely available.

Stress can fundamentally alter neural responses to incoming information. Recent research suggests that stress and anxiety shift the balance of attention away from a task-directed mode, governed by prefrontal cortex, to a sensory-vigilance mode, governed by the amygdala and other threat-sensitive regions. A key untested prediction of this framework is that stress exerts dissociable effects on different stages of information processing. This study exploited the temporal resolution afforded by event-related potentials to disentangle the impact of stress on vigilance, indexed by early perceptual activity, from its impact on task-directed cognition, indexed by later postperceptual activity in humans. Results indicated that threat of shock amplified stress, measured using retrospective ratings and concurrent facial electromyography. Stress also double-dissociated early sensory-specific processing from later task-directed processing of emotionally neutral stimuli: stress amplified N1 (184–236 ms) and attenuated P3 (316–488 ms) activity. This demonstrates that stress can have strikingly different consequences at different processing stages. Consistent with recent suggestions, stress amplified earlier extrastriate activity in a manner consistent with vigilance for threat (N1), but disrupted later activity associated with the evaluation of task-relevant information (P3). These results provide a novel basis for understanding how stress can modulate information processing in everyday life and stress-sensitive disorders.

When we are involved in a task, the prefrontal cortex steers attention; usually only surprising sensory input will overcome the task-oriented focus. It seems that stress overturns this situation and makes even non-surprising sensory input override the task in steering attention. This is probably important for the avoidance of danger – the environment requires careful monitoring. But I suppose it is also part of why being upset can make it so hard to concentrate on what I’m trying to get done.

Faces

I have often wondered how we recognize faces. We are so very good at recognition and so bad at describing faces in words. One person will say big nose, freckles, oval shape – and this may be correct – but it is of no help to someone else in forming an image of the face. The way faces are recognized rarely rises to consciousness and is therefore not available for verbal description.

The current theory is that we build up, through experience, an ‘average face’. We then compare faces we encounter to this average face. It can be thought of as a mathematical ‘space’, a multi-dimensional face-space. The distance from the average corresponds to how different a face is from the average, and the direction corresponds to the way(s) in which it differs. So there is a center with arrows going out various distances in various directions to each known face.
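The face-space idea can be sketched as simple vector arithmetic. This is only an illustrative toy model, not a claim about how the brain computes; the feature dimensions and numbers are invented for the example:

```python
import numpy as np

# Toy face-space: each face is a point in a feature space.
# The dimensions (e.g., eye spacing, nose length, ...) are invented here.
average_face = np.array([0.50, 0.40, 0.30, 0.60])

def encode(face):
    """Represent a face by its offset from the average face."""
    offset = face - average_face
    distance = np.linalg.norm(offset)   # how different the face is
    direction = offset / distance       # the way in which it differs
    return distance, direction

alice = np.array([0.55, 0.35, 0.42, 0.70])
d, u = encode(alice)

# A known face is stored as "go distance d in direction u from the average",
# so the arrow from the center recovers the face exactly.
reconstructed = average_face + d * u
assert np.allclose(reconstructed, alice)
```

The point of the sketch is that a face needs only a distance and a direction relative to the average; the center-plus-arrows picture in the text is literally this decomposition.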

This makes certain things clearer.

First, this is probably the reason that a person who has encountered only a few people from another racial group has difficulty identifying people of that group. People are left stammering that all Chinese people look alike, knowing how stupid this sounds. They do not have a good average face for the unfamiliar group and therefore have difficulty establishing the differences between any one face and the average.

Second, we use the average face as a signpost for beauty. The closer a face is to our average, the more attractive it is.

Third, the closer a face is to the average, the faster we recognize that it is a face. But once we know an object is a face, the farther it is from the average, the faster it is recognized as a particular face. These results depend on the density of faces in face-space: high near the average and low far from it.

Fourth, caricatures are recognized as the target even though they are actually very different from the target face. A good caricature lies in nearly the exact direction from the average as the target, but at a much greater distance than the target face. It is like following our imaginary arrow to the particular face and then carrying on in the same direction for some distance.

Fifth, another interesting effect is archetypes. We can think of face-space as having places (other than the average) of unusually high density of encountered faces surrounded by low-density areas. Some particular face near the center of such a clump could come to stand for that type of face.

Sixth, this explains why we can sometimes hardly notice such prominent changes as new glasses or the loss of a moustache. These are just not dimensions/directions in face-space, so they are not used to recognize faces.
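In this picture the caricature effect (the fourth point above) is just extrapolation along the arrow: move from the average in the same direction as the target face, but further. Again a toy sketch with invented feature values:

```python
import numpy as np

# Invented toy feature values, as in the face-space sketch above.
average_face = np.array([0.50, 0.40, 0.30, 0.60])
target = np.array([0.55, 0.35, 0.42, 0.70])

def caricature(face, exaggeration=2.0):
    """Same direction from the average as the face, but a larger distance."""
    return average_face + exaggeration * (face - average_face)

cari = caricature(target)

# The caricature lies on the same line through the average as the target...
offset_t = target - average_face
offset_c = cari - average_face
same_direction = np.allclose(offset_c / np.linalg.norm(offset_c),
                             offset_t / np.linalg.norm(offset_t))

# ...but farther from the average, which is why an exaggerated drawing
# can still read as that particular person.
farther = np.linalg.norm(offset_c) > np.linalg.norm(offset_t)
```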