Free-will again

A. Gottlieb has written a piece for The Economist’s More Intelligent Life magazine, “Neurons v Free Will?” (here).

Aside: Regular readers will know how I feel about this debate – both determinism and free-will are flawed ideas. What is needed is an actual understanding of how decisions are made. The two concepts can be made compatible by redefining them so that determinism does not involve predictability and free-will does not include consciousness. I would prefer to throw out both words rather than redefine them.

Gottlieb starts with the fact that determinism is a very old idea that just will not die. “Every age finds a fresh reason to doubt the reality of human freedom.” He mentions ancient Greek ideas of fate and necessity, God’s foreknowledge, science’s lawful universe, Freud’s unconscious and now the findings of modern neuroscience. He says, “Hardly anybody doubts that the grey matter in our skulls underpins our thoughts and feelings, in the sense that a working brain is required for our mental life”, and, “The more we find out about the workings of the brain, the less room there seems to be in it for any kind of autonomous, rational self. Where, in the chain of events leading up to an action, could such a thing be found?”. In the opinion of Wegner and many others, investigations of the brain show that conscious will is an “illusion”. But, I wonder, how can determinism die as long as the only alternative is conscious free-will? It does not die, but it does not win the day either.

Gottlieb goes on: “In 2011, Sam Harris, an American writer on neuroscience and religion, wrote that free will ‘could not be squared with an understanding of the physical world’, and that all our behaviour ‘can be traced to biological events about which we have no conscious knowledge’. Really? There are now hopeful signs of what might be called a backlash against the brain.” What is this backlash? Gottlieb gives some evidence that scans have been overhyped but does not mention other methods that have not been overhyped. In the case of decision making, those methods have confirmed that the process does not start in conscious awareness. This whole part is beside the point – Libet-type experiments do not depend on scans.

Gottlieb feels that it is not practical to look for free-will in the brain. Referring to the simplicity of the Libet experiment, he says, “But while twitches of the wrist may be simple to monitor, they’re an odd place to search for free will.” I disagree – what would be a better place? Here we have a simple act that is easy to identify and time; it is done whenever the subject chooses, without any sort of ‘want’, ‘need’, ‘habit’ or ‘place in a chain of actions’ to interfere with the subject’s freedom; and it can be followed and timed without recourse to scans. If there is conscious free-will it should show itself here, and it definitely does not. So why does conscious free-will not die if it has been shown not to happen where it should? How can it die when the only alternative is seen as an already determined future outside one’s control?

Tallis is quoted in the context of more complex and natural decisions: “And it would be crazy to think that conscious deliberation isn’t really involved in them.” (I must be crazy.) Gottlieb leans too heavily on Tallis’ ideas. Stephen Cave puts his finger on it when he says that most philosophers and scientists do in fact believe that mind is just the product of certain brain activity, even if we do not currently know quite how, and that Tallis “does both the reader and these thinkers an injustice” by declaring that view “obviously” wrong. Tallis has not put forward any alternative.

There is a way out of this dispute – reject both sides.

We do not have two brains or separate thinking systems or separate ‘minds’. We have one brain, it thinks, and some of the results of thinking are made conscious. We cannot have our brains being one thing and our consciousness another; that really is crazy. Our brains make decisions; they are as close to an autonomous, rational self as we get (and that is fairly close). The decisions are not determined or predictable – there is no shortcut to actually making the decision. The bottom line is that we do make decisions. If it is important to the processes of decision making for the brain to be globally aware of some step, or to hold it in working memory, then that step will be made conscious. If it does not need to be conscious then it probably will not be. Strings of steps passing through consciousness may appear to be a thought process, but the real cognitive work behind each step does not enter consciousness. We are not consciously making a decision; instead we are making a decision and we may be aware of some ingredients in the process. We are not in a determined world, but we are in a physical one and part of it. We do not have conscious free-will – no problem here, because that doesn’t mean we lack a will or are not responsible for what we do. Both determinism and free-will are straw men, red herrings, passé and flawed half-truths.

Interpreting spatial language

When we refer to spatial arrangements in language, there are three different ways to do it. We can see ourselves as central and refer to the positions of other objects by their headings from us. So, that post is behind me or that house is to my left – the relative or egocentric frame. We can also use some other object as the reference. So, that post is behind the house or the garden is on the left of the car – the intrinsic or object-oriented frame. Finally, we can use the world as the reference. So, the post is to the west of the house – the absolute or world-oriented frame. How people handle the choice of referent is largely a matter of culture and language. European languages mainly use a relative frame of reference as the default – we are usually the reference. But there is often room for ambiguity. Does “the ball is in front of the man” mean that the ball is between me and the man, or does it mean that the man is facing the ball? This interests me in particular because I often seem to misinterpret what is meant or am left wondering. In the past, I have put it down to being left-handed.
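
The ambiguity can be made concrete with a little geometry. Here is a toy sketch (my own illustration, not anything from the paper discussed below) in which the very same scene makes “the ball is in front of the man” true under one frame and false under the other:

```python
# Toy illustration (my own, not from the paper) of relative vs intrinsic
# readings of "the ball is in front of the man", using 2-D positions.

def _dot(a, b):
    return a[0] * b[0] + a[1] * b[1]

def _sub(a, b):
    return (a[0] - b[0], a[1] - b[1])

def in_front_relative(ball, man, viewer):
    # Relative (egocentric) frame: the ball is "in front of the man"
    # if it lies on the side of the man that faces the viewer.
    return _dot(_sub(ball, man), _sub(viewer, man)) > 0

def in_front_intrinsic(ball, man, man_facing):
    # Intrinsic (object-oriented) frame: the ball is "in front of the man"
    # if it lies in the direction the man himself is facing.
    return _dot(_sub(ball, man), man_facing) > 0

# Scene: the viewer at the origin looks at a man four units away who is
# facing away from the viewer; the ball sits between viewer and man.
viewer, man, facing = (0, 0), (0, 4), (0, 1)
ball = (0, 2)
print(in_front_relative(ball, man, viewer))    # True in the relative frame
print(in_front_intrinsic(ball, man, facing))   # False in the intrinsic frame
```

The absolute (world-oriented) frame would instead compare the ball’s position against a fixed compass direction, independent of both the viewer and the man.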

Janzen, Haun and Levinson (see citation) investigated relative and intrinsic frames using the possibilities of ambiguity. They created sentences like “the ball is in front of the man” and three drawings to go with each sentence. One drawing would be true in both relative and intrinsic interpretations; one would be false for both; one would be true for one interpretation and false for the other. They showed a picture and sentence and asked the subject to say whether the drawing was a correct description of the sentence. The subjects were given feedback on whether their answers were correct. This feedback was based on either the relative or the intrinsic interpretation, and the subjects came to judge the sentence-drawing pairs according to the feedback type they were receiving. During a block of trials, consistent feedback (correct, incorrect) was given, so inducing either a relative or an intrinsic frame. Midway through the trials, the second block began and the feedback was switched to the alternative reference frame without any explanation. Only correct answers were used in the analysis. This gave results for identical sentence-drawing pairs viewed in each of the two frames of reference. The subjects spoke Dutch, which tends to use relative reference. Event-related fMRI was used to follow the differences in cortical activity in the two reference frames following identical linguistic and visual input.

They found two networks, an intrinsic one and a relative one. The differentiation starts early at the level of sentence processing (that is before the drawing is shown and the answer required). Increased brain activity in bilateral parahippocampal gyrus was associated with the intrinsic frame of reference whereas increased activity in the right superior frontal gyrus and in the parietal lobe was observed for the relative frame of reference.

Comparing trials with intrinsic as well as relative pictures to baseline trials we found a shared widespread network with increased activity in occipital, parietal, temporal and frontal brain regions. This is in line with evidence from an fMRI study that distinguished viewer-, object-, and landmark-centered distance judgments, and found common activity for all three types in bilateral parietal, occipital, and right frontal premotor regions as well.

In the present study we directly compared intrinsic with relative trials and observed increased activity for intrinsic trials in bilateral parahippocampal gyrus, an area closely connected to the hippocampus through the entorhinal and perirhinal gyrus. Recent neuroimaging studies emphasize the importance of the parahippocampal gyrus for the recognition of familiar as well as novel spatial environments and scenes and for object-location memory. To correctly solve intrinsic trials participants needed to consider the spatial relation of two objects and decide whether the scene matched a previously presented sentence. Therefore scene representation within the parahippocampal gyrus should be able to support intrinsic frames of reference.

fMRI data has shown that the parietal lobe is associated with representations of object locations in an egocentric reference frame. The present data, comparing relative trials to baseline trials, support the involvement of the parietal lobe. We observed increased activity in the left parietal lobe for the relative frame of reference only, confirming neurophysiological studies which report the involvement of the parietal lobe in egocentric coding.

Relative trials as compared to intrinsic trials also showed strongly increased activity in superior frontal gyrus. This is in line with findings from researchers who have observed a parietal/frontal network for viewer-centered coding.

This gives a glimpse of the way language is interpreted in the brain, by creating a model of what is understood by the words.

Janzen, G., Haun, D., & Levinson, S. (2012). Tracking Down Abstract Linguistic Meaning: Neural Correlates of Spatial Frame of Reference Ambiguities in Language. PLoS ONE, 7(2). DOI: 10.1371/journal.pone.0030657

BOLD confounds

The pitfalls of experimental methods in neuroscience have not all been worked out. I’ll say it again: no one result is reliable in science; what is convincing is a fabric of results – not a string but a fabric. This is especially true in a new field.

Micah at neuroconscience blog has a posting on possible BOLD signal problems. (here)

Particularly in fMRI research, we’re all too familiar with certain regions that seem to pop up in study after study, regardless of experimental paradigm. When it comes to areas like the anterior cingulate cortex (ACC) and insula (AIC), the trend is glaringly obvious. Generally when I see the same brain region involved in a wide variety of tasks, I think there must be some very general level function which encompasses these paradigms. Off the top of my head, the ACC and AIC are major players in cognitive control, pain, emotion, consciousness, salience, working memory, decision making, and interoception to name a few. Maybe on a bad day I’ll look at a list like that and think, well localization is just all wrong, and really what we have is a big fat prefrontal cortex doing everything in conjunction. A paper published yesterday in Cerebral Cortex (Di, Kannurpatti, Rypma, Biswal: Calibrating BOLD fMRI Activations with Neurovascular and Anatomical Constraints) took my breath away and led to a third, more sinister option: a serious methodological confound in a large majority of published fMRI papers.

An important line of research in neuroimaging focuses on noise in fMRI signals. The essential problem of fMRI is that, while it provides decent spatial resolution, the data is acquired slowly and indirectly via the blood-oxygenation level dependent (BOLD) signal. The BOLD signal is messy, slow, and extremely complex in its origins. Although we typically assume increasing BOLD signal equals greater neural activity, the details of just what kind of activity (e.g. excitatory vs inhibitory, post-synaptic vs local field) are murky at best. Advancements in multi-modal and optogenetic imaging hold a great deal of promise regarding the signal’s true nature, but sadly we are currently at a “best guess” level of understanding. This weakness means that without careful experimental design, it can be difficult to rule out non-neural contributors to our fMRI signal. Setting aside the worry about what neural activity IS measured by BOLD signal, there is still the very real threat of non-neural sources like respiration and cardiovascular function confounding the final result. This is a whole field of research in itself, and is far too complex to summarize here in its entirety. The basic issue is quite simple though.

Well, maybe not that simple – go to the original posting for the physiology. The upshot is that it is really important to control for the subject holding their breath or breathing differently at different times in the protocol.
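
The statistical side of that control is easy to sketch. Below is a minimal, hypothetical illustration (my own construction, not the method of the paper) of why it matters: a simulated voxel whose signal is driven entirely by respiration that happens to differ between task and rest blocks looks task-active in a naive GLM, but not once the measured respiration trace is added as a nuisance regressor:

```python
# A minimal sketch (my own construction, not the paper's method) of a
# respiration confound in a GLM. The simulated voxel has NO genuine task
# response; its signal is driven by respiration, and the subject happens
# to breathe differently during task blocks than during rest.

import numpy as np

rng = np.random.default_rng(0)
n = 200                                          # time points
task = (np.arange(n) % 40 < 20).astype(float)    # boxcar: 20 on, 20 off
resp = 0.6 * task + np.sin(0.3 * np.arange(n))   # breathing changes with task
bold = 2.0 * resp + rng.normal(0.0, 0.5, n)      # respiration-driven voxel

def task_beta(signal, regressors):
    """Least-squares fit; return the coefficient of the task regressor."""
    X = np.column_stack(regressors + [np.ones(n)])   # design with intercept
    beta, *_ = np.linalg.lstsq(X, signal, rcond=None)
    return beta[0]

naive = task_beta(bold, [task])           # respiration variance lands on task
adjusted = task_beta(bold, [task, resp])  # respiration modelled separately
print(round(naive, 2), round(adjusted, 2))
```

With the nuisance regressor in place the spurious task coefficient collapses towards zero; without it, this voxel would be reported as task-activated.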

The authors conclude that “(results) indicated that the adjustment tended to suppress activation in regions that were near vessels such as midline cingulate gyrus, bilateral anterior insula, and posterior cerebellum.” It seems that indeed, our old friends the anterior insula and cingulate cortex are extremely susceptible to neurovascular confound.

What does this mean for cognitive neuroscience? For one, it should be clear that even well-controlled fMRI designs can exhibit such confounds. This doesn’t mean we should throw the baby out with the bathwater though; some designs are better than others. Thankfully it’s pretty easy to measure respiration with most scanners, and so it is probably a good idea at minimum to check if one’s experimental conditions do indeed create differential respiration patterns. Further, we need to be especially cautious in cases like meditation or clinical fMRI, where special participant groups may have different baseline respiration rates or stronger parasympathetic responses to stimuli.

Experimental methods in neuroscience are new enough and complicated enough to be misleading. It is reassuring to me that there are researchers looking at the possible shortcomings of these methods.

Creative running

Christopher Bergland (here) believes that we think in a different way when we exercise.

Anyone who exercises regularly knows that your thinking process changes when you are walking, jogging, biking, swimming, riding the elliptical trainer, etc. New ideas tend to bubble up and crystallize when you are inside the aerobic zone. You are able to connect the dots and problem solve with a cognitive flexibility that you don’t have when you are sitting at your desk. This is a universal phenomenon, but one that neuroscientists are just beginning to understand. … Creativity is the ability to bring together disparate ideas in new and useful combinations. What is happening to the electrical, chemical and architectural environment of our brains when we exercise that stimulates our imagination and makes us more creative? What is the parallel between the waking dream state induced by exercise and the REM dream state experienced during sleep? Although these questions remain enigmatic, neuroscientists have identified that the non-thinking ‘default state’ of consciousness is key to creative thinking. … Sweat is like WD-40 for your mind – it lubricates the rusty hinges of your brain and makes your thinking more fluid. Exercise allows your conscious mind to access fresh ideas that are buried in the subconscious. Every thought that you have is a unique tapestry of millions of neurons locking together in a specific pattern – this is called an engram. If you do not ‘unclamp’ during the day, you get locked into a loop of rut-like thinking. If for any reason you are unable to do aerobic activity, focused meditation is also an excellent way to create a default state.

The piece has quotes from a number of writers and runners such as:

Ralph Waldo Emerson said of Thoreau: “The length of his walk uniformly made the length of his writing. If shut up in the house, he did not write at all.”

I find this idea intriguing. There is no reason why the rhythm and effort of running (or even walking) would not affect both cognition and consciousness. There might even be some chemistry there. But the ‘default network’ angle is also interesting. If the motor part of the brain is busy and, because we are moving, we cannot override the control of sensory input, then there cannot be ‘task’ control of attention. It would be, or be like, the default network being in control.

A totally opposite but somehow similar effect is my old trick of sitting still in the dark and silence to think. What would be the difference between the motor and sensory parts of the brain working automatically, and therefore leaving the rest of the brain free to mull, and a sort of imposed sensory deprivation and motor inactivity letting the brain mull? Maybe there is a difference and maybe not. I mention this for those of you who are like me and too old and sore to ever run again.

Control of attention

Two sorts of perceived items must compete for attention: items that are required for on-going tasks and items from the environment that are surprising or very conspicuous. We do not want to be hit by a bus because we are solving a little problem, nor do we want to be distracted from our concentration by every little change in our surroundings. How is the compromise accomplished?

In a recent paper (see citation) Mazaheri and others have studied this question. They used tracking of eye movements and EEG recordings to follow the choice between a target that was part of the ‘task’ and a distraction (one of: none, a distraction that was no more conspicuous than the target, and a highly conspicuous distraction). Would the eyes cascade towards the target or the distraction first? And was there a difference between the EEG events before a move towards the target compared with one towards the distraction?

Not too surprisingly, they found that people differed in their ability to concentrate and ignore the distraction. Despite this there was a clear pattern of activity. When there was distraction there appeared to be a prior disengagement from the task.

We found that an increase in pre-stimulus alpha activity over frontal-central regions was predictive of subsequent attentional capture by a salient distractor. Previous studies have found an alpha increase in a particular region to be indicative of the functional inhibition/disengagement of that region. … could reflect the disengagement of the frontal-eye fields (FEF). FEF is involved in top-down voluntary control of saccades and attention.

There was also a difference in the N1 wave response: it was smaller when the distractor won the first saccade. This is assumed to be due to a gating that favours items at the current spatial focus. An increased N1 would imply greater task-relevant processing, locked to the appearance of the stimulus, in the trials where the target was looked at first.

As well as this locking effect tied to the stimulus (a fixed time after the stimulus appeared), there is also locking tied to the saccade (a fixed time before the eyes moved).

there was a central-parietal alpha burst just preceding the onset of the first saccade that was greater in amplitude for saccades to the target. … could index the transient inhibition of the prepotent response to saccade to the more salient distractor. … intraparietal sulcus contains an attentional priority map and is involved in saccadic control and is a good candidate for being the source of the inhibitory control signal seen here.

They also found evidence of a locking effect of a negative wave occurring before the eye movement.

A qualitative inspection of the saccade locked ERPs (event related potentials) suggests that this negative deflection is due to a latency shift in the slow negative drift building up to a potential preceding the saccade to a salient distractor.

It would be interesting to see how these indications varied if the task targets and the conspicuous distractors had varying importance/strength. What does it take for the top-down control to overcome bottom-up and vice versa?

Mazaheri, A., DiQuattro, N., Bengson, J., & Geng, J. (2011). Pre-Stimulus Activity Predicts the Winner of Top-Down vs. Bottom-Up Attentional Selection. PLoS ONE, 6(2). DOI: 10.1371/journal.pone.0016243

Introspection is not as it appears

Here is another of the Edge question essays, by Timothy D Wilson, “We are what we do”. The Edge answers are (here).

My favorite is the idea that people become what they do. …

Self-perception theory turns common wisdom on its head. People act the way they do because of their personality traits and attitudes, right? They return a lost wallet because they are honest, recycle their trash because they care about the environment, and pay $5 for a caramel brulée latte because they like expensive coffee drinks. While it is true that behavior emanates from people’s inner dispositions, Bem’s insight was to suggest that the reverse also holds. If we return a lost wallet, there is an upward tick on our honesty meter. After we drag the recycling bin to the curb, we infer that we really care about the environment. And after purchasing the latte, we assume that we are coffee connoisseurs.

Hundreds of experiments have confirmed the theory and shown when this self-inference process is most likely to operate (e.g., when people believe they freely chose to behave the way they did, and when they weren’t sure at the outset how they felt).

Self-perception theory is elegant in its simplicity. But it is also quite deep, with important implications for the nature of the human mind. Two other powerful ideas follow from it. The first is that we are strangers to ourselves. After all, if we knew our own minds, why would we need to guess what our preferences are from our behavior? If our minds were an open book, we would know exactly how honest we are and how much we like lattes. Instead, we often need to look to our behavior to figure out who we are. Self-perception theory thus anticipated the revolution in psychology in the study of human consciousness, a revolution that revealed the limits of introspection.

But it turns out that we don’t just use our behavior to reveal our dispositions – we infer dispositions that weren’t there before. Often, our behavior is shaped by subtle pressures around us, but we fail to recognize those pressures. As a result, we mistakenly believe that our behavior emanated from some inner disposition. Perhaps we aren’t particularly trustworthy and instead returned the wallet in order to impress the people around us. But, failing to realize that, we infer that we are squeaky clean honest. Maybe we recycle because the city has made it easy to do so (by giving us a bin and picking it up every Tuesday) and our spouse and neighbors would disapprove if we didn’t. Instead of recognizing those reasons, though, we assume that we should be nominated for the Green Neighbor of the Month Award. Countless studies have shown that people are quite susceptible to social influence, but rarely recognize the full extent of it, thereby misattributing their compliance to their true wishes and desires–the well-known fundamental attribution error. … In short, we should all heed Kurt Vonnegut’s advice: “We are what we pretend to be, so we must be careful about what we pretend to be.”

This is the same idea as the notion that our justifications are guesses, produced when required, which may not be the primary reasons for our actions. The clearest demonstrations of these guesses are in split-brain subjects and people under hypnosis, but there are many others. We cannot examine our motives through introspection and are in great danger of fooling ourselves.

Decision theory

Back to the Edge question (here). Stanislas Dehaene gave his answer to ‘What is your favorite deep, elegant, or beautiful explanation?’ as The Universal Algorithm for Human Decisions. Most is below:

All of our mental decisions appear to be captured by a simple rule that weaves together some of the most elegant mathematics of the past centuries: Brownian motion, Bayes’ rule, and the Turing machine.

Let us start with the simplest of all decisions: how do we decide that 4 is smaller than 5? Psychological investigation reveals many surprises behind this simple feat. First, our performance is very slow: the decision takes us nearly half a second… Second, our response time is highly variable from trial to trial, anywhere from 300 milliseconds to 800 milliseconds… Third, we make errors – it sounds ridiculous, but even when comparing 4 with 5, we sometimes make the wrong decision. Fourth, our performance varies with the meaning of the objects: we are much faster, and make fewer errors, when the numbers are far from each other (such as 1 and 5) than when they are close (such as 4 and 5).

Well, all of the above facts, and many more, can be explained by a single law: our brain takes decisions by accumulating the available statistical evidence and committing to a decision whenever the total exceeds a threshold.

Let me unpack this statement. The problem that the brain faces when taking a decision is one of sifting the signal from the noise. The input to any of our decisions is always noisy: photons hit our retina at random times, neurons transmit the information with partial reliability, and spontaneous neural discharges (spikes) are emitted throughout the brain, adding noise to any decision. Even when the input is a digit, neuronal recordings show that the corresponding quantity is coded by a noisy population of neurons that fires at semi-random times, with some neurons signaling “I think it’s 4”, others “it’s close to 5”, or “it’s close to 3”, etc. Because the brain’s decision system only sees unlabeled spikes, not full-fledged symbols, it is a genuine problem for it to separate the wheat from the chaff.

In the presence of noise, how should one take a reliable decision? The mathematical solution to that problem was first addressed by Alan Turing, when he was cracking the Enigma code at Bletchley Park. Turing found a small glitch in the Enigma machine, which meant that some of the German messages contained small amounts of information – but unfortunately, too little to be certain of the underlying code. Turing realized that Bayes’ law could be exploited to combine all of the independent pieces of evidence. Skipping the math, Bayes’ law provides a simple way to sum all of the successive hints, plus whatever prior knowledge we have, in order to obtain a combined statistic that tells us what the total evidence is.

With noisy inputs, this sum fluctuates up and down, as some incoming messages support the conclusion while others merely add noise. The outcome is what mathematicians call a “random walk” or “Brownian motion”, a fluctuating march of numbers as a function of time. In our case, however, the numbers have a currency: they represent the likelihood that one hypothesis is true (e.g. the probability that the input digit is smaller than 5). Thus, the rational thing to do is to act as a statistician, and wait until the accumulated statistic exceeds a threshold probability value. Setting it to p=0.999 would mean that we have one chance in a thousand to be wrong.

… There is a speed-accuracy trade-off: we can wait a long time and take a very accurate but conservative decision, or we can hazard a response earlier, but at the cost of making more errors. Whatever our choice, we will always make a few errors.

Suffice it to say that the decision algorithm I sketched, and which simply describes what any rational creature should do in the face of noise, is now considered as a fully general mechanism for human decision making. It explains our response times, their variability, and the entire shape of their distribution. It describes why we make errors, how errors relate to response time, and how we set the speed-accuracy trade-off. It applies to all sorts of decisions, from sensory choices (did I see movement or not?) to linguistics (did I hear “dog” or “bog”?) and to higher-level conundrums (should I do this task first or second?). And in more complex cases, such as performing a multi-digit calculation or a series of tasks, the model characterizes our behavior as a sequence of accumulate-and-threshold steps, which turns out to be an excellent description of our serial, effortful Turing-like computations.

Furthermore, this behavioral description of decision-making is now leading to major progress in neuroscience. In the monkey brain, neurons can be recorded whose firing rates index an accumulation of relevant sensory signals. The theoretical distinction between evidence, accumulation and threshold helps parse out the brain into specialized subsystems that “make sense” from a decision-theoretic viewpoint.

As with any elegant scientific law, many complexities are waiting to be discovered… Nevertheless, as a first approximation, this law stands as one of the most elegant and productive discoveries of twentieth-century psychology: humans act as near-optimal statisticians, and any of our decisions corresponds to an accumulation of the available evidence up to some threshold.

To nit-pick a bit: algorithm seems an inappropriate term in the title. Turing is mentioned in two contexts, but only the phrase ‘our serial, effortful Turing-like computations’ seems to refer to what we call a Turing machine; the decoding trick does not have to do with Turing machines. And the neural noise level in the brain seems to be a regulated parameter to do with sensitivity, not just an unavoidable by-product of neural activity. None of these picky things takes away from the brilliant explanation.
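
The accumulate-to-threshold rule itself is easy to simulate. The following toy sketch (my own construction, with made-up drift and threshold values) reproduces the qualitative facts Dehaene lists: close comparisons, which give weak evidence per sample, are slower and more error-prone than distant ones, and response times vary from trial to trial:

```python
# Toy simulation (my own sketch, with invented parameters) of the
# accumulate-to-threshold decision rule. Each sample adds noisy evidence;
# the decision is taken when the running total crosses +threshold
# (correct) or -threshold (error).

import random

rng = random.Random(42)

def decide(drift, threshold=3.0, noise=1.0):
    """One trial: return (was_correct, number_of_samples)."""
    total, steps = 0.0, 0
    while abs(total) < threshold:
        total += drift + rng.gauss(0.0, noise)   # one noisy evidence sample
        steps += 1
    return total > 0, steps

def summarize(drift, trials=2000):
    results = [decide(drift) for _ in range(trials)]
    accuracy = sum(ok for ok, _ in results) / trials
    mean_rt = sum(t for _, t in results) / trials
    return accuracy, mean_rt

easy = summarize(drift=0.5)   # like comparing 1 with 5: strong evidence
hard = summarize(drift=0.2)   # like comparing 4 with 5: weak evidence
print(easy, hard)             # easy is both faster and more accurate
```

Raising the threshold trades speed for accuracy, exactly the speed-accuracy trade-off described in the quoted passage.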

Deaf hearing

A recent paper examined a patient with deaf-hearing, analogous to blind-sight, where there can be detection of a signal without conscious awareness of it. (citation below) For example, a person with blind-sight may avoid an obstacle without awareness of it; and, a deaf-hearing person may be startled and orient towards a noise without consciously hearing it.

Deaf-hearing seems to be the rarer condition, and so this stroke victim was examined extensively. The path from within the ear through the brain stem to the thalamus was normal. But bilateral damage to the auditory areas of the cortex seemed to completely disrupt the processing of the signal. There appeared to be a problem with the communication between the thalamus and the cortex as well as damage to the cortex itself.

However, despite this break in the usual path, a signal designated P3 did occur within the cortex. The discussion has this passage:

Bernat et al. (2001) offer evidence that subliminal stimuli can evoke consistent P3 waves. They speculated that P3 could represent a link between unconscious and conscious awareness in the context updating processes. In our patient the generation of P3-like potentials implied that deviant stimuli were selectively processed bypassing networks involved in conscious perception. Schonwiesner et al. (2007) and Pandya (1995) hypothesized that association areas in and adjacent to the auditory parabelt might form an independent circuit from thalamo-cortical projections in the auditory system. These alternate pathways could be preserved in our patient and responsible for the generation of P3-like potentials.

What does a P3 wave indicate?

P3 offers a covert and indirect measure of attentional resource allocation that represents an index of change detection. P3 is related to the activity of associative cortical areas and is sensitive to complex processes around recalled information, stimulus significance, recognized auditory information and memory context updating. The sources of P3 are believed to be located in heteromodal areas of the fronto-parietal cortex and their activation might reflect an attention switch to an environmental change …. some authors have demonstrated an asymmetrical cortical activation of P3 by using unilateral auditory stimulation. Among others, Gilmore and colleagues (2009) argued that in normal condition the right hemisphere is more prominently engaged during working memory and updating processes underlying P3 …the patient showed robust P3-like components over the left posterior areas and a significantly lower distribution of the potentials over the right fronto-temporal and central areas in response to right ear stimulation. The left ear stimulation could not evoke any detectable responses.


In other words, P3 is about an event rather than a sound – it marks the switching of attention to unusual events. In this patient there was a left hemisphere P3 but not a right hemisphere one, and it appears to be the right hemisphere P3 that engages consciousness.


The authors use these results to weigh two hypotheses. A simple hierarchical model, where each level of processing is necessary for the next, is not consistent with the findings. But the reverse hierarchy theory – which asserts that the neural circuits mediating a given percept can be modified starting at the highest representational level and progressing to lower levels in search of more refined, high-resolution information to optimize perception – would not be ruled out by the paradoxical findings in this patient.


Personally, although it is not explicit in the paper, I find the results are further evidence for: one, the necessity of thalamus-cortex communication to consciousness; two, attention being a much more complex entity than just the focus of consciousness; three, a difference in how the left and right hemispheres handle sound; and four, the importance of top-down inputs (expectations) in forming perceptions.


Cavinato, M., Rigon, J., Volpato, C., Semenza, C., & Piccione, F. (2012). Preservation of Auditory P300-Like Potentials in Cortical Deafness. PLoS ONE, 7(1). DOI: 10.1371/journal.pone.0029909

Failure of conscious thought suppression

Don’t think of a purple elephant – you know the game. It is practically impossible to manage this seemingly simple task. The suppressed thought will come to mind, probably every minute or so. Why?

In fact, the more effort we put into avoiding the thought, the more it comes to consciousness. In order to avoid the thought, we have to keep monitoring whether the suppression is working. We are, in effect, repeatedly asking ourselves whether a purple elephant is anywhere near our consciousness. Every once in a while, that unconscious monitoring pops the thought into consciousness.
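This monitoring loop can be caricatured in a few lines of Python. It is only a toy simulation of the idea – the checking rates and the small ‘leak’ probability per check are invented for illustration, not taken from any experiment – but it shows why trying harder makes things worse: each extra check is another chance for the monitor itself to surface the thought.

```python
import random

def suppress(minutes, checks_per_minute, leak_per_check=0.02, seed=1):
    """Toy simulation of an ironic-monitoring loop.

    Each minute, the unconscious monitor checks `checks_per_minute` times
    whether the banned thought is near consciousness. Every check has a
    small chance (`leak_per_check`) of itself popping the thought into
    awareness. Returns the total number of intrusions. All the numbers
    here are made up for illustration.
    """
    rng = random.Random(seed)
    intrusions = 0
    for _ in range(minutes):
        for _ in range(checks_per_minute):
            if rng.random() < leak_per_check:
                intrusions += 1
    return intrusions

# A relaxed suppressor checks rarely; an effortful one checks constantly.
relaxed = suppress(minutes=30, checks_per_minute=2)
effortful = suppress(minutes=30, checks_per_minute=20)
```

On average the effortful suppressor suffers ten times as many intrusions as the relaxed one, purely because it runs the monitor ten times as often.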

This sort of mechanism can explain some social gaffes and Freudian slips. We are just trying too hard to avoid certain subjects or words. If we were less nervous about those subjects, it would be easier to avoid blurting out the unacceptable remark we are determined to avoid. I believe some people call this the ‘don’t mention the war’ effect, after a famous Fawlty Towers episode.

I heard a story of someone learning to ride a bicycle. They were finally staying upright and feeling pretty good. Then they noticed a post along the path ahead and concentrated on avoiding it. The more they told themselves to avoid the post, the more they steered toward it, and in the end they hit it (dead center).

What comes into the center of consciousness is what is important, either because it was not predicted (a surprise) or because it is part of the ongoing task we are trying to accomplish. Usually these are referred to as bottom-up and top-down steering of attention. We have to be careful not to turn a no-no thought into a top-down focus of attention.

What is the preconscious?

I presume the word preconscious was first used by Freud, and that he meant thoughts that were not currently conscious but could easily be made conscious – all retrievable memory, and perceptions that were not being consciously attended to, would fit this description. It could be as wide as everything that is neither repressed nor currently conscious. This does not seem a very useful designation.

A more modern use of preconscious is put forward by S. Dehaene (director of the INSERM-CEA Cognitive Neuroimaging Unit). It sounds very similar to Freud – ‘a transient state of activity in which information is potentially accessible, yet not accessed’ – but is quite different. He distinguishes three processes: subliminal, preconscious and conscious. Subliminal processing happens when there is not enough strength in the bottom-up signal for it to travel forward; preconscious processing happens when the signal feeds forward but there is not enough top-down attention to trigger the reverberation of the thalamo-cortical network; conscious processing has sufficient bottom-up and top-down strength to ignite consciousness. The differences are that Dehaene is talking about processes, not partitions of information; that he is not talking about escaping repression (or repression at all); and that he is not talking about information that is not current (such as the total of episodic memory, as opposed to that part which might be recalled at a particular moment).
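Dehaene’s taxonomy is, at its simplest, a two-factor decision rule: one threshold on bottom-up signal strength, one on top-down attention. The sketch below is my own caricature of it, with invented threshold values – the real account involves continuous network dynamics, not sharp cutoffs – but it makes the shape of the classification clear.

```python
def processing_state(bottom_up, top_down,
                     feedforward_threshold=0.3, ignition_threshold=0.6):
    """Toy classifier for Dehaene's three processing regimes.

    bottom_up: strength of the stimulus signal, scaled 0..1
    top_down:  attention directed at the stimulus, scaled 0..1
    The threshold values are invented for illustration only.
    """
    if bottom_up < feedforward_threshold:
        return "subliminal"    # too weak to travel forward at all
    if top_down < ignition_threshold:
        return "preconscious"  # feeds forward, but no thalamo-cortical reverberation
    return "conscious"         # enough of both to ignite consciousness

# A strong but unattended stimulus stays preconscious:
state = processing_state(bottom_up=0.8, top_down=0.2)  # "preconscious"
```

Note what the rule captures: a preconscious stimulus is not weak – it is strong enough to feed forward and is ‘potentially accessible’ – it simply lacks the top-down attention that would make it conscious.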

A recent paper, Pupillometry: a Window on the Preconscious, by Laeng and others, puts another definition into the mix. The preconscious is not mentioned in the abstract or press releases, and I have not seen mention of consciousness, unconsciousness or preconsciousness in their other publications that I have read (again, I have not been able to read this particular paper in the original). They are dealing with very young infants, so I assume they mean something like ‘before a baby is old enough to be fully conscious’. Because changes in pupil size do not register consciously at any age, it is unlikely that they are discussing the sort of thing that Freud or Dehaene are. This bit of the abstract is fairly clear: “given that pupillary responses can be easily measured in a noninvasive manner, occur from birth, and can occur in the absence of voluntary, conscious processes, they constitute a very promising tool for the study of preverbal (e.g., infants) or nonverbal participants (e.g., animals, neurological patients).” The meaning of preconscious here seems to be ‘prior to the attainment of consciousness through development, evolution or healing’.


I am sure there are people, not familiar with these concepts, who would have thought the word meant ‘on the way to being conscious but not there yet’, with the ‘pre’ simply marking a short interval of time. From Dehaene we have: “In a recent study of the attentional blink, we observed that up to about 180 ms after stimulus presentation, the occipito-temporal event-related potentials evoked by an invisible word were large and essentially indistinguishable from those evoked by a visible word. Yet on invisible trials, the participants’ visibility ratings did not deviate from the lowest value, used when no word was physically present. Thus, intense occipito-temporal activation can be accompanied by a complete lack of conscious report.” There needs to be a word for the part of early processing that is going to be conscious in another fraction of a second, as opposed to the part that is going to remain unconscious. Alas, this is not a way that the word preconscious is used, as far as I can see.

So I will continue to avoid using the word – too confusing, too many possible meanings.