You are currently browsing the thoughts on thoughts weblog archives for December, 2010.

Archive for December 2010

Improving scan results

Yarkoni and others make a plea for collaboration and cumulative science in the mapping of brain functions, and in doing so they offer much-needed cautions to those of us who follow the images without any first-hand experience of fMRI technology. I list here what they say are the shortcomings of individual studies.

  1. The studies are usually small, about 20-50 subjects, and the significance thresholds are stringent, 0.001 or less. This limits detection to only very large effects and gives a misleadingly simple picture. Many effects are not seen in the small sample sizes, or go unreported because they do not reach the significance level. Effects are likely to be far more widespread across the cortex than the highly localized picture we see.

  2. Even with very stringent thresholds, there are many false positives because of the sheer volume of tests (the brain is big). Published false positive ‘lit spots’ in images are estimated at around 15%.

  3. Because of the expense of imaging studies, they are rarely exactly duplicated by others. Instead, near-replications are done that give slightly different information. What is more, it is difficult to match exact locations between studies even when the protocol is identical.

  4. One individual area of the brain can be activated by many tasks, so it is difficult to pair an area uniquely with a function. The ‘area for X’ idea does not mean that if the area is active then X is the current task. There is also the confusion of nearby and overlapping areas.

  5. The identification of networks, such as the ‘default’ and the ‘tasking’ ones, is hampered by the lack of agreement on how to describe both the brain areas and the cognitive tasks.

All these problems the authors feel could be reduced by sharing and integrating data. This requires common frameworks and formats, common databases and, of course, openness and cooperation. They detail these requirements.
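The volume problem in point 2 above can be made concrete with a toy simulation. The voxel count and threshold below are my own illustrative choices, not figures from the paper: even when no voxel carries any real signal, a stringent per-voxel threshold still lights up dozens of spots by chance.

```python
import numpy as np

# Toy illustration of the multiple-comparisons problem: test 50,000
# pure-noise "voxels" at a stringent per-voxel threshold and count
# how many light up anyway.
rng = np.random.default_rng(42)
n_voxels = 50_000
z = rng.standard_normal(n_voxels)   # null data: no real activation anywhere
threshold = 3.09                    # roughly a one-sided p = 0.001 cut-off
false_hits = int(np.sum(z > threshold))
print(false_hits, "voxels exceed the threshold by chance alone")
# the expected count is n_voxels * 0.001 = 50
```

With tens of thousands of simultaneous tests, even p < 0.001 leaves on the order of fifty spurious ‘lit spots’ per study, which is part of why shared data and replication matter.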

 

ResearchBlogging.org
Yarkoni, T., Poldrack, R., Van Essen, D., & Wager, T. (2010). Cognitive neuroscience 2.0: building a cumulative science of human brain function. Trends in Cognitive Sciences, 14(11), 489-496. DOI: 10.1016/j.tics.2010.08.004

Probable, true or truthy

How do we know what is true? How do we decide what to believe? I am not going to present a logical, all-encompassing answer here, but just some helpful observations and my aims in this blog.

  1. Most ideas are possible, and their probability ranges from very close to false to very close to true. In most scientific papers this is quantified as the ‘level of significance’. So a result that is said to be significant has a fairly high probability but not an overwhelming one – say in the region of 95% or more. If I read 20 papers this week, all with significant results at the 95% level, I should not be surprised if one of them turns out to be off base. This is not a failure of the scientific method or a scandal; it is the normal working of experimentation. No matter how good the results look, they may still be a product of chance.

  2. Some ideas are subjected to many, many experimental tests (different types of tests by different people over many years), and even though each experiment's individual probability is not amazing, the total probability of all of these experiments having been the product of chance becomes too small to be taken seriously. This is what is meant by a ‘scientific fact’; it is quite different from the idea of absolute truth, but it is very trustworthy.

  3. Results are one thing but their explanation is another. The same results could be explained by different theories. Theories are judged by how large they are, how many ‘facts’ they explain and how accurately they explain those facts – in other words, how useful and predictive they are. No one expects scientific facts to change, but they do expect theories to change, grow, die, merge and so on. However, some theories are trusted to a high degree: they work well and there seem to be no real problems with them. These strong accepted theories form the foundations of the sciences. For example, biology is based on the Cell Theory, evolution, the Central Dogma of molecular biology and so on. These theories have moved with the times but their core ideas last. To take down one of the central theories of a science is to tear the whole strong fabric of understanding. ‘Scientific truth’ is used to mean these strong theories.

  4. Thus a single experiment with a significance of even 1% or 0.01% is not going to change a scientific fact, let alone a strong scientific theory. It takes a lot of results, contradictions and puzzles to bring down a strong theory or even an accepted fact. It happens, but rarely.

  5. Some ideas are treated as scientific because of their context and language, but they are not. Pundits and even scientists (when writing books and articles rather than journal papers) will say things that appear to them self-evident but for which they have not a shred of evidence. These ‘truthy’ statements are not part of science but of literature, speculation, journalism, politics or even bullshit. To avoid falling into a truthy trap, an individual has to think not about probabilities but about their own biases. If someone says something that seems self-evident to you, you will not notice the lack of evidence. If they say something you find hard to believe, then you will want to know what their evidence is. Lack of explicit or implicit evidence does not make something false; it just makes it the personal opinion of some particular person rather than a piece of science.

  6. New and complex experimental methods are particularly likely to produce results that are not repeatable. New and complex subjects of inquiry are particularly likely to be misinterpreted using models that will not stand the test of time. Today these problems are especially true of neuroscience – a new area of inquiry with new methods and few consensus theories (but many old ideas from pre-scientific thought).
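The arithmetic behind point 1 is worth spelling out. As a sketch (assuming, purely for illustration, independent tests whose null hypotheses are all true):

```python
# Chance that at least one of n independent results, each "significant
# at the 95% level", is a false alarm when every null is actually true.
def p_at_least_one_false(n, alpha=0.05):
    return 1 - (1 - alpha) ** n

print(f"expected false alarms in 20 papers: {20 * 0.05:.1f}")
print(f"chance of at least one false alarm: {p_at_least_one_false(20):.2f}")
```

So about one spurious result in 20 significant papers is the expected baseline, exactly as claimed above, and the chance of at least one is roughly two in three.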

This is the reason that I try as much as I can to give an indication of the evidence behind the ideas that I deal with in my posts. The attempt, although not perfect, is to be clear about which ideas are my opinions (worthy as they are, I think) as opposed to scientific results, facts and theories.

As well, this post is a warning not to take individual results too seriously. A single paper may not turn out to be repeatable or the interpretation of what the experiment is actually measuring may change. It is the accumulation of evidence that counts. We want to build a fabric not a chain of evidence.

The noisy brain

It is generally assumed, currently, that neural synchronization is the method of communication in networks of neurons involved in perception, cognition and action. In a recent paper Ward and others (citation below) have investigated the importance of stochastic resonance in this synchrony. So what is this thing called stochastic resonance?

You will eventually run into stochastic resonance no matter which science you are involved in, from geology to quantum physics. The maths are not easy but the basic idea is simple, at least in its simplest form.

Suppose you have a surface with two depressions on it and a high area between them. A ball can roll around in one depression or the other, but it has no way to climb out of one once in it. This is a bistable system – two stable states with an unstable state between them. Call the high area the threshold. Now suppose that at a regular interval something gives the ball a pull in the direction of the other depression, but not enough of a pull to get it over the barrier. You can visualize this pull as a magnet on a pendulum that swings from directly over one depression to directly over the other, back and forth; and make the ball an iron one. Call this pull the forcing periodic signal: it can bias the movement of the ball but not enough to get it over the barrier. Now suppose we wiggle the whole affair so that the ball has a fairly large random motion, though this random motion is rarely enough on its own to take the ball over the barrier. This is the stochastic or random component; call it noise.

Add the right amount of noise to the signal and presto, the signal added to the noise can often take the ball over the barrier. So a signal that is too weak to be effective is enhanced by the addition of noise. Ordinarily we think of noise as weakening a signal, but in this case it is strengthened. Too little noise does not work, and too much does not work either: the little extra pull of the signal loses significance when it is drowned in heavy noise. SR only works in a narrow band of noise strength that depends on the nature of the bistable system and of the signal. This is a simple, even simplistic, way of visualizing stochastic resonance.
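The ball-in-two-wells picture can be sketched numerically. Below is a minimal simulation (my own illustrative parameters, not from any paper) of an overdamped particle in the double-well potential V(x) = x^4/4 - x^2/2, driven by a sub-threshold periodic signal plus Gaussian noise. With no noise the particle never crosses the barrier; moderate noise lets the signal carry it over.

```python
import numpy as np

def simulate_crossings(noise_std, n_steps=40_000, dt=0.01, seed=0):
    """Count barrier crossings of an overdamped particle in the
    double-well V(x) = x**4/4 - x**2/2, driven by a weak periodic
    signal A*sin(omega*t) that is too small to cross the barrier
    on its own, plus Gaussian white noise of strength noise_std."""
    rng = np.random.default_rng(seed)
    A, omega = 0.2, 0.5                     # sub-threshold forcing
    x, prev_side, crossings = -1.0, -1, 0   # start in the left well
    for i in range(n_steps):
        drift = x - x**3 + A * np.sin(omega * i * dt)   # -dV/dx + signal
        x += drift * dt + noise_std * np.sqrt(dt) * rng.standard_normal()
        side = 1 if x > 0 else -1
        if side != prev_side:               # the ball changed wells
            crossings += 1
            prev_side = side
    return crossings

print("no noise:      ", simulate_crossings(0.0), "crossings")
print("moderate noise:", simulate_crossings(0.5), "crossings")
```

The signal alone produces zero crossings; adding moderate noise produces well-hopping that tracks the signal. Sweeping noise_std would show the characteristic rise and fall of the crossing count that defines the resonance band.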

Wikipedia says:

“Stochastic resonance (SR) is a phenomenon that occurs in a threshold measurement system (e.g. a man-made instrument or device; a natural cell, organ or organism) when an appropriate measure of information transfer (signal-to-noise ratio, mutual information, coherence, d′, etc.) is maximized in the presence of a non-zero level of stochastic input noise, thereby lowering the response threshold; the system resonates at a particular noise level.”

It is also assumed by many, on the basis of a number of experiments, that stochastic resonance is part of the environment of neurons in the brain and an ingredient of the neural processing of information. But in what way does SR act? Where does the noise originate? What is the signal being brought over a threshold? What is the threshold? Where does the signal go and what does it do? Ward and his co-researchers look at what SR has to do with synchrony.

Here is the abstract:

Neural synchronization is a mechanism whereby functionally specific brain regions establish transient networks for perception, cognition, and action. Direct addition of weak noise (fast random fluctuations) to various neural systems enhances synchronization through the mechanism of stochastic resonance (SR). Moreover, SR also occurs in human perception, cognition, and action. Perception, cognition, and action are closely correlated with, and may depend upon, synchronized oscillations within specialized brain networks. We tested the hypothesis that SR-mediated neural synchronization occurs within and between functionally relevant brain areas and thus could be responsible for behavioral SR. We measured the 40-Hz transient response of the human auditory cortex to brief pure tones. This response arises when the ongoing, random-phase, 40-Hz activity of a group of tuned neurons in the auditory cortex becomes synchronized in response to the onset of an above-threshold sound at its “preferred” frequency. We presented a stream of near-threshold standard sounds in various levels of added broadband noise and measured subjects’ 40-Hz response to the standards in a deviant-detection paradigm using high-density EEG. We used independent component analysis and dipole fitting to locate neural sources of the 40-Hz response in bilateral auditory cortex, left posterior cingulate cortex and left superior frontal gyrus. We found that added noise enhanced the 40-Hz response in all these areas. Moreover, added noise also increased the synchronization between these regions in alpha and gamma frequency bands both during and after the 40-Hz response. Our results demonstrate neural SR in several functionally specific brain regions, including areas not traditionally thought to contribute to the auditory 40-Hz transient response. In addition, we demonstrated SR in the synchronization between these brain regions. 
Thus, both intra- and inter-regional synchronization of neural activity are facilitated by the addition of moderate amounts of random noise. Because the noise levels in the brain fluctuate with arousal system activity, particularly across sleep-wake cycles, optimal neural noise levels, and thus SR, could be involved in optimizing the formation of task-relevant brain networks at several scales under normal conditions.

Their research seems to indicate that stochastic resonance contributes to the establishment of synchrony in local sensory areas and also between areas of the brain. As widespread synchrony is one of the hallmarks of the conscious process, we should watch this area of research closely. Two particular results were interesting to me and not included in the abstract.

…it is striking that synchronization in the theta band has a more continuous and general character in this experiment, and is significantly non-zero even in the no-noise condition, whereas that in the alpha and gamma bands is more intermittent and tends to be significantly non-zero only in the added-noise conditions.

and

…it is apparent that attention did not abolish SR, as SR occurred for both left (attended ear) and right (unattended ear) standards. … The present data reinforce their conclusion that attention does not attenuate noise for near threshold stimuli … Rather, SR operates for weak stimuli in noise whether or not attention is being paid to them.

ResearchBlogging.org
Ward, L., MacLean, S., & Kirschner, A. (2010). Stochastic Resonance Modulates Neural Synchronization within and between Cortical Sources. PLoS ONE, 5(12). DOI: 10.1371/journal.pone.0014371

Blog answers

If you are a blogger on anything to do with the brain, take a look at this invitation to answer a questionnaire for the research of Alice Bell. (here) If you are interested in my answers, they are below.

Blog URL: http://charbonniers.org
Thoughts on thoughts: a blog on consciousness by Janet Kwasniak

What do you blog about?

The subject is consciousness but in the widest scientific sense so it includes attention, working memory, dreams, evolutionary reasons for consciousness and other aspects that are linked to consciousness and/or part of it. I try to avoid the temptation to expand into other areas. Absolutely no woo allowed, just science, philosophy or common sense.

Do you feel as if you fit into any particular community, network or genre of science blogging? (e.g. neuroscience, bad science, ex-sbling)

I fit into neuroscience. However I am not part of a blog group like Scienceblogs, Scientopia, Neuroscience blogs etc. I do put postings into ResearchBlogging under Neuroscience. I take part in the Encephalon carnival.

If so, what does that community give you?

(1) I don’t feel alone. (2) Reading the other bloggers gives me sources, ideas and a feel for what the general opinion is on questions. (3) Carnivals and ResearchBlogging give me exposure and alert me to new bloggers.

Are you paid to blog?

No. Nor do I have ads or ask for donations. And I don’t need to be paid because it costs me so little. I pay for my domain & service suppliers etc. and it totals less than a couple hundred dollars/pounds/euros a year; I use open software with no cost. Most people have hobbies that cost them more.

What do you do professionally (other than blog)?

I am retired and have been for about 10 years and live on a small pension with no work, paid or voluntary.

How long have you been blogging at this site?

The blog started June 2008 – two and a half years ago.

Have/ do you blogged elsewhere? When? Where?

No, this is my only blog. I do have a personal site started in 2006 (http://janetsplace.charbonniers.org) and one section of it (called views) could be thought of as a blog. It has a bit of science but it is rarely about the brain, except in some items about language and about Alzheimer's.

Would you describe yourself as a scientist, or as a member of the scientific community? Do you have any formal/ informal training in science? (if so, what area?)

I have never been a scientist but I have always been involved in science. I have been a medical technician, biological research technician, computer programmer, manager of laboratories and manager of computer systems. I have followed scientific developments for 50 years and have a Bachelor's degree in Chemistry and Biology from the OU, as well as a much earlier 2-year diploma in Medical Technology. I am a co-author on 4 peer-reviewed papers.

Note: Nothing above explains my interest in neuroscience. This interest stems from having been dyslexic (and left handed) in the days before it was a recognized condition in rural Canada. I have in effect taught myself to read and write with help from one teacher when I was 12 and my husband from the time I was 19 onwards. I got myself through school and tech training and only then learnt that my problem had a name. I have read everything about the brain that I could understand and get my hands on since then. Until recently, I could believe very little of what I found. Freud and similar, behaviorism, philosophical dualism: all were unconvincing. Now that neurobiology and cognitive science are blooming, I find that my general idea of how the brain works was vaguely on the right track and I am following the developments with great interest.

Do you have any formal training in journalism, science communication, or similar?

No formal training in either - but about 16 years in Toastmasters. I am excellent at oral communication and have a lot of practice at it. Written communication is difficult for me – I have to work at it. I think I am only average at written communication and I would have to be better than average to be published.

Do you write in other platforms? (e.g. in a print magazine?)

No.

Can you remember why you started blogging?

It occurred to me that I would probably be dead in 10 years and that I had things to say so I had better start now. I would never get a book published or even articles. If I tried to get published, all the work would go on the writing and not on the ideas. I would hate it and never actually get it done before I was senile. With blogging there was no pressure on the writing – people could take it as they found it or leave it. It is not that I don’t try to write well, it is just that I don’t succeed in writing to ‘publication’ standard.

I thought about YouTube and doing a series of talks on it, but found blogging easier to set up.

What keeps you blogging?

(1) I enjoy it most of the time. (2) There is a routine so even if I find I am short of time or have other things I want to do, the routine gets the posts done. (3) Because I live in France and my French is not up to making interesting conversation, blogging gives me an English language outlet. Otherwise I would be suffering from ‘cabin fever’. (4) It feels like a worthy mission. (5) I keep learning and modifying my ideas.

Do you have any idea of the size or character of your audience? How?

I look at the stats provided by my domain provider. I am currently running at about 18,000 visits and about 30,000 page views per month for the blog. The stats have risen steadily and not yet leveled off.

I put some postings (approximately half) on ResearchBlogging and they get about the same hits as other neuroscience postings there.

My blog rarely comes up on the early pages of a Google search and so is very far from a standout in popularity.

A couple of bloggers have put my link on their blogroll, so at least some neuroscience bloggers follow my blog. Generally, I have very little idea of the character of the audience.

What’s your attitude to/ relationship with people who comment on your blog?

I don’t get many comments and so I value those I get; I don’t pick fights; I try to respond in a positive way.

What do you think are the advantages of blogging? What are its disadvantages/ limitations?

For me blogging has very few disadvantages. Before the internet and blogging there was no way for someone like me to put ideas forward to the general public.

Do you tell people you know offline that you’re a blogger? (e.g. your grandmother, your boss)

Yes, everyone I can. But I don’t nag anyone to look at it.

Is there anything else you want to tell me about I haven’t asked?

(1) I post to the blog about every 3 days. (The personal site is updated monthly and a genealogical site approximately yearly.) This is between the daily output of some bloggers and the weekly or monthly output of others. (2) The postings are short compared to those of many other bloggers. They often have links to other blogs or papers, quotes from them, and comments on the ideas in the source. No attempt at a chatty format is made. From time to time there are postings that summarize all or a number of previous postings. (3) I do not use a fancy layout or pictures (although I like these in the blogs I read). (4) I find paywalls to original papers and magazine articles very annoying. I don't pay and don't have access to a university library, so I must rely on abstracts and other bloggers for the gist of many interesting developments. I wish everyone published on open access sites.

Why is science talking about freewill?

Science is about physical reality. Scientists themselves can also think about things outside of physical reality, but that is not science. So why is there a trickle of papers dealing with freewill? I ask this because I cannot find anything for freewill to be free from other than the processes of matter and energy in the nervous system. In other words, freewill is freedom from physical reality. Science concerns itself with what can actually be investigated, and it cannot investigate anything that is not physical. Whether freewill exists or not, and what it might be like, are not scientific questions.

Some (see Brembs paper) would like to make it a scientific question by unilaterally re-defining the word/phrase ‘freewill’. Changing the meanings of words is something that science does fairly often. But in this case it is DANGEROUS. The general media and the anti-neurobiology commentators are not going to warn the public that science has changed the meaning of freewill. They will just point out that science has accepted the spiritual. It would be as dangerous as changing the meaning of ‘intelligent design’ to mean a natural process of optimization. What science has to get across is that the freewill-vs-determinism argument is dead because BOTH ideas are flawed.

Now I hear people say, “you are making me an automaton, you are denying that I make decisions, you are taking away my spontaneity, you are saying that I am not responsible.” Nonsense: you are not an automaton but a living thing. Of course you make decisions, and of course you are responsible for the decisions you make. What do these concerns have to do with some non-physical process? Leave the dualism at the door.

Introspection is not reliable – it is a process of educated guesses, not direct knowledge. We guess at the world, test our guesses and make a pretty good working model of the world. That model includes many of our thought processes. Our model of a tree is not the actual tree; likewise, our model of our thought process is not our actual thought process. We guess at our own motivation and we guess at others' motivation. We know deep down that we can be fooled about our own and about others' motivation. Here is the abstract from a recent paper (Pronin, Kugler). It seems to show that we assume more freewill in ourselves than in others. It could also show that we guess motivation differently in ourselves and in others. We may even, on occasion, see the reasons for someone else's decision more clearly than we see our own.

Abstract: Four experiments identify a tendency for people to believe that their own lives are more guided by the tenets of free will than are the lives of their peers. These tenets involve the a priori unpredictability of personal action, the presence of multiple possible paths in a person’s future, and the causal power of one’s personal desires and intentions in guiding one’s actions. In experiment 1, participants viewed their own pasts and futures as less predictable a priori than those of their peers. In experiments 2 and 3, participants thought there were more possible paths (whether good or bad) in their own futures than their peers’ futures. In experiment 4, participants viewed their own future behavior, compared with that of their peers, as uniquely driven by intentions and desires (rather than personality, random features of the situation, or history). Implications for the classic actor–observer bias, for debates about free will, and for perceptions of personal responsibility are discussed.

 

ResearchBlogging.org
Pronin, E., & Kugler, M. (2010). People believe they have more free will than others. Proceedings of the National Academy of Sciences. DOI: 10.1073/pnas.1012046108
Brembs, B. (2010). Towards a scientific concept of free will as a biological trait: spontaneous actions and decision-making in invertebrates. Proceedings of the Royal Society B: Biological Sciences. DOI: 10.1098/rspb.2010.2325

The many faces of Bayesian models

I will come clean to begin with. I like the general idea of Bayesian inductive reasoning – it is so clearly on the mark that it almost appears tautological. But as soon as actual numbers are put into actual equations, my confidence starts to melt, because the numbers are often guesses. I find that many people have some sort of switch in their thinking, so that when they are doing mechanical operations on numbers (solving equations) they stop doing any critical thinking (evaluating ideas). A case in point: the trust placed in Bayesian equations, without critical examination of the priors fed into them, contributed in part to the recent credit crunch. So I whole-heartedly endorse the spirit of Bayesian logic but am suspicious of its practice.
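The worry about guessed priors can be made concrete with a line or two of Bayes' rule. In this sketch (the sensitivity and false-positive numbers are invented purely for illustration), the same evidence yields wildly different posteriors depending on which prior was fed in:

```python
# Posterior probability of a hypothesis after one positive test,
# computed for several guessed priors (illustrative numbers only).
def posterior(prior, sensitivity=0.9, false_positive=0.05):
    """Bayes' rule: P(H | +) = P(+|H) P(H) / P(+)."""
    evidence = sensitivity * prior + false_positive * (1 - prior)
    return sensitivity * prior / evidence

for prior in (0.5, 0.1, 0.01):
    print(f"prior = {prior:4.2f}  ->  posterior = {posterior(prior):.3f}")
```

The mechanical update is identical each time; only the guessed prior changes, and it swings the conclusion from near-certainty to long odds. That sensitivity to an often-guessed input is the practical danger.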

The other thing that I have to declare is that I distrust people who appear to be squeamish about living things – people who want to understand biological systems without taking any notice of the biological aspects of those systems, who want to understand thought without understanding brains.

So you can understand why I enjoyed Jones and Love's paper on Bayesian Fundamentalism and Bayesian Enlightenment (pdf). It looks at both what is so promising about Bayes' Rule, which they call the Enlightened Bayesian approach, and what is so disturbing, which they call Bayesian Fundamentalism.

What is meant by these terms -

… placing too much emphasis on mathematical and computational power at the expense of theoretical development. In particular, there has been a considerable amount of work whose primary goal is to demonstrate that human behavior in some task is rational with respect to a particular choice of Bayesian model. We refer to this school of thought as Bayesian Fundamentalism, because it strictly adheres to the tenet that human behavior can be explained through rational analysis—once the correct probabilistic interpretation of the task environment has been identified—without recourse to process, representation, resource limitations, or physiological or developmental data. ”

… the Enlightened Bayesian approach, because it goes beyond the dogma of pure rational analysis and actively attempts to integrate with other avenues of inquiry in cognitive science. A critical distinction between Bayesian Fundamentalism and Bayesian Enlightenment is that the latter considers the elements of a Bayesian model as claims regarding psychological process and representation, rather than mathematical conveniences made by the modeler for the purpose of deriving computational- level predictions. Bayesian Enlightenment thus treats Bayesian models as making both rational and mechanistic commitments, and it takes as a goal the joint evaluation of both. ”

There are other uses of Bayesian logic -

… Agnostic Bayesian research is concerned with inferential methods for deciding among scientific models based on empirical data. This line of research has developed powerful tools for data analysis, but as with other such tools (e.g., analysis of variance, factor analysis) they are not intended as models of cognition itself. Because it has no position on whether the Bayesian framework is useful for describing cognition, Agnostic Bayes is not a topic of the present article. Likewise, research in pure Artificial Intelligence that uses Bayesian methods without regard for potential correspondence with biological systems is beyond the scope of this article. ”

Metaphors are indispensable in science, giving structure, understanding, parallels, insights and new ideas. But researchers must be careful not to mistake metaphors for theories.

Bayesian Fundamentalism clearly rejects mechanism and shares this with Behaviorism.

The core assumption is that one can predict behavior by calculating what is optimal in any given situation. Thus, the theory is cast entirely at the computational level, without recourse to mechanistic (i.e., algorithmic or implementational) levels of explanation. As a meta-scientific stance, this is a very strong position. It asserts that a wide range of modes of inquiry and explanation are essentially irrelevant to understanding cognition. In this regard, the Bayesian program has much in common with Behaviorism. … Importantly, the limitation is not just on what types of explanations are considered meaningful, but also on what is considered worthy of explanation – that is, what scientific questions are worth pursuing and what types of evidence are viewed as informative.”

Likewise it has similarities with Evolutionary Psychology in the temptation for just-so stories.

Bayesian Fundamentalism is vulnerable to many of the criticisms that have been leveled at evolutionary psychology. Indeed, we argue that notions of optimality in evolutionary psychology are more complete and properly constrained than those forwarded by Bayesian Fundamentalists because evolutionary psychology considers other processes than simple adaptation. Because it is mechanisms that evolve, not behaviors, Bayesian Fundamentalism’s assertions of optimality provide little theoretical grounding and are circular in a number of cases.”

Whereas previous work in the heuristics-and-biases tradition cast the bulk of cognition as irrational using a fairly simplistic notion of rationality, Bayesian Fundamentalism finds rationality to be ubiquitous based on under-constrained notions of rationality. … Completely sidestepping mechanistic considerations when considering optimality leads to absurd conclusions. To illustrate, it may not be optimal or evolutionarily advantageous to ever age, become infertile, and die, but these outcomes are universal and follow from biological constraints. It would be absurd to seriously propose an optimal biological entity that is not bounded by these biological and physical realities, but this is exactly the reasoning Bayesian Fundamentalists follow when formulating theories of cognition. ”

There are also problems with what optimal means for developing minds.

On the other hand, Enlightened Bayesian models can be taken seriously as psychological theories.

According to the Fundamentalist Bayesian view, the hypotheses and their prior distribution correspond to the true environmental probabilities within the domain of study. However, as far as predicting behaviour is concerned, all that should matter is what the subject believes (either implicitly or explicitly) are the true probabilities. … the question of whether people have veridical mental models of their environment can be separated from the question of whether people reason and act optimally with respect to whatever models they have.”

The Bayesian approach suggests that learning involves working backward from sense data to compute posterior probabilities over latent variables in the environment, and then determining optimal action with respect to those probabilities. This can be contrasted with the more purely feed-forward nature of most extant models, which learn mappings from stimuli to behavior and use feedback from the environment to directly alter the internal parameters that determine those mappings.”

Prior distributions offer another opportunity for psychological inquiry within the Bayesian framework. In addition to the obvious connections to biases in beliefs and expectations, the nature of the prior has potential ties to questions of representation. … Conjugate priors are a common assumption made by Bayesian modelers, but this assumption is generally made solely for mathematical convenience of the modeler, rather than for any psychological reason. However, considering a conjugate prior as part of the psychological theory leads to the intriguing possibility that the parameters of the conjugate family constitute the information that is explicitly represented and updated in the brain. ”
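To make the conjugate-prior point concrete (my own sketch, not an example from the paper): with a Beta prior over a Bernoulli rate, the entire posterior is carried by just two numbers, so "updating beliefs" reduces to incrementing two counts. That is exactly the kind of compact, explicitly represented quantity the authors speculate the brain might track.

```python
# Beta-Bernoulli conjugate updating: the posterior stays in the Beta family,
# so learning reduces to incrementing two counts (alpha, beta).
def update_beta(alpha, beta, observations):
    """Update a Beta(alpha, beta) prior with a sequence of 0/1 observations."""
    for x in observations:
        alpha += x        # count of successes
        beta += 1 - x     # count of failures
    return alpha, beta

def posterior_mean(alpha, beta):
    return alpha / (alpha + beta)

# Start from a uniform prior Beta(1, 1) and observe 7 successes in 10 trials.
a, b = update_beta(1, 1, [1, 1, 0, 1, 1, 1, 0, 1, 0, 1])
print(a, b, posterior_mean(a, b))  # 8 4 0.666...
```

The design point is that no distribution over hypotheses is ever stored, only the two parameters of the conjugate family.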

Even the algorithms that Bayesians use to approximate difficult calculations may give useful ideas of how the brain makes similar approximations. So Bayesian ideas have a great deal to offer in the context of a mechanistic metaphor.
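One such family of approximation algorithms is Monte Carlo sampling: an intractable posterior is replaced by an average over simulated samples. A minimal sketch (my own illustration, not taken from the target article) using rejection sampling for the same coin-rate problem:

```python
import random

# Approximate P(theta | 7 heads in 10 flips) by rejection sampling:
# draw theta from the uniform prior, simulate data, and keep theta only
# when the simulated data match the observation. The kept samples then
# approximate the posterior; their average approximates the posterior mean.
def rejection_sample_posterior(n_heads, n_flips, n_draws, seed=0):
    rng = random.Random(seed)
    kept = []
    for _ in range(n_draws):
        theta = rng.random()                                # sample from prior
        heads = sum(rng.random() < theta for _ in range(n_flips))
        if heads == n_heads:                                # keep matches only
            kept.append(theta)
    return kept

samples = rejection_sample_posterior(7, 10, 20000)
estimate = sum(samples) / len(samples)
# The exact posterior mean is (7 + 1) / (10 + 2) = 2/3; the estimate is close.
```

The appeal for mechanistic theories is that a sampler never represents the whole distribution at once, only a stream of concrete guesses.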

Source:

Jones, M. & Love, B.C. (in press). Bayesian Fundamentalism or Enlightenment? On the Explanatory Status and Theoretical Contributions of Bayesian Models of Cognition. Behavioral and Brain Sciences (target article).

Motor bias

Eagleman and Sejnowski report a series of experiments that go a long way toward pinning down the nature of our conscious perception of movement. A number of illusions were used in experiments showing that they share a common process: flash-lag (a moving object aligned with a flash is offset), flash-drag (a flash is offset as a result of nearby motion), feature flash-drag (a change in a moving object is mis-located) and the Frohlich illusion (the starting position of a suddenly appearing moving object is offset). The question the researchers set out to answer is whether it is the position or the time that is altered in these illusions.

By varying the classic setups, the researchers found they could make the consciously perceived location of an object be a position where the object could never have physically been. Thus the illusion could not be based on an actual location with a fiddled time. It was the position that was being fiddled.

Several other characteristics of motion bias were also shown in the experiments.

  • In all the setups there was a trigger, a particular event for the subject to use as the reference for ‘now’. The motion used to bias the position occurs during roughly the 80 ms after the trigger, not before it.

  • There do not appear to be two types of perception, one for stationary and one for moving objects: “the configuration of motion in the visual field influences the localization of both moving and stationary stimuli”. There can therefore be a trade-off between flash-lag and flash-drag.

  • Where features like colour change during the motion of an object, the binding of the feature is not changed; only the position at which the change is bound to the object is shifted.

One aspect of the discussion is a problem for me. The authors appear to consider only one reason for this motion biasing. “The visual system attempts to correct for the processing delays in signals from eye to perception and accounts for these delays by shifting its localizations closer to where they would be if there were no neural delay.” They also assume, “localization computations might only be triggered on a need-to-know basis. If true, this suggests that it may be computationally expensive”.
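The delay-compensation account the authors offer can be caricatured in a couple of lines: shift the reported position along the object's current velocity by the processing delay. The ~80 ms figure comes from the experiments; the linear extrapolation is my own simplification for illustration.

```python
# Toy model of delay compensation: the perceived position is the physical
# position shifted along the object's velocity by the processing delay.
def perceived_position(position, velocity, delay=0.08):
    """position in degrees, velocity in deg/s, delay in seconds (~80 ms)."""
    return position + velocity * delay

# An object at 10 deg moving at 20 deg/s is localized about 1.6 deg ahead,
# while a stationary flash at 10 deg is not shifted: a flash-lag offset.
moving = perceived_position(10.0, 20.0)   # ~11.6 deg
flash = perceived_position(10.0, 0.0)     # 10.0 deg
offset = moving - flash                   # ~1.6 deg
```

A stationary stimulus has zero velocity, so only the moving object gets shifted, which is the offset the flash-lag setups measure.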

I have for some time thought that prediction of the very near future is one of the functions (perhaps the main function) of consciousness. Far from prediction being only an occasional operation in ‘need to know’ situations, I think it may be done continuously, for all motion, all the time. As well as removing the experience of a lag in ‘now’, there are two other reasons for prediction. Comparison of predictive conscious experience with fresh sensory information is a possible method of monitoring the accuracy of perception and checking the validity of our understanding of the world. It is also a possible method for avoiding motor plans that conflict with each other and instead facilitating smoothly integrated motor programs.

ResearchBlogging.org
Eagleman, D., & Sejnowski, T. (2007). Motion signals bias localization judgments: A unified explanation for the flash-lag, flash-drag, flash-jump, and Frohlich illusions. Journal of Vision, 7(4), 3. DOI: 10.1167/7.4.3

Embodied metaphor

An article by R. Sapolsky in the New York Times talks about embodiment without really mentioning the word ‘embodied’ (here). He is looking for the nature of metaphor and where our facility for it comes from.

Symbols, metaphors, analogies, parables, synecdoche, figures of speech: we understand them…It strikes me that the human brain has evolved a necessary shortcut for doing so, and with some major implications.

Consider an animal (including a human) that has started eating some rotten, fetid, disgusting food. As a result, neurons in an area of the brain called the insula will activate. Gustatory disgust. Smell the same awful food, and the insula activates as well. Think about what might count as a disgusting food (say, taking a bite out of a struggling cockroach). Same thing.

Now read in the newspaper about a saintly old widow who had her home foreclosed by a sleazy mortgage company, her medical insurance canceled on flimsy grounds, and got a lousy, exploitative offer at the pawn shop where she tried to hock her kidney dialysis machine. You sit there thinking, those bastards, those people are scum, they’re worse than maggots, they make me want to puke … and your insula activates. Think about something shameful and rotten that you once did … same thing. Not only does the insula “do” sensory disgust; it does moral disgust as well. Because the two are so viscerally similar. When we evolved the capacity to be disgusted by moral failures, we didn’t evolve a new brain region to handle it. Instead, the insula expanded its portfolio…

What are we to make of the brain processing literal and metaphorical versions of a concept in the same brain region? Or that our neural circuitry doesn’t cleanly differentiate between the real and the symbolic? What are the consequences of the fact that evolution is a tinkerer and not an inventor, and has duct-taped metaphors and symbols to whichever pre-existing brain areas provided the closest fit?

Something does not arise from nothing. Basically a nervous system (any nervous system) has a sensory input connected with a motor output. Any elaboration has to be grown on top of that. In other words, the elaboration must be an embodied metaphor. Nothing magic here; just start with the embodied and then pile one metaphor on top of another and you can end up with Shakespeare’s plays or Mozart’s music.

Prinz’s view of consciousness

The OnTheHuman site has an article by J. Prinz (here). I certainly like his approach and find his arguments very convincing.

We … ask which of our psychological states can be conscious. Answers to this question range from boney to bulgy. At one extreme, there are those who say consciousness is limited to sensations; in the case of vision, that would mean we consciously experience sensory features such as shapes, colors, and motion, but nothing else. This is called conservatism (Bayne), exclusivism (Siewert), or restrictivism (Prinz). On the other extreme, there are those who say that cognitive states, such as concepts and thoughts, can be consciously experienced, and that such experiences cannot be reduced to associated sensory qualities; there is “cognitive phenomenology.” This is called liberalism, inclusivism, or expansionism. If defenders of these bulgy theories are right, we might expect to find neural correlates of consciousness in the most advanced parts of our brain. …

Not only do I think consciousness is restricted to the senses; I think it arises at a relatively early level of sensory processing. Consider vision. According to mainstream models in neuroscience, vision is hierarchically organized. Let’s consider where in that hierarchy consciousness arises. … I think consciousness arises at the intermediate level. We experience the world as a collection of bounded objects from a particular point of view, not as disconnected, edged, or viewpoint invariant abstractions. … I think this is true in other senses as well. For example, when we listen to a sentence, the words and phrases bind together as coherent wholes (unlike low-level hearing), and we retain specific information such as accent, pitch, gender, and volume (unlike high-level hearing). Across the senses, the intermediate-level is the only level at which perception is conscious. …

Expansionists say we can be conscious of concepts and thoughts, and that such experiences outstrip anything going on at the intermediate-level of perception. … Associative visual agnosia … cannot recognize objects, but they seem to see them. When presented with an object, they can accurately describe or even draw its shape, but they can’t say what it is. Bayne thinks their experiences are incomplete. He thinks knowing the identity of an object changes our experience of it. This is intuitively plausible. … Instead, we can suppose that our top-down knowledge of the meaning changes how we parse the image. … imaginatively impose a new orientation; we segment figure and ground; and we generate emotions and verbal labels, which we experience consciously along with the image; these are just further sensory states—bodily feelings in the case of emotions, and auditory images in the case of words. I think features of this kind can also explain what is missing in agnosia. Without meaning, images can be hard to parse, and associated images and behaviors do not come to mind.

Another argument comes from Charles Siewert. He focuses on our experience of language. Sometimes, when hearing sentences, we undergo a change in phenomenology, and that change occurs as a result of a change in our cognitive interpretation of the meanings of the words. … Phenomenology also changes when we repeat a word until it becomes meaningless, or when we learn the meaning of a word in a foreign language. In all these cases, we experience the same words across two different conditions, but our experience shifts, suggesting that assignment of meaning is adding something above and beyond the sound of the words. … But there are many sensory changes that take place as a result of sentence comprehension. First, we form sensory imagery. … Second, comprehension affects parsing. … Third, comprehension entails knowing how to go on in a conversation … Fourth, meaning affects emotions. …

The third argument I will consider comes from David Pitt. He begins with the observation that we often know what we are thinking, and we can distinguish one thought from another. This knowledge seems to be immediate, not inferential, which suggests we know what we are thinking by directly experiencing the cognitive phenomenology of our thoughts. The most obvious reply is that knowledge of what we are thinking is based on verbal imagery. … I think this is a kind of illusion. We erroneously believe that we are directly aware of the contents of our thoughts when we hear sentences in the mind’s ear. This belief stems from two things. First, we often use verbal imagery as a vehicle for thinking …Second, when contemplating a word that we understand, we can effortlessly call up related words or imagery, which gives us the impression that we have a direct apprehension of the meaning of that word. Our fluency makes us mistake awareness of a word for awareness of what it represents. …

Putting these points together, I think restrictivists should admit that thinking has an impact on phenomenology, but that impact can be captured by appeal to sensory imagery including images of words, emotions, and visual images of what our thoughts represent. Expansionists must find a case where cognition has an impact on experience, without causing a concomitant change in our sensory states. That’s a tall order.

At this point the dispute between restrictivists and expansionists often collapses into a clash on introspective intuitions. … By way of conclusion, I will try to break this stalemate by sketching five reasons for thinking restrictivism is preferable even if introspection does not settle the debate.

Next comes the arguments for excluding cognitive phenomenology.

  1. To make a convincing case for cognitive phenomenology, expansionists should find a case where the only difference between two phenomenologically distinct cases is a cognitive difference. But so far, no clear, uncontroversial case has been identified.

  2. The second argument points to the fact that alleged cognitive qualities differ profoundly from sensory qualities in that the latter can be isolated in imagination. … If other qualia can be isolated, why not cognitive qualia?

  3. Third, it is nearly axiomatic in psychology that we have poor access to cognitive processes. … The only processes we ever seem to experience consciously are those that we have translated, with great distortion, into verbal narratives.

  4. A fourth argument follows on this one. The incessant use of inner speech is puzzling if we have conscious access to our thoughts. Why bother putting all this into words when thinking to ourselves without any plans for communication? …

  5. Finally, expansionism seems to dash hopes for a unified theory of consciousness. … But there is little reason to think a single mechanism could explain how both perception and thought can be conscious, if cognitive phenomenology is not reducible to perception. This is especially clear if the mechanism is attention. There is no empirical evidence for the view that we can attend to our thoughts. There are no clear cognitive analogues of pop-out, cuing, resolution enhancement, fading, multi-object monitoring, or inhibition of return. Thoughts can direct attention, but we can’t attend to them. Or rather, thoughts become objects of attention only when they are converted into images, words, and emotions. Expansionists might say that thought and sensations attain consciousness in different ways, but, if so, why think that the term “consciousness” has the same meaning when talking about thoughts, if it does not refer to the same mechanism?

This fits with the idea that only what enters the cortex through the thalamus can be involved in the thalamo-cortical loops that synchronize their firing during conscious experience. That category includes sensory input (except the bulk of smell) along with input about movement and emotion arriving via the basal ganglia.

Super MRI

One of the reasons that the neo-cortex has center stage in our view of the brain is that it is big, very big; another is that it is relatively bigger in humans than in other animals; and finally there is the fact that we can examine it more easily than other parts of the brain. So, hey, it just must be the center of thought. A trick to correct this habit of thought for a few moments now and then is to envision the neo-cortex as the computer used by the thalamus, archeocortex and basal ganglia to do the donkey work of sorting out the detail in perception, motor programming and so on. You may not want to get too fond of this picture but it is a good antidote to the continuous emphasis on the neo-cortex.

Things may change. It seems that a more powerful fMRI is now available and it can actually show specific parts of the thalamus and other deep structures ‘lighting up’. Here is the abstract of a recent paper from Otto-von-Guericke University:

Thalamocortical loops, connecting functionally segregated, higher order cortical regions, and basal ganglia, have been proposed not only for well described motor and sensory regions, but also for limbic and prefrontal areas relevant for affective and cognitive processes. These functions are, however, more specific to humans, rendering most invasive neuroanatomical approaches impossible and interspecies translations difficult. In contrast, non-invasive imaging of functional neuroanatomy using fMRI allows for the development of elaborate task paradigms capable of testing the specific functionalities proposed for these circuits. Until recently, spatial resolution largely limited the anatomical definition of functional clusters at the level of distinct thalamic nuclei. Since their anatomical distinction seems crucial not only for the segregation of cognitive and limbic loops but also for the detection of their functional interaction during cognitive-emotional integration, we applied high resolution fMRI on 7 Tesla. Using an event-related design, we could isolate thalamic effects for preceding attention as well as experience of erotic stimuli. We could demonstrate specific thalamic effects of general emotional arousal in mediodorsal nucleus and effects specific to preceding attention and expectancy in intralaminar centromedian/parafascicular complex. These thalamic effects were paralleled by specific coactivations in the head of caudate nucleus as well as segregated portions of rostral or caudal cingulate cortex and anterior insula supporting distinct thalamo-striato-cortical loops. In addition to predescribed effects of sexual arousal in hypothalamus and ventral striatum, high resolution fMRI could extend this network to paraventricular thalamus encompassing laterodorsal and parataenial nuclei. We could lend evidence to segregated subcortical loops which integrate cognitive and emotional aspects of basic human behavior such as sexual processing.

All the anatomical detail aside (not that it is not important) what we are finally coming close to seeing is the heart of the system – the interaction of the various parts of the brain, the important feedback loops, and not just the neo-cortex. I believe that we need to understand those loops before we can come close to understanding the mind. We have a new window – great.


ResearchBlogging.org
Metzger CD, Eckert U, Steiner J, Sartorius A, Buchmann JE, Stadler J, Tempelmann C, Speck O, Bogerts B, Abler B, & Walter M (2010). High field FMRI reveals thalamocortical integration of segregated cognitive and emotional processing in mediodorsal and intralaminar thalamic nuclei. Frontiers in neuroanatomy, 4 PMID: 21088699
