Forget Sherlock Holmes

The received wisdom is that human cognitive abilities are due to humans having a larger frontal cortex than other animals. This ‘fact’ is now in doubt. Even if Sherlock Holmes had a noble forehead, we can’t assume that the frontal lobes are the key to intelligence. This idea may even be hindering a clearer understanding of the brain.

ScienceDaily reports (here) on research by Barton and Venditti, “Human frontal lobes are not relatively large”, in PNAS, May 2013.

Here is the abstract:

One of the most pervasive assumptions about human brain evolution is that it involved relative enlargement of the frontal lobes. We show that this assumption is without foundation. Analysis of five independent data sets using correctly scaled measures and phylogenetic methods reveals that the size of human frontal lobes, and of specific frontal regions, is as expected relative to the size of other brain structures. Recent claims for relative enlargement of human frontal white matter volume, and for relative enlargement shared by all great apes, seem to be mistaken. Furthermore, using a recently developed method for detecting shifts in evolutionary rates, we find that the rate of change in relative frontal cortex volume along the phylogenetic branch leading to humans was unremarkable and that other branches showed significantly faster rates of change. Although absolute and proportional frontal region size increased rapidly in humans, this change was tightly correlated with corresponding size increases in other areas and whole brain size, and with decreases in frontal neuron densities. The search for the neural basis of human cognitive uniqueness should therefore focus less on the frontal lobes in isolation and more on distributed neural networks.

No help needed from Tallis

There are basically two methods of examining the world: the philosophical way and the scientific way. The one I am calling philosophical uses introspection, semantic argument and logic: rational argument from axiomatic first principles. The one I am calling scientific is based on observational or experimental evidence and on building models/theories consistent with that evidence. This does not mean that individual scientists and philosophers do not reach over the fence and use the other method occasionally. And, of course, in the final judgement, both methods depend on being convincing to the audience. Clean logic is convincing; good fit with reality is convincing. I am talking about two stereotypes rather than a distinct line of demarcation.

The difference is illustrated by the use of ‘truth’. In the philosophical method, truth has to do with logic. A logical statement is true or false. In the scientific method, truth has to do with how close a model/theory is to reality. Another difference is the respect that introspection has in philosophy and its lack of respect in science, especially neuroscience.

There was recently an article in the Guardian by Raymond Tallis (here) in which he complains about the lack of metaphysics in physics and other slights of philosophy by science. But the signs of hubris are on his side.

An argument that he uses in several places is that science has not solved several questions after many decades of effort: “The attempt to reconcile its two big theories, general relativity and quantum mechanics, has stalled for nearly 40 years”, and, “The attempt to fit consciousness into the material world, usually by identifying it with activity in the brain, has failed dismally…”. It seems odd for a philosopher to be concerned about a mere half a century when some philosophical questions have been around for a few millennia with many an analysis and never a consensus. The idea that if a question is not answered in a short time, it will never be answered is laughable. Some questions are hard, some are badly framed, some need tools that have not yet been invented. But philosophy is not in a position to demand that science always be fast, as if slow were equal to failure. Philosophy is not speedy.

Further, Tallis wants a particular answer from neuroscience: “Beyond these domestic problems there is the failure of physics to accommodate conscious beings. … there is no way of accounting for the fact that certain nerve impulses are supposed to be conscious (of themselves or of the world) while the overwhelming majority (physically essentially the same) are not. In short, physics does not allow for the strange fact that matter reveals itself to material objects (such as physicists).” What sort of gibberish is this? I would be very surprised if any neuroscientists were trying to understand consciousness in this way. First, physics is not about accommodating consciousness. Consciousness will be accommodated by physiology, and that by biology, and that by chemistry, and that by physics. There are physiological differences between consciousness and unconsciousness and these are amenable to experimentation. Hidden dualism is not part of the science.

Another tack of Tallis is to point out that the science is not understandable: “The dismissive ‘Just shut up and calculate!’ to those who are dissatisfied with the incomprehensibility of the physicists’ picture of the universe is simply inadequate.” (As if philosophical works were always that comprehensible.) Indeed, it is unfortunate that sometimes we cannot internalize what we think reality is like. Our brains appear only able to deal with three dimensions, linear time, strict causality and so on. That is fine; it works very well for our normal lives and has been evolutionarily successful. But we do not have to reject models of reality that have enormous predictive accuracy because we find them difficult to comprehend. Nobody promised that reality was going to be comfortable. There is no reason to dismiss a model of reality because it takes some mathematical knowledge to use it. We have been adapted to survive in reality, not reality to fit us. Hidden dualism is not part of science and neither is a hidden theology where the nature of the universe was ordained to fit the architecture of our brains.

He confuses two ideas of time as well: “Physics is predisposed to lose time because its mathematical gaze freezes change. Tensed time, the difference between a remembered or regretted past and an anticipated or feared future, is particularly elusive.” One idea has to do with how memories are stored from conscious experience and the other has to do with objective measurement of one element of space-time. Is my hippocampus to dictate the nature of the cosmos? If mathematics freezes change, what are all the little t’s doing in differential calculus equations? The little t’s are indicating that time is a component of change.
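The point can be made concrete with any elementary rate equation (exponential decay is a standard textbook example, not one from Tallis’s article): time appears explicitly as the variable that drives change, which is the opposite of freezing it.

```latex
\[
\frac{dN}{dt} = -\lambda\, N(t)
\qquad \Longrightarrow \qquad
N(t) = N(0)\, e^{-\lambda t}
\]
```

The little t is doing real work here: the state N(t) evolves continuously as t advances, so the mathematics describes change rather than abolishing it.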

One thing the science does not need is help from Tallis.

Sorry

I have prided myself on posting regularly and putting all the effort I can into the substance of my posts. So I apologize deeply for the lack of posts since early March. I have had a series of interruptions to my routine related to my computer, personal life and family commitments. (I am living in chaos but will emerge happily from it.) It will be late June before I can get back to my blog.

I also want to apologize to any readers who have commented on a posting and had that comment disappear. I had to get rid of many thousands of pieces of spam and trackback – in doing so I may have slipped up and removed a legitimate comment. I hope not, because I welcome comments.

You will hear from me again in the summertime.

Driving neurons backwards

The conventional picture of how a nerve cell behaves is that signals are received at synapses in the dendrites. If they are sufficient, the cell body produces a spike that travels down the axon to the synapses with other neurons. Some odd mechanisms have been added to this picture, like activity starting at the cell-axon junction. Now a really novel behavior has been found. It is reported in a paper by Bukalo, Campanac, Hoffman and Fields in PNAS, “Synaptic plasticity by antidromic firing during hippocampal network oscillations”.

Some cells in the hippocampus that are involved in memory can be driven backwards – signals pass up the axon to the cell body and then on to the synapses in the dendrites. That is really different. The process appears to re-balance the sensitivity of groups of synapses. It happens during sleep’s sharp-wave ripple complexes.

It seems reasonable that when a system is driven in one direction for a whole day that it would be an advantage to reset the system back so that there was ‘headroom’ for another day’s activities. This would need to be done without losing the relative changes in synaptic sensitivity that had been gained during the day (in fact, consolidate them) – in other words, to preserve the memories and learning that had happened during the day.

Here is the abstract:

Learning and other cognitive tasks require integrating new experiences into context. In contrast to sensory-evoked synaptic plasticity, comparatively little is known of how synaptic plasticity may be regulated by intrinsic activity in the brain, much of which can involve nonclassical modes of neuronal firing and integration. Coherent high-frequency oscillations of electrical activity in CA1 hippocampal neurons [sharp-wave ripple complexes (SPW-Rs)] functionally couple neurons into transient ensembles. These oscillations occur during slow-wave sleep or at rest. Neurons that participate in SPW-Rs are distinguished from adjacent nonparticipating neurons by firing action potentials that are initiated ectopically in the distal region of axons and propagate antidromically to the cell body. This activity is facilitated by GABA-mediated depolarization of axons and electrotonic coupling. The possible effects of antidromic firing on synaptic strength are unknown. We find that facilitation of spontaneous SPW-Rs in hippocampal slices by increasing gap-junction coupling or by GABA-mediated axon depolarization resulted in a reduction of synaptic strength, and electrical stimulation of axons evoked a widespread, long-lasting synaptic depression. Unlike other forms of synaptic plasticity, this synaptic depression is not dependent upon synaptic input or glutamate receptor activation, but rather requires L-type calcium channel activation and functional gap junctions. Synaptic stimulation delivered after antidromic firing, which was otherwise too weak to induce synaptic potentiation, triggered a long-lasting increase in synaptic strength. Rescaling synaptic weights in subsets of neurons firing antidromically during SPW-Rs might contribute to memory consolidation by sharpening specificity of subsequent synaptic input and promoting incorporation of novel information.

We do not know the code

A recent Scientific American blog post by John Horgan (here) looks at the possible success of the two big (really big $1 billion and $3 billion) brain research projects and finds them too optimistic. The post is worth reading.

Horgan points out that the human genome project was as successful as it was because we were already on the right track in understanding genetics. We may have been surprised by some of the details but we had the code. We knew what form the information took and at least the most important ways in which it was manipulated. But neurobiology is different – we do not have the code. We do not know the form the information takes or how it is manipulated.

There is another difference which he mentions in passing. The scale is very different. The brain is really big: a much, much larger puzzle than the genome, and it is also more varied and changeable.

Something that is not mentioned is the lack of tools. In more pessimistic moods I have thought of the situation as trying to map the universe with nothing but binoculars. We simply find it difficult to measure the brain’s activity in fine detail. My pessimistic picture is an explorer starting out on a long journey with a faulty map, poor equipment and in persistent fog.

In an optimistic mood, I think that collecting data in a systematic way will finally lead to a ‘eureka’ moment when we see how it all fits together and the puzzle is solved. This would be comparable to finding the structure of DNA, which led on to molecular biology and the genomes.

Good luck to the new projects: the Human Brain Project and the Brain Activity Map. (Even if they have been oversold.)

Search for better brain metaphors

I remember when computers were going to be able to speak and understand natural language – it was just around the corner in the ’60s. And since then it has faded further into the future with each new attempt to solve the problem. A recent Scientific American Mind blog post by Ben Thomas (here) gives a similar forecast for brain connectivity. It is an interesting piece on how something can be too optimistic but still worth trying to do.

Here are a few random bits from the post on the subject of models and metaphors of the brain:

In 1956, a legion of famed scientific minds descended on Dartmouth College to debate one of mankind’s most persistent questions: Is it possible to build a machine that thinks? The researchers had plenty to talk about – biologists and mathematicians had suggested since the 1940s that nerve cells probably served as binary logic gates, much like transistors in computer mainframes. Meanwhile, computer theorists like Alan Turing and Claude Shannon had been arguing for years that intelligence and learning could – at least in theory – be programmed into a machine of sufficient complexity. Within the next few decades, many researchers predicted, we’d be building machines capable of conscious thought. Fifty-odd years after that first Dartmouth Conference, our sharpest supercomputers still struggle to hold basic conversations. … The more we learn about how the brain works, the more interwoven and inextricable we realize its components and processes are – and the less like a computer it seems.

Twenty years ago, researchers compared the brain to a supercomputer packed with billions of microchips. At the turn of the twentieth century, it was a great steam engine; a hundred years before that, an intricate piece of clockwork. And so on, back through the millennia – until we reach the ancient Greeks, who seem to have unleashed this torrent of metaphors by likening the human brain to a catapult (note below). In every age, the brightest scientists and philosophers find themselves tempted to describe the brain in terms of the moment’s latest technology – that is, until new technologies and brain breakthroughs turn those descriptions into clunking relics of bygone eras. … The brain and its workings, in other words, have a way of defying easy classification. Peer inside a neuron and you won’t find any binary switches or churning gears – only an ecosystem of protein structures and neurotransmitter molecules; a sub-cellular country that differs profoundly from any machine built by human hands.

Each metaphor is an improvement. But remember the saying, “What you don’t understand is simple”. We really don’t understand thinking and so it seems a much simpler process than it is. Consciousness is so effortless to us because the way it is produced is hidden from us.

Note: the catapult reference is from the Science and Language blog (here):

“Because we do not understand the brain very well we are constantly tempted to use the latest technology as a model for trying to understand it. In my childhood we were always assured that the brain was a telephone switchboard. (‘What else could it be?’) I was amused to see that Sherrington, the great British neuroscientist, thought that the brain worked like a telegraph system. Freud often compared the brain to hydraulic and electro-magnetic systems. Leibniz compared it to a mill, and I am told some of the ancient Greeks thought the brain functions like a catapult. At present, obviously, the metaphor is the digital computer.” (John R Searle, or so the Internet says.)

One organism

There is a very good posting (here) in the new Scientific American Mind Blogs, by Jon Lieff. He discusses the ties between the immune system and the nervous system. I recommend reading it. My posting here is not about what the Scientific American post said but about what it reminded me of: the reaction to biology.

It is the under-the-radar unease with biology that is on my mind. This is what seems to be at the root of dividing ourselves into the biological and the intellectual. Whether the divide is between the frontal cortex and the rest of the cortex, the cortex and the rest of the brain, the brain and the rest of the nervous system, or the nervous system and the rest of the body – the divide is a mistake. It is unreal. The parts work together to form an organism. The parts cannot live, let alone work, by themselves. We may study the parts separately, but we should not be surprised that they cooperate and have to be seen as part of a whole. Each of us is a single system, an organism.

I may be somewhat too sensitive to this unease with biology. For example, I do not disapprove of vegetarianism. I am not a vegetarian, but I can understand. I can understand those who feel meat is unhealthy and that they can get healthier protein and fat elsewhere. I can understand someone having a moral objection to killing animals. I can understand someone raised on a meat-free diet not finding meat appetizing (as I might find grasshoppers unappetizing because I was not fed them in childhood and so learned a dislike rather than a taste for them). I can understand someone feeling it is part of their identity, especially religious identity. I am sure there are other reasons that I would find reasonable. But I have encountered many vegetarians who appear to have none of these positions (or to have them only as superficial justifications for a deeper motive). When it gets right down to it, they do not want to be reminded of what the inside of animals actually looks like. Perfectly ordinary muscle, blood, connective tissue, fat, or any internal body parts make them nauseous. They find ‘wet’ and ‘moist’ to be disturbing words; ‘animal’ is an insult. How can someone suffer from such a deep self-hate? It makes it hard for them to accept themselves as animals.

I find biology so amazingly beautiful and engaging that I seem to have almost no common ground with those that are disturbed by it. When we understand our thinking processes, it will be a biological understanding, and more satisfying for that. A biological explanation will replace all non-biological metaphors. The understanding will not separate us from the rest of our bodies. We will be connected with the whole of the biosphere. At least I hope so.

Decisions – conscious and unconscious

Previous experiments have looked at unconscious decision making. A new paper (citation below) confirms those experiments and adds more information.

The authors are looking at the hypothesis that extrastriate and prefrontal neural regions are active during the encoding of decision information and continue to process that information during a subsequent distractor task. “It is possible that reactivation occurring in these extrastriate-hippocampal-dorsolateral prefrontal regions might support continued visual and semantic processing of decision information during an unconscious thought period.” It has been shown by others that a period of unconscious thought can lead to better decisions than a period of conscious thought or an immediate decision without a period of thought of either kind, at least with certain types of problem – large, vague, disorganized ones. These researchers confirmed previous results but added fMRI scans to show the areas of the brain that were involved.

They used a 2-back memory task as a distractor that made conscious thought about anything but that task impossible. Scans were made during three conditions: the 2-back task alone, the 2-back task while making the decision unconsciously, and making the decision consciously. The participants first encoded the information needed to make the decision and then went on to make the decision consciously or unconsciously. This encoding phase was also scanned.
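For readers unfamiliar with the paradigm, here is a minimal sketch of the 2-back logic (the letter set, stream length and seed are arbitrary illustrations, not the study’s actual stimuli): each stimulus must be compared with the one presented two steps earlier, which keeps working memory continuously loaded.

```python
import random

def two_back_stream(letters="ABCD", n=20, seed=0):
    """Generate a random stream of letter stimuli for a 2-back task."""
    rng = random.Random(seed)
    return [rng.choice(letters) for _ in range(n)]

def two_back_targets(stream):
    """Return the indices where the current stimulus matches the one
    two steps back, i.e. the trials requiring a response."""
    return [i for i in range(2, len(stream)) if stream[i] == stream[i - 2]]

stream = two_back_stream()
targets = two_back_targets(stream)  # trials on which to respond
```

Detecting the targets means constantly refreshing the last two items, which is why the task monopolizes working memory while demanding no arithmetic or logic.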

When the activity associated with the 2-back task was subtracted from the unconscious thought, the remaining activity was in the prefrontal cortex, right thalamus and left frontal operculum. Activity was seen in the left intermediate visual cortex and right dorsolateral prefrontal cortex during encoding and during unconscious thought. The reactivation of the encoding activity predicted the decision-making performance. Neural regions involved in encoding decision information continue to process this information outside of conscious awareness. Conscious thought, on the other hand, had activity in a prefrontal network that did not overlap with any regions active during unconscious thought.

The nature of the unconscious mind has long challenged philosophers and scientists, but the present work offers a new perspective on this topic by way of examining the brain. We find that brain regions that are active during encoding new decision information reactivate while the brain coordinates responses to other unrelated tasks, when participants are prompted to make decisions.

I think it is important to look at the 2-back memory task. It makes very great demands on working memory and practically nothing else – no arithmetic or logic is needed. This is why it works so well at shutting down conscious thought and does not seem to interfere with unconscious thought. But this clean division is not likely to be the normal state. Use of working memory, consciousness and unconscious cognition are likely to be active together and in cooperation (except in sleep). What is shown is what unconscious thought is capable of, not how it may be normally used.


Creswell, J., Bursley, J., & Satpute, A. (2013). Neural reactivation links unconscious thought to decision making performance. Social Cognitive and Affective Neuroscience. DOI: 10.1093/scan/nst004


Correction to post on Rolfs paper

A month ago, I posted (here) on a paper reported in ScienceDaily (citation below). I had not read the paper but commented on a quote from the author, included in the ScienceDaily item, which to me implied a dated understanding of a division between perception and cognition. The authors have kindly sent me a copy of their paper. I have found nothing in this paper to support my remarks on the quote. I assume the quote was misleading for some reason. These things happen and I thank the authors for setting me straight on their position.

The authors’ criteria for perception and for cognition are quite clear and experimentally based. They base ‘perception’ on the existence of visual adaptation at a specific location on the retina and similar phenomena. This implies the effect occurs at a stage where the retinal layout is still the source of the neuronal map. The specific location is a retinal location, not a location in the model of the world that is being produced.

Visual adaptation demonstrates the perceptual consequences of a reduction in the responsiveness of neural populations that encode primary visual features. Using this general paradigm, we provided support for the existence of adaptable, visual neurons (or neural populations) that underlie the perception of at least one causal interaction in dynamic scenes. Stimuli that do not appear causal (including our ‘‘slip’’ adaptation stimuli) leave the responses of these neurons unaffected. These neuronal populations must be located in brain areas that encode visual information in an eye-centered reference frame, because the resulting aftereffects are specific to the adapted location on the retina.

They indicate the likely regions where the perception occurs (where there are retinal maps) and where their methodology is useful:

Candidates for such areas are the mediotemporal area V5 and the superior temporal sulcus, both of which have eye-centered representations and are part of a network involved in the perception of causal launches. These areas also respond to other forms of meaningful motion patterns, such as biological motion. Using adaptation, we can now examine the visual computations underlying the perception of causal structure in the visual world. These include not only the routines recognizing familiar motion patterns but also complex interactions involving cause and effect, possibly even animacy and intentionality.

It is clear that the authors have not said anything in this paper that implies the categorization that I complained about. Their view is perfectly reasonable:

This finding allows us to move phenomena that have been regarded as higher-level processes into the realm of perception, opening them to systematic study using the tools of perceptual science. … these percepts require sophisticated inference, and it is now widely agreed that perception is the locus of these advanced decisional processes.


Rolfs, M., Dambacher, M., & Cavanagh, P. (2013). Visual adaptation of the perception of causality. Current Biology, 23 (3), 250-254. DOI: 10.1016/j.cub.2012.12.017


Human astrocytes are different

Comparing human brains (and to a lesser extent all primate brains) to those of other animals like the mouse, we have many more, much bigger and much more complex astrocytes. Astrocytes have contributed to our larger brain by an order of magnitude more than neurons have. Astrocytes make contact with and ‘surround’ synapses; one human astrocyte can encompass 2 million synapses. They seem to oversee the communication between neurons and are involved in long-term potentiation, the first stage of memory and learning. They release TNFalpha, which increases the strength of synaptic transmission. A human astrocyte makes contact with more synapses because of its bigger size and longer thin fibrils reaching to more distant synapses.

Astrocytes communicate with neighbouring astrocytes through movement of calcium ions. Waves of calcium pass through groups of astrocytes. These waves are faster and more extensive in human astrocytes. So as a communicating group, astrocytes affect the electrical and chemical environment of neuron synapses. And human astrocytes appear to do it better.

So… clever idea – put human astrocytes in mice and see what happens. Xiaoning Han et al (citation below) injected newborn mice with human cells destined to become astrocytes. The human cells flourished at the expense of the mouse ones, migrated to the right places and integrated with each other and with the mouse astrocytes. But they were the size and complexity that they would have been in a human brain. So the mice ended up with the more numerous, bigger and more connected human astrocytes amongst their own mouse ones. As in humans, the calcium waves were faster and the TNFalpha more potent. That this procedure worked as well as it did is a bit of a surprise.

When the mice were adult they were tested against control mice that had transplants of mouse rather than human astrocytes. The human astrocytes gave significantly better memory and learning. When the TNFalpha was disrupted, the human astrocyte advantage was much reduced.

What can be done with this development?

First, we could think of the brain differently. Last year, I posted ‘what if?’. One of the imagined shifts of viewpoint was:

“There is a trickle of new results about the function of glial cells (those ignored cells that outnumber the neurons by factors like 10). What if: more or less all the work in the brain was actually done by very local groups of glial cells and neurons functioned like a kind of telephone system between groups of glia.”

Second, we can stop taking the simpler computer metaphors, ones containing only neurons and weighted connections, as a reasonably detailed model of the brain. “We are our connectome” also becomes less believable. The Neuron Theory has taken a little knock – there is more to brain processing than neurons firing.

Third, these mice can be used to study astrocytes using procedures that are possible in animals but not in humans.

Fourth, they would be good systems for studying diseases of the astrocytes and even for showing whether or not a disease involves astrocytes.

Here is the paper’s summary:

Human astrocytes are larger and more complex than those of infraprimate mammals, suggesting that their role in neural processing has expanded with evolution. To assess the cell-autonomous and species-selective properties of human glia, we engrafted human glial progenitor cells (GPCs) into neonatal immunodeficient mice. Upon maturation, the recipient brains exhibited large numbers and high proportions of both human glial progenitors and astrocytes. The engrafted human glia were gap-junction-coupled to host astroglia, yet retained the size and pleomorphism of hominid astroglia, and propagated Ca2+ signals 3-fold faster than their hosts. Long-term potentiation (LTP) was sharply enhanced in the human glial chimeric mice, as was their learning, as assessed by Barnes maze navigation, object-location memory, and both contextual and tone fear conditioning. Mice allografted with murine GPCs showed no enhancement of either LTP or learning. These findings indicate that human glia differentially enhance both activity-dependent plasticity and learning in mice.


Han, X., Chen, M., Wang, F., Windrem, M., Wang, S., Shanz, S., Xu, Q., Oberheim, N., Bekar, L., Betstadt, S., Silva, A., Takano, T., Goldman, S., & Nedergaard, M. (2013). Forebrain engraftment by human glial progenitor cells enhances synaptic plasticity and learning in adult mice. Cell Stem Cell, 12 (3), 342-353. DOI: 10.1016/j.stem.2012.12.015
