Archive for February 2011

Encephalon #84 psychology articles, neurobiology and more

Encephalon Carnival #84 is hosted here this month and it is such an honour. Last spring, after enjoying Encephalon for some time, I finally decided to submit a post and – would you believe it – Encephalon had just disappeared. But it had introduced me to a number of great bloggers. Now Encephalon is reborn, thanks to Mike Lisieski, and this is the fourth edition under his watchful eye.
This month we have:

We have a group of posts on how things are not as simple and straightforward in brain and behaviour as they may appear.

Jena Pincott at Jena Pincott has Why do women get physically aroused and not even know it? and gives 4 answers rather than just one.

Scicurious at Scientopia has Dieting, Stress and the Changing Brain on why dieting does not work – it’s epigenetics. (more from Scicurious below)

Sandy Gautam at The Mouse Trap has The Quest for the Holy Glial, a review of Fields’ book, The Other Brain. A plug: Sandy has just started a web newspaper, The sandygautam cognitive Daily.

Zen Faulkes at Neurodojo has Spikes without Sodium looking at C. elegans’ use of calcium in place of sodium. (more Neurodojo posts below)

Taylor Burns at Cognoculture has Testosterone and human aggression showing that what testosterone does depends on the situation.

Khalil Cassimally at Lab-coat Life has Brains Breathe: Dopamine’s Role in Preterm Infants putting forward yet another role for dopamine.

Dave Deriso at The Artful Brain has What is Truth? And What is ‘Walking-Dead’ Syndrome?, which looks at Cotard syndrome and what it does to concepts of truth, reality and self.

Edmund Blair Bolles at Babel’s Dawn has Co-Evolution is Real with a look at culture-biology co-evolution in the origin of language. This was one of the first blogs I started to read and has remained one of my favourites – therefore I want to plug the book, Babel’s Dawn: a natural history of the origins of speech.

And we have some posts that are critical in nature. Hooray for the bloggers who keep us from being taken in.

Leyla Adali at MedSci Discoveries has Do Vaccines Cause Autism? talking about how doubt about vaccines lives on, unfortunately for all children.

Zen Faulkes at Neurodojo has We haven’t seen this in a mammal! Rewrite the textbooks! setting straight what is new and what is not in axon-axon communication. And he also has Are Cows Magnetic Sensors? giving another look at north-facing cows.

Daniel Lende at Neuroanthropology has The Brain is Essential – But Don’t Call It Essentialist! which is a review of 7 other posts to help us stay out of thought ruts.

Karen Franklin at In the News: Forensic psychology, criminology and psychology-law has Paint brushes and soap: The slippery slope of unfettered power exposing detention center abuses. Here is a link to forensic psychology

Janet Kwasniak (that’s me) at Thoughts on thoughts (that’s here) has Analog thinking examining the difference between digital and analog metaphors for the brain.

Next is a group of posts that mix tender-heartedness, sight and sound with their serious messages.

Scicurious at Scientopia has This is your Brain on Music adding a bit more to the dopamine picture.

Eric Johnson is Raymond Ho’s guest at The Prancing Papio and has Touching Death, a moving look at primate reaction to death.

Michael Lisieski has a guest post at Scientific American, Pleasure, reward…and Rabbits! Why do animals behave as they do? It has enchanting examples for an explanation of behaviour. Don’t miss the kids eating lemons.

Last - a pair of unique posts that haven’t fitted into the other groups.

Jeremy Burman is Jacy Young’s guest at Advances in the History of Psychology and has Brief History of PsycINFO, a history, with links, of the psychology source-material search engine.

Romeo Vitelli at Providentia has The Mathematician in the Asylum about Andre Bloch’s life. It reminds me of the old joke’s punch-line, “I may be mad but I’m not stupid”.

The next edition (#85) will be hosted by Neurodojo – check Encephalon for details.

Apology - I mistakenly mixed up who was guest on whose blog. Eric Johnson was the guest of Raymond Ho and the author of the posting. Very sorry. JK

 

Analog thinking

George Dyson has a contribution in the answers to the Edge question (here) writing about analog computers.

Imagine you need to find the midpoint of a stick. You can measure its length, using a ruler (or making a ruler, using any available increment) and digitally compute the midpoint. Or, you can use a piece of string as an analog computer, matching the length of the stick to the string, and then finding the middle of the string by doubling it back upon itself. This will correspond, without any loss of accuracy due to rounding off to the nearest increment, to the midpoint of the stick. If you are willing to assume that mass scales linearly with length, you can use the stick itself as an analog computer, finding its midpoint by balancing it against the Earth’s gravitational field.

So far so good – a nice example.
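To see the difference in miniature, here is a toy sketch of my own (nothing from Dyson’s essay) contrasting the digital route to the stick’s midpoint, which rounds to a ruler increment, with the analog fold, which is exact in principle:

# Toy illustration (mine, not Dyson's): the digital route quantizes the
# stick's length to a ruler increment before halving; the 'analog' fold
# of a matched string halves the true length with no rounding step.

def digital_midpoint(length, increment):
    """Measure to the nearest ruler increment, then halve the measurement."""
    measured = round(length / increment) * increment   # the rounding-off step
    return measured / 2.0

def analog_midpoint(length):
    """Folding the matched string back on itself halves the length exactly."""
    return length / 2.0

stick = 0.7321   # metres; an arbitrary 'real' length
ruler = 0.01     # a one-centimetre ruler increment

print(digital_midpoint(stick, ruler))   # about 0.365, off by the rounding
print(analog_midpoint(stick))           # 0.36605, the true midpoint

The point is only that the digital answer carries the increment’s rounding error with it, while the analog procedure has no such step (its errors are physical ones – a stretchy string, an imperfect fold).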

There is no precise distinction between analog and digital computing, but, in general, digital computing deals with integers, binary sequences, and time that is idealized into discrete increments, while analog computing deals with real numbers and continuous variables, including time as it appears to exist in the real world. The past sixty years have brought such advances in digital computing that it may seem anachronistic to view analog computing as an important scientific concept, but, more than ever, it is.

Here I have to differ a bit. I think there is a very precise distinction. Everything about a digital system is discrete and discontinuous – the strings of digits that make up a number (whether binary or not) are finite and therefore discontinuous, as is time, marked out in the ticks of a clock. On the other hand, everything in an analog system is continuous – physical quantities like voltage, and real or scaled real time. An analog computer is a physical model of a system in which the starting conditions can be set and the behaviour of the model over time can then be followed. Digital computers are not physical models but mathematical/logical ‘models’. Of course there can be hybrids: analog computers with some digital components as elements of the model, or digital computers with analog components that are sampled. Dyson goes on to discuss interesting analog aspects of social networks, for example the Facebook network and its activity as a model of a social web.

But my interest is in the brain and I see it as an analog system. We were misled by the seemingly digital nature of the firing spikes of some neurons, but now that we are aware of firing rates, synapses, electromagnetic fields and so on, it is plain that the brain is a physical organ using continuous, not discrete, quantities. It is quite literally a physical system that models the world and the organism itself in that world. I think the brain is not digital, does not use digital-style commands, addresses, clock ticks and so on, and therefore does not use what is ordinarily meant by software algorithms. We can use the digital computer as a metaphor but only to a limited extent – an analog computer metaphor would be somewhat more realistic, although at some point the computer metaphor is likely to break down (as all metaphors eventually break).

Now, I am well aware that most people working in Artificial Intelligence are likely to disagree with my stance. The argument for an artificial brain goes thus: brains do cognition – cognition is computing – all computers are equivalent to the universal Turing Machine – therefore a conventional type of computer (von Neumann) can emulate a brain – except if the brain is analog, and then there needs to be an approximation (but it will still be good enough) – however it may take more resources than are available (but that is a different problem). Each statement has some degree of inaccuracy unless the terms are very carefully defined. Here is part of the Whole Brain Emulation – a Roadmap (pdf):

Whole brain emulation, often informally called “uploading” or “downloading”, has been the subject of much science fiction and also some preliminary studies. The basic idea is to take a particular brain, scan its structure in detail, and construct a software model of it that is so faithful to the original that, when run on appropriate hardware, it will behave in essentially the same way as the original brain.

So this is the general idea. How is the analog problem dealt with?

A surprisingly common doubt expressed about the possibility of simulating even simple neural systems is that they are analog rather than digital. The doubt is based on the assumption that there is an important qualitative difference between continuous and discrete variables. If computations in the brain make use of the full power of continuous variables the brain may essentially be able to achieve “hypercomputation”, enabling it to calculate things an ordinary Turing machine cannot. … However, brains are made of imperfect structures which are, in turn, made of discrete atoms obeying quantum mechanical rules forcing them into discrete energy states, possibly also limited by a space‐time that is discrete on the Planck scale (as well as noise, see below) and so it is unlikely that the high precision required of hypercomputation can be physically realized. Even if hypercomputation were physically possible, it would by no means be certain that it is used in the brain, and it might even be difficult to detect if it were (the continual and otherwise hard to explain failure of WBE would be some evidence in this direction). However, finding clear examples of non‐Turing computable abilities of the mind would be a way of ruling out Turing emulation.

A discrete approximation of an analog system can be made arbitrarily exact by refining the resolution. If an M bit value is used to represent a continuous signal, the signal-to-noise ratio is approximately 20 log10(2^M) dB (assuming uniform distribution of discretization errors, which is likely for large M). This can relatively easily be made smaller than the natural noise sources such as unreliable synapses, thermal, or electrical noise. The thermal noise is on the order of 4.2·10^-21 J, which suggests that energy differences smaller than this can be ignored unless they occur in isolated subsystems or on timescales fast enough to not thermalize. Field potential recordings commonly have fluctuations on the order of millivolts due to neuron firing and a background noise on the order of tens of microvolts. Again this suggests a limit to the necessary precision of simulation variables.
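To put numbers on that formula (my own arithmetic, not the Roadmap’s), the quantization signal-to-noise ratio 20·log10(2^M) dB grows quickly with the bit depth M:

import math

# Quantization SNR for an M-bit representation of a continuous signal,
# using the expression quoted above: roughly 20*log10(2**M) decibels.
for M in (8, 12, 16, 24):
    snr_db = 20 * math.log10(2 ** M)
    print(f"{M:2d} bits -> about {snr_db:5.1f} dB")

# For comparison, millivolt field-potential fluctuations against background
# noise of tens of microvolts is a ratio of only about 40 dB, so even a
# modest bit depth already exceeds that, in this back-of-envelope sense.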

I would be the last person to say that this will definitely not work; I am just not willing to bet a penny on success. There is that old saying that what is not understood seems easy – this is difficult and more so because of our lack of understanding. I note that the Roadmap has a curious paragraph mentioning understanding (underlining added):

The interplay between biological realism (attempting to be faithful to biology), completeness (using all available empirical data about the system), tractability (the possibility of quantitative or qualitative simulation) and understanding (producing a compressed representation of the salient aspects of the system in the mind of the experimenter) will often determine what kind of model is used. The appropriate level of abstraction and method of implementation depends on the particular goal of the model. In the case of WBE, the success criteria discussed below place little emphasis on understanding, but much emphasis on qualitatively correct dynamics, requiring much biological realism (up to a point, set by scale separation) and the need for data-driven models. Whether such models for whole brain systems are tractable from a modelling and simulation standpoint is the crucial issue.

Over time and at great expense, we will create mock-ups of various areas and functions in the brain in the search to understand it. They will be worth the effort and expense. And when we have a fairly good understanding of how the whole brain works, we will probably choose not to put the resources into a whole and excellent emulation and instead use the resources for tasks that computers do better than brains.

 

Different aspects of the default mode network

The default mode network is not as simple as it seemed. There are probably several configurations. A recent paper (by D Stawarczyk and others) has looked at the difference between the default network when the subject is not attending to a task and when the subject is ignoring sensory stimulation from the outside world.

Here is the abstract:

The default mode network (DMN) is a set of brain regions that consistently shows higher activity at rest compared to tasks requiring sustained focused attention toward externally presented stimuli. The cognitive processes that the DMN possibly underlies remain a matter of debate. It has alternately been proposed that DMN activity reflects unfocused attention toward external stimuli or the occurrence of internally generated thoughts. The present study aimed at clarifying this issue by investigating the neural correlates of the various kinds of conscious experiences that can occur during task performance.

Four classes of conscious experiences (i.e., being fully focused on the task, distractions by irrelevant sensations/perceptions, interfering thoughts related to the appraisal of the task, and mind-wandering) that varied along two dimensions (“task-relatedness” and “stimulus-dependency”) were sampled using thought-probes while the participants performed a go/no-go task. Analyses performed on the intervals preceding each probe according to the reported subjective experience revealed that both dimensions are relevant to explain activity in several regions of the DMN, namely the medial prefrontal cortex, posterior cingulate cortex/precuneus, and posterior inferior parietal lobe. Notably, an additive effect of the two dimensions was demonstrated for midline DMN regions. On the other hand, lateral temporal regions (also part of the DMN) were specifically related to stimulus-independent reports. These results suggest that midline DMN regions underlie cognitive processes that are active during both internal thoughts and external unfocused attention. They also strengthen the view that the DMN can be fractionated into different subcomponents and reveal the necessity to consider both the stimulus-dependent and the task-related dimensions of conscious experiences when studying the possible functional roles of the DMN.

The digits between 1 and 9 were shown at the center of a screen. Subjects were asked to be as quick and accurate as possible in responding to each number except if the number was 3. Series of stimuli were followed by a thought-probe which interrupted the task. For each probe, subjects were asked to characterize the conscious experience they had in the few trials prior to the probe. They were given four possible responses: on-task, task-related interferences, external distractions, mind-wandering. In total the responses were respectively 32, 22, 26 and 21%. Subjects had training trials, and trials in and out of a scanner. This is an interesting blend of high-tech fMRI scanning, cognitive computer-screen-and-keyboard experimentation, and reporting of conscious thoughts.
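For readers who like a paradigm spelled out, here is a rough sketch of the trial structure as I understand it from the description above – my reconstruction in code, not the authors’ materials:

import random

# Rough reconstruction (not the authors' code) of the go/no-go task with
# thought-probes: digits 1-9 appear one at a time, the response is withheld
# only for 3, and occasional probes interrupt to ask about experience.
PROBE_OPTIONS = ["on-task", "task-related interference",
                 "external distraction", "mind-wandering"]

def run_block(n_trials=40, probe_gap=(8, 15)):
    events = []
    next_probe = random.randint(*probe_gap)
    for t in range(1, n_trials + 1):
        digit = random.randint(1, 9)
        action = "withhold" if digit == 3 else "respond"
        events.append((t, digit, action))
        if t >= next_probe:
            # In the study the participant picks one of the four options;
            # here we simply mark where a probe would interrupt the series.
            events.append((t, "PROBE", PROBE_OPTIONS))
            next_probe = t + random.randint(*probe_gap)
    return events

for event in run_block(20):
    print(event)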

I think it may be too early to label the default network as having one, two or even four or five functions. I assume that there are various network configurations for various tasks and likewise various configurations for various ‘resting’ or default conditions. The worry-wart who is stewing over some imagined problem is likely to have a very different mind-wandering configuration from the person anticipating their up-coming vacation. I look forward to examinations of the default network for years to come.

 


Stawarczyk, D., Majerus, S., Maquet, P., & D’Argembeau, A. (2011). Neural Correlates of Ongoing Conscious Experience: Both Task-Unrelatedness and Stimulus-Independence Are Related to Default Network Activity. PLoS ONE, 6(2). DOI: 10.1371/journal.pone.0016997

The greyness of depression

My intuition was very wrong. I thought that the greyness of depression was part of a change in the process of constructing consciousness that reduced the vividness of experience. A recent post in Discover Magazine by E. Strickland (here) points to two papers which put the dulling of experience near the sense-organ level rather than the consciousness level.

Here is a bit from the abstracts of these papers:

Biol Psychiatry. 2010 Jul 15;68(2):205-8. Epub 2010 Mar 31 “Seeing gray when feeling blue? Depression can be measured in the eye of the diseased” Bubl, Kern, Ebert, Bach, Tebartz van Elst.

Everyday language relates depressed mood to visual phenomena. Previous studies point to a reduced sensitivity of subjective contrast perception in depressed patients. One way to assess visual contrast perception in an objective way at the level of the retina is to measure the pattern electroretinogram (PERG). To find an objective correlate of reduced contrast perception, we measured the PERG in healthy control subjects and unmedicated and medicated patients with depression…Unmedicated and medicated depressed patients displayed dramatically lower retinal contrast gain. We found a strong and significant correlation between contrast gain and severity of depression. This marker distinguishes most patients on a single-case basis from control subjects. A receiver operating characteristic analysis revealed a specificity of 92.5% and a sensitivity of 77.5% for classifying the participants correctly.

And doi:10.1016/j.neuroscience.2010.05.012 “Reduced olfactory bulb volume and olfactory sensitivity in patients with acute major depression” Negoias, Croy, Gerber, Puschmann, Petrowski, Joraschky, Hummel.

The purpose of this study was to assess olfactory function and olfactory bulb volume in patients with acute major depression in comparison to a normal population. Twenty-one patients diagnosed with acute major depressive disorder and 21 healthy controls matched by age, sex and smoking behavior participated in this study. Olfactory function was assessed in a lateralized fashion using measures of odor threshold, discrimination and identification. Olfactory bulb volumes were calculated by manual segmentation of acquired T2-weighted coronal slices according to a standardized protocol. Patients with acute major depressive disorder showed significantly lower olfactory sensitivity and smaller olfactory bulb volumes. Additionally, a significant negative correlation between olfactory bulb volume and depression scores was detected. Their results provide the first evidence, to our knowledge, of decreased olfactory bulb volume in patients with acute major depression. These results might be related to reduced neurogenesis in major depression that could be reflected also at the level of the olfactory bulb.

It is a good thing to have my intuitions shown to be very wrong every now and then. Keeps me honest and on my toes. I will have to be careful about thinking that one of the minor reasons for consciousness is to supply vividness to our experiences – to engage and entertain us. Forget that idea unless there is some evidence for it.

 

Behavioral Economics

There is a piece in Slate (here) by Tim Harford, reprinted from the Financial Times, on behavioural economics.

Behavioral economics has never been hotter. It’s not just the success of books such as Nudge, Predictably Irrational, and Basic Instincts, but the political influence of the field: One of Nudge’s authors, Cass Sunstein, runs the Office of Information and Regulatory Affairs for Barack Obama, and his co-author Richard Thaler has been advising David Cameron’s new Behavioral Insight Team, based in the Cabinet Office.

A simple summary of behavioral economics—I’ve borrowed this one from the Guardian—is that it is the study of “how people actually make decisions rather than how the classic economic models say they make them.” But this approach is now under attack, from Gerd Gigerenzer, a psychologist, and Nathan Berg, an economist, and they argue that behavioral economics is not nearly as realistic as its boosters claim. While it does study what decisions we make, the very last thing it does is study how we make them—and as a result it is even more wedded to silly accounts of the way human beings think than its neoclassical rival.

…Consider the human response to risk. Neoclassical economics says that we act as if considering all possible outcomes, figuring out the probability and utility of each outcome, multiplying the probabilities with the utilities, and maximizing expected utility. Clearly we do not in fact do this—nor do we act as if we do.

Behavioral economics offers prospect theory instead, which gives more weight to losses than gains and provides a better fit for the choices observed in the laboratory. But, say Berg and Gigerenzer, it is even more unrealistic as a description of the decision-making process, because it still requires weighing up every possible outcome, but then deploys even harder sums to produce a decision. It may describe what we choose, but not how we choose.

This is tough on behavioral economists, because in order to be taken seriously by other economists they have had to play the optimizing game.

Behavioural economics has been bothering me. Why ignore neurobiology and cognitive neuroscience and start a parallel ‘science’? It just doesn’t make sense, and so I have been trying to think of what there is in current neuroscience that economists cannot accept. I thought of lots of things but the most convincing I have come up with is very simple. Economists are looking for equations for human behaviour. They want an explanation, yes, but one that can be incorporated into their mathematical models of the economy so that they can predict how the market will react in any situation. A theory that is not mathematically amenable to their modeling is going to be useless to them. The little sketch below shows the sort of valuation rules they have to work with.
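As a toy contrast (my own sketch, using the commonly cited Tversky–Kahneman parameter values purely for illustration, and leaving out probability weighting), here are the two valuation rules mentioned in the quoted piece:

# Toy contrast between the two rules discussed above; the prospect-theory
# value function here is a simplified textbook form, not anyone's estimate
# of real behaviour, and probability weighting is deliberately omitted.
def expected_utility(outcomes, utility=lambda x: x):
    """Neoclassical rule: sum of probability times utility over outcomes."""
    return sum(p * utility(x) for p, x in outcomes)

def prospect_value(outcomes, alpha=0.88, loss_aversion=2.25):
    """Prospect-theory style rule: losses loom larger than equal gains."""
    def v(x):
        return x ** alpha if x >= 0 else -loss_aversion * ((-x) ** alpha)
    return sum(p * v(x) for p, x in outcomes)

gamble = [(0.5, 100), (0.5, -100)]     # a 50/50 win-100-or-lose-100 gamble
print(expected_utility(gamble))        # 0.0 -> the gamble looks neutral
print(prospect_value(gamble))          # about -36 -> the gamble looks bad

Both are what-we-choose descriptions; neither says anything about the actual process in the head, which is Berg and Gigerenzer’s complaint.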

In the search for an explanation of consciousness, I will not expect much enlightenment from the behavioral economists. They are looking for a different sort of explanation.

Join the carnival

I will be hosting the Encephalon carnival #84 here at Thoughts on Thoughts near the end of Feb. If you wish to take part - please send links to me at janet(at)charbonniers(dot)org before Feb 24.

Rationality

Another answer to the Edge question (here) has intrigued me. This one is written by Alison Gopnik, the author of The Philosophical Baby. She rejects the usual image of the unconscious and replaces it with a very rational mind.

One of the greatest scientific insights of the twentieth century was that most psychological processes are not conscious. But the “unconscious” that made it into the popular imagination was Freud’s irrational unconscious — the unconscious as a roiling, passionate id, barely held in check by conscious reason and reflection. This picture is still widespread even though Freud has been largely discredited scientifically.

She draws a picture of Turing’s rational unconscious.

Alan Turing, the father of the modern computer, began by thinking about the highly conscious and deliberate step-by-step calculations performed by human “computers” like the women decoding German ciphers at Bletchley Park. His first great insight was that the same processes could be instantiated in an entirely unconscious machine with the same results. A machine could rationally decode the German ciphers using the same steps that the conscious “computers” went through. And the unconscious relay and vacuum tube computers could get to the right answers in the same way that the flesh and blood ones could. … Turing’s second great insight was that we could understand much of the human mind and brain as an unconscious computer too.

More recently, cognitive scientists have added the idea of probability into the mix, so that we can describe an unconscious mind, and design a computer, that can perform feats of inductive as well as deductive inference. Using this sort of probabilistic logic a system can accurately learn about the world in a gradual, probabilistic way, raising the probability of some hypotheses and lowering that of others, and revising hypotheses in the light of new evidence. This work relies on a kind of reverse engineering. First work out how any rational system could best infer the truth from the evidence it has. Often enough, it will turn out that the unconscious human mind does just that.

The Freudian picture identifies infants with that fantasizing, irrational unconscious, and even on the classic Piagetian view young children are profoundly illogical. But contemporary research shows the enormous gap between what young children say, and presumably what they experience, and their spectacularly accurate if unconscious feats of learning, induction and reasoning. The rational unconscious gives us a way of understanding how babies can learn so much when they consciously seem to understand so little. …
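The gradual raising and lowering of hypotheses she describes is, at bottom, Bayesian updating; here is a minimal sketch of that logic (my own illustration, nothing from Gopnik’s piece):

# Minimal Bayes-rule updating, as an illustration of the 'probabilistic
# logic' quoted above: two hypotheses about a coin, revised flip by flip.
def update(priors, likelihoods):
    """Multiply prior by likelihood for each hypothesis, then renormalize."""
    unnormalized = {h: priors[h] * likelihoods[h] for h in priors}
    total = sum(unnormalized.values())
    return {h: p / total for h, p in unnormalized.items()}

beliefs = {"fair coin": 0.5, "biased (75% heads)": 0.5}
likelihood_of_heads = {"fair coin": 0.5, "biased (75% heads)": 0.75}

for flip in ["H", "H", "H", "T"]:
    like = (likelihood_of_heads if flip == "H"
            else {h: 1 - p for h, p in likelihood_of_heads.items()})
    beliefs = update(beliefs, like)
    print(flip, {h: round(p, 3) for h, p in beliefs.items()})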

I like the portrait of the unconscious mind as workman-like processes producing movement, perception and so on, and not a leftover Freudian construct. I have three problems, only little nitpicking ones, with her picture. First, I think the metaphor between brain and computer is overused these days and the idea that the brain is a Turing computer is a stretch too far. She doesn’t say it is, but comes very close. Second, I find ‘rationality’ maybe a misleading word for what I see as the appropriateness to a living organism of the brain’s processes. Rationality, in ordinary use, implies a sort of mathematical purity without attention given to values, emotions and significances. And finally, she appears to leave room for a conscious mind separate from an unconscious one, although she never actually says this. I prefer the model: we have a single mind, part of its processing is constructing consciousness, and only a small part of thought reaches conscious awareness.

Despite my small reservations, this piece is a refreshing take on thought.

What seems conscious?

Folk psychology is interesting – a mixture of those quasi-psychological ideas that we have as babies and as primitives, and which we cannot really shake even if we no longer find them useful. A group of experimental psychologists have been looking at the folk psychology of consciousness. Which entities do we assume are conscious and which not? They – A. Arico, B. Fiala, R. Goldberg, S. Nichols – have a forthcoming paper (pdf) which puts forward their Agency theory.

Looking at what various researchers have found to be reacted to or described as a conscious entity, they have constructed a list of attributes which, in various combinations, trigger people to think of the entity as an ‘agent’: eyes, particular sorts of motion trajectories, contingent interaction with others. They have also constructed a list of reactions to an ‘agent’ by a person: gaze following, attributions of mental activity, disposition to attribute conscious mental states, anticipation of goal-directed behaviour, imitation. Whether it is another person, a moving animal, a clever animation of simple shapes or a fluffy ball that engages a baby with responsive beeps, the immediate unconscious categorization is ‘conscious thing’, which has to be over-ridden if we know better.

To test this hypothesis they timed responses to pairs of objects and adjectives which people judged true or not. Hesitation was taken to be due to a mismatch between the quick, automatic folk categorization and a normal cognitive decision. With no mismatch there is a quicker answer. With some small problems, the results were in keeping with their model.
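The logic of that measurement is simple enough to sketch; the numbers below are invented purely for illustration and are not the authors’ data:

from statistics import mean

# Illustrative sketch of the reaction-time logic described above. The values
# are made up for the example; the prediction is only that denying conscious
# states to 'agent-cued' entities (eyes, motion, interaction) takes longer.
deny_rt_agent_cued = [812, 790, 845, 901, 866]   # e.g. "an ant feels pain: no"
deny_rt_non_agent  = [655, 640, 698, 671, 662]   # e.g. "a plant feels pain: no"

print("agent-cued mean RT:", mean(deny_rt_agent_cued), "ms")
print("non-agent mean RT: ", mean(deny_rt_non_agent), "ms")
# A slower mean for the agent-cued denials is what the AGENCY model predicts,
# the hesitation being read as a clash with the automatic categorization.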

Here is their abstract:

This paper proposes the ‘AGENCY model’ of conscious state attribution, according to which an entity’s displaying certain relatively simple features (e.g., eyes, distinctive motions, interactive behavior) automatically triggers a disposition to attribute conscious states to that entity. To test the model’s predictions, participants completed a speeded object/attribution task, in which they responded positively or negatively to attributions of mental properties (including conscious and non-conscious states) to different sorts of entities (insects, plants, artifacts, etc.). As predicted, participants responded positively to conscious state attributions only for those entities that typically display the simple features identified in the AGENCY model (eyes, distinctive motion, interaction) and took longer to deny conscious states to those entities.

The paper ends with some interesting philosophical observations:

…illustrates how the answer to the descriptive problem might influence the way the relevant intuitions are used in philosophical debate. For depending on what one thinks about the epistemic status of the relevant psychological processes, one might be led either to dismiss the intuitions, or to give them special weight. An answer to the descriptive problem might also bear on the idea that people intuitively embrace a ‘folk dualism’, according to which the mind is radically different than the body. It’s plausible that one aspect of such a dualism is the apparent gulf between consciousness and physical objects. For instance, when we think about a brain as a massive collection of neurons that has various chemical and physical characteristics, it is not at all intuitive that this mass has consciousness. Something similar can, of course, be said for other bodily organs. Even after we are told that the brain is the part of the body responsible for consciousness, this does not render it intuitive that the brain is where conscious experience occurs. We suggest that part of the reason for this is that when we consider brains as hunks of physical stuff, we are considering descriptions that exclude the sort of cues that tend to activate the low-level processes that generate the intuitive sense that an entity is conscious. Hence, it’s not surprising if we find some initial resistance to the idea that the physical brain is conscious. In this light, it’s somewhat ironic that, while people have difficulty thinking of the brain as conscious, they have no trouble at all thinking that ants are conscious. On the contrary, our experiments indicate that people have trouble thinking that ants are not conscious.

What is beneath personality?

When I was in secondary school in the 1950s I encountered the idea of personality. It seemed to be something that other students understood but I never got the hang of. And I still haven’t, some 60 years later. Types of personality have the feel of something that might be a very rough approximation of something else that might be interesting, or not. They feel like something that will be forgotten when something else, layers below them, is understood. We certainly differ from one another in how we go about living our lives, and people see archetypes in their friends and acquaintances like they see archetypal faces – but that does not make a system.
The inventory of types of personality changes too often; the list is not stable. And it is assumed to be at least partially inherited but there is only problematic evidence for this. Here is part of Jonah Lehrer’s report (here) of some research on the subject:

There’s an interesting new paper in Biological Psychiatry on the genetic variations underlying human personality. The study relied on a standard inventory of temperaments – novelty-seeking, harm avoidance, reward dependence and persistence – as measured in 5,117 Australian adults. What did the scientists find? Mostly nothing. The vast genetic search came up empty. (and later about Peace Corp testing) Mischel realized that the problem wasn’t the tests—it was their premise. Psychologists had spent decades searching for traits that exist independently of circumstance, but what if personality can’t be separated from context?… This led Mischel to construct a new metaphor for human personality. While modern psychology still clung to a model of personality rooted in the humors of the ancient Greeks – we were born with a certain amount of choleric temperament and that was it – Mischel proposed a model of personality called interactionism. … And this might be why the Australian study came up empty: We’re trying to find the genes for personality constructs that don’t exist. (not genetic)

Reading answers to the Edge question (here), I ran across personality again. Once by Geoffrey Miller, showing that not much has changed since I first encountered personality:

To understand insanity, we have to understand personality. There’s a scientific consensus that personality traits can be well-described by five main dimensions of variation. These “Big Five” personality traits are called openness, conscientiousness, extraversion, agreeableness, and emotional stability. The Big Five are all normally distributed in a bell curve, statistically independent of each other, genetically heritable, stable across the life-course, unconsciously judged when choosing mates or friends, and found in other species such as chimpanzees. They predict a wide range of behavior in school, work, marriage, parenting, crime, economics, and politics.

I doubt there is the consensus that Miller claims.

Also in the Edge answers is an article that may start to point at what might be under the surface. Helen Fisher first divides personality into two components: character and temperament. She puts what we inherit into temperament and what we acquire in our lives into character. This sounds like my pet peeve of nature versus nurture – I insist that every single solitary aspect of living organisms is a mixture, an interaction, between genes and environment, and almost always cannot be quantified as to how much it is due to one or the other.

Leave that - and go on to what she has to say about temperament:

Some 40% to 60% of the observed variance in personality is due to traits of temperament. They are heritable, relatively stable across the life course, and linked to specific gene pathways and/or hormone or neurotransmitter systems. Moreover, our temperament traits congregate in constellations, each aggregation associated with one of four broad, interrelated yet distinct brain systems: those associated with dopamine, serotonin, testosterone and estrogen/oxytocin. Each constellation of temperament traits constitutes a distinct temperament dimension.
For example, specific alleles in the dopamine system have been linked with exploratory behavior, thrill, experience and adventure seeking, susceptibility to boredom and lack of inhibition. Enthusiasm has been coupled with variations in the dopamine system, as have lack of introspection, increased energy and motivation, physical and intellectual exploration, cognitive flexibility, curiosity, idea generation and verbal and non-linguistic creativity.
The suite of traits associated with the serotonin system includes sociability, lower levels of anxiety, higher scores on scales of extroversion, and lower scores on a scale of “No Close Friends,” as well as positive mood, religiosity, conformity, orderliness, conscientiousness, concrete thinking, self-control, sustained attention, low novelty seeking, and figural and numeric creativity.
Heightened attention to detail, intensified focus, and narrow interests are some of the traits linked with prenatal testosterone expression. But testosterone activity is also associated with emotional containment, emotional flooding (particularly rage), social dominance and aggressiveness, less social sensitivity, and heightened spatial and mathematical acuity.
Last, the constellation of traits associated with the estrogen and related oxytocin system include verbal fluency and other language skills, empathy, nurturing, the drive to make social attachments and other prosocial aptitudes, contextual thinking, imagination, and mental flexibility.

This still seems simplistic and a ‘just so’ explanation. But it is, at least, a start at looking at what is behind our feeling of archetypes in behavior.

What does this have to do with consciousness? Perhaps nothing.

Changing dominant hemispheres

Being left-handed, I have had a special interest in the differences between the brain’s hemispheres. However, over the years I have come to suspect much that is said about the differences. All this left-brain right-brain nonsense is just that, nonsense. Aside from language and spatial processing, there has been little evidence for hemisphere specialization. Now a study has been published by Chi and Snyder that shows a very large effect, but also a good deal of vagueness about exactly what the effect actually is.

Establishing a direct current through the brain from a cathode to an anode on the scalp (transcranial direct current stimulation) will inhibit neural activity at the cathode end and increase it at the anode end. Here is the abstract:

Our experiences can blind us. Once we have learned to solve problems by one method, we often have difficulties in generating solutions involving a different kind of insight. Yet there is evidence that people with brain lesions are sometimes more resistant to this so-called mental set effect. This inspired us to investigate whether the mental set effect can be reduced by non-invasive brain stimulation. 60 healthy right-handed participants were asked to take an insight problem solving task while receiving transcranial direct current stimulation (tDCS) to the anterior temporal lobes (ATL). Only 20% of participants solved an insight problem with sham stimulation (control), whereas 3 times as many participants did so (p = 0.011) with cathodal stimulation (decreased excitability) of the left ATL together with anodal stimulation (increased excitability) of the right ATL. We found hemispheric differences in that a stimulation montage involving the opposite polarities did not facilitate performance. Our findings are consistent with the theory that inhibition to the left ATL can lead to a cognitive style that is less influenced by mental templates and that the right ATL may be associated with insight or novel meaning. Further studies including neurophysiological imaging are needed to elucidate the specific mechanisms leading to the enhancement.

A threefold improvement is certainly a clear effect! As a tool, tDCS is not very precise, and so it is difficult to know what is really happening in the brain – how large an area is affected, how localized it is, and so on. The authors discuss these problems.

Although I find much that is written about lateralization very simplistic, there is something here to uncover. Even in the most symmetrical animals there seems to be some asymmetry. There are traces of hemispheric dominance in many groups of vertebrates. It is likely that there was long-standing evolutionary logic behind the pronounced handedness of humans. Chi and Snyder’s experiment is more interesting for showing that there are large effects there to be investigated than for the conclusions that can be drawn from this particular setup.

 


Chi, R., & Snyder, A. (2011). Facilitate Insight by Non-Invasive Brain Stimulation. PLoS ONE, 6(2). DOI: 10.1371/journal.pone.0016655
