Embodied cognition – language

It is hard to overstate the importance of language – but some manage it. Language gets a very big billing from some people: the singular mark of being human, the only medium of thought, the foundation of consciousness, the basis of social relations, and more. This seems over the top to me, but language is still very, very important and we need to understand how it comes to be so.

 

As a schoolgirl (along with millions of other schoolchildren), I noticed the contradiction at the heart of dictionaries: we cannot define the meaning of all words using only words. What gives a word meaning is still an open question with many not-too-convincing answers. My own favourite answer is that most words get their meanings from their position in a web of words, by their relationships to other words. The web can be thought of as a mass of variously nested and overlapping metaphors/schemas/maps. The foundation of this web has to be some pre-verbal concepts, some real structural relationships that form the pre-metaphors used to create all others. In other words, there must be points of ‘grounding’. The points of contact of language with non-linguistic reality have to be what young babies come with: the structure of their bodies, what they can sense, and the actions they can take. So the beginning of language (for the species and for every individual in it) has to be embodiment, before culture can start to make its contribution.

 

There is little doubt that the particular language we speak can affect how we think. Whorf wrote:

We are thus introduced to a new principle of relativity, which holds that all observers are not led by the same physical evidence to the same picture of the universe, unless their linguistic backgrounds are similar, or can in some way be calibrated.

Although the extent of this Sapir-Whorf effect is not agreed upon, and variations from a strong to a weak form are found, a belief in some effect of language on thought and perception does not interfere with a belief in some embodiment: there is also an effect of the physical body on language. Just because I am discussing embodiment here does not mean it is the only process involved.

 

Let us start with phonemes – the individual sounds of language. Mark Changizi has proposed that culture builds on what the brain is capable of, and the brain has evolved the capabilities needed for living in the natural world. Here is part of an interview of Changizi by Lende (here):

(We can identify objects from their sound as well as look and feel. This is an adaptation to natural world.) For example, there are primarily three “atoms” of solid-object physical events: hits, slides and rings. Hits are when two objects hit one another, and slides where one slides along the other. Hits and slides are the two fundamental kinds of interaction. The third “atom” is the ring, which occurs to both objects involved in an interaction: each object undergoes periodic vibrations — they ring. They have a characteristic timbre, and your auditory system can usually recognize what kind of objects are involved. For starters, then, notice how the three atoms of solid-object physical events match up nicely with the three fundamental phoneme types: plosives, fricatives and sonorants. Namely, plosives (like t, k, p, d, g, b) sound like hits, fricatives (s, sh, f, z, v) sound like slides, and sonorants (vowels and also phonemes like y, w, r, l) sound like rings.

Even syllables are structured like solid-object interactions. When we hit a bell, we hear the hit followed by the ring. Objects ring after the events of hits and slides, and correspondingly the fundamental morphology of language is the consonant-vowel syllable. Language uses the brain’s ability to derive meaning from the sounds of objects by restricting language sounds to mimics of object sounds. This allows us to use a part of the brain adapted for one purpose for a different but neurologically similar one.
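Changizi’s proposed correspondence can be pictured as a simple lookup. The sketch below is purely illustrative – the phoneme inventory is drastically simplified and the function name is my own invention – but it shows the claimed mapping from phoneme class to event ‘atom’:

```python
# Toy illustration of Changizi's proposed mapping (not code from his work);
# the phoneme inventory here is drastically simplified.
PLOSIVES = {"t", "k", "p", "d", "g", "b"}        # sound like hits
FRICATIVES = {"s", "sh", "f", "z", "v"}          # sound like slides
SONORANTS = set("aeiou") | {"y", "w", "r", "l"}  # sound like rings

def event_atom(phoneme):
    """Map a phoneme to the solid-object event it is claimed to mimic."""
    if phoneme in PLOSIVES:
        return "hit"
    if phoneme in FRICATIVES:
        return "slide"
    if phoneme in SONORANTS:
        return "ring"
    return "unknown"

# A consonant-vowel syllable like "ta" mirrors a hit followed by its ring:
syllable = [event_atom(p) for p in ("t", "a")]  # → ["hit", "ring"]
```
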

 

What about the words that are formed from these phonemes? They may have their roots in onomatopoeia, the ‘bow-wow’ theory of language origin. Or perhaps synaesthesia is the first step to language, as put forward by Ramachandran and Hubbard in their 2001 paper, Synaesthesia – A Window Into Perception, Thought and Language. Asking people to guess which object had the name ‘kiki’ and which ‘bouba’, they found that 95% of people labelled the spiky object as kiki and the curvy one as bouba.

 

The classification of words is another possible area of embodiment. Does the brain have different processes for different types of words? Here is the abstract from Mestres-Missé, Rodríguez-Fornells, Münte (2009), Neural differences in the mapping of verb and noun concepts onto novel words:

A dissociation between noun and verb processing has been found in brain damaged patients leading to the proposal that different word classes are supported by different neural representations. This notion is supported by the facts that children acquire nouns faster and adults usually perform better for nouns than verbs in a range of tasks. In the present study, we simulated word learning in a variant of the human simulation paradigm that provided only linguistic context information and required young healthy adults to map noun or verb meanings to novel words. The mapping of a meaning associated with a new-noun and a new-verb recruited different brain regions as revealed by functional magnetic resonance imaging. While new-nouns showed greater activation in the left fusiform gyrus, larger activation was observed for new-verbs in the left posterior middle temporal gyrus and left inferior frontal gyrus (opercular part). Furthermore, the activation in several regions of the brain (for example the bilateral hippocampus and bilateral putamen) was positively correlated with the efficiency of new-noun but not new-verb learning. The present results suggest that the same brain regions that have previously been associated with the representation of meaning of nouns and verbs are also associated with the mapping of such meanings to novel words, a process needed in second language learning.

 

The following research reminded me of trying to learn some Swahili and dealing with its many noun classes. Just, Cherkassky, Aryal, Mitchell (2010), A Neurosemantic Theory of Concrete Noun Representation Based on the Underlying Brain Codes, identified three noun classes. (They were not counting people, abstracts etc. among the three.) Here is the abstract:

This article describes the discovery of a set of biologically-driven semantic dimensions underlying the neural representation of concrete nouns, and then demonstrates how a resulting theory of noun representation can be used to identify simple thoughts through their fMRI patterns. We use factor analysis of fMRI brain imaging data to reveal the biological representation of individual concrete nouns like apple, in the absence of any pictorial stimuli. From this analysis emerge three main semantic factors underpinning the neural representation of nouns naming physical objects, which we label manipulation, shelter, and eating. Each factor is neurally represented in 3–4 different brain locations that correspond to a cortical network that co-activates in non-linguistic tasks, such as tool use pantomime for the manipulation factor. Several converging methods, such as the use of behavioral ratings of word meaning and text corpus characteristics, provide independent evidence of the centrality of these factors to the representations. The factors are then used with machine learning classifier techniques to show that the fMRI-measured brain representation of an individual concrete noun like apple can be identified with good accuracy from among 60 candidate words, using only the fMRI activity in the 16 locations associated with these factors. To further demonstrate the generativity of the proposed account, a theory-based model is developed to predict the brain activation patterns for words to which the algorithm has not been previously exposed. The methods, findings, and theory constitute a new approach of using brain activity for understanding how object concepts are represented in the mind.
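The identification step the abstract describes – picking which candidate word best matches an observed activation pattern at the factor locations – can be sketched with a simple correlation-based classifier. The words, signatures and noise below are invented stand-ins, not the study’s data or its actual classifier:

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented stand-ins for the candidate nouns and their predicted activation
# at the 16 brain locations associated with the three semantic factors.
words = ["apple", "hammer", "house", "celery"]
signatures = {w: rng.normal(size=16) for w in words}

def identify(observed, signatures):
    # Choose the candidate whose predicted pattern correlates best
    # with the observed fMRI activity.
    return max(signatures, key=lambda w: np.corrcoef(observed, signatures[w])[0, 1])

# A noisy observation of "apple" is still identified correctly.
observed = signatures["apple"] + rng.normal(scale=0.1, size=16)
guess = identify(observed, signatures)  # → "apple"
```

The real study used 60 candidate words and cross-validated fMRI data, but the logic is the same: a word is ‘read off’ the brain by matching activity at a handful of factor locations.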

 

What is the use of words? Babel’s Dawn (here) has made an excellent case for words being similar to pointing: they steer the joint attention of the speaker and listener. But by analogy, words point to concepts in our brains. Grossman and Johnson (2010), Selective prefrontal cortex responses to joint attention in early infancy, show the importance of joint attention to communication:

Infants engaged in joint attention use a similar region of their brain as adults do. Our study suggests that the infants are tuned to sharing attention with other humans much earlier than previously thought. This may be a vital basis for the infant’s social development and learning. In the future this approach could be used to assess individual differences in infants’ responses to joint attention and might, in combination with other measures, serve as a marker that can help with an early identification of infants at risk for autism.

 

We now seem to be leaving phonetics and semantics to enter grammar. It seems to me that the sequence we assume is natural to the brain – goal, plan, action, result, evaluation – when fitted to our actions and the actions of others, yields the form subject – verb – object, or actor – action – result, a form fitted to our brains. But in what order? Here is the abstract for Goldin-Meadow, So, Ozyurek, Mylander (2008), The natural order of events: How speakers of different languages represent events nonverbally:

To test whether the language we speak influences our behavior even when we are not speaking, we asked speakers of four languages differing in their predominant word orders (English, Turkish, Spanish, and Chinese) to perform two nonverbal tasks: a communicative task (describing an event by using gesture without speech) and a noncommunicative task (reconstructing an event with pictures). We found that the word orders speakers used in their everyday speech did not influence their nonverbal behavior. Surprisingly, speakers of all four languages used the same order on both nonverbal tasks. This order, actor–patient–act, is analogous to the subject–object–verb pattern found in many languages of the world and, importantly, in newly developing gestural languages. The findings provide evidence for a natural order that we impose on events when describing and reconstructing them nonverbally and exploit when constructing language anew.

 

So why is it humans who have developed such an amazing tool for communication? There are probably many reasons – the ability to trust other individuals, the need to replace or enhance a gestural form of communication, the abilities gained in mastering tool making, the need to care for children that were not being carried, and so on. One answer is the particular FOXP2 gene that humans have. FOXP2 is a transcription factor, that is, a gene that controls the use of many other genes. It is a very old developmental gene that helps to build at least the fetal heart, chest and brain. All vertebrates have this gene, and a similar gene is found in other animals (like bees). Our particular form of the gene is different from the form in chimps and is closer to the form in song birds, bats, cetaceans and, importantly, Neanderthals. What do these animals have in common? Sensorimotor coordination of sound production, plasticity of neural circuits that allows vocal patterns and skills to be learned, and the ability to handle sequences of sound. This gene appears to have started changing in humans at least 400 thousand years ago and to have reached its present form around 100 thousand years ago. Humans with a fault in this gene (a very rare condition) have severe language problems.

 

In thinking about the embodiment of language, we can use language as a stand-in for all of our culture. Language appears to be the most extensive and basic of our cultural constructions. It is probably one of the oldest, maybe only beaten by tool making. The evolution of a cultural change is much faster than the evolution of genetic changes. So although it is clear that language involved both cultural and genetic changes, the order would be a cultural change first, taking advantage of existing body structure, followed by the culture forcing a fine-tuning of the body through conventional evolution. This can ratchet up immense cultural creations on a minimum of genetic change. The continual embodiment of the culture is the key to its quick elaboration.

 

This is the seventh in a series on embodied cognition. There is still one to come.

Here are the first six in the series:

http://charbonniers.org/2011/06/15/embodied-cognition-posture/

http://charbonniers.org/2011/06/18/embodied-cognition-face/

http://charbonniers.org/2011/06/27/embodied-cognition-space/

http://charbonniers.org/2011/07/06/embodied-cognition-gut/

http://charbonniers.org/2011/07/15/embodied-cognition-morality/

http://charbonniers.org/2011/07/21/embodied-cognition-handedness/

 

Causes of binocular rivalry

Binocular rivalry is an experimental setup where different images are projected to each eye. We do not consciously see both images but an alternating awareness of one and then the other. From the paper cited below (Roeber, Veser, Schröger, O’Shea):

One sees one of the images for a few moments, referred to as the dominant image, while the other is completely invisible, suppressed. Then, after a brief period of transition, when both or parts of the two images are seen together, the other image becomes dominant and the first becomes suppressed. The images continue to alternate in visual consciousness randomly as long as one bothers to look at them. Binocular rivalry is an important phenomenon for researching the neural correlates of consciousness because visual consciousness changes without any change in the physical stimulation.

 

Two theories have been proposed for the cause of the rivalry. It could be the top-down result of attention; or it may be a bottom-up mechanism involving reciprocal inhibition and adaptation.
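The bottom-up account can be made concrete with a minimal simulation in the spirit of standard reciprocal-inhibition models of rivalry (all parameter values here are my own illustrative choices, not fitted to any data): two populations, one per eye’s image, mutually inhibit each other while the dominant one slowly adapts, until dominance flips.

```python
import numpy as np

def simulate_rivalry(steps=20000, dt=0.5):
    """Two populations with mutual inhibition and slow adaptation.

    The active population suppresses its rival; its adaptation builds
    slowly until dominance flips - producing rivalry-like alternations.
    All parameters are illustrative, not fitted to data.
    """
    drive, inhibition, adapt_gain = 1.2, 3.0, 2.0
    tau_r, tau_a = 10.0, 400.0         # fast rates, slow adaptation
    r = np.array([0.6, 0.4])           # firing rates, slightly asymmetric start
    a = np.zeros(2)                    # adaptation variables
    dominant = []
    for _ in range(steps):
        inp = drive - inhibition * r[::-1] - adapt_gain * a
        target = 1.0 / (1.0 + np.exp(-8.0 * (inp - 0.2)))  # sigmoid rate
        r += dt * (target - r) / tau_r
        a += dt * (r - a) / tau_a
        dominant.append(int(r[1] > r[0]))
    return dominant

dom = simulate_rivalry()
switches = sum(d1 != d0 for d0, d1 in zip(dom, dom[1:]))
# switches > 0: perception alternates with no change in the input
```

With these settings dominance flips many times over the run even though the ‘stimulus’ never changes; and because anything that boosts activity (such as higher contrast) also builds adaptation faster, this kind of model predicts faster alternations when the rival stimuli are boosted.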

The behavioural, fMRI, and EEG evidence is consistent with attention’s being required for rivalry to occur. But Paffen et al. proposed an intriguing alternative hypothesis, at least for their behavioural results. They proposed that:

  • Attention is not required for rivalry to occur,

  • Attention increases the underlying neural activity of each of the representations of the rival stimuli that compete in the low-level rivalry mechanism; this is similar to increasing the contrast of the rival stimuli, and

  • This increase in activity leads to greater adaptation, leading to faster alternations.

….We decided to test Paffen et al.’s explanation of attention’s effects on rivalry by measuring ERPs. ERPs are changes in electrical activity of the brain that follow some event, measured from electrodes placed on the scalp. ERPs have temporal resolution in the order of milliseconds. The typical form of the ERP when the event is the sudden appearance of a specific visual object or feature includes a positive component peaking about 100 ms after the event, the P1, and a negative component about 170 ms after the event, the N1. …If attention affects binocular rivalry by boosting neural responses to the rival stimuli, then attending to rival stimuli should increase ERPs from a change to a rival stimulus compared to when attention is on something else. If adaptation affects binocular rivalry and attention is accompanied by increasing adaptation, as proposed by Paffen et al., then attending to rival stimuli should decrease ERPs from a change to a rival stimulus. We found the latter: Attending to the rival stimuli decreases the size of the N1 compared with when attention is on something else.

 

In particular, when attention was on the rival grating images and subjects had to report changes in their orientation, the ERP (N1, 160–210 ms) was smaller than when attention was on a fixation target distant from the grating images.

To explain this paradoxical effect of attention, we propose that rivalry occurs in the attend-to-fixation condition (we found an ERP signature of rivalry in the form of a sustained negativity from 210–300 ms) but that the mechanism processing the stimulus changes is more adapted in the attend-to-grating condition than in the attend-to-fixation condition. This is consistent with the theory that adaptation gives rise to changes of visual consciousness during binocular rivalry.

 

For my interest, this is a further separation of attention from consciousness. Although they are found together most of the time, they do appear to be separate processes.


Roeber, U., Veser, S., Schröger, E., & O’Shea, R. (2011). On the role of attention in binocular rivalry: Electrophysiological evidence. PLoS ONE, 6(7). DOI: 10.1371/journal.pone.0022612

Insight and creativity

There is a type of cognition that very definitely has no contribution from consciousness: problem solving by an abrupt insight (Eureka or Aha). This is not a sequential, incremental method but a transformation, restructuring or reformulation of the problem. There is no conscious forewarning of the insight. What is known about how this type of cognition works?

 

Here is the abstract of Sheth, Sandkühler, Bhattacharya (2009), Posterior Beta and Anterior Gamma Oscillations Predict Cognitive Insight:

Pioneering neuroimaging studies on insight have revealed neural correlates of the emotional ‘‘Aha!’’ component of the insight process, but neural substrates of the cognitive component, such as problem restructuring (a key to transformative reasoning), remain a mystery. Here, multivariate electroencephalogram signals were recorded from human participants while they solved verbal puzzles that could create a small-scale experience of cognitive insight. Individuals responded as soon as they reached a solution and provided a rating of subjective insight. For unsolved puzzles, hints were provided after 60 to 90 sec. Spatio-temporal signatures of brain oscillations were analyzed using Morlet wavelet transform followed by exploratory parallel-factor analysis. A consistent reduction in beta power (15–25 Hz) was found over the parieto-occipital and centro-temporal electrode regions on all four conditions—(a) correct (vs. incorrect) solutions, (b) solutions without (vs. with) external hint, (c) successful (vs. unsuccessful) utilization of the external hint, and (d) self-reported high (vs. low) insight. Gamma band (30–70 Hz) power was increased in right fronto-central and frontal electrode regions for conditions (a) and (c). The effects occurred several (up to 8) seconds before the behavioral response. Our findings indicate that insight is represented by distinct spectral, spatial, and temporal patterns of neural activity related to presolution cognitive processes that are intrinsic to the problem itself but not exclusively to one’s subjective assessment of insight.
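The core measurement in the abstract – band-limited power extracted with a Morlet wavelet – can be sketched in a few lines of numpy. The synthetic ‘EEG’ below is invented for illustration: a 20 Hz beta rhythm that vanishes halfway through the recording, a drop which the wavelet power time course picks up.

```python
import numpy as np

fs = 250                       # sampling rate in Hz (illustrative)
t = np.arange(0, 2, 1 / fs)
rng = np.random.default_rng(1)
# Synthetic "EEG": a 20 Hz beta rhythm that disappears after 1 second.
eeg = np.where(t < 1, np.sin(2 * np.pi * 20 * t), 0.0)
eeg += 0.05 * rng.normal(size=t.size)

def morlet_power(signal, fs, freq, n_cycles=7):
    """Time course of power at one frequency via a complex Morlet wavelet."""
    sigma = n_cycles / (2 * np.pi * freq)            # Gaussian width, seconds
    wt = np.arange(-3 * sigma, 3 * sigma, 1 / fs)
    wavelet = np.exp(2j * np.pi * freq * wt) * np.exp(-wt**2 / (2 * sigma**2))
    wavelet /= np.abs(wavelet).sum()                 # normalize the kernel
    return np.abs(np.convolve(signal, wavelet, mode="same")) ** 2

power = morlet_power(eeg, fs, freq=20.0)
beta_early, beta_late = power[:fs].mean(), power[fs:].mean()
# beta_early is far larger than beta_late: the power drop marks the moment
# the rhythm stops, with roughly wavelet-length time resolution.
```

This is only the first stage of the paper’s pipeline (the parallel-factor analysis over electrodes, frequencies and time is omitted), but it shows how a ‘reduction in beta power’ is actually measured.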

 

This implies that we can unconsciously notice that we are thinking about a problem in an unsuccessful way, search for a more successful framing, and evaluate the new way of thinking about the problem. To me, this hints at working memory not being required in this search for a transformation. Another interesting result is that the gamma increase was in the right hemisphere (rather than the left or both). This implies that the usually less dominant hemisphere was carrying the load in finding a transformation.

 

Does this have anything to say about creativity? Apparently not. Here is the abstract from Dietrich, Kanso (2010), A review of EEG, ERP, and neuroimaging studies of creativity and insight:

Creativity is a cornerstone of what makes us human, yet the neural mechanisms underlying creative thinking are poorly understood. A recent surge of interest into the neural underpinnings of creative behavior has produced a banquet of data that is tantalizing but, considered as a whole, deeply self-contradictory. We review the emerging literature and take stock of several long-standing theories and widely held beliefs about creativity. A total of 72 experiments, reported in 63 articles, make up the core of the review. They broadly fall into 3 categories: divergent thinking, artistic creativity, and insight. Electroencephalographic studies of divergent thinking yield highly variegated results. Neuroimaging studies of this paradigm also indicate no reliable changes above and beyond diffuse prefrontal activation. These findings call into question the usefulness of the divergent thinking construct in the search for the neural basis of creativity. A similarly inconclusive picture emerges for studies of artistic performance, except that this paradigm also often yields activation of motor and temporoparietal regions. Neuroelectric and imaging studies of insight are more consistent, reflecting changes in anterior cingulate cortex and prefrontal areas. Taken together, creative thinking does not appear to critically depend on any single mental process or brain region, and it is not especially associated with right brains, defocused attention, low arousal, or alpha synchronization, as sometimes hypothesized. To make creativity tractable in the brain, it must be further subdivided into different types that can be meaningfully associated with specific neurocognitive processes.

 

Insight may or may not be part of any thinking process – creative or not. Creativity is probably so varied and so complex a process that it cannot be correlated with any particular neural picture.

 

Embodied cognition – handedness

I am left-handed and so I noticed when I was quite young that words were not favourable to lefties: right as opposed to left, dexterous as opposed to sinister, droit as opposed to gauche. I was told that this was a purely linguistically fossilized prejudice; people did not really put a value characteristic on handedness any more.

But it turns out that, consciously and more often unconsciously, our cognition is affected by value judgments based on left-and-right. For example, A. Kranjec, M. Lehet, B. Bromberger, A. Chatterjee (2010), A Sinister Bias for Calling Fouls in Soccer, looked at the effect of culture on referees’ calls. Here is the abstract:

Distinguishing between a fair and unfair tackle in soccer can be difficult. For referees, choosing to call a foul often requires a decision despite some level of ambiguity. We were interested in whether a well documented perceptual-motor bias associated with reading direction influenced foul judgments. Prior studies have shown that readers of left-to-right languages tend to think of prototypical events as unfolding concordantly, from left-to-right in space. It follows that events moving from right-to-left should be perceived as atypical and relatively debased. In an experiment using a go/no-go task and photographs taken from real games, participants made more foul calls for pictures depicting left-moving events compared to pictures depicting right-moving events. These data suggest that two referees watching the same play from distinct vantage points may be differentially predisposed to call a foul.

But there are other, non-cultural associations. The body-specificity hypothesis states that people with different kinds of bodies think differently in predictable ways, even about highly abstract ideas – differences that conventions of language and culture cannot explain. Evidence for this hypothesis was given by Casasanto (2009), Embodiment of Abstract Concepts: Good and Bad in Right- and Left-Handers. Here is the abstract:

Do people with different kinds of bodies think differently? According to the body-specificity hypothesis, people who interact with their physical environments in systematically different ways should form correspondingly different mental representations. In a test of this hypothesis, 5 experiments investigated links between handedness and the mental representation of abstract concepts with positive or negative valence (e.g., honesty, sadness, intelligence). Mappings from spatial location to emotional valence differed between right- and left-handed participants. Right-handers tended to associate rightward space with positive ideas and leftward space with negative ideas, but left-handers showed the opposite pattern, associating rightward space with negative ideas and leftward with positive ideas. These contrasting mental metaphors for valence cannot be attributed to linguistic experience, because idioms in English associate good with right but not with left. Rather, right- and left-handers implicitly associated positive valence more strongly with the side of space on which they could act more fluently with their dominant hands. These results support the body-specificity hypothesis and provide evidence for the perceptuomotor basis of even the most abstract ideas.

Casasanto has shown that handedness, and with it the moral tinge on the words, is not permanent. “People generally think their judgements are rational, and their concepts are stable,” says Casasanto. “But if wearing a glove for a few minutes can reverse people’s usual judgements of what’s good and bad, perhaps the mind is more malleable than we thought.”

Here is the abstract of D. Casasanto and E. Chrysikou (2011), When Left Is “Right”: Motor Fluency Shapes Abstract Concepts:

Right- and left-handers implicitly associate positive ideas like “goodness” and “honesty” more strongly with their dominant side of space, the side on which they can act more fluently, and negative ideas more strongly with their nondominant side. Here we show that right-handers’ tendency to associate “good” with “right” and “bad” with “left” can be reversed as a result of both long- and short-term changes in motor fluency. Among patients who were right-handed prior to unilateral stroke, those with disabled left hands associated “good” with “right,” but those with disabled right hands associated “good” with “left,” as natural left-handers do. A similar pattern was found in healthy right-handers whose right or left hand was temporarily handicapped in the laboratory. Even a few minutes of acting more fluently with the left hand can change right-handers’ implicit associations between space and emotional valence, causing a reversal of their usual judgments. Motor experience plays a causal role in shaping abstract thought.

Here is part of the conclusion of this paper:

Motor fluency has been linked previously with preferences for things that people can act on with their hands…These effects can be readily explained in terms of motor affordances: People mentally simulate performing the action that an object would afford if they were to act on it, such as picking up a spatula or typing letters, and their preference judgments vary according to how fluent this action would be.

Yet motor tendencies also predict judgments about abstract ideas and things people can never manipulate with their hands, as when left- or right-handers attribute more intelligence or honesty to alien creatures depicted on their dominant side of a page than to those depicted on their nondominant side (Casasanto, 2009). In the present study, changes in motor fluency influenced participants’ judgments about the spatialization of imaginary creatures, on the basis of the creatures’ intangible qualities. These results demonstrate a causal link between manual motor fluency and abstract judgments and suggest that this link is not necessarily mediated by mental simulation of action affordances. Associations between emotional valence and left/right space may be established through habits of fluent and disfluent hand actions, but these associations generalize to influence judgments about things people can never see or touch. It remains a challenge for future research to characterize the neurocognitive mechanisms by which physical experience generalizes to shape abstract conceptions of good and bad.

Before it was shown that handedness was not completely fixed, one could think that perhaps the same genetics/development that produced a person’s handedness also produced their abstract associations with left and right. But as the associations change with the hand that is most able and dexterous, this is a direct embodiment of positive properties in the dominant hand and side, along with negative properties in the non-dominant hand and side. It cannot be due to language or culture. This leaves the more active motor cortex as the key to the effect.

This is the sixth in a series on embodied cognition. There are more still to come.

Here are the first five in the series:

http://charbonniers.org/2011/06/15/embodied-cognition-posture/

http://charbonniers.org/2011/06/18/embodied-cognition-face/

http://charbonniers.org/2011/06/27/embodied-cognition-space/

http://charbonniers.org/2011/07/06/embodied-cognition-gut/

http://charbonniers.org/2011/07/15/embodied-cognition-morality/

 

 

Is attention part of consciousness?

Although they usually occur together, top-down attention and consciousness are separate processes according to a review of experimental evidence.

 

There is too much information arriving through the senses for all of it to receive priority perception. Top-down attention selects, in light of current behavioral goals, a portion of the input defined by a circumscribed region in space (spatial or focal attention), by a particular feature (feature-based attention), or by an object (object-based attention) for further processing. Consciousness does not appear to select but to integrate information, in order to summarize all relevant information about the current situation in a compact form. This integrated summary can be used by planning, error detection, decision-making, language, memory and cognition. “From this viewpoint, we can regard selective focal attention as an analyzer and consciousness as a synthesizer.” If they have different functions, they are likely to be dissociated under some circumstances.

 

The authors give a set of four types of events:

  1. Top-down attention is not required and the events can occur without consciousness: formation of afterimages, rapid vision (less than 120 ms), zombie behaviours.

  2. Top-down attention is not required but the events give rise to consciousness: pop-out, iconic memory, gist, animal/gender detection, partial reportability.

  3. Top-down attention is required but the events can occur without consciousness: priming, adaptation, processing of objects, visual search, thoughts.

  4. Top-down attention is required and the events give rise to consciousness: working memory, detection and discrimination of unexpected/unfamiliar stimuli, full reportability.

The second and third groups show that attention is neither necessary nor sufficient for consciousness.

The bulk of the paper is a review of the relevant evidence. I will highlight a few items that seemed of particular interest to me.

 

Gist: In a dual-task paradigm when attention is focused at one spot, a peripheral stimulus can still be detected, as can the gist of a scene and characteristics like the gender of a face. “Interestingly, what is considered a change in gist and what is not, seems to be affected by expertise. This suggests that consciousness without attention develops in response to extensive experience with a particular class of images.” Also, “observers often do perceive the gist of the scene and can accurately perceive the category of the object (whether it is a face, a natural scene, a letter, etc.). Even with a mere 30 ms exposure to natural scenes, followed by a mask, observers can clearly perceive their gist…within these 30 ms, top-down attentional bias could not have taken effect.”

 

Neural activity: In priming experiments, “attention to invisible stimuli and visibility of unattended stimuli both enhanced the priming effects, but via distinctive neuronal mechanisms.” Different neural activity has been found for visibility (54–64 Hz from 250–500 ms in contralateral occipital sensors) and attention (76–90 Hz from 350–500 ms in parietal sensors).

 

Two streams:

“There are some striking parallels between the two-stream hypothesis for perception and action on the one hand and the division between attention and consciousness on the other hand. Attention primarily reduces the complexity of incoming input so that the brain can process it online and in real time. This might/could be a function of Milner and Goodale’s (2008) dorsal visual stream for action. In fact, the “pre-motor” theory of attention argues that visual attention evolved from the need to plan to move the eyes to one location. Overt eye movements and covert attention are closely related in both neural and functional ways. In terms of anatomical structure, front-parietal areas have been implicated in the control of attention, which are, of course, part of the dorsal, vision-for-action pathway. On the other hand, the ventral, vision-for-perception pathway has been linked to consciousness.”

The two streams work together except under unusual conditions. This link of the streams to attention and consciousness is without doubt an over-simplification but an interesting starting point for investigations.

 

Here is the abstract:

Recent research has slowly corroded a belief that selective attention and consciousness are so tightly entangled that they cannot be individually examined. In this review, we summarize psychophysical and neurophysiological evidence for a dissociation between top-down attention and consciousness. The evidence includes recent findings that show subjects can attend to perceptually invisible objects. More contentious is the finding that subjects can become conscious of an isolated object, or the gist of the scene in the near absence of top-down attention; we critically re-examine the possibility of “complete” absence of top-down attention. We also cover the recent flurry of studies that utilized independent manipulation of attention and consciousness. These studies have shown paradoxical effects of attention, including examples where top-down attention and consciousness have opposing effects, leading us to strengthen and revise our previous views. Neuroimaging studies with EEG, MEG, and fMRI are uncovering the distinct neuronal correlates of selective attention and consciousness in dissociative paradigms. These findings point to a functional dissociation: attention as analyzer and consciousness as synthesizer. Separating the effects of selective visual attention from those of visual consciousness is of paramount importance to untangle the neural substrates of consciousness from those for attention.

 

 

ResearchBlogging.org

van Boxtel, J., Tsuchiya, N., & Koch, C. (2010). Consciousness and Attention: On Sufficiency and Necessity. Frontiers in Psychology, 1. DOI: 10.3389/fpsyg.2010.00217

Embodied cognition – morality

We have an emotion of disgust. It appears to be a way of identifying and avoiding things that can harm us by contagion. So things that smell or look or feel or taste in particular ways, reminding us of waste, rot, death, illness, disfigurement and the like, will disgust us. We will feel like vomiting, washing, withdrawing, gagging, and holding our noses. We have a standard facial expression for it. This emotional aversion protects us from contamination. We can also feel a sort of moral contamination, a sense of impurity. Apparently we do not need two emotions when one will do; we can also feel morally disgusted. The same bodily reactions will do; the same networks in the brain can do the processing; the same facial expression can signal it.

 

Here is the abstract from Haidt, Rozin, McCauley and Imada (1997), Body, Psyche, and Culture: The Relationship between Disgust and Morality:

“Core disgust” is a food related emotion that is rooted in evolution but is also a cultural product. Seven categories of disgust elicitors have been observed in an American sample. These include food, animals, body products, sexual deviance, body-envelope violations, poor hygiene, and contact with death. In addition, social concerns such as interpersonal contamination and socio-moral violations are also associated with disgust. Cross-cultural analyses of disgust and its elicitors using Israeli, Japanese, Greek and Hopi notions of disgust were undertaken. It was noted that disgust elicitors have expanded from food to the social order and have been found in many cultures. Explanations for this expansion are provided in terms of embodied schemata, which refer to imaginative structures or patterns of experience that are based on bodily knowledge or sensation. A mechanism is suggested whereby disgust elicitors are viewed as a prototypically defined category involving many of the embodied schemata of disgust. It is argued that each culture draws upon these schemata and its social and moral life is based on them.

 

And an abstract from Ritter and Preston (2011), Gross gods and icky atheism: Disgust responses to rejected religious beliefs:

Disgust is an emotional response that helps to maintain and protect physical and spiritual purity by signaling contamination and motivating the restoration of personal cleanliness. In the present research we predicted that disgust may be elicited by contact with outgroup religious beliefs, as these beliefs pose a threat to spiritual purity. Two experiments tested this prediction using a repeated taste-test paradigm in which participants tasted and rated a drink before and after copying a passage from an outgroup religion. In Experiment 1, Christian participants showed increased disgust after writing a passage from the Qur’an or Richard Dawkins’ The God Delusion, but not a control text. Experiment 2 replicated this effect, and also showed that contact with an ingroup religious belief (Christians copying from the Bible) did not elicit disgust. Moreover, Experiment 2 showed that disgust to rejected beliefs was eliminated when participants were allowed to wash their hands after copying the passage, symbolically restoring spiritual cleanliness. Together, these results provide evidence that contact with rejected religious beliefs elicits disgust by symbolically violating spiritual purity. Implications for intergroup relations between religious groups is discussed, and the role of disgust in the protection of beliefs that hold moral value.

 

What about prejudice against the disabled, obese and different cultures/races? Here is the abstract of Navarrete, Fessler, Eng (2006): Elevated ethnocentrism in the first trimester of pregnancy:

Recent research employing a disease-threat model of the psychology of intergroup attitudes has provided preliminary support for a link between subjectively disease-salient emotional states and ethnocentric attitudes. Because the first trimester of pregnancy is a period of particular vulnerability to infection, pregnant women offer an opportunity to further test this association. We explored the expression of intergroup attitudes in a sample of pregnant women from the United States. Consistent with the predictions of the disease-threat model, results from our cross-sectional study indicate that favoritism toward the ingroup peaks during the first trimester of pregnancy and decreases during the second and third trimesters. We discuss this finding in light of the possible contributions of cultural and biological factors affecting ethnocentrism.

 

Another bodily reaction, different from the disgust just discussed, is to bitterness. Bitter plants are more likely to contain poisons, which is probably why we can taste bitterness and find the taste so unpleasant. But bitterness is also a trigger for a type of disgust tinged with moral outrage. Here is the abstract of Eskine, Kacinik and Prinz (2011), Gustatory Disgust Influences Moral Judgment:

Can sweet-tasting substances trigger kind, favorable judgments about other people? What about substances that are disgusting and bitter? Various studies have linked physical disgust to moral disgust, but despite the rich and sometimes striking findings these studies have yielded, no research has explored morality in conjunction with taste, which can vary greatly and may differentially affect cognition. The research reported here tested the effects of taste perception on moral judgments. After consuming a sweet beverage, a bitter beverage, or water, participants rated a variety of moral transgressions. Results showed that taste perception significantly affected moral judgments, such that physical disgust (induced via a bitter taste) elicited feelings of moral disgust. Further, this effect was more pronounced in participants with politically conservative views than in participants with politically liberal views. Taken together, these differential findings suggest that embodied gustatory experiences may affect moral processing more than previously thought.

 

Various groups have found that the anterior insular cortex is active in situations involving revolting smells and sights, disgusted facial expressions, and moral disgust. It seems to be an area that connects internal and external perceptions. The feeling is not confined to insular activity but can even have motor effects. Here is the abstract for Lee and Schwartz (2010), Of dirty hands and dirty mouths: Embodiment of the moral purity metaphor is specific to the motor modality involved in moral transgression:

Abstract thoughts about morality are grounded in concrete experiences of physical cleanliness. Noting that natural language use expresses this metaphorical link with reference to the body part involved in an immoral act (e.g., “a dirty mouth”; “dirty hands”), we address the role of motor modality in the embodiment of moral purity. We find that conveying a malevolent lie on voicemail (using the mouth) increases the desire to clean one’s mouth, but not the desire to clean one’s hands; conversely, conveying the same lie on email (using one’s hands) increases the desire to clean one’s hands, but not one’s mouth. Additional findings suggest that conveying a benevolent message may decrease the desire to clean the involved body part. Secondary analyses of earlier studies further support the assumption that the embodiment of moral purity is specific to the motor modality involved in the act.

They also found the opposite –

Note, however, that people not only avoid physical contact with morally tainted people and objects, but also seek physical contact with virtuous ones. Hence, they may not only attempt to remove the metaphorical residue of immoral acts, but also avoid removing the residue of virtuous acts, making mouthwash (hand-sanitizer) particularly unappealing after conveying a virtuous message on voicemail (email).

 

So moral cognition has a bodily dimension, but what exactly is this moral cognition? I have heard people say that all you need is the Golden Rule; that covers it. Others say look at the outcome and do what causes the least total harm or the most total good. Or some say to just follow the good book. Pinker lists five sorts of moral principles: harm, fairness, community, authority and purity. He may be right, but maybe they collapse into three or break up into twenty. (Just my natural suspicion of exactly 7 types of personality or 3 types of communication etc.) However, Pinker does give a good description of his “primary colors of our moral sense”: do unto others as you would have done to you (fairness); do not commit adultery (purity); honour your father and mother (authority); do no murder (harm); and do not covet your neighbour’s ox (community). Priority between the types depends on the individual and the times. My point here is that morality may be a rag bag of different sensitivities, grouped together as one sense only by the nature of their embodiment. By circumstance, a number of different things become entangled with the same or similar disgust signals from the body. I assume that morality is also held together by feelings of guilt and shame, which are also embodied signals (viz. Lady Macbeth’s hand washing).

 

We consciously know what is wrong because our body finds wrong things disgusting and those feelings are available to consciousness through bodily signals of disgust. If we do wrong we know it consciously because we feel the bodily effects of guilt and shame. Our brains have learned (or were born knowing) what they should be disgusted by. But this learning is not itself a perception and so cannot be part of the content of consciousness. The body’s reaction of disgust, however, can be perceived and therefore enter the conscious model.

 

 

This is the fifth in a series on embodied cognition. More are still to come.

Here are the first four in the series:

http://charbonniers.org/2011/06/15/embodied-cognition-posture/

http://charbonniers.org/2011/06/18/embodied-cognition-face/

http://charbonniers.org/2011/06/27/embodied-cognition-space/

http://charbonniers.org/2011/07/06/embodied-cognition-gut/

 

 

an excuse for a posting

 

I find myself unusually busy right now, so I am serving up a short, light posting this time.

The word consciousness entered English in the 1500s from the Latin conscius. It meant ‘knowing with’ or ‘having joint or common knowledge with another’. But the most common use was in the Latin phrase conscius sibi, or ‘knowing with oneself’. This phrase was used with the figurative meaning of ‘knowing that one knows’. This etymology explains why there are two very different uses of consciousness – an individual one and a group one. Class consciousness is in effect ‘joint knowledge with others of class’ and comes from conscius, whereas Locke’s definition, ‘the perception of what passes in a man’s own mind’, comes from conscius sibi. In non-technical English it is used to mean knowing, percipient, aware, cognizant.

Today we find that among the various functions proposed for consciousness is the sharing, or mutual availability, of information between different parts of the brain. In that sense, the etymology is ironically quite apt. Scientifically, the word is used to describe a very distinct physical process in the brain, one which has other functions as well as sharing within the brain.

The Latin word conscientia would translate as ‘shared knowledge’ too, but in its English-Latin use it had the sense of a witness having knowledge of the deeds of someone else. When it became the English word ‘conscience’ it took on the meaning of our moral witness to our own acts.

How to get from monkey to man

A Japanese group have a Japanese take on the evolution of primates (see citation), and their paper has some interesting aspects: a method to train and study macaques, a view of evolutionary selection, and a history of Japanese science in this area.

 

The macaques were trained to use tools but it was not easy. The training was done in baby steps, with an intensive training period for each step, but eventually the monkeys seemed to get it. First the monkeys just retrieved food on a long spoon, then the spoon was replaced by a rake with food on the near side, then the distance between rake and food was increased, then the food was put on the far side of the rake. Each step took several days of training, but at this stage there was a change and the monkeys began to learn more quickly (they seemed to understand rakes). Training continued with the rake and food behind a sight barrier, with a video of the action shown on a monitor that the monkey could view. The monkeys were taught to retrieve food even when the size and position of the video image was changed. Finally they were taught to use a smaller rake to obtain a longer rake, which they then used to retrieve food. This involves pre-planning and the sequential combining of tools, but the monkeys learned this last skill extremely quickly. Softly, softly, with patience – teach a monkey. This method was used on 50 macaques and all of them learned to use the rakes as tools, and with facility.

 

During the training, the monkeys’ brains were followed for changes. In particular, the researchers looked at the body image of the hand through the activity of intraparietal bimodal neurons that respond both to tactile stimulation of the hand and to visual stimuli presented in the same spatial vicinity as the hand – in other words, the image of the hand in space. At the point when training speeded up, there was a change in these neurons.

When our rake-trained monkeys wielded the rake in order to retrieve food, these same neurons’ visual receptive fields extended outwards along the axis of the tool to include the rake’s head. In other words, it appeared that either the rake was being assimilated into the image of the hand or, alternatively, the image of the hand was extending to incorporate the tool. Whenever a monkey was not regarding the rake as a tool and just held it passively as an external object, the visual receptive field withdrew from the rake head and was again limited to the space around the hand.

 

When direct sight was replaced by the video on a monitor:

…we found that neurons with tactile receptive fields on the hand were now endowed with visual receptive fields around the image of the hand. Furthermore, these visual receptive fields could extend to the head of the video image of the rake…

 

The first 10-14 days of training (which could not be reduced), followed by much easier learning, implied that there was not merely functional plasticity within the existing neural circuitry, but larger-scale neuroplastic reorganization. They looked for it and found it.

In the bank of the intraparietal sulcus, where the bimodal neurons described above reside, the expression of immediate early genes and the elevation of neurotrophic factors and their receptor was synchronized with the time course of the cognitive learning process…These training-induced genetic expressions turned out to be a part of morphological modification of the intraparietal neural circuitry.

 

Using tracers, they could see what had changed in the wiring of the monkeys’ brains: axons had extended beyond their former range into a new cortical area and made synapses there – a new interaction between the temporoparietal junction and the intraparietal cortex.

 

Monkeys have latent abilities that can be triggered in a proper environment. In humans this is spontaneous; in monkeys it requires artificial training. What does this say about human evolution? The authors make a case for intentional niche construction. In a very simplified outline of types of selection and of environmental niches, I see the following:

 

First there is genetic drift, which does not involve selection of any kind but can result in evolutionary change. There is natural selection, the archetypal selection; it preserves the genes of those best fitted to their niche. Sexual selection is an accepted type, resulting in certain traits being selected by the mating preferences of the opposite sex (think peacocks’ tails). Also accepted is kin selection, where the survival of related animals can be equivalent to one’s own survival in evolutionary terms. There is group selection, which is not completely accepted; if an individual’s survival depends on the success of its group, then group selection would be expected to operate. All of these can operate with or without a change in the animal’s niche. Some animals create their own niches (think beavers); they have evolved to fit the niche but also evolved to create that niche. Take that a stage further to an animal that can intentionally change its behaviour to fit a new niche (think animals with some culture), and further still to an animal that can intentionally create a new niche (think human culture, communication and child rearing).

 

The authors envisage a particular kind of human evolution:

The evolutionary path that led from the monkey brain to the human brain must have proceeded through a continuous, incremental process of natural selection. Nothing completely new should have been added to the primate brain. Evolution has limited the means for reorganizing so complex a structure; these means mainly involve tinkering with size and developmental timetables. One of our main claims here is that the precursors of the mental functions that allowed the human intellect to achieve a cultural snowball effect are present, even if only in latent or inchoate forms, in our primitive primate ancestors. A corollary claim is that certain forms of training can produce incremental but functionally significant changes in the non-human primate brain that mimic, perhaps even recapitulate, some of the key neurogenetic changes our ancestors underwent during their long march towards becoming us.

 

The authors also discuss whether Japanese culture, and its relationship with monkeys, makes these ideas more natural and acceptable to Japanese scientists. I am not going to comment on that part of the paper; the discussion of Japanese science is also not mentioned in the abstract.

 

Here is the abstract:

We trained Japanese macaque monkeys to use tools, an advanced cognitive function monkeys do not exhibit in the wild, and then examined their brains for signs of modification. Following tool-use training, we observed neurophysiological, molecular genetic and morphological changes within the monkey brain. Despite being ‘artificially’ induced, these novel behaviours and neural connectivity patterns reveal overlap with those of humans. Thus, they may provide us with a novel experimental platform for studying the mechanisms of human intelligence, for revealing the evolutionary path that created these mechanisms from the ‘raw material’ of the non-human primate brain, and for deepening our understanding of what cognitive abilities are and of those that are not uniquely human. On these bases, we propose a theory of ‘intentional niche construction’ as an extension of natural selection in order to reveal the evolutionary mechanisms that forged the uniquely intelligent human brain.

 

 

ResearchBlogging.org

Iriki, A., & Sakura, O. (2008). The neuroscience of primate intellectual evolution: natural selection and passive and intentional niche construction. Philosophical Transactions of the Royal Society B: Biological Sciences, 363 (1500), 2229-2241. DOI: 10.1098/rstb.2008.2274

Embodied cognition – gut

Most of us think of the brain as the nervous system, and when we think more clearly we would add the sense organs and their nerves, the spinal cord and the other nerves coming into and going out of the brain. But wait, there are other nerves and neurons – another three systems: the sympathetic, the parasympathetic and the enteric nervous systems. This last one was named the ‘second brain’ in 1996 by Gershon, who studied it.

In the embryo, the little clump of cells that is destined to be nervous tissue splits: one part becomes the brain’s system and the other part becomes the gut’s system. They are connected later by the vagus nerve. The gut ends up with 100 million neurons, over half of all the neurons residing outside the brain. It makes a good control system to manage digestion.

The sympathetic system is a series of ganglia (like tiny brains) along the spine, connected to the spinal cord. Nerves from the ganglia go to almost every organ in the body and help keep them working at the right level. When there is stress or danger, the sympathetic system creates the conditions for the body to react with ‘fight or flight’, such as shutting down digestion and increasing heart rate.

If the sympathetic system is the accelerator, then the parasympathetic system is the brake. The parasympathetic ganglia are near or in the organs they affect, and are connected to the spinal cord or the cranial nerves. The system is thought of as the rest-and-digest control. Working somewhat opposite to the sympathetic system, it puts the organs of the body in maintenance mode, with digestion promoted and the heart rate decreased.

The brain controls the sympathetic and parasympathetic systems, but this control never becomes conscious. So we may know we are angry from our bodies (pounding heart, sweaty palms, blurred vision, feeling hot, clenched teeth, pacing etc.) but we may or may not know the cause, as it may or may not have risen to consciousness. And we are not aware of the signals to the body, only of the body’s response. It is not surprising that ancient people thought of the heart, stomach and so on as the seat of emotion and even of thought. This cognitive embodiment is old news.

The second brain is new news; its study has only recently begun. The brain has much less control of the enteric nervous system. Here is part of a Scientific American article by Adam Hadhazy on the subject:

Thus equipped with its own reflexes and senses, the second brain can control gut behavior independently of the brain, Gershon says. We likely evolved this intricate web of nerves to perform digestion and excretion “on site,” rather than remotely from our brains through the middleman of the spinal cord. “The brain in the head doesn’t need to get its hands dirty with the messy business of digestion, which is delegated to the brain in the gut,” Gershon says. He and other researchers explain, however, that the second brain’s complexity likely cannot be interpreted through this process alone.
The second brain informs our state of mind in other more obscure ways, as well. “A big part of our emotions are probably influenced by the nerves in our gut,” Mayer says. Butterflies in the stomach—signaling in the gut as part of our physiological stress response, Gershon says—is but one example. Although gastrointestinal (GI) turmoil can sour one’s moods, everyday emotional well-being may rely on messages from the brain below to the brain above. For example, electrical stimulation of the vagus nerve—a useful treatment for depression—may mimic these signals, Gershon says.
Given the two brains’ commonalities, other depression treatments that target the mind can unintentionally impact the gut. The enteric nervous system uses more than 30 neurotransmitters, just like the brain, and in fact 95 percent of the body’s serotonin is found in the bowels. Because antidepressant medications called selective serotonin reuptake inhibitors (SSRIs) increase serotonin levels, it’s little wonder that meds meant to cause chemical changes in the mind often provoke GI issues as a side effect. Irritable bowel syndrome—which afflicts more than two million Americans—also arises in part from too much serotonin in our entrails, and could perhaps be regarded as a “mental illness” of the second brain.

We may think all this is ‘only emotion’ not really thinking, not actual cognition. However, emotion is extremely important to thinking. It supplies the purpose, the value criteria and the significance to thought.

This is the fourth in a series on embodied cognition. More are still to come.

Here are the first three in the series:

http://charbonniers.org/2011/06/15/embodied-cognition-posture/

http://charbonniers.org/2011/06/18/embodied-cognition-face/

http://charbonniers.org/2011/06/27/embodied-cognition-space/

learning to see and identify

When I was first learning to identify things through a microscope, it seemed an impossible task. There was a sea of variously shaped and coloured splotches, and each one had to be examined against a long set of specifications that had to be memorized. But I discovered, after a few of these learning tasks, that what was difficult at first became extremely easy. I would just look generally at the microscope field and the cells I was searching for would pop out of the background. A similar thing happened when I first visited African game parks. At first I would scan the vista with binoculars and take a long time to spot any animal that was visible. After a while I just looked without binoculars and saw animals, dozens of them. I sometimes needed the binoculars to identify them or to see what they were doing, but for finding them in the first place it was better just to relax and look. I have often wondered how we learn these search tricks.

There is a recent paper by Hussain, Sekuler and Bennett, Superior Identification of Familiar Visual Patterns a Year After Learning. I do not have access to the paper, but the ScienceDaily posting is (here). The paper seems to be about this ability. Here is the abstract:

Practice improves visual performance on simple tasks in which stimuli vary along one dimension. Such learning frequently is stimulus-specific and enduring, and has been associated with plasticity in striate cortex. It is unclear if similar lasting effects occur for naturalistic patterns that vary on multiple dimensions. We measured perceptual learning in identification tasks that used faces and textures, stimuli that engage multiple stages in visual processing. Performance improved significantly across 2 consecutive days of practice. More important, the effects of practice were remarkably stable across time: Improvements were maintained approximately 1 year later, and both the relative difficulty of identifying individual stimuli and individual differences in performance were essentially constant across sessions. Finally, the effects of practice were largely stimulus-specific. Our results suggest that the characteristics of perceptual learning are similar across a spectrum of stimulus complexities.

These were not straightforward images.

Over the course of two consecutive days, participants were asked to identify a specific face or pattern from a larger group of images. The task was challenging because images were degraded — faces were cropped, for example — and shown very briefly. Participants had difficulty identifying the correct images in the early stages, but accuracy rates steadily climbed with practice.

There are some important differences between this type of memory and usual episodic memory. This is a ‘how to’ memory, like riding a bike; these are memories of how to find and identify x where no two x are the same. Such memories are formed and held differently, and do not suffer the modification and weakening over time that memories of events do. They are not fact or semantic memories either. I am sure I could quickly identify cell types that I learned 50 years ago, but I am not sure I would remember their names or facts about them. Facts and names are getting hazy. Even some of the animal names have become a little faint.

These particular how-to-perceive memories (unlike riding a bicycle) have a very conscious effect: we see the objects of our search pop out clearly from the surroundings. This must be a product of attention swinging to them as soon as they are identified.