Unconscious language and math

This paper (citation below) starts from an assumption the authors call the modal view: “It is not surprising then that the modal view holds that the semantic processing of multiple-word expressions and performing of abstract mathematical computations require consciousness (reason: they are human skills). In more general terms, sequential rule-following manipulations of abstract symbols are thought to lie outside the capabilities of the human unconscious.” The authors intend to weaken this modal view.

 

They point out that previous experiments have shown unconscious processing of single words and numbers, of simple arithmetic facts, and of additions involving no number over 6. More demanding tasks, however, have not been shown to be possible unconsciously. The paper attempts to demonstrate more demanding unconscious cognition: “we argue that people can semantically process multiple-word expressions and that they can perform effortful arithmetic computations outside of conscious awareness.”

 

What is different in their experiments is that unconscious processing is given some time.

In all of our experiments, we use Continuous Flash Suppression (CFS), a cutting edge masking technique that allows subliminal presentations that last seconds. CFS is a game changer in the study of the unconscious, because unlike all previous methods, it gives unconscious processes ample time to engage with and operate on subliminal stimuli. Indeed, in the present set of experiments, we show that humans can semantically process subliminal multiple-word expressions and that they can nonconsciously solve effortful arithmetic equations.

CFS consists of a presentation of a target stimulus to one eye and a simultaneous presentation of rapidly changing masks to the other eye. The rapidly changing masks dominate awareness until the target breaks into consciousness. Importantly, this suppression may last seconds. We used this technique in two different ways. In the first section, the critical dependent variable was the time that it took the stimuli to break suppression and pop into consciousness (popping time). In the second section, we used masked expressions as primes and measured their influence on consequent judgments. Objective and subjective measures ensured unawareness of the primes.
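To make the popping-time logic concrete, here is a minimal simulation sketch (my own illustration in Python; the trial counts, condition means and lognormal shape are invented for the example, not taken from the authors' data or analysis):

```python
# Toy simulation of the CFS "popping time" comparison (made-up numbers):
# incoherent expressions are assumed to break suppression a little earlier
# than coherent ones, and a t-test asks whether the difference is detectable.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n = 40  # trials per condition (assumed)

# Simulated breakthrough times in seconds (lognormal, as reaction-time data tend to be).
coherent = rng.lognormal(mean=np.log(1.9), sigma=0.25, size=n)
incoherent = rng.lognormal(mean=np.log(1.7), sigma=0.25, size=n)

t, p = stats.ttest_ind(incoherent, coherent)
print(f"incoherent popped {coherent.mean() - incoherent.mean():.2f} s earlier on average "
      f"(t = {t:.2f}, p = {p:.3f})")
```

The same logic, with equation primes and congruent versus incongruent target numbers, carries over to the arithmetic experiments described below.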

 

The results: semantically incoherent expressions popped before coherent ones, showing that unconscious processing must have registered the incoherence of the multiple-word expressions. More negative expressions popped faster than non-negative ones, indicating that the emotional tone of an expression was also extracted unconsciously. When participants were unconsciously primed with three-term subtraction equations, numbers matching the equation’s answer popped earlier than other numbers, implying that the equation had been solved unconsciously. Under slightly different conditions, addition equations also appeared to be solved unconsciously. “These data show that unconscious processes can perform sequential rule-following manipulations of abstract symbols—an ability that, to date, was thought to belong to the realm of conscious processing.”

 

Their conclusion:

To conclude, research conducted in recent decades has taught us that many of the high-level functions that were traditionally associated with consciousness can occur nonconsciously … for example, learning, forming intuitions that determine our decisions, executive functions, and goal pursuit. Here, we showed that uniquely human cultural products, such as semantically processing a number of words and solving arithmetic equations, do not require consciousness. These results suggest that the modal view of consciousness and the unconscious, a view that ties together (our unique) consciousness with (humanly unique) capacities, should be significantly updated.

 

I have a problem with both the modal view and the conclusions of this research group. There is an assumption that consciousness is a cognitive process rather than just a memory and awareness process. Once this assumption is made, it is reasonable to come to their conclusions. What I believe may be happening is that the difference lies not in the number of steps or the complexity of the cognition but in whether working memory and/or global access is required. If the actual cognition were a conscious function, then we really should be aware of that cognition; we should be able to experience the nitty-gritty of the process. Instead we get the sub-results of cognition as each step is solved, because each sub-result needs to be held in working memory. Learning, practice, habit and so on can change the size/complexity of the steps so that more can be done without recourse to working memory.

 

As for consciousness being uniquely human – this notion is dead but has not quite been put in its grave yet.


Sklar, A., Levy, N., Goldstein, A., Mandel, R., Maril, A., & Hassin, R. (2012). Reading and doing arithmetic nonconsciously. Proceedings of the National Academy of Sciences. DOI: 10.1073/pnas.1211645109

Word retrieval

When we attempt to find the word for something, related words are also accessed (as in word association, priming, Freudian slips, and simple errors). But these related words are of two types, taxonomic and thematic:

Across all types of speakers and all manner of testing, semantic naming errors overwhelmingly reflect taxonomic relations; that is, the predominant error is a category coordinate (apple named as “pear” or “grape”), superordinate (apple → “fruit”), or subordinate (apple → “Granny Smith”). A small subset are thematic errors, such as apple → “worm” or bone → “dog,” in which the target and error are from different taxonomic categories but frequently play complementary roles in the same actions or events.
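As a toy illustration of these error categories (my own sketch, using a hand-built mini-hierarchy and a list of event-based associates; nothing here comes from the paper):

```python
# Classify a semantic naming error as taxonomic (coordinate, superordinate,
# subordinate) or thematic, given a tiny hand-built taxonomy.
hierarchy = {"apple": "fruit", "pear": "fruit", "grape": "fruit",
             "Granny Smith": "apple", "dog": "animal", "bone": "object"}
thematic_links = {("apple", "worm"), ("bone", "dog"), ("dog", "bone")}

def classify_error(target, response):
    if hierarchy.get(response) == hierarchy.get(target):
        return "coordinate"     # apple -> "pear": same parent category
    if response == hierarchy.get(target):
        return "superordinate"  # apple -> "fruit": named the category
    if hierarchy.get(response) == target:
        return "subordinate"    # apple -> "Granny Smith": named a kind
    if (target, response) in thematic_links:
        return "thematic"       # apple -> "worm": linked by events, not category
    return "unrelated"

for t, r in [("apple", "pear"), ("apple", "fruit"),
             ("apple", "Granny Smith"), ("apple", "worm")]:
    print(t, "->", r, ":", classify_error(t, r))
```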

 

Does this reflect a difference in semantic memory for the two types or not? The researchers of a recent paper, Schwartz et al. (see citation below), compared the errors made by stroke victims with the locations of their brain damage to show a difference between taxonomic and thematic storage in the brain. Their results:

We found that taxonomic errors localized to the left anterior temporal lobe and thematic errors localized to the left temporoparietal junction. This is an indication that the contribution of these regions to semantic memory cleaves along taxonomic-thematic lines. Our findings show that a distinction long recognized in the psychological sciences is grounded in the structure and function of the human brain.

 

What is the relationship between these two ways of retrieving the right word?

Although many thematic errors in our corpus do involve objects with complementary functions in action events (dog → “bone”; zipper → “jacket”), many others are linked by other types of relation, such as spatial relations (e.g., anchor → “sea”) or causal relations (e.g., ambulance → “fire”). This goes along with a broader role for this TPJ area in the representation of relational information, which may be what undergirds its essential contribution to sentence comprehension. We suggest that in the process of identifying an object for naming, relevant event representations are retrieved or simulated that create a momentary linkage between the target concept and others in the event context. This process probably takes place bilaterally in the TPJ, but it is the component on the left that conveys information about these linked concepts to left-lateralized lexical-phonological systems. Lesions here render this communication noisier or less precise, thereby reducing the natural advantage of the target concept over its contextual associates and encouraging an error in which one of these associates is named in place of the target. …

we propose that the ATL and TPJ are each multimodal hubs that extract somewhat different relationships. The ATL extracts perceptual feature similarity for the purpose of object processing, whereas the TPJ extracts role relations for the purpose of event processing. The ATL system is the dominant one in naming, which explains why taxonomic errors predominate over thematic errors.

 

I find this very interesting in the context of how we make/understand sentences and how we reason in metaphors. It points to a separation between conceptual structures and the elements that fill them. It seems like something deep about how we think is nearing the surface.

 


Schwartz, M., Kimberg, D., Walker, G., Brecher, A., Faseyitan, O., Dell, G., Mirman, D., & Coslett, H. (2011). Neuroanatomical dissociation for taxonomic and thematic knowledge in the human brain. Proceedings of the National Academy of Sciences, 108 (20), 8520-8524. DOI: 10.1073/pnas.1014935108

Just because Chomsky said it does not make it true

A recent blog post by Dorothy Bishop (here) discusses the ideas of Noam Chomsky. It is easy to agree with her. I have thought for some time that the language Chomsky talks about is not the language I utter or listen to. It is something strange and alien.

 

My first problem is that Chomsky does not seem interested in communication. How well a particular piece of language communicates seems somewhat irrelevant to him. To me, communication is the core of language. We can communicate information, requests and commands, but we can also communicate emotion, attitude, bonding and such like. Of course we can communicate without language, but it is communication that gives value to language; it is the reason we value it. And better communication was almost without doubt the evolutionary pressure that molded language.

 

Chomsky seems to think of language as a form of logic. This is an old idea, and we can see it in the etymology of our words for grammatical entities, like ‘subject’ and ‘predicate’. But Venn diagrams are easier to comprehend than syllogisms – we can ‘do’ logic outside language. He starts out treating the sentence as the basic unit of language. However, if you have ever read accurate transcripts of actual unprepared utterances, you will know how rare the well-formed sentence is. Most communication is phrases, with some sentences and some run-on sentences that are more like paragraphs. The sentence takes on absolute rather than relative importance when bits of language are taken out of the context of a conversation so that they have to stand alone, and when language is written. And the sentence matters very specially when language is treated as a form of logic. There is something dry and unnatural about single written sentences being parsed with diagrams. Gone is the voice, the rhythm, the volume, the face and hands. We communicate with our whole bodies, and it loses a lot when it is reduced to 10 or so words in a diagram.

 

Chomsky seems to think that language is essential for thought. In my experience, thought is multi-dimensional and language is a two-dimensional string. Grammar is only the way to translate multi-dimensional thoughts in and out of strings of words. Chomsky seems to think that I could not have the thought if I didn’t have the language. If that were the case, then why is it so difficult to put many clear thoughts into words? I agree that many thoughts start out as words – thinking in words as opposed to pictures or numbers or metaphoric concepts or music, for example. But I cannot think of a type of thinking (other than purely semantic games) that could not be done without language. Communicating thoughts without language, though, is much more difficult. Language can express the form: a has an attribute b. This is an extension of a normal perception process, the perceptual binding of b to a. We do it all the time without words. It does not take language to think ‘the sky is blue’ or ‘my mother is happy’ – it takes language to communicate the thought, not to think it. We also have, as a normal way of understanding the world, the agent-action-outcome framing of events. This way of thinking is built right into the brain. We do not need language to think in subject – verb – object. We naturally think in terms of a continuum of time centered on a ‘now’. The brain thinks about different types of things in separate regions – people separate from small objects, separate from places and so on. Verb tenses, noun cases and number are there with or without language. Language is not essential for thought.

 

Anyone who has tried to learn a foreign language can tell you how much more important words are than grammar when communicating with minimal knowledge of a language. Chomsky seems to think that semantics is much less (something – interesting, important, critical – I am not sure) than grammar. This is not true when the criterion is communicative success. People do communicate in pidgins with the most rudimentary grammar. The Bishop posting (link above) has much on the learning of language by children and how Chomsky has misjudged it.

 

Finally, Chomsky has a somewhat unusual idea of the origin of language. He gives lip service to evolution but never actually uses the idea. He prefers a one-mighty-leap approach: there was no language and then, all of a sudden, there was ‘merge’, and that allowed language. We can see why he finds this easier to deal with than a long slow evolution. He wants a distinct, clear separation of man from other animals, a difference of kind rather than degree. If man is unique because of language, then language must not be found in other animals. I think Chomsky finds this idea very, very important. Other animals communicate and some of them can use devices similar to words. The more we look at animals (especially primates, dogs, elephants, whales, crows, parrots), the more we find roots of language, until all that is left that has not been demonstrated in some animal or other is ‘merge’. Those interested in the evolution of language look for traces of continuity between how various animals communicate and how we do. Chomsky seems to be trying to identify a discontinuity as the main prize.

 

Babel’s Dawn - the book

A number of years back I encountered a blog called Babel’s Dawn written by Edmund Blair Bolles. It is now inactive although all the postings are still on-line to read at http://www.babelsdawn.com . It has been turned into a book called Babel’s Dawn, A Natural History of the Origins of Speech by the same author, EB Bolles. The book is a narrative and a very easy, enjoyable read. It is the same material as the blog but not in essay form. Instead it is ordered chronologically and presented as a walk through a museum exhibition. If you are at all interested in language, human nature, evolution, culture (and I expect many of my readers are interested in that type of subject matter) get the book and have a good read over Christmas.

 

The narrative starts with a character given the name Sara, the putative last common ancestor of us and chimps 6 million years ago. Using characters like this at points along the way, language is traced from its roots to something we would recognize as a proper language, over a time span of 5.1 million years. What had to happen in the remaining 900,000 years is added at the end. At no place was I left wondering how we get from there to here – no mighty leaps, no magic fairy dust.

 

The logic is convincing. It does not rely on many new powers but is grounded in perception, attention, and communal living. It bypasses rules of syntax, symbols, and the like to get a much more biologically based notion of what language is and what it does.

 

The key idea is that apes have the abilities that we adapted into language but they do not use them in the way we do, mainly because they do not trust one another. We are trusting, cooperating, social animals. We jointly pay attention to a topic (Bolles calls this the speech triangle of speaker, listener and topic). It is to our advantage to do this but it is not to the chimps’ advantage. From this trusting joint attention all else flows. Words steer attention. Verbs connect topics with news about them. Metaphor allows us to treat all things as though they were concrete and could be perceived. The theory makes good sense and seems to fit the data.

 

It is a just-so story and will probably be overtaken by new data and new ideas eventually. Bolles is well aware of that and points it out himself. But it is a very well done tale, very careful with the data, and in my opinion it stands head and shoulders above other attempts to trace language’s origins.

 

Again I urge you to read the book!

 

Embodied cognition - language

It is hard to overstate the importance of language – but some manage it. Language gets a very big billing from some people: the singular mark of being human; the only medium of thought; the foundation of consciousness; the basis of social relations; and more. This seems over the top to me, but language is still very, very important and we need to understand how it comes to be so.

 

As a schoolgirl (along with millions of other schoolchildren), I noticed the contradiction of dictionaries: we cannot define the meaning of all words using only words. What gives a word meaning is still an open question with many not-too-convincing answers. My own favourite answer is that most words get their meaning/s from their position in a web of words, from their relationships to other words. The web can be thought of as a mass of variously nested and overlapping metaphors/schemas/maps. The foundation of this web has to be some pre-verbal concepts, some real structural relationships that form the pre-metaphors used to create all others. In other words, there must be points of ‘grounding’. The points of contact between language and non-linguistic reality have to be what young babies come with: the structure of their bodies, what they can sense, and the actions they can take. So the beginning of language (for the species and for every individual in it) has to be embodiment, before culture can start to make its contribution.
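A minimal sketch of this web-of-words idea (my own toy with invented co-occurrence counts; real models use huge corpora): a word’s ‘meaning’ is approximated by its pattern of relationships to other words, so words embedded in similar webs come out as similar.

```python
# Toy "web of words": meaning as position in a network of relationships,
# here reduced to co-occurrence counts and cosine similarity.
import numpy as np

words = ["sky", "blue", "sea", "happy", "mother"]
# Invented counts: row i = how often words[i] occurs near each other word.
cooc = np.array([[0, 9, 4, 1, 0],   # sky
                 [9, 0, 6, 1, 0],   # blue
                 [4, 6, 0, 0, 0],   # sea
                 [1, 1, 0, 0, 5],   # happy
                 [0, 0, 0, 5, 0]])  # mother

def similarity(a, b):
    """Cosine similarity between two words' relationship profiles."""
    va, vb = cooc[words.index(a)], cooc[words.index(b)]
    return va @ vb / (np.linalg.norm(va) * np.linalg.norm(vb))

print(f"sky ~ sea:    {similarity('sky', 'sea'):.2f}")     # high: shared neighbours
print(f"sky ~ mother: {similarity('sky', 'mother'):.2f}")  # low: little shared context
```

Of course, such a web is circular on its own; the grounding points described above are what stop the regress.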

 

There is little doubt that the particular language we speak can affect how we think. Whorf wrote:

We are thus introduced to a new principle of relativity, which holds that all observers are not led by the same physical evidence to the same picture of the universe, unless their linguistic backgrounds are similar, or can in some way be calibrated.

Although the extent of this Sapir-Whorf effect is not agreed upon, and variations from a strong to a weak form are found, belief in some effect of language on thought and perception does not conflict with belief in some embodiment. There is an effect of the physical body on language; just because I am discussing embodiment here does not mean it is the only process involved.

 

Let us start with phonemes – the individual sounds of language. Mark Changizi has proposed that culture builds on what the brain is capable of and the brain has evolved the capabilities needed for living in the natural world. Here is part of an interview of Changizi by Lende (here):

(We can identify objects from their sound as well as look and feel. This is an adaptation to natural world.) For example, there are primarily three “atoms” of solid-object physical events: hits, slides and rings. Hits are when two objects hit one another, and slides where one slides along the other. Hits and slides are the two fundamental kinds of interaction. The third “atom” is the ring, which occurs to both objects involved in an interaction: each object undergoes periodic vibrations — they ring. They have a characteristic timbre, and your auditory system can usually recognize what kind of objects are involved. For starters, then, notice how the three atoms of solid-object physical events match up nicely with the three fundamental phoneme types: plosives, fricatives and sonorants. Namely, plosives (like t, k, p, d, g, b) sound like hits, fricatives (s, sh, f, z, v) sound like slides, and sonorants (vowels and also phonemes like y, w, r, l) sound like rings.

Even syllables are structured like solid-object interactions. When we hit a bell, we hear the hit followed by the ring; objects ring after the events of hits and slides, and the fundamental morphology of language is the consonant-vowel syllable. Language uses the brain’s ability to derive meaning from the sound of objects by restricting language sounds to mimics of object sounds. This allows us to use a part of the brain adapted for one purpose for a different but neurologically similar one.
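Changizi’s mapping is simple enough to write down directly. Here is a toy rendering (mine, not his) that reads a syllable as a sequence of physical-event ‘atoms’:

```python
# Map phoneme types to Changizi's three "atoms" of solid-object events.
event_for = {"plosive": "hit", "fricative": "slide", "sonorant": "ring"}

phoneme_type = {}
phoneme_type.update({p: "plosive" for p in "tkpdgb"})
phoneme_type.update({p: "fricative" for p in "szfv"})
phoneme_type.update({p: "sonorant" for p in "aeiouywrl"})

def as_event_sequence(syllable):
    """Read a syllable as a sequence of physical-event atoms."""
    return [event_for[phoneme_type[p]] for p in syllable if p in phoneme_type]

print(as_event_sequence("ta"))   # ['hit', 'ring']: like striking a bell
print(as_event_sequence("sla"))  # ['slide', 'ring', 'ring']: a slide, then ringing
```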

 

What about the words that are formed from these phonemes? They may have their roots in onomatopoeia, the ‘bow-wow’ theory of language origin. Or perhaps synaesthesia is the first step to language, as put forward by Ramachandran and Hubbard in their 2001 paper, Synaesthesia – A Window Into Perception, Thought and Language. When they asked people to guess which object had the name ‘kiki’ and which ‘bouba’, 95% of people labelled the spiky object as kiki and the curvy one as bouba.

 

The classification of words is another possible area of embodiment. Does the brain have different processes for different types of words? Here is the abstract from Mestres-Misse, Rodriguez-Fornells, Munte (2009) Neural differences in the mapping of verb and noun concepts onto novel words:

A dissociation between noun and verb processing has been found in brain damaged patients leading to the proposal that different word classes are supported by different neural representations. This notion is supported by the facts that children acquire nouns faster and adults usually perform better for nouns than verbs in a range of tasks. In the present study, we simulated word learning in a variant of the human simulation paradigm that provided only linguistic context information and required young healthy adults to map noun or verb meanings to novel words. The mapping of a meaning associated with a new-noun and a new-verb recruited different brain regions as revealed by functional magnetic resonance imaging. While new-nouns showed greater activation in the left fusiform gyrus, larger activation was observed for new-verbs in the left posterior middle temporal gyrus and left inferior frontal gyrus (opercular part). Furthermore, the activation in several regions of the brain (for example the bilateral hippocampus and bilateral putamen) was positively correlated with the efficiency of new-noun but not new-verb learning. The present results suggest that the same brain regions that have previously been associated with the representation of meaning of nouns and verbs are also associated with the mapping of such meanings to novel words, a process needed in second language learning.

 

The following research reminded me of trying to learn some Swahili and dealing with the idea of noun classes – many of them. Just, Cherkassky, Aryal, Mitchell (2010) A Neurosemantic Theory of Concrete Noun Representation Based on the Underlying Brain Codes identified three noun classes. (They were not counting people, abstracts, etc. in the three.) Here is the abstract:

This article describes the discovery of a set of biologically-driven semantic dimensions underlying the neural representation of concrete nouns, and then demonstrates how a resulting theory of noun representation can be used to identify simple thoughts through their fMRI patterns. We use factor analysis of fMRI brain imaging data to reveal the biological representation of individual concrete nouns like apple, in the absence of any pictorial stimuli. From this analysis emerge three main semantic factors underpinning the neural representation of nouns naming physical objects, which we label manipulation, shelter, and eating. Each factor is neurally represented in 3–4 different brain locations that correspond to a cortical network that co-activates in non-linguistic tasks, such as tool use pantomime for the manipulation factor. Several converging methods, such as the use of behavioral ratings of word meaning and text corpus characteristics, provide independent evidence of the centrality of these factors to the representations. The factors are then used with machine learning classifier techniques to show that the fMRI-measured brain representation of an individual concrete noun like apple can be identified with good accuracy from among 60 candidate words, using only the fMRI activity in the 16 locations associated with these factors. To further demonstrate the generativity of the proposed account, a theory-based model is developed to predict the brain activation patterns for words to which the algorithm has not been previously exposed. The methods, findings, and theory constitute a new approach of using brain activity for understanding how object concepts are represented in the mind.
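To make the decoding claim concrete, here is a minimal simulation sketch (my own; the 60 candidate words and 16 locations come from the abstract, but the activation profiles, repeat count and choice of classifier are assumptions): if each word has a stable activation profile over the 16 locations, even a simple nearest-centroid classifier identifies words far above the 1-in-60 chance level.

```python
# Toy decoding of "which of 60 nouns" from simulated activity at 16 locations.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import NearestCentroid

rng = np.random.default_rng(0)
n_words, n_locations, n_repeats = 60, 16, 6  # 60 and 16 from the abstract; repeats assumed

# Each word gets a fixed (made-up) profile; each presentation adds measurement noise.
profiles = rng.normal(size=(n_words, n_locations))
X = np.repeat(profiles, n_repeats, axis=0) \
    + 0.5 * rng.normal(size=(n_words * n_repeats, n_locations))
y = np.repeat(np.arange(n_words), n_repeats)

scores = cross_val_score(NearestCentroid(), X, y, cv=3)
print(f"mean accuracy: {scores.mean():.2f} (chance = {1/60:.3f})")
```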

 

What is the use of words? Babel’s Dawn (here) has made an excellent case for words being similar to pointing: they steer the joint attention of the speaker and listener. And, by analogy, words point to concepts in our brains. Grossmann and Johnson (2010), Selective prefrontal cortex responses to joint attention in early infancy, show its importance to communication:

Infants engaged in joint attention use a similar region of their brain as adults do. Our study suggests that the infants are tuned to sharing attention with other humans much earlier than previously thought. This may be a vital basis for the infant’s social development and learning. In the future this approach could be used to assess individual differences in infants’ responses to joint attention and might, in combination with other measures, serve as a marker that can help with an early identification of infants at risk for autism.

 

We now seem to be leaving phonology and semantics to enter grammar. It seems to me that the sequence we assume is natural to the brain – goal, plan, action, result, evaluation – when fitted to our own actions and the actions of others, yields the form subject – verb – object, or actor – action – result: a form fitted to our brains. But in what order? Here is the abstract for Goldin-Meadow, So, Ozyurek, Mylander (2008) The natural order of events: How speakers of different languages represent events nonverbally:

To test whether the language we speak influences our behavior even when we are not speaking, we asked speakers of four languages differing in their predominant word orders (English, Turkish, Spanish, and Chinese) to perform two nonverbal tasks: a communicative task (describing an event by using gesture without speech) and a noncommunicative task (reconstructing an event with pictures). We found that the word orders speakers used in their everyday speech did not influence their nonverbal behavior. Surprisingly, speakers of all four languages used the same order in both nonverbal tasks. This order, actor–patient–act, is analogous to the subject–object–verb pattern found in many languages of the world and, importantly, in newly developing gestural languages. The findings provide evidence for a natural order that we impose on events when describing and reconstructing them nonverbally and exploit when constructing language anew.

 

So why is it humans who have developed such an amazing tool for communication? There are probably many reasons – the ability to trust other individuals, the need to replace/enhance a gestural form of communication, the abilities gained in mastering tool making, the need to care for children that were not being carried, and so on. One answer is the particular FOXP2 gene that humans have. FOXP2 is a transcription factor, that is, a gene that controls the use of many other genes. It is a very old developmental gene that helps to build the fetal heart, chest and brain, at least. All vertebrates have this gene, and a similar gene is found in other animals (like bees). Our particular form of the gene is different from the form in chimps and is closer to the form in songbirds, bats, cetaceans and, importantly, Neanderthals. What do these animals have in common? Sensorimotor coordination of sound production, plasticity of neural circuits allowing vocal patterns/skills to be learned, and the ability to handle sequences of sound. The gene appears to have started changing in humans at least 400 thousand years ago and to have reached its present form around 100 thousand years ago. Humans with a fault in this gene (a very rare condition) have severe language problems.

 

In thinking about the embodiment of language, we can use language as a stand-in for all of our culture. Language appears to be the most extensive and basic of our cultural constructions. It is probably one of the oldest, maybe only beaten by tool making. The evolution of a cultural change is much faster than the evolution of genetic changes. So although it is clear that language involved both cultural and genetic changes, the order would be a cultural change first taking advantage of existing body structure followed by the culture forcing a fine-tuning of the body through conventional evolution. This can ratchet up immense cultural creations on a minimum of genetic change. The continual embodiment of the culture is the key to its quick elaboration.

 

This is the seventh in a series on embodied cognition. There is still one to come.

Here are the first six in the series:

http://charbonniers.org/2011/06/15/embodied-cognition-posture/

http://charbonniers.org/2011/06/18/embodied-cognition-face/

http://charbonniers.org/2011/06/27/embodied-cognition-space/

http://charbonniers.org/2011/07/06/embodied-cognition-gut/

http://charbonniers.org/2011/07/15/embodied-cognition-morality/

http://charbonniers.org/2011/07/21/embodied-cognition-handedness/

 

Affirming

There is something comical about the frustrated using impossible means to an end. For example, remember Basil in Fawlty Towers punishing his car by beating it with a tree branch? Does it make the car behave better? No, it makes Basil ridiculous and us laugh; the car is unmoved and unmoving.

 

Think of the person with very low self-esteem earnestly saying some affirming phrase. What happens? The person feels ridiculous and their self-esteem is further damaged. We cannot fool ourselves; saying something that we do not believe is not going to accomplish anything.

 

I think there is a way of having productive conversations with ourselves: ask questions. Does this ring a bell? Something surprising occurs and you think ‘What was that?’ and immediately a few possibilities spring to mind. What, when, where, how, why, who, which, whether, how much, etc. – those are some of the questions we should be asking ourselves if we want to improve a situation.

 

A somewhat cartoon-like way to see this is as a bunch of workers in rooms. They can phone one another if necessary, but they also have an intercom. X has a problem and can get no help from its usual phone contacts, so it goes on the ‘blower’ and yells, “Anyone know why I feel low today?” The others do not know who it is on the blower, but they push their buttons and yell back: “Maybe we are getting a cold.” “Is it because we have not seen a good friend for days?” “We are out of money.” Now we can do things to help the situation – crawl into bed, phone a friend, make a budget and so on.

 

If instead X had gone on the blower and said, “Cheer up everyone!”, no one would have paid any attention. Or if they did, they might feel bullied and therefore uncooperative. Or they might feel even more low because they were not about to just cheer up.

 

Of course this is not meant to be taken seriously. It is not an accurate metaphor for how the brain works. It remains true that our internal voice is a help in solving problems. And it remains true that we cannot convince ourselves of what we do not believe by just saying it. We cannot diet by telling ourselves to eat less but we can diet by asking ourselves how we are going to arrange our lives so that we eat less.

 

Here is the abstract from a paper showing the danger of unconvincing affirmations:

Positive self-statements are widely believed to boost mood and self-esteem, yet their effectiveness has not been demonstrated. We examined the contrary prediction that positive self-statements can be ineffective or even harmful. A survey study confirmed that people often use positive self-statements and believe them to be effective. Two experiments showed that among participants with low self-esteem, those who repeated a positive self-statement (“I’m a lovable person”) or who focused on how that statement was true felt worse than those who did not repeat the statement or who focused on how it was both true and not true. Among participants with high self-esteem, those who repeated the statement or focused on how it was true felt better than those who did not, but to a limited degree. Repeating positive self-statements may benefit certain people, but backfire for the very people who “need” them the most.

 

Citation: Wood, J., Perunovic, W. Q. E., & Lee, J. (2009). Positive Self-Statements: Power for Some, Peril for Others. Psychological Science, 20 (7), 860-866. DOI: 10.1111/j.1467-9280.2009.02370.x

 

Syntax in the mind

ScienceDaily has a report (here) on a paper by K. Allen, S. Ibara, A. Seymour, and M. Botvinick published in Psychological Science, “Abstract structural representations of goal-directed behaviour”. They draw parallels between syntax in language and how we understand the actions of others.

There are oceans and oceans of work on how we understand languages and how we interpret the things other people say … the same principle might be applied to understanding actions. For example, if you see someone buy a ticket, give it to the attendant, and ride on the carousel, you understand that exchanging money for a piece of paper gave him the right to get on the round thing and go around in circles. (Researchers used) action sequences that followed two contrasting kinds of syntax — a linear syntax, in which action A (buying a ticket) leads to action B (giving the ticket to the attendant), which leads to outcome C (riding the carousel), and another syntax in which actions A and B both independently lead to outcome C. They were testing whether the difference in structure affected the way that people read about the actions.

People can read sentences faster if they have the same syntax as the preceding sentence. The researchers found that, similarly, people could read sentences faster if the relationship between the actions described had the same pattern as in the preceding sentence.

This indicates that readers’ minds had some kind of abstract representation of the ways goals and actions relate. … It’s the underlying knowledge structure that kind of glues actions together. Otherwise, you could watch somebody do something and say it’s just a random sequence of actions.

Here is the abstract:

Linguistic theory holds that the structure of a sentence can be described in abstract syntactic terms, independent of the specific words the sentence contains. Nonlinguistic behavior, including goal-directed action, is also theorized to have an underlying structural, or “syntactic,” organization. We propose that purposive action sequences are represented cognitively in terms of a means-ends parse, which is a formal specification of how actions fit together to accomplish desired outcomes. To test this theory, we leveraged the phenomenon of structural priming in two experiments. As predicted, participants read sentences describing action sequences faster when these sentences were presented amid other sentences sharing the same parse. Results from a second experiment indicate that the underlying representations relevant to observed action sequences are not strictly tied to language processing. Our results suggest that the structure of goal-directed behavior may be represented abstractly, independently of specific actions and goals, just as linguistic syntax is thought to stand independent of other levels of representation.

This seems a somewhat predictable notion but it is nice to have some confirmation of it.
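The ‘means-ends parse’ can be pictured as a small graph whose shape abstracts away the particular actions. Here is a toy sketch (my illustration, not the authors’ materials) that checks whether two described action sequences share the same abstract structure, which is the property their priming manipulation varied:

```python
# Compare the abstract structure of action sequences, ignoring their content.
from collections import Counter

def parse_shape(edges):
    """Keep only the pattern of which steps feed which outcomes
    (the degree structure of the graph), not the actions themselves."""
    indegree = Counter(dst for _, dst in edges)
    outdegree = Counter(src for src, _ in edges)
    nodes = {n for e in edges for n in e}
    return sorted((outdegree[n], indegree[n]) for n in nodes)

# Linear syntax: action A leads to action B, which leads to outcome C.
carousel = [("buy ticket", "give ticket"), ("give ticket", "ride carousel")]
tea = [("boil water", "steep leaves"), ("steep leaves", "drink tea")]
# Convergent syntax: actions A and B both independently lead to outcome C.
picnic = [("pack food", "have picnic"), ("invite friend", "have picnic")]

print(parse_shape(carousel) == parse_shape(tea))     # True: same parse, priming expected
print(parse_shape(carousel) == parse_shape(picnic))  # False: different parse
```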

The where, when, how and why

A recent review article by Friedemann Pulvermuller looks at what is known about the neurobiology of language, organized around the question of what recent progress has been made in the where, when, how and why of language processing in the brain. He does a masterful job and yet I am, personally, disappointed. In what way I am disappointed comes later; first, Pulvermuller’s insights.

Where: We have our old friends Wernicke’s and Broca’s areas and their surroundings, referred to as the left-perisylvian language cortex. They are very heavily connected to one another. But that is not the only where: widespread areas in both hemispheres can be involved depending on the meaning of words. For example the word “kick” causes activity in areas dealing with the legs.

When: Processing is not serial; phonological, lexical, syntactic, semantic and pragmatic processing are simultaneous. Basic understanding is gained within 250 msec of hearing/seeing, even if the utterance is not attended to. Robust activity at about 500 msec depends on a combination of a strong (loud) stimulus, attention to it, or a need for re-analysis.

How: It appears that there is no separation between perception and action but a complex interaction between them involving prediction. Bottom-up sensory activity produces a hypothesis, then a top-down action-like synthesis produces a prediction to be matched against further input.

Why: Here we have a number of important features to explain but very few answers. Basically, the brain needs to do its language work with speed, flexibility and ease of learning.

Now for my disappointment. First, there is a lot here that is similar to non-language processing, especially the predictive testing and monitoring, and this is hardly mentioned. The gulf between studying perception, cognition and action in the language sphere and in other activities is not necessary and hinders progress in both. Secondly, there is my focus of interest, consciousness, which is also hardly mentioned. In the section on timing, it would have been reasonable to note that event-related activity that dies out before about 250 msec does not reach consciousness, while activity that remains past about 300 msec does. This is probably the nature of Pulvermuller’s early and late activity, but he does not make the connection. Reaching consciousness allows the use of working memory, which would likely be essential to the re-analysis of an utterance. Further, consciousness and attention are usually closely linked. I would like to find hints as to why language seems more likely to reach consciousness than many other activities, and I found none in this article.

Aside from my perhaps unreasonable obsession with consciousness, I hope that many read the article, because it brings together the bio and the linguistics in the journal Biolinguistics.
Pulvermuller, F. (2010). Brain-Language Research: Where is the Progress? Biolinguistics, 4 (2), 255-288.

Communication between brains

Scientific American has an item by R.D. Fields about the research of U. Hasson (here). It compares the brain activity of a listener with that of a speaker.

There have been many functional brain imaging studies involving language, but never before have researchers examined both the speaker’s and the listener’s brains while they are communicating to see what is happening inside each brain. The researchers found that when the two people communicate, neural activity over wide regions of their brains becomes almost synchronous, with the listener’s brain activity patterns mirroring those sweeping through the speaker’s brain, albeit with a short lag of about one second. If the listener, however, fails to comprehend what the speaker is trying to communicate, their brain patterns decouple…

(overcoming technical problems) He asked his student to tell an unrehearsed simple story while imaging her brain. Then they played back that story to several listeners and found that the listener’s brain patterns closely matched what was happening inside the speaker’s head as she told the story.

The better matched the listener’s brain patterns were with the speaker’s, the better the listener’s comprehension, as shown by a test given afterward… there is no mirroring of brain activity between two people’s brains when there is no effective communication (except for some regions where elementary aspects of sound are detected. When there is communication, large areas of brain activity become coupled between speaker and listener, including cortical areas involved in understanding the meaning and social aspects of the story.).

Interestingly, in part of the prefrontal cortex in the listener’s brain, the researchers found that neural activity preceded the activity that was about to occur in the speaker’s brain. This only happened when the listener was fully comprehending the story and anticipating what the speaker would say next.

What an elegant demonstration of communication!
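The coupling-with-a-lag finding can be illustrated with a simple cross-correlation. Here is a minimal sketch (my own simulation, not Hasson’s analysis pipeline; the sampling rate, noise level and one-second lag are invented for the example):

```python
# Recover the speaker-to-listener lag from two simulated activity time series.
import numpy as np

rng = np.random.default_rng(1)
fs = 2.0                 # samples per second (fMRI-like, assumed)
speaker = rng.normal(size=300)
lag_samples = 2          # listener trails the speaker by 1 s at fs = 2 Hz
listener = np.roll(speaker, lag_samples) + 0.3 * rng.normal(size=300)

# Correlate the speaker signal with time-shifted copies of the listener signal.
lags = np.arange(-10, 11)
corrs = [np.corrcoef(speaker, np.roll(listener, -k))[0, 1] for k in lags]
best = lags[int(np.argmax(corrs))]
print(f"best lag: {best / fs:+.1f} s (listener relative to speaker)")
```

A negative best lag at some region would correspond to the anticipatory listener activity mentioned above.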

Without language

There is a group of people who are effectively invisible: functioning adults with no language. They are there but we have just not met them. They are born completely deaf and are not taught sign language or lip reading and, in fact, miss out not just on language but on the knowledge that language exists. Now that they are known to exist, the question arises: how? These non-linguistic adults live amongst us without being noticed (unbelievable? wild animals live amongst us in our cities, and most of us do not see them). It must be much harder to survive without language than with it, so we must accept that these people are very good at understanding and using their environments. They must be continually solving problems – successfully. No sheltered workplaces, social workers, welfare payments, special education or any other part of the net that is meant to catch the handicapped is available to them. No help is available from all the written and verbal signposts that litter our streets and airwaves. They cannot talk with others to ask or tell anything. They survive, presumably, because they are very intelligent, continuously observe the world and use their cognitive abilities to their utmost. There is a description at Neuroanthropology (Life without language) and I urge you to read it.

So can people have thought without words? Well, the evidence-based answer would seem to be, yes, but it’s not the same sort of thought. Some things appear to be easier to ‘get’ without language (such as imitation of action), other things appear to be a kind of ‘all-at-once’ intuition (such as suddenly realizing all things have names), and other ideas are difficult without language being deeply enmeshed with cognitive development over long periods of time (like an English-based understanding of time as quantitative and spatialized). In other words, language is not simply an either/or proposition, but part of a cognitive developmental niche that shapes both our abilities and (unperceived) disabilities relative to the fully cognitively matured language-less individual.

Here is my take on the difference between cognition with and without language – an absolutely speculative exercise in guesswork, so take it with a grain of salt.

Problems that involve only sensory percepts or motor actions can be solved with or without consciousness – either way, language is not needed. So our language-less man would be aware of his surroundings and his intent/action arcs like an ordinary person, and would have memory of that awareness. To this extent his consciousness and his cognition would be like ours. He would even be able to manipulate some concepts or symbols, although it is questionable how abstract these non-linguistic concepts can become. We can assume that language is not required for much of the simple communication between people. If other primates can live their lives without language, why should a human not be able to do it?

But there are two sorts of thinking that I cannot imagine a language-less person engaging in. The first is the kind that uses the cycle of: talking to yourself, being conscious of the inner voice, holding it in working memory, and using access to that memory to retrieve the idea in the inner speech. This cycle would allow two parts of the brain that are not well connected in the manner needed to exchange information through the global access available in consciousness and working memory.

The other type of thought that might be difficult for the person without language is the elaboration of abstract concepts. I believe this depends on nested series of metaphors/analogies. As the child metaphors become more distant from their concrete original parents, they become, in effect, a set of symbols related by a set of relationships; the connection to the senses and actions is lost. Without a ‘language system’ it becomes more and more difficult to handle more and more abstract symbols and relationships. I assume it would only be possible at an elementary level.

We know that handicapped people find ways around their handicaps, and so I would expect the language-less to be very resourceful in developing ways to think that bypass the need for language; this might actually make them better at some specific cognitive tasks. But there is a limit.