Synchrony in social interaction

Humans are social animals. What does it mean to be social? A great many things, I am sure, would be put forward to answer that question and most would be accurate. I am intrigued by how we manage to coordinate: coordinate to communicate, coordinate in joint actions, coordinate to align goals. I cannot imagine social life without coordination between people.

Recently there was a report in Scientific American on work by Uri Hasson. He showed a coupling between the brain activity of a speaker and a listener, but only when there was verbal understanding. This work used fMRI, so it was difficult to arrange a real conversation against the noise and isolation of the equipment. We seem to be able to steer one another’s brain activity. (See the earlier post, Communication between brains.)

Now, in PLoS ONE, there is a different take on how brains coordinate: a paper by Dumas, Nadel, Soussignan, Martinerie and Garnero, Inter-Brain Synchronization during Social Interaction (here). In this case the behaviour being aligned was meaningless hand movements. Subjects could create their own movements, mimic each other or not, synchronize their movements or not, and control turn-taking, while their EEGs and behaviour were recorded. Here is the abstract:

During social interaction, both participants are continuously active, each modifying their own actions in response to the continuously changing actions of the partner. This continuous mutual adaptation results in interactional synchrony to which both members contribute. Freely exchanging the role of imitator and model is a well-framed example of interactional synchrony resulting from a mutual behavioral negotiation. How the participants’ brain activity underlies this process is currently a question that hyperscanning recordings allow us to explore. In particular, it remains largely unknown to what extent oscillatory synchronization could emerge between two brains during social interaction. To explore this issue, 18 participants paired as 9 dyads were recorded with dual-video and dual-EEG setups while they were engaged in spontaneous imitation of hand movements. We measured interactional synchrony and the turn-taking between model and imitator. We discovered by the use of nonlinear techniques that states of interactional synchrony correlate with the emergence of an interbrain synchronizing network in the alpha-mu band between the right centroparietal regions. These regions have been suggested to play a pivotal role in social interaction. Here, they acted symmetrically as key functional hubs in the interindividual brainweb. Additionally, neural synchronization became asymmetrical in the higher frequency bands possibly reflecting a top-down modulation of the roles of model and imitator in the ongoing interaction.


It is fairly clear that synchronized activity in the brain is important to the nature of thought. Different rhythms are involved in different activities. Hebb’s famous quote, “cells that fire together, wire together”, could also be, “cells that fire together, give us consciousness”. Or it could be enlarged in scope to “brains that fire together, communicate.”

…we were able to show that the alpha-mu rhythm was the most robust interbrain oscillatory activity discriminating behavioral synchrony vs. non synchrony in the centroparietal regions of the two interacting partners. The alpha-mu band is considered as a neural correlate of the mirror neuron system functioning. Specific frequencies of this band (9.2–11.5 Hz) over the right centroparietal region have been proposed as a neuromarker of social coordination.
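One standard way to quantify this kind of phase synchrony between two EEG channels is the phase-locking value (PLV): the magnitude of the average phase difference expressed as a unit complex number. The paper used its own nonlinear hyperscanning analyses, so the sketch below is only a toy illustration of the general idea; it assumes instantaneous phases have already been extracted (e.g. from alpha-mu band-passed signals), and the example phase series are fabricated.

```python
# Toy sketch of the phase-locking value (PLV), a common inter-brain
# synchrony measure. PLV = |mean of exp(i * phase difference)|: it is 1
# when two phase series keep a constant offset and near 0 when they are
# unrelated. The phase series below are synthetic, not real EEG.

import cmath
import math
import random

def phase_locking_value(phases_a, phases_b):
    """Return PLV between two instantaneous-phase time series."""
    n = len(phases_a)
    return abs(sum(cmath.exp(1j * (a - b))
                   for a, b in zip(phases_a, phases_b)) / n)

random.seed(1)
locked_a = [0.1 * t for t in range(1000)]          # steadily advancing phase
locked_b = [p + 0.5 for p in locked_a]             # same rhythm, fixed offset
unrelated = [random.uniform(0, 2 * math.pi) for _ in range(1000)]

print(round(phase_locking_value(locked_a, locked_b), 2))  # → 1.0
print(phase_locking_value(locked_a, unrelated) < 0.2)     # → True
```

In a real analysis the phases would come from a band-pass filter and Hilbert transform of each participant's EEG, computed per electrode pair and compared against surrogate data.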


This work and, I hope, future research along these lines are very important steps towards understanding our social nature.


Citation: Dumas G, Nadel J, Soussignan R, Martinerie J, Garnero L (2010) Inter-Brain Synchronization during Social Interaction. PLoS ONE 5(8): e12166. doi:10.1371/journal.pone.0012166




There is a new project (like the Human Genome Project) called the Human Connectome Project, to which the NIH has given $30 million. The hope is to map the connections in the whole human nervous system. Some new experimental procedures seem to make this possible, although it will take an enormous amount of work. It also needs the sort of powerful computer systems that now exist to build the map and make it usable – the amount of data will be enormous.

Only some corners have been mapped so far. The indications from these starts are that the brain is organized more like a flat network and less like a hierarchy. This is in line with recent thinking, moving away from top-down/bottom-up models. There seems to be no top and no bottom. Also being confirmed are the notions of loops and circuits, structures involved in feedback.

When I was young, the brain was envisaged as a telephone exchange, then as a computer; now the analogy is with the internet. The idea is that there are a large number of ways to get from any one neuron to another and back again. We are now seeing the first experimental evidence for this new way of seeing the brain.
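The internet analogy can be made concrete with a toy graph. The little network below is entirely invented, but it shows the key property: several distinct routes from one node to another, and routes back, with loops rather than a one-way hierarchy.

```python
# Toy illustration of the "internet-like" picture of brain wiring:
# a small directed graph where multiple simple paths connect any pair
# of nodes in both directions. The graph is invented for illustration.

def count_paths(graph, start, end, visited=None):
    """Count simple (non-repeating) paths from start to end."""
    visited = (visited or set()) | {start}
    if start == end:
        return 1
    return sum(count_paths(graph, nxt, end, visited)
               for nxt in graph[start] if nxt not in visited)

# Each "neuron" projects onward to several others, with loops back.
network = {
    "A": ["B", "C"],
    "B": ["C", "D"],
    "C": ["D", "A"],
    "D": ["A", "B"],
}
print(count_paths(network, "A", "D"))  # → 3 routes there
print(count_paths(network, "D", "A"))  # → 2 routes back
```

In a real connectome the nodes number in the billions, which is why the mapping effort needs the powerful computing the post mentions.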

The basic connections are made during the development of the nervous system. There is a complicated dance of migrating cells forming layers, sheets, grids and knots. Mistakes in this process cause some very serious conditions. Then, with the person born into and living in the real world, this structure of connections is tailored in each individual. Connections are lost and gained to fit a particular person’s age, history, surroundings, culture, language and so on. The original basic architecture is not lost in this tailoring and learning process. Useful links are strengthened and useless ones weakened or lost, but the structure remains.

We can assume that there will be surprises along the way to this map just as there were with the human genome project. Perhaps we will be able to see the architecture of consciousness in a few years.

The sounds we hear

A paper in Nature Neuroscience, ‘Predicting visual stimuli on the basis of activity in auditory cortices’, by Meyer, Kaplan, Essex, Webber, Damasio and Damasio, gives a picture of the role of the earliest sensory cortex in conscious experience. If a perception is in consciousness, then it can be found in the early sensory cortex, even if it is not part of the current sensory input.

Using multivariate pattern analysis of functional magnetic resonance imaging data, we found that the subjective experience of sound, in the absence of auditory stimulation, was associated with content-specific activity in early auditory cortices in humans. As subjects viewed sound-implying, but silent, visual stimuli, activity in auditory cortex differentiated among sounds related to various animals, musical instruments and objects. These results support the idea that early sensory cortex activity reflects perceptual experience, rather than sensory stimulation alone.

They discuss the evidence that this also happens in sight and touch.

There is growing evidence for an involvement of early sensory cortices in the conscious experience of sight and touch. For example, in perceptual illusions, activity in primary visual and somatosensory cortices has been shown to correspond more closely to the subjects’ visual or haptic experience than to the physical properties of the stimuli presented. Furthermore, when subjects imagine visual objects in the complete absence of perceptual input, primary visual cortices are activated and appear to specifically represent the contents of the subjects’ visual experience. Activity in primary visual cortices has also been shown to correlate with stimuli that are kept active in working memory. Although previous studies have established that early auditory cortices can be activated during auditory imagery, auditory hallucinations and the perception of implied sound, the content specificity of such activations has not yet been demonstrated. Our findings suggest that, just as in the visual and somatosensory modalities, activity at the earliest stages of cortical auditory processing correlates specifically with the experience of sound reported by the subjects, rather than with the actual auditory environment alone, as the latter was entirely silent during the presentation of the video clips.

So does this mean that we are closer to qualia? No matter why a sight, touch or sound is in consciousness (current perception, imagining, memory, hallucination) its footprint is found in the early sensory cortex where we would expect only signals just starting their perceptual journey.
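The multivariate pattern analysis behind these findings has a simple core logic: learn an average activity pattern per category from labeled trials, then label a new pattern by its nearest learned pattern. The sketch below uses a toy nearest-centroid rule on fabricated "voxel" vectors; real MVPA works on fMRI data and typically uses cross-validated classifiers, not this minimal rule.

```python
# Toy sketch of the logic behind multivariate pattern analysis (MVPA):
# average the activity pattern ("centroid") for each sound category,
# then classify a new pattern by its nearest centroid. The vectors are
# fabricated stand-ins for voxel activations.

def train_centroids(trials):
    """trials: {category: [pattern, ...]} -> {category: mean pattern}"""
    centroids = {}
    for cat, patterns in trials.items():
        n = len(patterns)
        centroids[cat] = [sum(vals) / n for vals in zip(*patterns)]
    return centroids

def classify(centroids, pattern):
    def dist(centroid):
        return sum((a - b) ** 2 for a, b in zip(centroid, pattern))
    return min(centroids, key=lambda cat: dist(centroids[cat]))

trials = {
    "animal":     [[1.0, 0.1, 0.0], [0.9, 0.2, 0.1]],
    "instrument": [[0.1, 1.0, 0.1], [0.0, 0.9, 0.2]],
}
model = train_centroids(trials)
print(classify(model, [0.8, 0.15, 0.05]))  # → animal
```

The striking point of the study is that patterns like these were decodable from early auditory cortex while the subjects watched silent video, i.e. with no auditory input at all.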

Reverse engineering the brain

A few months ago there was an article by T. Sejnowski in the Scientific American Mind Matters (here). The question was how long it will take to be able to build a brain resembling our own. He talked about the two front runners, who differ in their approach but share the same time estimate: about a decade for the first reverse-engineered brains.

The backdrop for the debate is one of dramatic progress. Neuroscientists are disassembling brains into their component parts, down to the last molecule, and trying to understand how they work from the bottom up. Researchers are racing to work out the wiring diagrams of big brains, starting with mice, cats and eventually humans, a new field called connectomics. New techniques are making it possible to record from many neurons simultaneously, and to selectively stimulate or silence specific neurons. There is an excitement in the air and a sense that we are beginning to understand how the brain works at the circuit level. Brain modelers have so far been limited to modeling small networks with only a few thousand neurons, but this is rapidly changing.

There is a dispute between Dharmendra Modha of IBM and Henry Markram of the École Polytechnique Fédérale de Lausanne’s Blue Brain Project. The two groups are the front runners but differ in philosophy.

Both groups are simulating a large number of model neurons and connections between them. Both models run much, much slower than real time. The neurons in Modha’s model only have a soma — the cell body containing the cell nucleus — and simplified spikes. In contrast, Markram’s model has detailed reconstructions of neurons, with complex systems of branching connections called dendrites and even a full range of gating and communication mechanisms such as ion channels. The synapses and connections between the neurons in Modha’s model are simplified compared to the detailed biophysical synapses in Markram’s model. These two models are at the extremes of simplicity and complex realism.

This controversy puts into perspective a tension between wanting to use simplified models of neurons, in order to run simulations faster, versus including the biological details of neurons in order to understand them. Looking at the same neuron, physicists and engineers tend to see the simplicity whereas biologists tend to see the complexity. The problem with simplified models is that they may be throwing away the baby with the bathwater. The problem with biophysical models is that the number of details is nearly infinite and much of it is unknown. How much brain function is lost by using simplified neurons and circuits?
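The "simple" end of this spectrum can be shown in a few lines. Below is a leaky integrate-and-fire point neuron: a soma-only unit with stylized spikes, in the spirit of the simplified approach, though the parameters here are purely illustrative and not taken from either group's model. The biophysical end would replace this with branching dendrites, ion channels and detailed synapses.

```python
# Minimal sketch of a leaky integrate-and-fire (LIF) point neuron, the
# "simple" extreme of the modeling spectrum: a soma-only unit with an
# instant reset standing in for a spike. Parameters are illustrative.

def simulate_lif(current, dt=0.1, tau=10.0, v_rest=-65.0,
                 v_thresh=-50.0, v_reset=-65.0):
    """Integrate dV/dt = ((v_rest - V) + I) / tau; return spike times (ms)."""
    v = v_rest
    spikes = []
    for step, i_in in enumerate(current):
        v += dt * ((v_rest - v) + i_in) / tau
        if v >= v_thresh:           # threshold crossing -> stylized spike
            spikes.append(step * dt)
            v = v_reset             # instant reset, no biophysical detail
    return spikes

# Constant drive for 200 ms produces a regular spike train.
spike_times = simulate_lif([20.0] * 2000)
print(len(spike_times) > 1)  # → True
```

Everything biological has been abstracted away except the leak and the threshold, which is exactly the trade-off the controversy is about: such units are fast to simulate in huge numbers, but no one knows how much brain function is lost in the simplification.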

I think it will take both types of simulation to understand consciousness and it will need simulation of the mid-brain as well as the cortex and the rest of the fore-brain. The hind-brain may even need to be included.

Addition – Reverse engineering rebuttal

It seems that at present there is a discussion about expecting reverse engineering of the brain within a decade. Ray Kurzweil, a futurist, gave a speech at the Singularity Summit predicting that the brain would be reverse engineered in about 10 years. PZ Myers in Pharyngula has attacked Kurzweil’s logic (here). Myers is right, in my opinion, about Kurzweil’s ignorance of biology; he seems to be in some other world and not worth listening to. However, Myers himself expresses doubt about when reverse engineering will produce results; he feels 10 years is just wrong. Markram and Modha, who are attempting it by different methods, both hope to be somewhere significant in 10 years. They are not making foolish assumptions like Kurzweil. They are not starting with the genetic code but with studies of architecture, the behavior of ion channels and the like. Myers’ remarks do not touch their efforts as far as I can see.



Some avoid the word ‘communication’ because it is in some ways too vague and in others too specific. I like it because it does not make arbitrary boundaries between different modes of communication, reasons for engaging in it, or content. What is specific about communication is that it takes at least two to communicate; it is not about what is said, written, illustrated, signed, sung or whatever, nor about what is heard, read, seen and so on; it is about an exchange between two minds.

I regularly read a blog by E. Bolles called Babel’s Dawn (here). He reviews many books and articles on the origins of language but always comes back to his favourite idea, that there is a triangle of joint attention involving the speaker, listener and topic. Words pilot attention to topics, a bit like pointing a finger but more complex. I like this idea.

This fits with an idea that is a favourite of mine. I see a word as having no meaning by itself; its meaning is a result of its relationship to other words. There are a few words, proper names of places, people and so on, that can take their meaning from actually pointing to something. Other words point to concepts and archetypes in the mind. And in turn, the concepts get their meaning from their connection with other concepts, a web of actual connections that by their relationships define their meanings. This is why we seem to rely on metaphor so heavily. If we have a group of entities (words, concepts, things) and they are joined by lines (relationships, actions) to form a structure that we know and understand, we can re-use that structure. As an example, we have place A, place B, moving from A to B, and the thing moving, so that the structure is a journey. This can be elaborated with the reason/goal of the move, the path, events along the way, and other elements/relationships. A can be thought of as the start and B as the destination. We can re-use the metaphor for a hike, a drive, a train journey, a boat ride, and every different use adds depth and complexity to the metaphor. Now if I want to steer joint attention to the end point of a plan, I can say, ‘think about our goal and how close we have come to it’. We can go further and use the structure for non-journey ideas: a career, a life, a task and so on. Using words to pilot joint attention only works because we share a large number of elaborate metaphors. We learn our language(s) and our culture’s metaphors and meaning structures, and because we share these with others, we can (almost literally) point to something in someone else’s mind. This is an amazing thing – instead of using my finger to point to a tree in the yard, I can use a word to point at a tree concept in your head.

Another idea that I find interesting is the synchronization of two people in communication. We do not wait until someone stops speaking to parse the meaning of what they have said. In really successful communication, we start timing our own thinking to the same timing pattern as the speaker. We predict what the other is going to say ahead of hearing it. We take in their whole person to understand what they are saying: voice, face, posture and movement as well as words. We do not communicate in just words but with our whole selves. Apparently this synchronization, prediction and sharing of concepts can be vaguely made out in fMRI scans, and these patterns break down as soon as the two people lose understanding of what the other is saying. We are actually able to share joint attention to a topic that is a concept in our brains.

There is a question often raised – does our language reflect the nature of our thoughts, or does thought reflect the nature of our languages? I cannot think of this question in any other way than that the structure and processes of the brain dictate the general form of language (joint attention, metaphor, synchrony and so on). But the shared culture is what makes communication work. We have to want to communicate, and we have to share a language and very large numbers of metaphors, before it clicks. Sharing a culture has a large effect on what we think (but not on how we go about the thinking).

So now back to the topic of this blog. It seems that we communicate with ourselves as well as others and we do at least a fair amount of this internal communication through consciousness. The production of speech is not conscious, the perception of speech is not conscious. We have no feeling for how either of these things is done – it is opaque. But the meaning, the high level representation of the voice and words are usually made conscious. This may be because the formation of a grammatical utterance is quite complicated so that working memory is required to hold parts of the stream while other parts are prepared or processed. Anything that needs working memory is extremely likely to be made conscious and transferred to short-term memory. I cannot see how most speech could be produced or understood without making use of working memory. Short-term memory would also be needed for any utterance longer than a simple sentence or phrase; for a conversation we need to know what went before.

I assume that many (maybe most) animals have concepts, communication, working memory and consciousness. But over the last few hundred thousand years, humans have fashioned from these common attributes, the marvel of verbal communication. Again Babel’s Dawn has a constant idea that the reason language was acquired by humans and not other animals is in the nature of our societies. Put quite simply, we have come to have trust in sharing information with our fellows. Playing with language is as dangerous as playing with fire or wolves, but the gains are just as great, probably greater.


In the spring, Neurophilosophy had a post on the research of D. Havas into the effects of botox on emotion. (here).

Do you smile because you’re happy, or are you happy because you are smiling? Darwin believed that facial expressions are indeed important for experiencing emotions…Botox, which is used by millions of people every year to reduce wrinkles and frown lines on the forehead, works by paralyzing the muscles involved in producing facial expressions…it impairs the ability to process the emotional content of language, and may diminish the quality of emotional experiences…

Havas recruited 40 women for the new study, all of whom were seeking first-time botox injections as a cosmetic treatment for frown lines on the forehead. These participants were asked to read sentences describing happy, sad or emotionally neutral situations. Immediately afterwards, they were taken to the physician, who gave them a single injection of botox into the corrugator supercilii, or “frown” muscle. (Botox acts by inhibiting the release of the neurotransmitter acetylcholine from motor neurons, leading to temporary muscle paralysis 24-48 hours later. Typically, the procedure is repeated after 3-4 months; with time, the muscles may atrophy, or waste away, through disuse.) Two weeks after the injection, the participants returned to lab to read another set of similar sentences.

The researchers found that botox slowed the reading of the sentences containing sad emotional content, which, as the earlier work showed, would normally cause the frown muscle to contract. The reading time for the happy and neutral sentences was the same in both sessions. The researchers assume that the increase in reading time means that paralysis of the frown muscle hindered the participants’ understanding of the emotional content of the sad sentences. They also argue that their findings support the hypothesis that feedback from the muscles involved in producing facial expressions is critical in regulating emotional experiences.

…news stories completely overlook the more profound implication of the results – that by paralyzing the muscles involved in producing facial expressions, botox may actually diminish the experience of emotion in those who use it.

This is another piece of evidence that consciousness is primarily concerned with perception. In this case an emotional state is registered primarily through the perception of movement in the facial muscles.

Space perception is hard-wired

Science Daily has a report on investigation of animal sense of direction. (here) R. Langston found that baby rats have a space map before they can see or navigate outside the nest.

The research team implanted miniature sensors in rat pups before their eyes had opened (and thus before they were mobile). That enabled the researchers to record neural activity when the rat pups left the nest for the first time to explore a new environment.

The researchers were not only able to see that the rats had working navigational neurons right from the beginning, but they were also able to see the order in which the cells matured.

The first to mature were head direction cells. These neurons are exactly what they sound like — they tell the animal which direction it is heading, and are thought to enable an internal inertia-based navigation system, like a compass. “These cells were almost adult-like right from the beginning,” Langston says.

The next cells to mature were the place cells, which are found in the hippocampus. These cells represent a specific place in the environment, and in addition provide contextual information — perhaps even a memory — that might be associated with the place. Last to mature were grid cells, which provide the brain with a geometric coordinate system that enables the animal to figure out exactly where it is in space and how far it has travelled. Grid cells essentially anchor the other cell types to the outside world so that the animal can reliably reproduce the mental map that was made last time it was there.

It has long been assumed by many that our 3D space perception is hard-wired and not gained from experience of space. This and similar research seems to confirm that assumption.

Willingness and Willfulness

There is a piece in the Scientific American by W. Herbert on the work of I. Senay on willingness versus willfulness (here).

Willingness is a core concept of addiction recovery programs—and a paradoxical one. Twelve-step programs emphasize that addicts cannot will themselves into healthy sobriety—indeed, that ego and self-reliance are often a root cause of their problem. Yet recovering addicts must be willing. That is, they must be open to the possibility that the group and its principles are powerful enough to trump a compulsive disease…(I. Senay) figured out an intriguing way to create a laboratory version of both willfulness and willingness—and to explore possible connections to intention, motivation and goal-directed actions. In short, he identified some key traits needed not only for long-term abstinence but for any personal objective, from losing weight to learning to play guitar.

Senay did this by exploring self-talk. Self-talk is just what it sounds like—that voice in your head that articulates what you are thinking, spelling out your options and intentions and hopes and fears, and so forth. It is the ongoing conversation you have with yourself. Senay thought that the form and texture of self-talk—right down to the sentence structure—might be important in shaping plans and actions. What’s more, self-talk might be a tool for exerting the will—or being willing….

(in a priming experiment) those primed with the interrogative phrase “Will I?” expressed a much greater commitment to exercise regularly than did those primed with the declarative phrase “I will.” What’s more, when the volunteers were questioned about why they felt they would be newly motivated to get to the gym more often, those primed with the question said things like: “Because I want to take more responsibility for my own health.” Those primed with “I will” offered strikingly different explanations, such as: “Because I would feel guilty or ashamed of myself if I did not.” This last finding is crucial. It indicates that those with questioning minds were more intrinsically motivated to change.

This idea that the way we talk to ourselves is the way we establish motivation is extremely interesting. It fits with a notion that some things are accomplished by passing through consciousness and therefore being generally accessible to the whole brain.

Communication between brains

The Scientific American has an item by R.D. Fields about the research of U. Hasson (here). It compares the brain activity of a listener with that of a speaker.

There have been many functional brain imaging studies involving language, but never before have researchers examined both the speaker’s and the listener’s brains while they are communicating to see what is happening inside each brain. The researchers found that when the two people communicate, neural activity over wide regions of their brains becomes almost synchronous, with the listener’s brain activity patterns mirroring those sweeping through the speaker’s brain, albeit with a short lag of about one second. If the listener, however, fails to comprehend what the speaker is trying to communicate, their brain patterns decouple…

(overcoming technical problems) He asked his student to tell an unrehearsed simple story while imaging her brain. Then they played back that story to several listeners and found that the listener’s brain patterns closely matched what was happening inside the speaker’s head as she told the story.

The better matched the listener’s brain patterns were with the speaker’s, the better the listener’s comprehension, as shown by a test given afterward… there is no mirroring of brain activity between two people’s brains when there is no effective communication (except for some regions where elementary aspects of sound are detected). When there is communication, large areas of brain activity become coupled between speaker and listener, including cortical areas involved in understanding the meaning and social aspects of the story.

Interestingly, in part of the prefrontal cortex in the listener’s brain, the researchers found that neural activity preceded the activity that was about to occur in the speaker’s brain. This only happened when the listener was fully comprehending the story and anticipating what the speaker would say next.
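The coupling-with-a-lag that the study describes can be pictured with a very simple analysis: correlate a "listener" time series against time-shifted versions of a "speaker" series and pick the lag with the highest correlation. The sketch below uses synthetic signals purely to show the idea; the real work used fMRI time courses and more careful statistics.

```python
# Hypothetical sketch of a lagged-coupling measure: slide the listener's
# signal against the speaker's and find the lag of maximum Pearson
# correlation. The signals here are synthetic stand-ins.

import math

def pearson(x, y):
    """Pearson correlation between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x) ** 0.5
    vy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (vx * vy)

def best_lag(speaker, listener, max_lag):
    """Return the lag (in samples) at which listener best tracks speaker."""
    scores = {}
    for lag in range(max_lag + 1):
        n = len(speaker) - lag
        scores[lag] = pearson(speaker[:n], listener[lag:lag + n])
    return max(scores, key=scores.get)

speaker = [math.sin(t / 5.0) for t in range(200)]
listener = [0.0] * 3 + speaker[:-3]    # listener echoes speaker 3 samples late
print(best_lag(speaker, listener, 10))  # → 3
```

In the study the best match for most regions came with the listener lagging the speaker by about a second, while in parts of prefrontal cortex the listener actually led, which is the anticipation result.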

What an elegant demonstration of communication!

Brain’s electrical fields

The electrical signals traveling along neurons create a surrounding electrical field, which adds to the fields created by the activity of other neurons. The EEG trace is the result of these combined fields. Scientists have been attempting to obtain more and more information about the processes of the brain by studying these fields. But what about the opposite effect – do electrical fields affect the activity of neurons?

Scientific American has an article by F. Jabr (here) on research by D. McCormick showing that there is indeed a feedback loop.

A few neurons are like individuals talking to each other and having small conversations. But when they all fire in unison, it’s like the roar of a crowd at a sports game… They surrounded the cortical sample with an electric field that approximated the size and polarity of the fields produced by an intact ferret brain during slow-wave sleep to create an exaggerated version of the exact feedback loop they were investigating. Essentially, they enveloped the brain slice in an echo of itself.
When the team applied this electric field echo, they found it amplified and synchronized the neural activity in the brain slice. The field didn’t create disorder—it increased harmony. The “roar” of the brain slice became louder and more regular. “It’s kind of like if you were cheering at a football game and someone played over the speaker the sound of the crowd cheering and you started responding to that, too, cheering along with both the real crowd and the speaker playback,” McCormick explains. “It’s a kind of reinforcing feedback.”
Not only did the researchers show that this positive feedback facilitated the synchronous slow waves of electrical activity in the slice of ferret brain, they also showed that an electric field of the same strength, but opposite polarity, disrupted its synchronous neural activity.

This is not a surprising result; it is to be expected that an electrical field would affect an electrical current. It also appears that the brain responds to magnetic fields and I presume also produces them. It is also clear that neuron activity is affected by various chemical gradients. This should put paid to the idea that the brain is digital. Many aspects of communication in the brain vary continuously (like an electrical field does) and everything is not ‘fire or don’t fire’ (like a digital computer’s 1 or 0 and nothing in between). Computers are a very, very limited analogy for biological brains.