Embodiment


Gestures are important to communication, learning, feeling and thinking – an aspect of embodiment.

Ellen Campana reports on the Scientific American site (here) on research by Goldin-Meadow, Mitchell and Wagner-Cook.

Historically, the field has viewed concepts, the basic elements of thought, as abstract representations that do not rely on the physicality of the body. This notion, called Cartesian Dualism, is now being challenged by another school of thought, called Embodied Cognition. Embodied Cognition views concepts as bodily representations with bases in perception, action and emotion. There is much evidence supporting the Embodied Cognition view. However, until now there has never been a detailed, experimentally supported account of how embodiment through gesture plays a role in learning new concepts.

Laura Sanders also reports on this research in Science News (here)

Children whose parents speak to them more are known to have higher vocabularies. But gesturing also affects vocabulary, even when all speech effects are removed from the analysis, the researchers say. Gesturing effects go above and beyond speech effects, says Goldin-Meadow…A child’s gestures may spark more “teachable moments,” creating opportunities for verbal reinforcement of ideas. “The child points at a dog and the parent says, ‘Yes, that’s a dog,’” …Research also suggests that gesturing may encourage children to think more creatively by bringing out new ideas and improving clarity. By manipulating how much children gestured, researchers gauged the influence of gesturing. Older children told to gesture while solving math problems on a chalkboard got the answer right more frequently than children who were told not to gesture. “These gestures are not mere hand waving. Kids are extracting meaning from gestures,” says Goldin-Meadow.

ScienceDaily reports on work by A. Lleras (here)

Swinging their arms helped participants in a new study solve a problem whose solution involved swinging strings, researchers report, demonstrating that the brain can use bodily cues to help understand and solve complex problems…”In other words, by directing the way people move their bodies, we are – unbeknownst to them – directing the way they think about the problem.”…Even after successfully solving the problem, almost none of the study subjects became consciously aware of any connection between the physical activity they engaged in and the solution they found…”The results are interesting both because body motion can affect higher order thought, the complex thinking needed to solve complicated problems, and because this effect occurs even when someone else is directing the movements of the person trying to solve the problem,” Lleras said…The new findings offer new insight into what researchers call ‘embodied cognition,’ which describes the link between body and mind…

“People tend to think that their mind lives in their brain, dealing in conceptual abstractions, very much disconnected from the body,” he said. “This emerging research is fascinating because it is demonstrating how your body is a part of your mind in a powerful way. The way you think is affected by your body and, in fact, we can use our bodies to help us think.”

The location of objects


ScienceDaily has an item (here) about the work of M. McCloskey on a subject known as AH, reported in the book Visual Reflections: A Perceptual Deficit and Its Implications. She had an unusual visual perception deficit that caused her to see objects in the wrong locations.

“When AH looks at an object, she sees it clearly and knows what it is, but she’s often dramatically wrong about where it is. For example, she may reach out to grasp a coffee cup that she sees on her left, but miss it completely because it is actually on her right. And when she sees an icon at the top of her computer screen, it may really be at the bottom of the screen….Studying AH has taught us about how the brain codes where things are — some parts of the visual brain use codes very much like the x and y coordinates we learned about in algebra class… They discovered that when an object was stationary and remained in view for a least a second or two, AH often would see it in the wrong place. However, if an object was shown to her very briefly, or if the object was put in motion, she was able to see its location accurately…. These results tell us that the visual system has separate pathways, one for perceiving stable, non-moving objects, and the other for objects that are moving or otherwise changing. AH’s pathway for stable objects is abnormal, but her pathway for moving or otherwise changing objects is normal…”

As well as telling us something about how the brain handles location, this case seems to say something about how the brain creates objects. There is an implication that objects are made up of many separate aspects. It is not so much that various properties like colour are bound to an object; perhaps the perceived object is nothing but its various bindings. A number of qualia bound together = object. Worth thinking about…

Two-way arrow


Deric Bownds at Mindblog reports on a paper by Koch et al. (here) The gist of it is that stepping backward (as opposed to forward) mobilizes cognitive resources. “Thus, whenever you encounter a difficult situation, stepping backward may boost your capability to deal with it effectively.”

We are used to viewing situations with the arrow pointing from the brain to the rest of the body. But in this case the arrow points from a bodily movement to the brain. So an actor creating the scene of a mathematician solving a problem might take a step back from the blackboard, mutter and rub his beard for a moment, and then spring back towards the board to write the next line of the solution. The step back stands for a snag in the solution, the mutter and rub stand for being lost in thought, and the spring back stands for the eureka moment. It is assumed that it is the problem that makes the man step back.

The same idea of a reversed arrow also occurs in the reports of artificially made facial expressions affecting mood. Normally it is the mood that causes the facial expression but the opposite causal direction is possible. Ditto for posture.

In reality we are talking about three parts of the same brain: the part that moves the muscles, the part that perceives through the senses the behavior of some part of the body, and the part that is involved in mood or thought. These parts probably act in concert for either direction of the causal arrow; they are probably connected by feedback loops; and the difference between the two directions would be small changes in timings and signal strengths. If stepping back and cognitive control happen together, then they happen together.

Back to the actor pretending to be a mathematician – we seem to recognize the convention but it is not done in a way that normally enters consciousness. We perceive another person’s thinking but are not aware of the clues we use to make that judgment.

Not so odd a result


More following my posting, Odd result. Mindblog also posted on this research. (here). The picture painted is as follows:

First: We prepare commands for a voluntary action. This is done in the pre-motor cortex for actions that are reactions to external stimuli and the pre-supplementary motor cortex for ‘intentional’ actions. These commands are signaled to the motor cortex.

Second: The motor cortex executes the commands.

Third: Both the pre-supplementary motor cortex and the motor cortex send signals to the parietal cortex, where the sensory consequences of the motor command are predicted. The pre-supplementary motor cortex signal gives the sense of an urge to move. The motor cortex signal is used to create a prediction in the parietal cortex, and this very-near-future projection of the movement enters consciousness.

Normally, the conscious experience would be of a movement preceded or not by a sense of an urge to move.

However, things differ under the experimental conditions. If the movement is produced by direct stimulation of the motor cortex, in a way that bypasses the signals to the parietal cortex, the movement happens but there is no consciousness of it because the prediction is not made. If the parietal cortex is stimulated directly, then the urge and/or the movement are made conscious without the movement actually happening.

So the result is not so odd after all.

The question of how free free will is now becomes a question of the difference between the ways the pre-motor cortex and the pre-supplementary motor cortex initiate the creation of a motor command.

The realness of virtual reality


ScienceDaily has a report on the EU research project called Presenccia. (here) It is led by M. Slater and has contributors from neuroscience, psychology, psychophysics, mechanical engineering and philosophy.

Despite advances in computer graphics, few people would think virtual characters or objects are real. Yet placed in a virtual reality environment most people will interact with them as if they are really there….In trying to understand presence – the propensity of humans to respond to fake stimuli as if they are real – the researchers are not just gaining insights into how the human brain functions. They are also learning how to create more intense and realistic virtual experiences, opening the door to myriad applications for healthcare, training, social research and entertainment…

For one experiment they developed a virtual bar, which test subjects enter by donning a virtual reality (VR) headset or immersing themselves in a VR CAVE in which stereo images are projected onto the walls. As the virtual patrons socialise, drink and dance, a fire breaks out. Sometimes the virtual characters ignore it, sometimes they flee in panic. That in turn dictates how the real test subjects, immersed in the virtual environment, respond.

“We have had people literally run out of the VR room, even though they know that what they are witnessing is not real,” says Slater. “They take their cues from the other characters.”…

All had physical reactions, measured by their skin conductivity, perspiration and heart rate, showing that, at a subconscious level, people’s responses are similar regardless of whether what they are experiencing is real or virtual. The plausibility of the events enhances the sense that what is happening is real. Plausibility, Slater says, is therefore more important to presence than the quality of the graphics in a VR environment.

As with illusions, knowing that something is not true does not mean we do not automatically react as if it were true. I think the brain is always trying to attach plausible narratives to its perceptions, and it simply carries on doing that even in a sham environment.

An odd result


Here is an item that Karen Hopkin wrote in the Scientific American website. (here)

For every action, there’s a reaction. And for many movements we make, there’s an intention: we think about moving, and we move. Now a study published in the May 8th issue of the journal Science suggests that the experience of moving is all in your mind. Because the part of the brain that’s active when you intend to move is the same part that lets you feel like you did.
Two separate brain regions are involved in moving your body. One part provides the intention, and the other powers the actual movement. But researchers didn’t know which part let you know that you actually moved.
In the new study, scientists were working with patients undergoing surgery to remove a brain tumor. Surgeons often electrically stimulate the area around the tumor while the patient is awake and can provide feedback, so they can avoid damaging critical tissue. The scientists found that zapping one particular part of the brain made their patients feel like they wanted to move their arms, lips or tongue. And ramping up the stimulation to that spot made them feel like they’d done it. But when the team poked at the region that actually caused motion, the patients didn’t know they moved—a finding that’s oddly moving.

This is not necessarily as odd as it sounds. We don’t know what normally stimulates the region and whether strong stimulation comes from a different source than milder stimulation; in fact, we don’t know a great many other things.

Movement as the foundation


The human mind is a very elaborate structure, and so it is hard to see its foundation clearly. People have thought that the mind was a device for solving logical problems, or for living socially with others, or for predicting the future, or for creating new things… How you judge a mind depends on where you are coming from in terms of the ‘what is it for?’ question.

Let’s follow the clues from a biological point of view. The simplest thing with some weensy, teensy whiff of mind-like something is a free-living, single-celled creature that moves. It has two of the characteristics of life, mobility and irritability, among the others. These two seem to go together; the more the mobility, the more the irritability; the less of one, the less of the other. These little creatures can sense which side of them is warmer, or more lit, or has more nutrients, or has less poison. Having sensed a difference, the organism can move along the gradients, attracted by some things and repelled by others. All cells have this ability to some extent whether they are free-living or part of a multicellular organism.
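That gradient-following behaviour can be sketched in a few lines. This is my own toy illustration, not a biological model: a "cell" on a line compares the concentration a short distance to either side of itself and moves toward the higher value.

```python
def concentration(x, source=10.0):
    """Nutrient concentration falls off with distance from the source."""
    return 1.0 / (1.0 + abs(x - source))

def sense_and_move(x, step=0.5):
    """Sense the gradient across the cell's body and move up it."""
    left = concentration(x - step)
    right = concentration(x + step)
    return x + step if right > left else x - step

x = 0.0                      # start far from the nutrient source
for _ in range(50):
    x = sense_and_move(x)
# the cell ends up hovering around the source at x = 10
```

Attraction and repulsion differ only in the sign of the comparison; a poison gradient would flip `>` to `<`.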

As soon as we have multicellular organisms, there is a tendency for cells to specialize. Some take over the duty of digestion, some of respiration, some of elimination, some of reproduction and so on, including irritability and mobility. All the cells retain at least some ability at all the characteristics of cells but specialize in a narrower range. They form organs with different functions contributing to the well-being of the whole organism. The mobility function is taken over by primitive muscle cells and the irritability function is taken over by primitive sensing cells. The sense cell senses something and communicates this directly to the muscle cell. There is no more mind here than in the free-living cell – very, very, very little. The organism senses the environment and reacts to it in an absolutely fixed manner.

Now we add neurons connecting the sensing cells to the acting cells. Even the first layers of inter-neurons make a difference; they allow rhythms of action, patterns of action, sensing of patterns and simple learning. The synapses in the pathways between sensing and doing allow processing. As the number of inter-neurons increases, the sophistication of processing increases.

Layers take on different tasks. A predator would have layers of neurons that integrate sensory signals to recognize the pattern of the animal’s prey. Others might track the direction of moving prey. Others choose patterns of muscle movements, and so on. So a toad will flick out its tongue and catch a fly that flies too close. This is a very stereotypical action, almost reflexive, but it does imply an operational concept of a fly object and of a flick-of-the-tongue action. To that extent they are elements in a rudimentary model of the environment and the animal’s interaction with it.

More layers of neurons allow action that is less stereotypic. An animal that finds and tracks its prey, sneaks up on it and picks the right moment to spring, has to have a very elaborate model of itself in its environment. The animal must have a goal and a plan. It must predict outcomes. It must have an integrated picture of the world and itself in it. It must have memories of events and have learned from those memories. By this level of neuron layers between the senses and the muscles, we have consciousness although perhaps we do not have full self-consciousness until we add the need to work in a social group.

The roots of mind go back to very primitive organisms, and mind was built on the foundation of movement. What is it for? It is for successful behaviour. It’s for knowing where you stand and where you will land before you decide to jump.

Limitations on working memory

A paper has been published by a large multidisciplinary group from Sweden and Spain, using computer simulations and fMRI scans to look at working memory. The research is reported in Science Daily (here) and (here).

The working memory, which is our ability to retain and process information over time, is essential to most cognitive processes, such as thinking, language and planning. It has long been known that the working memory is subject to limitations, as we can only manage to “juggle” a certain number of mnemonic items at any one time. Functional magnetic resonance imagery (fMRI) has also revealed that the frontal and parietal lobes are activated when a sequence of two pictures is to be retained briefly in the visual working memory. However, just how the nerve cells work together to handle this task has remained a mystery…

For their project, the researchers used techniques from different scientific fields, applying them to previously known data on how nerve cells and their synapses function biochemically and electrophysiologically. They then developed, using mathematical tools, a form of virtual or computer simulated model brain. The computations carried out with this “model brain” were tested using fMRI experiments, which allowed the researchers to confirm that the computations genuinely gave answers to the questions they asked.

With their model brain, the team was able to discover why the working memory is only capable of retaining between two and seven different pictures simultaneously. As the working memory load rises, the active neurons in the parietal lobe increasingly inhibit the activity of surrounding cells. The inhibition of the inter-neuronal impulses eventually becomes so strong that it prevents the storage of additional visual input, although it can be partly offset through the greater stimulation of the frontal lobes. This leads the researchers to suggest in their article that the frontal lobes might be able to regulate the memory capacity of the parietal lobes…This finding was also replicable in follow-up experiments on humans.
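The mechanism described here, a capacity limit that emerges from mutual inhibition rather than from any explicit number of "slots", can be caricatured in a few lines. The function and all its numbers are my own invention for illustration; they are not the group's simulation.

```python
def items_retained(n_items, excitation=1.0, inhibition=0.2, boost=0.0):
    """Each active item inhibits the others; an item stays active only if
    its own excitation (plus any frontal 'boost') exceeds the summed
    inhibition from the other active items."""
    active = n_items
    while active > 0:
        net = excitation + boost - inhibition * (active - 1)
        if net > 0:
            return active   # all currently active items can be sustained
        active -= 1         # too much mutual inhibition: one item drops out
    return 0

print(items_retained(10))             # 5: capacity emerges from inhibition
print(items_retained(10, boost=0.5))  # 8: extra frontal drive raises capacity
```

Nothing in the code says "hold at most five items"; the limit falls out of the inhibition strength, and raising the top-down drive offsets it, which is the shape of the researchers' suggestion about the frontal lobes.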


Top down processing


A recent paper by Dima et al., “Understanding why patients with schizophrenia do not perceive the hollow-mask illusion using dynamic causal modeling”, in NeuroImage has been commented on by a number of bloggers. Here is a large part of the abstract:

Patients suffering from schizophrenia are less susceptible to various visual illusions. For example, healthy participants perceive a hollow mask as a normal face, presumably due to the strength of constraining top-down influences, while patients with schizophrenia do not … However the neural mechanisms underpinning this effect remain poorly understood. We used functional magnetic resonance imaging to investigate the hollow-mask illusion in schizophrenic patients and healthy controls. The primary aim of this study was to use measures of effective connectivity arising from dynamic causal modeling (DCM) to explain differences in both the perception of the hollow-mask illusion and associated differences in neural responses between patients with schizophrenia and controls, which we hypothesised would be associated with difference in the influences of top-down and bottom-up processes between the groups. Consistent with this explanation, we identified differences between the two groups in effective connectivity. In particular, there was a strengthening of bottom-up processes, and weakening of top-down ones, during the presentation of ‘hollow’ faces for the patients. In contrast, the controls exhibited a strengthening of top-down processes when perceiving the same stimuli. These findings suggest that schizophrenic patients rely on stimulus-driven processing and are less able to employ conceptually-driven top-down strategies during perception, where incoming sensory data are constrained with reference to a generative model that entails stored information from past experience.

In this illusion, a hollow (concave) mask of a face appears as a normal face (convex). Cannabis users may also be less deceived by the illusion whilst on the drug. Dima’s theory is that perception principally comprises three components: firstly, sensory input (bottom-up); secondly, the internal production of concepts (top-down); and thirdly, a control (a ‘censor’ component), which covers interaction between the two first components. In schizophrenia there is a lack of connectivity between the top-down and bottom-up processes.
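Dima’s picture of a top-down component constraining a bottom-up one resembles the general predictive idea of weighing a prior against sensory evidence. Here is a hedged log-odds sketch of that weighting; the function, weights and probabilities are my own invented illustration, not numbers from the paper.

```python
import math

def perceived_convex_prob(evidence_convex, prior_convex=0.9, top_down_weight=1.0):
    """Combine a top-down prior ('faces are convex') with bottom-up sensory
    evidence in log-odds; top_down_weight scales how strongly the prior
    constrains perception."""
    prior_lo = math.log(prior_convex / (1.0 - prior_convex))
    evidence_lo = math.log(evidence_convex / (1.0 - evidence_convex))
    log_odds = top_down_weight * prior_lo + evidence_lo
    return 1.0 / (1.0 + math.exp(-log_odds))

# A hollow mask: the sensory evidence favours 'concave' (convex prob. 0.3).
control = perceived_convex_prob(0.3, top_down_weight=1.0)  # > 0.5: sees a face
patient = perceived_convex_prob(0.3, top_down_weight=0.1)  # < 0.5: sees the mask
```

With the prior given full weight, the illusion wins even against contrary evidence; weakening the top-down weight lets the stimulus-driven reading through, which is the pattern of connectivity the paper reports in patients.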

Jonah Lehrer gives a clear description of the process. (here)

What happens then? In order to make sense of this visual cacophony, the brain has to do what cameras don’t: interpret the input. It has to parse all those lines and figure out which objects are where. As I’ve noted before, we now know that what we end up seeing is highly influenced by something called “top-down processing,” a term that describes the way cortical brain layers project down and influence (corrupt, some might say) our actual sensation. After the inputs of the eye enter the brain, they are immediately sent along two separate pathways, one of which is fast and one of which is slow. The fast pathway quickly transmits a coarse and blurry picture to our prefrontal cortex. Meanwhile, the slow pathway takes a meandering route through the visual cortex, which begins meticulously analyzing and refining the lines of light. The slow image arrives in the prefrontal cortex about 50 milliseconds after the fast image.

Why does our mind see everything twice? Because our visual cortex needs help. After the prefrontal cortex receives its imprecise picture, the “top” of our brain quickly decides what the “bottom” has seen, and begins slyly doctoring the sensory data. (It’s somewhat akin to tweaking a photo in photoshop…) Form is imposed onto the formless rubble of the V1. If these interpretations are removed, our reality becomes unrecognizable.

And (here).

The polite term for this mental ability is “top-down processing,” a term that describes the way cortical brain layers project down and influence (corrupt, some might say) our actual sensation. After the inputs of the eye enter the brain, they are immediately sent along two separate pathways, one of which is fast and one of which is slow. The fast pathway quickly transmits a coarse and blurry picture to our prefrontal cortex. Meanwhile, the slow pathway takes a meandering route through the visual cortex, which begins meticulously analyzing and refining the lines of light. The slow image arrives in the pre-frontal cortex about 50 milliseconds after the fast image.

I find ‘top-down’ a misleading name for this process. Maybe ‘gestalt’ would be better. I like ‘fast’ and ‘slow’. There are also fast and slow processes in perceiving sounds – and a theory that a disconnect between them is a cause of dyslexia. The fast process is likely to produce the vague, wide-angle outline in consciousness, while the slow process is likely to produce the vivid, detailed focus.

Baby mind

Jonah Lehrer had an article in the Boston Globe, “Inside the Baby Mind”. (here)


It describes the hyper-awareness of the young mind.

…Unlike the adult mind, which restricts itself to a narrow slice of reality, babies can take in a much wider spectrum of sensation – they are, in an important sense, more aware of the world than we are…
Gopnik argues that, in many respects, babies are more conscious than adults. She compares the experience of being a baby with that of watching a riveting movie, or being a tourist in a foreign city, where even the most mundane activities seem new and exciting. “For a baby, every day is like going to Paris for the first time,” Gopnik says. “Just go for a walk with a 2-year-old. You’ll quickly realize that they’re seeing things you don’t even notice.”…
In a sense, there’s a direct trade-off between the mind’s flexibility and its proficiency. As Gopnik notes, this helps explain why a young child can learn three languages at once but nevertheless struggle to tie his shoelaces.
But the newborn brain isn’t just denser and more malleable: it’s also constructed differently, with far fewer inhibitory neurotransmitters, which are the chemicals that prevent neurons from firing. This suggests that the infant mind is actually more crowded with fleeting thoughts and stray sensations than the adult mind. While adults automatically block out irrelevant information, such as the hum of an air conditioner or the conversation of nearby strangers, babies take everything in: their reality arrives without a filter. As a result, it typically takes significantly higher concentrations of anesthesia to render babies unconscious, since there’s more cellular activity to silence.
The hyperabundance of thoughts in the baby brain also reflects profound differences in the ways adults and babies pay attention to the world. If attention works like a narrow spotlight in adults – a focused beam illuminating particular parts of reality – then in young kids it works more like a lantern, casting a diffuse radiance on their surroundings.
“We sometimes say that adults are better at paying attention than children,” writes Gopnik. “But really we mean just the opposite. Adults are better at not paying attention.
…As Gopnik notes, this mental state – the experience of being captivated by entertainment – is, in many respects, a fleeting reminder of what it feels like to be a young child. “You are incredibly aware of what’s happening – your experiences are very vivid – and yet you’re not self-conscious at all,” she says. …Gopnik notes that a number of other situations, from Zen meditation to the experience of natural beauty, can also lead to states of awareness so intense that the self seems to disappear. …
If people could never regress into this babylike consciousness, then we’d struggle with the kind of tasks that require us to stop being self-conscious and lose ourselves in the job. Such moments are often described as “flow” activities, and can occur whenever we’re completely captivated by what we’re doing…

What an interesting picture of the mind learning about the world almost from scratch!