Embodied cognition - face
Long before we communicated with language, we would have communicated with our bodies, especially our faces. Everyone knows we ‘talk’ with facial expressions, but do we ‘hear’ ourselves with them too?
A long time ago, when the split brain operation was new and its effects were just starting to come out, I read a report that is still with me, although I have not been able to find a reference. The setup was that something was shown to the right hemisphere and the left hemisphere was asked a question about it. The left hemisphere guessed and gave an answer. If the answer was right, say ‘yes’ was right, then nothing further happened. But if the answer was wrong, ‘no’, then there was a period when the person appeared uncomfortable and finally said, ‘I mean yes’. What happened during the uncomfortable period was that the person frowned. The right hemisphere heard the wrong answer and produced a frown; the left hemisphere felt the frown and decided that the answer had been wrong. Caution: this is what I remember and may differ in many ways from the actual report. (If you have the original I would value a link.) At the time I thought that the person would have had to learn this trick, but now I think it probably comes quite naturally.
What is the evidence for embodied facial expressions? There is some from over 20 years ago; here is the abstract of Strack, Martin, and Stepper (1988), Inhibiting and facilitating conditions of the human smile: A nonobtrusive test of the facial feedback hypothesis:
We investigated the hypothesis that people’s facial activity influences their affective responses. Two studies were designed to both eliminate methodological problems of earlier experiments and clarify theoretical ambiguities. This was achieved by having subjects hold a pen in their mouth in ways that either inhibited or facilitated the muscles typically associated with smiling without requiring subjects to pose in a smiling face. Study 1’s results demonstrated the effectiveness of the procedure. Subjects reported more intense humor responses when cartoons were presented under facilitating conditions than under inhibiting conditions that precluded labeling of the facial expression in emotion categories. Study 2 served to further validate the methodology and to answer additional theoretical questions. The results replicated Study 1’s findings and also showed that facial feedback operates on the affective but not on the cognitive component of the humor response. Finally, the results suggested that both inhibitory and facilitatory mechanisms may have contributed to the observed affective responses.
We can communicate by this facial pathway. One person has an emotional affect, and this shows in their facial expression, which another person mimics and, by doing so, feels the emotion. An emotional state has been communicated. Here is part of the discussion from Anders, Heinzle, Weiskopf, Ethofer, and Haynes, Flow of affective information between communicating brains (2011):
In conclusion, our data support current theories of intersubjectivity by showing that affect-specific information is encoded in a very similar way in the brains of senders and perceivers engaged in facial communication of affect. Information is successively transferred from the sender’s brain to the perceiver’s brain, eventually leading to what has been called a ‘shared space’ of affect.
What does this say about communicating with ourselves? Neal and Chartrand, Embodied Emotion Perception: Amplifying and Dampening Facial Feedback Modulates Emotion Perception Accuracy (2011) tested this:
How do we recognize the emotions other people are feeling? One source of information may be facial feedback signals generated when we automatically mimic the expressions displayed on others’ faces. Supporting this “embodied emotion perception,” dampening (Experiment 1) and amplifying (Experiment 2) facial feedback signals, respectively, impaired and improved people’s ability to read others’ facial emotions. In Experiment 1, emotion perception was significantly impaired in people who had received a cosmetic procedure that reduces muscular feedback from the face (Botox) compared to a procedure that does not reduce feedback (a dermal filler). Experiment 2 capitalized on the fact that feedback signals are enhanced when muscle contractions meet resistance. Accordingly, when the skin was made resistant to underlying muscle contractions via a restricting gel, emotion perception improved, and did so only for emotion judgments that theoretically could benefit from facial feedback.
And also Havas, Glenberg, Gutowski, Lucarelli, and Davidson, Cosmetic Use of Botulinum Toxin-A Affects Processing of Emotional Language (2011):
How does language reliably evoke emotion, as it does when people read a favorite novel or listen to a skilled orator? Recent evidence suggests that comprehension involves a mental simulation of sentence content that calls on the same neural systems used in literal action, perception, and emotion. In this study, we demonstrated that involuntary facial expression plays a causal role in the processing of emotional language. Subcutaneous injections of botulinum toxin-A (BTX) were used to temporarily paralyze the facial muscle used in frowning. We found that BTX selectively slowed the reading of sentences that described situations that normally require the paralyzed muscle for expressing the emotions evoked by the sentences. This finding demonstrates that peripheral feedback plays a role in language processing, supports facial-feedback theories of emotional cognition, and raises questions about the effects of BTX on cognition and emotional reactivity. We account for the role of facial feedback in language processing by considering neurophysiological mechanisms and reinforcement-learning theory.
So our emotions are embodied in our facial expressions: emotions cause facial expressions, and facial expressions cause emotions. What does this have to do with consciousness? How emotion enters consciousness is a bit of a mystery, but we know that sensory input via the thalamus is used to create the model of reality from which aspects become part of conscious awareness. So the muscles of the face could give us (or reinforce) knowledge of our emotional state, or its intensity, through a sensory pathway.
This is the second in a series on embodied cognition.