We do not see ourselves as others see us, at least as far as body language is concerned. The British Psychological Society Research Digest reported on work led by W. Hofmann (here). People were videotaped and the recording was shown both to the subjects themselves and to others.
…The premise of the new study is the tip-of-the-iceberg idea that what we know about ourselves is fairly limited, with many of our impulses, traits and beliefs residing below the level of conscious access. The researchers wondered whether people would be able to form a truer picture of themselves when presented with a video of their own body language… they weren’t able to.
…What was going on? Why can’t we use a video of ourselves to improve the accuracy of our self-perception? One answer could lie in cognitive dissonance – the need for us to hold consistent beliefs about ourselves. People may well be extremely reluctant to revise their self-perceptions, even in the face of powerful objective evidence.
…”When applied to the question of how people may gain knowledge about their unconscious self, the present set of studies demonstrates that self-perceivers do not appear to pay as much attention to and make as much use of available behavioural information as neutral observers,” the researchers said.
This seems a fairly general situation. We are often very surprised at how we sound on recording as well as how we look. And we are often surprised at how others assess our attitudes and motivations.
A ScienceDaily item (here) reports on a study led by R. Shulman into the energy consumption of consciousness.
…functional magnetic resonance imaging has shown that many areas of the brain, not just one or two, are recruited during tasks such as memory tests and are scant help in studying the state of being conscious. (and) the amount of energy used in such tasks is minute, about one percent of baseline energy available to the brain. “Neuroimaging has been looking at the tip of the iceberg,” Shulman said. “We looked at the rest of the iceberg.” What is the other 99 percent of energy consumption doing? Shulman and colleagues have proposed that it is needed to maintain a person in a state of consciousness. Heavily anesthetized people are known to show approximately 50 percent reductions in cerebral energy consumption.
…Properties of (consciousness), such as the high energy and the delocalized fMRI signals, allow the person to perform the interconnected activities that make up our everyday lives. Shulman suggests that these more energetic properties of the brain support human behavior and should be considered when interpreting the much weaker signals that are typically recorded during fMRI studies.
Nothing that costs that much metabolically is going to be a frill; consciousness must be important if it is that expensive. The function need not be obvious though. For example, consciousness might be a necessary part of forming memories and having memories of events is a very valuable thing.
A study by A. Horowitz reported in ScienceDaily (here) purports to show that dogs do not feel guilt.
This study sheds new light on the natural human tendency to interpret animal behavior in human terms. Anthropomorphisms compare animal behavior to human behavior, and if there is some superficial similarity, then the animal behavior will be interpreted in the same terms as superficially similar human actions. This can include the attribution of higher-order emotions such as guilt or remorse to the animal…Horowitz was able to show that the human tendency to attribute a “guilty look” to a dog was not due to whether the dog was indeed guilty. Instead, people see ‘guilt’ in a dog’s body language when they believe the dog has done something it shouldn’t have – even if the dog is in fact completely innocent of any offense….Dogs looked most “guilty” if they were admonished by their owners for eating the treat. In fact, dogs that had been obedient and had not eaten the treat, but were scolded by their (misinformed) owners, looked more “guilty” than those that had, in fact, eaten the treat. Thus the dog’s guilty look is a response to the owner’s behavior, and not necessarily indicative of any appreciation of its own misdeeds.
The problem here is that there is no control in the experiment. Would a child look guilty if they had done something they should not have but no one had noticed? I have my doubts about some children. Would a child look guilty if they had done nothing they shouldn’t have, but a parent started shouting at them and poking a finger in their face as if they had done something bad? I think I have known children who would look guilty because they assumed they had done some terrible unknown thing. And if you know what you have done wrong, you might look less guilty than if you had done something wrong and did not even know what it was.
Yes, we have to guard against anthropomorphism but we also have to guard against anti-anthropomorphism. Dogs are very social animals and so there is every reason for them to have social emotions like guilt as well as non-social ones like fear.
An item in ScienceDaily (here) reports on research by R. Desimone’s group at MIT into gamma waves associated with attention. The report uses an interesting analogy to describe the waves.
Just as our world buzzes with distractions — from phone calls to e-mails to tweets — the neurons in our brain are bombarded with messages. Research has shown that when we pay attention, some of these neurons begin firing in unison, like a chorus rising above the noise. Now, a study in the May 29 issue of Science reveals the likely brain center that serves as the conductor of this neural chorus. … neurons in the prefrontal cortex — the brain’s planning center — fire in unison and send signals to the visual cortex to do the same, generating high-frequency waves that oscillate between these distant brain regions like a vibrating spring. These waves, also known as gamma oscillations, have long been associated with cognitive states like attention, learning, and consciousness. …
To explain neural synchrony, Desimone uses the analogy of a crowded party with people talking in different rooms. If individuals raise their voices at random, the noise just becomes louder. But if a group of individuals in one room chant together in unison, the next room is more likely to hear the message. And if people in the next room chant in response, the two rooms can communicate. …
Desimone looked for patterns of neural synchrony in two “rooms” of the brain associated with attention — the frontal eye field (FEF) within the prefrontal cortex and the V4 region of the visual cortex. …
When the monkeys first paid attention to the appropriate object, neurons in both areas showed strong increases in activity. Then, as if connected by a spring, the oscillations in each area began to synchronize with one another. Desimone’s team analyzed the timing of the neural activity and found that the prefrontal cortex became engaged by attention first, followed by the visual cortex — as if the prefrontal cortex commanded the visual region to snap to attention. The delay between neural activity in these areas during each wave cycle reflected the speed at which signals travel from one region to the other — indicating that the two brain regions were talking to one another.
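The timing analysis described above — measuring the delay between activity in two regions during each wave cycle — can be illustrated with a toy lag-estimation sketch. This is not the study’s actual method; the sampling rate, frequency, and 4 ms delay are made-up values chosen only to show how a cross-correlation peak recovers a lead–lag relationship between two oscillating signals.

```python
import math

# Toy sketch (not the study's analysis): two "regions" oscillate at
# the same gamma-band frequency, but region B lags region A by a
# fixed delay. Sliding one signal against the other and finding the
# shift with maximal cross-correlation recovers that delay.

RATE = 1000          # samples per second (1 sample = 1 ms)
FREQ = 50            # Hz, within the gamma band
LAG_MS = 4           # conduction delay we plant in the signal

t = [i / RATE for i in range(200)]
a = [math.sin(2 * math.pi * FREQ * x) for x in t]
b = [math.sin(2 * math.pi * FREQ * (x - LAG_MS / 1000)) for x in t]

def xcorr(a, b, shift):
    """Mean product of a against b shifted left by `shift` samples."""
    n = len(a) - shift
    return sum(a[i] * b[i + shift] for i in range(n)) / n

best = max(range(20), key=lambda s: xcorr(a, b, s))
print(f"estimated lag: {best} ms")   # recovers the planted 4 ms delay
```

With real recordings the signals are noisy and the analysis is done per wave cycle, but the principle is the same: a consistent, nonzero best-fit lag indicates that one region leads and the other follows.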
Again, more of the Frith podcast (here) that was the subject of the last few posts.
It’s interesting that engineers have a very different way of looking at the world than psychologists. Psychologists tend to have a loop which says there’s perception, signals come in about the world, you interpret them, and then you act. The perception is the input and the act is the output. Engineers look at it completely the other way around. They say you act upon the world, you put something into the world—that’s the input, is acting upon the world. And then something happens—which is the output—that enables you to decide what to do next.
And I think this captures the much more active way of thinking about the world which engineers have. Whereas in the more passive view of psychologists you somehow have a perception from which you can work out what’s going on, it’s very much the other way around. We have to act in order to make the world send us back information which helps us to interpret what it is.
The dopamine signal is a prediction error. So, basically if something unexpectedly nice happens, then you get a shot of dopamine; and so, the dopamine neurons become more active. And if you expect something nice to happen and it does happen, there’s no response; because there’s not an error. If we expect it to happen and it doesn’t happen, then the activity goes down. So, that’s a negative error.
There may be another way to look at this – a feedback loop does not have a beginning and an end. It is circular. Three components are interacting: the sensory data, the predictive model, and the action commands. Start with the sensory data – the data arrives, it is compared with the model, and where it does not match it forces a change to the action commands. OR – start with the action commands – the commands are given, they are used to create a predictive model of what will happen, the model is compared with the resulting sensory data, and where it doesn’t match it results in changes to the action commands. OR – start with the model – keep it accurate by fine-tuning the action/prediction side and the sensing/perception side so that they match. The problem of how to understand a feedback loop is classic and there are good engineering formulas covering the subject (think op-amps, servo mechanisms and the like).
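The dopamine description above — a positive error for an unexpected reward, no error for an expected one, a negative error for an omitted one — can be sketched as a minimal update rule. This is only an illustration; the learning rate and reward values are made up, not from the podcast.

```python
# Minimal sketch of a reward-prediction-error signal.
# The "model" is a single expected-reward value; the error is
# actual minus expected, and the expectation is nudged toward
# what actually happened.

def update(expected, actual, learning_rate=0.5):
    """Return (prediction_error, new_expectation)."""
    error = actual - expected          # positive: better than expected
    return error, expected + learning_rate * error

expected = 0.0
for reward in [1.0, 1.0, 1.0, 0.0]:   # unexpected treat, then it stops
    error, expected = update(expected, reward)
    print(f"reward={reward} error={error:+.3f} expectation={expected:.3f}")
```

The first reward produces a large positive error; as the reward becomes expected the error shrinks toward zero; and when the expected reward is withheld the error goes negative — the three cases described in the quote. The loop is also circular in exactly the sense of the paragraph above: the updated expectation becomes the prediction against which the next outcome is compared.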
There is more from a podcast interview with Chris Frith (here). This time I quote his views on Bayesian perception.
…Perception is a two-way process. This is why I talk about Reverend Thomas Bayes, who produced this formula two hundred years ago. What he’s essentially pointing out is that our perception of the world depends on two things: that is to say, the sensory information that’s coming in through our eyes and ears, and our prior expectations and our knowledge of the world. And it’s the balance of these two that creates what we experience.
His formula tells you how much do you have to change your model of the world given the new evidence that’s coming in. So if you have very strong expectations, that will affect what you actually perceive. In a sense you can’t perceive things that you don’t know something about already…
And also, people who study how the brain works suggest that the brain is a Bayesian system that is concerned with making predictions, and collecting sensory evidence, and then looking at the prediction errors to decide what to do next. And certainly learning about the world these days is very much conceived in terms of a Bayesian process where you predict what’s going to happen and then you adjust your learning on the basis of these prediction errors.
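Bayes’ formula, as Frith describes it, combines prior expectations with incoming sensory evidence. Here is a minimal numerical sketch; the scenario (deciding whether an ambiguous sound came from a dog or a cat) and all the probabilities are invented purely for illustration.

```python
# Bayes' rule: posterior ∝ prior × likelihood, normalized to sum to 1.
# A strong prior expectation can outweigh ambiguous sensory evidence.

def posterior(prior, likelihoods):
    """prior: {hypothesis: P(h)}; likelihoods: {hypothesis: P(data|h)}."""
    unnorm = {h: prior[h] * likelihoods[h] for h in prior}
    total = sum(unnorm.values())
    return {h: p / total for h, p in unnorm.items()}

prior = {"dog": 0.9, "cat": 0.1}        # strong expectation: it's a dog
likelihood = {"dog": 0.3, "cat": 0.7}   # but the sound is more cat-like
print(posterior(prior, likelihood))     # "dog" still wins, ~0.79 vs ~0.21
```

Despite the evidence favouring “cat”, the posterior still favours “dog” — which is the point of the quote: with strong enough expectations, what you perceive is dominated by what you already believe.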
More of the Frith podcast (here) that was the subject of the last post. Here he deals with the separation we feel not just from the physical world but from other people. Again, we are clearly embedded in the social world.
If I could go back to the mirror (neuron) story, one of the studies I found particularly interesting, that Sarah-Jayne Blakemore did, was touch. We found that if you see someone being touched – on their face, for example – then the bit of your brain that would be activated if your face was touched lights up, even though it’s not being touched. So, in a sense you’re sharing their sensory experience by watching them. But what is interesting here is you’re not aware of this, that it’s happening in your brain.
If you are aware of it, in fact you’re a rather unusual person. There is a special form of synesthesia which a couple of people we know have, so that when they see someone being touched they actually say, ‘I can feel it on my own face.’ The interesting thing to us was that everybody actually has this happening to them, but they’re just not aware of it: we’re actually experiencing what’s happening to other people all the time, but below awareness.
…But we nevertheless experience ourselves as independent agents who can do whatever we like; we feel we’re not really influenced by what’s going on. But in fact we are.
More of the Frith podcast (here) that was the subject of the last post. Here he deals with the separation we feel from the world even though we are clearly embedded in it.
…Not only are we separate from the physical world, but we’re separate from the mental world of other people and feel ourselves as very much independent agents…The interesting question is why is this a good thing. I mean this is presumably advantageous in some evolutionary sense to have our experience like this.
And one way of looking at it is if you think about vision again, you have a picture of the world on the back of your eye – on the retina – and every time you move your eye this picture completely changes, so that completely different bits of the retina have the various objects that you’re looking at on them. And this is happening several times a second. If we were aware of that, it would drive us completely crazy. So, the brain has developed this system that stabilizes everything. It says there’s a world out there which is completely stable and doesn’t move. So by separating us out in this way it makes an experience of the world that is much easier to take… And it makes us think that perception just happens.
There is a podcast interview with Chris Frith (here) that is very interesting. I am planning to do a few posts on his ideas, starting with the difficulty in understanding the mind through introspection.
…But the way our brain works in a sense makes us tend to be dualists, so it’s very difficult for us to think about how the mental and the physical interact. And this is partly because the way the brain works is that it hides from us most of the work it does.
Something like 90% of brain activity never reaches consciousness at all. And so, we don’t know about it through introspection…
…In the ’40s when computers came into action, people thought they would be able to build electronic brains – as newspapers called them in those days – which would do the sort of things that humans could do. And they made a very big mistake, because what they thought at that time was that the easy thing for these electronic brains to do would be to perceive the world, because that’s so easy for us, whereas the difficult thing for these computers to do would be to play chess, because that’s so difficult for us. But it turns out – not that long ago – that a computer has been built that beat the best chess player in the world, but they’re still very bad at perceiving things, or reading handwriting, or anything like that.
My friend, Daniel Wolpert, has this nice example that you can make a computer that can play chess but no one has really developed a computer that’s particularly good at picking up the chess piece and moving it to the new position on the board. So, we get a very strange idea of what’s easy and what’s difficult from our introspection.