Science Daily has an item (here) about a new research project.
A team of University of Hertfordshire philosophers led by Professor Paul Coates and Dr Sam Coleman is conducting a three-year research project to explore conscious experiences that contemporary science still cannot explain.
Funded with £380,000 from the Arts and Humanities Research Council, and involving the collaboration of some of the world's leading philosophers and cognitive scientists, the project will attempt to answer the mystery of consciousness.
Professor Coates explains: "When we see a sunset or hear a symphony our sense organs, brains and bodies are moved in ways that are well understood by the physical and biological sciences. But during such experiences we also enjoy distinctive forms of conscious awareness. Yet this undeniable fact about our conscious lives is stubbornly resistant to scientific understanding. How is it even possible for purely physical brain activity to produce conscious experience? How do the qualities that manifest themselves in experience relate to the very different properties that are referred to in scientific descriptions of the physical world?"
To find the answers to these questions Professor Coates and Dr Coleman and their team will re-examine our fundamental concepts relating to consciousness and physical reality. They will look at experimental results in psychology and brain science and at phenomenology and other forms of philosophical enquiry.
The team will also include Professor Shaun Gallagher, Philosophy, University of Hertfordshire and Professor Tony Marcel, Psychology, University of Hertfordshire.
Is this an attempt to help scientists ask the right questions, something I would applaud, or is it an attempt to put a fence around the questions that should not be answered by science, something I would boo? Time will tell.
An article in the New Scientist by Linda Geddes (here) talks about what gets our attention: surprise. Pierre Baldi and Laurent Itti have been investigating surprise.
A dominant theory from the 1950s has it that the amount of attention we pay to an object or event is linked to the volume of information our brains need to form an understanding of it. For example, our attention should hover over intricate patterns longer than over a plain surface. To test their hypothesis (that it was surprise rather than volume of information that prompted attention), the pair developed a computer model which simulated a population of visual neurons “watching” video clips, just as your brain would watch it through the eye’s retina. They used the model to analyse short video clips and mark which regions of the videos it considered the most surprising – which they rated in wows. “Something that is very surprising has a high wow content,” says Baldi.
When they showed the videos to human volunteers, their eye movements correlated with what the computer had rated as being worthy of attention. “We found that human observers did indeed look at surprising things while watching TV, much more than they looked at information-rich objects,” says Itti. The pair formalise this in a Bayesian theory of surprise, in which an event is surprising if it changes our beliefs.
This idea fits with the notion that we predict the near future and compare our prediction with the actual events. Surprises show deficiencies in our model of reality. These deficiencies would need to be corrected.
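Baldi and Itti's measure of surprise is Bayesian: an event's "wow" value is how far it moves the observer's beliefs, commonly quantified as the KL divergence between posterior and prior. Here is a minimal sketch with a toy Beta-Bernoulli observer; this is my own illustration of the general idea, not their video model:

```python
from math import lgamma

def digamma(x):
    # Digamma via a symmetric numerical derivative of lgamma
    # (accurate enough for this toy example).
    h = 1e-6
    return (lgamma(x + h) - lgamma(x - h)) / (2 * h)

def kl_beta(a1, b1, a2, b2):
    """KL( Beta(a1,b1) || Beta(a2,b2) ) in nats."""
    return (lgamma(a2) + lgamma(b2) - lgamma(a2 + b2)
            - lgamma(a1) - lgamma(b1) + lgamma(a1 + b1)
            + (a1 - a2) * digamma(a1)
            + (b1 - b2) * digamma(b1)
            + (a2 - a1 + b2 - b1) * digamma(a1 + b1))

def wow(prior_a, prior_b, observation):
    """Bayesian surprise: KL between posterior and prior after one binary datum."""
    post_a = prior_a + observation
    post_b = prior_b + (1 - observation)
    return kl_beta(post_a, post_b, prior_a, prior_b)

# An observer who strongly expects 1s (prior Beta(9,1)) is far more
# surprised by a 0 than by yet another 1.
print(wow(9, 1, 1))  # ~0.005 nats: expected event, low wow
print(wow(9, 1, 0))  # ~0.37 nats: unexpected event, high wow
```

The point of the sketch is that surprise is measured relative to the observer's beliefs, not by the raw information content of the stimulus, which is exactly the distinction the experiment tested.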
The Research Digest Blog had a posting (here) titled A spontaneous experience of a sensed presence caught on EEG.
A patient being treated after a motor accident had experiences of a presence when she was alone, and happened to be having her EEG recorded when one such episode occurred.
It began with a feeling of an electric shock in her right hand, was followed by her arms and hands feeling icy cold, then vibrations went through her body, before she experienced the feeling that a man was in the room with her, even though she was actually alone.
A look at the EEG scans showed that a burst of electrical activity, similar to that observed in an epileptic seizure, occurred in her left temporal lobe at approximately the same time that she reported the sensed presence on her right-hand side.
“Although over the last 20 years we have assessed hundreds of patients who reported the emergence of a sensed presence … this is the first time the reports of a strong sensed presence and related sensations occurred spontaneously while our screening electroencephalographic measurements were in progress,” said Persinger and his coauthor Sandra Tiller.
This seems to be another of those fringe qualia (the "I am not alone" feeling) that populate consciousness. We can only see that it is there when it is inappropriately there.
The Mind Hacks blog (here) led me to a review of the video game Mirror's Edge by Clive Thompson (here). What he discusses is what he calls a proprioception hack. Proprioception is the sense of where your body is in space, and this game appears to pull consciousness into the action.
Clive Thompson's description of playing the game:
The hot new videogame is a sort of “first-person runner”: You’re a courier who travels across the rooftops of a locked-down, police-state city, delivering black-market messages by using acrobatic feats of parkour. You’re constantly leaping over gaps 40 stories in the air, tightrope-walking along suspended pipes and vaulting up walls like a ninja.
The upshot is that these small, subtle visual cues have one big and potent side effect: They trigger your sense of proprioception. It’s why you feel so much more “inside” the avatar here than in any other first-person game. And it explains, I think, why Mirror’s Edge is so curiously likely to produce motion sickness. The game is not merely graphically realistic; it’s neurologically realistic.
Indeed, the sense of physicality is so vivid that, for me anyway, the most exhilarating part of the game wasn’t the obvious stuff, like leaping from rooftop to rooftop. No, I mostly got a blast from the mere act of running around. I’ve never played a game that conveyed so beautifully the athletically kinetic joys of sprinting: of jetting down alleyways, racing along rooftops and taking corners like an Olympian. It’s an interesting lesson of game physics: When you feel like you’re truly inside your character, speed suddenly means something.
Vaughan gives some explanation:
In other words, it remaps your body schema so that you feel more fully that you are the character in the game. When your character runs fast, you feel it is you running fast. When your character jumps across between two buildings and looks down, you feel a moment of sickening vertigo. Perhaps this is because when we automatise an action such as a run, a jump or a roll, part of the process of making it automatic is losing the experience of the component parts. So, when a computer game feels real, it is because real feels like nothing: we just ask our brains to 'jump' and the motor system sorts out the details without any deep experience of how the jump is performed.
This game shows that our conscious experience of action amounts to the elaboration of the combination of an intent and sensory feedback with the correct timing.
There is a new Research Report by WP Banks and EA Isham, We Infer Rather than Perceive the Moment We Decided to Act (here). I have divided my look at this paper in two; this is the second part, dealing with their conclusions.
On the contrary, we propose that the reported W (reported time when action was consciously initiated) is not uniquely determined by any generator of the RP (EEG readiness potential). Rather, W is the time participants select on the basis of available cues, chief among them being the apparent time of response. Eagleman (2004) suggested that the critical cue for judgment of intention is perception of the response, thus reversing the assumed causal relation between intention and action. Here, we report an explicit test of this hypothesis.
A seminal experiment found that the reported time of a decision to perform a simple action was at least 300 ms after the onset of brain activity that normally preceded the action. In Experiment 1, we presented deceptive feedback (an auditory beep) 5 to 60 ms after the action to signify a movement time later than the actual movement. The reported time of decision moved forward in time linearly with the delay in feedback, and came after the muscular initiation of the response at all but the 5-ms delay. In Experiment 2, participants viewed their hand with and without a 120-ms video delay, and gave a time of decision 44 ms later with than without the delay. We conclude that participants’ report of their decision time is largely inferred from the apparent time of response. The perception of a hypothetical brain event prior to the response could have, at most, a small influence.
We do not take our findings to indicate that conscious intention has no role in behavior, but rather that the intuitive model of volition is overly simplistic: it assumes a causal model by which an intention is consciously generated and is the immediate cause of an action. Our results imply that the intuitive model has it backwards; generation of responses is largely unconscious, and we infer the moment of decision from the perceived moment of action.
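On the Banks and Isham account, W is inferred by backdating from the apparent moment of the response, so a deceptive feedback delay should push the reported W forward by roughly the same amount. A toy sketch of that logic (the 120 ms backdating interval and the response time are made-up illustrative numbers, not the paper's data):

```python
# Toy model of the inference account: the reported decision time W is
# obtained by backdating from the *apparent* moment of the response.
# All numbers are illustrative, not data from Banks and Isham.

def reported_w(actual_response_ms, feedback_delay_ms, backdating_ms=120):
    """W = apparent response time minus a fixed inferred interval."""
    apparent_response = actual_response_ms + feedback_delay_ms
    return apparent_response - backdating_ms

delays = [5, 20, 40, 60]  # feedback delays like those in Experiment 1
for d in delays:
    w = reported_w(1000, d)
    print(f"feedback delay {d:2d} ms -> reported W at {w} ms")
```

The sketch reproduces the qualitative signature the paper reports: W moves forward linearly with the feedback delay, because the delay shifts the cue from which W is inferred.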
There is a new Research Report by WP Banks and EA Isham, We Infer Rather than Perceive the Moment We Decided to Act (here). I am going to divide looking at this paper in two: first is what they have to say about previous research and then what they report on their experiments.
The question of free will has been debated since antiquity. The debate has traditionally been conducted only in theoretical or logical terms, but has recently been given empirical content by the research of Libet, Gleason, Wright, and Pearl (1983). They made the question of volition a neurophysiological one, and thus opened it to scientific investigation.
Kornhuber and Deecke (1965) found that a simple voluntary act such as pressing a key was preceded by an electroencephalographic (EEG) component known as the “readiness potential” (RP) that began 500 ms to about 1,000 ms before the action. Libet et al. (1983) asked participants to monitor a spot of light moving around a clock face and to report the location of the spot when the action was consciously initiated. The reported time, termed W, was approximately 200 ms before the response. This time of decision implies that neurological preparation for the action began about 300 to 800 ms before the person consciously made the decision to act. Conscious will would thus seem to be a latecomer in the process of choice, rather than the instigator of choice.
When a simple measurement challenges bedrock intuitions about free will, it is no surprise that the measurement itself would be questioned.
These criticisms have been answered by Banks, Pockett, Miller and Haggard, and it appears that Libet's clock method is viable for timing the awareness of decisions: the reported time when action was consciously initiated (W) is not an artifact of experimental apparatus or procedure.
So what does W mean? Researchers have reasonably searched for a cluster of neural events corresponding to W among those generating the RP.
In the second post, I look at a different theory investigated by Banks and Isham.
I'm still reading through the replies to the Edge 2009 question, "What will change everything? What game-changing scientific ideas and developments do you expect to live to see?" (here). Lera Boroditsky feels that the big change will be in epistemology. But first I will share her great joke.
There is an old joke about a physicist, a biologist, and an epistemologist being asked to name the most impressive invention or scientific advance of modern times. The physicist does not hesitate: “It is quantum theory. It has completely transformed the way we understand matter.” The biologist says, “No. It is the discovery of DNA; it has completely transformed the way we understand life.” The epistemologist looks at them both and says, “I think it’s the thermos.” The thermos? Why on earth the thermos? “Well,” the epistemologist explains patiently, “if you put something cold in it, it will keep it cold. And if you put something hot in it, it will keep it hot.” “Yeah, so what?” everyone asks. “Aha!” The epistemologist raises a triumphant finger: “How does it know?”
On with the serious stuff: another take on embodiment.
Modern cognitive science has taken on the role of empirical epistemology. The empirical approach to the origins of knowledge is bringing about breathtaking breakthroughs and turning what once were age-old philosophical mysteries into mere scientific puzzles.
Let me give you an example. One of the great mysteries of the mind is how we are able to think about things we can never see or touch. How do we come to represent and reason about abstract domains like time, justice, or ideas? All of our experience with the world is physical, accomplished through sensory perception and motor action. Our eyes collect photons reflected by surfaces in the world, our ears receive air-vibrations created by physical objects, our noses and tongues collect molecules, and our skin responds to physical pressure. In turn, we are able to exert physical action on the world through motor responses, bending our knees and flexing our toes in just the right amount to defy gravity. And yet our internal mental lives go far beyond those things observable through physical experience; we invent sophisticated notions of number and time, we theorize about atoms and invisible forces, and we worry about love, justice, ideas, goals, and principles. So, how is it possible for the simple building blocks of perception and action to give rise to our ability to reason about domains like mathematics, time, justice, or ideas?
But in the past ten years, research in cognitive science has started uncovering the neural and psychological substrates of abstract thought, tracing the acquisition and consolidation of information from motor movements to abstract notions like mathematics and time. These studies have discovered that human cognition, even in its most abstract and sophisticated form, is deeply embodied, deeply dependent on the processes and representations underlying perception and motor action. It means that the evolutionary adaptations made for basic perception and motor action have inadvertently shaped and constrained even our most sophisticated mental efforts. When we study the mechanics of knowledge building, we are approaching an understanding of what it means to be human: the very nature of the human essence. Understanding the building blocks and the limitations of the normal human knowledge-building mechanisms will allow us to get beyond them.
This post is more of V.S. Ramachandran's reply to this year's Edge question (here), this time on the notion of self.
“Neurological conditions have shown that the self is not the monolithic entity it believes itself to be. It seems to consist of many components each of which can be studied individually, and the notion of one unitary self may well be an illusion. Consider the following disorders which illustrate different aspects of self.”
He lists a number of disorders:
1. Out-of-body experiences as a result of some right-hemisphere strokes.
2. The intense desire to have a limb amputated (apotemnophilia), as a result of being born with an incomplete internal image of the body.
3. Trans-sexuality: a lack of harmony between the different sources of sexual identity (external anatomy, internal body image, sexual orientation and sexual identity presented to others).
4. A patient with a phantom arm feeling another's touch sensations.
5. A patient who claims to be dead and rejects evidence that he is alive (Cotard's syndrome).
6. Sufferers from Capgras delusion feel that some people, like a mother, are imposters because they do not feel the familiarity and recognition that they should. Some can also duplicate themselves.
7. Some people cannot move or interact although they appear to be awake (akinetic mutism). They later say that they were conscious but had no desire to do anything, or had lost their will.
8. Consciousness can be split into a separate visual and auditory self (akinetic mutism).
“We will now consider two aspects of self that are considered almost axiomatic. First its essentially private nature. You can empathise with someone but never to the point of experiencing her sensations or dissolving into her (except in pathological states like folie à deux and romantic love). Second, it is aware of its own existence. A self that negates itself is an oxymoron. Yet both these axioms can fall apart in disease, without affecting other aspects of self. An amputee can literally feel his phantom limb being touched when he merely watches a normal person being touched. A person with Cotard’s syndrome will deny that he exists; claiming that his body is a mere empty shell. Explaining these disorders in neural terms can help illuminate how the normal self is constructed.”
Ramachandran makes a good case for self not being a simple, single, obvious entity.
It is that time of year again, when Edge asks its yearly question. This year it is, "What will change everything? What game-changing scientific ideas and developments do you expect to live to see?" Among the replies is one from V.S. Ramachandran. I am never disappointed in what Ramachandran has to say, and I intend to have a few postings centered on his essay, Self Awareness: the Last Frontier. The Edge contributions are (here).
As well as self, Ramachandran has an insight into qualia.
The qualia problem is well known. Assume I am an intellectually highly advanced, color-blind martian. I study your brain and completely figure out down to every last detail what happens in your brain, all the physico-chemical events, when you see red light of wavelength 600 and say “red”. You know that my scientific description, although complete from my point of view, leaves out something ineffable and essentially non-communicable, namely your actual experience of redness. There is no way you can communicate the ineffable quality of redness to me short of hooking up your brain directly to mine without air waves intervening (Bill Hirstein and I call this the qualia-cable; it will work only if my color blindness is caused by missing receptor pigments in my eye, with brain circuitry for color being intact.) We can define qualia as that aspect of your experience that is left out by me, the color-blind Martian. I believe this problem will never be solved or will turn out (from an empirical standpoint) to be a pseudo-problem. Qualia and so-called “purely physical” events may be like two sides of a Moebius strip that look utterly different from our ant-like perspective but are in reality a single surface. So to understand qualia, we may need to transcend our ant-like view, as Einstein did in a different context. But how to go about it is anybody’s guess.
One of the examples of the disruption of the normal self is:
A patient with a phantom arm simply watches a student volunteer’s arm being touched. Astonishingly the patient feels the touch in his phantom. The barrier between him and others has been dissolved.
Ramachandran discusses motor mirror neurons and then goes on to discuss other types of mirror neurons.
There are also: “touch mirror neurons” that fire not only when your skin is touched but when you watch someone else being touched. This raises an interesting question; how does the neuron know what the stimulus is? Why doesn’t the activity of these neurons lead you to literally experience the touch delivered to another person? There are two answers. First the tactile receptors in your skin tell the other touch neurons in the cortex (the non-mirror neurons) that they are not being touched and this null signal selectively vetoes some of the outputs of mirror neurons. This would explain why our amputee experienced touch sensations when he watched our student being touched; the amputation had removed the vetoing. It is a sobering thought that the only barrier between you and others is your skin receptors! I mention these to emphasize that despite all the pride that your self takes in its individuality and privacy, the only thing that separates you from me is a small subset of neural circuits in your frontal lobes interacting with mirror neurons. Damage these and you “lose your identity”; your sensory system starts blending with those of others. Like the proverbial Mary of philosopher’s thought experiments, you experience their qualia.
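The veto logic Ramachandran describes can be caricatured in a few lines of code: a touch mirror neuron is driven by both felt and observed touch, but a null signal from intact skin receptors gates off the observed case; remove the receptors (as after amputation) and observed touch is felt. This is my own loose illustration of the idea, not a model from the essay:

```python
def felt_touch(observed_touch: bool, own_skin_touched: bool,
               skin_receptors_present: bool = True) -> bool:
    """Crude gate on a 'touch mirror neuron' (illustrative only).

    The mirror neuron is driven by observed OR felt touch, but its output
    is vetoed when intact skin receptors report 'no touch'.
    """
    mirror_active = observed_touch or own_skin_touched
    if not mirror_active:
        return False
    if skin_receptors_present and not own_skin_touched:
        return False  # null signal from the skin vetoes the mirror response
    return True

# Intact observer watching someone else touched: mirror fires, veto holds.
print(felt_touch(observed_touch=True, own_skin_touched=False))   # False
# Amputee (no receptors to supply the null signal): observed touch is felt.
print(felt_touch(observed_touch=True, own_skin_touched=False,
                 skin_receptors_present=False))                  # True
# Ordinary touch to one's own skin.
print(felt_touch(observed_touch=False, own_skin_touched=True))   # True
```

The interesting design point, on this account, is that empathy-limiting "privacy" is an active inhibition rather than a missing connection.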
There was a recent post on the Neurophilosophy site about our brain's way of handling space (here), "Rats know their limits with border cells".
Spatial navigation is the process on which we rely to orient ourselves within the environment and to negotiate our way through it. Our ability to do so depends upon cognitive maps, mental representations of the surrounding spaces, which are constructed by the brain and are used by it to calculate one’s present location, based on landmarks in the environment and on our movements within it, and to plan future movements.
We now know that the circuitry encoding the cognitive map lies in the hippocampus and surrounding areas, and that these parts of the brain contain at least 3 distinct types of neurons which together encode an organism’s location within its environment and the paths it takes to move through it. In the current issue of Science, researchers from the Norwegian University of Science and Technology in Trondheim report that they have discovered a fourth class of neuron involved in spatial navigation.
Research into the cellular basis of spatial navigation began in the early 1970s, with the discovery of place cells by John O’Keefe and Jonathan Dostrovsky. Place cells were initially found in the hippocampus of the rat, and have since been found in other organisms, including humans; each fires only when an animal is in a specific location within its environment. Then, in 1984, James Ranck of SUNY Health Sciences Center in New York identified head direction cells in the presubiculum, which is adjacent to the hippocampus. As their name suggests, these neurons fire only when an animal is facing a certain direction.
The third type of neuron involved in spatial navigation is the grid cell, which was first identified in 2005, and is found in the entorhinal cortex, which also lies next to the hippocampus, and in rodents is located at the caudal (back) end of the temporal lobe. Unlike place cells, grid cells fire when the animal is at multiple locations in its environment. These locations are evenly spaced, so that a grid cell increases its firing rate periodically as the animal traverses a space. Grid cells encode different scales, such that small groups of grid cells have a unique periodicity; this scaling is mapped onto the entorhinal cortex, so that the scale encoded increases systematically along its top-to-bottom axis.
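The contrast between place cells and grid cells can be sketched for a rat on a 1-D track: a place cell fires at a single location, while a grid cell fires at evenly spaced locations whose spacing sets its scale. The Gaussian tuning curves and all the numbers below are my own simplification, not recorded data:

```python
import math

def place_cell_rate(x, field_centre=50.0, width=5.0):
    """Firing rate peaks at a single location (illustrative Gaussian tuning)."""
    return math.exp(-((x - field_centre) ** 2) / (2 * width ** 2))

def grid_cell_rate(x, spacing=30.0, phase=0.0, width=4.0):
    """Firing rate peaks at evenly spaced locations along the track."""
    nearest_peak = round((x - phase) / spacing) * spacing + phase
    return math.exp(-((x - nearest_peak) ** 2) / (2 * width ** 2))

# As the animal runs a 120 cm track, the place cell fires once while
# the grid cell fires every 30 cm.
for x in range(0, 121, 10):
    print(f"x={x:3d} cm  place={place_cell_rate(x):.2f}  "
          f"grid={grid_cell_rate(x):.2f}")
```

Changing `spacing` gives the different scales mentioned above; a population of such cells with varied spacings and phases can pin down a unique position.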
In a paper published in 2000, Neil Burgess predicted the existence of what he called boundary vector cells, which encode the organism’s distance from geometric borders surrounding its environment. The prediction was based on a computational model of place cell activity, but until now there has been no experimental evidence for such cells. Edvard Moser and his colleagues, who first described grid cells in 2005, now confirm the existence of these neurons in the rat brain. They found that the firing rates of the cells increased only when the animals were at one or several of the walls of the enclosure, irrespective of the length of the border or its relationship with other borders in the surroundings. The cells are responsive to borders in general, and not just walls. Moser suspects that border cells align grid cells to borders, and are thus involved in defining the perimeter of the animals’ environment. He also suggests that they are involved in route-planning – although they are sparse in number, they are distributed widely throughout the entorhinal cortex, in such a way that they could provide grid cells with information about approaching obstacles and borders.
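The border-cell finding can be caricatured in one function: a cell that fires whenever the animal is within some distance of its preferred wall, regardless of that wall's length. The arena size, threshold, and step tuning here are my own toy illustration, not the recorded data:

```python
def border_cell_rate(x, y, arena_w=100.0, arena_h=100.0,
                     preferred='west', threshold=10.0):
    """Fires when the animal is within `threshold` cm of its preferred
    wall, whatever the wall's length (illustrative toy model)."""
    distance_to_wall = {'west': x,
                        'east': arena_w - x,
                        'south': y,
                        'north': arena_h - y}[preferred]
    return 1.0 if distance_to_wall <= threshold else 0.0

# Near the west wall the cell fires anywhere along that wall's length;
# in the middle of the arena it is silent.
print(border_cell_rate(5, 20))   # 1.0
print(border_cell_rate(5, 80))   # 1.0
print(border_cell_rate(50, 50))  # 0.0
```

A sparse population of such cells, one per preferred wall direction, would suffice to signal the perimeter to the grid-cell system, which fits Moser's suggested alignment role.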