Archive for March 2011

Anticipating eye movements

ScienceDaily has a report (here) of a paper by Rolfs, Jonikaitis, Deubel and Cavanagh, Predictive remapping of attention across eye movements. Here is the abstract:

Many cells in retinotopic brain areas increase their activity when saccades (rapid eye movements) are about to bring stimuli into their receptive fields. Although previous work has attempted to look at the functional correlates of such predictive remapping, no study has explicitly tested for better attentional performance at the future retinal locations of attended targets. We found that, briefly before the eyes start moving, attention drawn to the targets of upcoming saccades also shifted to those retinal locations that the targets would cover once the eyes had moved, facilitating future movements. This suggests that presaccadic visual attention shifts serve to both improve presaccadic perceptual processing at the target locations and speed subsequent eye movements to their new postsaccadic locations. Predictive remapping of attention provides a sparse, efficient mechanism for keeping track of relevant parts of the scene when frequent rapid eye movements provoke retinal smear and temporal masking.

Every third of a second the eyes jump to a new place in the visual field. You may have experienced this consciously when extremely tired on a bumpy road. (Time to stop and rest!) But normally our consciousness delivers a world that is a stable place with no jerky sequences of images – this takes some finesse.

Encephalon #85

Encephalon #85 is up at Neuro Dojo (here). Enjoy.

Prediction engine

Here is another answer from the Edge question contributors (here). Andy Clark, author of Supersizing the Mind: Embodiment, Action, and Cognitive Extension, writes about prediction.

The idea that the brain is basically an engine of prediction is one that will, I believe, turn out to be very valuable not just within its current home (computational cognitive neuroscience) but across the board: for the arts, for the humanities, and for our own personal understanding of what it is to be a human being in contact with the world.

We predict what our senses are going to deliver and the errors are used to correct the prediction process. Clark lists some implications of this process.

1. This means, in effect, that all perception is some form of ‘expert perception’, and that the idea of accessing some kind of unvarnished sensory truth is untenable.

2. models suggest that what emerges first is the general gist (including the general affective feel) of the scene, with the details becoming progressively filled in as the brain uses that larger context — time and task allowing — to generate finer and finer predictions of detail. There is a very real sense in which we properly perceive the forest before the trees.

3. the line between perception and cognition becomes blurred. What we perceive (or think we perceive) is heavily determined by what we know, and what we know (or think we know) is constantly conditioned on what we perceive (or think we perceive).

4. if we now consider that prediction errors can be suppressed not just by changing predictions but by changing the things predicted, we have a simple and powerful explanation for behavior and the way we manipulate and sample our environment.

It is hard to think of a better way to remain in sync, in tune, appropriate to a changing environment than a predictive loop continuously correcting the errors between our expectations and what actually happens. It seems to me that conscious experience is that prediction, made available to all the processes of the brain: action, perception, cognition, learning.
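The error-correcting loop described above can be sketched in a few lines of code. This is a toy illustration only; the scalar signal and the learning-rate constant are my own simplifying assumptions, not anything from Clark's account:

```python
# Toy sketch of a predictive loop: keep a running prediction of a
# sensory signal and correct it by a fraction of each prediction error.

def update(prediction, sensed, learning_rate=0.1):
    """Nudge the prediction toward the sensed value."""
    error = sensed - prediction      # prediction error
    return prediction + learning_rate * error

prediction = 0.0
for sensed in [1.0] * 50:            # a steady signal keeps arriving
    prediction = update(prediction, sensed)

# After repeated corrections the prediction has converged on the signal.
print(round(prediction, 2))
```

The point of the sketch is only that nothing but the error ever needs to be propagated: expectations do the rest of the work.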

Dijksterhuis revisited

One of the easiest errors to make is to get too attached to the words you use and your pet definitions for them. I really, really try to avoid purely semantic arguments. Recent reading, and re-reading, of papers by Ap Dijksterhuis has made me look again at how I define mind and thought.

When explaining his UTT (unconscious thought theory), Dijksterhuis uses the word mind only once (three times if counting a quote and a common set phrase) – he avoids the idea of two minds in one skull. Instead he talks of two modes of thought, a conscious mode and an unconscious mode. “we do not assume separate systems. UTT describes the characteristics of two processes, rather than two systems or modules.” I can be comfortable with two ways for a single system to work, rather than two systems. And as far as thought is concerned, he appears to assign the ‘heavy lifting’ of thought to the unconscious mode. “However, this does not mean that conscious thought comprises only conscious processes. One could compare it to speech. Speech is conscious, but various unconscious processes (such as those responsible for choice of words or syntax) have to be active in order for one to speak. Likewise, conscious thought cannot take place without unconscious processes being active at the same time.” He has even moved, over time, to reduce some contrasts between conscious and unconscious modes as far as attention and goals are concerned. “…the understanding of the implementation of volitional behavior: implicit learning, evaluative conditioning, and unconscious thought. It is concluded that these processes are goal dependent and that they need attention, but that they can generally proceed without awareness.” So I can live with ‘two modes of thought’ – well, really, I’m starting to like it. My beef with ‘conscious thought’ has been that thought (the process as opposed to the results) is not conscious: we are not aware of the cognitive gears turning; only the results and sub-results of thought reach consciousness. We are aware only of the model of the world, not the construction of the model. There is only a semantic difference between the two descriptions.

What Dijksterhuis has to say about his UTT is very interesting. He lists a number of differences between the two modes of thought. Conscious thought has a very limited capacity - “Depending on the context, consciousness can process between 10 and 60 bits per second. For example, if you read, you process about 45 bits per second, which corresponds to a fairly short sentence. The entire human system combined, however, can process about 11,200,000 bits per second.” Because of this limit, thought that uses consciousness needs to use schemas, stereotypes, top-down control and simplifying methods rather than having all the information available. Ironically, “despite the fact that stereotypes are activated automatically, they are applied while one consciously thinks about a person or a group.” The capacity limit also brings a danger of prejudgment in order to simplify - “predecisional distortion shows that even when not explicitly given an expectancy, people quickly create their own guide to further conscious thought.” Relative importance also suffers from the limit on capacity. “Conscious thought leads people to put disproportionate weight on attributes that are accessible, plausible and easy to verbalize.”
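The capacity figures in the quote above imply a striking ratio; a quick back-of-the-envelope check (the numbers come from the quote, the arithmetic is mine):

```python
# Figures from the Dijksterhuis quote: ~45 bits/s processed consciously
# while reading, ~11,200,000 bits/s for the entire human system.
reading_rate = 45            # bits per second, conscious reading
total_rate = 11_200_000      # bits per second, whole system

ratio = total_rate / reading_rate
print(f"conscious reading is roughly 1 part in {ratio:,.0f}")
```

Conscious reading would then account for roughly one part in a quarter of a million of the system's total throughput, which makes the need for schemas and other simplifying shortcuts easy to see.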

The limited capacity of conscious thought is probably due to its use of working memory, a facility with a very limited capacity. But there are advantages to rule-based thought in using working memory to hold sub-results from each step. “The key to understanding why the unconscious cannot do arithmetic is that it cannot follow rules…The distinction between rule-based and associative thinking largely maps onto the distinction between consciousness and the unconscious. During conscious thought, one can deal with logical problems that require being precise and following rules strictly, whereas during unconscious thought, one can…distinguish between following rules and merely conforming to them, and this distinction is very important here. For example, an apple conforms to gravity by falling down rather than up, but it does not actively follow a rule in doing so.” I believe this is similar to the reason that speech tends to be in conscious mode: like math and logic, putting a sentence together is a stepwise, precise, rule-based process.

And finally, the limited capacity of the conscious mode means it is convergent while the unconscious mode is divergent. This is relevant to creativity: “Creativity [has long been] associated with the notion of incubation (after some initial conscious mode thought).” Unconscious thought can range over fringe aspects rather than being confined to what is centrally important.

But I will continue with my words. I think that four things should be separated: there is (1) thought/cognition, (2) working memory, (3) the focus of attention and (4) conscious experience. All four are probably complex groups rather than simple processes. I assume that thought/cognition is basically unconscious. Probably the only path to episodic memory is through working memory, and the only path to working memory is through consciousness. The process of attention is likely to direct thought and also to determine what is included in both consciousness and working memory. Consciousness is an integrated model of the world, self, now and here, which is accessible to most unconscious processes. And using Dijksterhuis’ words, you could call that two modes of thought – one conscious and one unconscious.

 

 


Dijksterhuis, A., & Nordgren, L. (2006). A Theory of Unconscious Thought. Perspectives on Psychological Science, 1(2), 95-109. DOI: 10.1111/j.1745-6916.2006.00007.x

Bos, M., Dijksterhuis, A., & van Baaren, R. (2008). On the goal-dependency of unconscious thought. Journal of Experimental Social Psychology, 44(4), 1114-1120. DOI: 10.1016/j.jesp.2008.01.001

Dijksterhuis, A., & Aarts, H. (2010). Goals, Attention, and (Un)Consciousness. Annual Review of Psychology, 61(1), 467-490. DOI: 10.1146/annurev.psych.093008.100445

Learning to be Conscious

ScienceDaily has an item (here) on a paper by Plailly, Delon-Martin and Royet on learning to imagine odours, something that most people are unable to do. They show it is a learned skill.

Here is the paper’s abstract:

Areas of expertise that cultivate specific sensory domains reveal the brain’s ability to adapt to environmental change. Perfumers are a small population who claim to have a unique ability to generate olfactory mental images. To evaluate the impact of this expertise on the brain regions involved in odor processing, we measured brain activity in novice and experienced (student and professional) perfumers while they smelled or imagined odors. We demonstrate that olfactory imagery activates the primary olfactory (piriform) cortex (PC) in all perfumers, demonstrating that similar neural substrates were activated in odor perception and imagination. In professional perfumers, extensive olfactory practice influences the posterior PC, the orbitofrontal cortex, and the hippocampus; during the creation of mental images of odors, the activity in these areas was negatively correlated with experience. Thus, the perfumers’ expertise is associated with a functional reorganization of key olfactory and memory brain regions, explaining their extraordinary ability to imagine odors and create fragrances.

The activation of the primary olfactory cortex is similar to the way that visual and auditory mental imagery use their respective primary sensory cortices to produce images as well as perceptions. In other words, our brain generates its own sensations.

As perfumers become more skilled, they rely less on memory. Processing becomes more streamlined with training, resulting in reduced activation of the hippocampus and other areas.

In this study, the perfumers were able to imagine the odors rapidly, sometimes instantaneously, whereas the students experienced some difficulties and needed to concentrate their attention. By easily reactivating the mnesic representations of odors, perfumers can mentally compare and combine scents with the aim of creating new fragrances.

My opinion is that we can learn to make many things conscious with training. Some people learn to control bodily functions like heart rate by watching a needle on a dial. What seems to be needed is a good deal of motivation, and some feedback channel to compare signal with perception or a ‘language’ to describe the phenomena – i.e. a mapping to something that is easily made conscious, like words or sights/sounds. It seems that only a few hundred people in the world are motivated to be perfumers. There are probably only a small number of professional ‘noses’ in other fields.

Is Buddhism compatible with neuroscience?

David Weisman in Seed Magazine (here) has an article on the relationship of neuroscience and Buddhism. He has recently been surprised to find that they “do not appear to profoundly contradict.”

…They (Buddhists) believe in an impermanent and illusory self made of shifting parts. They’ve even come up with language to address the problem between perception and belief. Their word for self is anatta, which is usually translated as ‘non self.’ One might try to refer to the self, but the word cleverly reminds one’s self that there is no such thing.

When considering a Buddhist contemplating his soul, one is immediately struck by a disconnect between religious teaching and perception. While meditating in the temple, the self is an illusion. But when the Buddhist goes shopping he feels like we all do: unified, in control, and unchanged from moment to moment. The way things feel becomes suspect. And that’s pretty close to what neurologists deal with every day…

Both Buddhism and neuroscience converge on a similar point of view: The way it feels isn’t how it is. There is no permanent, constant soul in the background. Even our language about ourselves is to be distrusted (requiring the tortured negation of anatta). In the broadest strokes then, neuroscience and Buddhism agree…

I don’t mean to dismiss or gloss over the areas where Buddhism and neuroscience diverge. Some Buddhist dogmas deviate from what we know about the brain. Buddhism posits an immaterial thing that survives the brain’s death and is reincarnated…

Like other religions, Buddhism comes in a number of types, and they vary a great deal in their ideas and sophistication. There are Buddhists who welcome the findings of neuroscience, and I am sure there are many who don’t.

Reincarnation is not an idea that neuroscience embraces. It was inherited by all the eastern religions that grew out of early Hinduism: modern Hinduism, Buddhism, Sikhism, Jainism, Falun Gong etc. Buddhism did not develop the idea of reincarnation; it simply did not reject it. Nor is reincarnation sought: as in Buddhism’s cousin religions, the ideal is escape from reincarnation into a state of nirvana. Nirvana is not a type of heaven but a dissolving of self, time and space in an infinite unity.

Space and time

I have taken for granted that our sense of time is founded on our sense of space and that this is an example of embodied thought. A recent paper (citation below) examines this assumption and shows it is far from clear.

Kranjec and Chatterjee say:

Is time an embodied concept? People often talk and think about temporal concepts in terms of space. This observation, along with linguistic and experimental behavioral data documenting a close conceptual relation between space and time, is often interpreted as evidence that temporal concepts are embodied. However, there is little neural data supporting the idea that our temporal concepts are grounded in sensorimotor representations. This lack of evidence may be because it is still unclear how an embodied concept of time should be expressed in the brain. The present paper sets out to characterize the kinds of evidence that would support or challenge embodied accounts of time. Of main interest are theoretical issues concerning (1) whether space, as a mediating concept for time, is itself best understood as embodied and (2) whether embodied theories should attempt to bypass space by investigating temporal conceptual grounding in neural systems that instantiate time perception.

In their analysis they stick very tightly to two theoretical ideas. The first is that simulation is the method of embodiment: to prove an embodied representation, one needs to find sensory or motor neurons whose activity grounds the representation. The second is that abstraction of concepts relies on relational schemas as outlined by Lakoff and Johnson. These schemas can be either verbal in nature (and left-hemisphere in location) or analog in nature (and right-hemisphere in location). They use the research of Kemmerer to show some neural differences in the processing of space and time.

In the end the question is left open – Is space embodied? Is time embodied through space? This is not surprising given their assumptions of embodiment necessarily implying simulation and schema. But if we think about it, what is there of a primary sensory nature or of a primary motor nature to ground space or time in the sense of motor and sensory neuron activity? We do not directly physically sense space or time nor do we use our muscles to directly affect them. They are part of the framework and not things perceived, except as the where and when other things are perceived. And they are not things done, except as the where and when of actions done.

Instead, space seems grounded in the activity of the hippocampus and nearby cortex, in the form of the activity of place cells, grid cells, head-direction cells, border cells and a library of location maps. And time seems grounded in the sequential moments of memory, again in the form of hippocampal and related activity. It seems a reasonable idea to include the machinery of memory, along with the machinery of the senses and of muscular activity, as part of the body that gives cognition its grounding. It is the process of consciousness that likely creates not only the ’self’ and the ‘world’ but also the ‘here’ and the ‘now’ as a structure, a model of reality, into which objects and movements can be placed and which the hippocampus can remember.

 


Kranjec, A., & Chatterjee, A. (2010). Are Temporal Concepts Embodied? A Challenge for Cognitive Neuroscience. Frontiers in Psychology, 1. DOI: 10.3389/fpsyg.2010.00240

Another metaphor

I have found another Edge answer that is very interesting (here). Donald Hoffman, author of Visual Intelligence, describes a metaphor for sensory qualia – a computer desktop.

Our perceptions are neither true nor false. Instead, our perceptions of space and time and objects, the fragrance of a rose, the tartness of a lemon, are all a part of our “sensory desktop,” which functions much like a computer desktop.

I have encountered people who judge our senses by how accurate they are. They are not happy with the lack of a one-to-one mapping between wavelength of light and the perception of colour. The illusions that fool us are treated as mistakes. All this is interpreted as sloppiness in biological systems. But really, the purpose of our perceptions is not accuracy but usefulness.

Graphical desktops for personal computers have existed for about three decades. Yet they are now such an integral part of daily life that we might easily overlook a useful concept that they embody. A graphical desktop is a guide to adaptive behavior. Computers are notoriously complex devices, more complex than most of us care to learn. The colors, shapes and locations of icons on a desktop shield us from the computer’s complexity, and yet they allow us to harness its power by appropriately informing our behaviors, such as mouse movements and button clicks, that open, delete and otherwise manipulate files. In this way, a graphical desktop is a guide to adaptive behavior.

Graphical desktops thus make it easier to grasp the nontrivial difference between utility and truth. Utility drives evolution by natural selection. Grasping the distinction between utility and truth is therefore critical to understanding a major force that shapes our bodies, minds and sensory experiences.

We must take our sensory experiences seriously, but not literally. This is one place where the concept of a sensory desktop is helpful. We take the icons on a graphical desktop seriously; we won’t, for instance, carelessly drag an icon to the trash, for fear of losing a valuable file. But we don’t take the colors, shapes or locations of the icons literally. They are not there to resemble the truth. They are there to facilitate useful behaviors.

This is useful to keep in mind when thinking about just how personal our personal conscious experience is. Of course we know that we can’t actually know whether your red is the same as my red. But we know that we have a map of our retinas in our thalamus and another in the cortex at the back of our heads. There may be others too. These maps are linked with nerves so that the same points on the maps communicate with one another. A place on the retina has a corresponding place in the thalamus map and the cortex map. This is accomplished by a combination of genetically produced developmental chemicals and ordinary experiences of the world environment. There is no reason to expect either identical results or significantly different ones from this developmental program. Similarly, we have the same chemicals in our retinas to respond to colours etc. So the answer to whether your red is the same as mine is probably: not identical but extremely similar. Further, it hardly matters, because the reason for the red or any other shade is to inform and guide our behaviour – the system has evolved to give us ‘adaptive behaviour’. Qualia have evolved to contrast what needs to be separated, to notice what needs to be noticed, to attract or alarm as appropriate, and they seem to do a good job of it.

Real prediction is not possible

Another interesting piece from the answers to the Edge question (here) is Rudy Rucker’s. He is the author of The Lifebox, the Seashell, and the Soul. His piece of useful wisdom:

A little-known truth: Every aspect of the world is fundamentally unpredictable. Computer scientists have long since proved this. How so? To predict an event is to know a shortcut for foreseeing the outcome in advance… The world can simultaneously be deterministic and unpredictable. In the physical world, the only way to learn tomorrow’s weather in detail is to wait twenty-four hours and see, even if nothing is random at all. The universe is computing tomorrow’s weather as rapidly and as efficiently as possible; any smaller model is inaccurate, and the smallest error is amplified into large effects.

At a personal level, even if the world is as deterministic as a computer program, you still can’t predict what you’re going to do. This is because your prediction method would involve a mental simulation of you that produces its results slower than you. You can’t think faster than you think. You can’t stand on your own shoulders.

It’s a waste to chase the pipedream of a magical tiny theory that allows us to make quick and detailed calculations about the future. We can’t predict and we can’t control. To accept this can be a source of liberation and inner peace. We’re part of the unfolding world, surfing the chaotic waves.
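Rucker's point that a fully deterministic system can still be unpredictable is easy to demonstrate. The logistic map is my choice of textbook illustration here (it has nothing to do with weather models specifically): a model that starts with an almost imperceptible measurement error soon disagrees completely with the system it is simulating.

```python
# The logistic map x -> 4x(1-x) is deterministic yet chaotic: tiny
# differences in the starting state are amplified at every step.

def logistic(x):
    return 4.0 * x * (1.0 - x)

true_state = 0.2
model_state = 0.2 + 1e-10    # the "prediction" starts almost exactly right

errors = []
for step in range(60):
    true_state = logistic(true_state)
    model_state = logistic(model_state)
    errors.append(abs(true_state - model_state))

# The error starts around 1e-10 and grows by many orders of magnitude,
# until the model tells us nothing about the true state.
print(errors[0], max(errors))
```

The only way to find out what the "true" trajectory does is to run it; the slightly-wrong model is no shortcut at all.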

The only way a decision is deterministic is that it is made by the brain. The only way it is free is that it cannot be predicted. The freewill-vs-determinism debate is a dead end (like nature vs nurture). We should be concerned with how decisions are actually made, in neuroscientific terms, and how to make better ones, in psychological terms.

Embodied robots

Want to build an artificial brain? Try building an embodied robot. It makes sense that embodying an AI system means giving it a body to be embodied in. The advantages, challenges and problems of artificial embodied cognition are examined in a recent Frontiers in Psychology article (citation below).

We are given useful definitions of words that elsewhere are often used interchangeably. Cognition is ‘grounded’ in the physical properties of the world. ‘Embodied’ cognition, on top of grounding, is shaped by the physical constraints of the body and its sensorimotor interactions. ‘Situated’ cognition, on top of embodiment, is context-specific.

Then we are given a general walk through the ideas surrounding embodiment in both AI and biology. The authors have written a fictional conversation between someone with a computing bent and someone with a biological bent, both interested in embodiment. A good case is made that neurobiology can assist AI and that AI can assist neurobiology – if they collaborate. The end point of this dialogue is agreement that robotics is the way to go to investigate embodiment.

… a necessary complement to all these methodologies is to increasingly adopt cognitive robotics as their experimental platform, rather than designing models of isolated phenomena, or relaxing too many constraints about sensorimotor processing and embodiment. Indeed, it seems to me that cognitive robotics offer a key advantage to the aforementioned methodologies, because it emphasizes almost all of the components of grounded models: the importance of embodiment, the loop among perceptual, motor and cognitive skills, and the mutual dependence of cognition and sensorimotor processes.

They follow that with a list of challenges that an embodied robot presents:

Challenge 1: Taking a developmental viewpoint to explore why and how embodied cognition could have originated. Through evolution, individual development and learning, how does embodiment come to be?

Challenge 2: Exploring the (causal) influence of embodied phenomena for cognitive processes. Amodal cognition models have to be abandoned. The sensory, cognitive and motor processes ‘leak’ into one another.

Challenge 3: Specifying the time course of activation for embodied concepts. Sensory, cognitive and motor processing is not sequential but overlap in time.

Challenge 4: Developing embodied computational models of symbolic and linguistic operations. Symbolic manipulations, which loosely includes reasoning and abstract thinking, predication, conceptual combination, language and communications must be re-thought in terms other than traditional symbolic processing.

Challenge 5: Realizing situated and complete architectures without losing contact with data. There must be general integration of real perception to real action so as to model internal needs and motivation. But as the robots have different physical ways of sensing and acting, the results will not be directly comparable to biological systems. Compromise will be needed.

Challenge 6: Realizing realistic social scenarios for studying collaborative, competitive, communication, and cultural abilities. There will need to be a level of robot society or of robot-human interaction to model this type of embodiment.

Much as I enjoyed and learned from this paper, I was a little disappointed by the lack of any mention of consciousness (of course, you have to realize that I am a consciousness nut). There was a general idea that sensory modalities had to be integrated and to share structure with motor maps/codes, but no discussion of whether this was likely to be a product of consciousness or not (or even maybe sometimes). There was mention of prediction, but again not of the predictive nature of consciousness. An introspective modality was mentioned, but not how a system could introspect without consciousness. Phrases like feed-forward, feed-back and lateral activation were used, but with no hint that the neurological signature of consciousness is just such waves of activation sweeping forward, sideways and back. They may have had one of several reasons for this omission: that it would over-complicate the discussion; that it would be difficult to say what the equivalent of human consciousness would be in a robot; that it was a taboo word in their conversations; or that embodied cognition might be possible without it.

 


Pezzulo, G., Barsalou, L., Cangelosi, A., Fischer, M., McRae, K., & Spivey, M. (2011). The Mechanics of Embodiment: A Dialog on Embodiment and Computational Modeling. Frontiers in Psychology, 2. DOI: 10.3389/fpsyg.2011.00005