Revisiting grandmother cells

Recent work by R. Q. Quiroga’s group, ‘Explicit Encoding of Multimodal Percepts by Single Neurons in the Human Brain’ in Current Biology, has looked at how we remember people, places and categories. The paper’s abstract is below:

“Different pictures of Marilyn Monroe can evoke the same percept, even if greatly modified as in Andy Warhol’s famous portraits. But how does the brain recognize highly variable pictures as the same percept? Various studies have provided insights into how visual information is processed along the “ventral pathway,” via both single-cell recordings in monkeys and functional imaging in humans. Interestingly, in humans, the same “concept” of Marilyn Monroe can be evoked with other stimulus modalities, for instance by hearing or reading her name. Brain imaging studies have identified cortical areas selective to voices and visual word forms. However, how visual, text, and sound information can elicit a unique percept is still largely unknown. By using presentations of pictures and of spoken and written names, we show that (1) single neurons in the human medial temporal lobe (MTL) respond selectively to representations of the same individual across different sensory modalities; (2) the degree of multimodal invariance increases along the hierarchical structure within the MTL; and (3) such neuronal representations can be generated within less than a day or two. These results demonstrate that single neurons can encode percepts in an explicit, selective, and invariant manner, even if evoked by different sensory modalities.”

This sounds a lot like grandmother cells to me. Here we have pathways starting with different sensory modalities and going through separate perceptual processes, leading ultimately to the same cell in the memory apparatus of the hippocampus. Is this not the neural instantiation of our knowing people, places and abstract concepts? Are these not grandmother cells – a neuron that can be activated by any activity that would amount to a definition of my grandmother, and that in turn activates attributes of my grandmother missing from the original activity? A photograph of her face brings to mind the sound of her singing.

Time and space

The mindhacks blog had a posting on the connection between time and space perception. Here is the abstract of the paper discussed, ‘Prismatic Lenses Shift Time Perception’ by F. Frassinetti et al.

“Previous studies have demonstrated the involvement of spatial codes in the representation of time and numbers. We took advantage of a well-known spatial modulation (prismatic adaptation) to test the hypothesis that the representation of time is spatially oriented from left to right, with smaller time intervals being represented to the left of larger time intervals. Healthy subjects performed a time-reproduction task and a time-bisection task, before and after leftward and rightward prismatic adaptation. Results showed that prismatic adaptation inducing a rightward orientation of spatial attention produced an overestimation of time intervals, whereas prismatic adaptation inducing a leftward shift of spatial attention produced an underestimation of time intervals. These findings not only confirm that temporal intervals are represented as horizontally arranged in space, but also reveal that spatial modulation of time processing most likely occurs via cuing of spatial attention, and that spatial attention can influence the spatial coding of quantity in different dimensions.”

I have thought it likely that the processes that are used for one type of perception are also used for any others that can be made to ‘fit’. In particular, the hippocampus is part of a neural system adept at space and place perception/memory, and it also seems to be doing the same for time. Why not for anything and everything that we can map with location and direction: numbers, music, procedures, etc.? So high notes are high, low numbers are low (and to the left, if that is the way we read), procedures move along a path, we move into the future and leave the past behind. Some postulate that we think this way because of our language, but it seems more likely that we talk this way because of how we perceive.

Error signals

Again, more from the Frith podcast (here) that was the subject of the last few posts.

It’s interesting that engineers have a very different way of looking at the world than psychologists. Psychologists tend to have a loop which says there’s perception, signals come in about the world, you interpret them, and then you act. The perception is the input and the act is the output. Engineers look at it completely the other way around. They say you act upon the world, you put something into the world—that’s the input, is acting upon the world. And then something happens—which is the output—that enables you to decide what to do next.

And I think this captures this much more active way of thinking about the world which engineers have. Whereas the more passive view of psychologists where somehow you have a perception which you can somehow work out what’s going on, it’s very much the other way around. We have to act in order to create—to make the world send us back information which helps us to interpret what it is.

….

The dopamine signal is a prediction error. So, basically if something unexpectedly nice happens, then you get a shot of dopamine; and so, the dopamine neurons become more active. And if you expect something nice to happen and it does happen, there’s no response; because there’s not an error. If we expect it to happen and it doesn’t happen, then the activity goes down. So, that’s a negative error.
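The logic of the signal Frith describes can be put in a few lines. This is my own minimal sketch of a Rescorla-Wagner-style update (the learning rate and reward values are invented for illustration), where the error is simply reward received minus reward expected:

```python
def prediction_error(expected, received):
    """Signed error: positive when something unexpectedly nice happens,
    zero when a predicted reward arrives, negative when it is omitted."""
    return received - expected

def update_expectation(expected, received, rate=0.3):
    """Learning: nudge the expectation by a fraction of the error."""
    return expected + rate * prediction_error(expected, received)

# Unexpected reward -> positive error (the 'shot of dopamine')
assert prediction_error(0.0, 1.0) > 0
# Fully expected reward -> no error, no response
assert prediction_error(1.0, 1.0) == 0
# Expected reward omitted -> negative error, activity dips
assert prediction_error(1.0, 0.0) < 0

# With repeated rewards the expectation climbs and the error shrinks
e = 0.0
for _ in range(20):
    e = update_expectation(e, 1.0)
```

Once the reward is fully predicted, the error (and so the dopamine response) fades toward zero, which is exactly the “no response because there’s not an error” case.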

There may be another way to look at this – a feedback loop does not have a beginning and an end. It is circular. Three components interact: the sensory data, the predictive model, and the action commands. Start with the sensory data – the data arrives and is compared with the model; where it does not match, it forces a change to the action commands. Or start with the action commands – the commands are given, they are used to create a predictive model of what will happen, the model is compared with the resulting sensory data, and where it doesn’t match, it results in changes to the action commands. Or start with the model – keep it accurate by fine-tuning the action/prediction side and the sensing/perception side so that they match. The problem of how to understand a feedback loop is classic and there are good engineering formulas covering the subject (think op-amps, servo mechanisms and the like).
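The circularity is easy to demonstrate with a toy closed loop (a sketch of my own; the gain and target values are made up): acting changes the world, sensing reports the change, the mismatch with the target forces the next command, and it does not matter where you say the loop “starts”.

```python
def run_loop(target, world_gain=0.5, steps=20):
    """A minimal feedback loop with no privileged starting point:
    act -> world changes -> sense -> compare -> new action."""
    state, action = 0.0, 0.0
    for _ in range(steps):
        state += world_gain * action   # action commands act on the world
        sensed = state                 # sensory data arrives
        error = target - sensed        # comparison with the model's target
        action = error                 # mismatch forces new action commands
    return state

# The loop settles on the target, like a servo mechanism
final = run_loop(10.0)
```

Whichever component you describe first, the same cycle runs and the same stable state results, which is the point of calling it circular.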

Bayesian perceptions

There is more from a podcast interview with Chris Frith (here). This time I quote his views on Bayesian perception.

…Perception is a two-way process. This is why I talk about Reverend Thomas Bayes, who produced this formula two hundred years ago. What he’s essentially pointing out is that our perception of the world depends on two things: that is to say, the sensory information that’s coming in through our eyes and ears, and our prior expectations and our knowledge of the world. And it’s the balance of these two that creates what we experience.

His formula tells you how much do you have to change your model of the world given the new evidence that’s coming in. So if you have very strong expectations, that will affect what you actually perceive. In a sense you can’t perceive things that you don’t know something about already…
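Bayes’ formula makes that balance explicit. Here is a small sketch (the probabilities are my own invented numbers) combining a prior expectation with the likelihood of the incoming sensory evidence:

```python
def posterior(prior, p_evidence_if_true, p_evidence_if_false):
    """Bayes' rule: P(hypothesis | evidence) from the prior expectation
    and how well the sensory evidence fits each alternative."""
    numerator = prior * p_evidence_if_true
    return numerator / (numerator + (1 - prior) * p_evidence_if_false)

# The same sensory evidence under different expectations:
weak_prior = posterior(0.1, 0.9, 0.1)    # evidence does most of the work
strong_prior = posterior(0.9, 0.9, 0.1)  # expectation dominates the percept
```

With a strong prior, the same evidence leaves the model almost unchanged; with a weak prior it shifts the percept substantially – “how much do you have to change your model of the world given the new evidence”.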

And also, people who study how the brain works suggest that the brain is a Bayesian system that is concerned with making predictions, and collecting sensory evidence, and then looking at the prediction errors to decide what to do next. And certainly learning about the world these days is very much conceived in terms of a Bayesian process where you predict what’s going to happen and then you adjust your learning on the basis of these prediction errors.

My own tentative view

Here is my (tentative) way of looking at consciousness.

Mind is a biological function (like circulation or digestion) and the brain is the principal organ that accomplishes this function. Brain ‘does’ mind. Mind is not the only function of the nervous system but it is the major function of the forward parts of the brain. Brain ‘is for’ mind. This is in the same sense that a heart does circulation and a heart is for circulation.

The mind function consists of: building, maintaining and refining a model of reality; using the model to predict, plan, decide, initiate and control responses to reality; and storing/maintaining an edited version of the model as a memory of experience for comparison and learning. This model includes a self in the world. This modeled world is not the same as reality. Neither is the modeled self the same as reality. The more effectively reality is modeled, the better the model. Consciousness is the edited model, prior to, during or immediately after it is stored as a memory.

The edited model has a focus and a larger outline. In other words, a large part of the model is in the edited version but with sparse detail, while one aspect is in detailed focus. Introspection happens when the focus of consciousness is the edited version itself. Introspection is not inspection of reality or of the brain’s model of reality or of the process of forming that model but it is inspection of the edited version of the model prepared for memory.

The model is a pseudo-real-time model. It contains a modeled ‘now’. ‘Now’ in the model is simply the immediate past (from memory) projected by prediction into the immediate future. The same or similar modeling processes can simulate the past and the future as well as the pseudo-now. This appears in consciousness as a past-now-future continuum with additional flashback, flash-forward, and timeless imaginings occasionally superimposed.

Modeling is constrained by the architecture of the brain and by the primitives (like movement) it builds and elaborates on. The modeling process creates discrete objects occupying locations in three dimensions. The modeling process attaches properties to the objects (and other elements) of the model such as colour or pitch. The modeling process models motor activity from goals, intention, initiation, to action and outcomes. It monitors for surprises or inconsistencies in order to correct its control of perception and motor activity.

We can think of the brain, doing its modeling, as the ultimate artist creating the continuous qualia of our lives. We can also think of the brain, doing its modeling, as the ultimate scientist continuously comparing its prediction with what reality gives it in the next fraction of a second and correcting its model of reality accordingly.

Very little of this modeling process is available in the edited version for consciousness/memory. Even in the areas of focus, the process is hidden except for shorthand indicators (fringe perceptions) like: this is the past, a decision is made, that was good, that is known. If an attempt is made to follow the process through introspection, the result is educated guesswork.

How does the brain create this model? It does not create a continuously changing model but a series of ‘snapshots’. These are melded together in working memory to give the impression of a stream of consciousness. Activity in the brain builds up to each installment of the model. This activity amounts to an increase in the synchronization over larger and larger areas of the brain. The synchronization appears to connect all the components and levels of analysis into one whole – a particular pattern of neural activity. And it is the wherewithal to recreate the pattern that is stored and then further processed.

The synchronization is achieved by the interaction between the thalamus and the cortex. Almost all inputs into the cortex pass through the thalamus and almost all cortical areas send signals back to the thalamus. The thalamus appears (in a rough way) to initiate and control cortical activity. The different areas of the cortex are connected to one another as well as to the thalamus. This gives billions of interlocking and overlapping, parallel feedback loops. Massive, overlapping parallel feedback loops in some arrangements can settle quickly on a stable condition. This would give synchronization in a model that was the ‘best fit scenario’ incorporating sensory input, memory, actions in progress etc.
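One classic way to see how overlapping feedback loops can “settle quickly on a stable condition” is a Hopfield-style network. This is my own illustration, not a claim about thalamo-cortical circuitry: each unit updates from its connections to all the others until the pattern stops changing, and the stable pattern is the best fit to all the stored constraints at once.

```python
def settle(weights, state, max_steps=50):
    """Iterate a symmetric feedback network until it reaches a
    stable pattern (every unit consistent with all the others)."""
    for _ in range(max_steps):
        new = [1 if sum(w * s for w, s in zip(row, state)) >= 0 else -1
               for row in weights]
        if new == state:
            break
        state = new
    return state

# Store one pattern with Hebbian (outer-product) weights,
# corrupt one unit, and let the feedback pull the state back.
pattern = [1, -1, 1, -1, 1]
weights = [[(pi * pj if i != j else 0) for j, pj in enumerate(pattern)]
           for i, pi in enumerate(pattern)]
noisy = pattern[:]
noisy[0] = -1
recovered = settle(weights, noisy)
```

The network relaxes onto the stored pattern in one or two sweeps – a toy version of parallel loops settling on a ‘best fit scenario’.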

This way of looking at consciousness gives an identity between the whole of the mind (modeling) and the activity of part of the brain (modeling) – they are one and the same without a trace of duality.

Other philosophical problems can be approached. Epistemology would change from a search for knowledge to a search for understanding. Modeling does not result in knowledge but in understanding. The criteria of a good model are not ‘truth’ but relevance, consistency, extent, and predictive accuracy. A good model is very much like a good scientific theory – it is reliable but not ‘truth’ in a pristine sense. The modeling process is capable of choice, but it would not be free from constraints, although, under introspection, it would appear to be freer than it is. As the modeling is affected by the attitudes, values and goals of the brain, and the attitudes, values and goals are affected by the modeling, the whole system is responsible for its actions and therefore can be morally judged. All our philosophical attitudes would have to shift to accommodate this new way to look at our thought, but there need not be a radical change.

 

The P3 wave

A paper published a year and a half ago is similar to the Gaillard paper that has been discussed in the last two posts. The older paper is titled Brain Dynamics Underlying the Nonlinear Threshold for Access to Consciousness and its lead author is Antoine Del Cul (here). The methods used have less resolution in some ways than the Gaillard group’s but have the advantage of varying the masking in small steps between unconscious and conscious events. In the transition region between ‘seen’ and ‘not seen’ it is very informative.

Using scalp electrodes, they recorded ERPs (event-related potentials) during testing. A target was shown briefly (16 ms) and was followed by a mask. The time between the target and the mask was varied, and this SOA (stimulus onset asynchrony) crossed a threshold from subliminal to conscious perception. The transition was documented in two ways: a forced choice and a subjective report.

The effect that occurred with the same threshold curve as consciousness was called the P3 wave. It was not a local potential but spread across both hemispheres and from the front to the back of the cortex. It occurred about 400 ms after the target. Up until the P3 wave there were only small differences between the data for subliminal and conscious processing, and these differences did not show a threshold effect but varied in a linear way.
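The contrast between the two response profiles is easy to picture in code. This is a toy sketch with invented threshold and slope values, not fitted data: the early markers grow linearly with the SOA, while a P3-like marker is flat below the threshold and saturates above it.

```python
import math

def p3_like(soa_ms, threshold=50.0, slope=0.2):
    """Nonlinear, all-or-none profile around a seen/not-seen threshold."""
    return 1.0 / (1.0 + math.exp(-slope * (soa_ms - threshold)))

def early_marker(soa_ms, gain=0.01):
    """Graded profile: grows smoothly with the SOA, no threshold."""
    return gain * soa_ms
```

Well below the threshold the P3-like response is essentially absent and well above it essentially saturated, while the early marker changes by the same amount over every equal SOA step.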

This gives some information to settle a question in my mind. Which of the following is closer to what happens?

1.      Subliminal and conscious perception are entirely different processes – ruled out.

2.      Subliminal and conscious perception start out the same but then differ progressively – seems indicated.

3.      Subliminal and conscious perception are the same process and consciousness occurs (or not) after perception is complete – not ruled out or indicated.

The global workspace

There is a good description of the global workspace model of consciousness quoted below. It comes from the paper mentioned in the last post, Gaillard’s Converging Intracranial Markers of Conscious Access (here).

“This model, in part inspired from Bernard Baars’ theory, proposes that at any given time, many modular cerebral networks are active in parallel and process information in an unconscious manner. Incoming visual information becomes conscious, however, if and only if the three following conditions are met: Condition 1: information must be explicitly represented by the neuronal firing of perceptual networks located in (sensory) areas coding for the specific features of the conscious percept. Condition 2: this neuronal representation must reach a minimal threshold of duration and intensity necessary for access to a second stage of processing, associated with a distributed cortical network involved in particular parietal and prefrontal cortices. Condition 3: through joint bottom-up propagation and top-down attentional amplification, the ensuing brain-scale neural assembly must “ignite” into a self-sustained reverberant state of coherent activity that involves many neurons distributed throughout the brain.

Why would this ignited state correspond to a conscious state? The key idea behind the workspace model is that because of its massive interconnectivity, the active coherent assembly of workspace neurons can distribute its contents to a great variety of other brain processors, thus making this information globally available. The global workspace model postulates that this global availability of information is what we subjectively experience as a conscious state. Neurophysiological, anatomical, and brain-imaging data strongly argue for a major role of prefrontal cortex, anterior cingulate, and the associative areas that connect to them, in creating the postulated brain-scale workspace.”

Neural correlates of consciousness

A very important investigation has been reported. The original paper is here; it is reviewed in ScienceDaily here, The New Scientist here, and Neurophilosophy here. The research was done by a French team led by R. Gaillard.

In the course of treating 10 epileptic patients, electrodes were placed in their brains to identify the foci of their seizures. The patients agreed to take part in some research while the electrodes were in place. The data from electrodes placed inside the brain has resolution (both in timing and location) that neither brain scans nor EEGs have. On the other hand, the researchers used relatively few electrodes (176), and these were placed where they were needed for clinical rather than research reasons. The patients were shown words under conditions where some reached consciousness and some did not. The differences between the electrode signals in these two conditions were collected to give a picture of the difference between processing of sensory input that ends in consciousness and that which doesn’t. This is an attempt at the ‘holy grail’ of the NCC, the neural correlates of consciousness.

The results were:

1.      No specific seat of consciousness was found but rather there was an involvement across most of the cortex.

2.      The early stages (less than 200 ms) of conscious and unconscious processing were very similar.

3.      “Conscious word processing was associated with the following four markers: (1) sustained iERPs within a late time window (>300 ms after stimulus presentation); (2) sustained and late spectral power changes, combining a high-gamma increase, beta suppression, and alpha blockage; (3) sustained and late increases in long-range phase coherence in the beta range; and (4) sustained and late increases in long-range causal relations.” In other words, when the word became conscious, (1) activity relating to the word continued past 300 ms; (2) the late activity lasted some time and involved an increase in high-frequency, a decrease in medium-frequency and a blocking of low-frequency waves; (3) the medium-frequency waves became synchronized between distant parts of the cortex; and (4) changes in one part of the cortex caused changes in distant parts.

4.      The results were consistent with B. Baars’s Global Workspace model of conscious access. I believe it might be consistent with a number of other models as well.

5.      Activity started in the back of the brain, moved progressively forward, and reached the frontal cortex in those events reaching consciousness. It then fed back to reactivate the areas in the back of the brain.

A big network

Here is one way to look at perception.

 

Think of a glass. I reach for it in order to pick it up and fill it with wine. But instead of a smooth lift, my hand raises it a little too fast and it feels a little warmer than expected. In a flash the glass-glass becomes a plastic-glass and I have a feeling that it is not the right glass for my wine. This change in my perception of the world happens in a split second.

 

In effect the constraints on my perception of the world have changed and therefore the best fit with the constraints changes. I go from having a glass-glass in my hand to having a plastic-glass. My vision gives a large number of constraints on my world, so does hearing, so do all the senses of touch, etc. My memory of what the world was previously and what I expect it to be now are also large sets of constraints. I was born with a framework world and have added constraints through learning throughout my life. What I perceive must be consistent with all these constraints at the same time.

 

Many times a second, a new best fit model of the world is put in memory.

 

If we think of the brain as an analog computer, or a massively parallel computer or an enormous network of processors, it is not difficult to see how this model of the world can be formed so quickly. The signals rattle around for a short time and then stabilize, the model is stored and the process repeats.

 

The question now is – is consciousness the newly minted memory or the process of making the memory, the stabilized model or some aspect of using the model, or some combination?

A different type of computer

 

 

Once, long ago, I encountered a discussion on different ways of computing – I cannot remember who wrote the article or when/where it was. Two images have stayed in my mind.

 

If you have a lot of broken spaghetti and you want to find the longest piece, you could do one of three things. One, you could measure each piece with a ruler, number each piece and tabulate the lengths as you go. After this you would scan the values you recorded for the largest value, note its number and retrieve the piece. Two, you could pick up a piece and compare it to another piece, discard the shorter and continue comparing with another piece. When all pieces but the one you are currently holding are in the discard pile, you have the longest piece. Or three, you could gather up the whole lot of spaghetti in your hands and hold them in an upright bundle. You would then tap the bundle lightly on the table so that all the pieces rested on the table. The longest piece would stand out as the tallest and could be picked out easily. Method three is very fast and easy when compared to the others. It is like an analog calculation rather than a digital one.
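For comparison, the first two methods are both sequential, one-comparison-at-a-time algorithms; only the physical bundle does all the comparisons in a single step. A small sketch with invented lengths:

```python
lengths = [12.5, 3.2, 18.7, 9.1, 15.0]   # broken spaghetti, in cm

# Method one: measure, number and tabulate every piece,
# then scan the table for the largest recorded value.
table = dict(enumerate(lengths))
longest_number = max(table, key=table.get)

# Method two: hold one piece, compare it with the next,
# keep the longer, until only the held piece remains.
held = lengths[0]
for piece in lengths[1:]:
    if piece > held:
        held = piece
```

Both take a number of comparisons proportional to the number of pieces; the tabletop tap is the analog shortcut that replaces the whole loop.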

 

The other example I remember was finding the centre of three points – the point which is equidistant from all three points, or the centre of the circle that passes through all three. It is possible to construct, with compass and straight edge, the perpendicular bisector of each pair of points – the line equidistant from both. There is a point where the three lines cross and that is the point you want. Another way is to attach an identical spring to each point and then bring the other ends of the three springs together and attach them all to the same little vertical stake. Where the stake rests is the centre point, as the three springs will be identically extended. Unless you have to go out and buy the springs (while already owning a compass), the spring method is practically instantaneous. Again it is like the difference between an analog calculation and a digital one.
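The compass-and-straight-edge version corresponds to a closed-form digital calculation: intersect the perpendicular bisectors algebraically. A sketch of that calculation (the springs, by contrast, solve the same problem physically):

```python
def circumcentre(a, b, c):
    """Centre of the circle through three points, i.e. the point
    equidistant from all three: the crossing of the perpendicular
    bisectors, computed with the standard closed-form formula."""
    (ax, ay), (bx, by), (cx, cy) = a, b, c
    d = 2.0 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    return ux, uy

centre = circumcentre((0.0, 0.0), (2.0, 0.0), (0.0, 2.0))
```

For those three corner points the centre comes out at (1.0, 1.0), equidistant from all three, as the construction would show.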

 

When we think of the brain as a computer, we have to be careful about what type of computer we have in mind, as well as remembering that this is just a rough metaphor.