Narratives


I knew a man once who only really felt he understood a concept if he knew it through history, fiction or anecdotal narrative. I would not have credited such a way of understanding the world except for knowing him. Some people really ‘get’ algebra and feel they really know something if they can describe it mathematically. Other people need their explanations to be graphic: an illustration, a map or a diagram. Still others need ideas to be verbal in order to grasp them easily. My friend needed a dramatic plot or a parable to have that feeling of understanding. Words were not enough; there had to be a plot. Most of us can use many or even all of those vehicles to understanding, and perhaps others that I have never noticed. We use different ways to understand different things.

Most people I know understand their own lives in narrative form. G. Strawson argues that this is not true of all of us (here). The article starts with this abstract:

I argue against two popular claims. The first is a descriptive, empirical thesis about the nature of ordinary human experience: ‘each of us constructs and lives a “narrative” . . . this narrative is us, our identities’ (Oliver Sacks); ‘self is a perpetually rewritten story . . . in the end, we become the autobiographical narratives by which we “tell about” our lives’ (Jerry Bruner); ‘we are all virtuoso novelists. . . . We try to make all of our material cohere into a single good story. And that story is our autobiography. The chief fictional character . . . of that autobiography is one’s self’ (Dan Dennett). The second is a normative, ethical claim: we ought to live our lives narratively, or as a story; a ‘basic condition of making sense of ourselves is that we grasp our lives in a narrative’ and have an understanding of our lives ‘as an unfolding story’ (Charles Taylor). A person ‘creates his identity [only] by forming an autobiographical narrative – a story of his life’, and must be in possession of a full and ‘explicit narrative [of his life] to develop fully as a person’ (Marya Schechtman).

Strawson identifies four characteristics of this sort of life-story narrative. First, the person has a diachronic rather than episodic viewpoint; in other words, they live in a time continuum rather than in the present. Second, the person has a tendency to search for a unifying or form-finding construction. Third, the person uses story-telling conventions. Fourth, the person continually revises the story.

Like episodic and verbal memory, narrating our lives seems to me to require that the elements come from conscious experience. If we look at this from the viewpoint of learning in general, and learning about ourselves in particular, then the reason for narrative seems clear.

  1. We construct and remember experience in the form of memories of moments of consciousness. A memory holds the setting, the on-going action and, especially, the elements that are surprising or significant. To learn from experience, we must have experience.

  2. We remember moments of consciousness in a time-ordered sequence, making a little episode of episodic memory. We can attach meaning to such episodes by associating them with cause-and-effect links. Causality allows us to use our experiences to understand and predict the world around us.

  3. As memories get older, they are consolidated into larger and larger units. Many nearly identical episodes become one. All the trips to work in a first job, long ago, become one memory; the trip to the store yesterday is still an individual memory. Memory seems to be reworked, over and over again, in light of our present knowledge and interests.

  4. If we want to tell someone about something that happened, we put these memories into words, and those words then become associated with the memory. Often we talk to ourselves about a memory and, by doing so, cause it to be remembered partly in narrative form.

Thus moments of experience become meaningful episodes which become summaries of life and finally become narratives.

Strawson believes that personal narrative is not a universal human characteristic, and he may be right. But one thing we know is that long, long ago humans learned to make tools, to harness fire and to use language – then, well fed, they sat around the fire and told stories into the night.

Confusing writing


Here is a little bit of writing from the New Scientist (here) by Kate Douglas that is a good example of the knots people tie themselves in to avoid facing up to having one mind, not two. The idea we need to accept is that everyone has one mind that does all the thinking and makes a small amount of the finished product available as a global awareness. We do not think consciously – not, not, not – we think unconsciously. Consciousness takes place after the thought, cognition, perception and intention are done. Science writers should pull up their socks and stop conflating thought with consciousness.

Subconscious thought is the brain’s dumb autopilot – the chump behind repetitive tasks, Freudian slips and all the other things we do “without thinking”. That was certainly the prevailing view in the 20th century, but the subconscious has lately gone up in the world. It takes centre stage in creativity, puts the “eureka!” into problem-solving, plays a crucial role in learning and memory, and it’s even better at making tough decisions than rational analysis is (New Scientist, 1 December 2007, p 42).

It was in the 1980s that the late neuroscientist Benjamin Libet saw a spark of brain activity 300 milliseconds before subjects consciously chose to twitch a finger. We now know the unconscious decision happens even earlier. In 2008, John-Dylan Haynes at the Bernstein Center for Computational Neuroscience in Berlin, Germany, found brain activity up to 10 seconds before a conscious decision to move (Nature Neuroscience, vol 11, p 543).

Stanislas Dehaene, director of the Cognitive Neuroimaging Unit at INSERM, France, has elegantly revealed the subtle interplay between subconscious and conscious thought. In his experiment, volunteers saw a word flashed onto a screen, followed almost immediately by a picture, which masks conscious perception of the word. As the time interval between the two increases, the word suddenly pops into consciousness – accompanied by characteristic activity on a brain scan. This usually happened when the interval reached around 50 milliseconds, but when emotional words such as “love” or “fear” were used, it happened a few milliseconds earlier. It is as though the decision about the word’s importance and attention-worthiness was taken by the subconscious itself (PLoS Biology, vol 5, e260).

Experiments like these have changed our views about the relationship between conscious and subconscious thought, putting the latter firmly in charge. Think of consciousness as a spotlight, with the subconscious controlling when to turn it on and where to direct the beam. “The conscious mind is not free,” says Haynes. What we think of as “free will” is actually found in the subconscious.

Why colour?


Mark Changizi wrote a posting in PsychologyToday about colour qualia (here). He has some good thoughts about colour that stem from his research.

How do we know that your ‘red’ looks the same as my ‘red’? For all we know, your ‘red’ looks like my ‘blue’. In fact, for all we know your ‘red’ looks nothing like any of my colors at all! If colors are just internal labels, then as long as everything gets labeled, why should your brain and my brain use the same labels?…

However, I would suggest that most discussions of rearrangements of color qualia severely underestimate how much structure comes along with our color perceptions. Once one more fully appreciates the degree to which color qualia are linked to one another and to non-color qualia, it becomes much less plausible to single color qualia out as especially permutable…other qualia are deeply interconnected with hosts of other aspects of our perceptions. They are part of a complex structured network of qualia, and permuting just one small part of the network destroys the original shape and structure of the network – and when the network’s shape and structure is radically changed, the original meanings of the perceptions (and the qualia) within it are obliterated. But we’re beginning to know more about what colors are for, and as we learn more, color qualia are becoming more and more like other qualia in their non-permutability…

colors are part of a three dimensional space of colors, a space having certain well-known features. The space is spanned by a red-green axis, a yellow-blue axis, and a black-white axis. These three axes have opponent colors at opposite ends, and these extreme ends of the axes are pure or primary (i.e., not being built via a combination of other colors). All the colors we know of are a perceptual combination of these three axes. For example, burnt orange is built from roughly equal parts yellow and red, and is on the darker side of the black-white dimension…

Our primate color vision is peculiar in its cone sensitivities (with the M and L cones having sensitivities that are uncomfortably close), but these peculiar cone sensitivities are just right for sensing the peculiar spectral modulations hemoglobin in the skin undergoes as the blood varies in oxygenation. Also, the naked-faced and naked-rumped primates are the ones with color vision; those primates without color vision have your typical mammalian furry face…In essence, … our color-vision eyes are oximeters like those found in hospital rooms, giving us the power to read off the emotions, moods and health of those around us. On this new view of the origins of color vision, color is far from an arbitrary permutable labeling system. Our three-dimensional color space is steeped with links to emotions, moods, and physiological states, as well as potentially to behaviors. …

Furthermore, these associations are not arbitrary or learned. Rather, these links from color to our broader mental life are part of the very meanings of color – they are what color vision evolved for…The entirety of these links is, I submit, what determines the qualitative feel of the colors we see. If you and I largely share the same “perceptual network,” then we’ll have the same qualia. And if some other animal perceives some three-dimensional color space that differs radically in how it links to the other aspects of its mental life, then it won’t be like our color space. …its perceptions will be an orange of a different color.

The question now is not ‘why colour?’ or ‘is your red the same as my red?’ but ‘why are colours so vivid and beautiful in our consciousness?’
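The three-axis colour space Changizi describes can be sketched in code. This is only my illustration of an opponent-axis decomposition, using made-up linear formulas – not Changizi’s model, and not how the visual system actually computes colour.

```python
# A crude, made-up linear mapping onto the three opponent axes described
# above (illustration only; not Changizi's model, and not how the visual
# system actually computes colour).
def opponent_axes(r, g, b):
    """Map RGB components in [0, 1] to (red-green, yellow-blue, black-white)."""
    red_green = r - g                # +1 toward red, -1 toward green
    yellow_blue = (r + g) / 2 - b    # +1 toward yellow, -1 toward blue
    black_white = (r + g + b) / 3    # 0 = black, 1 = white
    return red_green, yellow_blue, black_white

# Burnt orange: roughly equal parts red and yellow, on the darker side
# of the black-white dimension.
print(opponent_axes(0.64, 0.32, 0.12))
```

The point of the sketch is Changizi’s: each colour is a position in a structured three-dimensional space, so the axes cannot be permuted independently without destroying the relations between colours.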

Babies know more, bigger, longer


ScienceDaily has an item on research by S. Lourenco on the concept of magnitude in babies (here). It is published in Psychological Science, General Magnitude Representation in Human Infants.

“We’ve shown that 9-month-olds are sensitive to ‘more than’ or ‘less than’ relations across the number, size and duration of objects. And what’s really remarkable is they only need experience with one of these quantitative concepts in order to guess what the other quantities should look like,” Lourenco says… “Babies like to stare when they see something new,” Lourenco explains, “and we can measure the length of time that they look at these things to understand how they process information.”

When the infants were shown images of larger objects that were black with stripes and smaller objects that were white with dots, they then expected the same color-pattern mapping for more-and-less comparisons of number and duration. For instance, if the more numerous objects were white with dots, the babies would stare at the image longer than if the objects were black with stripes.

“When the babies look longer, that suggests that they are surprised by the violation of congruency,” Lourenco says. “They appear to expect these different dimensions to correlate in the world.”

The findings suggest that humans may be born with a generalized system of magnitude. “If we are not born with this system, it appears that it develops very quickly,” Lourenco says. “Either way, I think it’s amazing how we use quantity information to make sense of the world.”

From the point of view of a good developmental program: what we are born with, we do not have to take time or effort to build, and we do not risk getting it wrong; what we are not born with can be learnt in a flexible, plastic way that fits our environment, but this takes time and effort and may fail to be ‘right’. A compromise is to be born with a few very important things, with everything else built within that in-born framework. So we are probably born with a bare model of the world – three-dimensional space, the passage of time, space populated by objects, and so on. Magnitude is a useful idea to be born with, leaving only its application to be learnt – rather than having to first understand aspects of the world in order to create the idea of magnitude. This framework will be a constant feature of conscious experience.

The little engine that could – maybe


I have always been a little skeptical of the use of self-motivating statements. ScienceDaily has an item on the subject (here).

Little research exists in the area of self-talk, although we are aware of an inner voice in ourselves and in literature. …Recent research by University of Illinois Professor Dolores Albarracin and Visiting Assistant Professor Ibrahim Senay, along with Kenji Noguchi, Assistant Professor at Southern Mississippi University, has shown that those who ask themselves whether they will perform a task generally do better than those who tell themselves that they will…The participants showed more success on an anagram task, rearranging set words to create different words, when they asked themselves whether they would complete it than when they told themselves they would… in a seemingly unrelated task simply write two ostensibly unrelated sentences, either “I Will” or “Will I,” and then work on the same task. Participants did better when they wrote, “Will” followed by “I” even though they had no idea that the word writing related to the anagram task.

Why does this happen? Professor Albarracin’s team suspected that it was related to an unconscious formation of the question “Will I” and its effects on motivation. By asking themselves a question, people were more likely to build their own motivation…”The popular idea is that self-affirmations enhance people’s ability to meet their goals,” Professor Albarracin said. “It seems, however, that when it comes to performing a specific behavior, asking questions is a more promising way of achieving your objectives.”

The idea that something in consciousness is going to fool or bully the mind/brain seems pretty weird – but posing a question and making it conscious would prompt planning, and that is not trying to fool or bully oneself.

Hearing yourself speak


F. Huettig and R. Hartsuiker have a paper in Language and Cognitive Processes, Listening to yourself is like listening to others: External, but not internal, verbal self-monitoring is based on speech perception. (here) The abstract is below.

Theories of verbal self-monitoring generally assume an internal (pre-articulatory) monitoring channel, but there is debate about whether this channel relies on speech perception or on production-internal mechanisms. Perception-based theories predict that listening to one’s own inner speech has similar behavioural consequences as listening to someone else’s speech. Our experiment therefore registered eye-movements while speakers named objects accompanied by phonologically related or unrelated written words. The data showed that listening to one’s own speech drives eye-movements to phonologically related words, just as listening to someone else’s speech does in perception experiments. The time-course of these eye-movements was very similar to that in other-perception (starting 300 ms post-articulation), which demonstrates that these eye-movements were driven by the perception of overt speech, not inner speech. We conclude that external, but not internal monitoring, is based on speech perception.

This appears quite complex. The paper differentiates between our consciousness of our speech when it is not actually produced aloud and when it is spoken. The implication is that we produce and monitor our speech but are not consciously aware of it until we hear it; we become conscious of our internal, unspoken speech in a different way. This makes consciousness simpler but language more complicated. Consciousness is again a question of perception. But as BPS Research Digest puts it:

It’s important to clarify: we definitely do monitor our speech internally. For example, speakers can detect their speech errors even when their vocal utterances are masked by noise. What this new research suggests is that this internal monitoring isn’t done perceptually – we don’t ‘hear’ a pre-release copy of our own utterances. What’s the alternative? Huettig and Hartsuiker said error-checking is somehow built into the speech production system, but they admit: ‘there are presently no elaborated theories of [this] alternative viewpoint.’

Motor actions connected to memory

Neurophilosophy has a posting on embodiment (here) that looks to a motor action/emotional memory link using K. Dijkstra’s work among others.

These results show that bodily movements can influence the rate at which autobiographical memories are recalled as well as the emotional content of the memories. The results of the first experiment demonstrate that what we do with our bodies can affect how we think – memory recollection was more efficient when the direction of movement was congruent with the valency of the emotional content of the memory. The second experiment further demonstrates, for the first time, that meaningless bodily movements can also influence what we choose to think about, with upwards movements being associated with positive memories and downward movements with negative ones.

It is well known that memory recollection is facilitated when the context in which recollection occurs matches that in which encoding took place. Classical studies of context-dependent retrieval focused on aspects of the environment in which memory encoding and retrieval take place, and Dijkstra extended this to show that context also includes body posture. The new findings show that movements which are completely unrelated to the encoding of emotional memories can also influence their retrieval. They add to a growing body of evidence that supports the embodied cognition hypothesis; specifically, they provide evidence that thinking involves creating mental simulations of bodily experiences, and that knowledge is represented by partial re-enactments in the brain which activate the same systems associated with real experiences.

Two theories of mind


ScienceDaily has a report (here) on a paper by E. Kalbe and others in Cortex, Dissociating cognitive from affective theory of mind: A TMS study. The ability to infer what another person is thinking is an essential tool for social interaction and is known by neuroscientists as “Theory of Mind” (ToM).

The researchers then applied repetitive transcranial magnetic stimulation (rTMS) to a part of the brain thought to be involved in rational inference — the right dorsolateral prefrontal cortex — in order to interfere temporarily with the activity in that part of the brain and test its effect on the ToM abilities of the volunteers…The findings showed that the temporary interference in this particular area of the brain had an effect on the rational inference abilities (cognitive ToM) of the volunteers, but not on their abilities to infer emotions (affective ToM). … this suggests that certain skills and behaviours known as “executive functions,” such as cognitive flexibility and set-shifting, may be important while the brain is working out what someone else is thinking.

This does not tell us about the source of affective ToM.

Eureka at the neural level


Science Daily reports (here) on work by D. Durstewitz and others reported in Neuron, Abrupt Transitions between Prefrontal Neural Ensemble States Accompany Behavioral Transitions during Rule Learning.

While it is clear that new rules are often deduced through trial-and-error learning, the neural dynamics that underlie the change from a familiar to a novel rule are not well understood…”The ability of animals and humans to infer and apply new rules in order to maximize reward relies critically on the frontal lobes. In our study, we examined how groups of frontal cortex neurons in rat brains switch from encoding a familiar rule to a completely novel rule that could only be deduced through trial and error…they found that the same populations of neurons formed unique network states that corresponded to familiar and novel rules. Interestingly, although it took many trials for the animals to figure out the new rule, the recorded ensembles did not change gradually but instead exhibited a rather abrupt transition to a new pattern that corresponded directly to the shift in behavior, as if the network had experienced an “a-ha” moment…Taken together, these findings provide concrete support for sudden transitions between neural states rather than slow, gradual changes. “In the present problem solving context where the animal had to infer a new rule by accumulating evidence through trial and error, such sudden neural and behavioral transitions may correspond to moments of ‘sudden insight,'” concludes Dr. Durstewitz.

So the evidence is gathered and a new rule devised over time, but the new rule is put into use suddenly. I wonder how that is done. Perhaps the rule is evaluated and modified elsewhere and then installed in the prefrontal network; perhaps each neuron in that area of the prefrontal lobe is associated with other cells (not normally thought part of any network) that accumulate the changes until the system reaches some sort of threshold and flips to the new configuration; or perhaps some other mechanism is at work. My (uneducated) guess is that the results of trial-and-error are stored and computed elsewhere.
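The threshold idea can be sketched as a toy accumulator. This is my illustration only, not the Durstewitz analysis: evidence for a new rule builds gradually with each rewarded trial, but the expressed rule only flips once a threshold is crossed – gradual learning, abrupt behavioural transition.

```python
# Toy evidence-accumulation model (my illustration, not the paper's):
# evidence builds trial by trial, but behaviour switches all at once
# when the accumulated evidence crosses a threshold.
def run_trials(outcomes, threshold=5):
    """outcomes: iterable of booleans (True = the new rule was rewarded).
    Returns the rule in use on each trial ('old' or 'new')."""
    evidence = 0
    rules = []
    for rewarded in outcomes:
        rules.append("old" if evidence < threshold else "new")
        evidence += 1 if rewarded else -1
        evidence = max(evidence, 0)  # evidence cannot go negative
    return rules

# Eight rewarded trials in a row: learning is gradual, but the
# behavioural switch is abrupt, at trial 6.
print(run_trials([True] * 8))
```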

Making and testing predictions


The Dana site has a piece by Kayt Sukel, ‘Does the brain use the Scientific Method?’ (here). It reports on the work of A. Alink and his group on predictive feedback in the brain.

The idea of a “little scientist” inside in our heads making and testing predictions is not a new one… How are human beings able to suss out the environment around them so quickly and efficiently? One idea is that our brains are forming predictions from the top down. That is, we use data from our past experiences to help cull all the extraneous sensory data that is flowing in from the environment. Neuroanatomy seems to support this idea… “We believe that the brain actually constantly has some kind of expectation about what will happen next,” says Alink. “Sensory input provides information about whether those predictions are correct.”… “Everywhere you look in the brain, almost every connection you see has one going in the other direction, too,” says Moshe Bar, a neuroscientist at Harvard Medical School. “The more we thought about this anatomical set-up, the more it seemed like there must be some kind of feedback happening.”… “As the stimulus becomes less predictable, we’d expect the signal in the brain to increase. And as it becomes more and more predictable, the activation should systematically reduce,” says Alink. “That’s what we found. With the least predictable stimuli, we saw the highest response in V1. In the most predictable, the lowest. And in between the two, an intermediate level of activation. It seems that our brain works hard to hypothesize and then test what’s going to happen next.”
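The pattern Alink describes – the V1 response falling as a stimulus becomes more predictable – can be sketched with the information-theoretic notion of surprise. This is my illustration of the general predictive-coding idea, not the group’s actual model.

```python
import math

# Minimal sketch of the predictive-coding pattern described above
# (my illustration, not Alink's model): treat the neural response as
# the "surprise" of a stimulus, -log2(p), so predictable stimuli evoke
# small responses and unpredictable stimuli evoke large ones.
def response(probability):
    """Surprise in bits for a stimulus seen with the given probability."""
    return -math.log2(probability)

for p in (0.9, 0.5, 0.1):  # most predictable to least predictable
    print(f"p={p}: response={response(p):.2f} bits")
```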

This group is studying prediction with the hope of understanding depression. They theorize that faults in the prediction mechanism may be a cause of depression and more serious conditions.