
Bird brains

ScienceDaily has an item (here) comparing the networks in bird brains with those in mammals. The paper is: O. Güntürkün, M. Wild, T. Shimizu, V. Bingman, M. Shanahan; “Large-scale network organization in the avian forebrain: a connectivity matrix and theoretical analysis”; Frontiers in Computational Neuroscience, 2013

 

 

The researchers found similarities due both to homology and to convergent evolution. These likely underlie the cognitive capabilities of birds: innovative tool manufacture, referential gesturing, planning for future needs, mirror self-recognition, causal reasoning, long-term recollection, transitive inference, complex pattern recognition, optimal choice, and numerical discrimination. These feats match those of many mammals, yet the mammalian brain and the avian one have different architectures; this research shows that at the network level they nonetheless have much in common.

 

 

Their conclusion:

 

The graph-theoretical analysis presented here reveals a connective core of five inter-connected hub nodes in the pigeon forebrain. In graph-theoretical terms, these regions are the most topologically central and most richly connected to the rest of the network, and are thus central to information flow in the avian brain. These findings are suggestive of the possibility that the same set of regions is central to avian cognition. Several researchers have hypothesized that intelligence evolved convergently in birds and primates. Our data are compatible with this idea, but hint at a somewhat more complex picture. For regions like the hippocampal APH, homology with their mammalian counterpart is likely, and the similarity of hippocampal network organization between birds and mammals is therefore likely due to shared evolutionary history. But several key structures in the pigeon connectome, such as NCL, AD, and AI, are functionally analogous but probably not homologous to corresponding mammalian structures. In these cases, shared network topology may be the outcome of convergent evolution. It is noteworthy that in both mammals and birds, the topologically central regions are also cognitively significant. It may therefore be reasonably hypothesized that during the evolution of taxa with demonstrably high cognitive abilities, similar selective pressures were at work resulting in similar network architectures.

 

Overall, our analysis suggests that, despite the absence of cortical layers, the avian brain conforms to the same organizational principles as the mammalian brain on a deeper, network-topological level. Future work will no doubt produce further refinements to the underlying connectome data. However, we anticipate that the central findings of the present paper will remain valid, namely the modular, small-world network topology of the avian brain and the presence within it of a connective core of hub nodes that includes hippocampal and prefrontal-like structures.
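As a rough illustration of the kind of graph-theoretical analysis described above (not the authors' code or data), the sketch below uses networkx on a toy small-world graph standing in for the pigeon connectivity matrix: hub candidates are the nodes that are both richly connected and topologically central, and the small-world signature is checked by comparing clustering and path length against a size-matched random graph.

```python
# Toy illustration only: all graphs and parameters here are stand-ins,
# not the pigeon connectome used in the paper.
import networkx as nx

# Stand-in for the pigeon connectivity matrix: a toy small-world graph.
G = nx.connected_watts_strogatz_graph(n=40, k=6, p=0.1, seed=1)

# Hub candidates: nodes that are both richly connected (high degree) and
# topologically central (high betweenness centrality).
degree = dict(G.degree())
betweenness = nx.betweenness_centrality(G)
hubs = sorted(G.nodes(), key=lambda n: (degree[n], betweenness[n]), reverse=True)[:5]
print("hub candidates:", hubs)

# Small-world signature: clustering well above a random graph with the same
# number of nodes and edges, with a comparably short average path length.
rand = nx.gnm_random_graph(G.number_of_nodes(), G.number_of_edges(), seed=1)
rand_cc = rand.subgraph(max(nx.connected_components(rand), key=len))
print("clustering : %.3f vs random %.3f"
      % (nx.average_clustering(G), nx.average_clustering(rand)))
print("path length: %.3f vs random %.3f"
      % (nx.average_shortest_path_length(G), nx.average_shortest_path_length(rand_cc)))
```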

 

 

 

How and why

There are a number of ways to look at behavior. A bird selects material to make a nest. One person might say that the bird is looking for twigs of a particular size because those can be woven into the shape of nest that the bird wants as a snug place to lay eggs. This implies a certain amount of thinking, remembering, learning and motivation on the bird’s part; we are looking at the bird’s behavior much as we look at our own. Or it might be that the bird has an instinctive program to build a nest and is doing what feels ‘right’ to it. Again this is similar to how we view our own instinctive behavior. These are ‘how’ explanations: they are concerned with how the behavior gets initiated in the brain.

 

But there are also ‘why’ explanations. It does not matter whether the behavior is a product of reflex, instinct, drive or cognition – the ‘why’ explanation will be about the evolution of the animal, its traits, its niche and the environmental problems it faces. It is about the strategy that the animal uses to stay alive and leave descendants. The same evolutionary explanation could be paired with very different ‘how’ explanations. The opposite could also be the case: one ‘how’ paired with many ‘whys’.

 

It is confusing enough when talking about animals – ‘why’ explanations are often heard as ‘how’ ones, with the implication that the animal knows and thinks about things in ways that are not credible for that animal.

 

When we talk about people it becomes even more difficult. This is because people tend to work on the assumption that they know how and why they decide to do things. They are not working in the realm of neuroscience (how) and genetic/social evolution (why) but in the folk-psychology realm, with a very different how and why. They think that everything is transparent to them and that they have direct knowledge of their minds, but a more realistic view is that almost nothing is transparent to them and they are guessing about the workings of their minds. It is disturbing to be told that you did something for reasons, and by processes, that you never dreamed of. Whose mind is it anyway?

 

Still, to many people it is also fascinating to try to understand the brain. I often think that this field is going to be confusing until people forget ‘I think therefore I am’ and start looking at ‘I am therefore I think’. I know I exist, so I don’t need a philosophical proof of that. What I don’t know much about is my thinking, and for that I need a scientific understanding – a neuroscience ‘how’ and an evolutionary ‘why’.

 

Inner voice

A recent paper, Inner speech captures the perception of external speech, by M. Scott and others in JASA Letters (see citation below), opens with the observation: “Throughout the day most of us engage in a nearly ceaseless internal banter. This stream of inner speech is a core aspect of our mental lives, and is linked to a wide array of psychological functions. Despite this centrality, inner speech has received little scientific attention.”

The reason for this, I would guess, is that many people still think of inner speech as being in-and-of consciousness and therefore not requiring explanation – simply being what it appears to be. Because the processes that create conscious experience are hidden from us, it is easy to assume there are no such processes.

This group does investigate the source of inner speech. They start with the hypothesis that it is an effect of corollary discharge.

Corollary discharge is a neural signal generated by the motor system that serves to prevent confusion between self-caused and externally-caused sensations. When an animal performs an action, its motor system uses a forward model (an internal model of the body) to predict the sensory consequences that will result. This prediction is corollary discharge. Corollary discharge is relayed to sensory areas where it serves to segregate out incoming sensations that match the prediction since these are likely to be caused by the animal’s action. Corollary discharge is what allows you to speak without confusing your voice with other voices/sounds in the environment. Auditory corollary discharge (for speech) is therefore an internal prediction of the sound of one’s own voice.
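A minimal sketch of the forward-model idea in the quoted passage (my illustration with an assumed linear ‘body’ gain, not the paper’s model): the motor system predicts the sensory consequence of its own command, and subtracting that prediction, the corollary discharge, from the incoming signal leaves mostly the externally caused part.

```python
# Toy sketch: the 0.8 "body transfer" gain and signal sizes are assumptions
# for illustration, not values from the paper.
import numpy as np

def forward_model(motor_command):
    """Predict the sensory consequence of our own motor command
    (this prediction is the corollary discharge)."""
    body_transfer = 0.8   # assumed fixed gain from command to reafferent sensation
    return body_transfer * motor_command

rng = np.random.default_rng(0)
motor_command = rng.normal(size=100)             # e.g. articulator commands while speaking
self_caused   = 0.8 * motor_command              # reafference: the sound of one's own voice
external      = rng.normal(scale=0.5, size=100)  # other voices/sounds in the environment

incoming = self_caused + external                # what actually reaches sensory areas
corollary_discharge = forward_model(motor_command)

# Sensory areas compare input with the prediction; the residual is what gets
# attributed to the outside world, so self-caused sensation is segregated out.
residual = incoming - corollary_discharge
print("residual vs external correlation:", round(np.corrcoef(residual, external)[0, 1], 3))
```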

They looked for the tell-tale signs of corollary discharge. One is that corollary discharge can affect perception. When in doubt, we perceive what we expect to perceive. This is called perceptual capture.

Perceptual capture is a shift in perception caused by the fact that corollary discharge is an anticipation and as such can pull ambiguous stimuli into alignment with the anticipated percept.

So if inner speech is a corollary discharge phenomenon (the auditory content of inner speech is provided by corollary discharge), then it should show perceptual capture. Their experiments tested this.

In the spectrum from normal speech, through silent speech, to imagined speech, they looked at both ‘mouthed speech’ and entirely silent speech. Both types showed perceptual capture, with mouthed speech producing a stronger capture than silent speech. Ambiguous speech sounds were heard as similar to the imagined phoneme. /ɑˈbɑ/ and /ɑˈvɑ/ were used, with the inner voice of one of them affecting the hearing of the other.

For both mouthed and pure inner speech, participants were more likely to hear an ambiguous sound as matching the content of their imagery. The two-way directionality of the effect (mouthing/imagining /ɑˈbɑ/ pulling perception in one direction but mouthing/imagining /ɑˈvɑ/ pulling in the opposite direction) demonstrates that it is the content of the inner speech that is responsible for the effect, not some extraneous factor. This experiment also shows a distinction between Mouthing and Imagining: For both /ɑˈbɑ/ and /ɑˈvɑ/, the Mouth conditions were significantly different from the corresponding Imagine conditions. This is predicted under the assumption that greater articulator engagement triggers more corollary discharge engagement.

To rule out phoneme priming as the cause of the effect, a second experiment was done.

Experiment 2 demonstrates that inner speech can make the perception of an external sound match the subphonemic aspects of an imagined sound, without there being phonemic identity between imagined sound and percept. By extension, this experiment also tests the claim that subphonemic content exists in inner speech…. The mouthed/imagined sounds were /ɑˈfɑ/ and /ɑˈpɑ/. /ɑˈfɑ/ is similar to /ɑˈvɑ/ at a subphonemic level (both are labiodental fricatives) and /ɑˈpɑ/ is similar to /ɑˈbɑ/ (both are bilabial stops). If imagery of /ɑˈfɑ/ can cause an /ɑˈbɑ/ ∼ /ɑˈvɑ/ ambiguous target to be perceived as /ɑˈvɑ/ (and the converse for imagery of /ɑˈpɑ/) that would indicate the presence of perceptual capture at the subphonemic level. … this experiment succeeded in showing subphonemic influences from pure inner speech in addition to mouthed inner speech.

So this look at inner speech implies that we are not thinking in the inner speech itself but at some pre-motor, non-conscious level; that unconscious thinking is run through a motor program and then returns as a sensory input to the content of consciousness. That seems plausible.


M. Scott, H. Yeung, B. Gick, J. Werker (2013). Inner speech captures the perception of external speech. Journal of the Acoustical Society of America, 133(4).

Needing nature

It sometimes disturbs me that some people I know are so far removed from nature. It is not ‘natural’. I was once standing in someone’s house and suddenly realized that it had no plants, no animals and only a view that was largely man-made. It even smelled of cleaners. There was no contact with other life. And I thought, “How do these people keep their sanity in this environment? We would criticize the keeping of a zoo animal in a place so artificial and devoid of natural surroundings.” People need at least to garden, have pets and take holidays in wild places to have a reasonable perspective on existence.

 

 

What happens when I spend time among trees or by an ocean is that I think differently. Worrying and planning about things tends to recede. My thoughts become less semantic and more sensory-motor-emotional – I may go several moments at a time without any ‘words’ in my consciousness. A profound sense of relaxation and of being at ease with the world, at home in a sense, results from these environments. Note also how good dog visits are for people in senior homes and hospitals.

 

 

It is important, if we are to live successfully in this world without destroying it or ourselves (or both), that we avoid this divorce from things biological. We have created cultures that more and more separate us from the environment we evolved to live in. We do not have to destroy any culture, but we do need to introduce more of the biological (the other living things) into it. We should stop looking for what separates us from other animals and look for what connects us to them, build bridges rather than fences.

 

 

We are animals! That should be one of the fundamentals of our sense of identity. Some scientists and philosophers are working on this, such as the authors below.

 

J. Bussolini. Recent French, Belgian and Italian work in the cognitive science of animals: Dominique Lestel, Vinciane Despret, Roberto Marchesini and Giorgio Celli. Social Science Information, 2013; 52 (2): 187 DOI: 10.1177/0539018413477938

 

Abstract: This paper is a review of the work of four scholars who have made substantial new developments in our understanding of animal mind and animal–human interactions. Dominique Lestel indicates that culture is rooted in the animal realm and draws upon ethology and ethnography to study animal worlds. Vinciane Despret pays heed to complex animal–human sociality and combines critical psychology and ethology to take account of animal mind. Roberto Marchesini argues that animal influence on humans is widespread and is foundational to culture; he uses anthropology and ethology to expand the field of animal–human interactions. Giorgio Celli holds that ethology permeates the spaces of everyday life and that animals such as cats demonstrate complex problem-solving and social behavior.

 

Shared Life: An Introduction, by Dominique Lestel and Hollis Taylor. Social Science Information, 52(2), 183-186. doi:10.1177/0539018413477335

 

We Westerners have become so accustomed to the image of a triumphant forward march (whose history recounts how man became so autonomous that he reached freedom) that we have neglected the possibility that such a history could be in the end frankly pathological (given how we have bent over backwards to become autistic). … A key question now is to know how the human of the 21st century can reactivate his animality and animalize himself anew when all Western thought since the Greeks tells him that he is human precisely because of this rupture with animality.”

 

Relief of anxiety with meditation

ScienceDaily has an item (here) on a paper by Zeidan, Martucci, Kraft, McHaffie and Coghill, Neural Correlates of Mindfulness Meditation-Related Anxiety Relief, in Social Cognitive and Affective Neuroscience, 2013. They compared simple attention-to-breathing with mindfulness meditation to find the brain areas likely to be responsible for the lowering of anxiety after mindfulness meditation.

 

 

Here is the abstract:

 

Anxiety is the cognitive state related to the inability to control emotional responses to perceived threats. Anxiety is inversely related to brain activity associated with the cognitive regulation of emotions. Mindfulness meditation has been found to regulate anxiety. However, the brain mechanisms involved in meditation-related anxiety relief are largely unknown. We employed pulsed arterial spin labeling MRI to compare the effects of distraction in the form of attending to the breath (ATB; before meditation training) to mindfulness meditation (after meditation training) on state anxiety across the same subjects. Fifteen healthy subjects, with no prior meditation experience, participated in 4 d of mindfulness meditation training. ATB did not reduce state anxiety, but state anxiety was significantly reduced in every session that subjects meditated. Meditation-related anxiety relief was associated with activation of the anterior cingulate cortex, ventromedial prefrontal cortex and anterior insula. Meditation-related activation in these regions exhibited a strong relationship to anxiety relief when compared to ATB. During meditation, those who exhibited greater default-related activity (i.e. posterior cingulate cortex) reported greater anxiety, possibly reflecting an inability to control self-referential thoughts. These findings provide evidence that mindfulness meditation attenuates anxiety through mechanisms involved in the regulation of self-referential thought processes.

 

 

The aim seems not to be to show that meditation relieves anxiety, which has been shown many times to be the case. Instead they tried to identify the brain activity that led to this relief. “Mindfulness is premised on sustaining attention in the present moment and controlling the way we react to daily thoughts and feelings,” Zeidan said. “Interestingly, the present findings reveal that the brain regions associated with meditation-related anxiety relief are remarkably consistent with the principles of being mindful.”

 

 

Closer look at the thalamus

An archaeologist noticing low hummocks on the ground can theorize, but he will not get very far in understanding until he digs below the surface. So it is with the brain. The activity on the surface of the cortex is unlikely to be understood until the activity below it is studied too. It is now possible to look more closely at the thalamus with higher-powered fMRI, and this development is examined by C. Metzger and others (citation below).

 

 

While a continuing debate on segregated networks generally focuses on cortical regions, the interaction of subcortical structures - foremost thalamus and basal ganglia - with these cortical networks, their influence and control has hardly been investigated and therefore remains poorly understood. … For fMRI, insufficient spatial resolution in most studies limited the interpretation of thalamic activation, while continuous innovation in high resolution fMRI (hr-fMRI) now enables the functional investigation of small, anatomically well-described subcortical structures including the thalamus - also in humans.”

 

 

The thalamus is difficult to study. It is small and has different functional areas packed closely together. It is vital and cannot be interfered with in any major way. Until recently fMRI could not measure activity in small enough regions to ‘see’ the functional structure of the thalamus. It is connected to almost every other part of the brain (cortico-striatal-thalamo-cortical loops, sensory input, the reticular formation and other tracts) and involved in a great many brain functions (consciousness, memory, cognition, perception, motor control and emotion at the very least), so it is important not to treat it as one thing, ‘the thalamus’, but as its individual functional and anatomical parts.

 

 

An example of recent research is the separation of the parts of the thalamus connected to the task-oriented network and the default mode network. These networks are far-flung across the cortex but connected to two separate but adjacent parts of the thalamus. The area of the thalamus called the mediodorsal nucleus (MD) appears to be connected to cortical areas of the default network. The area called the centromedian/parafascicular complex (CM) appears to be connected to the task attention network. These thalamic areas may, in fact, be orchestrating the two networks. The illustration from the paper shows the two networks.

FIGURE 2 | (adapted from Eckert et al., 2011): Network segregation based on relative fiber counts. (A) Sagittal plane; (B) coronal plane; (C) transversal plane; (D) color bars indicating the level of T-values for each region shown in (A–C). Regions with preferential connectivity to the MD are shown in blue and those connecting more strongly to the CM/Pf complex are shown in red; the strength of the connectivity is visualized in the brightness of the blue and red colors. The PCC and the nucleus accumbens do not show significant preferences and appear in green. Abbreviations: MD, mediodorsal thalamic nucleus; CM, centromedian/parafascicular complex of the thalamus; amy, left amygdala; hipp, left hippocampus; PCC, posterior cingulate cortex; put, right putamen; pall, right pallidum; NAcc, right nucleus accumbens; caudate, right caudate nucleus; dlPFC, right dorsolateral prefrontal cortex; dACC, dorsal anterior cingulate cortex; pgACC, pregenual anterior cingulate cortex; aI/fo, left anterior insula-frontal operculum.

 

 

The authors also have examples of clearer identification of areas involved in memory, emotion and motor control. I look forward to more high resolution studies.

 

 

Here is the abstract:

 

The thalamus, a crucial node in the well-described cortico-striatal-thalamo-cortical circuits, has been the focus of functional and structural imaging studies investigating human emotion, cognition and memory. Invasive work in animals and post-mortem investigations have revealed the rich cytoarchitectonics and functional specificity of the thalamus. Given current restrictions in the spatial resolution of non-invasive imaging modalities, there is, however, a translational gap between functional and structural information on these circuits in humans and animals as well as between histological and cellular evidence and their relationship to psychological functioning. With the advance of higher field strengths for MR approaches, better spatial resolution is now available promising to overcome this conceptual problem. We here review these two levels, which exist for both neuroscientific and clinical investigations, and then focus on current attempts to overcome conceptual boundaries of these observations with the help of ultra-high resolution imaging.


C.D. Metzger, Y.D. van der Werf, M. Walter (2013). Functional mapping of thalamic nuclei and their integration into cortico-striatal-thalamo-cortical loops via ultra-high resolution imaging - from animal anatomy to in vivo imaging in humans. Frontiers in Neuroscience, 7.

Ideas that spread

Why do some things get picked up and become viral, causing a buzz? What prompts a person to tell others about what they have heard?

ScienceDaily reports on a paper by Falk, Morelli, Welborn, Dambacher, Lieberman, entitled Creating Buzz: The Neural Correlates of Effective Message Propagation, in Psychological Science. The researchers look at scans of people ‘deciding’ whether an idea should be passed on.

Lieberman said, “Our study suggests that people are regularly attuned to how the things they’re seeing will be useful and interesting, not just to themselves but to other people. We always seem to be on the lookout for who else will find this helpful, amusing or interesting, and our brain data are showing evidence of that. At the first encounter with information, people are already using the brain network involved in thinking about how this can be interesting to other people. We’re wired to want to share information with other people. I think that is a profound statement about the social nature of our minds….We’re constantly being exposed to information on Facebook, Twitter and so on. Some of it we pass on, and a lot of it we don’t. Is there something that happens in the moment we first see it — maybe before we even realize we might pass it on — that is different for those things that we will pass on successfully versus those that we won’t?”

The temporoparietal junction (TPJ) was the only part of the brain that predicted a subject’s likelihood of passing on an idea. The TPJ is part of the ‘mentalizing network’ along with the dorsomedial prefrontal cortex.

“When we read fiction or watch a movie, we’re entering the minds of the characters — that’s mentalizing. As soon as you hear a good joke, you think, ‘Who can I tell this to and who can’t I tell?’ Making this judgment will activate these two brain regions. If we’re playing poker and I’m trying to figure out if you’re bluffing, that’s going to invoke this network. And when I see someone on Capitol Hill testifying and I’m thinking whether they are lying or telling the truth, that’s going to invoke these two brain regions,” Lieberman said. So when we find that an idea would be interesting to others, we are motivated to pass it on. It’s a very social thing to do.

Here is the abstract:

Social interaction promotes the spread of values, attitudes, and behaviors. Here, we report on neural responses to ideas that are destined to spread. We scanned message communicators using functional MRI during their initial exposure to the to-be-communicated ideas. These message communicators then had the opportunity to spread the messages and their corresponding subjective evaluations to message recipients outside the scanner. Successful ideas were associated with neural responses in the communicators’ mentalizing systems and reward systems when they first heard the messages, prior to spreading them. Similarly, individuals more able to spread their own views to others produced greater mentalizing-system activity during initial encoding. Unlike prior social-influence studies that focused on the individuals being influenced, this investigation focused on the brains of influencers. Successful social influence is reliably associated with an influencer-to-be’s state of mind when first encoding ideas.

Do we think in language?

I think most people agree that we do not actually think in words. If we did, we would not search for the right word to express a thought; we could not think as pre-verbal infants; we would not have integrated thought if we were bilingual - and so on. We do not think in actual words but perhaps in something like concepts. Those concepts could be represented by words or pictures or sounds or whatever. But do we actually think in concepts? Some would say that we think in symbols and with them we can ‘compute’ or think. Perhaps we have symbols standing for concepts and we manipulate these with some sort of rule system.

 

 

Here is a quote from Schlenker. “This does not mean that thought is not a system that manipulates symbols; in fact a widespread contemporary model, the ‘computational model of the mind’, suggests that the mind should be analyzed by analogy with a computer, which manipulates abstract symbols. On this view, thought is just symbol manipulation. But the symbols in question need not be part of verbal language; they may be part of what Pinker calls ‘Mentalese’, which is just another term for ‘language of thought’.”

 

 

I do not find this very convincing. Are we not just guessing here? What do we actually know about thought? We know what we are conscious of, but not necessarily its original form, and we know what we can observe in some tricky experiments. We also can make some assumptions. That does not add up to much understanding.

 

 

Let us make some assumptions from the bottom up for a change. If we climb through phylogenetic history we find first no brains, then increasingly complex brains. Somewhere in the simplest brains are the simplest thoughts, and in more complex brains are more complex thoughts. Thought is, after all, what brains do – they are matched, inseparable, just two aspects of the same thing. The only processes of thought that are possible are those that the brain can do; the nature of thought and the structure of the brain must evolve together. As the brain does not look or act like a computer and cannot (so far) be mapped one-to-one onto a computer, the whole computer-brain metaphor has to be taken as approximate.

 

 

There are many things that the brain does that we can be relatively sure of. Take ‘objects’ as an example. We know that in reality objects are very hard to put a finger on; they do not have crisp boundaries. But our brains create very separate, clearly bounded objects; they place them in very definite places in space and tidy them up with more consistent colour, size, texture etc. They attach the input from various senses and memories together so that by seeing an object we may know how it will feel, smell, handle etc. The object can acquire a meaning. Is an object a concept or a symbol? Do we care about this label? The difference is hard to pinpoint – has it to do with completeness or implied meaning or what?

 

 

If we are thinking about an object in the category of ‘tree’, does it matter whether we are thinking the word ‘tree’, or the image of an archetypal tree, or the memory of a particular tree? The brain seems to use some fixed categories (such as ‘face’) and seems to have the ability to create categories when it is useful to group similar objects. The objects have attributes (or they bind to characteristics such as colour) and it appears that similar attributes allow categories to be formed. We can call objects and categories of objects by names like concept or symbol if we like but it does not change their nature as facilities of the brain.

 

 

Besides objects, we have places and the brain’s way of creating and dealing with space. We have events, those units of memory that are closely causally and temporally connected. Events also seem to be created out of an undivided reality. They can be strung together to make larger events. We create a method of motor understanding that goes something like: need/opportunity – goal – plan – intent – decision – action – outcome. And when we find that we can understand a particular event in this way, we assume the actor is animate and thinks; we do ‘theory-of-mind’. This idea of thought is starting to resemble language with its nouns, adjectives, verbs, subjects and objects, sentences and so on. But the arrow points in the direction of language resembling thought, not thought resembling language. By language I am including forms of “mentalese”. If there is mentalese, then it would resemble thought.

 

 

But why should the brain do its thinking this way? Why create objects, places and events out of an undifferentiated reality? I find an answer, maybe a correct one, in the need for consciousness and memory in order to integrate brain activity and learn from experience. It is an awareness and storage problem that leads to creating these concepts/symbols as a way of accessing information.

 

 

So, in summary, my assumption (just a guess really) is:

 

  1. language developed to be compatible with the nature of

  2. consciousness/memory/learning-from-experience which, much earlier, developed to be compatible with the

  3. low level operations of the brain in sensing and acting (without these operations forming an integrated model of reality).

 

 

 

Attention in mindfulness meditation

There is an interesting article on meditation (see citation) which puts control of attention at the beginning of mindfulness meditation training and practice. This type of meditation is used traditionally by Buddhists but now also by many medical support programs (for depression, anxiety and the like) and self-help groups. The training varies but always seems to put an early emphasis on controlling attention.

 

 

The article gives three respected definitions for mindfulness:

 

the awareness that emerges through paying attention on purpose, in the present moment, and non-judgmentally to the unfolding of experience moment by moment” - “characterized by dispassionate, non-evaluative, and sustained moment-to-moment awareness of perceptible mental states and processes. This includes continuous, immediate awareness of physical sensations, perceptions, affective states, thoughts, and imagery” - “a receptive attention to and awareness of present events and experience”

 

 

Meditation seems to increase control over three key areas: attention, cognition and emotion. But control of attention seems to be mastered first and used to gain control of the other two. Once those skills are in place the training goal is a mental stance of non-judging awareness. This is intended to produce behavior that is aware, flexible and autonomous as well as general well-being.

 

 

But how is attention changed in meditation training? The model is that three attentional networks cooperate; they have the functions of alerting, orienting and executive control. The right frontal and right parietal cortex and the thalamus are involved in alerting functions. The superior parietal cortex, temporoparietal junction, frontal eye fields, and superior colliculus are involved in orienting. The anterior cingulate cortex (ACC), lateral ventral cortex, prefrontal cortex, and basal ganglia contribute to executive control processes. However, the executive function may be split to separate out a salience network that detects relevant events for cognition, homeostasis and emotion (dorsal ACC, ventrolateral prefrontal cortex, anterior insula).

 

 

Sustained attention can be studied using attentional blink experiments. Presumably efficiency allows sustainability:

 

The attentional blink task requires participants to attend to a rapidly changing stream of stimuli (e.g., letters) and to report the identity of two embedded target stimuli (e.g., digits) after each trial. Performance to the second target in the stream typically suffers if it appears within 500 ms after the first target, the so-called attentional blink effect. This performance detriment was significantly reduced after the meditators had completed their meditation retreat. In parallel, the amplitude of the P3b event-related potential (ERP) elicited by the first target stimulus, was decreased in meditators. The participants with the greatest decrease of the P3b amplitude also showed the largest decrease in attentional blink size. Because the P3b component is considered to index the allocation of attentional resources, these results suggest that the meditation training improved the meditators ability to sustain attentional engagement in a more balanced and continuous fashion. This was expressed as enhanced allocation of neural resources, which facilitated the detection of the second target. An additional analysis of the phase of oscillatory theta activity following successfully detected second targets showed a reduced variability across trials, a signature of more consistent deployment of attention in meditators. Taken together these findings indicate improved efficiency in engaging and disengaging from relevant target stimuli, i.e., flexibility of allocating attentional resources. ”
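For readers unfamiliar with the paradigm, here is a toy sketch of an attentional blink trial (my illustration with assumed timing parameters, not the study's code): a rapid stream of letter distractors with two embedded digit targets, where a short T1-T2 lag places the second target inside the roughly 500 ms blink window.

```python
# Toy sketch: stream length, SOA and lag values are assumptions for illustration.
import random
import string

def make_rsvp_trial(lag_items, stream_len=20, soa_ms=100):
    """Build one rapid serial visual presentation (RSVP) stream: letter
    distractors with two digit targets, T2 appearing lag_items after T1."""
    stream = [random.choice(string.ascii_uppercase) for _ in range(stream_len)]
    t1 = random.randint(3, stream_len - lag_items - 2)
    t2 = t1 + lag_items
    stream[t1] = random.choice("23456789")    # first target (T1)
    stream[t2] = random.choice("23456789")    # second target (T2)
    inside_blink = lag_items * soa_ms <= 500  # T2 within ~500 ms of T1
    return stream, t1, t2, inside_blink

stream, t1, t2, blink = make_rsvp_trial(lag_items=3)
print(" ".join(stream))
print(f"T1 at position {t1}, T2 at position {t2}, inside blink window: {blink}")
```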

 

 

Studying how attention is monitored and restored after lapses often uses the Stroop Word-Color Task:

 

The task requires participants to rapidly name or indicate the color of the font a word is presented in. The highly automatized function of reading leads to performance decrements (slower responses and/or higher error rates) in the incongruent condition, i.e., when the meaning of a color word conflicts with its font color (e.g., “GREEN” presented in red). High proficiency in this task is thus thought to indicate good attentional control and relatively low automaticity or impulsivity of one’s responses. Employing cross-sectional comparisons, several studies reported significantly better performance for meditators than non-meditators on this task and found that task performance was also related to lifetime meditation experience and levels of self-reported mindfulness …

The results showed that meditation practice influenced the neuronal responses to the Stroop stimuli in two important ways. Firstly, it led to a relative increase of lateral posterior N2 amplitudes (160–240 ms) over both hemispheres, irrespective of stimulus congruency. These changes in the meditation group were primarily driven by increased activity in the left medial and lateral occipitotemporal areas for congruent stimuli, which was contrasted by decreased activity in similar brain areas in the control group. The second difference between meditators and controls was observed in the P3 component, peaking between 310 and 380 ms, primarily for incongruent stimuli. While the participants in the control group exhibited an increase of the P3 amplitude for incongruent stimuli, a decrease was observed for the meditation group, attributed to reduced activity in lateral occipitotemporal and inferior temporal regions of the right-hemisphere …better Stroop performance in meditators, commonly attributed to de-automatization, may—at least partially—be due to less emotional reactivity and may thus reflect improved emotion regulation strategies rather than attentional control processes. This perspective highlights the close link between attention regulation and emotion regulation skills”
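And a comparable toy sketch of Stroop word-color trials (again my illustration, not the studies' materials): congruent trials pair a color word with its own font color, incongruent trials pair it with a different one, and the correct response is always the font color.

```python
# Toy sketch: the color set is an assumption for illustration.
import random

COLORS = ["red", "green", "blue", "yellow"]

def make_stroop_trial(congruent):
    """One word-color trial: the correct response is always the font color."""
    word = random.choice(COLORS)
    font = word if congruent else random.choice([c for c in COLORS if c != word])
    return {"word": word.upper(), "font_color": font, "correct_response": font}

print("congruent  :", make_stroop_trial(congruent=True))
print("incongruent:", make_stroop_trial(congruent=False))
```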

 

 

Longitudinal studies indicate that meditation practice results in significant changes to earlier stimulus processing in terms of enhanced/more consistent, dynamic, and flexible attentional functions. Improvements in attentional selection and control appear to be primarily mediated by more flexible attentional resource allocation that modulates early stimulus processing, possibly in a modality independent fashion. Rather than enhancing response inhibition processes per se, the study by Moore revealed meditation-related improvements to earlier stages of stimulus processing in terms of more focused attentional resources (indexed by the enhanced N2) and more efficient perceptual discrimination and conflict resolution processes (indexed by the reduced P3). When considering these two findings together, an interesting interpretation emerges: the more successful attentional amplification of the color word stimuli may have influenced the subsequent object recognition processes in positive ways, so that less attentional resources needed to be invested. ”

 

 

The indication, then, is that improved control of attention comes early in meditation training and is the basis on which the other skills are mastered.

 


Malinowski, P. (2013). Neural mechanisms of attentional control in mindfulness meditation. Frontiers in Neuroscience, 7(8). doi:10.3389/fnins.2013.00008

Math not necessary sometimes

It seems hard to believe that any educated person thinks that plants can do math, despite the almost endless crop of headlines such as Plants ‘do math’ to control overnight food supplies from the BBC and Plants do ‘sophisticated’ math to ration food at night from UPI. These are just news-release and headline writers trying to get attention. But the idea that all the problems we would see as math problems must actually be solved with symbolic, rule-based, serial operations rather than by some other method seems fairly widespread. It takes something like plants using math to make us insist on some other answer. Cognition does not always have to involve semantic, mathematical or logical manipulation of symbols.

 

 

There was an old argument, largely settled, about how a fly ball (a ball that is batted high in baseball) is caught. The ball flies, the fielder runs, and he ends up at the same place at the same time as the ball – puts out his gloved hand and the ball falls into it. The math is complicated but the skill is not. The fielder runs so that the ball has a constant bearing. If the fielder can manage to keep the ball in exactly the same place in his visual field, he will be able to catch it. The same rule is used on the high seas – if one captain sees another ship stay at the same bearing, he must take evasive action. If he doesn’t, the two ships will collide (in fact, collide at that bearing angle). How does an airplane pilot know the direction his plane is moving? It is moving towards the only stationary spot in his vision. And the reason a blind spot in a driver’s vision is dangerous is that anything that stays hidden in the blind spot (keeps the same bearing behind the obstruction) will inevitably hit him. Most people know this unconsciously, some learn it consciously, but few do the math it takes to understand exactly why this is a ‘fact’ about moving in a changing world. We use embodied cognition via a perception-action loop more or less automatically, as in the sketch below.
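Here is a toy simulation of that constant-bearing strategy (my own illustration with made-up numbers, not the paper's model): the pursuer never computes the ball's trajectory; it simply steers at each step to cancel any drift in the bearing of the target, and interception falls out of that feedback loop.

```python
# Toy sketch: all positions, speeds and the steering gain are made-up values.
import math

def angle_to(src, dst):
    """Bearing (line-of-sight angle) from src to dst in world coordinates."""
    return math.atan2(dst[1] - src[1], dst[0] - src[0])

# Ball travels in a straight line (seen from above); the fielder is a bit
# faster and starts some distance away.
ball, ball_vel = [0.0, 0.0], (1.0, 0.6)
fielder, speed, dt = [10.0, -8.0], 1.6, 0.1

heading = angle_to(fielder, ball)   # start by facing the ball
los_prev = heading
caught = False

for step in range(2000):
    fielder[0] += speed * math.cos(heading) * dt
    fielder[1] += speed * math.sin(heading) * dt
    ball[0] += ball_vel[0] * dt
    ball[1] += ball_vel[1] * dt

    los = angle_to(fielder, ball)
    heading += 4.0 * (los - los_prev)   # steer just enough to cancel bearing drift
    los_prev = los

    if math.dist(fielder, ball) < 0.3:
        print(f"caught after {step * dt:.1f} time units near ({ball[0]:.1f}, {ball[1]:.1f})")
        caught = True
        break

if not caught:
    print("no interception with these parameters")
```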

 

 

Now we have some indication of where in the sensory-motor feedback system this skill resides. We follow the bearing of a perceived object in space, as opposed to a spot in the visual field. The object is perceived and placed in space prior to tracking its bearing. We have this indication because we can catch using the sound of an object without being able to see it.

 

 

The experiments are in a paper by Shaffer, Dolgov, Mcmanama, Swank, Maynor, Kelly, and Neuhoff, Blind(fold)ed by science: A constant target-heading angle is used in visual and nonvisual pursuit, published in Psychonomic Bulletin & Review. (ScienceDaily review)

 

 

Here is the abstract:

 

Previous work investigating the strategies that observers use to intercept moving targets has shown that observers maintain a constant target-heading angle (CTHA) to achieve interception. Most of this work has concluded or indirectly assumed that vision is necessary to do this. We investigated whether blindfolded pursuers chasing a ball carrier holding a beeping football would utilize the same strategy that sighted observers use to chase a ball carrier. Results confirm that both blindfolded and sighted pursuers use a CTHA strategy in order to intercept targets, whether jogging or walking and irrespective of football experience and path and speed deviations of the ball carrier during the course of the pursuit. This work shows that the mechanisms involved in intercepting moving targets may be designed to use different sensory mechanisms in order to drive behavior that leads to the same end result. This has potential implications for the supramodal representation of motion perception in the human brain.