A first bit of knowledge

I'm on ScienceSeeker-Microscope

Deric Bownds (here) has a post on the ideas of Ullman et al. in a recent PNAS paper, From simple innate biases to complex visual concepts. Here is the paper’s abstract:

Early in development, infants learn to solve visual problems that are highly challenging for current computational methods. We present a model that deals with two fundamental problems in which the gap between computational difficulty and infant learning is particularly striking: learning to recognize hands and learning to recognize gaze direction. The model is shown a stream of natural videos and learns without any supervision to detect human hands by appearance and by context, as well as direction of gaze, in complex natural scenes. The algorithm is guided by an empirically motivated innate mechanism—the detection of “mover” events in dynamic images, which are the events of a moving image region causing a stationary region to move or change after contact. Mover events provide an internal teaching signal, which is shown to be more effective than alternative cues and sufficient for the efficient acquisition of hand and gaze representations. The implications go beyond the specific tasks, by showing how domain-specific “proto concepts” can guide the system to acquire meaningful concepts, which are significant to the observer but statistically inconspicuous in the sensory input.

 

This research seems to illuminate the problem of newborn learning – how much knowledge of the world is innate and how much is learned? For example, do we need an inherited language module to learn language? In the case of hands and gaze, babies seem to need only a very simple concept/motivation to begin their learning – the concept of a ‘mover’ and the motivation to follow movers. A very small input of innate knowledge can start a baby off in learning, if it is the right little bit.

 

Unconscious language and math

This paper (citation below) starts from an assumption the authors call the modal view: “It is not surprising then that the modal view holds that the semantic processing of multiple-word expressions and performing of abstract mathematical computations require consciousness (reason: they are human skills). In more general terms, sequential rule-following manipulations of abstract symbols are thought to lie outside the capabilities of the human unconscious.” The authors intend to weaken this modal view.

 

They point out that previous experiments have shown unconscious processing of single words and numbers, simple arithmetic facts, and additions with no number over 6. But more demanding tasks have not been shown to be possible unconsciously. The paper attempts to demonstrate more demanding unconscious cognition: “we argue that people can semantically process multiple-word expressions and that they can perform effortful arithmetic computations outside of conscious awareness.”

 

What is different in their experiments is that unconscious processing is given some time.

In all of our experiments, we use Continuous Flash Suppression (CFS), a cutting edge masking technique that allows subliminal presentations that last seconds. CFS is a game changer in the study of the unconscious, because unlike all previous methods, it gives unconscious processes ample time to engage with and operate on subliminal stimuli. Indeed, in the present set of experiments, we show that humans can semantically process subliminal multiple-word expressions and that they can nonconsciously solve effortful arithmetic equations.

CFS consists of a presentation of a target stimulus to one eye and a simultaneous presentation of rapidly changing masks to the other eye. The rapidly changing masks dominate awareness until the target breaks into consciousness. Importantly, this suppression may last seconds. We used this technique in two different ways. In the first section, the critical dependent variable was the time that it took the stimuli to break suppression and pop into consciousness (popping time). In the second section, we used masked expressions as primes and measured their influence on consequent judgments. Objective and subjective measures ensured unawareness of the primes.
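The logic of the first, breakthrough-time section can be sketched computationally. Below is an illustrative analysis (my own sketch, not the authors’ code, and all popping times are invented): compare mean breakthrough times between two CFS conditions with a Welch t statistic.

```python
# Illustrative sketch of a popping-time comparison between two CFS
# conditions.  All data here are synthetic; this is not the authors'
# analysis pipeline.
import math
from statistics import mean, variance

def welch_t(a, b):
    """Welch's t statistic for two independent samples."""
    va, vb = variance(a), variance(b)   # sample variances
    na, nb = len(a), len(b)
    return (mean(a) - mean(b)) / math.sqrt(va / na + vb / nb)

# Hypothetical breakthrough ("popping") times in seconds, one per trial.
incoherent = [1.10, 1.25, 0.95, 1.05, 1.20, 1.00]   # e.g. "I ironed coffee"
coherent   = [1.45, 1.60, 1.30, 1.50, 1.55, 1.40]   # e.g. "I made coffee"

t = welch_t(incoherent, coherent)
print(f"mean incoherent: {mean(incoherent):.2f}s, "
      f"mean coherent: {mean(coherent):.2f}s, t = {t:.2f}")
```

A negative t here would correspond to the paper’s finding: the incoherent expressions break suppression earlier, which can only happen if their incoherence was registered before awareness.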

 

The results were that semantically incoherent expressions popped into awareness before coherent ones, showing that unconscious processing must have detected the incoherence of the multiple-word expressions. More negative expressions popped faster than non-negative ones, indicating unconscious processing of an expression’s emotional tone. When subjects were unconsciously primed with three-term subtraction equations, numbers matching the equation’s answer popped earlier than other numbers, implying that the equation was solved unconsciously. Under slightly different conditions, addition equations also appeared to be solved unconsciously. “These data show that unconscious processes can perform sequential rule-following manipulations of abstract symbols—an ability that, to date, was thought to belong to the realm of conscious processing.”

 

Their conclusion:

To conclude, research conducted in recent decades has taught us that many of the high-level functions that were traditionally associated with consciousness can occur nonconsciously … for example, learning, forming intuitions that determine our decisions, executive functions, and goal pursuit. Here, we showed that uniquely human cultural products, such as semantically processing a number of words and solving arithmetic equations, do not require consciousness. These results suggest that the modal view of consciousness and the unconscious, a view that ties together (our unique) consciousness with (humanly unique) capacities, should be significantly updated.

 

I have a problem with both the modal view and the conclusions of this research group. There is an assumption that consciousness is a cognitive process rather than just a memory and awareness process. Once this assumption is made, it is reasonable to come to their conclusions. What I believe may be happening is that the difference lies not in the number of steps or the complexity of the cognition but in whether working memory and/or global access is required. If the actual cognition were a conscious function, then we really should be aware of that cognition; we should be able to experience the nitty gritty of the process. Instead we get the sub-results of cognition as each step is solved, because each sub-result needs to be in working memory. Learning, practice, habit etc. can change the size/complexity of the steps so that more can be done without recourse to working memory.

 

As far as consciousness being uniquely human – this notion is dead but has not quite been put in its grave yet.

ResearchBlogging.org

Sklar, A., Levy, N., Goldstein, A., Mandel, R., Maril, A., & Hassin, R. (2012). Reading and doing arithmetic nonconsciously. Proceedings of the National Academy of Sciences. DOI: 10.1073/pnas.1211645109


Reality that matters

I once heard a programmer state that he could do a better job of vision than the brain does. This was because he would be accurate with the colour and brightness of the image, so that two identical spots in a scene would be identical in the image of the scene. I was shocked at this misunderstanding of vision. Vision is not about this sort of accuracy, but about a different and more important sort of accuracy.

 

A good example is the definition at the start of the Wikipedia article on optical illusions (here): “an optical illusion is characterized by visually perceived images that differ from objective reality.” This assumes that the illusion is a less accurate idea of what is real than some other method would give. This is silly.

 

If I want to play chess on a black and white board (like in the Wikipedia page), I am interested in which squares are black and which are white – the nature of their pigment. I do not want this understanding of the board to change as a shadow passes over it. Not being able to see the constant ‘reality’ of the board would make playing chess very difficult. On the other hand I do not give a fig for the exact number of photons coming from any point on the board.

 

My being able to live in the world depends on seeing it with brightness and colour constancy. I want to see the tiger in the orange dusk light and also in the bluish moonlight. I am not interested in the actual amounts of each frequency reflected from the animal; I want to recognize the reality of a cat that does not change colour with the time of day. It should stay its real orange and black stripes.
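This kind of constancy can be sketched computationally. Here is a minimal von Kries-style illustration (my own sketch; the tiger’s reflectance and the light colours are invented numbers): the raw sensor values change wildly with the light, but discounting the illuminant recovers a stable surface colour.

```python
# Minimal von Kries-style illuminant discounting (illustrative only).
# A sensor records surface_reflectance * illuminant per channel; dividing
# out an estimate of the illuminant recovers a colour that stays the same
# as the light changes -- the kind of constancy vision cares about.

def discount_illuminant(recorded_rgb, illuminant_rgb):
    return tuple(c / i for c, i in zip(recorded_rgb, illuminant_rgb))

tiger_reflectance = (0.9, 0.5, 0.1)   # orange-ish surface (made up)

dusk = (1.0, 0.7, 0.4)                # orange-ish light
moon = (0.4, 0.5, 0.8)                # bluish light

seen_at_dusk = tuple(r * l for r, l in zip(tiger_reflectance, dusk))
seen_by_moon = tuple(r * l for r, l in zip(tiger_reflectance, moon))

# Raw sensor values differ a lot between the two lights ...
print(seen_at_dusk, seen_by_moon)
# ... but after discounting each illuminant the recovered colour is the same.
print(discount_illuminant(seen_at_dusk, dusk))
print(discount_illuminant(seen_by_moon, moon))
```

The point of the sketch is that the stable quantity – the reflectance – is the “reality that matters”, not the photon counts the programmer wanted to reproduce exactly.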

 

So what is the objective reality that vision is trying to show? It is the location, shape and chemical/physical nature of the surfaces of matter – the matter itself, and not what may be registered by an instrument under some arbitrary condition. We want to know about the thing and not its reflection. Its reflection is only useful in what it can tell us about the thing. The thing is the level of reality that is useful to us.

 

The same goes for memory and the ‘kluge’ idea. Our memory would be less useful if it were stored as a permanent record like a video recording. A video-like record would not be a good idea, and it is not a fault that our memories change. I do not want to have memories with the insight of a 10 year old when I am 70. Our memory is about understanding and learning, not about creating an accurate, complete autobiography.

 

 


 

Creating a mind is not near

It’s not exactly a surprise but Gary Marcus (author of Kluge: The Haphazard Evolution of the Human Mind) does not see eye to eye with Ray Kurzweil (author of The Singularity is Near). In the New Yorker (here), Marcus reviews Kurzweil’s new book, How to Create a Mind: The Secret of Human Thought Revealed.


In Marcus’ view, Kurzweil puts forward ideas but does not back them up with evidence – to Kurzweil the ideas are obvious. He presents the ‘pattern recognition theory of mind’. Marcus describes this thus, “the part of the brain that is most associated with reasoning and conscious thought, the neocortex, is seen as a hierarchical set of pattern-recognition devices, in which complex entities are recognized as a statistical function of their constituent parts.” Marcus has problems with this theory.

We all know that one thing the brain is very good at is pattern recognition – better than our computers at it. But saying that the brain is primarily doing pattern recognition is like saying that a car is primarily doing steering. Memory is more than pattern recognition; motivation, motor control, mood and a list of other things are not primarily pattern recognition. This is just not a reasonable way to look at the brain – there is more to the brain than the neocortex, there are more functions than perception, the processes are more complex than algorithms (and certainly there are more processes than just one).

I look forward to the day when we have a mockup of a brain on a computer. But I think we are going to get there using methods like Markram’s, following the biological trail. If someone builds a great computer program that does something well, that is great. But don’t pretend that it says something about our brains unless it actually does say something about our brains.

The Marcus review just confirms my prior characterization of Kurzweil and so I will not be taking the time to read the book. I’m not interested in ‘writing in the sky with a pitchfork’ as an old saying goes.


Word retrieval

When we attempt to find the word for something, related words are also accessed (as in word association, priming, Freudian slips, and simple errors). But these related words are of two types, taxonomic and thematic:

Across all types of speakers and all manner of testing, semantic naming errors overwhelmingly reflect taxonomic relations; that is, the predominant error is a category coordinate (apple named as “pear” or “grape”), superordinate (apple → “fruit”), or subordinate (apple → “Granny Smith”). A small subset are thematic errors, such as apple → “worm” or bone → “dog,” in which the target and error are from different taxonomic categories but frequently play complementary roles in the same actions or events.

 

Does this reflect a difference in semantic memory for the two types or not? The researchers of a recent paper, Schwartz et al. (see citation below), compared the errors made by stroke victims with the locations of their brain damage to show a difference between taxonomic and thematic storage in the brain. Their results:

We found that taxonomic errors localized to the left anterior temporal lobe and thematic errors localized to the left temporoparietal junction. This is an indication that the contribution of these regions to semantic memory cleaves along taxonomic-thematic lines. Our findings show that a distinction long recognized in the psychological sciences is grounded in the structure and function of the human brain.

 

What is the relationship between these two ways of retrieving the right word?

Although many thematic errors in our corpus do involve objects with complementary functions in action events (dog → “bone”; zipper → “jacket”), many others are linked by other types of relation, such as spatial relations (e.g., anchor → “sea”) or causal relations (e.g., ambulance → “fire”). This goes along with a broader role for this TPJ area in the representation of relational information, which may be what undergirds its essential contribution to sentence comprehension. We suggest that in the process of identifying an object for naming, relevant event representations are retrieved or simulated that create a momentary linkage between the target concept and others in the event context. This process probably takes place bilaterally in the TPJ, but it is the component on the left that conveys information about these linked concepts to left-lateralized lexical-phonological systems. Lesions here render this communication noisier or less precise, thereby reducing the natural advantage of the target concept over its contextual associates and encouraging an error in which one of these associates is named in place of the target. …

we propose that the ATL and TPJ are each multimodal hubs that extract somewhat different relationships. The ATL extracts perceptual feature similarity for the purpose of object processing, whereas the TPJ extracts role relations for the purpose of event processing. The ATL system is the dominant one in naming, which explains why taxonomic errors predominate over thematic errors.

 

I find this very interesting in the context of how we make/understand sentences and how we reason in metaphors. The separation of conceptual structures from the elements that comprise them is indicated. It seems like something deep about how we think is nearing the surface.

 


Schwartz, M., Kimberg, D., Walker, G., Brecher, A., Faseyitan, O., Dell, G., Mirman, D., & Coslett, H. (2011). Neuroanatomical dissociation for taxonomic and thematic knowledge in the human brain. Proceedings of the National Academy of Sciences, 108 (20), 8520-8524. DOI: 10.1073/pnas.1014935108


Falling into unconsciousness

The MIT news site (here) has an item on research by Patrick Purdon and others, Rapid fragmentation of neuronal networks at the onset of propofol-induced unconsciousness, published in PNAS. They found that communication between brain areas became very local under anesthetic.

By monitoring brain activity as patients were given a common anesthetic, the researchers were able to identify a distinctive brain activity pattern that marked the loss of consciousness. This pattern, characterized by very slow oscillation, corresponds to a breakdown of communication between different brain regions, each of which experiences short bursts of activity interrupted by longer silences.

This is another example of research on epileptic patients awaiting surgery with electrodes implanted in their brains.

Using two different-sized electrodes, the researchers were able to obtain two different readings of brain activity. The larger electrodes, slightly bigger than a penny, were spaced about a centimeter apart and recorded the overall EEG, or brain-wave pattern. Smaller electrodes, in an array only 4 millimeters wide, recorded from individual neurons, marking the first time anyone has recorded from individual neurons in human patients as they lost consciousness. Between 50 and 100 electrodes were implanted in each patient, clustered in different regions.

The larger electrodes showed that the fall into unconsciousness is marked by an abrupt change to slow waves of about 1 hertz in the EEG. But the smaller electrodes showed that individual neurons were still active.

Individual neurons revealed that within localized brain regions, neurons were active for a few hundred milliseconds, then shut off again for a few hundred milliseconds. This “flickering” of activity created the slow oscillation seen in the EEG. When one area was active, it was likely that another brain area that it was trying to communicate with was not active. Even when the neurons were on, they still couldn’t send information to other brain regions. … When consciousness is lost, there may still be information coming into the brain, but that information is remaining localized and doesn’t get integrated into a coherent picture. … Failure of information integration had previously been suggested as a possible mechanism for loss of consciousness, but no one was sure how that might happen. This finding really narrows it down quite a bit. It really does, in a very fundamental way, constrain the possibilities of what the mechanisms could be.
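The flickering description above can be made concrete with a toy simulation (my own sketch, not the paper’s model; the timings are invented round numbers): two regions each flicker ON for a few hundred milliseconds per one-second cycle, and if their slow oscillations are out of phase, the windows in which both are active – the only windows in which they could exchange information – nearly vanish.

```python
# Toy sketch of the fragmentation idea.  Each region is ON for 300 ms and
# OFF for 700 ms of every 1000 ms slow-oscillation cycle.  Communication
# is only possible when both regions happen to be ON at once.

def active(t_ms, phase_ms, on_ms=300, period_ms=1000):
    """True if the region is in the ON window of its slow oscillation."""
    return (t_ms + phase_ms) % period_ms < on_ms

def overlap_fraction(phase_a, phase_b, duration_ms=10_000):
    """Fraction of time during which both regions are ON."""
    both = sum(active(t, phase_a) and active(t, phase_b)
               for t in range(duration_ms))
    return both / duration_ms

print("in phase:     ", overlap_fraction(0, 0))     # 0.3
print("out of phase: ", overlap_fraction(0, 500))   # 0.0
```

With synchronized oscillations the regions share 30% of the time; with a half-cycle offset they share none – a cartoon of how asynchronous slow oscillations could functionally isolate intact local networks.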

Here is the abstract:

The neurophysiological mechanisms by which anesthetic drugs cause loss of consciousness are poorly understood. Anesthetic actions at the molecular, cellular, and systems levels have been studied in detail at steady states of deep general anesthesia. However, little is known about how anesthetics alter neural activity during the transition into unconsciousness. We recorded simultaneous multiscale neural activity from human cortex, including ensembles of single neurons, local field potentials, and intracranial electrocorticograms, during induction of general anesthesia. We analyzed local and global neuronal network changes that occurred simultaneously with loss of consciousness. We show that propofol-induced unconsciousness occurs within seconds of the abrupt onset of a slow (<1 Hz) oscillation in the local field potential. This oscillation marks a state in which cortical neurons maintain local patterns of network activity, but this activity is fragmented across both time and space. Local (<4 mm) neuronal populations maintain the millisecond-scale connectivity patterns observed in the awake state, and spike rates fluctuate and can reach baseline levels. However, neuronal spiking occurs only within a limited slow oscillation-phase window and is silent otherwise, fragmenting the time course of neural activity. Unexpectedly, we found that these slow oscillations occur asynchronously across cortex, disrupting functional connectivity between cortical areas. We conclude that the onset of slow oscillations is a neural correlate of propofol-induced loss of consciousness, marking a shift to cortical dynamics in which local neuronal networks remain intact but become functionally isolated in time and space.

Visual working memory

An item in ScienceDaily (here) reports on a paper by Salazar, Gray and others, Content-Specific Fronto-Parietal Synchronization During Visual Working Memory in Science Express.

 

They looked at visual short-term memory in the monkey brain by recording neuronal activity. The monkeys had to remember an object or location during a delay period and determine whether it matched a later stimulus; they were rewarded if correct. For each object, the pattern of synchronous activity between the neurons was noted.

Brain waves of many neurons in the two hubs, called the prefrontal cortex and posterior parietal cortex, synchronized to varying degrees — depending on an object’s identity. This and other evidence indicated that neurons in these hubs are selective for particular features in the visual field and that synchronization in the circuit carries content-specific information that might contribute to visual working memory.
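Synchrony of this kind is commonly quantified with a phase-locking value (PLV): the magnitude of the average phase difference between two signals, 1.0 for perfect locking and near 0 for none. Here is a minimal stdlib-only sketch (my illustration, not the authors’ analysis; the phase data are random):

```python
# Phase-locking value (PLV) between two phase series: take the phase
# difference on each sample, represent it as a unit complex number, and
# measure how tightly those unit vectors cluster.
import cmath, math, random

def plv(phases_a, phases_b):
    diffs = [cmath.exp(1j * (a - b)) for a, b in zip(phases_a, phases_b)]
    return abs(sum(diffs) / len(diffs))

random.seed(0)
n = 1000
locked_a = [random.uniform(0, 2 * math.pi) for _ in range(n)]
locked_b = [p + 0.5 for p in locked_a]      # constant phase lag -> locked
unlocked = [random.uniform(0, 2 * math.pi) for _ in range(n)]

print(f"locked pair:   PLV = {plv(locked_a, locked_b):.2f}")   # 1.00
print(f"unlocked pair: PLV = {plv(locked_a, unlocked):.2f}")   # near 0
```

Content-specific synchrony, as in the study, would mean the degree of locking between the two areas differs systematically with the remembered object.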

The researchers also determined that the parietal cortex was more influential than the prefrontal cortex in driving this process. Previously, many researchers had thought that the firing rate of single neurons in the prefrontal cortex, the brain’s executive, is the major player in working memory.

 

The location and identity of the objects was represented by the pattern of synchronous waves between the parietal and prefrontal cortex.

 

Here is the abstract:

Lateral prefrontal and posterior parietal cortical areas exhibit task-dependent activation during working memory tasks in humans and monkeys. Neurons in these regions become synchronized during attention-demanding tasks, but the contribution of these interactions to working memory is largely unknown. Using simultaneous recordings of neural activity from multiple areas in both regions, we find widespread, task-dependent, and content-specific synchronization of activity across the fronto-parietal network during visual working memory. The patterns of synchronization are prevalent among stimulus-selective neurons and are governed by influences arising in parietal cortex. These results indicate that short-term memories are represented by large-scale patterns of synchronized activity across the fronto-parietal network.


 

Correlates of the return of consciousness

I dealt with this paper, citation below, when it came out last May, but now Deric Bownds’ blog (here) links to it. I am revisiting the paper, though not in full – for that, see the previous post (other post).

 

Three aspects seem very important: the difference between the state of consciousness and the contents of consciousness; how little activity of the neocortex is needed for the state of consciousness as opposed to the contents; and, the importance of the thalamus to consciousness.

 

Whereas theories on the particular contents of consciousness, such as visual consciousness, argue for the importance of cortical structures, theories focusing on consciousness as a state stress the importance of subcortical or thalamocortical structures. Having awareness of the environment or of one’s self is fundamentally based on being in a conscious state. There is limited human data on which brain structures engage to serve this foundation of consciousness. This study was designed to reveal the minimal neural correlates associated with a conscious state. …

The recovery from anesthesia does not occur all at once, but rather it appears to occur in a bottom-up manner. When emerging from deep anesthesia there will first be signs of autonomic arousal, followed by a slow return of brainstem reflexes, eventually leading to reflexive or uncoordinated somatic movements that occur somewhat before subjects can willfully respond to simple commands. As shown in our results, only minimal cortical activity is necessary at this point. Thus, emergence of a conscious state, the essential foundation of consciousness, precedes the full recovery of neocortical processing required for rich conscious experiences. We propose that the failure of simple processed EEG monitoring technology to detect patient awareness during anesthesia is at least partly due to the aforementioned pattern of brain arousal. …

The structures that activated when consciousness resumed were the brainstem, the thalamus, the hypothalamus, and the ACC (anterior cingulate cortex).

 

 


Langsjo, J., Alkire, M., Kaskinoro, K., Hayama, H., Maksimow, A., Kaisti, K., Aalto, S., Aantaa, R., Jaaskelainen, S., Revonsuo, A., & Scheinin, H. (2012). Returning from Oblivion: Imaging the Neural Core of Consciousness. Journal of Neuroscience, 32 (14), 4935-4943. DOI: 10.1523/JNEUROSCI.4962-11.2012


Temporal binding due to causality

ScienceDaily has an item (here) reporting a paper by Marc J. Buehner, Understanding the Past, Predicting the Future: Causation, not Intentional Action, is the Root of Temporal Binding, in Psychological Science.

 

When events happen close together in time and space, they can be bound together as part of the same meaningful episode. Furthermore, they are perceived to be closer together in time. This has been called temporal binding.

Research has shown that our perceptual system seems to pull causally-related events together — compared to two events that are thought to happen of their own accord, we perceive the first event as occurring later if we think it is the cause and we perceive the second event as occurring earlier if we think it is the outcome.

 

This has been thought to be due to motor intention.

Some researchers have hypothesized that our perceptual system binds events together if we perceive them to be the result of intentional action, and that temporal binding results from our ability to link our actions to their consequences.

 

Buehner questioned this hypothesis.

“We already know that people are more likely to infer a causal relation if two things are close in time. It follows, via Bayesian calculus, that the reverse should also be true: If people know two things are causally related, they should expect them to be close in time,” Buehner says. “Time perception is inherently uncertain, so it makes sense for systematic biases in the form of temporal binding to kick in. If this is true, then it would suggest that temporal binding is a general phenomenon of which intentional action is just a special case.”
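Buehner’s Bayesian point can be sketched with a toy discrete model (my own illustration, not his model; the prior, noise level and numbers are all invented): if believing two events are causally related supplies a prior that favours short delays, then the posterior estimate of a noisily perceived interval is pulled earlier – a “time warp” toward binding.

```python
# Toy Bayesian sketch of temporal binding.  The observer estimates the
# true delay between two events from a noisy reading.  A causal belief
# supplies a prior favouring short delays, shifting the estimate earlier.
import math

def posterior_mean(observed_ms, prior, noise_sd=100.0):
    delays = range(0, 1001, 10)                  # candidate true delays (ms)
    def likelihood(d):                           # Gaussian perceptual noise
        return math.exp(-((observed_ms - d) ** 2) / (2 * noise_sd ** 2))
    weights = [likelihood(d) * prior(d) for d in delays]
    total = sum(weights)
    return sum(d * w for d, w in zip(delays, weights)) / total

flat_prior = lambda d: 1.0                       # no causal belief
causal_prior = lambda d: math.exp(-d / 200.0)    # causal: short delays likely

observed = 500  # ms
print(f"no causal belief:   {posterior_mean(observed, flat_prior):.0f} ms")
print(f"with causal belief: {posterior_mean(observed, causal_prior):.0f} ms")
```

With the flat prior the estimate sits at the observed 500 ms; with the causal prior it shifts earlier, which is the direction of the temporal binding effect regardless of whether an intention was involved.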

 

He compared time estimates for baseline events, events caused by the subject, and events caused by a machine. The time estimates for events caused by the subject and by a machine were the same, and differed from those for the baseline events, which were not causally related. Intentionality was not the cause of the binding; perceived causality was. “Causation instills a subjective time warp in people’s minds.”

Two ways of thinking

ScienceDaily has an item (here) on a paper by Anthony Jack and others entitled fMRI reveals reciprocal inhibition between social and physical cognitive domains in NeuroImage.

The researchers found that two modes of thinking (analytical and social) are mutually exclusive. Our brains can figuratively change configuration between a logical and an empathetic way of thinking but cannot do both at once. Normally, when not engaged in a task, the brain may cycle between the two modes of thought, but when faced with a particular task it chooses the more appropriate mode. An example is: “How could a CEO be so blind to the public relations fiasco his cost-cutting decision has made? When the analytic network is engaged, our ability to appreciate the human cost of our action is repressed.” Is this why economists, used to numeric solutions, find utilitarian logic attractive for moral problems?

The origin of the idea is interesting.

Jack said that a philosophical question inspired the study design: “The most persistent question in the philosophy of mind is the problem of consciousness. Why can we describe the workings of a brain, but that doesn’t tell us what it’s like to be that person? The disconnect between experiential understanding and scientific understanding is known as the explanatory gap. In 2006, the philosopher Philip Robbins and I got together and we came up with a pretty crazy, bold hypothesis: that the explanatory gap is driven by our neural structure. I was genuinely surprised to see how powerfully these findings fit that theory.”…

“We see neural inhibition between the entire brain network we use to socially, emotionally and morally engage with others, and the entire network we use for scientific, mathematical and logical reasoning. … This shows scientific accounts really do leave something out — the human touch. A major challenge for the science of the mind is how we can better translate between the cold and distant mechanical descriptions that neuroscience produces, and the emotionally engaged intuitive understanding which allows us to relate to one another as people.”

Of course, as usual, I find just two configurations a little too pat. Let’s assume a more nuanced set of configurations but difficulty in using two or more at a time. Maybe two major types with variations or something like that will turn out to be a better model after more research.

Here is the abstract:

Two lines of evidence indicate that there exists a reciprocal inhibitory relationship between opposed brain networks. First, most attention-demanding cognitive tasks activate a stereotypical set of brain areas, known as the task-positive network and simultaneously deactivate a different set of brain regions, commonly referred to as the task negative or default mode network. Second, functional connectivity analyses show that these same opposed networks are anti-correlated in the resting state. We hypothesize that these reciprocally inhibitory effects reflect two incompatible cognitive modes, each of which is directed towards understanding the external world. Thus, engaging one mode activates one set of regions and suppresses activity in the other. We test this hypothesis by identifying two types of problem-solving task which, on the basis of prior work, have been consistently associated with the task positive and task negative regions: tasks requiring social cognition, i.e., reasoning about the mental states of other persons, and tasks requiring physical cognition, i.e., reasoning about the causal/mechanical properties of inanimate objects. Social and mechanical reasoning tasks were presented to neurologically normal participants during fMRI. Each task type was presented using both text and video clips. Regardless of presentation modality, we observed clear evidence of reciprocal suppression: social tasks deactivated regions associated with mechanical reasoning and mechanical tasks deactivated regions associated with social reasoning. These findings are not explained by self-referential processes, task engagement, mental simulation, mental time travel or external vs. internal attention, all factors previously hypothesized to explain default mode network activity. Analyses of resting state data revealed a close match between the regions our tasks identified as reciprocally inhibitory and regions of maximal anti-correlation in the resting state. These results indicate the reciprocal inhibition is not attributable to constraints inherent in the tasks, but is neural in origin. Hence, there is a physiological constraint on our ability to simultaneously engage two distinct cognitive modes. Further work is needed to more precisely characterize these opposing cognitive domains.