Not exactly mind-reading

I don’t pretend to understand the computations used in this study, only the general idea. The results are both a lot more and a lot less than they appear. This is the group that was able to fairly accurately identify a black-and-white photo that a subject in an fMRI scanner was looking at. They now attempt to identify a short movie clip that is being watched. There is an enormous problem here because the fMRI signal is associated with blood flow and is too slow to keep up with moving images. They manage to overcome this problem of speed with some mathematical cleverness which I don’t understand. This gives a coded output of brain activity for a particular short movie clip (not the original scan but a derived, coded one).

 

Having this coding method, they used it in a big way. For each subject, many thousands of such short clips were viewed and the coded fMRI scan for each clip-subject combination was stored. The subjects then view a target clip, not used in the previous scans. This scan is coded and then compared with the enormous bank of coded scans from the library of clip, subject, and coded-scan triplets. The 100 clips with the most similar coded scans are averaged to give a composite movie clip. This tends to be fuzzy but with a resemblance to the target clip. (link to clips)
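As I understand it, the matching step amounts to a nearest-neighbour search followed by averaging. Here is a minimal sketch in Python of that idea only – the actual paper uses a Bayesian decoder with a motion-energy encoding model, and every name in this sketch is my own invention:

```python
import numpy as np

def reconstruct(target_code, library_codes, library_clips, k=100):
    """Average the k library clips whose coded scans best match the target's.

    target_code   : 1-D coded scan for the target clip
    library_codes : (n_clips, n_features) coded scans for the library clips
    library_clips : (n_clips, frames, height, width) pixel data
    """
    # Plain correlation stands in for the paper's Bayesian similarity measure.
    sims = np.array([np.corrcoef(target_code, c)[0, 1] for c in library_codes])
    top = np.argsort(sims)[-k:]              # indices of the k best matches
    return library_clips[top].mean(axis=0)   # the fuzzy composite clip
```

Averaging a hundred roughly similar clips is exactly what produces the fuzziness of the published composites.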

 

The videos of the composite movie clips are somewhat misleading if one is unaware of how they were constructed. They are coloured because the library clips are coloured, but no colour information was included in the coding process. The colour is an artifact of combining clips. It has the effect of enhancing the impression that actual qualia are being extracted from the scanner – a very, very misleading impression that is hard to shake.

 

Of the video comparisons published, the most successful are of people who are not moving much. This may be an artifact of the clips that were used to produce the library. It may also be a result of the process itself. I have the notion that movement is not being captured very accurately unless it is slow and involves large rather than small objects or elements.

 

Don’t miss a look at the clips if you have not done so already (there is a link above). This is not mind-reading but it is a definite achievement. Those interested in the mathematical nitty-gritty should read xcorr’s posting.

 

ResearchBlogging.org

Nishimoto, S., Vu, A., Naselaris, T., Benjamini, Y., Yu, B., & Gallant, J. (2011). Reconstructing visual experiences from brain activity evoked by natural movies. Current Biology. DOI: 10.1016/j.cub.2011.08.031

Interesting description

In a recent article on the Scientific American site (here) I noticed this quote from Peter Watts and found it quite eloquent in its own way.

“Consciousness continues to confound us on all fronts — we haven’t even established what it’s good for,” Watts says. “It’s slow, metabolically expensive, and — as far as we can tell — unnecessary for intelligence. More fundamentally, we don’t have a clue how it works — how can the electrical firing of neurons produce the subjective sense of self? How can a bunch of ions hopping the synaptic gap result in the sense of this little thing behind the eyes that calls itself ‘I?’”

“One thing we have discovered is that consciousness involves synchrony — groups of neurons firing in sync throughout different provinces of the brain,” he says. “Something else we’ve known for some time is that when you split the brain down the middle — force the hemispheres to talk the long way around, via the lower brain, instead of using the fat high-bandwidth pipe of the corpus callosum — you end up with not one conscious entity but two, and those two entities develop different tastes, opinions, even different religious beliefs.”

“What this seems to point to is that consciousness is a function of latency — it depends upon the synchronous firing of far-flung groups of neurons, and if it takes too long for signals to cross those gaps, consciousness fragments. ‘I’ decoheres into ‘we,’” Watts says.

Learning to see events

ScienceDaily has a report (here) on infants learning how to divide continuous movement into discrete events. The research is published in Psychological Science: Roseberry S, Richie R, Hirsh-Pasek K, Shipley T and Golinkoff R, Watching the world in motion, babies take a first step toward language (2011).

Do babies, before they have language, divide motion into the sort of events that we have words for, like sit down, jump, walk? This is the question of how we learn to divide reality ‘at the joints’.

Infants use “statistical learning” — they compute the likelihood that one event follows another and use that information to predict future events, says Sarah Roseberry. Based on these probabilities, infants find boundaries between events, a critical step for learning words.
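The idea can be illustrated with transitional probabilities: estimate how likely each sub-event is to follow another, and place a boundary wherever a transition is improbable. This is my own toy illustration (with letters standing in for observed movements), not the study’s analysis:

```python
from collections import Counter

def transition_probs(sequence):
    """Estimate P(next | current) from an observed sequence of sub-events."""
    pair_counts = Counter(zip(sequence, sequence[1:]))
    first_counts = Counter(sequence[:-1])
    return {(a, b): n / first_counts[a] for (a, b), n in pair_counts.items()}

def boundaries(sequence, probs, threshold=0.5):
    """Mark an event boundary wherever the observed transition is improbable."""
    return [i + 1 for i, pair in enumerate(zip(sequence, sequence[1:]))
            if probs.get(pair, 0.0) < threshold]

# "ab" always occur together, as do "cd"; the rare b->c transition
# is where the learner should place the boundary between the two events.
sequence = "abababcdcdcd"
```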

The babies were shown videos of a repeated routine of hand and arm movements. The same video was then re-cut to change the actions. The babies preferred to look at the original rather than the re-cut video.

Other research has shown that babies use statistics to find the boundaries between syllables in the language they hear, and that they track probabilities in series of static pictures — say, a triangle, a diamond, and a square. But this study is the first to observe statistical learning with “continuous, dynamic events,” say the authors.

Roseberry says the work adds to a growing understanding of the earliest building blocks of language. “Although these babies were between just 7 and 9 months of age, they were already dividing the world into events” using the “tool” of statistical learning. “It is these events that will be named with words,” she continues. “A few months later, when they can hook up words to the events they see, they will begin to use language.”

This fits with a particular way of modeling the chunking of consciousness into episodic memory in adults. The model assumes that consciousness is continually predicting what comes next. If the prediction is wrong, then the event is closed and remembered, working memory is cleared, and a new event is started. Reality is being divided at the point of the unexpected – at the joint. Watch a person walking: the next predicted move is left foot forward; instead the motion stops; the walk event ends and a new event starts; the new event becomes a turn to the right.
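This prediction-failure model of event chunking can be sketched as a tiny algorithm (my own illustration of the idea; the predict function is a hypothetical stand-in for whatever the brain actually uses):

```python
def segment_by_prediction(stream, predict):
    """Close an event and start a new one whenever a prediction fails."""
    events, current = [], [stream[0]]
    for item in stream[1:]:
        if item == predict(current):
            current.append(item)       # expected: the current event continues
        else:
            events.append(current)     # surprise: close and remember the event
            current = [item]           # ...and a new event begins
    events.append(current)
    return events

# Walking example: predict that the motion just seen will continue.
walk = ["step", "step", "step", "stop", "turn", "turn"]
events = segment_by_prediction(walk, lambda context: context[-1])
# The stop and the turn each begin a new event.
```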

Here we go again

Sometime soon I will have to stop rising to what is said about free will. I started this posting in response to K. Smith’s article (see citation below) because the article seemed likely to confuse rather than to clarify. I had almost finished when I read the sensible blog by Bjoern Brembs (here), with which I largely agree. It did not cover all my problems with Smith’s article, but enough to make me simply change course. I confined myself to one interesting passage rather than the rest of the badly done article.

 

Haynes’s research and its possible implications have certainly had an effect on how he thinks. He remembers being on a plane on his way to a conference and having an epiphany. “Suddenly I had this big vision about the whole deterministic universe, myself, my place in it and all these different points where we believe we’re making decisions just reflecting some causal flow.” But he couldn’t maintain this image of a world without free will for long. “As soon as you start interpreting people’s behaviours in your day-to-day life, it’s virtually impossible to keep hold of,” he says.

 

The Haynes quote is interesting. I have had such moments of clarity from time to time, where I am part of a ‘causal flow’ (although I would describe myself and my surroundings as physical reality rather than a deterministic universe). I would be very surprised if this were a rare image, although it is probably not common either.

 

Here is an aside – as regular readers of my blog will know, I disagree with determinism as much as I disagree with free will. Here is a condensed version for those who have not encountered previous posts on this subject on my site. Both free will and determinism are flawed, outdated ideas. I think that we make decisions using only material brains and physical processes, but the decisions are not very predictable and we are usually responsible for them. Causal does not necessarily mean predictable. I do not think that consciousness has much to do with decision making or any other cognitive process. But whether or not decisions contain some conscious processes along with unconscious ones, either way it does not affect our ownership of our decisions. I am more interested in my decisions being appropriate than free, and I am more interested in how decisions are made than in arguments about free will and determinism. This way of looking at things is partly due to an image similar to the one Haynes reports.

 

Besides the Haynes-like image, there is an image that I have much, much more often and that I find very comforting. I seem to be very whole and full of rhythms – heartbeat, breathing, walking, blinking – not distinct but intermingled. I seem in real contact with the world around me, with the thinnest of boundaries; almost nothing separates me from the rest of reality. On top of this feeling is something like a flickering movie screen – a thin, insubstantial stream of consciousness. It is very comforting to feel that I am more real and more substantial than that movie. I feel part of reality rather than free of it, and I do not feel like part of a clockwork type of reality but something far more complex. The image is gone when I try to actually live in the world – get things done, communicate in words and so on. The movie becomes the me-in-the-world. Now I’m in consciousness again, but also with a renewed deep trust in and identification with the unconscious part.

 

Of course such images are not very well expressed in words, but words are what we have – sorry. We also have science, so we do not have to rely on vague feelings but can work toward a more accurate understanding.

 


Smith, K. (2011). Neuroscience vs philosophy: Taking aim at free will. Nature, 477 (7362), 23-25. DOI: 10.1038/477023a

The mind’s touch

ScienceDaily has an item (here) reporting a paper by Damasio’s group, Seeing Touch is Correlated with Content-Specific Activity in Primary Somatosensory Cortex. They examined the touch equivalent of the mind’s eye.

“When asked to imagine the difference between touching a cold, slick piece of metal and the warm fur of a kitten, most people admit that they can literally ‘feel’ the two sensations in their ‘mind’s touch,’ ” said Meyer, the lead author of the study. “The same happened to our subjects when we showed them video clips of hands touching varied objects,” he said. “Our results show that ‘feeling with the mind’s touch’ activates the same parts of the brain that would respond to actual touch.”

Human brains capture and store physical sensations, and then replay them when prompted by viewing the corresponding visual image. “When you hold a thought in your mind about a particular object, that is not just mental fluff. It is rather a detailed memory file that is being revived in your brain,” Antonio Damasio said.

Here is the abstract:

There is increasing evidence to suggest that primary sensory cortices can become active in the absence of external stimulation in their respective modalities. This occurs, for example, when stimuli processed via one sensory modality imply features characteristic of a different modality; for instance, visual stimuli that imply touch have been observed to activate the primary somatosensory cortex (SI). In the present study, we addressed the question of whether such cross-modal activations are content specific. To this end, we investigated neural activity in the primary somatosensory cortex of subjects who observed human hands engaged in the haptic exploration of different everyday objects. Using multivariate pattern analysis of functional magnetic resonance imaging data, we were able to predict, based exclusively on the activity pattern in SI, which of several objects a subject saw being explored. Along with previous studies that found similar evidence for other modalities, our results suggest that primary sensory cortices represent information relevant for their modality even when this information enters the brain via a different sensory system.
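The multivariate pattern analysis described in the abstract amounts to training a classifier on SI activity patterns and testing whether it can predict which object was seen. A nearest-centroid decoder is about the simplest possible stand-in for such a classifier (a sketch of the general idea, not the authors’ actual method):

```python
import numpy as np

def train_centroids(patterns, labels):
    """Mean SI activity pattern per object: a 'template' for each class."""
    return {lab: patterns[labels == lab].mean(axis=0)
            for lab in np.unique(labels)}

def decode(pattern, centroids):
    """Predict which object was seen: the class with the nearest template."""
    return min(centroids, key=lambda lab: np.linalg.norm(pattern - centroids[lab]))
```

If the decoder performs above chance on held-out trials, the activity patterns in SI must carry content-specific information, which is the paper’s point.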

Buddhism and neuroscience

Seed has an article by David Weisman on whether and how Buddhism overlaps neuroscience. (here)

He says he started as a sceptic about this overlap, but had changed his mind.

Despite my doubts, neurology and neuroscience do not appear to profoundly contradict Buddhist thought. Neuroscience tells us the thing we take as our unified mind is an illusion, that our mind is not unified and can barely be said to “exist” at all. Our feeling of unity and control is a post-hoc confabulation and is easily fractured into separate parts. As revealed by scientific inquiry, what we call a mind (or a self, or a soul) is actually something that changes so much and is so uncertain that our pre-scientific language struggles to find meaning. Buddhists say pretty much the same thing. They believe in an impermanent and illusory self made of shifting parts. They’ve even come up with language to address the problem between perception and belief. Their word for self is anatta, which is usually translated as ‘non self.’ One might try to refer to the self, but the word cleverly reminds one’s self that there is no such thing. … Both Buddhism and neuroscience converge on a similar point of view: The way it feels isn’t how it is.

He sees the idea of human exceptionalism as the difference between Buddhism and the religions that cannot find common ground with neuroscience.

Early on, Buddhism grasped the nature of worldly change and divided parts, and then applied it to the human mind. The key step was overcoming egocentrism and recognizing the connection between the world and humans. We are part of the natural world; its processes apply themselves equally to rocks, trees, insects, and humans. Perhaps building on its heritage, early Buddhism simply did not allow room for human exceptionalism. … When Judeo-Christian belief conflicts with science, it nearly always concerns science removing humans from a putative pedestal, a central place in creation. (starting with the earth not being the center of the universe)

But he does point out that there is no overlap on the concept of reincarnation. He cannot see neuroscience ever being comfortable with a consciousness that can survive the death of the brain, and reincarnation is fairly deeply fixed in Buddhist thought. He does note, however, that the Dalai Lama is believed to be the Dalai Lama because he is the reincarnation of a line of Dalai Lamas – yet surprisingly he is planning to choose his successor before he dies, retire, and let the new Dalai Lama take over. That doesn’t sound much like reincarnation to me – surely one would have to be dead before being reincarnated.

Slowing perception down

According to one way of understanding perception, it would not be surprising if perception were completed before conscious awareness could contain the percept. Why is it important to examine this? So that experimental methods of assessing conscious awareness are valid. Gregori-Grgič, Balderi and de’Sperati look at this question (see citation below) by slowing the processes down.

 

Visual perception involves the organization of edges, corners, textures, colours and locations into objects in space. It also involves establishing the motion of these objects. A minimum of this work must be done before a ‘scene’ can be experienced. By degrading motion with noise, limiting the duration of observation, and asking for the direction of motion (or a guess at it) at particular times, they separated the ability to discriminate the direction of motion from the ability to report seeing the direction of motion.

 

The flourishing of studies on the neural correlates of decision-making calls for an appraisal of the relation between perceptual decisions and conscious perception. By exploiting the long integration time of noisy motion stimuli, and by forcing human observers to make difficult speeded decisions – sometimes a blind guess – about stimulus direction, we traced the temporal buildup of motion discrimination capability and perceptual awareness, as assessed trial by trial through direct rating. We found that both increased gradually with motion coherence and viewing time, but discrimination was systematically leading awareness, reaching a plateau much earlier. Sensitivity and criterion changes contributed jointly to the slow buildup of perceptual awareness. It made no difference whether motion discrimination was accomplished by saccades or verbal responses. These findings suggest that perceptual awareness emerges on the top of a developing or even mature perceptual decision. We argue that the middle temporal (MT) cortical region does not confer us the full phenomenic depth of motion perception, although it may represent a precursor stage in building our subjective sense of visual motion.
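The ‘sensitivity and criterion changes’ in the abstract are the two standard signal-detection measures. For reference, here is how they are conventionally computed from hit and false-alarm rates (the textbook formulas, not the authors’ analysis code):

```python
from statistics import NormalDist

def sdt(hit_rate, fa_rate):
    """Sensitivity (d') and criterion (c) from hit and false-alarm rates."""
    z = NormalDist().inv_cdf          # inverse of the standard normal CDF
    d_prime = z(hit_rate) - z(fa_rate)            # how separable the stimuli are
    criterion = -0.5 * (z(hit_rate) + z(fa_rate)) # how conservative the observer is
    return d_prime, criterion
```

A rising d′ over viewing time reflects growing sensitivity; a shifting c reflects a changing willingness to report having seen the motion – the paper found both contribute to the slow buildup of awareness.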

 

The authors do not see conscious perception as a point in time.

…This does not imply strictly serial processes, as the processes underlying discrimination and awareness can coexist in time (race model). Note that we are not suggesting to take the saturation of perceptual awareness as the temporal marker of the perceptual delay; rather, its gradual buildup suggests that the notion of a precise point in time where conscious perception is realized may be too strict, at least with our degraded motion stimuli.

 

Other experimenters should take note of the result.

In disentangling decision from conscious perception, our study warns against an indiscriminate use of monkeys’ saccadic eye movement as a proxy for conscious visual perception (e.g., for what monkeys ‘‘see’’), even when accuracy is rewarded. More generally, our findings indicate that, especially when time is an issue, objective forced-choice responses may not provide a full account of visual perception, as the perceptual decision can be taken when the formation of perceptual awareness is still underway.

 

The decision about the direction of motion is not made on the basis of the conscious percept.

That perceptual awareness is more sluggish than motion discrimination may appear somewhat unsettling, as we tend often to assume that conscious perception precedes decision. However, phenomena such as blindsight and unconscious perception suggest that automatic decisions are indeed possible under certain conditions. In more ordinary contexts, many sensory-driven actions, as well as the stimuli that originated them, pass mostly unnoticed, as when driving or in sports. Similarly, we incessantly decide where to make the next gaze shift, despite poor or null awareness of peripheral – and sometimes also central – visual information. Awareness may just follow.

 

This ‘unsettling’ appears a bit naïve. There is no reason why an unconscious process creating part of a percept and some other unconscious process doing something else (moving the eye, navigating) should not share whatever information they are ‘wired’ to share. Sharing through working memory/consciousness is slow and limited in the amount of information that can be shared. And, indeed, the authors are not at all ‘unsettled’ but have a putative model of what is happening.

…the answer may lie in the particular mechanism that is thought to regulate the formation of the decision signal from the underlying visual signal. Several findings, both in humans and monkeys, indicate area MT (Middle Temporal Cortex) as a crucial node for motion processing. In recent years a growing body of data have disclosed also the role of area LIP (Lateral Intraparietal) in perceptual decisions involving motion stimuli. When a monkey is instructed to make an eye movement to report the direction of a random-dot kinematogram, neurons in LIP pick up sensory evidence, presumably from MT, and integrate it for some hundreds of ms until a decision bound is reached, and an oculomotor command issued. Importantly, the decision is reached even when the stimulus is still available or the response procrastinated, because LIP neurons exploit only the initial part of the discharge of MT neurons. Thus, monkey LIP seems to work as a device that implements a relatively quick rise-to-threshold mechanism for various types of visuo-motor responses when a perceptual decision is required. In this way, the decision is ready even though MT neurons are still processing the motion input. Note that structures other than LIP could be involved in motion discrimination when the response is verbal, perhaps as part of a circuit for more abstract decision-making, although the fact that we found a pattern of results virtually identical for saccades and verbal responses suggests that similar mechanisms may be at play.
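The rise-to-threshold mechanism described here is usually modelled as a noisy accumulator that integrates momentary evidence until it hits a bound. A toy version of that idea (my own sketch, not the paper’s model; the drift parameter stands in for the average momentary evidence arriving from MT):

```python
import random

def rise_to_threshold(drift, bound=1.0, noise=0.1, dt=0.001, seed=0):
    """Integrate noisy momentary evidence until a decision bound is crossed.

    Returns (choice, decision time): +1 if the upper bound was hit first,
    -1 if the lower bound was.  The decision is 'ready' at the crossing,
    even though evidence keeps arriving afterwards.
    """
    rng = random.Random(seed)
    x, t = 0.0, 0.0
    while abs(x) < bound:
        x += drift * dt + rng.gauss(0.0, noise) * dt ** 0.5  # evidence + noise
        t += dt
    return (1 if x > 0 else -1), t
```

Stronger motion coherence corresponds to a larger drift, so the bound is reached sooner – which is why discrimination can lead awareness.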

 

They are careful to avoid leaving the impression that MT is the location of the conscious motion percept.

Clearly, a less simplistic view is that awareness is a large-scale, distributed property, in which case no single cortical structure may exhibit a macroscopic activation that co-varies on its own with the conscious percept.

 

One aspect that troubled me was the lack of discussion of the timing of consciousness ‘frames’. As awareness is not continuous but discretely updated, an awareness that increases gradually over times longer than the consciousness cycle should have been addressed. It would seem to affect the interpretation.

 


Gregori-Grgič, R., Balderi, M., & de’Sperati, C. (2011). Delayed perceptual awareness in rapid perceptual decisions. PLoS ONE, 6 (2). DOI: 10.1371/journal.pone.0017079

Nothing new

The New Scientist had an article on the antiquity of the building blocks of nervous systems, written by Michael Marshall. (here) Rather than talking about similar chemicals in primitive animals, fungi, plants and even bacteria, he looks at a single-celled organism, Monosiga brevicollis, which aggregates under some conditions and so sits at the boundary between single-celled and multicellular organisms. It has many of the ingredients needed for a nervous system.

Some of the building blocks are: ion channels and voltage-gated calcium ion channels (both found in bacteria), ion channels that can give a travelling action potential on a cell membrane, gap junctions, receptors for glutamate and other messengers, and release of messengers during action potentials. All of these are found in organisms that are not multicellular.

Back to the collared flagellate, or choanoflagellate, M. brevicollis, and its equipment. It has no nervous system, being a single cell, but it does have a lot of the wherewithal. Marshall mentions a number of papers, and I quote parts of their abstracts below.

Burkhardt et al., Primordial neurosecretory apparatus identified in the choanoflagellate Monosiga brevicollis, PNAS (2011):

“We found that the Munc18/syntaxin 1 complex from M. brevicollis is structurally and functionally highly similar to the vertebrate complex, suggesting that it constitutes a fundamental step in the reaction pathway toward SNARE assembly. We thus propose that the primordial secretion machinery of the common ancestor of choanoflagellates and animals has been co-opted for synaptic roles during the rise of animals.”

Xinjiang Cai, Unicellular Ca2+ Signaling ‘Toolkit’ at the Origin of Metazoa, Mol Biol Evol (2008):

“we demonstrate for the first time the presence of an extensive Ca2+ signaling ‘toolkit’ in the unicellular choanoflagellate Monosiga brevicollis. Choanoflagellates possess homologues of various types of animal plasma membrane Ca2+ channels including the store-operated channel, ligand-operated channels, voltage-operated channels, second messenger-operated channels, and 5 out of 6 animal transient receptor potential channel families. Choanoflagellates also contain homologues of inositol 1,4,5-trisphosphate receptors. Furthermore, choanoflagellates master a complete set of Ca2+ removal systems including plasma membrane and sarco/endoplasmic reticulum Ca2+ ATPases and homologues of 3 animal cation/Ca2+ exchanger families. Therefore, a complex Ca2+ signaling ‘toolkit’ might have evolved before the emergence of multicellular animals.”

Alié & Manuel, The backbone of the post-synaptic density originated in a unicellular ancestor of choanoflagellates and metazoans, BMC Evol Biol (2010):

“The time of origination of most post-synaptic proteins was not concomitant with the acquisition of synapses or neural-like cells. The backbone of the scaffold emerged in a unicellular context and was probably not involved in cell-cell communication. Based on the reconstructed protein composition and potential interactions, its ancestral function could have been to link calcium signalling and cytoskeleton regulation. The complex later became integrated into the evolving synapse through the addition of novel functionalities.”

Liebeskind et al., Evolution of sodium channels predates the origin of nervous systems in animals, PNAS (2011):

“Voltage-dependent sodium channels are believed to have evolved from calcium channels at the origin of the nervous system. A search of the genome of a single-celled choanoflagellate (the sister group of animals) identified a gene that is homologous to animal sodium channels and has a putative ion selectivity filter intermediate between calcium and sodium channels. Searches of a wide variety of animal genomes, including representatives of each basal lineage, revealed that similar homologs were retained in most lineages. One of these, the Placozoa, does not possess a nervous system. We cloned and sequenced the full choanoflagellate channel and parts of two placozoan channels from mRNA, showing that they are expressed. Phylogenetic analysis clusters the genes for these channels with other known sodium channels. From this phylogeny we infer ancestral states of the ion selectivity filter and show that this state has been retained in the choanoflagellate and placozoan channels.”

So, from a billion years ago, nervous systems were ready to come into existence when they were needed. They were needed by the combination of movement and multicellularity – in other words, by animals. Nervous systems were not needed by multicellular organisms that did not move about, and they were not needed by the little movers that were single-celled.

Petit-mal

ScienceDaily (here) reports research by Huguenard and others, ‘A new mode of corticothalamic transmission revealed in the Gria4-/- model of absence epilepsy’.

An absence or petit-mal seizure is a sudden loss of consciousness for a short period, which may or may not be noticed by onlookers but is not noticed by the person having the seizure. “It’s like pushing a pause button.”

A bioengineered strain of mice lacking the GluA4 receptor is prone to these seizures and was used to investigate the cause of absence seizures. During seizures (human and mouse) there is an unusual, strong oscillation involving the cortex and thalamus. What causes this rhythm?

Normally:

To keep from being constantly bombarded by distracting sensory information from other parts of the body and from the outside world, the cortex flags its activity level by sending a steady stream of signals down to the thalamus, where nearly all sensory signals related to the outside world are processed for the last time before heading up to the cortex. In turn, the thalamus acts like an executive assistant, sifting through sensory inputs from the eyes, ears and skin, and translating their insistent patter into messages relayed up to the cortex. The thalamus carefully manages those messages in response to signals from the cortex.

These upward- and downward-bound signals are conveyed through two separate nerve tracts that each stimulate activity in the other tract. In a vacuum, this would soon lead to out-of-control mutual excitement, similar to a microphone being placed too close to a P.A. speaker. But there is a third component to the circuit: an inhibitory nerve tract that brain scientists refer to as the nRT. This tract monitors signals from both of the other two, and responds by damping activity. The overall result is a stable, self-modulating system that reliably delivers precise packets of relevant sensory information but neither veers into a chaotic state nor completely shuts itself down.

The bioengineered mice lack the GluA4 receptor which is critical to the stimulation of nRT cells.

This leaves nRT receiving signals from one tract, but not the other, which upsets the equilibrium usually maintained by the circuit. As a result, one of its components — the thalamocortical tract — is thrown into overdrive. Its constituent nerve cells begin firing en masse, rather than faithfully obeying the carefully orchestrated signals from the cortex. This in turn activates the nRT to an extraordinary degree, because its contact with the thalamocortical tract is not affected in these mice…. In the face of over-amped signaling from the thalamocortical tract, however, the fraction of excited nRT nerve cells rose much higher, perhaps as much as 50 percent — enough to effectively silence all signaling from the thalamus to the cortex — a key first step in a seizure….But the shutdown was transitory. A property of thalamic cells (like other nerve cells) is that when they’ve been inhibited they tend to overreact and respond even more strongly than if they had been left alone. After a burst of nRT firing, this tract’s overall inhibition of the thalamocortical tract all but halted activity there for about one-third of a second. Like boisterous schoolchildren who can shut up only until the librarian leaves the room, the thalamocortical cells resumed shouting in unison as soon as the inhibition stopped, and a strong volley of signaling activity headed for the cortex. Then the nRT’s inhibitory signaling recommenced, and the stream of signals from the thalamus to the cortex ceased once again. This three-Hertz cycle of oscillations consisting of alternating quiet and exuberant periods repeated over the course of 10 or 15 seconds was the electrophysiology of a seizure.
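Taken at face value, the quoted account is a relaxation oscillator whose frequency is set by the duration of the rebound inhibition. A deliberately crude caricature of that logic (my own sketch with assumed numbers, not the authors’ model):

```python
def simulate_absence(duration=5.0, silence=0.33):
    """Caricature of the seizure cycle: each thalamocortical volley triggers
    an nRT burst that silences the thalamus for about a third of a second,
    after which post-inhibitory rebound fires the next volley.  The cycle
    frequency is simply 1 / silence, i.e. about 3 Hz."""
    t, volleys = 0.0, []
    while t < duration:
        volleys.append(t)    # thalamocortical cells fire in unison
        t += silence         # nRT inhibition halts thalamic output ~1/3 s
    return volleys
```

With these assumed numbers the cycle repeats at roughly 3 Hz over the simulated seconds, matching the oscillation frequency described in the quote.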

The group is now looking for triggers that could produce a similar malfunction in humans, that would allow the cortico-thalamo-cortical transmission system to escape the control of the nRT (reticular thalamic nucleus).

Here is the abstract:

Cortico-thalamo-cortical circuits mediate sensation and generate neural network oscillations associated with slow-wave sleep and various epilepsies. Cortical input to sensory thalamus is thought to mainly evoke feed-forward synaptic inhibition of thalamocortical (TC) cells via reticular thalamic nucleus (nRT) neurons, especially during oscillations. This relies on a stronger synaptic strength in the cortico-nRT pathway than in the cortico-TC pathway, allowing the feed-forward inhibition of TC cells to overcome direct cortico-TC excitation. We found a systemic and specific reduction in strength in GluA4-deficient (Gria4−/−) mice of one excitatory synapse of the rhythmogenic cortico-thalamo-cortical system, the cortico-nRT projection, and observed that the oscillations could still be initiated by cortical inputs via the cortico-TC-nRT-TC pathway. These results reveal a previously unknown mode of cortico-thalamo-cortical transmission, bypassing direct cortico-nRT excitation, and describe a mechanism for pathological oscillation generation. This mode could be active under other circumstances, representing a previously unknown channel of cortico-thalamo-cortical information processing.

I see this result somewhat differently – more from the bottom up than the top down. Consciousness is driven by waves of activity: waves from the brain stem up through the ascending reticular formation into the thalamus (the nRT part), and radiating from the thalamus to most of the cortex. It is the rhythm from below that drives the thalamus-cortex rhythm, not vice versa. (Of course seizures are different and may be driven differently.) I do not have access to the original paper, so I am not sure whether the authors imply in it that the cortex controls the thalamus. I continue to view the close relationship between the thalamus and the cortex during consciousness as a partnership of equals. However, in my view it is closer to ‘the thalamus having the assistance of the cortex’ than to ‘the thalamus acting as the executive assistant to the cortex’. Perhaps further work on absence seizures will change my mind.