Archive for July 2010

Reconsolidation


Memories move through stages as they are formed: they are ‘encoded’ or otherwise prepared (perhaps in working memory); they are held in an early stage (short-term memory); changes in synapses in both the hippocampus and cortex form a somewhat stable memory minutes to hours later (synaptic consolidation); and over days to years the memories are processed so that they depend less on the hippocampus and more on the cortex (system consolidation). Consolidated memories are fairly stable, but when they are recalled into consciousness they can be modified. This seems to be the main way in which memories are re-consolidated (or changed) over time: passing a memory through consciousness means it is re-stored, either unchanged or updated depending on circumstances. Much of sleep is taken up with the consolidation and re-consolidation of memories. This is a simplified outline of the current science on memory, and it is important to realize that more is unknown than known.

An interesting item in BPS research digest (here) looks at experiments with a treatment for traumatic memories, EMDR, eye movement desensitization and reprocessing.

A controversial treatment for post-traumatic stress disorder involves the traumatised person holding a painful memory in mind while simultaneously following with their eyes the horizontal movements of their therapist’s finger… Raymond Gunter and Glen Bodner have tested three possible explanations…

(In the first experiment,) relative to staring straight ahead, eye-movements increased arousal levels, (undermining) the idea that eye movements activate an innate investigatory reflex that inhibits fear and provokes relaxation.

A second experiment showed that both horizontal and vertical eye movements reduced the vividness and emotionality of the students’ memories, (undermining) the idea that horizontal eye movements aid interhemispheric communication, thus allowing the more rational left hemisphere to process the right hemisphere’s traumatic memories.

(The third) experiment showed that the students’ memories became less vivid and emotional, not only when they performed concurrent horizontal eye movements, but also if they instead performed a simultaneous simple hearing task. This undermines the idea that EMDR works specifically by taxing the so-called “visuo-spatial sketch-pad” of working memory. It suggests instead that the mechanism underlying EMDR is a more general effect based on taxing the big boss of short-term memory - the central executive.

…performing a concurrent task, be it eye movements or some other distraction, while also recalling a painful memory, allows a person to be exposed to that memory, without having the mental resources available to get too upset by it. Over time, this process acts like a form of gentle exposure to the memory, as the person learns that they can, after all, cope with their past.

This seems a clear case of a memory being forced to change during re-consolidation – by passing through consciousness under conditions that modify the memory.

Finding out our feelings


There is a trick that sometimes helps in making a decision. We have thought about two options for a long time, mulled over many different aspects of each, and reached no decision. We have even taken pencil and paper and organized the pros and cons, and still there is no decision. The options are perfectly balanced. Then we flip a coin to decide. In the instant that the result of the coin toss is known, there is either a feeling of contentment and relief or a feeling of disappointment and dissatisfaction. We really did have a preference, even if we were not aware of it. So we follow our feeling rather than the result of the coin toss. Of course, we only do this because we have established that the options are almost equal in attractiveness.

I was reminded of the way this feeling is so immediate and strong, yet fleeting – very different to a slowly growing feeling – when I read a blog by Neurosceptic (here).

Schopenhauer’s trick relies on the fact that emotion is faster than thought. A letter takes you by surprise: even if you’re expecting to hear from someone, you don’t know exactly when it will arrive. It arrives: in that first second your emotions have a chance to show through, before your thoughts have got into gear. It works with emails and phone calls as well, of course, but not with any encounter which is planned out in advance.

What is happening here? Why would an unsuspected emotional reaction be triggered so quickly and so strongly? The feeling had never risen to consciousness before, and so it probably had not risen to a bodily emotion before either. Some part of the situation had been dealt with but had never needed the use of working memory, and so was hidden from awareness. Maybe – on the other hand –

Without language


There is a group of people who are effectively invisible: functioning adults with no language. They are there, but we have just not met them. They are born completely deaf and are never taught sign language or lip reading; in fact, they miss out not just on language but on the knowledge that language exists. Now that they are known to exist, the question arises: how? These non-linguistic adults live amongst us without being noticed (if that seems unbelievable, remember that wild animals live amongst us in our cities and most of us do not see them). It must be much harder to survive without language than with it, so we must accept that these people are very good at understanding and using their environments. They must be continually solving problems, and solving them successfully. No sheltered workplaces, social workers, welfare payments, special education or any other aspect of the net that is meant to catch the handicapped is available to them. No help is available from all the written and verbal signposts that litter our streets and airwaves. They cannot talk with others to ask or tell anything. They survive, presumably, because they are very intelligent, continuously observe the world and use their cognitive abilities to the utmost. A description from neuroanthropology is (Life without language) and I urge you to read it.

So can people have thought without words? Well, the evidence-based answer would seem to be, yes, but it’s not the same sort of thought. Some things appear to be easier to ‘get’ without language (such as imitation of action), other things appear to be a kind of ‘all-at-once’ intuition (such as suddenly realizing all things have names), and other ideas are difficult without language being deeply enmeshed with cognitive development over long periods of time (like an English-based understanding of time as quantitative and spatialized). In other words, language is not simply an either/or proposition, but part of a cognitive developmental niche that shapes both our abilities and (unperceived) disabilities relative to the fully cognitively matured language-less individual.

Here is my take on the difference between cognition with and without language – an absolutely speculative exercise in guesswork – take it with a grain of salt.

Problems that involve only sensory percepts or motor actions can be solved with or without consciousness – either way, language is not needed. So our language-less man would be aware of his surroundings and his intent/action arcs like an ordinary person, and would have memory of that awareness. To this extent his consciousness and his cognition would be like ours. He would even be able to manipulate some concepts or symbols, although it is questionable how abstract these non-linguistic concepts can become. We can assume that language is not required for much of the simple communication between people. If other primates can live their lives without language, why should a human not be able to do it?

But there are two sorts of thinking that I cannot imagine a language-less person engaging in. The first is the kind that uses the cycle of talking to yourself, being conscious of the inner voice, holding it in working memory, and using access to that memory to retrieve the idea in the inner speech. This cycle would allow two parts of the brain that are not well connected in the manner needed to exchange information through the global access available in consciousness and working memory.

The other type of thought that might be difficult for the person without language is the elaboration of abstract concepts. I believe this depends on nested series of metaphors/analogies. As the child metaphors become more distant from their concrete original parents, they become, in effect, a set of symbols related by a set of relationships. The connection to the senses and actions is lost. Without a ‘language system’ it becomes more and more difficult to handle more and more abstract symbols and relationships. I assume it would only be possible at an elementary level.

We know that handicapped people find ways around their handicaps, and so I would expect the language-less to be very resourceful in developing ways to think that bypass the need for language; this might actually make them better at some specific cognitive tasks. But there is a limit.

The brains of birds


ScienceDaily reports on work by Y. Wang and others (here) which compares the mammalian neo-cortex with structures in the brains of birds.

For more than a century, neuroscientists believed that the brains of humans and other mammals differed from the brains of other animals, such as birds (and so were presumably better). This belief was based, in part, upon the readily evident physical structure of the neocortex, the region of the brain responsible for complex cognitive behaviors.

Specifically, the mammalian neocortex features layers of cells (lamination) connected by radially arrayed columns of other cells, forming functional modules characterized by neuronal types and specific connections. Early studies of homologous regions in nonmammalian brains had found no similar arrangement, leading to the presumption that neocortical cells and circuits in mammals were singular in nature.

For 40 years, Karten and colleagues have worked to upend this thinking. In the latest research, they used modern, sophisticated imaging technologies, including a highly sensitive tracer, to map a region of the chicken brain (part of the telencephalon) that is similar to the mammalian auditory cortex. Both regions handle listening duties. They discovered that the avian cortical region was also composed of laminated layers of cells linked by narrow, radial columns of different types of cells with extensive interconnections that form microcircuits that are virtually identical to those found in the mammalian cortex.

The findings indicate that laminar and columnar properties of the neocortex are not unique to mammals, and may in fact have evolved from cells and circuits in much more ancient vertebrates.

This has several ramifications. In vertebrates, different species have brains that differ more in degree than in kind, and therefore simpler brains may be very useful experimental subjects: they may be easier to work with but still give valuable insights. It also weakens the taboo on anthropomorphism: if it acts like cognition, it may be cognition. And finally, there is nothing like two different examples of the same principle for finding the important aspects of that principle. In trying to understand how the neo-cortex module functions it is useful to have the mammal and bird versions to compare. And consciousness need not be thought of as strictly a mammalian thing just because in mammals it involves the neo-cortex.

The effect of a word


I have been avoiding, because of a lack of clarity, saying much about the relationship between language and consciousness. It is obviously important but hard to get a handle on. A recent article has prompted me to focus on this relationship. The article is by G. Lupyan and M. Spivey in PLoS ONE, Making the Invisible Visible: Verbal but Not Visual Cues Enhance Visual Detection (here). Below is the abstract.

Can hearing a word change what one sees? Although visual sensitivity is known to be enhanced by attending to the location of the target, perceptual enhancements of following cues to the identity of an object have been difficult to find. Here, we show that perceptual sensitivity is enhanced by verbal, but not visual cues.

Participants completed an object detection task in which they made an object-presence or -absence decision to briefly-presented letters. Hearing the letter name prior to the detection task increased perceptual sensitivity. A visual cue in the form of a preview of the to-be-detected letter did not. Follow-up experiments found that the auditory cuing effect was specific to validly cued stimuli. The magnitude of the cuing effect positively correlated with an individual measure of vividness of mental imagery; introducing uncertainty into the position of the stimulus did not reduce the magnitude of the cuing effect, but eliminated the correlation with mental imagery.

Hearing a word made otherwise invisible objects visible. Interestingly, seeing a preview of the target stimulus did not similarly enhance detection of the target. These results are compatible with an account in which auditory verbal labels modulate lower-level visual processing. The findings show that a verbal cue in the form of hearing a word can influence even the most elementary visual processing and inform our understanding of how language affects perception.
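The ‘perceptual sensitivity’ in the abstract above is, in detection experiments of this kind, normally the signal-detection measure d′, which separates genuine detection from a mere bias to say ‘present’. The paper’s own analysis is not reproduced here; the sketch below is only an illustration of how such a sensitivity score is computed, with invented hit and false-alarm counts.

```python
# Minimal sketch of the signal-detection sensitivity measure d' (d-prime).
# All counts are invented for illustration; they are not data from the paper.
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    """d' = z(hit rate) - z(false-alarm rate)."""
    # Loglinear correction keeps rates away from 0 and 1 (avoids infinities).
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# Hypothetical verbally-cued vs. uncued conditions.
print(d_prime(hits=42, misses=18, false_alarms=9, correct_rejections=51))   # cued
print(d_prime(hits=31, misses=29, false_alarms=10, correct_rejections=50))  # uncued
```

A higher d′ in the cued condition would mean the letters were genuinely easier to detect after hearing their names, not just that participants were more willing to guess ‘present’.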

To what extent can high-level cognitive expectation influence low-level sensory processing? Allocating visual attention to a location improves reaction times to probes appearing in that location. The spread of attention is also affected by specific objects: cuing an object speeds responses to a probe within the cued object’s boundaries.

We are dealing here with the edge between subliminal and conscious knowledge, where with a verbal cue the letter rises to conscious awareness but without the verbal cue it is not consciously seen. The results say a lot about perception, language and consciousness.

The results give confirmation to the idea that there is a top-down influence on very basic and early sensory perception.

The simple detection task is compatible with one of two broad conclusions: a) visual detection processes in visual cortex are influenced by auditory linguistic signals, or b) the process of detecting visual signals includes non-visual areas of cortex which are richly influenced by auditory linguistic signals. Either conclusion requires rejecting the assumption that “simple” visual tasks such as object detection depend only on the visual characteristics of a stimulus. … The present findings appear to conform to … requirements for … cognitive penetrability of early vision because information from outside the visual system (the linguistic label) is affecting visual sensitivity… We conclude based on the present findings that auditory verbal cues actually alter perceptual processing of the named objects rather than alter a higher level decision process.

What happens when an object that is being perceived has been given a name?

One way to understand our results is by conceiving of verbal labels as providing modulatory feedback to the visual system (The Label Feedback Hypothesis). Attention (one form of top-down control) has been shown to affect response properties of neurons in the very first visual area receiving top-down projections—the lateral geniculate nucleus (thalamus area)—and there is a large literature on effects of context, task-demands, and expectations on neural responses in primary visual cortex. The present results offer evidence that verbal labels, by virtue of their pre-existing association with visual stimuli, modulate visual processing by providing a “head-start” to the visual system, facilitating the processing of stimuli associated with the label. This type of continuous interaction between top-down and bottom-up processes is consistent with a number of theoretical frameworks

Currently ongoing experiments indicate that similar results can be obtained for pictures of everyday objects and animals: hearing common nouns can facilitate the detection of pictures from the named category… (Other results) suggest that the format of the cue, in addition to its modality, is important: verbal auditory cues (e.g., “cow”) facilitated visual identification and discrimination more than nonverbal auditory cues (e.g., the sound of a cow mooing).

There is now accumulating evidence that higher level semantic information can influence visual perception in some surprising ways. For instance, auditory processing of verbs associated with particular directions of motion (e.g., fly, bomb) interferes with visual discrimination tasks along the vertical axis and increases sensitivity to the congruent motion direction in random-dot kinematograms. Moreover, linguistic input can guide visual search in an incremental and automatic fashion. Ascribing meaning to unfamiliar shapes using verbal labels improves the efficiency of visual search for these shapes. In fact, simply hearing a word that labels the target improves the speed and efficiency of search (compared to not hearing the label, but still knowing the target’s identity). For instance, when searching for the number 2 among 5’s, participants are faster to find the target when they actually hear “find the two” immediately prior to the search trial – even when they know that the 2 is the target because it has been so for the entire block of trials.

Words and their meaning have a great influence on the focus of attention, on the content of consciousness and on the details of perceptive processing. This is in keeping with Bolles’ model in the Babel’s Dawn blog (here) of speech being about joint attention, with words being the way to point attention to a particular topic.

Evidence for predictive awareness


It seems generally accepted in neuroscientific circles that the brain predicts the results of motor action, constructs a prediction of sensory signals, compares the prediction with the outcome and uses the error to correct motor action and perceptual processes. It is also accepted that the prediction is singular and global in nature. Many assume it is Bayesian, at least in spirit. Below is some representative evidence for this predictive process being linked to the awareness that we experience. What we are aware of is the prediction.

(This was written to appear elsewhere and so I have avoided the use of the word consciousness in order to bypass fruitless discussion. But most neuroscientists do not avoid the word and in fact, many consider it the object of their investigations. If you find the wording awkward, then just substitute consciousness for awareness.)
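Before the list of evidence, here is what that predict-compare-correct loop looks like reduced to a toy program. This is only a sketch of the general idea, not a model from any of the studies below; the ‘gain’ parameter and all of the numbers are invented for illustration.

```python
# Toy predict-compare-correct loop: a forward model predicts the sensory
# consequence of a motor command, the prediction is compared with noisy
# feedback, and the error is used to correct the model. Illustration only.
import random

true_gain = 2.0        # how the simulated world maps a command to a sensation
estimated_gain = 1.0   # the system's current forward-model estimate
learning_rate = 0.1

for trial in range(200):
    command = random.uniform(-1.0, 1.0)                    # motor command issued
    predicted = estimated_gain * command                   # forward-model prediction
    sensed = true_gain * command + random.gauss(0, 0.05)   # noisy sensory feedback
    error = sensed - predicted                             # prediction error
    estimated_gain += learning_rate * error * command      # correct the model

print(f"estimated gain after learning: {estimated_gain:.2f}")  # drifts toward 2.0
```

The point of the sketch is simply that what the system ‘expects’ (the prediction) is available before the feedback arrives, which is why a prediction, rather than the delayed input, could be the thing we are aware of.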

  1. The failure of simultaneity between moving events and stationary objects: Nijhawan’s experiment had an object move across the visual field and pass a flashbulb on the way. The flashbulb flashed at exactly the moment that the object passed it. Subjects reported perceiving the object pass the bulb before the bulb flashed. The brain ‘sees’ a split second into the future for moving objects but not for individual stationary ones. Bai showed that there were more mistaken ‘out’ calls in tennis than ‘in’ calls because the officials perceive the ball as having moved further than it had.

  2. The systematic nature of some types of visual illusion: Changizi’s investigation of illusions mathematically predicts the extent to which certain attributes of an object (smaller size, slower speeds, greater luminance contrast, farther distance, low eccentricity, greater proximity to the vanishing point, greater proximity to the focus of expansion) produce similar perceptual effects (increased perceived size, increased perceived speed, decreased perceived luminance contrast, decreased perceived distance). This shows that the visual system uses mechanisms for compensating for neural delay during forward motion in order to ‘perceive the present’.

  3. Problems with the timing of normal sensory input and direct stimulation of the cortex: Libet’s experiment showed that stimulation of the skin reached awareness in a much shorter time than stimulation of the exposed surface of the cortex. This prompted him to propose a system of backward referrals of the timing of events. A much cleaner explanation is that a predictive projection into the future occurs in normal awareness.

  4. Comprehension of language has a predictive nature: Berkum reported that event-related potentials show the same pattern for unexpected words as for actual grammatical errors. This implies a ‘look ahead’ feature in language comprehension that produces surprise when predictions are wrong.

  5. There is an error-registering system in the brain: Frith states that the brain is Bayesian, correcting its understanding on the basis of comparing prediction with current input. He states that the dopamine signal is a prediction error indicator: dopamine neurons become more active if a good surprise happens, do not change their activity if there is no surprise, and become less active if a negative surprise happens (a toy sketch of such a prediction-error signal follows this list). Menon’s investigation of Go/NoGo experiments shows clear error-related activity (named ERN and ERP) in a wide network of cortical areas. Others have slightly different accounts of the error-registering system, but all seem to agree that errors are identified and corrected. A prediction is needed in order to generate an error signal.

  6. Prediction is required for movement: Morsella has a theory that explains much about awareness. It postulates that awareness is used to meld together separate skeletomotor plans to avoid conflict. The conflict would only be apparent if our awareness predicted the course of the somewhat independent plans, to see how they interacted with each other and the environment, before the conflict actually happened. Llinas and Roy postulate that the main function of the brain is a global one: to implement intelligent motricity through prediction of the consequences of impending motion. They have outlined a thalamocortical system to do this that resembles the ‘neural correlates of consciousness’.

  7. Energy use by the sensory areas of the brain is higher for unexpected input: Alink’s investigation found that the response in V1 to unexpected signals was higher than for expected ones. This is thought to be the result of feedback from higher levels of predictive information.

  8. Reaching awareness takes time: The bulk of experiments following the neural events leading up to awareness show that it takes about 300 msec from an event to awareness of that event. Yet there is no evidence that we live our lives a third of a second out of sync with the world. Predictive awareness eliminates this problem.
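
As flagged in point 5, the sign behaviour of a prediction-error signal is easy to make concrete. The following is only a toy Rescorla-Wagner style sketch with invented reward values, not a model of dopamine neurons themselves: the error is positive for a good surprise, shrinks toward zero as the reward becomes expected, and goes negative when an expected reward is omitted.

```python
# Toy Rescorla-Wagner style prediction error (illustration only).
# delta > 0: better than expected; delta near 0: as expected; delta < 0: worse.
def update(expected_reward, actual_reward, learning_rate=0.5):
    delta = actual_reward - expected_reward   # the prediction error
    expected_reward += learning_rate * delta  # learn from the error
    return expected_reward, delta

expected = 0.0
for actual in [1.0, 1.0, 1.0, 1.0, 0.0]:      # four rewarded trials, then an omission
    expected, delta = update(expected, actual)
    print(f"expected={expected:.2f}  prediction error={delta:+.2f}")
# Errors start positive (good surprise), shrink as the reward becomes
# expected, and the final omission yields a negative error (bad surprise).
```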

Source of emotion


FrontalCortex has a great posting on the magic of Wii games. (here) Lehrer connects the player’s movement with the emotional feeling of the game as an illustration of the embodied mind.

To understand how the Wii turns stupid arcade games into a passionate experience, we have to revisit an old theory of emotion, first proposed by William James. In his 1884 article “What is an emotion?” James argued that all of our mental feelings actually begin in the body. Although our emotions feel ephemeral, they are rooted in the movements of our muscles and the palpitations of our flesh. .. For most of the 20th century, James’ theory of bodily emotions was ignored. It just seemed too implausible. But in the early 1980s, the neuroscientist Antonio Damasio realized that James was mostly right: Many of our emotions are preceded by changes in our physical body. Damasio came to this conclusion after studying neurological patients who, after suffering damage in their orbitoprefrontal cortex or somatosensory cortex, were unable to experience any emotion at all. Why not? The tight connection between the mind and body had been broken. Even though these patients could still feel their flesh-they weren’t paraplegic-they could no longer use their body to generate feelings. And if you can’t produce the bodily symptoms of an emotion-the swelling tear ducts of sadness, or the elevated heart rate of fear-then you can’t feel the emotion. .. As Damasio puts it, “the essence of feeling an emotion is the experience of such [bodily] changes in juxtaposition to the mental images that initiated the cycle.” The resulting state of consciousness-an emulsion of thought and flesh, body, and mind-is our feeling of fear.

The content of consciousness is largely, perhaps almost totally, derived from the process of perception. It may be that emotions must be sensed in order to be felt consciously.

Access through consciousness


ScienceDaily has an item on research by M. Pessiglione investigating subliminal motivation (here). In the experiment they gave the subject a subliminal look at the level of reward available for the strength of a hand squeeze. The size of the reward affected the strength of the squeeze. In a second experiment the subliminal reward information was projected to only one eye (and therefore one hemisphere of the cortex), and the effect on the squeeze was found only for the one hand (controlled by the same hemisphere) and not the other.

The research shows that it’s possible for only one side of the brain, and thus one side of the body, to be motivated at a time, says Pessiglione. “It changes the conception we have about motivation. It’s a weird idea, that your left hand, for instance, could be more motivated than your right hand.” He says this basic research helps scientists understand how the two sides of the brain get along to drive our behavior.

The way I interpret this is that the size of the reward affects motivation. This is true even if the picture of the reward has too short a duration to reach consciousness. When the subliminal information is projected to only one side of the cortex, it remains only local knowledge. The other hemisphere would have knowledge of the reward only if it rose to consciousness and the short duration prevents this. Consciousness would seem to be very important for access and coordination between the two hemispheres at least in some situations.

Types of cognition

The Frontal Cortex blog has a very interesting posting (here) about learning and intelligence. Lehrer points out that g, general intelligence, as measured by the IQ test, is not the only intelligence. He discusses Type 1 and Type 2 cognitive systems.

In order to understand the limitations of general intelligence, at least as presently defined, it’s important to delve into one of the great themes of modern psychology, which is the essential role of the unconscious. While Freud associated the unconscious with the unspeakable urges of the id, we now know that our mental underworld is actually a remarkable information processing device, which helps us make sense of reality.

This has led to the dual process model of cognition, in which the mind is divided into two general modes. There is Type 1 thinking, which is largely unconscious, automatic, contextual, emotional and speedy; it turns out that most of our behavior is shaped by these inarticulate thoughts. (Consider, for instance, what happens when you brake for a yellow light, or order a dish on a menu as soon as you see it, or have an “intuition” about how to approach a problem.) And then there is Type 2 thinking, which is deliberate, explicit, effortful and intentional. (Imagine an amateur chess player, contemplating the implications of each potential move.) Needless to say, intelligence tests excel at measuring Type 2 thought processes, which is why the standard IQ test largely relies on abstract puzzles and math problems, and correlates with working memory performance.

The end result is a growing contradiction between how we define intelligence - it’s all about explicit thought and g - and how we conceptualize cognition, which is inextricably bound up with Type 1 processes. (In other words, we currently measure intelligence by excluding the vast majority of the information processing taking place inside our head.)

… There’s a growing body of evidence that reliable differences exist in Type 1 thinking, and that these differences have consequences. This helps explain why even the most mundane features of Type 1 thinking … significantly correlate with math and verbal scores on the ACT. Other studies have found that performance on a variety of implicit learning tasks - the kind of learning that takes place in Type 1 - were significantly associated with academic performance, even when “psychometric intelligence,” or g, was controlled for. In other words, not every unconscious works the same way.

I believe this difference between implicit and explicit cognition has to do with the use of short-term memory. Cognition that needs to use short-term memory will, I believe, have to make conscious the information to be saved for use in later cognitive processes, while cognition that does not require short-term memory is faster and easier if none of it rises into consciousness. A cognitive process that repeatedly passes a sub-product through consciousness/working memory will appear to be done in a ‘conscious mind’, although all the processing is actually done unconsciously.

Further, I think that if a cognitive task is repeated many times, the networks of neurons involved in the cognition will grow and change so that the use of working memory is reduced or even eliminated. Then the task will not rise to consciousness and will appear to be automatic.

The measure of Type 2 intelligence may be largely the result of the capacity of working memory, and Type 1 intelligence may be the result of the speed and conductivity of the brain’s networks. There is probably a role for the cerebellum, thalamus and other brain areas in intelligence.

Pop science


Our society has a problem with the dissemination of information about science. There are the scientific journals, which are far, far too specialized for anyone outside that particular area of science to read and understand, even with a fairly good general education that included the sciences. Then there are popular science books and articles written by scientists (and some good journalists) for the lay public; the problem with these is that they are too rare, far too rare. And finally there are articles and books written by people who are basically ignorant of the subject matter and are out to shock, titillate, entertain or discredit. Quite often these books and articles use dichotomies in a pretend conflict for effect; the rule appears to be ‘hang your article on a controversy’. An example is discussed by Ledoux (here): it is the left-brain versus right-brain fake dichotomy that has annoyed him.

Here are some pop science ideas that bother me. See any of these and you know that the author is either ignorant of the subject or cutting corners.

  1. Nurture versus nature: We cannot separate genetics from environment in any sort of useful quantitative way; they are too interwoven, interdependent and multifaceted in their interactions. This is a political football and not a scientific question. Everything about you is controlled by genetics and, at the same time, everything about you is controlled by your environment – and this is not impossible. Genetics and environment are not in competition.

  2. The gene for ‘x’: There is no gene for aggression, for mathematics, for autism etc. Genes control things like a type of cell migration, which in turn (with other genes and environmental factors in the womb) produces the anatomical structure of the brain. That in turn, with other genes and environmental factors, gives tendencies toward ‘x’. How genetics works is not rocket science – it is more complicated than rocket science. We can have a gene for a particular enzyme because genes code for proteins, but not for disliking spinach.

  3. Mind versus matter: Dualists are now very rare in philosophy and even rarer in neuroscience. Forget about some immaterial mind stuff. It is as dead as vitalism.

  4. Brain versus body: The brain and the rest of the body are not separate systems. What is happening in the brain affects the body beyond just the muscles and glands, and what is happening in the body affects the brain beyond just the sense organs. The ‘embodied mind’ is pretty much accepted.

  5. The reptilian brain/the primitive brain: We do not have some unchanged ancient part of our brain. We share some anatomy with all other vertebrates, but none of it has remained unchanged by evolution. The various structures in the hind-, mid- and fore-brain work together, although they arose in evolution over time, with the hind-brain first and the fore-brain last. It is like saying that the heart is more primitive than the lungs because hearts are evolutionarily older. We do not have shark hearts or reptile brains.

  6. Left brain versus right brain: The two hemispheres are connected and work together, very closely together. The only time you have two brains is when the two hemispheres are surgically separated.

  7. Conscious mind versus unconscious mind: This is the big one as far as this blog is concerned. We do not have two minds – we have one. Most of what the mind does is not in our awareness – we are unconscious of it. But some of what the mind does we are aware of – we are conscious of it. We have an unconscious mind and consciousness of some of the products of that mind (perception, cognition, intention etc.). To read some pop science you would think that the unconscious was some hidden evil trying to undermine our best efforts.

  8. Free will versus determinism: I think most neuroscientists accept neither concept but instead envision a complex decision-making and control process in the brain which is neither free nor determined as those words are ordinarily used in this context. (But it is still a question for some philosophers.)

  9. Humans versus other animals: This is sometimes stated as ‘humans are unique’. Doh! All species are unique. It is only natural that we are more interested in what makes us unique than in what makes fruit bats unique. We share the basics with other animals. We share the forerunners of our most typical abilities with our closest animal relatives. There is no reason to think that other animals do not think, feel, communicate etc., even if we do these things much better than they do.
