I notice your frustration

An occasional commenter, Quen_tin, has a very different way of seeing things from mine. His comments are appreciated: always serious and never rude. But I can feel his frustration at the beginning of his last comment: “Sorry but… What exactly are you talking about ??? Mental entities that are not conscious ? Thoughts we are not aware of ? I think your concepts are ill-defined.”

So I feel that it is time to give an overview of my guesses as to how consciousness works. I have avoided taking my guesses too seriously because the science is so fluid, and this blog was started to follow the science rather than put forward a complete theory of my own. But on the other hand, there seems to be no piecemeal way to explain my problems with Quen_tin’s comments.

The philosophical ideas that I find most comfortable are Thomas Metzinger’s (pdf).

The present theory develops a detailed story about precisely what properties representations in a given information-processing system must possess in order to become phenomenal representations, ones the content of which is at the same time a content of consciousness. Let us start with what I call the “minimal concept of consciousness” and then proceed to enrich it. Phenomenologically, minimal consciousness is described as the presence of a world. This minimal notion involves what is called (1) the globality-constraint (a world-model that is available as a world model), (2) the presentationality-constraint (experience of presence in that world, now), and (3) the transparency-constraint (unavailability of earlier processing stages). … The presence of a world. The phenomenal presence of a world is the activation of a coherent, global model of reality (Constraint 1) within a virtual window of presence (Constraint 2), a model that cannot be recognized as a model by the system generating it within itself (Constraint 3). Please note how all that such a system would experience would be the presence of one unified world, homogeneous and frozen into an internal Now as it were.

Then the self is introduced: “…one could say that you are the content of your PSM (phenomenal self-model). A perhaps better way of making the central point intuitively accessible could be by saying that we are systems that constantly confuse themselves with the content of their PSM.” And Metzinger carries on to elaborate a picture of consciousness that seems very reasonable to me.

It is not that I have accepted this theory as the only possibility. However, it is a possibility – and therefore the phenomenal should not be beyond scientific explanation.

So the way I look at it is that there is a brain and that brain does stuff like action, perception, cognition, memory and whatever else we associate with mind or thought or mental processing. Mind is a function of brain, mind is what brain does. A small but very important function of the brain, or a small part of the mind-function, is consciousness. It is a model of the world including a model of the self that is available to much of the brain. It is connected with attention, working memory and short-term prediction. It is important to the working of the brain but does not do action, perception, or cognition. It does not do thought; it does awareness. It has no direct (as opposed to modeled) knowledge of the world or the self. Therefore introspection has no direct privileged knowledge.

Because it is the route to working memory, any thought process that requires the storage facility of working memory will have its interim steps appear in consciousness (not the thought processes themselves, but the sub-conclusions on the way to a final conclusion). This gives some appearance of the thinking being done consciously, because the interim steps appear in order in the stream of consciousness. But we still have only one unified brain/mind doing the thinking. We are never aware, through consciousness, of the thinking process itself, the ‘cogs moving’.

It is true that one of the reasons I find this a comfortable way of looking at consciousness is a metaphysical choice, made so long ago that it is lost in the fog of childhood. I believe in a ‘reality’ that is physical and material in nature, with a very concrete sort of belief. I find it impossible to believe in non-physical/non-material things. The disembodied, the spiritual and suchlike are not entertained. I shy away from words that sit on the fence, like ‘emergent’; they seem to want to be both physical and spiritual at the same time. So my only way of understanding consciousness is scientific.

At this point, I can hear some readers saying to themselves, “but why is red the way it is?” Where do the qualia come from? I don’t know how the brain does qualia. But I see it as a scientific question all the same. I do not buy that it is so mysterious that it cannot be understood. Why is red the way it is? Why not? How is red the way it is? Who knows. But I trust we will know someday.

6 thoughts on “I notice your frustration”

  1. Thank you for providing more detail on your position. I am very glad that you devoted a post to answering my previous comments in detail.

    Maybe you overestimate our divergences on some points: I am not a dualist, I don’t think a spiritual substance is required for explaining our minds, and I too view consciousness as a global model of the world, built by the brain, whose originating processes are not transparent (for they are not part of the model itself). That is, I fully subscribe to the 3 constraints of Thomas Metzinger.

    However, there are some differences between our views.

    - You seem to understand “thought” or “mental” as synonymous with “brain process”, while I understand “thought” and “mental” as referring to the content of consciousness. I think you are confusing two levels of description, the 1st-person and 3rd-person levels. “I think it will rain” means that the idea that it will rain is somehow instantiated inside my model of reality (“rain” and “it will” are different elements of my model of reality).

    To me, saying that consciousness “does not do thought; it does awareness” is meaningless, since thoughts are the content of consciousness and (being an object of) awareness is a property of thoughts, so if something does the thoughts, it does the awareness too.

    Phrased differently, I don’t think that Metzinger’s criteria 1 and 2 are conceptually independent.

    - Similarly, I don’t think “direct knowledge” exists: any kind of knowledge is a kind of model, and again, I would define knowledge as something that pertains to consciousness. Maybe you are tempted to view the input of a brain process as “direct knowledge” of the world? Again, this is confusing two different levels of description and promoting a naive understanding of knowledge as “direct perception of facts”. An input signal is not any kind of knowledge; it must be interpreted first.

    There is a huge literature on the subject in philosophy, going back to the Middle Ages, and I think it is rather consensual nowadays to view knowledge as the confrontation of a model with reality rather than the pure reception of facts or sense-data (cf. Wilfrid Sellars’ argument on the “myth of the given”).

    - More importantly, you seem to view consciousness as a side-effect of brain processes. I would put more emphasis on the active and dynamic role of consciousness. I think that the crucial “high-level” parts of what we do, we do within the model of reality that is consciousness.

    Think of a football player: if he has no representation of the field and of the location of the different players, if he has no representation of the rules of the game, then he cannot play at all. The terms of his decisions (run on the left side and shoot) are only meaningful within the model of reality that is consciousness, which contains the rules and a spatial representation of the field, together with what it means to “run” and to “shoot”. Moreover, his decisions dynamically update the model (“I just shot”).

    This is related to my first point: I don’t see why the thoughts/actions and the awareness should be separate features, as if awareness were a kind of useless duplicate of our underlying processes. I don’t mean that the whole of our brain processes are conscious, obviously not… Nor do I mean that the content of our consciousness, including our propensities to act one way or another, is not strongly determined by unconscious brain processes; obviously it is. I mean that conscious actions, as far as they are conscious, happen within consciousness and not elsewhere. If your unconscious brain ever begins to do things that you do not want it to (it happens sometimes), then obviously you will realize it and you will be able to fix those bad habits, consciously. You don’t have to be aware of every single process to be in control at a high level. Only people suffering from mental disorders are no longer in control.

    Moreover, there are strong arguments (e.g. from the philosopher Gilbert Ryle in his great book “The Concept of Mind”) for not distinguishing between the “awareness” part and the “active” part. Awareness is not some kind of magic property that is added to some brain processes on arbitrary grounds… It is just the property of an action happening inside the high-level model of reality that is consciousness, and the model is the only framework for these actions.

    I suppose you will mention Libet’s experiments (or their more recent replications) as an argument for distinguishing between the ‘active’ and the ‘awareness’ parts. I don’t think they are decisive: to me, they reveal unconscious propensities, which definitely exist, but they do not contradict the view that high-level actions really happen inside consciousness.

    On the contrary, an epiphenomenalist view of awareness is puzzling if not self-contradictory: it’s hard to understand why we should trust science if we are mere spectators of our brain processes and if any meaningful idea (including all of our knowledge) is nothing but an additional, useless, delusional layer! Knowledge is the starting point of the whole discussion; you cannot undermine it. Maybe the root of our divergence here is the reduction of mental content to something else (more on that later).

    - Here comes our main point of divergence: I don’t think that the “hard problem of consciousness” is a scientific problem at all. I also was a strong materialist in my childhood, but then it occurred to me (progressively, and quite recently I must admit – mostly after reading philosophy) that everything we call “matter” or “physical” is not “something that exists” but “something we represent/model” (with some prerequisites: being expressible, measurable, reproducible, …), that our thoughts and representations exist more certainly than anything we think of or represent – they are the genuine reality – and finally, that if our representations are accurate (I don’t deny that), it is not necessarily in virtue of their direct correspondence to “what exists” but rather in virtue of their correspondence to a relational, interactive structure available to contextualized observers/actors located inside reality (in other words, there is no accessible “God view”). Let us call this a “Kantian turn”, followed by a “structural realism turn”… If you accept that view, you don’t have to be a dualist, and the mind-body problem is not so difficult: it’s just that the 1st-person and 3rd-person perspectives are two sides of the same coin, and neither of them is the full story.

    But you don’t even need such a turn to admit that Metzinger’s criteria for consciousness are not all scientifically or empirically tractable. Let us review those criteria:

    1) Consciousness is a “global model”.

    - Is a “model” a scientific object? (Can you then make a scientific “model of a model”?) What are the empirical criteria for being a “model”? I think a model only makes sense in relation to a subject who interprets it. So the concept of “subject” is required, but it is not a scientific (objective) concept.

    - What is “global”? More precisely, how do you account for the unity of consciousness if it is supposedly composed of independent particles? You must specify exactly where the “global model” starts and ends, but if every particle is independent, any separation seems arbitrary. Now, consciousness does not only exist in the heads of scientists (as does a ‘system’ whose definition is arbitrary); it exists for real… That is the main reason why I reject the reduction of mental content.

    2) Consciousness is “experience of that world, now”.

    - How do you decide that some material system has “experience”? What does it mean scientifically, and what does it change empirically? I decide that other people have experience because they are very much like myself and communicate with me, and because I do have experience. Actually, the only proper definition of “experience” I can think of is “you know, that thing that I have right now”… which is not very scientific, you will admit. But I can’t think of any scientific/empirical/objective proof of that idea. Saying “that specific system has experience” does not change anything in the math behind it…

    - How do you decide that an experience is “now”? There is no “now” in science, only time relations between events (before, after, or separated by a space-like interval). I would rather say that “now” is to be defined on the basis of consciousness than the converse: the time when I am conscious. Again, the only proper definition I can think of refers to my own subjective experience, nothing “public” and objective (even simultaneity is not objective in science).

    3) “Unavailability of earlier processing stages”… Well, I have nothing to say about this 3rd criterion that is not contained in my previous remarks.

    I insist that all these are not problems specific to some scientific theories, problems that could be overcome in the future when new discoveries are made; they are conceptual problems intrinsic to the very nature of science (reductionism, …) and to the fact that several concepts involved in consciousness are subjective and prior to scientific activity (“now”, “experience”, …).

    In summary, while I agree with Metzinger’s criteria for consciousness, I also see that they require some metaphysical concepts in order to be meaningful, concepts which are not empirical but are rather the prerequisite for having empirical concepts. My idea is not to invoke something “spiritual” instead of “physical” or “material” to explain that, because I think that any reference to something “physical” or “material” already involves, at its very root, a reference to this kind of metaphysical, non-empirical concept that is a “first-person perspective” (or a “viewpoint”, an “experience”, a “now”, …). You must have it, already, to talk about something “physical”, and “physical” is merely the name we give to what appears to an existing subject. I’ll be just fine if you accept that such a reference to one’s own subjectivity is unavoidable for tackling consciousness.

  2. I enjoyed the original post and find myself in agreement with its main thrust.

    In response to Quen_tin,

    “I would define knowledge as something that pertains to consciousness.”

    I find this to be a bad definition of knowledge. When the chess computer makes a series of 3 plays that guarantees checkmate, and does this more consistently and unwaveringly than the best human can (where we assume the computer does it non-consciously and the human does it with consciousness), I find it difficult NOT to call this knowledge. We could connect this back to the consciousness of the designers, but that move breaks down for varying reasons, one being that we get computers performing various feats that conscious humans cannot accomplish.
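
    To make that concrete, here is a minimal sketch of the kind of exhaustive lookahead a game engine performs (tic-tac-toe stands in for chess purely to keep it short; the position and code are illustrative, not from any real engine). The program finds a guaranteed win by nothing more than mechanical search:

    ```python
    # Toy minimax search: the program "sees" a forced win with no
    # awareness anywhere, only recursion over possible continuations.
    def winner(board):
        lines = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
                 (0, 3, 6), (1, 4, 7), (2, 5, 8),
                 (0, 4, 8), (2, 4, 6)]
        for a, b, c in lines:
            if board[a] != " " and board[a] == board[b] == board[c]:
                return board[a]
        return None

    def minimax(board, player):
        # Returns (score, move) from X's point of view:
        # +1 if X forces a win, -1 if O does, 0 for a draw.
        w = winner(board)
        if w:
            return (1 if w == "X" else -1), None
        moves = [i for i, cell in enumerate(board) if cell == " "]
        if not moves:
            return 0, None  # draw
        best = None
        for m in moves:
            child = board[:m] + player + board[m + 1:]
            score, _ = minimax(child, "O" if player == "X" else "X")
            better = (best is None
                      or (player == "X" and score > best[0])
                      or (player == "O" and score < best[0]))
            if better:
                best = (score, m)
        return best

    # X to move; the search finds the winning square (index 2).
    score, move = minimax("XX OO    ", "X")
    print(score, move)  # 1 2
    ```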

    But the intuition that “consciousness” is what provides humans with a connection to “knowledge” is at least undermined, as it seems that most of those connections could be structured non-consciously. Where consciousness eventually fits in may be narrow and specific “knowledge” processes that require the global availability of a great deal of information, or something like that, but that is really just empty speculation.

    I know you push some of that into deep philosophical territory, which is fine, but I find your football “knowledge” example to be unsatisfactory:

    “Think of a football player: if he has no representation of the field and of the location of the different players, if he has no representation of the rules of the game, then he cannot play at all. The terms of his decisions (run on the left side and shoot) are only meaningful within the model of reality that is consciousness, which contains the rules and a spatial representation of the field, together with what it means to “run” and to “shoot”. Moreover, his decisions dynamically update the model (“I just shot”).”

    The idea that we could not create a non-conscious being that performs the role of this game as well as humans seems like an empty belief in the specialness of humans and of human consciousness. Compare this, for instance, to Watson on Jeopardy!, which played that game better than humans while also satisfying the basics of language expression; it played the language game adequately as well. Things like navigating spatial reality, the deer running through the forest or the player through the field of other players, are not something we have duplicated on computers to the extent that we have duplicated navigating logic and trivial facts with Watson and chess computers, but Google is seemingly making inroads with their cars. The belief that such football-field navigation requires consciousness seems akin to the false belief that the ability to beat a human at Jeopardy! or at chess requires consciousness. In other words, I think we have reached a point where we must be humble about such claims as “it requires consciousness to play football well.” Certainly computers can obey the parameters of the “rules of the game” without needing consciousness, as well as things like the strategy of the game, and so on.

    The chess computer clearly shows that this statement is wrong: “The terms of his decisions (run on the left side and shoot) are only meaningful within the model of reality that is consciousness . . .” I can only assume that by “meaningful” you mean that it helps this agent win the game, that it connects to game strategy in the right way; but there is nothing unique to “conscious representation” that does that. This is clearly a process we can duplicate non-consciously and still have the being perform “meaningful” actions. The non-conscious chess computer can represent the board, represent past opponent plays to help form future plays, and so on. The usefulness of our “conscious” representation in such instances is undermined, and the representative structures of non-conscious chess computers, and presumably of a non-conscious computer football player, could equally “represent” those features and logical structures of the game.

    From there you can continue to talk about the “active and dynamic role” of human consciousness, but there is just no good reason to believe that we gain abilities and “knowledge” (at least narrowly defined as the ability to navigate the world, to win a game, or to manipulate information) because of that “active and dynamic role” of consciousness; there are other ways we could duplicate the usefulness of those navigations without consciousness.

    Which goes back to what knowledge is. You can maintain that the human who sees, during the course of a chess game, the three-step checkmate move, and who understands the logic of why she is guaranteed to win, has more “knowledge” than the computer, which “sees” and executes the move based on its programming. If we claim that, then we have taken “knowledge” away from behavior and representative functions (I assume the chess computer “represents” the board and structure in some way) and placed “knowledge” into some consciousness realm, where knowledge is not really knowledge unless it is also accompanied by consciousness. Doing so, though, we would lose sight of what we think is most important about our knowledge: our ability to navigate the world and manipulate that world. My knowledge of how to multiply 222 × 461 must be accompanied by my conscious processes, but this does not mean that I am better at those processes, that my “knowledge” of those processes is better than a calculator’s knowledge of them.
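
    (To spell that example out: 222 × 461 = 222 × 400 + 222 × 60 + 222 × 1 = 88,800 + 13,320 + 222 = 102,342. I need a chain of conscious steps to get there; the calculator reaches the same answer with none, faster and more reliably.)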

  3. Lyndon and Quen_tin: The football player is interesting. It represents the problem of consciousness and skill. We use consciousness to learn a skill (say, passing the ball), then we practice until we form a sequential (procedural) memory of the skill and are very skillful at it. This is very common for athletes, musicians, skilled workers, car drivers and keyboard operators, and for the ordinary everyday actions that we do without detailed thinking. The sequential memory is not available to consciousness, and any attempt to examine the skill consciously interferes with the action and makes it less skillful (it may even be housed in the cerebellum rather than the cerebrum). Athletes have to learn to put their conscious attention on the target or goal of their action and NOT on the action. If they try to concentrate on the action, they will not be able to use their sequential memory and will ‘choke’. This is why confidence is so important – it stops the doubts that promote conscious interference.
    This can also be seen as an example of System 1 and System 2 thinking (I am not sure that there is an agreed definition of what these are – my interpretation is that they are essentially identical except that one uses working memory and the other doesn’t). System 2 uses working memory and is therefore slow, limited in the number of items dealt with simultaneously, and enters consciousness. System 1 does not use working memory and is therefore fast, able to handle a great deal of information simultaneously, and does not enter consciousness. The quarterback simply does not have the time to use System 2, and there is too much data needed for successful passing to be handled by System 2. He will use System 1 automatically and quickly. If he stops to think consciously, a whole bunch of big guys will jump on him and take his ball away.
    Just because consciousness presents a model of the world does not mean that its model is the only copy in the brain. In fact, I think the conscious model is a somewhat edited version suitable for episodic memory.
    I don’t think that consciousness is useless. In fact, I think it is one of the most important processes in the brain – it just is not involved in cognition, thought and the like, except to present their results. It is not a ‘side-effect of brain processes’ but one of the brain processes, and a very important one, ‘very active and dynamic’, just not a cognitive process. If we subtracted consciousness from the brain, we would not get a zombie like Chalmers suggests; we would have a deficient brain that would have problems with motor skills, memory, learning, language, imagination and so on. That does not mean that consciousness does thought.
    JK

  4. In response to Lyndon:

    When a farmer builds an irrigation system, would you say that the irrigation system *knows* how to bring water to the plants? Would you say that the connection to the farmer’s intention to water his field breaks down because the farmer would be unable to bring so much water with his hands alone? Would you say that the irrigation system *represents* the layout of the field? I suppose not. I think the same goes for any computer program: it does not “know your name” when it greets you, it just concatenates strings, etc.

    In my view, Watson does not know how to play Jeopardy!, but its designers do, so well that they manage to win at Jeopardy! without even hearing the questions (with the help of a computer they called Watson). They deserve the congratulations, not Watson. The same goes for any chess-playing program, and for any robot that would be designed to play football. Now of course, Watson could not learn how to play chess: computers do not learn, they do what they are designed for.

    You could mention Bayesian algorithms as a counter-argument (algorithms which “learn” in confrontation with new data, such as anti-spam filters), but even then the framework within which the algorithm works is so constrained that I think “knowledge”, “learning” and “meaning” are still misnomers, at most metaphors, as they are when applied to any kind of algorithm.
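
    To see how constrained that framework is, consider a minimal sketch of such a filter (a naive Bayes classifier, the textbook anti-spam approach; the code is illustrative, not any particular product). The features (words), the labels (spam/ham) and the update rule are all fixed in advance by the programmer; only the counts change when new data arrives:

    ```python
    # Minimal naive Bayes spam filter: "learning" is nothing more
    # than incrementing counters inside a programmer-fixed framework.
    from collections import Counter
    import math

    class NaiveBayesFilter:
        def __init__(self):
            self.word_counts = {"spam": Counter(), "ham": Counter()}
            self.doc_counts = {"spam": 0, "ham": 0}

        def train(self, text, label):
            self.doc_counts[label] += 1
            self.word_counts[label].update(text.lower().split())

        def spam_probability(self, text):
            # Assumes at least one training example of each class.
            total_docs = sum(self.doc_counts.values())
            vocab = set(self.word_counts["spam"]) | set(self.word_counts["ham"])
            scores = {}
            for label in ("spam", "ham"):
                total_words = sum(self.word_counts[label].values())
                # Log prior plus log likelihoods with add-one smoothing.
                score = math.log(self.doc_counts[label] / total_docs)
                for word in text.lower().split():
                    count = self.word_counts[label][word] + 1
                    score += math.log(count / (total_words + len(vocab)))
                scores[label] = score
            # Normalise the two log scores into a probability of spam.
            m = max(scores.values())
            exp = {k: math.exp(v - m) for k, v in scores.items()}
            return exp["spam"] / (exp["spam"] + exp["ham"])

    filt = NaiveBayesFilter()
    filt.train("win money now", "spam")
    filt.train("meeting at noon tomorrow", "ham")
    print(filt.spam_probability("win a free meeting"))  # leans toward spam
    ```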

    My view is that computers are mere extensions of our brains in the outside world, just as other tools are extensions of our arms, etc. Human beings build them, implement them, start and stop them, use them, feed them with electricity and input data, fix them, maintain them, etc. So whenever you see a behaviour that looks like “knowledge” or “representation” somewhere, you can be sure there is a human being behind it (or another living form, but animals are yet to build computers).

    This brings me to JK’s interesting comment, which I think corroborates my view. Indeed, System 1 is both unconscious and “systematic”: it performs predictable and automatic tasks. In that sense, System 1 is quite analogous to algorithms, whereas System 2 is not. When a task becomes automatic, it becomes unconscious as well. I think this reveals that consciousness is quite the opposite of an algorithmic process, even though it strongly rests on algorithms or mechanisms for its existence.

    When a football player trains, he does much what a computer programmer does: he implements automatic tasks in his own body for later use. This allows him to concentrate on high-level actions (such as strategic issues), which are conscious actions which happen right inside consciousness. The player does “run on the left and shoot” even though he is not controlling each of his muscles, because “run” is a high-level action.

    Now, I would not talk about a “model”, “knowledge”, “meaning” or “thought” for these automatic tasks, for the same reason that I don’t talk about the knowledge of a computer. That a model of the world is consciously present when a habit is “implemented” does not mean that the habit has a model or knows something by itself.

    Also, I think there is a slight misunderstanding of Chalmers’ argument in JK’s last comment. Chalmers’ argument is metaphysical: he does not intend to subtract consciousness as a process from the brain, but to subtract the awareness aspect, the ‘what-it-is-likeness’ associated with conscious processes, while retaining every functional aspect. Consciousness would be functionally the same “from the outside”, but without awareness. Then you get a zombie.

  5. Quen_tin, there are some statements in your reply to Lyndon that I have problems with.
    a. Irrigation systems and Watson are one thing, but I don’t think that it is impossible to build a thinking machine (it would not be much like a brain except that it could think), and I don’t think this is all or nothing – there would be degrees of thinking. I am not happy about using ‘algorithm’ to describe brain processes, at least until we understand them enough to see whether they are, strictly speaking, algorithmic.
    b. I think that what I said about System 2 shows clearly that I don’t accept that there is any difference between the cognition part of the two systems. The difference is that one uses working memory and the other does not. It is only what is stored in working memory that rises to consciousness, and not the cognition that produced it.
    c. “This allows him to concentrate on high-level actions (such as strategic issues), which are conscious actions which happen right inside consciousness.” To me this is meaningless – what are conscious actions happening inside consciousness? This implies that consciousness is in a different place, or time, or something, than the rest of the brain – that it is separate. There is no need to think in terms of two separate minds.
    d. I don’t think I misunderstand Chalmers. He imagines being able to subtract consciousness, which he sees as awareness and not a cognitive process. I disagree with him as to the functions of consciousness as well as awareness, and I do not think taking away the awareness would leave a zombie that was indistinguishable from other humans. It would be hard to tell the difference at first (like a sleep-walker), but it would show up in having no ‘experience’, no ‘learning from experience’, no ‘memory of experience’ and in other more subtle ways.
    Some time ago I did a series of posts on the functions of consciousness which you may find interesting.
    Possible functions of consciousness 1 – leading edge of memory http://charbonniers.org/2011/10/20
    Possible functions of consciousness 2 – gate to meaning http://charbonniers.org/2011/10/29
    Possible functions of consciousness 3 – working memory http://charbonniers.org/2011/11/04
    Possible functions of consciousness 4 – place to imagine http://charbonniers.org/2011/11/16
    Possible functions of consciousness 5 – create ‘now’ http://charbonniers.org/2011/11/25
    Possible functions of consciousness 6 – presence ‘here’ http://charbonniers.org/2011/12/01
    Possible functions of consciousness 7 – attention on the significant http://charbonniers.org/2011/12/10
    Possible functions of consciousness 8 – broadcasting waves http://charbonniers.org/2011/12/19
    Possible functions of consciousness 9 – marking agency http://charbonniers.org/2011/12/25
    Possible functions of consciousness 10 – being oneself http://charbonniers.org/2012/01/03
    Possible functions of consciousness 11 – summary http://charbonniers.org/2012/01/06
    Yours, JK

  6. a.

    I don’t think it is impossible either, but a thinking machine would be conscious. That is my point.
    I also think such machines would be very different from today’s computers in many respects, but that’s another discussion…

    b.

    Maybe “algorithmic” is not the proper term. I mean a functional process that has inputs and outputs and whose outputs are fully predictable from its inputs. System 1 is clearly “algorithmic” in that sense: the purpose of training is to reach a perfect reproducibility of one’s processes. I don’t think System 2 (or consciousness and thoughts in general) is “algorithmic” in that particular sense.

    c.

    No, I don’t mean that consciousness is in a different place from the brain. I mean that conscious actions are intractable from the outside, that they are not functional processes and do not reduce to such processes. It’s just a different level of description, one that requires high-level human concepts to be understood and described, and such concepts do not explicitly reduce to physical concepts. It requires an unavoidable reference to one’s own consciousness and to high-level concepts.

    d.

    Another subtlety: Chalmers does not claim that subtracting awareness would result in indistinguishable external behaviour. He claims that physicalism (or functionalism) entails this statement, and that therefore physicalism must be false. Adding or removing awareness inside a physical system does not change the math that describes that system; therefore, systems do not reduce to their mathematical (physical) description.
