shpalman

Consciousness... consciousness of other people... consciousness of beer... unconsciousness...

27th April 2008 (10:59)

In his editorial introducing Lionel Milgrom’s latest paper, “A New Geometrical Description of Entanglement and the Curative Homeopathic Process” [1], Alex Hankey cites “Facing up to the problem of consciousness” [2] by philosopher David Chalmers, which merits a blog post of its own. Hankey’s editorial itself is dealt with in the post entitled Inconsistent with health and healing.


Chalmers distinguishes the ‘easy’ and ‘hard’ problems related to explanations of consciousness. The easy problems relate to the explanation of what he redefines as awareness:

  • the ability to discriminate, categorize, and react to environmental stimuli;
  • the integration of information by a cognitive system;
  • the reportability of mental states;
  • the ability of a system to access its own internal states;
  • the focus of attention;
  • the deliberate control of behavior;
  • the difference between wakefulness and sleep.

whereas the hard problem is that of explaining experience:

“When we think and perceive, there is a whir of information-processing, but there is also a subjective aspect. As Nagel (1974) has put it, there is something it is like to be a conscious organism. This subjective aspect is experience. When we see, for example, we experience visual sensations: the felt quality of redness, the experience of dark and light, the quality of depth in a visual field. Other experiences go along with perception in different modalities: the sound of a clarinet, the smell of mothballs. Then there are bodily sensations, from pains to orgasms; mental images that are conjured up internally; the felt quality of emotion, and the experience of a stream of conscious thought. What unites all of these states is that there is something it is like to be in them. All of them are states of experience.”

Chalmers maintains therefore that what is required to explain experience is something which does not fit into or arise out of cognitive science and neuroscience,

“These methods have been developed precisely to explain the performance of cognitive functions, and they do a good job of it. But as these methods stand, they are only equipped to explain the performance of functions. When it comes to the hard problem, the standard approach has nothing to say.”

and even some sort of quantum-mechanical effect, or Penrose’s “nonalgorithmic processing”, can’t be the missing ingredient, because these are physical effects which should already be covered by the time cognitive science and neuroscience have done their stuff. Chalmers still wants to know why any such purely physical process should give rise to experience: “Experience may arise from the physical, but it is not entailed by the physical... When it comes to a problem over and above the explanation of structures and functions, [reductive] methods are impotent.” He then goes on to explain that he’s not being a vitalist, but he admits that his position, in which he wants to “take experience itself as a fundamental feature of the world, alongside mass, charge, and space-time,” qualifies as a variety of dualism. He maintains, though, that it is an “innocent version of dualism... nothing in the approach contradicts anything in physical theory.”

Well, he says that, but he can’t know it without knowing something that nobody else does about cognitive science and neuroscience, or without making his idea so nebulous that its connection to reality can be wherever it’s easiest to hide. The fact that this idea’s “overall shape is like that of a physical theory” is useless if he’s already decided that there’s nothing physical about it.

(Incidentally, he mentions Maxwell’s introduction of electromagnetic charge and forces as “new fundamental components of a physical theory” in addition to “the wholly mechanical processes that previous physical theories appealed to”, but it’s important to remember that electromagnetism led to special relativity [3] (and from there to general relativity), which changed mechanics quite a lot; so it’s not necessarily safe to go adding new fundamental entities without expecting anything else in the rest of physics to have to change. Anyway, mass and charge are fundamental because they are conserved quantities (once we realize that energy has mass [4]), and conservation laws relate to symmetries; it’s hard to argue that “experience” is a conserved quantity.)
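The link between conservation laws and symmetries is Noether’s theorem, which can be sketched schematically (the notation here is mine, not from any of the papers under discussion):

```latex
% Noether's theorem, schematically: if the Lagrangian L(q, \dot q, t)
% is invariant under a continuous transformation
% q \to q + \epsilon\,\delta q, then the quantity
\[
  Q = \frac{\partial L}{\partial \dot q}\,\delta q ,
  \qquad \frac{\mathrm{d}Q}{\mathrm{d}t} = 0 ,
\]
% is conserved. Time-translation symmetry gives energy (and hence
% mass--energy) conservation, and the gauge symmetry of
% electromagnetism gives charge conservation; there is no known
% symmetry whose conserved charge would be "experience".
```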

But finally we get to the three principles which Chalmers thinks might go into a theory of consciousness:

  1. Structural coherence
  2. Organizational invariance
  3. The double-aspect theory of information

The first two of these are described by Chalmers as nonbasic principles but he considers the third and final one to be his “candidate for a basic principle that might form the cornerstone of a fundamental theory of consciousness.”

Structural coherence
“Any information that is consciously experienced will also be cognitively represented.” No argument there: put someone in an fMRI machine or PET scanner and watch the correlations between what they report and what’s going on in their brain. You have a cortical map of your body, and other parts of the cortex map visual or auditory fields.
Organizational invariance
“Any two systems with the same fine-grained functional organization will have qualitatively identical experiences.” The thought experiment here is to consider two conscious systems which have the same structure but are built from different components, to imagine that they might nonetheless have different experiences, and then to imagine converting one into the other piece by piece. The experience would surely need to change somewhere along the way, but Chalmers decides this has to happen suddenly at one step in the progression rather than smoothly across the whole range; is there any real reason for that, or is it just that he is thinking in terms of mutually exclusive experiences of the same external phenomenon? One system might see red and have the experience of red, and another might see red and have the experience of blue, but is there one in the middle which would have the experience of grey or purple? This is the sort of question which reminds me why I have little patience for philosophy; it’s the sort of thing students discuss in pubs. Maybe we could sort it out in this case by examining the visual cortex, but then we’d probably be back to “awareness” rather than “experience”.
The double-aspect theory of information

Shannon [5] introduced the concept of information entropy to quantify the information content of a message. You can get an idea of the entropy of a Word document by seeing how much you can compress it with WinZip (OpenOffice documents are already compressed). Documents which compress a lot contain a lot of repeated information* and have a low entropy, whereas a truly random string of bits is incompressible and has a high entropy. But information entropy says nothing about meaning: that seemingly random string of bits might turn out to be an OpenOffice document, once you’ve installed the software which allows your computer to make sense of it and translate it into text on the screen... and then it might turn out to be in a language which you don’t understand, until you learn that language and so acquire the mental software to translate the text into mentalese [6].

It’s fine to represent information as if it lived in its own information space, because on a certain level that string of bits and the pattern of lights on the screen have the same information content even if they have completely different physical forms, but that doesn’t give the information space any physical reality. You always find the information encoded in some physical medium, whether it be magnetic domains on a hard disk, charges on the gates of transistors, the orientations of liquid-crystal molecules, or firing patterns of neurons. In some sense it’s the same information, but it relies on either the computer or the human having the right software in order to get the meaning out. How that translation into mentalese leads to “experience” of that information isn’t at all obvious; it still seems like “awareness” to me. What’s fundamental about information? It’s just an arrangement of something which isn’t quite as random as it could be.

(* - recently I had to make some posters in PowerPoint, and I noticed that even simple edits caused the file size to increase significantly; I discovered by accident that using “Save as” instead of “Save” caused the file size to decrease back to its original value, even if I didn’t actually change the filename or anything.)
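The entropy-versus-compression point is easy to demonstrate in a few lines. The following is a toy sketch in Python, using the standard library’s zlib in place of WinZip (the function name and test strings are my own, not from Shannon):

```python
import math
import os
import zlib

def shannon_entropy(data: bytes) -> float:
    """Shannon entropy in bits per byte: H = -sum(p_i * log2(p_i))."""
    if not data:
        return 0.0
    counts = {}
    for byte in data:
        counts[byte] = counts.get(byte, 0) + 1
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

repetitive = b"consciousness " * 1000        # lots of repetition, low entropy
random_bytes = os.urandom(len(repetitive))   # truly random, high entropy

for label, data in [("repetitive", repetitive), ("random", random_bytes)]:
    ratio = len(zlib.compress(data, 9)) / len(data)
    print(f"{label}: {shannon_entropy(data):.2f} bits/byte, "
          f"compresses to {ratio:.1%} of original size")
```

The repetitive string compresses to a small fraction of its size while the random one barely compresses at all; but, as the post argues, neither number tells you anything about what the bytes mean.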

So what we are left with probably isn’t as exciting as Hankey seems to make out.


  1.  L. R. Milgrom, J. Alt. Comp. Med. 14, 329 (2008).
  2.  D. J. Chalmers, J. Consciousness Studies 2, 200 (1995).
  3.  A. Einstein, Annalen der Physik 17, 891 (1905a).
  4.  A. Einstein, Annalen der Physik 18, 639 (1905b).
  5.  C. E. Shannon, Bell System Tech. J. 27, 379 (1948).
  6.  S. Pinker, The Stuff of Thought: Language as a Window into Human Nature (Allen Lane, 2007).