
Monthly Archives: March 2018


CONSCIOUSNESS — and the Color-Phi phenomenon

When you watch a movie, your retina is stimulated 24 times each second with 24 static images.  An object that takes up adjacent positions in each successive image is perceived as moving smoothly.  The effect can be demonstrated experimentally with a single bright spot that is successively presented at one place and then at an adjacent place (see Figure 15.1).   If the delay between the two presentations is short, the spot appears to move, rather than disappear and then reappear.  This is termed the phi phenomenon.  There is a related effect in which the two spots are different colors.  What is seen is a single moving spot which changes color at about the midpoint of its travel.

This is a puzzle for some cognitivists.  A philosopher and a psychologist conjecture as follows:

[Philosopher Nelson] Goodman wondered: “How are we able…to fill in the spot at the intervening place-times along a path running from the first to the second flash before that flash occurs?” …. Unless there is precognition, the illusory content cannot be created until after some identification of the second spot occurs in the brain.  But if this identification of the second spot is already “in conscious experience” would it not be too late to interpose the illusory color-switching-while-moving scene between the conscious experience of spot 1 and the conscious experience of spot 2?…[other experimenters] proposed that the intervening motion is produced retrospectively, built only after the second flash occurs, and “projected backwards in time”….But what does it mean that this experienced motion is “projected backwards in time”?[1]

Presented in this way, the color-phi effect certainly seems baffling, at least to philosopher Goodman.  Dennett and Kinsbourne describe, rather picturesquely, two standard cognitive ways of dealing with this effect.  One, which they term “Orwellian,” is that we experience things in one way, but then revise our memories, much as the Ministry of Truth in Orwell’s 1984 revised history.  The color-phi effect thus becomes a post-hoc reinterpretation: two spots are experienced, but a smoothly moving, color-changing spot is reported.  Dennett and Kinsbourne term the other standard approach “Stalinesque,” by analogy with Stalin’s show trials, in which false evidence is created but reported accurately.  In this view, what is reported is what was actually experienced, though what was experienced was not what (objectively) happened.

Dennett and Kinsbourne dismiss both these accounts in favor of what they term a “multiple-drafts” model: “Our Multiple Drafts model agrees with Goodman that retrospectively the brain creates the content (the judgment) that there was intervening motion, and that this content is then available to govern activity and leave its mark on memory.  But our model claims that the brain does not bother ‘constructing’ any representations that go to the trouble of ‘filling in’ the blanks” [2].  In the multiple-drafts model consciousness becomes a distributed construct, like “The British Empire” (their analogy), which is not uniquely located in time or space.

Theoretical behaviorism has a much simpler way of looking at the color-phi effect.  First, note that like all other psychological phenomena, the effect involves three conceptually separate domains:

Domain 1: The first is the domain of felt experience, the phenomenological domain.  There is a certain quality (philosophers call this a quale) associated with the color-phi experience.  This is subjective and science has nothing to say about it.  From a scientific point of view, I cannot say whether “green” looks the same to you as to me; I can only say whether or not you make the same judgments about colored objects as I do.  This point used to be a commonplace in philosophy, but apparently it needs to be reiterated from time to time: “That different people classify external stimuli in the ‘same’ way does not mean that individual sense qualities are the same for different people (which would be a meaningless statement), but that the systems of sense qualities of different people have a common structure (are homeomorphic systems of relations),” wrote Friedrich Hayek.[3] The same idea was on the table at the dawn of behaviorism: “Suppose, for example, that I introspect concerning my consciousness of colors.  All you can ever really learn from such introspection is whether or not I shall behave towards those colors in the same ways that you do.  You can never learn what those colors really ‘feel’ like to me.” – this from that most cognitive of behaviorists, Edward Tolman[4].

What this means is that if you and I are standing in the same place, we see the same chair to the left of the same table, we judge these two greens to be the same and the red to be different from the green, and so forth.  What we cannot say is that my green is the same as yours.  What we can say (unless one of us is color-blind) is that my green bears the same relation to my yellow as your green does to your yellow.

I can also know whether you say the same things about color-phi-type stimuli as I do.  Note that this is a behavioristic position, but it is not the version of behaviorism dismissed by Dennett and Kinsbourne, when they say “One could, then, ‘make the problems disappear’ by simply refusing to take introspective reports seriously.”[5]  As we will see shortly, the question is not whether phenomenological reports should be ignored – of course they should not – but how they should be interpreted.

Domain 2: The second domain is physiological, the real-time functioning of the brain.  The color-phi experiment says nothing about the brain, but another experiment, which I will discuss in a moment, does include physiological data.

Domain 3: The third domain is the domain of behavioral data, “intersubjectively verifiable” reports and judgments by experimental subjects.  The reports of people in response to appropriate stimuli are the basis for everything objective we can know about color-phi.

Much of the muddle in the various cognitive accounts arises from confusion among these three domains.  For example, an eminent neuroscientist writes: “The qualia question is, how does the flux of ions in little bits of jelly – the neurons – give rise to the redness of red, the flavor of Marmite or paneer tikka masala or wine?”[6]  Phrased in this way we don’t know and can’t know.  But phrased a little differently, the question can yield a scientific answer: What brain state or states corresponds to the response “it tastes like Marmite”?  As Hayek and many others have pointed out (mostly in vain), the phenomenology question – which always boils down to “how does red look to you?” – is not answerable.  All we can know is whether red, green, blue, etc. enter into the same relations with one another with the same results for you as for me – Hayek’s ‘homeomorphic relations.’

Color phi provides yet another example of the same confusion.  Dennett and Kinsbourne write “Conscious experiences are real events occurring in the real time and space of the brain, and hence they are clockable and locatable within the appropriate limits of precision for real phenomena of their type.”[7]  Well, no, not really.  What can be clocked and located are reports of conscious experiences and measurements of physiological events.  Conscious experiences are Domain 1, which has neither time nor space, but only ineffable qualia.  The only evidence we have for these qualia (at least, for someone else’s) is Domain 3.  And we can try to correlate Domain 3 data with Domain 2 data and infer something about the brain correlates of reported experiences.  But that’s all.  Dennett and Kinsbourne’s confident claim just confuses the issue.

All becomes much clearer once we look more closely at Domain 3: What did the subjects see?  What did they say about it, and when did they say it?  The real-time events in the color-phi experiment are illustrated in Figure 15.2, which is a version of the general framework of Figure 13.1 tailored to this experiment.  Time goes from top to bottom in discrete steps.  At time 0 the red spot is lit and goes out; there is a delay; then the green spot is lit; there is another delay and then the subject reports what he has seen, namely a continuously moving red spot that changes to green halfway through its travel: “RRRRGGGG.”  Stimulus and Response are both Domain 3.  The properties of the states are as yet undefined.  Defining them requires a theory for the effect, which I’ll get to in a moment.

Confusion centers on the subject’s response “RRRRGGGG”.  What does this response mean?  This seems to be the heart of the puzzle, but the unknowable quale here is scientifically irrelevant.  We do not, and cannot, know what the subject “sees.”  That doesn’t mean the subject’s response is meaningless.  What it can tell us is something about other, “control” experiments that might give the same quale.  Figure 15.3 shows one such control experiment.  In this experiment, a single spot really is moving and changing color at the midpoint: RRRRGGGG, and the subject’s report is, appropriately, “RRRRGGGG.”  The similarity between the responses to the really moving stimulus and to the color-phi stimulus is what the statement “the color-phi stimulus looks like a continuously moving spot that changes color” means.  The point is that we (i.e., an external observer) cannot judge the subject’s quale, but we can judge if his response is the same or different on two occasions.  And as for the subject, he can also judge whether one thing looks like another or not.  These same-different judgments are all that is required for a scientific account.

A Theory of Color-Phi

The comparison between these two experiments suggests an answerable scientific problem, namely: “What kinds of process give the same output to the two different histories illustrated in the two figures?”  More generally, what characterizes the class of histories that give the response “RRRRGGGG”?  The answer will be some kind of model.  What might the process be?  It will be one in which temporally adjacent events tend to inhibit one another, so that initial and terminal events are more salient than events in the middle of a series.  Thus, the input sequence RRRRGGGG might be registered[8] with the middle items attenuated – a sort of serial-position effect, i.e., stimuli in the middle of a series have less effect than stimuli on the ends (see Chapter 2).  In the limit, when the stimuli are presented rapidly enough, stimuli in the middle may have a negligible effect, so that the input RRRRGGGG yields the registered sequence R….G, which is indistinguishable from the color-phi sequence.  It would then make perfect sense that subjects make the same response to the complete sequence and the color-phi sequence.
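The inhibition idea can be put in toy-model form.  Here is a minimal sketch (my own illustration, not a model from the text) in which each event loses salience for every temporally adjacent neighbor; the inhibition parameter stands in for presentation rate, so a fast sequence loses its middle items while a slow one survives intact:

```python
def register(sequence, inhibition=0.6, threshold=0.5):
    """Return the 'registered' form of an input sequence.

    Each event starts with salience 1.0 and loses inhibition/2 for each
    temporally adjacent neighbor.  End items have one neighbor, interior
    items two, so the ends are always more salient (a serial-position
    effect).  Events whose salience falls below `threshold` are
    registered as '.' (negligible effect).
    """
    out = []
    for i, event in enumerate(sequence):
        neighbors = (i > 0) + (i < len(sequence) - 1)  # 1 at ends, 2 inside
        salience = 1.0 - inhibition * neighbors / 2
        out.append(event if salience >= threshold else ".")
    return "".join(out)

# Rapid presentation (strong mutual inhibition): middle items vanish.
print(register("RRRRGGGG"))                  # -> "R......G"
# Slow presentation (weak inhibition): the full sequence is registered.
print(register("RRRRGGGG", inhibition=0.2))  # -> "RRRRGGGG"
```

With strong inhibition the model registers the complete sequence and the color-phi sequence identically, which is all the perceptual equivalence requires.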

The same response, yes, but just what response will it be?  Let’s accept the existence of a perceptual process that gives the same output to two different input sequences: RRRRGGGG and R……G.  The question is, why is the response “RRRRGGGG,” rather than “R……G”?  Why do people report the abbreviated sequence as appearing like the complete sequence?  Why not (per contra) report RRRRGGGG as R……G?  Why privilege one of the two possible interpretations over the other?  It is here that evolution and personal history come into play[9].  Just as in the Ames Room (Chapter 1), the visual system takes the processed visual input (in this case R……G) and infers, unconsciously, the most likely state of the world that it signifies.  Since alternating on-and-off spots are rare in our evolutionary history, the inference is that a single moving spot is changing color.

Thus, by responding “RRRRGGGG,” rather than “R……G” we may simply be playing the evolutionary odds.  Given that these two sequences produce the same internal state, the most likely state of the world is RRRRGGGG  – the moving, color-changing spot – rather than the other.  So RRRRGGGG is what we report—and perceive (the subject isn’t lying)[10].
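Playing the evolutionary odds is just Bayes’ rule with equal likelihoods: since both world-states produce the same internal state, the prior alone decides the report.  A sketch with made-up numbers (the probabilities are purely illustrative, not from the text):

```python
# Two candidate world-states that both yield the internal state R......G.
# The prior probabilities are hypothetical, standing in for the relative
# frequency of each situation in our evolutionary history.
priors = {
    "moving, color-changing spot": 0.99,  # common: objects usually move smoothly
    "two alternating spots":       0.01,  # rare in nature
}

likelihood = 1.0  # P(internal state | world) is the same for both by assumption

# Unnormalized posterior; with equal likelihoods it is just the prior.
posterior = {world: likelihood * p for world, p in priors.items()}
report = max(posterior, key=posterior.get)
print(report)  # -> "moving, color-changing spot"
```

The subject’s “RRRRGGGG” report is the maximum-posterior interpretation, which is why it is a perception rather than a lie.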

This approach to the color-phi effect is as suitable for non-human animals as for humans.  As far as I know, no one has attempted a suitable experiment with pigeons, say, but it could easily be done.  A pioneering experiment very similar in form was done many years ago by Donald Blough when he measured pigeons’ visual threshold, something that also raises ‘consciousness’-type questions.  After all, only the pigeon knows when he ceases to ‘see’ a slowly dimming stimulus.  Blough’s solution was a technique invented by his colleague, sensory physiologist Georg Békésy, to measure human auditory thresholds[11].  Blough describes his version of the method in this way: “The pigeon’s basic task is to peck key A when the stimulus patch is visible and to peck key B when the patch is dark. The stimulus patch, brightly lighted during early periods of training is gradually reduced in brightness until it falls beneath the pigeon’s absolute threshold.”[12]  As the patch dims and becomes invisible, so the pigeon’s choice shifts from key A to key B.  Blough’s experiment tracked the pigeon’s dark-adaptation curve – the change in threshold as the light dimmed – which turned out to be very similar in form to curves obtained from people.

Exactly the same procedure could be used to see when a pigeon shifts from seeing on-and-off lights to a continuously moving-and-color-changing light.  The pigeon is confronted with two choice keys, on the left (A) and the right (B).  In between is a digital display that can show either a continuously moving[13] dot that changes color from red to green in mid-travel (continuous), or two dots, a red one on the left and a green one on the right, that alternate (alternating; see Figures 15.1 and 15.3).  The animal would first be trained to peck key A when alternating is presented (with an alternation rate slow enough to make the two dots easily visible as separate events), and to peck key B when the continuously moving light is presented.  The rate of the continuous display would need to match the alternation rate of the alternating display.  As the experiment progresses, the alternation rate is slowly increased just as, in Blough’s experiment, stimulus brightness was slowly decreased.  I very much expect that the animal will at some point change its preference from key A, indicating that it sees the two dots as separate stimuli, to key B, indicating that they look like the continuous stimulus.
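The logic of the Békésy-style tracking procedure can be simulated.  In this sketch (my own toy simulation; the threshold, step size, and noise values are all assumed, not empirical), the alternation rate rises while the bird pecks key A (“two dots”) and falls when it pecks key B (“one moving dot”), so the rate oscillates around the fusion threshold and its average estimates that threshold:

```python
import random

random.seed(0)
TRUE_THRESHOLD = 12.0  # alternations/sec at which the display fuses (assumed)

def peck(rate):
    """Simulated pigeon: key 'B' (one moving dot) above the fusion
    threshold, key 'A' (two separate dots) below, with trial-to-trial
    perceptual noise."""
    return "B" if rate + random.gauss(0, 0.5) > TRUE_THRESHOLD else "A"

rate, step, track = 5.0, 0.5, []
for _ in range(200):
    choice = peck(rate)
    # Bekesy tracking rule: raise the rate while the bird still sees two
    # dots, lower it once the display has fused.
    rate += step if choice == "A" else -step
    track.append(rate)

# After the staircase converges, the rate hovers around the threshold.
estimate = sum(track[100:]) / len(track[100:])
print(round(estimate, 1))  # close to TRUE_THRESHOLD
```

The same rule that tracked Blough’s dark-adaptation curve here tracks the alternation rate at which “two dots” becomes “one moving dot” – no verbal report required.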

The point is that consciousness can perfectly well be studied using methods that require no verbal report – merely a response signaling that sequences are perceived as similar to one thing or the other.  The attempt to interpret phenomena like color-phi in terms of  ‘consciousness’ usually leads to a muddle.  It’s a scientific hindrance rather than a help.

This story of the color-phi problem parallels exactly the history of research on another perceptual phenomenon: color vision.  An early discovery was that people sometimes see “red” (for example) when no spectrally red light is present – just as people sometimes see movement when nothing is actually moving (in movies, for example).  Later research expanded on this theme through the study of after-effects, color-contrast and Land effects,[14] eventually showing a wide range of disparities between the color seen and the wavelengths present.  The solution to the problem was the discovery of processing mechanisms that define the necessary and sufficient physical-stimulus conditions for a person to report “green,” “red” or any other color.  “Consciousness” forms no part of this account either.

My analysis of the color-phi effect sheds some light on a pseudo-issue in cognitive psychology and artificial intelligence: the so-called binding problem.  A philosopher describes it this way:

I see the yellow tennis ball.  I see your face and hear what you say.  I see and smell the bouquet of roses.  The binding problem arises by paying attention to how these coherent perceptions arise.  There are specialized sets of neurons that detect different aspects of objects in the visual field.  The color and motion of the ball are detected by different sets of neurons in different areas of the visual cortex…Binding seeing and hearing, or seeing and smelling, is even more complex…The problem is how all this individually processed information can give rise to a unified percept.[15]

What does “unified perception” amount to?  We report a unified percept “cat.”  When confronted with a cat we can say “cat,” can identify different aspects of the cat, can compare this cat to others like it, and so on.  The cognitive assumption is that this requires some sort of unity in the brain: “The answer would be simple if there were a place where all the outputs of all the processors involved delivered their computations at the same time, a faculty of consciousness, as it were.  But…there is no such place.”

There is no such place…Yes, that is correct.  But why on earth should there be?  From a behavioristic point of view, ‘binding’ is a pseudo-problem.  We report continuous movement in the color-phi effect, but nothing moves in the brain.  All we have is a functional equivalence between the brain state produced by a moving dot and the brain state produced by two flashing dots.  The same is surely true for the cat percept.  There is a state (probably a large set of states) that the subject reports as “cat.”   This state can be invoked by the sight of a cat, a sketch of a cat, the sound of a cat, and so on.  We have no idea about the mechanism by which this comes about – perceiving a cat is more complex than perceiving movement of a dot – but there is no difficulty in principle in understanding what is happening.

Why does there seem to be a problem?  Because of a conflation of Domain 1 with Domain 2.  The percept “cat” is real and unified in Domain 1, but that has no bearing on Domain 2, the underlying physiology.  Recall Kinsbourne and Dennett’s erroneous claim that “Conscious experiences…are clockable and locatable…”  No, they’re not.  Reports, or electrochemical brain events, are “clockable…, etc.” but qualia are not.  The “cat” percept is just one of a very large number of brain states.  It is the one evoked by the sight of a cat, the word “cat,” the sight of a dead mouse on the doorstep, etc.  But the phenomenology of that state has no relevance to its physical nature[16] – any more than there needs to be any simple relation between the contents of a book and its Dewey decimal number.  The point is that the brain is (among other things) a classifier.  It classifies the color-phi stimulus and a moving-color-changing stimulus in the same way – they have the same Dewey decimal number.  That’s what it means to say that we “see” two physically different things as the same.  That’s all it means.
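The classifier point can be made in miniature.  In this sketch (my own illustration; the labels are arbitrary stand-ins for brain states), many physically different inputs map to one label, and “seeing two things as the same” means nothing more than that they map to the same label:

```python
# A toy classifier: each stimulus is assigned its "Dewey decimal number",
# i.e., the brain state (here just a label) that it evokes.
percept = {
    "sight of a cat":        "cat",
    "sketch of a cat":       "cat",
    "sound of a meow":       "cat",
    "color-phi flashes":     "moving spot",
    "actually moving spot":  "moving spot",
}

def same_percept(a, b):
    """Two stimuli 'look the same' iff they evoke the same state."""
    return percept[a] == percept[b]

print(same_percept("color-phi flashes", "actually moving spot"))  # True
print(same_percept("sight of a cat", "color-phi flashes"))        # False
```

Nothing in the mapping is unified or bound; the only “unity” is that several keys share a value.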

The only objective “unity” that corresponds to the phenomenal unity of the percept is that a variety of very different physical stimuli can all yield the percept “cat.”   People show a sophisticated kind of stimulus generalization.  There are simple mechanical equivalents for this.  Associative memories  can “store” a number of patterns and recreate them from partial inputs.  Physicists describe their function in this way:

Associative memory is the “fruit fly” or “Bohr atom” of this field.  It illustrates in about the simplest possible manner the way that collective computation can work.  The basic problem is this: Store a set of p patterns ξμ in such a way that when presented with a new pattern ζ, the network responds by producing whichever one of the stored patterns most closely resembles ζ.[17]

Given parts of one of the patterns as input, the network responds with the complete pattern (Plate 15.1).  So, given a picture of a cat, a poem about a cat, cat’s whiskers, or a meeow, the result is the percept “cat.”  But neither cat nor any other percept exists in recognizable form in the network (Domain 2).  Nothing is “bound.”  Nothing needs to be.

Don’t be misled by the fact that in this kind of network, the output looks like the input.  An even simpler network will just activate a particular node when all or part of the target stimulus is presented.  The basic idea is the same.  The network has N stable states; when a stimulus is presented to it, it will go to the state whose prototype, the stored stimulus, is most similar to the presented stimulus.
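The associative-memory idea can be made concrete with a standard textbook Hopfield-style network (my own minimal sketch, not code from Hertz et al.): patterns are stored in a Hebbian weight matrix, and a partial input settles into the nearest stored pattern.

```python
def train(patterns):
    """Hebbian storage: w[i][j] = average over patterns of x_i * x_j."""
    n = len(patterns[0])
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j] / len(patterns)
    return w

def recall(w, state, steps=5):
    """Repeatedly update each unit toward its weighted input until the
    network settles into a stored pattern."""
    state = list(state)
    n = len(state)
    for _ in range(steps):
        for i in range(n):
            h = sum(w[i][j] * state[j] for j in range(n))
            state[i] = 1 if h >= 0 else -1
    return state

# Two stored "percepts" coded as +/-1 vectors (arbitrary toy patterns).
cat = [1, 1, 1, 1, -1, -1, -1, -1]
dog = [1, -1, 1, -1, 1, -1, 1, -1]
w = train([cat, dog])

# Only "part of the cat" as input (0 = missing); the full pattern returns.
partial_cat = [1, 1, 1, 1, 0, 0, 0, 0]
print(recall(w, partial_cat) == cat)  # True
```

Nowhere in the weight matrix does “cat” exist in recognizable form; the pattern is recreated on demand from any sufficiently cat-like fragment, which is the sense in which nothing needs to be “bound.”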


[1] Dennett, D. & Kinsbourne, M. (1992) Time and the observer: The where and when of consciousness in the brain. Behavioral and Brain Sciences, 15, 183-247. P. 186

[2] Op. cit., p. 194

[3] Hayek, F. A., (1979) The counterrevolution of science: studies in the abuse of reason. Indianapolis: Liberty Press (reprint of the 1952 edition), p. 37.  Friedrich Hayek (1899-1992) won the Economics Nobel in 1974 but also made important contributions to sensory psychology.

[4] Tolman, E. C. (1922) A new formula for behaviorism. Psychological Review, 29, 44-53. P. 44.

[5] Op. cit., p. 187

[6] V. S. Ramachandran  A Brief Tour of Human Consciousness, PI Press, New York, 2004, p. 96.

[7] Op. cit., p. 235

[8] “What exactly do you mean by ‘registered’?” a critic might reasonably ask.  “Register” just refers to the properties of the internal state.  Another way to pose the problem is to say that we need a model such that the inputs RRRRGGGG and R….G yield the same internal state.

[9] Roger Shepard has discussed the role of evolution in the perceptual process:  Shepard, R. N. (1987) Evolution of a mesh between principles of the mind and regularities of the world.   In J. Dupré (Ed.) The latest on the best: essays on evolution and optimality (Pp. 251-275). Cambridge, MA: Bradford/MIT Press.

[10] But how can you be actually seeing what isn’t there (as opposed to simply reporting what is)?  Answer: that’s what you always do.  Sometimes what you see corresponds more or less closely to physical reality, sometimes it’s an approximation, sometimes it’s wholly imaginary (see, for example, Oliver Sacks’ Hallucinations, Borzoi, 2012).

[11] Hungarian émigré Békésy won the Nobel Prize in Physiology or Medicine in 1961. http://www.nobelprize.org/nobel_prizes/medicine/laureates/1961/bekesy-lecture.html

[12] Blough, D. S. (1955) Methods of tracing dark adaptation in the pigeon. Science, 121, 703-704, and http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1403870/?page=1

[13] Obviously with a digital display this ‘continuous’ movement will be a succession of separate images.  But if the refresh rate is high enough and the spatial resolution fine enough, the movement will appear continuous to any man or animal.

[14] http://www.physics.umd.edu/lecdem/services/demos/demoso3/o3-03.htm

[15] Flanagan, O. (1992) Consciousness reconsidered. Cambridge, MA: MIT/Bradford.

[16] Britain’s Princess Anne, an avid horsewoman, fell at a jump a few years ago and suffered a concussion.  She reported seeing herself from above lying on the ground.  If brain states are “clockable and locatable,” just how high above the ground was Princess Anne?

[17] Hertz, J. A., Krogh, A, & Palmer, R. G. (1989) Neural computation. Reading, MA: Addison-Wesley.  P. 11.