Problem of the explanatory gap [Excerpts only]
The general difficulty with the problem of consciousness—unlike the very similar problem of God, as we will see—is that its solution seems, at once, to be urgent and elusive. In other words, it seems that we cannot do without the concept of consciousness while, despite voluminous discussion especially in recent years, the concept continues to resist elucidation. It is this apparently ‘unsolvable’ aspect of the problem that interests me in this paper…
To bring out the ubiquitous character of consciousness, David Chalmers (1996, xii) introduces his influential work on the conscious mind as follows:
‘I find myself absorbed in an orange sensation, and something is going on. There is something that needs explaining, even after we have explained the process of discrimination and action: there is the experience.’
Since experiences are both a subjective and a ubiquitous part of our lives, their occurrence calls for explanation. The question arises as to what the form of such an explanation can legitimately be. Since, by the nature of the case, there is no third-person description of the phenomenon itself—that is, I cannot describe what it is for Chalmers to undergo the reported sensation, I can only describe mine—all we can do is look for the unique conditions that are ‘objectively’ satisfied at the locus of the sensation concerned…
So, what is it that we need to show? Here Chalmers says that it is not enough to have explained the ‘process of discrimination and action’; we need to explain the (qualitative, subjective) experience itself. For example, Ned Block (2007) reports interesting work by Nancy Kanwisher and colleagues, who showed that there is a strong correlation between face-experiences and activation in a very specific area of the brain, located at the bottom of the temporal lobe in the right hemisphere, called the fusiform face area. Block views the fusiform face area as an informationally encapsulated Fodorian module (Fodor 1983), a view that, according to Block, raises problems for the reportability of these experiences; I set such problems aside. Suppose there are other ‘modules’ for the experience of fruits, canines, fizzy drinks, etc. The workings of these modules would then count as discriminations of the various stimulus items, if any, and would constitute the neural actions that lead to these discriminating representations…
Still, a description of a module in a particular state does not amount to a description of the resulting experience—what it feels like—of faces, fruits, fizzy drinks, and the like. That is the problem raised by Chalmers: there is a crucial residue… In a later paper, Block (2009) restates Huxley’s problem by observing that ‘we have no idea why the neural basis of an experience is the neural basis of that experience rather than another experience or no experience at all.’ Block calls this the problem of the explanatory gap. Let us assume that the state of the art is such that we have no clue as yet about the real ‘hard nut’ of the problem of the explanatory gap: the problem of the residue, that something is going on…
But suppose that, contrary to the state of the art, some detailed account of the activation of the brain does furnish a satisfactory account of what it feels like to experience the computer screen. Will that count as an account of phenomenal consciousness, even if we have given up any form of dualism and agree that the brain is the seat of consciousness, if anything is? Is the brain the right object to which the concept of consciousness legitimately applies? Is the brain, at that unique moment, undergoing phenomenal consciousness?
Concerning the old issue of
whether computers think, Noam Chomsky replied that legs don’t walk, people do,
even if people walk with legs; similarly, computers or brains don’t think,
people do. The trouble with neural correlationism is that it simply misses the
grain of explanation that involves the entire organism to which the concept of
consciousness typically applies. The objection is bolstered by the fact that
the common notion of consciousness, which is the only notion currently at
issue, does not refer to states of brains at all. It is not at all implausible to think of people correctly applying the notion of consciousness in a variety of circumstances without any knowledge of the underlying brain; otherwise, most fables would not work. Of course, neuroscientists are free to use technical
terms to denote the relevant unique activation states, if any, of the brain,
which they believe instantiate conscious states of a subject. But that
nomenclature will apply to the subject’s brain, not to the subject herself.
Having noted the crucial distinction, we may as well hold on to biological
correlation as the only physical basis of consciousness.
(To be continued)