Friday, 6 January 2017

Yearning for Consciousness--Part V



Consciousness revisited [Excerpts]


[Recapitulation: Every moderately complex organism endowed with sensory systems will undergo some form of experience when affected by stimuli. In that sense, following our own case, we may safely infer that those organisms attain a state of phenomenal consciousness, as the ‘feel’ varies from stimulus to stimulus. The facts that animals respond to discriminating stimuli, wriggle in pain, and enjoy tickles are evidence enough for the ascription of phenomenal consciousness. The entire weight of the inference comes from our own case. What is that case? In the human case, there is the additional feature of what may be called second-order judgement (Chalmers), such as ‘I am having a red sensation’ or simply ‘I am having a nice experience.’ These judgements carry the concepts of sensation and experience as constituents. What do they indicate? They cannot indicate brain states except in unusual circumstances; they may indicate a local ‘feel’ (pain) in the case of injury, or the having of a colour sensation in the appropriate cases. But is there a feel of a tree when there is the visual experience of a tree, once we abstract away from information and access? The answer is uncertain, yet we cannot fail to have the concept of experience even in those cases. In some cases, then, the referent of the term feel is fictional. Why then do we have the concept of phenomenal consciousness in the first place?]

[…]

Following the lead from the concept of schizophrenia as our paradigmatic example of a hybrid concept, we might expect then that parts of the concept may in fact yield to science while the more stable and regulatively significant parts refuse to do so. To these latter parts then, the notion of ascription will truly apply. In other words, the point of interest is to see whether the scientifically elusive parts in fact converge on what we take to be the real significance of the concept. To examine the possibility just suggested, let me dwell briefly on the idea that the problem of consciousness is the hardest problem for a science of the mind.

To my knowledge, it used to be said at the early stages of cognitive science that while consciousness is the hardest problem, meaning is the second hardest problem (Pylyshyn 1984, Chapter 1). It is interesting to note that, not long before that, these two were taken to be roughly the same problem. Recall, for example, that Rudolf Carnap (1932) took erlebs, or units of experience in a time-slice, to furnish the basic units of meaning of, say, colour-words; the rest, for Carnap, was set theory. The general idea of experiences supplying the ‘basis’ of meaning continued through the writings of Quine (1960) and Follesdal (1975), among others; for example, Quine’s theory of language was based on the idea of stimulus-meaning […]

The only reason I made this brief survey of the literature is to highlight that the idea of the problem of meaning as the second hardest problem is not immediately obvious. Prima facie, it makes a great deal of sense to say that the meaning of cow is essentially linked to experiences of cows; that is where meaning must ultimately come from. Why, then, is the problem of meaning taken to be the second hardest problem, ranking below the problem of consciousness? The answer, in my opinion, may be found in thinking of the concept of meaning as a hybrid concept. The notion of meaning entertained in the Carnap-Follesdal-Nyaya axis is essentially an ascriptive concept, which in turn is linked to the ascriptive notion of phenomenal consciousness. I do not think anything decisive has happened in the meantime to let us locate this concept of meaning within science, notwithstanding tall claims from connectionist circles. If we bravely ignore Davidson’s philosophical objections to the very idea, the problem stays where it is. In a moment we will see why.

Yet, true to the hybrid nature of the concept of meaning, some aspects of the original thick concept have indeed been brought under control. As quick examples, one may cite the work of Noam Chomsky and Donald Davidson[…] What is of interest is that neither of these approaches has anything to do with what we might label phenomenal meaning; despite claims to the contrary, each concerns fairly remote, technical aspects of meaning far removed from our ordinary ascriptions of the concept of meaning. Our ordinary meaning of cow does concern phenomenal awareness of cows. In that respect, the scenario parallels the history of the concept of schizophrenia. Thus, as far as the notion of phenomenal meaning is concerned, the problem of meaning is exactly as hard as the problem of consciousness[…]

Phenomenal consciousness, along with phenomenal meaning, thus poses an apparently unsolvable problem. In this the problem is not unlike the problem of God. For something to be an account at all, we want the account to be empirically significant; yet, by definition, an account of God cannot be empirically significant, since God cannot both be empirically significant and be the cause of such empirical significance. In other words, we recognize something as empirically significant because God caused it to be so; that is the concept of God we want. Our needs, then, while raising the expectation of some account, prevent us from supplying one. As noted, the concept of God may in fact be given up precisely because of this impasse.

Not so with the concept of phenomenal consciousness. Following Strawson, even if we apply P-predicates such as is smiling, is in love, is in pain, is thinking hard, and the like, to entities by first identifying them via M-predicates, we do not apply these P-predicates to all entities we identify via M-predicates; trees and computers, for example, are identified with M-predicates alone. Much of the groundwork for the functioning of language and society requires that we be able to form a conception of persons, and of person-like creatures, after first having formed a similar conception of ourselves. The concept of consciousness plays a singular role in anchoring this conception of ourselves. No wonder, then, that the concept shows up the moment we wish to extend the concept of a person to fetuses, flora, and fauna. Almost instinctively, we ask: is it conscious?

In a quick generalization, therefore, it follows that we need the concept in order to form some conception of an ethical order consisting of fellow beings, just as we need the concept of beauty to form a conception of an aesthetic order. These needs, it should now be obvious, are essentially normative, with no demand for descriptive truth; hence, there is no demand for a theory. Just as we may sometimes dispense with the concept of God, it is quite possible that we ought to be able to dispense with the concept of consciousness as well; every ascriptive concept must leave room for that, provided we are able to retain the notion of an ethical order otherwise. At the moment we have no idea how to do so; so we retain the concept of consciousness.

Concluded
