Dualism and Rationalist Psychology

Wolfram Hinzen

Talk, Workshop “Prospects for dualism”

Universiteit van Amsterdam
4 February 2005

from http://home.medewerker.uva.nl/w.hinzen/bestanden/DualandRatPsychPaper.pdf, archived at www.newdualism.org

1. The rational basis of contemporary materialism

Whatever the rational (as opposed to sociological) foundation of contemporary materialism is, it does not lie in physics as such. First, it is a good question whether physics has, ever since Newton, supported a mechanical model of explanation. Second, if phenomenal states were posited as irreducible primitives which are connected to non-phenomenal states by new psycho-physical laws, the end result would still be a physical theory. Third, if we agreed that human choices on how to act, described in intentional vocabulary, enter into the very causal or explanatory structure of modern physics, then modern physics would entail dualism, as some physicists have indeed argued (cf. Stapp 2004). Not knowing enough physics, I will leave possible arguments for dualism from within physics aside at this point.

So contemporary materialism is a metaphysical view. As a metaphysical view, though, it has proved hard to defend without begging the question against the dualist. So maybe at least an argumentative parity between materialism and dualism should be conceded: both are begging the question. But of course we would like to go beyond mere parity.

I’d like to start by noting that if babies were to write philosophy of mind textbooks, dualism would quite possibly be an unquestioned assumption. Experimental studies have suggested that small infants have expectations about animate objects that they do not have about inanimate objects.1 Somewhat more strikingly, 5-month-old infants actually seem to have some trouble understanding that physical constraints on inanimate objects such as continuity (objects move on connected paths) also apply to humans, hence that humans are material things too (Kuhlmeier, Bloom and Wynn, submitted). It is a later developmental accomplishment to realize that the laws of object physics apply to humans. But we never quite stop being intuitive dualists in this sense either. Thus we do crucially distinguish mental vocabulary from physical vocabulary, and the concepts that these vocabularies encode seem as distinct to us as any concepts can be. Our ethical practice depends on this distinction.

So materialism, if true, is likely true a posteriori. Dualism is at least a conceptual truth, which needs to be overturned if materialism is to come out right. This would then perhaps be similar to the way in which Newton, like most of his contemporary readers, found it simply absurd and incomprehensible that bodies would move other bodies without touching them. But he believed it nonetheless.

This very parting of ways between conceptual and scientific truth was in fact what led Locke to believe that there was no obstacle to conceding that, in principle, matter can think. The idea makes no sense, he agreed. But, he added, we can’t assume that God in his actions is bound by what humans find rational. In modern terms, this is the contention that the truth is all and only a posteriori. Something that’s a priori impossible can thereby still be a truth.

1 Thus they expect humans to act in a self-propelled and goal-driven fashion, principles of interpretation that they do not apply to inanimate objects. E.g., they may observe the physical motion of a human arm grasping repeatedly for one of two toys, and expect it to continue to do so if the toy is displaced. Observing a rod performing the same movements, no such conclusion is drawn (Woodward 1998).

Locke’s point of view, widely shared in scientific circles of the 18th century, is not a materialist position. It is the idea that the Newtonian revolution has made our notion of matter so utterly mysterious and “spiritual” that it became pointless to contrast it with another substance, more “spiritual” than it. The “body” of the Cartesian mind-body dualists had been a machine; now it was shown, by Newton, not to be one. Hence the body exploded, not the ghost, as Ryle would have it. We may call this view, urged in our time by Chomsky, an open or encompassing monism, and contrast it with contemporary physicalism, which excludes the mental from the physical. From the viewpoint of the former monism, wanting to reduce the immaterial to matter is just to be in the grip of a mechanistic model of explanation that science no longer supports.

But this view should nonetheless leave us unsatisfied. The intuitive dichotomy we started with remains exactly the one it was before. There is still this huge conceptual gap, no matter how encompassing the monism is. Everything is left where it was.

2. From conceptual to metaphysical truth

Let us look then at the following Kripke/Chalmers style argument for dualism, which does derive a metaphysical gap from a conceptual one:

(i) It is conceptually possible that there should be a physical and functional duplicate of us that would not be conscious (a ‘Zombie’: take me).

(ii) If a conscious state can exist in the absence of the physical states it is correlated with, then it can’t be identical to the latter, or supervene on them.

A first objection I wish to address is that there may be a physical necessity for conscious states to arise when their putative neural correlates are instantiated. Then Zombies are physically impossible. But physical necessity is not strong enough for materialism to go through. The easiest way to see this is by pointing to the possibility that mental properties might be posited as fundamental and irreducible entities, while being connected to physical properties by means of new psycho-physical laws. It is clear that this would be a dualist position, even though certain material states will, of physical necessity, require that certain mental properties hold.

The second reason why physical necessity is not enough is the very idea of reductive explanation. That idea arguably requires a conceptual entailment between the reduced and the reducing state. To see this, ask why nobody today is a dualist about life. It is because our concept of life is such that it picks out a phenomenon that can be fully accounted for in terms of structural and functional processes that physiology, biochemistry, and evolutionary theory describe. That there is nothing left to explain once structure and function are dealt with follows from our concept of life: if that concept were different, e.g. if it essentially included conscious or experiential aspects, life would not be thought to have been reduced. In short, whenever a reductive analysis is claimed, it depends on what concepts have been used to describe the phenomenon in question, and it is thus perfectly appropriate to ask for a conceptual entailment if a reductive explanation is to succeed.

On the other hand, we might argue that no matter what our concepts may tell us or what a priori intuitions we may have, there may be an essence to certain material states that is there for science to discover. Such an essence, it may then be hoped, will ultimately make it impossible that a Zombie could exist, our conceptual intuitions to the contrary. On this view, we suggest or hypothesize that there is a neurophysiological correlate of conscious states, call it C, such that it is of the essence of C that it is or yields a conscious experience with a specific content, call it E. There is something stronger than physical necessity here that ties the mental to the physical, possibly even an identity between C and E. Still, that necessity or identity is not backed up by our concepts, and it’s not meant to be. Call this view crypto-physicalism, because of its assumption of hidden essential identities.

This objection to dualism rests on the substantive assumption that there are such things as essences which science discovers. It also leaves unexplained why some identity of the form C=E, for some C, should hold, if it does. Again, these identities are not meant to hold in virtue of our concepts, or in virtue of what C and E mean. Basically, this view seems to simply assume these psychophysical identities as epistemically primitive bridging principles (similar to psycho-physical laws), rather than explaining them (see Chalmers and Jackson 2001, sect. 6).

But I’d like to focus on a different objection to crypto-physicalism. A stock example of the relevant a posteriori identities is the discovery that water is H2O, made, crucially, without there being any conceptual entailment between the concepts of water and of H2O. Clearly, these are concepts as different as two concepts can be.

Now, let us agree with most philosophers that this discovered “identity” (“water is H2O”) is not only true but necessary, the reason being that it is hard indeed to see how water, if it is H2O, which it is, could be something else, say XYZ. But it quickly transpires that crypto-physicalism is haunted by the fact that humans have no problem whatsoever grasping the thought that water, as such, might not be H2O. A 5-year old child, say – surely a competent user of the concept water, in one relevant sense of “competent” – would not have any such problem. For a mind such as this, all that ultimately matters for whether something is water or not, is for that mind to conceive of it as water.

Even for us adults, who all believe that water is H2O, it seems simply false to say that being H2O is a necessary condition for applying the concept of water to something. If you can lack the concept H2O, while having the concept of water, how can the former be a necessary condition for having the latter? It is not a sufficient condition for applying the concept of water to something either, for it is not even a sufficient condition for having a concept of water at all. The same remark applies to other descriptive conditions holding of watery things, like being liquid. The reason is the same: the concept of water simply isn’t the concept of liquidity, and while both concepts can figure in one of your beliefs, that does not make one a necessary or sufficient condition for the other.

To cut a much longer story short, I think it is arguable that if push comes to shove, the only thing that’s necessary and sufficient for something’s being water is that it is, well, water. The deepest reason for that may simply be that no simple concept is another concept; not even the concept TWO is the concept THE ONLY EVEN PRIME, despite the fact that, of necessity, one is instantiated if the other is. This is basically to say that descriptivism about concepts is wrong: the meaning of a simple concept like water is not given by or identical with the meaning of an associated description, containing other concepts. Although we can elucidate our concept of water by mentioning other and related concepts, and also various things we believe, the concept is not any of these other concepts, or a combination of them.

The reason this is fatal for crypto-physicalism is that discovering a posteriori necessities will leave an entire dimension of meaning out of account, because that dimension is independent of any scientific discovery. In fact, if I am right, the whole approach leaves the concept of the thing to be reduced itself, and the mental content it determines, simply out of account. The property determined by the concept of water by itself is as such a perfectly good property. If that property is independent of empirical discoveries, prior to them and in fact a condition for them, then what chemistry may or may not discover about earthly substances that we apply the concept to is quite irrelevant to that concept itself. In this sense, it is semantically stable across whatever changes there may be in its correlated chemical composition. The same will hold for a putative identity claiming that joy has been reduced to some neural correlate C. The mental content of joy will simply be presupposed, and while something is said about its neural correlate, nothing of interest is said about it.

Now, there seems something wrong about this conclusion: that it is too strong, or shows too much. Kripke famously noted an asymmetry between cases like water and cases like joy: for something to be joy is simply for it to feel or seem like, well, joy. By contrast, this circularity in the identification of what it is to be joy vanishes for cases like water: something can seem like water to our 5-year old, but not be water, because it’s not H2O. Kripke concludes from this that although it may seem to us that water might not be H2O, this is really false: being H2O, it can’t be something else. Hence the necessity that’s needed for reduction is restored. In the case of joy, by contrast, it isn’t and cannot be: a contingency remains, which is why materialism is mistaken.

But then, what we seem to have shown above is that materialism is mistaken for water too, and that’s apparently simply too much. But that is only apparently so, for what I argued is that the identity “water is H2O” shows us nothing about the mental content of the concept of water, which it presupposes, no matter what it has to say about water, the substance. A reduction of our concept of water, or the property it as such determines, is nowhere here in sight. The reason, I suspect, that this point is overlooked is that Chalmers and most others trust that concepts can be reductively explained, say by letting them supervene on causal or functional relations between syntactically individuated mental representations, on the one hand, and the environment, on the other.

In the absence of that way out, to which I return, it is quite irrelevant that the concept of joy is associated with a “qualitative feel”, and the other one not. To be sure, water is different from joy. This we know by having the concepts that pick out these things. They have different contents, and it is this, mental content as such, that causes the real problem.

Our reconstruction of this route to dualism is thus a rationalist one, and crucially not the quasi-empiricist one that puts all the emphasis on the vague idea of having the “qualitative feel” of something that we contemplate in introspection. Note, on the other hand, that our knowledge of concepts has an immediate experiential impact: a human hearing linguistic sound patterns, say, and having the necessary linguistic concepts, has different experiences of them than a cat does. But here the having of the concept conditions the experience, not vice versa.

Summarizing what we have so far, there may be a supervenience failure of rather grand dimensions, since it now affects our concepts as such, no matter whether they denote mental properties or physical ones. My conclusion has rested on the suggestion that these concepts, which the entire debate presupposes, determine each of them as such a property which can ultimately, for all it seems, only be circularly identified, i.e. by reference to the concept itself that denotes it. It is thus time to reflect more deeply on what concepts are, and whether they can be reductively explained.

3. Rationalist psychology

The major basis of contemporary physicalism has been functionalism, or the computational picture of the mind. As standardly viewed, say by Lycan (2003), this is a metaphysical speculation, designed to ground a materialist world view. If a mental state is of its metaphysical essence a computational state – the state of a machine computing outputs for certain inputs – there is no point where a need arises to accept any other than physical facts.

Functionalism was born when Chomsky argued that the behavioristic project to replace all mentalistic vocabulary by solely physical vocabulary brought about no benefit whatsoever in the explanation of (linguistic) behavior. For Chomsky, this did not mean that mentalistic vocabulary has a role to play in the explanation of behavior, which is what most official functionalists came to hold. In fact, he believed then, as he does now, that the explanation of how humans act, a problem involving the phenomenon of free will, is an intractable problem for science; perhaps the very notion of a cause is misapplied when applied to human behavior.

It is interesting and somewhat ironic that Fodor in his (2000) book should finally come to an abstractly similar conclusion, namely that the computational reduction of mind can’t hold for everyday (inductive and abductive) reasoning about actions. He thereby drops one commitment of his functionalism that never was a part of Chomsky’s original version of it, if I am right.

Positing computational states for Chomsky was also not meant to engage in any speculations about the metaphysical essence of mental states, or how these would embed in the biological world. The suggestion simply was that a particular (indeed high) level of abstraction might be fruitful in the study of human higher cognition. Put differently, the idea was to start, not from the brain (or behaviour), but from what humans know, leaving intuitive preconceptions aside on what we think they can know. Only after we know what we (qua humans) know can the dualism question regarding this system of knowledge be posed. Thus in particular, if we establish that we know things that can’t be computed from other things we know, and the computational picture of mind provides our basis for physicalism, physicalism will fail.2

2 In a recent address, Fodor and Lepore tell us that “you don’t have a chance of deriving your metaphysics from your epistemology, however hard you try” (2004: 90). As far as I can see, the opposite conclusion is true, and the conclusion is entirely straightforward.

4. What we do know

One thing that humans know about language, without knowing that they do, is that (2) is the wrong question to derive from (1):

(1) the woman who is not pregnant is walking her dog

(2) *is the woman who – not pregnant is walking her dog?

We know this because we know that it’s the matrix (main clause) auxiliary that’s to be fronted, not the embedded one, and because we know the difference between an auxiliary and a lexical verb. As for the first fact, movement depends not on linear or numerical order, or other acoustic or perceptual parameters, but on hierarchical order. In other words, what a child needs to mentally represent is the fact that

[NP the woman who is not pregnant]

is a phrase, and that the movement operation that targets the auxiliary for the sake of fronting it will ignore the auxiliary contained in that NP. Children unfailingly grasp this generalization without explicit instruction. Moreover, they systematically fail to make certain mistakes whose occurrence would be evidence that they learn these rules, in the sense of inducing them from the data (Crain and Nakayama 1987).3
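The structure-dependence of the rule can be sketched in code. The toy representation and both rules below are my own illustration, not part of the paper: a rule that identifies the auxiliary by its structural position yields the grammatical question, while a rule operating on linear order fronts the wrong auxiliary and produces exactly the ill-formed (2).

```python
# Toy representation of "the woman who is not pregnant is walking her dog".
# The relative clause inside the subject NP contains its own auxiliary,
# which the fronting operation must ignore.
sentence = {
    "subject": ["the", "woman", {"rel_clause": ["who", "is", "not", "pregnant"]}],
    "aux": "is",            # matrix auxiliary: the one that fronts
    "predicate": ["walking", "her", "dog"],
}

def flatten(phrase):
    """Linearize a phrase, descending into embedded clauses."""
    words = []
    for item in phrase:
        if isinstance(item, dict):
            words.extend(flatten(item["rel_clause"]))
        else:
            words.append(item)
    return words

def form_question(s):
    """Structural rule: front the auxiliary identified by its position in the tree."""
    return [s["aux"]] + flatten(s["subject"]) + flatten(s["predicate"])

def front_first_aux(s):
    """A (wrong) linear rule: front the first 'is' encountered in the string."""
    words = flatten(s["subject"]) + [s["aux"]] + flatten(s["predicate"])
    words.remove("is")       # removes the linearly first "is" -- the embedded one
    return ["is"] + words

print(" ".join(form_question(sentence)))
# -> is the woman who is not pregnant walking her dog
print(" ".join(front_first_aux(sentence)))
# -> is the woman who not pregnant is walking her dog   (the ungrammatical (2))
```

The linear rule is the one children never try, despite its greater simplicity as a string operation.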

Why are there no mistakes of this kind? One part of the answer is that the idea of a mistake requires a domain of information that’s represented, and a representation that can then match that domain or not. This is missing in the case at hand. There is no distinction between the system of knowledge in question and what it is a knowledge of. There is simply no representation of anything that’s outside that system going on. Language is what it is because of the mental structures that the child’s mind contains. Hence there is no question of these structures being right or wrong.

3 As for the difference between auxiliaries and lexical verbs, Stromswold (2000) found that none of the 12 children she investigated made errors confusing auxiliaries with lexical verbs. This was despite the fact that, as she also shows, the auxiliary system is of a truly mind-boggling complexity and the distinction between an auxiliary and a lexical verb is anything but obvious for a creature lacking those conceptual distinctions.

How then did these structures get into the child’s mind? The strange thing is they engender necessities, and it is not clear how either physical or experiential processes yield those. As for the latter, the argument from the poverty of the stimulus has recently once more been heavily scrutinized (Ling Rev 2002), but the stimulus looks as poor as ever. As for the former, suppose, as we might well do in the light of recent evidence (see Grodzinsky 2003 for a review), that laws of syntactic movement are localizable in brain matter, namely Broca’s area, where not much else appears to be localized. Then if laws are indeed what we are talking about, how does that physical structure ever give rise to these laws, if indeed Hume is right and laws never supervene on the physical? If we have decided to let putative laws supervene on our psychological activity, then what do we say if we find that our psychological activity itself is lawful? Then the mistake will not be psychologism, but a psychologistic picture of the mind.

An empiricist instinct may here tempt us to deny apparent laws of the mind, just as Hume denied that our concept of a CAUSE or a SELF makes the sense we think it does. But here it pays to confront our explanatory instincts with what we do seem to know. Consider that we clearly do grasp the rule of adding “plus 2”. All of us have, when asked what number that operation yields when applied to 1000, the intuition: it is 1002. In fact we intuit that this is the necessary consequence of this rule. There is no issue of “approximating” the right answer here: there simply is only one answer. Once we grasp the rule, there is now in fact an infinity of answers we know to give, just as there is, in language, an infinity of expressions for us to understand. Once we grasp the rule generating the expression “a rose is a rose, a lily is a lily, a tulip is a tulip, etc.”, we can continue it even for instances like “a blicket is a blicket”. Why do we know how to continue, despite having never heard of blickets? Because we grasp the rule that any X is an X, an algebraic rule that posits a non-statistical relation between X and itself. You apply the rule to blickets by understanding that a blicket is an instance of X, and not because you know what a blicket is, or have encountered any.
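The contrast between a rule stated over a variable and generalization from stored experience can be made concrete. The sketch below is my own illustration (the names are hypothetical): the algebraic rule applies to any token whatsoever, including the never-encountered “blicket”, whereas a learner bound to its experience has nothing to say about novel tokens.

```python
def identity_rule(x):
    """Algebraic rule 'any X is an X': defined for every token whatsoever."""
    return f"a {x} is a {x}"

seen = {"rose", "lily", "tulip"}

def memorizer(x):
    """Experience-bound learner: can only continue for tokens it has seen."""
    if x in seen:
        return f"a {x} is a {x}"
    return None  # no stored experience, no answer

print(identity_rule("blicket"))   # -> a blicket is a blicket  (no experience needed)
print(memorizer("blicket"))       # -> None  (experience alone does not suffice)

# Likewise the "plus 2" rule determines a unique answer for every input:
plus_two = lambda n: n + 2
print(plus_two(1000))             # -> 1002, of necessity; "17" is never an option
```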

Experience is thus irrelevant, and it is not surprising that our most advanced multi-layer perceptrons find this particular relationship very hard – predictably, for they learn from experience (Marcus 2001). Unless they simply implement classical cognitive architectures involving computations over typed variables, these models will thus not have a plausible cognitive interpretation. The problem seems principled. No matter what experience you have had in the arithmetical case, it will necessarily remain consistent with the answer “17”, a conclusion that experience could never in principle rule out. But we do rule it out. Furthermore, there is again no real possibility of a mistake, and for the same reason as in the language case, except for conceding that we might be temporarily out of step with our internalized system of knowledge. Attributing this knowledge to some fancy of the brain will also not help. For it is not neurophysiologically but mathematically impossible to conclude that 17 is the right answer. Maybe it becomes a neurophysiological impossibility once we code the rule in question into the brain. But in that case the brain as such is again not responsible for the rule, which it is merely implementing in the way that an arbitrary machine might do it as well.

Maybe we have arrived at a reason then that we are looking simply in the wrong place if we are looking in the brain for what we know, or for the contents of our minds. This in itself is in a way the very official doctrine of functionalism too, so it may be unsurprising. But now we have deprived functionalism of its traditionally accompanying metaphysics: for it is simply not clear what physical or experiential origins of specific mental contents there could possibly be. But what about the computational component of the theory?

5. Particulate systems

Abler (1989) has suggested a general distinction between “particulate” and “blending” systems in nature. A particulate system has simple as well as complex elements. A simple element has no constituents. A complex element has, ultimately, simple elements as constituents. Chemistry is an instance of this. If you conjoin H and O, what you get is a compound with emergent properties that contains H and O. They are preserved in the compound as constituents rather than blending into one another. They are also recoverable from it, by undoing the compositional process that builds the compound. Language is a particulate system, if, in the same way, when we compose a V and an N, the V-N compound that we get is not some kind of mixture of both, but is a new unit with emergent properties containing the V and the N as constituents. If we compose the same V with N’, a different noun, the resulting compounds will thus contain the V as an identical element.
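The defining features of a particulate system – constituents preserved intact in the compound, and recoverable by undoing composition – can be sketched as follows. The data structures here are my own hypothetical illustration, not a claim about how the mind actually implements composition.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Simple:
    """A simple element with no constituents, e.g. a V or an N."""
    label: str

@dataclass(frozen=True)
class Compound:
    """A complex element: a new unit with its parts preserved inside it."""
    head: object
    complement: object

    def constituents(self):
        # The parts are recoverable by undoing the compositional process.
        return (self.head, self.complement)

kill, bill, mary = Simple("kill"), Simple("Bill"), Simple("Mary")

vp1 = Compound(kill, bill)   # compose V with N
vp2 = Compound(kill, mary)   # compose the same V with a different N'

# The same V occurs as an identical element in both compounds,
# rather than blending into its co-constituent:
assert vp1.constituents()[0] is vp2.constituents()[0]

# The compound is a new unit, not either of its parts:
assert vp1 != kill and vp1 != bill
```

A blending system, by contrast, would be one in which composing two elements yields something like an average, from which neither original is recoverable.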

How do the emergent properties of the V-N compound arise? As if it were alchemy, you first think the concept “kill”, say, and then the concept “Bill”, say, and then, instead of having two concepts thought separately, you suddenly have something entirely new: a complex concept, which does not mean what “kill” means or what “Bill” means, nor even means the conjunction or the thinking-together of both “kill” and “Bill”. Instead a new content appears in front of your mind: an event is represented of which Bill is an inalienable participant, playing the role of the Theme.

Now I think we have no idea how this happens. However it happens, I’d like to suggest four things. The emergent concept is, firstly, a new concept. It’s not one of the two old ones, or a mixture of them; most importantly, it can’t arise from merely associating the one original concept with the other, or by simply first thinking the former, and then immediately after thinking the second: in none of these ways would anything new arise. Secondly, although something is added to the content of the two original contents combined, this is no further lexical content, but a purely structural one: lexically, the new concept depends entirely on the contents of “kill” and “Bill” alone. Thirdly, the new concept is known a priori, i.e. on the basis of knowing the simple concepts and that mysterious rule of composition alone: you did not have to look at the world to derive this knowledge, and no amount of looking at the world could refute it either. Finally, new such concepts arising by combining others can’t be thought of as yet other simple concepts.

The reason for this is that there is a finite limit to how many words we can understand. But there is no known finite limit to how many complex expressions we can understand. As Fodor notes, “the family of concepts: MISSILE, ANTI-MISSILE, ANTI-ANTI-MISSILE MISSILE … is able to bring indefinitely many different things before the mind, including: missiles; missiles for shooting down missiles; missiles for shooting down missiles for shooting down missiles…and so on”. This can go on literally forever, on the basis of two simple concepts ANTI and MISSILE alone, and all of these contents that will appear in front of your mind will be perfectly systematically related to one another, rather than each being entirely new entities in this universe. They are non-statistically related in the very way that 1000 was to 1002 in the rule we discussed above.
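Fodor’s missile family makes a natural recursive sketch. The two functions below are my own illustration: from the two primitives alone, recursion generates an unbounded family of forms, and each form’s content is derived compositionally, so that all the members come out systematically related rather than each being an unrelated novelty.

```python
def concept(n):
    """The n-th member of the family: n applications of ANTI to MISSILE."""
    if n == 0:
        return "missile"
    return f"anti-{concept(n - 1)} missile"

def gloss(n):
    """Content derived compositionally: each ANTI adds 'for shooting down ...'."""
    if n == 0:
        return "missiles"
    return f"missiles for shooting down {gloss(n - 1)}"

print(concept(1))   # -> anti-missile missile
print(gloss(2))     # -> missiles for shooting down missiles for shooting down missiles
```

Nothing stops `n` from growing without limit, which is the point: two simple concepts and one mode of composition bring indefinitely many contents before the mind.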

It is crucial here that we are not only talking about an infinite syntactic combinatorics. We are talking about a generative engine pairing determinate sounds with determinate representational contents or meanings. That is only possible under the assumption of compositionality, the thesis that the meaning of a complex concept literally reduces to its simple concepts and whatever the modes of composition are. You know the meaning of the expression I use when I tell you about “the Moroccan barber that shaved my daughter on the 14th of last August”. The challenge is to explain why you do know this, in the exact rather than approximate way you do, despite not having learned or heard it. In this sense it is knowledge a priori, and compositionality explains why you have it.4

4 Now, all of this infinite knowledge of meaning is analytic rather than synthetic: it tells you nothing about the world. It just allows you to describe the world. But is there knowledge about the world prior to encountering it too? If standard poverty of the stimulus data hold water, there of course is plenty. That syntactic movement in question formation is structure-sensitive is an example of synthetic knowledge in this sense: since human language need not have had that feature, we are talking about a contingent aspect of language, known a priori.

Despite widespread claims that non-classical connectionist architectures have met the challenge of accounting for linguistic productivity in the absence of compositionality (or some approximation of it), I agree with Hadley (2004) that the challenge is unmet.5

5 Connectionists such as Elman (1995) have viewed language as a blending system in this sense, while linguistic theory virtually uniformly views it as a particulate system. But again we should not ask, what can we know since we are a learning neural net, but we should ask: given what we know, what architecture must a neural net have for it to have a cognitive interpretation? Not asking this question, Elman arrives at the result that the recursive structure-building rules that classical architectures assume can be “approximated”. But such approximations have as such no claim for cognitive significance. For a human mind it simply does not matter that

Now, the functionalist’s own computer model of the mind, in order for it to ground materialism, depended on reducing mental entities or states to, broadly speaking, structure and function. But functions cannot be computed if the internal structures are not there on which they depend. In these structures, however, one thing is absolutely not computable, as long as we talk about a particulate system: the ultimate primitives that a complex concept in the sense above contains. It seems there must be a computational account of the emergence of a complex meaning in the above sense; if so, we have shown that at least complex concepts or meanings supervene on something, namely simple concepts and syntax. But that says exactly nothing about a computational reduction of simple concepts or meanings, simply because these won’t be computable by definition. In this sense, primitives, or simple concepts, fall outside the reductionist program.

6. Knowing your own concepts

The reason that Fodor, as a functionalist, did not conclude this is that he believed (though not always, see Fodor and Lepore 2002, p. 161) that there is a functionalist reduction even of concepts. But such a reductionist account appears to be simply not forthcoming for simple concepts. Say we give up, with Fodor, on the idea that simple concepts supervene on computations over their mental representations. Fodor’s reasons aside, the idea does not work for many reasons such as Goedel’s or Penrose’s: no purely formal axiomatic characterization of statements describing our concept of a natural number will possibly capture what it means for us. Our intuition of what that concept means is an irreducible feature of our mathematical cognition, which cannot be captured through any formal manipulations over symbols.

Then one idea that remains is to reduce simple concepts to their mental representations and a relation of reference. This idea is a behaviorist heritage which Fodor at least in (1990) unashamedly and explicitly endorsed: the idea is that a symbol token that I use means what it does because of how it is controlled by occurrences of whatever external property it denotes. For all non-sensory concepts, this proposal immediately inherits all the emptiness that its behaviorist antecedent had. In particular, if I am a creature lacking the concepts of a Verb, or of justice, then standing in causal relations to things that are verbs or justice, whatever relations these may be, won’t make me have them.

Fodor (1998) tries to explicate the same idea by saying that for a mind to have a concept C is for it to “resonate” to the property that C denotes. But if we ask when precisely it is that we so resonate, the only answer, it seems, is: when we have the concept in question.

This argument does not need to be pursued much further, though, since Fodor (2004), in a remarkable paper called “A brief refutation of the 20th century”, unwittingly appears to refute himself, or his earlier behaviorist incarnations. For the conclusion we are offered there, correct in my view, is that there simply is no non-circular account of concept possession, of what “knowing some concept C” is, at all.

Having the concept of water, for example, does not reduce to a sorting ability, because the sorting ability depends on having the concept in question. To see this, suppose the contrary, that having the concept of water is being able to sort things according to whether they fall under this concept or not. Then ask when a sorting manifests the having of the concept of water. It clearly does not, if you have sorted according to the concept H2O. Note that even a sorting according to the concept TWO need not manifest a sorting according to the concept THE ONLY EVEN PRIME. So when is a sorting a sorting according to the concept C? Only if it is a sorting according to C. Having a particular concept can only be circularly specified via the idea of sorting, by re-using the concept. The conclusion we are then envisaging is that no concept supervenes on any concept other than itself.

In the light of this startling (and obviously dualist) conclusion, it is reassuring to see that some of the most prominent accounts of lexical acquisition often do not even touch upon the problem of how we learn concepts. If we look at Bloom (2000), or Gleitman et al. (2005), say, both works summarizing decades of research on lexical acquisition, what we find them concerned with is the “mapping problem”, the problem of mapping unknown sounds to known concepts. For accounts that do claim to have solved the problem of the origin of concepts, such as Carey’s “bootstrapping” account of the origin of our number concept, one can straightforwardly show that they presuppose the concept in question (similar remarks hold for the “theory-theory” of concepts proposed by Gopnik or Perner, as explicitly argued in Leslie 2003).

7. Knowing concepts

With this detour to the entirely opaque origin of concepts, I close by returning to my initial theme: whether dualism, as a conceptual or a priori truth, can be overturned by empirical inquiry, say through the finding of a neural correlate. The short answer should now be: not if we cannot make our simple concepts mean something other than what they do mean. Now, can we? And can we be mistaken about what a concept means, say a mental state concept such as joy? Having said what we did about simple concepts, it is perfectly obvious that we may be mistaken about what a word in a particular language means. Thus you might utter “shoes”, meaning table. In such a circumstance, you are of course not confused about what a table is. But it should be entirely unobvious how it is possible to be confused or mistaken about what a (simple) concept means.

If it were possible, then there would be a concept, C, that figured in your thinking, but which you mistook for another concept. So you would think TABLE, say, but take it to mean SHOE. But it seems that either you use a concept, and then you use it, or you use another concept, and then you use the other one. If there were such a thing as being mistaken about a concept (or being right about it), you would have to have some “epistemic distance” to it: you would have to be able to look at it, so to speak, and ask: “What concept are you, really? Are you this concept, or another concept?” But the idea that we contemplate concepts in this fashion, and that there is this epistemic distance, appears simply confused.

The possibility of truth, hence of a mistake, arises neither with simple nor with complex concepts, which concern the contents of our own mind. It arises only where we go beyond the head, and for concepts we do not. Whether sentences are true can be learned, assuming their meaning is understood, and you can be right or wrong about whether they are true. You can be wrong about which word expresses which concept in a community, too, and perhaps you can be wrong when looking at an object and thinking of it as a chair. But it is simply not clear whether the question whether you are right about a concept ever arises. The issue can be whether THIS IS A CHAIR, a judgement expressed by a sentence, not what CHAIR, a concept expressed by a word, means. Perhaps there is no such thing as being right or wrong about a concept, and perhaps that is why you cannot learn a concept, and why developmental psychologists cannot explain to us how we come up with them.

Least of all, it seems, could you acquire the concept of consciously experiencing something while in a state in which you lacked that concept, hence were a Zombie. Lacking that concept, I could look at you as long as I wanted, and even dig around in your brain as much as you would allow. But the idea that you have anything like conscious experiences need never occur to me. If I lack the idea of what that is, I simply cannot even be looking for it. What I will be looking at will be nothing other than a Zombie like myself, no matter what behavioral and neuro-physiological evidence I collect.

It is also clearly not the case that we gaze into the inner space of our own consciousness and try to learn, on the basis of what we find there, what a conscious experience is. This, again, would be to build up an epistemic distance where none exists. We do not find out empirically what a conscious experience is, in the way I find out who my father is: we simply are conscious, and if we are not, we are hopeless cases. In this way, conscious experience and meaning always presuppose themselves; there is nothing more primitive than they are, and I take that to be a dualist conclusion.