1

How does mental content feature in conscious thought? In this paper I’ll use the term ‘thought’ widely to cover judging that grass is green, wondering if the economy is going to improve, entertaining the possibility that Platonism about mathematics is true, and so on. In sections 2 & 3 I’ll argue that for a thought to be conscious the content of that thought must be conscious, and that it is not possible to give an adequate account of a thought’s content being conscious without appealing to what is now called ‘cognitive phenomenology’. Sensory phenomenology cannot do the job. If one claims that the content of a conscious thought is unconscious, one is really claiming that there is no such thing as conscious thought. So, one must either accept that there is such a thing as cognitive phenomenology, or deny the existence of conscious thought.

Once it is clear that conscious thought requires cognitive phenomenology, there is a pressing question about the exact relationship between a thought’s cognitive phenomenological properties and its content. In sections 4 & 5, I’ll discuss the nature of this relationship. I’ll argue for a very tight connection between a thought’s cognitive phenomenological properties and its content. That is, I’ll argue that there is a 1:1 correspondence between cognitive phenomenological types and a certain set of basic concept types.

2

We are all familiar with the idea of sensory phenomenology, the idea of what it’s like to see the color red, or to taste warm cornbread, or to hear perfect middle C. Cognitive phenomenology is less familiar and more difficult to catch hold of. A first stab at characterizing cognitive phenomenology is to point out that there is something it is like to think that formal logic is fun or to think that temperance is a virtue, and this ‘what it’s likeness’ is irreducible to any sensory phenomenology that may be associated with these thoughts. So, cognitive phenomenology is a kind of phenomenology that is essentially something over and above sensory phenomenology and that is paradigmatically found in cases of conscious thought. I am therefore arguing that the notion of ‘phenomenal’ or ‘phenomenological’ must be extended beyond the sensory.

Before getting into what cognitive phenomenology is, or why it is necessary to explain conscious thought, I’ll begin with some general observations about the content of thought. Consider a thought T—the thought that p, or that a is F. What is its content? Answers include:

It’s the propositional object of T—the proposition p (Fa).

It’s the meaning of ‘that p’ (that Fa).

It’s how T represents the world as being, so we sometimes speak of the representational content of a thought.

It’s what is contained in the that-clause, which may be judged as true or false.

What do ‘that’-clauses or ‘content-clauses’ designate? Some hold that they designate states of affairs in the world, while others claim that they designate propositions, hence the popular phrase ‘propositional content’. If one holds that ‘that’-clauses designate propositions, one faces the question of what exactly propositions are. Some hold that propositions are abstract entities of some sort, e.g. Fregean senses or functions from worlds to truth-values, while others hold that propositions are combinations of objects and properties.

I can’t hope to decide between these views, but I’m going to begin by explicating a thought’s content in terms of representational content, the idea that a thought represents reality as being a certain way. I’ll divide representational content into two kinds, internal and external. On this account, a thought’s external representational content is a matter of the object(s) and properties the thought is about. My thought about the cup in front of me represents the cup in front of me; its external representational content is the cup in front of me.

My thought about the cup is also constituted by concepts and the deployment of concepts, and these concepts and their deployment are to be characterized as internal content. As a first pass, I’ll characterize this internal content in terms of what I can share with my ‘twin’ on ‘Twin Earth’. Twin Earth is just like Earth except for certain minimal microphysical differences. For example, water on Earth is composed of H2O, whereas on Twin Earth what they call ‘water’ is composed of XYZ. (This microphysical difference can be set aside for now.) My twin and I are twins in the familiar sense that how things seem to us ‘from the inside’, from our first-person perspective, is exactly the same. My thought that the cup is brown and my twin on Twin Earth’s thought that the cup is brown share internal representational content (we both deploy the concept cup) despite having distinct external representational content, because our thoughts are about different material objects. I’m thinking about the cup in front of me, and my twin is thinking about a numerically distinct cup in front of her.

This Twin Earth example illustrates how the two kinds of representational content that feature in thought can come apart in various ways. It shows how two thoughts can have the same internal representational content, because they deploy the same concepts (cup, brown), but different external representational content, because they are about numerically distinct objects. It is also possible for a thought to have internal representational content without having external representational content: for example, my thought about the golden mountain has internal representational content but no external representational content, because there is no golden mountain.
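It may help to abbreviate the two kinds of content (the notation is introduced here purely for convenience). Write $\mathrm{IC}(T)$ for a thought $T$’s internal representational content and $\mathrm{EC}(T)$ for its external representational content. The Twin Earth case is then one in which

$$\mathrm{IC}(T_{\mathrm{me}}) = \mathrm{IC}(T_{\mathrm{twin}}) \quad\text{while}\quad \mathrm{EC}(T_{\mathrm{me}}) \neq \mathrm{EC}(T_{\mathrm{twin}}),$$

and the golden mountain case is one in which $\mathrm{IC}(T)$ is in place while there is no $\mathrm{EC}(T)$ at all.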

This distinction will be important in sections 4 & 5, where I consider the relationship between a thought’s cognitive phenomenological properties and its internal and external representational content. In section 3, however, where I am simply arguing for the existence of cognitive phenomenology, the distinction can be ignored, and so I will generally just speak of a thought’s representational content.

3

Let’s turn to conscious thought. I want to begin by proposing that we have an extremely robust intuitive grasp of what conscious thought is just in virtue of having conscious thoughts. Given this, conscious thought cannot be utterly mysterious to us in the way that quarks can be to the scientifically untutored. One can imagine answering the question ‘what is a quark?’ with ‘I have no idea’, whereas this kind of response to ‘what is conscious thought?’ seems quite implausible. It can’t be that one has no idea about what conscious thought is given our intimate acquaintance with it.

We already know that thoughts have content. The question I want to ask is this: what is the relationship between a particular thought’s being conscious and that thought’s content?

The natural suggestion, surely, is this:

If a thought T is a conscious thought, the representational content of T must be in some way consciously entertained.

I'll call this the ‘conscious content’ principle, or ‘CC’ for short, for we can express it naturally by saying that the representational content of a conscious thought must be conscious. Some may bristle at the phrase ‘conscious content’, but it is often used in the literature. For example, in defining his notion of ‘access-consciousness’ (A-consciousness), Block (2002: 209) says, “…what makes content A-conscious is not anything that could go on inside a module, but rather informational relations among modules…Content is A-conscious in virtue of (a representation with that content) reaching the Executive system…” Dennett (2009: 453) asks of the contents that arrive in the global workspace: “Precisely when do these winning contents become conscious?”
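For definiteness, CC can be given a rough regimentation (the notation is mine, and nothing below depends on it):

$$\forall T\,\bigl(\mathrm{ConsciousThought}(T) \rightarrow \mathrm{Conscious}(\mathrm{content}(T))\bigr)$$

where $T$ ranges over occurrent thoughts and $\mathrm{content}(T)$ is $T$’s representational content. The conditional runs in one direction only, and CC itself is silent on what a content’s being conscious consists in.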

Why think CC is true? As mentioned above, I’ll argue that denying CC—claiming that a thought can be conscious without the content of that thought somehow being conscious—is really denying that there is such a thing as conscious thought.

Before examining CC further, let’s first ask, what makes a conscious thought conscious? There are perhaps three main groups of views about this (here I’ll only give very rough characterizations of them):

[1] phenomenological views, according to which a thought is conscious in virtue of having phenomenology, or, more weakly, if and only if it has phenomenology;

[2] higher-order theories, according to which a thought is conscious in virtue of an unconscious higher-order mental state being directed at it; and

[3] ‘access-consciousness’ views, according to which an occurrent thought is a conscious thought in virtue of having enough of the right sort of informational relations to other mental states.

[1] contrasts sharply with [2], because according to [2] a thought is conscious in virtue of an extrinsic relation between two independent mental states, whereas [1] asserts that a thought is conscious in virtue of having phenomenology. According to [1], phenomenology is an intrinsic feature of conscious thought; not so for [2]. [1] contrasts with [3] because [3] rejects the view that phenomenology is essentially involved in saying what makes a thought a conscious thought.

I reject both [2] and [3]. I won’t go into detail here, except to say that I reject [3] because I think there can be mental states which satisfy the definition of A-consciousness, but which are not conscious at all, given our intuitive understanding of consciousness. Block agrees, as far as I can see. There’s been a lot of misunderstanding, because Block originally introduced the notion of A-consciousness precisely as an attempt to see how close a state could get to being a genuinely conscious state, i.e. a phenomenally conscious state, without actually being a conscious state at all. As for [2], the higher-order view, I reject it for a number of reasons, including its inability to deal with the possibility of a mismatch between what the higher-order state represents and what the lower-order state represents, and the implausibility of the idea that two unconscious states add up to a conscious state.

Since I’m not going into detail about why I reject [2] and [3], one can read the conclusions of this paper as conditional: if one agrees that [1] is the correct understanding of what a conscious thought consists in, one must accept the existence of cognitive phenomenology.

Let’s now get straight to [1], the overwhelmingly natural suggestion that conscious thoughts are conscious in virtue of having phenomenology. The question is then this: Given a particular conscious thought, what does its phenomenology have to be like for it to be the particular conscious thought that it is? And, crucially, how does its phenomenology relate to its representational content?

There are many philosophers who believe that there is sensory phenomenology, but reject the existence of cognitive phenomenology. I think there must be cognitive phenomenology. This is a difficult idea, so why accept it? Well, if one accepts that phenomenological properties are necessary for conscious thought, and rejects the existence of cognitive phenomenological properties, then one must hold that what makes occurrent thought conscious thought is simply its being somehow intimately associated with, or indeed partly constituted by, merely sensory phenomenological properties.

Can this be right? Consider John’s having the conscious thought that the earth is round. The first thing to note is that there is a great variety of kinds of sensory phenomenology that may be associated with this thought, e.g. a blue patch, a blue circle, an image of the solar system, and so on. But unless the link between the sensory phenomenology and the conscious thought is to be utterly arbitrary, something needs to be said about how the phenomenology in question is explanatorily or intelligibly connected to the thought in question being conscious. It can’t be just a lucky accident that some piece of sensory phenomenology makes a thought conscious. One is bound to wonder exactly how an image of a blue patch accounts for John’s conscious occurrent thought being the conscious thought that the earth is round. What’s the connection between the image and the thought? Presumably, some sort of connection between the thought and the particular sensory phenomenology in question must be made.

The natural proposal is that any adequate account of how phenomenology makes an occurrent thought that p a conscious thought that p must somehow link the phenomenological component of the thought to its representational content component. It seems, then, that

[4] The phenomenology that makes a particular occurrent thought a conscious thought must be explanatorily or intelligibly linked to the representational content of that thought.

Here are two sensory phenomenological proposals for satisfying [4]:

[4a] ‘Sentences do not merely stand in for thought, but actually constitute thoughts. When we produce sentences in silent speech, they issue forth from unconscious representations that correspond to what those sentences mean…. Sentences inherit their truth conditions from the unconscious ideas that generate them. So produced, these sentences aren’t arbitrary marks, but rather meaningful symbols. If we define a thought as a mental state that represents a proposition, then mental sentences qualify as thoughts.’ (Prinz 2011: 187; italics mine)

[4b] the representational content causes some associated sensory phenomenology.

On one natural reading, [4a] doesn’t satisfy the conscious content principle at all. That is, according to the conscious content principle, a thought’s being conscious must involve the conscious entertaining of its content, but Prinz’s proposal clearly characterizes the occurrence of the content as non-conscious. What about [4b]? If [4b] is to satisfy the conscious content principle, then the story can’t just be that there is some unconscious content causing some associated sensory phenomenology. One possibility is this: that the representational content of the thought [that the earth is round] simply causes one to have some sensory phenomenology without making the thought conscious at all. In this case the representational content [round earth] is simply causing a (possibly co-occurring) blue patch image or perhaps causing a thought about blue patches.

If [4a] and [4b] are committed to a thought’s content being non-conscious, it looks as if the claim is that the thought is really non-conscious and that there is some associated sensory phenomenology that is conscious. On this proposal, conscious thoughts come out as non-conscious. In fact, according to these proposals, it looks as if there is no such thing as conscious thought at all.

Can [4b] be restated so that it does satisfy CC, the conscious content principle? Could the causal relationship between the content and the sensory phenomenology result in the content itself being conscious? One might try to maintain that an image of a blue patch is the representational content of the thought that the earth is round. This proposal, however, runs into all of the well-known problems faced by the British Empiricists’ ‘picture theory of thinking’. (For one thing, pictures can't represent grammatical articulation.)

One might argue that the sensory phenomenology represents what the non-conscious content represents in virtue of the non-conscious content causing that sensory phenomenology. But again, surely non-conscious content could cause sensory phenomenology without that sensory phenomenology representing what the non-conscious content represents. So, how does the sensory phenomenology itself become representational? Prinz (2011) makes some proposals about how sensory phenomenology might do its representational work in terms of resemblance. He suggests, for example, that a mental image represents a walrus by representing how a walrus looks. But this clearly isn’t sufficient, and many thoughts have no conceivably picturable content: how, for example, is the conditional pictured in my thought that if the train leaves on time, I’ll make my appointment? Schopenhauer makes the point well:

While others speak, do we somehow instantaneously translate their speech into imaginative pictures that fly past us at lightning speed and move around and link themselves together, forming and colouring themselves according to the ever increasing stream of words and grammatical forms? What a tumult there would be in our heads while listening to a speech or reading a book! It does not happen like this at all. The meaning of speech is immediately understood, grasped exactly and determinately without, as a rule, being mixed up with any imaginative pictures (1819: 62–3).

The situation so far seems to be the following. Either sensory phenomenology is in some sense representing representational content and thus (somehow) making representational content conscious, or the relationship between representational content and sensory phenomenology is simply a causal relationship. If the former, the question simply arises again about what a representational content’s being conscious consists in. And the latter is just a version of a representational content’s being non-conscious, which violates CC.

Faced with the inability of proposals based on sensory phenomenology to satisfy the conscious content principle, I propose that the only plausible way to explain how a thought can be conscious, and hence how the content of a thought can be conscious, is to claim that there is cognitive phenomenology associated with, and indeed essentially constitutive of, all conscious thoughts. That is, for any representational content that is consciously occurrent, consciously entertained, there must be some distinctively cognitive phenomenological apprehension of that content.

At this point, one may claim that one is simply aware of some content in having conscious thought. The content of some given thought is conscious because one is aware of that content. But now the question arises, what does this ‘awareness’ amount to (given that we’ve rejected ‘access-consciousness’ accounts)? On one natural understanding, awareness is simply a synonym for experiential awareness (putting aside blindsight cases for the moment). And since experiential awareness involves phenomenology, so does awareness. Take the case of perceptual experience. In having a visual experience of a red round ball, one is aware of the content, the red round ball, at least partly in terms of experiencing certain kinds of sensory phenomenology, e.g. what it’s like to see red, what it’s like to see a color-shape, etc. Similarly, then, in genuinely consciously thinking that temperance is a virtue, one must be aware of the content in question, being temperate and being virtuous, at least partly in virtue of experiencing certain kinds of phenomenology. And if the phenomenology can't be sensory, it seems that it must be cognitive.

4

Up to this point I have been arguing for the claim that either there is cognitive phenomenology or there is no conscious thought. Since obviously there is conscious thought, there is cognitive phenomenology.

But now a pressing question arises about the relationship between a thought’s cognitive phenomenological properties and its representational content. Given the distinction between internal and external representational content introduced earlier, asking about the relationship between cognitive phenomenological properties and representational content is really asking two questions: given any particular conscious thought, how are cognitive phenomenological properties and internal representational content related, and how are cognitive phenomenological properties and external representational content related? Is there a unique cognitive phenomenological property or properties associated with each internal representational content and with each external representational content? Or is there room for variety in the cognitive phenomenology that can be associated with the different kinds of representational content? And if so, why isn't this variety problematic in the way it was for sensory phenomenology?

In attempting to answer these questions, we can start with the following claims:

[5E] Associated with each external representational content is some non-sensory cognitive phenomenological property or set of properties that is possessed by conscious thoughts with that external representational content.

[5I] Associated with each internal representational content is some non-sensory cognitive phenomenological property or set of properties that is possessed by conscious thoughts with that internal representational content.

[5E] and [5I] assert fairly weak connections between the two kinds of representational content and cognitive phenomenological properties. [6E] and [6I] provide a stronger statement of the link:

[6E] Associated with each external representational content is some non-sensory cognitive phenomenological property or set of properties that is essentially possessed by any conscious thought with that external representational content.

[6I] Associated with each internal representational content is some non-sensory cognitive phenomenological property or set of properties that is essentially possessed by any conscious thought with that internal representational content.
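The difference in strength between the two pairs can be brought out with a rough regimentation (again the notation is mine; I state only the internal versions, the external versions being parallel with $\mathrm{EC}$ in place of $\mathrm{IC}$). Let $T$ range over conscious thoughts, $c$ over internal representational contents, and $\Phi$ over cognitive phenomenological properties or sets of them:

$$\text{[5I]}\quad \forall c\,\exists \Phi\,\forall T\,\bigl(\mathrm{IC}(T)=c \rightarrow \Phi(T)\bigr)$$

$$\text{[6I]}\quad \forall c\,\exists \Phi\,\Box\,\forall T\,\bigl(\mathrm{IC}(T)=c \rightarrow \Phi(T)\bigr)$$

On this reading [5I] is a generalization about conscious thoughts as they actually occur, while the modal operator in [6I] expresses essential possession: no possible conscious thought has that internal content without the associated phenomenology.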

I will begin with a consideration of the relationship between cognitive phenomenological properties and external representational content. I will argue that while [5E] is too weak, [6E] is too strong, and that, although it is hard to state precisely, something in between [5E] and [6E] is most plausible. In the case of the relationship between cognitive phenomenological properties and internal representational content I will defend a version of [6I], although it will need considerable refinement before it is acceptable.

First, it seems clear that there can be different cognitive phenomenological properties associated with two thoughts with the same external representational content. In setting this out, I'll indicate a subject’s possession of cognitive phenomenological properties, when she is having a conscious thought that p, by saying that the ‘subject takes herself to be thinking p’. Consider then a subject who consciously thinks that the morning star is bright and then consciously thinks that the evening star is bright. In the first thought the subject takes herself to be thinking in a ‘morning star-ish’ way, and in the second thought the subject takes herself to be thinking in an ‘evening star-ish’ way. Thus the cognitive phenomenological properties associated with each conscious thought are different despite their having the same external representational content, Venus. So [6E] is too strong as a constraint on the relationship between cognitive phenomenological properties and external representational content.

[5E], however, is too weak, because it places no constraints at all on the relationship between cognitive phenomenological properties and external representational content. But it’s simply not the case that there are no constraints. One cannot take oneself to be thinking about the number two, and thus have ‘two-ish’ cognitive phenomenological properties, while really having a chair as external representational content. That is just too far off beam.

The conclusion so far, then, is that given a conscious thought’s external representational content, there are some constraints on what that thought’s cognitive phenomenological properties can be. But how do we specify these constraints? Given a purported thought with a chair as external representational content, for example, what do the subject’s cognitive phenomenological properties have to be before we no longer count the subject as consciously thinking of the chair? I don’t think a precise principle can be formulated in answer to this question. The best we may be able to do is proceed on a case-by-case basis, but we may get some help in answering this question by turning to the issue of how cognitive phenomenological properties and internal representational content are related.

At the beginning of this paper I characterized internal representational content in terms of the deployment of concepts. So in asking about the relationship between internal representational content and cognitive phenomenological properties, I am in effect asking about the relationship between the deployment of concepts and cognitive phenomenological properties. I don’t have any particular theory of concepts in mind, and I hope that what I say is compatible with most theories of concepts.

I’ll defend the claim that there is a strong internal connection between the deployment of particular concepts and particular cognitive phenomenological properties. That is, the deployment of a given concept in conscious thought can only give rise to or be associated with a particular cognitive phenomenological property or properties. I will, in other words, accept a version of [6I].

It may seem obvious—ultimately a matter of definition—that there must be some internal connection between cognitive phenomenological properties and concept deployment. One might nonetheless begin by asking why one should think there is any such connection.

One thing worth noting straight away is that whereas inverted spectrum cases seem plausible, or at least possible, for certain sensory phenomenological properties, they seem straightforwardly and universally impossible when it comes to cognitive phenomenological properties. Let’s suppose for the sake of argument that someone proposes that being a circle and being a line are directly opposed in a quality space in the way in which red and green are standardly taken to be opposed in the color quality space. As before, we may characterize a subject’s possession of certain cognitive phenomenological properties by saying that the subject takes herself to be thinking that p. For there to be an inverted spectrum case, it has to be possible for a subject to take herself to be thinking of a circle while employing the concept line, and equally for her to take herself to be thinking of a line while employing the concept circle. This doesn’t seem possible, because there seems to be no adequate description of what such a subject is actually thinking—is she thinking of a line or a circle? There seems to be no way of giving a principled answer to this question. And having made the point artificially in terms of line and circle, we can simply consider straightforward conceptual opposites like virtue and vice.

If spectrum inversion is universally impossible in the cognitive case, then we may have taken one small step towards demonstrating an internal connection between certain cognitive phenomenological properties and certain concept types. But [6I] makes a much stronger claim than has been established by the impossibility of spectrum inversion for the cognitive case. It claims that a particular concept must be associated with a particular cognitive phenomenological property or a particular range of cognitive phenomenological properties. But why think this—and how might one specify the range?

To answer these questions, I’ll consider two further questions:

[7] Is it possible for two thinkers to share exactly the same cognitive phenomenological properties while not sharing the same concepts?

[8] Is it possible for two subjects to deploy the same concepts, but not share exactly the same cognitive phenomenological properties?

I’ll argue that there is a way of answering ‘yes’ to both of these questions that does not undermine a robust internal connection between cognitive phenomenological types and concept types. This is an indirect way of arguing for [6I]: I am attempting to support [6I] by showing that cases that may at first glance seem to threaten it do not, on closer inspection, do so. (I’ll gesture toward a more direct argument in section 5.)

Consider [7]. If my twin on Twin Earth and I are thinking thoughts that can be represented in our respective languages with the sentence ‘water is wet’, do we necessarily share the same cognitive phenomenology? The answer is yes, by hypothesis. Do we share the same concepts? The work of Putnam, Kripke, and Burge has illuminated the respect in which the concepts we think with, such as water and arthritis, hook up directly to our environment, in such a way that if our environment were different, despite appearing the same, the concepts we think with would be different. My word ‘water’ hooks onto H2O; my watery concept (water) hooks onto H2O. My twin’s word ‘water’ hooks onto XYZ; her watery concept (twater) hooks onto XYZ. This seems to show immediately that cognitive phenomenological properties and concepts can diverge, and hence that internal representational content and cognitive phenomenology can diverge, since our watery concepts (I’ll simply say ‘our water concepts’ from now on) are different while our cognitive phenomenology is the same.

This seems true, but I don’t think the case presents a real problem for my defense of [6I]. One reason is that only some aspects of a concept are relevant to a thought’s cognitive phenomenology. In the case of my twin’s and my deployment of our respective water concepts, these are all the aspects we can summarize under the heading ‘the watery stuff’. So when my twin and I are thinking about what we each call ‘water’ in our respective languages, there is the right kind of overlap with respect to our respective water concepts—‘the watery stuff’—such that we can correctly be said to share exactly the same cognitive phenomenology, despite the difference in our concepts. (One might say that cognitive phenomenology is internally related to the ‘narrow’ elements of concepts; for me and my twin, the narrow elements of our concepts overlap.)

One might wonder whether any particular one—or all—of the descriptive elements covered here by ‘the watery stuff’ must be present when my twin and I are having our respective water thoughts. Could my twin or I have something like a ‘bare’ water thought—with no descriptive content associated with our respective water concepts? I think the answer to this question is ‘no’, and that traditional Frege puzzles provide strong evidence for this answer. Arguing for this claim, however, would take us too far afield, so for my purposes here, I’m going to assume that general concepts like water that are deployed in thought involve descriptive content. That this is so seems to be a basic fact of intentionality: to think of some object x is to think of it in some particular way.

My answer to [7] appeals to the idea that concepts have aspects. Once this is admitted, we can see how the same idea shows that question [8] above—is it possible for two subjects to deploy the same concepts but not share exactly the same cognitive phenomenology?—can be answered affirmatively without undermining [6I]. First, it is possible that two thinkers may be thinking with the same concepts while having different cognitive phenomenology because they are hooking onto different aspects of the concept. Second, it is possible that two thinkers may be thinking with the same concepts while having different cognitive phenomenology because one thinker or perhaps both thinkers may only partially possess the concept or concepts in question.

The first possibility—that two thinkers may be thinking with the same concepts but have different cognitive phenomenology because they are hooking onto different aspects of the concept—doesn’t present a problem for the view I am defending here. The aspects of a concept that a thinker is ‘thinking with’ are what will determine the cognitive phenomenology of the thought, and as long as two thinkers are thinking with the same aspects, their cognitive phenomenology will be the same. If two thinkers are thinking with different aspects of a concept, their cognitive phenomenology will differ. If two thinkers only have partial overlap with respect to the aspects of concepts they are thinking with, their cognitive phenomenology will overlap in so far as those aspects are concerned. The idea here is that there is a sense in which, when we think, the concepts we think with are often not fully explicit in the thought itself. So, when I think about knowing some proposition, [I know that my name is Roger], for example, although I can be said to fully possess the concept of knowledge, I certainly don’t have to be thinking explicitly about justified true belief plus some fourth condition. Perhaps I am only focused on the truth aspect of the concept of knowledge.

With respect to the second possibility—that two thinkers may be thinking with the same concepts, narrowly conceived, but have different cognitive phenomenology because one thinker or perhaps both thinkers may only partially possess the concept or concepts in question—imagine an expert in physics thinking that an electron is a fundamental particle and my thinking that an electron is a fundamental particle. Despite deploying the same concepts, our cognitive phenomenology will be different, because I only have a vague sense of what an electron is. But again, I don’t think this threatens [6I]. The variation in cognitive phenomenology possible here is just the result of the complex ways in which concepts feature in our thoughts.

5

‘Even if concepts have aspects, and even if we can account for a certain amount of variation among thinkers’ cognitive phenomenology in terms of those aspects, can’t we always ask about the cognitive phenomenology associated with those aspects themselves, and ask again whether it, too, can vary?’

I think at this point we are approaching a bedrock claim, namely that there just isn’t going to be variation of cognitive phenomenology given some set of concepts, perhaps a set of very basic concepts. On this view, there has to be an internal connection between cognitive phenomenological properties and a certain set of basic concept types. (I don’t have a worked-out theory of what basic concepts are, but roughly I take them to be concepts that are unanalyzable, or for which no other concepts are conceptually prior.)
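The 1:1 correspondence promised in section 1 can then be put as follows (the formulation, again, is mine). Let $B$ be the set of basic concept types and $P$ the set of cognitive phenomenological types. The bedrock claim is that there is a map

$$f : B \to P \quad\text{with}\quad \forall b_1, b_2 \in B\,\bigl(f(b_1) = f(b_2) \rightarrow b_1 = b_2\bigr),$$

so that each basic concept type is internally connected to exactly one cognitive phenomenological type ($f$ is a function), no two basic concept types share that type ($f$ is injective), and $f$ is therefore a bijection between $B$ and its image $f(B) \subseteq P$.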

Two questions immediately arise. First, what are the grounds for making such a claim? Second, why can’t the proponent of the sensory-phenomenological proposal make the same claim? To take the second question first: it is an undeniable fact that there is massive variation in the possible accompanying sensory phenomenology given any conscious thought deploying any representational content type. So it is demonstrably false that there is an internal connection between sensory phenomenological types and representational content types; indeed, there may be no sensory phenomenology at all in the case of thinking a particular conscious thought. The same cannot be said of the connection between cognitive phenomenological properties and basic concepts.

Now to answer the first question. I began this paper with the claim that all conscious episodes, and hence all conscious thoughts, necessarily and essentially involve phenomenology. The phenomenology of conscious thought is going to be either sensory or cognitive, assuming that this disjunction is exclusive and exhaustive. I have argued that sensory phenomenology cannot do the job, and that we can therefore conclude that any adequate account of conscious thought must appeal to cognitive phenomenology. I have tried to show that although there can be some variation of cognitive phenomenological properties given particular types of internal and external representational content, this variation doesn’t threaten the truth of [6I]. But now we can ask again, why believe [6I]?

At this point I’ll only consider the connection between cognitive phenomenological properties and basic concepts. Basic concepts will either be simple or complex. Let’s suppose they are simple. Do we have any reason to conclude that there is a unique cognitive phenomenological property associated with each basic concept, or could there be a variety of cognitive phenomenological properties associated with each basic concept? For the sake of simplicity, let's suppose that if there is variety then there are four distinct and unrelated cognitive phenomenological properties associated with a given basic concept.

Let’s assume that the concept of space is basic and simple. If there is an internal connection between a unique cognitive phenomenological property and the concept of space, we have an immediate answer to the question, ‘how are you thinking about space?’ You must be instantiating the unique cognitive phenomenological property internally connected to the concept of space. This gives us an explanatory account of what makes this particular conscious thought possible, given that all conscious episodes involve phenomenology, and that we can't account for what makes a conscious thought the particular thought that it is by reference to sensory phenomenology.

Now suppose that there are four distinct and unrelated cognitive phenomenological properties associated with the concept of space. First, there seems to be some need to explain how a simple concept has four different cognitive phenomenological properties associated with it. Further, it seems that the connection between these different cognitive phenomenological properties and the concept cannot be an internal connection. For if the concept is simple, it has no structure; so it is difficult to see how it can be internally connected to four different cognitive phenomenological properties. With the loss of this internal connection we get two undesirable results. First, it seems that the connection between the cognitive phenomenological properties and simple concepts becomes a brute connection or a matter of brute fact. For if the connection between the concept and the cognitive phenomenological properties is not internal, and if it is difficult to see what kind of external connection there could be (not causal or historical), it seems that the connection must be brute. Second, we lose the answer to the question, ‘How are you thinking of space?’, because we can no longer appeal to an internal connection between the concept of space and a cognitive phenomenological property. The answer to this question has become a matter of brute fact, so there is no real explanation of how it is that you are consciously thinking specifically of space. It’s true that one could say that you’re thinking of space because you’re instantiating a cognitive phenomenological property, but that that property makes it possible for you to think about space would just be a brute fact, because that property’s connection to the concept of space is a brute connection.

6

Many questions remain, but I will stop here. In the first half of this paper I argued that appeal to cognitive phenomenology is necessary in order to say what it is for a thought to be a conscious thought—in order to say what a conscious thought consists in. A central part of that argument involved showing that sensory phenomenology on its own could not account for how representational content features in conscious thought. That argument appealed in part to the sheer variety of sensory phenomenology that can be associated with the content of a conscious thought. Two questions then arise for any account of conscious thought that appeals to cognitive phenomenology. First, what is the relationship between cognitive phenomenological properties and representational content? Second, if there can be variety in the cognitive phenomenological properties associated with a particular representational content, does that undermine the cognitive phenomenological approach in the way it undermined the sensory phenomenological approach? In answer to the second question, I have tried to show that there can be such variety and that it does not undermine the approach. In answer to the first question, I have put forward a strong thesis regarding the relationship between cognitive phenomenological properties and internal representational content: there has to be an internal connection between cognitive phenomenology and a certain set of basic concept types.