
10.1 Introduction

Philosophical discussions of phenomenal consciousness are often cast in the idiom of realism/anti-realism debates. See, for example, the “phenomenal realism” discussed by Chalmers (2003), Block (2002), and McLaughlin (2003), as well as the “qualia realism” discussed by Kind (2001), Graham and Horgan (2008), and Hatfield (2007). Often, the realists label themselves as such in the interest of making an existence claim and casting their opponents as those nihilists or eliminativists who would deny the existence of phenomenal consciousness and/or qualia. For example, critics of Daniel Dennett often characterize him as denying the very existence of consciousness (Footnote 1). But, at least sometimes, the realists are claiming more than the mere existence of consciousness: They are claiming that what exists also exists independently (independently of what? More on this shortly). It’s open, then, for a consciousness anti-realist to affirm the existence of consciousness while denying that its existence is independent in a way interesting to realism/anti-realism debaters. My aim in the present paper is to explore such an existence-affirming consciousness anti-realism, especially as exemplified in Daniel Dennett’s career-spanning work on consciousness, key components of which include his books Content and Consciousness (1969), Consciousness Explained (1991b), and Sweet Dreams (2005).

What sense can be made of independence in the context of discussions about consciousness? In other realism debates—debates, for instance, about numbers, colors, or physical objects—independence claims are often cast in terms of mind-independence (Khlentzos 2011). A realist about electrons holds that electrons would still have existed even if no minds did. A realist about colors holds that an object can have a color even if no mind exists to perceive its color. While formulations of independence claims along such lines may make sense for colors and physical objects, they may initially seem ill-suited for making coherent independence claims about phenomenal consciousness. It makes little sense to say that consciousness could have existed even if no minds existed. It makes little sense to say that qualia exist independently of how things are perceived or experienced. Despite the inapplicability of these forms of independence claims to consciousness, there is a sensible way of interpreting a relevant independence claim: It is a claim about consciousness occurring independently of what one thinks or believes. The anti-realism under present consideration denies this sort of independence claim.

The consciousness anti-realism I focus on in the present paper is a view that Dennett has defended across several works—it’s part of the “semi realism” of his “Real Patterns” (1991a), a view of consciousness described by Dennett (1994) as opposed to “hysterical realism”. Given the way that Block (2002, p. 392) characterizes “phenomenal realism” as a thesis that “allows the possibility that there may be facts about the distribution of consciousness which are not accessible to us even though the relevant functional, cognitive, and representational facts are accessible,” Dennett may appear to certain eyes to be an anti-realist merely because his view of consciousness, dating all the way back to Content and Consciousness, is functionalist, cognitivist, and representationalist. However, there is a much more specific anti-realist view of Dennett’s that I want to focus on here. In Consciousness Explained, Dennett describes this view as “first-person operationalism,” a thesis that “brusquely denies the possibility in principle of consciousness of a stimulus in the absence of the subject’s belief in that consciousness” (1991b, p. 132).

Dennett’s most famous argument for his first-person operationalism (hereafter, FPO) proceeds by pointing out the alleged empirical underdetermination of theory-choice between “Stalinesque” and “Orwellian” explanations of certain temporal anomalies of conscious experience (Dennett, op. cit., pp. 115–126). The explanations conflict over whether the anomalies are due to misrepresentations in memories of experiences (Orwellian) or misrepresentations in the experiences themselves (Stalinesque).

David Rosenthal (1995, 2005b, c) has proposed that his Higher-order Thought theory of consciousness (hereafter, “HOT theory”) can serve as a basis for distinguishing between Orwellian and Stalinesque hypotheses and thus as a basis for resisting FPO. The gist of HOT theory is that one’s having a conscious mental state consists in one’s having a higher-order thought (a HOT) about that mental state (Footnote 2).

I’ll argue that HOT theory can defend against FPO only on a “relational reading” of HOT theory whereby consciousness consists in a relation between a HOT and an actually existing mental state. I’ll argue further that this relational reading leaves HOT theory vulnerable to objections such as the Unicorn Argument (Mandik 2009). To defend against such objections, HOT theory must instead admit of a “nonrelational reading” whereby a HOT alone suffices for a conscious state. Indeed, HOT theorists have been increasingly explicit in emphasizing this nonrelational reading of HOT theory (Rosenthal 2011; Weisberg 2010, 2011). However, I’ll argue, on this reading HOT theory collapses into a version of FPO.

The remainder of the paper will go like this: In Sect. 10.2 I’ll say some more about Dennettian anti-realism (FPO) and the Orwellian/Stalinesque argument. In Sect. 10.3 I’ll lay out a HOT-theoretic version of the Orwellian/Stalinesque distinction that depends on a relational reading of HOT theory. In Sect. 10.4 I’ll spell out the case for a nonrelational reading of HOT theory and how HOT theory is thereby led to a kind of FPO.

10.2 Anti-realism, Consciousness, and FPO

10.2.1 Clarifying Consciousness Anti-realism

In this subsection I want to rapidly clarify key terms. My aim in the present section is not to argue that one set of construals is better than another, but instead to lay out a series of stipulations to facilitate the rest of the discussion.

Consciousness aside for a moment, let’s think about the general structure of realism/anti-realism theses and debates between them. A realist position, say realism about dogs, is a conjunction of an existence claim and an independence claim, where the independence in question is often glossed as “mind independence”. An imprecise statement of dog realism is “dogs exist and exist mind-independently.” Each conjunct admits of multiple precisifications. I’ll have little to say in the present paper about precisifications of the existence claim. Let it suffice that I intend existence claims to be tenseless and actual-world directed. So, items in the past and future exist, though no item in a nonactual possible (or impossible) world does. The extinction of dogs will not, then, falsify dog realism.

Precisifications of the independence claims require more care, especially if we want to formulate coherent claims of mind-independence about things that are themselves mental. One precisification of independence that will not serve present purposes is one stated simply in terms of minds, as in “X exists independently of any mind existing.” Clearly, plugging “minds” in for “X” generates an incoherence. Precisifications that avoid such an incoherence appeal instead to specific kinds of mental state, say specific kinds of thought, belief, or judgment. “Minds exist independently of anyone thinking, believing, or judging that minds exist” contains no obvious incoherence. Precisifications of the independence claim along this line will be what I have in mind for the rest of the paper. Of interest will be the question of whether one’s conscious experience exists independently of one’s thinking, believing, or judging it to exist.

Given that realist theses are each a conjunction of an existence claim and an independence claim, opponents of realism come in two varieties: Nihilists, who deny the existence claim, and idealists, who deny the independence claim. A Berkeleyan idealist about dogs (a “bark”-leyan?) does not deny that dogs exist, but instead denies that dogs exist independently of being perceived.

I will simply set nihilism aside in this paper, and reserve “anti-realism” for the idealist variety. While Dennett’s critics sometimes accuse him of denying that consciousness exists, it should be clear that Dennett’s statement of FPO doesn’t support such a reading. In denying “the possibility in principle of consciousness of a stimulus in the absence of the subject’s belief in that consciousness,” Dennett is clearly not denying an existence claim, but instead an independence claim. The kind of anti-Dennettian that I am interested in can be briefly described as holding that we can sort mental states into two varieties, experiences and thoughts, and that conscious instances of the first variety (and facts about them) obtain independently of instances of the second variety.

One further set of issues I want to address before leaving this subsection concerns which facts about consciousness are at issue. What we get directly from the Dennett quote is that FPO is anti-realist about “consciousness of a stimulus”. Some consciousness theorists, especially HOT theorists, will detect an ambiguity in this phrase. Many, if not all, follow Rosenthal in distinguishing “transitive consciousness” (being conscious of something) from “state consciousness” (a mental state’s being conscious); for Rosenthal’s discussion of the distinction, see, for instance, Rosenthal (2005a, p. 4). If there is such a distinction, then the possibility opens of having a state in virtue of which one is conscious of something without that state itself being a conscious state. For example, one might have a perceptual state by which one is conscious of a red rose without the perceptual state itself being conscious. Other theorists do not urge such a distinction. Dretske, for instance, says that conscious states are states “we are conscious with, not states we are conscious of” (1995, pp. 100–101). Perhaps (though I’m unsure) Dennett counts among such theorists. However, regardless of where one stands on this issue, there is an interesting anti-realist thesis to be stated explicitly in terms of state consciousness. Modifying the Dennett quote accordingly yields a thesis that “brusquely denies the possibility in principle of a conscious experience of a stimulus in the absence of the subject’s belief in that consciousness” (altered text italicized). For the remainder of the paper, I shall be interpreting FPO as including this thesis.

Before proceeding to the next section, I should note that, contra Kiefer (2012a) and Muñoz-Suárez (personal correspondence), one view of Dennett’s that is not a part of FPO is his view that certain of a speaker’s speech acts determine the contents of that speaker’s intentional states. This thesis of a dependence of thought upon speech and other expressions is separable from FPO, which is a thesis of a dependence of consciousness upon thought.

10.2.2 The Orwellian/Stalinesque Argument for FPO

The phi phenomenon is a species of illusory motion, as when one views the flashing stationary lights on a marquee. Color phi is a species of the phi phenomenon in which the stationary stimuli differ in color and the apparently moving object changes color mid-trajectory. Subjects in a color phi experiment look at a computer screen upon which a green circle appears and then disappears. A short time later, in a position a small distance away from where the green circle was, a red circle of the same size appears and then disappears. The time elapsed between the disappearance of the green circle and the appearance of the red one is very short. It’s so short that, if you were a subject in this experiment, it would appear to you as if a single circle appears, moves across the screen, and then disappears. Further, the single moving circle would appear to start off green and change to red midway in its trajectory. This is color phi and it is weird.

Color phi is not just weird because we don’t know how the brain creates illusory motion from nonmoving stimuli. Here’s the really puzzling thing about color phi: How does the brain know to change the moving green circle to red before the red circle appears? Clairvoyance aside, clearly it cannot. So the experience of the green-to-red change must have happened after the brain receives information about the appearance of the red circle. We want further details in an explanation of this, and here we feel pulled toward two competing explanations, explanations that Dennett famously dubs “Orwellian” and “Stalinesque”.

My mnemonic for Dennett’s labels is that “Stalinesque” shares an “s” and a “t” with “show trial,” and “Orwellian” has an “r” in common with “revisionist history.” Both explanations have key roles for the notions of consciousness and of falsehood, but differ with respect to the questions of which states are conscious and which ones are false representations.

Let’s start by looking at the revisionist history, that is, the false memory, posited by the Orwellian explanation. On this explanation, the key mental events and their temporal order are as follows: First there is a conscious experience of a green circle, next there is a conscious experience of a red circle, and finally there is a false memory of a single circle having moved and changed from green to red. On the Orwellian explanation, there is neither a conscious experience of motion nor one of color change, but instead a false memory that movement and color change were experienced.

Let us turn now to the Stalinesque explanation, which posits a show trial. On this explanation, the false mental state posited is not a memory but an experience. On the Stalinesque explanation, the key mental events and their temporal order are as follows: First there is an unconscious receipt of information concerning the green circle, next there is an unconscious receipt of information concerning the red circle, and finally, based on these raw materials, a conscious experience is assembled—a false experience of a green circle moving and changing to red mid-trajectory.

On the face of it, these seem to be distinct competing explanations of the empirical data. The Orwellian explanation posits two accurate conscious experiences of two stationary, differently colored circles followed by a false memory of having experienced a single moving circle that changes color. The Stalinesque explanation posits a false conscious experience of motion and mid-trajectory color-change and an accurate memory of that experience. To highlight their differences, we can describe the explanations as follows: the Orwellian posits a false memory and accurate conscious experience, whereas the Stalinesque posits a false conscious experience and an accurate memory (of what the experience was).

If these are indeed distinct explanations, then which one is the correct one? Dennett argues persuasively that no amount of evidence, either first-personal or third-personal, will determine theory choice here. I’m persuaded. I find it easy to be so persuaded.

To attempt to persuade yourself of Dennett’s conclusion, first imagine being a subject in a color phi experiment. What you introspect is that there has been a visual presentation of a moving, color-changing circle. Your introspective judgment is that you have experienced such an episode. But to resolve the Stalinesque v. Orwellian debate on introspective grounds, your introspective judgment would need to wear on its sleeve whether its immediate causal antecedent was a false memory (Orwellian) or a false experience (Stalinesque). But clearly, no such marker is borne by the introspective judgment. So much for the first-person evidence!

So now, imagine being a scientist studying a subject in a color phi experiment. Imagine availing yourself of all of the possible third-personal evidence. Suppose you avail yourself of evidence gleaned via futuristic high-resolution (both spatially and temporally) brain scanners. Such evidence, let us suppose, will allow you to determine not only which brain events occur and when, but also which brain events carry which information, and which brain events are false representations. This is, of course, to presume solutions to very vexing issues about information, representation, and falsehood, solutions that might beg the question against a Dennettian anti-realism about representation and perhaps, thereby, against Dennettian anti-realism about consciousness, but I won’t pursue this line of thought here. However, we will here suppose that such solutions can be arrived at independently of resolving issues about consciousness. Even with all of that settled, nothing in the scan data marks any particular event as conscious rather than as a merely unconscious representation. The evidence that you have will, by itself, tell you nothing about which states are conscious. So much for the third-person evidence!

To surmount this hurdle for strictly third-person approaches, you may feel tempted either to ask the subject what their conscious experiences are like, or to allow yourself to be a subject in this experiment. However, either way you will only gain access to an introspective judgment with a content that, as we have already seen, underdetermines the choice between the Orwellian and the Stalinesque.

Given that there’s no real difference between the Orwellian and Stalinesque scenarios, what matters for consciousness is what the scenarios have in common, namely the content of the belief or thought that one underwent a conscious experience of a color-changing, moving circle. There’s nothing independent of this belief content that serves to make it true, so having a belief with such-and-such content is all there is to being in so-and-so conscious state.

10.3 HOT Orwellian and HOT Stalinesque Scenarios

Dennett’s Orwellian/Stalinesque argument turns on a kind of underdetermination of theory by evidence. Of course, what evidence underdetermines, additional theory can sometimes settle. Rosenthal constructs HOT-theoretic versions of the Orwellian and Stalinesque scenarios that are distinguishable given the resources of HOT theory (1995, p. 362). However, that there are some Orwellian and Stalinesque scenarios that are distinguishable from each other doesn’t suffice to refute FPO. Dennett himself admits that some Stalinesque scenarios are distinguishable from some Orwellian scenarios (especially at macroscopic time-frames) (Dennett 1991b, p. 117). What matters instead is that there are some Orwellian and Stalinesque scenarios that are not distinguishable from each other. I aim in the present section to show that there are Orwellian and Stalinesque scenarios that HOT theory serves to distinguish only on a relational reading of HOT theory.

One way to convey the gist of HOT theory is by saying that a state is conscious when a HOT is about that state. Reading this relationally, we have two relata and a relation between them. The relata are the HOT and the state that it is about. The relation the HOT bears to its target is an “aboutness” relation, or as I’ll prefer to say, a “representing relation”. So, when a visual experience of a red circle is accompanied by a HOT that bears the representing relation to it, then the visual experience is a conscious one. If, instead, the visual experience is unaccompanied by any such HOT, the experience is an unconscious one. Sometimes HOT theorists themselves put HOT theory in ways that invite the relational reading. For example, Rosenthal (2005c, p. 322) writes that his is “a theory according to which a mental state is conscious just in case it is accompanied by a higher-order thought (HOT) to the effect that one is in that state.” Prima facie, this talk in terms of accompaniment makes a representing relation seem central to HOT theory. However, in the final analysis Rosenthal’s commitment to the relational reading may be merely a superficial appearance. I’ll return to this issue in Sect. 10.4. For the present section, I will keep the relational reading at the forefront.

With this relational reading of HOT theory in mind, let us think through how color phi can be explained. In color phi, it seems to one that one has an experience of a moving circle that changes color. In order for it to seem to one that one is having an experience of a moving, color-changing circle, there needs to be a HOT that one is visually experiencing a moving, color-changing circle. We might wonder further about what the causal antecedents are of this HOT, especially as concerns links in the causal chain after the information from the stationary flashing circles has hit the eye of the beholder.

One possibility is that none of the causal antecedents of the HOT is a visual experience of motion and color change. Instead, the causal antecedents are visual experiences of the stationary red and green circles. Further, it is a consistent elaboration on this possible scenario that no causal consequence of the HOT is a visual experience as of motion and color change. Since nothing antecedent or consequent to the HOT answers to the description that constitutes the HOT’s content, the HOT is false. Since the HOT is not itself an experience (it is instead a thought) and occurred after the experiences that occasioned it, we can regard it as a memory (albeit, a false one). Given the possibility we’ve just consistently described, this reading of HOT theory casts it as close to Orwellian. However, to be fully Orwellian, there needs to be posited, in addition to a false memory, an accurate conscious experience. Can we complete an Orwellian explanation sketch that is consistent with HOT theory? I think that we can, but some care needs to be taken.

The way to introduce an accurate conscious experience into the above sketch in a way that is consistent with HOT theory is to go looking for one or more states that the HOT is about. If this sketch is to be Orwellian, some choices for what the HOT is about will be better than others. On a highly natural reading of what the HOT is about, it is about an inexistent state, namely a visual experience of motion and color change. The inexistence of such a state is what makes the HOT false. One problem with this reading is that the Orwellian is supposed to be positing the existence of a conscious state, and it is highly strained to posit the existence of something that is admitted in the same breath to not exist. I hope I will be forgiven for dismissing the Meinongian perspective required to view existing inexistents as welcome company. Anyway, there is another problem: It is difficult to regard the inexistent state as accurate. The inexistent state is a representation of movement and color change upon the computer screen, and, in actuality, no such motion or color change exists. And since Meinongianism is here not taken seriously, there is no serious way of taking the suggestion that the inexistent state is an accurate representation, albeit one that accurately represents an inexistent state of affairs.

There is another possibility for interpreting what the HOT is about, namely that it is about the two separate experiences of the differently colored circles. In being about those accurate experiences, the HOT thereby renders them conscious: On the occasion of the HOT’s being about them, the experiences become conscious. This may have a slight air of strangeness, but there’s no obvious problem in a representation of something representing it falsely. Indeed, the scenario described here is a possibility that Rosenthal explicitly endorses (2005b, pp. 240–241) (That is, he endorses it as a possibility. He does not assert that it is an actuality).

This completes my sketch of a HOT Orwellian explanation of color phi. Let’s try to fit a Stalinesque explanation into the HOT mold as follows: Recall that a Stalinesque explanation posits a false conscious experience of motion and mid-trajectory color-change that has as causal antecedents the unconscious receipt of information concerning the stationary presentations of the green circle and the red circle. To fit such an explanation into the HOT mold, the HOT theorist needs to posit a HOT that is about an experience that is itself a false representation of motion and color change. Otherwise, without such a HOT, the false experience won’t be conscious. But in order to introduce this HOT, a means must be devised of determining that the HOT is about the false representation and not about the accurate representations. Otherwise, the accurate representations will be the conscious ones and the proposed explanation won’t be Stalinesque. Supposing that such a means can be devised, we have a Stalinesque reading of a HOT-theoretic explanation of color phi.

It looks, at least prima facie, as though HOT theory is consistent with both Orwellian and Stalinesque explanations. However, once these explanations are fit into the HOT mold, are opportunities thereby made available for adjudicating between them?

Note the key similarities in the Orwellian and Stalinesque stories. On both stories there is a HOT, the content of which is that there’s an experience of motion and color change. Also, on both stories there are accurate experiences of the stationary red and green circles. The key differences are that, on the Orwellian story, the HOT bears the representation relation to the accurate experiences and not to the (inexistent) inaccurate experience of motion and color change. On the Stalinesque story, the HOT bears the representation relation to the inaccurate experience of motion and color change and not to the accurate experiences of the stationary red and green circles. If we assume that the HOT theory is true, then in order to discover whether color phi is Orwellian or Stalinesque we would need to discover whether the HOT bore a representing relation to the accurate experiences or not.

To give a preview of the worry that I ultimately want to press against HOT theory, there are good reasons to think that there is no such thing as a representation relation and so, if the HOT theory is true, no such relation figures in it. But without recourse to such a relation, there is no relevant difference between the HOT Orwellian and the HOT Stalinesque explanations: In either case, the content of one’s consciousness just is the content of the HOT, and that content is the same on either story.

10.4 Non-relational HOT Theory and FPO

Elsewhere I press an argument, “the Unicorn Argument” or just “the Unicorn,” against HOT theories (Mandik 2009). At the heart of the argument is a view about how best to think of representation in the face of the representation of inexistents such as unicorns. This view can be seen as emerging as a response to the famous inconsistent triad of intentionality (Footnote 3). One way of presenting the triad is like this:

1. Representing is a relation borne to that which is represented.

2. There are representations of inexistents.

3. There are no relations borne to inexistents.

While all three propositions of the triad are independently plausible, they cannot be jointly true. The heart of the Unicorn involves a denial of the first item in the triad while retaining the last two. The resulting view might be summed up as holding that there is no such thing as a representing relation—representation may involve relations, but it is not constituted by a relation to that which is represented. It follows from there being no representation relation that there is no such relational property as the property of being represented.
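To make the joint inconsistency fully explicit, here is a minimal first-order sketch (the notation is mine and is not drawn from Mandik or the HOT theorists): let $\mathrm{Rep}(x,y)$ abbreviate “x represents y”, let $R(x,y)$ abbreviate “x bears a genuine relation to y”, and let $E!\,y$ abbreviate “y exists”. The triad then reads:

$(1)\quad \forall x\,\forall y\,[\mathrm{Rep}(x,y) \rightarrow R(x,y)]$

$(2)\quad \exists x\,\exists y\,[\mathrm{Rep}(x,y) \wedge \neg E!\,y]$

$(3)\quad \forall x\,\forall y\,[R(x,y) \rightarrow E!\,y]$

From (2), take witnesses $a$ and $b$ with $\mathrm{Rep}(a,b)$ and $\neg E!\,b$; (1) yields $R(a,b)$, and (3) then yields $E!\,b$, contradicting $\neg E!\,b$. The Unicorn’s resolution rejects (1) while keeping (2) and (3), so that representing an inexistent carries no commitment to a relation whose second relatum fails to exist.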

This line of thought is pressed against the HOT theory by reading HOT theory as committed to the existence of such relations and relational properties. On what I’ll call the “relational reading” of HOT theory, a state is conscious only if a HOT bears the representing relation to that state. On this reading of HOT theory, the property of being conscious just is the property of being represented by a HOT. Read relationally, HOT theory gives a nicely straightforward explanation of how one and the same mental state can be unconscious at one time and conscious at another time. The change from being unconscious to being conscious just is the change from not being appropriately related to a HOT to being so related. And what is this relation if not a representing relation?

For examples of theorists who interpret HOT theory along such relational lines see Gennaro (2006, 2012), Wilberg (2010), and Bruno (2005). For discussions of both relational and non-relational interpretations of HOT theory, see Lau and Brown (n.d.), Brown (2012), Berger (2013), and Pereplyotchik (2015). What’s relational about the relational reading is the required existence of an actual state for the HOT to be about. One is in a conscious state when a HOT bears a certain sort of relation to another mental state, M. The relation borne to M is presumably that the HOT represents or is about M. On, for example, Gennaro’s view, M and the HOT are held to be proper parts of a mereological fusion and the fusion is the conscious state. Nonetheless, even on Gennaro’s view, a key role is played by the HOT’s relating to M by way of an aboutness or a representing relation.

However, and this is the thrust of the Unicorn, if there are no such relations as the representing relation and no such relational properties as the property of being represented, and there is such a property as a state’s being conscious, then being represented cannot be what a state’s being conscious consists in.

HOT theorists often present their view in a way that seems to invite the relational reading. However, in responding to the Unicorn and closely related objections turning on “empty” higher-order thoughts (e.g., Byrne 1997; Neander 1998; Block 2011), some HOT theorists have urged a reading of their view that I’ll call the “non-relational reading” (Footnote 4).

Weisberg (2010), in responding to the Unicorn, cites approvingly a remark of Harman’s (1997), part of which includes the statement “I am quite willing to believe that there are not really any nonexistent objects and that apparent talk of such objects should be analyzed away somehow” (p. 423, fn. 26). Rosenthal (2011) writes, in response to Block’s (2011) attack based on empty HOTs:

Block describes me as having retreated from an ‘aboriginal’ theory, on which the targets of HOTs always exist, to a ‘new version’ on which they need not […]. This is not so; in my earliest publication about consciousness I noted the possibility of absent first-order states […]. For ease of exposition, I often introduce the theory by saying that a state is conscious when it’s accompanied by a HOT, noting that this characterization is not strictly accurate. And there’s no harm in putting things in those relational terms when the existence of HOTs’ targets is not under consideration.

All that matters for a state’s being conscious is its seeming subjectively to one that one is in that state. On the HOT theory, that’s determined by a HOT’s intentional content […]. (p. 436)

With this nonrelational reading of HOT theory in mind, it becomes overwhelmingly difficult to see how HOT theory isn’t just a version of FPO. In publications attacking FPO, Rosenthal describes FPO as, among other things, a view whereby “facts about…when states become conscious are exhausted by how things appear to consciousness” (Rosenthal 2005c, p. 323). Note how similar such a description of FPO is to Rosenthal’s own description of HOT theory in publications highlighting its invulnerability to empty-HOT-based attacks: “A state’s being conscious is a matter of mental appearance—of how one’s mental life appears to one” (Rosenthal 2011, p. 431). The core similarities between FPO and nonrelational HOT theory are (1) a state’s being conscious is its appearing to one that one is in such-and-such mental state, and (2) the relevant way in which one is appeared to is via thought—it appears to one that one is in such-and-such mental state when one thinks (as opposed to senses or imagines) that one is in such-and-such mental state.

I find it hard to shake the impression that there is a tension within HOT theory itself between a relational reading and a nonrelational reading. Further, it seems that the nonrelational reading is highlighted when defending against empty-HOT and Unicorn types of objections and that the relational reading is highlighted when defending against FPO. In a publication targeting FPO, Rosenthal (1995) himself seems to promote a relational reading of HOT theory:

Because many mental states aren’t conscious at all, it’s implausible that the property of being conscious is an intrinsic property. All mental states have some sort of content properties—intentional content in the case of intentional states and sensory content in the case of bodily and perceptual sensations and most emotions. Such content properties are arguably intrinsic to mental states. By contrast, mental states can be conscious at one moment and not at another; so we have no reason to regard the property of being conscious as being intrinsic to such states. Accordingly, a state’s being conscious requires the occurrence of something extrinsic to it. And it may well be, therefore, that no mental state is conscious when it first occurs. But this doesn’t mean there are no facts of the matter about consciousness; states are conscious when, and only when, the relevant events occur. (p. 364)

Describing the requirements on a state’s being conscious in terms of “the occurrence of something extrinsic to it” points quite strongly in the direction of the relational reading of HOT theory. There is posited here a key role for a relation between two states: the conscious state and the HOT that is about that state. And this is in clear tension with the non-relational reading that seems most naturally applicable to the insistence, in Rosenthal (2011), that the HOT all on its own suffices for state consciousness and for there being something it’s like.

If there is a way to resolve the apparent tension between relational and nonrelational readings of HOT theory, I do not know what it is. I do hope, though, that the present paper aids in progress toward a resolution. It has been my aim in this paper to argue that the HOT theory can be defended as an alternative to Dennett’s FPO only by reading HOT theory as a relational theory. It seems to me, however, that the balance is tipped toward a nonrelational reading of HOT theory and thus, if my arguments are correct, a reading of HOT theory that commits it to FPO.