Skeptical theses in general claim that we cannot
know what we think we know. Content skepticism in particular claims that we
cannot know the contents of our own occurrent thoughts—at least not in the way we think we can. I argue that an externalist account of
content does engender a mild form of content skepticism but that the condition
is no real cause for concern. Content externalism
forces us to reevaluate some of our assumptions about introspective knowledge,
but it is compatible with privileged access and the distinctive epistemic
character of introspective judgments.
I.
Skeptical theses are disturbing because they tell us
we cannot know something that we are ordinarily certain that we can know. Content skepticism in particular claims that
we cannot know the contents of our own occurrent thoughts in the way we think
we can. We are ordinarily certain that
we can know the contents of our own active and available thoughts without
inference or empirical investigation, and it would be disturbing for most to be
told that we cannot.[1] Those who support an externalist account of
the individuation of propositional thought content have been particularly
worried about the disturbing threat of content skepticism—so worried that they
have expended considerable energy and ink arguing that content externalism
poses no threat whatsoever to our ordinary conception of introspective
self-knowledge. I will argue to the
contrary that if content externalism is true, then there are possible thoughts
whose contents we cannot know non-empirically.
But I will also urge that this implication of content externalism is not
really that disturbing. Content
externalism does engender a mild form of content skepticism, but the condition is
no real cause for concern, for it remains compatible with a hearty privileged
access thesis and with the distinctive character of introspective judgments.
Content
externalism (CE) is the thesis that the contents of some thoughts do not
supervene on the intrinsic properties of their thinker, and it brings with it
the possibility of distinct but introspectively indistinguishable twin thoughts.[2] Several authors have worried that this
implication of CE supports the following Cartesian-style argument for content
skepticism (CS)[3]:
(P1) If S introspectively knows that she is
thinking T, then S can introspectively rule out the possibility that she is
thinking T*.
(P2) S cannot introspectively rule out that she
is thinking T*.
(C) So S
does not introspectively know that she is thinking T.
If T is an externally individuated thought and T* is
its twin, then they are distinct and thinking T* needs to be ruled out as a
possibility of error. If T and T* are
twins, then they are introspectively indistinguishable and S cannot
introspectively rule out the possibility that she is thinking T*. Thus the possibility of twin thoughts looks
like it leads to the strong CS conclusion that we cannot introspectively know
the contents of any externally individuated thoughts.[4],
[5]
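The Cartesian-style argument has the form of a simple modus tollens, which can be made explicit with a small formal sketch. The propositional letters below (K for S's introspectively knowing that she is thinking T, R for her being able to introspectively rule out that she is thinking T*) are abbreviations introduced here for illustration, not notation from the text:

```lean
-- Sketch of the Cartesian-style argument for CS.
--   K : S introspectively knows that she is thinking T
--   R : S can introspectively rule out that she is thinking T*
-- P1 is K → R, P2 is ¬R, and the conclusion C, namely ¬K,
-- follows by modus tollens.
theorem cartesianCS (K R : Prop) (p1 : K → R) (p2 : ¬R) : ¬K :=
  fun k => p2 (p1 k)
```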
This is not the argument I
will use to show that CE implies CS, but my argument will have implications for
the treatment of this Cartesian-style argument. The predominant strategy to combat the Cartesian argument for CS
has been to reject P1 because there is simply no possibility of being mistaken
about the contents of our own occurrent thoughts.[6] On this view, it is not possible for one to
really be thinking T* when one believes that one is thinking T, at least so long
as one is suffering no rational or cognitive deficit—the way introspection
works precludes brute errors about the content of occurrent
thoughts regardless of whether those contents are externalistically
individuated.[7]
If my argument for content
skepticism works, however, this predominant strategy to combat the Cartesian
argument is unavailable. I agree that
we should reject the first premise, but not because we cannot be wrong about
the contents of our own occurrent thoughts.
We should reject it because we should be reliabilists about
introspective knowledge.[8] If we are externalists about the
individuation of propositional thought content, then we should be externalists
as well about our introspective knowledge of that content.
II.
Content externalism is committed to the possibility
that two individuals with all the same intrinsic properties (let them be I) can
nevertheless have distinct conceptual repertoires as a result of having
distinct extrinsic properties (E and E*).
Let’s say that one twin possesses the concept C that refers exclusively
to c-stuff, whereas the other twin possesses a different concept C* that refers
exclusively to some distinct c*-stuff.[9]
If such twins are possible,
then it is possible for a properly functioning rational agent S to undergo a
change in conceptual repertoire without being aware that she has. Here is how: Let S at t1 be just like the C-twin. She possesses C, has intrinsic properties I, and extrinsic
properties E. At t2, imagine that S has
undergone a change in environment whereby she has acquired the same extrinsic
properties as the C*-twin, E*.[10] S now meets all of the necessary conditions
for possessing a concept that refers to c*-stuff. For the C*-twin has a concept that refers to c*-stuff, and S now
has all the same relevant properties as the C*-twin. One might object that there is
a relevant difference between S and the C*-twin: S has been related to c-stuff
in the past (t1) but the C*-twin has not.
And perhaps having not been related to c-stuff is a necessary condition
for having a concept that refers to c*-stuff.
I don’t think so: Imagine that S
had from the start been appropriately related to both c-stuff and c*-stuff. In such a case, having been related to
c-stuff would not bar S from having a concept that referred to c*-stuff. Of course, we might not want to say that S
would have had C*, which refers exclusively to c*-stuff, but rather an amalgam
concept that refers disjunctively to either c-stuff or c*-stuff. Still S would have had a concept that refers
in part to c*-stuff. Coming back to the
case where S starts in a c-stuff environment and then is switched to a c*-stuff
environment, I want to claim only that S will acquire some concept that refers
in part to c*-stuff. Let’s call this
new concept C# and leave open whether C# refers exclusively to c*-stuff or
disjunctively to either c*-stuff or c-stuff.
I also want to claim only that acquiring C# will involve some change in S’s conceptual repertoire—at
t1 S did not have any concept that referred to c*-stuff and at t2 she
does. I do not want to take a stand on
whether C# will replace or be an addition to C in S’s repertoire.[11]
So
CE implies the possibility of an individual S for whom a change in environment
is sufficient for a change in conceptual repertoire. It is now easy enough to imagine that S undergoes the relevant
change in environment without being aware that
she has, and thereby that S undergoes a change in conceptual repertoire without
being aware that she has. In such a case, S is aware of the new environment but not that it is new, and so she is not aware
that there has been a change. Likewise,
S may be aware of her new concept C#,
but she is not aware that there has
been a change in her conceptual repertoire.
Now, if S is not aware that there has been a change in her
conceptual repertoire, then S is not aware that
she possesses C#. Indeed S is not aware
that there has been a change in her
conceptual repertoire precisely because
she is not aware that she has
C#. If S were to become aware that C#
is in her conceptual repertoire, she would thereby come to realize that some
kind of change has occurred. Consider
an analogy: You have a young woman named
Sherry in your relatively small critical thinking class in which you are familiar
with each of your students and know them by name. In such a situation it would be safe to say that you are aware that Sherry is in your class. But imagine that Sherry has an identical
twin sister Terry who, without your knowledge, begins coming to your class in
Sherry’s place midway through the semester.
(We can imagine that Terry has either taken Sherry’s place or that the
twins take turns coming to class.)
There has been a change in the make-up of your class (either a
replacement or an addition), but you are unaware that it has taken place. In this case it would be quite natural to
say that you are not aware that Terry
is in your class. You may be aware of Terry when she attends, but you are
not aware that Terry is in your
class. Indeed the fact that you are not
aware that Terry is in your class is
what explains your not being aware that
there has been a change. If you were to
become aware at some point that Terry
is in your class, you would at that point come to realize that there has been a
change in the constitution of your class.
I have at this point argued
that CE implies the possibility that a properly functioning rational agent may
acquire a new concept through an unwitting change in environment and that this
implies that such a person can possess a concept without being aware that she
does. Thus, CE implies the possibility
of a properly functioning rational agent S who possesses a concept C# but is
not aware that she does. To continue
the argument, having the capacity to be aware that she is thinking C#-thoughts
is sufficient for S to be aware that she has the concept C#: To be aware that she possesses a certain
concept requires only that S be aware that she has the ability to think certain
thoughts; it does not require any kind of sophisticated (meta-linguistic or
meta-conceptual) understanding of that ability. So if S is aware that she has the ability to think C#-thoughts,
she is aware that she has C#. Now, we
do not want to say that S is aware that she has an ability only when she is in
fact exercising that ability. It is
enough to say that S has been aware that she has the ability and that she can
be again. This is what I call having
the capacity to be aware that one has
an ability. Thus if S has the capacity
to be aware that she is thinking C#-thoughts, then she is aware that she has
the ability to think C#-thoughts and is thereby aware that she has the concept
C#.
If S has the capacity to be
aware that she is thinking C#-thoughts, then S is aware that she has the concept
C#. Furthermore, if S has the capacity
to form higher-order introspective beliefs about her C#-thoughts that correctly
represent their content, then S has the capacity to be aware that she is
thinking C#-thoughts.[12] Thus if S has the capacity to make
higher-order introspective judgments about her C#-thoughts that correctly
represent their content, then S is aware that she possesses C#. Since S is not aware that she possesses C#,
she does not have that capacity and so S cannot introspectively know the
contents of her own occurrent C#-thoughts.[13]
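The last steps of this argument chain two conditionals and discharge them by contraposition. A minimal formal sketch (the letters H, T, and A are abbreviations introduced here for illustration, not the text's notation):

```lean
-- Abbreviations for the closing steps of the argument.
--   H : S can form correct higher-order introspective beliefs
--       about her C#-thoughts
--   T : S has the capacity to be aware that she is thinking C#-thoughts
--   A : S is aware that she possesses C#
-- Given H → T and T → A, the fact that ¬A yields ¬H by contraposition.
theorem noCapacity (H T A : Prop)
    (h1 : H → T) (h2 : T → A) (notA : ¬A) : ¬H :=
  fun h => notA (h2 (h1 h))
```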
CE implies the possibility
that one can acquire a concept without being aware that one has. When this happens, the individual will have
occurrent first-order thoughts that involve a certain concept but be unable to
form correct introspective judgments about their content. Thus CE implies the possibility that an
individual who suffers no cognitive or rational deficiency will have certain
occurrent thoughts whose contents she cannot know introspectively.
III.
CE implies a form of CS but it is a relatively mild
form, since it applies only to a limited range of thoughts and does not
undermine the distinctive character of introspective knowledge.
My CS argument does
undermine our introspective knowledge of the contents of some external thoughts, for it undermines our introspective
knowledge of thoughts that involve some external concept we are not aware that
we possess. Such thoughts are bound to
be rather extraordinary. The
circumstances where they can arise involve undergoing a radical enough change
in environment to induce a change in conceptual repertoire without realizing
that such a change has occurred.[14] My argument does not undermine our
introspective knowledge of all
external thoughts. I argue that it is
possible to possess some external
concepts without being aware that you do, but there will be plenty of external
concepts that one is aware that one
possesses. For all that I have said
there is no reason to think that we cannot introspectively know the contents of
occurrent thoughts involving those external concepts.
There is still the
Cartesian-style argument that threatens to undermine our introspective
knowledge of all external thoughts. The
predominant strategy to combat that argument is now unavailable. That strategy depends on the claim that
brute errors with respect to the contents of our own occurrent thoughts are
impossible. I have argued that it is
possible for a properly functioning individual S to be unable to make correct introspective judgments about
the contents of her C# thoughts. Since
there is nothing wrong with S’s introspective mechanisms in general, there is
no reason to think that S cannot make any
introspective judgments about her C#-thoughts.
If S can make some
introspective judgments about her own C#-thoughts but cannot make correct ones, then brute introspective
errors concerning the content of occurrent thoughts are possible.[15],
[16]
The
predominant strategy to undermine the Cartesian-style argument for CS is
unavailable, but thankfully it is also unnecessary. We can reject P1 of the Cartesian argument by being reliabilists about introspective
knowledge.[17] If we say that a true introspective judgment
counts as knowledge whenever it is produced by a reliable introspective mechanism,
then we can say that one introspectively knows that she is thinking T even if
she cannot rule out any relevant alternatives.[18] S’s introspective judgments about her
C#-thoughts will probably not count as having been reliably produced, but there
is no reason to think that S’s judgments about other external thoughts are not
reliably produced. So my argument
implies only that some external
thoughts are introspectively unknowable; it does not imply that all are.
If
we are reliabilists about introspective knowledge, we can avoid the more
radical CS conclusion of the Cartesian-style argument without claiming that
introspective judgments are immune to brute error. Reliabilism can also explain the privileged character of
introspective knowledge by explaining how introspective judgments are warranted
directly and non-empirically. If
introspective mechanisms simply mediate the causal production of introspective
judgments by lower-order thoughts, then we can understand how introspective
judgments can be warranted directly. If introspective causal mechanisms are
intra-cranial (or wholly “internal” to an individual subject), then we can
understand how the warrant for introspective judgments is non-empirical. Reliabilism
about introspective knowledge is consistent with privileged access.[19]
IV.
Still there is something troubling about a
reliabilist account of introspective knowledge. Such an account makes it look like introspective knowledge is not
really so distinctive after all.
Introspective knowledge starts to look very much like ordinary
perceptual knowledge, where certain causal mechanisms are responsible for the
production of warranted beliefs that are nevertheless susceptible to brute
error.[20] Personally I don’t see what’s so bad about
introspective judgments turning out to be more like perceptual judgments, but I
do think there remains something distinctive about introspective judgments.[21]
Introspective judgments seem
to occupy a place somewhere between ordinary perceptual judgments and traditional
a priori judgments. Like ordinary perceptual judgments, they are
neither necessary nor analytic nor self-evident. Like traditional a priori judgments,
they are less susceptible to doubt and ordinary cases of error than perceptual
judgments. Mistaken perceptual
judgments occur under very ordinary circumstances, whereas mistaken a priori judgments occur only when there
is some kind of rational or conceptual deficiency on the part of the
thinker. The worry is that if
introspective judgments are warranted in virtue of being reliably produced and
are susceptible to brute error, then introspective judgments do not occupy a
place between ordinary perceptual
judgments and traditional a priori
judgments but slip wholly over to the perceptual judgment side. I think, however, that there is still
something distinctive about introspective judgments that keeps them from being
just like ordinary perceptual judgments.
If I am right and CE is
true, then introspective judgments are not immune to brute error. However, the brute errors that are possible
occur only in rather extraordinary circumstances and they do bear a certain
similarity to mistaken a priori
judgments. The errors that are possible
involve a kind of conceptual deficiency just as mistaken a priori judgments often do.
If S mistakenly judges that not all bachelors are male, the mistake is
due to a conceptual incompetence. S
does not fully grasp the concept bachelor. If S mistakenly judges that she is thinking
a C-thought when she is really thinking a C#-thought, it is also because of a
kind of conceptual deficiency. In a
sense, S does not fully grasp the concept C#: S can use the concept but doesn’t know enough about it even to be aware
that she has it.
Introspective and perceptual
judgments are both susceptible to brute error, but perceptual judgments are so
under far more ordinary and frequent circumstances. Because introspective brute errors are relatively rare,
introspective judgments are ‘more’ reliable than perceptual judgments and less
susceptible to doubt. The difference in
warrant that accounts for this difference in epistemic character is a quantitative one rather than a qualitative one, as has usually been
thought. Because brute errors with
respect to introspective judgments always involve a kind of conceptual
deficiency they retain a place somewhere between ordinary perceptual judgments
and traditional a priori
judgments.
V.
CE does imply a mild form of CS, but it is nothing
to be too concerned about:
Introspective judgments are susceptible to brute error, but only under
extreme circumstances that involve a type of conceptual deficiency; and
introspective knowledge does come out a little closer to ordinary perceptual
knowledge than we may have thought, but it retains an air of the a priori. Content externalism forces us to reevaluate some of our
assumptions about introspective knowledge, but it is compatible with our having
privileged access to the contents of our ordinary occurrent thoughts and with
the distinctive epistemic character of introspective judgments.
[1] Available thoughts are ones that are present and accessible (i.e., not repressed or sub-conscious or anything of the sort). Active thoughts are ones that are present and not merely standing or dispositional. I am defining ‘occurrent thoughts’ as active and available propositional attitude states that do not involve any essential qualitative feel.
[2] To say that content does not supervene on intrinsic properties, which include neuro-physiological properties and narrowly described functional properties, is to say that such properties do not completely determine content. There can be two individuals exactly alike with respect to all of their intrinsic properties yet whose thought contents are different. Content does not supervene on intrinsic properties because being appropriately related to a certain physical and/or social environment is necessary for the acquisition of certain concepts (the constituents of thought contents). See Putnam’s “The Meaning of ‘Meaning’” in Gunderson (ed.) Language, Mind and Knowledge, 1975; and Tyler Burge’s “Individualism and the Mental”, Midwest Studies in Philosophy IV, 1979.
To say that twin thoughts are introspectively indistinguishable is to say that there is no introspective evidence available that would enable one to discriminate between them.
[3] See Paul Boghossian, “Content and Self-Knowledge”, Philosophical Topics 17, 1989; Burge, “Individualism and Self-Knowledge”, The Journal of Philosophy 85, 1988; Donald Davidson, “Knowing One’s Own Mind”, The Proceedings and Addresses of the American Philosophical Association 60, 1987; John Heil, “Privileged Access”, Mind 97, 1988; Anthony Brueckner, “Skepticism about Knowledge of Content”, Mind 99, 1990; and Ted Warfield, “Privileged Self-Knowledge and Externalism are Compatible”, Analysis 52, 1992.
[4] Every externalistically individuated thought has at least a possible twin. So it would seem that if the argument works, it works to undermine our knowledge of any externalistically individuated thought. (But see note below.)
[5] An obvious response to this Cartesian argument is to reject P1 by saying (a) that we only need to rule out relevant alternative possibilities, and (b) twin thoughts are rarely or never relevant. Whether twin thoughts are ever relevant certainly depends in part on what is meant by ‘relevant’. And there can be disagreement about how often twin thoughts are relevant even when some standard of relevance is agreed upon. See Ted Warfield op. cit. and “Externalism, Self-Knowledge, and the Irrelevance of Slow-Switching”, Analysis 57, 1997; and Peter Ludlow, “Externalism, Self-Knowledge, and the Prevalence of Slow-Switching”, Analysis 55, 1995. These issues will obviously affect whether the argument applies to all externalistically individuated thoughts or not.
That we need to be able to rule out only relevant alternatives is, I think, problematic as well. On a typical internalist reading, ruling out a relevant alternative P requires having available evidence adequate to warrant the belief that not-P. (And it seems that such an internalist reading is needed to warrant P2.) This implies that in ordinary cases of introspective knowledge, we do have evidence that is introspectively available to rule out alternatives. But this does not seem right. For in ordinary cases of introspective knowledge we do not base our judgments on evidence, and it thus seems that there needn’t be any introspectible evidence around that would justify the belief that one is not thinking certain other possible thoughts.
[6] Here it may be admitted that there is a general requirement for evidentially ruling out relevant possibilities, but only ones that are possibilities of error. The strategy then denies that errors are ever possible, and so effectively shows that we do not have to rule out any alternative possibilities of error. This more sophisticated relevant alternative strategy does not suffer the same problem as the simple relevant alternative response considered in the note above; for there is no requirement even in ordinary cases to evidentially rule out alternatives, because none of them are genuine error possibilities.
[7] See Burge op. cit.; Kevin Falvey and Joseph Owens, “Externalism, Self-Knowledge, and Skepticism”, The Philosophical Review 103, 1994; Heil op. cit.; and John Gibbons, “Externalism and Knowledge of Content”, The Philosophical Review 105, 1996 for importantly different ways of arguing that there cannot be introspective brute errors about the contents of occurrent thoughts.
[8] We would actually be rejecting the internalist version of the ruling out requirement. We keep a ruling out requirement by giving a reliabilist/tracking reading of that requirement as follows: S can rule out a (relevant) alternative SK to a proposition P if and only if S can discriminate between P and SK in the sense that if SK were true S would believe SK and not P.
[9] For those of you who like your examples a little more concrete: Recall Putnam’s Twin Earth thought experiment and think of the C-twin as Oscar, a lifelong resident of Earth who possesses the concept water that refers exclusively to samples of H2O; and think of the C*-twin as Toscar, a lifelong resident of Twin Earth who possesses the concept twater that refers exclusively to samples of XYZ (which is distinct but superficially indiscernible from H2O).
[10] The duration between t1 and t2 may be considerable given that some of the relevant environmental-relation properties may take a long time to acquire.
[11] The kind of situation that I have described here for S is what is commonly known as a “slow-switching” scenario; such scenarios are discussed in Boghossian (1989) op. cit.; Burge (1988) op. cit. and “Memory and Self-Knowledge” in Ludlow (ed.) Externalism and Self-Knowledge (Stanford: CSLI Publications, 1998); John Gibbons, “Externalism and Knowledge of Content”, The Philosophical Review 105, 1996; Peter Ludlow, “Externalism, Self-Knowledge and the Prevalence of Slow-Switching”, Analysis 55, 1995; and Warfield, “Externalism, Self-Knowledge, and the Irrelevance of Slow-Switching”, Analysis 57, 1997.
I have tried to be general in the text concerning the concepts involved (using C and C*) for two reasons: (1) To avoid any complicating issues deriving from differing intuitions about actual concepts like water and twater, and (2) To emphasize that such a scenario is made possible simply by the possibility of intrinsic twins with distinct conceptual repertoires (which is immediately implied by CE and its denial of a certain supervenience claim), and does not involve any further controversial assumptions.
There has been a lot of discussion but little agreement concerning slow-switching cases. I have tried to avoid making any controversial claim by remaining non-committal on whether C#=C* and on whether C# is an addition to S’s conceptual repertoire or a replacement for C.
[12] I am assuming only that having a higher-order belief about a thought T (something like the belief that I am thinking T) is sufficient for awareness of T. I am not claiming that it is necessary (there is a worry about conceptually unsophisticated individuals being aware of their own thoughts). And I am making no claims about what is necessary or sufficient for being aware of qualitative states. Thus I am avoiding some of the more controversial aspects of a higher-order thought account of our awareness of our own mental states. For a positive defense of such an account see David Armstrong, A Materialist Theory of Mind (London: Routledge, 1968); and David Rosenthal, “A Theory of Consciousness”, ZIF Technical Report, Bielefeld, Germany, 1990.
Awareness-that is factive, and thus at least requires that the higher-order belief correctly represent the content of the thought.
[13] I put forth a prototype of this argument in “Brute Error with Respect to Content”, Philosophical Studies 94 (May 1999) to a somewhat different end; but the present version is much simpler and free of several controversial assumptions concerning concept possession employed in the earlier version.
[14] Peter Ludlow has argued that slow-switching situations are not so extraordinary in “Externalism, Self-Knowledge, and the Prevalence of Slow-Switching”, op. cit. I think that even if slow-switching situations are prevalent, they are still extraordinary and extreme epistemically speaking.
[15] Presumably, S will represent the contents of her C#-thoughts as C-thoughts, unless she has lost the concept C, in which case she would probably represent their contents using some compound descriptive concept (if C had been water perhaps S would now be using some concept equivalent to odorless, colorless liquid…) or something of the sort.
[16] A paradigmatic case of brute error in the perceptual realm might involve mistakenly representing some mere barn façade as a real barn. In such a case, nothing may be malfunctioning in one’s perceptual apparatus; rather, there is something funny going on on the object side of the relation. Similarly here, nothing may be wrong with S’s introspective mechanisms; there is just something funny going on on the object side of the relation.
[17] Sarah Sawyer in “An Externalist Account of Introspective Knowledge”, Pacific Philosophical Quarterly 80, 1999, makes such a response to the Cartesian-style argument for CS, though her reliabilist model seems to be somewhat different from the one described below in the text. She also criticizes Burge’s particular self-verification version of the predominant strategy, though I doubt that Burge thinks that self-verification alone accounts for immunity to brute error. There are also several other versions of the predominant strategy that self-consciously do not rely on any self-verification mechanism. See Falvey and Owens op. cit., Heil op. cit., and Gibbons op. cit.
[18] Again, the reliabilist is rejecting the requirement that one be able to evidentially rule out relevant alternatives. A reliabilist may require that we be able to rule out relevant alternatives in the discriminating/tracking way.
[19] David Armstrong argues for a kind of reliabilist “thermometer” model of warrant for all non-inferential knowledge in Belief, Truth, and Knowledge (Cambridge: Cambridge University Press, 1973), and of course introspective knowledge is non-inferential knowledge.
[20] A similar kind of worry was expressed by Anthony Brueckner in “Skepticism about Knowledge of Content”, op. cit. Brueckner argued there that Heil’s particular version of the predominant strategy to undermine the Cartesian-style argument for CS fails. When considering other prospects for undermining the argument he effectively concedes that a reliabilist/externalist account of justification would be a way to reject the first premise. However, he says that to adopt such a response is tantamount to admitting, “refuting such skepticism is no easier than refuting traditional Cartesian skepticism about knowledge of the external world”. But since introspective knowledge is supposed to be on a better epistemic footing than perceptual knowledge, it ought to be easier to avoid content skepticism than external world skepticism—there ought to be something distinctive about self-knowledge in particular that allows us to reject any attempt to establish such a strong CS claim.
[21] If introspective knowledge is more like perceptual knowledge, then we will have decreased our epistemological workload, taken some of the mystery out of introspection, and primed it for “naturalization”. Still I think that there is something special about self-knowledge, even beyond what I say below. For I think that there is a sense in which it is ‘easier’ to be a reliabilist about introspective knowledge than it is to be a reliabilist about perceptual knowledge, mainly because reliability does not rule out the threat of subjective irrationality in the case of perceptual judgments but it effectively does so in the case of introspective judgments, owing to what I call their “practical incorrigibility”—the fact that, for rather mundane reasons, there can be no cogent counter-evidence raised against introspective judgments. But that is really a story for another time—I am currently working on that story in a MS tentatively entitled “Reliable Self-Knowledge”.